Mirror of https://github.com/crewAIInc/crewAI.git, synced 2026-05-05 01:02:37 +00:00.

Comparing commits: `luzk/propa...feat/agent` (86 commits)
| SHA1 |
|---|
| `aee571b775` |
| `cb46a1c4ba` |
| `d9046b98dd` |
| `b0e2fda105` |
| `69d777ca50` |
| `77b2835a1d` |
| `c77f1632dd` |
| `69461076df` |
| `55937d7523` |
| `bc2fb71560` |
| `3e9deaf9c0` |
| `3f7637455c` |
| `fdf3101b39` |
| `c94f2e8f28` |
| `944fe6d435` |
| `3be2fb65dc` |
| `160e25c1a9` |
| `b34b336273` |
| `42d6c03ebc` |
| `d4f9f875f7` |
| `6d153284d4` |
| `84a4d47aa7` |
| `9caed61f36` |
| `d45ed61db5` |
| `3b01da9ad9` |
| `874405b825` |
| `d6d04717c2` |
| `01b8437940` |
| `2c08f54341` |
| `bc1f1b85a4` |
| `0b408534ab` |
| `48f391092c` |
| `ae242c507d` |
| `0b120fac90` |
| `f879909526` |
| `c9b0004d0e` |
| `a8994347b0` |
| `5ca62c20f2` |
| `11989da4b1` |
| `19ac7d2f64` |
| `2f48937ce4` |
| `c5192b970c` |
| `54391fdbdf` |
| `6136228a66` |
| `fbe2a04064` |
| `baf91d8f0a` |
| `7e01c5a030` |
| `105a9778cc` |
| `32ec4414bf` |
| `63fc2e7588` |
| `749fe85325` |
| `0bb6faa9d3` |
| `aa28eeab6a` |
| `29b5531f78` |
| `74d061e994` |
| `18d0fd6b80` |
| `1c90d574ab` |
| `3a7c550512` |
| `5b6f89fe64` |
| `ad5e66d1d0` |
| `94e7d86df1` |
| `0dba95e166` |
| `58208fdbae` |
| `655e75038b` |
| `8e2a529d94` |
| `58bbd0a400` |
| `9708b94979` |
| `0b0521b315` |
| `c8694fbed2` |
| `a4e7b322c5` |
| `ee049999cb` |
| `1d6f84c7aa` |
| `8dc2655cbf` |
| `121720cbb3` |
| `16bf24001e` |
| `29fc4ac226` |
| `25fcf39cc1` |
| `3b280e41fb` |
| `8de4421705` |
| `62484934c1` |
| `298fc7b9c0` |
| `9537ba0413` |
| `ace9617722` |
| `7e1672447b` |
| `ea58f8d34d` |
| `fe93333066` |
.gitignore (vendored, 1 line changed)

```diff
@@ -30,3 +30,4 @@ chromadb-*.lock
 .crewai/memory
 blogs/*
 secrets/*
+UNKNOWN.egg-info/
```
```diff
@@ -24,6 +24,14 @@ repos:
     rev: 0.11.3
     hooks:
       - id: uv-lock
+  - repo: local
+    hooks:
+      - id: pip-audit
+        name: pip-audit
+        entry: bash -c 'source .venv/bin/activate && uv run pip-audit --skip-editable --ignore-vuln CVE-2025-69872 --ignore-vuln CVE-2026-25645 --ignore-vuln CVE-2026-27448 --ignore-vuln CVE-2026-27459 --ignore-vuln PYSEC-2023-235' --
+        language: system
+        pass_filenames: false
+        stages: [pre-push, manual]
   - repo: https://github.com/commitizen-tools/commitizen
     rev: v4.10.1
     hooks:
```
README.md (27 lines changed)

````diff
@@ -83,6 +83,7 @@ intelligent automations.

 ## Table of contents

+- [Build with AI](#build-with-ai)
 - [Why CrewAI?](#why-crewai)
 - [Getting Started](#getting-started)
 - [Key Features](#key-features)
@@ -101,6 +102,32 @@ intelligent automations.
 - [Telemetry](#telemetry)
 - [License](#license)

+## Build with AI
+
+Using an AI coding agent? Teach it CrewAI best practices in one command:
+
+**Claude Code:**
+```shell
+/plugin marketplace add crewAIInc/skills
+/plugin install crewai-skills@crewai-plugins
+/reload-plugins
+```
+Four skills that activate automatically when you ask relevant CrewAI questions:
+
+| Skill | When it runs |
+|-------|--------------|
+| `getting-started` | Scaffolding new projects, choosing between `LLM.call()` / `Agent` / `Crew` / `Flow`, wiring `crew.py` / `main.py` |
+| `design-agent` | Configuring agents — role, goal, backstory, tools, LLMs, memory, guardrails |
+| `design-task` | Writing task descriptions, dependencies, structured output (`output_pydantic`, `output_json`), human review |
+| `ask-docs` | Querying the live [CrewAI docs MCP server](https://docs.crewai.com/mcp) for up-to-date API details |
+
+**Cursor, Codex, Windsurf, and others ([skills.sh](https://skills.sh/crewaiinc/skills)):**
+```shell
+npx skills add crewaiinc/skills
+```
+
+This installs the official [CrewAI Skills](https://github.com/crewAIInc/skills) — structured instructions that teach coding agents how to scaffold Flows, configure Crews, design agents and tasks, and follow CrewAI patterns.
+
 ## Why CrewAI?

 <div align="center" style="margin-bottom: 30px;">
````
@@ -4,6 +4,292 @@ description: "Product updates, improvements, and fixes
icon: "clock"
mode: "wide"
---
<Update label="Apr 25, 2026">
## v1.14.3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3)

## What's Changed

### Features
- Add lifecycle events for checkpoint operations
- Add support for e2b
- Fall back to DefaultAzureCredential when no API key is provided in the Azure integration
- Add Bedrock V4 support
- Add Daytona sandbox tools for enhanced functionality
- Add checkpoint and fork support to standalone agents

### Bug Fixes
- Fix execution_id to be separate from state.id
- Resolve replay of recorded method events on checkpoint resume
- Fix serialization of initial_state class references as JSON schema
- Preserve metadata-only agent skills
- Propagate implicit @CrewBase names to crew events
- Merge execution metadata on duplicate batch initialization
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel result in guardrail retry loop
- Preserve thought_signature in Gemini streaming tool calls
- Emit task_started on fork resume and redesign the checkpoint TUI
- Use future dates in checkpoint prune tests to prevent time-dependent failures
- Fix dry-run order and handle checked-out stale branch in devtools release
- Upgrade lxml to >=6.1.0 for security patch
- Bump python-dotenv to >=1.2.2 for security patch

### Documentation
- Update changelog and version for v1.14.3
- Add 'Build with AI' page and update navigation for all languages
- Remove pricing FAQ from the build-with-ai page across all locales

### Performance
- Optimize MCP SDK and event types to reduce cold start by ~29%

### Refactoring
- Refactor checkpoint helpers to eliminate duplication and tighten state type hints

## Contributors

@MatthiasHowellYopp, @akaKuruma, @alex-clawd, @github-actions[bot], @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha, @renatonitta

</Update>

<Update label="Apr 23, 2026">
## v1.14.3a3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a3)

## What's Changed

### Features
- Add support for e2b
- Implement fallback to DefaultAzureCredential when no API key is provided

### Bug Fixes
- Upgrade lxml to >=6.1.0 to address security issue GHSA-vfmq-68hx-4jfw

### Documentation
- Remove pricing FAQ from the build-with-ai page across all locales

### Performance
- Improve cold start time by ~29% through lazy loading of the MCP SDK and event types

## Contributors

@alex-clawd, @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha

</Update>

<Update label="Apr 22, 2026">
## v1.14.3a2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a2)

## What's Changed

### Features
- Add support for bedrock V4
- Add Daytona sandbox tools for enhanced functionality
- Add 'Build with AI' page — AI-native docs for coding agents
- Add Build with AI to the Get Started navigation and page files for all languages (en, ko, pt-BR, ar)

### Bug Fixes
- Fix propagation of implicit @CrewBase names to crew events
- Resolve duplicate batch initialization in execution metadata merge
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel result in guardrail retry loop
- Bump python-dotenv to >=1.2.2 for security compliance

### Documentation
- Update changelog and version for v1.14.3a1
- Update descriptions and apply actual translations

## Contributors

@MatthiasHowellYopp, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @renatonitta

</Update>

<Update label="Apr 21, 2026">
## v1.14.3a1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a1)

## What's Changed

### Features
- Add checkpoint and fork support to standalone agents

### Bug Fixes
- Preserve thought_signature in Gemini streaming tool calls
- Emit task_started on fork resume and redesign the checkpoint TUI
- Correct dry-run order and handle checked-out stale branch in devtools release
- Use future dates in checkpoint prune tests to prevent time-dependent failures (#5543)

### Documentation
- Update changelog and version for v1.14.2

## Contributors

@alex-clawd, @greysonlalonde

</Update>

<Update label="Apr 17, 2026">
## v1.14.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2)

## What's Changed

### Features
- Add checkpoint resume, diff, and prune commands with improved discoverability.
- Add `from_checkpoint` parameter to `Agent.kickoff` and related methods.
- Add template management commands for project templates.
- Add resume hints to devtools release on failure.
- Add deploy validation CLI and enhance LLM initialization ergonomics.
- Add checkpoint forking with lineage tracking.
- Enrich LLM token tracking with reasoning tokens and cache creation tokens.

### Bug Fixes
- Fix prompt on stale branch conflicts in devtools release.
- Patch vulnerabilities in `authlib`, `langchain-text-splitters`, and `pypdf`.
- Scope streaming handlers to prevent cross-run chunk contamination.
- Dispatch Flow checkpoints through Flow APIs in the TUI.
- Use recursive glob for JSON checkpoint discovery.
- Handle cyclic JSON schemas in MCP tool resolution.
- Preserve Bedrock tool call arguments by removing truthy default.
- Emit flow_finished event after HITL resume.
- Fix various vulnerabilities by updating dependencies, including `requests`, `cryptography`, and `pytest`.
- Stop forwarding strict mode to the Bedrock Converse API.

### Documentation
- Document missing parameters and add a Checkpointing section.
- Update changelog and version for v1.14.2 and previous release candidates.
- Add enterprise A2A feature documentation and update OSS A2A docs.

## Contributors

@Yanhu007, @alex-clawd, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @lucasgomide

</Update>

<Update label="Apr 16, 2026">
## v1.14.2rc1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2rc1)

## What's Changed

### Bug Fixes
- Fix handling of cyclic JSON schemas in MCP tool resolution
- Fix vulnerability by bumping python-multipart to 0.0.26
- Fix vulnerability by bumping pypdf to 6.10.1

### Documentation
- Update changelog and version for v1.14.2a5

## Contributors

@greysonlalonde

</Update>

<Update label="Apr 15, 2026">
## v1.14.2a5

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a5)

## What's Changed

### Documentation
- Update changelog and version for v1.14.2a4

## Contributors

@greysonlalonde

</Update>

<Update label="Apr 15, 2026">
## v1.14.2a4

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a4)

## What's Changed

### Features
- Add resume hints to devtools release on failure

### Bug Fixes
- Fix strict mode forwarding to the Bedrock Converse API
- Pin pytest to 9.0.3 for security vulnerability GHSA-6w46-j5rx-g56g
- Bump the OpenAI lower bound to >=2.0.0

### Documentation
- Update changelog and version for v1.14.2a3

## Contributors

@greysonlalonde

</Update>

<Update label="Apr 13, 2026">
## v1.14.2a3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a3)

## What's Changed

### Features
- Add deploy validation CLI
- Improve LLM initialization ergonomics

### Bug Fixes
- Override pypdf and uv to patched versions for CVE-2026-40260 and GHSA-pjjw-68hj-v9mw
- Upgrade requests to >=2.33.0 to address a temp-file CVE
- Preserve Bedrock tool call arguments by removing truthy default
- Sanitize tool schemas for strict mode
- Fix MemoryRecord embedding serialization test

### Documentation
- Clean up enterprise A2A language
- Add enterprise A2A feature documentation
- Update OSS A2A documentation
- Update changelog and version for v1.14.2a2

## Contributors

@Yanhu007, @greysonlalonde

</Update>

<Update label="Apr 10, 2026">
## v1.14.2a2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a2)

## What's Changed

### Features
- Add checkpoint TUI with tree view, fork support, and editable inputs/outputs
- Enrich LLM token tracking with reasoning tokens and cache creation tokens
- Add `from_checkpoint` parameter to kickoff methods
- Include `crewai_version` in checkpoints with a migration framework
- Add checkpoint forking with lineage tracking

### Bug Fixes
- Fix strict mode forwarding to the Anthropic and Bedrock providers
- Harden NL2SQLTool with read-only default, query validation, and parameterized queries

### Documentation
- Update changelog and version for v1.14.2a1

## Contributors

@alex-clawd, @github-actions[bot], @greysonlalonde, @lucasgomide

</Update>

<Update label="Apr 9, 2026">
## v1.14.2a1
docs/ar/guides/coding-tools/build-with-ai.mdx (new file, 214 lines)

@@ -0,0 +1,214 @@
---
title: "Build with AI"
description: "Everything AI coding agents need to build, deploy, and scale with CrewAI — skills, machine-readable docs, deployment, and enterprise features."
icon: robot
mode: "wide"
---

# Build with AI

CrewAI is AI-native by design. This page collects what an AI coding agent needs to build with CrewAI — whether that is Claude Code, Codex, Cursor, Gemini CLI, or any other assistant helping a developer ship crews and flows.

### Supported coding agents

<CardGroup cols={5}>
  <Card title="Claude Code" icon="message-bot" color="#D97706" />
  <Card title="Cursor" icon="arrow-pointer" color="#3B82F6" />
  <Card title="Codex" icon="terminal" color="#10B981" />
  <Card title="Windsurf" icon="wind" color="#06B6D4" />
  <Card title="Gemini CLI" icon="sparkles" color="#8B5CF6" />
</CardGroup>

<Note>
This page is written for humans and AI assistants alike. If you are a coding agent, start with **Skills** for CrewAI context, then use **llms.txt** for full documentation access.
</Note>

---

## 1. Skills — teach your agent CrewAI

**Skills** are instruction packs that give coding agents deep CrewAI knowledge — how to scaffold Flows, configure Crews, use tools, and follow the framework's conventions.

<Tabs>
  <Tab title="Claude Code (plugin marketplace)">
    <img src="https://cdn.simpleicons.org/anthropic/D97706" alt="Anthropic" width="28" style={{display: "inline", verticalAlign: "middle", marginRight: "8px"}} />
    CrewAI skills are available in the **Claude Code plugin marketplace** — the same distribution channel used by leading AI companies:
    ```shell
    /plugin marketplace add crewAIInc/skills
    /plugin install crewai-skills@crewai-plugins
    /reload-plugins
    ```

    Four skills activate automatically when you ask CrewAI-related questions:

    | Skill | When it runs |
    |-------|--------------|
    | `getting-started` | New projects, choosing between `LLM.call()` / `Agent` / `Crew` / `Flow`, wiring `crew.py` / `main.py` |
    | `design-agent` | Configuring agents — role, goal, backstory, tools, LLMs, memory, guardrails |
    | `design-task` | Task descriptions, dependencies, structured output (`output_pydantic`, `output_json`), human review |
    | `ask-docs` | Querying the [CrewAI docs MCP server](https://docs.crewai.com/mcp) for up-to-date API details |
  </Tab>
  <Tab title="npx (any agent)">
    Works with Claude Code, Codex, Cursor, Gemini CLI, or any coding agent:
    ```shell
    npx skills add crewaiinc/skills
    ```
    Fetched from the [skills.sh registry](https://skills.sh/crewaiinc/skills).
  </Tab>
</Tabs>

<Steps>
  <Step title="Install the official skills pack">
    Use either method above — the Claude Code plugin marketplace or `npx skills add`. Both install the official [crewAIInc/skills](https://github.com/crewAIInc/skills) pack.
  </Step>
  <Step title="Your agent immediately gains CrewAI expertise">
    The pack teaches your agent:
    - **Flows** — stateful applications, steps, and running crews
    - **Crews and agents** — YAML-first patterns, roles, tasks, delegation
    - **Tools and integrations** — search, APIs, MCP servers, and common CrewAI tools
    - **Project structure** — CLI scaffolds and repository conventions
    - **Up-to-date patterns** — aligned with current CrewAI docs and best practices
  </Step>
  <Step title="Start building">
    Your agent can now scaffold and build CrewAI projects without you re-explaining the framework every session.
  </Step>
</Steps>

<CardGroup cols={2}>
  <Card title="Skills concept" icon="bolt" href="/ar/concepts/skills">
    How skills work in CrewAI agents — injection, activation, and patterns.
  </Card>
  <Card title="Skills page" icon="wand-magic-sparkles" href="/ar/skills">
    A look at the crewAIInc/skills pack and what it includes.
  </Card>
  <Card title="AGENTS.md and tools" icon="terminal" href="/ar/guides/coding-tools/agents-md">
    Set up AGENTS.md for Claude Code, Codex, Cursor, and Gemini CLI.
  </Card>
  <Card title="skills.sh registry" icon="globe" href="https://skills.sh/crewaiinc/skills">
    The official listing — skills, install stats, and auditing.
  </Card>
</CardGroup>

---

## 2. llms.txt — machine-readable docs

CrewAI publishes an `llms.txt` file that gives AI assistants direct access to the full documentation in machine-readable form.

```
https://docs.crewai.com/llms.txt
```

<Tabs>
  <Tab title="What is llms.txt?">
    [`llms.txt`](https://llmstxt.org/) is an emerging standard for making documentation consumable by large language models. Instead of scraping HTML, your agent can fetch a single structured text file with everything it needs.

    CrewAI's `llms.txt` is **live today** — your agent can use it right now.
  </Tab>
  <Tab title="How to use it">
    Point your coding agent at the URL whenever it needs a CrewAI reference:

    ```
    Fetch https://docs.crewai.com/llms.txt for CrewAI documentation.
    ```

    Many coding agents (Claude Code, Cursor, and others) can fetch URLs directly. The file contains structured documentation covering CrewAI concepts, APIs, and guides.
  </Tab>
  <Tab title="Why it matters">
    - **No web scraping** — clean, structured content in one request
    - **Always current** — served directly from docs.crewai.com
    - **LLM-optimized** — formatted for context windows, not browsers
    - **Complements Skills** — skills teach the patterns, llms.txt provides the reference
  </Tab>
</Tabs>
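Because `llms.txt` follows the llmstxt.org convention of markdown link lists, an agent-side helper can fetch and index it with the standard library alone. A minimal sketch, assuming the `- [Title](url)` layout; the function names here are illustrative, not part of any CrewAI API:

```python
import re
import urllib.request

LLMS_TXT_URL = "https://docs.crewai.com/llms.txt"

def fetch_llms_txt(url: str = LLMS_TXT_URL) -> str:
    """Download the raw llms.txt document."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8")

def extract_links(text: str) -> dict[str, str]:
    """Map page titles to URLs from markdown-style '- [Title](url)' lines."""
    return {m.group(1): m.group(2)
            for m in re.finditer(r"-\s*\[([^\]]+)\]\((\S+?)\)", text)}

# Offline demo on a sample in the llms.txt link-list shape:
sample = (
    "# CrewAI\n\n"
    "- [Agents](https://docs.crewai.com/en/concepts/agents)\n"
    "- [Tasks](https://docs.crewai.com/en/concepts/tasks)\n"
)
links = extract_links(sample)
# For the live file: links = extract_links(fetch_llms_txt())
```

The index can then be used to fetch only the pages relevant to the current question instead of loading the whole document into context.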
---

## 3. Enterprise deployment

Go from a local crew to production on **CrewAI AMP** (Agent Management Platform) in minutes.

<Steps>
  <Step title="Build locally">
    Scaffold and test your crew or flow:
    ```bash
    crewai create crew my_crew
    cd my_crew
    crewai run
    ```
  </Step>
  <Step title="Prepare for deployment">
    Make sure your project structure is ready:
    ```bash
    crewai deploy --prepare
    ```
    See the [preparation guide](/ar/enterprise/guides/prepare-for-deployment) for structure details and requirements.
  </Step>
  <Step title="Deploy to AMP">
    Push to the CrewAI AMP platform:
    ```bash
    crewai deploy
    ```
    You can also deploy via the [GitHub integration](/ar/enterprise/guides/deploy-to-amp) or [Crew Studio](/ar/enterprise/guides/enable-crew-studio).
  </Step>
  <Step title="Access via API">
    Your deployed crew gets a REST endpoint. Integrate it into any application:
    ```bash
    curl -X POST https://app.crewai.com/api/v1/crews/<crew-id>/kickoff \
      -H "Authorization: Bearer $CREWAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"inputs": {"topic": "AI agents"}}'
    ```
  </Step>
</Steps>
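The kickoff call above can be mirrored in Python with only the standard library. This sketch assembles the request without sending it; the URL and payload shape are taken from the curl example, and the crew id is a placeholder:

```python
import json
import os
import urllib.request

def build_kickoff_request(crew_id: str, inputs: dict) -> urllib.request.Request:
    """Assemble the kickoff POST shown in the curl example above."""
    url = f"https://app.crewai.com/api/v1/crews/{crew_id}/kickoff"
    body = json.dumps({"inputs": inputs}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            # Read the key from the environment, as in the curl example.
            "Authorization": f"Bearer {os.environ.get('CREWAI_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_kickoff_request("my-crew-id", {"topic": "AI agents"})
# Sending it is one more call: urllib.request.urlopen(req)
```

Keeping request construction separate from sending makes the payload easy to unit-test before pointing it at a real deployment.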
<CardGroup cols={2}>
  <Card title="Deploy to AMP" icon="rocket" href="/ar/enterprise/guides/deploy-to-amp">
    The full deployment guide — CLI, GitHub, and Crew Studio.
  </Card>
  <Card title="AMP introduction" icon="globe" href="/ar/enterprise/introduction">
    A platform overview — what AMP provides for crews in production.
  </Card>
</CardGroup>

---

## 4. Enterprise features

CrewAI AMP is built for production teams. Here is what you get once deployed.

<CardGroup cols={2}>
  <Card title="Monitoring and observability" icon="chart-line">
    Detailed execution traces, logs, and performance metrics for every crew run. Watch agent decisions, tool calls, and task completions in real time.
  </Card>
  <Card title="Crew Studio" icon="paintbrush">
    A low/no-code interface to visually create, customize, and deploy crews — then export to code or deploy directly.
  </Card>
  <Card title="Webhook streaming" icon="webhook">
    Stream real-time events from crew executions to your systems. Integrate with Slack, Zapier, or any webhook consumer.
  </Card>
  <Card title="Team management" icon="users">
    SSO, RBAC, and organization-level controls. Manage who can create, deploy, and access crews.
  </Card>
  <Card title="Tool repository" icon="toolbox">
    Publish and share custom tools across your organization. Install community tools from the registry.
  </Card>
  <Card title="Factory (self-hosted)" icon="server">
    Run CrewAI AMP on your own infrastructure. Full platform capabilities with data residency and compliance controls.
  </Card>
</CardGroup>

<AccordionGroup>
  <Accordion title="Who is AMP for?">
    Teams that need to move AI-agent workflows from prototype to production — with monitoring, access controls, and scalable infrastructure. Whether you are a startup or a large enterprise, AMP handles the operational complexity so you can focus on building agents.
  </Accordion>
  <Accordion title="What deployment options are available?">
    - **Cloud (app.crewai.com)** — managed by CrewAI, the fastest path to production
    - **Factory (self-hosted)** — on your own infrastructure for full data control
    - **Hybrid** — mix cloud and self-hosted based on data sensitivity
  </Accordion>
</AccordionGroup>

<Card title="Explore CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
  Sign up and deploy your first crew to production.
</Card>
```diff
@@ -196,7 +196,7 @@ python3 --version
 - Supports any cloud provider, including on-prem deployment
 - Integrates with existing security systems

-<Card title="Explore enterprise options" icon="building" href="https://crewai.com/enterprise">
+<Card title="Explore enterprise options" icon="building" href="https://share.hsforms.com/1Ooo2UViKQ22UOzdr7i77iwr87kg">
   Learn about CrewAI's enterprise offerings and schedule a demo
 </Card>
 </Note>
```
docs/docs.json (3,862 lines changed; diff suppressed because it is too large)
@@ -4,6 +4,292 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 25, 2026">
## v1.14.3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3)

## What's Changed

### Features
- Add lifecycle events for checkpoint operations
- Add support for e2b
- Fall back to DefaultAzureCredential when no API key is provided in Azure integration
- Add Bedrock V4 support
- Add Daytona sandbox tools for enhanced functionality
- Add checkpoint and fork support to standalone agents

### Bug Fixes
- Fix execution_id to be separate from state.id
- Resolve replay of recorded method events on checkpoint resume
- Fix serialization of initial_state class references as JSON schema
- Preserve metadata-only agent skills
- Propagate implicit @CrewBase names to crew events
- Merge execution metadata on duplicate batch initialization
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel result in guardrail retry loop
- Preserve thought_signature in Gemini streaming tool calls
- Emit task_started on fork resume and redesign checkpoint TUI
- Use future dates in checkpoint prune tests to prevent time-dependent failures
- Fix dry-run order and handle checked-out stale branch in devtools release
- Upgrade lxml to >=6.1.0 for security patch
- Bump python-dotenv to >=1.2.2 for security patch

### Documentation
- Update changelog and version for v1.14.3
- Add 'Build with AI' page and update navigation for all languages
- Remove pricing FAQ from build-with-ai page across all locales

### Performance
- Optimize MCP SDK and event types to reduce cold start by ~29%

### Refactoring
- Refactor checkpoint helpers to eliminate duplication and tighten state type hints

## Contributors

@MatthiasHowellYopp, @akaKuruma, @alex-clawd, @github-actions[bot], @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha, @renatonitta

</Update>

<Update label="Apr 23, 2026">
## v1.14.3a3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a3)

## What's Changed

### Features
- Add support for e2b
- Implement fallback to DefaultAzureCredential when no API key is provided

### Bug Fixes
- Upgrade lxml to >=6.1.0 to address security issue GHSA-vfmq-68hx-4jfw

### Documentation
- Remove pricing FAQ from build-with-ai page across all locales

### Performance
- Improve cold start time by ~29% through lazy-loading of MCP SDK and event types

## Contributors

@alex-clawd, @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha

</Update>

<Update label="Apr 22, 2026">
## v1.14.3a2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a2)

## What's Changed

### Features
- Add support for bedrock V4
- Add Daytona sandbox tools for enhanced functionality
- Add 'Build with AI' page — AI-native docs for coding agents
- Add Build with AI to Get Started navigation and page files for all languages (en, ko, pt-BR, ar)

### Bug Fixes
- Fix propagation of implicit @CrewBase names to crew events
- Resolve issue with duplicate batch initialization in execution metadata merge
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel result in guardrail retry loop
- Bump python-dotenv to version >=1.2.2 for security compliance

### Documentation
- Update changelog and version for v1.14.3a1
- Update descriptions and apply actual translations

## Contributors

@MatthiasHowellYopp, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @renatonitta

</Update>

<Update label="Apr 21, 2026">
## v1.14.3a1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a1)

## What's Changed

### Features
- Add checkpoint and fork support to standalone agents

### Bug Fixes
- Preserve thought_signature in Gemini streaming tool calls
- Emit task_started on fork resume and redesign checkpoint TUI
- Correct dry-run order and handle checked-out stale branch in devtools release
- Use future dates in checkpoint prune tests to prevent time-dependent failures (#5543)

### Documentation
- Update changelog and version for v1.14.2

## Contributors

@alex-clawd, @greysonlalonde

</Update>

<Update label="Apr 17, 2026">
## v1.14.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2)

## What's Changed

### Features
- Add checkpoint resume, diff, and prune commands with improved discoverability.
- Add `from_checkpoint` parameter to `Agent.kickoff` and related methods.
- Add template management commands for project templates.
- Add resume hints to devtools release on failure.
- Add deploy validation CLI and enhance LLM initialization ergonomics.
- Add checkpoint forking with lineage tracking.
- Enrich LLM token tracking with reasoning tokens and cache creation tokens.
### Bug Fixes
- Fix prompt on stale branch conflicts in devtools release.
- Patch vulnerabilities in `authlib`, `langchain-text-splitters`, and `pypdf`.
- Scope streaming handlers to prevent cross-run chunk contamination.
- Dispatch Flow checkpoints through Flow APIs in TUI.
- Use recursive glob for JSON checkpoint discovery.
- Handle cyclic JSON schemas in MCP tool resolution.
- Preserve Bedrock tool call arguments by removing truthy default.
- Emit flow_finished event after HITL resume.
- Fix various vulnerabilities by updating dependencies, including `requests`, `cryptography`, and `pytest`.
- Fix to stop forwarding strict mode to Bedrock Converse API.
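One fix above, recursive glob for JSON checkpoint discovery, is easy to picture with the standard library: a flat `glob("*.json")` misses files in subdirectories, while `rglob` walks the whole tree. The directory layout below is made up for the demo and is not CrewAI's actual storage scheme:

```python
import tempfile
from pathlib import Path

def discover_checkpoints(root: Path) -> list[Path]:
    """Find checkpoint JSON files at any depth, not just the top level."""
    return sorted(root.rglob("*.json"))  # recursive, unlike root.glob("*.json")

# Demo against a throwaway directory tree (names are illustrative):
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "flow_a" / "forks").mkdir(parents=True)
    (root / "top.json").write_text("{}")
    (root / "flow_a" / "forks" / "cp1.json").write_text("{}")
    found = discover_checkpoints(root)
    flat = list(root.glob("*.json"))  # the pre-fix behavior: top level only
```

Here `found` contains both files while `flat` sees only `top.json`, which is exactly the class of bug the release note describes.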
### Documentation
- Document missing parameters and add Checkpointing section.
- Update changelog and version for v1.14.2 and previous release candidates.
- Add enterprise A2A feature documentation and update OSS A2A docs.

## Contributors

@Yanhu007, @alex-clawd, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @lucasgomide

</Update>

<Update label="Apr 16, 2026">
## v1.14.2rc1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2rc1)

## What's Changed

### Bug Fixes
- Fix handling of cyclic JSON schemas in MCP tool resolution
- Fix vulnerability by bumping python-multipart to 0.0.26
- Fix vulnerability by bumping pypdf to 6.10.1
|
||||
|
||||
### Documentation
|
||||
- Update changelog and version for v1.14.2a5
|
||||
|
||||
## Contributors
|
||||
|
||||
@greysonlalonde
|
||||
|
||||
</Update>
|
||||
|
||||
<Update label="Apr 15, 2026">
|
||||
## v1.14.2a5
|
||||
|
||||
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a5)
|
||||
|
||||
## What's Changed
|
||||
|
||||
### Documentation
|
||||
- Update changelog and version for v1.14.2a4
|
||||
|
||||
## Contributors
|
||||
|
||||
@greysonlalonde
|
||||
|
||||
</Update>
|
||||
|
||||
<Update label="Apr 15, 2026">
|
||||
## v1.14.2a4
|
||||
|
||||
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a4)
|
||||
|
||||
## What's Changed
|
||||
|
||||
### Features
|
||||
- Add resume hints to devtools release on failure
|
||||
|
||||
### Bug Fixes
|
||||
- Fix strict mode forwarding to Bedrock Converse API
|
||||
- Fix pytest version to 9.0.3 for security vulnerability GHSA-6w46-j5rx-g56g
|
||||
- Bump OpenAI lower bound to >=2.0.0
|
||||
|
||||
### Documentation
|
||||
- Update changelog and version for v1.14.2a3
|
||||
|
||||
## Contributors
|
||||
|
||||
@greysonlalonde
|
||||
|
||||
</Update>
|
||||
|
||||
<Update label="Apr 13, 2026">
|
||||
## v1.14.2a3
|
||||
|
||||
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a3)
|
||||
|
||||
## What's Changed
|
||||
|
||||
### Features
|
||||
- Add deploy validation CLI
|
||||
- Improve LLM initialization ergonomics
|
||||
|
||||
### Bug Fixes
|
||||
- Override pypdf and uv to patched versions for CVE-2026-40260 and GHSA-pjjw-68hj-v9mw
|
||||
- Upgrade requests to >=2.33.0 for CVE temp file vulnerability
|
||||
- Preserve Bedrock tool call arguments by removing truthy default
|
||||
- Sanitize tool schemas for strict mode
|
||||
- Deflake MemoryRecord embedding serialization test
|
||||
|
||||
### Documentation
|
||||
- Clean up enterprise A2A language
|
||||
- Add enterprise A2A feature documentation
|
||||
- Update OSS A2A documentation
|
||||
- Update changelog and version for v1.14.2a2
|
||||
|
||||
## Contributors
|
||||
|
||||
@Yanhu007, @greysonlalonde
|
||||
|
||||
</Update>
|
||||
|
||||
<Update label="Apr 10, 2026">
|
||||
## v1.14.2a2
|
||||
|
||||
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a2)
|
||||
|
||||
## What's Changed
|
||||
|
||||
### Features
|
||||
- Add checkpoint TUI with tree view, fork support, and editable inputs/outputs
|
||||
- Enrich LLM token tracking with reasoning tokens and cache creation tokens
|
||||
- Add `from_checkpoint` parameter to kickoff methods
|
||||
- Embed `crewai_version` in checkpoints with migration framework
|
||||
- Add checkpoint forking with lineage tracking
|
||||
|
||||
### Bug Fixes
|
||||
- Fix strict mode forwarding to Anthropic and Bedrock providers
|
||||
- Harden NL2SQLTool with read-only default, query validation, and parameterized queries
|
||||
|
||||
### Documentation
|
||||
- Update changelog and version for v1.14.2a1
|
||||
|
||||
## Contributors
|
||||
|
||||
@alex-clawd, @github-actions[bot], @greysonlalonde, @lucasgomide
|
||||
|
||||
</Update>
|
||||
|
||||
<Update label="Apr 09, 2026">
|
||||
## v1.14.2a1
|
||||
|
||||
|
||||
@@ -33,7 +33,14 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated, all Crew data is sent to an AgentPlanner before each Crew iteration; the resulting plan is added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner during the planning process. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | Knowledge sources available at the crew level, accessible to all the agents. |
| **Stream** _(optional)_ | `stream` | Enable streaming output to receive real-time updates during crew execution. Returns a `CrewStreamingOutput` object that can be iterated for chunks. Defaults to `False`. |
| **Chat LLM** _(optional)_ | `chat_llm` | The language model used to orchestrate `crewai chat` CLI interactions with the crew. Accepts a model name string or `LLM` instance. Defaults to `None`. |
| **Before Kickoff Callbacks** _(optional)_ | `before_kickoff_callbacks` | A list of callable functions executed **before** the crew starts. Each callback receives and can modify the inputs dict. Distinct from the `@before_kickoff` decorator. Defaults to `[]`. |
| **After Kickoff Callbacks** _(optional)_ | `after_kickoff_callbacks` | A list of callable functions executed **after** the crew finishes. Each callback receives and can modify the `CrewOutput`. Distinct from the `@after_kickoff` decorator. Defaults to `[]`. |
| **Tracing** _(optional)_ | `tracing` | Controls OpenTelemetry tracing for the crew. `True` = always enable, `False` = always disable, `None` = inherit from environment / user settings. Defaults to `None`. |
| **Skills** _(optional)_ | `skills` | A list of `Path` objects (skill search directories) or pre-loaded `Skill` objects applied to all agents in the crew. Defaults to `None`. |
| **Security Config** _(optional)_ | `security_config` | A `SecurityConfig` instance managing crew fingerprinting and identity. Defaults to `SecurityConfig()`. |
| **Checkpoint** _(optional)_ | `checkpoint` | Enables automatic checkpointing. Pass `True` for sensible defaults, a `CheckpointConfig` for full control, `False` to opt out, or `None` to inherit. See the [Checkpointing](#checkpointing) section below. Defaults to `None`. |
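The before/after kickoff callback semantics can be sketched in plain Python. This is an illustrative sketch of the described behavior, not CrewAI's internal code; `apply_before_kickoff` and `normalize_topic` are hypothetical helper names, and the real hooks are invoked internally by `Crew.kickoff()`:

```python
def apply_before_kickoff(callbacks, inputs: dict) -> dict:
    """Run each before-kickoff callback in order; each receives the
    inputs dict and returns a (possibly modified) dict."""
    for callback in callbacks:
        inputs = callback(inputs)
    return inputs


# Example: a hypothetical callback that normalizes the topic input.
def normalize_topic(inputs: dict) -> dict:
    inputs["topic"] = inputs.get("topic", "").strip().lower()
    return inputs
```

After-kickoff callbacks follow the same chaining pattern, but receive and return the `CrewOutput` instead of the inputs dict.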

<Tip>
**Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.

@@ -271,6 +278,72 @@ crew = Crew(output_log_file = file_name.json) # Logs will be saved as file_name

## Checkpointing

Checkpointing lets a crew automatically save its state after key events (e.g. task completion) so that long-running or interrupted runs can be resumed exactly where they left off without re-executing completed tasks.

### Quick Start

Pass `checkpoint=True` to enable checkpointing with sensible defaults (saves to `.checkpoints/` after every task):

```python Code
from crewai import Crew, Process

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    checkpoint=True,  # saves to .checkpoints/ after every task
)

crew.kickoff(inputs={"topic": "AI trends"})
```

### Full Control with `CheckpointConfig`

Use `CheckpointConfig` for fine-grained control over location, trigger events, storage backend, and retention:

```python Code
from crewai import Crew, Process
from crewai.state.checkpoint_config import CheckpointConfig

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    checkpoint=CheckpointConfig(
        location="./.checkpoints",     # directory for JSON files (default)
        on_events=["task_completed"],  # trigger after each task (default)
        max_checkpoints=5,             # keep only the 5 most recent checkpoints
    ),
)

crew.kickoff(inputs={"topic": "AI trends"})
```

### Resuming from a Checkpoint

Use `Crew.from_checkpoint()` to restore a crew from a saved checkpoint file, then call `kickoff()` to resume:

```python Code
# Resume from the most recent checkpoint
crew = Crew.from_checkpoint(".checkpoints/latest.json")
crew.kickoff()
```

<Note>
When restoring from a checkpoint, `checkpoint_inputs`, `checkpoint_train`, and `checkpoint_kickoff_event_id` are automatically reconstructed — you do not need to set these manually.
</Note>

### `CheckpointConfig` Attributes

| Attribute | Type | Default | Description |
| :---------------- | :------------------------------- | :------------------- | :------------------------------------------------------------------------------------------------------------ |
| `location` | `str` | `"./.checkpoints"` | Storage destination. For `JsonProvider` this is a directory path; for `SqliteProvider`, a database file path. |
| `on_events` | `list[str]` | `["task_completed"]` | Event types that trigger a checkpoint write. Use `["*"]` to checkpoint on every event. |
| `provider` | `JsonProvider \| SqliteProvider` | `JsonProvider()` | Storage backend. Defaults to `JsonProvider` (plain JSON files). |
| `max_checkpoints` | `int \| None` | `None` | Maximum checkpoints to keep. Oldest are pruned after each write. `None` keeps all. |
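The `max_checkpoints` retention policy can be illustrated with a small sketch. This mirrors the documented behavior (newest files kept, oldest pruned, checkpoints discovered with a recursive glob) and is not CrewAI's internal implementation:

```python
from pathlib import Path


def prune_checkpoints(directory: str, max_checkpoints: int) -> list[str]:
    """Delete all but the newest `max_checkpoints` JSON checkpoint files
    and return the names of the pruned files."""
    # Recursive glob mirrors how JSON checkpoints are discovered.
    files = sorted(
        Path(directory).glob("**/*.json"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,  # newest first
    )
    pruned = []
    for stale in files[max_checkpoints:]:
        stale.unlink()
        pruned.append(stale.name)
    return pruned
```

With `max_checkpoints=5`, a sixth write triggers removal of the oldest file, so disk usage stays bounded on long runs.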

## Memory Utilization

Crews can utilize memory (short-term, long-term, and entity memory) to enhance their execution and learning over time. This feature allows crews to store and recall execution memories, aiding in decision-making and task execution strategies.
docs/en/enterprise/features/a2a.mdx (new file, 227 lines)
@@ -0,0 +1,227 @@
---
title: A2A on AMP
description: Production-grade Agent-to-Agent communication with distributed state and multi-scheme authentication
icon: "network-wired"
mode: "wide"
---

<Warning>
A2A server agents on AMP are in early release. APIs may change in future versions.
</Warning>

## Overview

CrewAI AMP extends the open-source [A2A protocol implementation](/en/learn/a2a-agent-delegation) with production infrastructure for deploying distributed agents at scale. AMP supports A2A protocol versions 0.2 and 0.3. When you deploy a crew or agent with A2A server configuration to AMP, the platform automatically provisions distributed state management, authentication, multi-transport endpoints, and lifecycle management.

<Note>
For A2A protocol fundamentals, client/server configuration, and authentication schemes, see the [A2A Agent Delegation](/en/learn/a2a-agent-delegation) documentation. This page covers what AMP adds on top of the open-source implementation.
</Note>

### Usage

Add `A2AServerConfig` to any agent in your crew and deploy to AMP. The platform detects agents with server configuration and automatically registers A2A endpoints, generates agent cards, and provisions the infrastructure described below.

```python
from crewai import Agent, Crew, Task
from crewai.a2a import A2AServerConfig
from crewai.a2a.auth import EnterpriseTokenAuth

agent = Agent(
    role="Data Analyst",
    goal="Analyze datasets and provide insights",
    backstory="Expert data scientist with statistical analysis skills",
    llm="gpt-4o",
    a2a=A2AServerConfig(
        auth=EnterpriseTokenAuth()
    )
)

task = Task(
    description="Analyze the provided dataset",
    expected_output="Statistical summary with key insights",
    agent=agent
)

crew = Crew(agents=[agent], tasks=[task])
```

After [deploying to AMP](/en/enterprise/guides/deploy-to-amp), the platform registers two levels of A2A endpoints:

- **Crew-level**: an aggregate agent card at `/.well-known/agent-card.json` where each agent with `A2AServerConfig` is listed as a skill, with a JSON-RPC endpoint at `/a2a`
- **Per-agent**: isolated agent cards and JSON-RPC endpoints mounted at `/a2a/agents/{role}/`, each with its own tenancy

Clients can interact with the crew as a whole or target a specific agent directly. To route a request to a specific agent through the crew-level endpoint, include `"target_agent"` in the message metadata with the agent's slugified role name (e.g., `"data-analyst"` for an agent with role `"Data Analyst"`). If no `target_agent` is provided, the request is handled by the first agent in the crew.

See [A2A Agent Delegation](/en/learn/a2a-agent-delegation#server-configuration-options) for the full list of `A2AServerConfig` options.

<Warning>
Per the A2A protocol, agent cards are publicly accessible to enable discovery. This includes both the crew-level card at `/.well-known/agent-card.json` and per-agent cards at `/a2a/agents/{role}/.well-known/agent-card.json`. Do not include sensitive information in agent names, descriptions, or skill definitions.
</Warning>

### File Inputs and Structured Output

A2A on AMP supports passing files and requesting structured output in both directions. Clients can send files as `FilePart`s and request structured responses by embedding a JSON schema in the message. Server agents receive files as `input_files` on the task, and return structured data as `DataPart`s when a schema is provided. See [File Inputs and Structured Output](/en/learn/a2a-agent-delegation#file-inputs-and-structured-output) for details.

### What AMP Adds

<CardGroup cols={2}>
  <Card title="Distributed State" icon="database">
    Persistent task, context, and result storage
  </Card>
  <Card title="Enterprise Authentication" icon="shield-halved">
    OIDC, OAuth2, mTLS, and Enterprise token validation beyond simple bearer tokens
  </Card>
  <Card title="gRPC Transport" icon="bolt">
    Full gRPC server with TLS and authentication
  </Card>
  <Card title="Context Lifecycle" icon="clock-rotate-left">
    Automatic idle detection, expiration, and cleanup of long-running conversations
  </Card>
  <Card title="Signed Webhooks" icon="signature">
    HMAC-SHA256 signed push notifications with replay protection
  </Card>
  <Card title="Multi-Transport" icon="arrows-split-up-and-left">
    REST, JSON-RPC, and gRPC endpoints served simultaneously from a single deployment
  </Card>
</CardGroup>

---

## Distributed State Management

In the open-source implementation, task and context state lives in memory on a single process. AMP replaces this with persistent, distributed stores.

### Storage Layers

| Store | Purpose |
|---|---|
| **Task Store** | Persists A2A task state and metadata |
| **Context Store** | Tracks conversation context, creation time, last activity, and associated tasks |
| **Result Store** | Caches task results for retrieval |
| **Push Config Store** | Manages webhook subscriptions per task |

Multiple A2A deployments are automatically isolated from each other, preventing data collisions when sharing infrastructure.

---

## Enterprise Authentication

AMP supports six authentication schemes for incoming A2A requests, configurable per deployment. Authentication works across both HTTP and gRPC transports.

| Scheme | Description | Use Case |
|---|---|---|
| **SimpleTokenAuth** | Static bearer token from `AUTH_TOKEN` env var | Development, simple deployments |
| **EnterpriseTokenAuth** | Token verification via CrewAI PlusAPI with integration token claims | AMP-to-AMP agent communication |
| **OIDCAuth** | OpenID Connect JWT validation with JWKS endpoint caching | Enterprise SSO integration |
| **OAuth2ServerAuth** | OAuth2 with configurable scopes | Fine-grained access control |
| **APIKeyServerAuth** | API key validation via header or query parameter | Third-party integrations |
| **MTLSServerAuth** | Mutual TLS certificate-based authentication | Zero-trust environments |

The configured auth scheme automatically populates the agent card's `securitySchemes` and `security` fields. Clients discover authentication requirements by fetching the agent card before making requests.
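A client can resolve the required schemes by following the card's `security` requirements into its `securitySchemes` definitions. A minimal sketch, assuming the fetched card has already been parsed into a dict:

```python
def required_auth_schemes(agent_card: dict) -> list[dict]:
    """Resolve the auth scheme definitions a client must satisfy,
    following the card's `security` entries into `securitySchemes`."""
    schemes = agent_card.get("securitySchemes", {})
    required = []
    for requirement in agent_card.get("security", []):
        for name in requirement:
            if name in schemes:
                required.append(schemes[name])
    return required
```

The client then attaches credentials matching the resolved scheme types (for example, a bearer token for an `http`/`bearer` scheme) before calling the agent.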

---

## Extended Agent Cards

AMP supports role-based skill visibility through extended agent cards. Unauthenticated users see the standard agent card with public skills. Authenticated users receive an extended card with additional capabilities.

This enables patterns like:

- Public agents that expose basic skills to anyone, with advanced skills available to authenticated clients
- Internal agents that advertise different capabilities based on the caller's identity

---

## gRPC Transport

If enabled, AMP provides full gRPC support alongside the default JSON-RPC transport.

- **TLS termination** with configurable certificate and key paths
- **gRPC reflection** for debugging with tools like `grpcurl`
- **Authentication** using the same schemes available for HTTP
- **Extension validation** ensuring clients support required protocol extensions
- **Version negotiation** across A2A protocol versions 0.2 and 0.3

For deployments exposing multiple agents, AMP automatically allocates per-agent gRPC ports and coordinates TLS, startup, and shutdown across all servers.

---

## Context Lifecycle Management

AMP tracks the lifecycle of A2A conversation contexts and automatically manages cleanup.

### Lifecycle States

| State | Condition | Action |
|---|---|---|
| **Active** | Context has recent activity | None |
| **Idle** | No activity for a configured period | Marked idle, event emitted |
| **Expired** | Context exceeds its maximum lifetime | Marked expired, associated tasks cleaned up, event emitted |

A background cleanup task runs hourly to scan for idle and expired contexts. All state transitions emit CrewAI events that integrate with the platform's observability features.
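The state table above reduces to a simple classification over two timestamps. The thresholds below are illustrative defaults, not AMP's actual configuration values:

```python
from datetime import datetime, timedelta


def classify_context(
    created_at: datetime,
    last_activity: datetime,
    now: datetime,
    idle_after: timedelta = timedelta(hours=1),
    max_lifetime: timedelta = timedelta(hours=24),
) -> str:
    """Classify a conversation context as active, idle, or expired."""
    if now - created_at > max_lifetime:
        return "expired"  # exceeds maximum lifetime; tasks get cleaned up
    if now - last_activity > idle_after:
        return "idle"  # no recent activity; marked idle, event emitted
    return "active"
```

The hourly cleanup scan conceptually applies this classification to every tracked context and acts on the idle/expired results.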

---

## Signed Push Notifications

When an A2A agent sends push notifications to a client webhook, AMP signs each request with HMAC-SHA256 to ensure integrity and prevent tampering.

### Signature Headers

| Header | Purpose |
|---|---|
| `X-A2A-Signature` | HMAC-SHA256 signature in `sha256={hex_digest}` format |
| `X-A2A-Signature-Timestamp` | Unix timestamp bound to the signature |
| `X-A2A-Notification-Token` | Optional notification auth token |

### Security Properties

- **Integrity**: payload cannot be modified without invalidating the signature
- **Replay protection**: signatures are timestamp-bound with a configurable tolerance window
- **Retry with backoff**: failed deliveries retry with exponential backoff
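A webhook receiver would verify both the timestamp tolerance and the signature. The sketch below assumes the signed message is `"{timestamp}.{body}"`; check your deployment's documentation for the exact construction, as that detail is an assumption here:

```python
import hashlib
import hmac
import time


def verify_push_signature(
    payload: bytes,
    signature_header: str,
    timestamp_header: str,
    secret: bytes,
    tolerance_seconds: int = 300,
) -> bool:
    """Verify an HMAC-SHA256 webhook signature in 'sha256={hex}' form."""
    # Reject stale timestamps to prevent replay attacks.
    if abs(time.time() - int(timestamp_header)) > tolerance_seconds:
        return False
    # Assumed signed-message construction: "{timestamp}.{body}".
    message = timestamp_header.encode() + b"." + payload
    expected = "sha256=" + hmac.new(secret, message, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature_header)
```

Always compare with `hmac.compare_digest` rather than `==`, and verify the signature before parsing the payload.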

---

## Distributed Event Streaming

In the open-source implementation, SSE streaming works within a single process. AMP propagates SSE events across instances so that clients receive updates even when the instance holding the streaming connection differs from the instance executing the task.

---

## Multi-Transport Endpoints

AMP serves REST and JSON-RPC by default. gRPC is available as an additional transport if enabled.

| Transport | Path Convention | Description |
|---|---|---|
| **REST** | `/v1/message:send`, `/v1/message:stream`, `/v1/tasks` | Google API conventions |
| **JSON-RPC** | Standard A2A JSON-RPC endpoint | Default A2A protocol transport |
| **gRPC** | Per-agent port allocation | Optional, high-performance binary protocol |

All active transports share the same authentication, version negotiation, and extension validation. Agent cards are generated from agent and crew metadata — roles, goals, and tools become skills and descriptions — and automatically include interfaces for each active transport. They can also be manually configured via `A2AServerConfig`.

---

## Version and Extension Negotiation

AMP validates A2A protocol versions and extensions at the transport layer.

### Version Negotiation

- Clients send the `A2A-Version` header with their preferred version
- AMP validates against supported versions (0.2, 0.3) and falls back to 0.3 if unspecified
- The negotiated version is returned in the response headers

### Extension Validation

- Clients declare supported extensions via the `X-A2A-Extensions` header
- AMP validates that clients support all extensions the agent requires
- Requests from clients missing required extensions receive an `UnsupportedExtensionError`
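The extension check is effectively a set-containment test on the comma-separated header. A minimal sketch of that logic (the local `UnsupportedExtensionError` class here just mirrors the error name the platform returns):

```python
class UnsupportedExtensionError(Exception):
    """Raised when a client does not declare a required extension."""


def validate_extensions(x_a2a_extensions: str, required: set[str]) -> None:
    """Reject requests whose X-A2A-Extensions header is missing any
    extension the agent requires."""
    declared = {e.strip() for e in x_a2a_extensions.split(",") if e.strip()}
    missing = required - declared
    if missing:
        raise UnsupportedExtensionError(f"missing extensions: {sorted(missing)}")
```

Clients should therefore send every extension they support, since declaring a superset of the required set always passes validation.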

---

## Next Steps

- [A2A Agent Delegation](/en/learn/a2a-agent-delegation) — A2A protocol fundamentals and configuration
- [A2UI](/en/learn/a2ui) — Interactive UI rendering over A2A
- [Deploy to AMP](/en/enterprise/guides/deploy-to-amp) — General deployment guide
- [Webhook Streaming](/en/enterprise/features/webhook-streaming) — Event streaming for deployed automations
docs/en/guides/coding-tools/build-with-ai.mdx (new file, 214 lines)
@@ -0,0 +1,214 @@
---
title: "Build with AI"
description: "Everything AI coding agents need to build, deploy, and scale with CrewAI — skills, machine-readable docs, deployment, and enterprise features."
icon: robot
mode: "wide"
---

# Build with AI

CrewAI is AI-native. This page brings together everything an AI coding agent needs to build with CrewAI — whether you're Claude Code, Codex, Cursor, Gemini CLI, or any other assistant helping a developer ship crews and flows.

### Supported Coding Agents

<CardGroup cols={5}>
  <Card title="Claude Code" icon="message-bot" color="#D97706" />
  <Card title="Cursor" icon="arrow-pointer" color="#3B82F6" />
  <Card title="Codex" icon="terminal" color="#10B981" />
  <Card title="Windsurf" icon="wind" color="#06B6D4" />
  <Card title="Gemini CLI" icon="sparkles" color="#8B5CF6" />
</CardGroup>

<Note>
This page is designed to be consumed by both humans and AI assistants. If you're a coding agent, start with **Skills** to get CrewAI context, then use **llms.txt** for full docs access.
</Note>

---

## 1. Skills — Teach Your Agent CrewAI

**Skills** are instruction packs that give coding agents deep CrewAI knowledge — how to scaffold Flows, configure Crews, use tools, and follow framework conventions.

<Tabs>
  <Tab title="Claude Code (Plugin Marketplace)">
    <img src="https://cdn.simpleicons.org/anthropic/D97706" alt="Anthropic" width="28" style={{display: "inline", verticalAlign: "middle", marginRight: "8px"}} />
    CrewAI skills are available in the **Claude Code plugin marketplace** — the same distribution channel used by top AI-native companies:
    ```shell
    /plugin marketplace add crewAIInc/skills
    /plugin install crewai-skills@crewai-plugins
    /reload-plugins
    ```

    Four skills activate automatically when you ask relevant CrewAI questions:

    | Skill | When it runs |
    |-------|--------------|
    | `getting-started` | Scaffolding new projects, choosing between `LLM.call()` / `Agent` / `Crew` / `Flow`, wiring `crew.py` / `main.py` |
    | `design-agent` | Configuring agents — role, goal, backstory, tools, LLMs, memory, guardrails |
    | `design-task` | Writing task descriptions, dependencies, structured output (`output_pydantic`, `output_json`), human review |
    | `ask-docs` | Querying the live [CrewAI docs MCP server](https://docs.crewai.com/mcp) for up-to-date API details |
  </Tab>
  <Tab title="npx (Any Agent)">
    Works with Claude Code, Codex, Cursor, Gemini CLI, or any coding agent:
    ```shell
    npx skills add crewaiinc/skills
    ```
    Pulls from the [skills.sh registry](https://skills.sh/crewaiinc/skills).
  </Tab>
</Tabs>

<Steps>
  <Step title="Install the official skill pack">
    Use either method above — the Claude Code plugin marketplace or `npx skills add`. Both install the official [crewAIInc/skills](https://github.com/crewAIInc/skills) pack.
  </Step>
  <Step title="Your agent gets instant CrewAI expertise">
    The skill pack teaches your agent:
    - **Flows** — stateful apps, steps, and crew kickoffs
    - **Crews & Agents** — YAML-first patterns, roles, tasks, delegation
    - **Tools & Integrations** — search, APIs, MCP servers, and common CrewAI tools
    - **Project layout** — CLI scaffolds and repo conventions
    - **Up-to-date patterns** — tracks current CrewAI docs and best practices
  </Step>
  <Step title="Start building">
    Your agent can now scaffold and build CrewAI projects without you re-explaining the framework each session.
  </Step>
</Steps>

<CardGroup cols={2}>
  <Card title="Skills concept" icon="bolt" href="/en/concepts/skills">
    How skills work in CrewAI agents — injection, activation, and patterns.
  </Card>
  <Card title="Skills landing page" icon="wand-magic-sparkles" href="/en/skills">
    Overview of the crewAIInc/skills pack and what it includes.
  </Card>
  <Card title="AGENTS.md & coding tools" icon="terminal" href="/en/guides/coding-tools/agents-md">
    Set up AGENTS.md for Claude Code, Codex, Cursor, and Gemini CLI.
  </Card>
  <Card title="Skills registry (skills.sh)" icon="globe" href="https://skills.sh/crewaiinc/skills">
    Official listing — skills, install stats, and audits.
  </Card>
</CardGroup>

---

## 2. llms.txt — Machine-Readable Docs

CrewAI publishes an `llms.txt` file that gives AI assistants direct access to the full documentation in a machine-readable format.

```
https://docs.crewai.com/llms.txt
```

<Tabs>
  <Tab title="What is llms.txt?">
    [`llms.txt`](https://llmstxt.org/) is an emerging standard for making documentation consumable by large language models. Instead of scraping HTML, your agent can fetch a single structured text file with all the content it needs.

    CrewAI's `llms.txt` is **already live** — your agent can use it right now.
  </Tab>
  <Tab title="How to use it">
    Point your coding agent at the URL when it needs CrewAI reference docs:

    ```
    Fetch https://docs.crewai.com/llms.txt for CrewAI documentation.
    ```

    Many coding agents (Claude Code, Cursor, etc.) can fetch URLs directly. The file contains structured documentation covering all CrewAI concepts, APIs, and guides.
  </Tab>
  <Tab title="Why it matters">
    - **No scraping required** — clean, structured content in one request
    - **Always up-to-date** — served directly from docs.crewai.com
    - **Optimized for LLMs** — formatted for context windows, not browsers
    - **Complements skills** — skills teach patterns, llms.txt provides reference
  </Tab>
</Tabs>

---

## 3. Deploy to Enterprise

Go from a local crew to production on **CrewAI AMP** (Agent Management Platform) in minutes.

<Steps>
  <Step title="Build locally">
    Scaffold and test your crew or flow:
    ```bash
    crewai create crew my_crew
    cd my_crew
    crewai run
    ```
  </Step>
  <Step title="Prepare for deployment">
    Ensure your project structure is ready:
    ```bash
    crewai deploy --prepare
    ```
    See the [preparation guide](/en/enterprise/guides/prepare-for-deployment) for details on project structure and requirements.
  </Step>
  <Step title="Deploy to AMP">
    Push to the CrewAI AMP platform:
    ```bash
    crewai deploy
    ```
    You can also deploy via [GitHub integration](/en/enterprise/guides/deploy-to-amp) or [Crew Studio](/en/enterprise/guides/enable-crew-studio).
  </Step>
  <Step title="Access via API">
    Your deployed crew gets a REST API endpoint. Integrate it into any application:
    ```bash
    curl -X POST https://app.crewai.com/api/v1/crews/<crew-id>/kickoff \
      -H "Authorization: Bearer $CREWAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"inputs": {"topic": "AI agents"}}'
    ```
  </Step>
</Steps>

<CardGroup cols={2}>
  <Card title="Deploy to AMP" icon="rocket" href="/en/enterprise/guides/deploy-to-amp">
    Full deployment guide — CLI, GitHub, and Crew Studio methods.
  </Card>
  <Card title="AMP introduction" icon="globe" href="/en/enterprise/introduction">
    Platform overview — what AMP provides for production crews.
  </Card>
</CardGroup>

---

## 4. Enterprise Features

CrewAI AMP is built for production teams. Here's what you get beyond deployment.

<CardGroup cols={2}>
  <Card title="Observability" icon="chart-line">
    Detailed execution traces, logs, and performance metrics for every crew run. Monitor agent decisions, tool calls, and task completion in real time.
  </Card>
  <Card title="Crew Studio" icon="paintbrush">
    No-code/low-code interface to create, customize, and deploy crews visually — then export to code or deploy directly.
  </Card>
  <Card title="Webhook Streaming" icon="webhook">
    Stream real-time events from crew executions to your systems. Integrate with Slack, Zapier, or any webhook consumer.
  </Card>
  <Card title="Team Management" icon="users">
    SSO, RBAC, and organization-level controls. Manage who can create, deploy, and access crews across your team.
  </Card>
  <Card title="Tool Repository" icon="toolbox">
    Publish and share custom tools across your organization. Install community tools from the registry.
  </Card>
  <Card title="Factory (Self-Hosted)" icon="server">
    Run CrewAI AMP on your own infrastructure. Full platform capabilities with data residency and compliance controls.
  </Card>
</CardGroup>

<AccordionGroup>
  <Accordion title="Who is AMP for?">
    AMP is for teams that need to move AI agent workflows from prototypes to production — with observability, access controls, and scalable infrastructure. Whether you're a startup or enterprise, AMP handles the operational complexity so you can focus on building agents.
  </Accordion>
  <Accordion title="What deployment options are available?">
    - **Cloud (app.crewai.com)** — managed by CrewAI, fastest path to production
    - **Factory (self-hosted)** — run on your own infrastructure for full data control
    - **Hybrid** — mix cloud and self-hosted based on sensitivity requirements
  </Accordion>
</AccordionGroup>

<Card title="Explore CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
  Sign up and deploy your first crew to production.
</Card>
@@ -199,7 +199,7 @@ For teams and organizations, CrewAI offers enterprise deployment options that el
- Supports any hyperscaler, including on-prem deployments
- Integration with your existing security systems

<Card title="Explore Enterprise Options" icon="building" href="https://share.hsforms.com/1Ooo2UViKQ22UOzdr7i77iwr87kg">
Learn about CrewAI's enterprise offerings and schedule a demo
</Card>
</Note>

@@ -7,6 +7,10 @@ mode: "wide"

## A2A Agent Delegation

<Info>
Deploying A2A agents to production? See [A2A on AMP](/en/enterprise/features/a2a) for distributed state, enterprise authentication, gRPC transport, and horizontal scaling.
</Info>

CrewAI treats the [A2A protocol](https://a2a-protocol.org/latest/) as a first-class delegation primitive: agents can delegate tasks, request information, and collaborate with remote agents, and can also act as A2A-compliant server agents.
In client mode, agents autonomously choose between local execution and remote delegation based on task requirements.

@@ -96,24 +100,28 @@ The `A2AClientConfig` class accepts the following parameters:
Update mechanism for receiving task status. Options: `StreamingConfig`, `PollingConfig`, or `PushNotificationConfig`.
</ParamField>

<ParamField path="accepted_output_modes" type="list[str]" default='["application/json"]'>
Media types the client can accept in responses.
</ParamField>

<ParamField path="use_client_preference" type="bool" default="False">
Whether to prioritize client transport preferences over the server's.
</ParamField>

<ParamField path="extensions" type="list[str]" default="[]">
A2A protocol extension URIs the client supports.
</ParamField>

<ParamField path="client_extensions" type="list[A2AExtension]" default="[]">
Client-side processing hooks for tool injection, prompt augmentation, and response modification.
</ParamField>

<ParamField path="transport" type="ClientTransportConfig" default="ClientTransportConfig()">
Transport configuration including the preferred transport, supported transports for negotiation, and protocol-specific settings (gRPC message sizes, keepalive, etc.).
</ParamField>

<ParamField path="transport_protocol" type="Literal['JSONRPC', 'GRPC', 'HTTP+JSON']" default="None">
**Deprecated**: Use `transport=ClientTransportConfig(preferred=...)` instead.
</ParamField>

<ParamField path="supported_transports" type="list[str]" default="None">
**Deprecated**: Use `transport=ClientTransportConfig(supported=...)` instead.
</ParamField>

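Taken together, the parameters above compose like this. A minimal sketch, assuming `A2AClientConfig` and `ClientTransportConfig` are importable from a `crewai.a2a` module (the exact import path is an assumption; check your installed version):

```python
# Hypothetical import path -- verify against your crewai release.
from crewai.a2a import A2AClientConfig, ClientTransportConfig

config = A2AClientConfig(
    # Prefer gRPC, but allow negotiation down to JSON-RPC.
    transport=ClientTransportConfig(
        preferred="GRPC",
        supported=["GRPC", "JSONRPC"],
    ),
    use_client_preference=True,
    accepted_output_modes=["application/json"],
)
```

Note that the deprecated `transport_protocol` and `supported_transports` parameters are not set here; the nested `ClientTransportConfig` replaces both.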
## Authentication

@@ -405,11 +413,7 @@ agent = Agent(
Preferred endpoint URL. If set, overrides the URL passed to `to_agent_card()`.
</ParamField>

<ParamField path="preferred_transport" type="Literal['JSONRPC', 'GRPC', 'HTTP+JSON']" default="JSONRPC">
Transport protocol for the preferred endpoint.
</ParamField>

<ParamField path="protocol_version" type="str" default="0.3.0">
A2A protocol version this agent supports.
</ParamField>

@@ -441,8 +445,36 @@ agent = Agent(
Whether the agent provides an extended card to authenticated users.
</ParamField>

<ParamField path="extended_skills" type="list[AgentSkill]" default="[]">
Additional skills visible only to authenticated users in the extended agent card.
</ParamField>

<ParamField path="signing_config" type="AgentCardSigningConfig" default="None">
Configuration for signing the AgentCard with JWS. Supports RS256, ES256, PS256, and related algorithms.
</ParamField>

<ParamField path="server_extensions" type="list[ServerExtension]" default="[]">
Server-side A2A protocol extensions with `on_request`/`on_response` hooks that modify agent behavior.
</ParamField>

<ParamField path="push_notifications" type="ServerPushNotificationConfig" default="None">
Configuration for outgoing push notifications, including an HMAC-SHA256 signing secret.
</ParamField>

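The HMAC-SHA256 signing secret mentioned above lets receivers authenticate notifications. A consumer-side sketch using only the standard library, under the assumption that the signature is a hex digest of the raw request body (the header name and encoding here are illustrative, not taken from the CrewAI implementation):

```python
import hashlib
import hmac

def verify_push_signature(secret: str, body: bytes, signature_hex: str) -> bool:
    """Recompute the HMAC-SHA256 of the body and compare in constant time."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# A sender produces the signature the same way over the serialized payload:
body = b'{"task_id": "abc", "status": "completed"}'
sig = hmac.new(b"shared-secret", body, hashlib.sha256).hexdigest()
assert verify_push_signature("shared-secret", body, sig)
```

`hmac.compare_digest` avoids timing side channels that a plain `==` comparison would leak.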
<ParamField path="transport" type="ServerTransportConfig" default="ServerTransportConfig()">
Transport configuration including the preferred transport, gRPC server settings, JSON-RPC paths, and HTTP+JSON settings.
</ParamField>

<ParamField path="auth" type="ServerAuthScheme" default="None">
Authentication scheme for incoming A2A requests. Defaults to `SimpleTokenAuth` using the `AUTH_TOKEN` environment variable.
</ParamField>

<ParamField path="preferred_transport" type="Literal['JSONRPC', 'GRPC', 'HTTP+JSON']" default="None">
**Deprecated**: Use `transport=ServerTransportConfig(preferred=...)` instead.
</ParamField>

<ParamField path="signatures" type="list[AgentCardSignature]" default="None">
**Deprecated**: Use `signing_config=AgentCardSigningConfig(...)` instead.
</ParamField>

### Combined Client and Server

@@ -468,6 +500,14 @@ agent = Agent(
)
```

### File Inputs and Structured Output

A2A supports passing files and requesting structured output in both directions.

**Client side**: When delegating to a remote A2A agent, files from the task's `input_files` are sent as `FilePart`s in the outgoing message. If `response_model` is set on the `A2AClientConfig`, the Pydantic model's JSON schema is embedded in the message metadata, requesting structured output from the remote agent.

**Server side**: Incoming `FilePart`s are extracted and passed to the agent's task as `input_files`. If the client included a JSON schema, the server creates a response model from it and applies it to the task. When the agent returns structured data, the response is sent back as a `DataPart` rather than plain text.

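The schema hand-off described above can be pictured with plain dictionaries. A sketch of the message shape, assuming a metadata key named `response_schema` (the real key name and part layout come from the A2A message format and may differ):

```python
import json

# Client side: embed the output schema (in CrewAI this is generated from
# the Pydantic response_model) in the outgoing message metadata.
schema = {
    "type": "object",
    "properties": {"title": {"type": "string"}, "score": {"type": "number"}},
    "required": ["title", "score"],
}
message = {
    "parts": [{"kind": "text", "text": "Summarize the report."}],
    "metadata": {"response_schema": schema},  # hypothetical key name
}

# Server side: read the schema back, produce structured data, and return it
# as a DataPart instead of plain text.
requested = message["metadata"]["response_schema"]
result = {"title": "Q3 report", "score": 0.9}
reply_part = {"kind": "data", "data": json.dumps(result)}
assert set(result) >= set(requested["required"])
```

The round trip is what lets the caller treat a remote agent like a typed function: the client states the shape it wants, and the server's reply carries data, not prose.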
## Best Practices

<CardGroup cols={2}>

@@ -4,6 +4,292 @@ description: "CrewAI product updates, improvements, and bug fixes"
icon: "clock"
mode: "wide"
---
<Update label="Apr 25, 2026">
## v1.14.3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3)

## What Changed

### Features
- Add lifecycle events for checkpoint operations
- Add e2b support
- Fall back to DefaultAzureCredential when no API key is provided in the Azure integration
- Add Bedrock V4 support
- Add Daytona sandbox tools for enhanced functionality
- Add checkpoint and fork support for standalone agents

### Bug Fixes
- Fix execution_id to be separate from state.id
- Resolve replay of recorded method events on checkpoint resume
- Fix JSON schema serialization of initial_state class references
- Preserve metadata-only agent skills
- Propagate implicit @CrewBase names to crew events
- Merge execution metadata on duplicate batch initialization
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel results in the guardrail retry loop
- Preserve thought_signature in Gemini streaming tool calls
- Emit task_started on fork resume and redesign the checkpoint TUI
- Use future dates in checkpoint pruning tests to avoid time-dependent failures
- Fix dry-run ordering and handle stale checked-out branches in devtools releases
- Upgrade lxml to >=6.1.0 for a security patch
- Upgrade python-dotenv to >=1.2.2 for a security patch

### Documentation
- Update changelog and version for v1.14.3
- Add 'Build with AI' page and update navigation for all languages
- Remove pricing FAQ from the build-with-ai page in all locales

### Performance
- Optimize the MCP SDK and event types to reduce cold start by ~29%

### Refactoring
- Refactor checkpoint helpers to remove duplication and tighten state type hints

## Contributors

@MatthiasHowellYopp, @akaKuruma, @alex-clawd, @github-actions[bot], @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha, @renatonitta

</Update>

<Update label="Apr 23, 2026">
## v1.14.3a3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a3)

## What Changed

### Features
- Add e2b support
- Implement fallback to DefaultAzureCredential when no API key is provided

### Bug Fixes
- Upgrade lxml to >=6.1.0 to address security issue GHSA-vfmq-68hx-4jfw

### Documentation
- Remove pricing FAQ from the build-with-ai page in all locales

### Performance
- Improve cold-start time by ~29% through lazy loading of the MCP SDK and event types

## Contributors

@alex-clawd, @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha

</Update>

<Update label="Apr 22, 2026">
## v1.14.3a2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a2)

## What Changed

### Features
- Add Bedrock V4 support
- Add Daytona sandbox tools for enhanced functionality
- Add 'Build with AI' page — AI-native docs for coding agents
- Add Build with AI to the Get Started navigation and page files for all languages (en, ko, pt-BR, ar)

### Bug Fixes
- Fix propagation of implicit @CrewBase names to crew events
- Resolve duplicate batch initialization in execution metadata merging
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel results in the guardrail retry loop
- Update python-dotenv to >=1.2.2 for security compliance

### Documentation
- Update changelog and version for v1.14.3a1
- Update descriptions and apply real translations

## Contributors

@MatthiasHowellYopp, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @renatonitta

</Update>

<Update label="Apr 21, 2026">
## v1.14.3a1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a1)

## What Changed

### Features
- Add checkpoint and fork support for standalone agents

### Bug Fixes
- Preserve thought_signature in Gemini streaming tool calls
- Emit task_started on fork resume and redesign the checkpoint TUI
- Fix dry-run ordering and handle stale checked-out branches in devtools releases
- Use future dates in checkpoint pruning tests to avoid time-dependent failures (#5543)

### Documentation
- Update changelog and version for v1.14.2

## Contributors

@alex-clawd, @greysonlalonde

</Update>

<Update label="Apr 17, 2026">
## v1.14.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2)

## What Changed

### Features
- Add checkpoint resume, diff, and prune commands with better discoverability.
- Add a `from_checkpoint` parameter to `Agent.kickoff` and related methods.
- Add template management commands for project templates.
- Add resume hints to devtools releases on failure.
- Add a deploy validation CLI and improve LLM initialization ergonomics.
- Add checkpoint forking with lineage tracking.
- Enrich LLM token tracking with reasoning tokens and cache-creation tokens.

### Bug Fixes
- Fix prompting on stale branch conflicts in devtools releases.
- Patch vulnerabilities in `authlib`, `langchain-text-splitters`, and `pypdf`.
- Scope streaming handlers to prevent cross-execution chunk contamination.
- Dispatch Flow checkpoints through the Flow APIs in the TUI.
- Use a recursive glob for JSON checkpoint discovery.
- Handle cyclic JSON schemas in MCP tool resolution.
- Preserve Bedrock tool-call arguments by removing truthy defaults.
- Emit the flow_finished event after HITL resume.
- Fix several vulnerabilities by updating dependencies, including `requests`, `cryptography`, and `pytest`.
- Stop forwarding strict mode to the Bedrock Converse API.

### Documentation
- Document missing parameters and add a Checkpointing section.
- Update changelog and version for v1.14.2 and earlier release candidates.
- Add enterprise A2A feature docs and update the OSS A2A docs.

## Contributors

@Yanhu007, @alex-clawd, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @lucasgomide

</Update>

<Update label="Apr 16, 2026">
## v1.14.2rc1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2rc1)

## What Changed

### Bug Fixes
- Fix handling of cyclic JSON schemas in MCP tool resolution
- Fix a vulnerability by updating python-multipart to 0.0.26
- Fix a vulnerability by updating pypdf to 6.10.1

### Documentation
- Update changelog and version for v1.14.2a5

## Contributors

@greysonlalonde

</Update>

<Update label="Apr 15, 2026">
## v1.14.2a5

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a5)

## What Changed

### Documentation
- Update changelog and version for v1.14.2a4

## Contributors

@greysonlalonde

</Update>

<Update label="Apr 15, 2026">
## v1.14.2a4

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a4)

## What Changed

### Features
- Add resume hints to devtools releases on failure

### Bug Fixes
- Fix strict-mode forwarding to the Bedrock Converse API
- Pin pytest to 9.0.3 for security vulnerability GHSA-6w46-j5rx-g56g
- Raise the OpenAI lower bound to >=2.0.0

### Documentation
- Update changelog and version for v1.14.2a3

## Contributors

@greysonlalonde

</Update>

<Update label="Apr 13, 2026">
## v1.14.2a3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a3)

## What Changed

### Features
- Add a deploy validation CLI
- Improve LLM initialization ergonomics

### Bug Fixes
- Override pypdf and uv to patched versions for CVE-2026-40260 and GHSA-pjjw-68hj-v9mw
- Upgrade requests to >=2.33.0 for a CVE temp-file vulnerability
- Preserve Bedrock tool-call arguments by removing truthy defaults
- Clean up tool schemas for strict mode
- De-flake the MemoryRecord embedding serialization test

### Documentation
- Clean up enterprise A2A language
- Add enterprise A2A feature docs
- Update the OSS A2A docs
- Update changelog and version for v1.14.2a2

## Contributors

@Yanhu007, @greysonlalonde

</Update>

<Update label="Apr 10, 2026">
## v1.14.2a2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a2)

## What Changed

### Features
- Add a checkpoint TUI with tree view, fork support, and editable inputs/outputs
- Enrich LLM token tracking with reasoning tokens and cache-creation tokens
- Add a `from_checkpoint` parameter to kickoff methods
- Include `crewai_version` in checkpoints, with a migration framework
- Add checkpoint forking with lineage tracking

### Bug Fixes
- Fix strict-mode forwarding to the Anthropic and Bedrock providers
- Harden NL2SQLTool with read-only defaults, query validation, and parameterized queries

### Documentation
- Update changelog and version for v1.14.2a1

## Contributors

@alex-clawd, @github-actions[bot], @greysonlalonde, @lucasgomide

</Update>

<Update label="Apr 9, 2026">
## v1.14.2a1

214
docs/ko/guides/coding-tools/build-with-ai.mdx
Normal file
@@ -0,0 +1,214 @@
---
title: "Build with AI"
description: "Everything you need to build, deploy, and scale with CrewAI — skills, machine-readable docs, deployment, and enterprise features, organized for AI coding agents."
icon: robot
mode: "wide"
---

# Build with AI

CrewAI is AI-native. This page gathers everything coding agents — Claude Code, Codex, Cursor, Gemini CLI, and the other agents that help developers ship crews and flows — need to build with CrewAI.

### Supported coding agents

<CardGroup cols={5}>
<Card title="Claude Code" icon="message-bot" color="#D97706" />
<Card title="Cursor" icon="arrow-pointer" color="#3B82F6" />
<Card title="Codex" icon="terminal" color="#10B981" />
<Card title="Windsurf" icon="wind" color="#06B6D4" />
<Card title="Gemini CLI" icon="sparkles" color="#8B5CF6" />
</CardGroup>

<Note>
This page is written for both humans and AI assistants. If you are a coding agent, start with **Skills** for CrewAI context and use **llms.txt** for full documentation access.
</Note>

---

## 1. Skills — teach your agent CrewAI

**Skills** are instruction bundles that give coding agents deep knowledge of CrewAI: Flow scaffolding, Crew configuration, tool usage, and framework conventions.

<Tabs>
<Tab title="Claude Code (plugin marketplace)">
<img src="https://cdn.simpleicons.org/anthropic/D97706" alt="Anthropic" width="28" style={{display: "inline", verticalAlign: "middle", marginRight: "8px"}} />
CrewAI skills ship on the **Claude Code plugin marketplace** — the same distribution channel AI-native companies use.
```shell
/plugin marketplace add crewAIInc/skills
/plugin install crewai-skills@crewai-plugins
/reload-plugins
```

Ask anything CrewAI-related and these four skills activate automatically.

| Skill | When it runs |
|------|-------------|
| `getting-started` | Scaffolding a new project, choosing between `LLM.call()` / `Agent` / `Crew` / `Flow`, wiring `crew.py` / `main.py` |
| `design-agent` | Agent configuration — roles, goals, backstories, tools, LLMs, memory, guardrails |
| `design-task` | Task descriptions, dependencies, structured outputs (`output_pydantic`, `output_json`), human review |
| `ask-docs` | Querying the [CrewAI docs MCP server](https://docs.crewai.com/mcp) for up-to-date API details |
</Tab>
<Tab title="npx (any agent)">
Works with any coding agent — Claude Code, Codex, Cursor, Gemini CLI, and more.
```shell
npx skills add crewaiinc/skills
```
Pulls from the [skills.sh registry](https://skills.sh/crewaiinc/skills).
</Tab>
</Tabs>

<Steps>
<Step title="Install the official skill pack">
Use either method above — the Claude Code plugin marketplace or `npx skills add`. Both install the official [crewAIInc/skills](https://github.com/crewAIInc/skills) pack.
</Step>
<Step title="Your agent instantly gains CrewAI expertise">
What the skill pack teaches your agent:
- **Flows** — stateful apps, steps, crew kickoffs
- **Crews and agents** — YAML-first patterns, roles, tasks, delegation
- **Tools and integrations** — search, APIs, MCP servers, common CrewAI tools
- **Project layout** — CLI scaffolds and repo conventions
- **Current patterns** — reflects up-to-date CrewAI docs and best practices
</Step>
<Step title="Start building">
Your agent can scaffold and build CrewAI projects without you re-explaining the framework every session.
</Step>
</Steps>

<CardGroup cols={2}>
<Card title="Skills concepts" icon="bolt" href="/ko/concepts/skills">
How skills work in CrewAI agents — injection, activation, and patterns.
</Card>
<Card title="Skills landing page" icon="wand-magic-sparkles" href="/ko/skills">
Overview of the crewAIInc/skills pack and what's inside.
</Card>
<Card title="AGENTS.md and coding tools" icon="terminal" href="/ko/guides/coding-tools/agents-md">
AGENTS.md setup for Claude Code, Codex, Cursor, and Gemini CLI.
</Card>
<Card title="Skills registry (skills.sh)" icon="globe" href="https://skills.sh/crewaiinc/skills">
The official listing — skills, install stats, and audit info.
</Card>
</CardGroup>

---

## 2. llms.txt — machine-readable docs

CrewAI serves an `llms.txt` file so AI assistants can access the full documentation directly in machine-readable form.

```
https://docs.crewai.com/llms.txt
```

<Tabs>
<Tab title="What is llms.txt?">
[`llms.txt`](https://llmstxt.org/) is an emerging standard that makes documentation easy for large language models to consume. Instead of scraping HTML, you fetch a single structured text file containing what you need.

CrewAI's `llms.txt` is **live today** — agents can use it right now.
</Tab>
<Tab title="How to use it">
Point your coding agent at the URL whenever it needs CrewAI reference material.

```
Fetch https://docs.crewai.com/llms.txt for CrewAI documentation.
```

Many coding agents — Claude Code, Cursor, and others — can fetch URLs directly. The file contains structured documentation covering CrewAI concepts, APIs, and guides.
</Tab>
<Tab title="Why it matters">
- **No scraping needed** — clean, structured content in one request
- **Always current** — served straight from docs.crewai.com
- **Optimized for LLMs** — formatted for context windows, not browsers
- **Complements Skills** — skills give patterns, llms.txt gives reference
</Tab>
</Tabs>

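Since `llms.txt` is a single markdown-style index, an agent-side consumer can extract the page links with a few lines. A sketch over a made-up two-line sample, not the real file contents:

```python
import re

# Two sample lines in the llms.txt link format; the real file at
# https://docs.crewai.com/llms.txt is much longer.
sample = """\
- [Agents](https://docs.crewai.com/en/concepts/agents.md): Configure agents
- [Tasks](https://docs.crewai.com/en/concepts/tasks.md): Define tasks
"""

# Capture each (title, url) pair from the markdown link syntax.
links = re.findall(r"\[([^\]]+)\]\((https?://[^)]+)\)", sample)
for title, url in links:
    print(title, "->", url)
```

From there an agent can fetch only the pages relevant to its current task instead of the whole documentation set.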
---

## 3. Deploy to Enterprise

Take your local crew to production on **CrewAI AMP** (Agent Management Platform) in minutes.

<Steps>
<Step title="Build locally">
Scaffold and test your crew or flow.
```bash
crewai create crew my_crew
cd my_crew
crewai run
```
</Step>
<Step title="Prepare for deployment">
Make sure your project structure is ready.
```bash
crewai deploy --prepare
```
See the [preparation guide](/ko/enterprise/guides/prepare-for-deployment) for structure and requirements.
</Step>
<Step title="Deploy to AMP">
Push to the CrewAI AMP platform.
```bash
crewai deploy
```
You can also deploy via [GitHub integration](/ko/enterprise/guides/deploy-to-amp) or [Crew Studio](/ko/enterprise/guides/enable-crew-studio).
</Step>
<Step title="Access via API">
Deployed crews get a REST API endpoint you can integrate into any application.
```bash
curl -X POST https://app.crewai.com/api/v1/crews/<crew-id>/kickoff \
  -H "Authorization: Bearer $CREWAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": {"topic": "AI agents"}}'
```
</Step>
</Steps>

<CardGroup cols={2}>
<Card title="Deploy to AMP" icon="rocket" href="/ko/enterprise/guides/deploy-to-amp">
Full deployment guide — CLI, GitHub, and Crew Studio methods.
</Card>
<Card title="AMP introduction" icon="globe" href="/ko/enterprise/introduction">
Platform overview — what AMP provides for production crews.
</Card>
</CardGroup>

---

## 4. Enterprise Features

CrewAI AMP is built for production teams. Here's what you get beyond deployment.

<CardGroup cols={2}>
<Card title="Observability" icon="chart-line">
Detailed execution traces, logs, and performance metrics for every crew run. Monitor agent decisions, tool calls, and task completion in real time.
</Card>
<Card title="Crew Studio" icon="paintbrush">
No-code/low-code interface to create, customize, and deploy crews visually — then export to code or deploy directly.
</Card>
<Card title="Webhook Streaming" icon="webhook">
Stream real-time events from crew executions to your systems. Integrate with Slack, Zapier, or any webhook consumer.
</Card>
<Card title="Team Management" icon="users">
SSO, RBAC, and organization-level controls. Manage who can create, deploy, and access crews across your team.
</Card>
<Card title="Tool Repository" icon="toolbox">
Publish and share custom tools across your organization. Install community tools from the registry.
</Card>
<Card title="Factory (Self-Hosted)" icon="server">
Run CrewAI AMP on your own infrastructure. Full platform capabilities with data residency and compliance controls.
</Card>
</CardGroup>

<AccordionGroup>
<Accordion title="Who is AMP for?">
AMP is for teams that need to move AI agent workflows from prototypes to production — with observability, access controls, and scalable infrastructure. Whether you're a startup or enterprise, AMP handles the operational complexity so you can focus on building agents.
</Accordion>
<Accordion title="What deployment options are available?">
- **Cloud (app.crewai.com)** — managed by CrewAI, fastest path to production
- **Factory (self-hosted)** — run on your own infrastructure for full data control
- **Hybrid** — mix cloud and self-hosted based on sensitivity requirements
</Accordion>
</AccordionGroup>

<Card title="Explore CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
Sign up and deploy your first crew to production.
</Card>

@@ -189,7 +189,7 @@ CrewAI uses `uv` for dependency management and package handling
- Supports any hyperscaler, including on-prem deployments
- Integration with your existing security systems

<Card title="Explore Enterprise Options" icon="building" href="https://share.hsforms.com/1Ooo2UViKQ22UOzdr7i77iwr87kg">
Learn about CrewAI's enterprise offerings and schedule a demo
</Card>
</Note>

@@ -4,6 +4,292 @@ description: "CrewAI product updates, improvements, and bug fixes"
icon: "clock"
mode: "wide"
---
<Update label="Apr 25, 2026">
## v1.14.3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3)

## What Changed

### Features
- Add lifecycle events for checkpoint operations
- Add e2b support
- Fall back to DefaultAzureCredential when no API key is provided in the Azure integration
- Add Bedrock V4 support
- Add Daytona sandbox tools for enhanced functionality
- Add checkpoint and fork support for standalone agents

### Bug Fixes
- Fix execution_id to be separate from state.id
- Resolve replay of recorded method events on checkpoint resume
- Fix JSON schema serialization of initial_state class references
- Preserve metadata-only agent skills
- Propagate implicit @CrewBase names to crew events
- Merge execution metadata on duplicate batch initialization
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel results in the guardrail retry loop
- Preserve thought_signature in Gemini streaming tool calls
- Emit task_started on fork resume and redesign the checkpoint TUI
- Use future dates in checkpoint pruning tests to avoid time-dependent failures
- Fix dry-run ordering and handle stale checked-out branches in devtools releases
- Upgrade lxml to >=6.1.0 for a security patch
- Upgrade python-dotenv to >=1.2.2 for a security patch

### Documentation
- Update changelog and version for v1.14.3
- Add 'Build with AI' page and update navigation for all languages
- Remove pricing FAQ from the build-with-ai page in all locales

### Performance
- Optimize the MCP SDK and event types to reduce cold start by ~29%

### Refactoring
- Refactor checkpoint helpers to remove duplication and tighten state type hints

## Contributors

@MatthiasHowellYopp, @akaKuruma, @alex-clawd, @github-actions[bot], @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha, @renatonitta

</Update>

<Update label="Apr 23, 2026">
## v1.14.3a3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a3)

## What Changed

### Features
- Add e2b support
- Implement fallback to DefaultAzureCredential when no API key is provided

### Bug Fixes
- Upgrade lxml to >=6.1.0 to address security issue GHSA-vfmq-68hx-4jfw

### Documentation
- Remove pricing FAQ from the build-with-ai page in all locales

### Performance
- Improve cold-start time by ~29% through lazy loading of the MCP SDK and event types

## Contributors

@alex-clawd, @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha

</Update>

<Update label="Apr 22, 2026">
## v1.14.3a2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a2)

## What Changed

### Features
- Add Bedrock V4 support
- Add Daytona sandbox tools for enhanced functionality
- Add 'Build with AI' page — AI-native docs for coding agents
- Add Build with AI to the Get Started navigation and page files for all languages (en, ko, pt-BR, ar)

### Bug Fixes
- Fix propagation of implicit @CrewBase names to crew events
- Resolve duplicate batch initialization in execution metadata merging
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel results in the guardrail retry loop
- Update python-dotenv to >=1.2.2 for security compliance

### Documentation
- Update changelog and version for v1.14.3a1
- Update descriptions and apply real translations

## Contributors

@MatthiasHowellYopp, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @renatonitta

</Update>

<Update label="Apr 21, 2026">
## v1.14.3a1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a1)

## What Changed

### Features
- Add checkpoint and fork support for standalone agents

### Bug Fixes
- Preserve thought_signature in Gemini streaming tool calls
- Emit task_started on fork resume and redesign the checkpoint TUI
- Fix dry-run ordering and handle stale checked-out branches in devtools releases
- Use future dates in checkpoint pruning tests to avoid time-dependent failures (#5543)

### Documentation
- Update changelog and version for v1.14.2

## Contributors

@alex-clawd, @greysonlalonde

</Update>

<Update label="Apr 17, 2026">
## v1.14.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2)

## What Changed

### Features
- Add checkpoint resume, diff, and prune commands with better discoverability.
- Add a `from_checkpoint` parameter to `Agent.kickoff` and related methods.
- Add template management commands for project templates.
- Add resume hints to devtools releases on failure.
- Add a deploy validation CLI and improve LLM initialization ergonomics.
- Add checkpoint forking with lineage tracking.
- Enrich LLM token tracking with reasoning tokens and cache-creation tokens.

### Bug Fixes
- Fix prompting on stale branch conflicts in devtools releases.
- Patch vulnerabilities in `authlib`, `langchain-text-splitters`, and `pypdf`.
- Scope streaming handlers to prevent cross-execution chunk contamination.
- Dispatch Flow checkpoints through the Flow APIs in the TUI.
- Use a recursive glob for JSON checkpoint discovery.
- Handle cyclic JSON schemas in MCP tool resolution.
- Preserve Bedrock tool-call arguments by removing truthy defaults.
- Emit the flow_finished event after HITL resume.
- Fix several vulnerabilities by updating dependencies, including `requests`, `cryptography`, and `pytest`.
- Stop forwarding strict mode to the Bedrock Converse API.

### Documentation
- Document missing parameters and add a Checkpointing section.
- Update changelog and version for v1.14.2 and earlier release candidates.
- Add enterprise A2A feature docs and update the OSS A2A docs.

## Contributors

@Yanhu007, @alex-clawd, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @lucasgomide

</Update>

<Update label="16 abr 2026">
## v1.14.2rc1

[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2rc1)

## O que Mudou

### Correções de Bugs

- Corrigir o manuseio de esquemas JSON cíclicos na resolução de ferramentas MCP
- Corrigir vulnerabilidade atualizando python-multipart para 0.0.26
- Corrigir vulnerabilidade atualizando pypdf para 6.10.1

### Documentação

- Atualizar o changelog e a versão para v1.14.2a5

## Contribuidores

@greysonlalonde

</Update>

<Update label="15 abr 2026">
## v1.14.2a5

[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a5)

## O que Mudou

### Documentação

- Atualizar changelog e versão para v1.14.2a4

## Contribuidores

@greysonlalonde

</Update>

<Update label="15 abr 2026">
## v1.14.2a4

[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a4)

## O que Mudou

### Recursos

- Adicionar dicas de retomada ao release do devtools em caso de falha

### Correções de Bugs

- Corrigir o encaminhamento do modo estrito para a API Bedrock Converse
- Fixar a versão do pytest em 9.0.3 devido à vulnerabilidade de segurança GHSA-6w46-j5rx-g56g
- Aumentar o limite inferior do OpenAI para >=2.0.0

### Documentação

- Atualizar o changelog e a versão para v1.14.2a3

## Contribuidores

@greysonlalonde

</Update>

<Update label="13 abr 2026">
## v1.14.2a3

[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a3)

## O que Mudou

### Recursos

- Adicionar CLI de validação de deploy
- Melhorar a ergonomia de inicialização do LLM

### Correções de Bugs

- Substituir pypdf e uv por versões corrigidas para CVE-2026-40260 e GHSA-pjjw-68hj-v9mw
- Atualizar requests para >=2.33.0 devido à CVE de arquivo temporário
- Preservar os argumentos de chamada de ferramenta do Bedrock removendo o padrão truthy
- Sanitizar esquemas de ferramentas para o modo estrito
- Remover flakiness do teste de serialização de embedding do MemoryRecord

### Documentação

- Limpar a linguagem do A2A empresarial
- Adicionar documentação de recursos do A2A empresarial
- Atualizar documentação do A2A OSS
- Atualizar changelog e versão para v1.14.2a2

## Contribuidores

@Yanhu007, @greysonlalonde

</Update>

<Update label="10 abr 2026">
## v1.14.2a2

[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a2)

## O que Mudou

### Recursos

- Adicionar TUI de checkpoints com visualização em árvore, suporte a bifurcações e entradas/saídas editáveis
- Enriquecer o rastreamento de tokens do LLM com tokens de raciocínio e tokens de criação de cache
- Adicionar o parâmetro `from_checkpoint` aos métodos de kickoff
- Incorporar `crewai_version` nos checkpoints com o framework de migração
- Adicionar bifurcação de checkpoints com rastreamento de linhagem

### Correções de Bugs

- Corrigir o encaminhamento do modo estrito para os provedores Anthropic e Bedrock
- Fortalecer o NL2SQLTool com padrão somente leitura, validação de consultas e consultas parametrizadas

### Documentação

- Atualizar changelog e versão para v1.14.2a1

## Contribuidores

@alex-clawd, @github-actions[bot], @greysonlalonde, @lucasgomide

</Update>

<Update label="09 abr 2026">
## v1.14.2a1

214 docs/pt-BR/guides/coding-tools/build-with-ai.mdx — Normal file
@@ -0,0 +1,214 @@
---
title: "Construa com IA"
description: "Tudo o que agentes de codificação com IA precisam para criar, implantar e escalar com CrewAI — skills, documentação legível por máquina, implantação e recursos enterprise."
icon: robot
mode: "wide"
---

# Construa com IA

O CrewAI é nativo de IA. Esta página reúne o que um agente de codificação com IA precisa para construir com CrewAI — seja Claude Code, Codex, Cursor, Gemini CLI ou qualquer outro assistente que ajude um desenvolvedor a entregar crews e flows.

### Agentes de codificação compatíveis

<CardGroup cols={5}>
  <Card title="Claude Code" icon="message-bot" color="#D97706" />
  <Card title="Cursor" icon="arrow-pointer" color="#3B82F6" />
  <Card title="Codex" icon="terminal" color="#10B981" />
  <Card title="Windsurf" icon="wind" color="#06B6D4" />
  <Card title="Gemini CLI" icon="sparkles" color="#8B5CF6" />
</CardGroup>

<Note>
Esta página serve para humanos e para assistentes de IA. Se você é um agente de codificação, comece por **Skills** para obter contexto do CrewAI e depois use **llms.txt** para acesso completo à documentação.
</Note>

---
## 1. Skills — ensine CrewAI ao seu agente

**Skills** são pacotes de instruções que dão aos agentes de codificação conhecimento profundo do CrewAI — como estruturar Flows, configurar Crews, usar ferramentas e seguir convenções do framework.

<Tabs>
  <Tab title="Claude Code (Plugin Marketplace)">
    <img src="https://cdn.simpleicons.org/anthropic/D97706" alt="Anthropic" width="28" style={{display: "inline", verticalAlign: "middle", marginRight: "8px"}} />
    As skills do CrewAI estão no **plugin marketplace do Claude Code** — o mesmo canal usado por empresas líderes em IA:

    ```shell
    /plugin marketplace add crewAIInc/skills
    /plugin install crewai-skills@crewai-plugins
    /reload-plugins
    ```

    Quatro skills são ativadas automaticamente quando você faz perguntas relevantes sobre CrewAI:

    | Skill | Quando é usada |
    |-------|----------------|
    | `getting-started` | Novos projetos, escolha entre `LLM.call()` / `Agent` / `Crew` / `Flow`, arquivos `crew.py` / `main.py` |
    | `design-agent` | Configurar agentes — papel, objetivo, história, ferramentas, LLMs, memória, guardrails |
    | `design-task` | Descrever tarefas, dependências, saída estruturada (`output_pydantic`, `output_json`), revisão humana |
    | `ask-docs` | Consultar o [servidor MCP da documentação CrewAI](https://docs.crewai.com/mcp) em tempo real para detalhes de API |
  </Tab>
  <Tab title="npx (qualquer agente)">
    Funciona com Claude Code, Codex, Cursor, Gemini CLI ou qualquer agente de codificação:

    ```shell
    npx skills add crewaiinc/skills
    ```

    Obtido do [registro skills.sh](https://skills.sh/crewaiinc/skills).
  </Tab>
</Tabs>

<Steps>
  <Step title="Instale o pacote oficial de skills">
    Use um dos métodos acima — o plugin marketplace do Claude Code ou `npx skills add`. Ambos instalam o pacote oficial [crewAIInc/skills](https://github.com/crewAIInc/skills).
  </Step>
  <Step title="Seu agente ganha expertise imediata em CrewAI">
    O pacote ensina ao seu agente:
    - **Flows** — apps com estado, passos e disparo de crews
    - **Crews e agentes** — padrões YAML-first, papéis, tarefas, delegação
    - **Ferramentas e integrações** — busca, APIs, servidores MCP e ferramentas comuns do CrewAI
    - **Estrutura do projeto** — scaffolds da CLI e convenções de repositório
    - **Padrões atualizados** — alinhado à documentação e às melhores práticas atuais do CrewAI
  </Step>
  <Step title="Comece a construir">
    Seu agente pode estruturar e construir projetos CrewAI sem você precisar reexplicar o framework a cada sessão.
  </Step>
</Steps>
<CardGroup cols={2}>
  <Card title="Conceito de skills" icon="bolt" href="/pt-BR/concepts/skills">
    Como skills funcionam em agentes CrewAI — injeção, ativação e padrões.
  </Card>
  <Card title="Página de skills" icon="wand-magic-sparkles" href="/pt-BR/skills">
    Visão geral do pacote crewAIInc/skills e do que ele inclui.
  </Card>
  <Card title="AGENTS.md e ferramentas" icon="terminal" href="/pt-BR/guides/coding-tools/agents-md">
    Configure o AGENTS.md para Claude Code, Codex, Cursor e Gemini CLI.
  </Card>
  <Card title="Registro skills.sh" icon="globe" href="https://skills.sh/crewaiinc/skills">
    Listagem oficial — skills, estatísticas de instalação e auditorias.
  </Card>
</CardGroup>

---
## 2. llms.txt — documentação legível por máquina

O CrewAI publica um arquivo `llms.txt` que dá aos assistentes de IA acesso direto à documentação completa em formato legível por máquinas.

```
https://docs.crewai.com/llms.txt
```

<Tabs>
  <Tab title="O que é llms.txt?">
    [`llms.txt`](https://llmstxt.org/) é um padrão emergente para tornar a documentação consumível por grandes modelos de linguagem. Em vez de fazer scraping de HTML, seu agente pode buscar um único arquivo de texto estruturado com o conteúdo necessário.

    O `llms.txt` do CrewAI **já está no ar** — seu agente pode usá-lo agora.
  </Tab>
  <Tab title="Como usar">
    Indique ao agente de codificação a URL quando precisar da referência do CrewAI:

    ```
    Fetch https://docs.crewai.com/llms.txt for CrewAI documentation.
    ```

    Muitos agentes (Claude Code, Cursor etc.) conseguem buscar URLs diretamente. O arquivo contém documentação estruturada sobre conceitos, APIs e guias do CrewAI.
  </Tab>
  <Tab title="Por que importa">
    - **Sem scraping** — conteúdo limpo e estruturado em uma requisição
    - **Sempre atualizado** — servido diretamente de docs.crewai.com
    - **Otimizado para LLMs** — formatado para janelas de contexto, não para navegadores
    - **Complementa as skills** — skills ensinam padrões; llms.txt fornece a referência
  </Tab>
</Tabs>
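Um agente também pode pré-processar o índice por conta própria. O esboço abaixo é apenas ilustrativo: o conteúdo de exemplo é hipotético (o formato real do llms.txt do CrewAI pode diferir) e a extração assume um índice de links Markdown, como descreve o padrão llms.txt.

```python
import re

# Trecho hipotético no estilo llms.txt: um índice de links Markdown
sample = """# CrewAI
- [Agents](https://docs.crewai.com/concepts/agents.md): Definição de agentes
- [Tasks](https://docs.crewai.com/concepts/tasks.md): Definição de tarefas
"""

# Extrai pares (título, URL) dos links Markdown do índice
links = re.findall(r"\[([^\]]+)\]\((https?://[^)]+)\)", sample)
for title, url in links:
    print(title, "->", url)
```

Com a lista de URLs em mãos, o agente busca apenas as páginas relevantes para a janela de contexto, em vez do índice inteiro.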
---

## 3. Implantação enterprise

Do crew local à produção no **CrewAI AMP** (Agent Management Platform) em minutos.

<Steps>
  <Step title="Construa localmente">
    Estruture e teste seu crew ou flow:

    ```bash
    crewai create crew my_crew
    cd my_crew
    crewai run
    ```
  </Step>
  <Step title="Prepare a implantação">
    Garanta que a estrutura do projeto está pronta:

    ```bash
    crewai deploy --prepare
    ```

    Veja o [guia de preparação](/pt-BR/enterprise/guides/prepare-for-deployment) para detalhes de estrutura e requisitos.
  </Step>
  <Step title="Implante no AMP">
    Envie para a plataforma CrewAI AMP:

    ```bash
    crewai deploy
    ```

    Também é possível implantar pela [integração com GitHub](/pt-BR/enterprise/guides/deploy-to-amp) ou pelo [Crew Studio](/pt-BR/enterprise/guides/enable-crew-studio).
  </Step>
  <Step title="Acesso via API">
    O crew implantado recebe um endpoint REST. Integre em qualquer aplicação:

    ```bash
    curl -X POST https://app.crewai.com/api/v1/crews/<crew-id>/kickoff \
      -H "Authorization: Bearer $CREWAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"inputs": {"topic": "AI agents"}}'
    ```
  </Step>
</Steps>
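A mesma chamada do exemplo em curl pode ser montada em Python. O esboço abaixo apenas constrói a requisição (o `<crew-id>` é um placeholder e a chave é fictícia; o envio real exige um crew implantado e `CREWAI_API_KEY` válida, por isso o `urlopen` fica comentado):

```python
import json
import urllib.request

crew_id = "<crew-id>"          # placeholder: substitua pelo id do seu crew
api_key = "SUA_CHAVE"          # normalmente lida da variável CREWAI_API_KEY

# Mesmo endpoint e payload do exemplo em curl acima
req = urllib.request.Request(
    f"https://app.crewai.com/api/v1/crews/{crew_id}/kickoff",
    data=json.dumps({"inputs": {"topic": "AI agents"}}).encode(),
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # descomente para enviar de fato
```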
<CardGroup cols={2}>
  <Card title="Implantar no AMP" icon="rocket" href="/pt-BR/enterprise/guides/deploy-to-amp">
    Guia completo de implantação — CLI, GitHub e Crew Studio.
  </Card>
  <Card title="Introdução ao AMP" icon="globe" href="/pt-BR/enterprise/introduction">
    Visão da plataforma — o que o AMP oferece para crews em produção.
  </Card>
</CardGroup>

---
## 4. Recursos enterprise

O CrewAI AMP foi feito para equipes em produção. Além da implantação, você obtém:

<CardGroup cols={2}>
  <Card title="Observabilidade" icon="chart-line">
    Traces de execução, logs e métricas de desempenho para cada execução de crew. Monitore decisões de agentes, chamadas de ferramentas e conclusão de tarefas em tempo real.
  </Card>
  <Card title="Crew Studio" icon="paintbrush">
    Interface no-code/low-code para criar, personalizar e implantar crews visualmente — exporte para código ou implante direto.
  </Card>
  <Card title="Webhook streaming" icon="webhook">
    Transmita eventos em tempo real das execuções para seus sistemas. Integre com Slack, Zapier ou qualquer consumidor de webhook.
  </Card>
  <Card title="Gestão de equipe" icon="users">
    SSO, RBAC e controles em nível de organização. Gerencie quem pode criar, implantar e acessar crews.
  </Card>
  <Card title="Repositório de ferramentas" icon="toolbox">
    Publique e compartilhe ferramentas customizadas na organização. Instale ferramentas da comunidade a partir do registro.
  </Card>
  <Card title="Factory (self-hosted)" icon="server">
    Execute o CrewAI AMP na sua infraestrutura. Capacidades completas da plataforma com residência de dados e controles de conformidade.
  </Card>
</CardGroup>

<AccordionGroup>
  <Accordion title="Para quem é o AMP?">
    Para equipes que precisam levar fluxos de agentes de IA do protótipo à produção — com observabilidade, controles de acesso e infraestrutura escalável. De startups a grandes empresas, o AMP cuida da complexidade operacional para você focar nos agentes.
  </Accordion>
  <Accordion title="Quais opções de implantação existem?">
    - **Nuvem (app.crewai.com)** — gerenciada pela CrewAI, caminho mais rápido para produção
    - **Factory (self-hosted)** — na sua infraestrutura para controle total dos dados
    - **Híbrido** — combine nuvem e self-hosted conforme a sensibilidade dos dados
  </Accordion>
</AccordionGroup>

<Card title="Conheça o CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
  Cadastre-se e leve seu primeiro crew à produção.
</Card>
@@ -191,7 +191,7 @@ Para equipes e organizações, o CrewAI oferece opções de implantação corpor
- Compatível com qualquer hyperscaler, incluindo ambientes on-premises
- Integração com seus sistemas de segurança existentes

-<Card title="Explore as Opções Enterprise" icon="building" href="https://crewai.com/enterprise">
+<Card title="Explore as Opções Enterprise" icon="building" href="https://share.hsforms.com/1Ooo2UViKQ22UOzdr7i77iwr87kg">
  Saiba mais sobre as soluções enterprise do CrewAI e agende uma demonstração
</Card>
</Note>

@@ -9,7 +9,7 @@ authors = [
requires-python = ">=3.10, <3.14"
dependencies = [
    "Pillow~=12.1.1",
-    "pypdf~=6.9.1",
+    "pypdf~=6.10.0",
    "python-magic>=0.4.27",
    "aiocache~=0.12.3",
    "aiofiles~=24.1.0",

@@ -152,4 +152,4 @@ __all__ = [
    "wrap_file_source",
]

-__version__ = "1.14.2a1"
+__version__ = "1.14.3"
@@ -9,8 +9,8 @@ authors = [
requires-python = ">=3.10, <3.14"
dependencies = [
    "pytube~=15.0.0",
-    "requests~=2.32.5",
-    "crewai==1.14.2a1",
+    "requests>=2.33.0,<3",
+    "crewai==1.14.3",
    "tiktoken~=0.8.0",
    "beautifulsoup4~=4.13.4",
    "python-docx~=1.2.0",

@@ -112,7 +112,7 @@ github = [
]
rag = [
    "python-docx>=1.1.0",
-    "lxml>=5.3.0,<5.4.0", # Pin to avoid etree import issues in 5.4.0
+    "lxml>=6.1.0,<7", # 6.1.0+ required for GHSA-vfmq-68hx-4jfw (XXE in iterparse)
]
xml = [
    "unstructured[local-inference, all-docs]>=0.17.2"

@@ -139,6 +139,14 @@ contextual = [
    "contextual-client>=0.1.0",
    "nest-asyncio>=1.6.0",
]
daytona = [
    "daytona~=0.140.0",
]

e2b = [
    "e2b~=2.20.0",
    "e2b-code-interpreter~=2.6.0",
]


[tool.uv]
@@ -59,6 +59,11 @@ from crewai_tools.tools.dalle_tool.dalle_tool import DallETool
from crewai_tools.tools.databricks_query_tool.databricks_query_tool import (
    DatabricksQueryTool,
)
from crewai_tools.tools.daytona_sandbox_tool import (
    DaytonaExecTool,
    DaytonaFileTool,
    DaytonaPythonTool,
)
from crewai_tools.tools.directory_read_tool.directory_read_tool import (
    DirectoryReadTool,
)

@@ -66,6 +71,11 @@ from crewai_tools.tools.directory_search_tool.directory_search_tool import (
    DirectorySearchTool,
)
from crewai_tools.tools.docx_search_tool.docx_search_tool import DOCXSearchTool
from crewai_tools.tools.e2b_sandbox_tool import (
    E2BExecTool,
    E2BFileTool,
    E2BPythonTool,
)
from crewai_tools.tools.exa_tools.exa_search_tool import EXASearchTool
from crewai_tools.tools.file_read_tool.file_read_tool import FileReadTool
from crewai_tools.tools.file_writer_tool.file_writer_tool import FileWriterTool

@@ -232,8 +242,14 @@ __all__ = [
    "DOCXSearchTool",
    "DallETool",
    "DatabricksQueryTool",
    "DaytonaExecTool",
    "DaytonaFileTool",
    "DaytonaPythonTool",
    "DirectoryReadTool",
    "DirectorySearchTool",
    "E2BExecTool",
    "E2BFileTool",
    "E2BPythonTool",
    "EXASearchTool",
    "EnterpriseActionTool",
    "FileCompressorTool",

@@ -305,4 +321,4 @@ __all__ = [
    "ZapierActionTools",
]

-__version__ = "1.14.2a1"
+__version__ = "1.14.3"
@@ -48,6 +48,11 @@ from crewai_tools.tools.dalle_tool.dalle_tool import DallETool
from crewai_tools.tools.databricks_query_tool.databricks_query_tool import (
    DatabricksQueryTool,
)
from crewai_tools.tools.daytona_sandbox_tool import (
    DaytonaExecTool,
    DaytonaFileTool,
    DaytonaPythonTool,
)
from crewai_tools.tools.directory_read_tool.directory_read_tool import (
    DirectoryReadTool,
)

@@ -55,6 +60,11 @@ from crewai_tools.tools.directory_search_tool.directory_search_tool import (
    DirectorySearchTool,
)
from crewai_tools.tools.docx_search_tool.docx_search_tool import DOCXSearchTool
from crewai_tools.tools.e2b_sandbox_tool import (
    E2BExecTool,
    E2BFileTool,
    E2BPythonTool,
)
from crewai_tools.tools.exa_tools.exa_search_tool import EXASearchTool
from crewai_tools.tools.file_read_tool.file_read_tool import FileReadTool
from crewai_tools.tools.file_writer_tool.file_writer_tool import FileWriterTool

@@ -217,8 +227,14 @@ __all__ = [
    "DOCXSearchTool",
    "DallETool",
    "DatabricksQueryTool",
    "DaytonaExecTool",
    "DaytonaFileTool",
    "DaytonaPythonTool",
    "DirectoryReadTool",
    "DirectorySearchTool",
    "E2BExecTool",
    "E2BFileTool",
    "E2BPythonTool",
    "EXASearchTool",
    "FileCompressorTool",
    "FileReadTool",
@@ -0,0 +1,107 @@
# Daytona Sandbox Tools

Run shell commands, execute Python, and manage files inside a [Daytona](https://www.daytona.io/) sandbox. Daytona provides isolated, ephemeral compute environments suitable for agent-driven code execution.

Three tools are provided so you can pick what the agent actually needs:

- **`DaytonaExecTool`** — run a shell command (`sandbox.process.exec`).
- **`DaytonaPythonTool`** — run a Python script (`sandbox.process.code_run`).
- **`DaytonaFileTool`** — read / write / list / delete files (`sandbox.fs.*`).

## Installation

```shell
uv add "crewai-tools[daytona]"
# or
pip install "crewai-tools[daytona]"
```

Set the API key:

```shell
export DAYTONA_API_KEY="..."
```

`DAYTONA_API_URL` and `DAYTONA_TARGET` are also respected if set.

## Sandbox lifecycle

All three tools share the same lifecycle controls from `DaytonaBaseTool`:

| Mode | When the sandbox is created | When it is deleted |
| --- | --- | --- |
| **Ephemeral** (default, `persistent=False`) | On every `_run` call | At the end of that same call |
| **Persistent** (`persistent=True`) | Lazily on first use | At process exit (via `atexit`), or manually via `tool.close()` |
| **Attach** (`sandbox_id="…"`) | Never — the tool attaches to an existing sandbox | Never — the tool will not delete a sandbox it did not create |

Ephemeral mode is the safe default: nothing leaks if the agent forgets to clean up. Use persistent mode when you want filesystem state or installed packages to carry across steps — this is typical when pairing `DaytonaFileTool` with `DaytonaExecTool`.
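The lifecycle rules in the table above can be sketched without a Daytona account. `FakeSandbox` and `FakeClient` below are stand-ins invented purely to count create/delete calls; they are not part of the Daytona SDK:

```python
class FakeSandbox:
    deleted = False

    def delete(self):
        self.deleted = True


class FakeClient:
    def __init__(self):
        self.created = []

    def create(self):
        sb = FakeSandbox()
        self.created.append(sb)
        return sb


def run_once(client, cached, persistent):
    """Mirror the ephemeral/persistent rules from the table above."""
    if persistent:
        if cached is None:
            cached = client.create()  # lazily created on first use
        return cached                 # never deleted per-call
    sb = client.create()              # fresh sandbox per call
    sb.delete()                       # deleted at the end of the same call
    return None


ephemeral_client = FakeClient()
for _ in range(3):
    run_once(ephemeral_client, None, persistent=False)
print(len(ephemeral_client.created))  # ephemeral: one sandbox per call → 3

persistent_client = FakeClient()
cached = None
for _ in range(3):
    cached = run_once(persistent_client, cached, persistent=True)
print(len(persistent_client.created))  # persistent: one reused sandbox → 1
```

The real tools add thread-safety and `atexit` cleanup on top of this shape, but the create/delete accounting is the same.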
## Examples

### One-shot Python execution (ephemeral)

```python
from crewai_tools import DaytonaPythonTool

tool = DaytonaPythonTool()
result = tool.run(code="print(sum(range(10)))")
```

### Multi-step shell session (persistent)

```python
from crewai_tools import DaytonaExecTool, DaytonaFileTool

exec_tool = DaytonaExecTool(persistent=True)
file_tool = DaytonaFileTool(persistent=True)

# Agent writes a script, then runs it. Note that each tool keeps its *own*
# persistent sandbox, so state does not automatically carry between them.
# If you need the *same* sandbox across two tools, create one tool, grab its
# sandbox id via `tool._persistent_sandbox.id`, and pass it to the other via
# `sandbox_id=...`.
```

### Attach to an existing sandbox

```python
from crewai_tools import DaytonaExecTool

tool = DaytonaExecTool(sandbox_id="my-long-lived-sandbox")
```

### Custom create params

Pass Daytona's `CreateSandboxFromSnapshotParams` kwargs via `create_params`:

```python
tool = DaytonaExecTool(
    persistent=True,
    create_params={
        "language": "python",
        "env_vars": {"MY_FLAG": "1"},
        "labels": {"owner": "crewai-agent"},
    },
)
```
## Tool arguments

### `DaytonaExecTool`

- `command: str` — shell command to run.
- `cwd: str | None` — working directory.
- `env: dict[str, str] | None` — extra env vars for this command.
- `timeout: int | None` — seconds.

### `DaytonaPythonTool`

- `code: str` — Python source to execute.
- `argv: list[str] | None` — argv forwarded via `CodeRunParams`.
- `env: dict[str, str] | None` — env vars forwarded via `CodeRunParams`.
- `timeout: int | None` — seconds.

### `DaytonaFileTool`

- `action: "read" | "write" | "list" | "delete" | "mkdir" | "info"`
- `path: str` — absolute path inside the sandbox.
- `content: str | None` — required for `write`.
- `binary: bool` — if `True`, `content` is base64 on write / returned as base64 on read.
- `recursive: bool` — for `delete`, removes directories recursively.
- `mode: str` — for `mkdir`, octal permission string (default `"0755"`).
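A binary `write` with `DaytonaFileTool` could look like the sketch below. Only the base64 argument prep actually runs here; the payload bytes and sandbox path are made-up examples, and the tool call itself is commented out because it needs `DAYTONA_API_KEY` and a live sandbox:

```python
import base64

# Hypothetical raw bytes to upload; with binary=True the tool expects
# `content` to be base64 text (per the argument list above).
payload = b"\x89PNG\r\nhello"
args = {
    "action": "write",
    "path": "/home/daytona/logo.png",  # example path inside the sandbox
    "content": base64.b64encode(payload).decode(),
    "binary": True,
}

# from crewai_tools import DaytonaFileTool
# DaytonaFileTool().run(**args)  # uncomment with DAYTONA_API_KEY set
```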
@@ -0,0 +1,13 @@
from crewai_tools.tools.daytona_sandbox_tool.daytona_base_tool import DaytonaBaseTool
from crewai_tools.tools.daytona_sandbox_tool.daytona_exec_tool import DaytonaExecTool
from crewai_tools.tools.daytona_sandbox_tool.daytona_file_tool import DaytonaFileTool
from crewai_tools.tools.daytona_sandbox_tool.daytona_python_tool import (
    DaytonaPythonTool,
)

__all__ = [
    "DaytonaBaseTool",
    "DaytonaExecTool",
    "DaytonaFileTool",
    "DaytonaPythonTool",
]
@@ -0,0 +1,198 @@
from __future__ import annotations

import atexit
import logging
import os
import threading
from typing import Any, ClassVar

from crewai.tools import BaseTool, EnvVar
from pydantic import ConfigDict, Field, PrivateAttr


logger = logging.getLogger(__name__)


class DaytonaBaseTool(BaseTool):
    """Shared base for tools that act on a Daytona sandbox.

    Lifecycle modes:
    - persistent=False (default): create a fresh sandbox per `_run` call and
      delete it when the call returns. Safer and stateless — nothing leaks if
      the agent forgets cleanup.
    - persistent=True: lazily create a single sandbox on first use, cache it
      on the instance, and register an atexit hook to delete it at process
      exit. Cheaper across many calls and lets files/state carry over.
    - sandbox_id=<existing>: attach to a sandbox the caller already owns.
      Never deleted by the tool.
    """

    model_config = ConfigDict(arbitrary_types_allowed=True)

    package_dependencies: list[str] = Field(default_factory=lambda: ["daytona"])

    api_key: str | None = Field(
        default_factory=lambda: os.getenv("DAYTONA_API_KEY"),
        description="Daytona API key. Falls back to DAYTONA_API_KEY env var.",
        json_schema_extra={"required": False},
    )
    api_url: str | None = Field(
        default_factory=lambda: os.getenv("DAYTONA_API_URL"),
        description="Daytona API URL override. Falls back to DAYTONA_API_URL env var.",
        json_schema_extra={"required": False},
    )
    target: str | None = Field(
        default_factory=lambda: os.getenv("DAYTONA_TARGET"),
        description="Daytona target region. Falls back to DAYTONA_TARGET env var.",
        json_schema_extra={"required": False},
    )

    persistent: bool = Field(
        default=False,
        description=(
            "If True, reuse one sandbox across all calls to this tool instance "
            "and delete it at process exit. Default False creates and deletes a "
            "fresh sandbox per call."
        ),
    )
    sandbox_id: str | None = Field(
        default=None,
        description=(
            "Attach to an existing sandbox by id or name instead of creating a "
            "new one. The tool will never delete a sandbox it did not create."
        ),
    )
    create_params: dict[str, Any] | None = Field(
        default=None,
        description=(
            "Optional kwargs forwarded to CreateSandboxFromSnapshotParams when "
            "creating a sandbox (e.g. language, snapshot, env_vars, labels)."
        ),
    )
    sandbox_timeout: float = Field(
        default=60.0,
        description="Timeout in seconds for sandbox create/delete operations.",
    )

    env_vars: list[EnvVar] = Field(
        default_factory=lambda: [
            EnvVar(
                name="DAYTONA_API_KEY",
                description="API key for Daytona sandbox service",
                required=False,
            ),
            EnvVar(
                name="DAYTONA_API_URL",
                description="Daytona API base URL (optional)",
                required=False,
            ),
            EnvVar(
                name="DAYTONA_TARGET",
                description="Daytona target region (optional)",
                required=False,
            ),
        ]
    )

    _client: Any | None = PrivateAttr(default=None)
    _persistent_sandbox: Any | None = PrivateAttr(default=None)
    _lock: threading.Lock = PrivateAttr(default_factory=threading.Lock)
    _cleanup_registered: bool = PrivateAttr(default=False)

    _sdk_cache: ClassVar[dict[str, Any]] = {}

    @classmethod
    def _import_sdk(cls) -> dict[str, Any]:
        if cls._sdk_cache:
            return cls._sdk_cache
        try:
            from daytona import (
                CreateSandboxFromSnapshotParams,
                Daytona,
                DaytonaConfig,
            )
        except ImportError as exc:
            raise ImportError(
                "The 'daytona' package is required for Daytona sandbox tools. "
                "Install it with: uv add daytona (or) pip install daytona"
            ) from exc
        cls._sdk_cache = {
            "Daytona": Daytona,
            "DaytonaConfig": DaytonaConfig,
            "CreateSandboxFromSnapshotParams": CreateSandboxFromSnapshotParams,
        }
        return cls._sdk_cache

    def _get_client(self) -> Any:
        if self._client is not None:
            return self._client
        sdk = self._import_sdk()
        config_kwargs: dict[str, Any] = {}
        if self.api_key:
            config_kwargs["api_key"] = self.api_key
        if self.api_url:
            config_kwargs["api_url"] = self.api_url
        if self.target:
            config_kwargs["target"] = self.target
        config = sdk["DaytonaConfig"](**config_kwargs) if config_kwargs else None
        self._client = sdk["Daytona"](config) if config else sdk["Daytona"]()
        return self._client

    def _build_create_params(self) -> Any | None:
        if not self.create_params:
            return None
        sdk = self._import_sdk()
        return sdk["CreateSandboxFromSnapshotParams"](**self.create_params)

    def _acquire_sandbox(self) -> tuple[Any, bool]:
        """Return (sandbox, should_delete_after_use)."""
        client = self._get_client()

        if self.sandbox_id:
            return client.get(self.sandbox_id), False

        if self.persistent:
            with self._lock:
                if self._persistent_sandbox is None:
                    self._persistent_sandbox = client.create(
                        self._build_create_params(),
                        timeout=self.sandbox_timeout,
                    )
                    if not self._cleanup_registered:
                        atexit.register(self.close)
                        self._cleanup_registered = True
                return self._persistent_sandbox, False

        sandbox = client.create(
            self._build_create_params(),
            timeout=self.sandbox_timeout,
        )
        return sandbox, True

    def _release_sandbox(self, sandbox: Any, should_delete: bool) -> None:
        if not should_delete:
            return
        try:
            sandbox.delete(timeout=self.sandbox_timeout)
        except Exception:
            logger.debug(
                "Best-effort sandbox cleanup failed after ephemeral use; "
                "the sandbox may need manual deletion.",
                exc_info=True,
            )

    def close(self) -> None:
        """Delete the cached persistent sandbox if one exists."""
        with self._lock:
            sandbox = self._persistent_sandbox
            self._persistent_sandbox = None
        if sandbox is None:
            return
        try:
            sandbox.delete(timeout=self.sandbox_timeout)
        except Exception:
            logger.debug(
                "Best-effort persistent sandbox cleanup failed at close(); "
                "the sandbox may need manual deletion.",
                exc_info=True,
            )
@@ -0,0 +1,59 @@
from __future__ import annotations

from builtins import type as type_
from typing import Any

from pydantic import BaseModel, Field

from crewai_tools.tools.daytona_sandbox_tool.daytona_base_tool import DaytonaBaseTool


class DaytonaExecToolSchema(BaseModel):
    command: str = Field(..., description="Shell command to execute in the sandbox.")
    cwd: str | None = Field(
        default=None,
        description="Working directory to run the command in. Defaults to the sandbox work dir.",
    )
    env: dict[str, str] | None = Field(
        default=None,
        description="Optional environment variables to set for this command.",
    )
    timeout: int | None = Field(
        default=None,
        description="Maximum seconds to wait for the command to finish.",
    )


class DaytonaExecTool(DaytonaBaseTool):
    """Run a shell command inside a Daytona sandbox."""

    name: str = "Daytona Sandbox Exec"
    description: str = (
        "Execute a shell command inside a Daytona sandbox and return the exit "
        "code and combined output. Use this to run builds, package installs, "
        "git operations, or any one-off shell command."
    )
    args_schema: type_[BaseModel] = DaytonaExecToolSchema

    def _run(
        self,
        command: str,
        cwd: str | None = None,
        env: dict[str, str] | None = None,
        timeout: int | None = None,
    ) -> Any:
        sandbox, should_delete = self._acquire_sandbox()
        try:
            response = sandbox.process.exec(
                command,
                cwd=cwd,
                env=env,
                timeout=timeout,
            )
            return {
                "exit_code": getattr(response, "exit_code", None),
                "result": getattr(response, "result", None),
                "artifacts": getattr(response, "artifacts", None),
            }
        finally:
            self._release_sandbox(sandbox, should_delete)
@@ -0,0 +1,205 @@
from __future__ import annotations

import base64
from builtins import type as type_
import logging
import posixpath
from typing import Any, Literal

from pydantic import BaseModel, Field, model_validator

from crewai_tools.tools.daytona_sandbox_tool.daytona_base_tool import DaytonaBaseTool


logger = logging.getLogger(__name__)


FileAction = Literal["read", "write", "append", "list", "delete", "mkdir", "info"]


class DaytonaFileToolSchema(BaseModel):
    action: FileAction = Field(
        ...,
        description=(
            "The filesystem action to perform: 'read' (returns file contents), "
            "'write' (create or replace a file with content), 'append' (append "
            "content to an existing file — use this for writing large files in "
            "chunks to avoid hitting tool-call size limits), 'list' (lists a "
            "directory), 'delete' (removes a file/dir), 'mkdir' (creates a "
            "directory), 'info' (returns file metadata)."
        ),
    )
    path: str = Field(..., description="Absolute path inside the sandbox.")
    content: str | None = Field(
        default=None,
        description=(
            "Content to write or append. If omitted for 'write', an empty file "
            "is created. For files larger than a few KB, prefer one 'write' "
            "with empty content followed by multiple 'append' calls of ~4KB "
            "each to stay within tool-call payload limits."
        ),
    )
    binary: bool = Field(
        default=False,
        description=(
            "For 'write': treat content as base64 and upload raw bytes. "
            "For 'read': return contents as base64 instead of decoded utf-8."
        ),
    )
    recursive: bool = Field(
        default=False,
        description="For action='delete': remove directories recursively.",
    )
    mode: str = Field(
        default="0755",
        description="For action='mkdir': octal permission string (default 0755).",
    )

    @model_validator(mode="after")
    def _validate_action_args(self) -> DaytonaFileToolSchema:
        if self.action == "append" and self.content is None:
            raise ValueError(
                "action='append' requires 'content'. Pass the chunk to append "
                "in the 'content' field."
            )
        return self


class DaytonaFileTool(DaytonaBaseTool):
    """Read, write, and manage files inside a Daytona sandbox.

    Notes:
        - Most useful with `persistent=True` or an explicit `sandbox_id`. With the
          default ephemeral mode, files disappear when this tool call finishes.
    """

    name: str = "Daytona Sandbox Files"
    description: str = (
        "Perform filesystem operations inside a Daytona sandbox: read a file, "
        "write content to a path, append content to an existing file, list a "
        "directory, delete a path, make a directory, or fetch file metadata. "
        "For files larger than a few KB, create the file with action='write' "
        "and empty content, then send the body via multiple 'append' calls of "
        "~4KB each to stay within tool-call payload limits."
    )
    args_schema: type_[BaseModel] = DaytonaFileToolSchema

    def _run(
        self,
        action: FileAction,
        path: str,
        content: str | None = None,
        binary: bool = False,
        recursive: bool = False,
        mode: str = "0755",
    ) -> Any:
        sandbox, should_delete = self._acquire_sandbox()
        try:
            if action == "read":
                return self._read(sandbox, path, binary=binary)
            if action == "write":
                return self._write(sandbox, path, content or "", binary=binary)
            if action == "append":
                return self._append(sandbox, path, content or "", binary=binary)
            if action == "list":
                return self._list(sandbox, path)
            if action == "delete":
                sandbox.fs.delete_file(path, recursive=recursive)
                return {"status": "deleted", "path": path}
            if action == "mkdir":
                sandbox.fs.create_folder(path, mode)
                return {"status": "created", "path": path, "mode": mode}
            if action == "info":
                return self._info(sandbox, path)
            raise ValueError(f"Unknown action: {action}")
        finally:
            self._release_sandbox(sandbox, should_delete)

    def _read(self, sandbox: Any, path: str, *, binary: bool) -> dict[str, Any]:
        data: bytes = sandbox.fs.download_file(path)
        if binary:
            return {
                "path": path,
                "encoding": "base64",
                "content": base64.b64encode(data).decode("ascii"),
            }
        try:
            return {"path": path, "encoding": "utf-8", "content": data.decode("utf-8")}
        except UnicodeDecodeError:
            return {
                "path": path,
                "encoding": "base64",
                "content": base64.b64encode(data).decode("ascii"),
                "note": "File was not valid utf-8; returned as base64.",
            }

    def _write(
        self, sandbox: Any, path: str, content: str, *, binary: bool
    ) -> dict[str, Any]:
        payload = base64.b64decode(content) if binary else content.encode("utf-8")
        self._ensure_parent_dir(sandbox, path)
        sandbox.fs.upload_file(payload, path)
        return {"status": "written", "path": path, "bytes": len(payload)}

    def _append(
        self, sandbox: Any, path: str, content: str, *, binary: bool
    ) -> dict[str, Any]:
        chunk = base64.b64decode(content) if binary else content.encode("utf-8")
        self._ensure_parent_dir(sandbox, path)
        try:
            existing: bytes = sandbox.fs.download_file(path)
        except Exception:
            existing = b""
        payload = existing + chunk
        sandbox.fs.upload_file(payload, path)
        return {
            "status": "appended",
            "path": path,
            "appended_bytes": len(chunk),
            "total_bytes": len(payload),
        }

    @staticmethod
    def _ensure_parent_dir(sandbox: Any, path: str) -> None:
        """Make sure the parent directory of `path` exists.

        Daytona's upload returns 400 if the parent directory is missing. We
        best-effort mkdir the parent; any error (e.g. already exists) is
        swallowed because `create_folder` is not idempotent on the server.
        """
        parent = posixpath.dirname(path)
        if not parent or parent in ("/", "."):
            return
        try:
            sandbox.fs.create_folder(parent, "0755")
        except Exception:
            logger.debug(
                "Best-effort parent-directory create failed for %s; "
                "assuming it already exists and proceeding with the write.",
                parent,
                exc_info=True,
            )

    def _list(self, sandbox: Any, path: str) -> dict[str, Any]:
        entries = sandbox.fs.list_files(path)
        return {
            "path": path,
            "entries": [self._file_info_to_dict(entry) for entry in entries],
        }

    def _info(self, sandbox: Any, path: str) -> dict[str, Any]:
        return self._file_info_to_dict(sandbox.fs.get_file_info(path))

    @staticmethod
    def _file_info_to_dict(info: Any) -> dict[str, Any]:
        fields = (
            "name",
            "size",
            "mode",
            "permissions",
            "is_dir",
            "mod_time",
            "owner",
            "group",
        )
        return {field: getattr(info, field, None) for field in fields}
@@ -0,0 +1,82 @@
from __future__ import annotations

from builtins import type as type_
from typing import Any

from pydantic import BaseModel, Field

from crewai_tools.tools.daytona_sandbox_tool.daytona_base_tool import DaytonaBaseTool


class DaytonaPythonToolSchema(BaseModel):
    code: str = Field(
        ...,
        description="Python source to execute inside the sandbox.",
    )
    argv: list[str] | None = Field(
        default=None,
        description="Optional argv passed to the script (forwarded as params.argv).",
    )
    env: dict[str, str] | None = Field(
        default=None,
        description="Optional environment variables for the run (forwarded as params.env).",
    )
    timeout: int | None = Field(
        default=None,
        description="Maximum seconds to wait for the code to finish.",
    )


class DaytonaPythonTool(DaytonaBaseTool):
    """Run Python source inside a Daytona sandbox."""

    name: str = "Daytona Sandbox Python"
    description: str = (
        "Execute a block of Python code inside a Daytona sandbox and return the "
        "exit code, captured stdout, and any produced artifacts. Use this for "
        "data processing, quick scripts, or analysis that should run in an "
        "isolated environment."
    )
    args_schema: type_[BaseModel] = DaytonaPythonToolSchema

    def _run(
        self,
        code: str,
        argv: list[str] | None = None,
        env: dict[str, str] | None = None,
        timeout: int | None = None,
    ) -> Any:
        sandbox, should_delete = self._acquire_sandbox()
        try:
            params = self._build_code_run_params(argv=argv, env=env)
            response = sandbox.process.code_run(code, params=params, timeout=timeout)
            return {
                "exit_code": getattr(response, "exit_code", None),
                "result": getattr(response, "result", None),
                "artifacts": getattr(response, "artifacts", None),
            }
        finally:
            self._release_sandbox(sandbox, should_delete)

    def _build_code_run_params(
        self,
        argv: list[str] | None,
        env: dict[str, str] | None,
    ) -> Any | None:
        if argv is None and env is None:
            return None
        try:
            from daytona import CodeRunParams
        except ImportError as exc:
            raise ImportError(
                "Could not import daytona.CodeRunParams while building "
                "argv/env for sandbox.process.code_run. This usually means the "
                "installed 'daytona' SDK is too old or incompatible. Upgrade "
                "with: pip install -U 'crewai-tools[daytona]'"
            ) from exc
        kwargs: dict[str, Any] = {}
        if argv is not None:
            kwargs["argv"] = argv
        if env is not None:
            kwargs["env"] = env
        return CodeRunParams(**kwargs)
@@ -0,0 +1,120 @@
# E2B Sandbox Tools

Run shell commands, execute Python, and manage files inside an [E2B](https://e2b.dev/) sandbox. E2B provides isolated, ephemeral VMs suitable for agent-driven code execution, with a Jupyter-style code interpreter for rich Python results.

Three tools are provided so you can pick what the agent actually needs:

- **`E2BExecTool`** — run a shell command (`sandbox.commands.run`).
- **`E2BPythonTool`** — run a Python cell in the E2B code interpreter (`sandbox.run_code`), returning stdout/stderr and rich results (charts, dataframes).
- **`E2BFileTool`** — read / write / list / delete files (`sandbox.files.*`).

## Installation

```shell
uv add "crewai-tools[e2b]"
# or
pip install "crewai-tools[e2b]"
```

Set the API key:

```shell
export E2B_API_KEY="..."
```

`E2B_DOMAIN` is also respected if set (for self-hosted or non-default deployments).

## Sandbox lifecycle

All three tools share the same lifecycle controls from `E2BBaseTool`:

| Mode | When the sandbox is created | When it is killed |
| --- | --- | --- |
| **Ephemeral** (default, `persistent=False`) | On every `_run` call | At the end of that same call |
| **Persistent** (`persistent=True`) | Lazily on first use | At process exit (via `atexit`), or manually via `tool.close()` |
| **Attach** (`sandbox_id="…"`) | Never — the tool attaches to an existing sandbox | Never — the tool will not kill a sandbox it did not create |

Ephemeral mode is the safe default: nothing leaks if the agent forgets to clean up. Use persistent mode when you want filesystem state or installed packages to carry across steps — this is typical when pairing `E2BFileTool` with `E2BExecTool`.

E2B sandboxes also auto-expire after an idle timeout. Tune it via `sandbox_timeout` (seconds, default `300`).

## Examples

### One-shot Python execution (ephemeral)

```python
from crewai_tools import E2BPythonTool

tool = E2BPythonTool()
result = tool.run(code="print(sum(range(10)))")
```

### Multi-step shell session (persistent)

```python
from crewai_tools import E2BExecTool, E2BFileTool

exec_tool = E2BExecTool(persistent=True)
file_tool = E2BFileTool(persistent=True)

# Each tool keeps its own persistent sandbox. If you need the *same* sandbox
# across two tools, create one tool, grab the sandbox id via
# `tool._persistent_sandbox.sandbox_id`, and pass it to the other via
# `sandbox_id=...`.
```

### Attach to an existing sandbox

```python
from crewai_tools import E2BExecTool

tool = E2BExecTool(sandbox_id="sbx_...")
```

### Custom create params

```python
tool = E2BExecTool(
    persistent=True,
    template="my-custom-template",
    sandbox_timeout=600,
    envs={"MY_FLAG": "1"},
    metadata={"owner": "crewai-agent"},
)
```

## Tool arguments

### `E2BExecTool`
- `command: str` — shell command to run.
- `cwd: str | None` — working directory.
- `envs: dict[str, str] | None` — extra env vars for this command.
- `timeout: float | None` — seconds.

### `E2BPythonTool`
- `code: str` — source to execute.
- `language: str | None` — override kernel language (default: Python).
- `envs: dict[str, str] | None` — env vars for the run.
- `timeout: float | None` — seconds.

### `E2BFileTool`
- `action: "read" | "write" | "append" | "list" | "delete" | "mkdir" | "info" | "exists"`
- `path: str` — absolute path inside the sandbox.
- `content: str | None` — required for `append`; optional for `write`.
- `binary: bool` — if `True`, `content` is base64 on write / returned as base64 on read.
- `depth: int` — for `list`, how many levels to recurse (default 1).
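
### Writing large files in chunks

The chunked-append strategy that `E2BFileTool` recommends in its argument docs can be sketched in plain Python. `E2BFileTool` and its `action`/`path`/`content` arguments are real; the 4 KB chunk size and the `chunk_text` / `write_large_file` helpers below are illustrative, not part of the library:

```python
CHUNK_SIZE = 4096  # ~4KB per append call; illustrative, tune to your payload limit


def chunk_text(text: str, size: int = CHUNK_SIZE) -> list[str]:
    """Split text into consecutive chunks of at most `size` characters."""
    return [text[i : i + size] for i in range(0, len(text), size)]


def write_large_file(file_tool, path: str, body: str) -> None:
    """Create the file empty, then stream the body via 'append' calls."""
    file_tool.run(action="write", path=path, content="")
    for chunk in chunk_text(body):
        file_tool.run(action="append", path=path, content=chunk)


# Usage (assumes E2B_API_KEY is set and the sandbox outlives the calls):
#   from crewai_tools import E2BFileTool
#   tool = E2BFileTool(persistent=True)
#   write_large_file(tool, "/home/user/report.txt", big_report_text)
```

Pair this with `persistent=True` (or an explicit `sandbox_id`); with the default ephemeral mode each call lands in a different sandbox and the chunks never meet.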
## Security considerations

These tools hand the LLM arbitrary shell, Python, and filesystem access inside a remote VM. The threat model to keep in mind:

- **Prompt-injection is a code-execution vector.** If the agent ingests untrusted content (web pages, scraped documents, user-supplied files, emails, search results), a malicious instruction hidden in that content can coerce the agent into issuing commands to `E2BExecTool` / `E2BPythonTool`. Treat any pipeline that feeds untrusted text into an agent that also has these tools as equivalent to remote code execution — the LLM is the attacker's shell.
- **Ephemeral mode (the default) is the main blast-radius control.** A fresh sandbox is created per call and killed at the end, so injected commands cannot persist state, exfiltrate long-lived secrets, or build up tooling across turns. Leave `persistent=False` unless you have a concrete reason to change it.
- **Avoid this specific combination:**
  - untrusted content in the agent's context, **plus**
  - `persistent=True` or an explicit long-lived `sandbox_id`, **plus**
  - a large `sandbox_timeout` or credentials/secrets seeded into the sandbox via `envs`.

  That stack lets a single injection pivot into a long-running, credentialed shell that survives across turns. If you must run persistently, also keep `sandbox_timeout` short, scope `envs` to the minimum the task needs, and don't feed the same agent untrusted input.
- **Don't mount production credentials.** Anything you put into `envs`, `metadata`, or files written to the sandbox is reachable from the LLM. Use per-task scoped keys, not your personal API tokens.
- **E2B's VM isolation is the final backstop**, not a license to relax the above — isolation prevents escape to the host, but everything the sandbox can reach (the public internet, any service whose token you dropped in) is still fair game for an injected command.
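
Put together, a configuration that follows those guidelines might look like this sketch (the env var name and the timeout value are illustrative choices, not requirements):

```python
from crewai_tools import E2BExecTool

# Ephemeral by default: a fresh sandbox per call, killed when the call returns,
# so an injected command cannot persist state across turns.
locked_down = E2BExecTool(
    persistent=False,      # the safe default
    sandbox_timeout=120,   # short idle timeout, in seconds
    envs={"TASK_TOKEN": "scoped-short-lived-token"},  # hypothetical per-task secret
)
```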
@@ -0,0 +1,12 @@
from crewai_tools.tools.e2b_sandbox_tool.e2b_base_tool import E2BBaseTool
from crewai_tools.tools.e2b_sandbox_tool.e2b_exec_tool import E2BExecTool
from crewai_tools.tools.e2b_sandbox_tool.e2b_file_tool import E2BFileTool
from crewai_tools.tools.e2b_sandbox_tool.e2b_python_tool import E2BPythonTool


__all__ = [
    "E2BBaseTool",
    "E2BExecTool",
    "E2BFileTool",
    "E2BPythonTool",
]
@@ -0,0 +1,197 @@
from __future__ import annotations

import atexit
import logging
import os
import threading
from typing import Any, ClassVar

from crewai.tools import BaseTool, EnvVar
from pydantic import ConfigDict, Field, PrivateAttr, SecretStr


logger = logging.getLogger(__name__)


class E2BBaseTool(BaseTool):
    """Shared base for tools that act on an E2B sandbox.

    Lifecycle modes:
    - persistent=False (default): create a fresh sandbox per `_run` call and
      kill it when the call returns. Safer and stateless — nothing leaks if
      the agent forgets cleanup.
    - persistent=True: lazily create a single sandbox on first use, cache it
      on the instance, and register an atexit hook to kill it at process
      exit. Cheaper across many calls and lets files/state carry over.
    - sandbox_id=<existing>: attach to a sandbox the caller already owns.
      Never killed by the tool.
    """

    model_config = ConfigDict(arbitrary_types_allowed=True)

    package_dependencies: list[str] = Field(default_factory=lambda: ["e2b"])

    api_key: SecretStr | None = Field(
        default_factory=lambda: (
            SecretStr(val) if (val := os.getenv("E2B_API_KEY")) else None
        ),
        description="E2B API key. Falls back to E2B_API_KEY env var.",
        json_schema_extra={"required": False},
        repr=False,
    )
    domain: str | None = Field(
        default_factory=lambda: os.getenv("E2B_DOMAIN"),
        description="E2B API domain override. Falls back to E2B_DOMAIN env var.",
        json_schema_extra={"required": False},
    )

    template: str | None = Field(
        default=None,
        description=(
            "Optional template/snapshot name or id to create the sandbox from. "
            "Defaults to E2B's base template when omitted."
        ),
    )
    persistent: bool = Field(
        default=False,
        description=(
            "If True, reuse one sandbox across all calls to this tool instance "
            "and kill it at process exit. Default False creates and kills a "
            "fresh sandbox per call."
        ),
    )
    sandbox_id: str | None = Field(
        default=None,
        description=(
            "Attach to an existing sandbox by id instead of creating a new "
            "one. The tool will never kill a sandbox it did not create."
        ),
    )
    sandbox_timeout: int = Field(
        default=300,
        description=(
            "Idle timeout in seconds after which E2B auto-kills the sandbox. "
            "Applied at create time and when attaching via sandbox_id."
        ),
    )
    envs: dict[str, str] | None = Field(
        default=None,
        description="Environment variables to set inside the sandbox at create time.",
    )
    metadata: dict[str, str] | None = Field(
        default=None,
        description="Metadata key-value pairs to attach to the sandbox at create time.",
    )

    env_vars: list[EnvVar] = Field(
        default_factory=lambda: [
            EnvVar(
                name="E2B_API_KEY",
                description="API key for E2B sandbox service",
                required=False,
            ),
            EnvVar(
                name="E2B_DOMAIN",
                description="E2B API domain (optional)",
                required=False,
            ),
        ]
    )

    _persistent_sandbox: Any | None = PrivateAttr(default=None)
    _lock: threading.Lock = PrivateAttr(default_factory=threading.Lock)
    _cleanup_registered: bool = PrivateAttr(default=False)

    _sdk_cache: ClassVar[dict[str, Any]] = {}

    @classmethod
    def _import_sandbox_class(cls) -> Any:
        """Return the Sandbox class used by this tool.

        Subclasses override this to swap in a different SDK (e.g. the code
        interpreter sandbox). The default uses plain `e2b.Sandbox`.
        """
        cached = cls._sdk_cache.get("e2b.Sandbox")
        if cached is not None:
            return cached
        try:
            from e2b import Sandbox  # type: ignore[import-untyped]
        except ImportError as exc:
            raise ImportError(
                "The 'e2b' package is required for E2B sandbox tools. "
                "Install it with: uv add e2b (or) pip install e2b"
            ) from exc
        cls._sdk_cache["e2b.Sandbox"] = Sandbox
        return Sandbox

    def _connect_kwargs(self) -> dict[str, Any]:
        kwargs: dict[str, Any] = {}
        if self.api_key is not None:
            kwargs["api_key"] = self.api_key.get_secret_value()
        if self.domain:
            kwargs["domain"] = self.domain
        if self.sandbox_timeout is not None:
            kwargs["timeout"] = self.sandbox_timeout
        return kwargs

    def _create_kwargs(self) -> dict[str, Any]:
        kwargs: dict[str, Any] = self._connect_kwargs()
        if self.template is not None:
            kwargs["template"] = self.template
        if self.envs is not None:
            kwargs["envs"] = self.envs
        if self.metadata is not None:
            kwargs["metadata"] = self.metadata
        return kwargs

    def _acquire_sandbox(self) -> tuple[Any, bool]:
        """Return (sandbox, should_kill_after_use)."""
        sandbox_cls = self._import_sandbox_class()

        if self.sandbox_id:
            return (
                sandbox_cls.connect(self.sandbox_id, **self._connect_kwargs()),
                False,
            )

        if self.persistent:
            with self._lock:
                if self._persistent_sandbox is None:
                    self._persistent_sandbox = sandbox_cls.create(
                        **self._create_kwargs()
                    )
                if not self._cleanup_registered:
                    atexit.register(self.close)
                    self._cleanup_registered = True
                return self._persistent_sandbox, False

        sandbox = sandbox_cls.create(**self._create_kwargs())
        return sandbox, True

    def _release_sandbox(self, sandbox: Any, should_kill: bool) -> None:
        if not should_kill:
            return
        try:
            sandbox.kill()
        except Exception:
            logger.debug(
                "Best-effort sandbox cleanup failed after ephemeral use; "
                "the sandbox may need manual termination.",
                exc_info=True,
            )

    def close(self) -> None:
        """Kill the cached persistent sandbox if one exists."""
        with self._lock:
            sandbox = self._persistent_sandbox
            self._persistent_sandbox = None
        if sandbox is None:
            return
        try:
            sandbox.kill()
        except Exception:
            logger.debug(
                "Best-effort persistent sandbox cleanup failed at close(); "
                "the sandbox may need manual termination.",
                exc_info=True,
            )
@@ -0,0 +1,62 @@
from __future__ import annotations

from builtins import type as type_
from typing import Any

from pydantic import BaseModel, Field

from crewai_tools.tools.e2b_sandbox_tool.e2b_base_tool import E2BBaseTool


class E2BExecToolSchema(BaseModel):
    command: str = Field(..., description="Shell command to execute in the sandbox.")
    cwd: str | None = Field(
        default=None,
        description="Working directory to run the command in. Defaults to the sandbox home dir.",
    )
    envs: dict[str, str] | None = Field(
        default=None,
        description="Optional environment variables to set for this command.",
    )
    timeout: float | None = Field(
        default=None,
        description="Maximum seconds to wait for the command to finish.",
    )


class E2BExecTool(E2BBaseTool):
    """Run a shell command inside an E2B sandbox."""

    name: str = "E2B Sandbox Exec"
    description: str = (
        "Execute a shell command inside an E2B sandbox and return the exit "
        "code, stdout, and stderr. Use this to run builds, package installs, "
        "git operations, or any one-off shell command."
    )
    args_schema: type_[BaseModel] = E2BExecToolSchema

    def _run(
        self,
        command: str,
        cwd: str | None = None,
        envs: dict[str, str] | None = None,
        timeout: float | None = None,
    ) -> Any:
        sandbox, should_kill = self._acquire_sandbox()
        try:
            run_kwargs: dict[str, Any] = {}
            if cwd is not None:
                run_kwargs["cwd"] = cwd
            if envs is not None:
                run_kwargs["envs"] = envs
            if timeout is not None:
                run_kwargs["timeout"] = timeout
            result = sandbox.commands.run(command, **run_kwargs)
            return {
                "exit_code": getattr(result, "exit_code", None),
                "stdout": getattr(result, "stdout", None),
                "stderr": getattr(result, "stderr", None),
                "error": getattr(result, "error", None),
            }
        finally:
            self._release_sandbox(sandbox, should_kill)
@@ -0,0 +1,220 @@
from __future__ import annotations

import base64
from builtins import type as type_
import logging
import posixpath
from typing import Any, Literal

from pydantic import BaseModel, Field, model_validator

from crewai_tools.tools.e2b_sandbox_tool.e2b_base_tool import E2BBaseTool


logger = logging.getLogger(__name__)


FileAction = Literal[
    "read", "write", "append", "list", "delete", "mkdir", "info", "exists"
]


class E2BFileToolSchema(BaseModel):
    action: FileAction = Field(
        ...,
        description=(
            "The filesystem action to perform: 'read' (returns file contents), "
            "'write' (create or replace a file with content), 'append' (append "
            "content to an existing file — use this for writing large files in "
            "chunks to avoid hitting tool-call size limits), 'list' (lists a "
            "directory), 'delete' (removes a file/dir), 'mkdir' (creates a "
            "directory), 'info' (returns file metadata), 'exists' (returns a "
            "boolean for whether the path exists)."
        ),
    )
    path: str = Field(..., description="Absolute path inside the sandbox.")
    content: str | None = Field(
        default=None,
        description=(
            "Content to write or append. If omitted for 'write', an empty file "
            "is created. For files larger than a few KB, prefer one 'write' "
            "with empty content followed by multiple 'append' calls of ~4KB "
            "each to stay within tool-call payload limits."
        ),
    )
    binary: bool = Field(
        default=False,
        description=(
            "For 'write'/'append': treat content as base64 and upload raw "
            "bytes. For 'read': return contents as base64 instead of decoded "
            "utf-8."
        ),
    )
    depth: int = Field(
        default=1,
        description="For action='list': how many levels deep to recurse (default 1).",
    )

    @model_validator(mode="after")
    def _validate_action_args(self) -> E2BFileToolSchema:
        if self.action == "append" and self.content is None:
            raise ValueError(
                "action='append' requires 'content'. Pass the chunk to append "
                "in the 'content' field."
            )
        return self


class E2BFileTool(E2BBaseTool):
    """Read, write, and manage files inside an E2B sandbox.

    Notes:
        - Most useful with `persistent=True` or an explicit `sandbox_id`. With
          the default ephemeral mode, files disappear when this tool call
          finishes.
    """

    name: str = "E2B Sandbox Files"
    description: str = (
        "Perform filesystem operations inside an E2B sandbox: read a file, "
        "write content to a path, append content to an existing file, list a "
        "directory, delete a path, make a directory, fetch file metadata, or "
        "check whether a path exists. For files larger than a few KB, create "
        "the file with action='write' and empty content, then send the body "
        "via multiple 'append' calls of ~4KB each to stay within tool-call "
        "payload limits."
    )
    args_schema: type_[BaseModel] = E2BFileToolSchema

    def _run(
        self,
        action: FileAction,
        path: str,
        content: str | None = None,
        binary: bool = False,
        depth: int = 1,
    ) -> Any:
        sandbox, should_kill = self._acquire_sandbox()
        try:
            if action == "read":
                return self._read(sandbox, path, binary=binary)
            if action == "write":
                return self._write(sandbox, path, content or "", binary=binary)
            if action == "append":
                return self._append(sandbox, path, content or "", binary=binary)
            if action == "list":
                return self._list(sandbox, path, depth=depth)
            if action == "delete":
                sandbox.files.remove(path)
                return {"status": "deleted", "path": path}
            if action == "mkdir":
                created = sandbox.files.make_dir(path)
                return {"status": "created", "path": path, "created": bool(created)}
            if action == "info":
                return self._info(sandbox, path)
            if action == "exists":
                return {"path": path, "exists": bool(sandbox.files.exists(path))}
            raise ValueError(f"Unknown action: {action}")
        finally:
            self._release_sandbox(sandbox, should_kill)

    def _read(self, sandbox: Any, path: str, *, binary: bool) -> dict[str, Any]:
        if binary:
            data: bytes = sandbox.files.read(path, format="bytes")
            return {
                "path": path,
                "encoding": "base64",
                "content": base64.b64encode(data).decode("ascii"),
            }
        try:
            content: str = sandbox.files.read(path)
            return {"path": path, "encoding": "utf-8", "content": content}
        except UnicodeDecodeError:
            data = sandbox.files.read(path, format="bytes")
            return {
                "path": path,
                "encoding": "base64",
                "content": base64.b64encode(data).decode("ascii"),
                "note": "File was not valid utf-8; returned as base64.",
            }

    def _write(
        self, sandbox: Any, path: str, content: str, *, binary: bool
    ) -> dict[str, Any]:
        payload: str | bytes = base64.b64decode(content) if binary else content
        self._ensure_parent_dir(sandbox, path)
        sandbox.files.write(path, payload)
        size = (
            len(payload)
            if isinstance(payload, (bytes, bytearray))
            else len(payload.encode("utf-8"))
        )
        return {"status": "written", "path": path, "bytes": size}

    def _append(
        self, sandbox: Any, path: str, content: str, *, binary: bool
    ) -> dict[str, Any]:
        chunk: bytes = base64.b64decode(content) if binary else content.encode("utf-8")
        self._ensure_parent_dir(sandbox, path)
        try:
            existing: bytes = sandbox.files.read(path, format="bytes")
        except Exception:
            existing = b""
        payload = existing + chunk
        sandbox.files.write(path, payload)
        return {
            "status": "appended",
            "path": path,
            "appended_bytes": len(chunk),
            "total_bytes": len(payload),
        }

    @staticmethod
    def _ensure_parent_dir(sandbox: Any, path: str) -> None:
        parent = posixpath.dirname(path)
|
||||
if not parent or parent in ("/", "."):
|
||||
return
|
||||
try:
|
||||
sandbox.files.make_dir(parent)
|
||||
except Exception:
|
||||
logger.debug(
|
||||
"Best-effort parent-directory create failed for %s; "
|
||||
"assuming it already exists and proceeding with the write.",
|
||||
parent,
|
||||
exc_info=True,
|
||||
)
|
||||
|
||||
def _list(self, sandbox: Any, path: str, *, depth: int) -> dict[str, Any]:
|
||||
entries = sandbox.files.list(path, depth=depth)
|
||||
return {
|
||||
"path": path,
|
||||
"entries": [self._entry_to_dict(e) for e in entries],
|
||||
}
|
||||
|
||||
def _info(self, sandbox: Any, path: str) -> dict[str, Any]:
|
||||
return self._entry_to_dict(sandbox.files.get_info(path))
|
||||
|
||||
@staticmethod
|
||||
def _entry_to_dict(entry: Any) -> dict[str, Any]:
|
||||
fields = (
|
||||
"name",
|
||||
"path",
|
||||
"type",
|
||||
"size",
|
||||
"mode",
|
||||
"permissions",
|
||||
"owner",
|
||||
"group",
|
||||
"modified_time",
|
||||
"symlink_target",
|
||||
)
|
||||
result: dict[str, Any] = {}
|
||||
for field in fields:
|
||||
value = getattr(entry, field, None)
|
||||
if value is not None and field == "modified_time":
|
||||
result[field] = (
|
||||
value.isoformat() if hasattr(value, "isoformat") else str(value)
|
||||
)
|
||||
else:
|
||||
result[field] = value
|
||||
return result
|
||||
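The tool description above prescribes a write-then-append protocol for large files: create the file once with `action='write'` and empty content, then send the body in small `append` calls. A minimal client-side sketch of the chunking step (the helper name and the exact 4096-character chunk size are illustrative, not part of the tool's API):

```python
def chunk_payload(body: str, chunk_size: int = 4096) -> list[str]:
    """Split a large file body into pieces small enough for successive
    'append' tool calls, after an initial 'write' with empty content."""
    return [body[i : i + chunk_size] for i in range(0, len(body), chunk_size)]
```

Each chunk would then be sent as one `append` call for the same path, in order, so the sandbox file ends up byte-identical to the original body.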
@@ -0,0 +1,133 @@
from __future__ import annotations

from builtins import type as type_
from typing import Any, ClassVar

from pydantic import BaseModel, Field

from crewai_tools.tools.e2b_sandbox_tool.e2b_base_tool import E2BBaseTool


class E2BPythonToolSchema(BaseModel):
    code: str = Field(
        ...,
        description="Python source to execute inside the sandbox.",
    )
    language: str | None = Field(
        default=None,
        description=(
            "Override the execution language (e.g. 'python', 'r', 'javascript'). "
            "Defaults to Python when omitted."
        ),
    )
    envs: dict[str, str] | None = Field(
        default=None,
        description="Optional environment variables for the run.",
    )
    timeout: float | None = Field(
        default=None,
        description="Maximum seconds to wait for the code to finish.",
    )


class E2BPythonTool(E2BBaseTool):
    """Run Python code inside an E2B code interpreter sandbox.

    Uses `e2b_code_interpreter`, which runs cells in a persistent Jupyter-style
    kernel so state (imports, variables) carries across calls when
    `persistent=True`.
    """

    name: str = "E2B Sandbox Python"
    description: str = (
        "Execute a block of Python code inside an E2B code interpreter sandbox "
        "and return captured stdout, stderr, the final expression value, and "
        "any rich results (charts, dataframes). Use this for data processing, "
        "quick scripts, or analysis that should run in an isolated environment."
    )
    args_schema: type_[BaseModel] = E2BPythonToolSchema

    package_dependencies: list[str] = Field(
        default_factory=lambda: ["e2b_code_interpreter"],
    )

    _ci_cache: ClassVar[dict[str, Any]] = {}

    @classmethod
    def _import_sandbox_class(cls) -> Any:
        cached = cls._ci_cache.get("Sandbox")
        if cached is not None:
            return cached
        try:
            from e2b_code_interpreter import Sandbox  # type: ignore[import-untyped]
        except ImportError as exc:
            raise ImportError(
                "The 'e2b_code_interpreter' package is required for the E2B "
                "Python tool. Install it with: "
                "uv add e2b-code-interpreter (or) "
                "pip install e2b-code-interpreter"
            ) from exc
        cls._ci_cache["Sandbox"] = Sandbox
        return Sandbox

    def _run(
        self,
        code: str,
        language: str | None = None,
        envs: dict[str, str] | None = None,
        timeout: float | None = None,
    ) -> Any:
        sandbox, should_kill = self._acquire_sandbox()
        try:
            run_kwargs: dict[str, Any] = {}
            if language is not None:
                run_kwargs["language"] = language
            if envs is not None:
                run_kwargs["envs"] = envs
            if timeout is not None:
                run_kwargs["timeout"] = timeout
            execution = sandbox.run_code(code, **run_kwargs)
            return self._serialize_execution(execution)
        finally:
            self._release_sandbox(sandbox, should_kill)

    @staticmethod
    def _serialize_execution(execution: Any) -> dict[str, Any]:
        logs = getattr(execution, "logs", None)
        error = getattr(execution, "error", None)
        results = getattr(execution, "results", None) or []
        return {
            "text": getattr(execution, "text", None),
            "stdout": list(getattr(logs, "stdout", []) or []) if logs else [],
            "stderr": list(getattr(logs, "stderr", []) or []) if logs else [],
            "error": (
                {
                    "name": getattr(error, "name", None),
                    "value": getattr(error, "value", None),
                    "traceback": getattr(error, "traceback", None),
                }
                if error
                else None
            ),
            "results": [E2BPythonTool._serialize_result(r) for r in results],
            "execution_count": getattr(execution, "execution_count", None),
        }

    @staticmethod
    def _serialize_result(result: Any) -> dict[str, Any]:
        fields = (
            "text",
            "html",
            "markdown",
            "svg",
            "png",
            "jpeg",
            "pdf",
            "latex",
            "json",
            "javascript",
            "data",
            "is_main_result",
            "extra",
        )
        return {field: getattr(result, field, None) for field in fields}
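`_serialize_execution` above reduces the SDK's execution object to plain JSON-friendly data through `getattr` with defaults, so missing attributes degrade to `None` or empty lists instead of raising. A stripped-down re-statement of that defaulting behavior against bare stubs (the stubs and the reduced field set are illustrative; error and rich-result handling are omitted):

```python
from types import SimpleNamespace


def serialize_execution(execution):
    """getattr-based defaulting, mirroring _serialize_execution above
    (error and rich-result fields omitted for brevity)."""
    logs = getattr(execution, "logs", None)
    return {
        "text": getattr(execution, "text", None),
        "stdout": list(getattr(logs, "stdout", []) or []) if logs else [],
        "stderr": list(getattr(logs, "stderr", []) or []) if logs else [],
    }


# A stub with partial attributes serializes cleanly, as does a bare object.
stub = SimpleNamespace(text="42", logs=SimpleNamespace(stdout=["42\n"], stderr=None))
```

Here `stderr=None` still yields an empty list because of the `or []` fallback, and an object with no attributes at all produces `{"text": None, "stdout": [], "stderr": []}`.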
File diff suppressed because it is too large
@@ -10,7 +10,7 @@ requires-python = ">=3.10, <3.14"
 dependencies = [
     # Core Dependencies
     "pydantic~=2.11.9",
-    "openai>=1.83.0,<3",
+    "openai>=2.0.0,<3",
     "instructor>=1.3.3",
     # Text Processing
     "pdfplumber~=0.11.4",
@@ -24,7 +24,7 @@ dependencies = [
     "tokenizers>=0.21,<1",
     "openpyxl~=3.1.5",
     # Authentication and Security
-    "python-dotenv~=1.1.1",
+    "python-dotenv>=1.2.2,<2",
     "pyjwt>=2.9.0,<3",
     # TUI
     "textual>=7.5.0",
@@ -40,7 +40,7 @@ dependencies = [
     "pydantic-settings~=2.10.1",
     "httpx~=0.28.1",
     "mcp~=1.26.0",
-    "uv~=0.9.13",
+    "uv~=0.11.6",
     "aiosqlite~=0.21.0",
     "pyyaml~=6.0",
     "aiofiles~=24.1.0",
@@ -55,7 +55,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
 
 [project.optional-dependencies]
 tools = [
-    "crewai-tools==1.14.2a1",
+    "crewai-tools==1.14.3",
 ]
 embeddings = [
     "tiktoken~=0.8.0"
@@ -94,6 +94,7 @@ google-genai = [
 ]
 azure-ai-inference = [
     "azure-ai-inference~=1.0.0b9",
+    "azure-identity>=1.17.0,<2",
 ]
 anthropic = [
     "anthropic~=0.73.0",

@@ -1,10 +1,9 @@
-import contextvars
-import threading
-from typing import Any
-import urllib.request
+import importlib
+import sys
+from typing import TYPE_CHECKING, Annotated, Any
 import warnings
 
-from pydantic import PydanticUserError
+from pydantic import Field, PydanticUserError
 
 from crewai.agent.core import Agent
 from crewai.agent.planning_config import PlanningConfig
@@ -20,7 +19,10 @@ from crewai.state.checkpoint_config import CheckpointConfig  # noqa: F401
 from crewai.task import Task
 from crewai.tasks.llm_guardrail import LLMGuardrail
 from crewai.tasks.task_output import TaskOutput
-from crewai.telemetry.telemetry import Telemetry
+
+
+if TYPE_CHECKING:
+    from crewai.memory.unified_memory import Memory
 
 
 def _suppress_pydantic_deprecation_warnings() -> None:
@@ -46,38 +48,7 @@
 
 _suppress_pydantic_deprecation_warnings()
 
-__version__ = "1.14.2a1"
-_telemetry_submitted = False
-
-
-def _track_install() -> None:
-    """Track package installation/first-use via Scarf analytics."""
-    global _telemetry_submitted
-
-    if _telemetry_submitted or Telemetry._is_telemetry_disabled():
-        return
-
-    try:
-        pixel_url = "https://api.scarf.sh/v2/packages/CrewAI/crewai/docs/00f2dad1-8334-4a39-934e-003b2e1146db"
-
-        req = urllib.request.Request(pixel_url)  # noqa: S310
-        req.add_header("User-Agent", f"CrewAI-Python/{__version__}")
-
-        with urllib.request.urlopen(req, timeout=2):  # noqa: S310
-            _telemetry_submitted = True
-    except Exception:  # noqa: S110
-        pass
-
-
-def _track_install_async() -> None:
-    """Track installation in background thread to avoid blocking imports."""
-    if not Telemetry._is_telemetry_disabled():
-        ctx = contextvars.copy_context()
-        thread = threading.Thread(target=ctx.run, args=(_track_install,), daemon=True)
-        thread.start()
-
-
-_track_install_async()
+__version__ = "1.14.3"
 
 _LAZY_IMPORTS: dict[str, tuple[str, str]] = {
     "Memory": ("crewai.memory.unified_memory", "Memory"),
@@ -88,8 +59,6 @@ def __getattr__(name: str) -> Any:
     """Lazily import heavy modules (e.g. Memory → lancedb) on first access."""
     if name in _LAZY_IMPORTS:
         module_path, attr = _LAZY_IMPORTS[name]
-        import importlib
-
         mod = importlib.import_module(module_path)
         val = getattr(mod, attr)
         globals()[name] = val
@@ -125,10 +94,16 @@ try:
     }
 
     from crewai.tools.base_tool import BaseTool as _BaseTool
+    from crewai.tools.flow_tool import (
+        FlowTool as _FlowTool,
+        create_flow_tools as _create_flow_tools,
+    )
     from crewai.tools.structured_tool import CrewStructuredTool as _CrewStructuredTool
 
     _base_namespace["BaseTool"] = _BaseTool
     _base_namespace["CrewStructuredTool"] = _CrewStructuredTool
+    _base_namespace["FlowTool"] = _FlowTool
+    _base_namespace["create_flow_tools"] = _create_flow_tools  # type: ignore[assignment]
 
     try:
         from crewai.a2a.config import (
@@ -147,8 +122,6 @@ try:
     except ImportError:
         pass
 
-    import sys
-
     _full_namespace = {
         **_base_namespace,
         "ToolsHandler": _ToolsHandler,
@@ -191,10 +164,6 @@ try:
     Flow.model_rebuild(force=True, _types_namespace=_full_namespace)
     _AgentExecutor.model_rebuild(force=True, _types_namespace=_full_namespace)
 
-    from typing import Annotated
-
-    from pydantic import Field
-
     from crewai.state.runtime import RuntimeState
 
     Entity = Annotated[
@@ -98,7 +98,6 @@ class A2AErrorCode(IntEnum):
     """The specified artifact was not found."""
 
 
-# Error code to default message mapping
 ERROR_MESSAGES: dict[int, str] = {
     A2AErrorCode.JSON_PARSE_ERROR: "Parse error",
     A2AErrorCode.INVALID_REQUEST: "Invalid Request",
@@ -63,25 +63,21 @@ class A2AExtension(Protocol):
     Example:
         class MyExtension:
             def inject_tools(self, agent: Agent) -> None:
-                # Add custom tools to the agent
                 pass
 
             def extract_state_from_history(
                 self, conversation_history: Sequence[Message]
             ) -> ConversationState | None:
-                # Extract state from conversation
                 return None
 
             def augment_prompt(
                 self, base_prompt: str, conversation_state: ConversationState | None
             ) -> str:
-                # Add custom instructions
                 return base_prompt
 
             def process_response(
                 self, agent_response: Any, conversation_state: ConversationState | None
             ) -> Any:
-                # Modify response if needed
                 return agent_response
     """
@@ -77,7 +77,6 @@ def extract_a2a_agent_ids_from_config(
     else:
         configs = a2a_config
 
-    # Filter to only client configs (those with endpoint)
    client_configs: list[A2AClientConfigTypes] = [
        config for config in configs if isinstance(config, (A2AConfig, A2AClientConfig))
    ]
@@ -29,7 +29,7 @@ from pydantic import (
     model_validator,
 )
 from pydantic.functional_serializers import PlainSerializer
-from typing_extensions import Self
+from typing_extensions import Self, TypeIs
 
 from crewai.agent.planning_config import PlanningConfig
 from crewai.agent.utils import (
@@ -78,13 +78,14 @@ from crewai.knowledge.knowledge import Knowledge
 from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
 from crewai.lite_agent_output import LiteAgentOutput
 from crewai.llms.base_llm import BaseLLM
-from crewai.mcp import MCPServerConfig
-from crewai.mcp.tool_resolver import MCPToolResolver
+from crewai.mcp.config import MCPServerConfig
 from crewai.rag.embeddings.types import EmbedderConfig
 from crewai.security.fingerprint import Fingerprint
 from crewai.skills.loader import activate_skill, discover_skills
 from crewai.skills.models import INSTRUCTIONS, Skill as SkillModel
+from crewai.state.checkpoint_config import CheckpointConfig, apply_checkpoint
 from crewai.tools.agent_tools.agent_tools import AgentTools
+from crewai.tools.flow_tool import create_flow_tools
 from crewai.types.callback import SerializableCallable
 from crewai.utilities.agent_utils import (
     get_tool_names,
@@ -118,6 +119,7 @@ if TYPE_CHECKING:
 
     from crewai.a2a.config import A2AClientConfig, A2AConfig, A2AServerConfig
     from crewai.agents.agent_builder.base_agent import PlatformAppOrAction
+    from crewai.mcp.tool_resolver import MCPToolResolver
     from crewai.task import Task
     from crewai.tools.base_tool import BaseTool
     from crewai.tools.structured_tool import CrewStructuredTool
@@ -132,6 +134,13 @@ _EXECUTOR_CLASS_MAP: dict[str, type] = {
 }
 
 
+def _is_resuming_agent_executor(
+    executor: CrewAgentExecutor | AgentExecutor | None,
+) -> TypeIs[AgentExecutor]:
+    """Type guard: True when the executor is resuming from a checkpoint."""
+    return isinstance(executor, AgentExecutor) and executor._resuming
+
+
 def _validate_executor_class(value: Any) -> Any:
     if isinstance(value, str):
         cls = _EXECUTOR_CLASS_MAP.get(value)
@@ -297,6 +306,10 @@ class Agent(BaseAgent):
         Can be a single A2AConfig/A2AClientConfig/A2AServerConfig, or a list of any number of A2AConfig/A2AClientConfig with a single A2AServerConfig.
         """,
     )
+    flows: list[Any] | None = Field(
+        default=None,
+        description="Flow classes that the agent can invoke as tools. Each entry is a Flow subclass (not an instance).",
+    )
     agent_executor: CrewAgentExecutor | AgentExecutor | None = Field(
         default=None, description="An instance of the CrewAgentExecutor class."
     )
@@ -339,6 +352,7 @@
         )
 
         self.set_skills()
+        self._set_flow_tools()
 
         if self.reasoning and self.planning_config is None:
             warnings.warn(
@@ -386,15 +400,17 @@
         self,
         resolved_crew_skills: list[SkillModel] | None = None,
     ) -> None:
-        """Resolve skill paths and activate skills to INSTRUCTIONS level.
+        """Resolve skill paths while preserving explicit disclosure levels.
 
-        Path entries trigger discovery and activation. Pre-loaded Skill objects
-        below INSTRUCTIONS level are activated. Crew-level skills are merged in
-        with event emission so observability is consistent regardless of origin.
+        Path entries trigger discovery and activation because directory-based
+        skills opt into eager loading. Pre-loaded Skill objects keep their
+        current disclosure level so callers can attach METADATA-only skills and
+        progressively activate them later. Crew-level skills are merged in with
+        event emission so observability is consistent regardless of origin.
 
         Args:
-            resolved_crew_skills: Pre-resolved crew skills (already discovered
-                and activated). When provided, avoids redundant discovery per agent.
+            resolved_crew_skills: Pre-resolved crew skills. When provided,
+                avoids redundant discovery per agent.
         """
         from crewai.crew import Crew
@@ -435,8 +451,7 @@
             elif isinstance(item, SkillModel):
                 if item.name not in seen:
                     seen.add(item.name)
-                    activated = activate_skill(item, source=self)
-                    if activated is item and item.disclosure_level >= INSTRUCTIONS:
+                    if item.disclosure_level >= INSTRUCTIONS:
                         crewai_event_bus.emit(
                             self,
                             event=SkillActivatedEvent(
@@ -446,10 +461,20 @@
                                 disclosure_level=item.disclosure_level,
                             ),
                         )
-                    resolved.append(activated)
+                    resolved.append(item)
 
         self.skills = resolved if resolved else None
 
+    def _set_flow_tools(self) -> None:
+        """Convert Flow classes in ``self.flows`` to tools and merge them."""
+        if not self.flows:
+            return
+        flow_tools = create_flow_tools(self.flows)
+        if flow_tools:
+            if self.tools is None:
+                self.tools = []
+            self.tools.extend(flow_tools)
+
     def _is_any_available_memory(self) -> bool:
         """Check if unified memory is available (agent or crew)."""
         if getattr(self, "memory", None):
@@ -1112,6 +1137,8 @@
         Delegates to :class:`~crewai.mcp.tool_resolver.MCPToolResolver`.
         """
         self._cleanup_mcp_clients()
+        from crewai.mcp.tool_resolver import MCPToolResolver
+
         self._mcp_resolver = MCPToolResolver(agent=self, logger=self._logger)
         return self._mcp_resolver.resolve(mcps)
 
@@ -1341,7 +1368,6 @@
 
         raw_tools: list[BaseTool] = self.tools or []
 
-        # Inject memory tools for standalone kickoff (crew path handles its own)
         agent_memory = getattr(self, "memory", None)
         if agent_memory is not None:
             from crewai.tools.memory_tools import create_memory_tools
@@ -1366,24 +1392,42 @@
 
         prompt, stop_words, rpm_limit_fn = self._build_execution_prompt(raw_tools)
 
-        executor = AgentExecutor(
-            llm=cast(BaseLLM, self.llm),
-            agent=self,
-            prompt=prompt,
-            max_iter=self.max_iter,
-            tools=parsed_tools,
-            tools_names=get_tool_names(parsed_tools),
-            stop_words=stop_words,
-            tools_description=render_text_description_and_args(parsed_tools),
-            tools_handler=self.tools_handler,
-            original_tools=raw_tools,
-            step_callback=self.step_callback,
-            function_calling_llm=self.function_calling_llm,
-            respect_context_window=self.respect_context_window,
-            request_within_rpm_limit=rpm_limit_fn,
-            callbacks=[TokenCalcHandler(self._token_process)],
-            response_model=response_format,
-        )
+        if _is_resuming_agent_executor(self.agent_executor):
+            executor = self.agent_executor
+            executor.tools = parsed_tools
+            executor.tools_names = get_tool_names(parsed_tools)
+            executor.tools_description = render_text_description_and_args(parsed_tools)
+            executor.original_tools = raw_tools
+            executor.prompt = prompt
+            executor.response_model = response_format
+            executor.stop_words = stop_words
+            executor.tools_handler = self.tools_handler
+            executor.step_callback = self.step_callback
+            executor.function_calling_llm = cast(
+                BaseLLM | None, self.function_calling_llm
+            )
+            executor.respect_context_window = self.respect_context_window
+            executor.request_within_rpm_limit = rpm_limit_fn
+            executor.callbacks = [TokenCalcHandler(self._token_process)]
+        else:
+            executor = AgentExecutor(
+                llm=cast(BaseLLM, self.llm),
+                agent=self,
+                prompt=prompt,
+                max_iter=self.max_iter,
+                tools=parsed_tools,
+                tools_names=get_tool_names(parsed_tools),
+                stop_words=stop_words,
+                tools_description=render_text_description_and_args(parsed_tools),
+                tools_handler=self.tools_handler,
+                original_tools=raw_tools,
+                step_callback=self.step_callback,
+                function_calling_llm=self.function_calling_llm,
+                respect_context_window=self.respect_context_window,
+                request_within_rpm_limit=rpm_limit_fn,
+                callbacks=[TokenCalcHandler(self._token_process)],
+                response_model=response_format,
+            )
 
         all_files: dict[str, Any] = {}
         if isinstance(messages, str):
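The hunk above switches executor construction to a reuse-or-build pattern: a checkpoint-restored executor is kept and only its per-run fields are rebound, while a fresh `AgentExecutor` is built otherwise. The control flow reduces to a sketch like this (all names are illustrative, not the real executor API):

```python
class Executor:
    def __init__(self) -> None:
        self.resuming = False
        self.tools: list[str] = []


def get_executor(existing, build, rebind):
    """Reuse a mid-resume executor, refreshing per-run fields via rebind;
    otherwise construct a fresh one."""
    if existing is not None and existing.resuming:
        rebind(existing)
        return existing
    return build()


# A resuming executor is reused in place with its tools rebound...
old = Executor()
old.resuming = True
reused = get_executor(old, build=Executor,
                      rebind=lambda e: setattr(e, "tools", ["search"]))
# ...while a missing/non-resuming one triggers a fresh build.
fresh = get_executor(None, build=Executor, rebind=lambda e: None)
```

Reusing the restored instance preserves executor-internal state (the conversation so far) that a fresh construction would discard.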
@@ -1399,7 +1443,6 @@
         if input_files:
             all_files.update(input_files)
 
-        # Inject memory context for standalone kickoff (recall before execution)
         if agent_memory is not None:
             try:
                 crewai_event_bus.emit(
@@ -1459,6 +1502,7 @@
         messages: str | list[LLMMessage],
         response_format: type[Any] | None = None,
         input_files: dict[str, FileInput] | None = None,
+        from_checkpoint: CheckpointConfig | None = None,
     ) -> LiteAgentOutput | Coroutine[Any, Any, LiteAgentOutput]:
         """Execute the agent with the given messages using the AgentExecutor.
 
@@ -1477,6 +1521,9 @@
             response_format: Optional Pydantic model for structured output.
             input_files: Optional dict of named files to attach to the message.
                 Files can be paths, bytes, or File objects from crewai_files.
+            from_checkpoint: Optional checkpoint config. If ``restore_from``
+                is set, the agent resumes from that checkpoint. Remaining
+                config fields enable checkpointing for the run.
 
         Returns:
             LiteAgentOutput: The result of the agent execution.
@@ -1485,8 +1532,14 @@
         Note:
             For explicit async usage outside of Flow, use kickoff_async() directly.
         """
-        # Magic auto-async: if inside event loop (e.g., inside a Flow),
-        # return coroutine for Flow to await
+        restored = apply_checkpoint(self, from_checkpoint)
+        if restored is not None:
+            return restored.kickoff(  # type: ignore[no-any-return]
+                messages=messages,
+                response_format=response_format,
+                input_files=input_files,
+            )
+
         if is_inside_event_loop():
             return self.kickoff_async(messages, response_format, input_files)
 
@@ -1495,14 +1548,17 @@
         )
 
         try:
-            crewai_event_bus.emit(
-                self,
-                event=LiteAgentExecutionStartedEvent(
+            if self.checkpoint_kickoff_event_id is not None:
+                self._kickoff_event_id = self.checkpoint_kickoff_event_id
+                self.checkpoint_kickoff_event_id = None
+            else:
+                started_event = LiteAgentExecutionStartedEvent(
                     agent_info=agent_info,
                     tools=parsed_tools,
                     messages=messages,
-                ),
-            )
+                )
+                crewai_event_bus.emit(self, event=started_event)
+                self._kickoff_event_id = started_event.event_id
 
             output = self._execute_and_build_output(executor, inputs, response_format)
             return self._finalize_kickoff(
@@ -1637,7 +1693,7 @@
                 if isinstance(conversion_result, BaseModel):
                     formatted_result = conversion_result
             except ConverterError:
-                pass  # Keep raw output if conversion fails
+                pass
         else:
             raw_output = str(output) if not isinstance(output, str) else output
 
@@ -1719,7 +1775,6 @@
         elif callable(self.guardrail):
             guardrail_callable = self.guardrail
         else:
-            # Should not happen if called from kickoff with guardrail check
             return output
 
         guardrail_result = process_guardrail(
@@ -1765,6 +1820,7 @@
         messages: str | list[LLMMessage],
         response_format: type[Any] | None = None,
         input_files: dict[str, FileInput] | None = None,
+        from_checkpoint: CheckpointConfig | None = None,
     ) -> LiteAgentOutput:
         """Execute the agent asynchronously with the given messages.
 
@@ -1780,23 +1836,36 @@
             response_format: Optional Pydantic model for structured output.
             input_files: Optional dict of named files to attach to the message.
                 Files can be paths, bytes, or File objects from crewai_files.
+            from_checkpoint: Optional checkpoint config. If ``restore_from``
+                is set, the agent resumes from that checkpoint.
 
         Returns:
             LiteAgentOutput: The result of the agent execution.
         """
+        restored = apply_checkpoint(self, from_checkpoint)
+        if restored is not None:
+            return await restored.kickoff_async(  # type: ignore[no-any-return]
+                messages=messages,
+                response_format=response_format,
+                input_files=input_files,
+            )
+
         executor, inputs, agent_info, parsed_tools = self._prepare_kickoff(
             messages, response_format, input_files
         )
 
         try:
-            crewai_event_bus.emit(
-                self,
-                event=LiteAgentExecutionStartedEvent(
+            if self.checkpoint_kickoff_event_id is not None:
+                self._kickoff_event_id = self.checkpoint_kickoff_event_id
+                self.checkpoint_kickoff_event_id = None
+            else:
+                started_event = LiteAgentExecutionStartedEvent(
                     agent_info=agent_info,
                     tools=parsed_tools,
                     messages=messages,
-                ),
-            )
+                )
+                crewai_event_bus.emit(self, event=started_event)
+                self._kickoff_event_id = started_event.event_id
 
             output = await self._execute_and_build_output_async(
                 executor, inputs, response_format
@@ -1813,6 +1882,7 @@
         messages: str | list[LLMMessage],
         response_format: type[Any] | None = None,
         input_files: dict[str, FileInput] | None = None,
+        from_checkpoint: CheckpointConfig | None = None,
     ) -> LiteAgentOutput:
         """Async version of kickoff. Alias for kickoff_async.
 
@@ -1820,8 +1890,12 @@
             messages: Either a string query or a list of message dictionaries.
             response_format: Optional Pydantic model for structured output.
             input_files: Optional dict of named files to attach to the message.
+            from_checkpoint: Optional checkpoint config. If ``restore_from``
+                is set, the agent resumes from that checkpoint.
 
         Returns:
             LiteAgentOutput: The result of the agent execution.
         """
-        return await self.kickoff_async(messages, response_format, input_files)
+        return await self.kickoff_async(
+            messages, response_format, input_files, from_checkpoint
+        )
@@ -41,7 +41,6 @@ class PlanningConfig(BaseModel):
         from crewai import Agent
         from crewai.agent.planning_config import PlanningConfig
 
-        # Simple usage — fast, linear execution (default)
         agent = Agent(
             role="Researcher",
             goal="Research topics",
@@ -49,7 +48,6 @@ class PlanningConfig(BaseModel):
             planning_config=PlanningConfig(),
         )
 
-        # Balanced — replan only when steps fail
         agent = Agent(
             role="Researcher",
             goal="Research topics",
@@ -59,7 +57,6 @@ class PlanningConfig(BaseModel):
             ),
         )
 
-        # Full adaptive planning with refinement and replanning
         agent = Agent(
             role="Researcher",
             goal="Research topics",
@@ -69,7 +66,7 @@ class PlanningConfig(BaseModel):
                 max_attempts=3,
                 max_steps=10,
                 plan_prompt="Create a focused plan for: {description}",
-                llm="gpt-4o-mini",  # Use cheaper model for planning
+                llm="gpt-4o-mini",
             ),
         )
         ```
@@ -39,7 +39,6 @@ def handle_reasoning(agent: Agent, task: Task) -> None:
         agent: The agent performing the task.
         task: The task to execute.
     """
-    # Check if planning is enabled using the planning_enabled property
    if not getattr(agent, "planning_enabled", False):
        return
 
@@ -99,12 +99,10 @@ class OpenAIAgentToolAdapter(BaseToolAdapter):
         Returns:
             Tool execution result.
         """
-        # Get the parameter name from the schema
         param_name: str = next(
             iter(tool.args_schema.model_json_schema()["properties"].keys())
         )
 
-        # Handle different argument types
         args_dict: dict[str, Any]
         if isinstance(arguments, dict):
             args_dict = arguments
@@ -116,16 +114,13 @@ class OpenAIAgentToolAdapter(BaseToolAdapter):
         else:
             args_dict = {param_name: str(arguments)}
 
-        # Run the tool with the processed arguments
         output: Any | Awaitable[Any] = tool._run(**args_dict)
 
-        # Await if the tool returned a coroutine
         if inspect.isawaitable(output):
             result: Any = await output
         else:
             result = output
 
-        # Ensure the result is JSON serializable
         if isinstance(result, (dict, list, str, int, float, bool, type(None))):
             return result
         return str(result)
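The adapter hunks keep the argument-coercion logic while dropping its comments: a dict passes through unchanged, and anything else falls back to being wrapped under the tool's first parameter name. A standalone sketch of that normalization; the JSON-string branch here is an assumption, since the hunks cut away the middle of the original `if`/`elif` chain:

```python
import json
from typing import Any


def coerce_tool_args(arguments: Any, param_name: str) -> dict[str, Any]:
    """Normalize raw tool-call arguments into a kwargs dict."""
    if isinstance(arguments, dict):
        return arguments
    if isinstance(arguments, str):
        # Assumed branch: try to parse a JSON object before falling back.
        try:
            parsed = json.loads(arguments)
            if isinstance(parsed, dict):
                return parsed
        except json.JSONDecodeError:
            pass
    # Fallback: wrap the scalar under the schema's first parameter name.
    return {param_name: str(arguments)}
```

The fallback guarantees the call `tool._run(**args_dict)` always receives keyword arguments, whatever shape the model emitted.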
@@ -28,6 +28,9 @@ from crewai.agents.agent_builder.base_agent_executor import BaseAgentExecutor
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.tools_handler import ToolsHandler
from crewai.events.base_events import set_emission_counter
from crewai.events.event_bus import crewai_event_bus
from crewai.events.event_context import restore_event_scope, set_last_event_id
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.knowledge_config import KnowledgeConfig
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
@@ -51,6 +54,7 @@ from crewai.utilities.string_utils import interpolate_only
if TYPE_CHECKING:
from crewai.context import ExecutionContext
from crewai.crew import Crew
from crewai.state.runtime import RuntimeState


def _validate_crew_ref(value: Any) -> Any:
@@ -219,6 +223,7 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
_original_goal: str | None = PrivateAttr(default=None)
_original_backstory: str | None = PrivateAttr(default=None)
_token_process: TokenProcess = PrivateAttr(default_factory=TokenProcess)
_kickoff_event_id: str | None = PrivateAttr(default=None)
id: UUID4 = Field(default_factory=uuid.uuid4, frozen=True)
role: str = Field(description="Role of the agent")
goal: str = Field(description="Objective of the agent")
@@ -335,30 +340,90 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
min_length=1,
)
execution_context: ExecutionContext | None = Field(default=None)
checkpoint_kickoff_event_id: str | None = Field(default=None)

@classmethod
def from_checkpoint(cls, config: CheckpointConfig) -> Self:
"""Restore an Agent from a checkpoint.
"""Restore an Agent from a checkpoint, ready to resume via kickoff().

Args:
config: Checkpoint configuration with ``restore_from`` set.
config: Checkpoint configuration with ``restore_from`` set to
the path of the checkpoint to load.

Returns:
An Agent instance. Call kickoff() to resume execution.
"""
from crewai.context import apply_execution_context
from crewai.state.runtime import RuntimeState

state = RuntimeState.from_checkpoint(config, context={"from_checkpoint": True})
crewai_event_bus.set_runtime_state(state)
for entity in state.root:
if isinstance(entity, cls):
if entity.execution_context is not None:
apply_execution_context(entity.execution_context)
if entity.agent_executor is not None:
entity.agent_executor.agent = entity
entity.agent_executor._resuming = True
entity._restore_runtime(state)
return entity
raise ValueError(
f"No {cls.__name__} found in checkpoint: {config.restore_from}"
)

@classmethod
def fork(cls, config: CheckpointConfig, branch: str | None = None) -> Self:
"""Fork an Agent from a checkpoint, creating a new execution branch.

Args:
config: Checkpoint configuration with ``restore_from`` set.
branch: Branch label for the fork. Auto-generated if not provided.

Returns:
An Agent instance on the new branch. Call kickoff() to run.
"""
agent = cls.from_checkpoint(config)
state = crewai_event_bus._runtime_state
if state is None:
raise RuntimeError("Cannot fork: no runtime state on the event bus.")
state.fork(branch)
return agent

def _restore_runtime(self, state: RuntimeState) -> None:
"""Re-create runtime objects after restoring from a checkpoint.

Args:
state: The RuntimeState containing the event record.
"""
if self.agent_executor is not None:
self.agent_executor.agent = self
self.agent_executor._resuming = True
if self.checkpoint_kickoff_event_id is not None:
self._kickoff_event_id = self.checkpoint_kickoff_event_id
self._restore_event_scope(state)

def _restore_event_scope(self, state: RuntimeState) -> None:
"""Rebuild the event scope stack from the checkpoint's event record.

Args:
state: The RuntimeState containing the event record.
"""
stack: list[tuple[str, str]] = []
kickoff_id = self._kickoff_event_id
if kickoff_id:
stack.append((kickoff_id, "lite_agent_execution_started"))

restore_event_scope(tuple(stack))

last_event_id: str | None = None
max_seq = 0
for node in state.event_record.nodes.values():
seq = node.event.emission_sequence or 0
if seq > max_seq:
max_seq = seq
last_event_id = node.event.event_id
if last_event_id is not None:
set_last_event_id(last_event_id)
if max_seq > 0:
set_emission_counter(max_seq)

@model_validator(mode="before")
@classmethod
def process_model_config(cls, values: Any) -> dict[str, Any]:
@@ -383,7 +448,6 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
if isinstance(tool, BaseTool):
processed_tools.append(tool)
elif all(hasattr(tool, attr) for attr in required_attrs):
# Tool has the required attributes, create a Tool instance
processed_tools.append(Tool.from_langchain(tool))
else:
raise ValueError(
@@ -448,14 +512,12 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):

@model_validator(mode="after")
def validate_and_set_attributes(self) -> Self:
# Validate required fields
for field in ["role", "goal", "backstory"]:
if getattr(self, field) is None:
raise ValueError(
f"{field} must be provided either directly or through config"
)

# Set private attributes
self._logger = Logger(verbose=self.verbose)
if self.max_rpm and not self._rpm_controller:
self._rpm_controller = RPMController(
@@ -464,7 +526,6 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
if not self._token_process:
self._token_process = TokenProcess()

# Initialize security_config if not provided
if self.security_config is None:
self.security_config = SecurityConfig()

@@ -566,14 +627,11 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
"actions",
}

# Copy llm
existing_llm = shallow_copy(self.llm)
copied_knowledge = shallow_copy(self.knowledge)
copied_knowledge_storage = shallow_copy(self.knowledge_storage)
# Properly copy knowledge sources if they exist
existing_knowledge_sources = None
if self.knowledge_sources:
# Create a shared storage instance for all knowledge sources
shared_storage = (
self.knowledge_sources[0].storage if self.knowledge_sources else None
)
@@ -585,7 +643,6 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
if hasattr(source, "model_copy")
else shallow_copy(source)
)
# Ensure all copied sources use the same storage instance
copied_source.storage = shared_storage
existing_knowledge_sources.append(copied_source)


@@ -4,8 +4,6 @@ import re
from typing import Final


# crewai.agents.parser constants

FINAL_ANSWER_ACTION: Final[str] = "Final Answer:"
MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE: Final[str] = (
"I did it wrong. Invalid Format: I missed the 'Action:' after 'Thought:'. I will do right next, and don't use a tool I have already used.\n"

@@ -296,7 +296,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
Returns:
Final answer from the agent.
"""
# Check if model supports native function calling
use_native_tools = (
hasattr(self.llm, "supports_function_calling")
and callable(getattr(self.llm, "supports_function_calling", None))
@@ -307,7 +306,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
if use_native_tools:
return self._invoke_loop_native_tools()

# Fall back to ReAct text-based pattern
return self._invoke_loop_react()

def _invoke_loop_react(self) -> AgentFinish:
@@ -347,7 +345,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
executor_context=self,
verbose=self.agent.verbose,
)
# breakpoint()
if self.response_model is not None:
try:
if isinstance(answer, BaseModel):
@@ -365,7 +362,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
text=answer,
)
except ValidationError:
# If validation fails, convert BaseModel to JSON string for parsing
answer_str = (
answer.model_dump_json()
if isinstance(answer, BaseModel)
@@ -375,14 +371,12 @@ class CrewAgentExecutor(BaseAgentExecutor):
answer_str, self.use_stop_words
) # type: ignore[assignment]
else:
# When no response_model, answer should be a string
answer_str = str(answer) if not isinstance(answer, str) else answer
formatted_answer = process_llm_response(
answer_str, self.use_stop_words
) # type: ignore[assignment]

if isinstance(formatted_answer, AgentAction):
# Extract agent fingerprint if available
fingerprint_context = {}
if (
self.agent
@@ -426,7 +420,6 @@ class CrewAgentExecutor(BaseAgentExecutor):

except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
raise e
if is_context_length_exceeded(e):
handle_context_length(
@@ -443,10 +436,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
finally:
self.iterations += 1

# During the invoke loop, formatted_answer alternates between AgentAction
# (when the agent is using tools) and eventually becomes AgentFinish
# (when the agent reaches a final answer). This check confirms we've
# reached a final answer and helps type checking understand this transition.
if not isinstance(formatted_answer, AgentFinish):
raise RuntimeError(
"Agent execution ended without reaching a final answer. "
@@ -465,9 +454,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
Returns:
Final answer from the agent.
"""
# Convert tools to OpenAI schema format
if not self.original_tools:
# No tools available, fall back to simple LLM call
return self._invoke_loop_native_no_tools()

openai_tools, available_functions, self._tool_name_mapping = (
@@ -490,10 +477,6 @@ class CrewAgentExecutor(BaseAgentExecutor):

enforce_rpm_limit(self.request_within_rpm_limit)

# Call LLM with native tools
# Pass available_functions=None so the LLM returns tool_calls
# without executing them. The executor handles tool execution
# via _handle_native_tool_calls to properly manage message history.
answer = get_llm_response(
llm=cast("BaseLLM", self.llm),
messages=self.messages,
@@ -508,32 +491,26 @@ class CrewAgentExecutor(BaseAgentExecutor):
verbose=self.agent.verbose,
)

# Check if the response is a list of tool calls
if (
isinstance(answer, list)
and answer
and self._is_tool_call_list(answer)
):
# Handle tool calls - execute tools and add results to messages
tool_finish = self._handle_native_tool_calls(
answer, available_functions
)
# If tool has result_as_answer=True, return immediately
if tool_finish is not None:
return tool_finish
# Continue loop to let LLM analyze results and decide next steps
continue

# Text or other response - handle as potential final answer
if isinstance(answer, str):
# Text response - this is the final answer
formatted_answer = AgentFinish(
thought="",
output=answer,
text=answer,
)
self._invoke_step_callback(formatted_answer)
self._append_message(answer) # Save final answer to messages
self._append_message(answer)
self._show_logs(formatted_answer)
return formatted_answer

@@ -549,14 +526,13 @@ class CrewAgentExecutor(BaseAgentExecutor):
self._show_logs(formatted_answer)
return formatted_answer

# Unexpected response type, treat as final answer
formatted_answer = AgentFinish(
thought="",
output=str(answer),
text=str(answer),
)
self._invoke_step_callback(formatted_answer)
self._append_message(str(answer)) # Save final answer to messages
self._append_message(str(answer))
self._show_logs(formatted_answer)
return formatted_answer

@@ -627,12 +603,10 @@ class CrewAgentExecutor(BaseAgentExecutor):
if not response:
return False
first_item = response[0]
# OpenAI-style
if hasattr(first_item, "function") or (
isinstance(first_item, dict) and "function" in first_item
):
return True
# Anthropic-style (object with attributes)
if (
hasattr(first_item, "type")
and getattr(first_item, "type", None) == "tool_use"
@@ -640,14 +614,12 @@ class CrewAgentExecutor(BaseAgentExecutor):
return True
if hasattr(first_item, "name") and hasattr(first_item, "input"):
return True
# Bedrock-style (dict with name and input keys)
if (
isinstance(first_item, dict)
and "name" in first_item
and "input" in first_item
):
return True
# Gemini-style
if hasattr(first_item, "function_call") and first_item.function_call:
return True
return False
@@ -706,8 +678,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
for _, func_name, _ in parsed_calls
)

# Preserve historical sequential behavior for result_as_answer batches.
# Also avoid threading around usage counters for max_usage_count tools.
if has_result_as_answer_in_batch or has_max_usage_count_in_batch:
logger.debug(
"Skipping parallel native execution because batch includes result_as_answer or max_usage_count tool"
@@ -773,7 +743,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
self.messages.append(reasoning_message)
return None

# Sequential behavior: process only first tool call, then force reflection.
call_id, func_name, func_args = parsed_calls[0]
self._append_assistant_tool_calls_message([(call_id, func_name, func_args)])

@@ -827,7 +796,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
func_name = sanitize_tool_name(
func_info.get("name", "") or tool_call.get("name", "")
)
func_args = func_info.get("arguments", "{}") or tool_call.get("input", {})
func_args = func_info.get("arguments") or tool_call.get("input", {})
return call_id, func_name, func_args
return None

@@ -1202,7 +1171,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
text=answer,
)
except ValidationError:
# If validation fails, convert BaseModel to JSON string for parsing
answer_str = (
answer.model_dump_json()
if isinstance(answer, BaseModel)
@@ -1212,7 +1180,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
answer_str, self.use_stop_words
) # type: ignore[assignment]
else:
# When no response_model, answer should be a string
answer_str = str(answer) if not isinstance(answer, str) else answer
formatted_answer = process_llm_response(
answer_str, self.use_stop_words
@@ -1319,10 +1286,6 @@ class CrewAgentExecutor(BaseAgentExecutor):

enforce_rpm_limit(self.request_within_rpm_limit)

# Call LLM with native tools
# Pass available_functions=None so the LLM returns tool_calls
# without executing them. The executor handles tool execution
# via _handle_native_tool_calls to properly manage message history.
answer = await aget_llm_response(
llm=cast("BaseLLM", self.llm),
messages=self.messages,
@@ -1336,32 +1299,26 @@ class CrewAgentExecutor(BaseAgentExecutor):
executor_context=self,
verbose=self.agent.verbose,
)
# Check if the response is a list of tool calls
if (
isinstance(answer, list)
and answer
and self._is_tool_call_list(answer)
):
# Handle tool calls - execute tools and add results to messages
tool_finish = self._handle_native_tool_calls(
answer, available_functions
)
# If tool has result_as_answer=True, return immediately
if tool_finish is not None:
return tool_finish
# Continue loop to let LLM analyze results and decide next steps
continue

# Text or other response - handle as potential final answer
if isinstance(answer, str):
# Text response - this is the final answer
formatted_answer = AgentFinish(
thought="",
output=answer,
text=answer,
)
await self._ainvoke_step_callback(formatted_answer)
self._append_message(answer) # Save final answer to messages
self._append_message(answer)
self._show_logs(formatted_answer)
return formatted_answer

@@ -1377,14 +1334,13 @@ class CrewAgentExecutor(BaseAgentExecutor):
self._show_logs(formatted_answer)
return formatted_answer

# Unexpected response type, treat as final answer
formatted_answer = AgentFinish(
thought="",
output=str(answer),
text=str(answer),
)
await self._ainvoke_step_callback(formatted_answer)
self._append_message(str(answer)) # Save final answer to messages
self._append_message(str(answer))
self._show_logs(formatted_answer)
return formatted_answer

@@ -1455,7 +1411,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
Returns:
Updated action or final answer.
"""
# Special case for add_image_tool
add_image_tool = I18N_DEFAULT.tools("add_image")
if (
isinstance(add_image_tool, dict)
@@ -1575,17 +1530,14 @@ class CrewAgentExecutor(BaseAgentExecutor):
training_handler = CrewTrainingHandler(TRAINING_DATA_FILE)
training_data = training_handler.load() or {}

# Initialize or retrieve agent's training data
agent_training_data = training_data.get(agent_id, {})

if human_feedback is not None:
# Save initial output and human feedback
agent_training_data[train_iteration] = {
"initial_output": result.output,
"human_feedback": human_feedback,
}
else:
# Save improved output
if train_iteration in agent_training_data:
agent_training_data[train_iteration]["improved_output"] = result.output
else:
@@ -1599,7 +1551,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
)
return

# Update the training data and save
training_data[agent_id] = agent_training_data
training_handler.save(training_data)


@@ -94,11 +94,8 @@ def parse(text: str) -> AgentAction | AgentFinish:

if includes_answer:
final_answer = text.split(FINAL_ANSWER_ACTION)[-1].strip()
# Check whether the final answer ends with triple backticks.
if final_answer.endswith("```"):
# Count occurrences of triple backticks in the final answer.
count = final_answer.count("```")
# If count is odd then it's an unmatched trailing set; remove it.
if count % 2 != 0:
final_answer = final_answer[:-3].rstrip()
return AgentFinish(thought=thought, output=final_answer, text=text)
@@ -146,7 +143,6 @@ def _extract_thought(text: str) -> str:
if thought_index == -1:
return ""
thought = text[:thought_index].strip()
# Remove any triple backticks from the thought string
return thought.replace("```", "").strip()


@@ -171,18 +167,9 @@ def _safe_repair_json(tool_input: str) -> str:
Returns:
The repaired JSON string or original if repair fails.
"""
# Skip repair if the input starts and ends with square brackets
# Explanation: The JSON parser has issues handling inputs that are enclosed in square brackets ('[]').
# These are typically valid JSON arrays or strings that do not require repair. Attempting to repair such inputs
# might lead to unintended alterations, such as wrapping the entire input in additional layers or modifying
# the structure in a way that changes its meaning. By skipping the repair for inputs that start and end with
# square brackets, we preserve the integrity of these valid JSON structures and avoid unnecessary modifications.
if tool_input.startswith("[") and tool_input.endswith("]"):
return tool_input

# Before repair, handle common LLM issues:
# 1. Replace """ with " to avoid JSON parser errors

tool_input = tool_input.replace('"""', '"')

result = repair_json(tool_input)

@@ -83,10 +83,6 @@ class PlannerObserver:
return create_llm(config.llm)
return self.agent.llm

# ------------------------------------------------------------------
# Public API
# ------------------------------------------------------------------

def observe(
self,
completed_step: TodoItem,
@@ -182,9 +178,6 @@ class PlannerObserver:
),
)

# Don't force a full replan — the step may have succeeded even if the
# observer LLM failed to parse the result. Defaulting to "continue" is
# far less disruptive than wiping the entire plan on every observer error.
return StepObservation(
step_completed_successfully=True,
key_information_learned="",
@@ -221,10 +214,6 @@ class PlannerObserver:

return remaining_todos

# ------------------------------------------------------------------
# Internal: Message building
# ------------------------------------------------------------------

def _build_observation_messages(
self,
completed_step: TodoItem,
@@ -239,15 +228,11 @@ class PlannerObserver:
task_desc = self.task.description or ""
task_goal = self.task.expected_output or ""
elif self.kickoff_input:
# Standalone kickoff path — no Task object, but we have the raw input.
# Extract just the ## Task section so the observer sees the actual goal,
# not the full enriched instruction with env/tools/verification noise.
task_desc = extract_task_section(self.kickoff_input)
task_goal = "Complete the task successfully"

system_prompt = I18N_DEFAULT.retrieve("planning", "observation_system_prompt")

# Build context of what's been done
completed_summary = ""
if all_completed:
completed_lines = []
@@ -261,7 +246,6 @@ class PlannerObserver:
completed_lines
)

# Build remaining plan
remaining_summary = ""
if remaining_todos:
remaining_lines = [
@@ -306,17 +290,14 @@ class PlannerObserver:
if isinstance(response, StepObservation):
return response

# JSON string path — most common miss before this fix
if isinstance(response, str):
text = response.strip()
try:
return StepObservation.model_validate_json(text)
except Exception: # noqa: S110
pass
# Some LLMs wrap the JSON in markdown fences
if text.startswith("```"):
lines = text.split("\n")
# Strip first and last lines (``` markers)
inner = "\n".join(
lines[1:-1] if lines[-1].strip() == "```" else lines[1:]
)
@@ -325,14 +306,12 @@ class PlannerObserver:
except Exception: # noqa: S110
pass

# Dict path
if isinstance(response, dict):
try:
return StepObservation.model_validate(response)
except Exception: # noqa: S110
pass

# Last resort — log what we got so it's diagnosable
logger.warning(
"Could not parse observation response (type=%s). "
"Falling back to default failure observation. Preview: %.200s",

@@ -108,7 +108,6 @@ class StepExecutor:
self.request_within_rpm_limit = request_within_rpm_limit
self.callbacks = callbacks or []

# Native tool support — set up once
self._use_native_tools = check_native_tool_support(
self.llm, self.original_tools
)
@@ -121,10 +120,6 @@ class StepExecutor:
_,
) = setup_native_tools(self.original_tools)

# ------------------------------------------------------------------
# Public API
# ------------------------------------------------------------------

def execute(
self,
todo: TodoItem,
@@ -190,10 +185,6 @@ class StepExecutor:
execution_time=elapsed,
)

# ------------------------------------------------------------------
# Internal: Message building
# ------------------------------------------------------------------

def _build_isolated_messages(
self, todo: TodoItem, context: StepExecutionContext
) -> list[LLMMessage]:
@@ -237,10 +228,6 @@ class StepExecutor:
"""Build the user prompt for this specific step."""
parts: list[str] = []

# Include overall task context so the executor knows the full goal and
# required output format/location — critical for knowing WHAT to produce.
# We extract only the task body (not tool instructions or verification
# sections) to avoid duplicating directives already in the system prompt.
if context.task_description:
task_section = extract_task_section(context.task_description)
if task_section:
@@ -267,7 +254,6 @@ class StepExecutor:
)
)

# Include dependency results (final results only, no traces)
if context.dependency_results:
parts.append(
I18N_DEFAULT.retrieve("planning", "step_executor_context_header")
@@ -283,10 +269,6 @@ class StepExecutor:

return "\n".join(parts)

# ------------------------------------------------------------------
# Internal: Multi-turn execution loop
# ------------------------------------------------------------------

def _execute_text_parsed(
self,
messages: list[LLMMessage],
@@ -306,7 +288,6 @@ class StepExecutor:
last_tool_result = ""

for _ in range(max_step_iterations):
# Check step timeout
if step_timeout and start_time:
elapsed = time.monotonic() - start_time
if elapsed >= step_timeout:
@@ -331,17 +312,12 @@ class StepExecutor:
tool_calls_made.append(formatted.tool)
tool_result = self._execute_text_tool_with_events(formatted)
last_tool_result = tool_result
# Append the assistant's reasoning + action, then the observation.
# _build_observation_message handles vision sentinels so the LLM
# receives an image content block instead of raw base64 text.
messages.append({"role": "assistant", "content": answer_str})
messages.append(self._build_observation_message(tool_result))
continue

# Raw text response with no Final Answer marker — treat as done
return answer_str

# Max iterations reached — return the last tool result we accumulated
return last_tool_result

def _execute_text_tool_with_events(self, formatted: AgentAction) -> str:
@@ -429,10 +405,6 @@ class StepExecutor:
return {"input": stripped_input}
return {"input": str(tool_input)}

# ------------------------------------------------------------------
# Internal: Vision support
# ------------------------------------------------------------------

@staticmethod
def _parse_vision_sentinel(raw: str) -> tuple[str, str] | None:
"""Parse a VISION_IMAGE sentinel into (media_type, base64_data), or None."""
@@ -517,7 +489,6 @@ class StepExecutor:
accumulated_results: list[str] = []

for _ in range(max_step_iterations):
# Check step timeout
if step_timeout and start_time:
elapsed = time.monotonic() - start_time
if elapsed >= step_timeout:
@@ -541,19 +512,14 @@ class StepExecutor:
return answer.model_dump_json()

if isinstance(answer, list) and answer and is_tool_call_list(answer):
# _execute_native_tool_calls appends assistant + tool messages
# to `messages` as a side-effect, so the next LLM call will
# see the full conversation history including tool outputs.
result = self._execute_native_tool_calls(
answer, messages, tool_calls_made
)
accumulated_results.append(result)
continue

# Text answer → LLM decided the step is done
return str(answer)

# Max iterations reached — return everything we accumulated
return "\n".join(filter(None, accumulated_results))

def _execute_native_tool_calls(
@@ -599,9 +565,6 @@ class StepExecutor:
parsed = self._parse_vision_sentinel(raw_content)
if parsed:
media_type, b64_data = parsed
# Replace the sentinel with a standard image_url content block.
# Each provider's _format_messages handles conversion to
# its native format (e.g. Anthropic image blocks).
modified: LLMMessage = cast(
LLMMessage, dict(call_result.tool_message)
)

@@ -2,7 +2,7 @@
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from datetime import datetime
|
||||
from datetime import datetime, timedelta, timezone
|
||||
import glob
|
||||
import json
|
||||
import os
|
||||
@@ -37,6 +37,26 @@ ORDER BY rowid DESC
|
||||
LIMIT 1
|
||||
"""
|
||||
|
||||
_DELETE_OLDER_THAN = """
|
||||
DELETE FROM checkpoints
|
||||
WHERE created_at < ?
|
||||
"""
|
||||
|
||||
_DELETE_KEEP_N = """
|
||||
DELETE FROM checkpoints WHERE rowid NOT IN (
|
||||
SELECT rowid FROM checkpoints ORDER BY rowid DESC LIMIT ?
|
||||
)
|
||||
"""
|
||||
|
||||
_COUNT_CHECKPOINTS = "SELECT COUNT(*) FROM checkpoints"
|
||||
|
||||
_SELECT_LIKE = """
|
||||
SELECT id, created_at, json(data)
|
||||
FROM checkpoints
|
||||
WHERE id LIKE ?
|
||||
ORDER BY rowid DESC
|
||||
"""
|
||||
|
||||
|
||||
_DEFAULT_DIR = "./.checkpoints"
|
||||
_DEFAULT_DB = "./.checkpoints.db"
|
||||
@@ -86,17 +106,50 @@ def _parse_checkpoint_json(raw: str, source: str) -> dict[str, Any]:
                "name": entity.get("name"),
                "id": entity.get("id"),
            }

        raw_agents = entity.get("agents", [])
        agents_by_id: dict[str, dict[str, Any]] = {}
        parsed_agents: list[dict[str, Any]] = []
        for ag in raw_agents:
            agent_info: dict[str, Any] = {
                "id": ag.get("id", ""),
                "role": ag.get("role", ""),
                "goal": ag.get("goal", ""),
            }
            parsed_agents.append(agent_info)
            if ag.get("id"):
                agents_by_id[str(ag["id"])] = agent_info
        if parsed_agents:
            info["agents"] = parsed_agents

        if tasks:
            info["tasks_completed"] = completed
            info["tasks_total"] = len(tasks)
            info["tasks"] = [
                {
            parsed_tasks: list[dict[str, Any]] = []
            for t in tasks:
                task_info: dict[str, Any] = {
                    "description": t.get("description", ""),
                    "completed": t.get("output") is not None,
                    "output": (t.get("output") or {}).get("raw", ""),
                }
                for t in tasks
            ]
                task_agent = t.get("agent")
                if isinstance(task_agent, dict):
                    task_info["agent_role"] = task_agent.get("role", "")
                    task_info["agent_id"] = task_agent.get("id", "")
                elif isinstance(task_agent, str) and task_agent in agents_by_id:
                    task_info["agent_role"] = agents_by_id[task_agent].get("role", "")
                    task_info["agent_id"] = task_agent
                parsed_tasks.append(task_info)
            info["tasks"] = parsed_tasks

        if entity.get("entity_type") == "flow":
            completed_methods = entity.get("checkpoint_completed_methods")
            if completed_methods:
                info["completed_methods"] = sorted(completed_methods)
            state = entity.get("checkpoint_state")
            if isinstance(state, dict):
                info["flow_state"] = state

        parsed_entities.append(info)

    inputs: dict[str, Any] = {}
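When `task.agent` is serialized as a bare id string rather than a dict, the parser falls back to the `agents_by_id` index built above. A standalone sketch of that resolution step (the sample data is invented):

```python
raw_agents = [
    {"id": "a1", "role": "Researcher", "goal": "Find sources"},
    {"id": "a2", "role": "Writer", "goal": "Draft the report"},
]
# Index agents by id so tasks that reference an agent by id string resolve.
agents_by_id = {ag["id"]: ag for ag in raw_agents if ag.get("id")}

task = {"description": "Write summary", "agent": "a2"}
task_info = {"description": task["description"]}
task_agent = task.get("agent")
if isinstance(task_agent, dict):
    task_info["agent_role"] = task_agent.get("role", "")
    task_info["agent_id"] = task_agent.get("id", "")
elif isinstance(task_agent, str) and task_agent in agents_by_id:
    task_info["agent_role"] = agents_by_id[task_agent].get("role", "")
    task_info["agent_id"] = task_agent

print(task_info["agent_role"])  # → Writer
```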
@@ -173,9 +226,11 @@ def _entity_summary(entities: list[dict[str, Any]]) -> str:


def _list_json(location: str) -> list[dict[str, Any]]:
    pattern = os.path.join(location, "*.json")
    pattern = os.path.join(location, "**", "*.json")
    results = []
    for path in sorted(glob.glob(pattern), key=os.path.getmtime, reverse=True):
    for path in sorted(
        glob.glob(pattern, recursive=True), key=os.path.getmtime, reverse=True
    ):
        name = os.path.basename(path)
        try:
            with open(path) as f:
@@ -192,8 +247,10 @@ def _list_json(location: str) -> list[dict[str, Any]]:


def _info_json_latest(location: str) -> dict[str, Any] | None:
    pattern = os.path.join(location, "*.json")
    files = sorted(glob.glob(pattern), key=os.path.getmtime, reverse=True)
    pattern = os.path.join(location, "**", "*.json")
    files = sorted(
        glob.glob(pattern, recursive=True), key=os.path.getmtime, reverse=True
    )
    if not files:
        return None
    path = files[0]
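The hunks above switch from a flat `*.json` glob to a recursive `**/*.json` glob, so checkpoints written into subdirectories are also found. A quick illustration with a throwaway directory tree:

```python
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as loc:
    os.makedirs(os.path.join(loc, "nested"))
    for rel in ("top.json", os.path.join("nested", "deep.json")):
        with open(os.path.join(loc, rel), "w") as f:
            f.write("{}")

    # The flat pattern misses nested files; the recursive one finds both
    # ("**" also matches zero directories, so top-level files still match).
    flat = glob.glob(os.path.join(loc, "*.json"))
    recursive = glob.glob(os.path.join(loc, "**", "*.json"), recursive=True)
    print(len(flat), len(recursive))  # → 1 2
```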
@@ -258,6 +315,8 @@ def _info_sqlite_latest(db_path: str) -> dict[str, Any] | None:
def _info_sqlite_id(db_path: str, checkpoint_id: str) -> dict[str, Any] | None:
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(_SELECT_ONE, (checkpoint_id,)).fetchone()
        if not row:
            row = conn.execute(_SELECT_LIKE, (f"%{checkpoint_id}%",)).fetchone()
        if not row:
            return None
        cid, created_at, raw = row
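`_info_sqlite_id` now falls back to a `LIKE '%<id>%'` substring match when the exact lookup misses, so a truncated or partial id still resolves. A self-contained sketch of that two-step lookup (the schema and queries are simplified stand-ins for `_SELECT_ONE`/`_SELECT_LIKE`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (id TEXT, created_at TEXT, data TEXT)")
conn.execute(
    "INSERT INTO checkpoints VALUES ('crew-20240101T000000-abcd', '20240101T000000', '{}')"
)

def lookup(checkpoint_id: str):
    # Exact match first, then substring fallback, mirroring the CLI helper.
    row = conn.execute(
        "SELECT id FROM checkpoints WHERE id = ?", (checkpoint_id,)
    ).fetchone()
    if not row:
        row = conn.execute(
            "SELECT id FROM checkpoints WHERE id LIKE ?", (f"%{checkpoint_id}%",)
        ).fetchone()
    return row

print(lookup("abcd"))  # partial id resolves via the LIKE fallback
```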
@@ -380,3 +439,294 @@ def _print_info(meta: dict[str, Any]) -> None:
        if len(desc) > 70:
            desc = desc[:67] + "..."
        click.echo(f" {i + 1}. [{status}] {desc}")


def _resolve_checkpoint(
    location: str, checkpoint_id: str | None
) -> dict[str, Any] | None:
    if _is_sqlite(location):
        if checkpoint_id:
            return _info_sqlite_id(location, checkpoint_id)
        return _info_sqlite_latest(location)
    if os.path.isdir(location):
        if checkpoint_id:
            from crewai.state.provider.json_provider import JsonProvider

            _json_provider: JsonProvider = JsonProvider()
            pattern: str = os.path.join(location, "**", "*.json")
            all_files: list[str] = glob.glob(pattern, recursive=True)
            matches: list[str] = [
                f for f in all_files if checkpoint_id in _json_provider.extract_id(f)
            ]
            matches.sort(key=os.path.getmtime, reverse=True)
            if matches:
                return _info_json_file(matches[0])
            return None
        return _info_json_latest(location)
    if os.path.isfile(location):
        return _info_json_file(location)
    return None


def _entity_type_from_meta(meta: dict[str, Any]) -> str:
    for ent in meta.get("entities", []):
        if ent.get("type") == "flow":
            return "flow"
        if ent.get("type") == "agent":
            return "agent"
    return "crew"

def resume_checkpoint(location: str, checkpoint_id: str | None) -> None:
    import asyncio

    meta: dict[str, Any] | None = _resolve_checkpoint(location, checkpoint_id)
    if meta is None:
        if checkpoint_id:
            click.echo(f"Checkpoint not found: {checkpoint_id}")
        else:
            click.echo(f"No checkpoints found in {location}")
        return

    restore_path: str = meta.get("path") or meta.get("source", "")
    if meta.get("db"):
        restore_path = f"{meta['db']}#{meta['name']}"

    click.echo(f"Resuming from: {meta.get('name', restore_path)}")
    _print_info(meta)
    click.echo()

    from crewai.state.checkpoint_config import CheckpointConfig

    config: CheckpointConfig = CheckpointConfig(restore_from=restore_path)
    entity_type: str = _entity_type_from_meta(meta)
    inputs: dict[str, Any] | None = meta.get("inputs") or None

    if entity_type == "flow":
        from crewai.flow.flow import Flow

        flow = Flow.from_checkpoint(config)
        result = asyncio.run(flow.kickoff_async(inputs=inputs))
    elif entity_type == "agent":
        from crewai.agent import Agent

        agent = Agent.from_checkpoint(config)
        result = asyncio.run(agent.akickoff(messages="Resume execution."))
    else:
        from crewai.crew import Crew

        crew = Crew.from_checkpoint(config)
        result = asyncio.run(crew.akickoff(inputs=inputs))

    click.echo(f"\nResult: {getattr(result, 'raw', result)}")


def _task_list_from_meta(meta: dict[str, Any]) -> list[dict[str, Any]]:
    tasks: list[dict[str, Any]] = []
    for ent in meta.get("entities", []):
        tasks.extend(
            {
                "entity": ent.get("name", "unnamed"),
                "description": t.get("description", ""),
                "completed": t.get("completed", False),
                "output": t.get("output", ""),
            }
            for t in ent.get("tasks", [])
        )
    return tasks

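`_task_list_from_meta` flattens the per-entity task lists into one list, tagging each task with its owning entity's name so the diff output can attribute tasks. A standalone run over invented meta:

```python
from typing import Any

def _task_list_from_meta(meta: dict[str, Any]) -> list[dict[str, Any]]:
    # Flatten entity → tasks into a single list, keeping the entity name.
    tasks: list[dict[str, Any]] = []
    for ent in meta.get("entities", []):
        tasks.extend(
            {
                "entity": ent.get("name", "unnamed"),
                "description": t.get("description", ""),
                "completed": t.get("completed", False),
                "output": t.get("output", ""),
            }
            for t in ent.get("tasks", [])
        )
    return tasks

meta = {
    "entities": [
        {"name": "research_crew", "tasks": [{"description": "gather", "completed": True}]},
        {"name": "writing_crew", "tasks": [{"description": "draft"}]},
    ]
}
flat = _task_list_from_meta(meta)
print([t["entity"] for t in flat])  # → ['research_crew', 'writing_crew']
```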
def diff_checkpoints(location: str, id1: str, id2: str) -> None:
    meta1: dict[str, Any] | None = _resolve_checkpoint(location, id1)
    meta2: dict[str, Any] | None = _resolve_checkpoint(location, id2)

    if meta1 is None:
        click.echo(f"Checkpoint not found: {id1}")
        return
    if meta2 is None:
        click.echo(f"Checkpoint not found: {id2}")
        return

    name1: str = meta1.get("name", id1)
    name2: str = meta2.get("name", id2)

    click.echo(f"--- {name1}")
    click.echo(f"+++ {name2}")
    click.echo()

    fields: list[tuple[str, str]] = [
        ("Time", "ts"),
        ("Branch", "branch"),
        ("Trigger", "trigger"),
        ("Events", "event_count"),
    ]
    for label, key in fields:
        v1: str = str(meta1.get(key, ""))
        v2: str = str(meta2.get(key, ""))
        if v1 != v2:
            click.echo(f" {label}:")
            click.echo(f" - {v1}")
            click.echo(f" + {v2}")

    inputs1: dict[str, Any] = meta1.get("inputs", {})
    inputs2: dict[str, Any] = meta2.get("inputs", {})
    all_keys: list[str] = sorted(set(list(inputs1.keys()) + list(inputs2.keys())))
    changed_inputs: list[tuple[str, Any, Any]] = [
        (k, inputs1.get(k, ""), inputs2.get(k, ""))
        for k in all_keys
        if inputs1.get(k) != inputs2.get(k)
    ]
    if changed_inputs:
        click.echo("\n Inputs:")
        for key, v1, v2 in changed_inputs:
            click.echo(f" {key}:")
            click.echo(f" - {v1}")
            click.echo(f" + {v2}")

    tasks1: list[dict[str, Any]] = _task_list_from_meta(meta1)
    tasks2: list[dict[str, Any]] = _task_list_from_meta(meta2)

    max_tasks: int = max(len(tasks1), len(tasks2))
    if max_tasks == 0:
        return

    click.echo("\n Tasks:")
    for i in range(max_tasks):
        t1: dict[str, Any] | None = tasks1[i] if i < len(tasks1) else None
        t2: dict[str, Any] | None = tasks2[i] if i < len(tasks2) else None

        if t1 is None:
            desc: str = t2["description"][:60] if t2 else ""
            click.echo(f" + {i + 1}. [new] {desc}")
            continue
        if t2 is None:
            desc = t1["description"][:60]
            click.echo(f" - {i + 1}. [removed] {desc}")
            continue

        desc = str(t1["description"][:60])
        s1: str = "done" if t1["completed"] else "pending"
        s2: str = "done" if t2["completed"] else "pending"

        if s1 != s2:
            click.echo(f" {i + 1}. {desc}")
            click.echo(f" status: {s1} -> {s2}")

        out1: str = (t1.get("output") or "").strip()
        out2: str = (t2.get("output") or "").strip()
        if out1 != out2:
            if s1 == s2:
                click.echo(f" {i + 1}. {desc}")
            preview1: str = (
                out1[:80] + ("..." if len(out1) > 80 else "") if out1 else "(empty)"
            )
            preview2: str = (
                out2[:80] + ("..." if len(out2) > 80 else "") if out2 else "(empty)"
            )
            click.echo(" output:")
            click.echo(f" - {preview1}")
            click.echo(f" + {preview2}")

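The input comparison in `diff_checkpoints` takes the union of keys from both checkpoints and keeps only those whose values differ. The same comprehension on invented data:

```python
inputs1 = {"topic": "AI agents", "depth": "shallow"}
inputs2 = {"topic": "AI agents", "depth": "deep", "audience": "devs"}

# Union of keys from both sides, so keys added or removed also show up.
all_keys = sorted(set(list(inputs1.keys()) + list(inputs2.keys())))
changed_inputs = [
    (k, inputs1.get(k, ""), inputs2.get(k, ""))
    for k in all_keys
    if inputs1.get(k) != inputs2.get(k)
]
print(changed_inputs)  # → [('audience', '', 'devs'), ('depth', 'shallow', 'deep')]
```

Unchanged keys (`topic` here) are filtered out; a key present on only one side shows up with an empty string on the other.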
def _parse_duration(value: str) -> timedelta:
    match: re.Match[str] | None = re.match(r"^(\d+)([dhm])$", value.strip())
    if not match:
        raise click.BadParameter(
            f"Invalid duration: {value!r}. Use format like '7d', '24h', or '30m'."
        )
    amount: int = int(match.group(1))
    unit: str = match.group(2)
    if unit == "d":
        return timedelta(days=amount)
    if unit == "h":
        return timedelta(hours=amount)
    return timedelta(minutes=amount)

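`_parse_duration` accepts the `<number><unit>` shorthand used by `--older-than`. A minimal re-statement with `ValueError` standing in for `click.BadParameter`, so it runs without click installed:

```python
from datetime import timedelta
import re

def parse_duration(value: str) -> timedelta:
    # Accepts e.g. "7d", "24h", "30m"; anything else is rejected.
    match = re.match(r"^(\d+)([dhm])$", value.strip())
    if not match:
        raise ValueError(f"Invalid duration: {value!r}. Use '7d', '24h', or '30m'.")
    amount, unit = int(match.group(1)), match.group(2)
    if unit == "d":
        return timedelta(days=amount)
    if unit == "h":
        return timedelta(hours=amount)
    return timedelta(minutes=amount)

print(parse_duration("7d"))  # → 7 days, 0:00:00
```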

def _prune_json(location: str, keep: int | None, older_than: timedelta | None) -> int:
    pattern: str = os.path.join(location, "**", "*.json")
    files: list[str] = sorted(
        glob.glob(pattern, recursive=True), key=os.path.getmtime, reverse=True
    )
    if not files:
        return 0

    to_delete: set[str] = set()

    if keep is not None and len(files) > keep:
        to_delete.update(files[keep:])

    if older_than is not None:
        cutoff: datetime = datetime.now(timezone.utc) - older_than
        for path in files:
            mtime: datetime = datetime.fromtimestamp(
                os.path.getmtime(path), tz=timezone.utc
            )
            if mtime < cutoff:
                to_delete.add(path)

    deleted: int = 0
    for path in to_delete:
        try:
            os.remove(path)
            deleted += 1
        except OSError:  # noqa: PERF203
            pass

    for dirpath, dirnames, filenames in os.walk(location, topdown=False):
        if dirpath != location and not filenames and not dirnames:
            try:
                os.rmdir(dirpath)
            except OSError:
                pass

    return deleted

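`_prune_json` unions two selection rules into one `to_delete` set: everything past the `keep` newest files, plus everything older than the cutoff. The selection logic in isolation (the file names and ages are invented):

```python
from datetime import datetime, timedelta, timezone

# Newest-first file list paired with fake modification times (one day apart).
now = datetime.now(timezone.utc)
files = ["cp4.json", "cp3.json", "cp2.json", "cp1.json", "cp0.json"]
mtimes = {name: now - timedelta(days=i) for i, name in enumerate(files)}

keep = 3
older_than = timedelta(hours=36)

to_delete: set[str] = set()
if keep is not None and len(files) > keep:
    to_delete.update(files[keep:])  # beyond the N newest
cutoff = now - older_than
to_delete.update(p for p in files if mtimes[p] < cutoff)  # older than cutoff

print(sorted(to_delete))  # → ['cp0.json', 'cp1.json', 'cp2.json']
```

Because the rules union, a file survives only if it is both recent enough and within the keep window.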
def _prune_sqlite(db_path: str, keep: int | None, older_than: timedelta | None) -> int:
    deleted: int = 0
    with sqlite3.connect(db_path) as conn:
        if older_than is not None:
            cutoff: str = (datetime.now(timezone.utc) - older_than).strftime(
                "%Y%m%dT%H%M%S"
            )
            cursor: sqlite3.Cursor = conn.execute(_DELETE_OLDER_THAN, (cutoff,))
            deleted += cursor.rowcount

        if keep is not None:
            cursor = conn.execute(_DELETE_KEEP_N, (keep,))
            deleted += cursor.rowcount

        conn.commit()
    return deleted

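`_prune_sqlite` compares `created_at` strings against a `%Y%m%dT%H%M%S` cutoff. That only works because this timestamp format sorts lexicographically in chronological order (an assumption about how the SQLite provider stores `created_at`, which this diff does not show):

```python
from datetime import datetime, timedelta, timezone

fmt = "%Y%m%dT%H%M%S"
base = datetime(2024, 1, 1, tzinfo=timezone.utc)
older = base.strftime(fmt)
cutoff = (base + timedelta(days=7)).strftime(fmt)

# Fixed-width, zero-padded, most-significant-unit-first:
# string order coincides with time order, so `created_at < ?` is safe.
print(older, cutoff, older < cutoff)  # → 20240101T000000 20240108T000000 True
```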
def prune_checkpoints(
    location: str, keep: int | None, older_than: str | None, dry_run: bool = False
) -> None:
    if keep is None and older_than is None:
        click.echo("Specify --keep N and/or --older-than DURATION (e.g. 7d, 24h)")
        return

    duration: timedelta | None = _parse_duration(older_than) if older_than else None

    deleted: int
    if _is_sqlite(location):
        if dry_run:
            with sqlite3.connect(location) as conn:
                total: int = conn.execute(_COUNT_CHECKPOINTS).fetchone()[0]
            click.echo(f"Would prune from {total} checkpoint(s) in {location}")
            return
        deleted = _prune_sqlite(location, keep, duration)
    elif os.path.isdir(location):
        if dry_run:
            files: list[str] = glob.glob(
                os.path.join(location, "**", "*.json"), recursive=True
            )
            click.echo(f"Would prune from {len(files)} checkpoint(s) in {location}")
            return
        deleted = _prune_json(location, keep, duration)
    else:
        click.echo(f"Not a directory or SQLite database: {location}")
        return
    click.echo(f"Pruned {deleted} checkpoint(s) from {location}")

File diff suppressed because it is too large
@@ -18,6 +18,7 @@ from crewai.cli.install_crew import install_crew
from crewai.cli.kickoff_flow import kickoff_flow
from crewai.cli.organization.main import OrganizationCommand
from crewai.cli.plot_flow import plot_flow
from crewai.cli.remote_template.main import TemplateCommand
from crewai.cli.replay_from_task import replay_task_command
from crewai.cli.reset_memories_command import reset_memories_command
from crewai.cli.run_crew import run_crew
@@ -392,10 +393,15 @@ def deploy() -> None:

@deploy.command(name="create")
@click.option("-y", "--yes", is_flag=True, help="Skip the confirmation prompt")
def deploy_create(yes: bool) -> None:
@click.option(
    "--skip-validate",
    is_flag=True,
    help="Skip the pre-deploy validation checks.",
)
def deploy_create(yes: bool, skip_validate: bool) -> None:
    """Create a Crew deployment."""
    deploy_cmd = DeployCommand()
    deploy_cmd.create_crew(yes)
    deploy_cmd.create_crew(yes, skip_validate=skip_validate)


@deploy.command(name="list")
@@ -407,10 +413,28 @@ def deploy_list() -> None:

@deploy.command(name="push")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_push(uuid: str | None) -> None:
@click.option(
    "--skip-validate",
    is_flag=True,
    help="Skip the pre-deploy validation checks.",
)
def deploy_push(uuid: str | None, skip_validate: bool) -> None:
    """Deploy the Crew."""
    deploy_cmd = DeployCommand()
    deploy_cmd.deploy(uuid=uuid)
    deploy_cmd.deploy(uuid=uuid, skip_validate=skip_validate)


@deploy.command(name="validate")
def deploy_validate() -> None:
    """Validate the current project against common deployment failures.

    Runs the same pre-deploy checks that `crewai deploy create` and
    `crewai deploy push` run automatically, without contacting the platform.
    Exits non-zero if any blocking issues are found.
    """
    from crewai.cli.deploy.validate import run_validate_command

    run_validate_command()


@deploy.command(name="status")
@@ -473,6 +497,33 @@ def tool_publish(is_public: bool, force: bool) -> None:
    tool_cmd.publish(is_public, force)


@crewai.group()
def template() -> None:
    """Browse and install project templates."""


@template.command(name="list")
def template_list() -> None:
    """List available templates and select one to install."""
    template_cmd = TemplateCommand()
    template_cmd.list_templates()


@template.command(name="add")
@click.argument("name")
@click.option(
    "-o",
    "--output-dir",
    type=str,
    default=None,
    help="Directory name for the template (defaults to template name)",
)
def template_add(name: str, output_dir: str | None) -> None:
    """Add a template to the current directory."""
    template_cmd = TemplateCommand()
    template_cmd.add_template(name, output_dir)


@crewai.group()
def flow() -> None:
    """Flow related commands."""
@@ -822,5 +873,48 @@ def checkpoint_info(path: str) -> None:
    info_checkpoint(_detect_location(path))


@checkpoint.command("resume")
@click.argument("checkpoint_id", required=False, default=None)
@click.pass_context
def checkpoint_resume(ctx: click.Context, checkpoint_id: str | None) -> None:
    """Resume from a checkpoint. Defaults to the most recent."""
    from crewai.cli.checkpoint_cli import resume_checkpoint

    resume_checkpoint(ctx.obj["location"], checkpoint_id)


@checkpoint.command("diff")
@click.argument("id1")
@click.argument("id2")
@click.pass_context
def checkpoint_diff(ctx: click.Context, id1: str, id2: str) -> None:
    """Compare two checkpoints side-by-side."""
    from crewai.cli.checkpoint_cli import diff_checkpoints

    diff_checkpoints(ctx.obj["location"], id1, id2)


@checkpoint.command("prune")
@click.option(
    "--keep", type=int, default=None, help="Keep the N most recent checkpoints."
)
@click.option(
    "--older-than",
    default=None,
    help="Remove checkpoints older than duration (e.g. 7d, 24h, 30m).",
)
@click.option(
    "--dry-run", is_flag=True, help="Show what would be pruned without deleting."
)
@click.pass_context
def checkpoint_prune(
    ctx: click.Context, keep: int | None, older_than: str | None, dry_run: bool
) -> None:
    """Remove old checkpoints."""
    from crewai.cli.checkpoint_cli import prune_checkpoints

    prune_checkpoints(ctx.obj["location"], keep, older_than, dry_run)


if __name__ == "__main__":
    crewai()

@@ -4,12 +4,35 @@ from rich.console import Console

from crewai.cli import git
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli.deploy.validate import validate_project
from crewai.cli.utils import fetch_and_json_env_file, get_project_name


console = Console()


def _run_predeploy_validation(skip_validate: bool) -> bool:
    """Run pre-deploy validation unless skipped.

    Returns True if deployment should proceed, False if it should abort.
    """
    if skip_validate:
        console.print(
            "[yellow]Skipping pre-deploy validation (--skip-validate).[/yellow]"
        )
        return True

    console.print("Running pre-deploy validation...", style="bold blue")
    validator = validate_project()
    if not validator.ok:
        console.print(
            "\n[bold red]Pre-deploy validation failed. "
            "Fix the issues above or re-run with --skip-validate.[/bold red]"
        )
        return False
    return True


class DeployCommand(BaseCommand, PlusAPIMixin):
    """
    A class to handle deployment-related operations for CrewAI projects.
@@ -60,13 +83,16 @@ class DeployCommand(BaseCommand, PlusAPIMixin):
            f"{log_message['timestamp']} - {log_message['level']}: {log_message['message']}"
        )

    def deploy(self, uuid: str | None = None) -> None:
    def deploy(self, uuid: str | None = None, skip_validate: bool = False) -> None:
        """
        Deploy a crew using either UUID or project name.

        Args:
            uuid (Optional[str]): The UUID of the crew to deploy.
            skip_validate (bool): Skip pre-deploy validation checks.
        """
        if not _run_predeploy_validation(skip_validate):
            return
        self._telemetry.start_deployment_span(uuid)
        console.print("Starting deployment...", style="bold blue")
        if uuid:
@@ -80,10 +106,16 @@ class DeployCommand(BaseCommand, PlusAPIMixin):
        self._validate_response(response)
        self._display_deployment_info(response.json())

    def create_crew(self, confirm: bool = False) -> None:
    def create_crew(self, confirm: bool = False, skip_validate: bool = False) -> None:
        """
        Create a new crew deployment.

        Args:
            confirm (bool): Whether to skip the interactive confirmation prompt.
            skip_validate (bool): Skip pre-deploy validation checks.
        """
        if not _run_predeploy_validation(skip_validate):
            return
        self._telemetry.create_crew_deployment_span()
        console.print("Creating deployment...", style="bold blue")
        env_vars = fetch_and_json_env_file()

lib/crewai/src/crewai/cli/deploy/validate.py (new file, 845 lines)
@@ -0,0 +1,845 @@
"""Pre-deploy validation for CrewAI projects.

Catches locally what a deploy would reject at build or runtime so users
don't burn deployment attempts on fixable project-structure problems.

Each check is grouped into one of:
- ERROR: will block a deployment; validator exits non-zero.
- WARNING: may still deploy but is almost always a deployment bug; printed
  but does not block.

The individual checks mirror the categories observed in production
deployment-failure logs:

1. pyproject.toml present with ``[project].name``
2. lockfile (``uv.lock`` or ``poetry.lock``) present and not stale
3. package directory at ``src/<package>/`` exists (no empty name, no egg-info)
4. standard crew files: ``crew.py``, ``config/agents.yaml``, ``config/tasks.yaml``
5. flow entrypoint: ``main.py`` with a Flow subclass
6. hatch wheel target resolves (packages = [...] or default dir matches name)
7. crew/flow module imports cleanly (catches ``@CrewBase not found``,
   ``No Flow subclass found``, provider import errors)
8. environment variables referenced in code vs ``.env`` / deployment env
9. installed crewai vs lockfile pin (catches missing-attribute failures from
   stale pins)
"""

from __future__ import annotations

from dataclasses import dataclass
from enum import Enum
import json
import logging
import os
from pathlib import Path
import re
import shutil
import subprocess
import sys
from typing import Any

from rich.console import Console

from crewai.cli.utils import parse_toml


console = Console()
logger = logging.getLogger(__name__)


class Severity(str, Enum):
    """Severity of a validation finding."""

    ERROR = "error"
    WARNING = "warning"


@dataclass
class ValidationResult:
    """A single finding from a validation check.

    Attributes:
        severity: whether this blocks deploy or is advisory.
        code: stable short identifier, used in tests and docs
            (e.g. ``missing_pyproject``, ``stale_lockfile``).
        title: one-line summary shown to the user.
        detail: optional multi-line explanation.
        hint: optional remediation suggestion.
    """

    severity: Severity
    code: str
    title: str
    detail: str = ""
    hint: str = ""

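`ValidationResult` items accumulate on the validator and are later split by severity through the `errors`/`warnings` properties, with `ok` meaning "no blocking errors". A standalone sketch of that filtering (the enum and dataclass are re-declared here so the snippet runs on its own):

```python
from dataclasses import dataclass
from enum import Enum

class Severity(str, Enum):
    ERROR = "error"
    WARNING = "warning"

@dataclass
class ValidationResult:
    severity: Severity
    code: str
    title: str
    detail: str = ""
    hint: str = ""

results = [
    ValidationResult(Severity.ERROR, "missing_pyproject", "Cannot find pyproject.toml"),
    ValidationResult(Severity.WARNING, "stale_lockfile", "uv.lock is older than pyproject.toml"),
]
# Mirrors the validator's errors/warnings/ok properties.
errors = [r for r in results if r.severity is Severity.ERROR]
warnings = [r for r in results if r.severity is Severity.WARNING]
ok = not errors  # a project "passes" when there are no blocking errors
print(len(errors), len(warnings), ok)  # → 1 1 False
```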
# Maps known provider env var names → label used in hint messages.
_KNOWN_API_KEY_HINTS: dict[str, str] = {
    "OPENAI_API_KEY": "OpenAI",
    "ANTHROPIC_API_KEY": "Anthropic",
    "GOOGLE_API_KEY": "Google",
    "GEMINI_API_KEY": "Gemini",
    "AZURE_OPENAI_API_KEY": "Azure OpenAI",
    "AZURE_API_KEY": "Azure",
    "AWS_ACCESS_KEY_ID": "AWS",
    "AWS_SECRET_ACCESS_KEY": "AWS",
    "COHERE_API_KEY": "Cohere",
    "GROQ_API_KEY": "Groq",
    "MISTRAL_API_KEY": "Mistral",
    "TAVILY_API_KEY": "Tavily",
    "SERPER_API_KEY": "Serper",
    "SERPLY_API_KEY": "Serply",
    "PERPLEXITY_API_KEY": "Perplexity",
    "DEEPSEEK_API_KEY": "DeepSeek",
    "OPENROUTER_API_KEY": "OpenRouter",
    "FIRECRAWL_API_KEY": "Firecrawl",
    "EXA_API_KEY": "Exa",
    "BROWSERBASE_API_KEY": "Browserbase",
}


def normalize_package_name(project_name: str) -> str:
    """Normalize a pyproject project.name into a Python package directory name.

    Mirrors the rules in ``crewai.cli.create_crew.create_crew`` so the
    validator agrees with the scaffolder about where ``src/<pkg>/`` should
    live.
    """
    folder = project_name.replace(" ", "_").replace("-", "_").lower()
    return re.sub(r"[^a-zA-Z0-9_]", "", folder)

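`normalize_package_name` is the same transformation the scaffolder applies, so a couple of worked examples show where the validator expects `src/<pkg>/` to live (function body copied from above; the sample project names are invented):

```python
import re

def normalize_package_name(project_name: str) -> str:
    # Spaces and hyphens become underscores, lowercase, then any remaining
    # non-alphanumeric characters are dropped.
    folder = project_name.replace(" ", "_").replace("-", "_").lower()
    return re.sub(r"[^a-zA-Z0-9_]", "", folder)

print(normalize_package_name("My Crew-Project!"))       # → my_crew_project
print(normalize_package_name("latest-ai-development"))  # → latest_ai_development
```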
class DeployValidator:
|
||||
"""Runs the full pre-deploy validation suite against a project directory."""
|
||||
|
||||
def __init__(self, project_root: Path | None = None) -> None:
|
||||
self.project_root: Path = (project_root or Path.cwd()).resolve()
|
||||
self.results: list[ValidationResult] = []
|
||||
self._pyproject: dict[str, Any] | None = None
|
||||
self._project_name: str | None = None
|
||||
self._package_name: str | None = None
|
||||
self._package_dir: Path | None = None
|
||||
self._is_flow: bool = False
|
||||
|
||||
def _add(
|
||||
self,
|
||||
severity: Severity,
|
||||
code: str,
|
||||
title: str,
|
||||
detail: str = "",
|
||||
hint: str = "",
|
||||
) -> None:
|
||||
self.results.append(
|
||||
ValidationResult(
|
||||
severity=severity,
|
||||
code=code,
|
||||
title=title,
|
||||
detail=detail,
|
||||
hint=hint,
|
||||
)
|
||||
)
|
||||
|
||||
@property
|
||||
def errors(self) -> list[ValidationResult]:
|
||||
return [r for r in self.results if r.severity is Severity.ERROR]
|
||||
|
||||
@property
|
||||
def warnings(self) -> list[ValidationResult]:
|
||||
return [r for r in self.results if r.severity is Severity.WARNING]
|
||||
|
||||
@property
|
||||
def ok(self) -> bool:
|
||||
return not self.errors
|
||||
|
||||
def run(self) -> list[ValidationResult]:
|
||||
"""Run all checks. Later checks are skipped when earlier ones make
|
||||
them impossible (e.g. no pyproject.toml → no lockfile check)."""
|
||||
if not self._check_pyproject():
|
||||
return self.results
|
||||
|
||||
self._check_lockfile()
|
||||
|
||||
if not self._check_package_dir():
|
||||
self._check_hatch_wheel_target()
|
||||
return self.results
|
||||
|
||||
if self._is_flow:
|
||||
self._check_flow_entrypoint()
|
||||
else:
|
||||
self._check_crew_entrypoint()
|
||||
self._check_config_yamls()
|
||||
|
||||
self._check_hatch_wheel_target()
|
||||
self._check_module_imports()
|
||||
self._check_env_vars()
|
||||
self._check_version_vs_lockfile()
|
||||
|
||||
return self.results
|
||||
|
||||
def _check_pyproject(self) -> bool:
|
||||
pyproject_path = self.project_root / "pyproject.toml"
|
||||
if not pyproject_path.exists():
|
||||
self._add(
|
||||
Severity.ERROR,
|
||||
"missing_pyproject",
|
||||
"Cannot find pyproject.toml",
|
||||
detail=(
|
||||
f"Expected pyproject.toml at {pyproject_path}. "
|
||||
"CrewAI projects must be installable Python packages."
|
||||
),
|
||||
hint="Run `crewai create crew <name>` to scaffold a valid project layout.",
|
||||
)
|
||||
return False
|
||||
|
||||
try:
|
||||
self._pyproject = parse_toml(pyproject_path.read_text())
|
||||
except Exception as e:
|
||||
self._add(
|
||||
Severity.ERROR,
|
||||
"invalid_pyproject",
|
||||
"pyproject.toml is not valid TOML",
|
||||
detail=str(e),
|
||||
)
|
||||
return False
|
||||
|
||||
project = self._pyproject.get("project") or {}
|
||||
name = project.get("name")
|
||||
if not isinstance(name, str) or not name.strip():
|
||||
self._add(
|
||||
Severity.ERROR,
|
||||
"missing_project_name",
|
||||
"pyproject.toml is missing [project].name",
|
||||
detail=(
|
||||
"Without a project name the platform cannot resolve your "
|
||||
"package directory (this produces errors like "
|
||||
"'Cannot find src//crew.py')."
|
||||
),
|
||||
hint='Set a `name = "..."` field under `[project]` in pyproject.toml.',
|
||||
)
|
||||
return False
|
||||
|
||||
self._project_name = name
|
||||
self._package_name = normalize_package_name(name)
|
||||
self._is_flow = (self._pyproject.get("tool") or {}).get("crewai", {}).get(
|
||||
"type"
|
||||
) == "flow"
|
||||
return True
|
||||
|
||||
def _check_lockfile(self) -> None:
|
||||
uv_lock = self.project_root / "uv.lock"
|
||||
poetry_lock = self.project_root / "poetry.lock"
|
||||
pyproject = self.project_root / "pyproject.toml"
|
||||
|
||||
if not uv_lock.exists() and not poetry_lock.exists():
|
||||
self._add(
|
||||
Severity.ERROR,
|
||||
"missing_lockfile",
|
||||
"Expected to find at least one of these files: uv.lock or poetry.lock",
|
||||
hint=(
|
||||
"Run `uv lock` (recommended) or `poetry lock` in your project "
|
||||
"directory, commit the lockfile, then redeploy."
|
||||
),
|
||||
)
|
||||
return
|
||||
|
||||
lockfile = uv_lock if uv_lock.exists() else poetry_lock
|
||||
try:
|
||||
if lockfile.stat().st_mtime < pyproject.stat().st_mtime:
|
||||
self._add(
|
||||
Severity.WARNING,
|
||||
"stale_lockfile",
|
||||
f"{lockfile.name} is older than pyproject.toml",
|
||||
detail=(
|
||||
"Your lockfile may not reflect recent dependency changes. "
|
||||
"The platform resolves from the lockfile, so deployed "
|
||||
"dependencies may differ from local."
|
||||
),
|
||||
hint="Run `uv lock` (or `poetry lock`) and commit the result.",
|
||||
)
|
||||
except OSError:
|
||||
pass
|
||||
|
||||
def _check_package_dir(self) -> bool:
|
||||
if self._package_name is None:
|
||||
return False
|
||||
|
||||
        src_dir = self.project_root / "src"
        if not src_dir.is_dir():
            self._add(
                Severity.ERROR,
                "missing_src_dir",
                "Missing src/ directory",
                detail=(
                    "CrewAI deployments expect a src-layout project: "
                    f"src/{self._package_name}/crew.py (or main.py for flows)."
                ),
                hint="Run `crewai create crew <name>` to see the expected layout.",
            )
            return False

        package_dir = src_dir / self._package_name
        if not package_dir.is_dir():
            siblings = [
                p.name
                for p in src_dir.iterdir()
                if p.is_dir() and not p.name.endswith(".egg-info")
            ]
            egg_info = [
                p.name for p in src_dir.iterdir() if p.name.endswith(".egg-info")
            ]

            hint_parts = [
                f'Create src/{self._package_name}/ to match [project].name = "{self._project_name}".'
            ]
            if siblings:
                hint_parts.append(
                    f"Found other package directories: {', '.join(siblings)}. "
                    f"Either rename one to '{self._package_name}' or update [project].name."
                )
            if egg_info:
                hint_parts.append(
                    f"Delete stale build artifacts: {', '.join(egg_info)} "
                    "(these confuse the platform's package discovery)."
                )

            self._add(
                Severity.ERROR,
                "missing_package_dir",
                f"Cannot find src/{self._package_name}/",
                detail=(
                    "The platform looks for your crew source under "
                    "src/<package_name>/, derived from [project].name."
                ),
                hint=" ".join(hint_parts),
            )
            return False

        for p in src_dir.iterdir():
            if p.name.endswith(".egg-info"):
                self._add(
                    Severity.WARNING,
                    "stale_egg_info",
                    f"Stale build artifact in src/: {p.name}",
                    detail=(
                        ".egg-info directories can be mistaken for your package "
                        "and cause 'Cannot find src/<name>.egg-info/crew.py' errors."
                    ),
                    hint=f"Delete {p} and add `*.egg-info/` to .gitignore.",
                )

        self._package_dir = package_dir
        return True

    def _check_crew_entrypoint(self) -> None:
        if self._package_dir is None:
            return
        crew_py = self._package_dir / "crew.py"
        if not crew_py.is_file():
            self._add(
                Severity.ERROR,
                "missing_crew_py",
                f"Cannot find {crew_py.relative_to(self.project_root)}",
                detail=(
                    "Standard crew projects must define a Crew class decorated "
                    "with @CrewBase inside crew.py."
                ),
                hint=(
                    "Create crew.py with an @CrewBase-annotated class, or set "
                    '`[tool.crewai] type = "flow"` in pyproject.toml if this is a flow.'
                ),
            )

    def _check_config_yamls(self) -> None:
        if self._package_dir is None:
            return
        config_dir = self._package_dir / "config"
        if not config_dir.is_dir():
            self._add(
                Severity.ERROR,
                "missing_config_dir",
                f"Cannot find {config_dir.relative_to(self.project_root)}",
                hint="Create a config/ directory with agents.yaml and tasks.yaml.",
            )
            return

        for yaml_name in ("agents.yaml", "tasks.yaml"):
            yaml_path = config_dir / yaml_name
            if not yaml_path.is_file():
                self._add(
                    Severity.ERROR,
                    f"missing_{yaml_name.replace('.', '_')}",
                    f"Cannot find {yaml_path.relative_to(self.project_root)}",
                    detail=(
                        "CrewAI loads agent and task config from these files; "
                        "missing them causes empty-config warnings and runtime crashes."
                    ),
                )

    def _check_flow_entrypoint(self) -> None:
        if self._package_dir is None:
            return
        main_py = self._package_dir / "main.py"
        if not main_py.is_file():
            self._add(
                Severity.ERROR,
                "missing_flow_main",
                f"Cannot find {main_py.relative_to(self.project_root)}",
                detail=(
                    "Flow projects must define a Flow subclass in main.py. "
                    'This project has `[tool.crewai] type = "flow"` set.'
                ),
                hint="Create main.py with a `class MyFlow(Flow[...])`.",
            )

    def _check_hatch_wheel_target(self) -> None:
        if not self._pyproject:
            return

        build_system = self._pyproject.get("build-system") or {}
        backend = build_system.get("build-backend", "")
        if "hatchling" not in backend:
            return

        hatch_wheel = (
            (self._pyproject.get("tool") or {})
            .get("hatch", {})
            .get("build", {})
            .get("targets", {})
            .get("wheel", {})
        )
        if hatch_wheel.get("packages") or hatch_wheel.get("only-include"):
            return

        if self._package_dir and self._package_dir.is_dir():
            return

        self._add(
            Severity.ERROR,
            "hatch_wheel_target_missing",
            "Hatchling cannot determine which files to ship",
            detail=(
                "Your pyproject uses hatchling but has no "
                "[tool.hatch.build.targets.wheel] configuration and no "
                "directory matching your project name."
            ),
            hint=(
                "Add:\n"
                "  [tool.hatch.build.targets.wheel]\n"
                f'  packages = ["src/{self._package_name}"]'
            ),
        )

    def _check_module_imports(self) -> None:
        """Import the user's crew/flow via `uv run` so the check sees the same
        package versions as `crewai run` would. Result is reported as JSON on
        the subprocess's stdout."""
        script = (
            "import json, sys, traceback, os\n"
            "os.chdir(sys.argv[1])\n"
            "try:\n"
            "    from crewai.cli.utils import get_crews, get_flows\n"
            "    is_flow = sys.argv[2] == 'flow'\n"
            "    if is_flow:\n"
            "        instances = get_flows()\n"
            "        kind = 'flow'\n"
            "    else:\n"
            "        instances = get_crews()\n"
            "        kind = 'crew'\n"
            "    print(json.dumps({'ok': True, 'kind': kind, 'count': len(instances)}))\n"
            "except BaseException as e:\n"
            "    print(json.dumps({\n"
            "        'ok': False,\n"
            "        'error_type': type(e).__name__,\n"
            "        'error': str(e),\n"
            "        'traceback': traceback.format_exc(),\n"
            "    }))\n"
        )

        uv_path = shutil.which("uv")
        if uv_path is None:
            self._add(
                Severity.WARNING,
                "uv_not_found",
                "Skipping import check: `uv` not installed",
                hint="Install uv: https://docs.astral.sh/uv/",
            )
            return

        try:
            proc = subprocess.run(  # noqa: S603 - args constructed from trusted inputs
                [
                    uv_path,
                    "run",
                    "python",
                    "-c",
                    script,
                    str(self.project_root),
                    "flow" if self._is_flow else "crew",
                ],
                cwd=self.project_root,
                capture_output=True,
                text=True,
                timeout=120,
                check=False,
            )
        except subprocess.TimeoutExpired:
            self._add(
                Severity.ERROR,
                "import_timeout",
                "Importing your crew/flow module timed out after 120s",
                detail=(
                    "User code may be making network calls or doing heavy work "
                    "at import time. Move that work into agent methods."
                ),
            )
            return

        # The payload is the last JSON object on stdout; user code may print
        # other lines before it.
        payload: dict[str, Any] | None = None
        for line in reversed(proc.stdout.splitlines()):
            line = line.strip()
            if line.startswith("{") and line.endswith("}"):
                try:
                    payload = json.loads(line)
                    break
                except json.JSONDecodeError:
                    continue

        if payload is None:
            self._add(
                Severity.ERROR,
                "import_failed",
                "Could not import your crew/flow module",
                detail=(proc.stderr or proc.stdout or "").strip()[:1500],
                hint="Run `crewai run` locally first to reproduce the error.",
            )
            return

        if payload.get("ok"):
            if payload.get("count", 0) == 0:
                kind = payload.get("kind", "crew")
                if kind == "flow":
                    self._add(
                        Severity.ERROR,
                        "no_flow_subclass",
                        "No Flow subclass found in the module",
                        hint=(
                            "main.py must define a class extending "
                            "`crewai.flow.Flow`, instantiable with no arguments."
                        ),
                    )
                else:
                    self._add(
                        Severity.ERROR,
                        "no_crewbase_class",
                        "Crew class annotated with @CrewBase not found",
                        hint=(
                            "Decorate your crew class with @CrewBase from "
                            "crewai.project (see `crewai create crew` template)."
                        ),
                    )
            return

        err_msg = str(payload.get("error", ""))
        err_type = str(payload.get("error_type", "Exception"))
        tb = str(payload.get("traceback", ""))
        self._classify_import_error(err_type, err_msg, tb)

    def _classify_import_error(self, err_type: str, err_msg: str, tb: str) -> None:
        """Turn a raw import-time exception into a user-actionable finding."""
        # Must be checked before the generic "native provider" branch below:
        # the extras-missing message contains the same phrase. Providers
        # format the install command as plain text (`to install: uv add
        # "crewai[extra]"`); also tolerate backtick-delimited variants.
        m = re.search(
            r"(?P<pkg>[A-Za-z0-9_ -]+?)\s+native provider not available"
            r".*?to install:\s*`?(?P<cmd>uv add [\"']crewai\[[^\]]+\][\"'])`?",
            err_msg,
        )
        if m:
            self._add(
                Severity.ERROR,
                "missing_provider_extra",
                f"{m.group('pkg').strip()} provider extra not installed",
                hint=f"Run: {m.group('cmd')}",
            )
            return

        # crewai.llm.LLM.__new__ wraps provider init errors as
        # ImportError("Error importing native provider: ...").
        if "Error importing native provider" in err_msg or "native provider" in err_msg:
            missing_key = self._extract_missing_api_key(err_msg)
            if missing_key:
                provider = _KNOWN_API_KEY_HINTS.get(missing_key, missing_key)
                self._add(
                    Severity.WARNING,
                    "llm_init_missing_key",
                    f"LLM is constructed at import time but {missing_key} is not set",
                    detail=(
                        f"Your crew instantiates a {provider} LLM during module "
                        "load (e.g. in a class field default or @crew method). "
                        f"The {provider} provider currently requires {missing_key} "
                        "at construction time, so this will fail on the platform "
                        "unless the key is set in your deployment environment."
                    ),
                    hint=(
                        f"Add {missing_key} to your deployment's Environment "
                        "Variables before deploying, or move LLM construction "
                        "inside agent methods so it runs lazily."
                    ),
                )
                return
            self._add(
                Severity.ERROR,
                "llm_provider_init_failed",
                "LLM native provider failed to initialize",
                detail=err_msg,
                hint=(
                    "Check your LLM(model=...) configuration and provider-specific "
                    "extras (e.g. `uv add 'crewai[azure-ai-inference]'` for Azure)."
                ),
            )
            return

        if err_type == "KeyError":
            key = err_msg.strip("'\"")
            if key in _KNOWN_API_KEY_HINTS or key.endswith("_API_KEY"):
                self._add(
                    Severity.WARNING,
                    "env_var_read_at_import",
                    f"{key} is read at import time via os.environ[...]",
                    detail=(
                        "Using os.environ[...] (rather than os.getenv(...)) "
                        "at module scope crashes the build if the key isn't set."
                    ),
                    hint=(
                        f"Either add {key} as a deployment env var, or switch "
                        "to os.getenv() and move the access inside agent methods."
                    ),
                )
                return

        if "Crew class annotated with @CrewBase not found" in err_msg:
            self._add(
                Severity.ERROR,
                "no_crewbase_class",
                "Crew class annotated with @CrewBase not found",
                detail=err_msg,
            )
            return
        if "No Flow subclass found" in err_msg:
            self._add(
                Severity.ERROR,
                "no_flow_subclass",
                "No Flow subclass found in the module",
                detail=err_msg,
            )
            return

        if (
            err_type == "AttributeError"
            and "has no attribute '_load_response_format'" in err_msg
        ):
            self._add(
                Severity.ERROR,
                "stale_crewai_pin",
                "Your lockfile pins a crewai version missing `_load_response_format`",
                detail=err_msg,
                hint=(
                    "Run `uv lock --upgrade-package crewai` (or `poetry update crewai`) "
                    "to pin a newer release."
                ),
            )
            return

        if "pydantic" in tb.lower() or "validation error" in err_msg.lower():
            self._add(
                Severity.ERROR,
                "pydantic_validation_error",
                "Pydantic validation failed while loading your crew",
                detail=err_msg[:800],
                hint=(
                    "Check agent/task configuration fields. `crewai run` locally "
                    "will show the full traceback."
                ),
            )
            return

        self._add(
            Severity.ERROR,
            "import_failed",
            f"Importing your crew failed: {err_type}",
            detail=err_msg[:800],
            hint="Run `crewai run` locally to see the full traceback.",
        )

    @staticmethod
    def _extract_missing_api_key(err_msg: str) -> str | None:
        """Pull 'FOO_API_KEY' out of '... FOO_API_KEY is required ...'."""
        m = re.search(r"([A-Z][A-Z0-9_]*_API_KEY)\s+is required", err_msg)
        if m:
            return m.group(1)
        m = re.search(r"['\"]([A-Z][A-Z0-9_]*_API_KEY)['\"]", err_msg)
        if m:
            return m.group(1)
        return None

    def _check_env_vars(self) -> None:
        """Warn about env vars referenced in user code but missing locally.
        Best-effort only — the platform sets vars server-side, so we never error.
        """
        if not self._package_dir:
            return

        referenced: set[str] = set()
        pattern = re.compile(
            r"""(?x)
            (?:os\.environ\s*(?:\[\s*|\.get\s*\(\s*)
              |os\.getenv\s*\(\s*
              |getenv\s*\(\s*)
            ['"]([A-Z][A-Z0-9_]*)['"]
            """
        )

        for path in self._package_dir.rglob("*.py"):
            try:
                text = path.read_text(encoding="utf-8", errors="ignore")
            except OSError:
                continue
            referenced.update(pattern.findall(text))

        for path in self._package_dir.rglob("*.yaml"):
            try:
                text = path.read_text(encoding="utf-8", errors="ignore")
            except OSError:
                continue
            referenced.update(re.findall(r"\$\{?([A-Z][A-Z0-9_]+)\}?", text))

        env_file = self.project_root / ".env"
        env_keys: set[str] = set()
        if env_file.exists():
            for line in env_file.read_text(errors="ignore").splitlines():
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                env_keys.add(line.split("=", 1)[0].strip())

        missing_known: list[str] = sorted(
            var
            for var in referenced
            if var in _KNOWN_API_KEY_HINTS
            and var not in env_keys
            and var not in os.environ
        )
        if missing_known:
            self._add(
                Severity.WARNING,
                "env_vars_not_in_dotenv",
                f"{len(missing_known)} referenced API key(s) not in .env",
                detail=(
                    "These env vars are referenced in your source but not set "
                    f"locally: {', '.join(missing_known)}. Deploys will fail "
                    "unless they are added to the deployment's Environment "
                    "Variables in the CrewAI dashboard."
                ),
            )

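The verbose regex compiled above can be tried against a snippet directly. A small sketch (the sample source text is invented) showing which access styles it picks up:

```python
import re

# Same verbose pattern the env-var scan applies to each user .py file:
# matches os.environ[...], os.environ.get(...), os.getenv(...), and bare
# getenv(...), capturing the UPPER_SNAKE_CASE name inside the quotes.
pattern = re.compile(
    r"""(?x)
    (?:os\.environ\s*(?:\[\s*|\.get\s*\(\s*)
      |os\.getenv\s*\(\s*
      |getenv\s*\(\s*)
    ['"]([A-Z][A-Z0-9_]*)['"]
    """
)

source = '''
key = os.environ["OPENAI_API_KEY"]
model = os.getenv("MODEL_NAME", "gpt-4o")
region = os.environ.get("AWS_REGION")
'''

print(sorted(set(pattern.findall(source))))
```

Lowercase strings like the `"gpt-4o"` default are ignored because the captured group only admits `[A-Z][A-Z0-9_]*`.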
    def _check_version_vs_lockfile(self) -> None:
        """Warn when the lockfile pins a crewai release older than 1.13.0,
        which is where ``_load_response_format`` was introduced.
        """
        uv_lock = self.project_root / "uv.lock"
        poetry_lock = self.project_root / "poetry.lock"
        lockfile = (
            uv_lock
            if uv_lock.exists()
            else poetry_lock
            if poetry_lock.exists()
            else None
        )
        if lockfile is None:
            return

        try:
            text = lockfile.read_text(errors="ignore")
        except OSError:
            return

        m = re.search(
            r'name\s*=\s*"crewai"\s*\nversion\s*=\s*"([^"]+)"',
            text,
        )
        if not m:
            return
        locked = m.group(1)

        try:
            from packaging.version import Version

            if Version(locked) < Version("1.13.0"):
                self._add(
                    Severity.WARNING,
                    "old_crewai_pin",
                    f"Lockfile pins crewai=={locked} (older than 1.13.0)",
                    detail=(
                        "Older pinned versions are missing API surface the "
                        "platform builder expects (e.g. `_load_response_format`)."
                    ),
                    hint="Run `uv lock --upgrade-package crewai` and redeploy.",
                )
        except Exception as e:
            logger.debug("Could not parse crewai pin from lockfile: %s", e)


def render_report(results: list[ValidationResult]) -> None:
    """Pretty-print results to the shared rich console."""
    if not results:
        console.print("[bold green]Pre-deploy validation passed.[/bold green]")
        return

    errors = [r for r in results if r.severity is Severity.ERROR]
    warnings = [r for r in results if r.severity is Severity.WARNING]

    for result in errors:
        console.print(f"[bold red]ERROR[/bold red] [{result.code}] {result.title}")
        if result.detail:
            console.print(f"  {result.detail}")
        if result.hint:
            console.print(f"  [dim]hint:[/dim] {result.hint}")

    for result in warnings:
        console.print(
            f"[bold yellow]WARNING[/bold yellow] [{result.code}] {result.title}"
        )
        if result.detail:
            console.print(f"  {result.detail}")
        if result.hint:
            console.print(f"  [dim]hint:[/dim] {result.hint}")

    summary_parts: list[str] = []
    if errors:
        summary_parts.append(f"[bold red]{len(errors)} error(s)[/bold red]")
    if warnings:
        summary_parts.append(f"[bold yellow]{len(warnings)} warning(s)[/bold yellow]")
    console.print(f"\n{' / '.join(summary_parts)}")


def validate_project(project_root: Path | None = None) -> DeployValidator:
    """Entrypoint: run validation, render results, return the validator.

    The caller inspects ``validator.ok`` to decide whether to proceed with a
    deploy.
    """
    validator = DeployValidator(project_root=project_root)
    validator.run()
    render_report(validator.results)
    return validator


def run_validate_command() -> None:
    """Implementation of `crewai deploy validate`."""
    validator = validate_project()
    if not validator.ok:
        sys.exit(1)
250  lib/crewai/src/crewai/cli/remote_template/main.py  Normal file
@@ -0,0 +1,250 @@
|
||||
import io
|
||||
import logging
|
||||
import os
|
||||
import shutil
|
||||
from typing import Any
|
||||
import zipfile
|
||||
|
||||
import click
|
||||
import httpx
|
||||
from rich.console import Console
|
||||
from rich.panel import Panel
|
||||
from rich.text import Text
|
||||
|
||||
from crewai.cli.command import BaseCommand
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
console = Console()
|
||||
|
||||
GITHUB_ORG = "crewAIInc"
|
||||
TEMPLATE_PREFIX = "template_"
|
||||
GITHUB_API_BASE = "https://api.github.com"
|
||||
|
||||
BANNER = """\
|
||||
[bold white] ██████╗██████╗ ███████╗██╗ ██╗[/bold white] [bold red] █████╗ ██╗[/bold red]
|
||||
[bold white]██╔════╝██╔══██╗██╔════╝██║ ██║[/bold white] [bold red]██╔══██╗██║[/bold red]
|
||||
[bold white]██║ ██████╔╝█████╗ ██║ █╗ ██║[/bold white] [bold red]███████║██║[/bold red]
|
||||
[bold white]██║ ██╔══██╗██╔══╝ ██║███╗██║[/bold white] [bold red]██╔══██║██║[/bold red]
|
||||
[bold white]╚██████╗██║ ██║███████╗╚███╔███╔╝[/bold white] [bold red]██║ ██║██║[/bold red]
|
||||
[bold white] ╚═════╝╚═╝ ╚═╝╚══════╝ ╚══╝╚══╝[/bold white] [bold red]╚═╝ ╚═╝╚═╝[/bold red]
|
||||
[dim white]████████╗███████╗███╗ ███╗██████╗ ██╗ █████╗ ████████╗███████╗███████╗[/dim white]
|
||||
[dim white]╚══██╔══╝██╔════╝████╗ ████║██╔══██╗██║ ██╔══██╗╚══██╔══╝██╔════╝██╔════╝[/dim white]
|
||||
[dim white] ██║ █████╗ ██╔████╔██║██████╔╝██║ ███████║ ██║ █████╗ ███████╗[/dim white]
|
||||
[dim white] ██║ ██╔══╝ ██║╚██╔╝██║██╔═══╝ ██║ ██╔══██║ ██║ ██╔══╝ ╚════██║[/dim white]
|
||||
[dim white] ██║ ███████╗██║ ╚═╝ ██║██║ ███████╗██║ ██║ ██║ ███████╗███████║[/dim white]
|
||||
[dim white] ╚═╝ ╚══════╝╚═╝ ╚═╝╚═╝ ╚══════╝╚═╝ ╚═╝ ╚═╝ ╚══════╝╚══════╝[/dim white]"""
|
||||
|
||||
|
||||
class TemplateCommand(BaseCommand):
    """Handle template-related operations for CrewAI projects."""

    def __init__(self) -> None:
        super().__init__()

    def list_templates(self) -> None:
        """List available templates with an interactive selector to install."""
        templates = self._fetch_templates()
        if not templates:
            click.echo("No templates found.")
            return

        console.print(f"\n{BANNER}\n")
        console.print(" [on cyan] templates [/on cyan]\n")
        console.print(f" [green]o[/green] Source: https://github.com/{GITHUB_ORG}")
        console.print(
            f" [green]o[/green] Found [bold]{len(templates)}[/bold] templates\n"
        )
        console.print(" [green]o[/green] Select a template to install")

        for idx, repo in enumerate(templates, start=1):
            name = repo["name"].removeprefix(TEMPLATE_PREFIX)
            description = repo.get("description") or ""
            if description:
                console.print(
                    f" [bold cyan]{idx}.[/bold cyan] [bold white]{name}[/bold white] [dim]({description})[/dim]"
                )
            else:
                console.print(
                    f" [bold cyan]{idx}.[/bold cyan] [bold white]{name}[/bold white]"
                )

        console.print(" [bold cyan]q.[/bold cyan] [dim]Quit[/dim]\n")

        while True:
            choice = click.prompt("Enter your choice", type=str)

            if choice.lower() == "q":
                return

            if choice.isdigit() and 1 <= int(choice) <= len(templates):
                selected_index = int(choice) - 1
                break

            click.secho(
                f"Please enter a number between 1 and {len(templates)}, or 'q' to quit.",
                fg="yellow",
            )

        selected = templates[selected_index]
        repo_name = selected["name"]
        self._install_repo(repo_name)

    def add_template(self, name: str, output_dir: str | None = None) -> None:
        """Download a template and copy it into the current working directory.

        Args:
            name: Template name (with or without the template_ prefix).
            output_dir: Optional directory name. Defaults to the template name.
        """
        repo_name = self._resolve_repo_name(name)
        if repo_name is None:
            click.secho(f"Template '{name}' not found.", fg="red")
            click.echo("Run 'crewai template list' to see available templates.")
            raise SystemExit(1)

        self._install_repo(repo_name, output_dir)

    def _install_repo(self, repo_name: str, output_dir: str | None = None) -> None:
        """Download and extract a template repo into the current directory.

        Args:
            repo_name: Full GitHub repo name (e.g. template_deep_research).
            output_dir: Optional directory name. Defaults to the template name.
        """
        folder_name = output_dir or repo_name.removeprefix(TEMPLATE_PREFIX)
        dest = os.path.join(os.getcwd(), folder_name)

        while os.path.exists(dest):
            click.secho(f"Directory '{folder_name}' already exists.", fg="yellow")
            folder_name = click.prompt(
                "Enter a different directory name (or 'q' to quit)", type=str
            )
            if folder_name.lower() == "q":
                return
            dest = os.path.join(os.getcwd(), folder_name)

        click.echo(
            f"Downloading template '{repo_name.removeprefix(TEMPLATE_PREFIX)}'..."
        )

        zip_bytes = self._download_zip(repo_name)
        self._extract_zip(zip_bytes, dest)

        self._telemetry.template_installed_span(repo_name.removeprefix(TEMPLATE_PREFIX))

        console.print(
            f"\n [green]\u2713[/green] Installed template [bold white]{folder_name}[/bold white]"
            f" [dim](source: github.com/{GITHUB_ORG}/{repo_name})[/dim]\n"
        )

        next_steps = Text()
        next_steps.append(f" cd {folder_name}\n", style="bold white")
        next_steps.append(" crewai install", style="bold white")

        panel = Panel(
            next_steps,
            title="[green]\u25c7 Next steps[/green]",
            title_align="left",
            border_style="dim",
            padding=(1, 2),
        )
        console.print(panel)

    def _fetch_templates(self) -> list[dict[str, Any]]:
        """Fetch all template repos from the GitHub org."""
        templates: list[dict[str, Any]] = []
        page = 1
        while True:
            url = f"{GITHUB_API_BASE}/orgs/{GITHUB_ORG}/repos"
            params: dict[str, str | int] = {
                "per_page": 100,
                "page": page,
                "type": "public",
            }
            try:
                response = httpx.get(url, params=params, timeout=15)
                response.raise_for_status()
            except httpx.HTTPError as e:
                click.secho(f"Failed to fetch templates from GitHub: {e}", fg="red")
                raise SystemExit(1) from e

            repos = response.json()
            if not repos:
                break

            templates.extend(
                repo
                for repo in repos
                if repo["name"].startswith(TEMPLATE_PREFIX) and not repo.get("private")
            )

            page += 1

        templates.sort(key=lambda r: r["name"])
        return templates

    def _resolve_repo_name(self, name: str) -> str | None:
        """Resolve user input to a full repo name, or None if not found."""
        # Accept both 'deep_research' and 'template_deep_research'
        candidates = [
            f"{TEMPLATE_PREFIX}{name}"
            if not name.startswith(TEMPLATE_PREFIX)
            else name,
            name,
        ]

        templates = self._fetch_templates()
        template_names = {t["name"] for t in templates}

        for candidate in candidates:
            if candidate in template_names:
                return candidate

        return None

    def _download_zip(self, repo_name: str) -> bytes:
        """Download the default branch zipball for a repo."""
        url = f"{GITHUB_API_BASE}/repos/{GITHUB_ORG}/{repo_name}/zipball"
        try:
            response = httpx.get(url, follow_redirects=True, timeout=60)
            response.raise_for_status()
        except httpx.HTTPError as e:
            click.secho(f"Failed to download template: {e}", fg="red")
            raise SystemExit(1) from e

        return response.content

    def _extract_zip(self, zip_bytes: bytes, dest: str) -> None:
        """Extract a GitHub zipball into dest, stripping the top-level directory."""
        with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
            # GitHub zipballs have a single top-level dir like 'crewAIInc-template_xxx-<sha>/'
            members = zf.namelist()
            if not members:
                click.secho("Downloaded archive is empty.", fg="red")
                raise SystemExit(1)

            top_dir = members[0].split("/")[0] + "/"

            os.makedirs(dest, exist_ok=True)

            for member in members:
                if member == top_dir or not member.startswith(top_dir):
                    continue

                relative_path = member[len(top_dir) :]
                if not relative_path:
                    continue

                target = os.path.realpath(os.path.join(dest, relative_path))
                if not target.startswith(
                    os.path.realpath(dest) + os.sep
                ) and target != os.path.realpath(dest):
                    continue

                if member.endswith("/"):
                    os.makedirs(target, exist_ok=True)
                else:
                    os.makedirs(os.path.dirname(target), exist_ok=True)
                    with zf.open(member) as src, open(target, "wb") as dst:
                        shutil.copyfileobj(src, dst)
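The path handling in `_extract_zip` does two things: strip the single top-level directory that GitHub zipballs wrap everything in, and drop any entry that would resolve outside the destination (the classic zip-slip attack). A hypothetical pure helper (`stripped_targets` and its sample member list are illustrations, not part of the CLI) sketches the same logic without touching the filesystem:

```python
import os


def stripped_targets(members: list[str], dest: str) -> list[str]:
    # Drop the zipball's top-level wrapper dir and skip entries that
    # would escape dest after path resolution (zip-slip guard).
    top_dir = members[0].split("/")[0] + "/"
    real_dest = os.path.realpath(dest)
    targets: list[str] = []
    for member in members:
        if member == top_dir or not member.startswith(top_dir):
            continue
        relative = member[len(top_dir):]
        if not relative or member.endswith("/"):
            continue
        target = os.path.realpath(os.path.join(dest, relative))
        if not target.startswith(real_dest + os.sep) and target != real_dest:
            continue  # would land outside dest
        targets.append(os.path.relpath(target, real_dest))
    return targets


members = [
    "crewAIInc-template_deep_research-abc123/",
    "crewAIInc-template_deep_research-abc123/README.md",
    "crewAIInc-template_deep_research-abc123/src/app.py",
    "crewAIInc-template_deep_research-abc123/../evil.sh",
]
print(stripped_targets(members, "/tmp/demo"))
```

The `../evil.sh` entry survives the prefix strip but is rejected by the `realpath` containment check, which is why the guard runs on the resolved path rather than on the raw member name.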
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<3.14"
 dependencies = [
-    "crewai[tools]==1.14.2a1"
+    "crewai[tools]==1.14.3"
 ]

 [project.scripts]

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<3.14"
 dependencies = [
-    "crewai[tools]==1.14.2a1"
+    "crewai[tools]==1.14.3"
 ]

 [project.scripts]

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
 readme = "README.md"
 requires-python = ">=3.10,<3.14"
 dependencies = [
-    "crewai[tools]==1.14.2a1"
+    "crewai[tools]==1.14.3"
 ]

 [tool.crewai]

@@ -419,10 +419,32 @@ class Crew(FlowTrackable, BaseModel):

     def _restore_runtime(self) -> None:
         """Re-create runtime objects after restoring from a checkpoint."""
         from crewai.events.event_bus import crewai_event_bus

+        started_task_ids: set[str] = set()
+        state = crewai_event_bus._runtime_state
+        if state is not None:
+            for node in state.event_record.nodes.values():
+                if node.event.type == "task_started" and node.event.task_id:
+                    started_task_ids.add(node.event.task_id)
+
+        resuming_task_agent_roles: set[str] = set()
+        for task in self.tasks:
+            if (
+                task.output is None
+                and task.agent is not None
+                and str(task.id) in started_task_ids
+            ):
+                resuming_task_agent_roles.add(task.agent.role)
+
         for agent in self.agents:
             agent.crew = self
             executor = agent.agent_executor
-            if executor and executor.messages:
+            if (
+                executor
+                and executor.messages
+                and agent.role in resuming_task_agent_roles
+            ):
                 executor.crew = self
                 executor.agent = agent
+                executor._resuming = True

@@ -6,111 +6,20 @@ This module provides the event infrastructure that allows users to:
- Build custom logging and analytics
- Extend CrewAI with custom event handlers
- Declare handler dependencies for ordered execution

Event type classes are lazy-loaded on first access to avoid importing
~12 Pydantic model modules (and their transitive deps) at package init time.
"""

from __future__ import annotations

import importlib
from typing import TYPE_CHECKING, Any

from crewai.events.base_event_listener import BaseEventListener
from crewai.events.depends import Depends
from crewai.events.event_bus import crewai_event_bus
from crewai.events.handler_graph import CircularDependencyError
from crewai.events.types.crew_events import (
    CrewKickoffCompletedEvent,
    CrewKickoffFailedEvent,
    CrewKickoffStartedEvent,
    CrewTestCompletedEvent,
    CrewTestFailedEvent,
    CrewTestResultEvent,
    CrewTestStartedEvent,
    CrewTrainCompletedEvent,
    CrewTrainFailedEvent,
    CrewTrainStartedEvent,
)
from crewai.events.types.flow_events import (
    FlowCreatedEvent,
    FlowEvent,
    FlowFinishedEvent,
    FlowPlotEvent,
    FlowStartedEvent,
    HumanFeedbackReceivedEvent,
    HumanFeedbackRequestedEvent,
    MethodExecutionFailedEvent,
    MethodExecutionFinishedEvent,
    MethodExecutionStartedEvent,
)
from crewai.events.types.knowledge_events import (
    KnowledgeQueryCompletedEvent,
    KnowledgeQueryFailedEvent,
    KnowledgeQueryStartedEvent,
    KnowledgeRetrievalCompletedEvent,
    KnowledgeRetrievalStartedEvent,
    KnowledgeSearchQueryFailedEvent,
)
from crewai.events.types.llm_events import (
    LLMCallCompletedEvent,
    LLMCallFailedEvent,
    LLMCallStartedEvent,
    LLMStreamChunkEvent,
)
from crewai.events.types.llm_guardrail_events import (
    LLMGuardrailCompletedEvent,
    LLMGuardrailStartedEvent,
)
from crewai.events.types.logging_events import (
    AgentLogsExecutionEvent,
    AgentLogsStartedEvent,
)
from crewai.events.types.mcp_events import (
    MCPConfigFetchFailedEvent,
    MCPConnectionCompletedEvent,
    MCPConnectionFailedEvent,
    MCPConnectionStartedEvent,
    MCPToolExecutionCompletedEvent,
    MCPToolExecutionFailedEvent,
    MCPToolExecutionStartedEvent,
)
from crewai.events.types.memory_events import (
    MemoryQueryCompletedEvent,
    MemoryQueryFailedEvent,
    MemoryQueryStartedEvent,
    MemoryRetrievalCompletedEvent,
    MemoryRetrievalFailedEvent,
    MemoryRetrievalStartedEvent,
    MemorySaveCompletedEvent,
    MemorySaveFailedEvent,
    MemorySaveStartedEvent,
)
from crewai.events.types.reasoning_events import (
    AgentReasoningCompletedEvent,
    AgentReasoningFailedEvent,
    AgentReasoningStartedEvent,
    ReasoningEvent,
)
from crewai.events.types.skill_events import (
    SkillActivatedEvent,
    SkillDiscoveryCompletedEvent,
    SkillDiscoveryStartedEvent,
    SkillEvent,
    SkillLoadFailedEvent,
    SkillLoadedEvent,
)
from crewai.events.types.task_events import (
    TaskCompletedEvent,
    TaskEvaluationEvent,
    TaskFailedEvent,
    TaskStartedEvent,
)
from crewai.events.types.tool_usage_events import (
    ToolExecutionErrorEvent,
    ToolSelectionErrorEvent,
    ToolUsageErrorEvent,
    ToolUsageEvent,
    ToolUsageFinishedEvent,
    ToolUsageStartedEvent,
    ToolValidateInputErrorEvent,
)


if TYPE_CHECKING:
@@ -125,6 +34,250 @@ if TYPE_CHECKING:
        LiteAgentExecutionErrorEvent,
        LiteAgentExecutionStartedEvent,
    )
    from crewai.events.types.checkpoint_events import (
        CheckpointBaseEvent,
        CheckpointCompletedEvent,
        CheckpointFailedEvent,
        CheckpointForkBaseEvent,
        CheckpointForkCompletedEvent,
        CheckpointForkStartedEvent,
        CheckpointPrunedEvent,
        CheckpointRestoreBaseEvent,
        CheckpointRestoreCompletedEvent,
        CheckpointRestoreFailedEvent,
        CheckpointRestoreStartedEvent,
        CheckpointStartedEvent,
    )
    from crewai.events.types.crew_events import (
        CrewKickoffCompletedEvent,
        CrewKickoffFailedEvent,
        CrewKickoffStartedEvent,
        CrewTestCompletedEvent,
        CrewTestFailedEvent,
        CrewTestResultEvent,
        CrewTestStartedEvent,
        CrewTrainCompletedEvent,
        CrewTrainFailedEvent,
        CrewTrainStartedEvent,
    )
    from crewai.events.types.flow_events import (
        FlowCreatedEvent,
        FlowEvent,
        FlowFinishedEvent,
        FlowPlotEvent,
        FlowStartedEvent,
        HumanFeedbackReceivedEvent,
        HumanFeedbackRequestedEvent,
        MethodExecutionFailedEvent,
        MethodExecutionFinishedEvent,
        MethodExecutionStartedEvent,
    )
    from crewai.events.types.knowledge_events import (
        KnowledgeQueryCompletedEvent,
        KnowledgeQueryFailedEvent,
        KnowledgeQueryStartedEvent,
        KnowledgeRetrievalCompletedEvent,
        KnowledgeRetrievalStartedEvent,
        KnowledgeSearchQueryFailedEvent,
    )
    from crewai.events.types.llm_events import (
        LLMCallCompletedEvent,
        LLMCallFailedEvent,
        LLMCallStartedEvent,
        LLMStreamChunkEvent,
    )
    from crewai.events.types.llm_guardrail_events import (
        LLMGuardrailCompletedEvent,
        LLMGuardrailStartedEvent,
    )
    from crewai.events.types.logging_events import (
        AgentLogsExecutionEvent,
        AgentLogsStartedEvent,
    )
    from crewai.events.types.mcp_events import (
        MCPConfigFetchFailedEvent,
        MCPConnectionCompletedEvent,
        MCPConnectionFailedEvent,
        MCPConnectionStartedEvent,
        MCPToolExecutionCompletedEvent,
        MCPToolExecutionFailedEvent,
        MCPToolExecutionStartedEvent,
    )
    from crewai.events.types.memory_events import (
        MemoryQueryCompletedEvent,
        MemoryQueryFailedEvent,
        MemoryQueryStartedEvent,
        MemoryRetrievalCompletedEvent,
        MemoryRetrievalFailedEvent,
        MemoryRetrievalStartedEvent,
        MemorySaveCompletedEvent,
        MemorySaveFailedEvent,
        MemorySaveStartedEvent,
    )
    from crewai.events.types.reasoning_events import (
        AgentReasoningCompletedEvent,
        AgentReasoningFailedEvent,
        AgentReasoningStartedEvent,
        ReasoningEvent,
    )
    from crewai.events.types.skill_events import (
        SkillActivatedEvent,
        SkillDiscoveryCompletedEvent,
        SkillDiscoveryStartedEvent,
        SkillEvent,
        SkillLoadFailedEvent,
        SkillLoadedEvent,
    )
    from crewai.events.types.task_events import (
        TaskCompletedEvent,
        TaskEvaluationEvent,
        TaskFailedEvent,
        TaskStartedEvent,
    )
    from crewai.events.types.tool_usage_events import (
        ToolExecutionErrorEvent,
        ToolSelectionErrorEvent,
        ToolUsageErrorEvent,
        ToolUsageEvent,
        ToolUsageFinishedEvent,
        ToolUsageStartedEvent,
        ToolValidateInputErrorEvent,
    )


# Map every event class name → its module path for lazy loading
_LAZY_EVENT_MAPPING: dict[str, str] = {
    # agent_events
    "AgentEvaluationCompletedEvent": "crewai.events.types.agent_events",
    "AgentEvaluationFailedEvent": "crewai.events.types.agent_events",
    "AgentEvaluationStartedEvent": "crewai.events.types.agent_events",
    "AgentExecutionCompletedEvent": "crewai.events.types.agent_events",
    "AgentExecutionErrorEvent": "crewai.events.types.agent_events",
    "AgentExecutionStartedEvent": "crewai.events.types.agent_events",
    "LiteAgentExecutionCompletedEvent": "crewai.events.types.agent_events",
    "LiteAgentExecutionErrorEvent": "crewai.events.types.agent_events",
    "LiteAgentExecutionStartedEvent": "crewai.events.types.agent_events",
    # checkpoint_events
    "CheckpointBaseEvent": "crewai.events.types.checkpoint_events",
    "CheckpointCompletedEvent": "crewai.events.types.checkpoint_events",
    "CheckpointFailedEvent": "crewai.events.types.checkpoint_events",
    "CheckpointForkBaseEvent": "crewai.events.types.checkpoint_events",
    "CheckpointForkCompletedEvent": "crewai.events.types.checkpoint_events",
    "CheckpointForkStartedEvent": "crewai.events.types.checkpoint_events",
    "CheckpointPrunedEvent": "crewai.events.types.checkpoint_events",
    "CheckpointRestoreBaseEvent": "crewai.events.types.checkpoint_events",
    "CheckpointRestoreCompletedEvent": "crewai.events.types.checkpoint_events",
    "CheckpointRestoreFailedEvent": "crewai.events.types.checkpoint_events",
    "CheckpointRestoreStartedEvent": "crewai.events.types.checkpoint_events",
    "CheckpointStartedEvent": "crewai.events.types.checkpoint_events",
    # crew_events
    "CrewKickoffCompletedEvent": "crewai.events.types.crew_events",
    "CrewKickoffFailedEvent": "crewai.events.types.crew_events",
    "CrewKickoffStartedEvent": "crewai.events.types.crew_events",
    "CrewTestCompletedEvent": "crewai.events.types.crew_events",
    "CrewTestFailedEvent": "crewai.events.types.crew_events",
    "CrewTestResultEvent": "crewai.events.types.crew_events",
    "CrewTestStartedEvent": "crewai.events.types.crew_events",
    "CrewTrainCompletedEvent": "crewai.events.types.crew_events",
    "CrewTrainFailedEvent": "crewai.events.types.crew_events",
    "CrewTrainStartedEvent": "crewai.events.types.crew_events",
    # flow_events
    "FlowCreatedEvent": "crewai.events.types.flow_events",
    "FlowEvent": "crewai.events.types.flow_events",
    "FlowFinishedEvent": "crewai.events.types.flow_events",
    "FlowPlotEvent": "crewai.events.types.flow_events",
    "FlowStartedEvent": "crewai.events.types.flow_events",
    "HumanFeedbackReceivedEvent": "crewai.events.types.flow_events",
    "HumanFeedbackRequestedEvent": "crewai.events.types.flow_events",
    "MethodExecutionFailedEvent": "crewai.events.types.flow_events",
    "MethodExecutionFinishedEvent": "crewai.events.types.flow_events",
    "MethodExecutionStartedEvent": "crewai.events.types.flow_events",
    # knowledge_events
    "KnowledgeQueryCompletedEvent": "crewai.events.types.knowledge_events",
    "KnowledgeQueryFailedEvent": "crewai.events.types.knowledge_events",
    "KnowledgeQueryStartedEvent": "crewai.events.types.knowledge_events",
    "KnowledgeRetrievalCompletedEvent": "crewai.events.types.knowledge_events",
    "KnowledgeRetrievalStartedEvent": "crewai.events.types.knowledge_events",
    "KnowledgeSearchQueryFailedEvent": "crewai.events.types.knowledge_events",
    # llm_events
    "LLMCallCompletedEvent": "crewai.events.types.llm_events",
    "LLMCallFailedEvent": "crewai.events.types.llm_events",
    "LLMCallStartedEvent": "crewai.events.types.llm_events",
    "LLMStreamChunkEvent": "crewai.events.types.llm_events",
    # llm_guardrail_events
    "LLMGuardrailCompletedEvent": "crewai.events.types.llm_guardrail_events",
    "LLMGuardrailStartedEvent": "crewai.events.types.llm_guardrail_events",
    # logging_events
    "AgentLogsExecutionEvent": "crewai.events.types.logging_events",
    "AgentLogsStartedEvent": "crewai.events.types.logging_events",
    # mcp_events
    "MCPConfigFetchFailedEvent": "crewai.events.types.mcp_events",
    "MCPConnectionCompletedEvent": "crewai.events.types.mcp_events",
    "MCPConnectionFailedEvent": "crewai.events.types.mcp_events",
    "MCPConnectionStartedEvent": "crewai.events.types.mcp_events",
    "MCPToolExecutionCompletedEvent": "crewai.events.types.mcp_events",
    "MCPToolExecutionFailedEvent": "crewai.events.types.mcp_events",
    "MCPToolExecutionStartedEvent": "crewai.events.types.mcp_events",
    # memory_events
    "MemoryQueryCompletedEvent": "crewai.events.types.memory_events",
    "MemoryQueryFailedEvent": "crewai.events.types.memory_events",
    "MemoryQueryStartedEvent": "crewai.events.types.memory_events",
    "MemoryRetrievalCompletedEvent": "crewai.events.types.memory_events",
    "MemoryRetrievalFailedEvent": "crewai.events.types.memory_events",
    "MemoryRetrievalStartedEvent": "crewai.events.types.memory_events",
    "MemorySaveCompletedEvent": "crewai.events.types.memory_events",
    "MemorySaveFailedEvent": "crewai.events.types.memory_events",
    "MemorySaveStartedEvent": "crewai.events.types.memory_events",
    # reasoning_events
    "AgentReasoningCompletedEvent": "crewai.events.types.reasoning_events",
    "AgentReasoningFailedEvent": "crewai.events.types.reasoning_events",
    "AgentReasoningStartedEvent": "crewai.events.types.reasoning_events",
    "ReasoningEvent": "crewai.events.types.reasoning_events",
    # skill_events
    "SkillActivatedEvent": "crewai.events.types.skill_events",
    "SkillDiscoveryCompletedEvent": "crewai.events.types.skill_events",
    "SkillDiscoveryStartedEvent": "crewai.events.types.skill_events",
    "SkillEvent": "crewai.events.types.skill_events",
    "SkillLoadFailedEvent": "crewai.events.types.skill_events",
    "SkillLoadedEvent": "crewai.events.types.skill_events",
    # task_events
    "TaskCompletedEvent": "crewai.events.types.task_events",
    "TaskEvaluationEvent": "crewai.events.types.task_events",
    "TaskFailedEvent": "crewai.events.types.task_events",
    "TaskStartedEvent": "crewai.events.types.task_events",
    # tool_usage_events
    "ToolExecutionErrorEvent": "crewai.events.types.tool_usage_events",
    "ToolSelectionErrorEvent": "crewai.events.types.tool_usage_events",
    "ToolUsageErrorEvent": "crewai.events.types.tool_usage_events",
    "ToolUsageEvent": "crewai.events.types.tool_usage_events",
    "ToolUsageFinishedEvent": "crewai.events.types.tool_usage_events",
    "ToolUsageStartedEvent": "crewai.events.types.tool_usage_events",
    "ToolValidateInputErrorEvent": "crewai.events.types.tool_usage_events",
}

_extension_exports: dict[str, Any] = {}


def __getattr__(name: str) -> Any:
    """Lazy import for event types and registered extensions."""
    if name in _LAZY_EVENT_MAPPING:
        module_path = _LAZY_EVENT_MAPPING[name]
        module = importlib.import_module(module_path)
        val = getattr(module, name)
        globals()[name] = val  # cache for subsequent access
        return val

    if name in _extension_exports:
        value = _extension_exports[name]
        if isinstance(value, str):
            module_path, _, attr_name = value.rpartition(".")
            if module_path:
                module = importlib.import_module(module_path)
                return getattr(module, attr_name)
            return importlib.import_module(value)
        return value

    msg = f"module {__name__!r} has no attribute {name!r}"
    raise AttributeError(msg)


__all__ = [
@@ -140,6 +293,18 @@ __all__ = [
    "AgentReasoningFailedEvent",
    "AgentReasoningStartedEvent",
    "BaseEventListener",
    "CheckpointBaseEvent",
    "CheckpointCompletedEvent",
    "CheckpointFailedEvent",
    "CheckpointForkBaseEvent",
    "CheckpointForkCompletedEvent",
    "CheckpointForkStartedEvent",
    "CheckpointPrunedEvent",
    "CheckpointRestoreBaseEvent",
    "CheckpointRestoreCompletedEvent",
    "CheckpointRestoreFailedEvent",
    "CheckpointRestoreStartedEvent",
    "CheckpointStartedEvent",
    "CircularDependencyError",
    "CrewKickoffCompletedEvent",
    "CrewKickoffFailedEvent",
@@ -214,42 +379,3 @@ __all__ = [
    "_extension_exports",
    "crewai_event_bus",
]
-_AGENT_EVENT_MAPPING = {
-    "AgentEvaluationCompletedEvent": "crewai.events.types.agent_events",
-    "AgentEvaluationFailedEvent": "crewai.events.types.agent_events",
-    "AgentEvaluationStartedEvent": "crewai.events.types.agent_events",
-    "AgentExecutionCompletedEvent": "crewai.events.types.agent_events",
-    "AgentExecutionErrorEvent": "crewai.events.types.agent_events",
-    "AgentExecutionStartedEvent": "crewai.events.types.agent_events",
-    "LiteAgentExecutionCompletedEvent": "crewai.events.types.agent_events",
-    "LiteAgentExecutionErrorEvent": "crewai.events.types.agent_events",
-    "LiteAgentExecutionStartedEvent": "crewai.events.types.agent_events",
-}
-
-_extension_exports: dict[str, Any] = {}
-
-
-def __getattr__(name: str) -> Any:
-    """Lazy import for agent events and registered extensions."""
-    if name in _AGENT_EVENT_MAPPING:
-        import importlib
-
-        module_path = _AGENT_EVENT_MAPPING[name]
-        module = importlib.import_module(module_path)
-        return getattr(module, name)
-
-    if name in _extension_exports:
-        import importlib
-
-        value = _extension_exports[name]
-        if isinstance(value, str):
-            module_path, _, attr_name = value.rpartition(".")
-            if module_path:
-                module = importlib.import_module(module_path)
-                return getattr(module, attr_name)
-            return importlib.import_module(value)
-        return value
-
-    msg = f"module {__name__!r} has no attribute {name!r}"
-    raise AttributeError(msg)
@@ -64,6 +64,22 @@ P = ParamSpec("P")
R = TypeVar("R")


_replaying: contextvars.ContextVar[bool] = contextvars.ContextVar(
    "crewai_event_replaying", default=False
)


def is_replaying() -> bool:
    """Return True if the current context is dispatching a replayed event.

    Listeners with side effects (checkpoint writes, external API calls that
    should not be repeated) should early-return when this is true. Listeners
    whose purpose is reconstructing timeline state (trace batch, console
    formatter) should ignore the flag and process replayed events normally.
    """
    return _replaying.get()


class CrewAIEventsBus:
    """Singleton event bus for handling events in CrewAI.

@@ -261,6 +277,11 @@ class CrewAIEventsBus:
        self._runtime_state = state
        self._registered_entity_ids = {id(e) for e in state.root}

    @property
    def runtime_state(self) -> RuntimeState | None:
        """The RuntimeState currently attached to the bus, if any."""
        return self._runtime_state

    def register_entity(self, entity: Any) -> None:
        """Add an entity to the RuntimeState, creating it if needed.

@@ -568,6 +589,87 @@ class CrewAIEventsBus:

        return None

    async def _acall_handlers_replaying(
        self,
        source: Any,
        event: BaseEvent,
        handlers: AsyncHandlerSet,
    ) -> None:
        """Call async handlers with the replaying flag set on the loop thread."""
        token = _replaying.set(True)
        try:
            await self._acall_handlers(source, event, handlers)
        finally:
            _replaying.reset(token)

    async def _emit_with_dependencies_replaying(
        self, source: Any, event: BaseEvent
    ) -> None:
        """Dependency-aware dispatch with the replaying flag set."""
        token = _replaying.set(True)
        try:
            await self._emit_with_dependencies(source, event)
        finally:
            _replaying.reset(token)

    def replay(self, source: Any, event: BaseEvent) -> Future[None] | None:
        """Dispatch a previously-recorded event without mutating its fields.

        Unlike :meth:`emit`, this does not run ``_prepare_event`` (so stored
        event ids and ``emission_sequence`` are preserved) and does not
        re-record the event. Listeners can call :func:`is_replaying` to
        opt out of side-effectful processing.

        Args:
            source: The emitting object.
            event: The previously-recorded event to dispatch.

        Returns:
            Future that completes when handlers finish, or None if no handlers.
        """
        event_type = type(event)

        with self._rwlock.r_locked():
            if self._shutting_down:
                return None
            has_dependencies = event_type in self._handler_dependencies
            sync_handlers = self._sync_handlers.get(event_type, frozenset())
            async_handlers = self._async_handlers.get(event_type, frozenset())

        if not sync_handlers and not async_handlers:
            return None

        self._ensure_executor_initialized()
        self._has_pending_events = True

        token = _replaying.set(True)
        try:
            if has_dependencies:
                return self._track_future(
                    asyncio.run_coroutine_threadsafe(
                        self._emit_with_dependencies_replaying(source, event),
                        self._loop,
                    )
                )

            if sync_handlers:
                ctx = contextvars.copy_context()
                sync_future = self._sync_executor.submit(
                    ctx.run, self._call_handlers, source, event, sync_handlers
                )
                self._track_future(sync_future)
                if not async_handlers:
                    return sync_future

            return self._track_future(
                asyncio.run_coroutine_threadsafe(
                    self._acall_handlers_replaying(source, event, async_handlers),
                    self._loop,
                )
            )
        finally:
            _replaying.reset(token)

    def flush(self, timeout: float | None = 30.0) -> bool:
        """Block until all pending event handlers complete.

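The `replay()` path above marks the dispatch with a `ContextVar` so handlers can distinguish a replayed event from a live one without any change to their signatures; the token-based `set`/`reset` guarantees the flag is restored even if a handler raises. A stripped-down, synchronous sketch of that pattern (all names here are illustrative, not the crewAI API):

```python
import contextvars

# Flag visible to any handler running in this context.
_replaying = contextvars.ContextVar("replaying", default=False)


def is_replaying():
    return _replaying.get()


def replay(handler, event):
    # Set the flag only for the duration of this dispatch.
    token = _replaying.set(True)
    try:
        return handler(event)
    finally:
        _replaying.reset(token)  # restored even if the handler raises


side_effects = []


def checkpoint_writer(event):
    if is_replaying():
        return "skipped"  # side-effectful listener opts out of replays
    side_effects.append(event)
    return "written"
```

A live call `checkpoint_writer("e")` appends and returns `"written"`, while `replay(checkpoint_writer, "e")` returns `"skipped"` and leaves state untouched, which is exactly the contract the `is_replaying()` docstring describes.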
@@ -30,6 +30,17 @@ from crewai.events.types.agent_events import (
    AgentExecutionStartedEvent,
    LiteAgentExecutionCompletedEvent,
)
from crewai.events.types.checkpoint_events import (
    CheckpointCompletedEvent,
    CheckpointFailedEvent,
    CheckpointForkCompletedEvent,
    CheckpointForkStartedEvent,
    CheckpointPrunedEvent,
    CheckpointRestoreCompletedEvent,
    CheckpointRestoreFailedEvent,
    CheckpointRestoreStartedEvent,
    CheckpointStartedEvent,
)
from crewai.events.types.crew_events import (
    CrewKickoffCompletedEvent,
    CrewKickoffFailedEvent,
@@ -183,4 +194,13 @@ EventTypes = (
    | MCPToolExecutionCompletedEvent
    | MCPToolExecutionFailedEvent
    | MCPConfigFetchFailedEvent
    | CheckpointStartedEvent
    | CheckpointCompletedEvent
    | CheckpointFailedEvent
    | CheckpointForkStartedEvent
    | CheckpointForkCompletedEvent
    | CheckpointRestoreStartedEvent
    | CheckpointRestoreCompletedEvent
    | CheckpointRestoreFailedEvent
    | CheckpointPrunedEvent
)
@@ -81,8 +81,11 @@ class TraceBatchManager:
        """Initialize a new trace batch (thread-safe)"""
        with self._batch_ready_cv:
            if self.current_batch is not None:
+                # Lazy init (e.g. DefaultEnvEvent) may have created the batch without
+                # execution_type; merge metadata from a later flow/crew initializer.
+                self.current_batch.execution_metadata.update(execution_metadata)
                logger.debug(
-                    "Batch already initialized, skipping duplicate initialization"
+                    "Batch already initialized, merged execution metadata and skipped duplicate initialization"
                )
                return self.current_batch
@@ -60,12 +60,6 @@ from crewai.events.types.crew_events import (
    CrewKickoffFailedEvent,
    CrewKickoffStartedEvent,
)
-from crewai.events.types.env_events import (
-    CCEnvEvent,
-    CodexEnvEvent,
-    CursorEnvEvent,
-    DefaultEnvEvent,
-)
from crewai.events.types.flow_events import (
    FlowCreatedEvent,
    FlowFinishedEvent,
@@ -212,7 +206,6 @@ class TraceCollectionListener(BaseEventListener):
            self._listeners_setup = True
            return

-        self._register_env_event_handlers(crewai_event_bus)
        self._register_flow_event_handlers(crewai_event_bus)
        self._register_context_event_handlers(crewai_event_bus)
        self._register_action_event_handlers(crewai_event_bus)
@@ -221,25 +214,6 @@ class TraceCollectionListener(BaseEventListener):

        self._listeners_setup = True

-    def _register_env_event_handlers(self, event_bus: CrewAIEventsBus) -> None:
-        """Register handlers for environment context events."""
-
-        @event_bus.on(CCEnvEvent)
-        def on_cc_env(source: Any, event: CCEnvEvent) -> None:
-            self._handle_action_event("cc_env", source, event)
-
-        @event_bus.on(CodexEnvEvent)
-        def on_codex_env(source: Any, event: CodexEnvEvent) -> None:
-            self._handle_action_event("codex_env", source, event)
-
-        @event_bus.on(CursorEnvEvent)
-        def on_cursor_env(source: Any, event: CursorEnvEvent) -> None:
-            self._handle_action_event("cursor_env", source, event)
-
-        @event_bus.on(DefaultEnvEvent)
-        def on_default_env(source: Any, event: DefaultEnvEvent) -> None:
-            self._handle_action_event("default_env", source, event)
-
    def _register_flow_event_handlers(self, event_bus: CrewAIEventsBus) -> None:
        """Register handlers for flow events."""

@@ -286,8 +260,8 @@ class TraceCollectionListener(BaseEventListener):
        if self.batch_manager.batch_owner_type != "flow":
            # Always call _initialize_crew_batch to claim ownership.
-            # If batch was already initialized by a concurrent action event
-            # (race condition with DefaultEnvEvent), initialize_batch() returns
-            # early but batch_owner_type is still correctly set to "crew".
+            # (e.g. LLM/tool before crew_kickoff_started), initialize_batch()
+            # returns early but batch_owner_type is still correctly set to "crew".
+            # Skip only when a parent flow already owns the batch.
            self._initialize_crew_batch(source, event)
            self._handle_trace_event("crew_kickoff_started", source, event)
lib/crewai/src/crewai/events/types/checkpoint_events.py (new file, 97 lines)
@@ -0,0 +1,97 @@
"""Event family for automatic state checkpointing and forking."""

from typing import Literal

from crewai.events.base_events import BaseEvent


class CheckpointBaseEvent(BaseEvent):
    """Base event for checkpoint lifecycle operations."""

    type: str
    location: str
    provider: str
    trigger: str | None = None
    branch: str | None = None
    parent_id: str | None = None


class CheckpointStartedEvent(CheckpointBaseEvent):
    """Event emitted immediately before a checkpoint is written."""

    type: Literal["checkpoint_started"] = "checkpoint_started"


class CheckpointCompletedEvent(CheckpointBaseEvent):
    """Event emitted when a checkpoint has been written successfully."""

    type: Literal["checkpoint_completed"] = "checkpoint_completed"
    checkpoint_id: str
    duration_ms: float


class CheckpointFailedEvent(CheckpointBaseEvent):
    """Event emitted when a checkpoint write fails."""

    type: Literal["checkpoint_failed"] = "checkpoint_failed"
    error: str


class CheckpointPrunedEvent(CheckpointBaseEvent):
    """Event emitted after pruning old checkpoints from a branch."""

    type: Literal["checkpoint_pruned"] = "checkpoint_pruned"
    removed_count: int
    max_checkpoints: int


class CheckpointForkBaseEvent(BaseEvent):
    """Base event for fork lifecycle operations on a RuntimeState."""

    type: str
    branch: str
    parent_branch: str | None = None
    parent_checkpoint_id: str | None = None


class CheckpointForkStartedEvent(CheckpointForkBaseEvent):
    """Event emitted immediately before a fork relabels the branch."""

    type: Literal["checkpoint_fork_started"] = "checkpoint_fork_started"


class CheckpointForkCompletedEvent(CheckpointForkBaseEvent):
    """Event emitted after a fork has established the new branch."""

    type: Literal["checkpoint_fork_completed"] = "checkpoint_fork_completed"


class CheckpointRestoreBaseEvent(BaseEvent):
    """Base event for checkpoint restore lifecycle operations."""

    type: str
    location: str
    provider: str | None = None


class CheckpointRestoreStartedEvent(CheckpointRestoreBaseEvent):
    """Event emitted immediately before a checkpoint restore begins."""

    type: Literal["checkpoint_restore_started"] = "checkpoint_restore_started"


class CheckpointRestoreCompletedEvent(CheckpointRestoreBaseEvent):
    """Event emitted when a checkpoint has been restored successfully."""

    type: Literal["checkpoint_restore_completed"] = "checkpoint_restore_completed"
    checkpoint_id: str
    branch: str | None = None
    parent_id: str | None = None
    duration_ms: float


class CheckpointRestoreFailedEvent(CheckpointRestoreBaseEvent):
    """Event emitted when a checkpoint restore fails."""

    type: Literal["checkpoint_restore_failed"] = "checkpoint_restore_failed"
    error: str
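Each concrete class in the new file above pins a `Literal` string in its `type` field, so a serialized event carries its own discriminator and can be routed back to the right class on load. A stdlib-only sketch of that discriminated-event idea (the real file uses pydantic models; these dataclass names and fields are simplified stand-ins):

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class CheckpointCompleted:
    location: str
    provider: str
    checkpoint_id: str
    duration_ms: float
    type: str = "checkpoint_completed"  # discriminator baked into the payload


@dataclass
class CheckpointFailed:
    location: str
    provider: str
    error: str
    type: str = "checkpoint_failed"


# Route a serialized payload back to its class via the "type" field.
_EVENT_TYPES = {
    "checkpoint_completed": CheckpointCompleted,
    "checkpoint_failed": CheckpointFailed,
}


def decode_event(payload: str):
    data = json.loads(payload)
    return _EVENT_TYPES[data.pop("type")](**data)
```

Round-tripping `json.dumps(asdict(event))` through `decode_event` reconstructs an equal instance, which is the property the `Literal["…"]` defaults buy in the pydantic version.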
@@ -45,6 +45,7 @@ from pydantic import (
    BeforeValidator,
    ConfigDict,
    Field,
    PlainSerializer,
    PrivateAttr,
    SerializeAsAny,
    ValidationError,
@@ -58,6 +59,7 @@ from crewai.events.event_bus import crewai_event_bus
from crewai.events.event_context import (
    get_current_parent_id,
    reset_last_event_id,
    restore_event_scope,
    triggered_by_scope,
)
from crewai.events.listeners.tracing.trace_listener import (
@@ -157,6 +159,37 @@ def _resolve_persistence(value: Any) -> Any:
    return value


_INITIAL_STATE_CLASS_MARKER = "__crewai_pydantic_class_schema__"


def _serialize_initial_state(value: Any) -> Any:
    """Make ``initial_state`` safe for JSON checkpoint serialization.

    ``BaseModel`` class refs are emitted as their JSON schema under a sentinel
    marker key so deserialization can round-trip them back to a class.
    ``BaseModel`` instances are dumped to JSON (round-trip as plain dicts,
    which ``_create_initial_state`` accepts). Bare ``type`` values that are
    not ``BaseModel`` subclasses (e.g. ``dict``) are dropped since they
    can't be represented in JSON.
    """
    if isinstance(value, type):
        if issubclass(value, BaseModel):
            return {_INITIAL_STATE_CLASS_MARKER: value.model_json_schema()}
        return None
    if isinstance(value, BaseModel):
        return value.model_dump(mode="json")
    return value


def _deserialize_initial_state(value: Any) -> Any:
    """Rehydrate a class ref serialized by :func:`_serialize_initial_state`."""
    if isinstance(value, dict) and _INITIAL_STATE_CLASS_MARKER in value:
        from crewai.utilities.pydantic_schema_utils import create_model_from_schema

        return create_model_from_schema(value[_INITIAL_STATE_CLASS_MARKER])
    return value
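The serialize/deserialize pair above solves a JSON round-trip problem: `initial_state` may hold a class reference, which JSON cannot represent, so the class is encoded under a sentinel marker key and rehydrated on load. A minimal stdlib sketch of the same sentinel-marker technique (the real code stores a pydantic JSON schema; this version stores a registered class name instead, and every name here is hypothetical):

```python
import json

_MARKER = "__class_ref__"  # sentinel key flagging "this dict encodes a class"
_REGISTRY = {}


def register(cls):
    _REGISTRY[cls.__name__] = cls
    return cls


def serialize(value):
    if isinstance(value, type):  # a class reference, not JSON-representable
        return {_MARKER: value.__name__}
    return value  # plain JSON-safe data passes through untouched


def deserialize(value):
    if isinstance(value, dict) and _MARKER in value:
        return _REGISTRY[value[_MARKER]]
    return value


@register
class MyState:
    pass
```

`deserialize(json.loads(json.dumps(serialize(MyState))))` yields `MyState` again, while ordinary dicts round-trip unchanged because they never carry the marker key.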


class FlowState(BaseModel):
    """Base model for all flow states, ensuring each state has a unique ID."""

@@ -908,7 +941,11 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):

    entity_type: Literal["flow"] = "flow"

-    initial_state: Any = Field(default=None)
+    initial_state: Annotated[  # type: ignore[type-arg]
+        type[BaseModel] | type[dict] | dict[str, Any] | BaseModel | None,
+        BeforeValidator(_deserialize_initial_state),
+        PlainSerializer(_serialize_initial_state, return_type=Any, when_used="json"),
+    ] = Field(default=None)
    name: str | None = Field(default=None)
    tracing: bool | None = Field(default=None)
    stream: bool = Field(default=False)
@@ -980,13 +1017,18 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
            A Flow instance on the new branch. Call kickoff() to run.
        """
        flow = cls.from_checkpoint(config)
-        state = crewai_event_bus._runtime_state
+        state = crewai_event_bus.runtime_state
+        if state is None:
+            raise RuntimeError(
+                "Cannot fork: no runtime state on the event bus. "
+                "Ensure from_checkpoint() succeeded before calling fork()."
+            )
        state.fork(branch)
        new_id = str(uuid4())
        if isinstance(flow._state, dict):
            flow._state["id"] = new_id
        else:
            object.__setattr__(flow._state, "id", new_id)
        return flow

    checkpoint_completed_methods: set[str] | None = Field(default=None)
@@ -1008,6 +1050,8 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
        }
        if self.checkpoint_state is not None:
            self._restore_state(self.checkpoint_state)
            restore_event_scope(())
            reset_last_event_id()

    _methods: dict[FlowMethodName, FlowMethod[Any, Any]] = PrivateAttr(
        default_factory=dict
@@ -1030,6 +1074,7 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
    _human_feedback_method_outputs: dict[str, Any] = PrivateAttr(default_factory=dict)
    _input_history: list[InputHistoryEntry] = PrivateAttr(default_factory=list)
    _state: Any = PrivateAttr(default=None)
    _execution_id: str = PrivateAttr(default_factory=lambda: str(uuid4()))

    def __class_getitem__(cls: type[Flow[T]], item: type[T]) -> type[Flow[T]]:  # type: ignore[override]
        class _FlowGeneric(cls):  # type: ignore[valid-type,misc]
@@ -1503,6 +1548,8 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
        except Exception:
            logger.warning("FlowStartedEvent handler failed", exc_info=True)

        get_env_context()

        context = self._pending_feedback_context
        emit = context.emit
        default_outcome = context.default_outcome
@@ -1818,6 +1865,27 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
        except (AttributeError, TypeError):
            return ""  # Safely handle any unexpected attribute access issues

    @property
    def execution_id(self) -> str:
        """Stable identifier for this flow execution.

        Separate from ``flow_id`` / ``state.id``, which consumers may
        override via ``kickoff(inputs={"id": ...})`` to resume a persisted
        flow. ``execution_id`` is never affected by ``inputs`` and stays
        stable for the lifetime of a single run, so it is the correct key
        for telemetry, tracing, and any external correlation that must
        uniquely identify a single execution even when callers pass an
        ``id`` in ``inputs``.

        Defaults to a fresh ``uuid4`` per ``Flow`` instance; assign to
        override when an outer system already has an execution identity.
        """
        return self._execution_id

    @execution_id.setter
    def execution_id(self, value: str) -> None:
        self._execution_id = value
|
||||
def _initialize_state(self, inputs: dict[str, Any]) -> None:
|
||||
"""Initialize or update flow state with new inputs.
|
||||
|
||||
@@ -2004,7 +2072,6 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
|
||||
restored = apply_checkpoint(self, from_checkpoint)
|
||||
if restored is not None:
|
||||
return restored.kickoff(inputs=inputs, input_files=input_files)
|
||||
get_env_context()
|
||||
if self.stream:
|
||||
result_holder: list[Any] = []
|
||||
current_task_info: TaskInfo = {
|
||||
@@ -2132,13 +2199,15 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
|
||||
flow_id_token = None
|
||||
request_id_token = None
|
||||
if current_flow_id.get() is None:
|
||||
flow_id_token = current_flow_id.set(self.flow_id)
|
||||
flow_id_token = current_flow_id.set(self.execution_id)
|
||||
if current_flow_request_id.get() is None:
|
||||
request_id_token = current_flow_request_id.set(self.flow_id)
|
||||
request_id_token = current_flow_request_id.set(self.execution_id)
|
||||
|
||||
try:
|
||||
# Reset flow state for fresh execution unless restoring from persistence
|
||||
is_restoring = inputs and "id" in inputs and self.persistence is not None
|
||||
is_restoring = (
|
||||
inputs and "id" in inputs and self.persistence is not None
|
||||
) or self.checkpoint_completed_methods is not None
|
||||
if not is_restoring:
|
||||
# Clear completed methods and outputs for a fresh start
|
||||
self._completed_methods.clear()
|
||||
@@ -2204,9 +2273,16 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
|
||||
f"Flow started with ID: {self.flow_id}", color="bold magenta"
|
||||
)
|
||||
|
||||
# After FlowStarted (when not suppressed): env events must not pre-empt
|
||||
# trace batch init with implicit "crew" execution_type.
|
||||
get_env_context()
|
||||
|
||||
if inputs is not None and "id" not in inputs:
|
||||
self._initialize_state(inputs)
|
||||
|
||||
if self._is_execution_resuming:
|
||||
await self._replay_recorded_events()
|
||||
|
||||
try:
|
||||
# Determine which start methods to execute at kickoff
|
||||
# Conditional start methods (with __trigger_methods__) are only triggered by their conditions
|
||||
@@ -2354,6 +2430,44 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
|
||||
"""
|
||||
return await self.kickoff_async(inputs, input_files, from_checkpoint)
|
||||
|
||||
async def _replay_recorded_events(self) -> None:
|
||||
"""Dispatch recorded ``MethodExecution*`` events from the event record."""
|
||||
state = crewai_event_bus.runtime_state
|
||||
if state is None:
|
||||
return
|
||||
record = state.event_record
|
||||
if len(record) == 0:
|
||||
return
|
||||
|
||||
replayable = (
|
||||
MethodExecutionStartedEvent,
|
||||
MethodExecutionFinishedEvent,
|
||||
MethodExecutionFailedEvent,
|
||||
)
|
||||
flow_name = self.name or self.__class__.__name__
|
||||
nodes = sorted(
|
||||
(
|
||||
n
|
||||
for n in record.all_nodes()
|
||||
if isinstance(n.event, replayable)
|
||||
and n.event.flow_name == flow_name
|
||||
and n.event.method_name in self._completed_methods
|
||||
),
|
||||
key=lambda n: n.event.emission_sequence or 0,
|
||||
)
|
||||
|
||||
for node in nodes:
|
||||
future = crewai_event_bus.replay(self, node.event)
|
||||
if future is not None:
|
||||
try:
|
||||
await asyncio.wrap_future(future)
|
||||
except Exception:
|
||||
logger.warning(
|
||||
"Replayed event handler failed: %s",
|
||||
node.event.type,
|
||||
exc_info=True,
|
||||
)
|
||||
|
||||
async def _execute_start_method(self, start_method_name: FlowMethodName) -> None:
|
||||
"""Executes a flow's start method and its triggered listeners.
|
||||
|
||||
|
||||
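The flow hunks above separate `execution_id` from `flow_id`: callers can rebind `flow_id` via `kickoff(inputs={"id": ...})` to resume a persisted flow, while `execution_id` stays fixed per run. A minimal standalone sketch of that split (illustrative names only, not the crewai API):

```python
import uuid


class FlowRun:
    """Sketch of the execution_id pattern: flow_id may be overridden by
    caller inputs, while execution_id stays stable for the whole run."""

    def __init__(self):
        self.flow_id = str(uuid.uuid4())
        self._execution_id = str(uuid.uuid4())

    def kickoff(self, inputs=None):
        # Callers may pass {"id": ...} to resume: this rebinds flow_id only.
        if inputs and "id" in inputs:
            self.flow_id = inputs["id"]

    @property
    def execution_id(self):
        return self._execution_id


run = FlowRun()
before = run.execution_id
run.kickoff(inputs={"id": "persisted-123"})

assert run.flow_id == "persisted-123"  # overridden by inputs
assert run.execution_id == before      # unaffected: safe telemetry key
```

Because `execution_id` never changes underneath a running flow, it is the value the diff now binds into `current_flow_id` / `current_flow_request_id` context vars.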
@@ -16,7 +16,6 @@ from typing import (
     get_origin,
 )
 import uuid
-import warnings

 from pydantic import (
     UUID4,
@@ -26,7 +25,7 @@ from pydantic import (
     field_validator,
     model_validator,
 )
-from typing_extensions import Self
+from typing_extensions import Self, deprecated


 if TYPE_CHECKING:
@@ -173,9 +172,12 @@ def _kickoff_with_a2a_support(
 )


+@deprecated(
+    "LiteAgent is deprecated and will be removed in v2.0.0.",
+    category=FutureWarning,
+)
 class LiteAgent(FlowTrackable, BaseModel):
-    """
-    A lightweight agent that can process messages and use tools.
+    """A lightweight agent that can process messages and use tools.

     .. deprecated::
         LiteAgent is deprecated and will be removed in a future version.
@@ -278,18 +280,6 @@ class LiteAgent(FlowTrackable, BaseModel):
     )
     _memory: Any = PrivateAttr(default=None)

-    @model_validator(mode="after")
-    def emit_deprecation_warning(self) -> Self:
-        """Emit deprecation warning for LiteAgent usage."""
-        warnings.warn(
-            "LiteAgent is deprecated and will be removed in a future version. "
-            "Use Agent().kickoff(messages) instead, which provides the same "
-            "functionality with additional features like memory and knowledge support.",
-            DeprecationWarning,
-            stacklevel=2,
-        )
-        return self
-
     @model_validator(mode="after")
     def setup_llm(self) -> Self:
         """Set up the LLM and other components after initialization."""
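The lite_agent hunks swap a runtime `model_validator` warning for the `typing_extensions.deprecated` class decorator, which also lets type checkers flag usages statically. A rough plain-Python sketch of what such a decorator does at runtime (the `deprecated` helper below is hand-rolled for illustration, not the typing_extensions implementation):

```python
import warnings


def deprecated(message, category=FutureWarning):
    """Minimal sketch of a class-deprecation decorator: warn on each
    instantiation of the decorated class."""
    def wrap(cls):
        original_init = cls.__init__

        def __init__(self, *args, **kwargs):
            warnings.warn(message, category=category, stacklevel=2)
            original_init(self, *args, **kwargs)

        cls.__init__ = __init__
        return cls
    return wrap


@deprecated("LiteAgent is deprecated and will be removed in v2.0.0.")
class LiteAgent:
    pass


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    LiteAgent()

assert any(issubclass(w.category, FutureWarning) for w in caught)
```

Moving the warning to a decorator also allows dropping the now-unused `import warnings` from the module, which is exactly what the first hunk does.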
@@ -51,6 +51,7 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
 )
 from crewai.utilities.logger_utils import suppress_warnings
+from crewai.utilities.string_utils import sanitize_tool_name
 from crewai.utilities.token_counter_callback import TokenCalcHandler


 try:
@@ -75,8 +76,13 @@ try:
     from litellm.types.utils import (
         ChatCompletionDeltaToolCall,
         Choices,
+        Delta as LiteLLMDelta,
         Function,
+        Message,
         ModelResponse,
+        ModelResponseBase,
+        ModelResponseStream,
+        StreamingChoices as LiteLLMStreamingChoices,
     )
     from litellm.utils import supports_response_schema

@@ -85,6 +91,11 @@ except ImportError:
     LITELLM_AVAILABLE = False
     litellm = None  # type: ignore[assignment]
     Choices = None  # type: ignore[assignment, misc]
+    LiteLLMDelta = None  # type: ignore[assignment, misc]
+    Message = None  # type: ignore[assignment, misc]
+    ModelResponseBase = None  # type: ignore[assignment, misc]
+    ModelResponseStream = None  # type: ignore[assignment, misc]
+    LiteLLMStreamingChoices = None  # type: ignore[assignment, misc]
     get_supported_openai_params = None  # type: ignore[assignment]
     ChatCompletionDeltaToolCall = None  # type: ignore[assignment, misc]
     Function = None  # type: ignore[assignment, misc]
@@ -164,6 +175,16 @@ LLM_CONTEXT_WINDOW_SIZES: Final[dict[str, int]] = {
     "us.amazon.nova-pro-v1:0": 300000,
     "us.amazon.nova-micro-v1:0": 128000,
     "us.amazon.nova-lite-v1:0": 300000,
+    # Claude 4 models
+    "us.anthropic.claude-opus-4-7": 1000000,
+    "us.anthropic.claude-sonnet-4-6": 1000000,
+    "us.anthropic.claude-opus-4-6-v1": 1000000,
+    "us.anthropic.claude-opus-4-5-20251101-v1:0": 200000,
+    "us.anthropic.claude-haiku-4-5-20251001-v1:0": 200000,
+    "us.anthropic.claude-sonnet-4-5-20250929-v1:0": 200000,
+    "us.anthropic.claude-opus-4-1-20250805-v1:0": 200000,
+    "us.anthropic.claude-opus-4-20250514-v1:0": 200000,
+    "us.anthropic.claude-sonnet-4-20250514-v1:0": 200000,
     "us.anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
     "us.anthropic.claude-3-5-haiku-20241022-v1:0": 200000,
     "us.anthropic.claude-3-5-sonnet-20241022-v2:0": 200000,
@@ -182,15 +203,44 @@ LLM_CONTEXT_WINDOW_SIZES: Final[dict[str, int]] = {
     "eu.anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
     "eu.anthropic.claude-3-sonnet-20240229-v1:0": 200000,
     "eu.anthropic.claude-3-haiku-20240307-v1:0": 200000,
+    # Claude 4 EU
+    "eu.anthropic.claude-opus-4-7": 1000000,
+    "eu.anthropic.claude-sonnet-4-6": 1000000,
+    "eu.anthropic.claude-opus-4-6-v1": 1000000,
+    "eu.anthropic.claude-opus-4-5-20251101-v1:0": 200000,
+    "eu.anthropic.claude-haiku-4-5-20251001-v1:0": 200000,
+    "eu.anthropic.claude-sonnet-4-5-20250929-v1:0": 200000,
+    "eu.anthropic.claude-opus-4-1-20250805-v1:0": 200000,
+    "eu.anthropic.claude-opus-4-20250514-v1:0": 200000,
+    "eu.anthropic.claude-sonnet-4-20250514-v1:0": 200000,
     "eu.meta.llama3-2-3b-instruct-v1:0": 131000,
     "eu.meta.llama3-2-1b-instruct-v1:0": 131000,
     "apac.anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
     "apac.anthropic.claude-3-5-sonnet-20241022-v2:0": 200000,
     "apac.anthropic.claude-3-sonnet-20240229-v1:0": 200000,
     "apac.anthropic.claude-3-haiku-20240307-v1:0": 200000,
+    # Claude 4 APAC
+    "apac.anthropic.claude-opus-4-7": 1000000,
+    "apac.anthropic.claude-sonnet-4-6": 1000000,
+    "apac.anthropic.claude-opus-4-6-v1": 1000000,
+    "apac.anthropic.claude-opus-4-5-20251101-v1:0": 200000,
+    "apac.anthropic.claude-haiku-4-5-20251001-v1:0": 200000,
+    "apac.anthropic.claude-sonnet-4-5-20250929-v1:0": 200000,
+    "apac.anthropic.claude-opus-4-1-20250805-v1:0": 200000,
+    "apac.anthropic.claude-opus-4-20250514-v1:0": 200000,
+    "apac.anthropic.claude-sonnet-4-20250514-v1:0": 200000,
     "amazon.nova-pro-v1:0": 300000,
     "amazon.nova-micro-v1:0": 128000,
     "amazon.nova-lite-v1:0": 300000,
+    "anthropic.claude-opus-4-7": 1000000,
+    "anthropic.claude-sonnet-4-6": 1000000,
+    "anthropic.claude-opus-4-6-v1": 1000000,
+    "anthropic.claude-opus-4-5-20251101-v1:0": 200000,
+    "anthropic.claude-haiku-4-5-20251001-v1:0": 200000,
+    "anthropic.claude-sonnet-4-5-20250929-v1:0": 200000,
+    "anthropic.claude-opus-4-1-20250805-v1:0": 200000,
+    "anthropic.claude-opus-4-20250514-v1:0": 200000,
+    "anthropic.claude-sonnet-4-20250514-v1:0": 200000,
     "anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
     "anthropic.claude-3-5-haiku-20241022-v1:0": 200000,
     "anthropic.claude-3-5-sonnet-20241022-v2:0": 200000,
@@ -709,7 +759,7 @@ class LLM(BaseLLM):
             chunk_content = None
             response_id = None

-            if hasattr(chunk, "id"):
+            if isinstance(chunk, ModelResponseBase):
                 response_id = chunk.id

             # Safely extract content from various chunk formats
@@ -718,18 +768,16 @@ class LLM(BaseLLM):
             choices = None
             if isinstance(chunk, dict) and "choices" in chunk:
                 choices = chunk["choices"]
-            elif hasattr(chunk, "choices"):
-                # Check if choices is not a type but an actual attribute with value
-                if not isinstance(chunk.choices, type):
-                    choices = chunk.choices
+            elif isinstance(chunk, ModelResponseStream):
+                choices = chunk.choices

             # Try to extract usage information if available
+            # NOTE: usage is a pydantic extra field on ModelResponseBase,
+            # so it must be accessed via model_extra.
             if isinstance(chunk, dict) and "usage" in chunk:
                 usage_info = chunk["usage"]
-            elif hasattr(chunk, "usage"):
-                # Check if usage is not a type but an actual attribute with value
-                if not isinstance(chunk.usage, type):
-                    usage_info = chunk.usage
+            elif isinstance(chunk, ModelResponseBase) and chunk.model_extra:
+                usage_info = chunk.model_extra.get("usage") or usage_info

             if choices and len(choices) > 0:
                 choice = choices[0]
@@ -738,7 +786,7 @@ class LLM(BaseLLM):
                 delta = None
                 if isinstance(choice, dict) and "delta" in choice:
                     delta = choice["delta"]
-                elif hasattr(choice, "delta"):
+                elif isinstance(choice, LiteLLMStreamingChoices):
                     delta = choice.delta

                 # Extract content from delta
@@ -748,7 +796,7 @@ class LLM(BaseLLM):
                     if "content" in delta and delta["content"] is not None:
                         chunk_content = delta["content"]
                     # Handle object format
-                    elif hasattr(delta, "content"):
+                    elif isinstance(delta, LiteLLMDelta):
                         chunk_content = delta.content

                 # Handle case where content might be None or empty
@@ -821,9 +869,8 @@ class LLM(BaseLLM):
                 choices = None
                 if isinstance(last_chunk, dict) and "choices" in last_chunk:
                     choices = last_chunk["choices"]
-                elif hasattr(last_chunk, "choices"):
-                    if not isinstance(last_chunk.choices, type):
-                        choices = last_chunk.choices
+                elif isinstance(last_chunk, ModelResponseStream):
+                    choices = last_chunk.choices

                 if choices and len(choices) > 0:
                     choice = choices[0]
@@ -832,14 +879,14 @@ class LLM(BaseLLM):
                     message = None
                     if isinstance(choice, dict) and "message" in choice:
                         message = choice["message"]
-                    elif hasattr(choice, "message"):
+                    elif isinstance(choice, Choices):
                         message = choice.message

                     if message:
                         content = None
                         if isinstance(message, dict) and "content" in message:
                             content = message["content"]
-                        elif hasattr(message, "content"):
+                        elif isinstance(message, Message):
                             content = message.content

                         if content:
@@ -866,24 +913,23 @@ class LLM(BaseLLM):
             choices = None
             if isinstance(last_chunk, dict) and "choices" in last_chunk:
                 choices = last_chunk["choices"]
-            elif hasattr(last_chunk, "choices"):
-                if not isinstance(last_chunk.choices, type):
-                    choices = last_chunk.choices
+            elif isinstance(last_chunk, ModelResponseStream):
+                choices = last_chunk.choices

             if choices and len(choices) > 0:
                 choice = choices[0]

-                message = None
-                if isinstance(choice, dict) and "message" in choice:
-                    message = choice["message"]
-                elif hasattr(choice, "message"):
-                    message = choice.message
+                delta = None
+                if isinstance(choice, dict) and "delta" in choice:
+                    delta = choice["delta"]
+                elif isinstance(choice, LiteLLMStreamingChoices):
+                    delta = choice.delta

-                if message:
-                    if isinstance(message, dict) and "tool_calls" in message:
-                        tool_calls = message["tool_calls"]
-                    elif hasattr(message, "tool_calls"):
-                        tool_calls = message.tool_calls
+                if delta:
+                    if isinstance(delta, dict) and "tool_calls" in delta:
+                        tool_calls = delta["tool_calls"]
+                    elif isinstance(delta, LiteLLMDelta):
+                        tool_calls = delta.tool_calls
         except Exception as e:
             logging.debug(f"Error checking for tool calls: {e}")

@@ -1037,7 +1083,7 @@ class LLM(BaseLLM):
         """
         if callbacks and len(callbacks) > 0:
             for callback in callbacks:
-                if hasattr(callback, "log_success_event"):
+                if isinstance(callback, TokenCalcHandler):
                     # Use the usage_info we've been tracking
                     if not usage_info:
                         # Try to get usage from the last chunk if we haven't already
@@ -1048,9 +1094,14 @@ class LLM(BaseLLM):
                                 and "usage" in last_chunk
                             ):
                                 usage_info = last_chunk["usage"]
-                            elif hasattr(last_chunk, "usage"):
-                                if not isinstance(last_chunk.usage, type):
-                                    usage_info = last_chunk.usage
+                            elif (
+                                isinstance(last_chunk, ModelResponseBase)
+                                and last_chunk.model_extra
+                            ):
+                                usage_info = (
+                                    last_chunk.model_extra.get("usage")
+                                    or usage_info
+                                )
                         except Exception as e:
                             logging.debug(f"Error extracting usage info: {e}")

@@ -1123,13 +1174,10 @@ class LLM(BaseLLM):
                 params["response_model"] = response_model
             response = litellm.completion(**params)

-            if (
-                hasattr(response, "usage")
-                and not isinstance(response.usage, type)
-                and response.usage
-            ):
-                usage_info = response.usage
-                self._track_token_usage_internal(usage_info)
+            if isinstance(response, ModelResponseBase) and response.model_extra:
+                usage_info = response.model_extra.get("usage")
+                if usage_info:
+                    self._track_token_usage_internal(usage_info)

         except LLMContextLengthExceededError:
             # Re-raise our own context length error
@@ -1141,7 +1189,11 @@ class LLM(BaseLLM):
                 raise LLMContextLengthExceededError(error_msg) from e
             raise

-        response_usage = self._usage_to_dict(getattr(response, "usage", None))
+        response_usage = self._usage_to_dict(
+            response.model_extra.get("usage")
+            if isinstance(response, ModelResponseBase) and response.model_extra
+            else None
+        )

         # --- 2) Handle structured output response (when response_model is provided)
         if response_model is not None:
@@ -1166,8 +1218,13 @@ class LLM(BaseLLM):
         # --- 3) Handle callbacks with usage info
         if callbacks and len(callbacks) > 0:
             for callback in callbacks:
-                if hasattr(callback, "log_success_event"):
-                    usage_info = getattr(response, "usage", None)
+                if isinstance(callback, TokenCalcHandler):
+                    usage_info = (
+                        response.model_extra.get("usage")
+                        if isinstance(response, ModelResponseBase)
+                        and response.model_extra
+                        else None
+                    )
                     if usage_info:
                         callback.log_success_event(
                             kwargs=params,
@@ -1176,7 +1233,7 @@ class LLM(BaseLLM):
                             end_time=0,
                         )
         # --- 4) Check for tool calls
-        tool_calls = getattr(response_message, "tool_calls", [])
+        tool_calls = response_message.tool_calls or []

         # --- 5) If no tool calls or no available functions, return the text response directly as long as there is a text response
         if (not tool_calls or not available_functions) and text_response:
@@ -1269,13 +1326,10 @@ class LLM(BaseLLM):
                 params["response_model"] = response_model
             response = await litellm.acompletion(**params)

-            if (
-                hasattr(response, "usage")
-                and not isinstance(response.usage, type)
-                and response.usage
-            ):
-                usage_info = response.usage
-                self._track_token_usage_internal(usage_info)
+            if isinstance(response, ModelResponseBase) and response.model_extra:
+                usage_info = response.model_extra.get("usage")
+                if usage_info:
+                    self._track_token_usage_internal(usage_info)

         except LLMContextLengthExceededError:
             # Re-raise our own context length error
@@ -1287,7 +1341,11 @@ class LLM(BaseLLM):
                 raise LLMContextLengthExceededError(error_msg) from e
             raise

-        response_usage = self._usage_to_dict(getattr(response, "usage", None))
+        response_usage = self._usage_to_dict(
+            response.model_extra.get("usage")
+            if isinstance(response, ModelResponseBase) and response.model_extra
+            else None
+        )

         if response_model is not None:
             if isinstance(response, BaseModel):
@@ -1309,8 +1367,13 @@ class LLM(BaseLLM):

         if callbacks and len(callbacks) > 0:
             for callback in callbacks:
-                if hasattr(callback, "log_success_event"):
-                    usage_info = getattr(response, "usage", None)
+                if isinstance(callback, TokenCalcHandler):
+                    usage_info = (
+                        response.model_extra.get("usage")
+                        if isinstance(response, ModelResponseBase)
+                        and response.model_extra
+                        else None
+                    )
                     if usage_info:
                         callback.log_success_event(
                             kwargs=params,
@@ -1319,7 +1382,7 @@ class LLM(BaseLLM):
                             end_time=0,
                         )

-        tool_calls = getattr(response_message, "tool_calls", [])
+        tool_calls = response_message.tool_calls or []

         if (not tool_calls or not available_functions) and text_response:
             self._handle_emit_call_events(
@@ -1394,18 +1457,19 @@ class LLM(BaseLLM):
             async for chunk in await litellm.acompletion(**params):
                 chunk_count += 1
                 chunk_content = None
-                response_id = chunk.id if hasattr(chunk, "id") else None
+                response_id = chunk.id if isinstance(chunk, ModelResponseBase) else None

                 try:
                     choices = None
                     if isinstance(chunk, dict) and "choices" in chunk:
                         choices = chunk["choices"]
-                    elif hasattr(chunk, "choices"):
-                        if not isinstance(chunk.choices, type):
-                            choices = chunk.choices
+                    elif isinstance(chunk, ModelResponseStream):
+                        choices = chunk.choices

-                    if hasattr(chunk, "usage") and chunk.usage is not None:
-                        usage_info = chunk.usage
+                    if isinstance(chunk, ModelResponseBase) and chunk.model_extra:
+                        chunk_usage = chunk.model_extra.get("usage")
+                        if chunk_usage is not None:
+                            usage_info = chunk_usage

                     if choices and len(choices) > 0:
                         first_choice = choices[0]
@@ -1413,19 +1477,19 @@ class LLM(BaseLLM):

                         if isinstance(first_choice, dict):
                             delta = first_choice.get("delta", {})
-                        elif hasattr(first_choice, "delta"):
+                        elif isinstance(first_choice, LiteLLMStreamingChoices):
                             delta = first_choice.delta

                         if delta:
                             if isinstance(delta, dict):
                                 chunk_content = delta.get("content")
-                            elif hasattr(delta, "content"):
+                            elif isinstance(delta, LiteLLMDelta):
                                 chunk_content = delta.content

                             tool_calls: list[ChatCompletionDeltaToolCall] | None = None
                             if isinstance(delta, dict):
                                 tool_calls = delta.get("tool_calls")
-                            elif hasattr(delta, "tool_calls"):
+                            elif isinstance(delta, LiteLLMDelta):
                                 tool_calls = delta.tool_calls

                             if tool_calls:
@@ -1461,7 +1525,7 @@ class LLM(BaseLLM):

         if callbacks and len(callbacks) > 0 and usage_info:
             for callback in callbacks:
-                if hasattr(callback, "log_success_event"):
+                if isinstance(callback, TokenCalcHandler):
                     callback.log_success_event(
                         kwargs=params,
                         response_obj={"usage": usage_info},
@@ -1920,7 +1984,7 @@ class LLM(BaseLLM):
             return None
         if isinstance(usage, dict):
             return usage
-        if hasattr(usage, "model_dump"):
+        if isinstance(usage, BaseModel):
             result: dict[str, Any] = usage.model_dump()
             return result
         if hasattr(usage, "__dict__"):
@@ -1984,7 +2048,7 @@ class LLM(BaseLLM):
             )
             return messages

-        provider = getattr(self, "provider", None) or self.model
+        provider = self.provider or self.model

         for msg in messages:
             files = msg.get("files")
@@ -2035,7 +2099,7 @@ class LLM(BaseLLM):
            )
            return messages

-        provider = getattr(self, "provider", None) or self.model
+        provider = self.provider or self.model

         for msg in messages:
             files = msg.get("files")

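The recurring change in these LLM hunks is replacing `hasattr` duck-typing with `isinstance` checks against LiteLLM's typed chunk classes. A small sketch of why that is safer (the `Delta` dataclass below is a stand-in assumption, not the litellm class):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Delta:
    """Toy stand-in for a typed streaming delta."""
    content: Optional[str] = None


def extract_content(delta):
    # dict chunks from raw JSON streaming
    if isinstance(delta, dict):
        return delta.get("content")
    # typed chunks: isinstance instead of hasattr, so a class object
    # passed by mistake (e.g. a failed-import sentinel) is not treated
    # as a real chunk
    if isinstance(delta, Delta):
        return delta.content
    return None


assert extract_content({"content": "hi"}) == "hi"
assert extract_content(Delta(content="yo")) == "yo"
assert extract_content(Delta) is None       # the class itself is rejected
assert hasattr(Delta, "content")            # hasattr would have passed it
```

This mirrors the old code's defensive `not isinstance(chunk.choices, type)` checks: with `isinstance` narrowing, that workaround becomes unnecessary and type checkers can verify the attribute accesses.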
@@ -423,6 +423,34 @@ AZURE_MODELS: list[AzureModels] = [


 BedrockModels: TypeAlias = Literal[
+    # Inference profiles (regional) - Claude 4
+    "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
+    "us.anthropic.claude-sonnet-4-20250514-v1:0",
+    "us.anthropic.claude-opus-4-5-20251101-v1:0",
+    "us.anthropic.claude-opus-4-20250514-v1:0",
+    "us.anthropic.claude-opus-4-1-20250805-v1:0",
+    "us.anthropic.claude-haiku-4-5-20251001-v1:0",
+    "us.anthropic.claude-sonnet-4-6",
+    "us.anthropic.claude-opus-4-6-v1",
+    # Inference profiles - shorter versions
+    "us.anthropic.claude-sonnet-4-5-v1:0",
+    "us.anthropic.claude-opus-4-5-v1:0",
+    "us.anthropic.claude-opus-4-6-v1:0",
+    "us.anthropic.claude-haiku-4-5-v1:0",
+    "eu.anthropic.claude-sonnet-4-5-v1:0",
+    "eu.anthropic.claude-opus-4-5-v1:0",
+    "eu.anthropic.claude-haiku-4-5-v1:0",
+    "apac.anthropic.claude-sonnet-4-5-v1:0",
+    "apac.anthropic.claude-opus-4-5-v1:0",
+    "apac.anthropic.claude-haiku-4-5-v1:0",
+    # Global inference profiles
+    "global.anthropic.claude-sonnet-4-5-20250929-v1:0",
+    "global.anthropic.claude-sonnet-4-20250514-v1:0",
+    "global.anthropic.claude-opus-4-5-20251101-v1:0",
+    "global.anthropic.claude-opus-4-6-v1",
+    "global.anthropic.claude-haiku-4-5-20251001-v1:0",
+    "global.anthropic.claude-sonnet-4-6",
+    # Direct model IDs
     "ai21.jamba-1-5-large-v1:0",
     "ai21.jamba-1-5-mini-v1:0",
     "amazon.nova-lite-v1:0",
@@ -496,6 +524,34 @@ BedrockModels: TypeAlias = Literal[
     "twelvelabs.pegasus-1-2-v1:0",
 ]
 BEDROCK_MODELS: list[BedrockModels] = [
+    # Inference profiles (regional) - Claude 4
+    "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
+    "us.anthropic.claude-sonnet-4-20250514-v1:0",
+    "us.anthropic.claude-opus-4-5-20251101-v1:0",
+    "us.anthropic.claude-opus-4-20250514-v1:0",
+    "us.anthropic.claude-opus-4-1-20250805-v1:0",
+    "us.anthropic.claude-haiku-4-5-20251001-v1:0",
+    "us.anthropic.claude-sonnet-4-6",
+    "us.anthropic.claude-opus-4-6-v1",
+    # Inference profiles - shorter versions
+    "us.anthropic.claude-sonnet-4-5-v1:0",
+    "us.anthropic.claude-opus-4-5-v1:0",
+    "us.anthropic.claude-opus-4-6-v1:0",
+    "us.anthropic.claude-haiku-4-5-v1:0",
+    "eu.anthropic.claude-sonnet-4-5-v1:0",
+    "eu.anthropic.claude-opus-4-5-v1:0",
+    "eu.anthropic.claude-haiku-4-5-v1:0",
+    "apac.anthropic.claude-sonnet-4-5-v1:0",
+    "apac.anthropic.claude-opus-4-5-v1:0",
+    "apac.anthropic.claude-haiku-4-5-v1:0",
+    # Global inference profiles
+    "global.anthropic.claude-sonnet-4-5-20250929-v1:0",
+    "global.anthropic.claude-sonnet-4-20250514-v1:0",
+    "global.anthropic.claude-opus-4-5-20251101-v1:0",
+    "global.anthropic.claude-opus-4-6-v1",
+    "global.anthropic.claude-haiku-4-5-20251001-v1:0",
+    "global.anthropic.claude-sonnet-4-6",
+    # Direct model IDs
     "ai21.jamba-1-5-large-v1:0",
     "ai21.jamba-1-5-mini-v1:0",
     "amazon.nova-lite-v1:0",

@@ -11,10 +11,14 @@ from crewai.events.types.llm_events import LLMCallType
 from crewai.llms.base_llm import BaseLLM, JsonResponseFormat, llm_call_context
 from crewai.llms.hooks.base import BaseInterceptor
 from crewai.llms.hooks.transport import AsyncHTTPTransport, HTTPTransport
+from crewai.llms.providers.utils.common import safe_tool_conversion
 from crewai.utilities.agent_utils import is_context_length_exceeded
 from crewai.utilities.exceptions.context_window_exceeding_exception import (
     LLMContextLengthExceededError,
 )
+from crewai.utilities.pydantic_schema_utils import (
+    sanitize_tool_params_for_anthropic_strict,
+)
 from crewai.utilities.types import LLMMessage


@@ -189,16 +193,41 @@ class AnthropicCompletion(BaseLLM):

     @model_validator(mode="after")
     def _init_clients(self) -> AnthropicCompletion:
-        self._client = Anthropic(**self._get_client_params())
+        """Eagerly build clients when the API key is available, otherwise
+        defer so ``LLM(model="anthropic/...")`` can be constructed at module
+        import time even before deployment env vars are set.
+        """
+        try:
+            self._client = self._build_sync_client()
+            self._async_client = self._build_async_client()
+        except ValueError:
+            pass
+        return self

-        async_client_params = self._get_client_params()
+    def _build_sync_client(self) -> Any:
+        return Anthropic(**self._get_client_params())
+
+    def _build_async_client(self) -> Any:
+        # Skip the sync httpx.Client that `_get_client_params` would
+        # otherwise construct under `interceptor`; we attach an async one
+        # below and would leak the sync one if both were built.
+        async_client_params = self._get_client_params(include_http_client=False)
         if self.interceptor:
             async_transport = AsyncHTTPTransport(interceptor=self.interceptor)
-            async_http_client = httpx.AsyncClient(transport=async_transport)
-            async_client_params["http_client"] = async_http_client
+            async_client_params["http_client"] = httpx.AsyncClient(
+                transport=async_transport
+            )
+        return AsyncAnthropic(**async_client_params)

-        self._async_client = AsyncAnthropic(**async_client_params)
-        return self
+    def _get_sync_client(self) -> Any:
+        if self._client is None:
+            self._client = self._build_sync_client()
+        return self._client
+
+    def _get_async_client(self) -> Any:
+        if self._async_client is None:
+            self._async_client = self._build_async_client()
+        return self._async_client

     def to_config_dict(self) -> dict[str, Any]:
         """Extend base config with Anthropic-specific fields."""
@@ -213,8 +242,15 @@ class AnthropicCompletion(BaseLLM):
             config["timeout"] = self.timeout
         return config

-    def _get_client_params(self) -> dict[str, Any]:
-        """Get client parameters."""
+    def _get_client_params(self, include_http_client: bool = True) -> dict[str, Any]:
+        """Get client parameters.
+
+        Args:
+            include_http_client: When True (default) and an interceptor is
+                set, attach a sync ``httpx.Client``. The async builder
+                passes ``False`` so it can attach its own async client
+                without leaking a sync one.
+        """

         if self.api_key is None:
             self.api_key = os.getenv("ANTHROPIC_API_KEY")
@@ -228,7 +264,7 @@ class AnthropicCompletion(BaseLLM):
             "max_retries": self.max_retries,
         }

-        if self.interceptor:
+        if include_http_client and self.interceptor:
             transport = HTTPTransport(interceptor=self.interceptor)
             http_client = httpx.Client(transport=transport)
             client_params["http_client"] = http_client  # type: ignore[assignment]
@@ -473,10 +509,8 @@ class AnthropicCompletion(BaseLLM):
                 continue

             try:
-                from crewai.llms.providers.utils.common import safe_tool_conversion
-
                 name, description, parameters = safe_tool_conversion(tool, "Anthropic")
-            except (ImportError, KeyError, ValueError) as e:
+            except (KeyError, ValueError) as e:
                 logging.error(f"Error converting tool to Anthropic format: {e}")
                 raise e

@@ -485,8 +519,15 @@ class AnthropicCompletion(BaseLLM):
                 "description": description,
             }

+            func_info = tool.get("function", {})
+            strict_enabled = bool(func_info.get("strict"))
+
             if parameters and isinstance(parameters, dict):
-                anthropic_tool["input_schema"] = parameters
+                anthropic_tool["input_schema"] = (
+                    sanitize_tool_params_for_anthropic_strict(parameters)
+                    if strict_enabled
+                    else parameters
+                )
             else:
                 anthropic_tool["input_schema"] = {
                     "type": "object",
@@ -494,8 +535,7 @@ class AnthropicCompletion(BaseLLM):
                     "required": [],
                 }

-            func_info = tool.get("function", {})
-            if func_info.get("strict"):
+            if strict_enabled:
                 anthropic_tool["strict"] = True

             anthropic_tools.append(anthropic_tool)
@@ -790,11 +830,11 @@ class AnthropicCompletion(BaseLLM):
         try:
             if betas:
                 params["betas"] = betas
-                response = self._client.beta.messages.create(
+                response = self._get_sync_client().beta.messages.create(
                     **params, extra_body=extra_body
                 )
             else:
-                response = self._client.messages.create(**params)
+                response = self._get_sync_client().messages.create(**params)

         except Exception as e:
             if is_context_length_exceeded(e):
@@ -942,9 +982,11 @@ class AnthropicCompletion(BaseLLM):
         current_tool_calls: dict[int, dict[str, Any]] = {}

         stream_context = (
-            self._client.beta.messages.stream(**stream_params, extra_body=extra_body)
+            self._get_sync_client().beta.messages.stream(
+                **stream_params, extra_body=extra_body
+            )
             if betas
-            else self._client.messages.stream(**stream_params)
+            else self._get_sync_client().messages.stream(**stream_params)
         )
         with stream_context as stream:
             response_id = None
@@ -1223,7 +1265,9 @@ class AnthropicCompletion(BaseLLM):

         try:
             # Send tool results back to Claude for final response
-            final_response: Message = self._client.messages.create(**follow_up_params)
+            final_response: Message = self._get_sync_client().messages.create(
+                **follow_up_params
+            )

             # Track token usage for follow-up call
             follow_up_usage = self._extract_anthropic_token_usage(final_response)
@@ -1319,11 +1363,11 @@ class AnthropicCompletion(BaseLLM):
         try:
             if betas:
                 params["betas"] = betas
-                response = await self._async_client.beta.messages.create(
+                response = await self._get_async_client().beta.messages.create(
                     **params, extra_body=extra_body
                 )
             else:
-                response = await self._async_client.messages.create(**params)
+                response = await self._get_async_client().messages.create(**params)

         except Exception as e:
             if is_context_length_exceeded(e):
@@ -1457,11 +1501,11 @@ class AnthropicCompletion(BaseLLM):
         current_tool_calls: dict[int, dict[str, Any]] = {}

         stream_context = (
-            self._async_client.beta.messages.stream(
+            self._get_async_client().beta.messages.stream(
                 **stream_params, extra_body=extra_body
             )
             if betas
-            else self._async_client.messages.stream(**stream_params)
+            else self._get_async_client().messages.stream(**stream_params)
         )
         async with stream_context as stream:
             response_id = None
@@ -1626,7 +1670,7 @@ class AnthropicCompletion(BaseLLM):
|
||||
]
|
||||
|
||||
try:
|
||||
final_response: Message = await self._async_client.messages.create(
|
||||
final_response: Message = await self._get_async_client().messages.create(
|
||||
**follow_up_params
|
||||
)
|
||||
|
||||
@@ -1754,8 +1798,8 @@ class AnthropicCompletion(BaseLLM):
|
||||
from crewai_files.uploaders.anthropic import AnthropicFileUploader
|
||||
|
||||
return AnthropicFileUploader(
|
||||
client=self._client,
|
||||
async_client=self._async_client,
|
||||
client=self._get_sync_client(),
|
||||
async_client=self._get_async_client(),
|
||||
)
|
||||
except ImportError:
|
||||
return None
|
||||
|
||||
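The `_get_sync_client()` / `_get_async_client()` indirection that replaces direct `self._client` access throughout this diff is a memoized-getter pattern: construction is deferred until first use, then cached. A minimal standalone sketch (the holder class and counter here are illustrative, not crewAI code):

```python
class LazyClientHolder:
    """Defer expensive client construction until first use, then cache it."""

    def __init__(self) -> None:
        self._client = None
        self.build_count = 0  # illustrative: tracks how often we construct

    def _build_sync_client(self):
        # Stand-in for e.g. Anthropic(**client_params); the real build may
        # need env vars that are only set after this object is created.
        self.build_count += 1
        return object()

    def _get_sync_client(self):
        if self._client is None:
            self._client = self._build_sync_client()
        return self._client


holder = LazyClientHolder()
a = holder._get_sync_client()
b = holder._get_sync_client()
assert a is b and holder.build_count == 1  # built once, then cached
```

Every call site going through the getter instead of the cached attribute is what lets `LLM(model="...")` be constructed before credentials exist.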
@@ -116,43 +116,122 @@ class AzureCompletion(BaseLLM):
            data.get("api_version") or os.getenv("AZURE_API_VERSION") or "2024-06-01"
        )

        if not data["api_key"]:
            raise ValueError(
                "Azure API key is required. Set AZURE_API_KEY environment variable or pass api_key parameter."
            )
        if not data["endpoint"]:
            raise ValueError(
                "Azure endpoint is required. Set AZURE_ENDPOINT environment variable or pass endpoint parameter."
            )

        # Credentials and endpoint are validated lazily in `_init_clients`
        # so the LLM can be constructed before deployment env vars are set.
        model = data.get("model", "")
        data["endpoint"] = AzureCompletion._validate_and_fix_endpoint(
            data["endpoint"], model
        if data["endpoint"]:
            data["endpoint"] = AzureCompletion._validate_and_fix_endpoint(
                data["endpoint"], model
            )
        data["is_azure_openai_endpoint"] = AzureCompletion._is_azure_openai_endpoint(
            data["endpoint"]
        )
        data["is_openai_model"] = any(
            prefix in model.lower() for prefix in ["gpt-", "o1-", "text-"]
        )
        parsed = urlparse(data["endpoint"])
        hostname = parsed.hostname or ""
        data["is_azure_openai_endpoint"] = (
            hostname == "openai.azure.com" or hostname.endswith(".openai.azure.com")
        ) and "/openai/deployments/" in data["endpoint"]
        return data

    @staticmethod
    def _is_azure_openai_endpoint(endpoint: str | None) -> bool:
        if not endpoint:
            return False
        hostname = urlparse(endpoint).hostname or ""
        return (
            hostname == "openai.azure.com" or hostname.endswith(".openai.azure.com")
        ) and "/openai/deployments/" in endpoint

    @model_validator(mode="after")
    def _init_clients(self) -> AzureCompletion:
        """Eagerly build clients when credentials are available, otherwise
        defer so ``LLM(model="azure/...")`` can be constructed at module
        import time even before deployment env vars are set.
        """
        try:
            self._client = self._build_sync_client()
            self._async_client = self._build_async_client()
        except ValueError:
            pass
        return self

    def _build_sync_client(self) -> Any:
        return ChatCompletionsClient(**self._make_client_kwargs())

    def _build_async_client(self) -> Any:
        return AsyncChatCompletionsClient(**self._make_client_kwargs())

    def _make_client_kwargs(self) -> dict[str, Any]:
        # Re-read env vars so that a deferred build can pick up credentials
        # that weren't set at instantiation time (e.g. LLM constructed at
        # module import before deployment env vars were injected).
        if not self.api_key:
            raise ValueError("Azure API key is required.")
            self.api_key = os.getenv("AZURE_API_KEY")
        if not self.endpoint:
            endpoint = (
                os.getenv("AZURE_ENDPOINT")
                or os.getenv("AZURE_OPENAI_ENDPOINT")
                or os.getenv("AZURE_API_BASE")
            )
            if endpoint:
                self.endpoint = AzureCompletion._validate_and_fix_endpoint(
                    endpoint, self.model
                )
                # Recompute the routing flag now that the endpoint is known —
                # _prepare_completion_params uses it to decide whether to
                # include `model` in the request body (Azure OpenAI endpoints
                # embed the deployment name in the URL and reject it).
                self.is_azure_openai_endpoint = (
                    AzureCompletion._is_azure_openai_endpoint(self.endpoint)
                )

        if not self.endpoint:
            raise ValueError(
                "Azure endpoint is required. Set AZURE_ENDPOINT environment "
                "variable or pass endpoint parameter."
            )
        client_kwargs: dict[str, Any] = {
            "endpoint": self.endpoint,
            "credential": AzureKeyCredential(self.api_key),
            "credential": self._resolve_credential(),
        }
        if self.api_version:
            client_kwargs["api_version"] = self.api_version
        return client_kwargs

        self._client = ChatCompletionsClient(**client_kwargs)
        self._async_client = AsyncChatCompletionsClient(**client_kwargs)
        return self
    def _resolve_credential(self) -> Any:
        """Return an Azure credential, preferring the API key when set.

        Without an API key, fall back to ``DefaultAzureCredential`` from
        ``azure-identity``. That chain auto-detects the standard keyless
        paths the customer's environment may provide — OIDC Workload
        Identity Federation (``AZURE_FEDERATED_TOKEN_FILE`` +
        ``AZURE_TENANT_ID`` + ``AZURE_CLIENT_ID``), Managed Identity on
        AKS/Azure VMs, environment-configured service principals, and
        developer tools like the Azure CLI. Installing ``azure-identity``
        is what enables these paths; without it we raise the existing
        API-key error.
        """
        if self.api_key:
            return AzureKeyCredential(self.api_key)

        try:
            from azure.identity import DefaultAzureCredential
        except ImportError:
            raise ValueError(
                "Azure API key is required when azure-identity is not "
                "installed. Set AZURE_API_KEY, or install azure-identity "
                'for keyless auth: uv add "crewai[azure-ai-inference]"'
            ) from None

        return DefaultAzureCredential()

    def _get_sync_client(self) -> Any:
        if self._client is None:
            self._client = self._build_sync_client()
        return self._client

    def _get_async_client(self) -> Any:
        if self._async_client is None:
            self._async_client = self._build_async_client()
        return self._async_client

    def to_config_dict(self) -> dict[str, Any]:
        """Extend base config with Azure-specific fields."""
@@ -713,8 +792,7 @@ class AzureCompletion(BaseLLM):
    ) -> str | Any:
        """Handle non-streaming chat completion."""
        try:
            # Cast params to Any to avoid type checking issues with TypedDict unpacking
            response: ChatCompletions = self._client.complete(**params)
            response: ChatCompletions = self._get_sync_client().complete(**params)
            return self._process_completion_response(
                response=response,
                params=params,
@@ -913,7 +991,7 @@ class AzureCompletion(BaseLLM):
        tool_calls: dict[int, dict[str, Any]] = {}

        usage_data: dict[str, Any] | None = None
        for update in self._client.complete(**params):
        for update in self._get_sync_client().complete(**params):
            if isinstance(update, StreamingChatCompletionsUpdate):
                if update.usage:
                    usage = update.usage
@@ -953,8 +1031,9 @@ class AzureCompletion(BaseLLM):
    ) -> str | Any:
        """Handle non-streaming chat completion asynchronously."""
        try:
            # Cast params to Any to avoid type checking issues with TypedDict unpacking
            response: ChatCompletions = await self._async_client.complete(**params)
            response: ChatCompletions = await self._get_async_client().complete(
                **params
            )
            return self._process_completion_response(
                response=response,
                params=params,
@@ -980,7 +1059,7 @@ class AzureCompletion(BaseLLM):

        usage_data: dict[str, Any] | None = None

        stream = await self._async_client.complete(**params)
        stream = await self._get_async_client().complete(**params)
        async for update in stream:
            if isinstance(update, StreamingChatCompletionsUpdate):
                if hasattr(update, "usage") and update.usage:
@@ -1103,9 +1182,12 @@ class AzureCompletion(BaseLLM):
        """Close the async client and clean up resources.

        This ensures proper cleanup of the underlying aiohttp session
        to avoid unclosed connector warnings.
        to avoid unclosed connector warnings. Accesses the cached client
        directly rather than going through `_get_async_client` so a
        cleanup on an uninitialized LLM is a harmless no-op rather than
        a credential-required error.
        """
        if hasattr(self._async_client, "close"):
        if self._async_client is not None and hasattr(self._async_client, "close"):
            await self._async_client.close()

    async def __aenter__(self) -> Self:
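The `_resolve_credential` helper above encodes a common "explicit key first, ambient credentials second" chain. The decision logic can be sketched on its own; the tuple results and the `ambient_available` flag are stand-ins for this demo (the real code returns `AzureKeyCredential` or `DefaultAzureCredential` objects):

```python
def resolve_credential(api_key, ambient_available):
    """Prefer an explicit API key; otherwise fall back to ambient auth
    (managed identity, workload identity, CLI login); fail loudly when
    neither path is possible."""
    if api_key:
        return ("key", api_key)
    if ambient_available:  # stands in for a successful azure-identity import
        return ("ambient", None)
    raise ValueError("API key required when keyless auth is unavailable.")


assert resolve_credential("sk-123", False) == ("key", "sk-123")
assert resolve_credential(None, True) == ("ambient", None)
```

Raising only when both paths fail preserves the old behavior (key required) for users who never install `azure-identity`.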
@@ -12,6 +12,7 @@ from typing_extensions import Required

from crewai.events.types.llm_events import LLMCallType
from crewai.llms.base_llm import BaseLLM, llm_call_context
from crewai.llms.providers.utils.common import safe_tool_conversion
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
    LLMContextLengthExceededError,
@@ -169,7 +170,6 @@ class ToolSpec(TypedDict, total=False):
    name: Required[str]
    description: Required[str]
    inputSchema: ToolInputSchema
    strict: bool


class ConverseToolTypeDef(TypedDict):
@@ -303,6 +303,22 @@ class BedrockCompletion(BaseLLM):

    @model_validator(mode="after")
    def _init_clients(self) -> BedrockCompletion:
        """Eagerly build the sync client when AWS credentials resolve,
        otherwise defer so ``LLM(model="bedrock/...")`` can be constructed
        at module import time even before deployment env vars are set.

        Only credential/SDK errors are caught — programming errors like
        ``TypeError`` or ``AttributeError`` propagate so real bugs aren't
        silently swallowed.
        """
        try:
            self._client = self._build_sync_client()
        except (BotoCoreError, ClientError, ValueError) as e:
            logging.debug("Deferring Bedrock client construction: %s", e)
        self._async_exit_stack = AsyncExitStack() if AIOBOTOCORE_AVAILABLE else None
        return self

    def _build_sync_client(self) -> Any:
        config = Config(
            read_timeout=300,
            retries={"max_attempts": 3, "mode": "adaptive"},
@@ -314,9 +330,17 @@ class BedrockCompletion(BaseLLM):
            aws_session_token=self.aws_session_token,
            region_name=self.region_name,
        )
        self._client = session.client("bedrock-runtime", config=config)
        self._async_exit_stack = AsyncExitStack() if AIOBOTOCORE_AVAILABLE else None
        return self
        return session.client("bedrock-runtime", config=config)

    def _get_sync_client(self) -> Any:
        if self._client is None:
            self._client = self._build_sync_client()
        return self._client

    def _get_async_client(self) -> Any:
        """Async client is set up separately by ``_ensure_async_client``
        using ``aiobotocore`` inside an exit stack."""
        return self._async_client

    def to_config_dict(self) -> dict[str, Any]:
        """Extend base config with Bedrock-specific fields."""
@@ -656,7 +680,7 @@ class BedrockCompletion(BaseLLM):
            raise ValueError(f"Invalid message format at index {i}")

        # Call Bedrock Converse API with proper error handling
        response = self._client.converse(
        response = self._get_sync_client().converse(
            modelId=self.model_id,
            messages=cast(
                "Sequence[MessageTypeDef | MessageOutputTypeDef]",
@@ -945,7 +969,7 @@ class BedrockCompletion(BaseLLM):
        usage_data: dict[str, Any] | None = None

        try:
            response = self._client.converse_stream(
            response = self._get_sync_client().converse_stream(
                modelId=self.model_id,
                messages=cast(
                    "Sequence[MessageTypeDef | MessageOutputTypeDef]",
@@ -1949,8 +1973,6 @@ class BedrockCompletion(BaseLLM):
        tools: list[dict[str, Any]],
    ) -> list[ConverseToolTypeDef]:
        """Convert CrewAI tools to Converse API format following AWS specification."""
        from crewai.llms.providers.utils.common import safe_tool_conversion

        converse_tools: list[ConverseToolTypeDef] = []

        for tool in tools:
@@ -1966,10 +1988,6 @@ class BedrockCompletion(BaseLLM):
            input_schema: ToolInputSchema = {"json": parameters}
            tool_spec["inputSchema"] = input_schema

            func_info = tool.get("function", {})
            if func_info.get("strict"):
                tool_spec["strict"] = True

            converse_tool: ConverseToolTypeDef = {"toolSpec": tool_spec}

            converse_tools.append(converse_tool)
@@ -2057,6 +2075,9 @@ class BedrockCompletion(BaseLLM):

    # Context window sizes for common Bedrock models
    context_windows = {
        "anthropic.claude-sonnet-4": 200000,
        "anthropic.claude-opus-4": 200000,
        "anthropic.claude-haiku-4": 200000,
        "anthropic.claude-3-5-sonnet": 200000,
        "anthropic.claude-3-5-haiku": 200000,
        "anthropic.claude-3-opus": 200000,
@@ -118,9 +118,33 @@ class GeminiCompletion(BaseLLM):

    @model_validator(mode="after")
    def _init_client(self) -> GeminiCompletion:
        self._client = self._initialize_client(self.use_vertexai)
        """Eagerly build the client when credentials resolve, otherwise defer
        so ``LLM(model="gemini/...")`` can be constructed at module import time
        even before deployment env vars are set.
        """
        try:
            self._client = self._initialize_client(self.use_vertexai)
        except ValueError:
            pass
        return self

    def _get_sync_client(self) -> Any:
        if self._client is None:
            # Re-read env vars so a deferred build can pick up credentials
            # that weren't set at instantiation time.
            if not self.api_key:
                self.api_key = os.getenv("GOOGLE_API_KEY") or os.getenv(
                    "GEMINI_API_KEY"
                )
            if not self.project:
                self.project = os.getenv("GOOGLE_CLOUD_PROJECT")
            self._client = self._initialize_client(self.use_vertexai)
        return self._client

    def _get_async_client(self) -> Any:
        """Gemini uses a single client for both sync and async calls."""
        return self._get_sync_client()

    def to_config_dict(self) -> dict[str, Any]:
        """Extend base config with Gemini/Vertex-specific fields."""
        config = super().to_config_dict()
@@ -228,6 +252,7 @@ class GeminiCompletion(BaseLLM):

        if (
            hasattr(self, "client")
            and self._client is not None
            and hasattr(self._client, "vertexai")
            and self._client.vertexai
        ):
@@ -951,6 +976,7 @@ class GeminiCompletion(BaseLLM):
                "id": call_id,
                "name": part.function_call.name,
                "args": args_dict,
                "raw_part": part,
            }

        self._emit_stream_chunk_event(
@@ -1035,29 +1061,20 @@ class GeminiCompletion(BaseLLM):
            if call_data.get("name") != STRUCTURED_OUTPUT_TOOL_NAME
        }

        # If there are function calls but no available_functions,
        # return them for the executor to handle
        if non_structured_output_calls and not available_functions:
            formatted_function_calls = [
                {
                    "id": call_data["id"],
                    "function": {
                        "name": call_data["name"],
                        "arguments": json.dumps(call_data["args"]),
                    },
                    "type": "function",
                }
            raw_parts = [
                call_data["raw_part"]
                for call_data in non_structured_output_calls.values()
            ]
            self._emit_call_completed_event(
                response=formatted_function_calls,
                response=raw_parts,
                call_type=LLMCallType.TOOL_CALL,
                from_task=from_task,
                from_agent=from_agent,
                messages=self._convert_contents_to_dict(contents),
                usage=usage_data,
            )
            return formatted_function_calls
            return raw_parts

        # Handle completed function calls (excluding structured_output)
        if non_structured_output_calls and available_functions:
@@ -1112,7 +1129,7 @@ class GeminiCompletion(BaseLLM):
        try:
            # The API accepts list[Content] but mypy is overly strict about variance
            contents_for_api: Any = contents
            response = self._client.models.generate_content(
            response = self._get_sync_client().models.generate_content(
                model=self.model,
                contents=contents_for_api,
                config=config,
@@ -1153,7 +1170,7 @@ class GeminiCompletion(BaseLLM):

        # The API accepts list[Content] but mypy is overly strict about variance
        contents_for_api: Any = contents
        for chunk in self._client.models.generate_content_stream(
        for chunk in self._get_sync_client().models.generate_content_stream(
            model=self.model,
            contents=contents_for_api,
            config=config,
@@ -1191,7 +1208,7 @@ class GeminiCompletion(BaseLLM):
        try:
            # The API accepts list[Content] but mypy is overly strict about variance
            contents_for_api: Any = contents
            response = await self._client.aio.models.generate_content(
            response = await self._get_async_client().aio.models.generate_content(
                model=self.model,
                contents=contents_for_api,
                config=config,
@@ -1232,7 +1249,7 @@ class GeminiCompletion(BaseLLM):

        # The API accepts list[Content] but mypy is overly strict about variance
        contents_for_api: Any = contents
        stream = await self._client.aio.models.generate_content_stream(
        stream = await self._get_async_client().aio.models.generate_content_stream(
            model=self.model,
            contents=contents_for_api,
            config=config,
@@ -1439,6 +1456,6 @@ class GeminiCompletion(BaseLLM):
        try:
            from crewai_files.uploaders.gemini import GeminiFileUploader

            return GeminiFileUploader(client=self._client)
            return GeminiFileUploader(client=self._get_sync_client())
        except ImportError:
            return None
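Gemini's `_get_sync_client` re-reads environment variables at first use, so a key injected after the object was constructed is still picked up. A runnable sketch of that ordering (the env var name `FAKE_GEMINI_KEY` and the string client are invented for the demo):

```python
import os


class DeferredKeyClient:
    def __init__(self) -> None:
        # Possibly constructed before the key exists in the environment.
        self.api_key = os.getenv("FAKE_GEMINI_KEY")
        self._client = None

    def _get_client(self):
        if self._client is None:
            if not self.api_key:
                # Re-read at first use: the key may have appeared since __init__.
                self.api_key = os.getenv("FAKE_GEMINI_KEY")
            if not self.api_key:
                raise ValueError("API key is required.")
            self._client = f"client({self.api_key})"
        return self._client


llm = DeferredKeyClient()               # key not set yet: construction succeeds
os.environ["FAKE_GEMINI_KEY"] = "abc"   # credentials injected later
assert llm._get_client() == "client(abc)"
```

The error is thereby moved from import time to first call time, which is the whole point of the deferred-initialization changes in this diff.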
@@ -32,11 +32,15 @@ from crewai.events.types.llm_events import LLMCallType
from crewai.llms.base_llm import BaseLLM, JsonResponseFormat, llm_call_context
from crewai.llms.hooks.base import BaseInterceptor
from crewai.llms.hooks.transport import AsyncHTTPTransport, HTTPTransport
from crewai.llms.providers.utils.common import safe_tool_conversion
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
    LLMContextLengthExceededError,
)
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.pydantic_schema_utils import (
    generate_model_description,
    sanitize_tool_params_for_openai_strict,
)
from crewai.utilities.types import LLMMessage


@@ -253,22 +257,40 @@ class OpenAICompletion(BaseLLM):

    @model_validator(mode="after")
    def _init_clients(self) -> OpenAICompletion:
        """Eagerly build clients when the API key is available, otherwise
        defer so ``LLM(model="openai/...")`` can be constructed at module
        import time even before deployment env vars are set.
        """
        try:
            self._client = self._build_sync_client()
            self._async_client = self._build_async_client()
        except ValueError:
            pass
        return self

    def _build_sync_client(self) -> Any:
        client_config = self._get_client_params()
        if self.interceptor:
            transport = HTTPTransport(interceptor=self.interceptor)
            http_client = httpx.Client(transport=transport)
            client_config["http_client"] = http_client
            client_config["http_client"] = httpx.Client(transport=transport)
        return OpenAI(**client_config)

        self._client = OpenAI(**client_config)

        async_client_config = self._get_client_params()
    def _build_async_client(self) -> Any:
        client_config = self._get_client_params()
        if self.interceptor:
            async_transport = AsyncHTTPTransport(interceptor=self.interceptor)
            async_http_client = httpx.AsyncClient(transport=async_transport)
            async_client_config["http_client"] = async_http_client
            transport = AsyncHTTPTransport(interceptor=self.interceptor)
            client_config["http_client"] = httpx.AsyncClient(transport=transport)
        return AsyncOpenAI(**client_config)

        self._async_client = AsyncOpenAI(**async_client_config)
        return self
    def _get_sync_client(self) -> Any:
        if self._client is None:
            self._client = self._build_sync_client()
        return self._client

    def _get_async_client(self) -> Any:
        if self._async_client is None:
            self._async_client = self._build_async_client()
        return self._async_client

    @property
    def last_response_id(self) -> str | None:
@@ -764,8 +786,6 @@ class OpenAICompletion(BaseLLM):
            "function": {"name": "...", "description": "...", "parameters": {...}}
        }
        """
        from crewai.llms.providers.utils.common import safe_tool_conversion

        responses_tools = []

        for tool in tools:
@@ -797,7 +817,7 @@ class OpenAICompletion(BaseLLM):
    ) -> str | ResponsesAPIResult | Any:
        """Handle non-streaming Responses API call."""
        try:
            response: Response = self._client.responses.create(**params)
            response: Response = self._get_sync_client().responses.create(**params)

            # Track response ID for auto-chaining
            if self.auto_chain and response.id:
@@ -933,7 +953,9 @@ class OpenAICompletion(BaseLLM):
    ) -> str | ResponsesAPIResult | Any:
        """Handle async non-streaming Responses API call."""
        try:
            response: Response = await self._async_client.responses.create(**params)
            response: Response = await self._get_async_client().responses.create(
                **params
            )

            # Track response ID for auto-chaining
            if self.auto_chain and response.id:
@@ -1069,7 +1091,7 @@ class OpenAICompletion(BaseLLM):
        final_response: Response | None = None
        usage: dict[str, Any] | None = None

        stream = self._client.responses.create(**params)
        stream = self._get_sync_client().responses.create(**params)
        response_id_stream = None

        for event in stream:
@@ -1197,7 +1219,7 @@ class OpenAICompletion(BaseLLM):
        final_response: Response | None = None
        usage: dict[str, Any] | None = None

        stream = await self._async_client.responses.create(**params)
        stream = await self._get_async_client().responses.create(**params)
        response_id_stream = None

        async for event in stream:
@@ -1548,11 +1570,6 @@ class OpenAICompletion(BaseLLM):
        self, tools: list[dict[str, BaseTool]]
    ) -> list[dict[str, Any]]:
        """Convert CrewAI tool format to OpenAI function calling format."""
        from crewai.llms.providers.utils.common import safe_tool_conversion
        from crewai.utilities.pydantic_schema_utils import (
            force_additional_properties_false,
        )

        openai_tools = []

        for tool in tools:
@@ -1571,8 +1588,9 @@ class OpenAICompletion(BaseLLM):
            params_dict = (
                parameters if isinstance(parameters, dict) else dict(parameters)
            )
            params_dict = force_additional_properties_false(params_dict)
            openai_tool["function"]["parameters"] = params_dict
            openai_tool["function"]["parameters"] = (
                sanitize_tool_params_for_openai_strict(params_dict)
            )

            openai_tools.append(openai_tool)
        return openai_tools
@@ -1591,7 +1609,7 @@ class OpenAICompletion(BaseLLM):
                parse_params = {
                    k: v for k, v in params.items() if k != "response_format"
                }
                parsed_response = self._client.beta.chat.completions.parse(
                parsed_response = self._get_sync_client().beta.chat.completions.parse(
                    **parse_params,
                    response_format=response_model,
                )
@@ -1615,7 +1633,9 @@ class OpenAICompletion(BaseLLM):
                )
                return parsed_object

            response: ChatCompletion = self._client.chat.completions.create(**params)
            response: ChatCompletion = self._get_sync_client().chat.completions.create(
                **params
            )

            usage = self._extract_openai_token_usage(response)

@@ -1842,7 +1862,7 @@ class OpenAICompletion(BaseLLM):
            }

            stream: ChatCompletionStream[BaseModel]
            with self._client.beta.chat.completions.stream(
            with self._get_sync_client().beta.chat.completions.stream(
                **parse_params, response_format=response_model
            ) as stream:
                for chunk in stream:
@@ -1879,7 +1899,7 @@ class OpenAICompletion(BaseLLM):
            return ""

        completion_stream: Stream[ChatCompletionChunk] = (
            self._client.chat.completions.create(**params)
            self._get_sync_client().chat.completions.create(**params)
        )

        usage_data: dict[str, Any] | None = None
@@ -1976,9 +1996,11 @@ class OpenAICompletion(BaseLLM):
                parse_params = {
                    k: v for k, v in params.items() if k != "response_format"
                }
                parsed_response = await self._async_client.beta.chat.completions.parse(
                    **parse_params,
                    response_format=response_model,
                parsed_response = (
                    await self._get_async_client().beta.chat.completions.parse(
                        **parse_params,
                        response_format=response_model,
                    )
                )
                math_reasoning = parsed_response.choices[0].message

@@ -2000,8 +2022,8 @@ class OpenAICompletion(BaseLLM):
                )
                return parsed_object

            response: ChatCompletion = await self._async_client.chat.completions.create(
                **params
            response: ChatCompletion = (
                await self._get_async_client().chat.completions.create(**params)
            )

            usage = self._extract_openai_token_usage(response)
@@ -2127,7 +2149,7 @@ class OpenAICompletion(BaseLLM):
            if response_model:
                completion_stream: AsyncIterator[
                    ChatCompletionChunk
                ] = await self._async_client.chat.completions.create(**params)
                ] = await self._get_async_client().chat.completions.create(**params)

                accumulated_content = ""
                usage_data: dict[str, Any] | None = None
@@ -2183,7 +2205,7 @@ class OpenAICompletion(BaseLLM):

            stream: AsyncIterator[
                ChatCompletionChunk
            ] = await self._async_client.chat.completions.create(**params)
            ] = await self._get_async_client().chat.completions.create(**params)

            usage_data = None

@@ -2379,8 +2401,8 @@ class OpenAICompletion(BaseLLM):
            from crewai_files.uploaders.openai import OpenAIFileUploader

            return OpenAIFileUploader(
                client=self._client,
                async_client=self._async_client,
                client=self._get_sync_client(),
                async_client=self._get_async_client(),
            )
        except ImportError:
            return None
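The diff routes tool parameters through `sanitize_tool_params_for_openai_strict`, whose body is not shown here. A common sanitization for OpenAI strict-mode tool schemas is to recursively close object schemas: `additionalProperties: false` everywhere and every property listed in `required`. The sketch below illustrates that idea only; the real crewAI helper may behave differently:

```python
def sanitize_for_strict(schema: dict) -> dict:
    """Recursively close an object schema for strict tool calling.

    Illustrative stand-in: strict mode typically rejects schemas that
    allow extra keys or leave properties optional.
    """
    out = dict(schema)
    if out.get("type") == "object":
        props = {
            key: sanitize_for_strict(val)
            for key, val in out.get("properties", {}).items()
        }
        out["properties"] = props
        out["additionalProperties"] = False
        out["required"] = list(props)
    elif out.get("type") == "array" and isinstance(out.get("items"), dict):
        out["items"] = sanitize_for_strict(out["items"])
    return out


schema = {"type": "object", "properties": {"q": {"type": "string"}}}
clean = sanitize_for_strict(schema)
assert clean["additionalProperties"] is False
assert clean["required"] == ["q"]
```

Doing this once in the converter keeps every tool definition valid for strict mode without each tool author hand-editing JSON Schema.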
@@ -2,9 +2,17 @@

This module provides native MCP client functionality, allowing CrewAI agents
to connect to any MCP-compliant server using various transport types.

Heavy imports (MCPClient, MCPToolResolver, BaseTransport, TransportType) are
lazy-loaded on first access to avoid pulling in the ``mcp`` SDK (~400ms)
when only lightweight config/filter types are needed.
"""

from crewai.mcp.client import MCPClient
from __future__ import annotations

import importlib
from typing import TYPE_CHECKING, Any

from crewai.mcp.config import (
    MCPServerConfig,
    MCPServerHTTP,
@@ -18,8 +26,28 @@ from crewai.mcp.filters import (
    create_dynamic_tool_filter,
    create_static_tool_filter,
)
from crewai.mcp.tool_resolver import MCPToolResolver
from crewai.mcp.transports.base import BaseTransport, TransportType

if TYPE_CHECKING:
    from crewai.mcp.client import MCPClient
    from crewai.mcp.tool_resolver import MCPToolResolver
    from crewai.mcp.transports.base import BaseTransport, TransportType

_LAZY: dict[str, tuple[str, str]] = {
    "MCPClient": ("crewai.mcp.client", "MCPClient"),
    "MCPToolResolver": ("crewai.mcp.tool_resolver", "MCPToolResolver"),
    "BaseTransport": ("crewai.mcp.transports.base", "BaseTransport"),
    "TransportType": ("crewai.mcp.transports.base", "TransportType"),
}


def __getattr__(name: str) -> Any:
    if name in _LAZY:
        mod_path, attr = _LAZY[name]
        mod = importlib.import_module(mod_path)
        val = getattr(mod, attr)
        globals()[name] = val  # cache for subsequent access
        return val
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")


__all__ = [
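The module-level `__getattr__` introduced above is the PEP 562 lazy-import idiom: attribute lookups that miss the module's `__dict__` fall through to `__getattr__`, which imports the real target on demand and caches it. It can be exercised standalone with a synthetic module (the module name `demo_lazy` and the `tau` mapping are invented for this demo):

```python
import importlib
import math
import sys
import types

# Build a synthetic module whose attribute "tau" lazily resolves to
# math.tau the first time it is requested (PEP 562 semantics).
mod = types.ModuleType("demo_lazy")
_LAZY = {"tau": ("math", "tau")}


def _module_getattr(name):
    if name in _LAZY:
        target_mod, attr = _LAZY[name]
        val = getattr(importlib.import_module(target_mod), attr)
        setattr(mod, name, val)  # cache so __getattr__ isn't hit again
        return val
    raise AttributeError(name)


mod.__getattr__ = _module_getattr
sys.modules["demo_lazy"] = mod

import demo_lazy

assert demo_lazy.tau == math.tau        # resolved lazily on first access
assert "tau" in demo_lazy.__dict__      # and cached afterwards
```

The caching line matters: without it, every attribute access would pay the `importlib.import_module` lookup again, defeating the point of avoiding the heavy SDK import.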
@@ -417,9 +417,18 @@ class MCPToolResolver:

        args_schema = None
        if tool_def.get("inputSchema"):
            args_schema = self._json_schema_to_pydantic(
                tool_name, tool_def["inputSchema"]
            )
            try:
                args_schema = self._json_schema_to_pydantic(
                    tool_name, tool_def["inputSchema"]
                )
            except Exception as e:
                self._logger.log(
                    "warning",
                    f"Failed to build args schema for MCP tool "
                    f"'{tool_name}': {e}. Registering tool without a "
                    "typed schema.",
                )
                args_schema = None

        tool_schema = {
            "description": tool_def.get("description", ""),
@@ -237,6 +237,8 @@ def crew(
        self.tasks = instantiated_tasks

        crew_instance: Crew = _call_method(meth, self, *args, **kwargs)
        if "name" not in crew_instance.model_fields_set:
            crew_instance.name = getattr(self, "_crew_name", None) or crew_instance.name

        def callback_wrapper(
            hook: Callable[Concatenate[CrewInstance, P2], R2], instance: CrewInstance
@@ -10,12 +10,22 @@ from __future__ import annotations
import json
import logging
import threading
import time
from typing import Any

from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.crew import Crew
from crewai.events.base_events import BaseEvent
from crewai.events.event_bus import CrewAIEventsBus, crewai_event_bus
from crewai.events.event_bus import CrewAIEventsBus, crewai_event_bus, is_replaying
from crewai.events.types.checkpoint_events import (
    CheckpointBaseEvent,
    CheckpointCompletedEvent,
    CheckpointFailedEvent,
    CheckpointForkBaseEvent,
    CheckpointPrunedEvent,
    CheckpointRestoreBaseEvent,
    CheckpointStartedEvent,
)
from crewai.flow.flow import Flow
from crewai.state.checkpoint_config import CheckpointConfig
from crewai.state.runtime import RuntimeState, _prepare_entities
@@ -53,12 +63,26 @@ def _resolve(value: CheckpointConfig | bool | None) -> CheckpointConfig | None |
    if isinstance(value, CheckpointConfig):
        _ensure_handlers_registered()
        return value
    if value is True:
    if value:
        _ensure_handlers_registered()
        return CheckpointConfig()
    if value is False:
        return _SENTINEL
    return None  # None = inherit
    return None


def _resolve_from_agent(agent: BaseAgent) -> CheckpointConfig | None:
    """Resolve a checkpoint config starting from an agent, walking to its crew."""
    result = _resolve(agent.checkpoint)
    if isinstance(result, CheckpointConfig):
        return result
    if result is _SENTINEL:
        return None
    crew = agent.crew
    if isinstance(crew, Crew):
        crew_result = _resolve(crew.checkpoint)
        return crew_result if isinstance(crew_result, CheckpointConfig) else None
    return None

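The `_resolve`/`_resolve_from_agent` pair above implements a tri-state inheritance rule: an explicit config wins, `True` means "use defaults", `False` becomes a sentinel that blocks inheritance, and `None` falls through to the crew. A simplified sketch of that rule, with a plain dict standing in for `CheckpointConfig`:

```python
from typing import Any

_SENTINEL: Any = object()  # distinct marker meaning "explicitly disabled"


def resolve(value: Any) -> Any:
    """Tri-state rule: config -> use it, True -> defaults, False -> sentinel,
    None -> inherit from the next level up."""
    if isinstance(value, dict):
        return value
    if value is True:
        return {}  # stands in for CheckpointConfig()
    if value is False:
        return _SENTINEL
    return None


def resolve_chain(agent_value: Any, crew_value: Any) -> Any:
    """Agent setting wins; False blocks inheritance; None falls back to crew."""
    result = resolve(agent_value)
    if isinstance(result, dict):
        return result
    if result is _SENTINEL:
        return None
    crew_result = resolve(crew_value)
    return crew_result if isinstance(crew_result, dict) else None
```

The sentinel is what lets `agent.checkpoint = False` suppress a crew-level `checkpoint = True` instead of silently inheriting it.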
def _find_checkpoint(source: Any) -> CheckpointConfig | None:
@@ -77,28 +101,11 @@ def _find_checkpoint(source: Any) -> CheckpointConfig | None:
        result = _resolve(source.checkpoint)
        return result if isinstance(result, CheckpointConfig) else None
    if isinstance(source, BaseAgent):
        result = _resolve(source.checkpoint)
        if isinstance(result, CheckpointConfig):
            return result
        if result is _SENTINEL:
            return None
        crew = source.crew
        if isinstance(crew, Crew):
            result = _resolve(crew.checkpoint)
            return result if isinstance(result, CheckpointConfig) else None
        return None
        return _resolve_from_agent(source)
    if isinstance(source, Task):
        agent = source.agent
        if isinstance(agent, BaseAgent):
            result = _resolve(agent.checkpoint)
            if isinstance(result, CheckpointConfig):
                return result
            if result is _SENTINEL:
                return None
            crew = agent.crew
            if isinstance(crew, Crew):
                result = _resolve(crew.checkpoint)
                return result if isinstance(result, CheckpointConfig) else None
            return _resolve_from_agent(agent)
        return None
    return None

@@ -107,21 +114,106 @@ def _do_checkpoint(
    state: RuntimeState, cfg: CheckpointConfig, event: BaseEvent | None = None
) -> None:
    """Write a checkpoint and prune old ones if configured."""
    _prepare_entities(state.root)
    payload = state.model_dump(mode="json")
    if event is not None:
        payload["trigger"] = event.type
    data = json.dumps(payload)
    location = cfg.provider.checkpoint(
        data,
        cfg.location,
        parent_id=state._parent_id,
        branch=state._branch,
    provider_name: str = type(cfg.provider).__name__
    trigger: str | None = event.type if event is not None else None
    context: dict[str, Any] = {
        "task_id": event.task_id if event is not None else None,
        "task_name": event.task_name if event is not None else None,
        "agent_id": event.agent_id if event is not None else None,
        "agent_role": event.agent_role if event is not None else None,
    }

    parent_id_snapshot: str | None = state._parent_id
    branch_snapshot: str = state._branch

    crewai_event_bus.emit(
        cfg,
        CheckpointStartedEvent(
            location=cfg.location,
            provider=provider_name,
            trigger=trigger,
            branch=branch_snapshot,
            parent_id=parent_id_snapshot,
            **context,
        ),
    )

    start: float = time.perf_counter()
    try:
        _prepare_entities(state.root)
        payload = state.model_dump(mode="json")
        if event is not None:
            payload["trigger"] = event.type
        data = json.dumps(payload)
        location = cfg.provider.checkpoint(
            data,
            cfg.location,
            parent_id=parent_id_snapshot,
            branch=branch_snapshot,
        )
        state._chain_lineage(cfg.provider, location)
        checkpoint_id: str = cfg.provider.extract_id(location)
    except Exception as exc:
        crewai_event_bus.emit(
            cfg,
            CheckpointFailedEvent(
                location=cfg.location,
                provider=provider_name,
                trigger=trigger,
                branch=branch_snapshot,
                parent_id=parent_id_snapshot,
                error=str(exc),
                **context,
            ),
        )
        raise

    duration_ms: float = (time.perf_counter() - start) * 1000.0
    msg: str = (
        f"Checkpoint saved. Resume with: crewai checkpoint resume {checkpoint_id}"
    )
    logger.info(msg)

    crewai_event_bus.emit(
        cfg,
        CheckpointCompletedEvent(
            location=location,
            provider=provider_name,
            trigger=trigger,
            branch=branch_snapshot,
            parent_id=parent_id_snapshot,
            checkpoint_id=checkpoint_id,
            duration_ms=duration_ms,
            **context,
        ),
    )
    state._chain_lineage(cfg.provider, location)

    if cfg.max_checkpoints is not None:
        cfg.provider.prune(cfg.location, cfg.max_checkpoints, branch=state._branch)
        try:
            removed_count: int = cfg.provider.prune(
                cfg.location, cfg.max_checkpoints, branch=branch_snapshot
            )
        except Exception:
            logger.warning(
                "Checkpoint prune failed for %s (branch=%s)",
                cfg.location,
                branch_snapshot,
                exc_info=True,
            )
            return
        crewai_event_bus.emit(
            cfg,
            CheckpointPrunedEvent(
                location=cfg.location,
                provider=provider_name,
                trigger=trigger,
                branch=branch_snapshot,
                parent_id=parent_id_snapshot,
                removed_count=removed_count,
                max_checkpoints=cfg.max_checkpoints,
                **context,
            ),
        )

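`_do_checkpoint` above wraps the provider write in a started/failed/completed event envelope timed with `perf_counter`: snapshot the invariant context first, emit a start event, then either re-raise after a failure event or emit completion with the measured duration. A stripped-down sketch of that envelope; the list-based bus and the event dicts are illustrative stand-ins for the real event classes:

```python
import time

events: list[dict] = []  # stand-in for crewai_event_bus


def failing_write() -> str:
    raise RuntimeError("disk full")


def do_checkpoint(write) -> None:
    """Emit started, time the write, then emit failed (and re-raise) or completed."""
    events.append({"type": "started"})
    start = time.perf_counter()
    try:
        location = write()
    except Exception as exc:
        events.append({"type": "failed", "error": str(exc)})
        raise
    duration_ms = (time.perf_counter() - start) * 1000.0
    events.append({"type": "completed", "location": location, "duration_ms": duration_ms})


do_checkpoint(lambda: "/tmp/ckpt/1.json")
try:
    do_checkpoint(failing_write)
except RuntimeError:
    pass
```

The key property is that every attempt produces exactly one terminal event, and exceptions still propagate to the caller.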
def _should_checkpoint(source: Any, event: BaseEvent) -> CheckpointConfig | None:
@@ -136,6 +228,13 @@ def _should_checkpoint(source: Any, event: BaseEvent) -> CheckpointConfig | None

def _on_any_event(source: Any, event: BaseEvent, state: Any) -> None:
    """Sync handler registered on every event class."""
    if is_replaying():
        return
    if isinstance(
        event,
        (CheckpointBaseEvent, CheckpointForkBaseEvent, CheckpointRestoreBaseEvent),
    ):
        return
    cfg = _should_checkpoint(source, event)
    if cfg is None:
        return
@@ -155,7 +254,8 @@ def _register_all_handlers(event_bus: CrewAIEventsBus) -> None:
    seen: set[type] = set()

    def _collect(cls: type[BaseEvent]) -> None:
        for sub in cls.__subclasses__():
        subclasses: list[type[BaseEvent]] = cls.__subclasses__()
        for sub in subclasses:
            if sub not in seen:
                seen.add(sub)
                type_field = sub.model_fields.get("type")
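Both `_collect` helpers above bind `cls.__subclasses__()` to a typed local before iterating; since each call already returns a fresh list, the change mainly helps type checking rather than iteration safety. A runnable sketch of the recursive subclass walk over a toy hierarchy (the recursion into subclasses is assumed from context, as the hunk is cut off before it):

```python
class BaseEvent:  # minimal stand-in for crewai's BaseEvent hierarchy
    pass


class TaskEvent(BaseEvent):
    pass


class TaskStarted(TaskEvent):
    pass


class FlowEvent(BaseEvent):
    pass


seen: set[type] = set()


def _collect(cls: type) -> None:
    """Depth-first walk of __subclasses__, deduplicating via `seen`."""
    subclasses: list[type] = cls.__subclasses__()
    for sub in subclasses:
        if sub not in seen:
            seen.add(sub)
            _collect(sub)  # descend into grandchildren


_collect(BaseEvent)
```

The `seen` set prevents re-registering a class reachable through multiple inheritance paths.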
@@ -39,7 +39,8 @@ def _build_event_type_map() -> None:
    """Populate _event_type_map from all BaseEvent subclasses."""

    def _collect(cls: type[BaseEvent]) -> None:
        for sub in cls.__subclasses__():
        subclasses: list[type[BaseEvent]] = cls.__subclasses__()
        for sub in subclasses:
            type_field = sub.model_fields.get("type")
            if type_field and type_field.default:
                _event_type_map[type_field.default] = sub
@@ -196,6 +197,21 @@ class EventRecord(BaseModel):
            node for node in self.nodes.values() if not node.neighbors("parent")
        ]

    def all_nodes(self) -> list[EventNode]:
        """Return a snapshot of every node under the read lock.

        Returns:
            A list copy of the current nodes, safe to iterate without holding
            the lock.
        """
        with self._lock.r_locked():
            return list(self.nodes.values())

    def clear(self) -> None:
        """Remove all nodes from the record under the write lock."""
        with self._lock.w_locked():
            self.nodes.clear()

    def __len__(self) -> int:
        with self._lock.r_locked():
            return len(self.nodes)
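`EventRecord.all_nodes`/`clear`/`__len__` above copy or mutate only while holding the record's reader-writer lock, so callers can iterate the returned snapshot lock-free. A sketch of the same snapshot-under-lock idea using a plain `threading.Lock` (the real `r_locked`/`w_locked` API is not reproduced here):

```python
import threading


class NodeStore:
    """Simplified stand-in for EventRecord's locked accessors."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self.nodes: dict[str, object] = {}

    def all_nodes(self) -> list[object]:
        with self._lock:
            # Copy under the lock; the caller iterates the copy lock-free.
            return list(self.nodes.values())

    def clear(self) -> None:
        with self._lock:
            self.nodes.clear()

    def __len__(self) -> int:
        with self._lock:
            return len(self.nodes)


store = NodeStore()
store.nodes.update({"a": 1, "b": 2})
snapshot = store.all_nodes()
store.clear()
```

Because `all_nodes` returns a copy, a concurrent `clear` cannot invalidate an iteration already in progress.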
@@ -61,13 +61,16 @@ class BaseProvider(BaseModel, ABC):
        ...

    @abstractmethod
    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> None:
    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> int:
        """Remove old checkpoints, keeping at most *max_keep* per branch.

        Args:
            location: The storage destination passed to ``checkpoint``.
            max_keep: Maximum number of checkpoints to retain.
            branch: Only prune checkpoints on this branch.

        Returns:
            The number of checkpoints removed.
        """
        ...

@@ -95,17 +95,20 @@ class JsonProvider(BaseProvider):
            await f.write(data)
        return str(file_path)

    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> None:
    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> int:
        """Remove oldest checkpoint files beyond *max_keep* on a branch."""
        _safe_branch(location, branch)
        branch_dir = os.path.join(location, branch)
        pattern = os.path.join(branch_dir, "*.json")
        files = sorted(glob.glob(pattern), key=os.path.getmtime)
        removed = 0
        for path in files if max_keep == 0 else files[:-max_keep]:
            try:
                os.remove(path)
                removed += 1
            except OSError:  # noqa: PERF203
                logger.debug("Failed to remove %s", path, exc_info=True)
        return removed

    def extract_id(self, location: str) -> str:
        """Extract the checkpoint ID from a file path.
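`JsonProvider.prune` above selects deletion victims with `files if max_keep == 0 else files[:-max_keep]` over an oldest-first mtime sort. The slice logic is worth spelling out: `files[:-0]` is an empty slice, so without the special case nothing would ever be deleted when `max_keep == 0`. A minimal sketch of just the selection rule:

```python
def doomed(files: list[str], max_keep: int) -> list[str]:
    """Files sorted oldest-first; return the ones to delete so that at most
    max_keep of the newest survive. max_keep == 0 means delete everything,
    which must be special-cased because files[:-0] would select nothing."""
    return files if max_keep == 0 else files[:-max_keep]


ordered = ["0001.json", "0002.json", "0003.json", "0004.json"]
```

When `max_keep` exceeds the file count, the negative slice simply yields an empty list, so nothing is removed.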
@@ -111,11 +111,13 @@ class SqliteProvider(BaseProvider):
            await db.commit()
        return f"{location}#{checkpoint_id}"

    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> None:
    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> int:
        """Remove oldest checkpoint rows beyond *max_keep* on a branch."""
        with sqlite3.connect(location) as conn:
            conn.execute(_PRUNE, (branch, branch, max_keep))
            cursor = conn.execute(_PRUNE, (branch, branch, max_keep))
            removed: int = cursor.rowcount
            conn.commit()
            return max(removed, 0)

    def extract_id(self, location: str) -> str:
        """Extract the checkpoint ID from a ``db_path#id`` string."""
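`SqliteProvider.prune` above now surfaces `cursor.rowcount` from the `_PRUNE` DELETE. The `_PRUNE` statement itself is not shown in the diff, so the query below is an assumed keep-newest-per-branch delete; the `rowcount` mechanics are the point:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (id INTEGER PRIMARY KEY, branch TEXT)")
conn.executemany(
    "INSERT INTO checkpoints (branch) VALUES (?)",
    [("main",), ("main",), ("main",), ("fork/x",)],
)

max_keep = 1
# Hypothetical prune query: delete rows on the branch that are not among
# the newest max_keep rows of that branch.
cursor = conn.execute(
    """DELETE FROM checkpoints WHERE branch = ? AND id NOT IN (
           SELECT id FROM checkpoints WHERE branch = ? ORDER BY id DESC LIMIT ?
       )""",
    ("main", "main", max_keep),
)
removed = max(cursor.rowcount, 0)  # rowcount reports rows affected by DELETE
conn.commit()
```

`cursor.rowcount` can be -1 for statements where the count is unknown, which is why the diff clamps it with `max(removed, 0)`.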
@@ -10,6 +10,7 @@ via ``RuntimeState.model_rebuild()``.
from __future__ import annotations

import logging
import time
from typing import TYPE_CHECKING, Any
import uuid

@@ -23,6 +24,17 @@ from pydantic import (
)

from crewai.context import capture_execution_context
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.checkpoint_events import (
    CheckpointCompletedEvent,
    CheckpointFailedEvent,
    CheckpointForkCompletedEvent,
    CheckpointForkStartedEvent,
    CheckpointRestoreCompletedEvent,
    CheckpointRestoreFailedEvent,
    CheckpointRestoreStartedEvent,
    CheckpointStartedEvent,
)
from crewai.state.checkpoint_config import CheckpointConfig
from crewai.state.event_record import EventRecord
from crewai.state.provider.core import BaseProvider
@@ -44,9 +56,12 @@ def _sync_checkpoint_fields(entity: object) -> None:
        entity: The entity whose private runtime attributes will be
            copied into its public checkpoint fields.
    """
    from crewai.agents.agent_builder.base_agent import BaseAgent
    from crewai.crew import Crew
    from crewai.flow.flow import Flow

    if isinstance(entity, BaseAgent):
        entity.checkpoint_kickoff_event_id = entity._kickoff_event_id
    if isinstance(entity, Flow):
        entity.checkpoint_completed_methods = (
            set(entity._completed_methods) if entity._completed_methods else None
@@ -86,7 +101,7 @@ def _migrate(data: dict[str, Any]) -> dict[str, Any]:
    """
    raw = data.get("crewai_version")
    current = Version(get_crewai_version())
    stored = Version(raw) if raw else Version("0.0.0")
    stored = Version(raw) if isinstance(raw, str) and raw else Version("0.0.0")

    if raw is None:
        logger.warning("Checkpoint has no crewai_version — treating as 0.0.0")
@@ -156,6 +171,63 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
        self._checkpoint_id = provider.extract_id(location)
        self._parent_id = self._checkpoint_id

    def _begin_checkpoint(self, location: str) -> tuple[str, str | None, str, float]:
        """Emit the start event and return the invariant context for a checkpoint."""
        provider_name: str = type(self._provider).__name__
        parent_id_snapshot: str | None = self._parent_id
        branch_snapshot: str = self._branch
        crewai_event_bus.emit(
            self,
            CheckpointStartedEvent(
                location=location,
                provider=provider_name,
                branch=branch_snapshot,
                parent_id=parent_id_snapshot,
            ),
        )
        return provider_name, parent_id_snapshot, branch_snapshot, time.perf_counter()

    def _emit_checkpoint_failed(
        self,
        location: str,
        provider_name: str,
        branch_snapshot: str,
        parent_id_snapshot: str | None,
        exc: Exception,
    ) -> None:
        """Emit the failure event for a checkpoint write."""
        crewai_event_bus.emit(
            self,
            CheckpointFailedEvent(
                location=location,
                provider=provider_name,
                branch=branch_snapshot,
                parent_id=parent_id_snapshot,
                error=str(exc),
            ),
        )

    def _emit_checkpoint_completed(
        self,
        result: str,
        provider_name: str,
        branch_snapshot: str,
        parent_id_snapshot: str | None,
        start: float,
    ) -> None:
        """Emit the completion event for a successful checkpoint write."""
        crewai_event_bus.emit(
            self,
            CheckpointCompletedEvent(
                location=result,
                provider=provider_name,
                branch=branch_snapshot,
                parent_id=parent_id_snapshot,
                checkpoint_id=self._provider.extract_id(result),
                duration_ms=(time.perf_counter() - start) * 1000.0,
            ),
        )

    def checkpoint(self, location: str) -> str:
        """Write a checkpoint.

@@ -166,14 +238,27 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
        Returns:
            A location identifier for the saved checkpoint.
        """
        _prepare_entities(self.root)
        result = self._provider.checkpoint(
            self.model_dump_json(),
            location,
            parent_id=self._parent_id,
            branch=self._branch,
        provider_name, parent_id_snapshot, branch_snapshot, start = (
            self._begin_checkpoint(location)
        )
        try:
            _prepare_entities(self.root)
            result = self._provider.checkpoint(
                self.model_dump_json(),
                location,
                parent_id=parent_id_snapshot,
                branch=branch_snapshot,
            )
            self._chain_lineage(self._provider, result)
        except Exception as exc:
            self._emit_checkpoint_failed(
                location, provider_name, branch_snapshot, parent_id_snapshot, exc
            )
            raise

        self._emit_checkpoint_completed(
            result, provider_name, branch_snapshot, parent_id_snapshot, start
        )
        self._chain_lineage(self._provider, result)
        return result

    async def acheckpoint(self, location: str) -> str:
@@ -186,14 +271,27 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
        Returns:
            A location identifier for the saved checkpoint.
        """
        _prepare_entities(self.root)
        result = await self._provider.acheckpoint(
            self.model_dump_json(),
            location,
            parent_id=self._parent_id,
            branch=self._branch,
        provider_name, parent_id_snapshot, branch_snapshot, start = (
            self._begin_checkpoint(location)
        )
        try:
            _prepare_entities(self.root)
            result = await self._provider.acheckpoint(
                self.model_dump_json(),
                location,
                parent_id=parent_id_snapshot,
                branch=branch_snapshot,
            )
            self._chain_lineage(self._provider, result)
        except Exception as exc:
            self._emit_checkpoint_failed(
                location, provider_name, branch_snapshot, parent_id_snapshot, exc
            )
            raise

        self._emit_checkpoint_completed(
            result, provider_name, branch_snapshot, parent_id_snapshot, start
        )
        self._chain_lineage(self._provider, result)
        return result

    def fork(self, branch: str | None = None) -> None:
@@ -208,11 +306,32 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
            times without collisions.
        """
        if branch:
            self._branch = branch
            new_branch = branch
        elif self._checkpoint_id:
            self._branch = f"fork/{self._checkpoint_id}_{uuid.uuid4().hex[:6]}"
            new_branch = f"fork/{self._checkpoint_id}_{uuid.uuid4().hex[:6]}"
        else:
            self._branch = f"fork/{uuid.uuid4().hex[:8]}"
            new_branch = f"fork/{uuid.uuid4().hex[:8]}"

        parent_branch: str | None = self._branch
        parent_checkpoint_id: str | None = self._checkpoint_id

        crewai_event_bus.emit(
            self,
            CheckpointForkStartedEvent(
                branch=new_branch,
                parent_branch=parent_branch,
                parent_checkpoint_id=parent_checkpoint_id,
            ),
        )
        self._branch = new_branch
        crewai_event_bus.emit(
            self,
            CheckpointForkCompletedEvent(
                branch=new_branch,
                parent_branch=parent_branch,
                parent_checkpoint_id=parent_checkpoint_id,
            ),
        )

    @classmethod
    def from_checkpoint(cls, config: CheckpointConfig, **kwargs: Any) -> RuntimeState:
@@ -230,13 +349,41 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
        if config.restore_from is None:
            raise ValueError("CheckpointConfig.restore_from must be set")
        location = str(config.restore_from)
        provider = detect_provider(location)
        raw = provider.from_checkpoint(location)
        state = cls.model_validate_json(raw, **kwargs)
        state._provider = provider
        checkpoint_id = provider.extract_id(location)
        state._checkpoint_id = checkpoint_id
        state._parent_id = checkpoint_id

        crewai_event_bus.emit(config, CheckpointRestoreStartedEvent(location=location))
        start: float = time.perf_counter()
        provider_name: str | None = None
        try:
            provider = detect_provider(location)
            provider_name = type(provider).__name__
            raw = provider.from_checkpoint(location)
            state = cls.model_validate_json(raw, **kwargs)
            state._provider = provider
            checkpoint_id = provider.extract_id(location)
            state._checkpoint_id = checkpoint_id
            state._parent_id = checkpoint_id
        except Exception as exc:
            crewai_event_bus.emit(
                config,
                CheckpointRestoreFailedEvent(
                    location=location,
                    provider=provider_name,
                    error=str(exc),
                ),
            )
            raise

        crewai_event_bus.emit(
            config,
            CheckpointRestoreCompletedEvent(
                location=location,
                provider=provider_name,
                checkpoint_id=checkpoint_id,
                branch=state._branch,
                parent_id=state._parent_id,
                duration_ms=(time.perf_counter() - start) * 1000.0,
            ),
        )
        return state

    @classmethod
@@ -257,13 +404,41 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
        if config.restore_from is None:
            raise ValueError("CheckpointConfig.restore_from must be set")
        location = str(config.restore_from)
        provider = detect_provider(location)
        raw = await provider.afrom_checkpoint(location)
        state = cls.model_validate_json(raw, **kwargs)
        state._provider = provider
        checkpoint_id = provider.extract_id(location)
        state._checkpoint_id = checkpoint_id
        state._parent_id = checkpoint_id

        crewai_event_bus.emit(config, CheckpointRestoreStartedEvent(location=location))
        start: float = time.perf_counter()
        provider_name: str | None = None
        try:
            provider = detect_provider(location)
            provider_name = type(provider).__name__
            raw = await provider.afrom_checkpoint(location)
            state = cls.model_validate_json(raw, **kwargs)
            state._provider = provider
            checkpoint_id = provider.extract_id(location)
            state._checkpoint_id = checkpoint_id
            state._parent_id = checkpoint_id
        except Exception as exc:
            crewai_event_bus.emit(
                config,
                CheckpointRestoreFailedEvent(
                    location=location,
                    provider=provider_name,
                    error=str(exc),
                ),
            )
            raise

        crewai_event_bus.emit(
            config,
            CheckpointRestoreCompletedEvent(
                location=location,
                provider=provider_name,
                checkpoint_id=checkpoint_id,
                branch=state._branch,
                parent_id=state._parent_id,
                duration_ms=(time.perf_counter() - start) * 1000.0,
            ),
        )
        return state

@@ -32,6 +32,7 @@ from pydantic import (
    field_validator,
    model_validator,
)
from pydantic.functional_serializers import PlainSerializer
from pydantic_core import PydanticCustomError
from typing_extensions import Self

@@ -45,6 +46,7 @@ from crewai.events.types.task_events import (
    TaskStartedEvent,
)
from crewai.llms.base_llm import BaseLLM
from crewai.llms.providers.openai.completion import OpenAICompletion
from crewai.security import Fingerprint, SecurityConfig
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput
@@ -85,6 +87,22 @@ from crewai.utilities.printer import PRINTER
from crewai.utilities.string_utils import interpolate_only


def _serialize_model_class(v: type[BaseModel] | None) -> dict[str, Any] | None:
    """Serialize a Pydantic model class reference to its JSON schema."""
    return v.model_json_schema() if v else None


def _deserialize_model_class(v: Any) -> type[BaseModel] | None:
    """Hydrate a model class reference from checkpoint data."""
    if v is None or isinstance(v, type):
        return v
    if isinstance(v, dict):
        from crewai.utilities.pydantic_schema_utils import create_model_from_schema

        return create_model_from_schema(v)
    return None


class Task(BaseModel):
    """Class that represents a task to be executed.
@@ -140,15 +158,33 @@ class Task(BaseModel):
        description="Whether the task should be executed asynchronously or not.",
        default=False,
    )
    output_json: type[BaseModel] | None = Field(
    output_json: Annotated[
        type[BaseModel] | None,
        BeforeValidator(_deserialize_model_class),
        PlainSerializer(
            _serialize_model_class, return_type=dict | None, when_used="json"
        ),
    ] = Field(
        description="A Pydantic model to be used to create a JSON output.",
        default=None,
    )
    output_pydantic: type[BaseModel] | None = Field(
    output_pydantic: Annotated[
        type[BaseModel] | None,
        BeforeValidator(_deserialize_model_class),
        PlainSerializer(
            _serialize_model_class, return_type=dict | None, when_used="json"
        ),
    ] = Field(
        description="A Pydantic model to be used to create a Pydantic output.",
        default=None,
    )
    response_model: type[BaseModel] | None = Field(
    response_model: Annotated[
        type[BaseModel] | None,
        BeforeValidator(_deserialize_model_class),
        PlainSerializer(
            _serialize_model_class, return_type=dict | None, when_used="json"
        ),
    ] = Field(
        description="A Pydantic model for structured LLM outputs using native provider features.",
        default=None,
    )
@@ -188,7 +224,13 @@ class Task(BaseModel):
        description="Whether the task should instruct the agent to return the final answer formatted in Markdown",
        default=False,
    )
    converter_cls: type[Converter] | None = Field(
    converter_cls: Annotated[
        type[Converter] | None,
        BeforeValidator(lambda v: v if v is None or isinstance(v, type) else None),
        PlainSerializer(
            _serialize_model_class, return_type=dict | None, when_used="json"
        ),
    ] = Field(
        description="A converter class used to export structured output",
        default=None,
    )
@@ -301,12 +343,14 @@ class Task(BaseModel):

    @model_validator(mode="after")
    def validate_required_fields(self) -> Self:
        required_fields = ["description", "expected_output"]
        for field in required_fields:
            if getattr(self, field) is None:
                raise ValueError(
                    f"{field} must be provided either directly or through config"
                )
        if self.description is None:
            raise ValueError(
                "description must be provided either directly or through config"
            )
        if self.expected_output is None:
            raise ValueError(
                "expected_output must be provided either directly or through config"
            )
        return self

    @model_validator(mode="after")
@@ -838,8 +882,8 @@ class Task(BaseModel):
        should_inject = self.allow_crewai_trigger_context

        if should_inject and self.agent:
            crew = getattr(self.agent, "crew", None)
            if crew and hasattr(crew, "_inputs") and crew._inputs:
            crew = self.agent.crew
            if crew and not isinstance(crew, str) and crew._inputs:
                trigger_payload = crew._inputs.get("crewai_trigger_payload")
                if trigger_payload is not None:
                    description += f"\n\nTrigger Payload: {trigger_payload}"
@@ -852,11 +896,12 @@ class Task(BaseModel):
            isinstance(self.agent.llm, BaseLLM)
            and self.agent.llm.supports_multimodal()
        ):
            provider: str = str(
                getattr(self.agent.llm, "provider", None)
                or getattr(self.agent.llm, "model", "openai")
            provider: str = self.agent.llm.provider or self.agent.llm.model
            api: str | None = (
                self.agent.llm.api
                if isinstance(self.agent.llm, OpenAICompletion)
                else None
            )
            api: str | None = getattr(self.agent.llm, "api", None)
            supported_types = get_supported_content_types(provider, api)

            def is_auto_injected(content_type: str) -> bool:
@@ -1237,12 +1282,26 @@ Follow these guidelines:
            tools=tools,
        )

        pydantic_output, json_output = self._export_output(result)
        if isinstance(result, BaseModel):
            raw = result.model_dump_json()
            if self.output_pydantic:
                pydantic_output = result
                json_output = None
            elif self.output_json:
                pydantic_output = None
                json_output = result.model_dump()
            else:
                pydantic_output = None
                json_output = None
        else:
            raw = result
            pydantic_output, json_output = self._export_output(result)

        task_output = TaskOutput(
            name=self.name or self.description,
            description=self.description,
            expected_output=self.expected_output,
            raw=result,
            raw=raw,
            pydantic=pydantic_output,
            json_dict=json_output,
            agent=agent.role,
@@ -1333,12 +1392,26 @@ Follow these guidelines:
            tools=tools,
        )

        pydantic_output, json_output = self._export_output(result)
        if isinstance(result, BaseModel):
            raw = result.model_dump_json()
            if self.output_pydantic:
                pydantic_output = result
                json_output = None
            elif self.output_json:
                pydantic_output = None
                json_output = result.model_dump()
            else:
                pydantic_output = None
                json_output = None
        else:
            raw = result
            pydantic_output, json_output = self._export_output(result)

        task_output = TaskOutput(
            name=self.name or self.description,
            description=self.description,
            expected_output=self.expected_output,
            raw=result,
            raw=raw,
            pydantic=pydantic_output,
            json_dict=json_output,
            agent=agent.role,
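The new branch above routes a `BaseModel` result into `raw`/`pydantic`/`json_dict` depending on which output the task requested, serializing the model once for the raw string. A compact sketch of that routing, with booleans standing in for the `output_pydantic`/`output_json` task fields:

```python
from pydantic import BaseModel


class Report(BaseModel):
    title: str
    score: int


def split_output(result, output_pydantic: bool, output_json: bool):
    """Return (raw, pydantic_output, json_output) for a task result.
    A BaseModel result is serialized for raw, then routed to exactly one
    of the typed outputs; plain strings pass through unchanged."""
    if isinstance(result, BaseModel):
        raw = result.model_dump_json()
        if output_pydantic:
            return raw, result, None
        if output_json:
            return raw, None, result.model_dump()
        return raw, None, None
    return result, None, None


raw, pyd, js = split_output(
    Report(title="q3", score=7), output_pydantic=False, output_json=True
)
```

The point of the branch is that `raw` is always a string, even when the agent returned a structured model.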
@@ -1058,3 +1058,20 @@ class Telemetry:
            close_span(span)

        self._safe_telemetry_operation(_operation)

    def template_installed_span(self, template_name: str) -> None:
        """Records when a template is downloaded and installed.

        Args:
            template_name: Name of the template that was installed
                (without the template_ prefix).
        """

        def _operation() -> None:
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Template Installed")
            self._add_attribute(span, "crewai_version", version("crewai"))
            self._add_attribute(span, "template_name", template_name)
            close_span(span)

        self._safe_telemetry_operation(_operation)
82  lib/crewai/src/crewai/tools/flow_tool.py  Normal file
@@ -0,0 +1,82 @@
"""Wrap Flow classes as callable tools so agents can invoke them."""

from __future__ import annotations

import json
from typing import Any

from pydantic import BaseModel, Field

from crewai.tools.base_tool import BaseTool
from crewai.utilities.string_utils import sanitize_tool_name


class FlowToolInputSchema(BaseModel):
    """Default input schema for a FlowTool."""

    inputs: str = Field(
        default="{}",
        description=(
            "JSON string of key-value pairs to pass as inputs to the flow. "
            "Use '{}' if the flow requires no inputs."
        ),
    )


class FlowTool(BaseTool):
    """Wraps a Flow class as a BaseTool so an agent can invoke it.

    The tool instantiates the Flow, calls ``kickoff(inputs=...)`` and returns
    the result as a string.
    """

    name: str = ""
    description: str = ""
    flow_class: Any = Field(
        default=None,
        description="The Flow class (not instance) to wrap.",
        exclude=True,
    )
    args_schema: Any = FlowToolInputSchema

    def _run(self, inputs: str = "{}") -> str:
        """Instantiate the Flow, run kickoff, and return the result."""
        try:
            parsed_inputs = json.loads(inputs) if isinstance(inputs, str) else inputs
        except (json.JSONDecodeError, TypeError):
            parsed_inputs = {}

        if not isinstance(parsed_inputs, dict):
            parsed_inputs = {}

        flow_instance = self.flow_class()
        result = flow_instance.kickoff(inputs=parsed_inputs if parsed_inputs else None)
        return str(result)


def create_flow_tools(flows: list[type] | None) -> list[BaseTool]:
    """Convert a list of Flow classes into BaseTool wrappers.

    Args:
        flows: Flow classes (not instances) to wrap as tools.

    Returns:
        A list of FlowTool instances ready for agent use.
    """
    if not flows:
        return []

    tools: list[BaseTool] = []
    for flow_cls in flows:
        name = sanitize_tool_name(flow_cls.__name__)
        docstring = (flow_cls.__doc__ or "").strip()
        description = docstring if docstring else f"Run the {flow_cls.__name__} flow."

        tools.append(
            FlowTool(
                name=name,
                description=description,
                flow_class=flow_cls,
            )
        )
    return tools
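The lenient input handling in ``FlowTool._run`` above (accept a JSON string, an already-parsed dict, or garbage, and always fall back to an empty dict) can be sketched standalone. ``parse_tool_inputs`` is a hypothetical helper name, not part of the diff:

```python
import json
from typing import Any


def parse_tool_inputs(inputs: Any) -> dict[str, Any]:
    """Mirror FlowTool._run's input handling: tolerate a JSON string,
    a pre-parsed dict, or anything else, and always return a dict."""
    try:
        parsed = json.loads(inputs) if isinstance(inputs, str) else inputs
    except (json.JSONDecodeError, TypeError):
        parsed = {}
    if not isinstance(parsed, dict):
        # A JSON array or scalar is not a usable kwargs mapping.
        parsed = {}
    return parsed


print(parse_tool_inputs('{"topic": "ai"}'))  # {'topic': 'ai'}
print(parse_tool_inputs("not json"))         # {}
print(parse_tool_inputs('["a", "b"]'))       # {}
```

This is why a malformed tool call from the LLM degrades to running the flow with no inputs rather than raising.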
@@ -19,7 +19,18 @@ from collections.abc import Callable
from copy import deepcopy
import datetime
import logging
from typing import TYPE_CHECKING, Annotated, Any, Final, Literal, TypedDict, Union
from typing import (
    TYPE_CHECKING,
    Annotated,
    Any,
    Final,
    ForwardRef,
    Literal,
    Optional,
    TypedDict,
    Union,
    cast,
)
import uuid

import jsonref  # type: ignore[import-untyped]
@@ -99,15 +110,26 @@ def resolve_refs(schema: dict[str, Any]) -> dict[str, Any]:
    """
    defs = schema.get("$defs", {})
    schema_copy = deepcopy(schema)
    expanding: set[str] = set()

    def _resolve(node: Any) -> Any:
        if isinstance(node, dict):
            ref = node.get("$ref")
            if isinstance(ref, str) and ref.startswith("#/$defs/"):
                def_name = ref.replace("#/$defs/", "")
                if def_name in defs:
                if def_name not in defs:
                    raise KeyError(f"Definition '{def_name}' not found in $defs.")
                if def_name in expanding:
                    def_schema = defs[def_name]
                    stub: dict[str, Any] = {"type": def_schema.get("type", "object")}
                    if "description" in def_schema:
                        stub["description"] = def_schema["description"]
                    return stub
                expanding.add(def_name)
                try:
                    return _resolve(deepcopy(defs[def_name]))
                raise KeyError(f"Definition '{def_name}' not found in $defs.")
                finally:
                    expanding.discard(def_name)
            return {k: _resolve(v) for k, v in node.items()}

        if isinstance(node, list):
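The ``expanding`` set in the hunk above is what keeps a self-referential ``$defs`` entry from recursing forever: when a definition is re-entered while still being expanded, a typed stub is emitted instead. A minimal standalone sketch of that behavior (``resolve_refs_sketch`` is a hypothetical name, not the library function):

```python
from copy import deepcopy
from typing import Any


def resolve_refs_sketch(schema: dict[str, Any]) -> dict[str, Any]:
    """Inline '#/$defs/...' references, replacing cyclic back-edges with stubs."""
    defs = schema.get("$defs", {})
    expanding: set[str] = set()

    def _resolve(node: Any) -> Any:
        if isinstance(node, dict):
            ref = node.get("$ref")
            if isinstance(ref, str) and ref.startswith("#/$defs/"):
                def_name = ref.replace("#/$defs/", "")
                if def_name not in defs:
                    raise KeyError(f"Definition '{def_name}' not found in $defs.")
                if def_name in expanding:
                    # Cycle detected: emit a typed stub instead of recursing.
                    return {"type": defs[def_name].get("type", "object")}
                expanding.add(def_name)
                try:
                    return _resolve(deepcopy(defs[def_name]))
                finally:
                    expanding.discard(def_name)
            return {k: _resolve(v) for k, v in node.items()}
        if isinstance(node, list):
            return [_resolve(i) for i in node]
        return node

    return _resolve(deepcopy(schema))


# A self-referential definition: Node.children contains Nodes.
schema = {
    "$ref": "#/$defs/Node",
    "$defs": {
        "Node": {
            "type": "object",
            "properties": {
                "children": {"type": "array", "items": {"$ref": "#/$defs/Node"}}
            },
        }
    },
}
resolved = resolve_refs_sketch(schema)
# The inner back-edge collapses to {"type": "object"} and expansion terminates.
```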
@@ -119,7 +141,11 @@ def resolve_refs(schema: dict[str, Any]) -> dict[str, Any]:


def add_key_in_dict_recursively(
    d: dict[str, Any], key: str, value: Any, criteria: Callable[[dict[str, Any]], bool]
    d: dict[str, Any],
    key: str,
    value: Any,
    criteria: Callable[[dict[str, Any]], bool],
    _seen: set[int] | None = None,
) -> dict[str, Any]:
    """Recursively adds a key/value pair to all nested dicts matching `criteria`.

@@ -128,22 +154,31 @@ def add_key_in_dict_recursively(
        key: The key to add.
        value: The value to add.
        criteria: A function that returns True for dicts that should receive the key.
        _seen: Internal set of visited ``id()``s, used to guard cyclic schemas.

    Returns:
        The modified dictionary.
    """
    if _seen is None:
        _seen = set()
    if isinstance(d, dict):
        if id(d) in _seen:
            return d
        _seen.add(id(d))
        if criteria(d) and key not in d:
            d[key] = value
        for v in d.values():
            add_key_in_dict_recursively(v, key, value, criteria)
            add_key_in_dict_recursively(v, key, value, criteria, _seen)
    elif isinstance(d, list):
        if id(d) in _seen:
            return d
        _seen.add(id(d))
        for i in d:
            add_key_in_dict_recursively(i, key, value, criteria)
            add_key_in_dict_recursively(i, key, value, criteria, _seen)
    return d
def force_additional_properties_false(d: Any) -> Any:
def force_additional_properties_false(d: Any, _seen: set[int] | None = None) -> Any:
    """Force additionalProperties=false on all object-type dicts recursively.

    OpenAI strict mode requires all objects to have additionalProperties=false.

@@ -154,11 +189,17 @@ def force_additional_properties_false(d: Any) -> Any:

    Args:
        d: The dictionary/list to modify.
        _seen: Internal set of visited ``id()``s, used to guard cyclic schemas.

    Returns:
        The modified dictionary/list.
    """
    if _seen is None:
        _seen = set()
    if isinstance(d, dict):
        if id(d) in _seen:
            return d
        _seen.add(id(d))
        if d.get("type") == "object":
            d["additionalProperties"] = False
            if "properties" not in d:
@@ -166,10 +207,13 @@ def force_additional_properties_false(d: Any) -> Any:
            if "required" not in d:
                d["required"] = []
        for v in d.values():
            force_additional_properties_false(v)
            force_additional_properties_false(v, _seen)
    elif isinstance(d, list):
        if id(d) in _seen:
            return d
        _seen.add(id(d))
        for i in d:
            force_additional_properties_false(i)
            force_additional_properties_false(i, _seen)
    return d
@@ -183,7 +227,7 @@ OPENAI_SUPPORTED_FORMATS: Final[
}


def strip_unsupported_formats(d: Any) -> Any:
def strip_unsupported_formats(d: Any, _seen: set[int] | None = None) -> Any:
    """Remove format annotations that OpenAI strict mode doesn't support.

    OpenAI only supports: date-time, date, time, duration.
@@ -191,11 +235,17 @@ def strip_unsupported_formats(d: Any) -> Any:

    Args:
        d: The dictionary/list to modify.
        _seen: Internal set of visited ``id()``s, used to guard cyclic schemas.

    Returns:
        The modified dictionary/list.
    """
    if _seen is None:
        _seen = set()
    if isinstance(d, dict):
        if id(d) in _seen:
            return d
        _seen.add(id(d))
        format_value = d.get("format")
        if (
            isinstance(format_value, str)
@@ -203,14 +253,17 @@ def strip_unsupported_formats(d: Any) -> Any:
        ):
            del d["format"]
        for v in d.values():
            strip_unsupported_formats(v)
            strip_unsupported_formats(v, _seen)
    elif isinstance(d, list):
        if id(d) in _seen:
            return d
        _seen.add(id(d))
        for i in d:
            strip_unsupported_formats(i)
            strip_unsupported_formats(i, _seen)
    return d
def ensure_type_in_schemas(d: Any) -> Any:
def ensure_type_in_schemas(d: Any, _seen: set[int] | None = None) -> Any:
    """Ensure all schema objects in anyOf/oneOf have a 'type' key.

    OpenAI strict mode requires every schema to have a 'type' key.
@@ -218,11 +271,17 @@ def ensure_type_in_schemas(d: Any) -> Any:

    Args:
        d: The dictionary/list to modify.
        _seen: Internal set of visited ``id()``s, used to guard cyclic schemas.

    Returns:
        The modified dictionary/list.
    """
    if _seen is None:
        _seen = set()
    if isinstance(d, dict):
        if id(d) in _seen:
            return d
        _seen.add(id(d))
        for key in ("anyOf", "oneOf"):
            if key in d:
                schema_list = d[key]
@@ -230,12 +289,15 @@ def ensure_type_in_schemas(d: Any) -> Any:
                    if isinstance(schema, dict) and schema == {}:
                        schema_list[i] = {"type": "object"}
                    else:
                        ensure_type_in_schemas(schema)
                        ensure_type_in_schemas(schema, _seen)
        for v in d.values():
            ensure_type_in_schemas(v)
            ensure_type_in_schemas(v, _seen)
    elif isinstance(d, list):
        if id(d) in _seen:
            return d
        _seen.add(id(d))
        for item in d:
            ensure_type_in_schemas(item)
            ensure_type_in_schemas(item, _seen)
    return d
@@ -318,7 +380,9 @@ def add_const_to_oneof_variants(schema: dict[str, Any]) -> dict[str, Any]:
    return _process_oneof(deepcopy(schema))


def convert_oneof_to_anyof(schema: dict[str, Any]) -> dict[str, Any]:
def convert_oneof_to_anyof(
    schema: dict[str, Any], _seen: set[int] | None = None
) -> dict[str, Any]:
    """Convert oneOf to anyOf for OpenAI compatibility.

    OpenAI's Structured Outputs support anyOf better than oneOf.
@@ -326,26 +390,37 @@ def convert_oneof_to_anyof(schema: dict[str, Any]) -> dict[str, Any]:

    Args:
        schema: JSON schema dictionary.
        _seen: Internal set of visited ``id()``s, used to guard cyclic schemas.

    Returns:
        Modified schema with anyOf instead of oneOf.
    """
    if _seen is None:
        _seen = set()
    if isinstance(schema, dict):
        if id(schema) in _seen:
            return schema
        _seen.add(id(schema))
        if "oneOf" in schema:
            schema["anyOf"] = schema.pop("oneOf")

        for value in schema.values():
            if isinstance(value, dict):
                convert_oneof_to_anyof(value)
                convert_oneof_to_anyof(value, _seen)
            elif isinstance(value, list):
                if id(value) in _seen:
                    continue
                _seen.add(id(value))
                for item in value:
                    if isinstance(item, dict):
                        convert_oneof_to_anyof(item)
                        convert_oneof_to_anyof(item, _seen)

    return schema
def ensure_all_properties_required(schema: dict[str, Any]) -> dict[str, Any]:
def ensure_all_properties_required(
    schema: dict[str, Any], _seen: set[int] | None = None
) -> dict[str, Any]:
    """Ensure all properties are in the required array for OpenAI strict mode.

    OpenAI's strict structured outputs require all properties to be listed
@@ -354,11 +429,17 @@ def ensure_all_properties_required(schema: dict[str, Any]) -> dict[str, Any]:

    Args:
        schema: JSON schema dictionary.
        _seen: Internal set of visited ``id()``s, used to guard cyclic schemas.

    Returns:
        Modified schema with all properties marked as required.
    """
    if _seen is None:
        _seen = set()
    if isinstance(schema, dict):
        if id(schema) in _seen:
            return schema
        _seen.add(id(schema))
        if schema.get("type") == "object" and "properties" in schema:
            properties = schema["properties"]
            if properties:
@@ -366,16 +447,21 @@ def ensure_all_properties_required(schema: dict[str, Any]) -> dict[str, Any]:

        for value in schema.values():
            if isinstance(value, dict):
                ensure_all_properties_required(value)
                ensure_all_properties_required(value, _seen)
            elif isinstance(value, list):
                if id(value) in _seen:
                    continue
                _seen.add(id(value))
                for item in value:
                    if isinstance(item, dict):
                        ensure_all_properties_required(item)
                        ensure_all_properties_required(item, _seen)

    return schema
def strip_null_from_types(schema: dict[str, Any]) -> dict[str, Any]:
def strip_null_from_types(
    schema: dict[str, Any], _seen: set[int] | None = None
) -> dict[str, Any]:
    """Remove null type from anyOf/type arrays.

    Pydantic generates `T | None` for optional fields, which creates schemas with
@@ -384,11 +470,17 @@ def strip_null_from_types(schema: dict[str, Any]) -> dict[str, Any]:

    Args:
        schema: JSON schema dictionary.
        _seen: Internal set of visited ``id()``s, used to guard cyclic schemas.

    Returns:
        Modified schema with null types removed.
    """
    if _seen is None:
        _seen = set()
    if isinstance(schema, dict):
        if id(schema) in _seen:
            return schema
        _seen.add(id(schema))
        if "anyOf" in schema:
            any_of = schema["anyOf"]
            non_null = [opt for opt in any_of if opt.get("type") != "null"]
@@ -408,15 +500,141 @@ def strip_null_from_types(schema: dict[str, Any]) -> dict[str, Any]:

        for value in schema.values():
            if isinstance(value, dict):
                strip_null_from_types(value)
                strip_null_from_types(value, _seen)
            elif isinstance(value, list):
                if id(value) in _seen:
                    continue
                _seen.add(id(value))
                for item in value:
                    if isinstance(item, dict):
                        strip_null_from_types(item)
                        strip_null_from_types(item, _seen)

    return schema
_STRICT_METADATA_KEYS: Final[tuple[str, ...]] = (
    "title",
    "default",
    "examples",
    "example",
    "$comment",
    "readOnly",
    "writeOnly",
    "deprecated",
)

_CLAUDE_STRICT_UNSUPPORTED: Final[tuple[str, ...]] = (
    "minimum",
    "maximum",
    "exclusiveMinimum",
    "exclusiveMaximum",
    "multipleOf",
    "minLength",
    "maxLength",
    "pattern",
    "minItems",
    "maxItems",
    "uniqueItems",
    "minContains",
    "maxContains",
    "minProperties",
    "maxProperties",
    "patternProperties",
    "propertyNames",
    "dependentRequired",
    "dependentSchemas",
)


def _strip_keys_recursive(
    d: Any, keys: tuple[str, ...], _seen: set[int] | None = None
) -> Any:
    """Recursively delete a fixed set of keys from a schema."""
    if _seen is None:
        _seen = set()
    if isinstance(d, dict):
        if id(d) in _seen:
            return d
        _seen.add(id(d))
        for key in keys:
            d.pop(key, None)
        for v in d.values():
            _strip_keys_recursive(v, keys, _seen)
    elif isinstance(d, list):
        if id(d) in _seen:
            return d
        _seen.add(id(d))
        for i in d:
            _strip_keys_recursive(i, keys, _seen)
    return d


def lift_top_level_anyof(schema: dict[str, Any]) -> dict[str, Any]:
    """Unwrap a top-level anyOf/oneOf/allOf wrapping a single object variant.

    Anthropic's strict ``input_schema`` rejects top-level union keywords. When
    exactly one variant is an object schema, lift it so the root is a plain
    object; otherwise leave the schema alone.
    """
    for key in ("anyOf", "oneOf", "allOf"):
        variants = schema.get(key)
        if not isinstance(variants, list):
            continue
        object_variants = [
            v for v in variants if isinstance(v, dict) and v.get("type") == "object"
        ]
        if len(object_variants) == 1:
            lifted = deepcopy(object_variants[0])
            schema.pop(key)
            schema.update(lifted)
            break
    return schema


def _common_strict_pipeline(params: dict[str, Any]) -> dict[str, Any]:
    """Shared strict sanitization: inline refs, close objects, require all properties."""
    sanitized = resolve_refs(deepcopy(params))
    sanitized.pop("$defs", None)
    sanitized = convert_oneof_to_anyof(sanitized)
    sanitized = ensure_type_in_schemas(sanitized)
    sanitized = force_additional_properties_false(sanitized)
    sanitized = ensure_all_properties_required(sanitized)
    return cast(dict[str, Any], _strip_keys_recursive(sanitized, _STRICT_METADATA_KEYS))


def sanitize_tool_params_for_openai_strict(
    params: dict[str, Any],
) -> dict[str, Any]:
    """Sanitize a JSON schema for OpenAI strict function calling."""
    if not isinstance(params, dict):
        return params
    return cast(
        dict[str, Any], strip_unsupported_formats(_common_strict_pipeline(params))
    )


def sanitize_tool_params_for_anthropic_strict(
    params: dict[str, Any],
) -> dict[str, Any]:
    """Sanitize a JSON schema for Anthropic strict tool use."""
    if not isinstance(params, dict):
        return params
    sanitized = lift_top_level_anyof(_common_strict_pipeline(params))
    sanitized = _strip_keys_recursive(sanitized, _CLAUDE_STRICT_UNSUPPORTED)
    return cast(dict[str, Any], strip_unsupported_formats(sanitized))


def sanitize_tool_params_for_bedrock_strict(
    params: dict[str, Any],
) -> dict[str, Any]:
    """Sanitize a JSON schema for Bedrock Converse strict tool use.

    Bedrock Converse uses the same grammar compiler as the underlying Claude
    model, so the constraints match Anthropic's.
    """
    return sanitize_tool_params_for_anthropic_strict(params)
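The ``lift_top_level_anyof`` step above only fires when exactly one union variant is an object schema, which is the common shape Pydantic emits for ``Model | None``. A standalone sketch of just that transform (``lift_top_level_anyof_sketch`` is a hypothetical name mirroring the diff's logic):

```python
from copy import deepcopy
from typing import Any


def lift_top_level_anyof_sketch(schema: dict[str, Any]) -> dict[str, Any]:
    """Replace a top-level union with its sole object variant, if unambiguous."""
    for key in ("anyOf", "oneOf", "allOf"):
        variants = schema.get(key)
        if not isinstance(variants, list):
            continue
        object_variants = [
            v for v in variants if isinstance(v, dict) and v.get("type") == "object"
        ]
        if len(object_variants) == 1:
            lifted = deepcopy(object_variants[0])
            schema.pop(key)
            schema.update(lifted)  # root becomes a plain object schema
            break
    return schema


# Typical Pydantic output for `Model | None`:
schema = {
    "anyOf": [
        {"type": "object", "properties": {"x": {"type": "integer"}}},
        {"type": "null"},
    ]
}
lifted = lift_top_level_anyof_sketch(schema)
# The null branch is dropped and the object becomes the root.
```

With two or more object variants the union is ambiguous, so the schema is left untouched rather than guessing.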
def generate_model_description(
    model: type[BaseModel],
    *,
@@ -545,6 +763,25 @@ def build_rich_field_description(prop_schema: dict[str, Any]) -> str:
    return ". ".join(parts) if parts else ""
def _inline_top_level_ref(schema: dict[str, Any]) -> dict[str, Any]:
    """Resolve only the top-level ``$ref``, preserving ``$defs`` for lazy inner resolution.

    Used as a fallback when ``jsonref.replace_refs`` fails on circular schemas.
    Inner ``$ref`` pointers are left intact so that :func:`_resolve_ref` can
    resolve them during model construction, with cycle detection via ``in_progress``.
    """
    schema = deepcopy(schema)
    ref = schema.get("$ref")
    if isinstance(ref, str) and ref.startswith("#/$defs/"):
        def_name = ref[len("#/$defs/") :]
        defs = schema.get("$defs", {})
        if def_name in defs:
            resolved: dict[str, Any] = deepcopy(defs[def_name])
            resolved.setdefault("$defs", defs)
            return resolved
    return schema
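The key idea in ``_inline_top_level_ref`` is that only the root ``$ref`` is expanded while ``$defs`` is carried along, so inner references stay lazy. A self-contained sketch of that shape (``inline_top_level_ref_sketch`` is a hypothetical name for illustration):

```python
from copy import deepcopy
from typing import Any


def inline_top_level_ref_sketch(schema: dict[str, Any]) -> dict[str, Any]:
    """Expand only the top-level '#/$defs/...' ref; keep $defs for later."""
    schema = deepcopy(schema)
    ref = schema.get("$ref")
    if isinstance(ref, str) and ref.startswith("#/$defs/"):
        def_name = ref[len("#/$defs/"):]
        defs = schema.get("$defs", {})
        if def_name in defs:
            resolved: dict[str, Any] = deepcopy(defs[def_name])
            resolved.setdefault("$defs", defs)  # inner refs resolve lazily later
            return resolved
    return schema


# A circular schema that would trip eager full inlining:
schema = {
    "$ref": "#/$defs/Person",
    "$defs": {
        "Person": {
            "type": "object",
            "properties": {"friend": {"$ref": "#/$defs/Person"}},
        }
    },
}
out = inline_top_level_ref_sketch(schema)
# The root is now a concrete object; the inner $ref survives untouched.
```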
def create_model_from_schema(  # type: ignore[no-any-unimported]
    json_schema: dict[str, Any],
    *,
@@ -599,19 +836,80 @@ def create_model_from_schema(  # type: ignore[no-any-unimported]
    >>> person.name
    'John'
    """
    json_schema = dict(jsonref.replace_refs(json_schema, proxies=False))
    try:
        json_schema = dict(jsonref.replace_refs(json_schema, proxies=False))
    except (jsonref.JsonRefError, RecursionError):
        json_schema = _inline_top_level_ref(json_schema)

    effective_root = root_schema or json_schema

    json_schema = force_additional_properties_false(json_schema)
    effective_root = force_additional_properties_false(effective_root)

    in_progress: dict[int, Any] = {}
    model = _build_model_from_schema(
        json_schema,
        effective_root,
        model_name=model_name,
        enrich_descriptions=enrich_descriptions,
        in_progress=in_progress,
        __config__=__config__,
        __base__=__base__,
        __module__=__module__,
        __validators__=__validators__,
        __cls_kwargs__=__cls_kwargs__,
    )

    types_namespace: dict[str, Any] = {
        entry.__name__: entry
        for entry in in_progress.values()
        if isinstance(entry, type) and issubclass(entry, BaseModel)
    }
    for entry in in_progress.values():
        if (
            isinstance(entry, type)
            and issubclass(entry, BaseModel)
            and not getattr(entry, "__pydantic_complete__", True)
        ):
            try:
                entry.model_rebuild(_types_namespace=types_namespace)
            except Exception as e:
                logger.debug("model_rebuild failed for %s: %s", entry.__name__, e)
    return model
def _build_model_from_schema(  # type: ignore[no-any-unimported]
    json_schema: dict[str, Any],
    effective_root: dict[str, Any],
    *,
    model_name: str | None,
    enrich_descriptions: bool,
    in_progress: dict[int, Any],
    __config__: ConfigDict | None = None,
    __base__: type[BaseModel] | None = None,
    __module__: str = __name__,
    __validators__: dict[str, AnyClassMethod] | None = None,
    __cls_kwargs__: dict[str, Any] | None = None,
) -> type[BaseModel]:
    """Inner builder shared by the public entry point and recursive nested-object creation.

    Preprocessing via ``jsonref.replace_refs`` and the sanitization walkers is
    run once by the public entry; this helper walks the already-normalized
    schema and emits Pydantic models. ``in_progress`` maps ``id(schema)`` to
    the model being built for that schema, so a cyclic ``$ref`` graph
    degrades to a ``ForwardRef`` back-edge instead of blowing the stack.
    """
    original_id = id(json_schema)
    if "allOf" in json_schema:
        json_schema = _merge_all_of_schemas(json_schema["allOf"], effective_root)
        if "title" not in json_schema and "title" in (root_schema or {}):
            json_schema["title"] = (root_schema or {}).get("title")

    effective_name = model_name or json_schema.get("title") or "DynamicModel"

    schema_id = id(json_schema)
    in_progress[original_id] = effective_name
    if schema_id != original_id:
        in_progress[schema_id] = effective_name

    field_definitions = {
        name: _json_schema_to_pydantic_field(
            name,
@@ -619,13 +917,14 @@ def create_model_from_schema(  # type: ignore[no-any-unimported]
            json_schema.get("required", []),
            effective_root,
            enrich_descriptions=enrich_descriptions,
            in_progress=in_progress,
        )
        for name, prop in (json_schema.get("properties", {}) or {}).items()
    }

    effective_config = __config__ or ConfigDict(extra="forbid")

    return create_model_base(
    model = create_model_base(
        effective_name,
        __config__=effective_config,
        __base__=__base__,
@@ -634,6 +933,10 @@ def create_model_from_schema(  # type: ignore[no-any-unimported]
        __cls_kwargs__=__cls_kwargs__,
        **field_definitions,
    )
    in_progress[original_id] = model
    if schema_id != original_id:
        in_progress[schema_id] = model
    return model
def _json_schema_to_pydantic_field(
@@ -643,6 +946,7 @@ def _json_schema_to_pydantic_field(
    root_schema: dict[str, Any],
    *,
    enrich_descriptions: bool = False,
    in_progress: dict[int, Any] | None = None,
) -> Any:
    """Convert a JSON schema property to a Pydantic field definition.

@@ -661,6 +965,7 @@ def _json_schema_to_pydantic_field(
        root_schema,
        name_=name.title(),
        enrich_descriptions=enrich_descriptions,
        in_progress=in_progress,
    )
    is_required = name in required

@@ -720,7 +1025,7 @@ def _json_schema_to_pydantic_field(
        field_params["pattern"] = json_schema["pattern"]

    if not is_required:
        type_ = type_ | None
        type_ = Optional[type_]  # noqa: UP045 - ForwardRef does not support `|`

    if schema_extra:
        field_params["json_schema_extra"] = schema_extra
@@ -793,6 +1098,7 @@ def _json_schema_to_pydantic_type(
    *,
    name_: str | None = None,
    enrich_descriptions: bool = False,
    in_progress: dict[int, Any] | None = None,
) -> Any:
    """Convert a JSON schema to a Python/Pydantic type.

@@ -801,10 +1107,23 @@ def _json_schema_to_pydantic_type(
        root_schema: The root schema for resolving $ref.
        name_: Optional name for nested models.
        enrich_descriptions: Propagated to nested model creation.
        in_progress: Map of ``id(schema_dict)`` to the Pydantic model
            currently being built for that schema, or to a placeholder name
            as a plain ``str`` while the model is still being constructed.
            Populated by :func:`_build_model_from_schema`. Enables cycle
            detection so a self-referential ``$ref`` graph resolves to a
            :class:`ForwardRef` back-edge rather than recursing forever.

    Returns:
        A Python type corresponding to the JSON schema.
    """
    if in_progress is not None:
        cached = in_progress.get(id(json_schema))
        if isinstance(cached, str):
            return ForwardRef(cached)
        if cached is not None:
            return cached

    ref = json_schema.get("$ref")
    if ref:
        ref_schema = _resolve_ref(ref, root_schema)
@@ -813,6 +1132,7 @@ def _json_schema_to_pydantic_type(
            root_schema,
            name_=name_,
            enrich_descriptions=enrich_descriptions,
            in_progress=in_progress,
        )

    enum_values = json_schema.get("enum")
@@ -832,6 +1152,7 @@ def _json_schema_to_pydantic_type(
                root_schema,
                name_=f"{name_ or 'Union'}Option{i}",
                enrich_descriptions=enrich_descriptions,
                in_progress=in_progress,
            )
            for i, schema in enumerate(any_of_schemas)
        ]
@@ -845,6 +1166,15 @@ def _json_schema_to_pydantic_type(
            root_schema,
            name_=name_,
            enrich_descriptions=enrich_descriptions,
            in_progress=in_progress,
        )
        if in_progress is not None:
            return _build_model_from_schema(
                json_schema,
                root_schema,
                model_name=name_,
                enrich_descriptions=enrich_descriptions,
                in_progress=in_progress,
            )
        merged = _merge_all_of_schemas(all_of_schemas, root_schema)
        return _json_schema_to_pydantic_type(
@@ -852,6 +1182,7 @@ def _json_schema_to_pydantic_type(
            root_schema,
            name_=name_,
            enrich_descriptions=enrich_descriptions,
            in_progress=in_progress,
        )

    type_ = json_schema.get("type")
@@ -872,12 +1203,21 @@ def _json_schema_to_pydantic_type(
            root_schema,
            name_=name_,
            enrich_descriptions=enrich_descriptions,
            in_progress=in_progress,
        )
        return list[item_type]  # type: ignore[valid-type]
        return list
    if type_ == "object":
        properties = json_schema.get("properties")
        if properties:
            if in_progress is not None:
                return _build_model_from_schema(
                    json_schema,
                    root_schema,
                    model_name=name_,
                    enrich_descriptions=enrich_descriptions,
                    in_progress=in_progress,
                )
            json_schema_ = json_schema.copy()
            if json_schema_.get("title") is None:
                json_schema_["title"] = name_ or "DynamicModel"
@@ -7,6 +7,7 @@ import logging
import queue
import threading
from typing import Any, NamedTuple
import uuid

from typing_extensions import TypedDict

@@ -25,6 +26,10 @@ from crewai.utilities.string_utils import sanitize_tool_name

logger = logging.getLogger(__name__)

_current_stream_ids: contextvars.ContextVar[tuple[str, ...]] = contextvars.ContextVar(
    "_current_stream_ids", default=()
)


class TaskInfo(TypedDict):
    """Task context information for streaming."""
@@ -45,6 +50,7 @@ class StreamingState(NamedTuple):
    async_queue: asyncio.Queue[StreamChunk | None | Exception] | None
    loop: asyncio.AbstractEventLoop | None
    handler: Callable[[Any, BaseEvent], None]
    stream_id: str | None = None


def _extract_tool_call_info(
@@ -106,6 +112,7 @@ def _create_stream_handler(
    sync_queue: queue.Queue[StreamChunk | None | Exception],
    async_queue: asyncio.Queue[StreamChunk | None | Exception] | None = None,
    loop: asyncio.AbstractEventLoop | None = None,
    stream_id: str | None = None,
) -> Callable[[Any, BaseEvent], None]:
    """Create a stream handler function.

@@ -114,21 +121,19 @@ def _create_stream_handler(
        sync_queue: Synchronous queue for chunks.
        async_queue: Optional async queue for chunks.
        loop: Optional event loop for async operations.
        stream_id: Stream scope ID for concurrent isolation.

    Returns:
        Handler function that can be registered with the event bus.
    """

    def stream_handler(_: Any, event: BaseEvent) -> None:
        """Handle LLM stream chunk events and enqueue them.

        Args:
            _: Event source (unused).
            event: The event to process.
        """
        if not isinstance(event, LLMStreamChunkEvent):
            return

        if stream_id is not None and stream_id not in _current_stream_ids.get():
            return

        chunk = _create_stream_chunk(event, current_task_info)

        if async_queue is not None and loop is not None:
@@ -203,7 +208,11 @@ def create_streaming_state(
        async_queue = asyncio.Queue()
        loop = asyncio.get_event_loop()

    handler = _create_stream_handler(current_task_info, sync_queue, async_queue, loop)
    stream_id = str(uuid.uuid4())

    handler = _create_stream_handler(
        current_task_info, sync_queue, async_queue, loop, stream_id=stream_id
    )
    crewai_event_bus.register_handler(LLMStreamChunkEvent, handler)

    return StreamingState(
@@ -213,6 +222,7 @@ def create_streaming_state(
        async_queue=async_queue,
        loop=loop,
        handler=handler,
        stream_id=stream_id,
    )


@@ -260,7 +270,12 @@ def create_chunk_generator(
    Yields:
        StreamChunk objects as they arrive.
    """
    ctx = contextvars.copy_context()
    if state.stream_id is not None:
        token = _current_stream_ids.set((*_current_stream_ids.get(), state.stream_id))
        ctx = contextvars.copy_context()
        _current_stream_ids.reset(token)
    else:
        ctx = contextvars.copy_context()
    thread = threading.Thread(target=ctx.run, args=(run_func,), daemon=True)
    thread.start()

@@ -300,7 +315,12 @@ async def create_async_chunk_generator(
            "Async queue not initialized. Use create_streaming_state(use_async=True)."
        )

    task = asyncio.create_task(run_coro())
    if state.stream_id is not None:
        token = _current_stream_ids.set((*_current_stream_ids.get(), state.stream_id))
        task = asyncio.create_task(run_coro())
        _current_stream_ids.reset(token)
    else:
        task = asyncio.create_task(run_coro())

    try:
        while True:
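The isolation trick in the hunks above is to add the stream's ID to a tuple-valued ``ContextVar``, snapshot the context with ``copy_context()`` so the consumer runs inside that snapshot, then immediately ``reset()`` the outer context. Handlers only forward events whose stream ID is in the current context's set, so concurrent kickoffs don't leak chunks into each other's queues. A minimal stdlib-only sketch (names like ``make_handler`` are hypothetical, not the library API):

```python
import contextvars

# Tuple-valued ContextVar scoping handlers to the streams active in this context.
current_stream_ids: contextvars.ContextVar[tuple[str, ...]] = contextvars.ContextVar(
    "current_stream_ids", default=()
)

received: list[str] = []


def make_handler(stream_id: str):
    def handler(chunk: str) -> None:
        # Drop events that belong to a different stream scope.
        if stream_id not in current_stream_ids.get():
            return
        received.append(f"{stream_id}:{chunk}")

    return handler


handler_a = make_handler("a")
handler_b = make_handler("b")

# Snapshot a context in which only stream "a" is active, then reset the outer one.
token = current_stream_ids.set((*current_stream_ids.get(), "a"))
ctx = contextvars.copy_context()
current_stream_ids.reset(token)

# Both handlers fire for every chunk, but only "a" passes the scope check.
for chunk in ("hello", "world"):
    ctx.run(handler_a, chunk)
    ctx.run(handler_b, chunk)
```

The set/copy/reset dance matters: the snapshot permanently carries the stream ID for the worker, while the caller's own context is restored so unrelated streams started afterwards don't inherit it.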
@@ -1051,7 +1051,7 @@ def test_lite_agent_verbose_false_suppresses_printer_output():
        successful_requests=1,
    )

    with pytest.warns(DeprecationWarning):
    with pytest.warns(FutureWarning):
        agent = LiteAgent(
            role="Test Agent",
            goal="Test goal",
@@ -55,7 +55,7 @@ interactions:
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
- 2.31.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
@@ -63,50 +63,51 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.12
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-DIqxWpJbbFJoV8WlXhb9UYFbCmdPk\",\n \"object\":
\"chat.completion\",\n \"created\": 1773385850,\n \"model\": \"gpt-4o-2024-08-06\",\n
string: "{\n \"id\": \"chatcmpl-DTApYQx2LepfeRL1XcDKPgrhMFnQr\",\n \"object\":
\"chat.completion\",\n \"created\": 1775845516,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_G2i9RJGNXKVfnd8ZTaBG8Fwi\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"ask_question_to_coworker\",\n
\ \"arguments\": \"{\\\"question\\\": \\\"What are some trending
topics or ideas in various fields that could be explored for an article?\\\",
\\\"context\\\": \\\"We need to generate a list of 5 interesting ideas to
explore for an article. These ideas should be engaging and relevant to current
trends or captivating subjects.\\\", \\\"coworker\\\": \\\"Researcher\\\"}\"\n
\ }\n },\n {\n \"id\": \"call_j4KH2SGZvNeioql0HcRQ9NTp\",\n
\ \"id\": \"call_BCh6lXsBTdixRuRh6OTBPoIJ\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"delegate_work_to_coworker\",\n
\ \"arguments\": \"{\\\"task\\\": \\\"Come up with a list of 5
interesting ideas to explore for an article.\\\", \\\"context\\\": \\\"We
need five intriguing ideas worth exploring for an article. Each idea should
have potential for in-depth exploration and appeal to a broad audience, possibly
touching on current trends, historical insights, future possibilities, or
human interest stories.\\\", \\\"coworker\\\": \\\"Researcher\\\"}\"\n }\n
\ },\n {\n \"id\": \"call_rAQFeCrS4ogsqvIWRGAYFHGI\",\n
\ \"type\": \"function\",\n \"function\": {\n \"name\":
\"ask_question_to_coworker\",\n \"arguments\": \"{\\\"question\\\":
\\\"What unique angles or perspectives could we explore to make articles more
compelling and engaging?\\\", \\\"context\\\": \\\"Our task involves coming
up with 5 ideas for articles, each with an exciting paragraph highlight that
illustrates the promise and intrigue of the topic. We want them to be more
than generic concepts, shining for readers with fresh insights or engaging
twists.\\\", \\\"coworker\\\": \\\"Senior Writer\\\"}\"\n }\n }\n
\ ],\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 476,\n \"completion_tokens\":
183,\n \"total_tokens\": 659,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
\"delegate_work_to_coworker\",\n \"arguments\": \"{\\\"task\\\":
\\\"Write one amazing paragraph highlight for each of 5 ideas that showcases
how good an article about this topic could be.\\\", \\\"context\\\": \\\"Upon
receiving five intriguing ideas from the Researcher, create a compelling paragraph
for each idea that highlights its potential as a fascinating article. These
paragraphs must capture the essence of the topic and explain why it would
captivate readers, incorporating possible themes and insights.\\\", \\\"coworker\\\":
\\\"Senior Writer\\\"}\"\n }\n }\n ],\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"tool_calls\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
476,\n \"completion_tokens\": 201,\n \"total_tokens\": 677,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_b7c8e3f100\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_2ca5b70601\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-Ray:
- 9db9389a3f9e424c-EWR
- 9ea3cb06ba66b301-TPE
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Fri, 13 Mar 2026 07:10:53 GMT
- Fri, 10 Apr 2026 18:25:18 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -122,7 +123,7 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '2402'
- '1981'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
@@ -154,13 +155,14 @@ interactions:
You work as a freelancer and is now working on doing research and analysis for
a new customer.\nYour personal goal is: Make the best research and analysis
on content about AI and AI agents"},{"role":"user","content":"\nCurrent Task:
What are some trending topics or ideas in various fields that could be explored
for an article?\n\nThis is the expected criteria for your final answer: Your
best answer to your coworker asking you this, accounting for the context shared.\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nThis
is the context you''re working with:\nWe need to generate a list of 5 interesting
ideas to explore for an article. These ideas should be engaging and relevant
to current trends or captivating subjects.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini"}'
Come up with a list of 5 interesting ideas to explore for an article.\n\nThis
is the expected criteria for your final answer: Your best answer to your coworker
asking you this, accounting for the context shared.\nyou MUST return the actual
complete content as the final answer, not a summary.\n\nThis is the context
you''re working with:\nWe need five intriguing ideas worth exploring for an
article. Each idea should have potential for in-depth exploration and appeal
to a broad audience, possibly touching on current trends, historical insights,
future possibilities, or human interest stories.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -173,7 +175,7 @@ interactions:
connection:
- keep-alive
content-length:
- '978'
- '1046'
content-type:
- application/json
host:
@@ -187,7 +189,7 @@ interactions:
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
- 2.31.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
@@ -195,63 +197,69 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.12
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-DIqxak88AexErt9PGFGHnWPIJLwNV\",\n \"object\":
\"chat.completion\",\n \"created\": 1773385854,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
string: "{\n \"id\": \"chatcmpl-DTApalbfnYkqIc8slLS3DKwo9KXbc\",\n \"object\":
\"chat.completion\",\n \"created\": 1775845518,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Here are five trending and engaging
topics across various fields that could be explored for an article:\\n\\n1.
**The Rise of Autonomous AI Agents and Their Impact on the Future of Work**
\ \\nExplore how autonomous AI agents\u2014systems capable of performing complex
tasks independently\u2014are transforming industries such as customer service,
software development, and logistics. Discuss implications for job automation,
human-AI collaboration, and ethical considerations surrounding decision-making
autonomy.\\n\\n2. **Generative AI Beyond Text: Innovations in Audio, Video,
and 3D Content Creation** \\nDelve into advancements in generative AI models
that create not only text but also realistic audio, video content, virtual
environments, and 3D models. Highlight applications in gaming, entertainment,
education, and digital marketing, as well as challenges like misinformation
and deepfake detection.\\n\\n3. **AI-Driven Climate Modeling: Enhancing Predictive
Accuracy to Combat Climate Change** \\nExamine how AI and machine learning
are improving climate models by analyzing vast datasets, uncovering patterns,
and simulating environmental scenarios. Discuss how these advances are aiding
policymakers in making informed decisions to address climate risks and sustainability
goals.\\n\\n4. **The Ethical Frontiers of AI in Healthcare: Balancing Innovation
with Patient Privacy** \\nInvestigate ethical challenges posed by AI applications
in healthcare, including diagnosis, personalized treatment, and patient data
management. Focus on balancing rapid technological innovation with privacy,
bias mitigation, and regulatory frameworks to ensure equitable access and
trust.\\n\\n5. **Quantum Computing Meets AI: Exploring the Next Leap in Computational
Power** \\nCover the intersection of quantum computing and artificial intelligence,
exploring how quantum algorithms could accelerate AI training processes and
solve problems beyond the reach of classical computers. Outline current research,
potential breakthroughs, and the timeline for real-world applications.\\n\\nEach
of these topics is timely, relevant, and has the potential to engage readers
interested in cutting-edge technology, societal impact, and future trends.
Let me know if you want me to help develop an outline or deeper research into
any of these areas!\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 178,\n \"completion_tokens\":
402,\n \"total_tokens\": 580,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
\"assistant\",\n \"content\": \"Certainly! Here are five intriguing
article ideas that offer rich potential for deep exploration and broad audience
appeal, especially aligned with current trends and human interest in AI and
technology:\\n\\n1. **The Evolution of AI Agents: From Rule-Based Bots to
Autonomous Decision Makers** \\n Explore the historical development of
AI agents, tracing the journey from simple scripted chatbots to advanced autonomous
systems capable of complex decision-making and learning. Dive into key technological
milestones, breakthroughs in machine learning, and current state-of-the-art
AI agents. Discuss implications for industries such as customer service, healthcare,
and autonomous vehicles, highlighting both opportunities and ethical concerns.\\n\\n2.
**AI in Daily Life: How Intelligent Agents Are Reshaping Human Routines**
\ \\n Investigate the integration of AI agents in everyday life\u2014from
virtual assistants like Siri and Alexa to personalized recommendation systems
and smart home devices. Analyze how these AI tools influence productivity,
privacy, and social behavior. Include human interest elements through stories
of individuals or communities who have embraced or resisted these technologies.\\n\\n3.
**The Future of Work: AI Agents as Collaborative Colleagues** \\n Examine
how AI agents are transforming workplaces by acting as collaborators rather
than just tools. Cover applications in creative fields, data analysis, and
decision support, while addressing potential challenges such as job displacement,
new skill requirements, and the evolving definition of teamwork. Use expert
opinions and case studies to paint a nuanced future outlook.\\n\\n4. **Ethics
and Accountability in AI Agent Development** \\n Delve into the ethical
dilemmas posed by increasingly autonomous AI agents\u2014topics like bias
in algorithms, data privacy, and accountability for AI-driven decisions. Explore
measures being taken globally to regulate AI, frameworks for responsible AI
development, and the role of public awareness. Include historical context
about technology ethics to provide depth.\\n\\n5. **Human-AI Symbiosis: Stories
of Innovative Partnerships Shaping Our World** \\n Tell compelling human
interest stories about individuals or organizations pioneering collaborative
projects with AI agents that lead to breakthroughs in science, art, or social
good. Highlight how these partnerships transcend traditional human-machine
interaction and open new creative and problem-solving possibilities, inspiring
readers about the potential of human-AI synergy.\\n\\nThese ideas are designed
to be both engaging and informative, offering multiple angles\u2014technical,
historical, ethical, and personal\u2014to keep readers captivated while providing
substantial content for in-depth analysis.\",\n \"refusal\": null,\n
\ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 189,\n \"completion_tokens\":
472,\n \"total_tokens\": 661,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_e76a310957\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_fbf43a1ff3\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-Ray:
- 9db938b0493c4b9f-EWR
- 9ea3cb1b5c943323-TPE
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Fri, 13 Mar 2026 07:10:59 GMT
- Fri, 10 Apr 2026 18:25:25 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -267,7 +275,7 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '5699'
- '6990'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
@@ -298,15 +306,16 @@ interactions:
a senior writer, specialized in technology, software engineering, AI and startups.
You work as a freelancer and are now working on writing content for a new customer.\nYour
personal goal is: Write the best content about AI and AI agents."},{"role":"user","content":"\nCurrent
Task: What unique angles or perspectives could we explore to make articles more
compelling and engaging?\n\nThis is the expected criteria for your final answer:
Your best answer to your coworker asking you this, accounting for the context
shared.\nyou MUST return the actual complete content as the final answer, not
a summary.\n\nThis is the context you''re working with:\nOur task involves coming
up with 5 ideas for articles, each with an exciting paragraph highlight that
illustrates the promise and intrigue of the topic. We want them to be more than
generic concepts, shining for readers with fresh insights or engaging twists.\n\nProvide
your complete response:"}],"model":"gpt-4.1-mini"}'
Task: Write one amazing paragraph highlight for each of 5 ideas that showcases
how good an article about this topic could be.\n\nThis is the expected criteria
for your final answer: Your best answer to your coworker asking you this, accounting
for the context shared.\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nThis is the context you''re working with:\nUpon
receiving five intriguing ideas from the Researcher, create a compelling paragraph
for each idea that highlights its potential as a fascinating article. These
paragraphs must capture the essence of the topic and explain why it would captivate
readers, incorporating possible themes and insights.\n\nProvide your complete
response:"}],"model":"gpt-4.1-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -319,7 +328,7 @@ interactions:
connection:
- keep-alive
content-length:
- '1041'
- '1103'
content-type:
- application/json
host:
@@ -333,7 +342,7 @@ interactions:
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
- 2.31.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
@@ -341,78 +350,83 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.12
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-DIqxZCl1kFIE7WXznIKow9QFNZ2QT\",\n \"object\":
\"chat.completion\",\n \"created\": 1773385853,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
string: "{\n \"id\": \"chatcmpl-DTApbrh9Z4yFAKPHIR48ubdB1R5xK\",\n \"object\":
\"chat.completion\",\n \"created\": 1775845519,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Absolutely! To create compelling and
engaging AI articles that stand out, we need to go beyond surface-level discussions
and deliver fresh perspectives that challenge assumptions and spark curiosity.
Here are five unique angles with their highlight paragraphs that could really
captivate our readers:\\n\\n1. **The Hidden Psychology of AI Agents: How They
Learn Human Biases and What That Means for Our Future** \\n*Highlight:* AI
agents don\u2019t just process data\u2014they absorb the subtle nuances and
biases embedded in human language, behavior, and culture. This article dives
deep into the psychological parallels between AI learning mechanisms and human
cognitive biases, revealing surprising ways AI can both mirror and amplify
our prejudices. Understanding these dynamics is crucial for building trustworthy
AI systems and reshaping the future relationship between humans and machines.\\n\\n2.
**From Assistants to Autonomous Creators: The Rise of AI Agents as Artists,
Writers, and Innovators** \\n*Highlight:* What do we lose and gain when AI
agents start producing original art, literature, and innovations? This piece
explores groundbreaking examples where AI isn\u2019t just a tool but a creative
partner that challenges our definition of authorship and genius. We\u2019ll
examine ethical dilemmas, collaborative workflows, and the exciting frontier
where human intuition meets algorithmic originality.\\n\\n3. **AI Agents in
the Wild: How Decentralized Autonomous Organizations Could Redefine Economy
and Governance** \\n*Highlight:* Imagine AI agents operating autonomously
in decentralized networks, making real-time decisions that affect finances,
resource management, and governance without human intervention. This article
uncovers how DAOs powered by AI agents might spontaneously evolve new forms
of organization\u2014transparent, efficient, and resistant to traditional
corruption. We\u2019ll investigate early case studies and speculate on how
this might disrupt centuries-old societal structures.\\n\\n4. **Beyond Chatbots:
The Next Generation of AI Agents as Empathetic Digital Companions** \\n*Highlight:*
Moving past scripted conversations, emerging AI agents simulate empathy and
emotional intelligence in ways that can transform mental health care, education,
and companionship. This article provides an insider look at the complex algorithms
and biofeedback mechanisms enabling AI to recognize, respond to, and foster
human emotions\u2014potentially filling gaps in underserved populations while
raising profound questions about authenticity and connection.\\n\\n5. **The
Environmental Toll of AI Agents: Unmasking the Ecological Cost of Intelligent
Automation** \\n*Highlight:* While AI promises efficiency and innovation,
the environmental footprint of training and deploying millions of AI agents
is rarely discussed. This eye-opening article quantifies the energy demands
of current models, challenges the narrative of AI as an unequivocal green
solution, and explores emerging approaches pathing toward sustainable intelligent
automation\u2014an urgent conversation for an increasingly eco-conscious tech
landscape.\\n\\nEach of these angles opens a door to rich storytelling that
blends technical depth, ethical inquiry, and visionary implications\u2014perfect
for readers hungry for insight that\u2019s both sophisticated and accessible.
Let me know which ones resonate most, or if you want me to refine any into
full article outlines!\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 188,\n \"completion_tokens\":
595,\n \"total_tokens\": 783,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
\"assistant\",\n \"content\": \"1. **The Rise of Autonomous AI Agents:
Revolutionizing Everyday Tasks** \\nImagine a world where AI agents autonomously
manage your daily schedule, optimize your work routines, and even handle complex
decision-making with minimal human intervention. An article exploring the
rise of autonomous AI agents would captivate readers by diving into how advancements
in machine learning and natural language processing have matured these agents
from simple chatbots to intelligent collaborators. Themes could include practical
applications in industries like healthcare, finance, and personal productivity,
the challenges of trust and transparency, and a glimpse into the ethical questions
surrounding AI autonomy. This topic not only showcases cutting-edge technology
but also invites readers to envision the near future of human-AI synergy.\\n\\n2.
**Building Ethical AI Agents: Balancing Innovation with Responsibility** \\nAs
AI agents become more powerful and independent, the imperative to embed ethical
frameworks within their design comes sharply into focus. An insightful article
on this theme would engage readers by unpacking the complexities of programming
morality, fairness, and accountability into AI systems that influence critical
decisions\u2014whether in hiring processes, law enforcement, or digital content
moderation. Exploring real-world case studies alongside philosophical and
regulatory perspectives, the piece could illuminate the delicate balance between
technological innovation and societal values, offering a nuanced discussion
that appeals to technologists, ethicists, and everyday users alike.\\n\\n3.
**AI Agents in Startups: Accelerating Growth and Disrupting Markets** \\nStartups
are uniquely positioned to leverage AI agents as game-changers that turbocharge
growth, optimize workflows, and unlock new business models. This article could
enthrall readers by detailing how nimble companies integrate AI-driven agents
for customer engagement, market analysis, and personalized product recommendations\u2014outpacing
larger incumbents. It would also examine hurdles such as data privacy, scaling
complexities, and the human-AI collaboration dynamic, providing actionable
insights for entrepreneurs and investors. The story of AI agents fueling startup
innovation not only inspires but also outlines the practical pathways and
pitfalls on the frontier of modern entrepreneurship.\\n\\n4. **The Future
of Work with AI Agents: Redefining Roles and Skills** \\nAI agents are redefining
professional landscapes by automating routine tasks and augmenting human creativity
and decision-making. An article on this topic could engage readers by painting
a vivid picture of the evolving workplace, where collaboration between humans
and AI agents becomes the norm. Delving into emerging roles, necessary skill
sets, and how education and training must adapt, the piece would offer a forward-thinking
analysis that resonates deeply with employees, managers, and policymakers.
Exploring themes of workforce transformation, productivity gains, and potential
socioeconomic impacts, it provides a comprehensive outlook on an AI-integrated
work environment.\\n\\n5. **From Reactive to Proactive: How Next-Gen AI Agents
Anticipate Needs** \\nThe leap from reactive AI assistants to truly proactive
AI agents signifies one of the most thrilling advances in artificial intelligence.
An article centered on this evolution would captivate readers by illustrating
how these agents utilize predictive analytics, contextual understanding, and
continuous learning to anticipate user needs before they are expressed. By
showcasing pioneering applications in personalized healthcare management,
smart homes, and adaptive learning platforms, the article would highlight
the profound shift toward intuitive, anticipatory technology. This theme not
only excites with futuristic promise but also probes the technical and privacy
challenges that come with increased agency and foresight.\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
197,\n \"completion_tokens\": 666,\n \"total_tokens\": 863,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_ae0f8c9a7b\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_d45f83c5fd\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-Ray:
- 9db938b0489680d4-EWR
- 9ea3cb1cbfe2b312-TPE
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Fri, 13 Mar 2026 07:11:02 GMT
- Fri, 10 Apr 2026 18:25:28 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -428,7 +442,7 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '8310'
- '9479'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
@@ -467,91 +481,105 @@ interactions:
good an article about this topic could be. Return the list of ideas with their
paragraph and your notes.\\n\\nThis is the expected criteria for your final
answer: 5 bullet points with a paragraph for each idea.\\nyou MUST return the
actual complete content as the final answer, not a summary.\"},{\"role\":\"assistant\",\"content\":null,\"tool_calls\":[{\"id\":\"call_G2i9RJGNXKVfnd8ZTaBG8Fwi\",\"type\":\"function\",\"function\":{\"name\":\"ask_question_to_coworker\",\"arguments\":\"{\\\"question\\\":
\\\"What are some trending topics or ideas in various fields that could be explored
for an article?\\\", \\\"context\\\": \\\"We need to generate a list of 5 interesting
ideas to explore for an article. These ideas should be engaging and relevant
to current trends or captivating subjects.\\\", \\\"coworker\\\": \\\"Researcher\\\"}\"}},{\"id\":\"call_j4KH2SGZvNeioql0HcRQ9NTp\",\"type\":\"function\",\"function\":{\"name\":\"ask_question_to_coworker\",\"arguments\":\"{\\\"question\\\":
\\\"What unique angles or perspectives could we explore to make articles more
compelling and engaging?\\\", \\\"context\\\": \\\"Our task involves coming
up with 5 ideas for articles, each with an exciting paragraph highlight that
illustrates the promise and intrigue of the topic. We want them to be more than
generic concepts, shining for readers with fresh insights or engaging twists.\\\",
\\\"coworker\\\": \\\"Senior Writer\\\"}\"}}]},{\"role\":\"tool\",\"tool_call_id\":\"call_G2i9RJGNXKVfnd8ZTaBG8Fwi\",\"name\":\"ask_question_to_coworker\",\"content\":\"Here
are five trending and engaging topics across various fields that could be explored
for an article:\\n\\n1. **The Rise of Autonomous AI Agents and Their Impact
on the Future of Work** \\nExplore how autonomous AI agents\u2014systems capable
of performing complex tasks independently\u2014are transforming industries such
as customer service, software development, and logistics. Discuss implications
for job automation, human-AI collaboration, and ethical considerations surrounding
decision-making autonomy.\\n\\n2. **Generative AI Beyond Text: Innovations in
Audio, Video, and 3D Content Creation** \\nDelve into advancements in generative
AI models that create not only text but also realistic audio, video content,
virtual environments, and 3D models. Highlight applications in gaming, entertainment,
education, and digital marketing, as well as challenges like misinformation
and deepfake detection.\\n\\n3. **AI-Driven Climate Modeling: Enhancing Predictive
Accuracy to Combat Climate Change** \\nExamine how AI and machine learning
are improving climate models by analyzing vast datasets, uncovering patterns,
and simulating environmental scenarios. Discuss how these advances are aiding
policymakers in making informed decisions to address climate risks and sustainability
goals.\\n\\n4. **The Ethical Frontiers of AI in Healthcare: Balancing Innovation
with Patient Privacy** \\nInvestigate ethical challenges posed by AI applications
in healthcare, including diagnosis, personalized treatment, and patient data
management. Focus on balancing rapid technological innovation with privacy,
bias mitigation, and regulatory frameworks to ensure equitable access and trust.\\n\\n5.
**Quantum Computing Meets AI: Exploring the Next Leap in Computational Power**
\ \\nCover the intersection of quantum computing and artificial intelligence,
exploring how quantum algorithms could accelerate AI training processes and
solve problems beyond the reach of classical computers. Outline current research,
potential breakthroughs, and the timeline for real-world applications.\\n\\nEach
of these topics is timely, relevant, and has the potential to engage readers
interested in cutting-edge technology, societal impact, and future trends. Let
me know if you want me to help develop an outline or deeper research into any
of these areas!\"},{\"role\":\"tool\",\"tool_call_id\":\"call_j4KH2SGZvNeioql0HcRQ9NTp\",\"name\":\"ask_question_to_coworker\",\"content\":\"Absolutely!
To create compelling and engaging AI articles that stand out, we need to go
beyond surface-level discussions and deliver fresh perspectives that challenge
assumptions and spark curiosity. Here are five unique angles with their highlight
paragraphs that could really captivate our readers:\\n\\n1. **The Hidden Psychology
of AI Agents: How They Learn Human Biases and What That Means for Our Future**
\ \\n*Highlight:* AI agents don\u2019t just process data\u2014they absorb the
subtle nuances and biases embedded in human language, behavior, and culture.
This article dives deep into the psychological parallels between AI learning
mechanisms and human cognitive biases, revealing surprising ways AI can both
mirror and amplify our prejudices. Understanding these dynamics is crucial for
building trustworthy AI systems and reshaping the future relationship between
humans and machines.\\n\\n2. **From Assistants to Autonomous Creators: The Rise
of AI Agents as Artists, Writers, and Innovators** \\n*Highlight:* What do
we lose and gain when AI agents start producing original art, literature, and
innovations? This piece explores groundbreaking examples where AI isn\u2019t
just a tool but a creative partner that challenges our definition of authorship
and genius. We\u2019ll examine ethical dilemmas, collaborative workflows, and
the exciting frontier where human intuition meets algorithmic originality.\\n\\n3.
|
||||
**AI Agents in the Wild: How Decentralized Autonomous Organizations Could Redefine
|
||||
Economy and Governance** \\n*Highlight:* Imagine AI agents operating autonomously
|
||||
in decentralized networks, making real-time decisions that affect finances,
|
||||
resource management, and governance without human intervention. This article
|
||||
uncovers how DAOs powered by AI agents might spontaneously evolve new forms
|
||||
of organization\u2014transparent, efficient, and resistant to traditional corruption.
|
||||
We\u2019ll investigate early case studies and speculate on how this might disrupt
|
||||
centuries-old societal structures.\\n\\n4. **Beyond Chatbots: The Next Generation
|
||||
of AI Agents as Empathetic Digital Companions** \\n*Highlight:* Moving past
|
||||
scripted conversations, emerging AI agents simulate empathy and emotional intelligence
|
||||
in ways that can transform mental health care, education, and companionship.
|
||||
This article provides an insider look at the complex algorithms and biofeedback
|
||||
mechanisms enabling AI to recognize, respond to, and foster human emotions\u2014potentially
|
||||
filling gaps in underserved populations while raising profound questions about
|
||||
authenticity and connection.\\n\\n5. **The Environmental Toll of AI Agents:
|
||||
Unmasking the Ecological Cost of Intelligent Automation** \\n*Highlight:* While
|
||||
AI promises efficiency and innovation, the environmental footprint of training
|
||||
and deploying millions of AI agents is rarely discussed. This eye-opening article
|
||||
quantifies the energy demands of current models, challenges the narrative of
|
||||
AI as an unequivocal green solution, and explores emerging approaches pathing
|
||||
toward sustainable intelligent automation\u2014an urgent conversation for an
|
||||
increasingly eco-conscious tech landscape.\\n\\nEach of these angles opens a
|
||||
door to rich storytelling that blends technical depth, ethical inquiry, and
|
||||
visionary implications\u2014perfect for readers hungry for insight that\u2019s
|
||||
both sophisticated and accessible. Let me know which ones resonate most, or
|
||||
if you want me to refine any into full article outlines!\"},{\"role\":\"user\",\"content\":\"Analyze
|
||||
actual complete content as the final answer, not a summary.\"},{\"role\":\"assistant\",\"content\":null,\"tool_calls\":[{\"id\":\"call_BCh6lXsBTdixRuRh6OTBPoIJ\",\"type\":\"function\",\"function\":{\"name\":\"delegate_work_to_coworker\",\"arguments\":\"{\\\"task\\\":
|
||||
\\\"Come up with a list of 5 interesting ideas to explore for an article.\\\",
|
||||
\\\"context\\\": \\\"We need five intriguing ideas worth exploring for an article.
|
||||
Each idea should have potential for in-depth exploration and appeal to a broad
|
||||
audience, possibly touching on current trends, historical insights, future possibilities,
|
||||
or human interest stories.\\\", \\\"coworker\\\": \\\"Researcher\\\"}\"}},{\"id\":\"call_rAQFeCrS4ogsqvIWRGAYFHGI\",\"type\":\"function\",\"function\":{\"name\":\"delegate_work_to_coworker\",\"arguments\":\"{\\\"task\\\":
|
||||
\\\"Write one amazing paragraph highlight for each of 5 ideas that showcases
|
||||
how good an article about this topic could be.\\\", \\\"context\\\": \\\"Upon
|
||||
receiving five intriguing ideas from the Researcher, create a compelling paragraph
|
||||
for each idea that highlights its potential as a fascinating article. These
|
||||
paragraphs must capture the essence of the topic and explain why it would captivate
|
||||
readers, incorporating possible themes and insights.\\\", \\\"coworker\\\":
|
||||
\\\"Senior Writer\\\"}\"}}]},{\"role\":\"tool\",\"tool_call_id\":\"call_BCh6lXsBTdixRuRh6OTBPoIJ\",\"name\":\"delegate_work_to_coworker\",\"content\":\"Certainly!
|
||||
Here are five intriguing article ideas that offer rich potential for deep exploration
|
||||
and broad audience appeal, especially aligned with current trends and human
|
||||
interest in AI and technology:\\n\\n1. **The Evolution of AI Agents: From Rule-Based
|
||||
Bots to Autonomous Decision Makers** \\n Explore the historical development
|
||||
of AI agents, tracing the journey from simple scripted chatbots to advanced
|
||||
autonomous systems capable of complex decision-making and learning. Dive into
|
||||
key technological milestones, breakthroughs in machine learning, and current
|
||||
state-of-the-art AI agents. Discuss implications for industries such as customer
|
||||
service, healthcare, and autonomous vehicles, highlighting both opportunities
|
||||
and ethical concerns.\\n\\n2. **AI in Daily Life: How Intelligent Agents Are
|
||||
Reshaping Human Routines** \\n Investigate the integration of AI agents in
|
||||
everyday life\u2014from virtual assistants like Siri and Alexa to personalized
|
||||
recommendation systems and smart home devices. Analyze how these AI tools influence
|
||||
productivity, privacy, and social behavior. Include human interest elements
|
||||
through stories of individuals or communities who have embraced or resisted
|
||||
these technologies.\\n\\n3. **The Future of Work: AI Agents as Collaborative
|
||||
Colleagues** \\n Examine how AI agents are transforming workplaces by acting
|
||||
as collaborators rather than just tools. Cover applications in creative fields,
|
||||
data analysis, and decision support, while addressing potential challenges such
|
||||
as job displacement, new skill requirements, and the evolving definition of
|
||||
teamwork. Use expert opinions and case studies to paint a nuanced future outlook.\\n\\n4.
|
||||
**Ethics and Accountability in AI Agent Development** \\n Delve into the
|
||||
ethical dilemmas posed by increasingly autonomous AI agents\u2014topics like
|
||||
bias in algorithms, data privacy, and accountability for AI-driven decisions.
|
||||
Explore measures being taken globally to regulate AI, frameworks for responsible
|
||||
AI development, and the role of public awareness. Include historical context
|
||||
about technology ethics to provide depth.\\n\\n5. **Human-AI Symbiosis: Stories
|
||||
of Innovative Partnerships Shaping Our World** \\n Tell compelling human
|
||||
interest stories about individuals or organizations pioneering collaborative
|
||||
projects with AI agents that lead to breakthroughs in science, art, or social
|
||||
good. Highlight how these partnerships transcend traditional human-machine interaction
|
||||
and open new creative and problem-solving possibilities, inspiring readers about
|
||||
the potential of human-AI synergy.\\n\\nThese ideas are designed to be both
|
||||
engaging and informative, offering multiple angles\u2014technical, historical,
|
||||
ethical, and personal\u2014to keep readers captivated while providing substantial
|
||||
content for in-depth analysis.\"},{\"role\":\"tool\",\"tool_call_id\":\"call_rAQFeCrS4ogsqvIWRGAYFHGI\",\"name\":\"delegate_work_to_coworker\",\"content\":\"1.
|
||||
**The Rise of Autonomous AI Agents: Revolutionizing Everyday Tasks** \\nImagine
|
||||
a world where AI agents autonomously manage your daily schedule, optimize your
|
||||
work routines, and even handle complex decision-making with minimal human intervention.
|
||||
An article exploring the rise of autonomous AI agents would captivate readers
|
||||
by diving into how advancements in machine learning and natural language processing
|
||||
have matured these agents from simple chatbots to intelligent collaborators.
|
||||
Themes could include practical applications in industries like healthcare, finance,
|
||||
and personal productivity, the challenges of trust and transparency, and a glimpse
|
||||
into the ethical questions surrounding AI autonomy. This topic not only showcases
|
||||
cutting-edge technology but also invites readers to envision the near future
|
||||
of human-AI synergy.\\n\\n2. **Building Ethical AI Agents: Balancing Innovation
|
||||
with Responsibility** \\nAs AI agents become more powerful and independent,
|
||||
the imperative to embed ethical frameworks within their design comes sharply
|
||||
into focus. An insightful article on this theme would engage readers by unpacking
|
||||
the complexities of programming morality, fairness, and accountability into
|
||||
AI systems that influence critical decisions\u2014whether in hiring processes,
|
||||
law enforcement, or digital content moderation. Exploring real-world case studies
|
||||
alongside philosophical and regulatory perspectives, the piece could illuminate
|
||||
the delicate balance between technological innovation and societal values, offering
|
||||
a nuanced discussion that appeals to technologists, ethicists, and everyday
|
||||
users alike.\\n\\n3. **AI Agents in Startups: Accelerating Growth and Disrupting
|
||||
Markets** \\nStartups are uniquely positioned to leverage AI agents as game-changers
|
||||
that turbocharge growth, optimize workflows, and unlock new business models.
|
||||
This article could enthrall readers by detailing how nimble companies integrate
|
||||
AI-driven agents for customer engagement, market analysis, and personalized
|
||||
product recommendations\u2014outpacing larger incumbents. It would also examine
|
||||
hurdles such as data privacy, scaling complexities, and the human-AI collaboration
|
||||
dynamic, providing actionable insights for entrepreneurs and investors. The
|
||||
story of AI agents fueling startup innovation not only inspires but also outlines
|
||||
the practical pathways and pitfalls on the frontier of modern entrepreneurship.\\n\\n4.
|
||||
**The Future of Work with AI Agents: Redefining Roles and Skills** \\nAI agents
|
||||
are redefining professional landscapes by automating routine tasks and augmenting
|
||||
human creativity and decision-making. An article on this topic could engage
|
||||
readers by painting a vivid picture of the evolving workplace, where collaboration
|
||||
between humans and AI agents becomes the norm. Delving into emerging roles,
|
||||
necessary skill sets, and how education and training must adapt, the piece would
|
||||
offer a forward-thinking analysis that resonates deeply with employees, managers,
|
||||
and policymakers. Exploring themes of workforce transformation, productivity
|
||||
gains, and potential socioeconomic impacts, it provides a comprehensive outlook
|
||||
on an AI-integrated work environment.\\n\\n5. **From Reactive to Proactive:
|
||||
How Next-Gen AI Agents Anticipate Needs** \\nThe leap from reactive AI assistants
|
||||
to truly proactive AI agents signifies one of the most thrilling advances in
|
||||
artificial intelligence. An article centered on this evolution would captivate
|
||||
readers by illustrating how these agents utilize predictive analytics, contextual
|
||||
understanding, and continuous learning to anticipate user needs before they
|
||||
are expressed. By showcasing pioneering applications in personalized healthcare
|
||||
management, smart homes, and adaptive learning platforms, the article would
|
||||
highlight the profound shift toward intuitive, anticipatory technology. This
|
||||
theme not only excites with futuristic promise but also probes the technical
|
||||
and privacy challenges that come with increased agency and foresight.\"},{\"role\":\"user\",\"content\":\"Analyze
|
||||
the tool result. If requirements are met, provide the Final Answer. Otherwise,
|
||||
call the next tool. Deliver only the answer without meta-commentary.\"}],\"model\":\"gpt-4o\",\"tool_choice\":\"auto\",\"tools\":[{\"type\":\"function\",\"function\":{\"name\":\"delegate_work_to_coworker\",\"description\":\"Delegate
|
||||
a specific task to one of the following coworkers: Researcher, Senior Writer\\nThe
|
||||
@@ -582,7 +610,7 @@ interactions:
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '9923'
|
||||
- '11056'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
@@ -598,7 +626,7 @@ interactions:
|
||||
x-stainless-os:
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 1.83.0
|
||||
- 2.31.0
|
||||
x-stainless-read-timeout:
|
||||
- X-STAINLESS-READ-TIMEOUT-XXX
|
||||
x-stainless-retry-count:
|
||||
@@ -606,58 +634,64 @@ interactions:
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
- 3.13.12
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: "{\n \"id\": \"chatcmpl-DIqxidsfoqQl7qXSIVHfSCyETUwlU\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1773385862,\n \"model\": \"gpt-4o-2024-08-06\",\n
|
||||
string: "{\n \"id\": \"chatcmpl-DTApljTaq8nDgNMS21B319i56seCn\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1775845529,\n \"model\": \"gpt-4o-2024-08-06\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"1. **The Rise of Autonomous AI Agents
|
||||
and Their Impact on the Future of Work** \\nExplore how autonomous AI agents\u2014systems
|
||||
capable of performing complex tasks independently\u2014are transforming industries
|
||||
such as customer service, software development, and logistics. Discuss implications
|
||||
for job automation, human-AI collaboration, and ethical considerations surrounding
|
||||
decision-making autonomy.\\n\\n2. **Generative AI Beyond Text: Innovations
|
||||
in Audio, Video, and 3D Content Creation** \\nDelve into advancements in
|
||||
generative AI models that create not only text but also realistic audio, video
|
||||
content, virtual environments, and 3D models. Highlight applications in gaming,
|
||||
entertainment, education, and digital marketing, as well as challenges like
|
||||
misinformation and deepfake detection.\\n\\n3. **AI-Driven Climate Modeling:
|
||||
Enhancing Predictive Accuracy to Combat Climate Change** \\nExamine how AI
|
||||
and machine learning are improving climate models by analyzing vast datasets,
|
||||
uncovering patterns, and simulating environmental scenarios. Discuss how these
|
||||
advances are aiding policymakers in making informed decisions to address climate
|
||||
risks and sustainability goals.\\n\\n4. **The Ethical Frontiers of AI in Healthcare:
|
||||
Balancing Innovation with Patient Privacy** \\nInvestigate ethical challenges
|
||||
posed by AI applications in healthcare, including diagnosis, personalized
|
||||
treatment, and patient data management. Focus on balancing rapid technological
|
||||
innovation with privacy, bias mitigation, and regulatory frameworks to ensure
|
||||
equitable access and trust.\\n\\n5. **Quantum Computing Meets AI: Exploring
|
||||
the Next Leap in Computational Power** \\nCover the intersection of quantum
|
||||
computing and artificial intelligence, exploring how quantum algorithms could
|
||||
accelerate AI training processes and solve problems beyond the reach of classical
|
||||
computers. Outline current research, potential breakthroughs, and the timeline
|
||||
for real-world applications.\",\n \"refusal\": null,\n \"annotations\":
|
||||
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
|
||||
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 1748,\n \"completion_tokens\":
|
||||
335,\n \"total_tokens\": 2083,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
\"assistant\",\n \"content\": \"- **The Evolution of AI Agents: From
|
||||
Rule-Based Bots to Autonomous Decision Makers** \\n Explore the historical
|
||||
development of AI agents, tracing the journey from simple scripted chatbots
|
||||
to advanced autonomous systems capable of complex decision-making and learning.
|
||||
Dive into key technological milestones, breakthroughs in machine learning,
|
||||
and current state-of-the-art AI agents. Discuss implications for industries
|
||||
such as customer service, healthcare, and autonomous vehicles, highlighting
|
||||
both opportunities and ethical concerns.\\n\\n- **AI in Daily Life: How Intelligent
|
||||
Agents Are Reshaping Human Routines** \\n Investigate the integration of
|
||||
AI agents in everyday life\u2014from virtual assistants like Siri and Alexa
|
||||
to personalized recommendation systems and smart home devices. Analyze how
|
||||
these AI tools influence productivity, privacy, and social behavior. Include
|
||||
human interest elements through stories of individuals or communities who
|
||||
have embraced or resisted these technologies.\\n\\n- **The Future of Work:
|
||||
AI Agents as Collaborative Colleagues** \\n Examine how AI agents are transforming
|
||||
workplaces by acting as collaborators rather than just tools. Cover applications
|
||||
in creative fields, data analysis, and decision support, while addressing
|
||||
potential challenges such as job displacement, new skill requirements, and
|
||||
the evolving definition of teamwork. Use expert opinions and case studies
|
||||
to paint a nuanced future outlook.\\n\\n- **Ethics and Accountability in AI
|
||||
Agent Development** \\n Delve into the ethical dilemmas posed by increasingly
|
||||
autonomous AI agents\u2014topics like bias in algorithms, data privacy, and
|
||||
accountability for AI-driven decisions. Explore measures being taken globally
|
||||
to regulate AI, frameworks for responsible AI development, and the role of
|
||||
public awareness. Include historical context about technology ethics to provide
|
||||
depth.\\n\\n- **Human-AI Symbiosis: Stories of Innovative Partnerships Shaping
|
||||
Our World** \\n Tell compelling human interest stories about individuals
|
||||
or organizations pioneering collaborative projects with AI agents that lead
|
||||
to breakthroughs in science, art, or social good. Highlight how these partnerships
|
||||
transcend traditional human-machine interaction and open new creative and
|
||||
problem-solving possibilities, inspiring readers about the potential of human-AI
|
||||
synergy.\",\n \"refusal\": null,\n \"annotations\": []\n },\n
|
||||
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
|
||||
\ \"usage\": {\n \"prompt_tokens\": 1903,\n \"completion_tokens\": 399,\n
|
||||
\ \"total_tokens\": 2302,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_b7c8e3f100\"\n}\n"
|
||||
\"default\",\n \"system_fingerprint\": \"fp_df40ab6c25\"\n}\n"
|
||||
headers:
|
||||
CF-Cache-Status:
|
||||
- DYNAMIC
|
||||
CF-Ray:
|
||||
- 9db938e60d5bc5e7-EWR
|
||||
- 9ea3cb5a6957b301-TPE
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 13 Mar 2026 07:11:04 GMT
|
||||
- Fri, 10 Apr 2026 18:25:31 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
@@ -673,7 +707,7 @@ interactions:
|
||||
openai-organization:
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '2009'
|
||||
- '2183'
|
||||
openai-project:
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
|
||||
@@ -125,7 +125,7 @@ class TestDeployCommand(unittest.TestCase):
        mock_response.json.return_value = {"uuid": "test-uuid"}
        self.mock_client.deploy_by_uuid.return_value = mock_response

        self.deploy_command.deploy(uuid="test-uuid")
        self.deploy_command.deploy(uuid="test-uuid", skip_validate=True)

        self.mock_client.deploy_by_uuid.assert_called_once_with("test-uuid")
        mock_display.assert_called_once_with({"uuid": "test-uuid"})
@@ -137,7 +137,7 @@ class TestDeployCommand(unittest.TestCase):
        mock_response.json.return_value = {"uuid": "test-uuid"}
        self.mock_client.deploy_by_name.return_value = mock_response

        self.deploy_command.deploy()
        self.deploy_command.deploy(skip_validate=True)

        self.mock_client.deploy_by_name.assert_called_once_with("test_project")
        mock_display.assert_called_once_with({"uuid": "test-uuid"})
@@ -156,7 +156,7 @@ class TestDeployCommand(unittest.TestCase):
        self.mock_client.create_crew.return_value = mock_response

        with patch("sys.stdout", new=StringIO()) as fake_out:
            self.deploy_command.create_crew()
            self.deploy_command.create_crew(skip_validate=True)
            self.assertIn("Deployment created successfully!", fake_out.getvalue())
            self.assertIn("new-uuid", fake_out.getvalue())

430  lib/crewai/tests/cli/deploy/test_validate.py  Normal file
@@ -0,0 +1,430 @@
"""Tests for `crewai.cli.deploy.validate`.

The fixtures here correspond 1:1 to the deployment-failure patterns observed
in the #crewai-deployment-failures Slack channel that motivated this work.
"""

from __future__ import annotations

from pathlib import Path
from textwrap import dedent
from typing import Iterable
from unittest.mock import patch

import pytest

from crewai.cli.deploy.validate import (
    DeployValidator,
    Severity,
    normalize_package_name,
)

def _make_pyproject(
    name: str = "my_crew",
    dependencies: Iterable[str] = ("crewai>=1.14.0",),
    *,
    hatchling: bool = False,
    flow: bool = False,
    extra: str = "",
) -> str:
    deps = ", ".join(f'"{d}"' for d in dependencies)
    lines = [
        "[project]",
        f'name = "{name}"',
        'version = "0.1.0"',
        f"dependencies = [{deps}]",
    ]
    if hatchling:
        lines += [
            "",
            "[build-system]",
            'requires = ["hatchling"]',
            'build-backend = "hatchling.build"',
        ]
    if flow:
        lines += ["", "[tool.crewai]", 'type = "flow"']
    if extra:
        lines += ["", extra]
    return "\n".join(lines) + "\n"

def _scaffold_standard_crew(
    root: Path,
    *,
    name: str = "my_crew",
    include_crew_py: bool = True,
    include_agents_yaml: bool = True,
    include_tasks_yaml: bool = True,
    include_lockfile: bool = True,
    pyproject: str | None = None,
) -> Path:
    (root / "pyproject.toml").write_text(pyproject or _make_pyproject(name=name))
    if include_lockfile:
        (root / "uv.lock").write_text("# dummy uv lockfile\n")

    pkg_dir = root / "src" / normalize_package_name(name)
    pkg_dir.mkdir(parents=True)
    (pkg_dir / "__init__.py").write_text("")

    if include_crew_py:
        (pkg_dir / "crew.py").write_text(
            dedent(
                """
                from crewai.project import CrewBase, crew

                @CrewBase
                class MyCrew:
                    agents_config = "config/agents.yaml"
                    tasks_config = "config/tasks.yaml"

                    @crew
                    def crew(self):
                        from crewai import Crew
                        return Crew(agents=[], tasks=[])
                """
            ).strip()
            + "\n"
        )

    config_dir = pkg_dir / "config"
    config_dir.mkdir()
    if include_agents_yaml:
        (config_dir / "agents.yaml").write_text("{}\n")
    if include_tasks_yaml:
        (config_dir / "tasks.yaml").write_text("{}\n")

    return pkg_dir

def _codes(validator: DeployValidator) -> set[str]:
    return {r.code for r in validator.results}


def _run_without_import_check(root: Path) -> DeployValidator:
    """Run validation with the subprocess-based import check stubbed out;
    the classifier is exercised directly in its own tests below."""
    with patch.object(DeployValidator, "_check_module_imports", lambda self: None):
        v = DeployValidator(project_root=root)
        v.run()
    return v

@pytest.mark.parametrize(
    "project_name, expected",
    [
        ("my-crew", "my_crew"),
        ("My Cool-Project", "my_cool_project"),
        ("crew123", "crew123"),
        ("crew.name!with$chars", "crewnamewithchars"),
    ],
)
def test_normalize_package_name(project_name: str, expected: str) -> None:
    assert normalize_package_name(project_name) == expected

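The parametrized cases above pin down the normalization contract: lowercase, separator characters become underscores, and any remaining punctuation is dropped. As a sketch only (the shipped `normalize_package_name` may be implemented differently), a function consistent with those cases could look like:

```python
import re


def normalize_package_name_sketch(name: str) -> str:
    # Lowercase, map separators (hyphen, space) to underscores, then drop
    # anything that is not a valid Python identifier character.
    lowered = name.lower().replace("-", "_").replace(" ", "_")
    return re.sub(r"[^a-z0-9_]", "", lowered)
```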
def test_valid_standard_crew_project_passes(tmp_path: Path) -> None:
    _scaffold_standard_crew(tmp_path)
    v = _run_without_import_check(tmp_path)
    assert v.ok, f"expected clean run, got {v.results}"


def test_missing_pyproject_errors(tmp_path: Path) -> None:
    v = _run_without_import_check(tmp_path)
    assert "missing_pyproject" in _codes(v)
    assert not v.ok


def test_invalid_pyproject_errors(tmp_path: Path) -> None:
    (tmp_path / "pyproject.toml").write_text("this is not valid toml ====\n")
    v = _run_without_import_check(tmp_path)
    assert "invalid_pyproject" in _codes(v)


def test_missing_project_name_errors(tmp_path: Path) -> None:
    (tmp_path / "pyproject.toml").write_text(
        '[project]\nversion = "0.1.0"\ndependencies = ["crewai>=1.14.0"]\n'
    )
    v = _run_without_import_check(tmp_path)
    assert "missing_project_name" in _codes(v)


def test_missing_lockfile_errors(tmp_path: Path) -> None:
    _scaffold_standard_crew(tmp_path, include_lockfile=False)
    v = _run_without_import_check(tmp_path)
    assert "missing_lockfile" in _codes(v)


def test_poetry_lock_is_accepted(tmp_path: Path) -> None:
    _scaffold_standard_crew(tmp_path, include_lockfile=False)
    (tmp_path / "poetry.lock").write_text("# poetry lockfile\n")
    v = _run_without_import_check(tmp_path)
    assert "missing_lockfile" not in _codes(v)


def test_stale_lockfile_warns(tmp_path: Path) -> None:
    _scaffold_standard_crew(tmp_path)
    # Make lockfile older than pyproject.
    lock = tmp_path / "uv.lock"
    pyproject = tmp_path / "pyproject.toml"
    old_time = pyproject.stat().st_mtime - 60
    import os

    os.utime(lock, (old_time, old_time))
    v = _run_without_import_check(tmp_path)
    assert "stale_lockfile" in _codes(v)
    # Stale is a warning, so the run can still be ok (no errors).
    assert v.ok

def test_missing_package_dir_errors(tmp_path: Path) -> None:
    # pyproject says name=my_crew but we only create src/other_pkg/
    (tmp_path / "pyproject.toml").write_text(_make_pyproject(name="my_crew"))
    (tmp_path / "uv.lock").write_text("")
    (tmp_path / "src" / "other_pkg").mkdir(parents=True)
    v = _run_without_import_check(tmp_path)
    codes = _codes(v)
    assert "missing_package_dir" in codes
    finding = next(r for r in v.results if r.code == "missing_package_dir")
    assert "other_pkg" in finding.hint


def test_egg_info_only_errors_with_targeted_hint(tmp_path: Path) -> None:
    """Regression for the case where only src/<name>.egg-info/ exists."""
    (tmp_path / "pyproject.toml").write_text(_make_pyproject(name="odoo_pm_agents"))
    (tmp_path / "uv.lock").write_text("")
    (tmp_path / "src" / "odoo_pm_agents.egg-info").mkdir(parents=True)
    v = _run_without_import_check(tmp_path)
    finding = next(r for r in v.results if r.code == "missing_package_dir")
    assert "egg-info" in finding.hint


def test_stale_egg_info_sibling_warns(tmp_path: Path) -> None:
    _scaffold_standard_crew(tmp_path)
    (tmp_path / "src" / "my_crew.egg-info").mkdir()
    v = _run_without_import_check(tmp_path)
    assert "stale_egg_info" in _codes(v)


def test_missing_crew_py_errors(tmp_path: Path) -> None:
    _scaffold_standard_crew(tmp_path, include_crew_py=False)
    v = _run_without_import_check(tmp_path)
    assert "missing_crew_py" in _codes(v)


def test_missing_agents_yaml_errors(tmp_path: Path) -> None:
    _scaffold_standard_crew(tmp_path, include_agents_yaml=False)
    v = _run_without_import_check(tmp_path)
    assert "missing_agents_yaml" in _codes(v)


def test_missing_tasks_yaml_errors(tmp_path: Path) -> None:
    _scaffold_standard_crew(tmp_path, include_tasks_yaml=False)
    v = _run_without_import_check(tmp_path)
    assert "missing_tasks_yaml" in _codes(v)


def test_flow_project_requires_main_py(tmp_path: Path) -> None:
    (tmp_path / "pyproject.toml").write_text(
        _make_pyproject(name="my_flow", flow=True)
    )
    (tmp_path / "uv.lock").write_text("")
    (tmp_path / "src" / "my_flow").mkdir(parents=True)
    v = _run_without_import_check(tmp_path)
    assert "missing_flow_main" in _codes(v)


def test_flow_project_with_main_py_passes(tmp_path: Path) -> None:
    (tmp_path / "pyproject.toml").write_text(
        _make_pyproject(name="my_flow", flow=True)
    )
    (tmp_path / "uv.lock").write_text("")
    pkg = tmp_path / "src" / "my_flow"
    pkg.mkdir(parents=True)
    (pkg / "main.py").write_text("# flow entrypoint\n")
    v = _run_without_import_check(tmp_path)
    assert "missing_flow_main" not in _codes(v)


def test_hatchling_without_wheel_config_passes_when_pkg_dir_matches(
    tmp_path: Path,
) -> None:
    _scaffold_standard_crew(
        tmp_path, pyproject=_make_pyproject(name="my_crew", hatchling=True)
    )
    v = _run_without_import_check(tmp_path)
    # src/my_crew/ exists, so the hatch default finds it; no wheel error.
    assert "hatch_wheel_target_missing" not in _codes(v)


def test_hatchling_with_explicit_wheel_config_passes(tmp_path: Path) -> None:
    extra = (
        "[tool.hatch.build.targets.wheel]\n"
        'packages = ["src/my_crew"]'
    )
    _scaffold_standard_crew(
        tmp_path,
        pyproject=_make_pyproject(name="my_crew", hatchling=True, extra=extra),
    )
    v = _run_without_import_check(tmp_path)
    assert "hatch_wheel_target_missing" not in _codes(v)


def test_classify_missing_openai_key_is_warning(tmp_path: Path) -> None:
    v = DeployValidator(project_root=tmp_path)
    v._classify_import_error(
        "ImportError",
        "Error importing native provider: 1 validation error for OpenAICompletion\n"
        " Value error, OPENAI_API_KEY is required",
        tb="",
    )
    assert len(v.results) == 1
    result = v.results[0]
    assert result.code == "llm_init_missing_key"
    assert result.severity is Severity.WARNING
    assert "OPENAI_API_KEY" in result.title


def test_classify_azure_extra_missing_is_error(tmp_path: Path) -> None:
    """The real message raised by the Azure provider module uses plain
    double quotes around the install command (no backticks). Match the
    exact string that ships in the provider source so this test actually
    guards the regex used in production."""
    v = DeployValidator(project_root=tmp_path)
    v._classify_import_error(
        "ImportError",
        'Azure AI Inference native provider not available, to install: uv add "crewai[azure-ai-inference]"',
        tb="",
    )
    assert "missing_provider_extra" in _codes(v)
    finding = next(r for r in v.results if r.code == "missing_provider_extra")
    assert finding.title.startswith("Azure AI Inference")
    assert 'uv add "crewai[azure-ai-inference]"' in finding.hint

||||
@pytest.mark.parametrize(
|
||||
"pkg_label, install_cmd",
|
||||
[
|
||||
("Anthropic", 'uv add "crewai[anthropic]"'),
|
||||
("AWS Bedrock", 'uv add "crewai[bedrock]"'),
|
||||
("Google Gen AI", 'uv add "crewai[google-genai]"'),
|
||||
],
|
||||
)
|
||||
def test_classify_missing_provider_extra_matches_real_messages(
|
||||
tmp_path: Path, pkg_label: str, install_cmd: str
|
||||
) -> None:
|
||||
"""Regression for the four provider error strings verbatim."""
|
||||
v = DeployValidator(project_root=tmp_path)
|
||||
v._classify_import_error(
|
||||
"ImportError",
|
||||
f"{pkg_label} native provider not available, to install: {install_cmd}",
|
||||
tb="",
|
||||
)
|
||||
assert "missing_provider_extra" in _codes(v)
|
||||
finding = next(r for r in v.results if r.code == "missing_provider_extra")
|
||||
assert install_cmd in finding.hint
|
||||
|
||||
|
def test_classify_keyerror_at_import_is_warning(tmp_path: Path) -> None:
    """Regression for `KeyError: 'SERPLY_API_KEY'` raised at import time."""
    v = DeployValidator(project_root=tmp_path)
    v._classify_import_error("KeyError", "'SERPLY_API_KEY'", tb="")
    codes = _codes(v)
    assert "env_var_read_at_import" in codes


def test_classify_no_crewbase_class_is_error(tmp_path: Path) -> None:
    v = DeployValidator(project_root=tmp_path)
    v._classify_import_error(
        "ValueError",
        "Crew class annotated with @CrewBase not found.",
        tb="",
    )
    assert "no_crewbase_class" in _codes(v)


def test_classify_no_flow_subclass_is_error(tmp_path: Path) -> None:
    v = DeployValidator(project_root=tmp_path)
    v._classify_import_error("ValueError", "No Flow subclass found in the module.", tb="")
    assert "no_flow_subclass" in _codes(v)


def test_classify_stale_crewai_pin_attribute_error(tmp_path: Path) -> None:
    """Regression for a stale crewai pin missing `_load_response_format`."""
    v = DeployValidator(project_root=tmp_path)
    v._classify_import_error(
        "AttributeError",
        "'EmploymentServiceDecisionSupportSystemCrew' object has no attribute '_load_response_format'",
        tb="",
    )
    assert "stale_crewai_pin" in _codes(v)


def test_classify_unknown_error_is_fallback(tmp_path: Path) -> None:
    v = DeployValidator(project_root=tmp_path)
    v._classify_import_error("RuntimeError", "something weird happened", tb="")
    assert "import_failed" in _codes(v)


def test_env_var_referenced_but_missing_warns(tmp_path: Path) -> None:
    pkg = _scaffold_standard_crew(tmp_path)
    (pkg / "tools.py").write_text(
        'import os\nkey = os.getenv("TAVILY_API_KEY")\n'
    )
    import os

    # Make sure the test doesn't inherit the key from the host environment.
    with patch.dict(os.environ, {}, clear=False):
        os.environ.pop("TAVILY_API_KEY", None)
        v = _run_without_import_check(tmp_path)
        codes = _codes(v)
    assert "env_vars_not_in_dotenv" in codes


def test_env_var_in_dotenv_does_not_warn(tmp_path: Path) -> None:
    pkg = _scaffold_standard_crew(tmp_path)
    (pkg / "tools.py").write_text(
        'import os\nkey = os.getenv("TAVILY_API_KEY")\n'
    )
    (tmp_path / ".env").write_text("TAVILY_API_KEY=abc\n")
    v = _run_without_import_check(tmp_path)
    assert "env_vars_not_in_dotenv" not in _codes(v)


def test_old_crewai_pin_in_uv_lock_warns(tmp_path: Path) -> None:
    _scaffold_standard_crew(tmp_path)
    (tmp_path / "uv.lock").write_text(
        'name = "crewai"\nversion = "1.10.0"\nsource = { registry = "..." }\n'
    )
    v = _run_without_import_check(tmp_path)
    assert "old_crewai_pin" in _codes(v)


def test_modern_crewai_pin_does_not_warn(tmp_path: Path) -> None:
    _scaffold_standard_crew(tmp_path)
    (tmp_path / "uv.lock").write_text(
        'name = "crewai"\nversion = "1.14.1"\nsource = { registry = "..." }\n'
    )
    v = _run_without_import_check(tmp_path)
    assert "old_crewai_pin" not in _codes(v)

def test_create_crew_aborts_on_validation_error(tmp_path: Path) -> None:
    """`crewai deploy create` must not contact the API when validation fails."""
    from unittest.mock import MagicMock, patch as mock_patch

    from crewai.cli.deploy.main import DeployCommand

    with (
        mock_patch("crewai.cli.command.get_auth_token", return_value="tok"),
        mock_patch("crewai.cli.deploy.main.get_project_name", return_value="p"),
        mock_patch("crewai.cli.command.PlusAPI") as mock_api,
        mock_patch(
            "crewai.cli.deploy.main.validate_project"
        ) as mock_validate,
    ):
        mock_validate.return_value = MagicMock(ok=False)
        cmd = DeployCommand()
        cmd.create_crew()
        assert not cmd.plus_api_client.create_crew.called
        del mock_api  # silence unused-var lint
lib/crewai/tests/cli/remote_template/__init__.py (new file, 0 lines)
lib/crewai/tests/cli/remote_template/test_main.py (new file, 283 lines)
@@ -0,0 +1,283 @@
import io
import os
import zipfile
from unittest.mock import MagicMock, patch

import httpx
import pytest
from click.testing import CliRunner

from crewai.cli.cli import template_add, template_list
from crewai.cli.remote_template.main import TemplateCommand


@pytest.fixture
def runner():
    return CliRunner()


SAMPLE_REPOS = [
    {"name": "template_deep_research", "description": "Deep research template", "private": False},
    {"name": "template_pull_request_review", "description": "PR review template", "private": False},
    {"name": "template_conversational_example", "description": "Conversational demo", "private": False},
    {"name": "crewai", "description": "Main repo", "private": False},
    {"name": "marketplace-crew-template", "description": "Marketplace", "private": False},
]


def _make_zipball(files: dict[str, str], top_dir: str = "crewAIInc-template_test-abc123") -> bytes:
    """Create an in-memory zipball mimicking GitHub's format."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr(f"{top_dir}/", "")
        for path, content in files.items():
            zf.writestr(f"{top_dir}/{path}", content)
    return buf.getvalue()

# --- CLI command tests ---


@patch("crewai.cli.cli.TemplateCommand")
def test_template_list_command(mock_cls, runner):
    mock_instance = MagicMock()
    mock_cls.return_value = mock_instance

    result = runner.invoke(template_list)

    assert result.exit_code == 0
    mock_cls.assert_called_once()
    mock_instance.list_templates.assert_called_once()


@patch("crewai.cli.cli.TemplateCommand")
def test_template_add_command(mock_cls, runner):
    mock_instance = MagicMock()
    mock_cls.return_value = mock_instance

    result = runner.invoke(template_add, ["deep_research"])

    assert result.exit_code == 0
    mock_cls.assert_called_once()
    mock_instance.add_template.assert_called_once_with("deep_research", None)


@patch("crewai.cli.cli.TemplateCommand")
def test_template_add_with_output_dir(mock_cls, runner):
    mock_instance = MagicMock()
    mock_cls.return_value = mock_instance

    result = runner.invoke(template_add, ["deep_research", "-o", "my_project"])

    assert result.exit_code == 0
    mock_instance.add_template.assert_called_once_with("deep_research", "my_project")


# --- TemplateCommand unit tests ---


class TestTemplateCommand:
    @pytest.fixture
    def cmd(self):
        with patch.object(TemplateCommand, "__init__", return_value=None):
            instance = TemplateCommand()
            instance._telemetry = MagicMock()
            return instance

    @patch("crewai.cli.remote_template.main.httpx.get")
    def test_fetch_templates_filters_by_prefix(self, mock_get, cmd):
        mock_response = MagicMock()
        mock_response.json.return_value = SAMPLE_REPOS
        mock_response.raise_for_status = MagicMock()
        # Return empty on page 2 to stop pagination
        mock_empty = MagicMock()
        mock_empty.json.return_value = []
        mock_empty.raise_for_status = MagicMock()
        mock_get.side_effect = [mock_response, mock_empty]

        templates = cmd._fetch_templates()

        assert len(templates) == 3
        assert all(t["name"].startswith("template_") for t in templates)

    @patch("crewai.cli.remote_template.main.httpx.get")
    def test_fetch_templates_excludes_private(self, mock_get, cmd):
        repos = [
            {"name": "template_private_one", "description": "", "private": True},
            {"name": "template_public_one", "description": "", "private": False},
        ]
        mock_response = MagicMock()
        mock_response.json.return_value = repos
        mock_response.raise_for_status = MagicMock()
        mock_empty = MagicMock()
        mock_empty.json.return_value = []
        mock_empty.raise_for_status = MagicMock()
        mock_get.side_effect = [mock_response, mock_empty]

        templates = cmd._fetch_templates()

        assert len(templates) == 1
        assert templates[0]["name"] == "template_public_one"

    @patch("crewai.cli.remote_template.main.httpx.get")
    def test_fetch_templates_api_error(self, mock_get, cmd):
        mock_get.side_effect = httpx.HTTPError("connection error")

        with pytest.raises(SystemExit):
            cmd._fetch_templates()

    @patch("crewai.cli.remote_template.main.click.prompt", return_value="q")
    @patch("crewai.cli.remote_template.main.httpx.get")
    def test_list_templates_prints_output(self, mock_get, mock_prompt, cmd):
        mock_response = MagicMock()
        mock_response.json.return_value = SAMPLE_REPOS
        mock_response.raise_for_status = MagicMock()
        mock_empty = MagicMock()
        mock_empty.json.return_value = []
        mock_empty.raise_for_status = MagicMock()
        mock_get.side_effect = [mock_response, mock_empty]

        with patch("crewai.cli.remote_template.main.console") as mock_console:
            cmd.list_templates()
        assert mock_console.print.call_count > 0

    @patch("crewai.cli.remote_template.main.httpx.get")
    def test_resolve_repo_name_with_prefix(self, mock_get, cmd):
        mock_response = MagicMock()
        mock_response.json.return_value = SAMPLE_REPOS
        mock_response.raise_for_status = MagicMock()
        mock_empty = MagicMock()
        mock_empty.json.return_value = []
        mock_empty.raise_for_status = MagicMock()
        mock_get.side_effect = [mock_response, mock_empty]

        result = cmd._resolve_repo_name("template_deep_research")
        assert result == "template_deep_research"

    @patch("crewai.cli.remote_template.main.httpx.get")
    def test_resolve_repo_name_without_prefix(self, mock_get, cmd):
        mock_response = MagicMock()
        mock_response.json.return_value = SAMPLE_REPOS
        mock_response.raise_for_status = MagicMock()
        mock_empty = MagicMock()
        mock_empty.json.return_value = []
        mock_empty.raise_for_status = MagicMock()
        mock_get.side_effect = [mock_response, mock_empty]

        result = cmd._resolve_repo_name("deep_research")
        assert result == "template_deep_research"

    @patch("crewai.cli.remote_template.main.httpx.get")
    def test_resolve_repo_name_not_found(self, mock_get, cmd):
        mock_response = MagicMock()
        mock_response.json.return_value = SAMPLE_REPOS
        mock_response.raise_for_status = MagicMock()
        mock_empty = MagicMock()
        mock_empty.json.return_value = []
        mock_empty.raise_for_status = MagicMock()
        mock_get.side_effect = [mock_response, mock_empty]

        result = cmd._resolve_repo_name("nonexistent")
        assert result is None

    def test_extract_zip(self, cmd, tmp_path):
        files = {
            "README.md": "# Test Template",
            "src/main.py": "print('hello')",
            "config/settings.yaml": "key: value",
        }
        zip_bytes = _make_zipball(files)
        dest = str(tmp_path / "output")

        cmd._extract_zip(zip_bytes, dest)

        assert os.path.isfile(os.path.join(dest, "README.md"))
        assert os.path.isfile(os.path.join(dest, "src", "main.py"))
        assert os.path.isfile(os.path.join(dest, "config", "settings.yaml"))

        with open(os.path.join(dest, "src", "main.py")) as f:
            assert f.read() == "print('hello')"

    @patch.object(TemplateCommand, "_extract_zip")
    @patch.object(TemplateCommand, "_download_zip")
    @patch.object(TemplateCommand, "_resolve_repo_name")
    def test_add_template_success(self, mock_resolve, mock_download, mock_extract, cmd, tmp_path):
        mock_resolve.return_value = "template_deep_research"
        mock_download.return_value = b"fake-zip-bytes"

        os.chdir(tmp_path)
        cmd.add_template("deep_research")

        mock_resolve.assert_called_once_with("deep_research")
        mock_download.assert_called_once_with("template_deep_research")
        expected_dest = os.path.join(str(tmp_path), "deep_research")
        mock_extract.assert_called_once_with(b"fake-zip-bytes", expected_dest)

    @patch.object(TemplateCommand, "_resolve_repo_name")
    def test_add_template_not_found(self, mock_resolve, cmd):
        mock_resolve.return_value = None

        with pytest.raises(SystemExit):
            cmd.add_template("nonexistent")

    @patch.object(TemplateCommand, "_extract_zip")
    @patch.object(TemplateCommand, "_download_zip")
    @patch("crewai.cli.remote_template.main.click.prompt", return_value="my_project")
    @patch.object(TemplateCommand, "_resolve_repo_name")
    def test_add_template_dir_exists_prompts_rename(
        self, mock_resolve, mock_prompt, mock_download, mock_extract, cmd, tmp_path
    ):
        mock_resolve.return_value = "template_deep_research"
        mock_download.return_value = b"fake-zip-bytes"
        existing = tmp_path / "deep_research"
        existing.mkdir()

        os.chdir(tmp_path)
        cmd.add_template("deep_research")

        expected_dest = os.path.join(str(tmp_path), "my_project")
        mock_extract.assert_called_once_with(b"fake-zip-bytes", expected_dest)

    @patch.object(TemplateCommand, "_resolve_repo_name")
    @patch("crewai.cli.remote_template.main.click.prompt", return_value="q")
    def test_add_template_dir_exists_quit(self, mock_prompt, mock_resolve, cmd, tmp_path):
        mock_resolve.return_value = "template_deep_research"
        existing = tmp_path / "deep_research"
        existing.mkdir()

        os.chdir(tmp_path)
        cmd.add_template("deep_research")
        # Should return without downloading

    @patch.object(TemplateCommand, "_install_repo")
    @patch("crewai.cli.remote_template.main.click.prompt", return_value="2")
    @patch("crewai.cli.remote_template.main.httpx.get")
    def test_list_templates_selects_and_installs(self, mock_get, mock_prompt, mock_install, cmd):
        mock_response = MagicMock()
        mock_response.json.return_value = SAMPLE_REPOS
        mock_response.raise_for_status = MagicMock()
        mock_empty = MagicMock()
        mock_empty.json.return_value = []
        mock_empty.raise_for_status = MagicMock()
        mock_get.side_effect = [mock_response, mock_empty]

        with patch("crewai.cli.remote_template.main.console"):
            cmd.list_templates()

        # Templates are sorted by name; index 1 (choice "2") = template_deep_research
        mock_install.assert_called_once_with("template_deep_research")

    @patch.object(TemplateCommand, "_install_repo")
    @patch("crewai.cli.remote_template.main.click.prompt", return_value="q")
    @patch("crewai.cli.remote_template.main.httpx.get")
    def test_list_templates_quit(self, mock_get, mock_prompt, mock_install, cmd):
        mock_response = MagicMock()
        mock_response.json.return_value = SAMPLE_REPOS
        mock_response.raise_for_status = MagicMock()
        mock_empty = MagicMock()
        mock_empty.json.return_value = []
        mock_empty.raise_for_status = MagicMock()
        mock_get.side_effect = [mock_response, mock_empty]

        with patch("crewai.cli.remote_template.main.console"):
            cmd.list_templates()

        mock_install.assert_not_called()
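The page-one/empty-page-two mock pair is repeated in nearly every test of this class. One way to factor it out, sketched here as a suggestion with my own naming (not a helper that exists in the suite):

```python
from unittest.mock import MagicMock


def paginated_get_mocks(*pages):
    """Build one MagicMock HTTP response per page, plus a trailing empty
    page so a pagination loop (like _fetch_templates) terminates."""
    mocks = []
    for payload in (*pages, []):
        response = MagicMock()
        response.json.return_value = payload
        response.raise_for_status = MagicMock()
        mocks.append(response)
    return mocks
```

Each test body would then shrink to `mock_get.side_effect = paginated_get_mocks(SAMPLE_REPOS)`.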
@@ -367,7 +367,7 @@ def test_deploy_push(command, runner):
     result = runner.invoke(deploy_push, ["-u", uuid])
 
     assert result.exit_code == 0
-    mock_deploy.deploy.assert_called_once_with(uuid=uuid)
+    mock_deploy.deploy.assert_called_once_with(uuid=uuid, skip_validate=False)
 
 
 @mock.patch("crewai.cli.cli.DeployCommand")
@@ -376,7 +376,7 @@ def test_deploy_push_no_uuid(command, runner):
     result = runner.invoke(deploy_push)
 
     assert result.exit_code == 0
-    mock_deploy.deploy.assert_called_once_with(uuid=None)
+    mock_deploy.deploy.assert_called_once_with(uuid=None, skip_validate=False)
 
 
 @mock.patch("crewai.cli.cli.DeployCommand")
@@ -161,7 +161,8 @@ def test_install_api_error(mock_get, capsys, tool_command):
 
 
 @patch("crewai.cli.tools.main.git.Repository.is_synced", return_value=False)
-def test_publish_when_not_in_sync(mock_is_synced, capsys, tool_command):
+@patch("crewai.cli.tools.main.git.Repository.__init__", return_value=None)
+def test_publish_when_not_in_sync(mock_init, mock_is_synced, capsys, tool_command):
     with raises(SystemExit):
         tool_command.publish(is_public=True)
 
Some files were not shown because too many files have changed in this diff.