Mirror of https://github.com/crewAIInc/crewAI.git, synced 2026-05-08 02:29:00 +00:00.

Comparing `1.14.3a1...feat/agent` — 29 commits.
| Author | SHA1 | Date |
|---|---|---|
| | aee571b775 | |
| | cb46a1c4ba | |
| | d9046b98dd | |
| | b0e2fda105 | |
| | 69d777ca50 | |
| | 77b2835a1d | |
| | c77f1632dd | |
| | 69461076df | |
| | 55937d7523 | |
| | bc2fb71560 | |
| | 3e9deaf9c0 | |
| | 3f7637455c | |
| | fdf3101b39 | |
| | c94f2e8f28 | |
| | 944fe6d435 | |
| | 3be2fb65dc | |
| | 160e25c1a9 | |
| | b34b336273 | |
| | 42d6c03ebc | |
| | d4f9f875f7 | |
| | 6d153284d4 | |
| | 84a4d47aa7 | |
| | 9caed61f36 | |
| | d45ed61db5 | |
| | 3b01da9ad9 | |
| | 874405b825 | |
| | d6d04717c2 | |
| | 01b8437940 | |
| | 2c08f54341 | |
1 .gitignore (vendored)
```diff
@@ -30,3 +30,4 @@ chromadb-*.lock
 .crewai/memory
 blogs/*
 secrets/*
+UNKNOWN.egg-info/
```
27 README.md
````diff
@@ -83,6 +83,7 @@ intelligent automations.
 
 ## Table of contents
 
+- [Build with AI](#build-with-ai)
 - [Why CrewAI?](#why-crewai)
 - [Getting Started](#getting-started)
 - [Key Features](#key-features)
@@ -101,6 +102,32 @@ intelligent automations.
 - [Telemetry](#telemetry)
 - [License](#license)
 
+## Build with AI
+
+Using an AI coding agent? Teach it CrewAI best practices in one command:
+
+**Claude Code:**
+```shell
+/plugin marketplace add crewAIInc/skills
+/plugin install crewai-skills@crewai-plugins
+/reload-plugins
+```
+Four skills that activate automatically when you ask relevant CrewAI questions:
+
+| Skill | When it runs |
+|-------|--------------|
+| `getting-started` | Scaffolding new projects, choosing between `LLM.call()` / `Agent` / `Crew` / `Flow`, wiring `crew.py` / `main.py` |
+| `design-agent` | Configuring agents — role, goal, backstory, tools, LLMs, memory, guardrails |
+| `design-task` | Writing task descriptions, dependencies, structured output (`output_pydantic`, `output_json`), human review |
+| `ask-docs` | Querying the live [CrewAI docs MCP server](https://docs.crewai.com/mcp) for up-to-date API details |
+
+**Cursor, Codex, Windsurf, and others ([skills.sh](https://skills.sh/crewaiinc/skills)):**
+```shell
+npx skills add crewaiinc/skills
+```
+
+This installs the official [CrewAI Skills](https://github.com/crewAIInc/skills) — structured instructions that teach coding agents how to scaffold Flows, configure Crews, design agents and tasks, and follow CrewAI patterns.
+
 ## Why CrewAI?
 
 <div align="center" style="margin-bottom: 30px;">
````
@@ -4,6 +4,110 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 25, 2026">
## v1.14.3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3)

## What's Changed

### Features
- Add lifecycle events for checkpoint operations
- Add support for e2b
- Fall back to DefaultAzureCredential when no API key is provided in the Azure integration
- Add Bedrock V4 support
- Add Daytona sandbox tools for enhanced functionality
- Add checkpoint and fork support for standalone agents

### Bug Fixes
- Fix execution_id to be separate from state.id
- Resolve replay of recorded method events on checkpoint resume
- Fix serialization of initial_state class references as JSON schema
- Preserve metadata-only agent skills
- Propagate implicit @CrewBase names to crew events
- Merge execution metadata on duplicate batch initialization
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel result in the guardrail retry loop
- Preserve thought_signature in Gemini streaming tool calls
- Emit task_started on fork resume and redesign the checkpoint TUI
- Use future dates in checkpoint prune tests to prevent time-dependent failures
- Fix dry-run order and handle checked-out stale branch in the devtools release
- Upgrade lxml to >=6.1.0 for a security patch
- Bump python-dotenv to >=1.2.2 for a security patch

### Documentation
- Update changelog and version for v1.14.3
- Add the 'Build with AI' page and update navigation for all languages
- Remove the pricing FAQ from the build-with-ai page across all locales

### Performance
- Optimize the MCP SDK and event types to reduce cold start by ~29%

### Refactoring
- Refactor checkpoint helpers to eliminate duplication and tighten state type hints

## Contributors

@MatthiasHowellYopp, @akaKuruma, @alex-clawd, @github-actions[bot], @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha, @renatonitta

</Update>

<Update label="Apr 23, 2026">
## v1.14.3a3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a3)

## What's Changed

### Features
- Add support for e2b
- Implement fallback to DefaultAzureCredential when no API key is provided

### Bug Fixes
- Upgrade lxml to >=6.1.0 to address security issue GHSA-vfmq-68hx-4jfw

### Documentation
- Remove the pricing FAQ from the build-with-ai page across all locales

### Performance
- Improve cold start time by ~29% through lazy-loading of the MCP SDK and event types

## Contributors

@alex-clawd, @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha

</Update>

<Update label="Apr 22, 2026">
## v1.14.3a2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a2)

## What's Changed

### Features
- Add support for Bedrock V4
- Add Daytona sandbox tools for enhanced functionality
- Add a 'Build with AI' page — AI-native docs for coding agents
- Add Build with AI to the Get Started navigation and page files for all languages (en, ko, pt-BR, ar)

### Bug Fixes
- Fix propagation of implicit @CrewBase names to crew events
- Resolve duplicate batch initialization in the execution metadata merge
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel result in the guardrail retry loop
- Bump python-dotenv to >=1.2.2 for security compliance

### Documentation
- Update changelog and version for v1.14.3a1
- Update descriptions and apply actual translations

## Contributors

@MatthiasHowellYopp, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @renatonitta

</Update>

<Update label="Apr 21, 2026">
## v1.14.3a1
214 docs/ar/guides/coding-tools/build-with-ai.mdx — new file
@@ -0,0 +1,214 @@
---
title: "Build with AI"
description: "Everything AI coding agents need to build, deploy, and scale with CrewAI — skills, machine-readable docs, deployment, and enterprise features."
icon: robot
mode: "wide"
---

# Build with AI

CrewAI is built to be AI-native. This page brings together what an AI coding agent needs to build with CrewAI — whether it's Claude Code, Codex, Cursor, Gemini CLI, or any other assistant helping a developer ship crews and flows.

### Supported Coding Agents

<CardGroup cols={5}>
<Card title="Claude Code" icon="message-bot" color="#D97706" />
<Card title="Cursor" icon="arrow-pointer" color="#3B82F6" />
<Card title="Codex" icon="terminal" color="#10B981" />
<Card title="Windsurf" icon="wind" color="#06B6D4" />
<Card title="Gemini CLI" icon="sparkles" color="#8B5CF6" />
</CardGroup>

<Note>
This page is designed for humans and AI assistants alike. If you're a coding agent, start with **Skills** for CrewAI context, then use **llms.txt** for full docs access.
</Note>

---

## 1. Skills — Teach Your Agent CrewAI

**Skills** are instruction packs that give coding agents deep CrewAI knowledge — how to scaffold Flows, configure Crews, use tools, and follow framework conventions.

<Tabs>
<Tab title="Claude Code (Plugin Marketplace)">
<img src="https://cdn.simpleicons.org/anthropic/D97706" alt="Anthropic" width="28" style={{display: "inline", verticalAlign: "middle", marginRight: "8px"}} />
CrewAI skills are available in the **Claude Code plugin marketplace** — the same distribution channel used by leading AI companies:
```shell
/plugin marketplace add crewAIInc/skills
/plugin install crewai-skills@crewai-plugins
/reload-plugins
```

Four skills activate automatically when you ask CrewAI-related questions:

| Skill | When it runs |
|-------|--------------|
| `getting-started` | New projects, choosing between `LLM.call()` / `Agent` / `Crew` / `Flow`, wiring `crew.py` / `main.py` |
| `design-agent` | Configuring agents — role, goal, backstory, tools, LLMs, memory, guardrails |
| `design-task` | Describing tasks, dependencies, structured output (`output_pydantic`, `output_json`), human review |
| `ask-docs` | Querying the live [CrewAI docs MCP server](https://docs.crewai.com/mcp) for current API details |
</Tab>
<Tab title="npx (Any Agent)">
Works with Claude Code, Codex, Cursor, Gemini CLI, or any coding agent:
```shell
npx skills add crewaiinc/skills
```
Pulls from the [skills.sh registry](https://skills.sh/crewaiinc/skills).
</Tab>
</Tabs>

<Steps>
<Step title="Install the official skill pack">
Use either method above — the Claude Code plugin marketplace or `npx skills add`. Both install the official [crewAIInc/skills](https://github.com/crewAIInc/skills) pack.
</Step>
<Step title="Your agent gets instant CrewAI expertise">
The pack teaches your agent:
- **Flows** — stateful apps, steps, and crew kickoffs
- **Crews & Agents** — YAML-first patterns, roles, tasks, delegation
- **Tools & Integrations** — search, APIs, MCP servers, and common CrewAI tools
- **Project layout** — CLI scaffolds and repo conventions
- **Up-to-date patterns** — tracks current CrewAI docs and best practices
</Step>
<Step title="Start building">
Your agent can now scaffold and build CrewAI projects without you re-explaining the framework each session.
</Step>
</Steps>

<CardGroup cols={2}>
<Card title="Skills concept" icon="bolt" href="/ar/concepts/skills">
How skills work in CrewAI agents — injection, activation, and patterns.
</Card>
<Card title="Skills page" icon="wand-magic-sparkles" href="/ar/skills">
Overview of the crewAIInc/skills pack and what it includes.
</Card>
<Card title="AGENTS.md & coding tools" icon="terminal" href="/ar/guides/coding-tools/agents-md">
Set up AGENTS.md for Claude Code, Codex, Cursor, and Gemini CLI.
</Card>
<Card title="skills.sh registry" icon="globe" href="https://skills.sh/crewaiinc/skills">
Official listing — skills, install stats, and audits.
</Card>
</CardGroup>

---

## 2. llms.txt — Machine-Readable Docs

CrewAI publishes an `llms.txt` file that gives AI assistants direct access to the full documentation in a machine-readable format.

```
https://docs.crewai.com/llms.txt
```

<Tabs>
<Tab title="What is llms.txt?">
[`llms.txt`](https://llmstxt.org/) is an emerging standard for making documentation consumable by large language models. Instead of scraping HTML, your agent can fetch a single structured text file with all the content it needs.

CrewAI's `llms.txt` is **already live** — your agent can use it right now.
</Tab>
<Tab title="How to use it">
Point your coding agent at the URL when it needs CrewAI reference docs:

```
Fetch https://docs.crewai.com/llms.txt for CrewAI documentation.
```

Many coding agents (Claude Code, Cursor, and others) can fetch URLs directly. The file contains structured documentation covering CrewAI concepts, APIs, and guides.
</Tab>
<Tab title="Why it matters">
- **No web scraping** — clean, structured content in one request
- **Always up to date** — served directly from docs.crewai.com
- **Optimized for LLMs** — formatted for context windows, not browsers
- **Complements Skills** — skills teach patterns, llms.txt provides the reference
</Tab>
</Tabs>

---

## 3. Deploy to Enterprise

Go from a local crew to production on **CrewAI AMP** (Agent Management Platform) in minutes.

<Steps>
<Step title="Build locally">
Scaffold and test your crew or flow:
```bash
crewai create crew my_crew
cd my_crew
crewai run
```
</Step>
<Step title="Prepare for deployment">
Make sure your project structure is ready:
```bash
crewai deploy --prepare
```
See the [preparation guide](/ar/enterprise/guides/prepare-for-deployment) for details on structure and requirements.
</Step>
<Step title="Deploy to AMP">
Push to the CrewAI AMP platform:
```bash
crewai deploy
```
You can also deploy via [GitHub integration](/ar/enterprise/guides/deploy-to-amp) or [Crew Studio](/ar/enterprise/guides/enable-crew-studio).
</Step>
<Step title="Access via API">
Your deployed crew gets a REST endpoint. Integrate it into any application:
```bash
curl -X POST https://app.crewai.com/api/v1/crews/<crew-id>/kickoff \
  -H "Authorization: Bearer $CREWAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": {"topic": "AI agents"}}'
```
</Step>
</Steps>

<CardGroup cols={2}>
<Card title="Deploy to AMP" icon="rocket" href="/ar/enterprise/guides/deploy-to-amp">
Full deployment guide — CLI, GitHub, and Crew Studio.
</Card>
<Card title="AMP introduction" icon="globe" href="/ar/enterprise/introduction">
Platform overview — what AMP provides for production crews.
</Card>
</CardGroup>

---

## 4. Enterprise Features

CrewAI AMP is built for production teams. Here's what you get after deployment.

<CardGroup cols={2}>
<Card title="Observability" icon="chart-line">
Detailed execution traces, logs, and performance metrics for every crew run. Monitor agent decisions, tool calls, and task completion in real time.
</Card>
<Card title="Crew Studio" icon="paintbrush">
Low-code/no-code interface to create, customize, and deploy crews visually — then export to code or deploy directly.
</Card>
<Card title="Webhook Streaming" icon="webhook">
Stream real-time events from crew executions to your systems. Integrate with Slack, Zapier, or any webhook consumer.
</Card>
<Card title="Team Management" icon="users">
SSO, RBAC, and organization-level controls. Manage who can create, deploy, and access crews.
</Card>
<Card title="Tool Repository" icon="toolbox">
Publish and share custom tools across your organization. Install community tools from the registry.
</Card>
<Card title="Factory (Self-Hosted)" icon="server">
Run CrewAI AMP on your own infrastructure. Full platform capabilities with data residency and compliance controls.
</Card>
</CardGroup>

<AccordionGroup>
<Accordion title="Who is AMP for?">
Teams that need to move AI agent workflows from prototypes to production — with observability, access controls, and scalable infrastructure. Whether you're a startup or a large enterprise, AMP handles the operational complexity so you can focus on building agents.
</Accordion>
<Accordion title="What deployment options are available?">
- **Cloud (app.crewai.com)** — managed by CrewAI, the fastest path to production
- **Factory (self-hosted)** — on your own infrastructure for full data control
- **Hybrid** — mix cloud and self-hosting based on data sensitivity
</Accordion>
</AccordionGroup>

<Card title="Explore CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
Sign up and deploy your first crew to production.
</Card>
1962 docs/docs.json — file diff suppressed because it is too large
@@ -4,6 +4,110 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 25, 2026">
## v1.14.3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3)

## What's Changed

### Features
- Add lifecycle events for checkpoint operations
- Add support for e2b
- Fall back to DefaultAzureCredential when no API key is provided in Azure integration
- Add Bedrock V4 support
- Add Daytona sandbox tools for enhanced functionality
- Add checkpoint and fork support to standalone agents

### Bug Fixes
- Fix execution_id to be separate from state.id
- Resolve replay of recorded method events on checkpoint resume
- Fix serialization of initial_state class references as JSON schema
- Preserve metadata-only agent skills
- Propagate implicit @CrewBase names to crew events
- Merge execution metadata on duplicate batch initialization
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel result in guardrail retry loop
- Preserve thought_signature in Gemini streaming tool calls
- Emit task_started on fork resume and redesign checkpoint TUI
- Use future dates in checkpoint prune tests to prevent time-dependent failures
- Fix dry-run order and handle checked-out stale branch in devtools release
- Upgrade lxml to >=6.1.0 for security patch
- Bump python-dotenv to >=1.2.2 for security patch

### Documentation
- Update changelog and version for v1.14.3
- Add 'Build with AI' page and update navigation for all languages
- Remove pricing FAQ from build-with-ai page across all locales

### Performance
- Optimize MCP SDK and event types to reduce cold start by ~29%

### Refactoring
- Refactor checkpoint helpers to eliminate duplication and tighten state type hints

## Contributors

@MatthiasHowellYopp, @akaKuruma, @alex-clawd, @github-actions[bot], @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha, @renatonitta

</Update>

<Update label="Apr 23, 2026">
## v1.14.3a3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a3)

## What's Changed

### Features
- Add support for e2b
- Implement fallback to DefaultAzureCredential when no API key is provided

### Bug Fixes
- Upgrade lxml to >=6.1.0 to address security issue GHSA-vfmq-68hx-4jfw

### Documentation
- Remove pricing FAQ from build-with-ai page across all locales

### Performance
- Improve cold start time by ~29% through lazy-loading of MCP SDK and event types

## Contributors

@alex-clawd, @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha

</Update>
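The cold-start improvements above come from deferring heavy imports until first use. A minimal sketch of that general pattern (illustrative only — this is not CrewAI's actual module layout; `json` stands in for a heavy SDK):

```python
import importlib

class LazyModule:
    """Proxy that imports the named module only on first attribute access."""

    def __init__(self, name):
        self._name = name
        self._module = None  # nothing imported yet

    def __getattr__(self, attr):
        # Called only for attributes not on the proxy itself,
        # so the real import cost is paid on first real use.
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)

# 'json' stands in for a slow-to-import dependency.
lazy_json = LazyModule("json")
print(lazy_json.dumps({"ok": True}))  # first access triggers the import
```

Module start-up then pays only for the tiny proxy; the expensive import happens on the first call that actually needs it.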
<Update label="Apr 22, 2026">
## v1.14.3a2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a2)

## What's Changed

### Features
- Add support for bedrock V4
- Add Daytona sandbox tools for enhanced functionality
- Add 'Build with AI' page — AI-native docs for coding agents
- Add Build with AI to Get Started navigation and page files for all languages (en, ko, pt-BR, ar)

### Bug Fixes
- Fix propagation of implicit @CrewBase names to crew events
- Resolve issue with duplicate batch initialization in execution metadata merge
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel result in guardrail retry loop
- Bump python-dotenv to version >=1.2.2 for security compliance

### Documentation
- Update changelog and version for v1.14.3a1
- Update descriptions and apply actual translations

## Contributors

@MatthiasHowellYopp, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @renatonitta

</Update>

<Update label="Apr 21, 2026">
## v1.14.3a1
214 docs/en/guides/coding-tools/build-with-ai.mdx — new file
@@ -0,0 +1,214 @@
---
title: "Build with AI"
description: "Everything AI coding agents need to build, deploy, and scale with CrewAI — skills, machine-readable docs, deployment, and enterprise features."
icon: robot
mode: "wide"
---

# Build with AI

CrewAI is AI-native. This page brings together everything an AI coding agent needs to build with CrewAI — whether you're Claude Code, Codex, Cursor, Gemini CLI, or any other assistant helping a developer ship crews and flows.

### Supported Coding Agents

<CardGroup cols={5}>
<Card title="Claude Code" icon="message-bot" color="#D97706" />
<Card title="Cursor" icon="arrow-pointer" color="#3B82F6" />
<Card title="Codex" icon="terminal" color="#10B981" />
<Card title="Windsurf" icon="wind" color="#06B6D4" />
<Card title="Gemini CLI" icon="sparkles" color="#8B5CF6" />
</CardGroup>

<Note>
This page is designed to be consumed by both humans and AI assistants. If you're a coding agent, start with **Skills** to get CrewAI context, then use **llms.txt** for full docs access.
</Note>

---

## 1. Skills — Teach Your Agent CrewAI

**Skills** are instruction packs that give coding agents deep CrewAI knowledge — how to scaffold Flows, configure Crews, use tools, and follow framework conventions.

<Tabs>
<Tab title="Claude Code (Plugin Marketplace)">
<img src="https://cdn.simpleicons.org/anthropic/D97706" alt="Anthropic" width="28" style={{display: "inline", verticalAlign: "middle", marginRight: "8px"}} />
CrewAI skills are available in the **Claude Code plugin marketplace** — the same distribution channel used by top AI-native companies:
```shell
/plugin marketplace add crewAIInc/skills
/plugin install crewai-skills@crewai-plugins
/reload-plugins
```

Four skills activate automatically when you ask relevant CrewAI questions:

| Skill | When it runs |
|-------|--------------|
| `getting-started` | Scaffolding new projects, choosing between `LLM.call()` / `Agent` / `Crew` / `Flow`, wiring `crew.py` / `main.py` |
| `design-agent` | Configuring agents — role, goal, backstory, tools, LLMs, memory, guardrails |
| `design-task` | Writing task descriptions, dependencies, structured output (`output_pydantic`, `output_json`), human review |
| `ask-docs` | Querying the live [CrewAI docs MCP server](https://docs.crewai.com/mcp) for up-to-date API details |
</Tab>
<Tab title="npx (Any Agent)">
Works with Claude Code, Codex, Cursor, Gemini CLI, or any coding agent:
```shell
npx skills add crewaiinc/skills
```
Pulls from the [skills.sh registry](https://skills.sh/crewaiinc/skills).
</Tab>
</Tabs>

<Steps>
<Step title="Install the official skill pack">
Use either method above — the Claude Code plugin marketplace or `npx skills add`. Both install the official [crewAIInc/skills](https://github.com/crewAIInc/skills) pack.
</Step>
<Step title="Your agent gets instant CrewAI expertise">
The skill pack teaches your agent:
- **Flows** — stateful apps, steps, and crew kickoffs
- **Crews & Agents** — YAML-first patterns, roles, tasks, delegation
- **Tools & Integrations** — search, APIs, MCP servers, and common CrewAI tools
- **Project layout** — CLI scaffolds and repo conventions
- **Up-to-date patterns** — tracks current CrewAI docs and best practices
</Step>
<Step title="Start building">
Your agent can now scaffold and build CrewAI projects without you re-explaining the framework each session.
</Step>
</Steps>

<CardGroup cols={2}>
<Card title="Skills concept" icon="bolt" href="/en/concepts/skills">
How skills work in CrewAI agents — injection, activation, and patterns.
</Card>
<Card title="Skills landing page" icon="wand-magic-sparkles" href="/en/skills">
Overview of the crewAIInc/skills pack and what it includes.
</Card>
<Card title="AGENTS.md & coding tools" icon="terminal" href="/en/guides/coding-tools/agents-md">
Set up AGENTS.md for Claude Code, Codex, Cursor, and Gemini CLI.
</Card>
<Card title="Skills registry (skills.sh)" icon="globe" href="https://skills.sh/crewaiinc/skills">
Official listing — skills, install stats, and audits.
</Card>
</CardGroup>

---

## 2. llms.txt — Machine-Readable Docs

CrewAI publishes an `llms.txt` file that gives AI assistants direct access to the full documentation in a machine-readable format.

```
https://docs.crewai.com/llms.txt
```

<Tabs>
<Tab title="What is llms.txt?">
[`llms.txt`](https://llmstxt.org/) is an emerging standard for making documentation consumable by large language models. Instead of scraping HTML, your agent can fetch a single structured text file with all the content it needs.

CrewAI's `llms.txt` is **already live** — your agent can use it right now.
</Tab>
<Tab title="How to use it">
Point your coding agent at the URL when it needs CrewAI reference docs:

```
Fetch https://docs.crewai.com/llms.txt for CrewAI documentation.
```

Many coding agents (Claude Code, Cursor, etc.) can fetch URLs directly. The file contains structured documentation covering all CrewAI concepts, APIs, and guides.
</Tab>
<Tab title="Why it matters">
- **No scraping required** — clean, structured content in one request
- **Always up-to-date** — served directly from docs.crewai.com
- **Optimized for LLMs** — formatted for context windows, not browsers
- **Complements skills** — skills teach patterns, llms.txt provides reference
</Tab>
</Tabs>
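As a rough illustration of consuming such a file, the sketch below extracts the link entries from llms.txt-style markdown. The sample text is made up; the `- [Name](url): description` entry shape follows the llmstxt.org convention:

```python
import re

# llms.txt link entries look like "- [Name](url): optional description".
ENTRY = re.compile(r"^- \[(?P<name>[^\]]+)\]\((?P<url>[^)]+)\)(?::\s*(?P<desc>.*))?$")

def parse_llms_txt(text):
    """Return (name, url, description) tuples for each link entry."""
    entries = []
    for line in text.splitlines():
        m = ENTRY.match(line.strip())
        if m:
            entries.append((m["name"], m["url"], m["desc"] or ""))
    return entries

# Illustrative sample only — not the real contents of docs.crewai.com/llms.txt.
sample = """# CrewAI
> Multi-agent framework docs.

## Docs
- [Quickstart](https://docs.crewai.com/quickstart): build your first crew
- [Flows](https://docs.crewai.com/flows)
"""
print(parse_llms_txt(sample))
```

An agent could fetch the real file and feed each extracted URL into its retrieval step.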
---

## 3. Deploy to Enterprise

Go from a local crew to production on **CrewAI AMP** (Agent Management Platform) in minutes.

<Steps>
<Step title="Build locally">
Scaffold and test your crew or flow:
```bash
crewai create crew my_crew
cd my_crew
crewai run
```
</Step>
<Step title="Prepare for deployment">
Ensure your project structure is ready:
```bash
crewai deploy --prepare
```
See the [preparation guide](/en/enterprise/guides/prepare-for-deployment) for details on project structure and requirements.
</Step>
<Step title="Deploy to AMP">
Push to the CrewAI AMP platform:
```bash
crewai deploy
```
You can also deploy via [GitHub integration](/en/enterprise/guides/deploy-to-amp) or [Crew Studio](/en/enterprise/guides/enable-crew-studio).
</Step>
<Step title="Access via API">
Your deployed crew gets a REST API endpoint. Integrate it into any application:
```bash
curl -X POST https://app.crewai.com/api/v1/crews/<crew-id>/kickoff \
  -H "Authorization: Bearer $CREWAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": {"topic": "AI agents"}}'
```
</Step>
</Steps>
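The curl call above can be mirrored from application code. A minimal stdlib sketch that builds the same kickoff request (the crew id and key are placeholders, and the response schema is not shown here, so only the request is constructed):

```python
import json
import urllib.request

def kickoff_request(crew_id, inputs, api_key):
    """Build the kickoff POST for a deployed crew (mirrors the curl example)."""
    url = f"https://app.crewai.com/api/v1/crews/{crew_id}/kickoff"
    body = json.dumps({"inputs": inputs}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# "my-crew-id" and "sk-..." are placeholders for a real crew id and API key.
req = kickoff_request("my-crew-id", {"topic": "AI agents"}, "sk-...")
print(req.full_url)
# send with urllib.request.urlopen(req) once real credentials are in place
```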
<CardGroup cols={2}>
<Card title="Deploy to AMP" icon="rocket" href="/en/enterprise/guides/deploy-to-amp">
Full deployment guide — CLI, GitHub, and Crew Studio methods.
</Card>
<Card title="AMP introduction" icon="globe" href="/en/enterprise/introduction">
Platform overview — what AMP provides for production crews.
</Card>
</CardGroup>

---

## 4. Enterprise Features

CrewAI AMP is built for production teams. Here's what you get beyond deployment.

<CardGroup cols={2}>
<Card title="Observability" icon="chart-line">
Detailed execution traces, logs, and performance metrics for every crew run. Monitor agent decisions, tool calls, and task completion in real time.
</Card>
<Card title="Crew Studio" icon="paintbrush">
No-code/low-code interface to create, customize, and deploy crews visually — then export to code or deploy directly.
</Card>
<Card title="Webhook Streaming" icon="webhook">
Stream real-time events from crew executions to your systems. Integrate with Slack, Zapier, or any webhook consumer.
</Card>
<Card title="Team Management" icon="users">
SSO, RBAC, and organization-level controls. Manage who can create, deploy, and access crews across your team.
</Card>
<Card title="Tool Repository" icon="toolbox">
Publish and share custom tools across your organization. Install community tools from the registry.
</Card>
<Card title="Factory (Self-Hosted)" icon="server">
Run CrewAI AMP on your own infrastructure. Full platform capabilities with data residency and compliance controls.
</Card>
</CardGroup>

<AccordionGroup>
<Accordion title="Who is AMP for?">
AMP is for teams that need to move AI agent workflows from prototypes to production — with observability, access controls, and scalable infrastructure. Whether you're a startup or enterprise, AMP handles the operational complexity so you can focus on building agents.
</Accordion>
<Accordion title="What deployment options are available?">
- **Cloud (app.crewai.com)** — managed by CrewAI, fastest path to production
- **Factory (self-hosted)** — run on your own infrastructure for full data control
- **Hybrid** — mix cloud and self-hosted based on sensitivity requirements
</Accordion>
</AccordionGroup>

<Card title="Explore CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
Sign up and deploy your first crew to production.
</Card>
@@ -4,6 +4,110 @@ description: "CrewAI의 제품 업데이트, 개선 사항 및 버그 수정"
icon: "clock"
mode: "wide"
---
<Update label="2026년 4월 25일">
## v1.14.3

[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3)

## 변경 사항

### 기능
- 체크포인트 작업을 위한 생명주기 이벤트 추가
- e2b 지원 추가
- Azure 통합에서 API 키가 제공되지 않을 경우 DefaultAzureCredential로 대체
- Bedrock V4 지원 추가
- 향상된 기능을 위한 Daytona 샌드박스 도구 추가
- 독립형 에이전트에 체크포인트 및 포크 지원 추가

### 버그 수정
- execution_id가 state.id와 분리되도록 수정
- 체크포인트 재개 시 기록된 메서드 이벤트 재생 문제 해결
- initial_state 클래스 참조의 JSON 스키마 직렬화 수정
- 메타데이터 전용 에이전트 기술 보존
- 암묵적인 @CrewBase 이름을 크루 이벤트로 전파
- 중복 배치 초기화 시 실행 메타데이터 병합
- 체크포인트를 위한 Task 클래스 참조 필드의 직렬화 수정
- 가드레일 재시도 루프에서 BaseModel 결과 처리
- Gemini 스트리밍 도구 호출에서 thought_signature 보존
- 포크 재개 시 task_started 방출 및 체크포인트 TUI 재설계
- 체크포인트 가지치기 테스트에서 미래 날짜 사용하여 시간 의존적 실패 방지
- 드라이 런 순서 수정 및 devtools 릴리스에서 체크아웃된 오래된 브랜치 처리
- 보안 패치를 위해 lxml을 >=6.1.0으로 업그레이드
- 보안 패치를 위해 python-dotenv를 >=1.2.2로 업그레이드

### 문서
- v1.14.3에 대한 변경 로그 및 버전 업데이트
- 'AI와 함께 빌드하기' 페이지 추가 및 모든 언어에 대한 내비게이션 업데이트
- 모든 로케일에서 build-with-ai 페이지의 가격 FAQ 제거

### 성능
- MCP SDK 및 이벤트 유형 최적화하여 콜드 스타트를 약 29% 감소

### 리팩토링
- 중복 제거 및 상태 유형 힌트를 강화하기 위해 체크포인트 헬퍼 리팩토링

## 기여자

@MatthiasHowellYopp, @akaKuruma, @alex-clawd, @github-actions[bot], @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha, @renatonitta

</Update>

<Update label="2026년 4월 23일">
## v1.14.3a3

[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a3)

## 변경 사항

### 기능
- e2b 지원 추가
- API 키가 제공되지 않을 경우 DefaultAzureCredential로 대체 구현

### 버그 수정
- 보안 문제 GHSA-vfmq-68hx-4jfw를 해결하기 위해 lxml을 >=6.1.0으로 업그레이드

### 문서
- 모든 지역에서 build-with-ai 페이지의 가격 FAQ 제거

### 성능
- MCP SDK 및 이벤트 유형의 지연 로딩을 통해 콜드 스타트 시간을 약 29% 개선

## 기여자

@alex-clawd, @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha

</Update>

<Update label="2026년 4월 22일">
## v1.14.3a2

[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a2)

## 변경 사항

### 기능
- Bedrock V4 지원 추가
- 향상된 기능을 위한 Daytona 샌드박스 도구 추가
- 'AI와 함께 빌드하기' 페이지 추가 — 코딩 에이전트를 위한 AI 네이티브 문서
- 모든 언어(en, ko, pt-BR, ar)에 대한 시작하기 탐색 및 페이지 파일에 'AI와 함께 빌드하기' 추가

### 버그 수정
- 크루 이벤트에 대한 암묵적 @CrewBase 이름 전파 수정
- 실행 메타데이터 병합에서 중복 배치 초기화 문제 해결
- 체크포인트를 위한 Task 클래스 참조 필드 직렬화 수정
- 가드레일 재시도 루프에서 BaseModel 결과 처리
- 보안 준수를 위해 python-dotenv를 버전 >=1.2.2로 업데이트

### 문서
- v1.14.3a1에 대한 변경 로그 및 버전 업데이트
- 설명 업데이트 및 실제 번역 적용

## 기여자

@MatthiasHowellYopp, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @renatonitta

</Update>

<Update label="2026년 4월 21일">
## v1.14.3a1


214 docs/ko/guides/coding-tools/build-with-ai.mdx Normal file
@@ -0,0 +1,214 @@
---
title: "AI와 함께 빌드하기"
description: "CrewAI로 빌드·배포·확장하는 데 필요한 모든 것 — 스킬, 기계가 읽을 수 있는 문서, 배포, 엔터프라이즈 기능을 AI 코딩 에이전트용으로 정리했습니다."
icon: robot
mode: "wide"
---

# AI와 함께 빌드하기

CrewAI는 AI 네이티브입니다. 이 페이지는 Claude Code, Codex, Cursor, Gemini CLI 등 개발자가 crew와 flow를 배포하도록 돕는 코딩 에이전트가 CrewAI로 빌드할 때 필요한 내용을 한곳에 모았습니다.

### 지원 코딩 에이전트

<CardGroup cols={5}>
<Card title="Claude Code" icon="message-bot" color="#D97706" />
<Card title="Cursor" icon="arrow-pointer" color="#3B82F6" />
<Card title="Codex" icon="terminal" color="#10B981" />
<Card title="Windsurf" icon="wind" color="#06B6D4" />
<Card title="Gemini CLI" icon="sparkles" color="#8B5CF6" />
</CardGroup>

<Note>
이 페이지는 사람과 AI 어시스턴트 모두를 위해 작성되었습니다. 코딩 에이전트라면 CrewAI 맥락은 **Skills**부터, 전체 문서 접근은 **llms.txt**를 사용하세요.
</Note>

---

## 1. Skills — 에이전트에게 CrewAI 가르치기

**Skills**는 코딩 에이전트에게 Flow 스캐폴딩, Crew 구성, 도구 사용, 프레임워크 관례 등 CrewAI에 대한 깊은 지식을 담은 지침 묶음입니다.

<Tabs>
<Tab title="Claude Code (플러그인 마켓플레이스)">
<img src="https://cdn.simpleicons.org/anthropic/D97706" alt="Anthropic" width="28" style={{display: "inline", verticalAlign: "middle", marginRight: "8px"}} />
CrewAI 스킬은 **Claude Code 플러그인 마켓플레이스**에서 제공됩니다. AI 네이티브 기업들이 쓰는 것과 같은 배포 채널입니다.
```shell
/plugin marketplace add crewAIInc/skills
/plugin install crewai-skills@crewai-plugins
/reload-plugins
```

CrewAI와 관련된 질문을 하면 다음 네 가지 스킬이 자동으로 활성화됩니다.

| 스킬 | 실행 시점 |
|------|-------------|
| `getting-started` | 새 프로젝트 스캐폴딩, `LLM.call()` / `Agent` / `Crew` / `Flow` 선택, `crew.py` / `main.py` 연결 |
| `design-agent` | 에이전트 구성 — 역할, 목표, 배경 이야기, 도구, LLM, 메모리, 가드레일 |
| `design-task` | 태스크 설명, 의존성, 구조화된 출력(`output_pydantic`, `output_json`), 사람 검토 |
| `ask-docs` | 최신 API 정보를 위해 [CrewAI 문서 MCP 서버](https://docs.crewai.com/mcp) 조회 |
</Tab>
<Tab title="npx (모든 에이전트)">
Claude Code, Codex, Cursor, Gemini CLI 등 모든 코딩 에이전트에서 사용할 수 있습니다.
```shell
npx skills add crewaiinc/skills
```
[skills.sh 레지스트리](https://skills.sh/crewaiinc/skills)에서 가져옵니다.
</Tab>
</Tabs>

<Steps>
<Step title="공식 스킬 팩 설치">
위 방법 중 하나를 사용하세요 — Claude Code 플러그인 마켓플레이스 또는 `npx skills add`. 둘 다 공식 [crewAIInc/skills](https://github.com/crewAIInc/skills) 팩을 설치합니다.
</Step>
<Step title="에이전트가 즉시 CrewAI 전문성을 갖춤">
스킬 팩이 에이전트에게 알려 주는 내용:
- **Flow** — 상태 유지(stateful) 앱, 단계, crew 킥오프
- **Crew 및 에이전트** — YAML 우선 패턴, 역할, 태스크, 위임
- **도구 및 통합** — 검색, API, MCP 서버, 일반적인 CrewAI 도구
- **프로젝트 레이아웃** — CLI 스캐폴드와 저장소 관례
- **최신 패턴** — 현재 CrewAI 문서와 모범 사례 반영
</Step>
<Step title="빌드 시작">
매 세션마다 프레임워크를 다시 설명하지 않아도 에이전트가 CrewAI 프로젝트를 스캐폴딩하고 빌드할 수 있습니다.
</Step>
</Steps>

<CardGroup cols={2}>
<Card title="Skills 개념" icon="bolt" href="/ko/concepts/skills">
CrewAI 에이전트에서 스킬이 동작하는 방식 — 주입, 활성화, 패턴.
</Card>
<Card title="Skills 랜딩 페이지" icon="wand-magic-sparkles" href="/ko/skills">
crewAIInc/skills 팩 개요와 포함 내용.
</Card>
<Card title="AGENTS.md 및 코딩 도구" icon="terminal" href="/ko/guides/coding-tools/agents-md">
Claude Code, Codex, Cursor, Gemini CLI용 AGENTS.md 설정.
</Card>
<Card title="Skills 레지스트리 (skills.sh)" icon="globe" href="https://skills.sh/crewaiinc/skills">
공식 목록 — 스킬, 설치 통계, 감사 정보.
</Card>
</CardGroup>

---

## 2. llms.txt — 기계가 읽을 수 있는 문서

CrewAI는 AI 어시스턴트가 전체 문서에 기계가 읽을 수 있는 형태로 바로 접근할 수 있도록 `llms.txt` 파일을 제공합니다.

```
https://docs.crewai.com/llms.txt
```

<Tabs>
<Tab title="llms.txt란?">
[`llms.txt`](https://llmstxt.org/)는 문서를 대규모 언어 모델이 소비하기 쉽게 만드는 새로운 표준입니다. HTML을 스크래핑하는 대신, 필요한 내용이 담긴 하나의 구조화된 텍스트 파일을 가져올 수 있습니다.

CrewAI의 `llms.txt`는 **이미 제공 중**이며, 에이전트가 바로 사용할 수 있습니다.
</Tab>
<Tab title="사용 방법">
CrewAI 참고 문서가 필요할 때 코딩 에이전트에 URL을 알려 주세요.

```
Fetch https://docs.crewai.com/llms.txt for CrewAI documentation.
```

Claude Code, Cursor 등 많은 코딩 에이전트가 URL을 직접 가져올 수 있습니다. 파일에는 CrewAI 개념, API, 가이드를 아우르는 구조화된 문서가 포함되어 있습니다.
</Tab>
<Tab title="왜 중요한가">
- **스크래핑 불필요** — 한 번의 요청으로 깔끔한 구조화 콘텐츠
- **항상 최신** — docs.crewai.com에서 직접 제공
- **LLM에 최적화** — 브라우저가 아니라 컨텍스트 윈도우에 맞게 포맷
- **스킬과 상호 보완** — 스킬은 패턴을, llms.txt는 참조를 제공
</Tab>
</Tabs>

---

## 3. 엔터프라이즈에 배포

로컬 crew를 몇 분 안에 **CrewAI AMP**(Agent Management Platform) 프로덕션으로 가져가세요.

<Steps>
<Step title="로컬에서 빌드">
crew 또는 flow를 스캐폴딩하고 테스트합니다.
```bash
crewai create crew my_crew
cd my_crew
crewai run
```
</Step>
<Step title="배포 준비">
프로젝트 구조가 준비되었는지 확인합니다.
```bash
crewai deploy --prepare
```
구조와 요구 사항은 [준비 가이드](/ko/enterprise/guides/prepare-for-deployment)를 참고하세요.
</Step>
<Step title="AMP에 배포">
CrewAI AMP 플랫폼으로 푸시합니다.
```bash
crewai deploy
```
[GitHub 연동](/ko/enterprise/guides/deploy-to-amp) 또는 [Crew Studio](/ko/enterprise/guides/enable-crew-studio)로도 배포할 수 있습니다.
</Step>
<Step title="API로 접근">
배포된 crew는 REST API 엔드포인트를 받습니다. 모든 애플리케이션에 통합할 수 있습니다.
```bash
curl -X POST https://app.crewai.com/api/v1/crews/<crew-id>/kickoff \
  -H "Authorization: Bearer $CREWAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": {"topic": "AI agents"}}'
```
</Step>
</Steps>

<CardGroup cols={2}>
<Card title="AMP에 배포" icon="rocket" href="/ko/enterprise/guides/deploy-to-amp">
전체 배포 가이드 — CLI, GitHub, Crew Studio 방법.
</Card>
<Card title="AMP 소개" icon="globe" href="/ko/enterprise/introduction">
플랫폼 개요 — 프로덕션 crew에 AMP가 제공하는 것.
</Card>
</CardGroup>

---

## 4. 엔터프라이즈 기능

CrewAI AMP는 프로덕션 팀을 위해 만들어졌습니다. 배포 외에 제공되는 것은 다음과 같습니다.

<CardGroup cols={2}>
<Card title="관측 가능성" icon="chart-line">
모든 crew 실행에 대한 상세 실행 추적, 로그, 성능 지표. 에이전트 결정, 도구 호출, 태스크 완료를 실시간으로 모니터링합니다.
</Card>
<Card title="Crew Studio" icon="paintbrush">
시각적으로 crew를 만들고, 맞춤 설정하고, 배포하는 노코드/로코드 인터페이스 — 코드로 보내거나 바로 배포할 수 있습니다.
</Card>
<Card title="웹훅 스트리밍" icon="webhook">
crew 실행에서 실시간 이벤트를 시스템으로 스트리밍합니다. Slack, Zapier 등 웹훅 소비자와 연동할 수 있습니다.
</Card>
<Card title="팀 관리" icon="users">
SSO, RBAC, 조직 단위 제어. 팀 전체에서 crew 생성·배포·접근 권한을 관리합니다.
</Card>
<Card title="도구 저장소" icon="toolbox">
조직 전체에 맞춤 도구를 게시하고 공유합니다. 레지스트리에서 커뮤니티 도구를 설치합니다.
</Card>
<Card title="Factory(셀프 호스팅)" icon="server">
자체 인프라에서 CrewAI AMP를 실행합니다. 데이터 상주와 규정 준수 제어와 함께 플랫폼 전체 기능을 사용할 수 있습니다.
</Card>
</CardGroup>

<AccordionGroup>
<Accordion title="AMP는 누구를 위한 것인가요?">
AI 에이전트 워크플로를 프로토타입에서 프로덕션으로 옮겨야 하는 팀을 위한 제품입니다. 관측 가능성, 접근 제어, 확장 가능한 인프라를 제공합니다. 스타트업이든 대기업이든 운영 복잡도는 AMP가 맡고, 에이전트 구축에 집중할 수 있습니다.
</Accordion>
<Accordion title="배포 옵션은 무엇이 있나요?">
- **클라우드 (app.crewai.com)** — CrewAI가 관리, 프로덕션까지 가장 빠른 경로
- **Factory(셀프 호스팅)** — 데이터 통제를 위해 자체 인프라에서 실행
- **하이브리드** — 민감도에 따라 클라우드와 셀프 호스팅을 혼합
</Accordion>
</AccordionGroup>

<Card title="CrewAI AMP 살펴보기 →" icon="arrow-right" href="https://app.crewai.com">
가입하고 첫 crew를 프로덕션에 배포해 보세요.
</Card>

@@ -4,6 +4,110 @@ description: "Atualizações de produto, melhorias e correções do CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="25 abr 2026">
## v1.14.3

[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3)

## O que mudou

### Recursos
- Adicionar eventos de ciclo de vida para operações de checkpoint
- Adicionar suporte para e2b
- Reverter para DefaultAzureCredential quando nenhuma chave de API for fornecida na integração com o Azure
- Adicionar suporte ao Bedrock V4
- Adicionar ferramentas de sandbox Daytona para funcionalidade aprimorada
- Adicionar suporte a checkpoint e fork para agentes autônomos

### Correções de Bugs
- Corrigir execution_id para ser separado de state.id
- Resolver a reprodução de eventos de método gravados na retomada do checkpoint
- Corrigir a serialização de referências de classe initial_state como esquema JSON
- Preservar habilidades de agente somente de metadados
- Propagar nomes implícitos @CrewBase para eventos da equipe
- Mesclar metadados de execução na inicialização de lote duplicado
- Corrigir a serialização de campos de referência de classe Task para checkpointing
- Lidar com o resultado BaseModel no loop de retry do guardrail
- Preservar thought_signature em chamadas de ferramentas de streaming Gemini
- Emitir task_started na retomada do fork e redesenhar TUI de checkpoint
- Usar datas futuras em testes de poda de checkpoint para evitar falhas dependentes do tempo
- Corrigir a ordem de dry-run e lidar com branch obsoleta verificada na liberação do devtools
- Atualizar lxml para >=6.1.0 para patch de segurança
- Aumentar python-dotenv para >=1.2.2 para patch de segurança

### Documentação
- Atualizar changelog e versão para v1.14.3
- Adicionar página 'Construa com IA' e atualizar navegação para todos os idiomas
- Remover FAQ de preços da página build-with-ai em todos os locais

### Desempenho
- Otimizar MCP SDK e tipos de eventos para reduzir o tempo de inicialização a frio em ~29%

### Refatoração
- Refatorar auxiliares de checkpoint para eliminar duplicação e apertar dicas de tipo de estado

## Contribuidores

@MatthiasHowellYopp, @akaKuruma, @alex-clawd, @github-actions[bot], @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha, @renatonitta

</Update>

<Update label="23 abr 2026">
## v1.14.3a3

[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a3)

## O que mudou

### Recursos
- Adicionar suporte para e2b
- Implementar fallback para DefaultAzureCredential quando nenhuma chave de API for fornecida

### Correções de Bugs
- Atualizar lxml para >=6.1.0 para resolver problema de segurança GHSA-vfmq-68hx-4jfw

### Documentação
- Remover FAQ de preços da página build-with-ai em todos os locais

### Desempenho
- Melhorar o tempo de inicialização a frio em ~29% através do carregamento preguiçoso do SDK MCP e tipos de eventos

## Contribuidores

@alex-clawd, @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha

</Update>

<Update label="22 abr 2026">
## v1.14.3a2

[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a2)

## O que mudou

### Recursos
- Adicionar suporte para Bedrock V4
- Adicionar ferramentas de sandbox Daytona para funcionalidade aprimorada
- Adicionar página 'Construa com IA' — documentação nativa de IA para agentes de codificação
- Adicionar 'Construa com IA' à navegação Começar e arquivos de página para todos os idiomas (en, ko, pt-BR, ar)

### Correções de Bugs
- Corrigir a propagação de nomes implícitos @CrewBase para eventos da equipe
- Resolver problema com inicialização de lote duplicada na mesclagem de metadados de execução
- Corrigir a serialização de campos de referência de classe Task para checkpointing
- Lidar com o resultado BaseModel no loop de repetição de guardrail
- Atualizar python-dotenv para a versão >=1.2.2 para conformidade de segurança

### Documentação
- Atualizar changelog e versão para v1.14.3a1
- Atualizar descrições e aplicar traduções reais

## Contribuidores

@MatthiasHowellYopp, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @renatonitta

</Update>

<Update label="21 abr 2026">
## v1.14.3a1


214 docs/pt-BR/guides/coding-tools/build-with-ai.mdx Normal file
@@ -0,0 +1,214 @@
---
title: "Construa com IA"
description: "Tudo o que agentes de codificação com IA precisam para criar, implantar e escalar com CrewAI — skills, documentação legível por máquina, implantação e recursos enterprise."
icon: robot
mode: "wide"
---

# Construa com IA

O CrewAI é nativo de IA. Esta página reúne o que um agente de codificação com IA precisa para construir com CrewAI — seja Claude Code, Codex, Cursor, Gemini CLI ou qualquer outro assistente que ajude um desenvolvedor a entregar crews e flows.

### Agentes de codificação compatíveis

<CardGroup cols={5}>
<Card title="Claude Code" icon="message-bot" color="#D97706" />
<Card title="Cursor" icon="arrow-pointer" color="#3B82F6" />
<Card title="Codex" icon="terminal" color="#10B981" />
<Card title="Windsurf" icon="wind" color="#06B6D4" />
<Card title="Gemini CLI" icon="sparkles" color="#8B5CF6" />
</CardGroup>

<Note>
Esta página serve para humanos e para assistentes de IA. Se você é um agente de codificação, comece por **Skills** para obter contexto do CrewAI e depois use **llms.txt** para acesso completo à documentação.
</Note>

---

## 1. Skills — ensine CrewAI ao seu agente

**Skills** são pacotes de instruções que dão aos agentes de codificação conhecimento profundo do CrewAI — como estruturar Flows, configurar Crews, usar ferramentas e seguir convenções do framework.

<Tabs>
<Tab title="Claude Code (Plugin Marketplace)">
<img src="https://cdn.simpleicons.org/anthropic/D97706" alt="Anthropic" width="28" style={{display: "inline", verticalAlign: "middle", marginRight: "8px"}} />
As skills do CrewAI estão no **plugin marketplace do Claude Code** — o mesmo canal usado por empresas líderes em IA:
```shell
/plugin marketplace add crewAIInc/skills
/plugin install crewai-skills@crewai-plugins
/reload-plugins
```

Quatro skills são ativadas automaticamente quando você faz perguntas relevantes sobre CrewAI:

| Skill | Quando é usada |
|-------|----------------|
| `getting-started` | Novos projetos, escolha entre `LLM.call()` / `Agent` / `Crew` / `Flow`, arquivos `crew.py` / `main.py` |
| `design-agent` | Configurar agentes — papel, objetivo, história, ferramentas, LLMs, memória, guardrails |
| `design-task` | Descrever tarefas, dependências, saída estruturada (`output_pydantic`, `output_json`), revisão humana |
| `ask-docs` | Consultar o [servidor MCP da documentação CrewAI](https://docs.crewai.com/mcp) em tempo real para detalhes de API |
</Tab>
<Tab title="npx (qualquer agente)">
Funciona com Claude Code, Codex, Cursor, Gemini CLI ou qualquer agente de codificação:
```shell
npx skills add crewaiinc/skills
```
Obtido do [registro skills.sh](https://skills.sh/crewaiinc/skills).
</Tab>
</Tabs>

<Steps>
<Step title="Instale o pacote oficial de skills">
Use um dos métodos acima — o plugin marketplace do Claude Code ou `npx skills add`. Ambos instalam o pacote oficial [crewAIInc/skills](https://github.com/crewAIInc/skills).
</Step>
<Step title="Seu agente ganha expertise imediata em CrewAI">
O pacote ensina ao seu agente:
- **Flows** — apps com estado, passos e disparo de crews
- **Crews e agentes** — padrões YAML-first, papéis, tarefas, delegação
- **Ferramentas e integrações** — busca, APIs, servidores MCP e ferramentas comuns do CrewAI
- **Estrutura do projeto** — scaffolds da CLI e convenções de repositório
- **Padrões atualizados** — alinhado à documentação e às melhores práticas atuais do CrewAI
</Step>
<Step title="Comece a construir">
Seu agente pode estruturar e construir projetos CrewAI sem você precisar reexplicar o framework a cada sessão.
</Step>
</Steps>

<CardGroup cols={2}>
<Card title="Conceito de skills" icon="bolt" href="/pt-BR/concepts/skills">
Como skills funcionam em agentes CrewAI — injeção, ativação e padrões.
</Card>
<Card title="Página de skills" icon="wand-magic-sparkles" href="/pt-BR/skills">
Visão geral do pacote crewAIInc/skills e do que ele inclui.
</Card>
<Card title="AGENTS.md e ferramentas" icon="terminal" href="/pt-BR/guides/coding-tools/agents-md">
Configure o AGENTS.md para Claude Code, Codex, Cursor e Gemini CLI.
</Card>
<Card title="Registro skills.sh" icon="globe" href="https://skills.sh/crewaiinc/skills">
Listagem oficial — skills, estatísticas de instalação e auditorias.
</Card>
</CardGroup>

---

## 2. llms.txt — documentação legível por máquina

O CrewAI publica um arquivo `llms.txt` que dá aos assistentes de IA acesso direto à documentação completa em formato legível por máquinas.

```
https://docs.crewai.com/llms.txt
```

<Tabs>
<Tab title="O que é llms.txt?">
[`llms.txt`](https://llmstxt.org/) é um padrão emergente para tornar a documentação consumível por grandes modelos de linguagem. Em vez de fazer scraping de HTML, seu agente pode buscar um único arquivo de texto estruturado com o conteúdo necessário.

O `llms.txt` do CrewAI **já está no ar** — seu agente pode usar agora.
</Tab>
<Tab title="Como usar">
Indique ao agente de codificação a URL quando precisar da referência do CrewAI:

```
Fetch https://docs.crewai.com/llms.txt for CrewAI documentation.
```

Muitos agentes (Claude Code, Cursor etc.) conseguem buscar URLs diretamente. O arquivo contém documentação estruturada sobre conceitos, APIs e guias do CrewAI.
</Tab>
<Tab title="Por que importa">
- **Sem scraping** — conteúdo limpo e estruturado em uma requisição
- **Sempre atualizado** — servido diretamente de docs.crewai.com
- **Otimizado para LLMs** — formatado para janelas de contexto, não para navegadores
- **Complementa as skills** — skills ensinam padrões; llms.txt fornece referência
</Tab>
</Tabs>

---

## 3. Implantação enterprise

Do crew local à produção no **CrewAI AMP** (Agent Management Platform) em minutos.

<Steps>
<Step title="Construa localmente">
Estruture e teste seu crew ou flow:
```bash
crewai create crew my_crew
cd my_crew
crewai run
```
</Step>
<Step title="Prepare a implantação">
Garanta que a estrutura do projeto está pronta:
```bash
crewai deploy --prepare
```
Veja o [guia de preparação](/pt-BR/enterprise/guides/prepare-for-deployment) para detalhes de estrutura e requisitos.
</Step>
<Step title="Implante no AMP">
Envie para a plataforma CrewAI AMP:
```bash
crewai deploy
```
Também é possível implantar pela [integração com GitHub](/pt-BR/enterprise/guides/deploy-to-amp) ou pelo [Crew Studio](/pt-BR/enterprise/guides/enable-crew-studio).
</Step>
<Step title="Acesso via API">
O crew implantado recebe um endpoint REST. Integre em qualquer aplicação:
```bash
curl -X POST https://app.crewai.com/api/v1/crews/<crew-id>/kickoff \
  -H "Authorization: Bearer $CREWAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": {"topic": "AI agents"}}'
```
</Step>
</Steps>

<CardGroup cols={2}>
<Card title="Implantar no AMP" icon="rocket" href="/pt-BR/enterprise/guides/deploy-to-amp">
Guia completo de implantação — CLI, GitHub e Crew Studio.
</Card>
<Card title="Introdução ao AMP" icon="globe" href="/pt-BR/enterprise/introduction">
Visão da plataforma — o que o AMP oferece para crews em produção.
</Card>
</CardGroup>

---

## 4. Recursos enterprise

O CrewAI AMP foi feito para equipes em produção. Além da implantação, você obtém:

<CardGroup cols={2}>
<Card title="Observabilidade" icon="chart-line">
Traces de execução, logs e métricas de desempenho para cada execução de crew. Monitore decisões de agentes, chamadas de ferramentas e conclusão de tarefas em tempo real.
</Card>
<Card title="Crew Studio" icon="paintbrush">
Interface no-code/low-code para criar, personalizar e implantar crews visualmente — exporte para código ou implante direto.
</Card>
<Card title="Webhook streaming" icon="webhook">
Transmita eventos em tempo real das execuções para seus sistemas. Integre com Slack, Zapier ou qualquer consumidor de webhook.
</Card>
<Card title="Gestão de equipe" icon="users">
SSO, RBAC e controles em nível de organização. Gerencie quem pode criar, implantar e acessar crews.
</Card>
<Card title="Repositório de ferramentas" icon="toolbox">
Publique e compartilhe ferramentas customizadas na organização. Instale ferramentas da comunidade a partir do registro.
</Card>
<Card title="Factory (self-hosted)" icon="server">
Execute o CrewAI AMP na sua infraestrutura. Capacidades completas da plataforma com residência de dados e controles de conformidade.
</Card>
</CardGroup>

<AccordionGroup>
<Accordion title="Para quem é o AMP?">
Para equipes que precisam levar fluxos de agentes de IA do protótipo à produção — com observabilidade, controles de acesso e infraestrutura escalável. De startups a grandes empresas, o AMP cuida da complexidade operacional para você focar nos agentes.
</Accordion>
<Accordion title="Quais opções de implantação existem?">
- **Nuvem (app.crewai.com)** — gerenciada pela CrewAI, caminho mais rápido para produção
- **Factory (self-hosted)** — na sua infraestrutura para controle total dos dados
- **Híbrido** — combine nuvem e self-hosted conforme a sensibilidade dos dados
</Accordion>
</AccordionGroup>

<Card title="Conheça o CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
Cadastre-se e leve seu primeiro crew à produção.
</Card>

@@ -152,4 +152,4 @@ __all__ = [
    "wrap_file_source",
]

__version__ = "1.14.3a1"
__version__ = "1.14.3"

@@ -10,7 +10,7 @@ requires-python = ">=3.10, <3.14"
dependencies = [
    "pytube~=15.0.0",
    "requests>=2.33.0,<3",
    "crewai==1.14.3a1",
    "crewai==1.14.3",
    "tiktoken~=0.8.0",
    "beautifulsoup4~=4.13.4",
    "python-docx~=1.2.0",
@@ -112,7 +112,7 @@ github = [
]
rag = [
    "python-docx>=1.1.0",
    "lxml>=5.3.0,<5.4.0", # Pin to avoid etree import issues in 5.4.0
    "lxml>=6.1.0,<7", # 6.1.0+ required for GHSA-vfmq-68hx-4jfw (XXE in iterparse)
]
xml = [
    "unstructured[local-inference, all-docs]>=0.17.2"
@@ -139,6 +139,14 @@ contextual = [
    "contextual-client>=0.1.0",
    "nest-asyncio>=1.6.0",
]
daytona = [
    "daytona~=0.140.0",
]

e2b = [
    "e2b~=2.20.0",
    "e2b-code-interpreter~=2.6.0",
]


[tool.uv]

@@ -59,6 +59,11 @@ from crewai_tools.tools.dalle_tool.dalle_tool import DallETool
from crewai_tools.tools.databricks_query_tool.databricks_query_tool import (
    DatabricksQueryTool,
)
from crewai_tools.tools.daytona_sandbox_tool import (
    DaytonaExecTool,
    DaytonaFileTool,
    DaytonaPythonTool,
)
from crewai_tools.tools.directory_read_tool.directory_read_tool import (
    DirectoryReadTool,
)
@@ -66,6 +71,11 @@ from crewai_tools.tools.directory_search_tool.directory_search_tool import (
    DirectorySearchTool,
)
from crewai_tools.tools.docx_search_tool.docx_search_tool import DOCXSearchTool
from crewai_tools.tools.e2b_sandbox_tool import (
    E2BExecTool,
    E2BFileTool,
    E2BPythonTool,
)
from crewai_tools.tools.exa_tools.exa_search_tool import EXASearchTool
from crewai_tools.tools.file_read_tool.file_read_tool import FileReadTool
from crewai_tools.tools.file_writer_tool.file_writer_tool import FileWriterTool
@@ -232,8 +242,14 @@ __all__ = [
    "DOCXSearchTool",
    "DallETool",
    "DatabricksQueryTool",
    "DaytonaExecTool",
    "DaytonaFileTool",
    "DaytonaPythonTool",
    "DirectoryReadTool",
    "DirectorySearchTool",
    "E2BExecTool",
    "E2BFileTool",
    "E2BPythonTool",
    "EXASearchTool",
    "EnterpriseActionTool",
    "FileCompressorTool",
@@ -305,4 +321,4 @@ __all__ = [
    "ZapierActionTools",
]

__version__ = "1.14.3a1"
__version__ = "1.14.3"

@@ -48,6 +48,11 @@ from crewai_tools.tools.dalle_tool.dalle_tool import DallETool
from crewai_tools.tools.databricks_query_tool.databricks_query_tool import (
    DatabricksQueryTool,
)
from crewai_tools.tools.daytona_sandbox_tool import (
    DaytonaExecTool,
    DaytonaFileTool,
    DaytonaPythonTool,
)
from crewai_tools.tools.directory_read_tool.directory_read_tool import (
    DirectoryReadTool,
)
@@ -55,6 +60,11 @@ from crewai_tools.tools.directory_search_tool.directory_search_tool import (
    DirectorySearchTool,
)
from crewai_tools.tools.docx_search_tool.docx_search_tool import DOCXSearchTool
from crewai_tools.tools.e2b_sandbox_tool import (
    E2BExecTool,
    E2BFileTool,
    E2BPythonTool,
)
from crewai_tools.tools.exa_tools.exa_search_tool import EXASearchTool
from crewai_tools.tools.file_read_tool.file_read_tool import FileReadTool
from crewai_tools.tools.file_writer_tool.file_writer_tool import FileWriterTool
@@ -217,8 +227,14 @@ __all__ = [
    "DOCXSearchTool",
    "DallETool",
    "DatabricksQueryTool",
    "DaytonaExecTool",
    "DaytonaFileTool",
    "DaytonaPythonTool",
    "DirectoryReadTool",
    "DirectorySearchTool",
    "E2BExecTool",
    "E2BFileTool",
    "E2BPythonTool",
    "EXASearchTool",
    "FileCompressorTool",
    "FileReadTool",

@@ -0,0 +1,107 @@
# Daytona Sandbox Tools

Run shell commands, execute Python, and manage files inside a [Daytona](https://www.daytona.io/) sandbox. Daytona provides isolated, ephemeral compute environments suitable for agent-driven code execution.

Three tools are provided so you can pick what the agent actually needs:

- **`DaytonaExecTool`** — run a shell command (`sandbox.process.exec`).
- **`DaytonaPythonTool`** — run a Python script (`sandbox.process.code_run`).
- **`DaytonaFileTool`** — read / write / list / delete files (`sandbox.fs.*`).

## Installation

```shell
uv add "crewai-tools[daytona]"
# or
pip install "crewai-tools[daytona]"
```

Set the API key:

```shell
export DAYTONA_API_KEY="..."
```

`DAYTONA_API_URL` and `DAYTONA_TARGET` are also respected if set.

## Sandbox lifecycle

All three tools share the same lifecycle controls from `DaytonaBaseTool`:

| Mode | When the sandbox is created | When it is deleted |
| --- | --- | --- |
| **Ephemeral** (default, `persistent=False`) | On every `_run` call | At the end of that same call |
| **Persistent** (`persistent=True`) | Lazily on first use | At process exit (via `atexit`), or manually via `tool.close()` |
| **Attach** (`sandbox_id="…"`) | Never — the tool attaches to an existing sandbox | Never — the tool will not delete a sandbox it did not create |

Ephemeral mode is the safe default: nothing leaks if the agent forgets to clean up. Use persistent mode when you want filesystem state or installed packages to carry across steps — this is typical when pairing `DaytonaFileTool` with `DaytonaExecTool`.

## Examples
|
||||
|
||||
### One-shot Python execution (ephemeral)
|
||||
|
||||
```python
|
||||
from crewai_tools import DaytonaPythonTool
|
||||
|
||||
tool = DaytonaPythonTool()
|
||||
result = tool.run(code="print(sum(range(10)))")
|
||||
```
|
||||
|
||||
### Multi-step shell session (persistent)
|
||||
|
||||
```python
|
||||
from crewai_tools import DaytonaExecTool, DaytonaFileTool
|
||||
|
||||
exec_tool = DaytonaExecTool(persistent=True)
|
||||
file_tool = DaytonaFileTool(persistent=True)
|
||||
|
||||
# Each tool lazily creates and keeps its own persistent sandbox, so these two
# instances do NOT share one sandbox. If you need the *same* sandbox across
# two tools, create one tool, grab the sandbox id via
# `tool._persistent_sandbox.id`, and pass it to the other via `sandbox_id=...`.
```

### Attach to an existing sandbox

```python
from crewai_tools import DaytonaExecTool

tool = DaytonaExecTool(sandbox_id="my-long-lived-sandbox")
```

### Custom create params

Pass Daytona's `CreateSandboxFromSnapshotParams` kwargs via `create_params`:

```python
tool = DaytonaExecTool(
    persistent=True,
    create_params={
        "language": "python",
        "env_vars": {"MY_FLAG": "1"},
        "labels": {"owner": "crewai-agent"},
    },
)
```

## Tool arguments

### `DaytonaExecTool`
- `command: str` — shell command to run.
- `cwd: str | None` — working directory.
- `env: dict[str, str] | None` — extra env vars for this command.
- `timeout: int | None` — seconds.

### `DaytonaPythonTool`
- `code: str` — Python source to execute.
- `argv: list[str] | None` — argv forwarded via `CodeRunParams`.
- `env: dict[str, str] | None` — env vars forwarded via `CodeRunParams`.
- `timeout: int | None` — seconds.

### `DaytonaFileTool`
- `action: "read" | "write" | "append" | "list" | "delete" | "mkdir" | "info"`
- `path: str` — absolute path inside the sandbox.
- `content: str | None` — required for `append`; optional for `write` (omit to create an empty file).
- `binary: bool` — if `True`, `content` is base64 on write / returned as base64 on read.
- `recursive: bool` — for `delete`, removes directories recursively.
- `mode: str` — for `mkdir`, octal permission string (default `"0755"`).
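
The write-then-append chunking pattern recommended by `DaytonaFileTool` for large files can be sketched as a small helper. This is a hedged illustration, not part of the library: `chunk_text` and `upload_in_chunks` are hypothetical names, and the tool is passed in duck-typed so the sketch does not depend on `crewai-tools` being installed.

```python
from typing import Any

CHUNK_SIZE = 4096  # ~4KB per append call, per the guidance above


def chunk_text(body: str, size: int = CHUNK_SIZE) -> list[str]:
    """Split a string into consecutive pieces of at most `size` characters."""
    return [body[i : i + size] for i in range(0, len(body), size)]


def upload_in_chunks(file_tool: Any, path: str, body: str) -> None:
    """Create the file empty, then stream the body via 'append' calls.

    `file_tool` is assumed to be a DaytonaFileTool configured with
    persistent=True or a fixed sandbox_id, so every chunk lands in the
    same sandbox.
    """
    file_tool.run(action="write", path=path, content="")
    for piece in chunk_text(body):
        file_tool.run(action="append", path=path, content=piece)
```

Note the lifecycle dependency: with the default ephemeral mode each `run` call would get a fresh sandbox and the chunks would never accumulate, which is why persistent or attach mode matters here.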
@@ -0,0 +1,13 @@
from crewai_tools.tools.daytona_sandbox_tool.daytona_base_tool import DaytonaBaseTool
from crewai_tools.tools.daytona_sandbox_tool.daytona_exec_tool import DaytonaExecTool
from crewai_tools.tools.daytona_sandbox_tool.daytona_file_tool import DaytonaFileTool
from crewai_tools.tools.daytona_sandbox_tool.daytona_python_tool import (
    DaytonaPythonTool,
)

__all__ = [
    "DaytonaBaseTool",
    "DaytonaExecTool",
    "DaytonaFileTool",
    "DaytonaPythonTool",
]
@@ -0,0 +1,198 @@
from __future__ import annotations

import atexit
import logging
import os
import threading
from typing import Any, ClassVar

from crewai.tools import BaseTool, EnvVar
from pydantic import ConfigDict, Field, PrivateAttr


logger = logging.getLogger(__name__)


class DaytonaBaseTool(BaseTool):
    """Shared base for tools that act on a Daytona sandbox.

    Lifecycle modes:
    - persistent=False (default): create a fresh sandbox per `_run` call and
      delete it when the call returns. Safer and stateless — nothing leaks if
      the agent forgets cleanup.
    - persistent=True: lazily create a single sandbox on first use, cache it
      on the instance, and register an atexit hook to delete it at process
      exit. Cheaper across many calls and lets files/state carry over.
    - sandbox_id=<existing>: attach to a sandbox the caller already owns.
      Never deleted by the tool.
    """

    model_config = ConfigDict(arbitrary_types_allowed=True)

    package_dependencies: list[str] = Field(default_factory=lambda: ["daytona"])

    api_key: str | None = Field(
        default_factory=lambda: os.getenv("DAYTONA_API_KEY"),
        description="Daytona API key. Falls back to DAYTONA_API_KEY env var.",
        json_schema_extra={"required": False},
    )
    api_url: str | None = Field(
        default_factory=lambda: os.getenv("DAYTONA_API_URL"),
        description="Daytona API URL override. Falls back to DAYTONA_API_URL env var.",
        json_schema_extra={"required": False},
    )
    target: str | None = Field(
        default_factory=lambda: os.getenv("DAYTONA_TARGET"),
        description="Daytona target region. Falls back to DAYTONA_TARGET env var.",
        json_schema_extra={"required": False},
    )

    persistent: bool = Field(
        default=False,
        description=(
            "If True, reuse one sandbox across all calls to this tool instance "
            "and delete it at process exit. Default False creates and deletes a "
            "fresh sandbox per call."
        ),
    )
    sandbox_id: str | None = Field(
        default=None,
        description=(
            "Attach to an existing sandbox by id or name instead of creating a "
            "new one. The tool will never delete a sandbox it did not create."
        ),
    )
    create_params: dict[str, Any] | None = Field(
        default=None,
        description=(
            "Optional kwargs forwarded to CreateSandboxFromSnapshotParams when "
            "creating a sandbox (e.g. language, snapshot, env_vars, labels)."
        ),
    )
    sandbox_timeout: float = Field(
        default=60.0,
        description="Timeout in seconds for sandbox create/delete operations.",
    )

    env_vars: list[EnvVar] = Field(
        default_factory=lambda: [
            EnvVar(
                name="DAYTONA_API_KEY",
                description="API key for Daytona sandbox service",
                required=False,
            ),
            EnvVar(
                name="DAYTONA_API_URL",
                description="Daytona API base URL (optional)",
                required=False,
            ),
            EnvVar(
                name="DAYTONA_TARGET",
                description="Daytona target region (optional)",
                required=False,
            ),
        ]
    )

    _client: Any | None = PrivateAttr(default=None)
    _persistent_sandbox: Any | None = PrivateAttr(default=None)
    _lock: threading.Lock = PrivateAttr(default_factory=threading.Lock)
    _cleanup_registered: bool = PrivateAttr(default=False)

    _sdk_cache: ClassVar[dict[str, Any]] = {}

    @classmethod
    def _import_sdk(cls) -> dict[str, Any]:
        if cls._sdk_cache:
            return cls._sdk_cache
        try:
            from daytona import (
                CreateSandboxFromSnapshotParams,
                Daytona,
                DaytonaConfig,
            )
        except ImportError as exc:
            raise ImportError(
                "The 'daytona' package is required for Daytona sandbox tools. "
                "Install it with: uv add daytona (or) pip install daytona"
            ) from exc
        cls._sdk_cache = {
            "Daytona": Daytona,
            "DaytonaConfig": DaytonaConfig,
            "CreateSandboxFromSnapshotParams": CreateSandboxFromSnapshotParams,
        }
        return cls._sdk_cache

    def _get_client(self) -> Any:
        if self._client is not None:
            return self._client
        sdk = self._import_sdk()
        config_kwargs: dict[str, Any] = {}
        if self.api_key:
            config_kwargs["api_key"] = self.api_key
        if self.api_url:
            config_kwargs["api_url"] = self.api_url
        if self.target:
            config_kwargs["target"] = self.target
        config = sdk["DaytonaConfig"](**config_kwargs) if config_kwargs else None
        self._client = sdk["Daytona"](config) if config else sdk["Daytona"]()
        return self._client

    def _build_create_params(self) -> Any | None:
        if not self.create_params:
            return None
        sdk = self._import_sdk()
        return sdk["CreateSandboxFromSnapshotParams"](**self.create_params)

    def _acquire_sandbox(self) -> tuple[Any, bool]:
        """Return (sandbox, should_delete_after_use)."""
        client = self._get_client()

        if self.sandbox_id:
            return client.get(self.sandbox_id), False

        if self.persistent:
            with self._lock:
                if self._persistent_sandbox is None:
                    self._persistent_sandbox = client.create(
                        self._build_create_params(),
                        timeout=self.sandbox_timeout,
                    )
                if not self._cleanup_registered:
                    atexit.register(self.close)
                    self._cleanup_registered = True
                return self._persistent_sandbox, False

        sandbox = client.create(
            self._build_create_params(),
            timeout=self.sandbox_timeout,
        )
        return sandbox, True

    def _release_sandbox(self, sandbox: Any, should_delete: bool) -> None:
        if not should_delete:
            return
        try:
            sandbox.delete(timeout=self.sandbox_timeout)
        except Exception:
            logger.debug(
                "Best-effort sandbox cleanup failed after ephemeral use; "
                "the sandbox may need manual deletion.",
                exc_info=True,
            )

    def close(self) -> None:
        """Delete the cached persistent sandbox if one exists."""
        with self._lock:
            sandbox = self._persistent_sandbox
            self._persistent_sandbox = None
        if sandbox is None:
            return
        try:
            sandbox.delete(timeout=self.sandbox_timeout)
        except Exception:
            logger.debug(
                "Best-effort persistent sandbox cleanup failed at close(); "
                "the sandbox may need manual deletion.",
                exc_info=True,
            )
@@ -0,0 +1,59 @@
from __future__ import annotations

from builtins import type as type_
from typing import Any

from pydantic import BaseModel, Field

from crewai_tools.tools.daytona_sandbox_tool.daytona_base_tool import DaytonaBaseTool


class DaytonaExecToolSchema(BaseModel):
    command: str = Field(..., description="Shell command to execute in the sandbox.")
    cwd: str | None = Field(
        default=None,
        description="Working directory to run the command in. Defaults to the sandbox work dir.",
    )
    env: dict[str, str] | None = Field(
        default=None,
        description="Optional environment variables to set for this command.",
    )
    timeout: int | None = Field(
        default=None,
        description="Maximum seconds to wait for the command to finish.",
    )


class DaytonaExecTool(DaytonaBaseTool):
    """Run a shell command inside a Daytona sandbox."""

    name: str = "Daytona Sandbox Exec"
    description: str = (
        "Execute a shell command inside a Daytona sandbox and return the exit "
        "code and combined output. Use this to run builds, package installs, "
        "git operations, or any one-off shell command."
    )
    args_schema: type_[BaseModel] = DaytonaExecToolSchema

    def _run(
        self,
        command: str,
        cwd: str | None = None,
        env: dict[str, str] | None = None,
        timeout: int | None = None,
    ) -> Any:
        sandbox, should_delete = self._acquire_sandbox()
        try:
            response = sandbox.process.exec(
                command,
                cwd=cwd,
                env=env,
                timeout=timeout,
            )
            return {
                "exit_code": getattr(response, "exit_code", None),
                "result": getattr(response, "result", None),
                "artifacts": getattr(response, "artifacts", None),
            }
        finally:
            self._release_sandbox(sandbox, should_delete)
@@ -0,0 +1,205 @@
from __future__ import annotations

import base64
import logging
import posixpath
from builtins import type as type_
from typing import Any, Literal

from pydantic import BaseModel, Field, model_validator

from crewai_tools.tools.daytona_sandbox_tool.daytona_base_tool import DaytonaBaseTool


logger = logging.getLogger(__name__)


FileAction = Literal["read", "write", "append", "list", "delete", "mkdir", "info"]


class DaytonaFileToolSchema(BaseModel):
    action: FileAction = Field(
        ...,
        description=(
            "The filesystem action to perform: 'read' (returns file contents), "
            "'write' (create or replace a file with content), 'append' (append "
            "content to an existing file — use this for writing large files in "
            "chunks to avoid hitting tool-call size limits), 'list' (lists a "
            "directory), 'delete' (removes a file/dir), 'mkdir' (creates a "
            "directory), 'info' (returns file metadata)."
        ),
    )
    path: str = Field(..., description="Absolute path inside the sandbox.")
    content: str | None = Field(
        default=None,
        description=(
            "Content to write or append. If omitted for 'write', an empty file "
            "is created. For files larger than a few KB, prefer one 'write' "
            "with empty content followed by multiple 'append' calls of ~4KB "
            "each to stay within tool-call payload limits."
        ),
    )
    binary: bool = Field(
        default=False,
        description=(
            "For 'write': treat content as base64 and upload raw bytes. "
            "For 'read': return contents as base64 instead of decoded utf-8."
        ),
    )
    recursive: bool = Field(
        default=False,
        description="For action='delete': remove directories recursively.",
    )
    mode: str = Field(
        default="0755",
        description="For action='mkdir': octal permission string (default 0755).",
    )

    @model_validator(mode="after")
    def _validate_action_args(self) -> DaytonaFileToolSchema:
        if self.action == "append" and self.content is None:
            raise ValueError(
                "action='append' requires 'content'. Pass the chunk to append "
                "in the 'content' field."
            )
        return self


class DaytonaFileTool(DaytonaBaseTool):
    """Read, write, and manage files inside a Daytona sandbox.

    Notes:
    - Most useful with `persistent=True` or an explicit `sandbox_id`. With the
      default ephemeral mode, files disappear when this tool call finishes.
    """

    name: str = "Daytona Sandbox Files"
    description: str = (
        "Perform filesystem operations inside a Daytona sandbox: read a file, "
        "write content to a path, append content to an existing file, list a "
        "directory, delete a path, make a directory, or fetch file metadata. "
        "For files larger than a few KB, create the file with action='write' "
        "and empty content, then send the body via multiple 'append' calls of "
        "~4KB each to stay within tool-call payload limits."
    )
    args_schema: type_[BaseModel] = DaytonaFileToolSchema

    def _run(
        self,
        action: FileAction,
        path: str,
        content: str | None = None,
        binary: bool = False,
        recursive: bool = False,
        mode: str = "0755",
    ) -> Any:
        sandbox, should_delete = self._acquire_sandbox()
        try:
            if action == "read":
                return self._read(sandbox, path, binary=binary)
            if action == "write":
                return self._write(sandbox, path, content or "", binary=binary)
            if action == "append":
                return self._append(sandbox, path, content or "", binary=binary)
            if action == "list":
                return self._list(sandbox, path)
            if action == "delete":
                sandbox.fs.delete_file(path, recursive=recursive)
                return {"status": "deleted", "path": path}
            if action == "mkdir":
                sandbox.fs.create_folder(path, mode)
                return {"status": "created", "path": path, "mode": mode}
            if action == "info":
                return self._info(sandbox, path)
            raise ValueError(f"Unknown action: {action}")
        finally:
            self._release_sandbox(sandbox, should_delete)

    def _read(self, sandbox: Any, path: str, *, binary: bool) -> dict[str, Any]:
        data: bytes = sandbox.fs.download_file(path)
        if binary:
            return {
                "path": path,
                "encoding": "base64",
                "content": base64.b64encode(data).decode("ascii"),
            }
        try:
            return {"path": path, "encoding": "utf-8", "content": data.decode("utf-8")}
        except UnicodeDecodeError:
            return {
                "path": path,
                "encoding": "base64",
                "content": base64.b64encode(data).decode("ascii"),
                "note": "File was not valid utf-8; returned as base64.",
            }

    def _write(
        self, sandbox: Any, path: str, content: str, *, binary: bool
    ) -> dict[str, Any]:
        payload = base64.b64decode(content) if binary else content.encode("utf-8")
        self._ensure_parent_dir(sandbox, path)
        sandbox.fs.upload_file(payload, path)
        return {"status": "written", "path": path, "bytes": len(payload)}

    def _append(
        self, sandbox: Any, path: str, content: str, *, binary: bool
    ) -> dict[str, Any]:
        chunk = base64.b64decode(content) if binary else content.encode("utf-8")
        self._ensure_parent_dir(sandbox, path)
        try:
            existing: bytes = sandbox.fs.download_file(path)
        except Exception:
            existing = b""
        payload = existing + chunk
        sandbox.fs.upload_file(payload, path)
        return {
            "status": "appended",
            "path": path,
            "appended_bytes": len(chunk),
            "total_bytes": len(payload),
        }

    @staticmethod
    def _ensure_parent_dir(sandbox: Any, path: str) -> None:
        """Make sure the parent directory of `path` exists.

        Daytona's upload returns 400 if the parent directory is missing. We
        best-effort mkdir the parent; any error (e.g. already exists) is
        swallowed because `create_folder` is not idempotent on the server.
        """
        parent = posixpath.dirname(path)
        if not parent or parent in ("/", "."):
            return
        try:
            sandbox.fs.create_folder(parent, "0755")
        except Exception:
            logger.debug(
                "Best-effort parent-directory create failed for %s; "
                "assuming it already exists and proceeding with the write.",
                parent,
                exc_info=True,
            )

    def _list(self, sandbox: Any, path: str) -> dict[str, Any]:
        entries = sandbox.fs.list_files(path)
        return {
            "path": path,
            "entries": [self._file_info_to_dict(entry) for entry in entries],
        }

    def _info(self, sandbox: Any, path: str) -> dict[str, Any]:
        return self._file_info_to_dict(sandbox.fs.get_file_info(path))

    @staticmethod
    def _file_info_to_dict(info: Any) -> dict[str, Any]:
        fields = (
            "name",
            "size",
            "mode",
            "permissions",
            "is_dir",
            "mod_time",
            "owner",
            "group",
        )
        return {field: getattr(info, field, None) for field in fields}
@@ -0,0 +1,82 @@
from __future__ import annotations

from builtins import type as type_
from typing import Any

from pydantic import BaseModel, Field

from crewai_tools.tools.daytona_sandbox_tool.daytona_base_tool import DaytonaBaseTool


class DaytonaPythonToolSchema(BaseModel):
    code: str = Field(
        ...,
        description="Python source to execute inside the sandbox.",
    )
    argv: list[str] | None = Field(
        default=None,
        description="Optional argv passed to the script (forwarded as params.argv).",
    )
    env: dict[str, str] | None = Field(
        default=None,
        description="Optional environment variables for the run (forwarded as params.env).",
    )
    timeout: int | None = Field(
        default=None,
        description="Maximum seconds to wait for the code to finish.",
    )


class DaytonaPythonTool(DaytonaBaseTool):
    """Run Python source inside a Daytona sandbox."""

    name: str = "Daytona Sandbox Python"
    description: str = (
        "Execute a block of Python code inside a Daytona sandbox and return the "
        "exit code, captured stdout, and any produced artifacts. Use this for "
        "data processing, quick scripts, or analysis that should run in an "
        "isolated environment."
    )
    args_schema: type_[BaseModel] = DaytonaPythonToolSchema

    def _run(
        self,
        code: str,
        argv: list[str] | None = None,
        env: dict[str, str] | None = None,
        timeout: int | None = None,
    ) -> Any:
        sandbox, should_delete = self._acquire_sandbox()
        try:
            params = self._build_code_run_params(argv=argv, env=env)
            response = sandbox.process.code_run(code, params=params, timeout=timeout)
            return {
                "exit_code": getattr(response, "exit_code", None),
                "result": getattr(response, "result", None),
                "artifacts": getattr(response, "artifacts", None),
            }
        finally:
            self._release_sandbox(sandbox, should_delete)

    def _build_code_run_params(
        self,
        argv: list[str] | None,
        env: dict[str, str] | None,
    ) -> Any | None:
        if argv is None and env is None:
            return None
        try:
            from daytona import CodeRunParams
        except ImportError as exc:
            raise ImportError(
                "Could not import daytona.CodeRunParams while building "
                "argv/env for sandbox.process.code_run. This usually means the "
                "installed 'daytona' SDK is too old or incompatible. Upgrade "
                "with: pip install -U 'crewai-tools[daytona]'"
            ) from exc
        kwargs: dict[str, Any] = {}
        if argv is not None:
            kwargs["argv"] = argv
        if env is not None:
            kwargs["env"] = env
        return CodeRunParams(**kwargs)
@@ -0,0 +1,120 @@
# E2B Sandbox Tools

Run shell commands, execute Python, and manage files inside an [E2B](https://e2b.dev/) sandbox. E2B provides isolated, ephemeral VMs suitable for agent-driven code execution, with a Jupyter-style code interpreter for rich Python results.

Three tools are provided so you can pick what the agent actually needs:

- **`E2BExecTool`** — run a shell command (`sandbox.commands.run`).
- **`E2BPythonTool`** — run a Python cell in the E2B code interpreter (`sandbox.run_code`), returning stdout/stderr and rich results (charts, dataframes).
- **`E2BFileTool`** — read / write / list / delete files (`sandbox.files.*`).

## Installation

```shell
uv add "crewai-tools[e2b]"
# or
pip install "crewai-tools[e2b]"
```

Set the API key:

```shell
export E2B_API_KEY="..."
```

`E2B_DOMAIN` is also respected if set (for self-hosted or non-default deployments).

## Sandbox lifecycle

All three tools share the same lifecycle controls from `E2BBaseTool`:

| Mode | When the sandbox is created | When it is killed |
| --- | --- | --- |
| **Ephemeral** (default, `persistent=False`) | On every `_run` call | At the end of that same call |
| **Persistent** (`persistent=True`) | Lazily on first use | At process exit (via `atexit`), or manually via `tool.close()` |
| **Attach** (`sandbox_id="…"`) | Never — the tool attaches to an existing sandbox | Never — the tool will not kill a sandbox it did not create |

Ephemeral mode is the safe default: nothing leaks if the agent forgets to clean up. Use persistent mode when you want filesystem state or installed packages to carry across steps — this is typical when pairing `E2BFileTool` with `E2BExecTool`.

E2B sandboxes also auto-expire after an idle timeout. Tune it via `sandbox_timeout` (seconds, default `300`).

## Examples

### One-shot Python execution (ephemeral)

```python
from crewai_tools import E2BPythonTool

tool = E2BPythonTool()
result = tool.run(code="print(sum(range(10)))")
```

### Multi-step shell session (persistent)

```python
from crewai_tools import E2BExecTool, E2BFileTool

exec_tool = E2BExecTool(persistent=True)
file_tool = E2BFileTool(persistent=True)

# Each tool keeps its own persistent sandbox. If you need the *same* sandbox
# across two tools, create one tool, grab the sandbox id via
# `tool._persistent_sandbox.sandbox_id`, and pass it to the other via
# `sandbox_id=...`.
```

### Attach to an existing sandbox

```python
from crewai_tools import E2BExecTool

tool = E2BExecTool(sandbox_id="sbx_...")
```

### Custom create params

```python
tool = E2BExecTool(
    persistent=True,
    template="my-custom-template",
    sandbox_timeout=600,
    envs={"MY_FLAG": "1"},
    metadata={"owner": "crewai-agent"},
)
```

## Tool arguments

### `E2BExecTool`
- `command: str` — shell command to run.
- `cwd: str | None` — working directory.
- `envs: dict[str, str] | None` — extra env vars for this command.
- `timeout: float | None` — seconds.

### `E2BPythonTool`
- `code: str` — source to execute.
- `language: str | None` — override kernel language (default: Python).
- `envs: dict[str, str] | None` — env vars for the run.
- `timeout: float | None` — seconds.

### `E2BFileTool`
- `action: "read" | "write" | "append" | "list" | "delete" | "mkdir" | "info" | "exists"`
- `path: str` — absolute path inside the sandbox.
- `content: str | None` — required for `append`; optional for `write`.
- `binary: bool` — if `True`, `content` is base64 on write / returned as base64 on read.
- `depth: int` — for `list`, how many levels to recurse (default 1).

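Since the `binary` flag moves raw bytes as base64 text in both directions, a small pair of helpers makes the round trip explicit. These helper names are illustrative, not part of the library; they rely only on the base64 contract described in the `binary` bullet above.

```python
import base64


def encode_for_binary_write(raw: bytes) -> str:
    """Produce the `content` string for action='write' with binary=True."""
    return base64.b64encode(raw).decode("ascii")


def decode_binary_read(content: str) -> bytes:
    """Decode the base64 `content` returned by action='read' with binary=True."""
    return base64.b64decode(content)
```

A binary write followed by a binary read should round-trip any byte sequence unchanged.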
## Security considerations

These tools hand the LLM arbitrary shell, Python, and filesystem access inside a remote VM. The threat model to keep in mind:

- **Prompt-injection is a code-execution vector.** If the agent ingests untrusted content (web pages, scraped documents, user-supplied files, emails, search results), a malicious instruction hidden in that content can coerce the agent into issuing commands to `E2BExecTool` / `E2BPythonTool`. Treat any pipeline that feeds untrusted text into an agent that also has these tools as equivalent to remote code execution — the LLM is the attacker's shell.
- **Ephemeral mode (the default) is the main blast-radius control.** A fresh sandbox is created per call and killed at the end, so injected commands cannot persist state, exfiltrate long-lived secrets, or build up tooling across turns. Leave `persistent=False` unless you have a concrete reason to change it.
- **Avoid this specific combination:**
  - untrusted content in the agent's context, **plus**
  - `persistent=True` or an explicit long-lived `sandbox_id`, **plus**
  - a large `sandbox_timeout` or credentials/secrets seeded into the sandbox via `envs`.

  That stack lets a single injection pivot into a long-running, credentialed shell that survives across turns. If you must run persistently, also keep `sandbox_timeout` short, scope `envs` to the minimum the task needs, and don't feed the same agent untrusted input.
- **Don't mount production credentials.** Anything you put into `envs`, `metadata`, or files written to the sandbox is reachable from the LLM. Use per-task scoped keys, not your personal API tokens.
- **E2B's VM isolation is the final backstop**, not a license to relax the above — isolation prevents escape to the host, but everything the sandbox can reach (the public internet, any service whose token you dropped in) is still fair game for an injected command.
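
One way to encode the guidance above in code is a small factory that pins the low-risk settings. This is a sketch under assumptions: the field names (`persistent`, `sandbox_timeout`, `envs`) match the ones documented in this README, `make_conservative_tool` is a hypothetical helper, and the tool class is passed in so the snippet stays import-free.

```python
from typing import Any


def make_conservative_tool(tool_cls: Any, **task_envs: str) -> Any:
    """Instantiate a sandbox tool with low-blast-radius settings.

    `tool_cls` is expected to be E2BExecTool or E2BPythonTool.
    """
    return tool_cls(
        persistent=False,  # fresh sandbox per call, killed afterwards
        sandbox_timeout=120,  # short idle backstop even if a sandbox lingers
        envs=task_envs or None,  # only task-scoped values, never prod secrets
    )
```

The point of the factory is that agents wired through it cannot accidentally end up with the persistent-plus-credentialed combination warned about above.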
@@ -0,0 +1,12 @@
from crewai_tools.tools.e2b_sandbox_tool.e2b_base_tool import E2BBaseTool
from crewai_tools.tools.e2b_sandbox_tool.e2b_exec_tool import E2BExecTool
from crewai_tools.tools.e2b_sandbox_tool.e2b_file_tool import E2BFileTool
from crewai_tools.tools.e2b_sandbox_tool.e2b_python_tool import E2BPythonTool


__all__ = [
    "E2BBaseTool",
    "E2BExecTool",
    "E2BFileTool",
    "E2BPythonTool",
]
@@ -0,0 +1,197 @@
from __future__ import annotations

import atexit
import logging
import os
import threading
from typing import Any, ClassVar

from crewai.tools import BaseTool, EnvVar
from pydantic import ConfigDict, Field, PrivateAttr, SecretStr


logger = logging.getLogger(__name__)


class E2BBaseTool(BaseTool):
    """Shared base for tools that act on an E2B sandbox.

    Lifecycle modes:
    - persistent=False (default): create a fresh sandbox per `_run` call and
      kill it when the call returns. Safer and stateless — nothing leaks if
      the agent forgets cleanup.
    - persistent=True: lazily create a single sandbox on first use, cache it
      on the instance, and register an atexit hook to kill it at process
      exit. Cheaper across many calls and lets files/state carry over.
    - sandbox_id=<existing>: attach to a sandbox the caller already owns.
      Never killed by the tool.
    """

    model_config = ConfigDict(arbitrary_types_allowed=True)

    package_dependencies: list[str] = Field(default_factory=lambda: ["e2b"])

    api_key: SecretStr | None = Field(
        default_factory=lambda: (
            SecretStr(val) if (val := os.getenv("E2B_API_KEY")) else None
        ),
        description="E2B API key. Falls back to E2B_API_KEY env var.",
        json_schema_extra={"required": False},
        repr=False,
    )
    domain: str | None = Field(
        default_factory=lambda: os.getenv("E2B_DOMAIN"),
        description="E2B API domain override. Falls back to E2B_DOMAIN env var.",
        json_schema_extra={"required": False},
    )

    template: str | None = Field(
        default=None,
        description=(
            "Optional template/snapshot name or id to create the sandbox from. "
            "Defaults to E2B's base template when omitted."
        ),
    )
    persistent: bool = Field(
        default=False,
        description=(
            "If True, reuse one sandbox across all calls to this tool instance "
            "and kill it at process exit. Default False creates and kills a "
            "fresh sandbox per call."
        ),
    )
    sandbox_id: str | None = Field(
        default=None,
        description=(
            "Attach to an existing sandbox by id instead of creating a new "
            "one. The tool will never kill a sandbox it did not create."
        ),
    )
    sandbox_timeout: int = Field(
        default=300,
        description=(
            "Idle timeout in seconds after which E2B auto-kills the sandbox. "
            "Applied at create time and when attaching via sandbox_id."
        ),
    )
    envs: dict[str, str] | None = Field(
        default=None,
        description="Environment variables to set inside the sandbox at create time.",
    )
    metadata: dict[str, str] | None = Field(
        default=None,
        description="Metadata key-value pairs to attach to the sandbox at create time.",
    )

    env_vars: list[EnvVar] = Field(
        default_factory=lambda: [
            EnvVar(
                name="E2B_API_KEY",
                description="API key for E2B sandbox service",
                required=False,
            ),
            EnvVar(
                name="E2B_DOMAIN",
                description="E2B API domain (optional)",
                required=False,
            ),
        ]
    )

    _persistent_sandbox: Any | None = PrivateAttr(default=None)
    _lock: threading.Lock = PrivateAttr(default_factory=threading.Lock)
    _cleanup_registered: bool = PrivateAttr(default=False)

    _sdk_cache: ClassVar[dict[str, Any]] = {}

    @classmethod
    def _import_sandbox_class(cls) -> Any:
        """Return the Sandbox class used by this tool.

        Subclasses override this to swap in a different SDK (e.g. the code
        interpreter sandbox). The default uses plain `e2b.Sandbox`.
        """
        cached = cls._sdk_cache.get("e2b.Sandbox")
        if cached is not None:
            return cached
        try:
            from e2b import Sandbox  # type: ignore[import-untyped]
        except ImportError as exc:
            raise ImportError(
                "The 'e2b' package is required for E2B sandbox tools. "
                "Install it with: uv add e2b (or) pip install e2b"
            ) from exc
        cls._sdk_cache["e2b.Sandbox"] = Sandbox
        return Sandbox

    def _connect_kwargs(self) -> dict[str, Any]:
        kwargs: dict[str, Any] = {}
        if self.api_key is not None:
kwargs["api_key"] = self.api_key.get_secret_value()
|
||||
if self.domain:
|
||||
kwargs["domain"] = self.domain
|
||||
if self.sandbox_timeout is not None:
|
||||
kwargs["timeout"] = self.sandbox_timeout
|
||||
return kwargs
|
||||
|
||||
def _create_kwargs(self) -> dict[str, Any]:
|
||||
kwargs: dict[str, Any] = self._connect_kwargs()
|
||||
if self.template is not None:
|
||||
kwargs["template"] = self.template
|
||||
if self.envs is not None:
|
||||
kwargs["envs"] = self.envs
|
||||
if self.metadata is not None:
|
||||
kwargs["metadata"] = self.metadata
|
||||
return kwargs
|
||||
|
||||
def _acquire_sandbox(self) -> tuple[Any, bool]:
|
||||
"""Return (sandbox, should_kill_after_use)."""
|
||||
sandbox_cls = self._import_sandbox_class()
|
||||
|
||||
if self.sandbox_id:
|
||||
return (
|
||||
sandbox_cls.connect(self.sandbox_id, **self._connect_kwargs()),
|
||||
False,
|
||||
)
|
||||
|
||||
if self.persistent:
|
||||
with self._lock:
|
||||
if self._persistent_sandbox is None:
|
||||
self._persistent_sandbox = sandbox_cls.create(
|
||||
**self._create_kwargs()
|
||||
)
|
||||
if not self._cleanup_registered:
|
||||
atexit.register(self.close)
|
||||
self._cleanup_registered = True
|
||||
return self._persistent_sandbox, False
|
||||
|
||||
sandbox = sandbox_cls.create(**self._create_kwargs())
|
||||
return sandbox, True
|
||||
|
||||
def _release_sandbox(self, sandbox: Any, should_kill: bool) -> None:
|
||||
if not should_kill:
|
||||
return
|
||||
try:
|
||||
sandbox.kill()
|
||||
except Exception:
|
||||
logger.debug(
|
||||
"Best-effort sandbox cleanup failed after ephemeral use; "
|
||||
"the sandbox may need manual termination.",
|
||||
exc_info=True,
|
||||
)
|
||||
|
||||
def close(self) -> None:
|
||||
"""Kill the cached persistent sandbox if one exists."""
|
||||
with self._lock:
|
||||
sandbox = self._persistent_sandbox
|
||||
self._persistent_sandbox = None
|
||||
if sandbox is None:
|
||||
return
|
||||
try:
|
||||
sandbox.kill()
|
||||
except Exception:
|
||||
logger.debug(
|
||||
"Best-effort persistent sandbox cleanup failed at close(); "
|
||||
"the sandbox may need manual termination.",
|
||||
exc_info=True,
|
||||
)
|
||||
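The three lifecycle modes above can be exercised without a live E2B account by substituting a stub for the SDK. A minimal sketch of the acquire/release decision, assuming hypothetical `FakeSandbox` and `LifecycleDemo` stand-ins (not part of the tool or the `e2b` SDK):

```python
class FakeSandbox:
    killed = 0

    @classmethod
    def create(cls):
        return cls()

    def kill(self):
        FakeSandbox.killed += 1


class LifecycleDemo:
    def __init__(self, persistent=False):
        self.persistent = persistent
        self._cached = None

    def acquire(self):
        # Persistent mode: create once, cache, never kill per call.
        if self.persistent:
            if self._cached is None:
                self._cached = FakeSandbox.create()
            return self._cached, False
        # Ephemeral mode: fresh sandbox, caller must kill it afterwards.
        return FakeSandbox.create(), True

    def release(self, sandbox, should_kill):
        if should_kill:
            sandbox.kill()


ephemeral = LifecycleDemo(persistent=False)
sb, should_kill = ephemeral.acquire()
ephemeral.release(sb, should_kill)

persistent = LifecycleDemo(persistent=True)
a, _ = persistent.acquire()
b, _ = persistent.acquire()

print(FakeSandbox.killed)  # 1: only the ephemeral sandbox was killed
print(a is b)              # True: persistent mode caches a single sandbox
```

The real tool adds a lock around the cached-sandbox check and an atexit hook, but the kill/no-kill decision is the same.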
@@ -0,0 +1,62 @@
from __future__ import annotations

from builtins import type as type_
from typing import Any

from pydantic import BaseModel, Field

from crewai_tools.tools.e2b_sandbox_tool.e2b_base_tool import E2BBaseTool


class E2BExecToolSchema(BaseModel):
    command: str = Field(..., description="Shell command to execute in the sandbox.")
    cwd: str | None = Field(
        default=None,
        description="Working directory to run the command in. Defaults to the sandbox home dir.",
    )
    envs: dict[str, str] | None = Field(
        default=None,
        description="Optional environment variables to set for this command.",
    )
    timeout: float | None = Field(
        default=None,
        description="Maximum seconds to wait for the command to finish.",
    )


class E2BExecTool(E2BBaseTool):
    """Run a shell command inside an E2B sandbox."""

    name: str = "E2B Sandbox Exec"
    description: str = (
        "Execute a shell command inside an E2B sandbox and return the exit "
        "code, stdout, and stderr. Use this to run builds, package installs, "
        "git operations, or any one-off shell command."
    )
    args_schema: type_[BaseModel] = E2BExecToolSchema

    def _run(
        self,
        command: str,
        cwd: str | None = None,
        envs: dict[str, str] | None = None,
        timeout: float | None = None,
    ) -> Any:
        sandbox, should_kill = self._acquire_sandbox()
        try:
            run_kwargs: dict[str, Any] = {}
            if cwd is not None:
                run_kwargs["cwd"] = cwd
            if envs is not None:
                run_kwargs["envs"] = envs
            if timeout is not None:
                run_kwargs["timeout"] = timeout
            result = sandbox.commands.run(command, **run_kwargs)
            return {
                "exit_code": getattr(result, "exit_code", None),
                "stdout": getattr(result, "stdout", None),
                "stderr": getattr(result, "stderr", None),
                "error": getattr(result, "error", None),
            }
        finally:
            self._release_sandbox(sandbox, should_kill)
@@ -0,0 +1,220 @@
from __future__ import annotations

import base64
from builtins import type as type_
import logging
import posixpath
from typing import Any, Literal

from pydantic import BaseModel, Field, model_validator

from crewai_tools.tools.e2b_sandbox_tool.e2b_base_tool import E2BBaseTool


logger = logging.getLogger(__name__)


FileAction = Literal[
    "read", "write", "append", "list", "delete", "mkdir", "info", "exists"
]


class E2BFileToolSchema(BaseModel):
    action: FileAction = Field(
        ...,
        description=(
            "The filesystem action to perform: 'read' (returns file contents), "
            "'write' (create or replace a file with content), 'append' (append "
            "content to an existing file — use this for writing large files in "
            "chunks to avoid hitting tool-call size limits), 'list' (lists a "
            "directory), 'delete' (removes a file/dir), 'mkdir' (creates a "
            "directory), 'info' (returns file metadata), 'exists' (returns a "
            "boolean for whether the path exists)."
        ),
    )
    path: str = Field(..., description="Absolute path inside the sandbox.")
    content: str | None = Field(
        default=None,
        description=(
            "Content to write or append. If omitted for 'write', an empty file "
            "is created. For files larger than a few KB, prefer one 'write' "
            "with empty content followed by multiple 'append' calls of ~4KB "
            "each to stay within tool-call payload limits."
        ),
    )
    binary: bool = Field(
        default=False,
        description=(
            "For 'write'/'append': treat content as base64 and upload raw "
            "bytes. For 'read': return contents as base64 instead of decoded "
            "utf-8."
        ),
    )
    depth: int = Field(
        default=1,
        description="For action='list': how many levels deep to recurse (default 1).",
    )

    @model_validator(mode="after")
    def _validate_action_args(self) -> E2BFileToolSchema:
        if self.action == "append" and self.content is None:
            raise ValueError(
                "action='append' requires 'content'. Pass the chunk to append "
                "in the 'content' field."
            )
        return self


class E2BFileTool(E2BBaseTool):
    """Read, write, and manage files inside an E2B sandbox.

    Notes:
        - Most useful with `persistent=True` or an explicit `sandbox_id`. With
          the default ephemeral mode, files disappear when this tool call
          finishes.
    """

    name: str = "E2B Sandbox Files"
    description: str = (
        "Perform filesystem operations inside an E2B sandbox: read a file, "
        "write content to a path, append content to an existing file, list a "
        "directory, delete a path, make a directory, fetch file metadata, or "
        "check whether a path exists. For files larger than a few KB, create "
        "the file with action='write' and empty content, then send the body "
        "via multiple 'append' calls of ~4KB each to stay within tool-call "
        "payload limits."
    )
    args_schema: type_[BaseModel] = E2BFileToolSchema

    def _run(
        self,
        action: FileAction,
        path: str,
        content: str | None = None,
        binary: bool = False,
        depth: int = 1,
    ) -> Any:
        sandbox, should_kill = self._acquire_sandbox()
        try:
            if action == "read":
                return self._read(sandbox, path, binary=binary)
            if action == "write":
                return self._write(sandbox, path, content or "", binary=binary)
            if action == "append":
                return self._append(sandbox, path, content or "", binary=binary)
            if action == "list":
                return self._list(sandbox, path, depth=depth)
            if action == "delete":
                sandbox.files.remove(path)
                return {"status": "deleted", "path": path}
            if action == "mkdir":
                created = sandbox.files.make_dir(path)
                return {"status": "created", "path": path, "created": bool(created)}
            if action == "info":
                return self._info(sandbox, path)
            if action == "exists":
                return {"path": path, "exists": bool(sandbox.files.exists(path))}
            raise ValueError(f"Unknown action: {action}")
        finally:
            self._release_sandbox(sandbox, should_kill)

    def _read(self, sandbox: Any, path: str, *, binary: bool) -> dict[str, Any]:
        if binary:
            data: bytes = sandbox.files.read(path, format="bytes")
            return {
                "path": path,
                "encoding": "base64",
                "content": base64.b64encode(data).decode("ascii"),
            }
        try:
            content: str = sandbox.files.read(path)
            return {"path": path, "encoding": "utf-8", "content": content}
        except UnicodeDecodeError:
            data = sandbox.files.read(path, format="bytes")
            return {
                "path": path,
                "encoding": "base64",
                "content": base64.b64encode(data).decode("ascii"),
                "note": "File was not valid utf-8; returned as base64.",
            }

    def _write(
        self, sandbox: Any, path: str, content: str, *, binary: bool
    ) -> dict[str, Any]:
        payload: str | bytes = base64.b64decode(content) if binary else content
        self._ensure_parent_dir(sandbox, path)
        sandbox.files.write(path, payload)
        size = (
            len(payload)
            if isinstance(payload, (bytes, bytearray))
            else len(payload.encode("utf-8"))
        )
        return {"status": "written", "path": path, "bytes": size}

    def _append(
        self, sandbox: Any, path: str, content: str, *, binary: bool
    ) -> dict[str, Any]:
        chunk: bytes = base64.b64decode(content) if binary else content.encode("utf-8")
        self._ensure_parent_dir(sandbox, path)
        try:
            existing: bytes = sandbox.files.read(path, format="bytes")
        except Exception:
            existing = b""
        payload = existing + chunk
        sandbox.files.write(path, payload)
        return {
            "status": "appended",
            "path": path,
            "appended_bytes": len(chunk),
            "total_bytes": len(payload),
        }

    @staticmethod
    def _ensure_parent_dir(sandbox: Any, path: str) -> None:
        parent = posixpath.dirname(path)
        if not parent or parent in ("/", "."):
            return
        try:
            sandbox.files.make_dir(parent)
        except Exception:
            logger.debug(
                "Best-effort parent-directory create failed for %s; "
                "assuming it already exists and proceeding with the write.",
                parent,
                exc_info=True,
            )

    def _list(self, sandbox: Any, path: str, *, depth: int) -> dict[str, Any]:
        entries = sandbox.files.list(path, depth=depth)
        return {
            "path": path,
            "entries": [self._entry_to_dict(e) for e in entries],
        }

    def _info(self, sandbox: Any, path: str) -> dict[str, Any]:
        return self._entry_to_dict(sandbox.files.get_info(path))

    @staticmethod
    def _entry_to_dict(entry: Any) -> dict[str, Any]:
        fields = (
            "name",
            "path",
            "type",
            "size",
            "mode",
            "permissions",
            "owner",
            "group",
            "modified_time",
            "symlink_target",
        )
        result: dict[str, Any] = {}
        for field in fields:
            value = getattr(entry, field, None)
            if value is not None and field == "modified_time":
                result[field] = (
                    value.isoformat() if hasattr(value, "isoformat") else str(value)
                )
            else:
                result[field] = value
        return result
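The write-then-append chunking strategy that the tool description recommends can be sketched as a standalone helper that turns one large body into a sequence of tool-call argument dicts (`chunked_file_calls` is a hypothetical helper; the dict keys mirror `E2BFileToolSchema`):

```python
CHUNK_SIZE = 4096  # ~4 KB per append, per the tool's guidance

def chunked_file_calls(path, body, chunk_size=CHUNK_SIZE):
    """Yield E2BFileTool argument dicts: one 'write' that creates the
    (empty) file, then one 'append' per chunk_size slice of the body."""
    yield {"action": "write", "path": path, "content": ""}
    for start in range(0, len(body), chunk_size):
        yield {
            "action": "append",
            "path": path,
            "content": body[start : start + chunk_size],
        }

calls = list(chunked_file_calls("/home/user/report.txt", "x" * 10_000))
print(len(calls))  # 4: one write plus three appends
print(sum(len(c["content"]) for c in calls))  # 10000
```

Each yielded dict stays well under typical tool-call payload limits, and the appends are order-dependent, so they must be issued sequentially against the same (persistent) sandbox.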
@@ -0,0 +1,133 @@
from __future__ import annotations

from builtins import type as type_
from typing import Any, ClassVar

from pydantic import BaseModel, Field

from crewai_tools.tools.e2b_sandbox_tool.e2b_base_tool import E2BBaseTool


class E2BPythonToolSchema(BaseModel):
    code: str = Field(
        ...,
        description="Python source to execute inside the sandbox.",
    )
    language: str | None = Field(
        default=None,
        description=(
            "Override the execution language (e.g. 'python', 'r', 'javascript'). "
            "Defaults to Python when omitted."
        ),
    )
    envs: dict[str, str] | None = Field(
        default=None,
        description="Optional environment variables for the run.",
    )
    timeout: float | None = Field(
        default=None,
        description="Maximum seconds to wait for the code to finish.",
    )


class E2BPythonTool(E2BBaseTool):
    """Run Python code inside an E2B code interpreter sandbox.

    Uses `e2b_code_interpreter`, which runs cells in a persistent Jupyter-style
    kernel so state (imports, variables) carries across calls when
    `persistent=True`.
    """

    name: str = "E2B Sandbox Python"
    description: str = (
        "Execute a block of Python code inside an E2B code interpreter sandbox "
        "and return captured stdout, stderr, the final expression value, and "
        "any rich results (charts, dataframes). Use this for data processing, "
        "quick scripts, or analysis that should run in an isolated environment."
    )
    args_schema: type_[BaseModel] = E2BPythonToolSchema

    package_dependencies: list[str] = Field(
        default_factory=lambda: ["e2b_code_interpreter"],
    )

    _ci_cache: ClassVar[dict[str, Any]] = {}

    @classmethod
    def _import_sandbox_class(cls) -> Any:
        cached = cls._ci_cache.get("Sandbox")
        if cached is not None:
            return cached
        try:
            from e2b_code_interpreter import Sandbox  # type: ignore[import-untyped]
        except ImportError as exc:
            raise ImportError(
                "The 'e2b_code_interpreter' package is required for the E2B "
                "Python tool. Install it with: "
                "uv add e2b-code-interpreter (or) "
                "pip install e2b-code-interpreter"
            ) from exc
        cls._ci_cache["Sandbox"] = Sandbox
        return Sandbox

    def _run(
        self,
        code: str,
        language: str | None = None,
        envs: dict[str, str] | None = None,
        timeout: float | None = None,
    ) -> Any:
        sandbox, should_kill = self._acquire_sandbox()
        try:
            run_kwargs: dict[str, Any] = {}
            if language is not None:
                run_kwargs["language"] = language
            if envs is not None:
                run_kwargs["envs"] = envs
            if timeout is not None:
                run_kwargs["timeout"] = timeout
            execution = sandbox.run_code(code, **run_kwargs)
            return self._serialize_execution(execution)
        finally:
            self._release_sandbox(sandbox, should_kill)

    @staticmethod
    def _serialize_execution(execution: Any) -> dict[str, Any]:
        logs = getattr(execution, "logs", None)
        error = getattr(execution, "error", None)
        results = getattr(execution, "results", None) or []
        return {
            "text": getattr(execution, "text", None),
            "stdout": list(getattr(logs, "stdout", []) or []) if logs else [],
            "stderr": list(getattr(logs, "stderr", []) or []) if logs else [],
            "error": (
                {
                    "name": getattr(error, "name", None),
                    "value": getattr(error, "value", None),
                    "traceback": getattr(error, "traceback", None),
                }
                if error
                else None
            ),
            "results": [E2BPythonTool._serialize_result(r) for r in results],
            "execution_count": getattr(execution, "execution_count", None),
        }

    @staticmethod
    def _serialize_result(result: Any) -> dict[str, Any]:
        fields = (
            "text",
            "html",
            "markdown",
            "svg",
            "png",
            "jpeg",
            "pdf",
            "latex",
            "json",
            "javascript",
            "data",
            "is_main_result",
            "extra",
        )
        return {field: getattr(result, field, None) for field in fields}
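The defensive `getattr`-based serialization above can be checked offline against stand-in objects. A minimal sketch, assuming `SimpleNamespace` stubs in place of the real `e2b_code_interpreter` Execution/Logs types, with `serialize_execution` as a trimmed re-statement of the method:

```python
from types import SimpleNamespace

# Hypothetical stubs: same attribute names the serializer probes for.
logs = SimpleNamespace(stdout=["hello\n"], stderr=[])
execution = SimpleNamespace(
    text="42", logs=logs, error=None, results=[], execution_count=1
)

def serialize_execution(execution):
    # Same pattern as E2BPythonTool._serialize_execution: every attribute
    # access goes through getattr with a default, so partial objects
    # (or SDK version drift) degrade to None/[] instead of raising.
    logs = getattr(execution, "logs", None)
    return {
        "text": getattr(execution, "text", None),
        "stdout": list(getattr(logs, "stdout", []) or []) if logs else [],
        "stderr": list(getattr(logs, "stderr", []) or []) if logs else [],
        "error": getattr(execution, "error", None),
        "execution_count": getattr(execution, "execution_count", None),
    }

payload = serialize_execution(execution)
print(payload["text"], payload["stdout"])  # 42 ['hello\n']
```

The payoff of this style is that the tool's JSON output shape stays stable even when a field is missing from a given execution result.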
File diff suppressed because it is too large
@@ -24,7 +24,7 @@ dependencies = [
    "tokenizers>=0.21,<1",
    "openpyxl~=3.1.5",
    # Authentication and Security
-   "python-dotenv~=1.1.1",
+   "python-dotenv>=1.2.2,<2",
    "pyjwt>=2.9.0,<3",
    # TUI
    "textual>=7.5.0",
@@ -55,7 +55,7 @@ Repository = "https://github.com/crewAIInc/crewAI"

[project.optional-dependencies]
tools = [
-   "crewai-tools==1.14.3a1",
+   "crewai-tools==1.14.3",
]
embeddings = [
    "tiktoken~=0.8.0"
@@ -94,6 +94,7 @@ google-genai = [
]
azure-ai-inference = [
    "azure-ai-inference~=1.0.0b9",
+   "azure-identity>=1.17.0,<2",
]
anthropic = [
    "anthropic~=0.73.0",

@@ -1,10 +1,9 @@
-import contextvars
-import threading
-from typing import Any
-import urllib.request
+import importlib
+import sys
+from typing import TYPE_CHECKING, Annotated, Any
import warnings

-from pydantic import PydanticUserError
+from pydantic import Field, PydanticUserError

from crewai.agent.core import Agent
from crewai.agent.planning_config import PlanningConfig
@@ -20,7 +19,10 @@ from crewai.state.checkpoint_config import CheckpointConfig  # noqa: F401
from crewai.task import Task
from crewai.tasks.llm_guardrail import LLMGuardrail
from crewai.tasks.task_output import TaskOutput
-from crewai.telemetry.telemetry import Telemetry


+if TYPE_CHECKING:
+    from crewai.memory.unified_memory import Memory


def _suppress_pydantic_deprecation_warnings() -> None:
@@ -46,38 +48,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:

_suppress_pydantic_deprecation_warnings()

-__version__ = "1.14.3a1"
-_telemetry_submitted = False
-
-
-def _track_install() -> None:
-    """Track package installation/first-use via Scarf analytics."""
-    global _telemetry_submitted
-
-    if _telemetry_submitted or Telemetry._is_telemetry_disabled():
-        return
-
-    try:
-        pixel_url = "https://api.scarf.sh/v2/packages/CrewAI/crewai/docs/00f2dad1-8334-4a39-934e-003b2e1146db"
-
-        req = urllib.request.Request(pixel_url)  # noqa: S310
-        req.add_header("User-Agent", f"CrewAI-Python/{__version__}")
-
-        with urllib.request.urlopen(req, timeout=2):  # noqa: S310
-            _telemetry_submitted = True
-    except Exception:  # noqa: S110
-        pass
-
-
-def _track_install_async() -> None:
-    """Track installation in background thread to avoid blocking imports."""
-    if not Telemetry._is_telemetry_disabled():
-        ctx = contextvars.copy_context()
-        thread = threading.Thread(target=ctx.run, args=(_track_install,), daemon=True)
-        thread.start()
-
-
-_track_install_async()
+__version__ = "1.14.3"

_LAZY_IMPORTS: dict[str, tuple[str, str]] = {
    "Memory": ("crewai.memory.unified_memory", "Memory"),
@@ -88,8 +59,6 @@ def __getattr__(name: str) -> Any:
    """Lazily import heavy modules (e.g. Memory → lancedb) on first access."""
    if name in _LAZY_IMPORTS:
        module_path, attr = _LAZY_IMPORTS[name]
-        import importlib
-
        mod = importlib.import_module(module_path)
        val = getattr(mod, attr)
        globals()[name] = val
@@ -125,10 +94,16 @@ try:
    }

    from crewai.tools.base_tool import BaseTool as _BaseTool
+   from crewai.tools.flow_tool import (
+       FlowTool as _FlowTool,
+       create_flow_tools as _create_flow_tools,
+   )
    from crewai.tools.structured_tool import CrewStructuredTool as _CrewStructuredTool

    _base_namespace["BaseTool"] = _BaseTool
    _base_namespace["CrewStructuredTool"] = _CrewStructuredTool
+   _base_namespace["FlowTool"] = _FlowTool
+   _base_namespace["create_flow_tools"] = _create_flow_tools  # type: ignore[assignment]

    try:
        from crewai.a2a.config import (
@@ -147,8 +122,6 @@ try:
    except ImportError:
        pass

-   import sys
-
    _full_namespace = {
        **_base_namespace,
        "ToolsHandler": _ToolsHandler,
@@ -191,10 +164,6 @@ try:
    Flow.model_rebuild(force=True, _types_namespace=_full_namespace)
    _AgentExecutor.model_rebuild(force=True, _types_namespace=_full_namespace)

-   from typing import Annotated
-
-   from pydantic import Field
-
    from crewai.state.runtime import RuntimeState

    Entity = Annotated[

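The `__getattr__` hook that this diff consolidates (imports hoisted to module top, lazy resolution kept) is the PEP 562 module-level lazy-import pattern. It can be sketched standalone; here `json.JSONDecoder` is a stand-in for a heavy dependency such as the lancedb-backed `Memory`:

```python
import importlib
from typing import Any

_LAZY_IMPORTS: dict[str, tuple[str, str]] = {
    # name -> (module path, attribute); json.JSONDecoder stands in for a
    # heavy dependency that should not load at package-import time.
    "JSONDecoder": ("json", "JSONDecoder"),
}

def __getattr__(name: str) -> Any:
    if name in _LAZY_IMPORTS:
        module_path, attr = _LAZY_IMPORTS[name]
        mod = importlib.import_module(module_path)
        val = getattr(mod, attr)
        globals()[name] = val  # cache so later lookups bypass this hook
        return val
    raise AttributeError(name)

decoder_cls = __getattr__("JSONDecoder")
print(decoder_cls.__name__)        # JSONDecoder
print("JSONDecoder" in globals())  # True: cached after first access
```

When this function lives at the top level of a package's `__init__.py`, Python calls it automatically for any attribute not found in the module dict, so the heavy import cost is paid only by callers that actually touch the lazy name.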
@@ -78,14 +78,14 @@ from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.lite_agent_output import LiteAgentOutput
from crewai.llms.base_llm import BaseLLM
-from crewai.mcp import MCPServerConfig
-from crewai.mcp.tool_resolver import MCPToolResolver
+from crewai.mcp.config import MCPServerConfig
from crewai.rag.embeddings.types import EmbedderConfig
from crewai.security.fingerprint import Fingerprint
from crewai.skills.loader import activate_skill, discover_skills
from crewai.skills.models import INSTRUCTIONS, Skill as SkillModel
from crewai.state.checkpoint_config import CheckpointConfig, apply_checkpoint
from crewai.tools.agent_tools.agent_tools import AgentTools
+from crewai.tools.flow_tool import create_flow_tools
from crewai.types.callback import SerializableCallable
from crewai.utilities.agent_utils import (
    get_tool_names,
@@ -119,6 +119,7 @@ if TYPE_CHECKING:

    from crewai.a2a.config import A2AClientConfig, A2AConfig, A2AServerConfig
    from crewai.agents.agent_builder.base_agent import PlatformAppOrAction
+   from crewai.mcp.tool_resolver import MCPToolResolver
    from crewai.task import Task
    from crewai.tools.base_tool import BaseTool
    from crewai.tools.structured_tool import CrewStructuredTool
@@ -305,6 +306,10 @@ class Agent(BaseAgent):
        Can be a single A2AConfig/A2AClientConfig/A2AServerConfig, or a list of any number of A2AConfig/A2AClientConfig with a single A2AServerConfig.
        """,
    )
+   flows: list[Any] | None = Field(
+       default=None,
+       description="Flow classes that the agent can invoke as tools. Each entry is a Flow subclass (not an instance).",
+   )
    agent_executor: CrewAgentExecutor | AgentExecutor | None = Field(
        default=None, description="An instance of the CrewAgentExecutor class."
    )
@@ -347,6 +352,7 @@ class Agent(BaseAgent):
        )

        self.set_skills()
+       self._set_flow_tools()

        if self.reasoning and self.planning_config is None:
            warnings.warn(
@@ -394,15 +400,17 @@ class Agent(BaseAgent):
        self,
        resolved_crew_skills: list[SkillModel] | None = None,
    ) -> None:
-       """Resolve skill paths and activate skills to INSTRUCTIONS level.
+       """Resolve skill paths while preserving explicit disclosure levels.

-       Path entries trigger discovery and activation. Pre-loaded Skill objects
-       below INSTRUCTIONS level are activated. Crew-level skills are merged in
-       with event emission so observability is consistent regardless of origin.
+       Path entries trigger discovery and activation because directory-based
+       skills opt into eager loading. Pre-loaded Skill objects keep their
+       current disclosure level so callers can attach METADATA-only skills and
+       progressively activate them later. Crew-level skills are merged in with
+       event emission so observability is consistent regardless of origin.

        Args:
-           resolved_crew_skills: Pre-resolved crew skills (already discovered
-               and activated). When provided, avoids redundant discovery per agent.
+           resolved_crew_skills: Pre-resolved crew skills. When provided,
+               avoids redundant discovery per agent.
        """
        from crewai.crew import Crew
@@ -443,8 +451,7 @@ class Agent(BaseAgent):
            elif isinstance(item, SkillModel):
                if item.name not in seen:
                    seen.add(item.name)
-                   activated = activate_skill(item, source=self)
-                   if activated is item and item.disclosure_level >= INSTRUCTIONS:
+                   if item.disclosure_level >= INSTRUCTIONS:
                        crewai_event_bus.emit(
                            self,
                            event=SkillActivatedEvent(
@@ -454,10 +461,20 @@ class Agent(BaseAgent):
                                disclosure_level=item.disclosure_level,
                            ),
                        )
-                   resolved.append(activated)
+                   resolved.append(item)

        self.skills = resolved if resolved else None

+   def _set_flow_tools(self) -> None:
+       """Convert Flow classes in ``self.flows`` to tools and merge them."""
+       if not self.flows:
+           return
+       flow_tools = create_flow_tools(self.flows)
+       if flow_tools:
+           if self.tools is None:
+               self.tools = []
+           self.tools.extend(flow_tools)
+
    def _is_any_available_memory(self) -> bool:
        """Check if unified memory is available (agent or crew)."""
        if getattr(self, "memory", None):
@@ -1120,6 +1137,8 @@ class Agent(BaseAgent):
        Delegates to :class:`~crewai.mcp.tool_resolver.MCPToolResolver`.
        """
        self._cleanup_mcp_clients()
+       from crewai.mcp.tool_resolver import MCPToolResolver
+
        self._mcp_resolver = MCPToolResolver(agent=self, logger=self._logger)
        return self._mcp_resolver.resolve(mcps)

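The `_set_flow_tools` merge step added above (convert flows to tools, create the tool list lazily, append) can be sketched standalone; `merge_flow_tools` and the string-based conversion are hypothetical stand-ins for the method and `create_flow_tools`:

```python
def merge_flow_tools(tools, flows):
    """Mirror of _set_flow_tools' merge logic: convert each flow and
    append to the agent's existing tools, creating the list lazily."""
    # Stand-in conversion; the real code calls create_flow_tools(flows).
    flow_tools = [f"{flow_name}_tool" for flow_name in flows]
    if flow_tools:
        if tools is None:
            tools = []
        tools.extend(flow_tools)
    return tools

print(merge_flow_tools(None, ["ResearchFlow"]))      # ['ResearchFlow_tool']
print(merge_flow_tools(["search"], ["ReportFlow"]))  # ['search', 'ReportFlow_tool']
```

The lazy list creation matters because `Agent.tools` defaults to `None`; extending unconditionally would raise `AttributeError` for agents configured with flows but no explicit tools.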
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
-   "crewai[tools]==1.14.3a1"
+   "crewai[tools]==1.14.3"
]

[project.scripts]

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
-   "crewai[tools]==1.14.3a1"
+   "crewai[tools]==1.14.3"
]

[project.scripts]

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
-   "crewai[tools]==1.14.3a1"
+   "crewai[tools]==1.14.3"
]

[tool.crewai]

@@ -6,111 +6,20 @@ This module provides the event infrastructure that allows users to:
|
||||
- Build custom logging and analytics
|
||||
- Extend CrewAI with custom event handlers
|
||||
- Declare handler dependencies for ordered execution
|
||||
|
||||
Event type classes are lazy-loaded on first access to avoid importing
|
||||
~12 Pydantic model modules (and their transitive deps) at package init time.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import importlib
|
||||
from typing import TYPE_CHECKING, Any
|
||||
|
||||
from crewai.events.base_event_listener import BaseEventListener
|
||||
from crewai.events.depends import Depends
|
||||
from crewai.events.event_bus import crewai_event_bus
|
||||
from crewai.events.handler_graph import CircularDependencyError
from crewai.events.types.crew_events import (
    CrewKickoffCompletedEvent,
    CrewKickoffFailedEvent,
    CrewKickoffStartedEvent,
    CrewTestCompletedEvent,
    CrewTestFailedEvent,
    CrewTestResultEvent,
    CrewTestStartedEvent,
    CrewTrainCompletedEvent,
    CrewTrainFailedEvent,
    CrewTrainStartedEvent,
)
from crewai.events.types.flow_events import (
    FlowCreatedEvent,
    FlowEvent,
    FlowFinishedEvent,
    FlowPlotEvent,
    FlowStartedEvent,
    HumanFeedbackReceivedEvent,
    HumanFeedbackRequestedEvent,
    MethodExecutionFailedEvent,
    MethodExecutionFinishedEvent,
    MethodExecutionStartedEvent,
)
from crewai.events.types.knowledge_events import (
    KnowledgeQueryCompletedEvent,
    KnowledgeQueryFailedEvent,
    KnowledgeQueryStartedEvent,
    KnowledgeRetrievalCompletedEvent,
    KnowledgeRetrievalStartedEvent,
    KnowledgeSearchQueryFailedEvent,
)
from crewai.events.types.llm_events import (
    LLMCallCompletedEvent,
    LLMCallFailedEvent,
    LLMCallStartedEvent,
    LLMStreamChunkEvent,
)
from crewai.events.types.llm_guardrail_events import (
    LLMGuardrailCompletedEvent,
    LLMGuardrailStartedEvent,
)
from crewai.events.types.logging_events import (
    AgentLogsExecutionEvent,
    AgentLogsStartedEvent,
)
from crewai.events.types.mcp_events import (
    MCPConfigFetchFailedEvent,
    MCPConnectionCompletedEvent,
    MCPConnectionFailedEvent,
    MCPConnectionStartedEvent,
    MCPToolExecutionCompletedEvent,
    MCPToolExecutionFailedEvent,
    MCPToolExecutionStartedEvent,
)
from crewai.events.types.memory_events import (
    MemoryQueryCompletedEvent,
    MemoryQueryFailedEvent,
    MemoryQueryStartedEvent,
    MemoryRetrievalCompletedEvent,
    MemoryRetrievalFailedEvent,
    MemoryRetrievalStartedEvent,
    MemorySaveCompletedEvent,
    MemorySaveFailedEvent,
    MemorySaveStartedEvent,
)
from crewai.events.types.reasoning_events import (
    AgentReasoningCompletedEvent,
    AgentReasoningFailedEvent,
    AgentReasoningStartedEvent,
    ReasoningEvent,
)
from crewai.events.types.skill_events import (
    SkillActivatedEvent,
    SkillDiscoveryCompletedEvent,
    SkillDiscoveryStartedEvent,
    SkillEvent,
    SkillLoadFailedEvent,
    SkillLoadedEvent,
)
from crewai.events.types.task_events import (
    TaskCompletedEvent,
    TaskEvaluationEvent,
    TaskFailedEvent,
    TaskStartedEvent,
)
from crewai.events.types.tool_usage_events import (
    ToolExecutionErrorEvent,
    ToolSelectionErrorEvent,
    ToolUsageErrorEvent,
    ToolUsageEvent,
    ToolUsageFinishedEvent,
    ToolUsageStartedEvent,
    ToolValidateInputErrorEvent,
)


if TYPE_CHECKING:
@@ -125,6 +34,250 @@ if TYPE_CHECKING:
        LiteAgentExecutionErrorEvent,
        LiteAgentExecutionStartedEvent,
    )
    from crewai.events.types.checkpoint_events import (
        CheckpointBaseEvent,
        CheckpointCompletedEvent,
        CheckpointFailedEvent,
        CheckpointForkBaseEvent,
        CheckpointForkCompletedEvent,
        CheckpointForkStartedEvent,
        CheckpointPrunedEvent,
        CheckpointRestoreBaseEvent,
        CheckpointRestoreCompletedEvent,
        CheckpointRestoreFailedEvent,
        CheckpointRestoreStartedEvent,
        CheckpointStartedEvent,
    )
    from crewai.events.types.crew_events import (
        CrewKickoffCompletedEvent,
        CrewKickoffFailedEvent,
        CrewKickoffStartedEvent,
        CrewTestCompletedEvent,
        CrewTestFailedEvent,
        CrewTestResultEvent,
        CrewTestStartedEvent,
        CrewTrainCompletedEvent,
        CrewTrainFailedEvent,
        CrewTrainStartedEvent,
    )
    from crewai.events.types.flow_events import (
        FlowCreatedEvent,
        FlowEvent,
        FlowFinishedEvent,
        FlowPlotEvent,
        FlowStartedEvent,
        HumanFeedbackReceivedEvent,
        HumanFeedbackRequestedEvent,
        MethodExecutionFailedEvent,
        MethodExecutionFinishedEvent,
        MethodExecutionStartedEvent,
    )
    from crewai.events.types.knowledge_events import (
        KnowledgeQueryCompletedEvent,
        KnowledgeQueryFailedEvent,
        KnowledgeQueryStartedEvent,
        KnowledgeRetrievalCompletedEvent,
        KnowledgeRetrievalStartedEvent,
        KnowledgeSearchQueryFailedEvent,
    )
    from crewai.events.types.llm_events import (
        LLMCallCompletedEvent,
        LLMCallFailedEvent,
        LLMCallStartedEvent,
        LLMStreamChunkEvent,
    )
    from crewai.events.types.llm_guardrail_events import (
        LLMGuardrailCompletedEvent,
        LLMGuardrailStartedEvent,
    )
    from crewai.events.types.logging_events import (
        AgentLogsExecutionEvent,
        AgentLogsStartedEvent,
    )
    from crewai.events.types.mcp_events import (
        MCPConfigFetchFailedEvent,
        MCPConnectionCompletedEvent,
        MCPConnectionFailedEvent,
        MCPConnectionStartedEvent,
        MCPToolExecutionCompletedEvent,
        MCPToolExecutionFailedEvent,
        MCPToolExecutionStartedEvent,
    )
    from crewai.events.types.memory_events import (
        MemoryQueryCompletedEvent,
        MemoryQueryFailedEvent,
        MemoryQueryStartedEvent,
        MemoryRetrievalCompletedEvent,
        MemoryRetrievalFailedEvent,
        MemoryRetrievalStartedEvent,
        MemorySaveCompletedEvent,
        MemorySaveFailedEvent,
        MemorySaveStartedEvent,
    )
    from crewai.events.types.reasoning_events import (
        AgentReasoningCompletedEvent,
        AgentReasoningFailedEvent,
        AgentReasoningStartedEvent,
        ReasoningEvent,
    )
    from crewai.events.types.skill_events import (
        SkillActivatedEvent,
        SkillDiscoveryCompletedEvent,
        SkillDiscoveryStartedEvent,
        SkillEvent,
        SkillLoadFailedEvent,
        SkillLoadedEvent,
    )
    from crewai.events.types.task_events import (
        TaskCompletedEvent,
        TaskEvaluationEvent,
        TaskFailedEvent,
        TaskStartedEvent,
    )
    from crewai.events.types.tool_usage_events import (
        ToolExecutionErrorEvent,
        ToolSelectionErrorEvent,
        ToolUsageErrorEvent,
        ToolUsageEvent,
        ToolUsageFinishedEvent,
        ToolUsageStartedEvent,
        ToolValidateInputErrorEvent,
    )

# Map every event class name → its module path for lazy loading
_LAZY_EVENT_MAPPING: dict[str, str] = {
    # agent_events
    "AgentEvaluationCompletedEvent": "crewai.events.types.agent_events",
    "AgentEvaluationFailedEvent": "crewai.events.types.agent_events",
    "AgentEvaluationStartedEvent": "crewai.events.types.agent_events",
    "AgentExecutionCompletedEvent": "crewai.events.types.agent_events",
    "AgentExecutionErrorEvent": "crewai.events.types.agent_events",
    "AgentExecutionStartedEvent": "crewai.events.types.agent_events",
    "LiteAgentExecutionCompletedEvent": "crewai.events.types.agent_events",
    "LiteAgentExecutionErrorEvent": "crewai.events.types.agent_events",
    "LiteAgentExecutionStartedEvent": "crewai.events.types.agent_events",
    # checkpoint_events
    "CheckpointBaseEvent": "crewai.events.types.checkpoint_events",
    "CheckpointCompletedEvent": "crewai.events.types.checkpoint_events",
    "CheckpointFailedEvent": "crewai.events.types.checkpoint_events",
    "CheckpointForkBaseEvent": "crewai.events.types.checkpoint_events",
    "CheckpointForkCompletedEvent": "crewai.events.types.checkpoint_events",
    "CheckpointForkStartedEvent": "crewai.events.types.checkpoint_events",
    "CheckpointPrunedEvent": "crewai.events.types.checkpoint_events",
    "CheckpointRestoreBaseEvent": "crewai.events.types.checkpoint_events",
    "CheckpointRestoreCompletedEvent": "crewai.events.types.checkpoint_events",
    "CheckpointRestoreFailedEvent": "crewai.events.types.checkpoint_events",
    "CheckpointRestoreStartedEvent": "crewai.events.types.checkpoint_events",
    "CheckpointStartedEvent": "crewai.events.types.checkpoint_events",
    # crew_events
    "CrewKickoffCompletedEvent": "crewai.events.types.crew_events",
    "CrewKickoffFailedEvent": "crewai.events.types.crew_events",
    "CrewKickoffStartedEvent": "crewai.events.types.crew_events",
    "CrewTestCompletedEvent": "crewai.events.types.crew_events",
    "CrewTestFailedEvent": "crewai.events.types.crew_events",
    "CrewTestResultEvent": "crewai.events.types.crew_events",
    "CrewTestStartedEvent": "crewai.events.types.crew_events",
    "CrewTrainCompletedEvent": "crewai.events.types.crew_events",
    "CrewTrainFailedEvent": "crewai.events.types.crew_events",
    "CrewTrainStartedEvent": "crewai.events.types.crew_events",
    # flow_events
    "FlowCreatedEvent": "crewai.events.types.flow_events",
    "FlowEvent": "crewai.events.types.flow_events",
    "FlowFinishedEvent": "crewai.events.types.flow_events",
    "FlowPlotEvent": "crewai.events.types.flow_events",
    "FlowStartedEvent": "crewai.events.types.flow_events",
    "HumanFeedbackReceivedEvent": "crewai.events.types.flow_events",
    "HumanFeedbackRequestedEvent": "crewai.events.types.flow_events",
    "MethodExecutionFailedEvent": "crewai.events.types.flow_events",
    "MethodExecutionFinishedEvent": "crewai.events.types.flow_events",
    "MethodExecutionStartedEvent": "crewai.events.types.flow_events",
    # knowledge_events
    "KnowledgeQueryCompletedEvent": "crewai.events.types.knowledge_events",
    "KnowledgeQueryFailedEvent": "crewai.events.types.knowledge_events",
    "KnowledgeQueryStartedEvent": "crewai.events.types.knowledge_events",
    "KnowledgeRetrievalCompletedEvent": "crewai.events.types.knowledge_events",
    "KnowledgeRetrievalStartedEvent": "crewai.events.types.knowledge_events",
    "KnowledgeSearchQueryFailedEvent": "crewai.events.types.knowledge_events",
    # llm_events
    "LLMCallCompletedEvent": "crewai.events.types.llm_events",
    "LLMCallFailedEvent": "crewai.events.types.llm_events",
    "LLMCallStartedEvent": "crewai.events.types.llm_events",
    "LLMStreamChunkEvent": "crewai.events.types.llm_events",
    # llm_guardrail_events
    "LLMGuardrailCompletedEvent": "crewai.events.types.llm_guardrail_events",
    "LLMGuardrailStartedEvent": "crewai.events.types.llm_guardrail_events",
    # logging_events
    "AgentLogsExecutionEvent": "crewai.events.types.logging_events",
    "AgentLogsStartedEvent": "crewai.events.types.logging_events",
    # mcp_events
    "MCPConfigFetchFailedEvent": "crewai.events.types.mcp_events",
    "MCPConnectionCompletedEvent": "crewai.events.types.mcp_events",
    "MCPConnectionFailedEvent": "crewai.events.types.mcp_events",
    "MCPConnectionStartedEvent": "crewai.events.types.mcp_events",
    "MCPToolExecutionCompletedEvent": "crewai.events.types.mcp_events",
    "MCPToolExecutionFailedEvent": "crewai.events.types.mcp_events",
    "MCPToolExecutionStartedEvent": "crewai.events.types.mcp_events",
    # memory_events
    "MemoryQueryCompletedEvent": "crewai.events.types.memory_events",
    "MemoryQueryFailedEvent": "crewai.events.types.memory_events",
    "MemoryQueryStartedEvent": "crewai.events.types.memory_events",
    "MemoryRetrievalCompletedEvent": "crewai.events.types.memory_events",
    "MemoryRetrievalFailedEvent": "crewai.events.types.memory_events",
    "MemoryRetrievalStartedEvent": "crewai.events.types.memory_events",
    "MemorySaveCompletedEvent": "crewai.events.types.memory_events",
    "MemorySaveFailedEvent": "crewai.events.types.memory_events",
    "MemorySaveStartedEvent": "crewai.events.types.memory_events",
    # reasoning_events
    "AgentReasoningCompletedEvent": "crewai.events.types.reasoning_events",
    "AgentReasoningFailedEvent": "crewai.events.types.reasoning_events",
    "AgentReasoningStartedEvent": "crewai.events.types.reasoning_events",
    "ReasoningEvent": "crewai.events.types.reasoning_events",
    # skill_events
    "SkillActivatedEvent": "crewai.events.types.skill_events",
    "SkillDiscoveryCompletedEvent": "crewai.events.types.skill_events",
    "SkillDiscoveryStartedEvent": "crewai.events.types.skill_events",
    "SkillEvent": "crewai.events.types.skill_events",
    "SkillLoadFailedEvent": "crewai.events.types.skill_events",
    "SkillLoadedEvent": "crewai.events.types.skill_events",
    # task_events
    "TaskCompletedEvent": "crewai.events.types.task_events",
    "TaskEvaluationEvent": "crewai.events.types.task_events",
    "TaskFailedEvent": "crewai.events.types.task_events",
    "TaskStartedEvent": "crewai.events.types.task_events",
    # tool_usage_events
    "ToolExecutionErrorEvent": "crewai.events.types.tool_usage_events",
    "ToolSelectionErrorEvent": "crewai.events.types.tool_usage_events",
    "ToolUsageErrorEvent": "crewai.events.types.tool_usage_events",
    "ToolUsageEvent": "crewai.events.types.tool_usage_events",
    "ToolUsageFinishedEvent": "crewai.events.types.tool_usage_events",
    "ToolUsageStartedEvent": "crewai.events.types.tool_usage_events",
    "ToolValidateInputErrorEvent": "crewai.events.types.tool_usage_events",
}

_extension_exports: dict[str, Any] = {}


def __getattr__(name: str) -> Any:
    """Lazy import for event types and registered extensions."""
    if name in _LAZY_EVENT_MAPPING:
        module_path = _LAZY_EVENT_MAPPING[name]
        module = importlib.import_module(module_path)
        val = getattr(module, name)
        globals()[name] = val  # cache for subsequent access
        return val

    if name in _extension_exports:
        value = _extension_exports[name]
        if isinstance(value, str):
            module_path, _, attr_name = value.rpartition(".")
            if module_path:
                module = importlib.import_module(module_path)
                return getattr(module, attr_name)
            return importlib.import_module(value)
        return value

    msg = f"module {__name__!r} has no attribute {name!r}"
    raise AttributeError(msg)


__all__ = [
@@ -140,6 +293,18 @@ __all__ = [
    "AgentReasoningFailedEvent",
    "AgentReasoningStartedEvent",
    "BaseEventListener",
    "CheckpointBaseEvent",
    "CheckpointCompletedEvent",
    "CheckpointFailedEvent",
    "CheckpointForkBaseEvent",
    "CheckpointForkCompletedEvent",
    "CheckpointForkStartedEvent",
    "CheckpointPrunedEvent",
    "CheckpointRestoreBaseEvent",
    "CheckpointRestoreCompletedEvent",
    "CheckpointRestoreFailedEvent",
    "CheckpointRestoreStartedEvent",
    "CheckpointStartedEvent",
    "CircularDependencyError",
    "CrewKickoffCompletedEvent",
    "CrewKickoffFailedEvent",
@@ -214,42 +379,3 @@ __all__ = [
    "_extension_exports",
    "crewai_event_bus",
]

_AGENT_EVENT_MAPPING = {
    "AgentEvaluationCompletedEvent": "crewai.events.types.agent_events",
    "AgentEvaluationFailedEvent": "crewai.events.types.agent_events",
    "AgentEvaluationStartedEvent": "crewai.events.types.agent_events",
    "AgentExecutionCompletedEvent": "crewai.events.types.agent_events",
    "AgentExecutionErrorEvent": "crewai.events.types.agent_events",
    "AgentExecutionStartedEvent": "crewai.events.types.agent_events",
    "LiteAgentExecutionCompletedEvent": "crewai.events.types.agent_events",
    "LiteAgentExecutionErrorEvent": "crewai.events.types.agent_events",
    "LiteAgentExecutionStartedEvent": "crewai.events.types.agent_events",
}

_extension_exports: dict[str, Any] = {}


def __getattr__(name: str) -> Any:
    """Lazy import for agent events and registered extensions."""
    if name in _AGENT_EVENT_MAPPING:
        import importlib

        module_path = _AGENT_EVENT_MAPPING[name]
        module = importlib.import_module(module_path)
        return getattr(module, name)

    if name in _extension_exports:
        import importlib

        value = _extension_exports[name]
        if isinstance(value, str):
            module_path, _, attr_name = value.rpartition(".")
            if module_path:
                module = importlib.import_module(module_path)
                return getattr(module, attr_name)
            return importlib.import_module(value)
        return value

    msg = f"module {__name__!r} has no attribute {name!r}"
    raise AttributeError(msg)

@@ -64,6 +64,22 @@ P = ParamSpec("P")
R = TypeVar("R")


_replaying: contextvars.ContextVar[bool] = contextvars.ContextVar(
    "crewai_event_replaying", default=False
)


def is_replaying() -> bool:
    """Return True if the current context is dispatching a replayed event.

    Listeners with side effects (checkpoint writes, external API calls that
    should not be repeated) should early-return when this is true. Listeners
    whose purpose is reconstructing timeline state (trace batch, console
    formatter) should ignore the flag and process replayed events normally.
    """
    return _replaying.get()


class CrewAIEventsBus:
    """Singleton event bus for handling events in CrewAI.

@@ -261,6 +277,11 @@ class CrewAIEventsBus:
        self._runtime_state = state
        self._registered_entity_ids = {id(e) for e in state.root}

    @property
    def runtime_state(self) -> RuntimeState | None:
        """The RuntimeState currently attached to the bus, if any."""
        return self._runtime_state

    def register_entity(self, entity: Any) -> None:
        """Add an entity to the RuntimeState, creating it if needed.

@@ -568,6 +589,87 @@ class CrewAIEventsBus:

        return None

    async def _acall_handlers_replaying(
        self,
        source: Any,
        event: BaseEvent,
        handlers: AsyncHandlerSet,
    ) -> None:
        """Call async handlers with the replaying flag set on the loop thread."""
        token = _replaying.set(True)
        try:
            await self._acall_handlers(source, event, handlers)
        finally:
            _replaying.reset(token)

    async def _emit_with_dependencies_replaying(
        self, source: Any, event: BaseEvent
    ) -> None:
        """Dependency-aware dispatch with the replaying flag set."""
        token = _replaying.set(True)
        try:
            await self._emit_with_dependencies(source, event)
        finally:
            _replaying.reset(token)

    def replay(self, source: Any, event: BaseEvent) -> Future[None] | None:
        """Dispatch a previously-recorded event without mutating its fields.

        Unlike :meth:`emit`, this does not run ``_prepare_event`` (so stored
        event ids and ``emission_sequence`` are preserved) and does not
        re-record the event. Listeners can call :func:`is_replaying` to
        opt out of side-effectful processing.

        Args:
            source: The emitting object.
            event: The previously-recorded event to dispatch.

        Returns:
            Future that completes when handlers finish, or None if no handlers.
        """
        event_type = type(event)

        with self._rwlock.r_locked():
            if self._shutting_down:
                return None
            has_dependencies = event_type in self._handler_dependencies
            sync_handlers = self._sync_handlers.get(event_type, frozenset())
            async_handlers = self._async_handlers.get(event_type, frozenset())

        if not sync_handlers and not async_handlers:
            return None

        self._ensure_executor_initialized()
        self._has_pending_events = True

        token = _replaying.set(True)
        try:
            if has_dependencies:
                return self._track_future(
                    asyncio.run_coroutine_threadsafe(
                        self._emit_with_dependencies_replaying(source, event),
                        self._loop,
                    )
                )

            if sync_handlers:
                ctx = contextvars.copy_context()
                sync_future = self._sync_executor.submit(
                    ctx.run, self._call_handlers, source, event, sync_handlers
                )
                self._track_future(sync_future)
                if not async_handlers:
                    return sync_future

            return self._track_future(
                asyncio.run_coroutine_threadsafe(
                    self._acall_handlers_replaying(source, event, async_handlers),
                    self._loop,
                )
            )
        finally:
            _replaying.reset(token)
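The set/try/finally/reset discipline that `replay()` and its helpers repeat can be reduced to a small stand-alone sketch (toy dispatcher, not the real bus): the flag is visible to handlers for exactly the duration of dispatch and is always restored, even if a handler raises.

```python
# ContextVar.set() returns a token; resetting it in `finally` guarantees the
# replaying flag never leaks past the dispatch, handler exceptions included.
import contextvars

_replaying = contextvars.ContextVar("replaying", default=False)
seen: list[bool] = []


def dispatch(handlers) -> None:
    token = _replaying.set(True)
    try:
        for handler in handlers:
            handler()
    finally:
        _replaying.reset(token)


dispatch([lambda: seen.append(_replaying.get())])  # inside dispatch: True
seen.append(_replaying.get())                      # after dispatch: restored
```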

    def flush(self, timeout: float | None = 30.0) -> bool:
        """Block until all pending event handlers complete.

@@ -30,6 +30,17 @@ from crewai.events.types.agent_events import (
    AgentExecutionStartedEvent,
    LiteAgentExecutionCompletedEvent,
)
from crewai.events.types.checkpoint_events import (
    CheckpointCompletedEvent,
    CheckpointFailedEvent,
    CheckpointForkCompletedEvent,
    CheckpointForkStartedEvent,
    CheckpointPrunedEvent,
    CheckpointRestoreCompletedEvent,
    CheckpointRestoreFailedEvent,
    CheckpointRestoreStartedEvent,
    CheckpointStartedEvent,
)
from crewai.events.types.crew_events import (
    CrewKickoffCompletedEvent,
    CrewKickoffFailedEvent,
@@ -183,4 +194,13 @@ EventTypes = (
    | MCPToolExecutionCompletedEvent
    | MCPToolExecutionFailedEvent
    | MCPConfigFetchFailedEvent
    | CheckpointStartedEvent
    | CheckpointCompletedEvent
    | CheckpointFailedEvent
    | CheckpointForkStartedEvent
    | CheckpointForkCompletedEvent
    | CheckpointRestoreStartedEvent
    | CheckpointRestoreCompletedEvent
    | CheckpointRestoreFailedEvent
    | CheckpointPrunedEvent
)
@@ -81,8 +81,11 @@ class TraceBatchManager:
        """Initialize a new trace batch (thread-safe)"""
        with self._batch_ready_cv:
            if self.current_batch is not None:
                # Lazy init (e.g. DefaultEnvEvent) may have created the batch without
                # execution_type; merge metadata from a later flow/crew initializer.
                self.current_batch.execution_metadata.update(execution_metadata)
                logger.debug(
-                    "Batch already initialized, skipping duplicate initialization"
+                    "Batch already initialized, merged execution metadata and skipped duplicate initialization"
                )
                return self.current_batch
@@ -60,12 +60,6 @@ from crewai.events.types.crew_events import (
    CrewKickoffFailedEvent,
    CrewKickoffStartedEvent,
)
-from crewai.events.types.env_events import (
-    CCEnvEvent,
-    CodexEnvEvent,
-    CursorEnvEvent,
-    DefaultEnvEvent,
-)
from crewai.events.types.flow_events import (
    FlowCreatedEvent,
    FlowFinishedEvent,
@@ -212,7 +206,6 @@ class TraceCollectionListener(BaseEventListener):
            self._listeners_setup = True
            return

-        self._register_env_event_handlers(crewai_event_bus)
        self._register_flow_event_handlers(crewai_event_bus)
        self._register_context_event_handlers(crewai_event_bus)
        self._register_action_event_handlers(crewai_event_bus)
@@ -221,25 +214,6 @@ class TraceCollectionListener(BaseEventListener):

        self._listeners_setup = True

-    def _register_env_event_handlers(self, event_bus: CrewAIEventsBus) -> None:
-        """Register handlers for environment context events."""
-
-        @event_bus.on(CCEnvEvent)
-        def on_cc_env(source: Any, event: CCEnvEvent) -> None:
-            self._handle_action_event("cc_env", source, event)
-
-        @event_bus.on(CodexEnvEvent)
-        def on_codex_env(source: Any, event: CodexEnvEvent) -> None:
-            self._handle_action_event("codex_env", source, event)
-
-        @event_bus.on(CursorEnvEvent)
-        def on_cursor_env(source: Any, event: CursorEnvEvent) -> None:
-            self._handle_action_event("cursor_env", source, event)
-
-        @event_bus.on(DefaultEnvEvent)
-        def on_default_env(source: Any, event: DefaultEnvEvent) -> None:
-            self._handle_action_event("default_env", source, event)
-
    def _register_flow_event_handlers(self, event_bus: CrewAIEventsBus) -> None:
        """Register handlers for flow events."""
@@ -286,8 +260,8 @@ class TraceCollectionListener(BaseEventListener):
        if self.batch_manager.batch_owner_type != "flow":
            # Always call _initialize_crew_batch to claim ownership.
            # If batch was already initialized by a concurrent action event
-            # (race condition with DefaultEnvEvent), initialize_batch() returns
-            # early but batch_owner_type is still correctly set to "crew".
+            # (e.g. LLM/tool before crew_kickoff_started), initialize_batch()
+            # returns early but batch_owner_type is still correctly set to "crew".
            # Skip only when a parent flow already owns the batch.
            self._initialize_crew_batch(source, event)
        self._handle_trace_event("crew_kickoff_started", source, event)

lib/crewai/src/crewai/events/types/checkpoint_events.py (new file, 97 lines)
@@ -0,0 +1,97 @@
"""Event family for automatic state checkpointing and forking."""

from typing import Literal

from crewai.events.base_events import BaseEvent


class CheckpointBaseEvent(BaseEvent):
    """Base event for checkpoint lifecycle operations."""

    type: str
    location: str
    provider: str
    trigger: str | None = None
    branch: str | None = None
    parent_id: str | None = None


class CheckpointStartedEvent(CheckpointBaseEvent):
    """Event emitted immediately before a checkpoint is written."""

    type: Literal["checkpoint_started"] = "checkpoint_started"


class CheckpointCompletedEvent(CheckpointBaseEvent):
    """Event emitted when a checkpoint has been written successfully."""

    type: Literal["checkpoint_completed"] = "checkpoint_completed"
    checkpoint_id: str
    duration_ms: float


class CheckpointFailedEvent(CheckpointBaseEvent):
    """Event emitted when a checkpoint write fails."""

    type: Literal["checkpoint_failed"] = "checkpoint_failed"
    error: str


class CheckpointPrunedEvent(CheckpointBaseEvent):
    """Event emitted after pruning old checkpoints from a branch."""

    type: Literal["checkpoint_pruned"] = "checkpoint_pruned"
    removed_count: int
    max_checkpoints: int


class CheckpointForkBaseEvent(BaseEvent):
    """Base event for fork lifecycle operations on a RuntimeState."""

    type: str
    branch: str
    parent_branch: str | None = None
    parent_checkpoint_id: str | None = None


class CheckpointForkStartedEvent(CheckpointForkBaseEvent):
    """Event emitted immediately before a fork relabels the branch."""

    type: Literal["checkpoint_fork_started"] = "checkpoint_fork_started"


class CheckpointForkCompletedEvent(CheckpointForkBaseEvent):
    """Event emitted after a fork has established the new branch."""

    type: Literal["checkpoint_fork_completed"] = "checkpoint_fork_completed"


class CheckpointRestoreBaseEvent(BaseEvent):
    """Base event for checkpoint restore lifecycle operations."""

    type: str
    location: str
    provider: str | None = None


class CheckpointRestoreStartedEvent(CheckpointRestoreBaseEvent):
    """Event emitted immediately before a checkpoint restore begins."""

    type: Literal["checkpoint_restore_started"] = "checkpoint_restore_started"


class CheckpointRestoreCompletedEvent(CheckpointRestoreBaseEvent):
    """Event emitted when a checkpoint has been restored successfully."""

    type: Literal["checkpoint_restore_completed"] = "checkpoint_restore_completed"
    checkpoint_id: str
    branch: str | None = None
    parent_id: str | None = None
    duration_ms: float


class CheckpointRestoreFailedEvent(CheckpointRestoreBaseEvent):
    """Event emitted when a checkpoint restore fails."""

    type: Literal["checkpoint_restore_failed"] = "checkpoint_restore_failed"
    error: str
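These event classes exist to be subscribed to. As a hedged sketch of the pattern (a toy stand-alone bus keyed by event class, mirroring the shape of registration but not the real `crewai_event_bus` surface; the dataclass below is a simplified stand-in for `CheckpointCompletedEvent`):

```python
# Minimal decorator-based registration + dispatch keyed by event class,
# the same shape used by the trace listener's @event_bus.on(...) handlers.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class CheckpointCompletedEvent:  # simplified stand-in for the pydantic model
    location: str
    provider: str
    checkpoint_id: str
    duration_ms: float


_handlers = defaultdict(list)


def on(event_type):
    def register(fn):
        _handlers[event_type].append(fn)
        return fn
    return register


def emit(event) -> None:
    for fn in _handlers[type(event)]:
        fn(event)


completed: list[str] = []


@on(CheckpointCompletedEvent)
def log_checkpoint(event: CheckpointCompletedEvent) -> None:
    completed.append(event.checkpoint_id)


emit(CheckpointCompletedEvent("/tmp/ckpts", "file", "ckpt-1", 3.5))
```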
@@ -45,6 +45,7 @@ from pydantic import (
    BeforeValidator,
    ConfigDict,
    Field,
    PlainSerializer,
    PrivateAttr,
    SerializeAsAny,
    ValidationError,
@@ -58,6 +59,7 @@ from crewai.events.event_bus import crewai_event_bus
from crewai.events.event_context import (
    get_current_parent_id,
    reset_last_event_id,
    restore_event_scope,
    triggered_by_scope,
)
from crewai.events.listeners.tracing.trace_listener import (
@@ -157,6 +159,37 @@ def _resolve_persistence(value: Any) -> Any:
    return value


_INITIAL_STATE_CLASS_MARKER = "__crewai_pydantic_class_schema__"


def _serialize_initial_state(value: Any) -> Any:
    """Make ``initial_state`` safe for JSON checkpoint serialization.

    ``BaseModel`` class refs are emitted as their JSON schema under a sentinel
    marker key so deserialization can round-trip them back to a class.
    ``BaseModel`` instances are dumped to JSON (round-trip as plain dicts,
    which ``_create_initial_state`` accepts). Bare ``type`` values that are
    not ``BaseModel`` subclasses (e.g. ``dict``) are dropped since they
    can't be represented in JSON.
    """
    if isinstance(value, type):
        if issubclass(value, BaseModel):
            return {_INITIAL_STATE_CLASS_MARKER: value.model_json_schema()}
        return None
    if isinstance(value, BaseModel):
        return value.model_dump(mode="json")
    return value


def _deserialize_initial_state(value: Any) -> Any:
    """Rehydrate a class ref serialized by :func:`_serialize_initial_state`."""
    if isinstance(value, dict) and _INITIAL_STATE_CLASS_MARKER in value:
        from crewai.utilities.pydantic_schema_utils import create_model_from_schema

        return create_model_from_schema(value[_INITIAL_STATE_CLASS_MARKER])
    return value
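The sentinel-marker round-trip above can be sketched without pydantic: a class reference is serialized as a marked dict, then detected and rebuilt on load. Everything here is a plain-dict stand-in (the marker key, `serialize_initial_state`, and the `type(...)` rebuild are illustrative; the real code uses `model_json_schema()` and `create_model_from_schema`):

```python
# Round-trip a class reference through JSON via a sentinel marker key,
# mirroring the _serialize_initial_state / _deserialize_initial_state idea.
import json

_MARKER = "__class_schema__"  # illustrative sentinel, not the real key


def serialize_initial_state(value):
    if isinstance(value, type):
        # stand-in for model_json_schema(): record enough to rebuild
        return {_MARKER: {"title": value.__name__}}
    return value


def deserialize_initial_state(value):
    if isinstance(value, dict) and _MARKER in value:
        schema = value[_MARKER]
        # stand-in for create_model_from_schema(): rebuild a class
        return type(schema["title"], (), {})
    return value


class MyState:  # hypothetical user state class
    pass


payload = json.dumps(serialize_initial_state(MyState))  # JSON-safe
restored = deserialize_initial_state(json.loads(payload))
```

Plain values pass through both functions untouched, which is what lets the same field accept dicts, instances, and class refs.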


class FlowState(BaseModel):
    """Base model for all flow states, ensuring each state has a unique ID."""

@@ -908,7 +941,11 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):

    entity_type: Literal["flow"] = "flow"

-    initial_state: Any = Field(default=None)
+    initial_state: Annotated[  # type: ignore[type-arg]
+        type[BaseModel] | type[dict] | dict[str, Any] | BaseModel | None,
+        BeforeValidator(_deserialize_initial_state),
+        PlainSerializer(_serialize_initial_state, return_type=Any, when_used="json"),
+    ] = Field(default=None)
    name: str | None = Field(default=None)
    tracing: bool | None = Field(default=None)
    stream: bool = Field(default=False)
@@ -980,13 +1017,18 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
            A Flow instance on the new branch. Call kickoff() to run.
        """
        flow = cls.from_checkpoint(config)
-        state = crewai_event_bus._runtime_state
+        state = crewai_event_bus.runtime_state
        if state is None:
            raise RuntimeError(
                "Cannot fork: no runtime state on the event bus. "
                "Ensure from_checkpoint() succeeded before calling fork()."
            )
        state.fork(branch)
        new_id = str(uuid4())
        if isinstance(flow._state, dict):
            flow._state["id"] = new_id
        else:
            object.__setattr__(flow._state, "id", new_id)
        return flow

    checkpoint_completed_methods: set[str] | None = Field(default=None)
@@ -1008,6 +1050,8 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
        }
        if self.checkpoint_state is not None:
            self._restore_state(self.checkpoint_state)
            restore_event_scope(())
            reset_last_event_id()

    _methods: dict[FlowMethodName, FlowMethod[Any, Any]] = PrivateAttr(
        default_factory=dict
@@ -1030,6 +1074,7 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
    _human_feedback_method_outputs: dict[str, Any] = PrivateAttr(default_factory=dict)
    _input_history: list[InputHistoryEntry] = PrivateAttr(default_factory=list)
    _state: Any = PrivateAttr(default=None)
    _execution_id: str = PrivateAttr(default_factory=lambda: str(uuid4()))

    def __class_getitem__(cls: type[Flow[T]], item: type[T]) -> type[Flow[T]]:  # type: ignore[override]
        class _FlowGeneric(cls):  # type: ignore[valid-type,misc]
@@ -1503,6 +1548,8 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
        except Exception:
            logger.warning("FlowStartedEvent handler failed", exc_info=True)

        get_env_context()

        context = self._pending_feedback_context
        emit = context.emit
        default_outcome = context.default_outcome
@@ -1818,6 +1865,27 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
        except (AttributeError, TypeError):
            return ""  # Safely handle any unexpected attribute access issues

    @property
    def execution_id(self) -> str:
        """Stable identifier for this flow execution.

        Separate from ``flow_id`` / ``state.id``, which consumers may
        override via ``kickoff(inputs={"id": ...})`` to resume a persisted
        flow. ``execution_id`` is never affected by ``inputs`` and stays
        stable for the lifetime of a single run, so it is the correct key
        for telemetry, tracing, and any external correlation that must
        uniquely identify a single execution even when callers pass an
        ``id`` in ``inputs``.

        Defaults to a fresh ``uuid4`` per ``Flow`` instance; assign to
        override when an outer system already has an execution identity.
        """
        return self._execution_id

    @execution_id.setter
    def execution_id(self, value: str) -> None:
        self._execution_id = value
|
||||
def _initialize_state(self, inputs: dict[str, Any]) -> None:
|
||||
"""Initialize or update flow state with new inputs.
|
||||
|
||||
@@ -2004,7 +2072,6 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
|
||||
restored = apply_checkpoint(self, from_checkpoint)
|
||||
if restored is not None:
|
||||
return restored.kickoff(inputs=inputs, input_files=input_files)
|
||||
get_env_context()
|
||||
if self.stream:
|
||||
result_holder: list[Any] = []
|
||||
current_task_info: TaskInfo = {
|
||||
@@ -2132,9 +2199,9 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
|
||||
flow_id_token = None
|
||||
request_id_token = None
|
||||
if current_flow_id.get() is None:
|
||||
flow_id_token = current_flow_id.set(self.flow_id)
|
||||
flow_id_token = current_flow_id.set(self.execution_id)
|
||||
if current_flow_request_id.get() is None:
|
||||
request_id_token = current_flow_request_id.set(self.flow_id)
|
||||
request_id_token = current_flow_request_id.set(self.execution_id)
|
||||
|
||||
try:
|
||||
# Reset flow state for fresh execution unless restoring from persistence
|
||||
@@ -2206,9 +2273,16 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
|
||||
f"Flow started with ID: {self.flow_id}", color="bold magenta"
|
||||
)
|
||||
|
||||
# After FlowStarted (when not suppressed): env events must not pre-empt
|
||||
# trace batch init with implicit "crew" execution_type.
|
||||
get_env_context()
|
||||
|
||||
if inputs is not None and "id" not in inputs:
|
||||
self._initialize_state(inputs)
|
||||
|
||||
if self._is_execution_resuming:
|
||||
await self._replay_recorded_events()
|
||||
|
||||
try:
|
||||
# Determine which start methods to execute at kickoff
|
||||
# Conditional start methods (with __trigger_methods__) are only triggered by their conditions
|
||||
@@ -2356,6 +2430,44 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
|
||||
"""
|
||||
return await self.kickoff_async(inputs, input_files, from_checkpoint)
|
||||
|
||||
async def _replay_recorded_events(self) -> None:
|
||||
"""Dispatch recorded ``MethodExecution*`` events from the event record."""
|
||||
state = crewai_event_bus.runtime_state
|
||||
if state is None:
|
||||
return
|
||||
record = state.event_record
|
||||
if len(record) == 0:
|
||||
return
|
||||
|
||||
replayable = (
|
||||
MethodExecutionStartedEvent,
|
||||
MethodExecutionFinishedEvent,
|
||||
MethodExecutionFailedEvent,
|
||||
)
|
||||
flow_name = self.name or self.__class__.__name__
|
||||
nodes = sorted(
|
||||
(
|
||||
n
|
||||
for n in record.all_nodes()
|
||||
if isinstance(n.event, replayable)
|
||||
and n.event.flow_name == flow_name
|
||||
and n.event.method_name in self._completed_methods
|
||||
),
|
||||
key=lambda n: n.event.emission_sequence or 0,
|
||||
)
|
||||
|
||||
for node in nodes:
|
||||
future = crewai_event_bus.replay(self, node.event)
|
||||
if future is not None:
|
||||
try:
|
||||
await asyncio.wrap_future(future)
|
||||
except Exception:
|
||||
logger.warning(
|
||||
"Replayed event handler failed: %s",
|
||||
node.event.type,
|
||||
exc_info=True,
|
||||
)
|
||||
|
||||
async def _execute_start_method(self, start_method_name: FlowMethodName) -> None:
|
||||
"""Executes a flow's start method and its triggered listeners.
|
||||
|
||||
|
||||
@@ -175,6 +175,16 @@ LLM_CONTEXT_WINDOW_SIZES: Final[dict[str, int]] = {
    "us.amazon.nova-pro-v1:0": 300000,
    "us.amazon.nova-micro-v1:0": 128000,
    "us.amazon.nova-lite-v1:0": 300000,
    # Claude 4 models
    "us.anthropic.claude-opus-4-7": 1000000,
    "us.anthropic.claude-sonnet-4-6": 1000000,
    "us.anthropic.claude-opus-4-6-v1": 1000000,
    "us.anthropic.claude-opus-4-5-20251101-v1:0": 200000,
    "us.anthropic.claude-haiku-4-5-20251001-v1:0": 200000,
    "us.anthropic.claude-sonnet-4-5-20250929-v1:0": 200000,
    "us.anthropic.claude-opus-4-1-20250805-v1:0": 200000,
    "us.anthropic.claude-opus-4-20250514-v1:0": 200000,
    "us.anthropic.claude-sonnet-4-20250514-v1:0": 200000,
    "us.anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
    "us.anthropic.claude-3-5-haiku-20241022-v1:0": 200000,
    "us.anthropic.claude-3-5-sonnet-20241022-v2:0": 200000,
@@ -193,15 +203,44 @@ LLM_CONTEXT_WINDOW_SIZES: Final[dict[str, int]] = {
    "eu.anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
    "eu.anthropic.claude-3-sonnet-20240229-v1:0": 200000,
    "eu.anthropic.claude-3-haiku-20240307-v1:0": 200000,
    # Claude 4 EU
    "eu.anthropic.claude-opus-4-7": 1000000,
    "eu.anthropic.claude-sonnet-4-6": 1000000,
    "eu.anthropic.claude-opus-4-6-v1": 1000000,
    "eu.anthropic.claude-opus-4-5-20251101-v1:0": 200000,
    "eu.anthropic.claude-haiku-4-5-20251001-v1:0": 200000,
    "eu.anthropic.claude-sonnet-4-5-20250929-v1:0": 200000,
    "eu.anthropic.claude-opus-4-1-20250805-v1:0": 200000,
    "eu.anthropic.claude-opus-4-20250514-v1:0": 200000,
    "eu.anthropic.claude-sonnet-4-20250514-v1:0": 200000,
    "eu.meta.llama3-2-3b-instruct-v1:0": 131000,
    "eu.meta.llama3-2-1b-instruct-v1:0": 131000,
    "apac.anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
    "apac.anthropic.claude-3-5-sonnet-20241022-v2:0": 200000,
    "apac.anthropic.claude-3-sonnet-20240229-v1:0": 200000,
    "apac.anthropic.claude-3-haiku-20240307-v1:0": 200000,
    # Claude 4 APAC
    "apac.anthropic.claude-opus-4-7": 1000000,
    "apac.anthropic.claude-sonnet-4-6": 1000000,
    "apac.anthropic.claude-opus-4-6-v1": 1000000,
    "apac.anthropic.claude-opus-4-5-20251101-v1:0": 200000,
    "apac.anthropic.claude-haiku-4-5-20251001-v1:0": 200000,
    "apac.anthropic.claude-sonnet-4-5-20250929-v1:0": 200000,
    "apac.anthropic.claude-opus-4-1-20250805-v1:0": 200000,
    "apac.anthropic.claude-opus-4-20250514-v1:0": 200000,
    "apac.anthropic.claude-sonnet-4-20250514-v1:0": 200000,
    "amazon.nova-pro-v1:0": 300000,
    "amazon.nova-micro-v1:0": 128000,
    "amazon.nova-lite-v1:0": 300000,
    "anthropic.claude-opus-4-7": 1000000,
    "anthropic.claude-sonnet-4-6": 1000000,
    "anthropic.claude-opus-4-6-v1": 1000000,
    "anthropic.claude-opus-4-5-20251101-v1:0": 200000,
    "anthropic.claude-haiku-4-5-20251001-v1:0": 200000,
    "anthropic.claude-sonnet-4-5-20250929-v1:0": 200000,
    "anthropic.claude-opus-4-1-20250805-v1:0": 200000,
    "anthropic.claude-opus-4-20250514-v1:0": 200000,
    "anthropic.claude-sonnet-4-20250514-v1:0": 200000,
    "anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
    "anthropic.claude-3-5-haiku-20241022-v1:0": 200000,
    "anthropic.claude-3-5-sonnet-20241022-v2:0": 200000,

@@ -423,6 +423,34 @@ AZURE_MODELS: list[AzureModels] = [


BedrockModels: TypeAlias = Literal[
    # Inference profiles (regional) - Claude 4
    "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
    "us.anthropic.claude-sonnet-4-20250514-v1:0",
    "us.anthropic.claude-opus-4-5-20251101-v1:0",
    "us.anthropic.claude-opus-4-20250514-v1:0",
    "us.anthropic.claude-opus-4-1-20250805-v1:0",
    "us.anthropic.claude-haiku-4-5-20251001-v1:0",
    "us.anthropic.claude-sonnet-4-6",
    "us.anthropic.claude-opus-4-6-v1",
    # Inference profiles - shorter versions
    "us.anthropic.claude-sonnet-4-5-v1:0",
    "us.anthropic.claude-opus-4-5-v1:0",
    "us.anthropic.claude-opus-4-6-v1:0",
    "us.anthropic.claude-haiku-4-5-v1:0",
    "eu.anthropic.claude-sonnet-4-5-v1:0",
    "eu.anthropic.claude-opus-4-5-v1:0",
    "eu.anthropic.claude-haiku-4-5-v1:0",
    "apac.anthropic.claude-sonnet-4-5-v1:0",
    "apac.anthropic.claude-opus-4-5-v1:0",
    "apac.anthropic.claude-haiku-4-5-v1:0",
    # Global inference profiles
    "global.anthropic.claude-sonnet-4-5-20250929-v1:0",
    "global.anthropic.claude-sonnet-4-20250514-v1:0",
    "global.anthropic.claude-opus-4-5-20251101-v1:0",
    "global.anthropic.claude-opus-4-6-v1",
    "global.anthropic.claude-haiku-4-5-20251001-v1:0",
    "global.anthropic.claude-sonnet-4-6",
    # Direct model IDs
    "ai21.jamba-1-5-large-v1:0",
    "ai21.jamba-1-5-mini-v1:0",
    "amazon.nova-lite-v1:0",
@@ -496,6 +524,34 @@ BedrockModels: TypeAlias = Literal[
    "twelvelabs.pegasus-1-2-v1:0",
]
BEDROCK_MODELS: list[BedrockModels] = [
    # Inference profiles (regional) - Claude 4
    "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
    "us.anthropic.claude-sonnet-4-20250514-v1:0",
    "us.anthropic.claude-opus-4-5-20251101-v1:0",
    "us.anthropic.claude-opus-4-20250514-v1:0",
    "us.anthropic.claude-opus-4-1-20250805-v1:0",
    "us.anthropic.claude-haiku-4-5-20251001-v1:0",
    "us.anthropic.claude-sonnet-4-6",
    "us.anthropic.claude-opus-4-6-v1",
    # Inference profiles - shorter versions
    "us.anthropic.claude-sonnet-4-5-v1:0",
    "us.anthropic.claude-opus-4-5-v1:0",
    "us.anthropic.claude-opus-4-6-v1:0",
    "us.anthropic.claude-haiku-4-5-v1:0",
    "eu.anthropic.claude-sonnet-4-5-v1:0",
    "eu.anthropic.claude-opus-4-5-v1:0",
    "eu.anthropic.claude-haiku-4-5-v1:0",
    "apac.anthropic.claude-sonnet-4-5-v1:0",
    "apac.anthropic.claude-opus-4-5-v1:0",
    "apac.anthropic.claude-haiku-4-5-v1:0",
    # Global inference profiles
    "global.anthropic.claude-sonnet-4-5-20250929-v1:0",
    "global.anthropic.claude-sonnet-4-20250514-v1:0",
    "global.anthropic.claude-opus-4-5-20251101-v1:0",
    "global.anthropic.claude-opus-4-6-v1",
    "global.anthropic.claude-haiku-4-5-20251001-v1:0",
    "global.anthropic.claude-sonnet-4-6",
    # Direct model IDs
    "ai21.jamba-1-5-large-v1:0",
    "ai21.jamba-1-5-mini-v1:0",
    "amazon.nova-lite-v1:0",

@@ -183,11 +183,6 @@ class AzureCompletion(BaseLLM):
            AzureCompletion._is_azure_openai_endpoint(self.endpoint)
        )

        if not self.api_key:
            raise ValueError(
                "Azure API key is required. Set AZURE_API_KEY environment "
                "variable or pass api_key parameter."
            )
        if not self.endpoint:
            raise ValueError(
                "Azure endpoint is required. Set AZURE_ENDPOINT environment "
@@ -195,12 +190,39 @@ class AzureCompletion(BaseLLM):
            )
        client_kwargs: dict[str, Any] = {
            "endpoint": self.endpoint,
            "credential": AzureKeyCredential(self.api_key),
            "credential": self._resolve_credential(),
        }
        if self.api_version:
            client_kwargs["api_version"] = self.api_version
        return client_kwargs

    def _resolve_credential(self) -> Any:
        """Return an Azure credential, preferring the API key when set.

        Without an API key, fall back to ``DefaultAzureCredential`` from
        ``azure-identity``. That chain auto-detects the standard keyless
        paths the customer's environment may provide — OIDC Workload
        Identity Federation (``AZURE_FEDERATED_TOKEN_FILE`` +
        ``AZURE_TENANT_ID`` + ``AZURE_CLIENT_ID``), Managed Identity on
        AKS/Azure VMs, environment-configured service principals, and
        developer tools like the Azure CLI. Installing ``azure-identity``
        is what enables these paths; without it we raise the existing
        API-key error.
        """
        if self.api_key:
            return AzureKeyCredential(self.api_key)

        try:
            from azure.identity import DefaultAzureCredential
        except ImportError:
            raise ValueError(
                "Azure API key is required when azure-identity is not "
                "installed. Set AZURE_API_KEY, or install azure-identity "
                'for keyless auth: uv add "crewai[azure-ai-inference]"'
            ) from None

        return DefaultAzureCredential()

    def _get_sync_client(self) -> Any:
        if self._client is None:
            self._client = self._build_sync_client()

@@ -2075,6 +2075,9 @@ class BedrockCompletion(BaseLLM):

        # Context window sizes for common Bedrock models
        context_windows = {
            "anthropic.claude-sonnet-4": 200000,
            "anthropic.claude-opus-4": 200000,
            "anthropic.claude-haiku-4": 200000,
            "anthropic.claude-3-5-sonnet": 200000,
            "anthropic.claude-3-5-haiku": 200000,
            "anthropic.claude-3-opus": 200000,

@@ -2,9 +2,17 @@

This module provides native MCP client functionality, allowing CrewAI agents
to connect to any MCP-compliant server using various transport types.

Heavy imports (MCPClient, MCPToolResolver, BaseTransport, TransportType) are
lazy-loaded on first access to avoid pulling in the ``mcp`` SDK (~400ms)
when only lightweight config/filter types are needed.
"""

from crewai.mcp.client import MCPClient
from __future__ import annotations

import importlib
from typing import TYPE_CHECKING, Any

from crewai.mcp.config import (
    MCPServerConfig,
    MCPServerHTTP,
@@ -18,8 +26,28 @@ from crewai.mcp.filters import (
    create_dynamic_tool_filter,
    create_static_tool_filter,
)
from crewai.mcp.tool_resolver import MCPToolResolver
from crewai.mcp.transports.base import BaseTransport, TransportType

if TYPE_CHECKING:
    from crewai.mcp.client import MCPClient
    from crewai.mcp.tool_resolver import MCPToolResolver
    from crewai.mcp.transports.base import BaseTransport, TransportType

_LAZY: dict[str, tuple[str, str]] = {
    "MCPClient": ("crewai.mcp.client", "MCPClient"),
    "MCPToolResolver": ("crewai.mcp.tool_resolver", "MCPToolResolver"),
    "BaseTransport": ("crewai.mcp.transports.base", "BaseTransport"),
    "TransportType": ("crewai.mcp.transports.base", "TransportType"),
}


def __getattr__(name: str) -> Any:
    if name in _LAZY:
        mod_path, attr = _LAZY[name]
        mod = importlib.import_module(mod_path)
        val = getattr(mod, attr)
        globals()[name] = val  # cache for subsequent access
        return val
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")


__all__ = [

@@ -237,6 +237,8 @@ def crew(
        self.tasks = instantiated_tasks

        crew_instance: Crew = _call_method(meth, self, *args, **kwargs)
        if "name" not in crew_instance.model_fields_set:
            crew_instance.name = getattr(self, "_crew_name", None) or crew_instance.name

    def callback_wrapper(
        hook: Callable[Concatenate[CrewInstance, P2], R2], instance: CrewInstance

@@ -10,12 +10,22 @@ from __future__ import annotations

import json
import logging
import threading
import time
from typing import Any

from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.crew import Crew
from crewai.events.base_events import BaseEvent
from crewai.events.event_bus import CrewAIEventsBus, crewai_event_bus
from crewai.events.event_bus import CrewAIEventsBus, crewai_event_bus, is_replaying
from crewai.events.types.checkpoint_events import (
    CheckpointBaseEvent,
    CheckpointCompletedEvent,
    CheckpointFailedEvent,
    CheckpointForkBaseEvent,
    CheckpointPrunedEvent,
    CheckpointRestoreBaseEvent,
    CheckpointStartedEvent,
)
from crewai.flow.flow import Flow
from crewai.state.checkpoint_config import CheckpointConfig
from crewai.state.runtime import RuntimeState, _prepare_entities
@@ -53,12 +63,26 @@ def _resolve(value: CheckpointConfig | bool | None) -> CheckpointConfig | None |
    if isinstance(value, CheckpointConfig):
        _ensure_handlers_registered()
        return value
    if value is True:
    if value:
        _ensure_handlers_registered()
        return CheckpointConfig()
    if value is False:
        return _SENTINEL
    return None  # None = inherit
    return None


def _resolve_from_agent(agent: BaseAgent) -> CheckpointConfig | None:
    """Resolve a checkpoint config starting from an agent, walking to its crew."""
    result = _resolve(agent.checkpoint)
    if isinstance(result, CheckpointConfig):
        return result
    if result is _SENTINEL:
        return None
    crew = agent.crew
    if isinstance(crew, Crew):
        crew_result = _resolve(crew.checkpoint)
        return crew_result if isinstance(crew_result, CheckpointConfig) else None
    return None


def _find_checkpoint(source: Any) -> CheckpointConfig | None:
@@ -77,28 +101,11 @@ def _find_checkpoint(source: Any) -> CheckpointConfig | None:
        result = _resolve(source.checkpoint)
        return result if isinstance(result, CheckpointConfig) else None
    if isinstance(source, BaseAgent):
        result = _resolve(source.checkpoint)
        if isinstance(result, CheckpointConfig):
            return result
        if result is _SENTINEL:
            return None
        crew = source.crew
        if isinstance(crew, Crew):
            result = _resolve(crew.checkpoint)
            return result if isinstance(result, CheckpointConfig) else None
        return None
        return _resolve_from_agent(source)
    if isinstance(source, Task):
        agent = source.agent
        if isinstance(agent, BaseAgent):
            result = _resolve(agent.checkpoint)
            if isinstance(result, CheckpointConfig):
                return result
            if result is _SENTINEL:
                return None
            crew = agent.crew
            if isinstance(crew, Crew):
                result = _resolve(crew.checkpoint)
                return result if isinstance(result, CheckpointConfig) else None
            return _resolve_from_agent(agent)
        return None
    return None

@@ -107,27 +114,106 @@ def _do_checkpoint(
    state: RuntimeState, cfg: CheckpointConfig, event: BaseEvent | None = None
) -> None:
    """Write a checkpoint and prune old ones if configured."""
    _prepare_entities(state.root)
    payload = state.model_dump(mode="json")
    if event is not None:
        payload["trigger"] = event.type
    data = json.dumps(payload)
    location = cfg.provider.checkpoint(
        data,
        cfg.location,
        parent_id=state._parent_id,
        branch=state._branch,
    )
    state._chain_lineage(cfg.provider, location)
    provider_name: str = type(cfg.provider).__name__
    trigger: str | None = event.type if event is not None else None
    context: dict[str, Any] = {
        "task_id": event.task_id if event is not None else None,
        "task_name": event.task_name if event is not None else None,
        "agent_id": event.agent_id if event is not None else None,
        "agent_role": event.agent_role if event is not None else None,
    }

    checkpoint_id: str = cfg.provider.extract_id(location)
    parent_id_snapshot: str | None = state._parent_id
    branch_snapshot: str = state._branch

    crewai_event_bus.emit(
        cfg,
        CheckpointStartedEvent(
            location=cfg.location,
            provider=provider_name,
            trigger=trigger,
            branch=branch_snapshot,
            parent_id=parent_id_snapshot,
            **context,
        ),
    )

    start: float = time.perf_counter()
    try:
        _prepare_entities(state.root)
        payload = state.model_dump(mode="json")
        if event is not None:
            payload["trigger"] = event.type
        data = json.dumps(payload)
        location = cfg.provider.checkpoint(
            data,
            cfg.location,
            parent_id=parent_id_snapshot,
            branch=branch_snapshot,
        )
        state._chain_lineage(cfg.provider, location)
        checkpoint_id: str = cfg.provider.extract_id(location)
    except Exception as exc:
        crewai_event_bus.emit(
            cfg,
            CheckpointFailedEvent(
                location=cfg.location,
                provider=provider_name,
                trigger=trigger,
                branch=branch_snapshot,
                parent_id=parent_id_snapshot,
                error=str(exc),
                **context,
            ),
        )
        raise

    duration_ms: float = (time.perf_counter() - start) * 1000.0
    msg: str = (
        f"Checkpoint saved. Resume with: crewai checkpoint resume {checkpoint_id}"
    )
    logger.info(msg)

    crewai_event_bus.emit(
        cfg,
        CheckpointCompletedEvent(
            location=location,
            provider=provider_name,
            trigger=trigger,
            branch=branch_snapshot,
            parent_id=parent_id_snapshot,
            checkpoint_id=checkpoint_id,
            duration_ms=duration_ms,
            **context,
        ),
    )

    if cfg.max_checkpoints is not None:
        cfg.provider.prune(cfg.location, cfg.max_checkpoints, branch=state._branch)
        try:
            removed_count: int = cfg.provider.prune(
                cfg.location, cfg.max_checkpoints, branch=branch_snapshot
            )
        except Exception:
            logger.warning(
                "Checkpoint prune failed for %s (branch=%s)",
                cfg.location,
                branch_snapshot,
                exc_info=True,
            )
            return
        crewai_event_bus.emit(
            cfg,
            CheckpointPrunedEvent(
                location=cfg.location,
                provider=provider_name,
                trigger=trigger,
                branch=branch_snapshot,
                parent_id=parent_id_snapshot,
                removed_count=removed_count,
                max_checkpoints=cfg.max_checkpoints,
                **context,
            ),
        )


def _should_checkpoint(source: Any, event: BaseEvent) -> CheckpointConfig | None:
@@ -142,6 +228,13 @@ def _should_checkpoint(source: Any, event: BaseEvent) -> CheckpointConfig | None

def _on_any_event(source: Any, event: BaseEvent, state: Any) -> None:
    """Sync handler registered on every event class."""
    if is_replaying():
        return
    if isinstance(
        event,
        (CheckpointBaseEvent, CheckpointForkBaseEvent, CheckpointRestoreBaseEvent),
    ):
        return
    cfg = _should_checkpoint(source, event)
    if cfg is None:
        return
@@ -161,7 +254,8 @@ def _register_all_handlers(event_bus: CrewAIEventsBus) -> None:
    seen: set[type] = set()

    def _collect(cls: type[BaseEvent]) -> None:
        for sub in cls.__subclasses__():
        subclasses: list[type[BaseEvent]] = cls.__subclasses__()
        for sub in subclasses:
            if sub not in seen:
                seen.add(sub)
                type_field = sub.model_fields.get("type")

@@ -39,7 +39,8 @@ def _build_event_type_map() -> None:
    """Populate _event_type_map from all BaseEvent subclasses."""

    def _collect(cls: type[BaseEvent]) -> None:
        for sub in cls.__subclasses__():
        subclasses: list[type[BaseEvent]] = cls.__subclasses__()
        for sub in subclasses:
            type_field = sub.model_fields.get("type")
            if type_field and type_field.default:
                _event_type_map[type_field.default] = sub
@@ -196,6 +197,21 @@ class EventRecord(BaseModel):
            node for node in self.nodes.values() if not node.neighbors("parent")
        ]

    def all_nodes(self) -> list[EventNode]:
        """Return a snapshot of every node under the read lock.

        Returns:
            A list copy of the current nodes, safe to iterate without holding
            the lock.
        """
        with self._lock.r_locked():
            return list(self.nodes.values())

    def clear(self) -> None:
        """Remove all nodes from the record under the write lock."""
        with self._lock.w_locked():
            self.nodes.clear()

    def __len__(self) -> int:
        with self._lock.r_locked():
            return len(self.nodes)

@@ -61,13 +61,16 @@ class BaseProvider(BaseModel, ABC):
        ...

    @abstractmethod
    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> None:
    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> int:
        """Remove old checkpoints, keeping at most *max_keep* per branch.

        Args:
            location: The storage destination passed to ``checkpoint``.
            max_keep: Maximum number of checkpoints to retain.
            branch: Only prune checkpoints on this branch.

        Returns:
            The number of checkpoints removed.
        """
        ...

@@ -95,17 +95,20 @@ class JsonProvider(BaseProvider):
            await f.write(data)
        return str(file_path)

    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> None:
    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> int:
        """Remove oldest checkpoint files beyond *max_keep* on a branch."""
        _safe_branch(location, branch)
        branch_dir = os.path.join(location, branch)
        pattern = os.path.join(branch_dir, "*.json")
        files = sorted(glob.glob(pattern), key=os.path.getmtime)
        removed = 0
        for path in files if max_keep == 0 else files[:-max_keep]:
            try:
                os.remove(path)
                removed += 1
            except OSError:  # noqa: PERF203
                logger.debug("Failed to remove %s", path, exc_info=True)
        return removed

    def extract_id(self, location: str) -> str:
        """Extract the checkpoint ID from a file path.

@@ -111,11 +111,13 @@ class SqliteProvider(BaseProvider):
            await db.commit()
        return f"{location}#{checkpoint_id}"

    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> None:
    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> int:
        """Remove oldest checkpoint rows beyond *max_keep* on a branch."""
        with sqlite3.connect(location) as conn:
            conn.execute(_PRUNE, (branch, branch, max_keep))
            cursor = conn.execute(_PRUNE, (branch, branch, max_keep))
            removed: int = cursor.rowcount
            conn.commit()
        return max(removed, 0)

    def extract_id(self, location: str) -> str:
        """Extract the checkpoint ID from a ``db_path#id`` string."""

@@ -10,6 +10,7 @@ via ``RuntimeState.model_rebuild()``.
from __future__ import annotations

import logging
import time
from typing import TYPE_CHECKING, Any
import uuid

@@ -23,6 +24,17 @@ from pydantic import (
)

from crewai.context import capture_execution_context
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.checkpoint_events import (
    CheckpointCompletedEvent,
    CheckpointFailedEvent,
    CheckpointForkCompletedEvent,
    CheckpointForkStartedEvent,
    CheckpointRestoreCompletedEvent,
    CheckpointRestoreFailedEvent,
    CheckpointRestoreStartedEvent,
    CheckpointStartedEvent,
)
from crewai.state.checkpoint_config import CheckpointConfig
from crewai.state.event_record import EventRecord
from crewai.state.provider.core import BaseProvider
@@ -89,7 +101,7 @@ def _migrate(data: dict[str, Any]) -> dict[str, Any]:
    """
    raw = data.get("crewai_version")
    current = Version(get_crewai_version())
    stored = Version(raw) if raw else Version("0.0.0")
    stored = Version(raw) if isinstance(raw, str) and raw else Version("0.0.0")

    if raw is None:
        logger.warning("Checkpoint has no crewai_version — treating as 0.0.0")
@@ -159,6 +171,63 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
        self._checkpoint_id = provider.extract_id(location)
        self._parent_id = self._checkpoint_id

    def _begin_checkpoint(self, location: str) -> tuple[str, str | None, str, float]:
        """Emit the start event and return the invariant context for a checkpoint."""
        provider_name: str = type(self._provider).__name__
        parent_id_snapshot: str | None = self._parent_id
        branch_snapshot: str = self._branch
        crewai_event_bus.emit(
            self,
            CheckpointStartedEvent(
                location=location,
                provider=provider_name,
                branch=branch_snapshot,
                parent_id=parent_id_snapshot,
            ),
        )
        return provider_name, parent_id_snapshot, branch_snapshot, time.perf_counter()

    def _emit_checkpoint_failed(
        self,
        location: str,
        provider_name: str,
        branch_snapshot: str,
        parent_id_snapshot: str | None,
        exc: Exception,
    ) -> None:
        """Emit the failure event for a checkpoint write."""
        crewai_event_bus.emit(
            self,
            CheckpointFailedEvent(
                location=location,
                provider=provider_name,
                branch=branch_snapshot,
                parent_id=parent_id_snapshot,
                error=str(exc),
            ),
        )

    def _emit_checkpoint_completed(
        self,
        result: str,
        provider_name: str,
        branch_snapshot: str,
        parent_id_snapshot: str | None,
        start: float,
    ) -> None:
        """Emit the completion event for a successful checkpoint write."""
        crewai_event_bus.emit(
            self,
            CheckpointCompletedEvent(
                location=result,
                provider=provider_name,
                branch=branch_snapshot,
                parent_id=parent_id_snapshot,
                checkpoint_id=self._provider.extract_id(result),
                duration_ms=(time.perf_counter() - start) * 1000.0,
            ),
        )

    def checkpoint(self, location: str) -> str:
        """Write a checkpoint.

@@ -169,14 +238,27 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
        Returns:
            A location identifier for the saved checkpoint.
        """
        _prepare_entities(self.root)
        result = self._provider.checkpoint(
            self.model_dump_json(),
            location,
            parent_id=self._parent_id,
            branch=self._branch,
        provider_name, parent_id_snapshot, branch_snapshot, start = (
            self._begin_checkpoint(location)
        )
        try:
            _prepare_entities(self.root)
            result = self._provider.checkpoint(
                self.model_dump_json(),
                location,
                parent_id=parent_id_snapshot,
                branch=branch_snapshot,
            )
            self._chain_lineage(self._provider, result)
        except Exception as exc:
            self._emit_checkpoint_failed(
                location, provider_name, branch_snapshot, parent_id_snapshot, exc
            )
            raise

        self._emit_checkpoint_completed(
            result, provider_name, branch_snapshot, parent_id_snapshot, start
        )
        self._chain_lineage(self._provider, result)
        return result

    async def acheckpoint(self, location: str) -> str:
@@ -189,14 +271,27 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
        Returns:
            A location identifier for the saved checkpoint.
        """
        _prepare_entities(self.root)
        result = await self._provider.acheckpoint(
            self.model_dump_json(),
            location,
            parent_id=self._parent_id,
            branch=self._branch,
        provider_name, parent_id_snapshot, branch_snapshot, start = (
            self._begin_checkpoint(location)
        )
        try:
            _prepare_entities(self.root)
            result = await self._provider.acheckpoint(
                self.model_dump_json(),
                location,
                parent_id=parent_id_snapshot,
                branch=branch_snapshot,
            )
            self._chain_lineage(self._provider, result)
        except Exception as exc:
            self._emit_checkpoint_failed(
                location, provider_name, branch_snapshot, parent_id_snapshot, exc
            )
            raise

        self._emit_checkpoint_completed(
            result, provider_name, branch_snapshot, parent_id_snapshot, start
        )
        self._chain_lineage(self._provider, result)
        return result

    def fork(self, branch: str | None = None) -> None:
@@ -211,11 +306,32 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
        times without collisions.
        """
        if branch:
            self._branch = branch
            new_branch = branch
        elif self._checkpoint_id:
            self._branch = f"fork/{self._checkpoint_id}_{uuid.uuid4().hex[:6]}"
            new_branch = f"fork/{self._checkpoint_id}_{uuid.uuid4().hex[:6]}"
        else:
            self._branch = f"fork/{uuid.uuid4().hex[:8]}"
            new_branch = f"fork/{uuid.uuid4().hex[:8]}"

        parent_branch: str | None = self._branch
        parent_checkpoint_id: str | None = self._checkpoint_id

        crewai_event_bus.emit(
            self,
            CheckpointForkStartedEvent(
                branch=new_branch,
                parent_branch=parent_branch,
                parent_checkpoint_id=parent_checkpoint_id,
            ),
        )
        self._branch = new_branch
        crewai_event_bus.emit(
            self,
            CheckpointForkCompletedEvent(
                branch=new_branch,
                parent_branch=parent_branch,
|
||||
parent_checkpoint_id=parent_checkpoint_id,
|
||||
),
|
||||
)
|
||||
    @classmethod
    def from_checkpoint(cls, config: CheckpointConfig, **kwargs: Any) -> RuntimeState:
@@ -233,13 +349,41 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
        if config.restore_from is None:
            raise ValueError("CheckpointConfig.restore_from must be set")
        location = str(config.restore_from)
        provider = detect_provider(location)
        raw = provider.from_checkpoint(location)
        state = cls.model_validate_json(raw, **kwargs)
        state._provider = provider
        checkpoint_id = provider.extract_id(location)
        state._checkpoint_id = checkpoint_id
        state._parent_id = checkpoint_id

        crewai_event_bus.emit(config, CheckpointRestoreStartedEvent(location=location))
        start: float = time.perf_counter()
        provider_name: str | None = None
        try:
            provider = detect_provider(location)
            provider_name = type(provider).__name__
            raw = provider.from_checkpoint(location)
            state = cls.model_validate_json(raw, **kwargs)
            state._provider = provider
            checkpoint_id = provider.extract_id(location)
            state._checkpoint_id = checkpoint_id
            state._parent_id = checkpoint_id
        except Exception as exc:
            crewai_event_bus.emit(
                config,
                CheckpointRestoreFailedEvent(
                    location=location,
                    provider=provider_name,
                    error=str(exc),
                ),
            )
            raise

        crewai_event_bus.emit(
            config,
            CheckpointRestoreCompletedEvent(
                location=location,
                provider=provider_name,
                checkpoint_id=checkpoint_id,
                branch=state._branch,
                parent_id=state._parent_id,
                duration_ms=(time.perf_counter() - start) * 1000.0,
            ),
        )
        return state

    @classmethod
@@ -260,13 +404,41 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
        if config.restore_from is None:
            raise ValueError("CheckpointConfig.restore_from must be set")
        location = str(config.restore_from)
        provider = detect_provider(location)
        raw = await provider.afrom_checkpoint(location)
        state = cls.model_validate_json(raw, **kwargs)
        state._provider = provider
        checkpoint_id = provider.extract_id(location)
        state._checkpoint_id = checkpoint_id
        state._parent_id = checkpoint_id

        crewai_event_bus.emit(config, CheckpointRestoreStartedEvent(location=location))
        start: float = time.perf_counter()
        provider_name: str | None = None
        try:
            provider = detect_provider(location)
            provider_name = type(provider).__name__
            raw = await provider.afrom_checkpoint(location)
            state = cls.model_validate_json(raw, **kwargs)
            state._provider = provider
            checkpoint_id = provider.extract_id(location)
            state._checkpoint_id = checkpoint_id
            state._parent_id = checkpoint_id
        except Exception as exc:
            crewai_event_bus.emit(
                config,
                CheckpointRestoreFailedEvent(
                    location=location,
                    provider=provider_name,
                    error=str(exc),
                ),
            )
            raise

        crewai_event_bus.emit(
            config,
            CheckpointRestoreCompletedEvent(
                location=location,
                provider=provider_name,
                checkpoint_id=checkpoint_id,
                branch=state._branch,
                parent_id=state._parent_id,
                duration_ms=(time.perf_counter() - start) * 1000.0,
            ),
        )
        return state

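The restore paths above follow one envelope: emit a started event, run the operation, then emit either a failed event (and re-raise) or a completed event carrying a `perf_counter`-based `duration_ms`. A minimal stdlib sketch of that pattern, with a plain list standing in for `crewai_event_bus` (all names here are illustrative):

```python
import time

events: list[dict] = []  # toy stand-in for the event bus


def restore(location: str, loader) -> str:
    """Emit started, then completed (with duration_ms) or failed + re-raise,
    mirroring the from_checkpoint() event envelope above."""
    events.append({"type": "started", "location": location})
    start = time.perf_counter()
    try:
        state = loader(location)
    except Exception as exc:
        events.append({"type": "failed", "location": location, "error": str(exc)})
        raise
    events.append({
        "type": "completed",
        "location": location,
        "duration_ms": (time.perf_counter() - start) * 1000.0,
    })
    return state


def _boom(location: str) -> str:
    raise IOError("missing")


restore("good.json", lambda loc: "state")
try:
    restore("bad.json", _boom)
except IOError:
    pass
```

Because the failure branch re-raises after emitting, callers still see the original exception while observers get a complete started/failed record.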
@@ -32,6 +32,7 @@ from pydantic import (
    field_validator,
    model_validator,
)
from pydantic.functional_serializers import PlainSerializer
from pydantic_core import PydanticCustomError
from typing_extensions import Self

@@ -86,6 +87,22 @@ from crewai.utilities.printer import PRINTER
from crewai.utilities.string_utils import interpolate_only


def _serialize_model_class(v: type[BaseModel] | None) -> dict[str, Any] | None:
    """Serialize a Pydantic model class reference to its JSON schema."""
    return v.model_json_schema() if v else None


def _deserialize_model_class(v: Any) -> type[BaseModel] | None:
    """Hydrate a model class reference from checkpoint data."""
    if v is None or isinstance(v, type):
        return v
    if isinstance(v, dict):
        from crewai.utilities.pydantic_schema_utils import create_model_from_schema

        return create_model_from_schema(v)
    return None


class Task(BaseModel):
    """Class that represents a task to be executed.

@@ -141,15 +158,33 @@ class Task(BaseModel):
        description="Whether the task should be executed asynchronously or not.",
        default=False,
    )
    output_json: type[BaseModel] | None = Field(
    output_json: Annotated[
        type[BaseModel] | None,
        BeforeValidator(_deserialize_model_class),
        PlainSerializer(
            _serialize_model_class, return_type=dict | None, when_used="json"
        ),
    ] = Field(
        description="A Pydantic model to be used to create a JSON output.",
        default=None,
    )
    output_pydantic: type[BaseModel] | None = Field(
    output_pydantic: Annotated[
        type[BaseModel] | None,
        BeforeValidator(_deserialize_model_class),
        PlainSerializer(
            _serialize_model_class, return_type=dict | None, when_used="json"
        ),
    ] = Field(
        description="A Pydantic model to be used to create a Pydantic output.",
        default=None,
    )
    response_model: type[BaseModel] | None = Field(
    response_model: Annotated[
        type[BaseModel] | None,
        BeforeValidator(_deserialize_model_class),
        PlainSerializer(
            _serialize_model_class, return_type=dict | None, when_used="json"
        ),
    ] = Field(
        description="A Pydantic model for structured LLM outputs using native provider features.",
        default=None,
    )
@@ -189,7 +224,13 @@ class Task(BaseModel):
        description="Whether the task should instruct the agent to return the final answer formatted in Markdown",
        default=False,
    )
    converter_cls: type[Converter] | None = Field(
    converter_cls: Annotated[
        type[Converter] | None,
        BeforeValidator(lambda v: v if v is None or isinstance(v, type) else None),
        PlainSerializer(
            _serialize_model_class, return_type=dict | None, when_used="json"
        ),
    ] = Field(
        description="A converter class used to export structured output",
        default=None,
    )
@@ -1241,12 +1282,26 @@ Follow these guidelines:
            tools=tools,
        )

        pydantic_output, json_output = self._export_output(result)
        if isinstance(result, BaseModel):
            raw = result.model_dump_json()
            if self.output_pydantic:
                pydantic_output = result
                json_output = None
            elif self.output_json:
                pydantic_output = None
                json_output = result.model_dump()
            else:
                pydantic_output = None
                json_output = None
        else:
            raw = result
            pydantic_output, json_output = self._export_output(result)

        task_output = TaskOutput(
            name=self.name or self.description,
            description=self.description,
            expected_output=self.expected_output,
            raw=result,
            raw=raw,
            pydantic=pydantic_output,
            json_dict=json_output,
            agent=agent.role,
@@ -1337,12 +1392,26 @@ Follow these guidelines:
            tools=tools,
        )

        pydantic_output, json_output = self._export_output(result)
        if isinstance(result, BaseModel):
            raw = result.model_dump_json()
            if self.output_pydantic:
                pydantic_output = result
                json_output = None
            elif self.output_json:
                pydantic_output = None
                json_output = result.model_dump()
            else:
                pydantic_output = None
                json_output = None
        else:
            raw = result
            pydantic_output, json_output = self._export_output(result)

        task_output = TaskOutput(
            name=self.name or self.description,
            description=self.description,
            expected_output=self.expected_output,
            raw=result,
            raw=raw,
            pydantic=pydantic_output,
            json_dict=json_output,
            agent=agent.role,

lib/crewai/src/crewai/tools/flow_tool.py (new file, 82 lines)
@@ -0,0 +1,82 @@
"""Wrap Flow classes as callable tools so agents can invoke them."""

from __future__ import annotations

import json
from typing import Any

from pydantic import BaseModel, Field

from crewai.tools.base_tool import BaseTool
from crewai.utilities.string_utils import sanitize_tool_name


class FlowToolInputSchema(BaseModel):
    """Default input schema for a FlowTool."""

    inputs: str = Field(
        default="{}",
        description=(
            "JSON string of key-value pairs to pass as inputs to the flow. "
            "Use '{}' if the flow requires no inputs."
        ),
    )


class FlowTool(BaseTool):
    """Wraps a Flow class as a BaseTool so an agent can invoke it.

    The tool instantiates the Flow, calls ``kickoff(inputs=...)`` and returns
    the result as a string.
    """

    name: str = ""
    description: str = ""
    flow_class: Any = Field(
        default=None,
        description="The Flow class (not instance) to wrap.",
        exclude=True,
    )
    args_schema: Any = FlowToolInputSchema

    def _run(self, inputs: str = "{}") -> str:
        """Instantiate the Flow, run kickoff, and return the result."""
        try:
            parsed_inputs = json.loads(inputs) if isinstance(inputs, str) else inputs
        except (json.JSONDecodeError, TypeError):
            parsed_inputs = {}

        if not isinstance(parsed_inputs, dict):
            parsed_inputs = {}

        flow_instance = self.flow_class()
        result = flow_instance.kickoff(inputs=parsed_inputs if parsed_inputs else None)
        return str(result)


def create_flow_tools(flows: list[type] | None) -> list[BaseTool]:
    """Convert a list of Flow classes into BaseTool wrappers.

    Args:
        flows: Flow classes (not instances) to wrap as tools.

    Returns:
        A list of FlowTool instances ready for agent use.
    """
    if not flows:
        return []

    tools: list[BaseTool] = []
    for flow_cls in flows:
        name = sanitize_tool_name(flow_cls.__name__)
        docstring = (flow_cls.__doc__ or "").strip()
        description = docstring if docstring else f"Run the {flow_cls.__name__} flow."

        tools.append(
            FlowTool(
                name=name,
                description=description,
                flow_class=flow_cls,
            )
        )
    return tools
lib/crewai/tests/events/test_event_replay.py (new file, 165 lines)
@@ -0,0 +1,165 @@
"""Tests for event bus replay dispatch and is_replaying flag."""

from __future__ import annotations

from typing import Any
from unittest.mock import patch

from crewai.events.event_bus import _replaying, crewai_event_bus, is_replaying
from crewai.events.types.flow_events import (
    MethodExecutionFinishedEvent,
    MethodExecutionStartedEvent,
)


def _make_started(method: str, event_id: str, sequence: int) -> MethodExecutionStartedEvent:
    """Build a MethodExecutionStartedEvent with explicit ids/sequence."""
    ev = MethodExecutionStartedEvent(
        method_name=method,
        flow_name="F",
        params={},
        state={},
    )
    ev.event_id = event_id
    ev.emission_sequence = sequence
    return ev


class TestReplayPreservesFields:
    """replay() must not overwrite event_id, parent_event_id, or emission_sequence."""

    def test_preserves_ids_and_sequence(self) -> None:
        captured: list[MethodExecutionStartedEvent] = []

        with crewai_event_bus.scoped_handlers():

            @crewai_event_bus.on(MethodExecutionStartedEvent)
            def _capture(_: Any, event: MethodExecutionStartedEvent) -> None:
                captured.append(event)

            ev = _make_started("outline", "orig-id-1", 42)
            ev.parent_event_id = "parent-abc"

            future = crewai_event_bus.replay(object(), ev)
            if future is not None:
                future.result(timeout=5.0)

            assert len(captured) == 1
            assert captured[0].event_id == "orig-id-1"
            assert captured[0].parent_event_id == "parent-abc"
            assert captured[0].emission_sequence == 42


class TestIsReplayingFlag:
    """is_replaying() must be True inside handlers dispatched via replay()."""

    def test_flag_true_during_replay(self) -> None:
        seen: list[bool] = []

        with crewai_event_bus.scoped_handlers():

            @crewai_event_bus.on(MethodExecutionStartedEvent)
            def _capture(_: Any, __: MethodExecutionStartedEvent) -> None:
                seen.append(is_replaying())

            ev = _make_started("m", "id-1", 1)
            future = crewai_event_bus.replay(object(), ev)
            if future is not None:
                future.result(timeout=5.0)

            assert seen == [True]
            assert is_replaying() is False

    def test_flag_false_during_emit(self) -> None:
        seen: list[bool] = []

        with crewai_event_bus.scoped_handlers():

            @crewai_event_bus.on(MethodExecutionStartedEvent)
            def _capture(_: Any, __: MethodExecutionStartedEvent) -> None:
                seen.append(is_replaying())

            ev = _make_started("m", "id-1", 1)
            future = crewai_event_bus.emit(object(), ev)
            if future is not None:
                future.result(timeout=5.0)

            assert seen == [False]


class TestCheckpointListenerOptsOut:
    """CheckpointListener must early-return during replay."""

    def test_checkpoint_not_written_on_replay(self) -> None:
        from crewai.state.checkpoint_config import CheckpointConfig
        from crewai.state.checkpoint_listener import _on_any_event

        class FlowLike:
            entity_type = "flow"
            checkpoint = CheckpointConfig(trigger_all=True)

        ev = _make_started("m", "id-1", 1)

        with patch("crewai.state.checkpoint_listener._do_checkpoint") as do_cp:
            token = _replaying.set(True)
            try:
                _on_any_event(FlowLike(), ev, state=None)
            finally:
                _replaying.reset(token)
            assert do_cp.call_count == 0


class TestFlowResumeReplaysEvents:
    """End-to-end: a resumed flow emits MethodExecution* events for completed methods."""

    def test_resume_dispatches_completed_method_events(self, tmp_path) -> None:
        from crewai.flow.flow import Flow, listen, start
        from crewai.flow.persistence.sqlite import SQLiteFlowPersistence

        db_path = tmp_path / "flows.db"
        persistence = SQLiteFlowPersistence(str(db_path))

        class ThreeStepFlow(Flow[dict]):
            @start()
            def step_a(self) -> str:
                return "a"

            @listen(step_a)
            def step_b(self) -> str:
                return "b"

            @listen(step_b)
            def step_c(self) -> str:
                return "c"

        if crewai_event_bus.runtime_state is not None:
            crewai_event_bus.runtime_state.event_record.clear()

        flow1 = ThreeStepFlow(persistence=persistence)
        flow1.kickoff()
        flow_id = flow1.state["id"]

        captured_started: list[str] = []
        captured_finished: list[str] = []

        flow2 = ThreeStepFlow(persistence=persistence)
        flow2._completed_methods = {"step_a", "step_b"}

        with crewai_event_bus.scoped_handlers():

            @crewai_event_bus.on(MethodExecutionStartedEvent)
            def _cs(_: Any, event: MethodExecutionStartedEvent) -> None:
                captured_started.append(event.method_name)

            @crewai_event_bus.on(MethodExecutionFinishedEvent)
            def _cf(_: Any, event: MethodExecutionFinishedEvent) -> None:
                captured_finished.append(event.method_name)

            flow2.kickoff(inputs={"id": flow_id})

            assert captured_started.count("step_a") == 1
            assert captured_started.count("step_b") == 1
            assert captured_started.count("step_c") == 1
            assert captured_finished.count("step_a") == 1
            assert captured_finished.count("step_b") == 1
            assert captured_finished.count("step_c") == 1
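The `is_replaying()` flag these tests exercise is the kind of scoped state that `contextvars.ContextVar` is built for: set the flag around dispatch with a token, and always reset it in a `finally`. This is a stdlib sketch of that pattern, not the crewAI event bus itself:

```python
from contextvars import ContextVar

# Module-level flag, defaulting to "not replaying".
_replaying: ContextVar[bool] = ContextVar("replaying", default=False)


def is_replaying() -> bool:
    """True only while a handler runs under replay()."""
    return _replaying.get()


def replay(handler, event) -> None:
    """Dispatch an event with the replay flag set, restoring it afterwards."""
    token = _replaying.set(True)
    try:
        handler(event)
    finally:
        _replaying.reset(token)  # token-based reset survives nesting and exceptions


seen: list[bool] = []
replay(lambda ev: seen.append(is_replaying()), {"method": "step_a"})
seen.append(is_replaying())
print(seen)  # [True, False]
```

Listeners that must not run twice (like the checkpoint listener above) simply early-return when `is_replaying()` is true.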
@@ -389,17 +389,41 @@ def test_azure_raises_error_when_endpoint_missing():
        llm._get_sync_client()


def test_azure_raises_error_when_api_key_missing():
    """Credentials are validated lazily: construction succeeds, first
def test_azure_raises_error_when_api_key_missing_without_azure_identity():
    """Without an API key AND without ``azure-identity`` installed,
    client build raises the descriptive error."""
    from crewai.llms.providers.azure.completion import AzureCompletion

    with patch.dict(os.environ, {}, clear=True):
        llm = AzureCompletion(
            model="gpt-4", endpoint="https://test.openai.azure.com"
        )
        with pytest.raises(ValueError, match="Azure API key is required"):
            llm._get_sync_client()
        with patch.dict("sys.modules", {"azure.identity": None}):
            llm = AzureCompletion(
                model="gpt-4", endpoint="https://test.openai.azure.com"
            )
            with pytest.raises(ValueError, match="Azure API key is required"):
                llm._get_sync_client()


def test_azure_uses_default_credential_when_api_key_missing():
    """With ``azure-identity`` installed, a missing API key falls back to
    ``DefaultAzureCredential`` instead of raising. This is the path that
    enables keyless auth (OIDC WIF on EKS/AKS, Managed Identity, Azure
    CLI) without any crewAI-specific config."""
    from unittest.mock import MagicMock

    from crewai.llms.providers.azure.completion import AzureCompletion

    sentinel = MagicMock(name="DefaultAzureCredential()")
    with patch.dict(os.environ, {}, clear=True):
        with patch(
            "azure.identity.DefaultAzureCredential", return_value=sentinel
        ) as mock_cls:
            llm = AzureCompletion(
                model="gpt-4",
                endpoint="https://test-ai.services.example.com",
            )
            kwargs = llm._make_client_kwargs()
            assert kwargs["credential"] is sentinel
            mock_cls.assert_called()


@pytest.mark.asyncio
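The two Azure tests above pin down a fallback order: explicit API key first, ambient credential when `azure-identity` is importable, and a descriptive error only when neither is available. A stdlib sketch of that decision, with `resolve_credential` and its return strings purely illustrative (the real code builds client kwargs, not strings):

```python
def resolve_credential(env: dict[str, str], identity_available: bool) -> str:
    """Sketch of the Azure auth fallback: API key > DefaultAzureCredential > error."""
    key = env.get("AZURE_API_KEY")
    if key:
        return f"api-key:{key}"          # explicit key always wins
    if identity_available:
        return "default-azure-credential"  # keyless auth via azure-identity
    raise ValueError("Azure API key is required (or install azure-identity)")


print(resolve_credential({"AZURE_API_KEY": "sk"}, identity_available=True))
print(resolve_credential({}, identity_available=True))
```

Keeping the error on the last branch only is what makes construction lazy: nothing fails until a client is actually needed.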
@@ -4,6 +4,8 @@ from pathlib import Path

import pytest

from crewai import Agent
from crewai.agent.utils import append_skill_context
from crewai.skills.loader import activate_skill, discover_skills, format_skill_context
from crewai.skills.models import INSTRUCTIONS, METADATA

@@ -76,3 +78,23 @@ class TestSkillDiscoveryAndActivation:
        all_skills.extend(discover_skills(search_path))
        names = {s.name for s in all_skills}
        assert names == {"skill-a", "skill-b"}

    def test_agent_preserves_metadata_for_discovered_skills(self, tmp_path: Path) -> None:
        _create_skill_dir(tmp_path, "travel", body="Use this skill for travel planning.")
        discovered = discover_skills(tmp_path)

        agent = Agent(
            role="Travel Advisor",
            goal="Provide personalized travel suggestions.",
            backstory="An experienced travel consultant.",
            skills=discovered,
        )

        assert agent.skills is not None
        assert agent.skills[0].disclosure_level == METADATA
        assert agent.skills[0].instructions is None

        prompt = append_skill_context(agent, "Plan a 10-day Japan itinerary.")
        assert "## Skill: travel" in prompt
        assert "Skill travel" in prompt
        assert "Use this skill for travel planning." not in prompt
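The assertions above describe progressive disclosure: at the `METADATA` level only the skill's name reaches the prompt, and the instruction body stays out until the skill is activated. A hypothetical mimic of that rendering step (the real `append_skill_context` and its exact wording may differ):

```python
METADATA, INSTRUCTIONS = "metadata", "instructions"  # disclosure levels (sketch)


def skill_context(name: str, instructions: str, level: str) -> str:
    """Render one skill section for the agent prompt; METADATA hides the body."""
    header = f"## Skill: {name}"
    if level == INSTRUCTIONS:
        return f"{header}\n{instructions}"
    return f"{header}\nSkill {name} is available; activate it to load instructions."


ctx = skill_context("travel", "Use this skill for travel planning.", METADATA)
full = skill_context("travel", "Use this skill for travel planning.", INSTRUCTIONS)
```

The payoff is prompt economy: an agent with many discovered skills only pays the token cost of the ones it actually activates.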
@@ -11,11 +11,12 @@ from typing import Any
from unittest.mock import MagicMock, patch

import pytest
from pydantic import BaseModel

from crewai.agent.core import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.crew import Crew
from crewai.flow.flow import Flow, start
from crewai.flow.flow import _INITIAL_STATE_CLASS_MARKER, Flow, start
from crewai.state.checkpoint_config import CheckpointConfig
from crewai.state.checkpoint_listener import (
    _find_checkpoint,
@@ -310,6 +311,65 @@ class TestRuntimeStateLineage:
        assert state._branch != first


class TestFlowInitialStateSerialization:
    """Regression tests for checkpoint serialization of ``Flow.initial_state``."""

    def test_class_ref_serializes_as_schema(self) -> None:
        class MyState(BaseModel):
            id: str = "x"
            foo: str = "bar"

        flow = Flow(initial_state=MyState)
        state = RuntimeState(root=[flow])
        dumped = json.loads(state.model_dump_json())
        entity = dumped["entities"][0]
        wrapped = entity["initial_state"]
        assert isinstance(wrapped, dict)
        assert _INITIAL_STATE_CLASS_MARKER in wrapped
        assert wrapped[_INITIAL_STATE_CLASS_MARKER].get("title") == "MyState"

    def test_class_ref_round_trips_to_basemodel_subclass(self) -> None:
        class MyState(BaseModel):
            id: str = "x"
            foo: str = "bar"

        flow = Flow(initial_state=MyState)
        raw = RuntimeState(root=[flow]).model_dump_json()
        restored = RuntimeState.model_validate_json(
            raw, context={"from_checkpoint": True}
        )
        rehydrated = restored.root[0].initial_state
        assert isinstance(rehydrated, type)
        assert issubclass(rehydrated, BaseModel)
        assert set(rehydrated.model_fields.keys()) == {"id", "foo"}

    def test_instance_serializes_as_values(self) -> None:
        class MyState(BaseModel):
            id: str = "x"
            foo: str = "bar"

        flow = Flow(initial_state=MyState(foo="baz"))
        state = RuntimeState(root=[flow])
        dumped = json.loads(state.model_dump_json())
        entity = dumped["entities"][0]
        assert entity["initial_state"] == {"id": "x", "foo": "baz"}

    def test_dict_passthrough(self) -> None:
        flow = Flow(initial_state={"id": "x", "foo": "bar"})
        state = RuntimeState(root=[flow])
        dumped = json.loads(state.model_dump_json())
        entity = dumped["entities"][0]
        assert entity["initial_state"] == {"id": "x", "foo": "bar"}

    def test_dict_round_trips_as_dict(self) -> None:
        flow = Flow(initial_state={"id": "x", "foo": "bar"})
        raw = RuntimeState(root=[flow]).model_dump_json()
        restored = RuntimeState.model_validate_json(
            raw, context={"from_checkpoint": True}
        )
        assert restored.root[0].initial_state == {"id": "x", "foo": "bar"}


# ---------- JsonProvider forking ----------

@@ -8,6 +8,7 @@ from concurrent.futures import Future
from hashlib import md5
import re
import sys
from typing import Any, cast
from unittest.mock import ANY, MagicMock, call, patch

from crewai.agent import Agent
@@ -17,6 +18,7 @@ from crewai.crew import Crew
from crewai.crews.crew_output import CrewOutput
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.crew_events import (
    CrewKickoffStartedEvent,
    CrewTestCompletedEvent,
    CrewTestStartedEvent,
    CrewTrainCompletedEvent,
@@ -4517,8 +4519,8 @@ def test_sets_flow_context_when_using_crewbase_pattern_inside_flow():
    flow.kickoff()

    assert captured_crew is not None
    assert captured_crew._flow_id == flow.flow_id  # type: ignore[attr-defined]
    assert captured_crew._request_id == flow.flow_id  # type: ignore[attr-defined]
    assert captured_crew._flow_id == flow.execution_id  # type: ignore[attr-defined]
    assert captured_crew._request_id == flow.execution_id  # type: ignore[attr-defined]


def test_sets_flow_context_when_outside_flow(researcher, writer):
@@ -4552,8 +4554,8 @@ def test_sets_flow_context_when_inside_flow(researcher, writer):

    flow = MyFlow()
    result = flow.kickoff()
    assert result._flow_id == flow.flow_id  # type: ignore[attr-defined]
    assert result._request_id == flow.flow_id  # type: ignore[attr-defined]
    assert result._flow_id == flow.execution_id  # type: ignore[attr-defined]
    assert result._request_id == flow.execution_id  # type: ignore[attr-defined]


def test_reset_knowledge_with_no_crew_knowledge(researcher, writer):
@@ -4741,6 +4743,61 @@ def test_default_crew_name(researcher, writer):
    assert crew.name == "crew"


@pytest.mark.parametrize(
    "explicit_name,expected",
    [
        (None, "ResearchAutomation"),
        ("My Research Automation", "My Research Automation"),
    ],
    ids=["class_name_from_decorator", "explicit_name_preserved"],
)
def test_crew_kickoff_started_emits_display_name(
    researcher, writer, explicit_name, expected
):
    """Kickoff events should use the decorator-provided display name when implicit."""
    from crewai.crews.utils import prepare_kickoff
    from crewai.project import CrewBase, agent, crew, task

    @CrewBase
    class ResearchAutomation:
        agents_config = None
        tasks_config = None

        @agent
        def researcher(self):
            return researcher

        @task
        def first_task(self):
            return Task(
                description="Task 1",
                expected_output="output",
                agent=self.researcher(),
            )

        @crew
        def crew(self):
            crew_kwargs: dict[str, Any] = {
                "agents": self.agents,
                "tasks": self.tasks,
            }
            if explicit_name is not None:
                crew_kwargs["name"] = explicit_name
            return Crew(**crew_kwargs)

    captured: list[str | None] = []
    with crewai_event_bus.scoped_handlers():

        @crewai_event_bus.on(CrewKickoffStartedEvent)
        def _capture(_source: Any, event: CrewKickoffStartedEvent) -> None:
            captured.append(event.crew_name)

        automation_cls = cast(type[Any], ResearchAutomation)
        prepare_kickoff(cast(Any, automation_cls()).crew(), inputs=None)

    assert captured == [expected]


@pytest.mark.vcr()
def test_memory_remember_receives_task_content():
    """With memory=True, extract_memories receives raw content with task, agent, expected output, and result."""

lib/crewai/tests/test_flow_as_tool.py (new file, 185 lines)
@@ -0,0 +1,185 @@
"""Tests for Flow-as-tool functionality."""

from __future__ import annotations

from unittest.mock import MagicMock

from crewai.flow.flow import Flow, start
from crewai.tools.flow_tool import FlowTool, create_flow_tools


# ---------------------------------------------------------------------------
# Test Flow classes
# ---------------------------------------------------------------------------


class SimpleFlow(Flow):
    """A simple flow that greets the user."""

    @start()
    def greet(self) -> str:
        return "Hello from SimpleFlow!"


class MathFlow(Flow):
    """Performs basic math operations."""

    @start()
    def compute(self) -> str:
        return "42"


class NoDocFlow(Flow):
    @start()
    def run_it(self) -> str:
        return "no doc"


# ---------------------------------------------------------------------------
# FlowTool unit tests
# ---------------------------------------------------------------------------


class TestFlowTool:
    def test_wrap_simple_flow(self) -> None:
        tool = FlowTool(
            name="simple_flow",
            description="A simple flow that greets the user.",
            flow_class=SimpleFlow,
        )
        assert tool.name == "simple_flow"
        assert "greets the user" in tool.description

    def test_run_invokes_kickoff(self) -> None:
        mock_flow = MagicMock()
        mock_flow.return_value = mock_flow  # calling the "class" returns the same mock
        mock_flow.kickoff.return_value = "mocked result"

        tool = FlowTool(
            name="test_flow",
            description="test",
            flow_class=mock_flow,
        )
        result = tool._run(inputs="{}")
        assert result == "mocked result"
        mock_flow.kickoff.assert_called_once()

    def test_run_with_json_inputs(self) -> None:
        mock_flow = MagicMock()
        mock_flow.return_value = mock_flow
        mock_flow.kickoff.return_value = "result with inputs"

        tool = FlowTool(
            name="test_flow",
            description="test",
            flow_class=mock_flow,
        )
        result = tool._run(inputs='{"key": "value"}')
        assert result == "result with inputs"
        mock_flow.kickoff.assert_called_once_with(inputs={"key": "value"})

    def test_run_with_invalid_json_defaults_to_empty(self) -> None:
        mock_flow = MagicMock()
        mock_flow.return_value = mock_flow
        mock_flow.kickoff.return_value = "ok"

        tool = FlowTool(
            name="test_flow",
            description="test",
            flow_class=mock_flow,
        )
        result = tool._run(inputs="not valid json")
        assert result == "ok"
        mock_flow.kickoff.assert_called_once_with(inputs=None)

    def test_run_returns_string(self) -> None:
        mock_flow = MagicMock()
        mock_flow.return_value = mock_flow
        mock_flow.kickoff.return_value = 42

        tool = FlowTool(
            name="test_flow",
            description="test",
            flow_class=mock_flow,
        )
        result = tool._run()
        assert result == "42"
        assert isinstance(result, str)


# ---------------------------------------------------------------------------
# create_flow_tools tests
# ---------------------------------------------------------------------------


class TestCreateFlowTools:
    def test_creates_tools_from_flow_classes(self) -> None:
        tools = create_flow_tools([SimpleFlow, MathFlow])
        assert len(tools) == 2
        names = {t.name for t in tools}
        assert "simple_flow" in names
        assert "math_flow" in names

    def test_description_from_docstring(self) -> None:
        tools = create_flow_tools([SimpleFlow])
        assert len(tools) == 1
        assert "greets the user" in tools[0].description

    def test_description_fallback_when_no_docstring(self) -> None:
        tools = create_flow_tools([NoDocFlow])
        assert len(tools) == 1
        assert "NoDocFlow" in tools[0].description

    def test_empty_list_returns_empty(self) -> None:
        assert create_flow_tools([]) == []

    def test_none_returns_empty(self) -> None:
        assert create_flow_tools(None) == []

    def test_tools_are_base_tool_instances(self) -> None:
        from crewai.tools.base_tool import BaseTool
|
||||
|
||||
tools = create_flow_tools([SimpleFlow])
|
||||
for tool in tools:
|
||||
assert isinstance(tool, BaseTool)
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Agent integration tests
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestAgentFlowIntegration:
|
||||
def test_agent_with_flows_has_flow_tools(self) -> None:
|
||||
from crewai.agent.core import Agent
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test flows",
|
||||
backstory="I test things",
|
||||
flows=[SimpleFlow, MathFlow],
|
||||
)
|
||||
tool_names = {t.name for t in (agent.tools or [])}
|
||||
assert "simple_flow" in tool_names
|
||||
assert "math_flow" in tool_names
|
||||
|
||||
def test_agent_without_flows_no_extra_tools(self) -> None:
|
||||
from crewai.agent.core import Agent
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test",
|
||||
backstory="I test things",
|
||||
)
|
||||
# Should not have any flow tools
|
||||
flow_tool_names = {
|
||||
t.name for t in (agent.tools or []) if isinstance(t, FlowTool)
|
||||
}
|
||||
assert len(flow_tool_names) == 0
|
||||
|
||||
def test_flow_tool_executes_real_flow(self) -> None:
|
||||
"""Test that a FlowTool actually runs the Flow's kickoff."""
|
||||
tools = create_flow_tools([SimpleFlow])
|
||||
tool = tools[0]
|
||||
result = tool.run(inputs="{}")
|
||||
assert "Hello from SimpleFlow" in result
|
||||
lib/crewai/tests/test_flow_execution_id.py (new file, 127 lines)
@@ -0,0 +1,127 @@
"""Regression tests for ``Flow.execution_id``.

``execution_id`` is the stable tracking identifier for a single flow run.
It must stay independent of ``state.id`` so that consumers passing an
``id`` in ``inputs`` (used for persistence restore) cannot destabilize
the identity used by telemetry, tracing, and external correlation.
"""

from __future__ import annotations

from typing import Any

import pytest

from crewai.flow.flow import Flow, FlowState, start
from crewai.flow.flow_context import current_flow_id, current_flow_request_id


class _CaptureState(FlowState):
    captured_flow_id: str = ""
    captured_state_id: str = ""
    captured_current_flow_id: str = ""
    captured_execution_id: str = ""


class _IdentityCaptureFlow(Flow[_CaptureState]):
    initial_state = _CaptureState

    @start()
    def capture(self) -> None:
        self.state.captured_flow_id = self.flow_id
        self.state.captured_state_id = self.state.id
        self.state.captured_current_flow_id = current_flow_id.get() or ""
        self.state.captured_execution_id = self.execution_id


def test_execution_id_defaults_to_fresh_uuid_per_instance() -> None:
    a = _IdentityCaptureFlow()
    b = _IdentityCaptureFlow()

    assert a.execution_id
    assert b.execution_id
    assert a.execution_id != b.execution_id


def test_execution_id_survives_consumer_id_in_inputs() -> None:
    flow = _IdentityCaptureFlow()
    original_execution_id = flow.execution_id

    flow.kickoff(inputs={"id": "consumer-supplied-id"})

    assert flow.state.id == "consumer-supplied-id"
    assert flow.flow_id == "consumer-supplied-id"
    assert flow.execution_id == original_execution_id
    assert flow.execution_id != "consumer-supplied-id"


def test_two_runs_with_same_consumer_id_have_distinct_execution_ids() -> None:
    flow_a = _IdentityCaptureFlow()
    flow_b = _IdentityCaptureFlow()

    colliding_id = "shared-consumer-id"
    flow_a.kickoff(inputs={"id": colliding_id})
    flow_b.kickoff(inputs={"id": colliding_id})

    assert flow_a.state.id == colliding_id
    assert flow_b.state.id == colliding_id
    assert flow_a.execution_id != flow_b.execution_id


def test_execution_id_is_writable() -> None:
    flow = _IdentityCaptureFlow()
    flow.execution_id = "external-task-id"

    assert flow.execution_id == "external-task-id"

    flow.kickoff(inputs={"id": "consumer-supplied-id"})
    assert flow.execution_id == "external-task-id"
    assert flow.state.id == "consumer-supplied-id"


def test_current_flow_id_context_var_matches_execution_id() -> None:
    flow = _IdentityCaptureFlow()
    flow.execution_id = "external-task-id"

    flow.kickoff(inputs={"id": "consumer-supplied-id"})

    assert flow.state.captured_current_flow_id == "external-task-id"
    assert flow.state.captured_flow_id == "consumer-supplied-id"
    assert flow.state.captured_execution_id == "external-task-id"


def test_execution_id_not_included_in_serialized_state() -> None:
    flow = _IdentityCaptureFlow()
    flow.execution_id = "external-task-id"
    flow.kickoff()

    dumped = flow.state.model_dump()
    assert "execution_id" not in dumped
    assert "_execution_id" not in dumped
    assert dumped["id"] == flow.state.id


def test_dict_state_flow_also_exposes_stable_execution_id() -> None:
    class DictFlow(Flow[dict[str, Any]]):
        initial_state = dict  # type: ignore[assignment]

        @start()
        def noop(self) -> None:
            pass

    flow = DictFlow()
    original = flow.execution_id
    flow.kickoff(inputs={"id": "consumer-supplied-id"})

    assert flow.state["id"] == "consumer-supplied-id"
    assert flow.execution_id == original


@pytest.fixture(autouse=True)
def _reset_flow_context_vars():
    yield
    for var in (current_flow_id, current_flow_request_id):
        try:
            var.set(None)
        except LookupError:
            # ContextVar was never set in this context; nothing to reset.
            pass
@@ -1,4 +1,4 @@
-from typing import Any, ClassVar
+from typing import Any, ClassVar, cast
 from unittest.mock import Mock, create_autospec, patch
 
 import pytest
@@ -261,6 +261,55 @@ def test_crew_name():
     assert crew._crew_name == "InternalCrew"
 
 
+def test_crew_decorator_propagates_class_name_to_instance():
+    """@crew-decorated factory method should set Crew.name to the decorated class name."""
+    sample_agent = Agent(role="r", goal="g", backstory="b")
+    sample_task = Task(description="d", expected_output="o", agent=sample_agent)
+
+    @CrewBase
+    class ImplicitNameCrewFactory:
+        agents_config = None
+        tasks_config = None
+        agents: list[BaseAgent] = [sample_agent]
+        tasks: list[Task] = [sample_task]
+
+        @crew
+        def crew(self):
+            return Crew(
+                agents=[sample_agent],
+                tasks=[sample_task],
+            )
+
+    factory_cls = cast(type[Any], ImplicitNameCrewFactory)
+    crew_instance: Crew = cast(Any, factory_cls()).crew()
+    assert crew_instance.name == "ImplicitNameCrewFactory"
+
+
+def test_crew_decorator_preserves_explicit_name():
+    """Explicit Crew(name=...) inside @crew should win over the @CrewBase class name."""
+    sample_agent = Agent(role="r", goal="g", backstory="b")
+    sample_task = Task(description="d", expected_output="o", agent=sample_agent)
+
+    @CrewBase
+    class NamedCrewFactory:
+        agents_config = None
+        tasks_config = None
+        agents: list[BaseAgent] = [sample_agent]
+        tasks: list[Task] = [sample_task]
+
+        @crew
+        def crew(self):
+            return Crew(
+                name="My Explicit Name",
+                agents=[sample_agent],
+                tasks=[sample_task],
+            )
+
+    factory_cls = cast(type[Any], NamedCrewFactory)
+    crew_instance: Crew = cast(Any, factory_cls()).crew()
+    assert crew_instance.name == "My Explicit Name"
+
+
 @tool
 def simple_tool():
     """Return 'Hi!'"""
@@ -1640,3 +1640,43 @@ class TestBackendInitializedGatedOnSuccess:
 
         assert bm.backend_initialized is False
         assert bm.trace_batch_id is None
+
+
+class TestTraceBatchManagerDuplicateInitMerge:
+    """Second initialize_batch call merges execution_metadata (flow after lazy action)."""
+
+    def test_duplicate_initialize_merges_execution_metadata(self):
+        with (
+            patch(
+                "crewai.events.listeners.tracing.trace_batch_manager.should_auto_collect_first_time_traces",
+                return_value=True,
+            ),
+            patch(
+                "crewai.events.listeners.tracing.trace_batch_manager.is_tracing_enabled_in_context",
+                return_value=True,
+            ),
+        ):
+            bm = TraceBatchManager()
+            bm.initialize_batch(
+                user_context={"privacy_level": "standard"},
+                execution_metadata={
+                    "crew_name": "Unknown Crew",
+                    "crewai_version": "9.9.9",
+                },
+            )
+            first_batch_id = bm.current_batch.batch_id
+            bm.initialize_batch(
+                user_context={"privacy_level": "standard"},
+                execution_metadata={
+                    "flow_name": "ResearchFlow",
+                    "execution_type": "flow",
+                    "crewai_version": "9.9.9",
+                    "execution_start": "2026-01-01T00:00:00+00:00",
+                },
+            )
+
+            assert bm.current_batch.batch_id == first_batch_id
+            meta = bm.current_batch.execution_metadata
+            assert meta.get("execution_type") == "flow"
+            assert meta.get("flow_name") == "ResearchFlow"
+            assert meta.get("crew_name") == "Unknown Crew"
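The merge semantics that test asserts — the second `initialize_batch` keeps the original batch id while layering its metadata on top of the first call's, so earlier keys survive and later keys win — can be sketched with a plain dict merge. The helper below is a hypothetical illustration, not the `TraceBatchManager` implementation:

```python
from typing import Any


def merge_execution_metadata(
    existing: dict[str, Any], incoming: dict[str, Any]
) -> dict[str, Any]:
    """Sketch of the duplicate-initialize merge: incoming keys override,
    keys only present in the first call (e.g. crew_name) are preserved."""
    merged = dict(existing)
    merged.update(incoming)
    return merged
```

With the two metadata payloads from the test, the merged result keeps `crew_name` from the first call while gaining `flow_name` and `execution_type` from the second.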
@@ -13,7 +13,7 @@ dependencies = [
     "click~=8.1.7",
     "tomlkit~=0.13.2",
     "openai>=1.83.0,<3",
-    "python-dotenv~=1.1.1",
+    "python-dotenv>=1.2.2,<2",
     "pygithub~=1.59.1",
     "rich>=13.9.4",
 ]
@@ -1,3 +1,3 @@
 """CrewAI development tools."""
 
-__version__ = "1.14.3a1"
+__version__ = "1.14.3"
@@ -164,7 +164,7 @@ info = "Commits must follow Conventional Commits 1.0.0."
 [tool.uv]
 # Pinned to include the security patch releases (authlib 1.6.11,
 # langchain-text-splitters 1.1.2) uploaded on 2026-04-16.
-exclude-newer = "2026-04-17"
+exclude-newer = "2026-04-22"
 
 # composio-core pins rich<14 but textual requires rich>=14.
 # onnxruntime 1.24+ dropped Python 3.10 wheels; cap it so qdrant[fastembed] resolves on 3.10.