Compare commits


18 Commits

Author SHA1 Message Date
Greyson LaLonde
bc2fb71560 docs: update changelog and version for v1.14.3a3
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Vulnerability Scan / pip-audit (push) Has been cancelled
Nightly Canary Release / Check for new commits (push) Has been cancelled
Nightly Canary Release / Build nightly packages (push) Has been cancelled
Nightly Canary Release / Publish nightly to PyPI (push) Has been cancelled
Build uv cache / build-cache (3.10) (push) Has been cancelled
Build uv cache / build-cache (3.11) (push) Has been cancelled
Build uv cache / build-cache (3.12) (push) Has been cancelled
Build uv cache / build-cache (3.13) (push) Has been cancelled
Check Documentation Broken Links / Check broken links (push) Has been cancelled
2026-04-23 05:11:06 +08:00
Greyson LaLonde
3e9deaf9c0 feat: bump versions to 1.14.3a3 2026-04-23 04:55:08 +08:00
Lorenze Jay
3f7637455c feat: supporting e2b 2026-04-23 04:36:33 +08:00
Matt Aitchison
fdf3101b39 feat(azure): fall back to DefaultAzureCredential when no API key
Enables keyless Azure auth (OIDC Workload Identity Federation, Managed
Identity, Azure CLI, env-configured Service Principal) without any
crewAI-specific configuration. Customers whose deployment environment
already sets the standard azure-identity env vars get keyless auth for
free; the existing API-key path is unchanged.
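Conceptually, the fallback order reads like this hypothetical sketch (not crewAI's actual code; the function name and `AZURE_API_KEY` variable are illustrative):

```python
import os
from typing import Optional


def resolve_azure_auth(api_key: Optional[str] = None) -> dict:
    """Prefer an explicit API key; otherwise fall back to keyless auth."""
    key = api_key or os.environ.get("AZURE_API_KEY")  # illustrative env var
    if key:
        # Existing API-key path: unchanged.
        return {"mode": "api_key", "key": key}
    try:
        # azure-identity's DefaultAzureCredential chains env-configured
        # service principal, workload identity federation, managed
        # identity, and Azure CLI credentials in turn.
        from azure.identity import DefaultAzureCredential
    except ImportError:
        raise RuntimeError("no API key provided and azure-identity not installed")
    return {"mode": "keyless", "credential": DefaultAzureCredential()}
```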

Linear: FAC-40
2026-04-23 04:21:35 +08:00
Greyson LaLonde
c94f2e8f28 fix: upgrade lxml to >=6.1.0 for GHSA-vfmq-68hx-4jfw
2026-04-23 00:52:36 +08:00
alex-clawd
944fe6d435 docs: remove pricing FAQ from build-with-ai page across all locales (#5586)
Removes the 'How does pricing work?' accordion from EN, AR, KO, and PT-BR.

Co-authored-by: Joao Moura <joaomdmoura@gmail.com>
2026-04-22 03:56:41 -03:00
iris-clawd
3be2fb65dc perf: lazy-load MCP SDK and event types to reduce cold start by ~29% (#5584)
* perf: defer MCP SDK import by fixing import path in agent/core.py

- Change 'from crewai.mcp import MCPServerConfig' to direct path
  'from crewai.mcp.config import MCPServerConfig' to avoid triggering
  mcp/__init__.py which eagerly loads the full mcp SDK (~300-400ms)
- Move MCPToolResolver import into get_mcp_tools() method body since
  it's only used at runtime, not in type annotations

Saves ~200ms on 'import crewai' cold start.

* perf: lazy-load heavy MCP imports in mcp/__init__.py

MCPClient, MCPToolResolver, BaseTransport, and TransportType now use
__getattr__ lazy loading. These pull in the full mcp SDK (~400ms) but
are only needed at runtime when agents actually connect to MCP servers.

Lightweight config and filter types remain eagerly imported.

* perf: lazy-load all event type modules in events/__init__.py

Previously only agent_events were lazy-loaded; all other event type
modules (crew, flow, knowledge, llm, guardrail, logging, mcp, memory,
reasoning, skill, task, tool_usage) were eagerly imported at package
init time. Since events/__init__.py runs whenever ANY crewai.events.*
submodule is accessed, this loaded ~12 Pydantic model modules
unnecessarily.

Now all event types use the same __getattr__ lazy-loading pattern,
with TYPE_CHECKING imports preserved for IDE/type-checker support.

Saves ~550ms on 'import crewai' cold start.

* chore: remove UNKNOWN.egg-info from version control

* fix: add MCPToolResolver to TYPE_CHECKING imports

Fixes F821 (ruff) and name-defined (mypy) from lazy-loading the
MCP import. The type annotation on _mcp_resolver needs the name
available at type-check time.

* fix: bump lxml to >=5.4.0 for GHSA-vfmq-68hx-4jfw

lxml 5.3.2 has a known vulnerability. Bump to 5.4.0+ which
includes the fix (libxml2 2.13.8). The previous <5.4.0 pin
was for etree import issues that have since been resolved.

* fix: bump exclude-newer to 2026-04-22 for lxml 6.1.0 resolution

lxml 6.1.0 (GHSA fix) was released April 17 but the exclude-newer
date was set to April 17, missing it by timestamp. Bump to April 22.

* perf: add import time benchmark script

scripts/benchmark_import_time.py measures import crewai cold start
in fresh subprocesses. Supports --runs, --json (for CI), and
--threshold (fail if median exceeds N seconds).

The companion GitHub Action workflow needs to be pushed separately
(requires workflow scope).
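A minimal version of what such a benchmark does — timing the import in a fresh interpreter per run and reporting the median — might look like this (a sketch, not the actual script):

```python
import statistics
import subprocess
import sys


def median_import_time(module: str, runs: int = 3) -> float:
    """Median cold-start import time in seconds, one fresh subprocess per run."""
    snippet = (
        "import time, importlib\n"
        "t = time.perf_counter()\n"
        f"importlib.import_module({module!r})\n"
        "print(time.perf_counter() - t)"
    )
    samples = []
    for _ in range(runs):
        # A fresh interpreter per run avoids warm sys.modules caches.
        result = subprocess.run(
            [sys.executable, "-c", snippet],
            capture_output=True, text=True, check=True,
        )
        samples.append(float(result.stdout.strip()))
    return statistics.median(samples)


if __name__ == "__main__":
    print(f"median: {median_import_time('json'):.4f}s")
```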

* new action

* Potential fix for pull request finding 'CodeQL / Workflow does not contain permissions'

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

---------

Co-authored-by: Joao Moura <joaomdmoura@gmail.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2026-04-22 02:17:33 -03:00
Greyson LaLonde
160e25c1a9 docs: update changelog and version for v1.14.3a2
2026-04-22 03:14:00 +08:00
Greyson LaLonde
b34b336273 feat: bump versions to 1.14.3a2 2026-04-22 03:08:52 +08:00
Renato Nitta
42d6c03ebc fix: propagate implicit @CrewBase names to crew events (#5574)
* fix: propagate implicit @CrewBase names to crew events

* test: appease static analysis for @CrewBase kickoff test

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-04-21 15:57:19 -03:00
Greyson LaLonde
d4f9f875f7 fix: bump python-dotenv to >=1.2.2 for GHSA-mf9w-mj56-hr94
2026-04-22 01:22:19 +08:00
Lorenze Jay
6d153284d4 fix: merge execution metadata on duplicate batch initialization in Tr… (#5573)
* fix: merge execution metadata on duplicate batch initialization in TraceBatchManager

- Updated TraceBatchManager to merge execution metadata when a batch is initialized multiple times.
- Enhanced logging to reflect the merging of metadata during duplicate initialization.
- Added a test case to verify that execution metadata is correctly merged when initializing a batch after a lazy action.
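In outline, the merge-on-duplicate behavior is (names hypothetical, not the actual TraceBatchManager API):

```python
def initialize_batch(batches: dict, batch_id: str, metadata: dict) -> dict:
    """Initialize a batch, merging metadata if the batch already exists."""
    existing = batches.get(batch_id)
    if existing is None:
        batches[batch_id] = dict(metadata)
    else:
        # Duplicate initialization: merge rather than overwrite, letting
        # values recorded by the first initialization win on conflict.
        batches[batch_id] = {**metadata, **existing}
    return batches[batch_id]
```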

* drop env events emitting from traces listener
2026-04-21 10:12:24 -07:00
Lorenze Jay
84a4d47aa7 updated descriptions and applied the actual translations (#5572) 2026-04-21 08:55:39 -07:00
Greyson LaLonde
9caed61f36 chore: remove scarf install tracking 2026-04-21 21:52:17 +08:00
MatthiasHowellYopp
d45ed61db5 feat: added bedrock V4 support 2026-04-21 21:09:13 +08:00
iris-clawd
3b01da9ad9 docs: add Build with AI to Get Started nav + page files for all languages (en, ko, pt-BR, ar) (#5567)
2026-04-20 23:43:37 -03:00
iris-clawd
874405b825 docs: Add 'Build with AI' page — AI-native docs for coding agents (#5558)
* docs: add Build with AI page for coding agents and AI assistants

* docs: add Build with AI section to README

* docs: trim README Build with AI section to skills install only

* docs: add skills.sh reference link for npx install

* docs: add coding agent logos to Build with AI page

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2026-04-20 16:09:37 -07:00
Greyson LaLonde
d6d04717c2 fix: serialize Task class-reference fields for checkpointing
Task fields that store class references (output_pydantic, output_json,
response_model, converter_cls) caused PydanticSerializationError when
RuntimeState serialized Crew entities during checkpointing. Serialize
to model_json_schema() and hydrate back via create_model_from_schema.
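The round trip looks roughly like this sketch in plain pydantic (the real `create_model_from_schema` signature may differ, and the flat-schema rebuild below is only illustrative):

```python
from pydantic import BaseModel, create_model


class ReportOutput(BaseModel):
    title: str
    score: int


# Checkpointing: store the class reference as its JSON schema (a plain dict)...
schema = ReportOutput.model_json_schema()

# ...and hydrate a structurally equivalent class back on restore.
_JSON_TO_PY = {"string": str, "integer": int, "number": float, "boolean": bool}
fields = {
    name: (_JSON_TO_PY[prop["type"]], ...)
    for name, prop in schema["properties"].items()
}
Rebuilt = create_model(schema.get("title", "Rebuilt"), **fields)

restored = Rebuilt(title="Q3 report", score=9)
```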
2026-04-21 03:15:06 +08:00
47 changed files with 3621 additions and 506 deletions

.gitignore vendored
View File

@@ -30,3 +30,4 @@ chromadb-*.lock
.crewai/memory
blogs/*
secrets/*
UNKNOWN.egg-info/

View File

@@ -83,6 +83,7 @@ intelligent automations.
## Table of contents
- [Build with AI](#build-with-ai)
- [Why CrewAI?](#why-crewai)
- [Getting Started](#getting-started)
- [Key Features](#key-features)
@@ -101,6 +102,32 @@ intelligent automations.
- [Telemetry](#telemetry)
- [License](#license)
## Build with AI
Using an AI coding agent? Teach it CrewAI best practices in one command:
**Claude Code:**
```shell
/plugin marketplace add crewAIInc/skills
/plugin install crewai-skills@crewai-plugins
/reload-plugins
```
Four skills that activate automatically when you ask relevant CrewAI questions:
| Skill | When it runs |
|-------|--------------|
| `getting-started` | Scaffolding new projects, choosing between `LLM.call()` / `Agent` / `Crew` / `Flow`, wiring `crew.py` / `main.py` |
| `design-agent` | Configuring agents — role, goal, backstory, tools, LLMs, memory, guardrails |
| `design-task` | Writing task descriptions, dependencies, structured output (`output_pydantic`, `output_json`), human review |
| `ask-docs` | Querying the live [CrewAI docs MCP server](https://docs.crewai.com/mcp) for up-to-date API details |
**Cursor, Codex, Windsurf, and others ([skills.sh](https://skills.sh/crewaiinc/skills)):**
```shell
npx skills add crewaiinc/skills
```
This installs the official [CrewAI Skills](https://github.com/crewAIInc/skills) — structured instructions that teach coding agents how to scaffold Flows, configure Crews, design agents and tasks, and follow CrewAI patterns.
## Why CrewAI?
<div align="center" style="margin-bottom: 30px;">

View File

@@ -4,6 +4,62 @@ description: "Product updates, improvements, and fixes
icon: "clock"
mode: "wide"
---
<Update label="Apr 23, 2026">
## v1.14.3a3
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a3)
## What's Changed
### Features
- Add support for e2b
- Implement fallback to DefaultAzureCredential when no API key is provided
### Bug Fixes
- Upgrade lxml to >=6.1.0 to address security issue GHSA-vfmq-68hx-4jfw
### Documentation
- Remove pricing FAQ from build-with-ai page across all locales
### Performance
- Improve cold start time by ~29% through lazy-loading of the MCP SDK and event types
## Contributors
@alex-clawd, @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha
</Update>
<Update label="Apr 22, 2026">
## v1.14.3a2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a2)
## What's Changed
### Features
- Add support for bedrock V4
- Add Daytona sandbox tools for enhanced functionality
- Add 'Build with AI' page — AI-native docs for coding agents
- Add 'Build with AI' to the Get Started navigation and page files for all languages (en, ko, pt-BR, ar)
### Bug Fixes
- Fix propagation of implicit @CrewBase names to crew events
- Merge execution metadata on duplicate batch initialization
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel result in guardrail retry loop
- Bump python-dotenv to >=1.2.2 for security compliance
### Documentation
- Update changelog and version for v1.14.3a1
- Update descriptions and apply the actual translations
## Contributors
@MatthiasHowellYopp, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @renatonitta
</Update>
<Update label="Apr 21, 2026">
## v1.14.3a1

View File

@@ -0,0 +1,214 @@
---
title: "Build with AI"
description: "Everything AI coding agents need to build, deploy, and scale with CrewAI — skills, machine-readable docs, deployment, and enterprise features."
icon: robot
mode: "wide"
---
# Build with AI
CrewAI is AI-native. This page brings together everything an AI coding agent needs to build with CrewAI — whether you're Claude Code, Codex, Cursor, Gemini CLI, or any other assistant helping a developer ship crews and flows.
### Supported Coding Agents
<CardGroup cols={5}>
<Card title="Claude Code" icon="message-bot" color="#D97706" />
<Card title="Cursor" icon="arrow-pointer" color="#3B82F6" />
<Card title="Codex" icon="terminal" color="#10B981" />
<Card title="Windsurf" icon="wind" color="#06B6D4" />
<Card title="Gemini CLI" icon="sparkles" color="#8B5CF6" />
</CardGroup>
<Note>
This page is designed for both humans and AI assistants. If you're a coding agent, start with **Skills** to get CrewAI context, then use **llms.txt** for full docs access.
</Note>
---
## 1. Skills — Teach Your Agent CrewAI
**Skills** are instruction packs that give coding agents deep CrewAI knowledge — how to scaffold Flows, configure Crews, use tools, and follow framework conventions.
<Tabs>
<Tab title="Claude Code (plugin marketplace)">
<img src="https://cdn.simpleicons.org/anthropic/D97706" alt="Anthropic" width="28" style={{display: "inline", verticalAlign: "middle", marginRight: "8px"}} />
CrewAI skills are available in the **Claude Code plugin marketplace** — the same distribution channel used by leading AI companies:
```shell
/plugin marketplace add crewAIInc/skills
/plugin install crewai-skills@crewai-plugins
/reload-plugins
```
Four skills activate automatically when you ask CrewAI-related questions:
| Skill | When it runs |
|-------|--------------|
| `getting-started` | New projects, choosing between `LLM.call()` / `Agent` / `Crew` / `Flow`, wiring `crew.py` / `main.py` |
| `design-agent` | Configuring agents — role, goal, backstory, tools, LLMs, memory, guardrails |
| `design-task` | Task descriptions, dependencies, structured output (`output_pydantic`, `output_json`), human review |
| `ask-docs` | Querying the live [CrewAI docs MCP server](https://docs.crewai.com/mcp) for up-to-date API details |
</Tab>
<Tab title="npx (any agent)">
Works with Claude Code, Codex, Cursor, Gemini CLI, or any coding agent:
```shell
npx skills add crewaiinc/skills
```
Fetched from the [skills.sh registry](https://skills.sh/crewaiinc/skills).
</Tab>
</Tabs>
<Steps>
<Step title="Install the official skills pack">
Use either method above — the Claude Code plugin marketplace or `npx skills add`. Both install the official [crewAIInc/skills](https://github.com/crewAIInc/skills) pack.
</Step>
<Step title="Your agent instantly gains CrewAI expertise">
The pack teaches your agent:
- **Flows** — stateful applications, steps, and running crews
- **Crews and agents** — YAML-first patterns, roles, tasks, delegation
- **Tools and integrations** — search, APIs, MCP servers, and common CrewAI tools
- **Project structure** — CLI scaffolds and repository conventions
- **Up-to-date patterns** — aligned with current CrewAI docs and best practices
</Step>
<Step title="Start building">
Your agent can now scaffold and build CrewAI projects without you re-explaining the framework every session.
</Step>
</Steps>
<CardGroup cols={2}>
<Card title="Skills concept" icon="bolt" href="/ar/concepts/skills">
How skills work in CrewAI agents — injection, activation, and patterns.
</Card>
<Card title="Skills page" icon="wand-magic-sparkles" href="/ar/skills">
A look at the crewAIInc/skills pack and what it includes.
</Card>
<Card title="AGENTS.md and tooling" icon="terminal" href="/ar/guides/coding-tools/agents-md">
Set up AGENTS.md for Claude Code, Codex, Cursor, and Gemini CLI.
</Card>
<Card title="skills.sh registry" icon="globe" href="https://skills.sh/crewaiinc/skills">
The official listing — skills, install stats, and auditing.
</Card>
</CardGroup>
---
## 2. llms.txt — Machine-Readable Docs
CrewAI publishes an `llms.txt` file that gives AI assistants direct access to the full documentation in machine-readable form.
```
https://docs.crewai.com/llms.txt
```
<Tabs>
<Tab title="What is llms.txt?">
[`llms.txt`](https://llmstxt.org/) is an emerging standard for making documentation consumable by large language models. Instead of scraping HTML, your agent can fetch a single structured text file with all the content it needs.
CrewAI's `llms.txt` is **live today** — your agent can use it right now.
</Tab>
<Tab title="How to use it">
Point your coding agent at the URL whenever it needs a CrewAI reference:
```
Fetch https://docs.crewai.com/llms.txt for CrewAI documentation.
```
Many coding agents (Claude Code, Cursor, and others) can fetch URLs directly. The file contains structured documentation covering CrewAI concepts, APIs, and guides.
</Tab>
<Tab title="Why it matters">
- **No web scraping** — clean, structured content in a single request
- **Always current** — served straight from docs.crewai.com
- **LLM-optimized** — formatted for context windows, not browsers
- **Complements Skills** — skills teach the patterns, llms.txt provides the reference
</Tab>
</Tabs>
---
## 3. Enterprise Deployment
Go from a local crew to production on **CrewAI AMP** (Agent Management Platform) in minutes.
<Steps>
<Step title="Build locally">
Scaffold and test your crew or flow:
```bash
crewai create crew my_crew
cd my_crew
crewai run
```
</Step>
<Step title="Prepare for deployment">
Make sure your project structure is ready:
```bash
crewai deploy --prepare
```
See the [preparation guide](/ar/enterprise/guides/prepare-for-deployment) for structure details and requirements.
</Step>
<Step title="Deploy to AMP">
Push to the CrewAI AMP platform:
```bash
crewai deploy
```
You can also deploy via the [GitHub integration](/ar/enterprise/guides/deploy-to-amp) or [Crew Studio](/ar/enterprise/guides/enable-crew-studio).
</Step>
<Step title="Access via API">
A deployed crew gets a REST endpoint. Integrate it into any application:
```bash
curl -X POST https://app.crewai.com/api/v1/crews/<crew-id>/kickoff \
  -H "Authorization: Bearer $CREWAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": {"topic": "AI agents"}}'
```
</Step>
</Steps>
<CardGroup cols={2}>
<Card title="Deploy to AMP" icon="rocket" href="/ar/enterprise/guides/deploy-to-amp">
The complete deployment guide — CLI, GitHub, and Crew Studio.
</Card>
<Card title="AMP introduction" icon="globe" href="/ar/enterprise/introduction">
A platform overview — what AMP provides for crews in production.
</Card>
</CardGroup>
---
## 4. Enterprise Features
CrewAI AMP is built for production teams. Here is what you get once deployed.
<CardGroup cols={2}>
<Card title="Observability and Monitoring" icon="chart-line">
Detailed execution traces, logs, and performance metrics for every crew run. Monitor agent decisions, tool calls, and task completion in real time.
</Card>
<Card title="Crew Studio" icon="paintbrush">
A low/no-code interface to visually create, customize, and deploy crews — then export to code or deploy directly.
</Card>
<Card title="Webhook Streaming" icon="webhook">
Stream real-time events from crew executions into your systems. Integrate with Slack, Zapier, or any webhook consumer.
</Card>
<Card title="Team Management" icon="users">
SSO, RBAC, and organization-level controls. Manage who can create, deploy, and access crews.
</Card>
<Card title="Tool Repository" icon="toolbox">
Publish and share custom tools across your organization. Install community tools from the registry.
</Card>
<Card title="Factory (self-hosted)" icon="server">
Run CrewAI AMP on your own infrastructure. Full platform capabilities with data-residency and compliance controls.
</Card>
</CardGroup>
<AccordionGroup>
<Accordion title="Who is AMP for?">
Teams that need to move AI agent workflows from prototype to production — with monitoring, access controls, and scalable infrastructure. Whether you're a startup or a large enterprise, AMP takes on the operational complexity so you can focus on building agents.
</Accordion>
<Accordion title="What deployment options are available?">
- **Cloud (app.crewai.com)** — managed by CrewAI, the fastest path to production
- **Factory (self-hosted)** — on your own infrastructure for full data control
- **Hybrid** — mix cloud and self-hosted based on data sensitivity
</Accordion>
</AccordionGroup>
<Card title="Explore CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
Sign up and deploy your first crew to production.
</Card>

View File

@@ -79,6 +79,7 @@
"group": "Get Started",
"pages": [
"en/introduction",
"en/guides/coding-tools/build-with-ai",
"en/skills",
"en/installation",
"en/quickstart"
@@ -127,7 +128,8 @@
"group": "Coding Tools",
"icon": "terminal",
"pages": [
"en/guides/coding-tools/agents-md"
"en/guides/coding-tools/agents-md",
"en/guides/coding-tools/build-with-ai"
]
},
{
@@ -553,6 +555,7 @@
"group": "Get Started",
"pages": [
"en/introduction",
"en/guides/coding-tools/build-with-ai",
"en/skills",
"en/installation",
"en/quickstart"
@@ -601,7 +604,8 @@
"group": "Coding Tools",
"icon": "terminal",
"pages": [
"en/guides/coding-tools/agents-md"
"en/guides/coding-tools/agents-md",
"en/guides/coding-tools/build-with-ai"
]
},
{
@@ -1027,6 +1031,7 @@
"group": "Get Started",
"pages": [
"en/introduction",
"en/guides/coding-tools/build-with-ai",
"en/skills",
"en/installation",
"en/quickstart"
@@ -1075,7 +1080,8 @@
"group": "Coding Tools",
"icon": "terminal",
"pages": [
"en/guides/coding-tools/agents-md"
"en/guides/coding-tools/agents-md",
"en/guides/coding-tools/build-with-ai"
]
},
{
@@ -1501,6 +1507,7 @@
"group": "Get Started",
"pages": [
"en/introduction",
"en/guides/coding-tools/build-with-ai",
"en/skills",
"en/installation",
"en/quickstart"
@@ -1549,7 +1556,8 @@
"group": "Coding Tools",
"icon": "terminal",
"pages": [
"en/guides/coding-tools/agents-md"
"en/guides/coding-tools/agents-md",
"en/guides/coding-tools/build-with-ai"
]
},
{
@@ -1975,6 +1983,7 @@
"group": "Get Started",
"pages": [
"en/introduction",
"en/guides/coding-tools/build-with-ai",
"en/skills",
"en/installation",
"en/quickstart"
@@ -2023,7 +2032,8 @@
"group": "Coding Tools",
"icon": "terminal",
"pages": [
"en/guides/coding-tools/agents-md"
"en/guides/coding-tools/agents-md",
"en/guides/coding-tools/build-with-ai"
]
},
{
@@ -2449,6 +2459,7 @@
"group": "Get Started",
"pages": [
"en/introduction",
"en/guides/coding-tools/build-with-ai",
"en/skills",
"en/installation",
"en/quickstart"
@@ -2497,7 +2508,8 @@
"group": "Coding Tools",
"icon": "terminal",
"pages": [
"en/guides/coding-tools/agents-md"
"en/guides/coding-tools/agents-md",
"en/guides/coding-tools/build-with-ai"
]
},
{
@@ -2921,6 +2933,7 @@
"group": "Get Started",
"pages": [
"en/introduction",
"en/guides/coding-tools/build-with-ai",
"en/skills",
"en/installation",
"en/quickstart"
@@ -2969,7 +2982,8 @@
"group": "Coding Tools",
"icon": "terminal",
"pages": [
"en/guides/coding-tools/agents-md"
"en/guides/coding-tools/agents-md",
"en/guides/coding-tools/build-with-ai"
]
},
{
@@ -3393,6 +3407,7 @@
"group": "Get Started",
"pages": [
"en/introduction",
"en/guides/coding-tools/build-with-ai",
"en/skills",
"en/installation",
"en/quickstart"
@@ -3441,7 +3456,8 @@
"group": "Coding Tools",
"icon": "terminal",
"pages": [
"en/guides/coding-tools/agents-md"
"en/guides/coding-tools/agents-md",
"en/guides/coding-tools/build-with-ai"
]
},
{
@@ -3866,6 +3882,7 @@
"group": "Get Started",
"pages": [
"en/introduction",
"en/guides/coding-tools/build-with-ai",
"en/skills",
"en/installation",
"en/quickstart"
@@ -3914,7 +3931,8 @@
"group": "Coding Tools",
"icon": "terminal",
"pages": [
"en/guides/coding-tools/agents-md"
"en/guides/coding-tools/agents-md",
"en/guides/coding-tools/build-with-ai"
]
},
{
@@ -4340,6 +4358,7 @@
"group": "Get Started",
"pages": [
"en/introduction",
"en/guides/coding-tools/build-with-ai",
"en/skills",
"en/installation",
"en/quickstart"
@@ -4388,7 +4407,8 @@
"group": "Coding Tools",
"icon": "terminal",
"pages": [
"en/guides/coding-tools/agents-md"
"en/guides/coding-tools/agents-md",
"en/guides/coding-tools/build-with-ai"
]
},
{
@@ -4812,6 +4832,7 @@
"group": "Get Started",
"pages": [
"en/introduction",
"en/guides/coding-tools/build-with-ai",
"en/skills",
"en/installation",
"en/quickstart"
@@ -4860,7 +4881,8 @@
"group": "Coding Tools",
"icon": "terminal",
"pages": [
"en/guides/coding-tools/agents-md"
"en/guides/coding-tools/agents-md",
"en/guides/coding-tools/build-with-ai"
]
},
{
@@ -5317,6 +5339,7 @@
"group": "Começando",
"pages": [
"pt-BR/introduction",
"pt-BR/guides/coding-tools/build-with-ai",
"pt-BR/skills",
"pt-BR/installation",
"pt-BR/quickstart"
@@ -5775,6 +5798,7 @@
"group": "Começando",
"pages": [
"pt-BR/introduction",
"pt-BR/guides/coding-tools/build-with-ai",
"pt-BR/skills",
"pt-BR/installation",
"pt-BR/quickstart"
@@ -6233,6 +6257,7 @@
"group": "Começando",
"pages": [
"pt-BR/introduction",
"pt-BR/guides/coding-tools/build-with-ai",
"pt-BR/skills",
"pt-BR/installation",
"pt-BR/quickstart"
@@ -6691,6 +6716,7 @@
"group": "Começando",
"pages": [
"pt-BR/introduction",
"pt-BR/guides/coding-tools/build-with-ai",
"pt-BR/skills",
"pt-BR/installation",
"pt-BR/quickstart"
@@ -7149,6 +7175,7 @@
"group": "Começando",
"pages": [
"pt-BR/introduction",
"pt-BR/guides/coding-tools/build-with-ai",
"pt-BR/skills",
"pt-BR/installation",
"pt-BR/quickstart"
@@ -7607,6 +7634,7 @@
"group": "Começando",
"pages": [
"pt-BR/introduction",
"pt-BR/guides/coding-tools/build-with-ai",
"pt-BR/skills",
"pt-BR/installation",
"pt-BR/quickstart"
@@ -8064,6 +8092,7 @@
"group": "Começando",
"pages": [
"pt-BR/introduction",
"pt-BR/guides/coding-tools/build-with-ai",
"pt-BR/skills",
"pt-BR/installation",
"pt-BR/quickstart"
@@ -8521,6 +8550,7 @@
"group": "Começando",
"pages": [
"pt-BR/introduction",
"pt-BR/guides/coding-tools/build-with-ai",
"pt-BR/skills",
"pt-BR/installation",
"pt-BR/quickstart"
@@ -8978,6 +9008,7 @@
"group": "Começando",
"pages": [
"pt-BR/introduction",
"pt-BR/guides/coding-tools/build-with-ai",
"pt-BR/skills",
"pt-BR/installation",
"pt-BR/quickstart"
@@ -9434,6 +9465,7 @@
"group": "Começando",
"pages": [
"pt-BR/introduction",
"pt-BR/guides/coding-tools/build-with-ai",
"pt-BR/skills",
"pt-BR/installation",
"pt-BR/quickstart"
@@ -9890,6 +9922,7 @@
"group": "Começando",
"pages": [
"pt-BR/introduction",
"pt-BR/guides/coding-tools/build-with-ai",
"pt-BR/skills",
"pt-BR/installation",
"pt-BR/quickstart"
@@ -10377,6 +10410,7 @@
"group": "시작 안내",
"pages": [
"ko/introduction",
"ko/guides/coding-tools/build-with-ai",
"ko/skills",
"ko/installation",
"ko/quickstart"
@@ -10847,6 +10881,7 @@
"group": "시작 안내",
"pages": [
"ko/introduction",
"ko/guides/coding-tools/build-with-ai",
"ko/skills",
"ko/installation",
"ko/quickstart"
@@ -11317,6 +11352,7 @@
"group": "시작 안내",
"pages": [
"ko/introduction",
"ko/guides/coding-tools/build-with-ai",
"ko/skills",
"ko/installation",
"ko/quickstart"
@@ -11787,6 +11823,7 @@
"group": "시작 안내",
"pages": [
"ko/introduction",
"ko/guides/coding-tools/build-with-ai",
"ko/skills",
"ko/installation",
"ko/quickstart"
@@ -12257,6 +12294,7 @@
"group": "시작 안내",
"pages": [
"ko/introduction",
"ko/guides/coding-tools/build-with-ai",
"ko/skills",
"ko/installation",
"ko/quickstart"
@@ -12727,6 +12765,7 @@
"group": "시작 안내",
"pages": [
"ko/introduction",
"ko/guides/coding-tools/build-with-ai",
"ko/skills",
"ko/installation",
"ko/quickstart"
@@ -13196,6 +13235,7 @@
"group": "시작 안내",
"pages": [
"ko/introduction",
"ko/guides/coding-tools/build-with-ai",
"ko/skills",
"ko/installation",
"ko/quickstart"
@@ -13665,6 +13705,7 @@
"group": "시작 안내",
"pages": [
"ko/introduction",
"ko/guides/coding-tools/build-with-ai",
"ko/skills",
"ko/installation",
"ko/quickstart"
@@ -14134,6 +14175,7 @@
"group": "시작 안내",
"pages": [
"ko/introduction",
"ko/guides/coding-tools/build-with-ai",
"ko/skills",
"ko/installation",
"ko/quickstart"
@@ -14602,6 +14644,7 @@
"group": "시작 안내",
"pages": [
"ko/introduction",
"ko/guides/coding-tools/build-with-ai",
"ko/skills",
"ko/installation",
"ko/quickstart"
@@ -15070,6 +15113,7 @@
"group": "시작 안내",
"pages": [
"ko/introduction",
"ko/guides/coding-tools/build-with-ai",
"ko/skills",
"ko/installation",
"ko/quickstart"
@@ -15569,6 +15613,7 @@
"group": "البدء",
"pages": [
"ar/introduction",
"ar/guides/coding-tools/build-with-ai",
"ar/skills",
"ar/installation",
"ar/quickstart"
@@ -16039,6 +16084,7 @@
"group": "البدء",
"pages": [
"ar/introduction",
"ar/guides/coding-tools/build-with-ai",
"ar/skills",
"ar/installation",
"ar/quickstart"
@@ -16509,6 +16555,7 @@
"group": "البدء",
"pages": [
"ar/introduction",
"ar/guides/coding-tools/build-with-ai",
"ar/skills",
"ar/installation",
"ar/quickstart"
@@ -16979,6 +17026,7 @@
"group": "البدء",
"pages": [
"ar/introduction",
"ar/guides/coding-tools/build-with-ai",
"ar/skills",
"ar/installation",
"ar/quickstart"
@@ -17449,6 +17497,7 @@
"group": "البدء",
"pages": [
"ar/introduction",
"ar/guides/coding-tools/build-with-ai",
"ar/skills",
"ar/installation",
"ar/quickstart"
@@ -17919,6 +17968,7 @@
"group": "البدء",
"pages": [
"ar/introduction",
"ar/guides/coding-tools/build-with-ai",
"ar/skills",
"ar/installation",
"ar/quickstart"
@@ -18388,6 +18438,7 @@
"group": "البدء",
"pages": [
"ar/introduction",
"ar/guides/coding-tools/build-with-ai",
"ar/skills",
"ar/installation",
"ar/quickstart"
@@ -18857,6 +18908,7 @@
"group": "البدء",
"pages": [
"ar/introduction",
"ar/guides/coding-tools/build-with-ai",
"ar/skills",
"ar/installation",
"ar/quickstart"
@@ -19326,6 +19378,7 @@
"group": "البدء",
"pages": [
"ar/introduction",
"ar/guides/coding-tools/build-with-ai",
"ar/skills",
"ar/installation",
"ar/quickstart"
@@ -19794,6 +19847,7 @@
"group": "البدء",
"pages": [
"ar/introduction",
"ar/guides/coding-tools/build-with-ai",
"ar/skills",
"ar/installation",
"ar/quickstart"
@@ -20262,6 +20316,7 @@
"group": "البدء",
"pages": [
"ar/introduction",
"ar/guides/coding-tools/build-with-ai",
"ar/skills",
"ar/installation",
"ar/quickstart"

View File

@@ -4,6 +4,62 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 23, 2026">
## v1.14.3a3
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a3)
## What's Changed
### Features
- Add support for e2b
- Implement fallback to DefaultAzureCredential when no API key is provided
### Bug Fixes
- Upgrade lxml to >=6.1.0 to address security issue GHSA-vfmq-68hx-4jfw
### Documentation
- Remove pricing FAQ from build-with-ai page across all locales
### Performance
- Improve cold start time by ~29% through lazy-loading of MCP SDK and event types
## Contributors
@alex-clawd, @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha
</Update>
<Update label="Apr 22, 2026">
## v1.14.3a2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a2)
## What's Changed
### Features
- Add support for bedrock V4
- Add Daytona sandbox tools for enhanced functionality
- Add 'Build with AI' page — AI-native docs for coding agents
- Add Build with AI to Get Started navigation and page files for all languages (en, ko, pt-BR, ar)
### Bug Fixes
- Fix propagation of implicit @CrewBase names to crew events
- Resolve issue with duplicate batch initialization in execution metadata merge
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel result in guardrail retry loop
- Bump python-dotenv to version >=1.2.2 for security compliance
### Documentation
- Update changelog and version for v1.14.3a1
- Update descriptions and apply actual translations
## Contributors
@MatthiasHowellYopp, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @renatonitta
</Update>
<Update label="Apr 21, 2026">
## v1.14.3a1

View File

@@ -0,0 +1,214 @@
---
title: "Build with AI"
description: "Everything AI coding agents need to build, deploy, and scale with CrewAI — skills, machine-readable docs, deployment, and enterprise features."
icon: robot
mode: "wide"
---
# Build with AI
CrewAI is AI-native. This page brings together everything an AI coding agent needs to build with CrewAI — whether you're Claude Code, Codex, Cursor, Gemini CLI, or any other assistant helping a developer ship crews and flows.
### Supported Coding Agents
<CardGroup cols={5}>
<Card title="Claude Code" icon="message-bot" color="#D97706" />
<Card title="Cursor" icon="arrow-pointer" color="#3B82F6" />
<Card title="Codex" icon="terminal" color="#10B981" />
<Card title="Windsurf" icon="wind" color="#06B6D4" />
<Card title="Gemini CLI" icon="sparkles" color="#8B5CF6" />
</CardGroup>
<Note>
This page is designed to be consumed by both humans and AI assistants. If you're a coding agent, start with **Skills** to get CrewAI context, then use **llms.txt** for full docs access.
</Note>
---
## 1. Skills — Teach Your Agent CrewAI
**Skills** are instruction packs that give coding agents deep CrewAI knowledge — how to scaffold Flows, configure Crews, use tools, and follow framework conventions.
<Tabs>
<Tab title="Claude Code (Plugin Marketplace)">
<img src="https://cdn.simpleicons.org/anthropic/D97706" alt="Anthropic" width="28" style={{display: "inline", verticalAlign: "middle", marginRight: "8px"}} />
CrewAI skills are available in the **Claude Code plugin marketplace** — the same distribution channel used by top AI-native companies:
```shell
/plugin marketplace add crewAIInc/skills
/plugin install crewai-skills@crewai-plugins
/reload-plugins
```
Four skills activate automatically when you ask relevant CrewAI questions:
| Skill | When it runs |
|-------|--------------|
| `getting-started` | Scaffolding new projects, choosing between `LLM.call()` / `Agent` / `Crew` / `Flow`, wiring `crew.py` / `main.py` |
| `design-agent` | Configuring agents — role, goal, backstory, tools, LLMs, memory, guardrails |
| `design-task` | Writing task descriptions, dependencies, structured output (`output_pydantic`, `output_json`), human review |
| `ask-docs` | Querying the live [CrewAI docs MCP server](https://docs.crewai.com/mcp) for up-to-date API details |
</Tab>
<Tab title="npx (Any Agent)">
Works with Claude Code, Codex, Cursor, Gemini CLI, or any coding agent:
```shell
npx skills add crewaiinc/skills
```
Pulls from the [skills.sh registry](https://skills.sh/crewaiinc/skills).
</Tab>
</Tabs>
<Steps>
<Step title="Install the official skill pack">
Use either method above — the Claude Code plugin marketplace or `npx skills add`. Both install the official [crewAIInc/skills](https://github.com/crewAIInc/skills) pack.
</Step>
<Step title="Your agent gets instant CrewAI expertise">
The skill pack teaches your agent:
- **Flows** — stateful apps, steps, and crew kickoffs
- **Crews & Agents** — YAML-first patterns, roles, tasks, delegation
- **Tools & Integrations** — search, APIs, MCP servers, and common CrewAI tools
- **Project layout** — CLI scaffolds and repo conventions
- **Up-to-date patterns** — tracks current CrewAI docs and best practices
</Step>
<Step title="Start building">
Your agent can now scaffold and build CrewAI projects without you re-explaining the framework each session.
</Step>
</Steps>
<CardGroup cols={2}>
<Card title="Skills concept" icon="bolt" href="/en/concepts/skills">
How skills work in CrewAI agents — injection, activation, and patterns.
</Card>
<Card title="Skills landing page" icon="wand-magic-sparkles" href="/en/skills">
Overview of the crewAIInc/skills pack and what it includes.
</Card>
<Card title="AGENTS.md & coding tools" icon="terminal" href="/en/guides/coding-tools/agents-md">
Set up AGENTS.md for Claude Code, Codex, Cursor, and Gemini CLI.
</Card>
<Card title="Skills registry (skills.sh)" icon="globe" href="https://skills.sh/crewaiinc/skills">
Official listing — skills, install stats, and audits.
</Card>
</CardGroup>
---
## 2. llms.txt — Machine-Readable Docs
CrewAI publishes an `llms.txt` file that gives AI assistants direct access to the full documentation in a machine-readable format.
```
https://docs.crewai.com/llms.txt
```
<Tabs>
<Tab title="What is llms.txt?">
[`llms.txt`](https://llmstxt.org/) is an emerging standard for making documentation consumable by large language models. Instead of scraping HTML, your agent can fetch a single structured text file with all the content it needs.
CrewAI's `llms.txt` is **already live** — your agent can use it right now.
</Tab>
<Tab title="How to use it">
Point your coding agent at the URL when it needs CrewAI reference docs:
```
Fetch https://docs.crewai.com/llms.txt for CrewAI documentation.
```
Many coding agents (Claude Code, Cursor, etc.) can fetch URLs directly. The file contains structured documentation covering all CrewAI concepts, APIs, and guides.
</Tab>
<Tab title="Why it matters">
- **No scraping required** — clean, structured content in one request
- **Always up-to-date** — served directly from docs.crewai.com
- **Optimized for LLMs** — formatted for context windows, not browsers
- **Complements skills** — skills teach patterns, llms.txt provides reference
</Tab>
</Tabs>
---
## 3. Deploy to Enterprise
Go from a local crew to production on **CrewAI AMP** (Agent Management Platform) in minutes.
<Steps>
<Step title="Build locally">
Scaffold and test your crew or flow:
```bash
crewai create crew my_crew
cd my_crew
crewai run
```
</Step>
<Step title="Prepare for deployment">
Ensure your project structure is ready:
```bash
crewai deploy --prepare
```
See the [preparation guide](/en/enterprise/guides/prepare-for-deployment) for details on project structure and requirements.
</Step>
<Step title="Deploy to AMP">
Push to the CrewAI AMP platform:
```bash
crewai deploy
```
You can also deploy via [GitHub integration](/en/enterprise/guides/deploy-to-amp) or [Crew Studio](/en/enterprise/guides/enable-crew-studio).
</Step>
<Step title="Access via API">
Your deployed crew gets a REST API endpoint. Integrate it into any application:
```bash
curl -X POST https://app.crewai.com/api/v1/crews/<crew-id>/kickoff \
-H "Authorization: Bearer $CREWAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{"inputs": {"topic": "AI agents"}}'
```
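If you prefer Python, the same kickoff request can be sketched with only the standard library (the crew id below is a placeholder, as in the curl example):

```python
import json
import os
from urllib import request

# Placeholder crew id; substitute your deployment's actual id.
crew_id = "<crew-id>"
url = f"https://app.crewai.com/api/v1/crews/{crew_id}/kickoff"

req = request.Request(
    url,
    data=json.dumps({"inputs": {"topic": "AI agents"}}).encode(),
    headers={
        "Authorization": f"Bearer {os.environ.get('CREWAI_API_KEY', '')}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# request.urlopen(req)  # uncomment to actually send the request
```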
</Step>
</Steps>
<CardGroup cols={2}>
<Card title="Deploy to AMP" icon="rocket" href="/en/enterprise/guides/deploy-to-amp">
Full deployment guide — CLI, GitHub, and Crew Studio methods.
</Card>
<Card title="AMP introduction" icon="globe" href="/en/enterprise/introduction">
Platform overview — what AMP provides for production crews.
</Card>
</CardGroup>
---
## 4. Enterprise Features
CrewAI AMP is built for production teams. Here's what you get beyond deployment.
<CardGroup cols={2}>
<Card title="Observability" icon="chart-line">
Detailed execution traces, logs, and performance metrics for every crew run. Monitor agent decisions, tool calls, and task completion in real time.
</Card>
<Card title="Crew Studio" icon="paintbrush">
No-code/low-code interface to create, customize, and deploy crews visually — then export to code or deploy directly.
</Card>
<Card title="Webhook Streaming" icon="webhook">
Stream real-time events from crew executions to your systems. Integrate with Slack, Zapier, or any webhook consumer.
</Card>
<Card title="Team Management" icon="users">
SSO, RBAC, and organization-level controls. Manage who can create, deploy, and access crews across your team.
</Card>
<Card title="Tool Repository" icon="toolbox">
Publish and share custom tools across your organization. Install community tools from the registry.
</Card>
<Card title="Factory (Self-Hosted)" icon="server">
Run CrewAI AMP on your own infrastructure. Full platform capabilities with data residency and compliance controls.
</Card>
</CardGroup>
<AccordionGroup>
<Accordion title="Who is AMP for?">
AMP is for teams that need to move AI agent workflows from prototypes to production — with observability, access controls, and scalable infrastructure. Whether you're a startup or enterprise, AMP handles the operational complexity so you can focus on building agents.
</Accordion>
<Accordion title="What deployment options are available?">
- **Cloud (app.crewai.com)** — managed by CrewAI, fastest path to production
- **Factory (self-hosted)** — run on your own infrastructure for full data control
- **Hybrid** — mix cloud and self-hosted based on sensitivity requirements
</Accordion>
</AccordionGroup>
<Card title="Explore CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
Sign up and deploy your first crew to production.
</Card>

View File

@@ -4,6 +4,62 @@ description: "CrewAI의 제품 업데이트, 개선 사항 및 버그 수정"
icon: "clock"
mode: "wide"
---
<Update label="2026년 4월 23일">
## v1.14.3a3
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a3)
## 변경 사항
### 기능
- e2b 지원 추가
- API 키가 제공되지 않을 경우 DefaultAzureCredential로 대체 구현
### 버그 수정
- 보안 문제 GHSA-vfmq-68hx-4jfw를 해결하기 위해 lxml을 >=6.1.0으로 업그레이드
### 문서
- 모든 지역에서 build-with-ai 페이지의 가격 FAQ 제거
### 성능
- MCP SDK 및 이벤트 유형의 지연 로딩을 통해 콜드 스타트 시간을 약 29% 개선
## 기여자
@alex-clawd, @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha
</Update>
<Update label="2026년 4월 22일">
## v1.14.3a2
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a2)
## 변경 사항
### 기능
- Bedrock V4 지원 추가
- 향상된 기능을 위한 Daytona 샌드박스 도구 추가
- 'AI와 함께 빌드' 페이지 추가 — 코딩 에이전트를 위한 AI 네이티브 문서
- 모든 언어(en, ko, pt-BR, ar)에 대한 시작하기 탐색 및 페이지 파일에 AI와 함께 빌드 추가
### 버그 수정
- 크루 이벤트에 대한 암묵적 @CrewBase 이름 전파 수정
- 실행 메타데이터 병합에서 중복 배치 초기화 문제 해결
- 체크포인트를 위한 Task 클래스 참조 필드 직렬화 수정
- 가드레일 재시도 루프에서 BaseModel 결과 처리
- 보안 준수를 위해 python-dotenv를 버전 >=1.2.2로 업데이트
### 문서
- v1.14.3a1에 대한 변경 로그 및 버전 업데이트
- 설명 업데이트 및 실제 번역 적용
## 기여자
@MatthiasHowellYopp, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @renatonitta
</Update>
<Update label="2026년 4월 21일">
## v1.14.3a1

View File

@@ -0,0 +1,214 @@
---
title: "AI와 함께 빌드하기"
description: "CrewAI로 빌드·배포·확장하는 데 필요한 모든 것 — 스킬, 기계가 읽을 수 있는 문서, 배포, 엔터프라이즈 기능을 AI 코딩 에이전트용으로 정리했습니다."
icon: robot
mode: "wide"
---
# AI와 함께 빌드하기
CrewAI는 AI 네이티브입니다. 이 페이지는 Claude Code, Codex, Cursor, Gemini CLI 등 개발자가 crew와 flow를 배포하도록 돕는 코딩 에이전트가 CrewAI로 빌드할 때 필요한 내용을 한곳에 모았습니다.
### 지원 코딩 에이전트
<CardGroup cols={5}>
<Card title="Claude Code" icon="message-bot" color="#D97706" />
<Card title="Cursor" icon="arrow-pointer" color="#3B82F6" />
<Card title="Codex" icon="terminal" color="#10B981" />
<Card title="Windsurf" icon="wind" color="#06B6D4" />
<Card title="Gemini CLI" icon="sparkles" color="#8B5CF6" />
</CardGroup>
<Note>
이 페이지는 사람과 AI 어시스턴트 모두를 위해 작성되었습니다. 코딩 에이전트라면 CrewAI 맥락은 **Skills**부터, 전체 문서 접근은 **llms.txt**를 사용하세요.
</Note>
---
## 1. Skills — 에이전트에게 CrewAI 가르치기
**Skills**는 코딩 에이전트에게 Flow 스캐폴딩, Crew 구성, 도구 사용, 프레임워크 관례 등 CrewAI에 대한 깊은 지식을 담은 지침 묶음입니다.
<Tabs>
<Tab title="Claude Code (플러그인 마켓플레이스)">
<img src="https://cdn.simpleicons.org/anthropic/D97706" alt="Anthropic" width="28" style={{display: "inline", verticalAlign: "middle", marginRight: "8px"}} />
CrewAI 스킬은 **Claude Code 플러그인 마켓플레이스**에서 제공됩니다. AI 네이티브 기업들이 쓰는 것과 같은 배포 채널입니다.
```shell
/plugin marketplace add crewAIInc/skills
/plugin install crewai-skills@crewai-plugins
/reload-plugins
```
CrewAI와 관련된 질문을 하면 다음 네 가지 스킬이 자동으로 활성화됩니다.
| 스킬 | 실행 시점 |
|------|-------------|
| `getting-started` | 새 프로젝트 스캐폴딩, `LLM.call()` / `Agent` / `Crew` / `Flow` 선택, `crew.py` / `main.py` 연결 |
| `design-agent` | 에이전트 구성 — 역할, 목표, 배경 이야기, 도구, LLM, 메모리, 가드레일 |
| `design-task` | 태스크 설명, 의존성, 구조화된 출력(`output_pydantic`, `output_json`), 사람 검토 |
| `ask-docs` | 최신 API 정보를 위해 [CrewAI 문서 MCP 서버](https://docs.crewai.com/mcp) 조회 |
</Tab>
<Tab title="npx (모든 에이전트)">
Claude Code, Codex, Cursor, Gemini CLI 등 모든 코딩 에이전트에서 사용할 수 있습니다.
```shell
npx skills add crewaiinc/skills
```
[skills.sh 레지스트리](https://skills.sh/crewaiinc/skills)에서 가져옵니다.
</Tab>
</Tabs>
<Steps>
<Step title="공식 스킬 팩 설치">
위 방법 중 하나를 사용하세요 — Claude Code 플러그인 마켓플레이스 또는 `npx skills add`. 둘 다 공식 [crewAIInc/skills](https://github.com/crewAIInc/skills) 팩을 설치합니다.
</Step>
<Step title="에이전트가 즉시 CrewAI 전문성을 갖춤">
스킬 팩이 에이전트에게 알려 주는 내용:
- **Flow** — 상태를 유지하는 앱, 단계, crew 킥오프
- **Crew 및 에이전트** — YAML 우선 패턴, 역할, 태스크, 위임
- **도구 및 통합** — 검색, API, MCP 서버, 일반적인 CrewAI 도구
- **프로젝트 레이아웃** — CLI 스캐폴드와 저장소 관례
- **최신 패턴** — 현재 CrewAI 문서와 모범 사례 반영
</Step>
<Step title="빌드 시작">
매 세션마다 프레임워크를 다시 설명하지 않아도 에이전트가 CrewAI 프로젝트를 스캐폴딩하고 빌드할 수 있습니다.
</Step>
</Steps>
<CardGroup cols={2}>
<Card title="Skills 개념" icon="bolt" href="/ko/concepts/skills">
CrewAI 에이전트에서 스킬이 동작하는 방식 — 주입, 활성화, 패턴.
</Card>
<Card title="Skills 랜딩 페이지" icon="wand-magic-sparkles" href="/ko/skills">
crewAIInc/skills 팩 개요와 포함 내용.
</Card>
<Card title="AGENTS.md 및 코딩 도구" icon="terminal" href="/ko/guides/coding-tools/agents-md">
Claude Code, Codex, Cursor, Gemini CLI용 AGENTS.md 설정.
</Card>
<Card title="Skills 레지스트리 (skills.sh)" icon="globe" href="https://skills.sh/crewaiinc/skills">
공식 목록 — 스킬, 설치 통계, 감사 정보.
</Card>
</CardGroup>
---
## 2. llms.txt — 기계가 읽을 수 있는 문서
CrewAI는 AI 어시스턴트가 전체 문서에 기계가 읽을 수 있는 형태로 바로 접근할 수 있도록 `llms.txt` 파일을 제공합니다.
```
https://docs.crewai.com/llms.txt
```
<Tabs>
<Tab title="llms.txt란?">
[`llms.txt`](https://llmstxt.org/)는 문서를 대규모 언어 모델이 소비하기 쉽게 만드는 새로운 표준입니다. HTML을 스크래핑하는 대신, 필요한 내용이 담긴 하나의 구조화된 텍스트 파일을 가져올 수 있습니다.
CrewAI의 `llms.txt`는 **이미 제공 중**이며, 에이전트가 바로 사용할 수 있습니다.
</Tab>
<Tab title="사용 방법">
CrewAI 참고 문서가 필요할 때 코딩 에이전트에 URL을 알려 주세요.
```
Fetch https://docs.crewai.com/llms.txt for CrewAI documentation.
```
Claude Code, Cursor 등 많은 코딩 에이전트가 URL을 직접 가져올 수 있습니다. 파일에는 CrewAI 개념, API, 가이드를 아우르는 구조화된 문서가 포함되어 있습니다.
</Tab>
<Tab title="왜 중요한가">
- **스크래핑 불필요** — 한 번의 요청으로 깔끔한 구조화 콘텐츠
- **항상 최신** — docs.crewai.com에서 직접 제공
- **LLM에 최적화** — 브라우저가 아니라 컨텍스트 윈도우에 맞게 포맷
- **스킬과 상호 보완** — 스킬은 패턴을, llms.txt는 참조를 제공
</Tab>
</Tabs>
---
## 3. 엔터프라이즈에 배포
로컬 crew를 몇 분 안에 **CrewAI AMP**(Agent Management Platform) 프로덕션으로 가져가세요.
<Steps>
<Step title="로컬에서 빌드">
crew 또는 flow를 스캐폴딩하고 테스트합니다.
```bash
crewai create crew my_crew
cd my_crew
crewai run
```
</Step>
<Step title="배포 준비">
프로젝트 구조가 준비되었는지 확인합니다.
```bash
crewai deploy --prepare
```
구조와 요구 사항은 [준비 가이드](/ko/enterprise/guides/prepare-for-deployment)를 참고하세요.
</Step>
<Step title="AMP에 배포">
CrewAI AMP 플랫폼으로 푸시합니다.
```bash
crewai deploy
```
[GitHub 연동](/ko/enterprise/guides/deploy-to-amp) 또는 [Crew Studio](/ko/enterprise/guides/enable-crew-studio)로도 배포할 수 있습니다.
</Step>
<Step title="API로 접근">
배포된 crew는 REST API 엔드포인트를 받습니다. 모든 애플리케이션에 통합할 수 있습니다.
```bash
curl -X POST https://app.crewai.com/api/v1/crews/<crew-id>/kickoff \
-H "Authorization: Bearer $CREWAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{"inputs": {"topic": "AI agents"}}'
```
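Python을 선호한다면 표준 라이브러리만으로 같은 킥오프 요청을 스케치할 수 있습니다(curl 예시와 마찬가지로 아래 crew id는 자리표시자입니다).

```python
import json
import os
from urllib import request

# 자리표시자 crew id입니다. 실제 배포의 id로 바꾸세요.
crew_id = "<crew-id>"
url = f"https://app.crewai.com/api/v1/crews/{crew_id}/kickoff"

req = request.Request(
    url,
    data=json.dumps({"inputs": {"topic": "AI agents"}}).encode(),
    headers={
        "Authorization": f"Bearer {os.environ.get('CREWAI_API_KEY', '')}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# request.urlopen(req)  # 주석을 해제하면 실제로 요청을 보냅니다
```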
</Step>
</Steps>
<CardGroup cols={2}>
<Card title="AMP에 배포" icon="rocket" href="/ko/enterprise/guides/deploy-to-amp">
전체 배포 가이드 — CLI, GitHub, Crew Studio 방법.
</Card>
<Card title="AMP 소개" icon="globe" href="/ko/enterprise/introduction">
플랫폼 개요 — 프로덕션 crew에 AMP가 제공하는 것.
</Card>
</CardGroup>
---
## 4. 엔터프라이즈 기능
CrewAI AMP는 프로덕션 팀을 위해 만들어졌습니다. 배포 외에 제공되는 것은 다음과 같습니다.
<CardGroup cols={2}>
<Card title="관측 가능성" icon="chart-line">
모든 crew 실행에 대한 상세 실행 추적, 로그, 성능 지표. 에이전트 결정, 도구 호출, 태스크 완료를 실시간으로 모니터링합니다.
</Card>
<Card title="Crew Studio" icon="paintbrush">
시각적으로 crew를 만들고, 맞춤 설정하고, 배포하는 노코드/로코드 인터페이스 — 코드로 보내거나 바로 배포할 수 있습니다.
</Card>
<Card title="웹훅 스트리밍" icon="webhook">
crew 실행에서 실시간 이벤트를 시스템으로 스트리밍합니다. Slack, Zapier 등 웹훅 소비자와 연동할 수 있습니다.
</Card>
<Card title="팀 관리" icon="users">
SSO, RBAC, 조직 단위 제어. 팀 전체에서 crew 생성·배포·접근 권한을 관리합니다.
</Card>
<Card title="도구 저장소" icon="toolbox">
조직 전체에 맞춤 도구를 게시하고 공유합니다. 레지스트리에서 커뮤니티 도구를 설치합니다.
</Card>
<Card title="Factory(셀프 호스팅)" icon="server">
자체 인프라에서 CrewAI AMP를 실행합니다. 데이터 상주와 규정 준수 제어와 함께 플랫폼 전체 기능을 사용할 수 있습니다.
</Card>
</CardGroup>
<AccordionGroup>
<Accordion title="AMP는 누구를 위한 것인가요?">
AI 에이전트 워크플로를 프로토타입에서 프로덕션으로 옮겨야 하는 팀을 위한 제품입니다. 관측 가능성, 접근 제어, 확장 가능한 인프라를 제공합니다. 스타트업이든 대기업이든 운영 복잡도는 AMP가 맡고, 에이전트 구축에 집중할 수 있습니다.
</Accordion>
<Accordion title="배포 옵션은 무엇이 있나요?">
- **클라우드 (app.crewai.com)** — CrewAI가 관리, 프로덕션까지 가장 빠른 경로
- **Factory(셀프 호스팅)** — 데이터 통제를 위해 자체 인프라에서 실행
- **하이브리드** — 민감도에 따라 클라우드와 셀프 호스팅을 혼합
</Accordion>
</AccordionGroup>
<Card title="CrewAI AMP 살펴보기 →" icon="arrow-right" href="https://app.crewai.com">
가입하고 첫 crew를 프로덕션에 배포해 보세요.
</Card>

View File

@@ -4,6 +4,62 @@ description: "Atualizações de produto, melhorias e correções do CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="23 abr 2026">
## v1.14.3a3
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a3)
## O que mudou
### Recursos
- Adicionar suporte para e2b
- Implementar fallback para DefaultAzureCredential quando nenhuma chave de API for fornecida
### Correções de Bugs
- Atualizar lxml para >=6.1.0 para resolver problema de segurança GHSA-vfmq-68hx-4jfw
### Documentação
- Remover FAQ de preços da página build-with-ai em todos os locais
### Desempenho
- Melhorar o tempo de inicialização a frio em ~29% através do carregamento preguiçoso do SDK MCP e tipos de eventos
## Contributors
@alex-clawd, @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha
</Update>
<Update label="22 abr 2026">
## v1.14.3a2
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a2)
## O que mudou
### Recursos
- Adicionar suporte para Bedrock V4
- Adicionar ferramentas de sandbox Daytona para funcionalidade aprimorada
- Adicionar página 'Construir com IA' — documentação nativa de IA para agentes de codificação
- Adicionar Construir com IA à navegação Começar e arquivos de página para todos os idiomas (en, ko, pt-BR, ar)
### Correções de Bugs
- Corrigir a propagação de nomes implícitos @CrewBase para eventos da equipe
- Resolver problema com inicialização de lote duplicada na mesclagem de metadados de execução
- Corrigir a serialização de campos de referência de classe Task para checkpointing
- Lidar com o resultado BaseModel no loop de repetição de guardrail
- Atualizar python-dotenv para a versão >=1.2.2 para conformidade de segurança
### Documentação
- Atualizar changelog e versão para v1.14.3a1
- Atualizar descrições e aplicar traduções reais
## Contributors
@MatthiasHowellYopp, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @renatonitta
</Update>
<Update label="21 abr 2026">
## v1.14.3a1

View File

@@ -0,0 +1,214 @@
---
title: "Construa com IA"
description: "Tudo o que agentes de codificação com IA precisam para criar, implantar e escalar com CrewAI — skills, documentação legível por máquina, implantação e recursos enterprise."
icon: robot
mode: "wide"
---
# Construa com IA
O CrewAI é nativo de IA. Esta página reúne o que um agente de codificação com IA precisa para construir com CrewAI — seja Claude Code, Codex, Cursor, Gemini CLI ou qualquer outro assistente que ajude um desenvolvedor a entregar crews e flows.
### Agentes de codificação compatíveis
<CardGroup cols={5}>
<Card title="Claude Code" icon="message-bot" color="#D97706" />
<Card title="Cursor" icon="arrow-pointer" color="#3B82F6" />
<Card title="Codex" icon="terminal" color="#10B981" />
<Card title="Windsurf" icon="wind" color="#06B6D4" />
<Card title="Gemini CLI" icon="sparkles" color="#8B5CF6" />
</CardGroup>
<Note>
Esta página serve para humanos e para assistentes de IA. Se você é um agente de codificação, comece por **Skills** para obter contexto do CrewAI e depois use **llms.txt** para acesso completo à documentação.
</Note>
---
## 1. Skills — ensine CrewAI ao seu agente
**Skills** são pacotes de instruções que dão aos agentes de codificação conhecimento profundo do CrewAI — como estruturar Flows, configurar Crews, usar ferramentas e seguir convenções do framework.
<Tabs>
<Tab title="Claude Code (Plugin Marketplace)">
<img src="https://cdn.simpleicons.org/anthropic/D97706" alt="Anthropic" width="28" style={{display: "inline", verticalAlign: "middle", marginRight: "8px"}} />
As skills do CrewAI estão no **plugin marketplace do Claude Code** — o mesmo canal usado por empresas líderes em IA:
```shell
/plugin marketplace add crewAIInc/skills
/plugin install crewai-skills@crewai-plugins
/reload-plugins
```
Quatro skills são ativadas automaticamente quando você faz perguntas relevantes sobre CrewAI:
| Skill | Quando é usada |
|-------|----------------|
| `getting-started` | Novos projetos, escolha entre `LLM.call()` / `Agent` / `Crew` / `Flow`, arquivos `crew.py` / `main.py` |
| `design-agent` | Configurar agentes — papel, objetivo, história, ferramentas, LLMs, memória, guardrails |
| `design-task` | Descrever tarefas, dependências, saída estruturada (`output_pydantic`, `output_json`), revisão humana |
| `ask-docs` | Consultar o [servidor MCP da documentação CrewAI](https://docs.crewai.com/mcp) em tempo real para detalhes de API |
</Tab>
<Tab title="npx (qualquer agente)">
Funciona com Claude Code, Codex, Cursor, Gemini CLI ou qualquer agente de codificação:
```shell
npx skills add crewaiinc/skills
```
Obtido do [registro skills.sh](https://skills.sh/crewaiinc/skills).
</Tab>
</Tabs>
<Steps>
<Step title="Instale o pacote oficial de skills">
Use um dos métodos acima — o plugin marketplace do Claude Code ou `npx skills add`. Ambos instalam o pacote oficial [crewAIInc/skills](https://github.com/crewAIInc/skills).
</Step>
<Step title="Seu agente ganha expertise imediata em CrewAI">
O pacote ensina ao seu agente:
- **Flows** — apps com estado, passos e disparo de crews
- **Crews e agentes** — padrões YAML-first, papéis, tarefas, delegação
- **Ferramentas e integrações** — busca, APIs, servidores MCP e ferramentas comuns do CrewAI
- **Estrutura do projeto** — scaffolds da CLI e convenções de repositório
- **Padrões atualizados** — alinhado à documentação e às melhores práticas atuais do CrewAI
</Step>
<Step title="Comece a construir">
Seu agente pode estruturar e construir projetos CrewAI sem você precisar reexplicar o framework a cada sessão.
</Step>
</Steps>
<CardGroup cols={2}>
<Card title="Conceito de skills" icon="bolt" href="/pt-BR/concepts/skills">
Como skills funcionam em agentes CrewAI — injeção, ativação e padrões.
</Card>
<Card title="Página de skills" icon="wand-magic-sparkles" href="/pt-BR/skills">
Visão geral do pacote crewAIInc/skills e do que ele inclui.
</Card>
<Card title="AGENTS.md e ferramentas" icon="terminal" href="/pt-BR/guides/coding-tools/agents-md">
Configure o AGENTS.md para Claude Code, Codex, Cursor e Gemini CLI.
</Card>
<Card title="Registro skills.sh" icon="globe" href="https://skills.sh/crewaiinc/skills">
Listagem oficial — skills, estatísticas de instalação e auditorias.
</Card>
</CardGroup>
---
## 2. llms.txt — documentação legível por máquina
O CrewAI publica um arquivo `llms.txt` que dá aos assistentes de IA acesso direto à documentação completa em formato legível por máquinas.
```
https://docs.crewai.com/llms.txt
```
<Tabs>
<Tab title="O que é llms.txt?">
[`llms.txt`](https://llmstxt.org/) é um padrão emergente para tornar a documentação consumível por grandes modelos de linguagem. Em vez de fazer scraping de HTML, seu agente pode buscar um único arquivo de texto estruturado com o conteúdo necessário.
O `llms.txt` do CrewAI **já está no ar** — seu agente pode usar agora.
</Tab>
<Tab title="Como usar">
Indique ao agente de codificação a URL quando precisar da referência do CrewAI:
```
Fetch https://docs.crewai.com/llms.txt for CrewAI documentation.
```
Muitos agentes (Claude Code, Cursor etc.) conseguem buscar URLs diretamente. O arquivo contém documentação estruturada sobre conceitos, APIs e guias do CrewAI.
</Tab>
<Tab title="Por que importa">
- **Sem scraping** — conteúdo limpo e estruturado em uma requisição
- **Sempre atualizado** — servido diretamente de docs.crewai.com
- **Otimizado para LLMs** — formatado para janelas de contexto, não para navegadores
- **Complementa as skills** — skills ensinam padrões; llms.txt fornece referência
</Tab>
</Tabs>
---
## 3. Implantação enterprise
Do crew local à produção no **CrewAI AMP** (Agent Management Platform) em minutos.
<Steps>
<Step title="Construa localmente">
Estruture e teste seu crew ou flow:
```bash
crewai create crew my_crew
cd my_crew
crewai run
```
</Step>
<Step title="Prepare a implantação">
Garanta que a estrutura do projeto está pronta:
```bash
crewai deploy --prepare
```
Veja o [guia de preparação](/pt-BR/enterprise/guides/prepare-for-deployment) para detalhes de estrutura e requisitos.
</Step>
<Step title="Implante no AMP">
Envie para a plataforma CrewAI AMP:
```bash
crewai deploy
```
Também é possível implantar pela [integração com GitHub](/pt-BR/enterprise/guides/deploy-to-amp) ou pelo [Crew Studio](/pt-BR/enterprise/guides/enable-crew-studio).
</Step>
<Step title="Acesso via API">
O crew implantado recebe um endpoint REST. Integre em qualquer aplicação:
```bash
curl -X POST https://app.crewai.com/api/v1/crews/<crew-id>/kickoff \
-H "Authorization: Bearer $CREWAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{"inputs": {"topic": "AI agents"}}'
```
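Se preferir Python, a mesma requisição de kickoff pode ser esboçada apenas com a biblioteca padrão (como no exemplo curl, o crew id abaixo é um placeholder):

```python
import json
import os
from urllib import request

# Crew id placeholder; substitua pelo id real da sua implantação.
crew_id = "<crew-id>"
url = f"https://app.crewai.com/api/v1/crews/{crew_id}/kickoff"

req = request.Request(
    url,
    data=json.dumps({"inputs": {"topic": "AI agents"}}).encode(),
    headers={
        "Authorization": f"Bearer {os.environ.get('CREWAI_API_KEY', '')}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# request.urlopen(req)  # descomente para enviar a requisição de fato
```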
</Step>
</Steps>
<CardGroup cols={2}>
<Card title="Implantar no AMP" icon="rocket" href="/pt-BR/enterprise/guides/deploy-to-amp">
Guia completo de implantação — CLI, GitHub e Crew Studio.
</Card>
<Card title="Introdução ao AMP" icon="globe" href="/pt-BR/enterprise/introduction">
Visão da plataforma — o que o AMP oferece para crews em produção.
</Card>
</CardGroup>
---
## 4. Recursos enterprise
O CrewAI AMP foi feito para equipes em produção. Além da implantação, você obtém:
<CardGroup cols={2}>
<Card title="Observabilidade" icon="chart-line">
Traces de execução, logs e métricas de desempenho para cada execução de crew. Monitore decisões de agentes, chamadas de ferramentas e conclusão de tarefas em tempo real.
</Card>
<Card title="Crew Studio" icon="paintbrush">
Interface no-code/low-code para criar, personalizar e implantar crews visualmente — exporte para código ou implante direto.
</Card>
<Card title="Webhook streaming" icon="webhook">
Transmita eventos em tempo real das execuções para seus sistemas. Integre com Slack, Zapier ou qualquer consumidor de webhook.
</Card>
<Card title="Gestão de equipe" icon="users">
SSO, RBAC e controles em nível de organização. Gerencie quem pode criar, implantar e acessar crews.
</Card>
<Card title="Repositório de ferramentas" icon="toolbox">
Publique e compartilhe ferramentas customizadas na organização. Instale ferramentas da comunidade a partir do registro.
</Card>
<Card title="Factory (self-hosted)" icon="server">
Execute o CrewAI AMP na sua infraestrutura. Capacidades completas da plataforma com residência de dados e controles de conformidade.
</Card>
</CardGroup>
<AccordionGroup>
<Accordion title="Para quem é o AMP?">
Para equipes que precisam levar fluxos de agentes de IA do protótipo à produção — com observabilidade, controles de acesso e infraestrutura escalável. De startups a grandes empresas, o AMP cuida da complexidade operacional para você focar nos agentes.
</Accordion>
<Accordion title="Quais opções de implantação existem?">
- **Nuvem (app.crewai.com)** — gerenciada pela CrewAI, caminho mais rápido para produção
- **Factory (self-hosted)** — na sua infraestrutura para controle total dos dados
- **Híbrido** — combine nuvem e self-hosted conforme a sensibilidade dos dados
</Accordion>
</AccordionGroup>
<Card title="Conheça o CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
Cadastre-se e leve seu primeiro crew à produção.
</Card>

View File

@@ -152,4 +152,4 @@ __all__ = [
"wrap_file_source",
]
__version__ = "1.14.3a1"
__version__ = "1.14.3a3"

View File

@@ -10,7 +10,7 @@ requires-python = ">=3.10, <3.14"
dependencies = [
"pytube~=15.0.0",
"requests>=2.33.0,<3",
"crewai==1.14.3a1",
"crewai==1.14.3a3",
"tiktoken~=0.8.0",
"beautifulsoup4~=4.13.4",
"python-docx~=1.2.0",
@@ -112,7 +112,7 @@ github = [
]
rag = [
"python-docx>=1.1.0",
"lxml>=5.3.0,<5.4.0", # Pin to avoid etree import issues in 5.4.0
"lxml>=6.1.0,<7", # 6.1.0+ required for GHSA-vfmq-68hx-4jfw (XXE in iterparse)
]
xml = [
"unstructured[local-inference, all-docs]>=0.17.2"
@@ -143,6 +143,11 @@ daytona = [
"daytona~=0.140.0",
]
e2b = [
"e2b~=2.20.0",
"e2b-code-interpreter~=2.6.0",
]
[tool.uv]
exclude-newer = "3 days"

View File

@@ -71,6 +71,11 @@ from crewai_tools.tools.directory_search_tool.directory_search_tool import (
DirectorySearchTool,
)
from crewai_tools.tools.docx_search_tool.docx_search_tool import DOCXSearchTool
from crewai_tools.tools.e2b_sandbox_tool import (
E2BExecTool,
E2BFileTool,
E2BPythonTool,
)
from crewai_tools.tools.exa_tools.exa_search_tool import EXASearchTool
from crewai_tools.tools.file_read_tool.file_read_tool import FileReadTool
from crewai_tools.tools.file_writer_tool.file_writer_tool import FileWriterTool
@@ -242,6 +247,9 @@ __all__ = [
"DaytonaPythonTool",
"DirectoryReadTool",
"DirectorySearchTool",
"E2BExecTool",
"E2BFileTool",
"E2BPythonTool",
"EXASearchTool",
"EnterpriseActionTool",
"FileCompressorTool",
@@ -313,4 +321,4 @@ __all__ = [
"ZapierActionTools",
]
__version__ = "1.14.3a1"
__version__ = "1.14.3a3"

View File

@@ -60,6 +60,11 @@ from crewai_tools.tools.directory_search_tool.directory_search_tool import (
DirectorySearchTool,
)
from crewai_tools.tools.docx_search_tool.docx_search_tool import DOCXSearchTool
from crewai_tools.tools.e2b_sandbox_tool import (
E2BExecTool,
E2BFileTool,
E2BPythonTool,
)
from crewai_tools.tools.exa_tools.exa_search_tool import EXASearchTool
from crewai_tools.tools.file_read_tool.file_read_tool import FileReadTool
from crewai_tools.tools.file_writer_tool.file_writer_tool import FileWriterTool
@@ -227,6 +232,9 @@ __all__ = [
"DaytonaPythonTool",
"DirectoryReadTool",
"DirectorySearchTool",
"E2BExecTool",
"E2BFileTool",
"E2BPythonTool",
"EXASearchTool",
"FileCompressorTool",
"FileReadTool",

View File

@@ -0,0 +1,120 @@
# E2B Sandbox Tools
Run shell commands, execute Python, and manage files inside an [E2B](https://e2b.dev/) sandbox. E2B provides isolated, ephemeral VMs suitable for agent-driven code execution, with a Jupyter-style code interpreter for rich Python results.
Three tools are provided so you can pick what the agent actually needs:
- **`E2BExecTool`** — run a shell command (`sandbox.commands.run`).
- **`E2BPythonTool`** — run a Python cell in the E2B code interpreter (`sandbox.run_code`), returning stdout/stderr and rich results (charts, dataframes).
- **`E2BFileTool`** — read / write / list / delete files (`sandbox.files.*`).
## Installation
```shell
uv add "crewai-tools[e2b]"
# or
pip install "crewai-tools[e2b]"
```
Set the API key:
```shell
export E2B_API_KEY="..."
```
`E2B_DOMAIN` is also respected if set (for self-hosted or non-default deployments).
## Sandbox lifecycle
All three tools share the same lifecycle controls from `E2BBaseTool`:
| Mode | When the sandbox is created | When it is killed |
| --- | --- | --- |
| **Ephemeral** (default, `persistent=False`) | On every `_run` call | At the end of that same call |
| **Persistent** (`persistent=True`) | Lazily on first use | At process exit (via `atexit`), or manually via `tool.close()` |
| **Attach** (`sandbox_id="…"`) | Never — the tool attaches to an existing sandbox | Never — the tool will not kill a sandbox it did not create |
Ephemeral mode is the safe default: nothing leaks if the agent forgets to clean up. Use persistent mode when you want filesystem state or installed packages to carry across steps — this is typical when pairing `E2BFileTool` with `E2BExecTool`.
E2B sandboxes also auto-expire after an idle timeout. Tune it via `sandbox_timeout` (seconds, default `300`).
## Examples
### One-shot Python execution (ephemeral)
```python
from crewai_tools import E2BPythonTool
tool = E2BPythonTool()
result = tool.run(code="print(sum(range(10)))")
```
### Multi-step shell session (persistent)
```python
from crewai_tools import E2BExecTool, E2BFileTool
exec_tool = E2BExecTool(persistent=True)
file_tool = E2BFileTool(persistent=True)
# Each tool keeps its own persistent sandbox. If you need the *same* sandbox
# across two tools, run one call on the first tool (the persistent sandbox is
# created lazily, on first use), grab its id via
# `exec_tool._persistent_sandbox.sandbox_id`, and pass it to the other tool
# via `sandbox_id=...`.
```
### Attach to an existing sandbox
```python
from crewai_tools import E2BExecTool
tool = E2BExecTool(sandbox_id="sbx_...")
```
### Custom create params
```python
tool = E2BExecTool(
persistent=True,
template="my-custom-template",
sandbox_timeout=600,
envs={"MY_FLAG": "1"},
metadata={"owner": "crewai-agent"},
)
```
## Tool arguments
### `E2BExecTool`
- `command: str` — shell command to run.
- `cwd: str | None` — working directory.
- `envs: dict[str, str] | None` — extra env vars for this command.
- `timeout: float | None` — seconds.
### `E2BPythonTool`
- `code: str` — source to execute.
- `language: str | None` — override kernel language (default: Python).
- `envs: dict[str, str] | None` — env vars for the run.
- `timeout: float | None` — seconds.
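A successful run returns a dict with the keys produced by the tool's execution serializer. The values below are illustrative, not real output:

```python
# Shape of an E2BPythonTool result (values are made up for illustration).
result = {
    "text": "45",            # final expression value, if any
    "stdout": ["45\n"],      # captured stdout
    "stderr": [],            # captured stderr
    "error": None,           # or {"name", "value", "traceback"} on failure
    "results": [],           # rich outputs (png, html, json, ...)
    "execution_count": 1,
}
print(sorted(result))
# ['error', 'execution_count', 'results', 'stderr', 'stdout', 'text']
```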
### `E2BFileTool`
- `action: "read" | "write" | "append" | "list" | "delete" | "mkdir" | "info" | "exists"`
- `path: str` — absolute path inside the sandbox.
- `content: str | None` — required for `append`; optional for `write`.
- `binary: bool` — if `True`, `content` is base64 on write / returned as base64 on read.
- `depth: int` — for `list`, how many levels to recurse (default 1).
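The chunked-write pattern recommended above (one empty `write`, then ~4KB `append` calls) can be prepared client-side with plain Python. The chunking helpers below are a sketch, independent of the SDK; each produced chunk would be sent as one `append` call:

```python
import base64

CHUNK = 4096  # ~4KB per append call, per the guidance above

def text_chunks(body: str) -> list[str]:
    """Split a large text body into append-sized chunks."""
    return [body[i : i + CHUNK] for i in range(0, len(body), CHUNK)]

def binary_chunks(data: bytes) -> list[str]:
    """Base64-encode binary data chunk by chunk for append calls with binary=True.
    Chunking *before* encoding keeps each decoded piece exactly CHUNK bytes."""
    return [
        base64.b64encode(data[i : i + CHUNK]).decode("ascii")
        for i in range(0, len(data), CHUNK)
    ]

body = "x" * 10_000
chunks = text_chunks(body)
# First call: action='write' with empty content creates the file;
# each chunk then goes out as action='append' with content=chunk.
print(len(chunks))                          # 3
print(sum(map(len, chunks)) == len(body))   # True
```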
## Security considerations
These tools hand the LLM arbitrary shell, Python, and filesystem access inside a remote VM. The threat model to keep in mind:
- **Prompt-injection is a code-execution vector.** If the agent ingests untrusted content (web pages, scraped documents, user-supplied files, emails, search results), a malicious instruction hidden in that content can coerce the agent into issuing commands to `E2BExecTool` / `E2BPythonTool`. Treat any pipeline that feeds untrusted text into an agent that also has these tools as equivalent to remote code execution — the LLM is the attacker's shell.
- **Ephemeral mode (the default) is the main blast-radius control.** A fresh sandbox is created per call and killed at the end, so injected commands cannot persist state, exfiltrate long-lived secrets, or build up tooling across turns. Leave `persistent=False` unless you have a concrete reason to change it.
- **Avoid this specific combination:**
- untrusted content in the agent's context, **plus**
- `persistent=True` or an explicit long-lived `sandbox_id`, **plus**
- a large `sandbox_timeout` or credentials/secrets seeded into the sandbox via `envs`.
That stack lets a single injection pivot into a long-running, credentialed shell that survives across turns. If you must run persistently, also keep `sandbox_timeout` short, scope `envs` to the minimum the task needs, and don't feed the same agent untrusted input.
- **Don't mount production credentials.** Anything you put into `envs`, `metadata`, or files written to the sandbox is reachable from the LLM. Use per-task scoped keys, not your personal API tokens.
- **E2B's VM isolation is the final backstop**, not a license to relax the above — isolation prevents escape to the host, but everything the sandbox can reach (the public internet, any service whose token you dropped in) is still fair game for an injected command.

View File

@@ -0,0 +1,12 @@
from crewai_tools.tools.e2b_sandbox_tool.e2b_base_tool import E2BBaseTool
from crewai_tools.tools.e2b_sandbox_tool.e2b_exec_tool import E2BExecTool
from crewai_tools.tools.e2b_sandbox_tool.e2b_file_tool import E2BFileTool
from crewai_tools.tools.e2b_sandbox_tool.e2b_python_tool import E2BPythonTool
__all__ = [
"E2BBaseTool",
"E2BExecTool",
"E2BFileTool",
"E2BPythonTool",
]

View File

@@ -0,0 +1,197 @@
from __future__ import annotations
import atexit
import logging
import os
import threading
from typing import Any, ClassVar
from crewai.tools import BaseTool, EnvVar
from pydantic import ConfigDict, Field, PrivateAttr, SecretStr
logger = logging.getLogger(__name__)
class E2BBaseTool(BaseTool):
"""Shared base for tools that act on an E2B sandbox.
Lifecycle modes:
- persistent=False (default): create a fresh sandbox per `_run` call and
kill it when the call returns. Safer and stateless — nothing leaks if
the agent forgets cleanup.
- persistent=True: lazily create a single sandbox on first use, cache it
on the instance, and register an atexit hook to kill it at process
exit. Cheaper across many calls and lets files/state carry over.
- sandbox_id=<existing>: attach to a sandbox the caller already owns.
Never killed by the tool.
"""
model_config = ConfigDict(arbitrary_types_allowed=True)
package_dependencies: list[str] = Field(default_factory=lambda: ["e2b"])
api_key: SecretStr | None = Field(
default_factory=lambda: (
SecretStr(val) if (val := os.getenv("E2B_API_KEY")) else None
),
description="E2B API key. Falls back to E2B_API_KEY env var.",
json_schema_extra={"required": False},
repr=False,
)
domain: str | None = Field(
default_factory=lambda: os.getenv("E2B_DOMAIN"),
description="E2B API domain override. Falls back to E2B_DOMAIN env var.",
json_schema_extra={"required": False},
)
template: str | None = Field(
default=None,
description=(
"Optional template/snapshot name or id to create the sandbox from. "
"Defaults to E2B's base template when omitted."
),
)
persistent: bool = Field(
default=False,
description=(
"If True, reuse one sandbox across all calls to this tool instance "
"and kill it at process exit. Default False creates and kills a "
"fresh sandbox per call."
),
)
sandbox_id: str | None = Field(
default=None,
description=(
"Attach to an existing sandbox by id instead of creating a new "
"one. The tool will never kill a sandbox it did not create."
),
)
sandbox_timeout: int = Field(
default=300,
description=(
"Idle timeout in seconds after which E2B auto-kills the sandbox. "
"Applied at create time and when attaching via sandbox_id."
),
)
envs: dict[str, str] | None = Field(
default=None,
description="Environment variables to set inside the sandbox at create time.",
)
metadata: dict[str, str] | None = Field(
default=None,
description="Metadata key-value pairs to attach to the sandbox at create time.",
)
env_vars: list[EnvVar] = Field(
default_factory=lambda: [
EnvVar(
name="E2B_API_KEY",
description="API key for E2B sandbox service",
required=False,
),
EnvVar(
name="E2B_DOMAIN",
description="E2B API domain (optional)",
required=False,
),
]
)
_persistent_sandbox: Any | None = PrivateAttr(default=None)
_lock: threading.Lock = PrivateAttr(default_factory=threading.Lock)
_cleanup_registered: bool = PrivateAttr(default=False)
_sdk_cache: ClassVar[dict[str, Any]] = {}
@classmethod
def _import_sandbox_class(cls) -> Any:
"""Return the Sandbox class used by this tool.
Subclasses override this to swap in a different SDK (e.g. the code
interpreter sandbox). The default uses plain `e2b.Sandbox`.
"""
cached = cls._sdk_cache.get("e2b.Sandbox")
if cached is not None:
return cached
try:
from e2b import Sandbox # type: ignore[import-untyped]
except ImportError as exc:
raise ImportError(
"The 'e2b' package is required for E2B sandbox tools. "
"Install it with: uv add e2b (or) pip install e2b"
) from exc
cls._sdk_cache["e2b.Sandbox"] = Sandbox
return Sandbox
def _connect_kwargs(self) -> dict[str, Any]:
kwargs: dict[str, Any] = {}
if self.api_key is not None:
kwargs["api_key"] = self.api_key.get_secret_value()
if self.domain:
kwargs["domain"] = self.domain
if self.sandbox_timeout is not None:
kwargs["timeout"] = self.sandbox_timeout
return kwargs
def _create_kwargs(self) -> dict[str, Any]:
kwargs: dict[str, Any] = self._connect_kwargs()
if self.template is not None:
kwargs["template"] = self.template
if self.envs is not None:
kwargs["envs"] = self.envs
if self.metadata is not None:
kwargs["metadata"] = self.metadata
return kwargs
def _acquire_sandbox(self) -> tuple[Any, bool]:
"""Return (sandbox, should_kill_after_use)."""
sandbox_cls = self._import_sandbox_class()
if self.sandbox_id:
return (
sandbox_cls.connect(self.sandbox_id, **self._connect_kwargs()),
False,
)
if self.persistent:
with self._lock:
if self._persistent_sandbox is None:
self._persistent_sandbox = sandbox_cls.create(
**self._create_kwargs()
)
if not self._cleanup_registered:
atexit.register(self.close)
self._cleanup_registered = True
return self._persistent_sandbox, False
sandbox = sandbox_cls.create(**self._create_kwargs())
return sandbox, True
def _release_sandbox(self, sandbox: Any, should_kill: bool) -> None:
if not should_kill:
return
try:
sandbox.kill()
except Exception:
logger.debug(
"Best-effort sandbox cleanup failed after ephemeral use; "
"the sandbox may need manual termination.",
exc_info=True,
)
def close(self) -> None:
"""Kill the cached persistent sandbox if one exists."""
with self._lock:
sandbox = self._persistent_sandbox
self._persistent_sandbox = None
if sandbox is None:
return
try:
sandbox.kill()
except Exception:
logger.debug(
"Best-effort persistent sandbox cleanup failed at close(); "
"the sandbox may need manual termination.",
exc_info=True,
)

View File

@@ -0,0 +1,62 @@
from __future__ import annotations
from builtins import type as type_
from typing import Any
from pydantic import BaseModel, Field
from crewai_tools.tools.e2b_sandbox_tool.e2b_base_tool import E2BBaseTool
class E2BExecToolSchema(BaseModel):
command: str = Field(..., description="Shell command to execute in the sandbox.")
cwd: str | None = Field(
default=None,
description="Working directory to run the command in. Defaults to the sandbox home dir.",
)
envs: dict[str, str] | None = Field(
default=None,
description="Optional environment variables to set for this command.",
)
timeout: float | None = Field(
default=None,
description="Maximum seconds to wait for the command to finish.",
)
class E2BExecTool(E2BBaseTool):
"""Run a shell command inside an E2B sandbox."""
name: str = "E2B Sandbox Exec"
description: str = (
"Execute a shell command inside an E2B sandbox and return the exit "
"code, stdout, and stderr. Use this to run builds, package installs, "
"git operations, or any one-off shell command."
)
args_schema: type_[BaseModel] = E2BExecToolSchema
def _run(
self,
command: str,
cwd: str | None = None,
envs: dict[str, str] | None = None,
timeout: float | None = None,
) -> Any:
sandbox, should_kill = self._acquire_sandbox()
try:
run_kwargs: dict[str, Any] = {}
if cwd is not None:
run_kwargs["cwd"] = cwd
if envs is not None:
run_kwargs["envs"] = envs
if timeout is not None:
run_kwargs["timeout"] = timeout
result = sandbox.commands.run(command, **run_kwargs)
return {
"exit_code": getattr(result, "exit_code", None),
"stdout": getattr(result, "stdout", None),
"stderr": getattr(result, "stderr", None),
"error": getattr(result, "error", None),
}
finally:
self._release_sandbox(sandbox, should_kill)

View File

@@ -0,0 +1,220 @@
from __future__ import annotations
import base64
from builtins import type as type_
import logging
import posixpath
from typing import Any, Literal
from pydantic import BaseModel, Field, model_validator
from crewai_tools.tools.e2b_sandbox_tool.e2b_base_tool import E2BBaseTool
logger = logging.getLogger(__name__)
FileAction = Literal[
"read", "write", "append", "list", "delete", "mkdir", "info", "exists"
]
class E2BFileToolSchema(BaseModel):
action: FileAction = Field(
...,
description=(
"The filesystem action to perform: 'read' (returns file contents), "
"'write' (create or replace a file with content), 'append' (append "
"content to an existing file — use this for writing large files in "
"chunks to avoid hitting tool-call size limits), 'list' (lists a "
"directory), 'delete' (removes a file/dir), 'mkdir' (creates a "
"directory), 'info' (returns file metadata), 'exists' (returns a "
"boolean for whether the path exists)."
),
)
path: str = Field(..., description="Absolute path inside the sandbox.")
content: str | None = Field(
default=None,
description=(
"Content to write or append. If omitted for 'write', an empty file "
"is created. For files larger than a few KB, prefer one 'write' "
"with empty content followed by multiple 'append' calls of ~4KB "
"each to stay within tool-call payload limits."
),
)
binary: bool = Field(
default=False,
description=(
"For 'write'/'append': treat content as base64 and upload raw "
"bytes. For 'read': return contents as base64 instead of decoded "
"utf-8."
),
)
depth: int = Field(
default=1,
description="For action='list': how many levels deep to recurse (default 1).",
)
@model_validator(mode="after")
def _validate_action_args(self) -> E2BFileToolSchema:
if self.action == "append" and self.content is None:
raise ValueError(
"action='append' requires 'content'. Pass the chunk to append "
"in the 'content' field."
)
return self
class E2BFileTool(E2BBaseTool):
"""Read, write, and manage files inside an E2B sandbox.
Notes:
- Most useful with `persistent=True` or an explicit `sandbox_id`. With
the default ephemeral mode, files disappear when this tool call
finishes.
"""
name: str = "E2B Sandbox Files"
description: str = (
"Perform filesystem operations inside an E2B sandbox: read a file, "
"write content to a path, append content to an existing file, list a "
"directory, delete a path, make a directory, fetch file metadata, or "
"check whether a path exists. For files larger than a few KB, create "
"the file with action='write' and empty content, then send the body "
"via multiple 'append' calls of ~4KB each to stay within tool-call "
"payload limits."
)
args_schema: type_[BaseModel] = E2BFileToolSchema
def _run(
self,
action: FileAction,
path: str,
content: str | None = None,
binary: bool = False,
depth: int = 1,
) -> Any:
sandbox, should_kill = self._acquire_sandbox()
try:
if action == "read":
return self._read(sandbox, path, binary=binary)
if action == "write":
return self._write(sandbox, path, content or "", binary=binary)
if action == "append":
return self._append(sandbox, path, content or "", binary=binary)
if action == "list":
return self._list(sandbox, path, depth=depth)
if action == "delete":
sandbox.files.remove(path)
return {"status": "deleted", "path": path}
if action == "mkdir":
created = sandbox.files.make_dir(path)
return {"status": "created", "path": path, "created": bool(created)}
if action == "info":
return self._info(sandbox, path)
if action == "exists":
return {"path": path, "exists": bool(sandbox.files.exists(path))}
raise ValueError(f"Unknown action: {action}")
finally:
self._release_sandbox(sandbox, should_kill)
def _read(self, sandbox: Any, path: str, *, binary: bool) -> dict[str, Any]:
if binary:
data: bytes = sandbox.files.read(path, format="bytes")
return {
"path": path,
"encoding": "base64",
"content": base64.b64encode(data).decode("ascii"),
}
try:
content: str = sandbox.files.read(path)
return {"path": path, "encoding": "utf-8", "content": content}
except UnicodeDecodeError:
data = sandbox.files.read(path, format="bytes")
return {
"path": path,
"encoding": "base64",
"content": base64.b64encode(data).decode("ascii"),
"note": "File was not valid utf-8; returned as base64.",
}
def _write(
self, sandbox: Any, path: str, content: str, *, binary: bool
) -> dict[str, Any]:
payload: str | bytes = base64.b64decode(content) if binary else content
self._ensure_parent_dir(sandbox, path)
sandbox.files.write(path, payload)
size = (
len(payload)
if isinstance(payload, (bytes, bytearray))
else len(payload.encode("utf-8"))
)
return {"status": "written", "path": path, "bytes": size}
def _append(
self, sandbox: Any, path: str, content: str, *, binary: bool
) -> dict[str, Any]:
chunk: bytes = base64.b64decode(content) if binary else content.encode("utf-8")
self._ensure_parent_dir(sandbox, path)
try:
existing: bytes = sandbox.files.read(path, format="bytes")
except Exception:
existing = b""
payload = existing + chunk
sandbox.files.write(path, payload)
return {
"status": "appended",
"path": path,
"appended_bytes": len(chunk),
"total_bytes": len(payload),
}
@staticmethod
def _ensure_parent_dir(sandbox: Any, path: str) -> None:
parent = posixpath.dirname(path)
if not parent or parent in ("/", "."):
return
try:
sandbox.files.make_dir(parent)
except Exception:
logger.debug(
"Best-effort parent-directory create failed for %s; "
"assuming it already exists and proceeding with the write.",
parent,
exc_info=True,
)
def _list(self, sandbox: Any, path: str, *, depth: int) -> dict[str, Any]:
entries = sandbox.files.list(path, depth=depth)
return {
"path": path,
"entries": [self._entry_to_dict(e) for e in entries],
}
def _info(self, sandbox: Any, path: str) -> dict[str, Any]:
return self._entry_to_dict(sandbox.files.get_info(path))
@staticmethod
def _entry_to_dict(entry: Any) -> dict[str, Any]:
fields = (
"name",
"path",
"type",
"size",
"mode",
"permissions",
"owner",
"group",
"modified_time",
"symlink_target",
)
result: dict[str, Any] = {}
for field in fields:
value = getattr(entry, field, None)
if value is not None and field == "modified_time":
result[field] = (
value.isoformat() if hasattr(value, "isoformat") else str(value)
)
else:
result[field] = value
return result

View File

@@ -0,0 +1,133 @@
from __future__ import annotations
from builtins import type as type_
from typing import Any, ClassVar
from pydantic import BaseModel, Field
from crewai_tools.tools.e2b_sandbox_tool.e2b_base_tool import E2BBaseTool
class E2BPythonToolSchema(BaseModel):
code: str = Field(
...,
description="Python source to execute inside the sandbox.",
)
language: str | None = Field(
default=None,
description=(
"Override the execution language (e.g. 'python', 'r', 'javascript'). "
"Defaults to Python when omitted."
),
)
envs: dict[str, str] | None = Field(
default=None,
description="Optional environment variables for the run.",
)
timeout: float | None = Field(
default=None,
description="Maximum seconds to wait for the code to finish.",
)
class E2BPythonTool(E2BBaseTool):
"""Run Python code inside an E2B code interpreter sandbox.
Uses `e2b_code_interpreter`, which runs cells in a persistent Jupyter-style
kernel so state (imports, variables) carries across calls when
`persistent=True`.
"""
name: str = "E2B Sandbox Python"
description: str = (
"Execute a block of Python code inside an E2B code interpreter sandbox "
"and return captured stdout, stderr, the final expression value, and "
"any rich results (charts, dataframes). Use this for data processing, "
"quick scripts, or analysis that should run in an isolated environment."
)
args_schema: type_[BaseModel] = E2BPythonToolSchema
package_dependencies: list[str] = Field(
default_factory=lambda: ["e2b_code_interpreter"],
)
_ci_cache: ClassVar[dict[str, Any]] = {}
@classmethod
def _import_sandbox_class(cls) -> Any:
cached = cls._ci_cache.get("Sandbox")
if cached is not None:
return cached
try:
from e2b_code_interpreter import Sandbox # type: ignore[import-untyped]
except ImportError as exc:
raise ImportError(
"The 'e2b_code_interpreter' package is required for the E2B "
"Python tool. Install it with: "
"uv add e2b-code-interpreter (or) "
"pip install e2b-code-interpreter"
) from exc
cls._ci_cache["Sandbox"] = Sandbox
return Sandbox
def _run(
self,
code: str,
language: str | None = None,
envs: dict[str, str] | None = None,
timeout: float | None = None,
) -> Any:
sandbox, should_kill = self._acquire_sandbox()
try:
run_kwargs: dict[str, Any] = {}
if language is not None:
run_kwargs["language"] = language
if envs is not None:
run_kwargs["envs"] = envs
if timeout is not None:
run_kwargs["timeout"] = timeout
execution = sandbox.run_code(code, **run_kwargs)
return self._serialize_execution(execution)
finally:
self._release_sandbox(sandbox, should_kill)
@staticmethod
def _serialize_execution(execution: Any) -> dict[str, Any]:
logs = getattr(execution, "logs", None)
error = getattr(execution, "error", None)
results = getattr(execution, "results", None) or []
return {
"text": getattr(execution, "text", None),
"stdout": list(getattr(logs, "stdout", []) or []) if logs else [],
"stderr": list(getattr(logs, "stderr", []) or []) if logs else [],
"error": (
{
"name": getattr(error, "name", None),
"value": getattr(error, "value", None),
"traceback": getattr(error, "traceback", None),
}
if error
else None
),
"results": [E2BPythonTool._serialize_result(r) for r in results],
"execution_count": getattr(execution, "execution_count", None),
}
@staticmethod
def _serialize_result(result: Any) -> dict[str, Any]:
fields = (
"text",
"html",
"markdown",
"svg",
"png",
"jpeg",
"pdf",
"latex",
"json",
"javascript",
"data",
"is_main_result",
"extra",
)
return {field: getattr(result, field, None) for field in fields}

View File

@@ -8734,6 +8734,668 @@
"type": "object"
}
},
{
"description": "Execute a shell command inside an E2B sandbox and return the exit code, stdout, and stderr. Use this to run builds, package installs, git operations, or any one-off shell command.",
"env_vars": [
{
"default": null,
"description": "API key for E2B sandbox service",
"name": "E2B_API_KEY",
"required": false
},
{
"default": null,
"description": "E2B API domain (optional)",
"name": "E2B_DOMAIN",
"required": false
}
],
"humanized_name": "E2B Sandbox Exec",
"init_params_schema": {
"$defs": {
"EnvVar": {
"properties": {
"default": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Default"
},
"description": {
"title": "Description",
"type": "string"
},
"name": {
"title": "Name",
"type": "string"
},
"required": {
"default": true,
"title": "Required",
"type": "boolean"
}
},
"required": [
"name",
"description"
],
"title": "EnvVar",
"type": "object"
}
},
"description": "Run a shell command inside an E2B sandbox.",
"properties": {
"api_key": {
"anyOf": [
{
"format": "password",
"type": "string",
"writeOnly": true
},
{
"type": "null"
}
],
"description": "E2B API key. Falls back to E2B_API_KEY env var.",
"required": false,
"title": "Api Key"
},
"domain": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"description": "E2B API domain override. Falls back to E2B_DOMAIN env var.",
"required": false,
"title": "Domain"
},
"envs": {
"anyOf": [
{
"additionalProperties": {
"type": "string"
},
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"description": "Environment variables to set inside the sandbox at create time.",
"title": "Envs"
},
"metadata": {
"anyOf": [
{
"additionalProperties": {
"type": "string"
},
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"description": "Metadata key-value pairs to attach to the sandbox at create time.",
"title": "Metadata"
},
"persistent": {
"default": false,
"description": "If True, reuse one sandbox across all calls to this tool instance and kill it at process exit. Default False creates and kills a fresh sandbox per call.",
"title": "Persistent",
"type": "boolean"
},
"sandbox_id": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Attach to an existing sandbox by id instead of creating a new one. The tool will never kill a sandbox it did not create.",
"title": "Sandbox Id"
},
"sandbox_timeout": {
"default": 300,
"description": "Idle timeout in seconds after which E2B auto-kills the sandbox. Applied at create time and when attaching via sandbox_id.",
"title": "Sandbox Timeout",
"type": "integer"
},
"template": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Optional template/snapshot name or id to create the sandbox from. Defaults to E2B's base template when omitted.",
"title": "Template"
}
},
"required": [],
"title": "E2BExecTool",
"type": "object"
},
"name": "E2BExecTool",
"package_dependencies": [
"e2b"
],
"run_params_schema": {
"properties": {
"command": {
"description": "Shell command to execute in the sandbox.",
"title": "Command",
"type": "string"
},
"cwd": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Working directory to run the command in. Defaults to the sandbox home dir.",
"title": "Cwd"
},
"envs": {
"anyOf": [
{
"additionalProperties": {
"type": "string"
},
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"description": "Optional environment variables to set for this command.",
"title": "Envs"
},
"timeout": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Maximum seconds to wait for the command to finish.",
"title": "Timeout"
}
},
"required": [
"command"
],
"title": "E2BExecToolSchema",
"type": "object"
}
},
{
"description": "Perform filesystem operations inside an E2B sandbox: read a file, write content to a path, append content to an existing file, list a directory, delete a path, make a directory, fetch file metadata, or check whether a path exists. For files larger than a few KB, create the file with action='write' and empty content, then send the body via multiple 'append' calls of ~4KB each to stay within tool-call payload limits.",
"env_vars": [
{
"default": null,
"description": "API key for E2B sandbox service",
"name": "E2B_API_KEY",
"required": false
},
{
"default": null,
"description": "E2B API domain (optional)",
"name": "E2B_DOMAIN",
"required": false
}
],
"humanized_name": "E2B Sandbox Files",
"init_params_schema": {
"$defs": {
"EnvVar": {
"properties": {
"default": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Default"
},
"description": {
"title": "Description",
"type": "string"
},
"name": {
"title": "Name",
"type": "string"
},
"required": {
"default": true,
"title": "Required",
"type": "boolean"
}
},
"required": [
"name",
"description"
],
"title": "EnvVar",
"type": "object"
}
},
"description": "Read, write, and manage files inside an E2B sandbox.\n\nNotes:\n - Most useful with `persistent=True` or an explicit `sandbox_id`. With\n the default ephemeral mode, files disappear when this tool call\n finishes.",
"properties": {
"api_key": {
"anyOf": [
{
"format": "password",
"type": "string",
"writeOnly": true
},
{
"type": "null"
}
],
"description": "E2B API key. Falls back to E2B_API_KEY env var.",
"required": false,
"title": "Api Key"
},
"domain": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"description": "E2B API domain override. Falls back to E2B_DOMAIN env var.",
"required": false,
"title": "Domain"
},
"envs": {
"anyOf": [
{
"additionalProperties": {
"type": "string"
},
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"description": "Environment variables to set inside the sandbox at create time.",
"title": "Envs"
},
"metadata": {
"anyOf": [
{
"additionalProperties": {
"type": "string"
},
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"description": "Metadata key-value pairs to attach to the sandbox at create time.",
"title": "Metadata"
},
"persistent": {
"default": false,
"description": "If True, reuse one sandbox across all calls to this tool instance and kill it at process exit. Default False creates and kills a fresh sandbox per call.",
"title": "Persistent",
"type": "boolean"
},
"sandbox_id": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Attach to an existing sandbox by id instead of creating a new one. The tool will never kill a sandbox it did not create.",
"title": "Sandbox Id"
},
"sandbox_timeout": {
"default": 300,
"description": "Idle timeout in seconds after which E2B auto-kills the sandbox. Applied at create time and when attaching via sandbox_id.",
"title": "Sandbox Timeout",
"type": "integer"
},
"template": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Optional template/snapshot name or id to create the sandbox from. Defaults to E2B's base template when omitted.",
"title": "Template"
}
},
"required": [],
"title": "E2BFileTool",
"type": "object"
},
"name": "E2BFileTool",
"package_dependencies": [
"e2b"
],
"run_params_schema": {
"properties": {
"action": {
"description": "The filesystem action to perform: 'read' (returns file contents), 'write' (create or replace a file with content), 'append' (append content to an existing file \u2014 use this for writing large files in chunks to avoid hitting tool-call size limits), 'list' (lists a directory), 'delete' (removes a file/dir), 'mkdir' (creates a directory), 'info' (returns file metadata), 'exists' (returns a boolean for whether the path exists).",
"enum": [
"read",
"write",
"append",
"list",
"delete",
"mkdir",
"info",
"exists"
],
"title": "Action",
"type": "string"
},
"binary": {
"default": false,
"description": "For 'write'/'append': treat content as base64 and upload raw bytes. For 'read': return contents as base64 instead of decoded utf-8.",
"title": "Binary",
"type": "boolean"
},
"content": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Content to write or append. If omitted for 'write', an empty file is created. For files larger than a few KB, prefer one 'write' with empty content followed by multiple 'append' calls of ~4KB each to stay within tool-call payload limits.",
"title": "Content"
},
"depth": {
"default": 1,
"description": "For action='list': how many levels deep to recurse (default 1).",
"title": "Depth",
"type": "integer"
},
"path": {
"description": "Absolute path inside the sandbox.",
"title": "Path",
"type": "string"
}
},
"required": [
"action",
"path"
],
"title": "E2BFileToolSchema",
"type": "object"
}
},
{
"description": "Execute a block of Python code inside an E2B code interpreter sandbox and return captured stdout, stderr, the final expression value, and any rich results (charts, dataframes). Use this for data processing, quick scripts, or analysis that should run in an isolated environment.",
"env_vars": [
{
"default": null,
"description": "API key for E2B sandbox service",
"name": "E2B_API_KEY",
"required": false
},
{
"default": null,
"description": "E2B API domain (optional)",
"name": "E2B_DOMAIN",
"required": false
}
],
"humanized_name": "E2B Sandbox Python",
"init_params_schema": {
"$defs": {
"EnvVar": {
"properties": {
"default": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Default"
},
"description": {
"title": "Description",
"type": "string"
},
"name": {
"title": "Name",
"type": "string"
},
"required": {
"default": true,
"title": "Required",
"type": "boolean"
}
},
"required": [
"name",
"description"
],
"title": "EnvVar",
"type": "object"
}
},
"description": "Run Python code inside an E2B code interpreter sandbox.\n\nUses `e2b_code_interpreter`, which runs cells in a persistent Jupyter-style\nkernel so state (imports, variables) carries across calls when\n`persistent=True`.",
"properties": {
"api_key": {
"anyOf": [
{
"format": "password",
"type": "string",
"writeOnly": true
},
{
"type": "null"
}
],
"description": "E2B API key. Falls back to E2B_API_KEY env var.",
"required": false,
"title": "Api Key"
},
"domain": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"description": "E2B API domain override. Falls back to E2B_DOMAIN env var.",
"required": false,
"title": "Domain"
},
"envs": {
"anyOf": [
{
"additionalProperties": {
"type": "string"
},
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"description": "Environment variables to set inside the sandbox at create time.",
"title": "Envs"
},
"metadata": {
"anyOf": [
{
"additionalProperties": {
"type": "string"
},
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"description": "Metadata key-value pairs to attach to the sandbox at create time.",
"title": "Metadata"
},
"persistent": {
"default": false,
"description": "If True, reuse one sandbox across all calls to this tool instance and kill it at process exit. Default False creates and kills a fresh sandbox per call.",
"title": "Persistent",
"type": "boolean"
},
"sandbox_id": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Attach to an existing sandbox by id instead of creating a new one. The tool will never kill a sandbox it did not create.",
"title": "Sandbox Id"
},
"sandbox_timeout": {
"default": 300,
"description": "Idle timeout in seconds after which E2B auto-kills the sandbox. Applied at create time and when attaching via sandbox_id.",
"title": "Sandbox Timeout",
"type": "integer"
},
"template": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Optional template/snapshot name or id to create the sandbox from. Defaults to E2B's base template when omitted.",
"title": "Template"
}
},
"required": [],
"title": "E2BPythonTool",
"type": "object"
},
"name": "E2BPythonTool",
"package_dependencies": [
"e2b_code_interpreter"
],
"run_params_schema": {
"properties": {
"code": {
"description": "Python source to execute inside the sandbox.",
"title": "Code",
"type": "string"
},
"envs": {
"anyOf": [
{
"additionalProperties": {
"type": "string"
},
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"description": "Optional environment variables for the run.",
"title": "Envs"
},
"language": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Override the execution language (e.g. 'python', 'r', 'javascript'). Defaults to Python when omitted.",
"title": "Language"
},
"timeout": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Maximum seconds to wait for the code to finish.",
"title": "Timeout"
}
},
"required": [
"code"
],
"title": "E2BPythonToolSchema",
"type": "object"
}
},
{
"description": "Search the internet using Exa",
"env_vars": [


@@ -24,7 +24,7 @@ dependencies = [
"tokenizers>=0.21,<1",
"openpyxl~=3.1.5",
# Authentication and Security
"python-dotenv~=1.1.1",
"python-dotenv>=1.2.2,<2",
"pyjwt>=2.9.0,<3",
# TUI
"textual>=7.5.0",
@@ -55,7 +55,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
"crewai-tools==1.14.3a1",
"crewai-tools==1.14.3a3",
]
embeddings = [
"tiktoken~=0.8.0"
@@ -94,6 +94,7 @@ google-genai = [
]
azure-ai-inference = [
"azure-ai-inference~=1.0.0b9",
"azure-identity>=1.17.0,<2",
]
anthropic = [
"anthropic~=0.73.0",


@@ -1,10 +1,9 @@
import contextvars
import threading
from typing import Any
import urllib.request
import importlib
import sys
from typing import TYPE_CHECKING, Annotated, Any
import warnings
from pydantic import PydanticUserError
from pydantic import Field, PydanticUserError
from crewai.agent.core import Agent
from crewai.agent.planning_config import PlanningConfig
@@ -20,7 +19,10 @@ from crewai.state.checkpoint_config import CheckpointConfig # noqa: F401
from crewai.task import Task
from crewai.tasks.llm_guardrail import LLMGuardrail
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry.telemetry import Telemetry
if TYPE_CHECKING:
from crewai.memory.unified_memory import Memory
def _suppress_pydantic_deprecation_warnings() -> None:
@@ -46,38 +48,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
__version__ = "1.14.3a1"
_telemetry_submitted = False
def _track_install() -> None:
"""Track package installation/first-use via Scarf analytics."""
global _telemetry_submitted
if _telemetry_submitted or Telemetry._is_telemetry_disabled():
return
try:
pixel_url = "https://api.scarf.sh/v2/packages/CrewAI/crewai/docs/00f2dad1-8334-4a39-934e-003b2e1146db"
req = urllib.request.Request(pixel_url) # noqa: S310
req.add_header("User-Agent", f"CrewAI-Python/{__version__}")
with urllib.request.urlopen(req, timeout=2): # noqa: S310
_telemetry_submitted = True
except Exception: # noqa: S110
pass
def _track_install_async() -> None:
"""Track installation in background thread to avoid blocking imports."""
if not Telemetry._is_telemetry_disabled():
ctx = contextvars.copy_context()
thread = threading.Thread(target=ctx.run, args=(_track_install,), daemon=True)
thread.start()
_track_install_async()
__version__ = "1.14.3a3"
_LAZY_IMPORTS: dict[str, tuple[str, str]] = {
"Memory": ("crewai.memory.unified_memory", "Memory"),
@@ -88,8 +59,6 @@ def __getattr__(name: str) -> Any:
"""Lazily import heavy modules (e.g. Memory → lancedb) on first access."""
if name in _LAZY_IMPORTS:
module_path, attr = _LAZY_IMPORTS[name]
import importlib
mod = importlib.import_module(module_path)
val = getattr(mod, attr)
globals()[name] = val
@@ -147,8 +116,6 @@ try:
except ImportError:
pass
import sys
_full_namespace = {
**_base_namespace,
"ToolsHandler": _ToolsHandler,
@@ -191,10 +158,6 @@ try:
Flow.model_rebuild(force=True, _types_namespace=_full_namespace)
_AgentExecutor.model_rebuild(force=True, _types_namespace=_full_namespace)
from typing import Annotated
from pydantic import Field
from crewai.state.runtime import RuntimeState
Entity = Annotated[


@@ -78,8 +78,7 @@ from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.lite_agent_output import LiteAgentOutput
from crewai.llms.base_llm import BaseLLM
from crewai.mcp import MCPServerConfig
from crewai.mcp.tool_resolver import MCPToolResolver
from crewai.mcp.config import MCPServerConfig
from crewai.rag.embeddings.types import EmbedderConfig
from crewai.security.fingerprint import Fingerprint
from crewai.skills.loader import activate_skill, discover_skills
@@ -119,6 +118,7 @@ if TYPE_CHECKING:
from crewai.a2a.config import A2AClientConfig, A2AConfig, A2AServerConfig
from crewai.agents.agent_builder.base_agent import PlatformAppOrAction
from crewai.mcp.tool_resolver import MCPToolResolver
from crewai.task import Task
from crewai.tools.base_tool import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
@@ -1120,6 +1120,8 @@ class Agent(BaseAgent):
Delegates to :class:`~crewai.mcp.tool_resolver.MCPToolResolver`.
"""
self._cleanup_mcp_clients()
from crewai.mcp.tool_resolver import MCPToolResolver
self._mcp_resolver = MCPToolResolver(agent=self, logger=self._logger)
return self._mcp_resolver.resolve(mcps)


@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.3a1"
"crewai[tools]==1.14.3a3"
]
[project.scripts]


@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.3a1"
"crewai[tools]==1.14.3a3"
]
[project.scripts]


@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.3a1"
"crewai[tools]==1.14.3a3"
]
[tool.crewai]


@@ -6,112 +6,20 @@ This module provides the event infrastructure that allows users to:
- Build custom logging and analytics
- Extend CrewAI with custom event handlers
- Declare handler dependencies for ordered execution
Event type classes are lazy-loaded on first access to avoid importing
~12 Pydantic model modules (and their transitive deps) at package init time.
"""
from __future__ import annotations
import importlib
from typing import TYPE_CHECKING, Any
from crewai.events.base_event_listener import BaseEventListener
from crewai.events.depends import Depends
from crewai.events.event_bus import crewai_event_bus
from crewai.events.handler_graph import CircularDependencyError
from crewai.events.types.crew_events import (
CrewKickoffCompletedEvent,
CrewKickoffFailedEvent,
CrewKickoffStartedEvent,
CrewTestCompletedEvent,
CrewTestFailedEvent,
CrewTestResultEvent,
CrewTestStartedEvent,
CrewTrainCompletedEvent,
CrewTrainFailedEvent,
CrewTrainStartedEvent,
)
from crewai.events.types.flow_events import (
FlowCreatedEvent,
FlowEvent,
FlowFinishedEvent,
FlowPlotEvent,
FlowStartedEvent,
HumanFeedbackReceivedEvent,
HumanFeedbackRequestedEvent,
MethodExecutionFailedEvent,
MethodExecutionFinishedEvent,
MethodExecutionStartedEvent,
)
from crewai.events.types.knowledge_events import (
KnowledgeQueryCompletedEvent,
KnowledgeQueryFailedEvent,
KnowledgeQueryStartedEvent,
KnowledgeRetrievalCompletedEvent,
KnowledgeRetrievalStartedEvent,
KnowledgeSearchQueryFailedEvent,
)
from crewai.events.types.llm_events import (
LLMCallCompletedEvent,
LLMCallFailedEvent,
LLMCallStartedEvent,
LLMStreamChunkEvent,
)
from crewai.events.types.llm_guardrail_events import (
LLMGuardrailCompletedEvent,
LLMGuardrailStartedEvent,
)
from crewai.events.types.logging_events import (
AgentLogsExecutionEvent,
AgentLogsStartedEvent,
)
from crewai.events.types.mcp_events import (
MCPConfigFetchFailedEvent,
MCPConnectionCompletedEvent,
MCPConnectionFailedEvent,
MCPConnectionStartedEvent,
MCPToolExecutionCompletedEvent,
MCPToolExecutionFailedEvent,
MCPToolExecutionStartedEvent,
)
from crewai.events.types.memory_events import (
MemoryQueryCompletedEvent,
MemoryQueryFailedEvent,
MemoryQueryStartedEvent,
MemoryRetrievalCompletedEvent,
MemoryRetrievalFailedEvent,
MemoryRetrievalStartedEvent,
MemorySaveCompletedEvent,
MemorySaveFailedEvent,
MemorySaveStartedEvent,
)
from crewai.events.types.reasoning_events import (
AgentReasoningCompletedEvent,
AgentReasoningFailedEvent,
AgentReasoningStartedEvent,
ReasoningEvent,
)
from crewai.events.types.skill_events import (
SkillActivatedEvent,
SkillDiscoveryCompletedEvent,
SkillDiscoveryStartedEvent,
SkillEvent,
SkillLoadFailedEvent,
SkillLoadedEvent,
)
from crewai.events.types.task_events import (
TaskCompletedEvent,
TaskEvaluationEvent,
TaskFailedEvent,
TaskStartedEvent,
)
from crewai.events.types.tool_usage_events import (
ToolExecutionErrorEvent,
ToolSelectionErrorEvent,
ToolUsageErrorEvent,
ToolUsageEvent,
ToolUsageFinishedEvent,
ToolUsageStartedEvent,
ToolValidateInputErrorEvent,
)
if TYPE_CHECKING:
from crewai.events.types.agent_events import (
@@ -125,6 +33,223 @@ if TYPE_CHECKING:
LiteAgentExecutionErrorEvent,
LiteAgentExecutionStartedEvent,
)
from crewai.events.types.crew_events import (
CrewKickoffCompletedEvent,
CrewKickoffFailedEvent,
CrewKickoffStartedEvent,
CrewTestCompletedEvent,
CrewTestFailedEvent,
CrewTestResultEvent,
CrewTestStartedEvent,
CrewTrainCompletedEvent,
CrewTrainFailedEvent,
CrewTrainStartedEvent,
)
from crewai.events.types.flow_events import (
FlowCreatedEvent,
FlowEvent,
FlowFinishedEvent,
FlowPlotEvent,
FlowStartedEvent,
HumanFeedbackReceivedEvent,
HumanFeedbackRequestedEvent,
MethodExecutionFailedEvent,
MethodExecutionFinishedEvent,
MethodExecutionStartedEvent,
)
from crewai.events.types.knowledge_events import (
KnowledgeQueryCompletedEvent,
KnowledgeQueryFailedEvent,
KnowledgeQueryStartedEvent,
KnowledgeRetrievalCompletedEvent,
KnowledgeRetrievalStartedEvent,
KnowledgeSearchQueryFailedEvent,
)
from crewai.events.types.llm_events import (
LLMCallCompletedEvent,
LLMCallFailedEvent,
LLMCallStartedEvent,
LLMStreamChunkEvent,
)
from crewai.events.types.llm_guardrail_events import (
LLMGuardrailCompletedEvent,
LLMGuardrailStartedEvent,
)
from crewai.events.types.logging_events import (
AgentLogsExecutionEvent,
AgentLogsStartedEvent,
)
from crewai.events.types.mcp_events import (
MCPConfigFetchFailedEvent,
MCPConnectionCompletedEvent,
MCPConnectionFailedEvent,
MCPConnectionStartedEvent,
MCPToolExecutionCompletedEvent,
MCPToolExecutionFailedEvent,
MCPToolExecutionStartedEvent,
)
from crewai.events.types.memory_events import (
MemoryQueryCompletedEvent,
MemoryQueryFailedEvent,
MemoryQueryStartedEvent,
MemoryRetrievalCompletedEvent,
MemoryRetrievalFailedEvent,
MemoryRetrievalStartedEvent,
MemorySaveCompletedEvent,
MemorySaveFailedEvent,
MemorySaveStartedEvent,
)
from crewai.events.types.reasoning_events import (
AgentReasoningCompletedEvent,
AgentReasoningFailedEvent,
AgentReasoningStartedEvent,
ReasoningEvent,
)
from crewai.events.types.skill_events import (
SkillActivatedEvent,
SkillDiscoveryCompletedEvent,
SkillDiscoveryStartedEvent,
SkillEvent,
SkillLoadFailedEvent,
SkillLoadedEvent,
)
from crewai.events.types.task_events import (
TaskCompletedEvent,
TaskEvaluationEvent,
TaskFailedEvent,
TaskStartedEvent,
)
from crewai.events.types.tool_usage_events import (
ToolExecutionErrorEvent,
ToolSelectionErrorEvent,
ToolUsageErrorEvent,
ToolUsageEvent,
ToolUsageFinishedEvent,
ToolUsageStartedEvent,
ToolValidateInputErrorEvent,
)
# Map every event class name → its module path for lazy loading
_LAZY_EVENT_MAPPING: dict[str, str] = {
# agent_events
"AgentEvaluationCompletedEvent": "crewai.events.types.agent_events",
"AgentEvaluationFailedEvent": "crewai.events.types.agent_events",
"AgentEvaluationStartedEvent": "crewai.events.types.agent_events",
"AgentExecutionCompletedEvent": "crewai.events.types.agent_events",
"AgentExecutionErrorEvent": "crewai.events.types.agent_events",
"AgentExecutionStartedEvent": "crewai.events.types.agent_events",
"LiteAgentExecutionCompletedEvent": "crewai.events.types.agent_events",
"LiteAgentExecutionErrorEvent": "crewai.events.types.agent_events",
"LiteAgentExecutionStartedEvent": "crewai.events.types.agent_events",
# crew_events
"CrewKickoffCompletedEvent": "crewai.events.types.crew_events",
"CrewKickoffFailedEvent": "crewai.events.types.crew_events",
"CrewKickoffStartedEvent": "crewai.events.types.crew_events",
"CrewTestCompletedEvent": "crewai.events.types.crew_events",
"CrewTestFailedEvent": "crewai.events.types.crew_events",
"CrewTestResultEvent": "crewai.events.types.crew_events",
"CrewTestStartedEvent": "crewai.events.types.crew_events",
"CrewTrainCompletedEvent": "crewai.events.types.crew_events",
"CrewTrainFailedEvent": "crewai.events.types.crew_events",
"CrewTrainStartedEvent": "crewai.events.types.crew_events",
# flow_events
"FlowCreatedEvent": "crewai.events.types.flow_events",
"FlowEvent": "crewai.events.types.flow_events",
"FlowFinishedEvent": "crewai.events.types.flow_events",
"FlowPlotEvent": "crewai.events.types.flow_events",
"FlowStartedEvent": "crewai.events.types.flow_events",
"HumanFeedbackReceivedEvent": "crewai.events.types.flow_events",
"HumanFeedbackRequestedEvent": "crewai.events.types.flow_events",
"MethodExecutionFailedEvent": "crewai.events.types.flow_events",
"MethodExecutionFinishedEvent": "crewai.events.types.flow_events",
"MethodExecutionStartedEvent": "crewai.events.types.flow_events",
# knowledge_events
"KnowledgeQueryCompletedEvent": "crewai.events.types.knowledge_events",
"KnowledgeQueryFailedEvent": "crewai.events.types.knowledge_events",
"KnowledgeQueryStartedEvent": "crewai.events.types.knowledge_events",
"KnowledgeRetrievalCompletedEvent": "crewai.events.types.knowledge_events",
"KnowledgeRetrievalStartedEvent": "crewai.events.types.knowledge_events",
"KnowledgeSearchQueryFailedEvent": "crewai.events.types.knowledge_events",
# llm_events
"LLMCallCompletedEvent": "crewai.events.types.llm_events",
"LLMCallFailedEvent": "crewai.events.types.llm_events",
"LLMCallStartedEvent": "crewai.events.types.llm_events",
"LLMStreamChunkEvent": "crewai.events.types.llm_events",
# llm_guardrail_events
"LLMGuardrailCompletedEvent": "crewai.events.types.llm_guardrail_events",
"LLMGuardrailStartedEvent": "crewai.events.types.llm_guardrail_events",
# logging_events
"AgentLogsExecutionEvent": "crewai.events.types.logging_events",
"AgentLogsStartedEvent": "crewai.events.types.logging_events",
# mcp_events
"MCPConfigFetchFailedEvent": "crewai.events.types.mcp_events",
"MCPConnectionCompletedEvent": "crewai.events.types.mcp_events",
"MCPConnectionFailedEvent": "crewai.events.types.mcp_events",
"MCPConnectionStartedEvent": "crewai.events.types.mcp_events",
"MCPToolExecutionCompletedEvent": "crewai.events.types.mcp_events",
"MCPToolExecutionFailedEvent": "crewai.events.types.mcp_events",
"MCPToolExecutionStartedEvent": "crewai.events.types.mcp_events",
# memory_events
"MemoryQueryCompletedEvent": "crewai.events.types.memory_events",
"MemoryQueryFailedEvent": "crewai.events.types.memory_events",
"MemoryQueryStartedEvent": "crewai.events.types.memory_events",
"MemoryRetrievalCompletedEvent": "crewai.events.types.memory_events",
"MemoryRetrievalFailedEvent": "crewai.events.types.memory_events",
"MemoryRetrievalStartedEvent": "crewai.events.types.memory_events",
"MemorySaveCompletedEvent": "crewai.events.types.memory_events",
"MemorySaveFailedEvent": "crewai.events.types.memory_events",
"MemorySaveStartedEvent": "crewai.events.types.memory_events",
# reasoning_events
"AgentReasoningCompletedEvent": "crewai.events.types.reasoning_events",
"AgentReasoningFailedEvent": "crewai.events.types.reasoning_events",
"AgentReasoningStartedEvent": "crewai.events.types.reasoning_events",
"ReasoningEvent": "crewai.events.types.reasoning_events",
# skill_events
"SkillActivatedEvent": "crewai.events.types.skill_events",
"SkillDiscoveryCompletedEvent": "crewai.events.types.skill_events",
"SkillDiscoveryStartedEvent": "crewai.events.types.skill_events",
"SkillEvent": "crewai.events.types.skill_events",
"SkillLoadFailedEvent": "crewai.events.types.skill_events",
"SkillLoadedEvent": "crewai.events.types.skill_events",
# task_events
"TaskCompletedEvent": "crewai.events.types.task_events",
"TaskEvaluationEvent": "crewai.events.types.task_events",
"TaskFailedEvent": "crewai.events.types.task_events",
"TaskStartedEvent": "crewai.events.types.task_events",
# tool_usage_events
"ToolExecutionErrorEvent": "crewai.events.types.tool_usage_events",
"ToolSelectionErrorEvent": "crewai.events.types.tool_usage_events",
"ToolUsageErrorEvent": "crewai.events.types.tool_usage_events",
"ToolUsageEvent": "crewai.events.types.tool_usage_events",
"ToolUsageFinishedEvent": "crewai.events.types.tool_usage_events",
"ToolUsageStartedEvent": "crewai.events.types.tool_usage_events",
"ToolValidateInputErrorEvent": "crewai.events.types.tool_usage_events",
}
_extension_exports: dict[str, Any] = {}
def __getattr__(name: str) -> Any:
"""Lazy import for event types and registered extensions."""
if name in _LAZY_EVENT_MAPPING:
module_path = _LAZY_EVENT_MAPPING[name]
module = importlib.import_module(module_path)
val = getattr(module, name)
globals()[name] = val # cache for subsequent access
return val
if name in _extension_exports:
value = _extension_exports[name]
if isinstance(value, str):
module_path, _, attr_name = value.rpartition(".")
if module_path:
module = importlib.import_module(module_path)
return getattr(module, attr_name)
return importlib.import_module(value)
return value
msg = f"module {__name__!r} has no attribute {name!r}"
raise AttributeError(msg)
__all__ = [
@@ -214,42 +339,3 @@ __all__ = [
"_extension_exports",
"crewai_event_bus",
]
_AGENT_EVENT_MAPPING = {
"AgentEvaluationCompletedEvent": "crewai.events.types.agent_events",
"AgentEvaluationFailedEvent": "crewai.events.types.agent_events",
"AgentEvaluationStartedEvent": "crewai.events.types.agent_events",
"AgentExecutionCompletedEvent": "crewai.events.types.agent_events",
"AgentExecutionErrorEvent": "crewai.events.types.agent_events",
"AgentExecutionStartedEvent": "crewai.events.types.agent_events",
"LiteAgentExecutionCompletedEvent": "crewai.events.types.agent_events",
"LiteAgentExecutionErrorEvent": "crewai.events.types.agent_events",
"LiteAgentExecutionStartedEvent": "crewai.events.types.agent_events",
}
_extension_exports: dict[str, Any] = {}
def __getattr__(name: str) -> Any:
"""Lazy import for agent events and registered extensions."""
if name in _AGENT_EVENT_MAPPING:
import importlib
module_path = _AGENT_EVENT_MAPPING[name]
module = importlib.import_module(module_path)
return getattr(module, name)
if name in _extension_exports:
import importlib
value = _extension_exports[name]
if isinstance(value, str):
module_path, _, attr_name = value.rpartition(".")
if module_path:
module = importlib.import_module(module_path)
return getattr(module, attr_name)
return importlib.import_module(value)
return value
msg = f"module {__name__!r} has no attribute {name!r}"
raise AttributeError(msg)


@@ -81,8 +81,11 @@ class TraceBatchManager:
"""Initialize a new trace batch (thread-safe)"""
with self._batch_ready_cv:
if self.current_batch is not None:
# Lazy init (e.g. DefaultEnvEvent) may have created the batch without
# execution_type; merge metadata from a later flow/crew initializer.
self.current_batch.execution_metadata.update(execution_metadata)
logger.debug(
"Batch already initialized, skipping duplicate initialization"
"Batch already initialized, merged execution metadata and skipped duplicate initialization"
)
return self.current_batch
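The change above makes `initialize_batch` idempotent but metadata-merging: a later flow/crew initializer can fill in fields (like `execution_type`) that a lazy first caller left unset. A dependency-free sketch of that first-caller-creates, later-callers-merge behavior (the class here is a toy, not crewAI's `TraceBatchManager`):

```python
import threading


class TraceBatchManager:
    def __init__(self) -> None:
        self._batch_ready_cv = threading.Condition()
        self.current_batch: dict | None = None

    def initialize_batch(self, execution_metadata: dict) -> dict:
        """First caller creates the batch; subsequent callers merge their
        metadata into the existing one instead of re-initializing."""
        with self._batch_ready_cv:
            if self.current_batch is not None:
                self.current_batch["execution_metadata"].update(execution_metadata)
                return self.current_batch
            self.current_batch = {"execution_metadata": dict(execution_metadata)}
            return self.current_batch
```

Because `dict.update` overwrites existing keys, the later, more specific initializer wins for fields both callers set.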


@@ -60,12 +60,6 @@ from crewai.events.types.crew_events import (
CrewKickoffFailedEvent,
CrewKickoffStartedEvent,
)
from crewai.events.types.env_events import (
CCEnvEvent,
CodexEnvEvent,
CursorEnvEvent,
DefaultEnvEvent,
)
from crewai.events.types.flow_events import (
FlowCreatedEvent,
FlowFinishedEvent,
@@ -212,7 +206,6 @@ class TraceCollectionListener(BaseEventListener):
self._listeners_setup = True
return
self._register_env_event_handlers(crewai_event_bus)
self._register_flow_event_handlers(crewai_event_bus)
self._register_context_event_handlers(crewai_event_bus)
self._register_action_event_handlers(crewai_event_bus)
@@ -221,25 +214,6 @@ class TraceCollectionListener(BaseEventListener):
self._listeners_setup = True
def _register_env_event_handlers(self, event_bus: CrewAIEventsBus) -> None:
"""Register handlers for environment context events."""
@event_bus.on(CCEnvEvent)
def on_cc_env(source: Any, event: CCEnvEvent) -> None:
self._handle_action_event("cc_env", source, event)
@event_bus.on(CodexEnvEvent)
def on_codex_env(source: Any, event: CodexEnvEvent) -> None:
self._handle_action_event("codex_env", source, event)
@event_bus.on(CursorEnvEvent)
def on_cursor_env(source: Any, event: CursorEnvEvent) -> None:
self._handle_action_event("cursor_env", source, event)
@event_bus.on(DefaultEnvEvent)
def on_default_env(source: Any, event: DefaultEnvEvent) -> None:
self._handle_action_event("default_env", source, event)
def _register_flow_event_handlers(self, event_bus: CrewAIEventsBus) -> None:
"""Register handlers for flow events."""
@@ -286,8 +260,8 @@ class TraceCollectionListener(BaseEventListener):
if self.batch_manager.batch_owner_type != "flow":
# Always call _initialize_crew_batch to claim ownership.
# If batch was already initialized by a concurrent action event
# (race condition with DefaultEnvEvent), initialize_batch() returns
# early but batch_owner_type is still correctly set to "crew".
# (e.g. LLM/tool before crew_kickoff_started), initialize_batch()
# returns early but batch_owner_type is still correctly set to "crew".
# Skip only when a parent flow already owns the batch.
self._initialize_crew_batch(source, event)
self._handle_trace_event("crew_kickoff_started", source, event)


@@ -1503,6 +1503,8 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
except Exception:
logger.warning("FlowStartedEvent handler failed", exc_info=True)
get_env_context()
context = self._pending_feedback_context
emit = context.emit
default_outcome = context.default_outcome
@@ -2004,7 +2006,6 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
restored = apply_checkpoint(self, from_checkpoint)
if restored is not None:
return restored.kickoff(inputs=inputs, input_files=input_files)
get_env_context()
if self.stream:
result_holder: list[Any] = []
current_task_info: TaskInfo = {
@@ -2206,6 +2207,10 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
f"Flow started with ID: {self.flow_id}", color="bold magenta"
)
# After FlowStarted (when not suppressed): env events must not pre-empt
# trace batch init with implicit "crew" execution_type.
get_env_context()
if inputs is not None and "id" not in inputs:
self._initialize_state(inputs)


@@ -175,6 +175,16 @@ LLM_CONTEXT_WINDOW_SIZES: Final[dict[str, int]] = {
"us.amazon.nova-pro-v1:0": 300000,
"us.amazon.nova-micro-v1:0": 128000,
"us.amazon.nova-lite-v1:0": 300000,
# Claude 4 models
"us.anthropic.claude-opus-4-7": 1000000,
"us.anthropic.claude-sonnet-4-6": 1000000,
"us.anthropic.claude-opus-4-6-v1": 1000000,
"us.anthropic.claude-opus-4-5-20251101-v1:0": 200000,
"us.anthropic.claude-haiku-4-5-20251001-v1:0": 200000,
"us.anthropic.claude-sonnet-4-5-20250929-v1:0": 200000,
"us.anthropic.claude-opus-4-1-20250805-v1:0": 200000,
"us.anthropic.claude-opus-4-20250514-v1:0": 200000,
"us.anthropic.claude-sonnet-4-20250514-v1:0": 200000,
"us.anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
"us.anthropic.claude-3-5-haiku-20241022-v1:0": 200000,
"us.anthropic.claude-3-5-sonnet-20241022-v2:0": 200000,
@@ -193,15 +203,44 @@ LLM_CONTEXT_WINDOW_SIZES: Final[dict[str, int]] = {
"eu.anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
"eu.anthropic.claude-3-sonnet-20240229-v1:0": 200000,
"eu.anthropic.claude-3-haiku-20240307-v1:0": 200000,
# Claude 4 EU
"eu.anthropic.claude-opus-4-7": 1000000,
"eu.anthropic.claude-sonnet-4-6": 1000000,
"eu.anthropic.claude-opus-4-6-v1": 1000000,
"eu.anthropic.claude-opus-4-5-20251101-v1:0": 200000,
"eu.anthropic.claude-haiku-4-5-20251001-v1:0": 200000,
"eu.anthropic.claude-sonnet-4-5-20250929-v1:0": 200000,
"eu.anthropic.claude-opus-4-1-20250805-v1:0": 200000,
"eu.anthropic.claude-opus-4-20250514-v1:0": 200000,
"eu.anthropic.claude-sonnet-4-20250514-v1:0": 200000,
"eu.meta.llama3-2-3b-instruct-v1:0": 131000,
"eu.meta.llama3-2-1b-instruct-v1:0": 131000,
"apac.anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
"apac.anthropic.claude-3-5-sonnet-20241022-v2:0": 200000,
"apac.anthropic.claude-3-sonnet-20240229-v1:0": 200000,
"apac.anthropic.claude-3-haiku-20240307-v1:0": 200000,
# Claude 4 APAC
"apac.anthropic.claude-opus-4-7": 1000000,
"apac.anthropic.claude-sonnet-4-6": 1000000,
"apac.anthropic.claude-opus-4-6-v1": 1000000,
"apac.anthropic.claude-opus-4-5-20251101-v1:0": 200000,
"apac.anthropic.claude-haiku-4-5-20251001-v1:0": 200000,
"apac.anthropic.claude-sonnet-4-5-20250929-v1:0": 200000,
"apac.anthropic.claude-opus-4-1-20250805-v1:0": 200000,
"apac.anthropic.claude-opus-4-20250514-v1:0": 200000,
"apac.anthropic.claude-sonnet-4-20250514-v1:0": 200000,
"amazon.nova-pro-v1:0": 300000,
"amazon.nova-micro-v1:0": 128000,
"amazon.nova-lite-v1:0": 300000,
"anthropic.claude-opus-4-7": 1000000,
"anthropic.claude-sonnet-4-6": 1000000,
"anthropic.claude-opus-4-6-v1": 1000000,
"anthropic.claude-opus-4-5-20251101-v1:0": 200000,
"anthropic.claude-haiku-4-5-20251001-v1:0": 200000,
"anthropic.claude-sonnet-4-5-20250929-v1:0": 200000,
"anthropic.claude-opus-4-1-20250805-v1:0": 200000,
"anthropic.claude-opus-4-20250514-v1:0": 200000,
"anthropic.claude-sonnet-4-20250514-v1:0": 200000,
"anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
"anthropic.claude-3-5-haiku-20241022-v1:0": 200000,
"anthropic.claude-3-5-sonnet-20241022-v2:0": 200000,


@@ -423,6 +423,34 @@ AZURE_MODELS: list[AzureModels] = [
BedrockModels: TypeAlias = Literal[
# Inference profiles (regional) - Claude 4
"us.anthropic.claude-sonnet-4-5-20250929-v1:0",
"us.anthropic.claude-sonnet-4-20250514-v1:0",
"us.anthropic.claude-opus-4-5-20251101-v1:0",
"us.anthropic.claude-opus-4-20250514-v1:0",
"us.anthropic.claude-opus-4-1-20250805-v1:0",
"us.anthropic.claude-haiku-4-5-20251001-v1:0",
"us.anthropic.claude-sonnet-4-6",
"us.anthropic.claude-opus-4-6-v1",
# Inference profiles - shorter versions
"us.anthropic.claude-sonnet-4-5-v1:0",
"us.anthropic.claude-opus-4-5-v1:0",
"us.anthropic.claude-opus-4-6-v1:0",
"us.anthropic.claude-haiku-4-5-v1:0",
"eu.anthropic.claude-sonnet-4-5-v1:0",
"eu.anthropic.claude-opus-4-5-v1:0",
"eu.anthropic.claude-haiku-4-5-v1:0",
"apac.anthropic.claude-sonnet-4-5-v1:0",
"apac.anthropic.claude-opus-4-5-v1:0",
"apac.anthropic.claude-haiku-4-5-v1:0",
# Global inference profiles
"global.anthropic.claude-sonnet-4-5-20250929-v1:0",
"global.anthropic.claude-sonnet-4-20250514-v1:0",
"global.anthropic.claude-opus-4-5-20251101-v1:0",
"global.anthropic.claude-opus-4-6-v1",
"global.anthropic.claude-haiku-4-5-20251001-v1:0",
"global.anthropic.claude-sonnet-4-6",
# Direct model IDs
"ai21.jamba-1-5-large-v1:0",
"ai21.jamba-1-5-mini-v1:0",
"amazon.nova-lite-v1:0",
@@ -496,6 +524,34 @@ BedrockModels: TypeAlias = Literal[
"twelvelabs.pegasus-1-2-v1:0",
]
BEDROCK_MODELS: list[BedrockModels] = [
# Inference profiles (regional) - Claude 4
"us.anthropic.claude-sonnet-4-5-20250929-v1:0",
"us.anthropic.claude-sonnet-4-20250514-v1:0",
"us.anthropic.claude-opus-4-5-20251101-v1:0",
"us.anthropic.claude-opus-4-20250514-v1:0",
"us.anthropic.claude-opus-4-1-20250805-v1:0",
"us.anthropic.claude-haiku-4-5-20251001-v1:0",
"us.anthropic.claude-sonnet-4-6",
"us.anthropic.claude-opus-4-6-v1",
# Inference profiles - shorter versions
"us.anthropic.claude-sonnet-4-5-v1:0",
"us.anthropic.claude-opus-4-5-v1:0",
"us.anthropic.claude-opus-4-6-v1:0",
"us.anthropic.claude-haiku-4-5-v1:0",
"eu.anthropic.claude-sonnet-4-5-v1:0",
"eu.anthropic.claude-opus-4-5-v1:0",
"eu.anthropic.claude-haiku-4-5-v1:0",
"apac.anthropic.claude-sonnet-4-5-v1:0",
"apac.anthropic.claude-opus-4-5-v1:0",
"apac.anthropic.claude-haiku-4-5-v1:0",
# Global inference profiles
"global.anthropic.claude-sonnet-4-5-20250929-v1:0",
"global.anthropic.claude-sonnet-4-20250514-v1:0",
"global.anthropic.claude-opus-4-5-20251101-v1:0",
"global.anthropic.claude-opus-4-6-v1",
"global.anthropic.claude-haiku-4-5-20251001-v1:0",
"global.anthropic.claude-sonnet-4-6",
# Direct model IDs
"ai21.jamba-1-5-large-v1:0",
"ai21.jamba-1-5-mini-v1:0",
"amazon.nova-lite-v1:0",


@@ -183,11 +183,6 @@ class AzureCompletion(BaseLLM):
AzureCompletion._is_azure_openai_endpoint(self.endpoint)
)
if not self.api_key:
raise ValueError(
"Azure API key is required. Set AZURE_API_KEY environment "
"variable or pass api_key parameter."
)
if not self.endpoint:
raise ValueError(
"Azure endpoint is required. Set AZURE_ENDPOINT environment "
@@ -195,12 +190,39 @@ class AzureCompletion(BaseLLM):
)
client_kwargs: dict[str, Any] = {
"endpoint": self.endpoint,
"credential": AzureKeyCredential(self.api_key),
"credential": self._resolve_credential(),
}
if self.api_version:
client_kwargs["api_version"] = self.api_version
return client_kwargs
def _resolve_credential(self) -> Any:
"""Return an Azure credential, preferring the API key when set.
Without an API key, fall back to ``DefaultAzureCredential`` from
``azure-identity``. That chain auto-detects the standard keyless
paths the customer's environment may provide — OIDC Workload
Identity Federation (``AZURE_FEDERATED_TOKEN_FILE`` +
``AZURE_TENANT_ID`` + ``AZURE_CLIENT_ID``), Managed Identity on
AKS/Azure VMs, environment-configured service principals, and
developer tools like the Azure CLI. Installing ``azure-identity``
is what enables these paths; without it we raise the existing
API-key error.
"""
if self.api_key:
return AzureKeyCredential(self.api_key)
try:
from azure.identity import DefaultAzureCredential
except ImportError:
raise ValueError(
"Azure API key is required when azure-identity is not "
"installed. Set AZURE_API_KEY, or install azure-identity "
'for keyless auth: uv add "crewai[azure-ai-inference]"'
) from None
return DefaultAzureCredential()
def _get_sync_client(self) -> Any:
if self._client is None:
self._client = self._build_sync_client()


@@ -2075,6 +2075,9 @@ class BedrockCompletion(BaseLLM):
# Context window sizes for common Bedrock models
context_windows = {
"anthropic.claude-sonnet-4": 200000,
"anthropic.claude-opus-4": 200000,
"anthropic.claude-haiku-4": 200000,
"anthropic.claude-3-5-sonnet": 200000,
"anthropic.claude-3-5-haiku": 200000,
"anthropic.claude-3-opus": 200000,
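A table like `context_windows` keys on model-family prefixes rather than full Bedrock model ids. One way such a table can be matched (the region-stripping and longest-prefix strategy here is an assumption for illustration, not necessarily what `BedrockCompletion` does):

```python
REGION_PREFIXES = {"us", "eu", "apac", "global"}

context_windows = {
    "anthropic.claude-sonnet-4": 200000,
    "anthropic.claude-3-5-sonnet": 200000,
    "amazon.nova-micro": 128000,
}


def window_for(model_id: str, default: int = 128000) -> int:
    # Drop an optional inference-profile region prefix (us./eu./apac./global.).
    head, _, tail = model_id.partition(".")
    bare = tail if head in REGION_PREFIXES and tail else model_id
    matches = [k for k in context_windows if bare.startswith(k)]
    # Prefer the longest key so "claude-3-5-sonnet" is not shadowed by a shorter one.
    return context_windows[max(matches, key=len)] if matches else default
```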


@@ -2,9 +2,17 @@
This module provides native MCP client functionality, allowing CrewAI agents
to connect to any MCP-compliant server using various transport types.
Heavy imports (MCPClient, MCPToolResolver, BaseTransport, TransportType) are
lazy-loaded on first access to avoid pulling in the ``mcp`` SDK (~400ms)
when only lightweight config/filter types are needed.
"""
from crewai.mcp.client import MCPClient
from __future__ import annotations
import importlib
from typing import TYPE_CHECKING, Any
from crewai.mcp.config import (
MCPServerConfig,
MCPServerHTTP,
@@ -18,8 +26,28 @@ from crewai.mcp.filters import (
create_dynamic_tool_filter,
create_static_tool_filter,
)
from crewai.mcp.tool_resolver import MCPToolResolver
from crewai.mcp.transports.base import BaseTransport, TransportType
if TYPE_CHECKING:
from crewai.mcp.client import MCPClient
from crewai.mcp.tool_resolver import MCPToolResolver
from crewai.mcp.transports.base import BaseTransport, TransportType
_LAZY: dict[str, tuple[str, str]] = {
"MCPClient": ("crewai.mcp.client", "MCPClient"),
"MCPToolResolver": ("crewai.mcp.tool_resolver", "MCPToolResolver"),
"BaseTransport": ("crewai.mcp.transports.base", "BaseTransport"),
"TransportType": ("crewai.mcp.transports.base", "TransportType"),
}
def __getattr__(name: str) -> Any:
if name in _LAZY:
mod_path, attr = _LAZY[name]
mod = importlib.import_module(mod_path)
val = getattr(mod, attr)
globals()[name] = val # cache for subsequent access
return val
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
__all__ = [

View File

@@ -237,6 +237,8 @@ def crew(
self.tasks = instantiated_tasks
crew_instance: Crew = _call_method(meth, self, *args, **kwargs)
if "name" not in crew_instance.model_fields_set:
crew_instance.name = getattr(self, "_crew_name", None) or crew_instance.name
def callback_wrapper(
hook: Callable[Concatenate[CrewInstance, P2], R2], instance: CrewInstance

View File

@@ -32,6 +32,7 @@ from pydantic import (
field_validator,
model_validator,
)
from pydantic.functional_serializers import PlainSerializer
from pydantic_core import PydanticCustomError
from typing_extensions import Self
@@ -86,6 +87,22 @@ from crewai.utilities.printer import PRINTER
from crewai.utilities.string_utils import interpolate_only
def _serialize_model_class(v: type[BaseModel] | None) -> dict[str, Any] | None:
"""Serialize a Pydantic model class reference to its JSON schema."""
return v.model_json_schema() if v else None
def _deserialize_model_class(v: Any) -> type[BaseModel] | None:
"""Hydrate a model class reference from checkpoint data."""
if v is None or isinstance(v, type):
return v
if isinstance(v, dict):
from crewai.utilities.pydantic_schema_utils import create_model_from_schema
return create_model_from_schema(v)
return None
class Task(BaseModel):
"""Class that represents a task to be executed.
@@ -141,15 +158,33 @@ class Task(BaseModel):
description="Whether the task should be executed asynchronously or not.",
default=False,
)
output_json: type[BaseModel] | None = Field(
output_json: Annotated[
type[BaseModel] | None,
BeforeValidator(_deserialize_model_class),
PlainSerializer(
_serialize_model_class, return_type=dict | None, when_used="json"
),
] = Field(
description="A Pydantic model to be used to create a JSON output.",
default=None,
)
output_pydantic: type[BaseModel] | None = Field(
output_pydantic: Annotated[
type[BaseModel] | None,
BeforeValidator(_deserialize_model_class),
PlainSerializer(
_serialize_model_class, return_type=dict | None, when_used="json"
),
] = Field(
description="A Pydantic model to be used to create a Pydantic output.",
default=None,
)
response_model: type[BaseModel] | None = Field(
response_model: Annotated[
type[BaseModel] | None,
BeforeValidator(_deserialize_model_class),
PlainSerializer(
_serialize_model_class, return_type=dict | None, when_used="json"
),
] = Field(
description="A Pydantic model for structured LLM outputs using native provider features.",
default=None,
)
@@ -189,7 +224,13 @@ class Task(BaseModel):
description="Whether the task should instruct the agent to return the final answer formatted in Markdown",
default=False,
)
converter_cls: type[Converter] | None = Field(
converter_cls: Annotated[
type[Converter] | None,
BeforeValidator(lambda v: v if v is None or isinstance(v, type) else None),
PlainSerializer(
_serialize_model_class, return_type=dict | None, when_used="json"
),
] = Field(
description="A converter class used to export structured output",
default=None,
)

View File

@@ -389,17 +389,41 @@ def test_azure_raises_error_when_endpoint_missing():
llm._get_sync_client()
def test_azure_raises_error_when_api_key_missing():
"""Credentials are validated lazily: construction succeeds, first
def test_azure_raises_error_when_api_key_missing_without_azure_identity():
"""Without an API key AND without ``azure-identity`` installed,
client build raises the descriptive error."""
from crewai.llms.providers.azure.completion import AzureCompletion
with patch.dict(os.environ, {}, clear=True):
llm = AzureCompletion(
model="gpt-4", endpoint="https://test.openai.azure.com"
)
with pytest.raises(ValueError, match="Azure API key is required"):
llm._get_sync_client()
with patch.dict("sys.modules", {"azure.identity": None}):
llm = AzureCompletion(
model="gpt-4", endpoint="https://test.openai.azure.com"
)
with pytest.raises(ValueError, match="Azure API key is required"):
llm._get_sync_client()
def test_azure_uses_default_credential_when_api_key_missing():
"""With ``azure-identity`` installed, a missing API key falls back to
``DefaultAzureCredential`` instead of raising. This is the path that
enables keyless auth (OIDC WIF on EKS/AKS, Managed Identity, Azure
CLI) without any crewAI-specific config."""
from unittest.mock import MagicMock
from crewai.llms.providers.azure.completion import AzureCompletion
sentinel = MagicMock(name="DefaultAzureCredential()")
with patch.dict(os.environ, {}, clear=True):
with patch(
"azure.identity.DefaultAzureCredential", return_value=sentinel
) as mock_cls:
llm = AzureCompletion(
model="gpt-4",
endpoint="https://test-ai.services.example.com",
)
kwargs = llm._make_client_kwargs()
assert kwargs["credential"] is sentinel
mock_cls.assert_called()
@pytest.mark.asyncio

View File

@@ -8,6 +8,7 @@ from concurrent.futures import Future
from hashlib import md5
import re
import sys
from typing import Any, cast
from unittest.mock import ANY, MagicMock, call, patch
from crewai.agent import Agent
@@ -17,6 +18,7 @@ from crewai.crew import Crew
from crewai.crews.crew_output import CrewOutput
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.crew_events import (
CrewKickoffStartedEvent,
CrewTestCompletedEvent,
CrewTestStartedEvent,
CrewTrainCompletedEvent,
@@ -4741,6 +4743,61 @@ def test_default_crew_name(researcher, writer):
assert crew.name == "crew"
@pytest.mark.parametrize(
"explicit_name,expected",
[
(None, "ResearchAutomation"),
("My Research Automation", "My Research Automation"),
],
ids=["class_name_from_decorator", "explicit_name_preserved"],
)
def test_crew_kickoff_started_emits_display_name(
researcher, writer, explicit_name, expected
):
"""Kickoff events should use the decorator-provided display name when implicit."""
from crewai.crews.utils import prepare_kickoff
from crewai.project import CrewBase, agent, crew, task
@CrewBase
class ResearchAutomation:
agents_config = None
tasks_config = None
@agent
def researcher(self):
return researcher
@task
def first_task(self):
return Task(
description="Task 1",
expected_output="output",
agent=self.researcher(),
)
@crew
def crew(self):
crew_kwargs: dict[str, Any] = {
"agents": self.agents,
"tasks": self.tasks,
}
if explicit_name is not None:
crew_kwargs["name"] = explicit_name
return Crew(**crew_kwargs)
captured: list[str | None] = []
with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(CrewKickoffStartedEvent)
def _capture(_source: Any, event: CrewKickoffStartedEvent) -> None:
captured.append(event.crew_name)
automation_cls = cast(type[Any], ResearchAutomation)
prepare_kickoff(cast(Any, automation_cls()).crew(), inputs=None)
assert captured == [expected]
@pytest.mark.vcr()
def test_memory_remember_receives_task_content():
"""With memory=True, extract_memories receives raw content with task, agent, expected output, and result."""

View File

@@ -1,4 +1,4 @@
from typing import Any, ClassVar
from typing import Any, ClassVar, cast
from unittest.mock import Mock, create_autospec, patch
import pytest
@@ -261,6 +261,55 @@ def test_crew_name():
assert crew._crew_name == "InternalCrew"
def test_crew_decorator_propagates_class_name_to_instance():
"""@crew-decorated factory method should set Crew.name to the decorated class name."""
sample_agent = Agent(role="r", goal="g", backstory="b")
sample_task = Task(description="d", expected_output="o", agent=sample_agent)
@CrewBase
class ImplicitNameCrewFactory:
agents_config = None
tasks_config = None
agents: list[BaseAgent] = [sample_agent]
tasks: list[Task] = [sample_task]
@crew
def crew(self):
return Crew(
agents=[sample_agent],
tasks=[sample_task],
)
factory_cls = cast(type[Any], ImplicitNameCrewFactory)
crew_instance: Crew = cast(Any, factory_cls()).crew()
assert crew_instance.name == "ImplicitNameCrewFactory"
def test_crew_decorator_preserves_explicit_name():
"""Explicit Crew(name=...) inside @crew should win over the @CrewBase class name."""
sample_agent = Agent(role="r", goal="g", backstory="b")
sample_task = Task(description="d", expected_output="o", agent=sample_agent)
@CrewBase
class NamedCrewFactory:
agents_config = None
tasks_config = None
agents: list[BaseAgent] = [sample_agent]
tasks: list[Task] = [sample_task]
@crew
def crew(self):
return Crew(
name="My Explicit Name",
agents=[sample_agent],
tasks=[sample_task],
)
factory_cls = cast(type[Any], NamedCrewFactory)
crew_instance: Crew = cast(Any, factory_cls()).crew()
assert crew_instance.name == "My Explicit Name"
@tool
def simple_tool():
"""Return 'Hi!'"""

View File

@@ -1640,3 +1640,43 @@ class TestBackendInitializedGatedOnSuccess:
assert bm.backend_initialized is False
assert bm.trace_batch_id is None
class TestTraceBatchManagerDuplicateInitMerge:
"""Second initialize_batch call merges execution_metadata (flow after lazy action)."""
def test_duplicate_initialize_merges_execution_metadata(self):
with (
patch(
"crewai.events.listeners.tracing.trace_batch_manager.should_auto_collect_first_time_traces",
return_value=True,
),
patch(
"crewai.events.listeners.tracing.trace_batch_manager.is_tracing_enabled_in_context",
return_value=True,
),
):
bm = TraceBatchManager()
bm.initialize_batch(
user_context={"privacy_level": "standard"},
execution_metadata={
"crew_name": "Unknown Crew",
"crewai_version": "9.9.9",
},
)
first_batch_id = bm.current_batch.batch_id
bm.initialize_batch(
user_context={"privacy_level": "standard"},
execution_metadata={
"flow_name": "ResearchFlow",
"execution_type": "flow",
"crewai_version": "9.9.9",
"execution_start": "2026-01-01T00:00:00+00:00",
},
)
assert bm.current_batch.batch_id == first_batch_id
meta = bm.current_batch.execution_metadata
assert meta.get("execution_type") == "flow"
assert meta.get("flow_name") == "ResearchFlow"
assert meta.get("crew_name") == "Unknown Crew"

View File
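The test above pins down a merge rule for duplicate `initialize_batch` calls: the first batch id is kept, and the second call's metadata is layered over the first. Assuming plain `dict.update` semantics (an assumption read off the assertions, not the real `TraceBatchManager` implementation):

```python
from typing import Any


def merge_execution_metadata(
    existing: dict[str, Any], incoming: dict[str, Any]
) -> dict[str, Any]:
    """Layer a second initialize_batch call's metadata over the first.

    Keys only in the first call (e.g. crew_name) survive; keys in the
    second call (e.g. flow_name, execution_type) are added or overwrite.
    """
    merged = dict(existing)
    merged.update(incoming)
    return merged
```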

@@ -13,7 +13,7 @@ dependencies = [
"click~=8.1.7",
"tomlkit~=0.13.2",
"openai>=1.83.0,<3",
"python-dotenv~=1.1.1",
"python-dotenv>=1.2.2,<2",
"pygithub~=1.59.1",
"rich>=13.9.4",
]

View File

@@ -1,3 +1,3 @@
"""CrewAI development tools."""
__version__ = "1.14.3a1"
__version__ = "1.14.3a3"

View File

@@ -164,7 +164,7 @@ info = "Commits must follow Conventional Commits 1.0.0."
[tool.uv]
# Pinned to include the security patch releases (authlib 1.6.11,
# langchain-text-splitters 1.1.2) uploaded on 2026-04-16.
exclude-newer = "2026-04-17"
exclude-newer = "2026-04-22"
# composio-core pins rich<14 but textual requires rich>=14.
# onnxruntime 1.24+ dropped Python 3.10 wheels; cap it so qdrant[fastembed] resolves on 3.10.

uv.lock (generated, 628 changed lines): diff suppressed because it is too large.