Compare commits

..

1 Commit

Author SHA1 Message Date
Devin AI
ee8b3be8e5 fix(tracing): stop nagging users who declined tracing (#5665)
- When user explicitly declined tracing, skip the 'Tracing is disabled'
  message instead of showing it on every crew/flow execution
- Add CREWAI_SUPPRESS_TRACING_MESSAGES env var to let users fully
  suppress the message
- Remove duplicate identical if/else branches in all four
  _show_tracing_disabled_message implementations
- Add 24 tests covering suppression via env var, context var, and
  user-declined scenarios

Co-Authored-By: João <joao@crewai.com>
2026-04-30 04:52:51 +00:00
66 changed files with 853 additions and 3932 deletions
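Based on the commit message above, the new environment variable can be exported before running a crew; this is a hedged sketch — the accepted value (`true`) is an assumption, not confirmed by the diff:

```shell
# Suppress the "Tracing is disabled" notice on crew/flow runs,
# per the CREWAI_SUPPRESS_TRACING_MESSAGES variable described in
# the commit message above. The value "true" is assumed.
export CREWAI_SUPPRESS_TRACING_MESSAGES=true
```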

View File

@@ -4,71 +4,6 @@ description: "Product updates, improvements, and fixes
icon: "clock"
mode: "wide"
---
<Update label="May 01, 2026">
## v1.14.5a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.5a1)
## What's Changed
### Features
- Add `restore_from_state_id` kickoff parameter
- Add highlights to ExaSearchTool and rename from EXASearchTool
### Bug Fixes
- Fix missing crewai pin sites in release flow
- Ensure skills loading events for traces
### Documentation
- Update changelog and version for v1.14.4
## Contributors
@akaKuruma, @github-actions[bot], @greysonlalonde, @lorenzejay, @theishangoswami
</Update>
<Update label="May 01, 2026">
## v1.14.4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.4)
## What's Changed
### Features
- Add support for custom persistence key in @persist
- Add Responses API support for Azure OpenAI provider
- Forward credential_scopes to Azure AI Inference client
- Add Vertex AI workload identity setup guide
- Add Tavily Research and get Research
- Add You.com MCP tools for search, research, and content extraction
### Bug Fixes
- Fix fall through when JSON regex match isn't valid JSON
- Fix to preserve tool_calls when response also contains text
- Fix to forward base_url and api_key to instructor.from_provider
- Fix to warn and return empty when native MCP server returns no tools
- Fix to use validated messages variable in non-streaming handlers
- Fix to guard crew chat description helpers against LLM failures
- Fix to reset messages and iterations between invocations
- Fix to forward trained-agents file through replay and test
- Fix to honor custom trained-agents file at inference
- Fix to bind task-only agents to crew for multimodal input_files
- Fix to serialize guardrail callables as null for JSON checkpointing
- Fix renaming of force_final_answer to avoid self-referential router
- Fix bump of litellm for SSTI fix; ignore unfixable pip CVE
### Documentation
- Update changelog and version for v1.14.4a1
- Add E2B Sandbox Tools page
- Add Daytona sandbox tools documentation
## Contributors
@EdwardIrby, @dependabot[bot], @factory-droid-oss, @factory-droid[bot], @greysonlalonde, @kunalk16, @lorenzejay, @lucasgomide, @manisrinivasan2k1, @mattatcha, @vinibrsl
</Update>
<Update label="Apr 29, 2026">
## v1.14.4a1

View File

@@ -380,41 +380,32 @@ class AnotherFlow(Flow[dict]):
print("Method-level persisted runs:", self.state["runs"])
```
### Forking Persisted State
### Custom Persistence Key
`@persist` supports two distinct hydration modes on `kickoff` / `kickoff_async`:
- `kickoff(inputs={"id": <uuid>})` — **resume**: load the latest snapshot for the supplied UUID and continue writing under the same `flow_uuid`. The history extends.
- `kickoff(restore_from_state_id=<uuid>)` — **fork**: load the latest snapshot for the supplied UUID, hydrate the new run's state from it, and assign a fresh `state.id` (auto-generated, or `inputs["id"]` if pinned). The new run's `@persist` writes land under the new `state.id`; the source flow's history is preserved.
By default, `@persist` uses the auto-generated `state.id` field as the persistence key. If your flow models its own identifier — for example a `conversation_id` shared across sessions — you can pass a `key` argument and `@persist` will use that attribute as the flow UUID instead:
```python
from crewai.flow.flow import Flow, start
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class CounterState(BaseModel):
id: str = ""
counter: int = 0
class ConversationState(BaseModel):
conversation_id: str
turn: int = 0
@persist
class CounterFlow(Flow[CounterState]):
@persist(key="conversation_id")  # Use a custom field as the persistence key
class ConversationFlow(Flow[ConversationState]):
@start()
def step(self):
self.state.counter += 1
print(f"[id={self.state.id}] counter={self.state.counter}")
def begin(self):
self.state.turn += 1
print(f"Conversation {self.state.conversation_id} turn {self.state.turn}")
# Run 1: fresh state, counter 0 -> 1, persisted under flow_1.state.id
flow_1 = CounterFlow()
flow_1.kickoff()
# Fork: hydrate from flow_1's latest snapshot, but use a NEW state.id
flow_2 = CounterFlow()
flow_2.kickoff(restore_from_state_id=flow_1.state.id)
# flow_2.state.counter starts at 1 (hydrated), then step() bumps it to 2.
# flow_2.state.id != flow_1.state.id; flow_1's history is unchanged.
# Resuming the same conversation reloads its prior state by conversation_id
flow = ConversationFlow(conversation_id="user-42")
flow.kickoff()
```
If the supplied `restore_from_state_id` does not match any persisted state, the kickoff falls back silently — same as the existing `inputs["id"]` resume not-found behavior. Combining `restore_from_state_id` with `from_checkpoint` raises a `ValueError`; pick one hydration source. Pinning `inputs["id"]` while forking shares a persistence key with another flow — usually you want only `restore_from_state_id`.
The decorator reads the value at `state[key]` for dict states, or `getattr(state, key)` for Pydantic / object states. If the named attribute is missing or falsy at save time, `@persist` raises a `ValueError` such as `Flow state is missing required persistence key 'conversation_id'`. When `key` is omitted, the existing behavior is preserved and `state.id` is used.
### How It Works

View File

@@ -146,14 +146,15 @@ class ProductionFlow(Flow[AppState]):
# ...
```
By default, `@persist` resumes a flow when `kickoff(inputs={"id": <uuid>})` is supplied, extending the same `flow_uuid` history. To **fork** a persisted flow into a new lineage — hydrate state from a previous run but write under a fresh `state.id` — pass `restore_from_state_id`:
By default, `@persist` keys saved state by the auto-generated `state.id`. If your application already has a natural identifier — for example a `conversation_id` that ties multiple runs to the same user session — pass it as `key` and the decorator will use that attribute as the flow UUID. A `ValueError` is raised if the named attribute is missing or falsy at save time.
```python
flow.kickoff(restore_from_state_id="<previous-run-state-id>")
@persist(key="conversation_id")
class ProductionFlow(Flow[AppState]):
# AppState must expose conversation_id; resuming a session reloads its prior state
...
```
The new run gets a fresh `state.id` (auto-generated, or `inputs["id"]` if pinned) so its `@persist` writes don't extend the source's history. Combining with `from_checkpoint` raises a `ValueError`; pick one hydration source.
## Summary
- **Start with a Flow.**

View File

@@ -133,7 +133,7 @@ crew.kickoff()
| **DirectorySearchTool** | A RAG tool for searching within directories, useful for navigating file systems. |
| **DOCXSearchTool** | A RAG tool for searching within DOCX documents, ideal for processing Word files. |
| **DirectoryReadTool** | Facilitates reading and processing of directory structures and their contents. |
| **ExaSearchTool** | A tool designed for performing exhaustive searches across various data sources. |
| **EXASearchTool** | A tool designed for performing exhaustive searches across various data sources. |
| **FileReadTool** | Enables reading and extracting data from files, supporting various file formats. |
| **FirecrawlSearchTool** | A tool to search webpages using Firecrawl and return the results. |
| **FirecrawlCrawlWebsiteTool** | A tool for crawling webpages using Firecrawl. |

View File

@@ -116,47 +116,32 @@ class PersistentCounterFlow(Flow[CounterState]):
return self.state.value
```
#### Forking Persisted State
### Using a Custom Persistence Key
`@persist` supports two distinct hydration modes on `kickoff` / `kickoff_async`. Use **resume** (`inputs["id"]`) to continue the same lineage; use **fork** (`restore_from_state_id`) to start a new lineage seeded from a snapshot:
| | `state.id` after kickoff | `@persist` writes land under |
|---|---|---|
| `inputs["id"]` (resume) | supplied id | supplied id (extends history) |
| `restore_from_state_id` (fork) | fresh id, or `inputs["id"]` if pinned | new id (source preserved) |
By default, `@persist()` keys persisted state by the flow's auto-generated `state.id`. When your domain already has a natural identifier — for example a `conversation_id` that ties multiple flow runs to the same user session — pass it as the `key` argument and `@persist` will use that attribute as the flow UUID instead of `id`:
```python
from crewai.flow.flow import Flow, start
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class CounterState(BaseModel):
id: str = ""
counter: int = 0
class ConversationState(BaseModel):
conversation_id: str
history: list[str] = []
@persist
class CounterFlow(Flow[CounterState]):
@persist(key="conversation_id")
class ConversationFlow(Flow[ConversationState]):
@start()
def step(self):
self.state.counter += 1
def greet(self):
self.state.history.append("hello")
return self.state.history
# Run 1: fresh state, counter 0 -> 1
flow_1 = CounterFlow()
flow_1.kickoff()
# Fork: hydrate from flow_1's latest snapshot, but write under a NEW state.id
flow_2 = CounterFlow()
flow_2.kickoff(restore_from_state_id=flow_1.state.id)
# flow_2 starts with counter=1 (hydrated), then step() bumps it to 2.
# flow_1's flow_uuid history is unchanged.
# A second run with the same conversation_id reloads the prior state
flow = ConversationFlow(conversation_id="user-42")
flow.kickoff()
```
Behavior notes:
- `restore_from_state_id` not found in persistence → the kickoff falls back silently to default behavior (mirrors the existing `inputs["id"]` resume not-found behavior). No exception is raised.
- Combining `restore_from_state_id` with `from_checkpoint` raises a `ValueError` — they target different state systems (`@persist` vs. Checkpointing) and cannot be combined.
- `restore_from_state_id=None` (default) is byte-identical to a kickoff without the parameter.
- Pinning `inputs["id"]` while forking means the new run shares a persistence key with another flow — usually you want only `restore_from_state_id`.
For dict-based states `@persist` reads `state[key]`, and for Pydantic / object states it reads `getattr(state, key)`. If the named attribute is missing or falsy when state is being saved, `@persist` raises a `ValueError` like `Flow state is missing required persistence key 'conversation_id'`, so the failure surfaces immediately rather than silently dropping persisted data. Calling `@persist()` without `key` keeps the original behavior of using `state.id`.
## Advanced State Patterns

View File

@@ -1,11 +1,11 @@
---
title: "Exa Search Tool"
description: "Search the web using the Exa Search API to find the most relevant results for any query, with options for full page content and highlights."
description: "Search the web using the Exa Search API to find the most relevant results for any query, with options for full page content, highlights, and summaries."
icon: "magnifying-glass"
mode: "wide"
---
The `ExaSearchTool` lets CrewAI agents search the web using the [Exa](https://exa.ai/) search API. It returns the most relevant results for any query, with options for full page content and token-efficient highlights.
The `EXASearchTool` lets CrewAI agents search the web using the [Exa](https://exa.ai/) search API. It returns the most relevant results for any query, with options for full page content and AI-generated summaries.
## Installation
@@ -27,15 +27,15 @@ export EXA_API_KEY='your_exa_api_key'
## Example Usage
Here's how to use the `ExaSearchTool` within a CrewAI agent:
Here's how to use the `EXASearchTool` within a CrewAI agent:
```python
import os
from crewai import Agent, Task, Crew
from crewai_tools import ExaSearchTool
from crewai_tools import EXASearchTool
# Initialize the tool
exa_tool = ExaSearchTool()
exa_tool = EXASearchTool()
# Create an agent that uses the tool
researcher = Agent(
@@ -66,11 +66,11 @@ print(result)
## Configuration Options
The `ExaSearchTool` accepts the following parameters during initialization:
The `EXASearchTool` accepts the following parameters during initialization:
- `type` (str, optional): The search type to use. Defaults to `"auto"`. Options: `"auto"`, `"instant"`, `"fast"`, `"deep"`.
- `highlights` (bool or dict, optional): Return token-efficient excerpts most relevant to the query instead of the full page. Defaults to `True`. Pass a dict like `{"max_characters": 4000}` to configure, or `False` to disable.
- `content` (bool, optional): Whether to include full page content in results. Defaults to `False`.
- `summary` (bool, optional): Whether to include AI-generated summaries of each result. Requires `content=True`. Defaults to `False`.
- `api_key` (str, optional): Your Exa API key. Falls back to the `EXA_API_KEY` environment variable if not provided.
- `base_url` (str, optional): Custom API server URL. Falls back to the `EXA_BASE_URL` environment variable if not provided.
@@ -86,52 +86,25 @@ print(result)
You can configure the tool with custom parameters for richer results:
```python
# Use 'deep' for thorough, multi-step searches
exa_tool = ExaSearchTool(
highlights=True,
# Get full page content with AI summaries
exa_tool = EXASearchTool(
content=True,
summary=True,
type="deep"
)
# Use it in an agent
agent = Agent(
role="Deep Researcher",
goal="Conduct thorough research",
goal="Conduct thorough research with full content and summaries",
tools=[exa_tool]
)
```
## Using Exa via MCP
You can also connect your agent to Exa's hosted MCP server. Pass your API key with the `x-api-key` header:
```python
from crewai import Agent
from crewai.mcp import MCPServerHTTP
agent = Agent(
role="Research Analyst",
goal="Find and analyze information on the web",
backstory="Expert researcher with access to Exa's tools",
mcps=[
MCPServerHTTP(
url="https://mcp.exa.ai/mcp",
headers={"x-api-key": "YOUR_EXA_API_KEY"},
),
],
)
```
Get your API key from the [Exa dashboard](https://dashboard.exa.ai/api-keys). For more on MCP in CrewAI, see the [MCP overview](/ar/mcp/overview).
## Features
- **Token-Efficient Highlights**: Get the most relevant excerpts from each result, using far fewer tokens than full text
- **Semantic Search**: Find results based on meaning, not just keywords
- **Full Content Retrieval**: Get the full text of web pages alongside search results
- **AI Summaries**: Get concise, AI-generated summaries of each result
- **Date Filtering**: Limit results to specific time periods with published date filters
- **Domain Filtering**: Restrict searches to specific domains
## Resources
- [Exa documentation](https://exa.ai/docs)
- [Exa dashboard — manage API keys and usage](https://dashboard.exa.ai)

File diff suppressed because it is too large

View File

@@ -4,71 +4,6 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="May 01, 2026">
## v1.14.5a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.5a1)
## What's Changed
### Features
- Add `restore_from_state_id` kickoff parameter
- Add highlights to ExaSearchTool and rename from EXASearchTool
### Bug Fixes
- Fix missing crewai pin sites in release flow
- Ensure skills loading events for traces
### Documentation
- Update changelog and version for v1.14.4
## Contributors
@akaKuruma, @github-actions[bot], @greysonlalonde, @lorenzejay, @theishangoswami
</Update>
<Update label="May 01, 2026">
## v1.14.4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.4)
## What's Changed
### Features
- Add support for custom persistence key in @persist
- Add Responses API support for Azure OpenAI provider
- Forward credential_scopes to Azure AI Inference client
- Add Vertex AI workload identity setup guide
- Add Tavily Research and get Research
- Add You.com MCP tools for search, research, and content extraction
### Bug Fixes
- Fix fall through when JSON regex match isn't valid JSON
- Fix to preserve tool_calls when response also contains text
- Fix to forward base_url and api_key to instructor.from_provider
- Fix to warn and return empty when native MCP server returns no tools
- Fix to use validated messages variable in non-streaming handlers
- Fix to guard crew chat description helpers against LLM failures
- Fix to reset messages and iterations between invocations
- Fix to forward trained-agents file through replay and test
- Fix to honor custom trained-agents file at inference
- Fix to bind task-only agents to crew for multimodal input_files
- Fix to serialize guardrail callables as null for JSON checkpointing
- Fix renaming of force_final_answer to avoid self-referential router
- Fix bump of litellm for SSTI fix; ignore unfixable pip CVE
### Documentation
- Update changelog and version for v1.14.4a1
- Add E2B Sandbox Tools page
- Add Daytona sandbox tools documentation
## Contributors
@EdwardIrby, @dependabot[bot], @factory-droid-oss, @factory-droid[bot], @greysonlalonde, @kunalk16, @lorenzejay, @lucasgomide, @manisrinivasan2k1, @mattatcha, @vinibrsl
</Update>
<Update label="Apr 29, 2026">
## v1.14.4a1

View File

@@ -380,41 +380,32 @@ class AnotherFlow(Flow[dict]):
print("Method-level persisted runs:", self.state["runs"])
```
### Forking Persisted State
### Custom Persistence Key
`@persist` supports two distinct hydration modes on `kickoff` / `kickoff_async`:
- `kickoff(inputs={"id": <uuid>})` — **resume**: load the latest snapshot for the supplied UUID and continue writing under the same `flow_uuid`. The history extends.
- `kickoff(restore_from_state_id=<uuid>)` — **fork**: load the latest snapshot for the supplied UUID, hydrate the new run's state from it, and assign a fresh `state.id` (auto-generated, or `inputs["id"]` if pinned). The new run's `@persist` writes land under the new `state.id`; the source flow's history is preserved.
By default, `@persist` uses the auto-generated `state.id` field as the persistence key. If your flow models its own identifier — for example a `conversation_id` shared across sessions — you can pass a `key` argument and `@persist` will use that attribute as the flow UUID instead:
```python
from crewai.flow.flow import Flow, start
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class CounterState(BaseModel):
id: str = ""
counter: int = 0
class ConversationState(BaseModel):
conversation_id: str
turn: int = 0
@persist
class CounterFlow(Flow[CounterState]):
@persist(key="conversation_id") # Use a custom field as the persistence key
class ConversationFlow(Flow[ConversationState]):
@start()
def step(self):
self.state.counter += 1
print(f"[id={self.state.id}] counter={self.state.counter}")
def begin(self):
self.state.turn += 1
print(f"Conversation {self.state.conversation_id} turn {self.state.turn}")
# Run 1: fresh state, counter 0 -> 1, persisted under flow_1.state.id
flow_1 = CounterFlow()
flow_1.kickoff()
# Fork: hydrate from flow_1's latest snapshot, but use a NEW state.id
flow_2 = CounterFlow()
flow_2.kickoff(restore_from_state_id=flow_1.state.id)
# flow_2.state.counter starts at 1 (hydrated), then step() bumps it to 2.
# flow_2.state.id != flow_1.state.id; flow_1's history is unchanged.
# Resuming the same conversation reloads its prior state by conversation_id
flow = ConversationFlow(conversation_id="user-42")
flow.kickoff()
```
If the supplied `restore_from_state_id` does not match any persisted state, the kickoff falls back silently — same as the existing `inputs["id"]` resume not-found behavior. Combining `restore_from_state_id` with `from_checkpoint` raises a `ValueError`; pick one hydration source. Pinning `inputs["id"]` while forking shares a persistence key with another flow — usually you want only `restore_from_state_id`.
The decorator reads the value at `state[key]` for dict states, or `getattr(state, key)` for Pydantic / object states. If the named attribute is missing or falsy at save time, `@persist` raises a `ValueError` such as `Flow state is missing required persistence key 'conversation_id'`. When `key` is omitted, the existing behavior is preserved and `state.id` is used.
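As a minimal sketch of the lookup rule just described (illustrative only — the helper name is invented and this is not CrewAI's internal code):

```python
def read_persistence_key(state, key="id"):
    """Resolve the persistence key per the rule above (illustrative sketch)."""
    # dict states: look up state[key]; Pydantic / object states: getattr(state, key)
    value = state.get(key) if isinstance(state, dict) else getattr(state, key, None)
    if not value:  # missing or falsy at save time -> fail loudly
        raise ValueError(f"Flow state is missing required persistence key '{key}'")
    return value
```

For example, `read_persistence_key({"conversation_id": "user-42"}, "conversation_id")` returns `"user-42"`, while an empty or missing value raises the error shown above.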
### How It Works

View File

@@ -146,14 +146,15 @@ class ProductionFlow(Flow[AppState]):
# ...
```
By default, `@persist` resumes a flow when `kickoff(inputs={"id": <uuid>})` is supplied, extending the same `flow_uuid` history. To **fork** a persisted flow into a new lineage — hydrate state from a previous run but write under a fresh `state.id` — pass `restore_from_state_id`:
By default, `@persist` keys saved state by the auto-generated `state.id`. If your application already has a natural identifier — for example a `conversation_id` that ties multiple runs to the same user session — pass it as `key` and the decorator will use that attribute as the flow UUID. A `ValueError` is raised if the named attribute is missing or falsy at save time.
```python
flow.kickoff(restore_from_state_id="<previous-run-state-id>")
@persist(key="conversation_id")
class ProductionFlow(Flow[AppState]):
# AppState must expose conversation_id; resuming a session reloads its prior state
...
```
The new run gets a fresh `state.id` (auto-generated, or `inputs["id"]` if pinned) so its `@persist` writes don't extend the source's history. Combining with `from_checkpoint` raises a `ValueError`; pick one hydration source.
## Summary
- **Start with a Flow.**

View File

@@ -133,7 +133,7 @@ Here is a list of the available tools and their descriptions:
| **DirectorySearchTool** | A RAG tool for searching within directories, useful for navigating through file systems. |
| **DOCXSearchTool** | A RAG tool aimed at searching within DOCX documents, ideal for processing Word files. |
| **DirectoryReadTool** | Facilitates reading and processing of directory structures and their contents. |
| **ExaSearchTool** | Search the web with Exa, the fastest and most accurate web search API. Supports token-efficient highlights and full page content. |
| **EXASearchTool** | A tool designed for performing exhaustive searches across various data sources. |
| **FileReadTool** | Enables reading and extracting data from files, supporting various file formats. |
| **FirecrawlSearchTool** | A tool to search webpages using Firecrawl and return the results. |
| **FirecrawlCrawlWebsiteTool** | A tool for crawling webpages using Firecrawl. |

View File

@@ -346,47 +346,32 @@ class SelectivePersistFlow(Flow):
return f"Complete with count {self.state['count']}"
```
#### Forking Persisted State
#### Using a Custom Persistence Key
`@persist` supports two distinct hydration modes on `kickoff` / `kickoff_async`. Use **resume** (`inputs["id"]`) to continue the same lineage; use **fork** (`restore_from_state_id`) to start a new lineage seeded from a snapshot:
| | `state.id` after kickoff | `@persist` writes land under |
|---|---|---|
| `inputs["id"]` (resume) | supplied id | supplied id (extends history) |
| `restore_from_state_id` (fork) | fresh id, or `inputs["id"]` if pinned | new id (source preserved) |
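The table above can be sketched as a toy in-memory model (illustrative only: `store` stands in for the persistence backend, and this `kickoff` is not CrewAI's actual implementation):

```python
import copy
import uuid

store = {}  # flow_uuid -> latest state snapshot

def kickoff(inputs=None, restore_from_state_id=None):
    if restore_from_state_id is not None and restore_from_state_id in store:
        # fork: hydrate from the snapshot, then assign a fresh id
        state = copy.deepcopy(store[restore_from_state_id])
        state["id"] = (inputs or {}).get("id") or str(uuid.uuid4())
    elif inputs and inputs.get("id") in store:
        # resume: hydrate and keep writing under the supplied id
        state = copy.deepcopy(store[inputs["id"]])
    else:
        state = {"id": str(uuid.uuid4()), "counter": 0}
    state["counter"] += 1          # the flow's work
    store[state["id"]] = state     # the @persist write lands under state["id"]
    return state

run1 = kickoff()                                  # fresh lineage, counter 1
fork = kickoff(restore_from_state_id=run1["id"])  # new id, hydrated counter -> 2
resume = kickoff(inputs={"id": run1["id"]})       # same id, history extends -> 2
```

The fork gets a new `id` while the source snapshot is untouched; the resume keeps writing under the supplied `id`, matching the two rows of the table.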
By default, `@persist()` keys persisted state by the flow's auto-generated `state.id`. When your domain already has a natural identifier — for example a `conversation_id` that ties multiple flow runs to the same user session — pass it as the `key` argument and `@persist` will use that attribute as the flow UUID instead of `id`:
```python
from crewai.flow.flow import Flow, start
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class CounterState(BaseModel):
id: str = ""
counter: int = 0
class ConversationState(BaseModel):
conversation_id: str
history: list[str] = []
@persist
class CounterFlow(Flow[CounterState]):
@persist(key="conversation_id")
class ConversationFlow(Flow[ConversationState]):
@start()
def step(self):
self.state.counter += 1
def greet(self):
self.state.history.append("hello")
return self.state.history
# Run 1: fresh state, counter 0 -> 1
flow_1 = CounterFlow()
flow_1.kickoff()
# Fork: hydrate from flow_1's latest snapshot, but write under a NEW state.id
flow_2 = CounterFlow()
flow_2.kickoff(restore_from_state_id=flow_1.state.id)
# flow_2 starts with counter=1 (hydrated), then step() bumps it to 2.
# flow_1's flow_uuid history is unchanged.
# A second run with the same conversation_id reloads the prior state
flow = ConversationFlow(conversation_id="user-42")
flow.kickoff()
```
Behavior notes:
- `restore_from_state_id` not found in persistence → the kickoff falls back silently to default behavior (mirrors the existing `inputs["id"]` resume not-found behavior). No exception is raised.
- Combining `restore_from_state_id` with `from_checkpoint` raises a `ValueError` — they target different state systems (`@persist` vs. Checkpointing) and cannot be combined.
- `restore_from_state_id=None` (default) is byte-identical to a kickoff without the parameter.
- Pinning `inputs["id"]` while forking means the new run shares a persistence key with another flow — usually you want only `restore_from_state_id`.
For dict-based states `@persist` reads `state[key]`, and for Pydantic / object states it reads `getattr(state, key)`. If the named attribute is missing or falsy when state is being saved, `@persist` raises a `ValueError` like `Flow state is missing required persistence key 'conversation_id'`, so the failure surfaces immediately rather than silently dropping persisted data. Calling `@persist()` without `key` keeps the original behavior of using `state.id`.
## Advanced State Patterns

View File

@@ -1,11 +1,11 @@
---
title: "Exa Search Tool"
description: "Search the web with Exa, the fastest and most accurate web search API. Get token-efficient highlights and full page content."
description: "Search the web using the Exa Search API to find the most relevant results for any query, with options for full page content, highlights, and summaries."
icon: "magnifying-glass"
mode: "wide"
---
The `ExaSearchTool` lets CrewAI agents search the web using [Exa](https://exa.ai/), the fastest and most accurate web search API. It returns the most relevant results for any query, with options for token-efficient highlights and full page content.
The `EXASearchTool` lets CrewAI agents search the web using the [Exa](https://exa.ai/) search API. It returns the most relevant results for any query, with options for full page content and AI-generated summaries.
## Installation
@@ -27,15 +27,15 @@ Get an API key from the [Exa dashboard](https://dashboard.exa.ai/api-keys).
## Example Usage
Here's how to use the `ExaSearchTool` within a CrewAI agent:
Here's how to use the `EXASearchTool` within a CrewAI agent:
```python
import os
from crewai import Agent, Task, Crew
from crewai_tools import ExaSearchTool
from crewai_tools import EXASearchTool
# Initialize the tool
exa_tool = ExaSearchTool()
exa_tool = EXASearchTool()
# Create an agent that uses the tool
researcher = Agent(
@@ -66,11 +66,11 @@ print(result)
## Configuration Options
The `ExaSearchTool` accepts the following parameters during initialization:
The `EXASearchTool` accepts the following parameters during initialization:
- `type` (str, optional): The search type to use. Defaults to `"auto"`. Options: `"auto"`, `"instant"`, `"fast"`, `"deep"`.
- `highlights` (bool or dict, optional): Return token-efficient excerpts most relevant to the query instead of the full page. Defaults to `True`. Pass a dict like `{"max_characters": 4000}` to configure, or `False` to disable.
- `content` (bool, optional): Whether to include full page content in results. Defaults to `False`.
- `summary` (bool, optional): Whether to include AI-generated summaries of each result. Requires `content=True`. Defaults to `False`.
- `api_key` (str, optional): Your Exa API key. Falls back to the `EXA_API_KEY` environment variable if not provided.
- `base_url` (str, optional): Custom API server URL. Falls back to the `EXA_BASE_URL` environment variable if not provided.
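The parameter interactions listed above can be sketched as a small validation helper (the helper itself is invented for illustration; only the constraints — the four search types and `summary` requiring `content=True` — come from the docs):

```python
VALID_TYPES = {"auto", "instant", "fast", "deep"}

def validate_exa_options(type="auto", highlights=True, content=False, summary=False):
    """Check the documented parameter constraints (illustrative helper)."""
    if type not in VALID_TYPES:
        raise ValueError(f"type must be one of {sorted(VALID_TYPES)}, got {type!r}")
    if summary and not content:
        # per the docs above: summary requires content=True
        raise ValueError("summary=True requires content=True")
    return {"type": type, "highlights": highlights, "content": content, "summary": summary}
```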
@@ -83,70 +83,28 @@ When calling the tool (or when an agent invokes it), the following search parame
## Advanced Usage
For most agent workflows we recommend `highlights` — it returns the most relevant excerpts from each result and uses far fewer tokens than full page content:
You can configure the tool with custom parameters for richer results:
```python
# Get token-efficient excerpts most relevant to the query
exa_tool = ExaSearchTool(
highlights=True,
type="auto",
# Get full page content with AI summaries
exa_tool = EXASearchTool(
content=True,
summary=True,
type="deep"
)
# Use it in an agent
agent = Agent(
role="Researcher",
goal="Answer questions with current web data",
role="Deep Researcher",
goal="Conduct thorough research with full content and summaries",
tools=[exa_tool]
)
```
For thorough, multi-step searches, use `type="deep"`:
```python
exa_tool = ExaSearchTool(
highlights=True,
type="deep",
)
```
For more on choosing between highlights and full content, see the [Exa search best practices](https://exa.ai/docs/reference/search-best-practices).
## Using Exa via MCP
You can also connect your agent to Exa's hosted MCP server. Pass your API key with the `x-api-key` header:
```python
from crewai import Agent
from crewai.mcp import MCPServerHTTP
agent = Agent(
role="Research Analyst",
goal="Find and analyze information on the web",
backstory="Expert researcher with access to Exa's tools",
mcps=[
MCPServerHTTP(
url="https://mcp.exa.ai/mcp",
headers={"x-api-key": "YOUR_EXA_API_KEY"},
),
],
)
```
Get your API key from the [Exa dashboard](https://dashboard.exa.ai/api-keys). For more on MCP in CrewAI, see the [MCP overview](/en/mcp/overview).
## Features
- **Token-Efficient Highlights**: Get the most relevant excerpts from each result, ~10x fewer tokens than full text
- **Semantic Search**: Find results based on meaning, not just keywords
- **Full Content Retrieval**: Get the full text of web pages alongside search results
- **AI Summaries**: Get concise, AI-generated summaries of each result
- **Date Filtering**: Limit results to specific time periods with published date filters
- **Domain Filtering**: Restrict searches to specific domains
<Note>
`EXASearchTool` is a deprecated alias for `ExaSearchTool`. Existing imports continue to work but will emit a deprecation warning; please migrate to `ExaSearchTool`.
</Note>
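One common way such a deprecated alias is implemented — shown here only as a sketch under that assumption, not crewai_tools' actual code:

```python
import warnings

class ExaSearchTool:
    """Stands in for the current tool class (sketch only)."""

def _make_deprecated_alias(new_cls, old_name):
    # Subclass that warns once per instantiation, then behaves like new_cls.
    class _Alias(new_cls):
        def __init__(self, *args, **kwargs):
            warnings.warn(
                f"{old_name} is deprecated; use {new_cls.__name__} instead",
                DeprecationWarning,
                stacklevel=2,
            )
            super().__init__(*args, **kwargs)
    _Alias.__name__ = old_name
    return _Alias

EXASearchTool = _make_deprecated_alias(ExaSearchTool, "EXASearchTool")
```

Instantiating the old name still yields an `ExaSearchTool` instance, but emits a `DeprecationWarning` nudging callers to migrate.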
## Resources
- [Exa documentation](https://exa.ai/docs)
- [Exa dashboard — manage API keys and usage](https://dashboard.exa.ai)

View File

@@ -4,71 +4,6 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="May 01, 2026">
## v1.14.5a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.5a1)
## What's Changed
### Features
- Add `restore_from_state_id` kickoff parameter
- Add highlights to ExaSearchTool and rename from EXASearchTool
### Bug Fixes
- Fix missing crewai pin sites in release flow
- Ensure skills loading events for traces
### Documentation
- Update changelog and version for v1.14.4
## Contributors
@akaKuruma, @github-actions[bot], @greysonlalonde, @lorenzejay, @theishangoswami
</Update>
<Update label="May 01, 2026">
## v1.14.4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.4)
## What's Changed
### Features
- Add support for custom persistence key in @persist
- Add Responses API support for Azure OpenAI provider
- Forward credential_scopes to Azure AI Inference client
- Add Vertex AI workload identity setup guide
- Add Tavily Research and get Research
- Add You.com MCP tools for search, research, and content extraction
### Bug Fixes
- Fix fall through when JSON regex match isn't valid JSON
- Fix to preserve tool_calls when response also contains text
- Fix to forward base_url and api_key to instructor.from_provider
- Fix to warn and return empty when native MCP server returns no tools
- Fix to use validated messages variable in non-streaming handlers
- Fix to guard crew chat description helpers against LLM failures
- Fix to reset messages and iterations between invocations
- Fix to forward trained-agents file through replay and test
- Fix to honor custom trained-agents file at inference
- Fix to bind task-only agents to crew for multimodal input_files
- Fix to serialize guardrail callables as null for JSON checkpointing
- Fix renaming of force_final_answer to avoid self-referential router
- Fix bump of litellm for SSTI fix; ignore unfixable pip CVE
### Documentation
- Update changelog and version for v1.14.4a1
- Add E2B Sandbox Tools page
- Add Daytona sandbox tools documentation
## Contributors
@EdwardIrby, @dependabot[bot], @factory-droid-oss, @factory-droid[bot], @greysonlalonde, @kunalk16, @lorenzejay, @lucasgomide, @manisrinivasan2k1, @mattatcha, @vinibrsl
</Update>
<Update label="Apr 29, 2026">
## v1.14.4a1

View File

@@ -373,41 +373,32 @@ class AnotherFlow(Flow[dict]):
print("Method-level persisted runs:", self.state["runs"])
```
### Forking Persisted State
### Custom Persistence Key
`@persist` supports two distinct hydration modes on `kickoff` / `kickoff_async`:
- `kickoff(inputs={"id": <uuid>})`: **resume** loads the latest snapshot for the given UUID and keeps writing under the same `flow_uuid`. History is extended.
- `kickoff(restore_from_state_id=<uuid>)`: **fork** loads the latest snapshot for the given UUID, hydrates the new run's state from it, and assigns a new `state.id` (auto-generated, or `inputs["id"]` if pinned). The new run's `@persist` writes go under the new `state.id`, and the source flow's history is preserved.
By default, `@persist` uses the auto-generated `state.id` field as the persistence key. If your flow has its own identifier, such as a `conversation_id` shared across sessions, pass the `key` argument and `@persist` will use that attribute as the flow UUID:
```python
from crewai.flow.flow import Flow, start
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class CounterState(BaseModel):
id: str = ""
counter: int = 0
class ConversationState(BaseModel):
conversation_id: str
turn: int = 0
@persist
class CounterFlow(Flow[CounterState]):
@persist(key="conversation_id") # Use a custom field as the persistence key
class ConversationFlow(Flow[ConversationState]):
@start()
def step(self):
self.state.counter += 1
print(f"[id={self.state.id}] counter={self.state.counter}")
def begin(self):
self.state.turn += 1
print(f"Conversation {self.state.conversation_id} turn {self.state.turn}")
# Run 1: fresh state, counter 0 -> 1, persisted under flow_1.state.id
flow_1 = CounterFlow()
flow_1.kickoff()
# Fork: hydrate from flow_1's latest snapshot, but use a NEW state.id
flow_2 = CounterFlow()
flow_2.kickoff(restore_from_state_id=flow_1.state.id)
# flow_2.state.counter starts at 1 (hydrated), and step() increments it to 2.
# flow_2.state.id != flow_1.state.id; flow_1's history is unchanged.
# Running again with the same conversation_id reloads the previous state
flow = ConversationFlow(conversation_id="user-42")
flow.kickoff()
```
If the provided `restore_from_state_id` does not match any persisted state, kickoff silently falls back to the default behavior, matching the existing not-found behavior of `inputs["id"]`. Combining `restore_from_state_id` with `from_checkpoint` raises a `ValueError`; pick a single hydration source. Pinning `inputs["id"]` during a fork shares a persistence key with another flow, so you usually want to use `restore_from_state_id` alone.
The decorator reads the value from `state[key]` for dict states, or via `getattr(state, key)` for Pydantic / object states. If the given attribute is missing or falsy at save time, `@persist` raises a `ValueError` such as `Flow state is missing required persistence key 'conversation_id'`. Omitting `key` preserves the original behavior of using `state.id`.
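The dict-vs-object lookup and the missing-key error can be sketched as a small helper. This is illustrative only; the real logic lives inside `@persist` in `crewai.flow.persistence`, and the function name here is hypothetical:

```python
from typing import Any


def resolve_persistence_key(state: Any, key: str = "id") -> str:
    """Simplified model of how @persist reads its persistence key."""
    if isinstance(state, dict):
        value = state.get(key)          # dict state: state[key]
    else:
        value = getattr(state, key, None)  # Pydantic / object state
    if not value:
        # Missing or falsy at save time fails loudly instead of
        # silently dropping persisted data.
        raise ValueError(
            f"Flow state is missing required persistence key '{key}'"
        )
    return value


print(resolve_persistence_key({"conversation_id": "user-42"}, "conversation_id"))
# → user-42
```

With `key` omitted the helper falls back to `"id"`, matching the default `state.id` behavior described above.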
### How It Works

View File

@@ -146,14 +146,15 @@ class ProductionFlow(Flow[AppState]):
# ...
```
By default, `@persist` resumes a flow when `kickoff(inputs={"id": <uuid>})` is provided, extending the same `flow_uuid` history. To **fork** a persisted flow into a new lineage, hydrating state from a previous run but writing under a new `state.id`, pass `restore_from_state_id`:
By default, `@persist` uses the auto-generated `state.id` as the key for saved state. If your application already has a natural identifier, for example a `conversation_id` that ties several runs to the same user session, pass it as `key` and the decorator will use that attribute as the flow UUID. A `ValueError` is raised if the given attribute is missing or falsy at save time.
```python
flow.kickoff(restore_from_state_id="<previous-run-state-id>")
@persist(key="conversation_id")
class ProductionFlow(Flow[AppState]):
# AppState must expose conversation_id; resuming the session reloads the previous state
...
```
The new run receives a new `state.id` (auto-generated, or `inputs["id"]` if pinned), so its `@persist` writes do not extend the source's history. Combining with `from_checkpoint` raises a `ValueError`; pick a single hydration source.
## Summary
- **Start with a Flow.**

View File

@@ -132,7 +132,7 @@ crew.kickoff()
| **DirectorySearchTool** | A RAG tool for searching within directories, useful for navigating file systems. |
| **DOCXSearchTool** | A RAG tool aimed at searching within DOCX documents, ideal for processing Word files. |
| **DirectoryReadTool** | Facilitates reading and processing of directory structures and their contents. |
| **ExaSearchTool** | A tool designed for performing exhaustive searches across various data sources. |
| **EXASearchTool** | A tool designed for performing exhaustive searches across various data sources. |
| **FileReadTool** | Enables reading and extracting data from files, supporting various file formats. |
| **FirecrawlSearchTool** | A tool to search webpages using Firecrawl and return the results. |
| **FirecrawlCrawlWebsiteTool** | A tool for crawling webpages using Firecrawl. |

View File

@@ -346,47 +346,32 @@ class SelectivePersistFlow(Flow):
return f"Complete with count {self.state['count']}"
```
#### Forking Persisted State
#### Using a Custom Persistence Key
`@persist` supports two distinct hydration modes on `kickoff` / `kickoff_async`. Use **resume** (`inputs["id"]`) to continue the same lineage, and **fork** (`restore_from_state_id`) to start a new lineage from a snapshot:
| | `state.id` after kickoff | `@persist` writes go to |
|---|---|---|
| `inputs["id"]` (resume) | provided id | provided id (extends history) |
| `restore_from_state_id` (fork) | new id, or `inputs["id"]` if pinned | new id (source preserved) |
By default, `@persist()` uses the auto-generated `state.id` as the key for persisted state. When your domain already has a natural identifier, such as a `conversation_id` tying several flow runs to the same user session, pass it as the `key` argument and `@persist` will use that attribute as the flow UUID instead of `id`:
```python
from crewai.flow.flow import Flow, start
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class CounterState(BaseModel):
id: str = ""
counter: int = 0
class ConversationState(BaseModel):
conversation_id: str
history: list[str] = []
@persist
class CounterFlow(Flow[CounterState]):
@persist(key="conversation_id")
class ConversationFlow(Flow[ConversationState]):
@start()
def step(self):
self.state.counter += 1
def greet(self):
self.state.history.append("hello")
return self.state.history
# Run 1: fresh state, counter 0 -> 1
flow_1 = CounterFlow()
flow_1.kickoff()
# Fork: hydrate from flow_1's latest snapshot, but write under a NEW state.id
flow_2 = CounterFlow()
flow_2.kickoff(restore_from_state_id=flow_1.state.id)
# flow_2 starts with counter=1 (hydrated), and step() increments it to 2.
# flow_1's flow_uuid history is unchanged.
# A second run with the same conversation_id reloads the previous state
flow = ConversationFlow(conversation_id="user-42")
flow.kickoff()
```
Behavior notes:
- `restore_from_state_id` not found in persistence → kickoff silently falls back to the default behavior (mirroring the existing not-found behavior of `inputs["id"]`). No exception is raised.
- Combining `restore_from_state_id` with `from_checkpoint` raises a `ValueError`; they target different state systems (`@persist` vs. Checkpointing) and cannot be combined.
- `restore_from_state_id=None` (the default) is byte-identical to a kickoff without the parameter.
- Pinning `inputs["id"]` during a fork means the new run shares a persistence key with another flow; you usually want `restore_from_state_id` alone.
For dict-based states `@persist` reads `state[key]`, and for Pydantic / object states it reads `getattr(state, key)`. If the given attribute is missing or falsy when the state is saved, `@persist` raises a `ValueError` such as `Flow state is missing required persistence key 'conversation_id'`, so the failure surfaces immediately instead of silently dropping persisted data. Calling `@persist()` without `key` keeps the original behavior of using `state.id`.
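The resume-vs-fork semantics can be illustrated with a toy in-memory model. This is NOT the CrewAI implementation; the class and its store are invented here purely to show where writes go in each mode:

```python
import uuid


class ToyPersistence:
    """Toy model of @persist resume/fork semantics (illustration only)."""

    def __init__(self):
        self.store: dict[str, dict] = {}  # flow_uuid -> latest snapshot

    def kickoff(self, inputs=None, restore_from_state_id=None):
        inputs = inputs or {}
        if restore_from_state_id is not None:
            # Fork: hydrate from the snapshot, write under a NEW id.
            snapshot = dict(self.store.get(restore_from_state_id, {}))
            new_id = inputs.get("id") or str(uuid.uuid4())
        elif "id" in inputs and inputs["id"] in self.store:
            # Resume: same id, history extended.
            snapshot = dict(self.store[inputs["id"]])
            new_id = inputs["id"]
        else:
            # Fresh run (also the silent fallback when nothing is found).
            snapshot, new_id = {}, inputs.get("id") or str(uuid.uuid4())
        snapshot["counter"] = snapshot.get("counter", 0) + 1
        self.store[new_id] = snapshot
        return new_id, snapshot["counter"]


p = ToyPersistence()
run1, c1 = p.kickoff()                            # fresh: counter 1
run2, c2 = p.kickoff(restore_from_state_id=run1)  # fork: counter 2, new id
assert run2 != run1 and p.store[run1]["counter"] == 1  # source preserved
```

Resuming with `inputs={"id": run1}` instead would increment the counter under `run1` itself, extending that lineage's history.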
## Advanced State Patterns

View File

@@ -1,15 +1,15 @@
---
title: EXA Search Web Loader
description: ExaSearchTool is designed to perform a semantic search for a specified query from a text's content across the internet.
description: EXASearchTool is designed to perform a semantic search for a specified query from a text's content across the internet.
icon: globe-pointer
mode: "wide"
---
# `ExaSearchTool`
# `EXASearchTool`
## Description
ExaSearchTool is designed to perform a semantic search for a specified query across the internet based on a text's content.
EXASearchTool is designed to perform a semantic search for a specified query across the internet based on a text's content.
It utilizes the [exa.ai](https://exa.ai/) API to fetch and display the most relevant search results based on the query provided by the user.
## Installation
@@ -25,15 +25,15 @@ pip install 'crewai[tools]'
The following example demonstrates how to initialize the tool and execute a search with a given query:
```python Code
from crewai_tools import ExaSearchTool
from crewai_tools import EXASearchTool
# Initialize the tool for internet searching capabilities
tool = ExaSearchTool()
tool = EXASearchTool()
```
## Steps to Get Started
To use the ExaSearchTool effectively, follow these steps:
To use the EXASearchTool effectively, follow these steps:
<Steps>
<Step title="Package Installation">
@@ -47,35 +47,7 @@ ExaSearchTool을 효과적으로 사용하려면 다음 단계를 따르세요:
</Step>
</Steps>
## Using Exa via MCP
You can also connect your agent to Exa's hosted MCP server. Pass your API key in the `x-api-key` header:
```python
from crewai import Agent
from crewai.mcp import MCPServerHTTP
agent = Agent(
role="Research Analyst",
goal="Find and analyze information on the web",
backstory="Expert researcher with access to Exa's tools",
mcps=[
MCPServerHTTP(
url="https://mcp.exa.ai/mcp",
headers={"x-api-key": "YOUR_EXA_API_KEY"},
),
],
)
```
You can get an API key from the [Exa dashboard](https://dashboard.exa.ai/api-keys). For more on using MCP with CrewAI, see the [MCP overview](/ko/mcp/overview).
## Conclusion
By integrating the `ExaSearchTool` into Python projects, users gain the ability to search the internet in real time directly from within their applications.
By integrating the `EXASearchTool` into Python projects, users gain the ability to search the internet in real time directly from within their applications.
Following the provided setup and usage instructions makes including this tool in your project simple and intuitive.
## Resources
- [Exa official documentation](https://exa.ai/docs)
- [Exa dashboard — manage API keys and usage](https://dashboard.exa.ai)

View File

@@ -4,71 +4,6 @@ description: "Atualizações de produto, melhorias e correções do CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="May 1, 2026">
## v1.14.5a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.5a1)
## What Changed
### Features
- Add `restore_from_state_id` kickoff parameter
- Add highlights to ExaSearchTool and rename from EXASearchTool
### Bug Fixes
- Fix missing crewai pin locations in release flow
- Ensure skill loading events for traces
### Documentation
- Update changelog and version for v1.14.4
## Contributors
@akaKuruma, @github-actions[bot], @greysonlalonde, @lorenzejay, @theishangoswami
</Update>
<Update label="May 1, 2026">
## v1.14.4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.4)
## What Changed
### Features
- Add support for custom persistence key in @persist
- Add Responses API support for the Azure OpenAI provider
- Forward credential_scopes to the Azure AI Inference client
- Add Vertex AI workload identity setup guide
- Add Tavily Research and Get Research
- Add You.com MCP tools for search, research, and content extraction
### Bug Fixes
- Fix fall-through when the JSON regex match is not valid JSON
- Fix to preserve tool_calls when the response also contains text
- Fix to forward base_url and api_key to instructor.from_provider
- Fix to warn and return empty when a native MCP server returns no tools
- Fix to use the validated messages variable in non-streaming handlers
- Fix to guard crew chat description helpers against LLM failures
- Fix to reset messages and iterations between invocations
- Fix to forward the trained agents file through replay and test
- Fix to honor a custom trained agents file at inference
- Fix to bind task-only agents to the crew for multimodal input_files
- Fix to serialize guardrail callables as null for JSON checkpointing
- Fix force_final_answer rename to avoid a self-referential router
- Bump litellm for SSTI fix; ignore unfixable pip CVE
### Documentation
- Update changelog and version for v1.14.4a1
- Add E2B Sandbox Tools page
- Add Daytona sandbox tools documentation
## Contributors
@EdwardIrby, @dependabot[bot], @factory-droid-oss, @factory-droid[bot], @greysonlalonde, @kunalk16, @lorenzejay, @lucasgomide, @manisrinivasan2k1, @mattatcha, @vinibrsl
</Update>
<Update label="Apr 29, 2026">
## v1.14.4a1

View File

@@ -193,41 +193,32 @@ Para um controle mais granular, você pode aplicar @persist em métodos específ
# (code not translated)
```
### Forking Persisted State
### Custom Persistence Key
`@persist` supports two distinct hydration modes on `kickoff` / `kickoff_async`:
- `kickoff(inputs={"id": <uuid>})`: **resume** loads the latest snapshot for the given UUID and keeps writing under the same `flow_uuid`. History is extended.
- `kickoff(restore_from_state_id=<uuid>)`: **fork** loads the latest snapshot for the given UUID, hydrates the new run's state from it, and assigns a new `state.id` (auto-generated, or `inputs["id"]` if pinned). The new run's `@persist` writes go under the new `state.id`; the source flow's history is preserved.
By default, `@persist` uses the auto-generated `state.id` field as the persistence key. If your flow already has a natural identifier, for example a `conversation_id` shared across sessions, pass the `key` argument and `@persist` will use that attribute as the flow UUID:
```python
from crewai.flow.flow import Flow, start
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class CounterState(BaseModel):
id: str = ""
counter: int = 0
class ConversationState(BaseModel):
conversation_id: str
turn: int = 0
@persist
class CounterFlow(Flow[CounterState]):
@persist(key="conversation_id") # Use a custom field as the persistence key
class ConversationFlow(Flow[ConversationState]):
@start()
def step(self):
self.state.counter += 1
print(f"[id={self.state.id}] counter={self.state.counter}")
def begin(self):
self.state.turn += 1
print(f"Conversation {self.state.conversation_id} turn {self.state.turn}")
# Run 1: fresh state, counter 0 -> 1, persisted under flow_1.state.id
flow_1 = CounterFlow()
flow_1.kickoff()
# Fork: hydrate from flow_1's latest snapshot, but use a NEW state.id
flow_2 = CounterFlow()
flow_2.kickoff(restore_from_state_id=flow_1.state.id)
# flow_2.state.counter starts at 1 (hydrated), and step() increments it to 2.
# flow_2.state.id != flow_1.state.id; flow_1's history is unchanged.
# Resuming the same conversation reloads the previous state via conversation_id
flow = ConversationFlow(conversation_id="user-42")
flow.kickoff()
```
If the provided `restore_from_state_id` does not match any persisted state, kickoff silently falls back to the default behavior, the same not-found behavior as `inputs["id"]`. Combining `restore_from_state_id` with `from_checkpoint` raises a `ValueError`; pick a single hydration source. Pinning `inputs["id"]` during a fork shares a persistence key with another flow, so you usually want `restore_from_state_id` alone.
The decorator reads the value from `state[key]` for dict-like states or `getattr(state, key)` for Pydantic / object states. If the given attribute is missing or falsy at save time, `@persist` raises a `ValueError` such as `Flow state is missing required persistence key 'conversation_id'`. When `key` is omitted, the original behavior is preserved and `state.id` is still used.
### How It Works

View File

@@ -146,14 +146,15 @@ class ProductionFlow(Flow[AppState]):
# ...
```
By default, `@persist` resumes a flow when `kickoff(inputs={"id": <uuid>})` is provided, extending the same `flow_uuid` history. To **fork** a persisted flow into a new lineage, hydrating state from a previous run but writing under a new `state.id`, pass `restore_from_state_id`:
By default, `@persist` uses the auto-generated `state.id` as the key for saved state. If your application already has a natural identifier, for example a `conversation_id` that ties several runs to the same user session, pass it as `key` and the decorator will use that attribute as the flow UUID. A `ValueError` is raised if the given attribute is missing or falsy at save time.
```python
flow.kickoff(restore_from_state_id="<previous-run-state-id>")
@persist(key="conversation_id")
class ProductionFlow(Flow[AppState]):
# AppState must expose conversation_id; resuming the session reloads the previous state
...
```
The new run receives a new `state.id` (auto-generated, or `inputs["id"]` if pinned), so its `@persist` writes do not extend the source's history. Combining with `from_checkpoint` raises a `ValueError`; pick a single hydration source.
## Summary
- **Start with a Flow.**

View File

@@ -133,7 +133,7 @@ Aqui está uma lista das ferramentas disponíveis e suas descrições:
| **DirectorySearchTool** | A RAG tool for searching within directories, useful for navigating file systems. |
| **DOCXSearchTool** | A RAG tool aimed at searching within DOCX documents, ideal for processing Word files. |
| **DirectoryReadTool** | Facilitates reading and processing of directory structures and their contents. |
| **ExaSearchTool** | A tool designed for performing exhaustive searches across various data sources. |
| **EXASearchTool** | A tool designed for performing exhaustive searches across various data sources. |
| **FileReadTool** | Enables reading and extracting data from files, supporting various file formats. |
| **FirecrawlSearchTool** | A tool to search webpages using Firecrawl and return the results. |
| **FirecrawlCrawlWebsiteTool** | A tool for crawling webpages using Firecrawl. |

View File

@@ -167,47 +167,32 @@ Para mais controle, você pode aplicar `@persist()` em métodos específicos:
# (code not translated)
```
#### Forking Persisted State
#### Using a Custom Persistence Key
`@persist` supports two distinct hydration modes on `kickoff` / `kickoff_async`. Use **resume** (`inputs["id"]`) to continue the same lineage, and **fork** (`restore_from_state_id`) to start a new lineage from a snapshot:
| | `state.id` after kickoff | `@persist` writes go to |
|---|---|---|
| `inputs["id"]` (resume) | provided id | provided id (extends history) |
| `restore_from_state_id` (fork) | new id, or `inputs["id"]` if pinned | new id (source preserved) |
By default, `@persist()` uses the auto-generated `state.id` as the key for persisted state. When your domain already has a natural identifier, such as a `conversation_id` tying several flow runs to the same user session, pass it as the `key` argument and `@persist` will use that attribute as the flow UUID instead of `id`:
```python
from crewai.flow.flow import Flow, start
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class CounterState(BaseModel):
id: str = ""
counter: int = 0
class ConversationState(BaseModel):
conversation_id: str
history: list[str] = []
@persist
class CounterFlow(Flow[CounterState]):
@persist(key="conversation_id")
class ConversationFlow(Flow[ConversationState]):
@start()
def step(self):
self.state.counter += 1
def greet(self):
self.state.history.append("hello")
return self.state.history
# Run 1: fresh state, counter 0 -> 1
flow_1 = CounterFlow()
flow_1.kickoff()
# Fork: hydrate from flow_1's latest snapshot, but write under a NEW state.id
flow_2 = CounterFlow()
flow_2.kickoff(restore_from_state_id=flow_1.state.id)
# flow_2 starts with counter=1 (hydrated), and step() increments it to 2.
# flow_1's flow_uuid history is unchanged.
# A second run with the same conversation_id reloads the previous state
flow = ConversationFlow(conversation_id="user-42")
flow.kickoff()
```
Behavior notes:
- `restore_from_state_id` not found in persistence → kickoff silently falls back to the default behavior (mirroring the existing not-found behavior of `inputs["id"]`). No exception is raised.
- Combining `restore_from_state_id` with `from_checkpoint` raises a `ValueError`; they target different state systems (`@persist` vs. Checkpointing) and cannot be combined.
- `restore_from_state_id=None` (the default) is byte-identical to a kickoff without the parameter.
- Pinning `inputs["id"]` during a fork means the new run shares a persistence key with another flow; you usually want `restore_from_state_id` alone.
For dict-based states `@persist` reads `state[key]`, and for Pydantic / object states it reads `getattr(state, key)`. If the given attribute is missing or falsy when the state is saved, `@persist` raises a `ValueError` such as `Flow state is missing required persistence key 'conversation_id'`, so the failure surfaces immediately instead of silently dropping persisted data. Calling `@persist()` without `key` keeps the original behavior of using `state.id`.
## Advanced State Patterns

View File

@@ -1,15 +1,15 @@
---
title: EXA Search Web Loader
description: The `ExaSearchTool` is designed to perform a semantic search for a specified query from a text's content across the internet.
description: The `EXASearchTool` is designed to perform a semantic search for a specified query from a text's content across the internet.
icon: globe-pointer
mode: "wide"
---
# `ExaSearchTool`
# `EXASearchTool`
## Description
The ExaSearchTool is designed to perform a semantic search for a specified query from a text's content across the internet.
The EXASearchTool is designed to perform a semantic search for a specified query from a text's content across the internet.
It uses the [exa.ai](https://exa.ai/) API to fetch and display the most relevant search results based on the user-provided query.
## Installation
@@ -25,15 +25,15 @@ pip install 'crewai[tools]'
The following example demonstrates how to initialize the tool and execute a search with a given query:
```python Code
from crewai_tools import ExaSearchTool
from crewai_tools import EXASearchTool
# Initialize the tool for internet searching capabilities
tool = ExaSearchTool()
tool = EXASearchTool()
```
## Steps to Get Started
To use the ExaSearchTool effectively, follow these steps:
To use the EXASearchTool effectively, follow these steps:
<Steps>
<Step title="Package Installation">
@@ -47,35 +47,7 @@ Para usar o ExaSearchTool de forma eficaz, siga estas etapas:
</Step>
</Steps>
## Using Exa via MCP
You can also connect your agent to the MCP server hosted by Exa. Pass your API key in the `x-api-key` header:
```python
from crewai import Agent
from crewai.mcp import MCPServerHTTP
agent = Agent(
role="Research Analyst",
goal="Find and analyze information on the web",
backstory="Expert researcher with access to Exa's tools",
mcps=[
MCPServerHTTP(
url="https://mcp.exa.ai/mcp",
headers={"x-api-key": "YOUR_EXA_API_KEY"},
),
],
)
```
Get your API key from the [Exa dashboard](https://dashboard.exa.ai/api-keys). For more information on MCP in CrewAI, see the [MCP overview](/pt-BR/mcp/overview).
## Conclusion
By integrating the `ExaSearchTool` into Python projects, users gain the ability to conduct relevant, real-time searches across the internet directly from their applications.
Following the provided setup and usage guidance makes incorporating this tool into projects simple and straightforward.
## Resources
- [Exa documentation](https://exa.ai/docs)
- [Exa dashboard — manage API keys and usage](https://dashboard.exa.ai)
By integrating the `EXASearchTool` into Python projects, users gain the ability to conduct relevant, real-time searches across the internet directly from their applications.
Following the provided setup and usage guidance makes incorporating this tool into projects simple and straightforward.

View File

@@ -152,4 +152,4 @@ __all__ = [
"wrap_file_source",
]
__version__ = "1.14.5a1"
__version__ = "1.14.4a1"

View File

@@ -26,7 +26,7 @@ CrewAI provides an extensive collection of powerful tools ready to enhance your
- **Web Scraping**: `ScrapeWebsiteTool`, `SeleniumScrapingTool`
- **Database Integrations**: `MySQLSearchTool`
- **Vector Database Integrations**: `MongoDBVectorSearchTool`, `QdrantVectorSearchTool`, `WeaviateVectorSearchTool`
- **API Integrations**: `SerperApiTool`, `ExaSearchTool`
- **API Integrations**: `SerperApiTool`, `EXASearchTool`
- **AI-powered Tools**: `DallETool`, `VisionTool`, `StagehandTool`
And many more robust tools to simplify your agent integrations.

View File

@@ -10,7 +10,7 @@ requires-python = ">=3.10, <3.14"
dependencies = [
"pytube~=15.0.0",
"requests>=2.33.0,<3",
"crewai==1.14.5a1",
"crewai==1.14.4a1",
"tiktoken>=0.8.0,<0.13",
"beautifulsoup4~=4.13.4",
"python-docx~=1.2.0",

View File

@@ -76,7 +76,7 @@ from crewai_tools.tools.e2b_sandbox_tool import (
E2BFileTool,
E2BPythonTool,
)
from crewai_tools.tools.exa_tools.exa_search_tool import EXASearchTool, ExaSearchTool
from crewai_tools.tools.exa_tools.exa_search_tool import EXASearchTool
from crewai_tools.tools.file_read_tool.file_read_tool import FileReadTool
from crewai_tools.tools.file_writer_tool.file_writer_tool import FileWriterTool
from crewai_tools.tools.files_compressor_tool.files_compressor_tool import (
@@ -258,7 +258,6 @@ __all__ = [
"E2BPythonTool",
"EXASearchTool",
"EnterpriseActionTool",
"ExaSearchTool",
"FileCompressorTool",
"FileReadTool",
"FileWriterTool",
@@ -330,4 +329,4 @@ __all__ = [
"ZapierActionTools",
]
__version__ = "1.14.5a1"
__version__ = "1.14.4a1"

View File

@@ -65,7 +65,7 @@ from crewai_tools.tools.e2b_sandbox_tool import (
E2BFileTool,
E2BPythonTool,
)
from crewai_tools.tools.exa_tools.exa_search_tool import EXASearchTool, ExaSearchTool
from crewai_tools.tools.exa_tools.exa_search_tool import EXASearchTool
from crewai_tools.tools.file_read_tool.file_read_tool import FileReadTool
from crewai_tools.tools.file_writer_tool.file_writer_tool import FileWriterTool
from crewai_tools.tools.files_compressor_tool.files_compressor_tool import (
@@ -242,7 +242,6 @@ __all__ = [
"E2BFileTool",
"E2BPythonTool",
"EXASearchTool",
"ExaSearchTool",
"FileCompressorTool",
"FileReadTool",
"FileWriterTool",

View File

@@ -26,8 +26,6 @@ class CrewAIPlatformActionTool(BaseTool):
description: str,
action_name: str,
action_schema: dict[str, Any],
provider: str | None = None,
provider_id: str | None = None,
):
parameters = action_schema.get("function", {}).get("parameters", {})
@@ -50,12 +48,6 @@ class CrewAIPlatformActionTool(BaseTool):
)
self.action_name = action_name
self.action_schema = action_schema
# Private metadata used by enterprise tooling to recover the canonical
# tool_id (e.g. "crewai_oauth:google_drive" or "paragon:<uuid>") for
# ACP rule evaluation. Set by CrewaiPlatformToolBuilder; remains None
# for direct construction.
self._provider = provider
self._provider_id = provider_id
def _run(self, **kwargs: Any) -> str:
try:

View File

@@ -75,7 +75,6 @@ class CrewaiPlatformToolBuilder:
),
"parameters": action.get("parameters", {}),
"app": app,
"provider": action.get("provider"),
}
}
self._actions_schema[action_name] = action_schema
@@ -92,8 +91,6 @@ class CrewaiPlatformToolBuilder:
description=description,
action_name=action_name,
action_schema=action_schema,
provider=function_details.get("provider"),
provider_id=function_details.get("app"),
)
tools.append(tool)

View File

@@ -1,7 +1,7 @@
# ExaSearchTool Documentation
# EXASearchTool Documentation
## Description
This tool lets CrewAI agents search the web using [Exa](https://exa.ai/), the fastest and most accurate web search API. By default the tool returns token-efficient highlights of the most relevant results for any query; you can also opt in to full page content.
This tool is designed to perform a semantic search for a specified query from a text's content across the internet. It utilizes the `https://exa.ai/` API to fetch and display the most relevant search results based on the query provided by the user.
## Installation
To incorporate this tool into your project, follow the installation instructions below:
@@ -10,23 +10,21 @@ uv add crewai[tools] exa_py
```
## Example
The following example demonstrates how to initialize the tool and run a search:
The following example demonstrates how to initialize the tool and execute a search with a given query:
```python
from crewai_tools import ExaSearchTool
from crewai_tools import EXASearchTool
# Default: results with token-efficient highlights
tool = ExaSearchTool(api_key="your_api_key", highlights=True)
# Initialize the tool for internet searching capabilities
tool = EXASearchTool(api_key="your_api_key")
```
## Steps to Get Started
To effectively use the `ExaSearchTool`, follow these steps:
To effectively use the `EXASearchTool`, follow these steps:
1. **Package Installation**: Confirm that the `crewai[tools]` package is installed in your Python environment.
2. **API Key Acquisition**: Get an Exa API key from the [Exa dashboard](https://dashboard.exa.ai/api-keys).
3. **Environment Configuration**: Store your API key in an environment variable named `EXA_API_KEY` so the tool can pick it up automatically.
2. **API Key Acquisition**: Acquire a `https://exa.ai/` API key by registering for a free account at `https://exa.ai/`.
3. **Environment Configuration**: Store your obtained API key in an environment variable named `EXA_API_KEY` to facilitate its use by the tool.
For details on choosing between highlights and full content, see the [Exa search best practices](https://exa.ai/docs/reference/search-best-practices).
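The way the `highlights`, `content`, and `summary` options combine can be sketched with a small helper. The function name is hypothetical and for illustration only; it mirrors how the tool builds its contents kwargs and decides between a plain search and `search_and_contents`:

```python
def build_contents_kwargs(content=False, highlights=True, summary=False):
    """Sketch of the tool's option handling (not a public API).

    An empty result means the tool falls back to a plain search with
    no page contents attached.
    """
    kwargs = {}
    if content:
        kwargs["text"] = content        # full page text (opt-in)
    if highlights:
        kwargs["highlights"] = highlights  # token-efficient, on by default
    if summary:
        kwargs["summary"] = summary
    return kwargs


print(build_contents_kwargs())              # → {'highlights': True}
print(build_contents_kwargs(content=True))  # → {'text': True, 'highlights': True}
```

Passing a dict instead of `True` for any option forwards provider-specific settings through unchanged, which is why the values are stored rather than coerced to booleans.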
## Note
`EXASearchTool` is a deprecated alias for `ExaSearchTool`. Existing imports continue to work but emit a deprecation warning; please migrate to `ExaSearchTool`.
## Conclusion
By integrating the `EXASearchTool` into Python projects, users gain the ability to conduct real-time, relevant searches across the internet directly from their applications. By adhering to the setup and usage guidelines provided, incorporating this tool into projects is streamlined and straightforward.

View File

@@ -3,19 +3,12 @@ from __future__ import annotations
from builtins import type as type_
import os
from typing import Any, TypedDict
import warnings
from crewai.tools import BaseTool, EnvVar
from pydantic import BaseModel, ConfigDict, Field
from typing_extensions import Required
try:
from exa_py import Exa
except ImportError:
Exa = None # type: ignore[assignment,misc]
class SearchParams(TypedDict, total=False):
"""Parameters for Exa search API."""
@@ -25,7 +18,7 @@ class SearchParams(TypedDict, total=False):
include_domains: list[str]
class ExaBaseToolSchema(BaseModel):
class EXABaseToolSchema(BaseModel):
search_query: str = Field(
..., description="Mandatory search query you want to use to search the internet"
)
@@ -38,20 +31,14 @@ class ExaBaseToolSchema(BaseModel):
)
EXABaseToolSchema = ExaBaseToolSchema
class ExaSearchTool(BaseTool):
class EXASearchTool(BaseTool):
model_config = ConfigDict(arbitrary_types_allowed=True)
name: str = "ExaSearchTool"
description: str = (
"Search the web with Exa, the fastest and most accurate web search API."
)
args_schema: type_[BaseModel] = ExaBaseToolSchema
name: str = "EXASearchTool"
description: str = "Search the internet using Exa"
args_schema: type_[BaseModel] = EXABaseToolSchema
client: Any | None = None
content: bool | dict[str, Any] | None = False
summary: bool | dict[str, Any] | None = False
highlights: bool | dict[str, Any] | None = True
content: bool | None = False
summary: bool | None = False
type: str | None = "auto"
package_dependencies: list[str] = Field(default_factory=lambda: ["exa_py"])
api_key: str | None = Field(
@@ -81,17 +68,17 @@ class ExaSearchTool(BaseTool):
def __init__(
self,
content: bool | dict[str, Any] | None = False,
summary: bool | dict[str, Any] | None = False,
highlights: bool | dict[str, Any] | None = True,
content: bool | None = False,
summary: bool | None = False,
type: str | None = "auto",
**kwargs: Any,
) -> None:
super().__init__(
**kwargs,
)
global Exa
if Exa is None:
try:
from exa_py import Exa
except ImportError as e:
import click
if click.confirm(
@@ -101,13 +88,12 @@ class ExaSearchTool(BaseTool):
subprocess.run(["uv", "add", "exa_py"], check=True) # noqa: S607
from exa_py import Exa as _Exa
Exa = _Exa # type: ignore[misc]
# Re-import after installation
from exa_py import Exa
else:
raise ImportError(
"You are missing the 'exa_py' package. Please install it to use ExaSearchTool."
)
"You are missing the 'exa_py' package. Would you like to install it?"
) from e
client_kwargs: dict[str, str] = {}
if self.api_key:
@@ -115,10 +101,8 @@ class ExaSearchTool(BaseTool):
if self.base_url:
client_kwargs["base_url"] = self.base_url
self.client = Exa(**client_kwargs)
self.client.headers["x-exa-integration"] = "crewai"
self.content = content
self.summary = summary
self.highlights = highlights
self.type = type
def _run(
@@ -142,31 +126,10 @@ class ExaSearchTool(BaseTool):
if include_domains:
search_params["include_domains"] = include_domains
contents_kwargs: dict[str, Any] = {}
if self.content:
contents_kwargs["text"] = self.content
if self.highlights:
contents_kwargs["highlights"] = self.highlights
if self.summary:
contents_kwargs["summary"] = self.summary
if contents_kwargs:
return self.client.search_and_contents(
search_query, **contents_kwargs, **search_params
results = self.client.search_and_contents(
search_query, summary=self.summary, **search_params
)
return self.client.search(search_query, **search_params)
class EXASearchTool(ExaSearchTool):
"""Deprecated alias for :class:`ExaSearchTool`. Kept for backwards compatibility."""
name: str = "ExaSearchTool"
def __init__(self, *args: Any, **kwargs: Any) -> None:
warnings.warn(
"EXASearchTool is deprecated and will be removed in a future release; "
"use ExaSearchTool instead.",
DeprecationWarning,
stacklevel=2,
)
super().__init__(*args, **kwargs)
else:
results = self.client.search(search_query, **search_params)
return results
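The backwards-compatible alias pattern removed in this hunk can be sketched on its own. `ExaSearchTool` here is a minimal stand-in class, not the real tool:

```python
import warnings

class ExaSearchTool:
    """Minimal stand-in for the renamed tool."""
    def __init__(self, **kwargs: object) -> None:
        self.kwargs = kwargs

class EXASearchTool(ExaSearchTool):
    """Deprecated alias for ExaSearchTool, kept for backwards compatibility."""
    def __init__(self, *args: object, **kwargs: object) -> None:
        warnings.warn(
            "EXASearchTool is deprecated and will be removed in a future "
            "release; use ExaSearchTool instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)

# Old imports keep working: instantiation succeeds but surfaces a warning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    tool = EXASearchTool(api_key="test_api_key")
```

This is the behavior asserted by `test_exasearchtool_alias_is_deprecated`: the alias is an instance of the new class and emits a `DeprecationWarning` on construction.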

@@ -1,120 +0,0 @@
"""Tests for the _provider / _provider_id metadata attached to platform tools.
These attributes are private metadata used by enterprise tooling to recover
the canonical tool_id (e.g. ``crewai_oauth:google_drive`` or
``paragon:<uuid>``) for ACP rule evaluation.
"""
from unittest.mock import Mock, patch
from crewai_tools.tools.crewai_platform_tools import (
CrewAIPlatformActionTool,
CrewaiPlatformToolBuilder,
)
class TestActionToolProviderAttrs:
def setup_method(self):
self.action_schema = {
"function": {
"name": "test_action",
"parameters": {"type": "object", "properties": {}},
}
}
def test_defaults_to_none_when_not_provided(self):
tool = CrewAIPlatformActionTool(
description="x",
action_name="test_action",
action_schema=self.action_schema,
)
assert tool._provider is None
assert tool._provider_id is None
def test_stores_explicit_values(self):
tool = CrewAIPlatformActionTool(
description="x",
action_name="test_action",
action_schema=self.action_schema,
provider="crewai_oauth",
provider_id="google_drive",
)
assert tool._provider == "crewai_oauth"
assert tool._provider_id == "google_drive"
class TestBuilderProviderThreading:
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token"})
@patch(
"crewai_tools.tools.crewai_platform_tools.crewai_platform_tool_builder.requests.get"
)
def test_builder_threads_provider_and_app_into_each_tool(self, mock_get):
mock_api_response = {
"actions": {
"google_drive": [
{
"name": "create_file",
"description": "Create a file",
"parameters": {"type": "object", "properties": {}},
"provider": "crewai_oauth",
}
],
"1b5f2395-65a5-4da8-9b2f-c10eafc83a0b": [
{
"name": "send_invoice",
"description": "Send an invoice",
"parameters": {"type": "object", "properties": {}},
"provider": "paragon",
}
],
}
}
mock_response = Mock()
mock_response.raise_for_status.return_value = None
mock_response.json.return_value = mock_api_response
mock_get.return_value = mock_response
builder = CrewaiPlatformToolBuilder(
apps=["google_drive", "1b5f2395-65a5-4da8-9b2f-c10eafc83a0b"]
)
tools = builder.tools()
by_action = {tool.action_name: tool for tool in tools}
oauth_tool = by_action["create_file"]
assert oauth_tool._provider == "crewai_oauth"
assert oauth_tool._provider_id == "google_drive"
paragon_tool = by_action["send_invoice"]
assert paragon_tool._provider == "paragon"
assert paragon_tool._provider_id == "1b5f2395-65a5-4da8-9b2f-c10eafc83a0b"
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token"})
@patch(
"crewai_tools.tools.crewai_platform_tools.crewai_platform_tool_builder.requests.get"
)
def test_builder_handles_response_without_provider_field(self, mock_get):
# Older crewai-plus versions return actions without a "provider" key.
# The builder must remain compatible: provider_id is set, provider is None.
mock_api_response = {
"actions": {
"github": [
{
"name": "create_issue",
"description": "Create issue",
"parameters": {"type": "object", "properties": {}},
}
]
}
}
mock_response = Mock()
mock_response.raise_for_status.return_value = None
mock_response.json.return_value = mock_api_response
mock_get.return_value = mock_response
builder = CrewaiPlatformToolBuilder(apps=["github"])
tools = builder.tools()
assert len(tools) == 1
assert tools[0]._provider is None
assert tools[0]._provider_id == "github"

@@ -1,13 +1,13 @@
import os
from unittest.mock import MagicMock, patch
from unittest.mock import patch
from crewai_tools import EXASearchTool, ExaSearchTool
from crewai_tools import EXASearchTool
import pytest
@pytest.fixture
def exa_search_tool():
return ExaSearchTool(api_key="test_api_key")
return EXASearchTool(api_key="test_api_key")
@pytest.fixture(autouse=True)
@@ -22,12 +22,11 @@ def test_exa_search_tool_initialization():
"crewai_tools.tools.exa_tools.exa_search_tool.Exa"
) as mock_exa_class:
api_key = "test_api_key"
tool = ExaSearchTool(api_key=api_key)
tool = EXASearchTool(api_key=api_key)
assert tool.api_key == api_key
assert tool.content is False
assert tool.summary is False
assert tool.highlights is True
assert tool.type == "auto"
mock_exa_class.assert_called_once_with(api_key=api_key)
@@ -37,7 +36,7 @@ def test_exa_search_tool_initialization_with_env(mock_exa_api_key):
with patch(
"crewai_tools.tools.exa_tools.exa_search_tool.Exa"
) as mock_exa_class:
ExaSearchTool()
EXASearchTool()
mock_exa_class.assert_called_once_with(api_key="test_key_from_env")
@@ -48,13 +47,12 @@ def test_exa_search_tool_initialization_with_base_url():
) as mock_exa_class:
api_key = "test_api_key"
base_url = "https://custom.exa.api.com"
tool = ExaSearchTool(api_key=api_key, base_url=base_url)
tool = EXASearchTool(api_key=api_key, base_url=base_url)
assert tool.api_key == api_key
assert tool.base_url == base_url
assert tool.content is False
assert tool.summary is False
assert tool.highlights is True
assert tool.type == "auto"
mock_exa_class.assert_called_once_with(api_key=api_key, base_url=base_url)
@@ -69,7 +67,7 @@ def test_exa_search_tool_initialization_with_env_base_url(
mock_exa_api_key, mock_exa_base_url
):
with patch("crewai_tools.tools.exa_tools.exa_search_tool.Exa") as mock_exa_class:
ExaSearchTool()
EXASearchTool()
mock_exa_class.assert_called_once_with(
api_key="test_key_from_env", base_url="https://env.exa.api.com"
)
@@ -81,33 +79,8 @@ def test_exa_search_tool_initialization_without_base_url():
"crewai_tools.tools.exa_tools.exa_search_tool.Exa"
) as mock_exa_class:
api_key = "test_api_key"
tool = ExaSearchTool(api_key=api_key)
tool = EXASearchTool(api_key=api_key)
assert tool.api_key == api_key
assert tool.base_url is None
mock_exa_class.assert_called_once_with(api_key=api_key)
def test_exa_search_tool_highlights_uses_search_and_contents():
with patch("crewai_tools.tools.exa_tools.exa_search_tool.Exa") as mock_exa_class:
mock_client = MagicMock()
mock_exa_class.return_value = mock_client
tool = ExaSearchTool(
api_key="test_api_key", highlights={"max_characters": 4000}
)
tool._run(search_query="hello world")
mock_client.search_and_contents.assert_called_once_with(
"hello world",
highlights={"max_characters": 4000},
type="auto",
)
mock_client.search.assert_not_called()
def test_exasearchtool_alias_is_deprecated():
with patch("crewai_tools.tools.exa_tools.exa_search_tool.Exa"):
with pytest.warns(DeprecationWarning, match="ExaSearchTool"):
tool = EXASearchTool(api_key="test_api_key")
assert isinstance(tool, ExaSearchTool)

@@ -9397,7 +9397,7 @@
}
},
{
"description": "Search the web with Exa, the fastest and most accurate web search API.",
"description": "Search the internet using Exa",
"env_vars": [
{
"default": null,
@@ -9412,7 +9412,7 @@
"required": false
}
],
"humanized_name": "ExaSearchTool",
"humanized_name": "EXASearchTool",
"init_params_schema": {
"$defs": {
"EnvVar": {
@@ -9451,7 +9451,6 @@
"type": "object"
}
},
"description": "Deprecated alias for :class:`ExaSearchTool`. Kept for backwards compatibility.",
"properties": {
"api_key": {
"anyOf": [
@@ -9494,10 +9493,6 @@
{
"type": "boolean"
},
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
@@ -9505,31 +9500,11 @@
"default": false,
"title": "Content"
},
"highlights": {
"anyOf": [
{
"type": "boolean"
},
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
],
"default": true,
"title": "Highlights"
},
"summary": {
"anyOf": [
{
"type": "boolean"
},
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
@@ -9611,225 +9586,7 @@
"required": [
"search_query"
],
"title": "ExaBaseToolSchema",
"type": "object"
}
},
{
"description": "Search the web with Exa, the fastest and most accurate web search API.",
"env_vars": [
{
"default": null,
"description": "API key for Exa services",
"name": "EXA_API_KEY",
"required": false
},
{
"default": null,
"description": "API url for the Exa services",
"name": "EXA_BASE_URL",
"required": false
}
],
"humanized_name": "ExaSearchTool",
"init_params_schema": {
"$defs": {
"EnvVar": {
"properties": {
"default": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Default"
},
"description": {
"title": "Description",
"type": "string"
},
"name": {
"title": "Name",
"type": "string"
},
"required": {
"default": true,
"title": "Required",
"type": "boolean"
}
},
"required": [
"name",
"description"
],
"title": "EnvVar",
"type": "object"
}
},
"properties": {
"api_key": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"description": "API key for Exa services",
"required": false,
"title": "Api Key"
},
"base_url": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"description": "API server url",
"required": false,
"title": "Base Url"
},
"client": {
"anyOf": [
{},
{
"type": "null"
}
],
"default": null,
"title": "Client"
},
"content": {
"anyOf": [
{
"type": "boolean"
},
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
],
"default": false,
"title": "Content"
},
"highlights": {
"anyOf": [
{
"type": "boolean"
},
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
],
"default": true,
"title": "Highlights"
},
"summary": {
"anyOf": [
{
"type": "boolean"
},
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
],
"default": false,
"title": "Summary"
},
"type": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": "auto",
"title": "Type"
}
},
"required": [],
"title": "ExaSearchTool",
"type": "object"
},
"name": "ExaSearchTool",
"package_dependencies": [
"exa_py"
],
"run_params_schema": {
"properties": {
"end_published_date": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "End date for the search",
"title": "End Published Date"
},
"include_domains": {
"anyOf": [
{
"items": {
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "List of domains to include in the search",
"title": "Include Domains"
},
"search_query": {
"description": "Mandatory search query you want to use to search the internet",
"title": "Search Query",
"type": "string"
},
"start_published_date": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Start date for the search",
"title": "Start Published Date"
}
},
"required": [
"search_query"
],
"title": "ExaBaseToolSchema",
"title": "EXABaseToolSchema",
"type": "object"
}
},

@@ -55,7 +55,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
"crewai-tools==1.14.5a1",
"crewai-tools==1.14.4a1",
]
embeddings = [
"tiktoken>=0.8.0,<0.13"

@@ -48,7 +48,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
__version__ = "1.14.5a1"
__version__ = "1.14.4a1"
_LAZY_IMPORTS: dict[str, tuple[str, str]] = {
"Memory": ("crewai.memory.unified_memory", "Memory"),

@@ -774,7 +774,7 @@ def calculator(expression: str) -> str:
```
### Built-in Tools (install with `uv add crewai-tools`)
Web/Search: SerperDevTool, ScrapeWebsiteTool, WebsiteSearchTool, ExaSearchTool, FirecrawlSearchTool
Web/Search: SerperDevTool, ScrapeWebsiteTool, WebsiteSearchTool, EXASearchTool, FirecrawlSearchTool
Documents: FileReadTool, DirectoryReadTool, PDFSearchTool, DOCXSearchTool, CSVSearchTool, JSONSearchTool, XMLSearchTool, MDXSearchTool
Code: CodeInterpreterTool, CodeDocsSearchTool, GithubSearchTool
Media: DALL-E Tool, YoutubeChannelSearchTool, YoutubeVideoSearchTool

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.5a1"
"crewai[tools]==1.14.4a1"
]
[project.scripts]

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.5a1"
"crewai[tools]==1.14.4a1"
]
[project.scripts]

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.5a1"
"crewai[tools]==1.14.4a1"
]
[tool.crewai]

@@ -2272,17 +2272,13 @@ class Crew(FlowTrackable, BaseModel):
if should_suppress_tracing_messages():
return
# Don't nag users who have explicitly declined tracing
if has_user_declined_tracing():
return
console = Console()
if has_user_declined_tracing():
message = """Info: Tracing is disabled.
To enable tracing, do any one of these:
• Set tracing=True in your Crew code
• Set CREWAI_TRACING_ENABLED=true in your project's .env file
• Run: crewai traces enable"""
else:
message = """Info: Tracing is disabled.
message = """Info: Tracing is disabled.
To enable tracing, do any one of these:
• Set tracing=True in your Crew code

@@ -108,13 +108,6 @@ from crewai.events.types.reasoning_events import (
AgentReasoningFailedEvent,
AgentReasoningStartedEvent,
)
from crewai.events.types.skill_events import (
SkillActivatedEvent,
SkillDiscoveryCompletedEvent,
SkillDiscoveryStartedEvent,
SkillLoadFailedEvent,
SkillLoadedEvent,
)
from crewai.events.types.system_events import SignalEvent, on_signal
from crewai.events.types.task_events import (
TaskCompletedEvent,
@@ -537,30 +530,6 @@ class TraceCollectionListener(BaseEventListener):
) -> None:
self._handle_action_event("knowledge_query_failed", source, event)
@event_bus.on(SkillDiscoveryStartedEvent)
def on_skill_discovery_started(
source: Any, event: SkillDiscoveryStartedEvent
) -> None:
self._handle_action_event("skill_discovery_started", source, event)
@event_bus.on(SkillDiscoveryCompletedEvent)
def on_skill_discovery_completed(
source: Any, event: SkillDiscoveryCompletedEvent
) -> None:
self._handle_action_event("skill_discovery_completed", source, event)
@event_bus.on(SkillLoadedEvent)
def on_skill_loaded(source: Any, event: SkillLoadedEvent) -> None:
self._handle_action_event("skill_loaded", source, event)
@event_bus.on(SkillActivatedEvent)
def on_skill_activated(source: Any, event: SkillActivatedEvent) -> None:
self._handle_action_event("skill_activated", source, event)
@event_bus.on(SkillLoadFailedEvent)
def on_skill_load_failed(source: Any, event: SkillLoadFailedEvent) -> None:
self._handle_action_event("skill_load_failed", source, event)
def _register_a2a_event_handlers(self, event_bus: CrewAIEventsBus) -> None:
"""Register handlers for A2A (Agent-to-Agent) events."""
@@ -899,17 +868,13 @@ class TraceCollectionListener(BaseEventListener):
if should_suppress_tracing_messages():
return
# Don't nag users who have explicitly declined tracing
if has_user_declined_tracing():
return
console = Console()
if has_user_declined_tracing():
message = """Info: Tracing is disabled.
To enable tracing, do any one of these:
• Set tracing=True in your Crew/Flow code
• Set CREWAI_TRACING_ENABLED=true in your project's .env file
• Run: crewai traces enable"""
else:
message = """Info: Tracing is disabled.
message = """Info: Tracing is disabled.
To enable tracing, do any one of these:
• Set tracing=True in your Crew/Flow code

@@ -53,10 +53,19 @@ def set_suppress_tracing_messages(suppress: bool) -> object:
def should_suppress_tracing_messages() -> bool:
"""Check if tracing messages should be suppressed.
Checks the context variable first, then falls back to the
CREWAI_SUPPRESS_TRACING_MESSAGES environment variable.
Returns:
True if messages should be suppressed, False otherwise.
"""
return _suppress_tracing_messages.get()
if _suppress_tracing_messages.get():
return True
return os.getenv("CREWAI_SUPPRESS_TRACING_MESSAGES", "false").lower() in (
"true",
"1",
"yes",
)
def should_enable_tracing(*, override: bool | None = None) -> bool:

@@ -145,16 +145,12 @@ To update, run: uv sync --upgrade-package crewai"""
if listener and listener.first_time_handler.is_first_time:
return
if not is_tracing_enabled_in_context():
if has_user_declined_tracing():
message = """Info: Tracing is disabled.
To enable tracing, do any one of these:
• Set tracing=True in your Crew/Flow code
• Set CREWAI_TRACING_ENABLED=true in your project's .env file
• Run: crewai traces enable"""
else:
message = """Info: Tracing is disabled.
# Don't nag users who have explicitly declined tracing
if has_user_declined_tracing():
return
if not is_tracing_enabled_in_context():
message = """Info: Tracing is disabled.
To enable tracing, do any one of these:
• Set tracing=True in your Crew/Flow code

@@ -1074,6 +1074,7 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
_human_feedback_method_outputs: dict[str, Any] = PrivateAttr(default_factory=dict)
_input_history: list[InputHistoryEntry] = PrivateAttr(default_factory=list)
_state: Any = PrivateAttr(default=None)
_execution_id: str = PrivateAttr(default_factory=lambda: str(uuid4()))
def __class_getitem__(cls: type[Flow[T]], item: type[T]) -> type[Flow[T]]: # type: ignore[override]
class _FlowGeneric(cls): # type: ignore[valid-type,misc]
@@ -1864,6 +1865,27 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
except (AttributeError, TypeError):
return "" # Safely handle any unexpected attribute access issues
@property
def execution_id(self) -> str:
"""Stable identifier for this flow execution.
Separate from ``flow_id`` / ``state.id``, which consumers may
override via ``kickoff(inputs={"id": ...})`` to resume a persisted
flow. ``execution_id`` is never affected by ``inputs`` and stays
stable for the lifetime of a single run, so it is the correct key
for telemetry, tracing, and any external correlation that must
uniquely identify a single execution even when callers pass an
``id`` in ``inputs``.
Defaults to a fresh ``uuid4`` per ``Flow`` instance; assign to
override when an outer system already has an execution identity.
"""
return self._execution_id
@execution_id.setter
def execution_id(self, value: str) -> None:
self._execution_id = value
def _initialize_state(self, inputs: dict[str, Any]) -> None:
"""Initialize or update flow state with new inputs.
@@ -2032,7 +2054,6 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
inputs: dict[str, Any] | None = None,
input_files: dict[str, FileInput] | None = None,
from_checkpoint: CheckpointConfig | None = None,
restore_from_state_id: str | None = None,
) -> Any | FlowStreamingOutput:
"""Start the flow execution in a synchronous context.
@@ -2044,24 +2065,10 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
input_files: Optional dict of named file inputs for the flow.
from_checkpoint: Optional checkpoint config. If ``restore_from``
is set, the flow resumes from that checkpoint.
restore_from_state_id: Optional UUID of a previously-persisted flow
whose latest snapshot should hydrate this run's state. The new
run is assigned a fresh ``state.id`` (or ``inputs["id"]`` if
pinned), so its ``@persist`` writes land under a separate
persistence key and the source flow's history is preserved.
If the referenced state is not found, the kickoff falls back
silently to baseline behavior. Cannot be combined with
``from_checkpoint``; passing both raises ``ValueError``.
Returns:
The final output from the flow or FlowStreamingOutput if streaming.
"""
if from_checkpoint is not None and restore_from_state_id is not None:
raise ValueError(
"Cannot combine `from_checkpoint` and `restore_from_state_id`. "
"These parameters target different state systems "
"(Checkpointing and @persist) and cannot be used together."
)
restored = apply_checkpoint(self, from_checkpoint)
if restored is not None:
return restored.kickoff(inputs=inputs, input_files=input_files)
@@ -2083,11 +2090,7 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
def run_flow() -> None:
try:
self.stream = False
result = self.kickoff(
inputs=inputs,
input_files=input_files,
restore_from_state_id=restore_from_state_id,
)
result = self.kickoff(inputs=inputs, input_files=input_files)
result_holder.append(result)
except Exception as e:
# HumanFeedbackPending is expected control flow, not an error
@@ -2110,11 +2113,7 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
return streaming_output
async def _run_flow() -> Any:
return await self.kickoff_async(
inputs,
input_files,
restore_from_state_id=restore_from_state_id,
)
return await self.kickoff_async(inputs, input_files)
try:
asyncio.get_running_loop()
@@ -2129,7 +2128,6 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
inputs: dict[str, Any] | None = None,
input_files: dict[str, FileInput] | None = None,
from_checkpoint: CheckpointConfig | None = None,
restore_from_state_id: str | None = None,
) -> Any | FlowStreamingOutput:
"""Start the flow execution asynchronously.
@@ -2143,23 +2141,10 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
input_files: Optional dict of named file inputs for the flow.
from_checkpoint: Optional checkpoint config. If ``restore_from``
is set, the flow resumes from that checkpoint.
restore_from_state_id: Optional UUID of a previously-persisted flow
whose latest snapshot should hydrate this run's state. The new
run is assigned a fresh ``state.id`` (or ``inputs["id"]`` if
pinned), so subsequent ``@persist`` writes land under a
separate persistence key. If the referenced state is not
found, falls back silently to baseline. Cannot be combined
with ``from_checkpoint``; passing both raises ``ValueError``.
Returns:
The final output from the flow, which is the result of the last executed method.
"""
if from_checkpoint is not None and restore_from_state_id is not None:
raise ValueError(
"Cannot combine `from_checkpoint` and `restore_from_state_id`. "
"These parameters target different state systems "
"(Checkpointing and @persist) and cannot be used together."
)
restored = apply_checkpoint(self, from_checkpoint)
if restored is not None:
return await restored.kickoff_async(inputs=inputs, input_files=input_files)
@@ -2182,9 +2167,7 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
try:
self.stream = False
result = await self.kickoff_async(
inputs=inputs,
input_files=input_files,
restore_from_state_id=restore_from_state_id,
inputs=inputs, input_files=input_files
)
result_holder.append(result)
except Exception as e:
@@ -2216,9 +2199,9 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
flow_id_token = None
request_id_token = None
if current_flow_id.get() is None:
flow_id_token = current_flow_id.set(self.flow_id)
flow_id_token = current_flow_id.set(self.execution_id)
if current_flow_request_id.get() is None:
request_id_token = current_flow_request_id.set(self.flow_id)
request_id_token = current_flow_request_id.set(self.execution_id)
try:
# Reset flow state for fresh execution unless restoring from persistence
@@ -2241,54 +2224,16 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
if self._completed_methods:
self._is_execution_resuming = True
# Fork hydration: when restore_from_state_id is set and persistence is
# available, hydrate self._state from the source UUID's latest snapshot
# and reassign state.id to a fresh value so subsequent @persist writes
# don't extend the source flow's history. If the source state is not
# found, fall through silently to the existing inputs handling.
fork_succeeded = False
if restore_from_state_id is not None and self.persistence is not None:
stored_state = self.persistence.load_state(restore_from_state_id)
if stored_state:
self._log_flow_event(
f"Forking flow state from UUID: {restore_from_state_id}"
)
self._restore_state(stored_state)
# Pin to inputs["id"] when provided, otherwise mint a fresh
# UUID. NOTE: pinning inputs.id while forking shares a
# persistence key with another flow — usually you want only
# restore_from_state_id.
new_state_id = (inputs.get("id") if inputs else None) or str(
uuid4()
)
if isinstance(self._state, dict):
self._state["id"] = new_state_id
elif isinstance(self._state, BaseModel):
setattr(self._state, "id", new_state_id) # noqa: B010
fork_succeeded = True
else:
self._log_flow_event(
"No flow state found for restore_from_state_id: "
f"{restore_from_state_id}; proceeding without hydration",
color="yellow",
)
if inputs:
# Override the id in the state if it exists in inputs.
# Skip when the fork already assigned state.id above.
if "id" in inputs and not fork_succeeded:
# Override the id in the state if it exists in inputs
if "id" in inputs:
if isinstance(self._state, dict):
self._state["id"] = inputs["id"]
elif isinstance(self._state, BaseModel):
setattr(self._state, "id", inputs["id"]) # noqa: B010
# If persistence is enabled, attempt to restore the stored state using the provided id.
# Skip when the fork already restored self._state above.
if (
"id" in inputs
and self.persistence is not None
and not fork_succeeded
):
if "id" in inputs and self.persistence is not None:
restore_uuid = inputs["id"]
stored_state = self.persistence.load_state(restore_uuid)
if stored_state:
@@ -2471,7 +2416,6 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
inputs: dict[str, Any] | None = None,
input_files: dict[str, FileInput] | None = None,
from_checkpoint: CheckpointConfig | None = None,
restore_from_state_id: str | None = None,
) -> Any | FlowStreamingOutput:
"""Native async method to start the flow execution. Alias for kickoff_async.
@@ -2480,19 +2424,11 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
input_files: Optional dict of named file inputs for the flow.
from_checkpoint: Optional checkpoint config. If ``restore_from``
is set, the flow resumes from that checkpoint.
restore_from_state_id: Optional UUID of a previously-persisted flow
whose latest snapshot should hydrate this run's state. See
``kickoff_async`` for full semantics.
Returns:
The final output from the flow, which is the result of the last executed method.
"""
return await self.kickoff_async(
inputs,
input_files,
from_checkpoint,
restore_from_state_id=restore_from_state_id,
)
return await self.kickoff_async(inputs, input_files, from_checkpoint)
async def _replay_recorded_events(self) -> None:
"""Dispatch recorded ``MethodExecution*`` events from the event record."""
@@ -3610,17 +3546,13 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
if should_suppress_tracing_messages():
return
# Don't nag users who have explicitly declined tracing
if has_user_declined_tracing():
return
console = Console()
if has_user_declined_tracing():
message = """Info: Tracing is disabled.
To enable tracing, do any one of these:
• Set tracing=True in your Flow code
• Set CREWAI_TRACING_ENABLED=true in your project's .env file
• Run: crewai traces enable"""
else:
message = """Info: Tracing is disabled.
message = """Info: Tracing is disabled.
To enable tracing, do any one of these:
• Set tracing=True in your Flow code

@@ -50,6 +50,7 @@ LOG_MESSAGES: Final[dict[str, str]] = {
"save_error": "Failed to persist state for method {}: {}",
"state_missing": "Flow instance has no state",
"id_missing": "Flow state must have an 'id' field for persistence",
"key_missing": "Flow state is missing required persistence key '{}'",
}
@@ -63,6 +64,7 @@ class PersistenceDecorator:
method_name: str,
persistence_instance: FlowPersistence,
verbose: bool = False,
key: str | None = None,
) -> None:
"""Persist flow state with proper error handling and logging.
@@ -74,9 +76,12 @@ class PersistenceDecorator:
method_name: Name of the method that triggered persistence
persistence_instance: The persistence backend to use
verbose: Whether to log persistence operations
key: Optional state attribute/key to use as the persistence key.
When None, falls back to ``state.id``.
Raises:
ValueError: If flow has no state or state lacks an ID
ValueError: If flow has no state, state lacks an ID, or the
requested ``key`` is missing or falsy on state.
RuntimeError: If state persistence fails
AttributeError: If flow instance lacks required state attributes
"""
@@ -85,19 +90,22 @@ class PersistenceDecorator:
if state is None:
raise ValueError("Flow instance has no state")
lookup_key = key if key is not None else "id"
flow_uuid: str | None = None
if isinstance(state, dict):
flow_uuid = state.get("id")
flow_uuid = state.get(lookup_key)
elif hasattr(state, "_unwrap"):
unwrapped = state._unwrap()
if isinstance(unwrapped, dict):
flow_uuid = unwrapped.get("id")
flow_uuid = unwrapped.get(lookup_key)
else:
flow_uuid = getattr(unwrapped, "id", None)
elif isinstance(state, BaseModel) or hasattr(state, "id"):
flow_uuid = getattr(state, "id", None)
flow_uuid = getattr(unwrapped, lookup_key, None)
elif isinstance(state, BaseModel) or hasattr(state, lookup_key):
flow_uuid = getattr(state, lookup_key, None)
if not flow_uuid:
if key is not None:
raise ValueError(LOG_MESSAGES["key_missing"].format(key))
raise ValueError("Flow state must have an 'id' field for persistence")
# Log state saving only if verbose is True
@@ -127,7 +135,7 @@ class PersistenceDecorator:
logger.error(error_msg)
raise ValueError(error_msg) from e
except (TypeError, ValueError) as e:
error_msg = LOG_MESSAGES["id_missing"]
error_msg = str(e) or LOG_MESSAGES["id_missing"]
if verbose:
PRINTER.print(error_msg, color="red")
logger.error(error_msg)
@@ -135,7 +143,9 @@ class PersistenceDecorator:
def persist(
persistence: FlowPersistence | None = None, verbose: bool = False
persistence: FlowPersistence | None = None,
verbose: bool = False,
key: str | None = None,
) -> Callable[[type | Callable[..., T]], type | Callable[..., T]]:
"""Decorator to persist flow state.
@@ -148,12 +158,16 @@ def persist(
persistence: Optional FlowPersistence implementation to use.
If not provided, uses SQLiteFlowPersistence.
verbose: Whether to log persistence operations. Defaults to False.
key: Optional name of the state attribute (for Pydantic/object states)
or dict key (for dict states) to use as the persistence key. When
``None`` (default) the decorator falls back to ``state.id``.
Returns:
A decorator that can be applied to either a class or method
Raises:
ValueError: If the flow state doesn't have an 'id' field
ValueError: If the flow state doesn't have an 'id' field, or the
specified ``key`` is missing or falsy on state.
RuntimeError: If state persistence fails
Example:
@@ -162,6 +176,10 @@ def persist(
@start()
def begin(self):
pass
@persist(key="conversation_id") # Custom persistence key
class MyFlow(Flow[MyState]):
...
"""
def decorator(target: type | Callable[..., T]) -> type | Callable[..., T]:
@@ -207,7 +225,7 @@ def persist(
) -> Any:
result = await original_method(self, *args, **kwargs)
PersistenceDecorator.persist_state(
self, method_name, actual_persistence, verbose
self, method_name, actual_persistence, verbose, key
)
return result
@@ -237,7 +255,7 @@ def persist(
def method_wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
result = original_method(self, *args, **kwargs)
PersistenceDecorator.persist_state(
self, method_name, actual_persistence, verbose
self, method_name, actual_persistence, verbose, key
)
return result
@@ -276,7 +294,7 @@ def persist(
else:
result = method_coro
PersistenceDecorator.persist_state(
flow_instance, method.__name__, actual_persistence, verbose
flow_instance, method.__name__, actual_persistence, verbose, key
)
return cast(T, result)
@@ -295,7 +313,7 @@ def persist(
def method_sync_wrapper(flow_instance: Any, *args: Any, **kwargs: Any) -> T:
result = method(flow_instance, *args, **kwargs)
PersistenceDecorator.persist_state(
flow_instance, method.__name__, actual_persistence, verbose
flow_instance, method.__name__, actual_persistence, verbose, key
)
return result

View File


@@ -1235,12 +1235,8 @@ class LLM(BaseLLM):
# --- 4) Check for tool calls
tool_calls = response_message.tool_calls or []
# --- 5) If there are tool calls but no available functions, return the tool calls
if tool_calls and not available_functions:
return tool_calls
# --- 6) If there are no tool calls to execute, return the text response directly
if not tool_calls and text_response:
# --- 5) If no tool calls or no available functions, return the text response directly as long as there is a text response
if (not tool_calls or not available_functions) and text_response:
self._handle_emit_call_events(
response=text_response,
call_type=LLMCallType.LLM_CALL,
@@ -1251,6 +1247,11 @@ class LLM(BaseLLM):
)
return text_response
# --- 6) If there are tool calls but no available functions, return the tool calls
# This allows the caller (e.g., executor) to handle tool execution
if tool_calls and not available_functions:
return tool_calls
# --- 7) Handle tool calls if present (execute when available_functions provided)
if tool_calls and available_functions:
tool_result = self._handle_tool_call(
@@ -1383,10 +1384,7 @@ class LLM(BaseLLM):
tool_calls = response_message.tool_calls or []
if tool_calls and not available_functions:
return tool_calls
if not tool_calls and text_response:
if (not tool_calls or not available_functions) and text_response:
self._handle_emit_call_events(
response=text_response,
call_type=LLMCallType.LLM_CALL,
@@ -1397,6 +1395,11 @@ class LLM(BaseLLM):
)
return text_response
# If there are tool calls but no available functions, return the tool calls
# This allows the caller (e.g., executor) to handle tool execution
if tool_calls and not available_functions:
return tool_calls
# Handle tool calls if present (execute when available_functions provided)
if tool_calls and available_functions:
tool_result = self._handle_tool_call(
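Taken together, the reordered branches in this hunk amount to a small routing rule. `route_llm_response` below is an illustrative stand-in for the in-method logic, assuming `tool_calls` is a list and `available_functions` is a mapping or `None`:

```python
def route_llm_response(text_response, tool_calls, available_functions):
    """Mirror the branch ordering shown in the diff (sketch only)."""
    tool_calls = tool_calls or []
    # Return text when there is nothing executable: either no tool calls
    # at all, or no functions to execute them with.
    if (not tool_calls or not available_functions) and text_response:
        return text_response
    # Tool calls but no available functions (and no usable text): hand
    # the calls back to the caller, e.g. the executor.
    if tool_calls and not available_functions:
        return tool_calls
    # Otherwise the tool calls would be executed via available_functions.
    return None
```

Note that with this ordering, a response carrying both text and tool calls returns the text when no `available_functions` are supplied; the earlier ordering returned the tool calls first.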

View File


@@ -152,8 +152,6 @@ class MCPToolResolver:
try:
tools, clients = self._resolve_native(mcp_server_config)
for tool in tools:
tool._amp_slug = slug # type: ignore[attr-defined]
resolved_cache[slug] = (tools, clients)
all_clients.extend(clients)
except Exception as e:

View File


@@ -1110,7 +1110,7 @@ Follow these guidelines:
)
def _export_output(
self, result: str | BaseModel
self, result: str
) -> tuple[BaseModel | None, dict[str, Any] | None]:
pydantic_output: BaseModel | None = None
json_output: dict[str, Any] | None = None

View File


@@ -59,11 +59,6 @@ class MCPNativeTool(BaseTool):
self._client_factory = client_factory
self._original_tool_name = original_tool_name or tool_name
self._server_name = server_name
# Set by MCPToolResolver._resolve_amp when this tool is produced for
# an AMP slug; remains None for direct config / external URL refs.
# Consumed downstream by enterprise tooling to recover the canonical
# tool_id (e.g. "crewai_oauth:<slug>|mcp").
self._amp_slug: str | None = None
@property
def original_tool_name(self) -> str:

View File


@@ -54,11 +54,6 @@ class MCPToolWrapper(BaseTool):
self._mcp_server_params = mcp_server_params
self._original_tool_name = tool_name
self._server_name = server_name
# Set by MCPToolResolver._resolve_amp when this wrapper is produced for
# an AMP slug; remains None for direct config / external URL refs.
# Consumed downstream by enterprise tooling to recover the canonical
# tool_id (e.g. "crewai_oauth:<slug>|mcp").
self._amp_slug: str | None = None
@property
def mcp_server_params(self) -> dict[str, Any]:

View File


@@ -153,18 +153,16 @@ class Converter(OutputConverter):
def convert_to_model(
result: str | BaseModel,
result: str,
output_pydantic: type[BaseModel] | None,
output_json: type[BaseModel] | None,
agent: Agent | BaseAgent | None = None,
converter_cls: type[Converter] | None = None,
) -> dict[str, Any] | BaseModel | str:
"""Convert a result to a Pydantic model or JSON.
"""Convert a result string to a Pydantic model or JSON.
Args:
result: The result to convert. Usually a JSON string, but a Pydantic
instance is also accepted when an upstream caller already produced
a structured object.
result: The result string to convert.
output_pydantic: The Pydantic model class to convert to.
output_json: The Pydantic model class to convert to JSON.
agent: The agent instance.
@@ -177,11 +175,6 @@ def convert_to_model(
if model is None:
return result
if isinstance(result, BaseModel):
if isinstance(result, model):
return result.model_dump() if output_json else result
result = result.model_dump_json()
if converter_cls:
return convert_with_instructions(
result=result,
@@ -264,21 +257,12 @@ def handle_partial_json(
match = _JSON_PATTERN.search(result)
if match:
try:
parsed = json.loads(match.group(), strict=False)
except json.JSONDecodeError:
return convert_with_instructions(
result=result,
model=model,
is_json_output=is_json_output,
agent=agent,
converter_cls=converter_cls,
)
try:
exported_result = model.model_validate(parsed)
exported_result = model.model_validate_json(match.group())
if is_json_output:
return exported_result.model_dump()
return exported_result
except json.JSONDecodeError:
pass
except ValidationError:
raise
except Exception as e:

View File


@@ -1,135 +0,0 @@
"""Tests for the _amp_slug attribute set by MCPToolResolver._resolve_amp.
The slug is private metadata used downstream by enterprise tooling to recover
the canonical tool_id (e.g. ``crewai_oauth:<slug>|mcp``) for ACP rule
evaluation.
"""
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from crewai.agent.core import Agent
from crewai.mcp.config import MCPServerHTTP
from crewai.mcp.tool_resolver import MCPToolResolver
from crewai.tools.mcp_native_tool import MCPNativeTool
from crewai.tools.mcp_tool_wrapper import MCPToolWrapper
@pytest.fixture
def agent():
return Agent(
role="Test Agent",
goal="Test goal",
backstory="Test backstory",
)
@pytest.fixture
def resolver(agent):
return MCPToolResolver(agent=agent, logger=agent._logger)
class TestAmpSlugDefaultsNone:
def test_native_tool_default_amp_slug_is_none(self):
tool = MCPNativeTool(
client_factory=lambda: None,
tool_name="search",
tool_schema={"description": "Search"},
server_name="notion",
)
assert tool._amp_slug is None
def test_wrapper_tool_default_amp_slug_is_none(self):
tool = MCPToolWrapper(
mcp_server_params={"url": "https://mcp.example.com"},
tool_name="search",
tool_schema={"description": "Search"},
server_name="notion",
)
assert tool._amp_slug is None
class TestAmpSlugSetByResolveAmp:
@patch("crewai.mcp.tool_resolver.MCPToolResolver._resolve_native")
@patch("crewai.mcp.tool_resolver.MCPToolResolver._fetch_amp_mcp_configs")
def test_resolve_amp_tags_each_tool_with_its_slug(
self, mock_fetch_configs, mock_resolve_native, resolver
):
mock_fetch_configs.return_value = {
"notion": {"url": "https://mcp.crewai.com/notion"},
"github": {"url": "https://mcp.crewai.com/github"},
}
notion_tool = MCPNativeTool(
client_factory=lambda: None,
tool_name="search",
tool_schema={"description": "search Notion"},
server_name="notion",
)
github_tool = MCPNativeTool(
client_factory=lambda: None,
tool_name="list_repos",
tool_schema={"description": "list github repos"},
server_name="github",
)
def fake_resolve_native(config):
url = config.url if hasattr(config, "url") else config["url"]
if "notion" in url:
return ([notion_tool], [MagicMock()])
return ([github_tool], [MagicMock()])
mock_resolve_native.side_effect = fake_resolve_native
tools, _ = resolver._resolve_amp(
[("notion", None), ("github", None)]
)
assert {tool._amp_slug for tool in tools} == {"notion", "github"}
@patch("crewai.mcp.tool_resolver.MCPToolResolver._fetch_amp_mcp_configs")
def test_resolve_amp_does_not_tag_when_config_missing(
self, mock_fetch_configs, resolver
):
mock_fetch_configs.return_value = {}
tools, _ = resolver._resolve_amp([("unknown", None)])
assert tools == []
class TestAmpSlugUntaggedForOtherPaths:
@patch("crewai.mcp.tool_resolver.MCPClient")
def test_resolve_external_does_not_set_amp_slug(self, mock_client_class, resolver):
mock_client = AsyncMock()
mock_client.list_tools = AsyncMock(
return_value=[{"name": "search", "description": "Search"}]
)
mock_client.connected = False
mock_client.connect = AsyncMock()
mock_client.disconnect = AsyncMock()
mock_client_class.return_value = mock_client
with patch.object(
resolver, "_get_mcp_tool_schemas", return_value={"search": {"description": "Search"}}
):
tools = resolver._resolve_external("https://mcp.example.com/api")
assert len(tools) == 1
assert tools[0]._amp_slug is None
@patch("crewai.mcp.tool_resolver.MCPClient")
def test_resolve_native_does_not_set_amp_slug(self, mock_client_class, resolver):
mock_client = AsyncMock()
mock_client.list_tools = AsyncMock(
return_value=[{"name": "search", "description": "Search"}]
)
mock_client.connected = False
mock_client.connect = AsyncMock()
mock_client.disconnect = AsyncMock()
mock_client_class.return_value = mock_client
config = MCPServerHTTP(url="https://mcp.example.com/api")
tools, _ = resolver._resolve_native(config)
assert all(tool._amp_slug is None for tool in tools)

View File


@@ -4519,8 +4519,8 @@ def test_sets_flow_context_when_using_crewbase_pattern_inside_flow():
flow.kickoff()
assert captured_crew is not None
assert captured_crew._flow_id == flow.flow_id # type: ignore[attr-defined]
assert captured_crew._request_id == flow.flow_id # type: ignore[attr-defined]
assert captured_crew._flow_id == flow.execution_id # type: ignore[attr-defined]
assert captured_crew._request_id == flow.execution_id # type: ignore[attr-defined]
def test_sets_flow_context_when_outside_flow(researcher, writer):
@@ -4554,8 +4554,8 @@ def test_sets_flow_context_when_inside_flow(researcher, writer):
flow = MyFlow()
result = flow.kickoff()
assert result._flow_id == flow.flow_id # type: ignore[attr-defined]
assert result._request_id == flow.flow_id # type: ignore[attr-defined]
assert result._flow_id == flow.execution_id # type: ignore[attr-defined]
assert result._request_id == flow.execution_id # type: ignore[attr-defined]
def test_reset_knowledge_with_no_crew_knowledge(researcher, writer):

View File


@@ -0,0 +1,127 @@
"""Regression tests for ``Flow.execution_id``.
``execution_id`` is the stable tracking identifier for a single flow run.
It must stay independent of ``state.id`` so that consumers passing an
``id`` in ``inputs`` (used for persistence restore) cannot destabilize
the identity used by telemetry, tracing, and external correlation.
"""
from __future__ import annotations
from typing import Any
import pytest
from crewai.flow.flow import Flow, FlowState, start
from crewai.flow.flow_context import current_flow_id, current_flow_request_id
class _CaptureState(FlowState):
captured_flow_id: str = ""
captured_state_id: str = ""
captured_current_flow_id: str = ""
captured_execution_id: str = ""
class _IdentityCaptureFlow(Flow[_CaptureState]):
initial_state = _CaptureState
@start()
def capture(self) -> None:
self.state.captured_flow_id = self.flow_id
self.state.captured_state_id = self.state.id
self.state.captured_current_flow_id = current_flow_id.get() or ""
self.state.captured_execution_id = self.execution_id
def test_execution_id_defaults_to_fresh_uuid_per_instance() -> None:
a = _IdentityCaptureFlow()
b = _IdentityCaptureFlow()
assert a.execution_id
assert b.execution_id
assert a.execution_id != b.execution_id
def test_execution_id_survives_consumer_id_in_inputs() -> None:
flow = _IdentityCaptureFlow()
original_execution_id = flow.execution_id
flow.kickoff(inputs={"id": "consumer-supplied-id"})
assert flow.state.id == "consumer-supplied-id"
assert flow.flow_id == "consumer-supplied-id"
assert flow.execution_id == original_execution_id
assert flow.execution_id != "consumer-supplied-id"
def test_two_runs_with_same_consumer_id_have_distinct_execution_ids() -> None:
flow_a = _IdentityCaptureFlow()
flow_b = _IdentityCaptureFlow()
colliding_id = "shared-consumer-id"
flow_a.kickoff(inputs={"id": colliding_id})
flow_b.kickoff(inputs={"id": colliding_id})
assert flow_a.state.id == colliding_id
assert flow_b.state.id == colliding_id
assert flow_a.execution_id != flow_b.execution_id
def test_execution_id_is_writable() -> None:
flow = _IdentityCaptureFlow()
flow.execution_id = "external-task-id"
assert flow.execution_id == "external-task-id"
flow.kickoff(inputs={"id": "consumer-supplied-id"})
assert flow.execution_id == "external-task-id"
assert flow.state.id == "consumer-supplied-id"
def test_current_flow_id_context_var_matches_execution_id() -> None:
flow = _IdentityCaptureFlow()
flow.execution_id = "external-task-id"
flow.kickoff(inputs={"id": "consumer-supplied-id"})
assert flow.state.captured_current_flow_id == "external-task-id"
assert flow.state.captured_flow_id == "consumer-supplied-id"
assert flow.state.captured_execution_id == "external-task-id"
def test_execution_id_not_included_in_serialized_state() -> None:
flow = _IdentityCaptureFlow()
flow.execution_id = "external-task-id"
flow.kickoff()
dumped = flow.state.model_dump()
assert "execution_id" not in dumped
assert "_execution_id" not in dumped
assert dumped["id"] == flow.state.id
def test_dict_state_flow_also_exposes_stable_execution_id() -> None:
class DictFlow(Flow[dict[str, Any]]):
initial_state = dict # type: ignore[assignment]
@start()
def noop(self) -> None:
pass
flow = DictFlow()
original = flow.execution_id
flow.kickoff(inputs={"id": "consumer-supplied-id"})
assert flow.state["id"] == "consumer-supplied-id"
assert flow.execution_id == original
@pytest.fixture(autouse=True)
def _reset_flow_context_vars():
yield
for var in (current_flow_id, current_flow_request_id):
try:
var.set(None)
except LookupError:
# ContextVar was never set in this context; nothing to reset.
pass

View File


@@ -251,240 +251,67 @@ def test_persistence_with_base_model(tmp_path):
assert isinstance(flow.state._unwrap(), State)
def test_fork_with_restore_from_state_id(tmp_path):
"""Fork: restore_from_state_id hydrates state from source flow_uuid; new run gets a
fresh state.id; source's history is preserved (the fork's @persist writes go under
the new state.id, not the source's)."""
def test_persist_custom_key_with_pydantic_state(tmp_path):
"""`@persist(key=...)` uses the named attribute on a Pydantic state."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
class ForkableFlow(Flow[TestState]):
class KeyedState(FlowState):
conversation_id: str = "conv-42"
message: str = ""
class KeyedFlow(Flow[KeyedState]):
@start()
@persist(persistence)
def step(self):
self.state.counter += 1
@persist(persistence, key="conversation_id")
def init_step(self):
self.state.message = "hello"
# Run 1: build up source state. counter goes 0 -> 1.
flow1 = ForkableFlow(persistence=persistence)
flow1.kickoff()
source_uuid = flow1.state.id
assert flow1.state.counter == 1
flow = KeyedFlow(persistence=persistence)
flow.kickoff()
# Resume on the same uuid bumps counter to 2 in the SAME flow_uuid history.
flow1b = ForkableFlow(persistence=persistence)
flow1b.kickoff(inputs={"id": source_uuid})
assert flow1b.state.counter == 2
assert persistence.load_state(source_uuid)["counter"] == 2
# Fork: hydrate from source, but persist under a fresh state.id.
flow2 = ForkableFlow(persistence=persistence)
flow2.kickoff(restore_from_state_id=source_uuid)
# Fork has a different state.id from the source.
assert flow2.state.id != source_uuid
# Hydrated from source's latest snapshot (counter=2), then incremented to 3.
assert flow2.state.counter == 3
# Source's history is unchanged after the fork.
assert persistence.load_state(source_uuid)["counter"] == 2
# Fork's writes landed under its own state.id.
assert persistence.load_state(flow2.state.id)["counter"] == 3
saved_state = persistence.load_state("conv-42")
assert saved_state is not None
assert saved_state["message"] == "hello"
# The default `state.id` lookup must NOT have been used as the key.
assert persistence.load_state(flow.state.id) is None
def test_fork_with_pinned_state_id(tmp_path):
"""Fork into a pinned state.id (inputs.id supplied alongside restore_from_state_id):
the new run uses inputs.id as state.id and hydrates from restore_from_state_id."""
def test_persist_custom_key_with_dict_state(tmp_path):
"""`@persist(key=...)` uses the named key on a dict state."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
class PinnableFlow(Flow[TestState]):
class DictKeyedFlow(Flow[Dict[str, str]]):
initial_state = dict()
@start()
@persist(persistence)
def step(self):
self.state.counter += 1
@persist(persistence, key="conversation_id")
def init_step(self):
self.state["conversation_id"] = "conv-dict-7"
self.state["message"] = "hi from dict"
flow1 = PinnableFlow(persistence=persistence)
flow1.kickoff()
source_uuid = flow1.state.id
assert flow1.state.counter == 1
flow = DictKeyedFlow(persistence=persistence)
flow.kickoff()
pinned_uuid = "pinned-fork-uuid-1234"
flow2 = PinnableFlow(persistence=persistence)
flow2.kickoff(
inputs={"id": pinned_uuid},
restore_from_state_id=source_uuid,
)
# state.id pinned to inputs.id, NOT the source uuid.
assert flow2.state.id == pinned_uuid
# Hydrated from source: counter started at 1, step incremented to 2.
assert flow2.state.counter == 2
# Source's history is unchanged.
assert persistence.load_state(source_uuid)["counter"] == 1
# Fork's writes are under the pinned uuid.
assert persistence.load_state(pinned_uuid)["counter"] == 2
saved_state = persistence.load_state("conv-dict-7")
assert saved_state is not None
assert saved_state["message"] == "hi from dict"
def test_restore_from_state_id_not_found_silent_fallback(tmp_path):
"""Lookup miss on restore_from_state_id silently falls through to default behavior."""
def test_persist_custom_key_missing_raises(tmp_path):
"""A missing/falsy custom key must raise a clear ValueError."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
class FallbackFlow(Flow[TestState]):
class MissingKeyFlow(Flow[Dict[str, str]]):
initial_state = dict()
@start()
@persist(persistence)
def step(self):
self.state.counter += 1
@persist(persistence, key="conversation_id")
def init_step(self):
# Intentionally do NOT set "conversation_id" on state.
self.state["message"] = "no key here"
flow = FallbackFlow(persistence=persistence)
# No source UUID exists — should not raise.
flow.kickoff(restore_from_state_id="no-such-uuid")
# Default state path: counter starts at 0 and step increments to 1.
assert flow.state.counter == 1
# state.id is the auto-generated one, NOT the missing source.
assert flow.state.id != "no-such-uuid"
def test_restore_from_state_id_none_is_no_op(tmp_path):
"""restore_from_state_id=None (default) preserves baseline kickoff behavior."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
class BaselineFlow(Flow[TestState]):
@start()
@persist(persistence)
def step(self):
self.state.counter += 1
flow = BaselineFlow(persistence=persistence)
flow.kickoff(restore_from_state_id=None)
assert flow.state.counter == 1
def test_fork_conflict_with_from_checkpoint_raises():
"""Passing both from_checkpoint and restore_from_state_id raises ValueError, naming
both parameters."""
from crewai.state import CheckpointConfig
class ConflictFlow(Flow[TestState]):
@start()
def step(self):
pass
flow = ConflictFlow()
with pytest.raises(ValueError) as excinfo:
flow.kickoff(
from_checkpoint=CheckpointConfig(),
restore_from_state_id="some-uuid",
)
msg = str(excinfo.value)
assert "from_checkpoint" in msg
assert "restore_from_state_id" in msg
@pytest.mark.asyncio
async def test_fork_via_kickoff_async(tmp_path):
"""kickoff_async honors restore_from_state_id: hydrates from source, mints fresh
state.id, persists under the new id, source history preserved."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
class AsyncForkableFlow(Flow[TestState]):
@start()
@persist(persistence)
def step(self):
self.state.counter += 1
flow1 = AsyncForkableFlow(persistence=persistence)
await flow1.kickoff_async()
source_uuid = flow1.state.id
assert flow1.state.counter == 1
flow2 = AsyncForkableFlow(persistence=persistence)
await flow2.kickoff_async(restore_from_state_id=source_uuid)
assert flow2.state.id != source_uuid
assert flow2.state.counter == 2
assert persistence.load_state(source_uuid)["counter"] == 1
assert persistence.load_state(flow2.state.id)["counter"] == 2
@pytest.mark.asyncio
async def test_fork_via_akickoff(tmp_path):
"""akickoff is the public async alias and must accept restore_from_state_id with
the same semantics as kickoff_async."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
class AkickoffForkableFlow(Flow[TestState]):
@start()
@persist(persistence)
def step(self):
self.state.counter += 1
flow1 = AkickoffForkableFlow(persistence=persistence)
await flow1.akickoff()
source_uuid = flow1.state.id
assert flow1.state.counter == 1
flow2 = AkickoffForkableFlow(persistence=persistence)
await flow2.akickoff(restore_from_state_id=source_uuid)
assert flow2.state.id != source_uuid
assert flow2.state.counter == 2
assert persistence.load_state(source_uuid)["counter"] == 1
assert persistence.load_state(flow2.state.id)["counter"] == 2
@pytest.mark.asyncio
async def test_akickoff_pinned_fork(tmp_path):
"""akickoff with both inputs.id and restore_from_state_id pins state.id while
hydrating from the source."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
class PinnableAsyncFlow(Flow[TestState]):
@start()
@persist(persistence)
def step(self):
self.state.counter += 1
flow1 = PinnableAsyncFlow(persistence=persistence)
await flow1.akickoff()
source_uuid = flow1.state.id
pinned_uuid = "pinned-akickoff-fork-uuid"
flow2 = PinnableAsyncFlow(persistence=persistence)
await flow2.akickoff(
inputs={"id": pinned_uuid},
restore_from_state_id=source_uuid,
)
assert flow2.state.id == pinned_uuid
assert flow2.state.counter == 2
assert persistence.load_state(source_uuid)["counter"] == 1
assert persistence.load_state(pinned_uuid)["counter"] == 2
@pytest.mark.asyncio
async def test_akickoff_fork_conflict_with_from_checkpoint_raises():
"""akickoff must raise the same conflict ValueError as kickoff/kickoff_async when
both from_checkpoint and restore_from_state_id are set."""
from crewai.state import CheckpointConfig
class AsyncConflictFlow(Flow[TestState]):
@start()
def step(self):
pass
flow = AsyncConflictFlow()
with pytest.raises(ValueError) as excinfo:
await flow.akickoff(
from_checkpoint=CheckpointConfig(),
restore_from_state_id="some-uuid",
)
msg = str(excinfo.value)
assert "from_checkpoint" in msg
assert "restore_from_state_id" in msg
flow = MissingKeyFlow(persistence=persistence)
with pytest.raises(ValueError, match="conversation_id"):
flow.kickoff()
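The fork semantics exercised in this file (hydrate from a source snapshot, write under a fresh id, leave the source's history untouched) reduce to a small pattern. `InMemoryPersistence` and `fork` are illustrative stand-ins for `SQLiteFlowPersistence` and the kickoff plumbing:

```python
import uuid


class InMemoryPersistence:
    """Dict-backed stand-in for SQLiteFlowPersistence (illustrative)."""

    def __init__(self) -> None:
        self._states: dict[str, dict] = {}

    def save_state(self, state: dict) -> None:
        self._states[state["id"]] = dict(state)

    def load_state(self, state_id: str):
        return self._states.get(state_id)


def fork(persistence: InMemoryPersistence, source_id: str) -> dict:
    """Hydrate from the source's latest snapshot under a brand-new id."""
    snapshot = persistence.load_state(source_id) or {}
    # Copy the snapshot but mint a fresh id, so subsequent writes land
    # under the fork's own history rather than the source's.
    forked = {**snapshot, "id": str(uuid.uuid4())}
    persistence.save_state(forked)
    return forked
```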

View File


@@ -177,7 +177,6 @@ def test_llm_passes_additional_params():
# Create mocks for response structure
mock_message = MagicMock()
mock_message.content = "Test response"
mock_message.tool_calls = None
mock_choice = MagicMock()
mock_choice.message = mock_message
mock_response = MagicMock()
@@ -1147,52 +1146,3 @@ async def test_usage_info_streaming_with_acall():
assert llm._token_usage["total_tokens"] > 0
assert len(result) > 0
def _build_response_with_text_and_tool_calls():
"""Mimic a litellm ModelResponse that contains both content and tool_calls."""
from litellm.types.utils import ChatCompletionMessageToolCall, Function
response_message = MagicMock()
response_message.content = "I will search for the given query."
response_message.tool_calls = [
ChatCompletionMessageToolCall(
id="call_123",
type="function",
function=Function(name="search", arguments='{"q": "x"}'),
)
]
choice = MagicMock(message=response_message)
response = MagicMock(choices=[choice], model_extra=None)
return response
def test_non_streaming_returns_tool_calls_when_text_also_present():
"""A response with both text and tool_calls must not drop the tool_calls
when available_functions is None (executor-managed tool execution path).
"""
llm = LLM(model="gpt-4o-mini", is_litellm=True)
response = _build_response_with_text_and_tool_calls()
with patch("crewai.llm.litellm.completion", return_value=response):
result = llm.call("anything", available_functions=None)
assert isinstance(result, list)
assert len(result) == 1
assert result[0].function.name == "search"
@pytest.mark.asyncio
async def test_non_streaming_async_returns_tool_calls_when_text_also_present():
llm = LLM(model="openai/gpt-4o-mini", is_litellm=True, stream=False)
response = _build_response_with_text_and_tool_calls()
async def _ret(*args, **kwargs):
return response
with patch("crewai.llm.litellm.acompletion", side_effect=_ret):
result = await llm.acall("anything", available_functions=None)
assert isinstance(result, list)
assert len(result) == 1
assert result[0].function.name == "search"

View File


@@ -690,27 +690,6 @@ def test_multiple_guardrails_with_pydantic_output():
assert parsed["processed"] is True
def test_export_output_accepts_pydantic_input():
"""Regression test for #5458: _export_output must not crash with TypeError
when called with a Pydantic instance (e.g. when an upstream caller passes
an already-converted model from a context task)."""
from pydantic import BaseModel
class StructuredResult(BaseModel):
value: str
task = create_smart_task(
description="Test pydantic export",
expected_output="Structured output",
output_pydantic=StructuredResult,
)
instance = StructuredResult(value="ok")
pydantic_output, json_output = task._export_output(instance)
assert pydantic_output is instance
assert json_output is None
def test_guardrails_vs_single_guardrail_mutual_exclusion():
"""Test that guardrails list nullifies single guardrail."""

View File


@@ -0,0 +1,259 @@
"""Tests for tracing disabled message suppression (issue #5665).
Verifies that:
- Users who explicitly declined tracing are NOT nagged with the message.
- The CREWAI_SUPPRESS_TRACING_MESSAGES env var suppresses the message.
- The message is shown only when tracing is disabled and user hasn't declined.
"""
from unittest.mock import MagicMock, patch
import pytest
from crewai.events.listeners.tracing.utils import (
set_suppress_tracing_messages,
should_suppress_tracing_messages,
)
class TestShouldSuppressTracingMessages:
"""Tests for the should_suppress_tracing_messages utility function."""
def test_suppress_false_by_default(self):
"""By default, messages should NOT be suppressed."""
token = set_suppress_tracing_messages(False)
try:
assert should_suppress_tracing_messages() is False
finally:
from crewai.events.listeners.tracing.utils import (
_suppress_tracing_messages,
)
_suppress_tracing_messages.reset(token)
def test_suppress_via_context_var(self):
"""Setting the context var should suppress messages."""
token = set_suppress_tracing_messages(True)
try:
assert should_suppress_tracing_messages() is True
finally:
from crewai.events.listeners.tracing.utils import (
_suppress_tracing_messages,
)
_suppress_tracing_messages.reset(token)
@pytest.mark.parametrize("env_value", ["true", "True", "TRUE", "1", "yes", "YES"])
def test_suppress_via_env_var(self, env_value, monkeypatch):
"""CREWAI_SUPPRESS_TRACING_MESSAGES env var should suppress messages."""
token = set_suppress_tracing_messages(False)
try:
monkeypatch.setenv("CREWAI_SUPPRESS_TRACING_MESSAGES", env_value)
assert should_suppress_tracing_messages() is True
finally:
from crewai.events.listeners.tracing.utils import (
_suppress_tracing_messages,
)
_suppress_tracing_messages.reset(token)
@pytest.mark.parametrize("env_value", ["false", "False", "0", "no", ""])
def test_no_suppress_with_falsy_env_var(self, env_value, monkeypatch):
"""Falsy values for the env var should NOT suppress messages."""
token = set_suppress_tracing_messages(False)
try:
monkeypatch.setenv("CREWAI_SUPPRESS_TRACING_MESSAGES", env_value)
assert should_suppress_tracing_messages() is False
finally:
from crewai.events.listeners.tracing.utils import (
_suppress_tracing_messages,
)
_suppress_tracing_messages.reset(token)
def test_context_var_takes_precedence_over_env(self, monkeypatch):
"""Context var set to True should suppress even if env var is false."""
token = set_suppress_tracing_messages(True)
try:
monkeypatch.setenv("CREWAI_SUPPRESS_TRACING_MESSAGES", "false")
assert should_suppress_tracing_messages() is True
finally:
from crewai.events.listeners.tracing.utils import (
_suppress_tracing_messages,
)
_suppress_tracing_messages.reset(token)
class TestShowTracingDisabledMessage:
"""Tests that _show_tracing_disabled_message does not nag declined users."""
@patch(
"crewai.events.listeners.tracing.utils._load_user_data",
return_value={"first_execution_done": True, "trace_consent": False},
)
def test_crew_no_message_when_user_declined(self, mock_load):
"""Crew._show_tracing_disabled_message should not print when user declined."""
from crewai.crew import Crew
with patch("crewai.crew.Console") as MockConsole:
Crew._show_tracing_disabled_message()
MockConsole.return_value.print.assert_not_called()
@patch(
"crewai.events.listeners.tracing.utils._load_user_data",
return_value={"first_execution_done": True, "trace_consent": False},
)
def test_flow_no_message_when_user_declined(self, mock_load):
"""Flow._show_tracing_disabled_message should not print when user declined."""
from crewai.flow.flow import Flow
with patch("crewai.flow.flow.Console") as MockConsole:
Flow._show_tracing_disabled_message()
MockConsole.return_value.print.assert_not_called()
@patch(
"crewai.events.listeners.tracing.utils._load_user_data",
return_value={"first_execution_done": True, "trace_consent": False},
)
def test_trace_listener_no_message_when_user_declined(self, mock_load):
"""TraceCollectionListener._show_tracing_disabled_message should not print when user declined."""
from crewai.events.listeners.tracing.trace_listener import (
TraceCollectionListener,
)
listener = TraceCollectionListener.__new__(TraceCollectionListener)
with patch("rich.console.Console") as MockConsole:
listener._show_tracing_disabled_message()
MockConsole.return_value.print.assert_not_called()
@patch(
"crewai.events.listeners.tracing.utils._load_user_data",
return_value={},
)
def test_crew_shows_message_when_user_has_not_decided(self, mock_load):
"""Crew._show_tracing_disabled_message should print when user hasn't decided yet."""
from crewai.crew import Crew
with patch("crewai.crew.Console") as MockConsole:
mock_console_instance = MockConsole.return_value
Crew._show_tracing_disabled_message()
mock_console_instance.print.assert_called_once()
@patch(
"crewai.events.listeners.tracing.utils._load_user_data",
return_value={},
)
def test_crew_no_message_when_suppress_env_set(self, mock_load, monkeypatch):
"""Crew._show_tracing_disabled_message should not print when env var suppresses."""
from crewai.crew import Crew
monkeypatch.setenv("CREWAI_SUPPRESS_TRACING_MESSAGES", "true")
with patch("crewai.crew.Console") as MockConsole:
Crew._show_tracing_disabled_message()
MockConsole.return_value.print.assert_not_called()
@patch(
"crewai.events.listeners.tracing.utils._load_user_data",
return_value={},
)
def test_flow_no_message_when_suppress_env_set(self, mock_load, monkeypatch):
"""Flow._show_tracing_disabled_message should not print when env var suppresses."""
from crewai.flow.flow import Flow
monkeypatch.setenv("CREWAI_SUPPRESS_TRACING_MESSAGES", "true")
with patch("crewai.flow.flow.Console") as MockConsole:
Flow._show_tracing_disabled_message()
MockConsole.return_value.print.assert_not_called()
class TestConsoleFormatterTracingMessage:
"""Tests for console_formatter._show_tracing_disabled_message_if_needed."""
def _make_formatter(self):
from crewai.events.utils.console_formatter import ConsoleFormatter
formatter = ConsoleFormatter.__new__(ConsoleFormatter)
formatter.console = MagicMock()
formatter.verbose = True
return formatter
@patch(
"crewai.events.listeners.tracing.utils._load_user_data",
return_value={"first_execution_done": True, "trace_consent": False},
)
def test_no_message_when_user_declined(self, mock_load):
"""Console formatter should not show the message when user declined tracing."""
formatter = self._make_formatter()
with patch(
"crewai.events.listeners.tracing.trace_listener.TraceCollectionListener"
) as mock_listener_cls:
mock_listener_cls._instance = None
formatter._show_tracing_disabled_message_if_needed()
formatter.console.print.assert_not_called()
@patch(
"crewai.events.listeners.tracing.utils._load_user_data",
return_value={},
)
def test_no_message_when_suppress_env_set(self, mock_load, monkeypatch):
"""Console formatter should not show the message when env var is set."""
monkeypatch.setenv("CREWAI_SUPPRESS_TRACING_MESSAGES", "true")
formatter = self._make_formatter()
formatter._show_tracing_disabled_message_if_needed()
formatter.console.print.assert_not_called()
@patch(
"crewai.events.listeners.tracing.utils._load_user_data",
return_value={},
)
@patch(
"crewai.events.listeners.tracing.utils.is_tracing_enabled_in_context",
return_value=False,
)
def test_message_shown_when_tracing_disabled_and_not_declined(
self, mock_tracing_ctx, mock_load
):
"""Console formatter should show the message when tracing disabled and user hasn't declined."""
from crewai.events.listeners.tracing.trace_listener import (
TraceCollectionListener,
)
formatter = self._make_formatter()
mock_instance = MagicMock()
mock_instance.first_time_handler.is_first_time = False
original_instance = TraceCollectionListener._instance
try:
TraceCollectionListener._instance = mock_instance # type: ignore[misc]
formatter._show_tracing_disabled_message_if_needed()
formatter.console.print.assert_called_once()
finally:
TraceCollectionListener._instance = original_instance # type: ignore[misc]
@patch(
"crewai.events.listeners.tracing.utils._load_user_data",
return_value={},
)
@patch(
"crewai.events.listeners.tracing.utils.is_tracing_enabled_in_context",
return_value=True,
)
def test_no_message_when_tracing_enabled(self, mock_tracing_ctx, mock_load):
"""Console formatter should not show the message when tracing is enabled."""
from crewai.events.listeners.tracing.trace_listener import (
TraceCollectionListener,
)
formatter = self._make_formatter()
mock_instance = MagicMock()
mock_instance.first_time_handler.is_first_time = False
original_instance = TraceCollectionListener._instance
try:
TraceCollectionListener._instance = mock_instance # type: ignore[misc]
formatter._show_tracing_disabled_message_if_needed()
formatter.console.print.assert_not_called()
finally:
TraceCollectionListener._instance = original_instance # type: ignore[misc]
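Taken together, the tests above pin down a single gating decision: show the "tracing is disabled" message only when no suppression condition applies. A minimal sketch of that predicate, inferred from the test fixtures (the function name and the exact shape of the user-data dict are assumptions, not crewAI's actual API):

```python
import os


def should_show_tracing_disabled_message(
    user_data: dict, tracing_enabled_in_context: bool
) -> bool:
    """Return True only when no suppression condition applies.

    Mirrors the three cases the tests exercise: an explicit env-var
    opt-out, tracing already enabled in context, and a recorded decline.
    """
    # CREWAI_SUPPRESS_TRACING_MESSAGES fully suppresses the message.
    if os.environ.get("CREWAI_SUPPRESS_TRACING_MESSAGES", "").lower() == "true":
        return False
    # No reason to mention disabled tracing when it is actually enabled.
    if tracing_enabled_in_context:
        return False
    # A user who explicitly declined tracing should not be nagged again.
    if user_data.get("trace_consent") is False:
        return False
    return True
```

With this shape, the `{"first_execution_done": True, "trace_consent": False}` fixture suppresses the message, while an empty user-data dict (user has not decided yet) still shows it.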

@@ -87,31 +87,6 @@ def test_convert_to_model_with_no_model() -> None:
assert output == "Plain text"
def test_convert_to_model_with_basemodel_input_matching_pydantic() -> None:
instance = SimpleModel(name="John", age=30)
output = convert_to_model(instance, SimpleModel, None, None)
assert output is instance
def test_convert_to_model_with_basemodel_input_matching_json() -> None:
instance = SimpleModel(name="John", age=30)
output = convert_to_model(instance, None, SimpleModel, None)
assert output == {"name": "John", "age": 30}
def test_convert_to_model_with_basemodel_input_different_class() -> None:
class OtherModel(BaseModel):
name: str
age: int
extra: str = "default"
instance = OtherModel(name="John", age=30, extra="ignored")
output = convert_to_model(instance, SimpleModel, None, None)
assert isinstance(output, SimpleModel)
assert output.name == "John"
assert output.age == 30
def test_convert_to_model_with_special_characters() -> None:
json_string_test = """
{
@@ -202,34 +177,6 @@ def test_handle_partial_json_with_invalid_partial(mock_agent: Mock) -> None:
assert output == "Converted result"
def test_handle_partial_json_accepts_literal_control_chars_in_strings() -> None:
"""JSON values with literal newlines/tabs (lenient parsing) must still
validate, matching the prior model_validate_json behavior.
"""
result = 'prefix {"name": "Charlie\nDoe", "age": 35} suffix'
output = handle_partial_json(result, SimpleModel, False, None)
assert isinstance(output, SimpleModel)
assert output.name == "Charlie\nDoe"
assert output.age == 35
def test_handle_partial_json_falls_through_for_non_json_curly_blocks(
mock_agent: Mock,
) -> None:
"""A regex match that is not actually JSON (e.g. GraphQL) must fall through
to convert_with_instructions instead of raising a ValidationError.
"""
result = (
"type Query {\n countries: [Country]\n}\n\n"
"type Country {\n code: String\n name: String\n}"
)
with patch("crewai.utilities.converter.convert_with_instructions") as mock_convert:
mock_convert.return_value = "Converted result"
output = handle_partial_json(result, SimpleModel, False, mock_agent)
assert output == "Converted result"
mock_convert.assert_called_once()
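Both behaviors can be reproduced with a small stand-in for the extraction step these tests target. A sketch, assuming the approach is regex extraction plus a lenient parse (the regex and the `strict=False` flag are illustrative assumptions, not crewAI's actual implementation): pull the first `{...}` block, parse it tolerating literal control characters inside strings, and return `None` so the caller can fall through when the match is not JSON at all.

```python
import json
import re


def extract_json_candidate(text: str):
    """Parse JSON from the first {...} block, or return None to fall through."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        return None
    try:
        # strict=False lets literal newlines/tabs survive inside string
        # values, matching the lenient-parsing behavior the tests require.
        return json.loads(match.group(0), strict=False)
    except json.JSONDecodeError:
        # Curly braces but not JSON (e.g. a GraphQL schema): the caller
        # falls back to convert_with_instructions instead of raising.
        return None
```

The GraphQL example fails `json.loads` and falls through, while the `Charlie\nDoe` payload parses despite the raw newline in the string value.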
# Tests for convert_with_instructions
@patch("crewai.utilities.converter.create_converter")
@patch("crewai.utilities.converter.get_conversion_instructions")

@@ -11,8 +11,6 @@ Installed automatically via the workspace (`uv sync`). Requires:
- `ENTERPRISE_REPO` env var — GitHub repo for enterprise releases
- `ENTERPRISE_VERSION_DIRS` env var — comma-separated directories to bump in the enterprise repo
- `ENTERPRISE_CREWAI_DEP_PATH` env var — path to the pyproject.toml with the `crewai[tools]` pin in the enterprise repo
- `ENTERPRISE_WORKFLOW_PATHS` env var — comma-separated workflow file paths in the enterprise repo whose `crewai[extras]==<version>` pins should be rewritten on each release (e.g. `.github/workflows/tests.yml`)
- `ENTERPRISE_EXTRA_PACKAGES` env var — comma-separated packages to also pin in enterprise pyproject files, in addition to `crewai` / `crewai[extras]`
## Commands

@@ -1,3 +1,3 @@
"""CrewAI development tools."""
__version__ = "1.14.5a1"
__version__ = "1.14.4a1"

@@ -1207,12 +1207,7 @@ _ENTERPRISE_WORKFLOW_PATHS: Final[tuple[str, ...]] = tuple(
def _update_enterprise_crewai_dep(pyproject_path: Path, version: str) -> bool:
"""Update crewai pins in an enterprise pyproject.toml.
Pins ``crewai`` / ``crewai[extras]`` via ``_pin_crewai_deps`` and
additionally pins any dashed ``crewai-*`` packages configured via
``ENTERPRISE_EXTRA_PACKAGES`` (e.g. ``crewai-enterprise``), which
``_pin_crewai_deps`` does not cover.
"""Update the crewai[tools] pin in an enterprise pyproject.toml.
Args:
pyproject_path: Path to the pyproject.toml file.
@@ -1224,57 +1219,20 @@ def _update_enterprise_crewai_dep(pyproject_path: Path, version: str) -> bool:
if not pyproject_path.exists():
return False
changed = False
content = pyproject_path.read_text()
new_content = _pin_crewai_deps(content, version)
if new_content != content:
pyproject_path.write_text(new_content)
changed = True
if update_pyproject_dependencies(
pyproject_path, version, extra_packages=list(_ENTERPRISE_EXTRA_PACKAGES)
):
changed = True
return changed
def _update_workflow_crewai_pins(workflow_path: Path, version: str) -> bool:
"""Rewrite ``crewai[extras]==<old>`` pins in a single workflow file.
Operates line-by-line on the raw file via ``_repin_crewai_install``
so only version numbers change and all formatting is preserved.
Args:
workflow_path: Path to a workflow YAML file.
version: New crewai version string.
Returns:
True if the file was modified.
"""
if not workflow_path.exists():
return False
raw = workflow_path.read_text()
lines = raw.splitlines(keepends=True)
changed = False
for i, line in enumerate(lines):
if "crewai[" not in line:
continue
new_line = _repin_crewai_install(line, version)
if new_line != line:
lines[i] = new_line
changed = True
if not changed:
return False
workflow_path.write_text("".join(lines))
return True
return True
return False
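`_repin_crewai_install` itself is not shown in this diff. As a hedged illustration of the line-by-line strategy the docstrings describe (this regex and helper are assumptions inferred from the surrounding code, not the actual implementation), a single-line re-pin could look like:

```python
import re

# Matches `crewai==<ver>` or `crewai[extras]==<ver>`, capturing all but the version.
_PIN_RE = re.compile(r"(crewai(?:\[[^\]]+\])?==)[0-9][^\s\"']*")


def repin_crewai_install(line: str, version: str) -> str:
    """Rewrite any crewai[...]==<old> pin on one raw line to the new version.

    Only the version number changes; quoting, indentation, and the rest of
    the line are preserved, so workflow formatting is untouched.
    """
    return _PIN_RE.sub(lambda m: m.group(1) + version, line)
```

Lines without a pin pass through unchanged, which is what lets the callers above iterate over every line of a workflow file and only mark the file dirty when something actually matched.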
def _update_enterprise_workflows(repo_dir: Path, version: str) -> list[Path]:
"""Update crewai version pins in enterprise CI workflow files.
Applies ``_repin_crewai_install`` line-by-line on the raw file so
only version numbers change and all formatting is preserved.
Args:
repo_dir: Root of the cloned enterprise repo.
version: New crewai version string.
@@ -1285,31 +1243,29 @@ def _update_enterprise_workflows(repo_dir: Path, version: str) -> list[Path]:
updated: list[Path] = []
for rel_path in _ENTERPRISE_WORKFLOW_PATHS:
workflow = repo_dir / rel_path
if _update_workflow_crewai_pins(workflow, version):
updated.append(workflow)
return updated
def _update_repo_workflows_crewai_pins(repo_dir: Path, version: str) -> list[Path]:
"""Update crewai pins across all GitHub workflow files in a repo.
Args:
repo_dir: Root of the cloned repo.
version: New crewai version string.
Returns:
List of workflow paths that were modified.
"""
workflows_dir = repo_dir / ".github" / "workflows"
if not workflows_dir.exists():
return []
updated: list[Path] = []
for workflow in sorted(workflows_dir.iterdir()):
if workflow.suffix not in (".yml", ".yaml"):
if not workflow.exists():
continue
if _update_workflow_crewai_pins(workflow, version):
raw = workflow.read_text()
lines = raw.splitlines(keepends=True)
changed = False
for i, line in enumerate(lines):
if "crewai[" not in line:
continue
new_line = _repin_crewai_install(line, version)
if new_line != line:
lines[i] = new_line
changed = True
if changed:
new_raw = "".join(lines)
else:
new_raw = raw
if new_raw != raw:
workflow.write_text(new_raw)
updated.append(workflow)
return updated
@@ -1358,10 +1314,8 @@ _PYPI_POLL_TIMEOUT: Final[int] = 600
def _update_deployment_test_repo(version: str, is_prerelease: bool) -> None:
"""Update the deployment test repo to pin the new crewai version.
Clones the repo, updates the crewai[tools] pin in pyproject.toml
and any crewai[extras] pins in .github/workflows, regenerates the
lockfile, commits to a branch, pushes, opens a PR against main,
then polls until the PR is merged (or closed).
Clones the repo, updates the crewai[tools] pin in pyproject.toml,
regenerates the lockfile, commits, and pushes directly to main.
Args:
version: New crewai version string.
@@ -1379,91 +1333,50 @@ def _update_deployment_test_repo(version: str, is_prerelease: bool) -> None:
pyproject = repo_dir / "pyproject.toml"
content = pyproject.read_text()
new_content = _pin_crewai_deps(content, version)
pyproject_changed = new_content != content
if pyproject_changed:
pyproject.write_text(new_content)
console.print(f"[green]✓[/green] Updated crewai[tools] pin to {version}")
else:
if new_content == content:
console.print(
"[yellow]Warning:[/yellow] No crewai[tools] pin found to update"
)
updated_workflows = _update_repo_workflows_crewai_pins(repo_dir, version)
for wf in updated_workflows:
console.print(
f"[green]✓[/green] Updated crewai pin in {wf.relative_to(repo_dir)}"
)
if not pyproject_changed and not updated_workflows:
console.print("[yellow]Nothing to update; skipping commit and PR.[/yellow]")
return
pyproject.write_text(new_content)
console.print(f"[green]✓[/green] Updated crewai[tools] pin to {version}")
paths_to_add: list[str] = [
str(wf.relative_to(repo_dir)) for wf in updated_workflows
lock_cmd = [
"uv",
"lock",
"--refresh-package",
"crewai",
"--refresh-package",
"crewai-tools",
]
if is_prerelease:
lock_cmd.append("--prerelease=allow")
if pyproject_changed:
lock_cmd = [
"uv",
"lock",
"--refresh-package",
"crewai",
"--refresh-package",
"crewai-tools",
]
if is_prerelease:
lock_cmd.append("--prerelease=allow")
max_retries = 10
for attempt in range(1, max_retries + 1):
try:
run_command(lock_cmd, cwd=repo_dir)
break
except subprocess.CalledProcessError:
if attempt == max_retries:
console.print(
f"[red]Error:[/red] uv lock failed after {max_retries} attempts"
)
raise
max_retries = 10
for attempt in range(1, max_retries + 1):
try:
run_command(lock_cmd, cwd=repo_dir)
break
except subprocess.CalledProcessError:
if attempt == max_retries:
console.print(
f"[yellow]uv lock failed (attempt {attempt}/{max_retries}),"
f" retrying in {_PYPI_POLL_INTERVAL}s...[/yellow]"
f"[red]Error:[/red] uv lock failed after {max_retries} attempts"
)
time.sleep(_PYPI_POLL_INTERVAL)
console.print("[green]✓[/green] Lockfile updated")
paths_to_add.extend(["pyproject.toml", "uv.lock"])
raise
console.print(
f"[yellow]uv lock failed (attempt {attempt}/{max_retries}),"
f" retrying in {_PYPI_POLL_INTERVAL}s...[/yellow]"
)
time.sleep(_PYPI_POLL_INTERVAL)
console.print("[green]✓[/green] Lockfile updated")
branch = f"chore/bump-crewai-v{version}"
create_or_reset_branch(branch, cwd=repo_dir)
run_command(["git", "add", *paths_to_add], cwd=repo_dir)
run_command(["git", "add", "pyproject.toml", "uv.lock"], cwd=repo_dir)
run_command(
["git", "commit", "-m", f"chore: bump crewai to {version}"],
cwd=repo_dir,
)
run_command(["git", "push", "-u", "origin", branch], cwd=repo_dir)
console.print(f"[green]✓[/green] Pushed branch {branch}")
pr_url = run_command(
[
"gh",
"pr",
"create",
"--base",
"main",
"--head",
branch,
"--title",
f"chore: bump crewai to {version}",
"--body",
"",
],
cwd=repo_dir,
)
console.print(f"[green]✓[/green] Opened PR on {_DEPLOYMENT_TEST_REPO}")
console.print(f"[cyan]PR URL:[/cyan] {pr_url.strip()}")
_wait_for_pr_merged(branch, repo_dir)
run_command(["git", "push"], cwd=repo_dir)
console.print(f"[green]✓[/green] Pushed to {_DEPLOYMENT_TEST_REPO}")
def _wait_for_pypi(package: str, version: str) -> None:
@@ -1495,37 +1408,6 @@ def _wait_for_pypi(package: str, version: str) -> None:
sys.exit(1)
_PR_MERGE_POLL_INTERVAL: Final[int] = 30
def _wait_for_pr_merged(branch: str, cwd: Path) -> None:
"""Poll a PR until it is merged, exiting on close-without-merge.
Args:
branch: Head branch name of the PR to watch.
cwd: Working directory of the cloned repo (so ``gh`` resolves
the right remote).
Raises:
SystemExit: If the PR is closed without being merged.
"""
console.print(f"[cyan]Waiting for PR on branch {branch} to be merged...[/cyan]")
while True:
state = run_command(
["gh", "pr", "view", branch, "--json", "state", "--jq", ".state"],
cwd=cwd,
).strip()
if state == "MERGED":
console.print(f"[green]✓[/green] PR for {branch} merged")
return
if state == "CLOSED":
console.print(
f"[red]Error:[/red] PR for {branch} was closed without merging"
)
sys.exit(1)
time.sleep(_PR_MERGE_POLL_INTERVAL)
def _release_enterprise(version: str, is_prerelease: bool, dry_run: bool) -> None:
"""Clone the enterprise repo, bump versions, and create a release PR.