Compare commits


29 Commits

Author SHA1 Message Date
Iris Clawd
42b4f0101e fix(ci): add type annotations to _SSRFSafeAdapter.send and fix test mocks
- Add proper type annotations to _SSRFSafeAdapter.send() to satisfy mypy
- Add 'Any' import from typing
- Update webpage_loader tests to mock safe_get instead of requests.get
  (the loader now uses safe_get for SSRF protection)
2026-05-05 15:47:02 +00:00
Iris Clawd
3dc8c45cc9 fix(security): validate IPs on every redirect hop to prevent SSRF bypass (OSS-51)
Adds a custom HTTPAdapter (_SSRFSafeAdapter) that intercepts every
request — including redirect hops — and validates the resolved IP
against the private/reserved blocklist before the connection proceeds.

New public API:
- safe_request_session(): returns a Session with the adapter mounted
- safe_get(url, **kwargs): drop-in replacement for requests.get() that
  validates the initial URL AND every redirect destination

Updated tools to use safe_get() instead of validate_url() + requests.get():
- ScrapeWebsiteTool
- ScrapeElementFromWebsiteTool
- WebPageLoader (RAG)

Closes OSS-51
2026-05-05 03:57:09 +00:00
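The redirect-hop validation described above can be sketched as a small `requests` transport adapter. This is a minimal illustration of the approach, not crewAI's actual implementation; the class and helper names follow the commit message, but the details are assumed:

```python
import ipaddress
import socket
from urllib.parse import urlparse

import requests
from requests.adapters import HTTPAdapter


class SSRFSafeAdapter(HTTPAdapter):
    """Reject requests whose host resolves to a private/reserved address.

    Because requests re-enters the mounted adapter for every redirect hop,
    checking here (rather than only on the initial URL) closes the
    redirect-based SSRF bypass.
    """

    def send(self, request, **kwargs):
        host = urlparse(request.url).hostname or ""
        for info in socket.getaddrinfo(host, None):
            ip = ipaddress.ip_address(info[4][0])
            if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
                raise requests.exceptions.ConnectionError(
                    f"Blocked request to disallowed address {ip}"
                )
        return super().send(request, **kwargs)


def safe_request_session() -> requests.Session:
    """Return a Session with the SSRF-safe adapter mounted on both schemes."""
    session = requests.Session()
    session.mount("http://", SSRFSafeAdapter())
    session.mount("https://", SSRFSafeAdapter())
    return session


def safe_get(url, **kwargs):
    """Drop-in replacement for requests.get() that validates every hop."""
    return safe_request_session().get(url, **kwargs)
```

A fully DNS-rebinding-resistant version would also pin the validated IP for the actual connection; the sketch above only re-validates per hop.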
iris-clawd
ec8a522c2c fix: correct status endpoint path from /{kickoff_id}/status to /status/{kickoff_id}
2026-05-05 07:29:49 +08:00
Greyson LaLonde
e25f6538a8 fix(deps): bump gitpython to >=3.1.47 for GHSA-rpm5-65cw-6hj4
2026-05-04 23:44:28 +08:00
Greyson LaLonde
470d4035db docs: update changelog and version for v1.14.5a2 2026-05-04 23:04:56 +08:00
Greyson LaLonde
57d1b338f7 feat: bump versions to 1.14.5a2 2026-05-04 22:58:06 +08:00
huang yutong
01df19b029 fix(a2a): always restore task.output_pydantic in finally block
In `_execute_task_with_a2a` and its async variant, the try body
sets `task.output_pydantic = None` before returning an A2A
response. The finally block then checks
`if task.output_pydantic is not None` before restoring the
original value — but since it was just set to None, the condition
is always False and the original value is never restored. This
permanently mutates the Task object.

Remove the guard so `output_pydantic` is unconditionally restored,
matching the unconditional restoration of `description` and
`response_model` in the same block.

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-05-04 22:41:04 +08:00
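The bug is easy to reproduce with a stripped-down stand-in for the Task object (names simplified; not the real crewAI classes):

```python
class Task:
    """Minimal stand-in for crewai.Task, keeping only the field at issue."""

    def __init__(self, output_pydantic):
        self.output_pydantic = output_pydantic


def execute_task_with_a2a(task):
    original_output_pydantic = task.output_pydantic
    try:
        # The A2A path disables structured output before delegating.
        task.output_pydantic = None
        return "a2a-response"
    finally:
        # The buggy version guarded this with
        #   if task.output_pydantic is not None: ...
        # which is always False at this point, so the Task stayed mutated.
        # The fix restores unconditionally:
        task.output_pydantic = original_output_pydantic
```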
Rip&Tear
dca2c3160f chore: update security reporting instructions
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-05-04 22:31:35 +08:00
Greyson LaLonde
6494d68ffc fix(gemini): include thoughts_token_count in completion tokens 2026-05-04 21:03:38 +08:00
Greyson LaLonde
f579aa53ae fix: preserve task outputs across async batch flush 2026-05-04 20:24:24 +08:00
minasami-pr
a23e118b11 fix: forward kwargs to loader calls in CrewAIRagAdapter
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-05-04 19:52:24 +08:00
Greyson LaLonde
095f796922 fix: prevent result_as_answer from returning hook-block message as final answer 2026-05-04 19:42:07 +08:00
Zamuldinov Nikita
bfbdba426f fix: prevent result_as_answer from returning error as final answer
When a tool with result_as_answer=True raises an exception, the agent
was receiving result_as_answer=True and returning the error string as
the final answer. Now we set result_as_answer=False when an error event
is emitted, allowing the agent to reflect and retry.

Fixes crewAIInc/crewAI#5156

---------

Co-authored-by: NIK-TIGER-BILL <nik.tiger.bill@github.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-05-04 19:28:21 +08:00
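In outline, the change flips the flag when tool execution errors, roughly like this (illustrative names only, not the actual crewAI internals):

```python
def run_tool(tool, result_as_answer):
    """Execute a tool; on failure, drop result_as_answer so the agent
    can reflect on the error and retry instead of returning it verbatim."""
    try:
        return tool(), result_as_answer
    except Exception as exc:
        # An error event is emitted here in the real code; the key change
        # is returning result_as_answer=False alongside the error text.
        return f"Tool error: {exc}", False
```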
Greyson LaLonde
a058a3b15b fix(task): use acall for output conversion in async paths 2026-05-04 18:42:12 +08:00
Greyson LaLonde
184c228ae9 fix: prevent shared LLM stop words mutation across agents
2026-05-04 14:23:17 +08:00
Greyson LaLonde
c9100cb51d docs(devtools): document additional env vars
2026-05-03 14:50:44 +08:00
Greyson LaLonde
17e82743f6 fix: handle BaseModel input in convert_to_model 2026-05-03 14:17:03 +08:00
Lorenze Jay
3403f3cba9 docs: update changelog and version for v1.14.5a1 (#5678)
2026-05-01 14:27:57 -07:00
Lorenze Jay
5db72250b2 feat: bump versions to 1.14.5a1 (#5677)
* feat: bump versions to 1.14.5a1

* chore: update tool specifications

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-05-01 14:21:50 -07:00
Greyson LaLonde
a071838e92 fix(devtools): cover missing crewai pin sites in release flow 2026-05-02 03:26:56 +08:00
Tiago Freire
cd2b9ee38a feat(flow): add restore_from_state_id kickoff parameter (#5674)
## Summary

- Reverts `b0e2fda` ("fix(flow): add execution_id separate from state.id", COR-48): removes `Flow.execution_id` and points `current_flow_id` / `current_flow_request_id` back at `flow_id` (i.e. `state.id`). The separate per-run tracking id was no longer the right abstraction once `restore_from_state_id` reshapes how `state.id` is assigned;

- Adds an optional `restore_from_state_id` kwarg to `Flow.kickoff` / `Flow.kickoff_async` that hydrates state from a previously-persisted flow's latest snapshot

- Reassigns `state.id` to a fresh value (or `inputs["id"]` if pinned) so the new run's `@persist` writes don't extend the source's history

- Existing `inputs["id"]` resume, `@persist`, and `from_checkpoint` paths are unchanged

## Problem
`@persist` only supports *resume* today: `kickoff(inputs={"id": <uuid>})` hydrates state and continues writing under the same `flow_uuid`. There's no way to **fork** — hydrate from a snapshot but persist under a separate key, leaving the source's history intact. This PR adds that.

| | `state.id` after kickoff | `@persist` writes land under |
|---|---|---|
| `inputs["id"]` (resume) | supplied id | supplied id (extends history) |
| `restore_from_state_id` (fork) | fresh id, or `inputs["id"]` if pinned | new id (source preserved) |

## Behavior

| `inputs.id` | `restore_from_state_id` | Effect |
|---|---|---|
| — | — | Fresh kickoff |
| set | — | Existing resume |
| — | UUID | Fork — new `state.id`, hydrated from source |
| set | UUID | Fork into a pinned `state.id`, hydrated from source |

- Source not found → silent fallback (mirrors existing resume)
- Both `from_checkpoint` and `restore_from_state_id` set → `ValueError`
- `restore_from_state_id=None` → byte-identical to current main

## Design
Fork hydration runs before the existing `inputs` block in `kickoff_async`. On a hit, it calls the same `_restore_state` primitive used by resume, then overwrites `state.id` with a fresh UUID (or `inputs["id"]`). A `fork_succeeded` flag gates the existing `inputs["id"]` path so we don't double-load. `_completed_methods` / `_is_execution_resuming` are intentionally untouched — skip-completed-methods remains the territory of `apply_checkpoint` and `from_pending`.

## Test plan
- [ ] `pytest tests/test_flow_persistence.py` — 5 new tests (four-row matrix, not-found fallback, default no-op, conflict raise) + 6 existing as regression
- [ ] `pytest tests/test_flow.py` — broader flow suite
- [ ] Manual end-to-end against an HITL `@persist` flow
2026-05-01 11:46:07 -04:00
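Under the assumptions in the tables above, the resume/fork key handling can be sketched against a dict-backed store (the `storage` dict stands in for the `@persist` backend; the function shape is illustrative, not the actual `kickoff_async` code):

```python
import uuid

storage = {}  # flow_uuid -> latest persisted state snapshot


def kickoff(inputs=None, restore_from_state_id=None):
    inputs = inputs or {}
    state = {}
    fork_succeeded = False

    if restore_from_state_id is not None:
        snapshot = storage.get(restore_from_state_id)
        if snapshot is not None:  # source not found -> silent fallback
            state = dict(snapshot)  # hydrate from the source's snapshot
            # Fork: fresh id (or pinned inputs["id"]) so this run's writes
            # don't extend the source flow's history.
            state["id"] = inputs.get("id") or str(uuid.uuid4())
            fork_succeeded = True

    if not fork_succeeded and "id" in inputs:
        snapshot = storage.get(inputs["id"])
        if snapshot is not None:  # resume: same id, history extended
            state = dict(snapshot)
        state["id"] = inputs["id"]

    state.setdefault("id", str(uuid.uuid4()))
    storage[state["id"]] = state  # the @persist write
    return state
```

The `fork_succeeded` flag mirrors the PR's design: a successful fork gates the `inputs["id"]` path so state is never double-loaded.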
Ishan Goswami
07c4a30f2e feat(crewai-tools): add highlights to ExaSearchTool, rename from EXASearchTool
* feat(crewai-tools): add highlights to ExaSearchTool, rename from EXASearchTool

- Add a highlights init param so agents can get token-efficient excerpts instead of full pages
- Rename EXASearchTool to ExaSearchTool; keep EXASearchTool as a deprecated alias so existing imports keep working
- Update the docs and example to use highlights as the recommended option
- Add a small note that says Exa is the fastest and most accurate web search API
- Add tests for the new highlights param and the deprecation alias

* fix(crewai-tools): import order and module-level Exa for tests

- Reorder std-lib imports so ruff is happy with force-sort-within-sections.
- Import Exa at module level (with a fallback) so the existing test mocks resolve.
  The lazy install prompt still works if exa_py is missing.
- Allow content and summary to be a dict, matching highlights.
- Trim test file to the cases this PR introduces (highlights param and the
  EXASearchTool deprecation alias). Existing init-shape tests stay.

Co-Authored-By: ishan <ishan@exa.ai>

* chore(crewai-tools): drop self-explanatory comment on schema alias

Co-Authored-By: ishan <ishan@exa.ai>

* docs(crewai-tools): default highlights to True, drop summary from examples

Co-Authored-By: ishan <ishan@exa.ai>

* docs(crewai-tools): simplify highlights examples to highlights=True

Co-Authored-By: ishan <ishan@exa.ai>

* feat(crewai-tools): add x-exa-integration header for usage tracking

Co-Authored-By: ishan <ishan@exa.ai>

* docs(crewai-tools): add Exa MCP section and resources links

Co-Authored-By: ishan <ishan@exa.ai>

---------

Co-authored-by: ishan <ishan@exa.ai>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2026-05-01 21:25:23 +08:00
Lorenze Jay
b30fdbaa0e fix: ensure skills loading events for traces
2026-05-01 12:08:25 +08:00
Greyson LaLonde
898f860916 docs: update changelog and version for v1.14.4
2026-05-01 03:11:30 +08:00
Greyson LaLonde
2c0323c3fe feat: bump versions to 1.14.4 2026-05-01 02:57:37 +08:00
Greyson LaLonde
c580d428f0 chore(devtools): open PR for deployment test bump and wait for merge 2026-05-01 02:48:08 +08:00
Greyson LaLonde
70f391994e fix(converter): fall through when JSON regex match isn't valid JSON 2026-05-01 00:48:09 +08:00
Vini Brasil
864f0a8a91 Revert "feat(flow): support custom persistence key in @persist (#5649)" (#5668)
This reverts commit e2deac5575.
2026-04-30 12:04:57 -03:00
Greyson LaLonde
9f13235037 fix(llm): preserve tool_calls when response also contains text 2026-04-30 22:53:01 +08:00
89 changed files with 4821 additions and 800 deletions

.github/security.md vendored

@@ -5,7 +5,10 @@ CrewAI ecosystem.
### How to Report
Please submit reports to **crewai-vdp-ess@submit.bugcrowd.com**
Please submit reports through one of the following channels:
- **crewai-vdp-ess@submit.bugcrowd.com**
- https://security.crewai.com
- **Please do not** disclose vulnerabilities via public GitHub issues, pull requests,
or social media


@@ -26,7 +26,7 @@ mode: "wide"
</Step>
<Step title="Monitor Progress">
Use `GET /{kickoff_id}/status` to check execution status and retrieve results.
Use `GET /status/{kickoff_id}` to check execution status and retrieve results.
</Step>
</Steps>
@@ -65,7 +65,7 @@ https://your-crew-name.crewai.com
1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /{kickoff_id}/status` until completion
3. **Monitoring**: Poll `GET /status/{kickoff_id}` until completion
4. **Results**: Extract the final output from the completed response
## Error Handling


@@ -1,6 +1,6 @@
---
title: "GET /{kickoff_id}/status"
title: "GET /status/{kickoff_id}"
description: "Get execution status"
openapi: "/enterprise-api.en.yaml GET /{kickoff_id}/status"
openapi: "/enterprise-api.en.yaml GET /status/{kickoff_id}"
mode: "wide"
---


@@ -4,6 +4,99 @@ description: "Product updates, improvements, and fixes
icon: "clock"
mode: "wide"
---
<Update label="May 04, 2026">
## v1.14.5a2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.5a2)
## What's Changed
### Bug Fixes
- Fix task output restoration in finally block
- Include `thoughts_token_count` in completion tokens
- Preserve task outputs across async batch flush
- Forward kwargs to loader calls in `CrewAIRagAdapter`
- Prevent `result_as_answer` from returning hook-block message as final answer
- Prevent `result_as_answer` from returning error as final answer
- Use `acall` for output conversion in async paths
- Prevent shared LLM stop words mutation across agents
- Handle `BaseModel` input in `convert_to_model`
### Documentation
- Document additional environment variables
- Update changelog and version for v1.14.5a1
## Contributors
@NIK-TIGER-BILL, @greysonlalonde, @lorenzejay, @minasami-pr, @theCyberTech, @wishhyt
</Update>
<Update label="May 01, 2026">
## v1.14.5a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.5a1)
## What's Changed
### Features
- Add `restore_from_state_id` kickoff parameter
- Add highlights to ExaSearchTool and rename from EXASearchTool
### Bug Fixes
- Fix missing crewai pin sites in release flow
- Ensure skills loading events for traces
### Documentation
- Update changelog and version for v1.14.4
## Contributors
@akaKuruma, @github-actions[bot], @greysonlalonde, @lorenzejay, @theishangoswami
</Update>
<Update label="May 01, 2026">
## v1.14.4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.4)
## What's Changed
### Features
- Add support for custom persistence key in @persist
- Add Responses API support for Azure OpenAI provider
- Forward credential_scopes to Azure AI Inference client
- Add Vertex AI workload identity setup guide
- Add Tavily Research and get Research
- Add You.com MCP tools for search, research, and content extraction
### Bug Fixes
- Fix fall through when JSON regex match isn't valid JSON
- Fix to preserve tool_calls when response also contains text
- Fix to forward base_url and api_key to instructor.from_provider
- Fix to warn and return empty when native MCP server returns no tools
- Fix to use validated messages variable in non-streaming handlers
- Fix to guard crew chat description helpers against LLM failures
- Fix to reset messages and iterations between invocations
- Fix to forward trained-agents file through replay and test
- Fix to honor custom trained-agents file at inference
- Fix to bind task-only custom agents to the crew for multimodal input files
- Fix to serialize guardrail callables as null for JSON logging
- Rename force_final_answer to avoid self-referential prompting
- Bump litellm to fix SSTI; ignore unfixable pip CVE
### Documentation
- Update changelog and version for v1.14.4a1
- Add E2B Sandbox tools page
- Add Daytona sandbox tools documentation
## Contributors
@EdwardIrby, @dependabot[bot], @factory-droid-oss, @factory-droid[bot], @greysonlalonde, @kunalk16, @lorenzejay, @lucasgomide, @manisrinivasan2k1, @mattatcha, @vinibrsl
</Update>
<Update label="April 29, 2026">
## v1.14.4a1


@@ -380,32 +380,41 @@ class AnotherFlow(Flow[dict]):
print("Method-level persisted runs:", self.state["runs"])
```
### Custom persistence key
### Forking persisted state
By default, `@persist` uses the auto-generated `state.id` field as the persistence key. If your flow has its own identifier, such as a `conversation_id` shared across several sessions, you can pass the `key` argument so `@persist` uses that attribute as the flow UUID:
`@persist` supports two distinct hydration modes in `kickoff` / `kickoff_async`:
- `kickoff(inputs={"id": <uuid>})` (**resume**): loads the latest snapshot for the supplied UUID and keeps writing under the same `flow_uuid`. History is extended.
- `kickoff(restore_from_state_id=<uuid>)` (**fork**): loads the latest snapshot for the supplied UUID, hydrates the new run's state from it, then assigns a fresh `state.id` (auto-generated, or `inputs["id"]` if pinned). The new run's `@persist` writes land under the new `state.id`; the source flow's history is preserved.
```python
from crewai.flow.flow import Flow, listen, start
from crewai.flow.flow import Flow, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class ConversationState(BaseModel):
    conversation_id: str
    turn: int = 0
class CounterState(BaseModel):
    id: str = ""
    counter: int = 0
@persist(key="conversation_id")  # use a custom field as the persistence key
class ConversationFlow(Flow[ConversationState]):
@persist
class CounterFlow(Flow[CounterState]):
    @start()
    def begin(self):
        self.state.turn += 1
        print(f"Conversation {self.state.conversation_id} turn {self.state.turn}")
    def step(self):
        self.state.counter += 1
        print(f"[id={self.state.id}] counter={self.state.counter}")
# Rerunning the conversation with the same conversation_id reloads the previous state
flow = ConversationFlow(conversation_id="user-42")
flow.kickoff()
# Run 1: fresh state, counter 0 -> 1, persisted under flow_1.state.id
flow_1 = CounterFlow()
flow_1.kickoff()
# Fork: hydrate from flow_1's latest snapshot, but under a new state.id
flow_2 = CounterFlow()
flow_2.kickoff(restore_from_state_id=flow_1.state.id)
# flow_2.state.counter starts at 1 (hydrated), then step() bumps it to 2.
# flow_2.state.id != flow_1.state.id; flow_1's history is unchanged.
```
The decorator reads the value from `state[key]` for dict states and from `getattr(state, key)` for Pydantic / object states. If the specified attribute is missing or falsy at save time, `@persist` raises a `ValueError` such as `Flow state is missing required persistence key 'conversation_id'`. When `key` is omitted, the original behavior remains and `state.id` is used.
If the supplied `restore_from_state_id` does not match any persisted state, kickoff silently falls back to the default behavior (mirroring the existing resume path). Combining `restore_from_state_id` with `from_checkpoint` raises a `ValueError`; pick one hydration source. Pinning `inputs["id"]` while forking shares the persistence key with another flow; usually you want `restore_from_state_id` alone.
### How it works


@@ -146,15 +146,14 @@ class ProductionFlow(Flow[AppState]):
# ...
```
By default, `@persist` uses the auto-generated `state.id` field as the key for the saved state. If your application already owns a natural identifier, such as a `conversation_id` tying several runs to the same user session, pass it as `key` so the decorator uses it as the flow UUID. A `ValueError` is raised if the specified attribute is missing or falsy at save time.
By default, `@persist` resumes a flow when `kickoff(inputs={"id": <uuid>})` is supplied, extending the same `flow_uuid` history. To **fork** a persisted flow into a new lineage (hydrating state from a previous run but writing under a fresh `state.id`), pass `restore_from_state_id`:
```python
@persist(key="conversation_id")
class ProductionFlow(Flow[AppState]):
    # AppState must contain conversation_id; resuming the session reloads the previous state
    ...
flow.kickoff(restore_from_state_id="<previous-run-state-id>")
```
The new run gets a fresh `state.id` (auto-generated, or `inputs["id"]` if pinned) so its `@persist` writes don't extend the source's history. Combining with `from_checkpoint` raises a `ValueError`; pick one hydration source.
## Summary
- **Start with a flow.**


@@ -133,7 +133,7 @@ crew.kickoff()
| **DirectorySearchTool** | A RAG tool for searching within directories, useful for navigating file systems. |
| **DOCXSearchTool** | A RAG tool for searching DOCX documents, ideal for processing Word files. |
| **DirectoryReadTool** | Facilitates reading and processing of directory structures and their contents. |
| **EXASearchTool** | A tool designed to perform exhaustive searches across diverse data sources. |
| **ExaSearchTool** | A tool designed to perform exhaustive searches across diverse data sources. |
| **FileReadTool** | Enables reading and extracting data from files, supporting various file formats. |
| **FirecrawlSearchTool** | A tool to search webpages using Firecrawl and return the results. |
| **FirecrawlCrawlWebsiteTool** | A tool for crawling webpages using Firecrawl. |


@@ -116,32 +116,47 @@ class PersistentCounterFlow(Flow[CounterState]):
return self.state.value
```
### Using a custom persistence key
#### Forking persisted state
By default, `@persist()` uses the auto-generated `state.id` field as the key for the saved state. When your domain already has a natural identifier, such as a `conversation_id` tying several flow runs to the same user session, pass it as the `key` argument so `@persist` uses it as the flow UUID instead of `id`:
`@persist` supports two distinct hydration modes in `kickoff` / `kickoff_async`. Use **resume** (`inputs["id"]`) to continue the same lineage; use **fork** (`restore_from_state_id`) to start a new lineage from a snapshot:
| | `state.id` after kickoff | `@persist` writes land under |
|---|---|---|
| `inputs["id"]` (resume) | supplied id | supplied id (extends history) |
| `restore_from_state_id` (fork) | fresh id, or `inputs["id"]` if pinned | new id (source preserved) |
```python
from crewai.flow.flow import Flow, listen, start
from crewai.flow.flow import Flow, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class ConversationState(BaseModel):
    conversation_id: str
    history: list[str] = []
class CounterState(BaseModel):
    id: str = ""
    counter: int = 0
@persist(key="conversation_id")
class ConversationFlow(Flow[ConversationState]):
@persist
class CounterFlow(Flow[CounterState]):
    @start()
    def greet(self):
        self.state.history.append("hello")
        return self.state.history
    def step(self):
        self.state.counter += 1
# A second run with the same conversation_id reloads the previous state
flow = ConversationFlow(conversation_id="user-42")
flow.kickoff()
# Run 1: fresh state, counter 0 -> 1
flow_1 = CounterFlow()
flow_1.kickoff()
# Fork: hydrate from flow_1's latest snapshot, but write under a new state.id
flow_2 = CounterFlow()
flow_2.kickoff(restore_from_state_id=flow_1.state.id)
# flow_2 starts with counter=1 (hydrated), then step() bumps it to 2.
# flow_1's flow_uuid history is unchanged.
```
For dict states `@persist` reads the value from `state[key]`, and for Pydantic / object states from `getattr(state, key)`. If the specified attribute is missing or falsy when the state is saved, `@persist` raises a `ValueError` such as `Flow state is missing required persistence key 'conversation_id'`, so the failure surfaces immediately instead of silently losing persistence data. Calling `@persist()` without `key` keeps the original behavior and uses `state.id`.
Behavior notes:
- `restore_from_state_id` not found in persistence → kickoff silently falls back to the default behavior (mirroring `inputs["id"]` when not found). No exception is raised.
- Combining `restore_from_state_id` with `from_checkpoint` raises a `ValueError`: they target two different state systems (`@persist` vs. checkpointing) and cannot be combined.
- `restore_from_state_id=None` (the default) is byte-identical to a kickoff without the parameter.
- Pinning `inputs["id"]` while forking means the new run shares its persistence key with another flow; usually you want `restore_from_state_id` alone.
## Advanced state patterns


@@ -1,11 +1,11 @@
---
title: "Exa Search Tool"
description: "Search the web with the Exa Search API to find the most relevant results for any query, with options for full page content, highlights, and summaries."
description: "Search the web with the Exa Search API to find the most relevant results for any query, with options for full page content and highlights."
icon: "magnifying-glass"
mode: "wide"
---
The `EXASearchTool` lets CrewAI agents search the web using the [Exa](https://exa.ai/) search API. It returns the most relevant results for any query, with options for full page content and AI-generated summaries.
The `ExaSearchTool` lets CrewAI agents search the web using the [Exa](https://exa.ai/) search API. It returns the most relevant results for any query, with options for full page content and token-efficient highlights.
## Installation
@@ -27,15 +27,15 @@ export EXA_API_KEY='your_exa_api_key'
## Usage Example
Here's how to use the `EXASearchTool` with a CrewAI agent:
Here's how to use the `ExaSearchTool` with a CrewAI agent:
```python
import os
from crewai import Agent, Task, Crew
from crewai_tools import EXASearchTool
from crewai_tools import ExaSearchTool
# Initialize the tool
exa_tool = EXASearchTool()
exa_tool = ExaSearchTool()
# Create an agent that uses the tool
researcher = Agent(
@@ -66,11 +66,11 @@ print(result)
## Configuration Options
The `EXASearchTool` accepts the following parameters at initialization:
The `ExaSearchTool` accepts the following parameters at initialization:
- `type` (str, optional): The search type to use. Defaults to `"auto"`. Options: `"auto"`, `"instant"`, `"fast"`, `"deep"`.
- `highlights` (bool or dict, optional): Return token-efficient excerpts most relevant to the query instead of the full page. Defaults to `True`. Pass a dict such as `{"max_characters": 4000}` to configure, or `False` to disable.
- `content` (bool, optional): Whether to include full page content in the results. Defaults to `False`.
- `summary` (bool, optional): Whether to include AI-generated summaries for each result. Requires `content=True`. Defaults to `False`.
- `api_key` (str, optional): Your Exa API key. Falls back to the `EXA_API_KEY` environment variable if not provided.
- `base_url` (str, optional): A custom API server URL. Falls back to the `EXA_BASE_URL` environment variable if not provided.
@@ -86,25 +86,52 @@ print(result)
You can configure the tool with custom parameters for richer results:
```python
# Get full page content with AI summaries
exa_tool = EXASearchTool(
content=True,
summary=True,
# Use 'deep' for thorough, multi-step searches
exa_tool = ExaSearchTool(
highlights=True,
type="deep"
)
# Use it in an agent
agent = Agent(
role="Deep Researcher",
goal="Conduct thorough research with full content and summaries",
goal="Conduct thorough research",
tools=[exa_tool]
)
```
## Using Exa via MCP
You can also connect your agent to Exa's hosted MCP server. Pass your API key via the `x-api-key` header:
```python
from crewai import Agent
from crewai.mcp import MCPServerHTTP
agent = Agent(
role="Research Analyst",
goal="Find and analyze information on the web",
backstory="Expert researcher with access to Exa's tools",
mcps=[
MCPServerHTTP(
url="https://mcp.exa.ai/mcp",
headers={"x-api-key": "YOUR_EXA_API_KEY"},
),
],
)
```
Get an API key from the [Exa dashboard](https://dashboard.exa.ai/api-keys). For more on MCP in CrewAI, see the [MCP overview](/ar/mcp/overview).
## Features
- **Token-efficient highlights**: Get the most relevant excerpts from each result, using far fewer tokens than the full text
- **Semantic search**: Find results based on meaning, not just keywords
- **Full content retrieval**: Get the complete text of webpages along with search results
- **AI summaries**: Get concise AI-generated summaries for each result
- **Date filtering**: Restrict results to specific time ranges using published-date filters
- **Domain filtering**: Restrict searches to specific domains
## Resources
- [Exa documentation](https://exa.ai/docs)
- [Exa dashboard (API key and usage management)](https://dashboard.exa.ai)

File diff suppressed because it is too large


@@ -26,7 +26,7 @@ Welcome to the CrewAI AMP API reference. This API allows you to programmatically
</Step>
<Step title="Monitor Progress">
Use `GET /{kickoff_id}/status` to check execution status and retrieve results.
Use `GET /status/{kickoff_id}` to check execution status and retrieve results.
</Step>
</Steps>
@@ -65,7 +65,7 @@ Replace `your-crew-name` with your actual crew's URL from the dashboard.
1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /{kickoff_id}/status` until completion
3. **Monitoring**: Poll `GET /status/{kickoff_id}` until completion
4. **Results**: Extract the final output from the completed response
## Error Handling


@@ -1,6 +1,6 @@
---
title: "GET /{kickoff_id}/status"
title: "GET /status/{kickoff_id}"
description: "Get execution status"
openapi: "/enterprise-api.en.yaml GET /{kickoff_id}/status"
openapi: "/enterprise-api.en.yaml GET /status/{kickoff_id}"
mode: "wide"
---


@@ -4,6 +4,99 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="May 04, 2026">
## v1.14.5a2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.5a2)
## What's Changed
### Bug Fixes
- Fix task output restoration in finally block
- Include `thoughts_token_count` in completion tokens
- Preserve task outputs across async batch flush
- Forward kwargs to loader calls in `CrewAIRagAdapter`
- Prevent `result_as_answer` from returning hook-block message as final answer
- Prevent `result_as_answer` from returning error as final answer
- Use `acall` for output conversion in async paths
- Prevent shared LLM stop words mutation across agents
- Handle `BaseModel` input in `convert_to_model`
### Documentation
- Document additional environment variables
- Update changelog and version for v1.14.5a1
## Contributors
@NIK-TIGER-BILL, @greysonlalonde, @lorenzejay, @minasami-pr, @theCyberTech, @wishhyt
</Update>
<Update label="May 01, 2026">
## v1.14.5a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.5a1)
## What's Changed
### Features
- Add `restore_from_state_id` kickoff parameter
- Add highlights to ExaSearchTool and rename from EXASearchTool
### Bug Fixes
- Fix missing crewai pin sites in release flow
- Ensure skills loading events for traces
### Documentation
- Update changelog and version for v1.14.4
## Contributors
@akaKuruma, @github-actions[bot], @greysonlalonde, @lorenzejay, @theishangoswami
</Update>
<Update label="May 01, 2026">
## v1.14.4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.4)
## What's Changed
### Features
- Add support for custom persistence key in @persist
- Add Responses API support for Azure OpenAI provider
- Forward credential_scopes to Azure AI Inference client
- Add Vertex AI workload identity setup guide
- Add Tavily Research and Get Research
- Add You.com MCP tools for search, research, and content extraction
### Bug Fixes
- Fix fall through when JSON regex match isn't valid JSON
- Fix to preserve tool_calls when response also contains text
- Fix to forward base_url and api_key to instructor.from_provider
- Fix to warn and return empty when native MCP server returns no tools
- Fix to use validated messages variable in non-streaming handlers
- Fix to guard crew chat description helpers against LLM failures
- Fix to reset messages and iterations between invocations
- Fix to forward trained-agents file through replay and test
- Fix to honor custom trained-agents file at inference
- Fix to bind task-only agents to crew for multimodal input_files
- Fix to serialize guardrail callables as null for JSON checkpointing
- Fix renaming of force_final_answer to avoid self-referential router
- Fix bump of litellm for SSTI fix; ignore unfixable pip CVE
### Documentation
- Update changelog and version for v1.14.4a1
- Add E2B Sandbox Tools page
- Add Daytona sandbox tools documentation
## Contributors
@EdwardIrby, @dependabot[bot], @factory-droid-oss, @factory-droid[bot], @greysonlalonde, @kunalk16, @lorenzejay, @lucasgomide, @manisrinivasan2k1, @mattatcha, @vinibrsl
</Update>
<Update label="Apr 29, 2026">
## v1.14.4a1


@@ -380,32 +380,41 @@ class AnotherFlow(Flow[dict]):
print("Method-level persisted runs:", self.state["runs"])
```
### Custom Persistence Key
### Forking Persisted State
By default, `@persist` uses the auto-generated `state.id` field as the persistence key. If your flow models its own identifier — for example a `conversation_id` shared across sessions — you can pass a `key` argument and `@persist` will use that attribute as the flow UUID instead:
`@persist` supports two distinct hydration modes on `kickoff` / `kickoff_async`:
- `kickoff(inputs={"id": <uuid>})` — **resume**: load the latest snapshot for the supplied UUID and continue writing under the same `flow_uuid`. The history extends.
- `kickoff(restore_from_state_id=<uuid>)` — **fork**: load the latest snapshot for the supplied UUID, hydrate the new run's state from it, and assign a fresh `state.id` (auto-generated, or `inputs["id"]` if pinned). The new run's `@persist` writes land under the new `state.id`; the source flow's history is preserved.
```python
from crewai.flow.flow import Flow, listen, start
from crewai.flow.flow import Flow, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class ConversationState(BaseModel):
conversation_id: str
turn: int = 0
class CounterState(BaseModel):
id: str = ""
counter: int = 0
@persist(key="conversation_id") # Use a custom field as the persistence key
class ConversationFlow(Flow[ConversationState]):
@persist
class CounterFlow(Flow[CounterState]):
@start()
def begin(self):
self.state.turn += 1
print(f"Conversation {self.state.conversation_id} turn {self.state.turn}")
def step(self):
self.state.counter += 1
print(f"[id={self.state.id}] counter={self.state.counter}")
# Resuming the same conversation reloads its prior state by conversation_id
flow = ConversationFlow(conversation_id="user-42")
flow.kickoff()
# Run 1: fresh state, counter 0 -> 1, persisted under flow_1.state.id
flow_1 = CounterFlow()
flow_1.kickoff()
# Fork: hydrate from flow_1's latest snapshot, but use a NEW state.id
flow_2 = CounterFlow()
flow_2.kickoff(restore_from_state_id=flow_1.state.id)
# flow_2.state.counter starts at 1 (hydrated), then step() bumps it to 2.
# flow_2.state.id != flow_1.state.id; flow_1's history is unchanged.
```
The decorator reads the value at `state[key]` for dict states, or `getattr(state, key)` for Pydantic / object states. If the named attribute is missing or falsy at save time, `@persist` raises a `ValueError` such as `Flow state is missing required persistence key 'conversation_id'`. When `key` is omitted, the existing behavior is preserved and `state.id` is used.
If the supplied `restore_from_state_id` does not match any persisted state, the kickoff falls back silently — same as the existing `inputs["id"]` resume not-found behavior. Combining `restore_from_state_id` with `from_checkpoint` raises a `ValueError`; pick one hydration source. Pinning `inputs["id"]` while forking shares a persistence key with another flow — usually you want only `restore_from_state_id`.
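The resume/fork split can be modeled with a small in-memory store. This is an illustrative sketch of the semantics described above, not CrewAI's actual persistence classes; `ToyPersistence` and its method names are invented for the example.

```python
import copy
import uuid

class ToyPersistence:
    """Illustrative in-memory model of resume vs. fork (not CrewAI internals)."""

    def __init__(self):
        self.history = {}  # flow_uuid -> list of saved state snapshots

    def kickoff(self, inputs=None, restore_from_state_id=None):
        inputs = inputs or {}
        if "id" in inputs:
            # Resume: keep writing under the supplied id; its history extends.
            flow_uuid = inputs["id"]
            snapshots = self.history.get(flow_uuid, [])
            state = copy.deepcopy(snapshots[-1]) if snapshots else {"counter": 0}
        elif restore_from_state_id in self.history:
            # Fork: hydrate from the source's latest snapshot, new lineage.
            flow_uuid = str(uuid.uuid4())
            state = copy.deepcopy(self.history[restore_from_state_id][-1])
        else:
            # Fresh run (or restore id not found: silent fallback).
            flow_uuid = str(uuid.uuid4())
            state = {"counter": 0}
        state["counter"] += 1  # stand-in for the flow's @start() method
        self.history.setdefault(flow_uuid, []).append(copy.deepcopy(state))
        return flow_uuid, state

store = ToyPersistence()
run_1, state_1 = store.kickoff()                             # fresh: counter 1
run_2, state_2 = store.kickoff(restore_from_state_id=run_1)  # fork: counter 2, new id
run_3, state_3 = store.kickoff(inputs={"id": run_1})         # resume: counter 2, same id
```

Resume keeps appending snapshots under the supplied id, while fork copies the latest snapshot into a brand-new lineage, so the source history stays length 1 immediately after the fork.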
### How It Works


@@ -146,15 +146,14 @@ class ProductionFlow(Flow[AppState]):
# ...
```
By default `@persist` keys saved state by the auto-generated `state.id`. If your application already has a natural identifier — for example a `conversation_id` that ties multiple runs to the same user session — pass it as `key` and the decorator will use that attribute as the flow UUID. A `ValueError` is raised if the named attribute is missing or falsy at save time.
By default, `@persist` resumes a flow when `kickoff(inputs={"id": <uuid>})` is supplied, extending the same `flow_uuid` history. To **fork** a persisted flow into a new lineage — hydrate state from a previous run but write under a fresh `state.id` — pass `restore_from_state_id`:
```python
@persist(key="conversation_id")
class ProductionFlow(Flow[AppState]):
# AppState must expose conversation_id; resuming a session reloads its prior state
...
flow.kickoff(restore_from_state_id="<previous-run-state-id>")
```
The new run gets a fresh `state.id` (auto-generated, or `inputs["id"]` if pinned) so its `@persist` writes don't extend the source's history. Combining with `from_checkpoint` raises a `ValueError`; pick one hydration source.
## Summary
- **Start with a Flow.**


@@ -133,7 +133,7 @@ Here is a list of the available tools and their descriptions:
| **DirectorySearchTool** | A RAG tool for searching within directories, useful for navigating through file systems. |
| **DOCXSearchTool** | A RAG tool aimed at searching within DOCX documents, ideal for processing Word files. |
| **DirectoryReadTool** | Facilitates reading and processing of directory structures and their contents. |
| **EXASearchTool** | A tool designed for performing exhaustive searches across various data sources. |
| **ExaSearchTool** | Search the web with Exa, the fastest and most accurate web search API. Supports token-efficient highlights and full page content. |
| **FileReadTool** | Enables reading and extracting data from files, supporting various file formats. |
| **FirecrawlSearchTool** | A tool to search webpages using Firecrawl and return the results. |
| **FirecrawlCrawlWebsiteTool** | A tool for crawling webpages using Firecrawl. |


@@ -346,32 +346,47 @@ class SelectivePersistFlow(Flow):
return f"Complete with count {self.state['count']}"
```
#### Using a Custom Persistence Key
#### Forking Persisted State
By default, `@persist()` keys persisted state by the flow's auto-generated `state.id`. When your domain already has a natural identifier — for example a `conversation_id` that ties multiple flow runs to the same user session — pass it as the `key` argument and `@persist` will use that attribute as the flow UUID instead of `id`:
`@persist` supports two distinct hydration modes on `kickoff` / `kickoff_async`. Use **resume** (`inputs["id"]`) to continue the same lineage; use **fork** (`restore_from_state_id`) to start a new lineage seeded from a snapshot:
| | `state.id` after kickoff | `@persist` writes land under |
|---|---|---|
| `inputs["id"]` (resume) | supplied id | supplied id (extends history) |
| `restore_from_state_id` (fork) | fresh id, or `inputs["id"]` if pinned | new id (source preserved) |
```python
from crewai.flow.flow import Flow, listen, start
from crewai.flow.flow import Flow, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class ConversationState(BaseModel):
conversation_id: str
history: list[str] = []
class CounterState(BaseModel):
id: str = ""
counter: int = 0
@persist(key="conversation_id")
class ConversationFlow(Flow[ConversationState]):
@persist
class CounterFlow(Flow[CounterState]):
@start()
def greet(self):
self.state.history.append("hello")
return self.state.history
def step(self):
self.state.counter += 1
# A second run with the same conversation_id reloads the prior state
flow = ConversationFlow(conversation_id="user-42")
flow.kickoff()
# Run 1: fresh state, counter 0 -> 1
flow_1 = CounterFlow()
flow_1.kickoff()
# Fork: hydrate from flow_1's latest snapshot, but write under a NEW state.id
flow_2 = CounterFlow()
flow_2.kickoff(restore_from_state_id=flow_1.state.id)
# flow_2 starts with counter=1 (hydrated), then step() bumps it to 2.
# flow_1's flow_uuid history is unchanged.
```
For dict-based states `@persist` reads `state[key]`, and for Pydantic / object states it reads `getattr(state, key)`. If the named attribute is missing or falsy when state is being saved, `@persist` raises a `ValueError` like `Flow state is missing required persistence key 'conversation_id'`, so the failure surfaces immediately rather than silently dropping persisted data. Calling `@persist()` without `key` keeps the original behavior of using `state.id`.
Behavior notes:
- `restore_from_state_id` not found in persistence → the kickoff falls back silently to default behavior (mirrors the existing `inputs["id"]` resume not-found behavior). No exception is raised.
- Combining `restore_from_state_id` with `from_checkpoint` raises a `ValueError` — they target different state systems (`@persist` vs. Checkpointing) and cannot be combined.
- `restore_from_state_id=None` (default) is byte-identical to a kickoff without the parameter.
- Pinning `inputs["id"]` while forking means the new run shares a persistence key with another flow — usually you want only `restore_from_state_id`.
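The behavior notes above boil down to a small selection rule at kickoff time. A minimal sketch, with an invented function name and error message (the real implementation lives inside CrewAI's kickoff path):

```python
def resolve_hydration(saved_states, inputs=None, restore_from_state_id=None,
                      from_checkpoint=None):
    """Pick at most one hydration source, mirroring the notes above."""
    if restore_from_state_id is not None and from_checkpoint is not None:
        # The two parameters target different state systems and cannot mix.
        raise ValueError("pick one hydration source: "
                         "restore_from_state_id or from_checkpoint")
    if restore_from_state_id is not None:
        # Not found -> silent fallback to a fresh state (no exception).
        return saved_states.get(restore_from_state_id)
    inputs = inputs or {}
    if "id" in inputs:
        # Resume path: same silent fallback when the id is unknown.
        return saved_states.get(inputs["id"])
    return None  # default kickoff: no hydration

saved = {"abc": {"counter": 3}}
hydrated = resolve_hydration(saved, restore_from_state_id="abc")
missing = resolve_hydration(saved, restore_from_state_id="not-there")
```

The key property is that only the combined-sources case raises; a missing snapshot never does, matching the resume not-found behavior.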
## Advanced State Patterns


@@ -1,11 +1,11 @@
---
title: "Exa Search Tool"
description: "Search the web using the Exa Search API to find the most relevant results for any query, with options for full page content, highlights, and summaries."
description: "Search the web with Exa, the fastest and most accurate web search API. Get token-efficient highlights and full page content."
icon: "magnifying-glass"
mode: "wide"
---
The `EXASearchTool` lets CrewAI agents search the web using the [Exa](https://exa.ai/) search API. It returns the most relevant results for any query, with options for full page content and AI-generated summaries.
The `ExaSearchTool` lets CrewAI agents search the web using [Exa](https://exa.ai/), the fastest and most accurate web search API. It returns the most relevant results for any query, with options for token-efficient highlights and full page content.
## Installation
@@ -27,15 +27,15 @@ Get an API key from the [Exa dashboard](https://dashboard.exa.ai/api-keys).
## Example Usage
Here's how to use the `EXASearchTool` within a CrewAI agent:
Here's how to use the `ExaSearchTool` within a CrewAI agent:
```python
import os
from crewai import Agent, Task, Crew
from crewai_tools import EXASearchTool
from crewai_tools import ExaSearchTool
# Initialize the tool
exa_tool = EXASearchTool()
exa_tool = ExaSearchTool()
# Create an agent that uses the tool
researcher = Agent(
@@ -66,11 +66,11 @@ print(result)
## Configuration Options
The `EXASearchTool` accepts the following parameters during initialization:
The `ExaSearchTool` accepts the following parameters during initialization:
- `type` (str, optional): The search type to use. Defaults to `"auto"`. Options: `"auto"`, `"instant"`, `"fast"`, `"deep"`.
- `highlights` (bool or dict, optional): Return token-efficient excerpts most relevant to the query instead of the full page. Defaults to `True`. Pass a dict like `{"max_characters": 4000}` to configure, or `False` to disable.
- `content` (bool, optional): Whether to include full page content in results. Defaults to `False`.
- `summary` (bool, optional): Whether to include AI-generated summaries of each result. Requires `content=True`. Defaults to `False`.
- `api_key` (str, optional): Your Exa API key. Falls back to the `EXA_API_KEY` environment variable if not provided.
- `base_url` (str, optional): Custom API server URL. Falls back to the `EXA_BASE_URL` environment variable if not provided.
@@ -83,28 +83,70 @@ When calling the tool (or when an agent invokes it), the following search parame
## Advanced Usage
You can configure the tool with custom parameters for richer results:
For most agent workflows we recommend `highlights` — it returns the most relevant excerpts from each result and uses far fewer tokens than full page content:
```python
# Get full page content with AI summaries
exa_tool = EXASearchTool(
content=True,
summary=True,
type="deep"
# Get token-efficient excerpts most relevant to the query
exa_tool = ExaSearchTool(
highlights=True,
type="auto",
)
# Use it in an agent
agent = Agent(
role="Deep Researcher",
goal="Conduct thorough research with full content and summaries",
role="Researcher",
goal="Answer questions with current web data",
tools=[exa_tool]
)
```
For thorough, multi-step searches, use `type="deep"`:
```python
exa_tool = ExaSearchTool(
highlights=True,
type="deep",
)
```
For more on choosing between highlights and full content, see the [Exa search best practices](https://exa.ai/docs/reference/search-best-practices).
## Using Exa via MCP
You can also connect your agent to Exa's hosted MCP server. Pass your API key with the `x-api-key` header:
```python
from crewai import Agent
from crewai.mcp import MCPServerHTTP
agent = Agent(
role="Research Analyst",
goal="Find and analyze information on the web",
backstory="Expert researcher with access to Exa's tools",
mcps=[
MCPServerHTTP(
url="https://mcp.exa.ai/mcp",
headers={"x-api-key": "YOUR_EXA_API_KEY"},
),
],
)
```
Get your API key from the [Exa dashboard](https://dashboard.exa.ai/api-keys). For more on MCP in CrewAI, see the [MCP overview](/en/mcp/overview).
## Features
- **Token-Efficient Highlights**: Get the most relevant excerpts from each result, ~10x fewer tokens than full text
- **Semantic Search**: Find results based on meaning, not just keywords
- **Full Content Retrieval**: Get the full text of web pages alongside search results
- **AI Summaries**: Get concise, AI-generated summaries of each result
- **Date Filtering**: Limit results to specific time periods with published date filters
- **Domain Filtering**: Restrict searches to specific domains
<Note>
`EXASearchTool` is a deprecated alias for `ExaSearchTool`. Existing imports continue to work but will emit a deprecation warning; please migrate to `ExaSearchTool`.
</Note>
## Resources
- [Exa documentation](https://exa.ai/docs)
- [Exa dashboard — manage API keys and usage](https://dashboard.exa.ai)


@@ -35,7 +35,7 @@ info:
1. **Discover inputs** using `GET /inputs`
2. **Start execution** using `POST /kickoff`
3. **Monitor progress** using `GET /{kickoff_id}/status`
3. **Monitor progress** using `GET /status/{kickoff_id}`
version: 1.0.0
contact:
name: CrewAI Support
@@ -207,7 +207,7 @@ paths:
"500":
$ref: "#/components/responses/ServerError"
/{kickoff_id}/status:
/status/{kickoff_id}:
get:
summary: Get Execution Status
description: |


@@ -35,7 +35,7 @@ info:
1. **Discover inputs** using `GET /inputs`
2. **Start execution** using `POST /kickoff`
3. **Monitor progress** using `GET /{kickoff_id}/status`
3. **Monitor progress** using `GET /status/{kickoff_id}`
version: 1.0.0
contact:
name: CrewAI Support
@@ -207,7 +207,7 @@ paths:
"500":
$ref: "#/components/responses/ServerError"
/{kickoff_id}/status:
/status/{kickoff_id}:
get:
summary: Get Execution Status
description: |


@@ -84,7 +84,7 @@ paths:
'500':
$ref: '#/components/responses/ServerError'
/{kickoff_id}/status:
/status/{kickoff_id}:
get:
summary: Get Execution Status
description: |


@@ -35,7 +35,7 @@ info:
1. **Discover inputs** using `GET /inputs`
2. **Start execution** using `POST /kickoff`
3. **Monitor progress** using `GET /{kickoff_id}/status`
3. **Monitor progress** using `GET /status/{kickoff_id}`
version: 1.0.0
contact:
name: CrewAI Support
@@ -120,7 +120,7 @@ paths:
"500":
$ref: "#/components/responses/ServerError"
/{kickoff_id}/status:
/status/{kickoff_id}:
get:
summary: Get Execution Status
description: |


@@ -26,7 +26,7 @@ Welcome to the CrewAI Enterprise API reference.
</Step>
<Step title="Monitor Progress">
Use `GET /{kickoff_id}/status` to check execution status and retrieve results.
Use `GET /status/{kickoff_id}` to check execution status and retrieve results.
</Step>
</Steps>
@@ -65,7 +65,7 @@ https://your-crew-name.crewai.com
1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /{kickoff_id}/status` until completion
3. **Monitoring**: Poll `GET /status/{kickoff_id}` until completion
4. **Results**: Extract the final output from the completed response
## Error Handling


@@ -1,6 +1,6 @@
---
title: "GET /{kickoff_id}/status"
title: "GET /status/{kickoff_id}"
description: "Get execution status"
openapi: "/enterprise-api.ko.yaml GET /{kickoff_id}/status"
openapi: "/enterprise-api.ko.yaml GET /status/{kickoff_id}"
mode: "wide"
---


@@ -4,6 +4,99 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="May 04, 2026">
## v1.14.5a2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.5a2)
## What's Changed
### Bug Fixes
- Fix task output restoration in finally block
- Include `thoughts_token_count` in completion tokens
- Preserve task outputs across async batch flush
- Forward kwargs to loader calls in `CrewAIRagAdapter`
- Prevent `result_as_answer` from returning hook-block message as final answer
- Prevent `result_as_answer` from returning error as final answer
- Use `acall` for output conversion in async paths
- Prevent shared LLM stop words mutation across agents
- Handle `BaseModel` input in `convert_to_model`
### Documentation
- Document additional environment variables
- Update changelog and version for v1.14.5a1
## Contributors
@NIK-TIGER-BILL, @greysonlalonde, @lorenzejay, @minasami-pr, @theCyberTech, @wishhyt
</Update>
<Update label="May 01, 2026">
## v1.14.5a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.5a1)
## What's Changed
### Features
- Add `restore_from_state_id` kickoff parameter
- Add highlights to ExaSearchTool and rename from EXASearchTool
### Bug Fixes
- Fix missing crewai pin sites in release flow
- Ensure skills loading events for traces
### Documentation
- Update changelog and version for v1.14.4
## Contributors
@akaKuruma, @github-actions[bot], @greysonlalonde, @lorenzejay, @theishangoswami
</Update>
<Update label="May 01, 2026">
## v1.14.4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.4)
## What's Changed
### Features
- Add support for custom persistence key in @persist
- Add Responses API support for Azure OpenAI provider
- Forward credential_scopes to Azure AI Inference client
- Add Vertex AI workload identity setup guide
- Add Tavily Research and Get Research
- Add You.com MCP tools for search, research, and content extraction
### Bug Fixes
- Fix fall through when JSON regex match isn't valid JSON
- Fix to preserve tool_calls when response also contains text
- Fix to forward base_url and api_key to instructor.from_provider
- Fix to warn and return empty when native MCP server returns no tools
- Fix to use validated messages variable in non-streaming handlers
- Fix to guard crew chat description helpers against LLM failures
- Fix to reset messages and iterations between invocations
- Fix to forward trained-agents file through replay and test
- Fix to honor custom trained-agents file at inference
- Fix to bind task-only agents to crew for multimodal input_files
- Fix to serialize guardrail callables as null for JSON checkpointing
- Fix renaming of force_final_answer to avoid self-referential router
- Fix bump of litellm for SSTI fix; ignore unfixable pip CVE
### Documentation
- Update changelog and version for v1.14.4a1
- Add E2B Sandbox Tools page
- Add Daytona sandbox tools documentation
## Contributors
@EdwardIrby, @dependabot[bot], @factory-droid-oss, @factory-droid[bot], @greysonlalonde, @kunalk16, @lorenzejay, @lucasgomide, @manisrinivasan2k1, @mattatcha, @vinibrsl
</Update>
<Update label="Apr 29, 2026">
## v1.14.4a1


@@ -373,32 +373,41 @@ class AnotherFlow(Flow[dict]):
print("Method-level persisted runs:", self.state["runs"])
```
### Custom Persistence Key
### Forking Persisted State
By default, `@persist` uses the auto-generated `state.id` field as the persistence key. If your flow models its own identifier — for example a `conversation_id` shared across sessions — you can pass a `key` argument and `@persist` will use that attribute as the flow UUID instead:
`@persist` supports two distinct hydration modes on `kickoff` / `kickoff_async`:
- `kickoff(inputs={"id": <uuid>})` — **resume**: load the latest snapshot for the supplied UUID and continue writing under the same `flow_uuid`. The history extends.
- `kickoff(restore_from_state_id=<uuid>)` — **fork**: load the latest snapshot for the supplied UUID, hydrate the new run's state from it, and assign a fresh `state.id` (auto-generated, or `inputs["id"]` if pinned). The new run's `@persist` writes land under the new `state.id`; the source flow's history is preserved.
```python
from crewai.flow.flow import Flow, listen, start
from crewai.flow.flow import Flow, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class ConversationState(BaseModel):
conversation_id: str
turn: int = 0
class CounterState(BaseModel):
id: str = ""
counter: int = 0
@persist(key="conversation_id")  # Use a custom field as the persistence key
class ConversationFlow(Flow[ConversationState]):
@persist
class CounterFlow(Flow[CounterState]):
@start()
def begin(self):
self.state.turn += 1
print(f"Conversation {self.state.conversation_id} turn {self.state.turn}")
def step(self):
self.state.counter += 1
print(f"[id={self.state.id}] counter={self.state.counter}")
# Resuming with the same conversation_id reloads its prior state
flow = ConversationFlow(conversation_id="user-42")
flow.kickoff()
# Run 1: fresh state, counter 0 -> 1, persisted under flow_1.state.id
flow_1 = CounterFlow()
flow_1.kickoff()
# Fork: hydrate from flow_1's latest snapshot, but use a NEW state.id
flow_2 = CounterFlow()
flow_2.kickoff(restore_from_state_id=flow_1.state.id)
# flow_2.state.counter starts at 1 (hydrated), then step() bumps it to 2.
# flow_2.state.id != flow_1.state.id; flow_1's history is unchanged.
```
The decorator reads the value at `state[key]` for dict states, or `getattr(state, key)` for Pydantic / object states. If the named attribute is missing or falsy at save time, `@persist` raises a `ValueError` such as `Flow state is missing required persistence key 'conversation_id'`. When `key` is omitted, the existing behavior is preserved and `state.id` is used.
If the supplied `restore_from_state_id` does not match any persisted state, the kickoff falls back silently — same as the existing `inputs["id"]` resume not-found behavior. Combining `restore_from_state_id` with `from_checkpoint` raises a `ValueError`; pick one hydration source. Pinning `inputs["id"]` while forking shares a persistence key with another flow — usually you want only `restore_from_state_id`.
### How It Works


@@ -146,15 +146,14 @@ class ProductionFlow(Flow[AppState]):
# ...
```
By default `@persist` keys saved state by the auto-generated `state.id`. If your application already has a natural identifier — for example a `conversation_id` that ties multiple runs to the same user session — pass it as `key` and the decorator will use that attribute as the flow UUID. A `ValueError` is raised if the named attribute is missing or falsy at save time.
By default, `@persist` resumes a flow when `kickoff(inputs={"id": <uuid>})` is supplied, extending the same `flow_uuid` history. To **fork** a persisted flow into a new lineage — hydrate state from a previous run but write under a fresh `state.id` — pass `restore_from_state_id`:
```python
@persist(key="conversation_id")
class ProductionFlow(Flow[AppState]):
# AppState must expose conversation_id; resuming a session reloads its prior state
...
flow.kickoff(restore_from_state_id="<previous-run-state-id>")
```
The new run gets a fresh `state.id` (auto-generated, or `inputs["id"]` if pinned) so its `@persist` writes don't extend the source's history. Combining with `from_checkpoint` raises a `ValueError`; pick one hydration source.
## Summary
- **Start with a Flow.**


@@ -132,7 +132,7 @@ crew.kickoff()
| **DirectorySearchTool** | A RAG tool for searching within directories, useful for navigating through file systems. |
| **DOCXSearchTool** | A RAG tool aimed at searching within DOCX documents, ideal for processing Word files. |
| **DirectoryReadTool** | Facilitates reading and processing of directory structures and their contents. |
| **EXASearchTool** | A tool designed for performing exhaustive searches across various data sources. |
| **ExaSearchTool** | A tool designed for performing exhaustive searches across various data sources. |
| **FileReadTool** | Enables reading and extracting data from files, supporting various file formats. |
| **FirecrawlSearchTool** | A tool to search webpages using Firecrawl and return the results. |
| **FirecrawlCrawlWebsiteTool** | A tool for crawling webpages using Firecrawl. |


@@ -346,32 +346,47 @@ class SelectivePersistFlow(Flow):
return f"Complete with count {self.state['count']}"
```
#### Using a Custom Persistence Key
#### Forking Persisted State
By default, `@persist()` keys persisted state by the flow's auto-generated `state.id`. When your domain already has a natural identifier — for example a `conversation_id` that ties multiple flow runs to the same user session — pass it as the `key` argument and `@persist` will use that attribute as the flow UUID instead of `id`:
`@persist` supports two distinct hydration modes on `kickoff` / `kickoff_async`. Use **resume** (`inputs["id"]`) to continue the same lineage; use **fork** (`restore_from_state_id`) to start a new lineage seeded from a snapshot:
| | `state.id` after kickoff | `@persist` writes land under |
|---|---|---|
| `inputs["id"]` (resume) | supplied id | supplied id (extends history) |
| `restore_from_state_id` (fork) | fresh id, or `inputs["id"]` if pinned | new id (source preserved) |
```python
from crewai.flow.flow import Flow, listen, start
from crewai.flow.flow import Flow, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class ConversationState(BaseModel):
conversation_id: str
history: list[str] = []
class CounterState(BaseModel):
id: str = ""
counter: int = 0
@persist(key="conversation_id")
class ConversationFlow(Flow[ConversationState]):
@persist
class CounterFlow(Flow[CounterState]):
@start()
def greet(self):
self.state.history.append("hello")
return self.state.history
def step(self):
self.state.counter += 1
# A second run with the same conversation_id reloads the prior state
flow = ConversationFlow(conversation_id="user-42")
flow.kickoff()
# Run 1: fresh state, counter 0 -> 1
flow_1 = CounterFlow()
flow_1.kickoff()
# Fork: hydrate from flow_1's latest snapshot, but write under a NEW state.id
flow_2 = CounterFlow()
flow_2.kickoff(restore_from_state_id=flow_1.state.id)
# flow_2 starts with counter=1 (hydrated), then step() bumps it to 2.
# flow_1's flow_uuid history is unchanged.
```
For dict-based states `@persist` reads `state[key]`, and for Pydantic / object states it reads `getattr(state, key)`. If the named attribute is missing or falsy when state is being saved, `@persist` raises a `ValueError` like `Flow state is missing required persistence key 'conversation_id'`, so the failure surfaces immediately rather than silently dropping persisted data. Calling `@persist()` without `key` keeps the original behavior of using `state.id`.
Behavior notes:
- `restore_from_state_id` not found in persistence → the kickoff falls back silently to default behavior (mirrors the existing `inputs["id"]` resume not-found behavior). No exception is raised.
- Combining `restore_from_state_id` with `from_checkpoint` raises a `ValueError` — they target different state systems (`@persist` vs. Checkpointing) and cannot be combined.
- `restore_from_state_id=None` (default) is byte-identical to a kickoff without the parameter.
- Pinning `inputs["id"]` while forking means the new run shares a persistence key with another flow — usually you want only `restore_from_state_id`.
## Advanced State Patterns


@@ -1,15 +1,15 @@
---
title: EXA Search Web Loader
description: EXASearchTool is designed to perform a semantic search for a specified query from a text's content across the internet.
description: ExaSearchTool is designed to perform a semantic search for a specified query from a text's content across the internet.
icon: globe-pointer
mode: "wide"
---
# `EXASearchTool`
# `ExaSearchTool`
## Description
EXASearchTool is designed to search the internet semantically for a specified query based on a text's content.
ExaSearchTool is designed to search the internet semantically for a specified query based on a text's content.
It leverages the [exa.ai](https://exa.ai/) API to fetch and display the most relevant search results for the user-provided query.
## Installation
@@ -25,15 +25,15 @@ pip install 'crewai[tools]'
The following example demonstrates how to initialize the tool and run a search with a given query:
```python Code
from crewai_tools import EXASearchTool
from crewai_tools import ExaSearchTool
# Initialize the tool for internet searching capabilities
tool = EXASearchTool()
tool = ExaSearchTool()
```
## Steps to Get Started
Follow these steps to use the EXASearchTool effectively:
Follow these steps to use the ExaSearchTool effectively:
<Steps>
<Step title="Install the Package">
@@ -47,7 +47,35 @@ Follow these steps to use the ExaSearchTool effectively:
</Step>
</Steps>
## Using Exa via MCP
You can also connect your agent to Exa's hosted MCP server. Pass your API key with the `x-api-key` header:
```python
from crewai import Agent
from crewai.mcp import MCPServerHTTP
agent = Agent(
role="Research Analyst",
goal="Find and analyze information on the web",
backstory="Expert researcher with access to Exa's tools",
mcps=[
MCPServerHTTP(
url="https://mcp.exa.ai/mcp",
headers={"x-api-key": "YOUR_EXA_API_KEY"},
),
],
)
```
Get your API key from the [Exa dashboard](https://dashboard.exa.ai/api-keys). For more on MCP in CrewAI, see the [MCP overview](/ko/mcp/overview).
## Conclusion
By integrating the `EXASearchTool` into Python projects, users gain the ability to search the internet directly in real time from within their applications.
By integrating the `ExaSearchTool` into Python projects, users gain the ability to search the internet directly in real time from within their applications.
By following the provided setup and usage instructions, incorporating this tool into a project is simple and straightforward.
## Resources
- [Exa documentation](https://exa.ai/docs)
- [Exa dashboard — manage API keys and usage](https://dashboard.exa.ai)


@@ -26,7 +26,7 @@ Welcome to the CrewAI AMP API reference. This API allows you to interact
</Step>
<Step title="Monitor Progress">
Use `GET /{kickoff_id}/status` to check execution status and retrieve results.
Use `GET /status/{kickoff_id}` to check execution status and retrieve results.
</Step>
</Steps>
@@ -65,7 +65,7 @@ Replace `your-crew-name` with your actual crew's URL from the dashboard.
1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /{kickoff_id}/status` until completion
3. **Monitoring**: Poll `GET /status/{kickoff_id}` until completion
4. **Results**: Extract the final output from the completed response
## Error Handling


@@ -1,6 +1,6 @@
---
title: "GET /{kickoff_id}/status"
title: "GET /status/{kickoff_id}"
description: "Get execution status"
openapi: "/enterprise-api.pt-BR.yaml GET /{kickoff_id}/status"
openapi: "/enterprise-api.pt-BR.yaml GET /status/{kickoff_id}"
mode: "wide"
---


@@ -4,6 +4,99 @@ description: "Atualizações de produto, melhorias e correções do CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="04 mai 2026">
## v1.14.5a2
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.5a2)
## O que Mudou
### Correções de Bugs
- Corrigir a restauração da saída da tarefa no bloco finally
- Incluir `thoughts_token_count` nos tokens de conclusão
- Preservar as saídas das tarefas durante o descarregamento assíncrono em lote
- Encaminhar kwargs para chamadas de carregador em `CrewAIRagAdapter`
- Impedir que `result_as_answer` retorne mensagem de bloqueio de hook como resposta final
- Impedir que `result_as_answer` retorne erro como resposta final
- Usar `acall` para conversão de saída em caminhos assíncronos
- Prevenir a mutação de palavras de parada compartilhadas do LLM entre agentes
- Lidar com entrada `BaseModel` em `convert_to_model`
### Documentação
- Documentar variáveis de ambiente adicionais
- Atualizar changelog e versão para v1.14.5a1
## Contribuidores
@NIK-TIGER-BILL, @greysonlalonde, @lorenzejay, @minasami-pr, @theCyberTech, @wishhyt
</Update>
<Update label="01 mai 2026">
## v1.14.5a1
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.5a1)
## O que Mudou
### Recursos
- Adicionar parâmetro de início `restore_from_state_id`
- Adicionar destaques ao ExaSearchTool e renomear de EXASearchTool
### Correções de Bugs
- Corrigir sites de pinos do crewai ausentes no fluxo de lançamento
- Garantir eventos de carregamento de habilidades para rastros
### Documentação
- Atualizar changelog e versão para v1.14.4
## Contribuidores
@akaKuruma, @github-actions[bot], @greysonlalonde, @lorenzejay, @theishangoswami
</Update>
<Update label="01 mai 2026">
## v1.14.4
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.4)
## O que Mudou
### Recursos
- Adicionar suporte para chave de persistência personalizada em @persist
- Adicionar suporte à API de Respostas para o provedor Azure OpenAI
- Encaminhar credential_scopes para o cliente de Inferência da Azure AI
- Adicionar guia de configuração de identidade de carga de trabalho do Vertex AI
- Adicionar ferramentas Tavily Research e Tavily Search
- Adicionar ferramentas MCP do You.com para busca, pesquisa e extração de conteúdo
### Correções de Bugs
- Corrigir falha quando a correspondência de regex JSON não é um JSON válido
- Corrigir para preservar tool_calls quando a resposta também contém texto
- Corrigir para encaminhar base_url e api_key para instructor.from_provider
- Corrigir para avisar e retornar vazio quando o servidor MCP nativo não retorna ferramentas
- Corrigir para usar a variável de mensagens validadas em manipuladores não-streaming
- Corrigir para proteger os ajudantes de descrição do chat da equipe contra falhas do LLM
- Corrigir para redefinir mensagens e iterações entre invocações
- Corrigir para encaminhar o arquivo de agentes treinados através de replay e teste
- Corrigir para honrar o arquivo de agentes treinados personalizados na inferência
- Corrigir para vincular agentes apenas de tarefa à equipe para arquivos de entrada multimodal
- Corrigir para serializar chamadas de guardrail como nulas para checkpointing JSON
- Corrigir renomeação de force_final_answer para evitar roteador autorreferencial
- Atualizar (bump) litellm para correção de SSTI; ignorar CVE do pip sem correção disponível
### Documentação
- Atualizar changelog e versão para v1.14.4a1
- Adicionar página de Ferramentas do Sandbox E2B
- Adicionar documentação de ferramentas do sandbox Daytona
## Contribuidores
@EdwardIrby, @dependabot[bot], @factory-droid-oss, @factory-droid[bot], @greysonlalonde, @kunalk16, @lorenzejay, @lucasgomide, @manisrinivasan2k1, @mattatcha, @vinibrsl
</Update>
<Update label="29 abr 2026">
## v1.14.4a1

View File

@@ -193,32 +193,41 @@ Para um controle mais granular, você pode aplicar @persist em métodos específ
# (O código não é traduzido)
```
### Chave de Persistência Personalizada
### Forking de Estado Persistido
Por padrão, `@persist` usa o campo `state.id` gerado automaticamente como chave de persistência. Se o seu flow já possui um identificador natural — por exemplo um `conversation_id` compartilhado entre sessões — você pode passar o argumento `key` e `@persist` usará esse atributo como UUID do flow:
`@persist` suporta dois modos distintos de hidratação em `kickoff` / `kickoff_async`:
- `kickoff(inputs={"id": <uuid>})` — **resume**: carrega o snapshot mais recente do UUID informado e continua escrevendo sob o mesmo `flow_uuid`. O histórico se estende.
- `kickoff(restore_from_state_id=<uuid>)` — **fork**: carrega o snapshot mais recente do UUID informado, hidrata o estado da nova execução a partir dele, e atribui um novo `state.id` (auto-gerado, ou `inputs["id"]` se fixado). As escritas do `@persist` da nova execução vão para o novo `state.id`; o histórico do flow de origem é preservado.
```python
from crewai.flow.flow import Flow, listen, start
from crewai.flow.flow import Flow, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class ConversationState(BaseModel):
    conversation_id: str
    turn: int = 0
class CounterState(BaseModel):
    id: str = ""
    counter: int = 0
@persist(key="conversation_id")  # Usa um campo personalizado como chave de persistência
class ConversationFlow(Flow[ConversationState]):
@persist
class CounterFlow(Flow[CounterState]):
    @start()
    def begin(self):
        self.state.turn += 1
        print(f"Conversa {self.state.conversation_id} turno {self.state.turn}")
    def step(self):
        self.state.counter += 1
        print(f"[id={self.state.id}] counter={self.state.counter}")
# Retomar a mesma conversa recarrega o estado anterior pelo conversation_id
flow = ConversationFlow(conversation_id="user-42")
flow.kickoff()
# Execução 1: estado novo, counter 0 -> 1, persistido sob flow_1.state.id
flow_1 = CounterFlow()
flow_1.kickoff()
# Fork: hidrata do snapshot mais recente de flow_1, mas usa um state.id NOVO
flow_2 = CounterFlow()
flow_2.kickoff(restore_from_state_id=flow_1.state.id)
# flow_2.state.counter começa em 1 (hidratado), e step() incrementa para 2.
# flow_2.state.id != flow_1.state.id; o histórico de flow_1 não é alterado.
```
O decorador lê o valor em `state[key]` para estados do tipo dicionário ou `getattr(state, key)` para estados Pydantic / objetos. Se o atributo informado estiver ausente ou for *falsy* no momento de salvar, `@persist` lança um `ValueError` como `Flow state is missing required persistence key 'conversation_id'`. Quando `key` é omitido, o comportamento original é preservado e `state.id` continua sendo usado.
Se o `restore_from_state_id` informado não corresponder a nenhum estado persistido, o kickoff retorna silenciosamente ao comportamento padrão — o mesmo comportamento do `inputs["id"]` quando não encontrado. Combinar `restore_from_state_id` com `from_checkpoint` lança um `ValueError`; escolha uma única fonte de hidratação. Fixar `inputs["id"]` durante o fork compartilha uma chave de persistência com outro flow — geralmente você quer apenas `restore_from_state_id`.
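A precedência descrita acima (exclusão mútua com `from_checkpoint`, fallback silencioso quando o id não existe, resume via `inputs["id"]`) pode ser esboçada como uma pequena função de resolução. O esboço abaixo é hipotético e independente do CrewAI; os nomes `resolver_fonte` e `snapshots` são ilustrativos:

```python
def resolver_fonte(snapshots, inputs=None, restore_from_state_id=None, from_checkpoint=None):
    """Esboço ilustrativo: decide de onde hidratar o estado de um novo kickoff."""
    if restore_from_state_id is not None and from_checkpoint is not None:
        # Fontes de hidratação mutuamente exclusivas
        raise ValueError("Escolha uma única fonte de hidratação")
    if restore_from_state_id is not None:
        snap = snapshots.get(restore_from_state_id)
        # Fork quando o snapshot existe; caso contrário, fallback silencioso
        return ("fork", snap) if snap is not None else ("default", None)
    if inputs and inputs.get("id") in snapshots:
        # Resume: continua a mesma linhagem sob o mesmo id
        return ("resume", snapshots[inputs["id"]])
    return ("default", None)
```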
### Como Funciona

View File

@@ -146,15 +146,14 @@ class ProductionFlow(Flow[AppState]):
# ...
```
Por padrão, `@persist` usa o `state.id` gerado automaticamente como chave do estado salvo. Se a sua aplicação já tem um identificador natural — por exemplo um `conversation_id` que liga várias execuções à mesma sessão de usuário — passe-o como `key` e o decorador usará esse atributo como UUID do flow. Um `ValueError` é lançado se o atributo informado estiver ausente ou for *falsy* no momento de salvar.
Por padrão, `@persist` retoma um flow quando `kickoff(inputs={"id": <uuid>})` é informado, estendendo o mesmo histórico do `flow_uuid`. Para **forkar** um flow persistido em uma nova linhagem — hidratar o estado a partir de uma execução anterior mas escrever sob um novo `state.id` — passe `restore_from_state_id`:
```python
@persist(key="conversation_id")
class ProductionFlow(Flow[AppState]):
# AppState precisa expor conversation_id; retomar a sessão recarrega o estado anterior
...
flow.kickoff(restore_from_state_id="<previous-run-state-id>")
```
A nova execução recebe um novo `state.id` (auto-gerado, ou `inputs["id"]` se fixado), então suas escritas do `@persist` não estendem o histórico da origem. Combinar com `from_checkpoint` lança um `ValueError`; escolha uma única fonte de hidratação.
## Resumo
- **Comece com um Flow.**

View File

@@ -133,7 +133,7 @@ Aqui está uma lista das ferramentas disponíveis e suas descrições:
| **DirectorySearchTool** | Ferramenta RAG para busca em diretórios, útil para navegação em sistemas de arquivos. |
| **DOCXSearchTool** | Ferramenta RAG voltada para busca em documentos DOCX, ideal para processar arquivos Word. |
| **DirectoryReadTool** | Facilita a leitura e processamento de estruturas de diretórios e seus conteúdos. |
| **EXASearchTool** | Ferramenta projetada para buscas exaustivas em diversas fontes de dados. |
| **ExaSearchTool** | Ferramenta projetada para buscas exaustivas em diversas fontes de dados. |
| **FileReadTool** | Permite a leitura e extração de dados de arquivos, suportando diversos formatos. |
| **FirecrawlSearchTool** | Ferramenta para buscar páginas web usando Firecrawl e retornar os resultados. |
| **FirecrawlCrawlWebsiteTool** | Ferramenta para rastrear páginas web utilizando o Firecrawl. |

View File

@@ -167,32 +167,47 @@ Para mais controle, você pode aplicar `@persist()` em métodos específicos:
# código não traduzido
```
#### Usando uma Chave de Persistência Personalizada
#### Forking de Estado Persistido
Por padrão, `@persist()` usa o `state.id` gerado automaticamente como chave do estado persistido. Quando seu domínio já possui um identificador natural — por exemplo um `conversation_id` que liga várias execuções do flow à mesma sessão de usuário — passe-o como argumento `key` e `@persist` usará esse atributo como UUID do flow em vez de `id`:
`@persist` suporta dois modos distintos de hidratação em `kickoff` / `kickoff_async`. Use **resume** (`inputs["id"]`) para continuar a mesma linhagem; use **fork** (`restore_from_state_id`) para iniciar uma nova linhagem a partir de um snapshot:
| | `state.id` após o kickoff | Escritas do `@persist` vão para |
|---|---|---|
| `inputs["id"]` (resume) | id informado | id informado (estende o histórico) |
| `restore_from_state_id` (fork) | id novo, ou `inputs["id"]` se fixado | id novo (origem preservada) |
```python
from crewai.flow.flow import Flow, listen, start
from crewai.flow.flow import Flow, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class ConversationState(BaseModel):
    conversation_id: str
    history: list[str] = []
class CounterState(BaseModel):
    id: str = ""
    counter: int = 0
@persist(key="conversation_id")
class ConversationFlow(Flow[ConversationState]):
@persist
class CounterFlow(Flow[CounterState]):
    @start()
    def greet(self):
        self.state.history.append("hello")
        return self.state.history
    def step(self):
        self.state.counter += 1
# Uma segunda execução com o mesmo conversation_id recarrega o estado anterior
flow = ConversationFlow(conversation_id="user-42")
flow.kickoff()
# Execução 1: estado novo, counter 0 -> 1
flow_1 = CounterFlow()
flow_1.kickoff()
# Fork: hidrata do snapshot mais recente de flow_1, mas escreve sob um state.id NOVO
flow_2 = CounterFlow()
flow_2.kickoff(restore_from_state_id=flow_1.state.id)
# flow_2 começa com counter=1 (hidratado), e step() incrementa para 2.
# O histórico do flow_uuid de flow_1 não é alterado.
```
Para estados baseados em dicionário `@persist` lê `state[key]`, e para estados Pydantic / objetos lê `getattr(state, key)`. Se o atributo informado estiver ausente ou for *falsy* no momento em que o estado for salvo, `@persist` lança um `ValueError` como `Flow state is missing required persistence key 'conversation_id'`, fazendo com que a falha apareça imediatamente em vez de descartar silenciosamente os dados persistidos. Chamar `@persist()` sem `key` mantém o comportamento original de usar `state.id`.
Notas sobre o comportamento:
- `restore_from_state_id` não encontrado na persistência → o kickoff retorna silenciosamente ao comportamento padrão (espelha o comportamento de `inputs["id"]` quando não encontrado). Nenhuma exceção é lançada.
- Combinar `restore_from_state_id` com `from_checkpoint` lança um `ValueError` — eles miram sistemas de estado diferentes (`@persist` vs. Checkpointing) e não podem ser combinados.
- `restore_from_state_id=None` (padrão) é byte-idêntico a um kickoff sem o parâmetro.
- Fixar `inputs["id"]` durante o fork significa que a nova execução compartilha uma chave de persistência com outro flow — geralmente você quer apenas `restore_from_state_id`.
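O efeito dos dois modos sobre o armazenamento de snapshots pode ser modelado em poucas linhas, sem o CrewAI. O esboço abaixo é ilustrativo (a função `kickoff` e o dicionário `snapshots` não são a implementação real):

```python
import copy
import uuid

snapshots: dict[str, dict] = {}  # flow_uuid -> último snapshot persistido

def kickoff(inputs=None, restore_from_state_id=None):
    """Modelo ilustrativo: resume estende a mesma chave; fork copia para uma nova."""
    if restore_from_state_id in snapshots:
        state = copy.deepcopy(snapshots[restore_from_state_id])  # hidrata do snapshot
        state["id"] = str(uuid.uuid4())                          # fork: state.id NOVO
    elif inputs and inputs.get("id") in snapshots:
        state = copy.deepcopy(snapshots[inputs["id"]])           # resume: mesmo id
    else:
        state = {"id": (inputs or {}).get("id", str(uuid.uuid4())), "counter": 0}
    state["counter"] += 1
    snapshots[state["id"]] = state  # @persist escreve sob state["id"]
    return state

run1 = kickoff()                                  # estado novo, counter 0 -> 1
run2 = kickoff(restore_from_state_id=run1["id"])  # fork: counter 1 -> 2, id novo
```

Como na tabela acima, as escritas do fork vão para o id novo e o snapshot da origem permanece intacto.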
## Padrões Avançados de Estado

View File

@@ -1,15 +1,15 @@
---
title: Carregador Web EXA Search
description: O `EXASearchTool` foi projetado para realizar uma busca semântica para uma consulta especificada a partir do conteúdo de um texto em toda a internet.
description: O `ExaSearchTool` foi projetado para realizar uma busca semântica para uma consulta especificada a partir do conteúdo de um texto em toda a internet.
icon: globe-pointer
mode: "wide"
---
# `EXASearchTool`
# `ExaSearchTool`
## Descrição
O EXASearchTool foi projetado para realizar uma busca semântica para uma consulta especificada a partir do conteúdo de um texto em toda a internet.
O ExaSearchTool foi projetado para realizar uma busca semântica para uma consulta especificada a partir do conteúdo de um texto em toda a internet.
Ele utiliza a API da [exa.ai](https://exa.ai/) para buscar e exibir os resultados de pesquisa mais relevantes com base na consulta fornecida pelo usuário.
## Instalação
@@ -25,15 +25,15 @@ pip install 'crewai[tools]'
O exemplo a seguir demonstra como inicializar a ferramenta e executar uma busca com uma consulta determinada:
```python Code
from crewai_tools import EXASearchTool
from crewai_tools import ExaSearchTool
# Initialize the tool for internet searching capabilities
tool = EXASearchTool()
tool = ExaSearchTool()
```
## Etapas para Começar
Para usar o EXASearchTool de forma eficaz, siga estas etapas:
Para usar o ExaSearchTool de forma eficaz, siga estas etapas:
<Steps>
<Step title="Instalação do Pacote">
@@ -47,7 +47,35 @@ Para usar o EXASearchTool de forma eficaz, siga estas etapas:
</Step>
</Steps>
## Usando o Exa via MCP
Você também pode conectar seu agente ao servidor MCP hospedado pelo Exa. Passe sua chave de API no cabeçalho `x-api-key`:
```python
from crewai import Agent
from crewai.mcp import MCPServerHTTP

agent = Agent(
    role="Research Analyst",
    goal="Find and analyze information on the web",
    backstory="Expert researcher with access to Exa's tools",
    mcps=[
        MCPServerHTTP(
            url="https://mcp.exa.ai/mcp",
            headers={"x-api-key": "YOUR_EXA_API_KEY"},
        ),
    ],
)
```
Obtenha sua chave de API no [painel da Exa](https://dashboard.exa.ai/api-keys). Para mais informações sobre MCP no CrewAI, consulte a [visão geral do MCP](/pt-BR/mcp/overview).
## Conclusão
Ao integrar o `EXASearchTool` em projetos Python, os usuários ganham a capacidade de realizar buscas relevantes e em tempo real pela internet diretamente de suas aplicações.
Seguindo as orientações de configuração e uso fornecidas, a incorporação desta ferramenta em projetos torna-se simples e direta.
Ao integrar o `ExaSearchTool` em projetos Python, os usuários ganham a capacidade de realizar buscas relevantes e em tempo real pela internet diretamente de suas aplicações.
Seguindo as orientações de configuração e uso fornecidas, a incorporação desta ferramenta em projetos torna-se simples e direta.
## Recursos
- [Documentação do Exa](https://exa.ai/docs)
- [Painel do Exa — gerenciar chaves de API e uso](https://dashboard.exa.ai)

View File

@@ -152,4 +152,4 @@ __all__ = [
"wrap_file_source",
]
__version__ = "1.14.4a1"
__version__ = "1.14.5a2"

View File

@@ -26,7 +26,7 @@ CrewAI provides an extensive collection of powerful tools ready to enhance your
- **Web Scraping**: `ScrapeWebsiteTool`, `SeleniumScrapingTool`
- **Database Integrations**: `MySQLSearchTool`
- **Vector Database Integrations**: `MongoDBVectorSearchTool`, `QdrantVectorSearchTool`, `WeaviateVectorSearchTool`
- **API Integrations**: `SerperApiTool`, `EXASearchTool`
- **API Integrations**: `SerperApiTool`, `ExaSearchTool`
- **AI-powered Tools**: `DallETool`, `VisionTool`, `StagehandTool`
And many more robust tools to simplify your agent integrations.

View File

@@ -10,7 +10,7 @@ requires-python = ">=3.10, <3.14"
dependencies = [
"pytube~=15.0.0",
"requests>=2.33.0,<3",
"crewai==1.14.4a1",
"crewai==1.14.5a2",
"tiktoken>=0.8.0,<0.13",
"beautifulsoup4~=4.13.4",
"python-docx~=1.2.0",
@@ -107,7 +107,7 @@ stagehand = [
"stagehand>=0.4.1",
]
github = [
"gitpython>=3.1.41,<4",
"gitpython>=3.1.47,<4",
"PyGithub==1.59.1",
]
rag = [

View File

@@ -76,7 +76,7 @@ from crewai_tools.tools.e2b_sandbox_tool import (
E2BFileTool,
E2BPythonTool,
)
from crewai_tools.tools.exa_tools.exa_search_tool import EXASearchTool
from crewai_tools.tools.exa_tools.exa_search_tool import EXASearchTool, ExaSearchTool
from crewai_tools.tools.file_read_tool.file_read_tool import FileReadTool
from crewai_tools.tools.file_writer_tool.file_writer_tool import FileWriterTool
from crewai_tools.tools.files_compressor_tool.files_compressor_tool import (
@@ -258,6 +258,7 @@ __all__ = [
"E2BPythonTool",
"EXASearchTool",
"EnterpriseActionTool",
"ExaSearchTool",
"FileCompressorTool",
"FileReadTool",
"FileWriterTool",
@@ -329,4 +330,4 @@ __all__ = [
"ZapierActionTools",
]
__version__ = "1.14.4a1"
__version__ = "1.14.5a2"

View File

@@ -268,7 +268,9 @@ class CrewAIRagAdapter(Adapter):
file_chunker = file_data_type.get_chunker()
file_source = SourceContent(file_path)
file_result: LoaderResult = file_loader.load(file_source)
file_result: LoaderResult = file_loader.load(
file_source, **kwargs
)
file_chunks = file_chunker.chunk(file_result.content)
@@ -319,7 +321,7 @@ class CrewAIRagAdapter(Adapter):
loader = data_type.get_loader()
chunker = data_type.get_chunker()
loader_result: LoaderResult = loader.load(source_content)
loader_result: LoaderResult = loader.load(source_content, **kwargs)
chunks = chunker.chunk(loader_result.content)

View File

@@ -2,9 +2,8 @@ import re
from typing import Any, Final
from bs4 import BeautifulSoup
import requests
from crewai_tools.rag.base_loader import BaseLoader, LoaderResult
from crewai_tools.security.safe_path import safe_get
from crewai_tools.rag.source_content import SourceContent
@@ -25,7 +24,7 @@ class WebPageLoader(BaseLoader):
)
try:
response = requests.get(url, timeout=15, headers=headers)
response = safe_get(url, timeout=15, headers=headers)
response.encoding = response.apparent_encoding
soup = BeautifulSoup(response.text, "html.parser")

View File

@@ -14,8 +14,12 @@ import ipaddress
import logging
import os
import socket
from typing import Any
from urllib.parse import urlparse
import requests
from requests.adapters import HTTPAdapter
logger = logging.getLogger(__name__)
@@ -203,3 +207,72 @@ def validate_url(url: str) -> str:
)
return url
# ---------------------------------------------------------------------------
# SSRF-safe HTTP requests (validates IPs on every redirect hop)
# ---------------------------------------------------------------------------
class _SSRFSafeAdapter(HTTPAdapter):
    """HTTPAdapter that validates the resolved IP of every request — including
    redirect hops — against the private/reserved blocklist before the
    connection is made."""

    def send(  # type: ignore[override]
        self, request: requests.PreparedRequest, **kwargs: Any
    ) -> requests.Response:
        parsed = urlparse(request.url)
        if not _is_escape_hatch_enabled() and parsed.hostname:
            try:
                port = parsed.port or (443 if parsed.scheme == "https" else 80)
                addrinfos = socket.getaddrinfo(parsed.hostname, port)
            except socket.gaierror as exc:
                raise ValueError(
                    f"Could not resolve hostname: '{parsed.hostname}'"
                ) from exc
            for _family, _, _, _, sockaddr in addrinfos:
                ip_str = str(sockaddr[0])
                if _is_private_or_reserved(ip_str):
                    raise ValueError(
                        f"Redirect to '{request.url}' blocked: resolves to "
                        f"private/reserved IP {ip_str}. Access to internal "
                        f"networks is not allowed. "
                        f"Set {_UNSAFE_PATHS_ENV}=true to bypass."
                    )
        return super().send(request, **kwargs)

def safe_request_session() -> requests.Session:
    """Return a :class:`requests.Session` that validates every connection
    target (including redirect destinations) against the SSRF blocklist."""
    session = requests.Session()
    adapter = _SSRFSafeAdapter()
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session

def safe_get(url: str, **kwargs: Any) -> requests.Response:
    """Drop-in replacement for ``requests.get()`` with SSRF protection.

    Validates the initial URL via :func:`validate_url`, then executes the
    request through a session whose adapter re-checks every redirect hop.

    Args:
        url: The URL to fetch.
        **kwargs: Passed through to ``session.get()`` (headers, cookies,
            timeout, etc.).

    Returns:
        The :class:`requests.Response`.

    Raises:
        ValueError: If the initial URL or any redirect target resolves to
            a private/reserved IP.
    """
    validate_url(url)
    session = safe_request_session()
    return session.get(url, **kwargs)
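The per-hop check the adapter performs can be reproduced in isolation with only the standard library. This is a sketch, not the library's API: `is_private_or_reserved_host` is an illustrative name, and the real blocklist (`_is_private_or_reserved`) may cover additional ranges.

```python
import ipaddress
import socket

def is_private_or_reserved_host(hostname: str, port: int = 80) -> bool:
    """Return True if any resolved address for ``hostname`` is private/reserved."""
    # getaddrinfo yields (family, type, proto, canonname, sockaddr) tuples;
    # sockaddr[0] is the IP string for both IPv4 and IPv6.
    for *_, sockaddr in socket.getaddrinfo(hostname, port):
        ip = ipaddress.ip_address(sockaddr[0])
        if ip.is_private or ip.is_reserved or ip.is_loopback or ip.is_link_local:
            return True
    return False
```

Running a check like this inside an `HTTPAdapter.send` override, as the diff does, is what makes it fire on every redirect hop instead of only on the initial URL.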

View File

@@ -65,7 +65,7 @@ from crewai_tools.tools.e2b_sandbox_tool import (
E2BFileTool,
E2BPythonTool,
)
from crewai_tools.tools.exa_tools.exa_search_tool import EXASearchTool
from crewai_tools.tools.exa_tools.exa_search_tool import EXASearchTool, ExaSearchTool
from crewai_tools.tools.file_read_tool.file_read_tool import FileReadTool
from crewai_tools.tools.file_writer_tool.file_writer_tool import FileWriterTool
from crewai_tools.tools.files_compressor_tool.files_compressor_tool import (
@@ -242,6 +242,7 @@ __all__ = [
"E2BFileTool",
"E2BPythonTool",
"EXASearchTool",
"ExaSearchTool",
"FileCompressorTool",
"FileReadTool",
"FileWriterTool",

View File

@@ -1,7 +1,7 @@
# EXASearchTool Documentation
# ExaSearchTool Documentation
## Description
This tool is designed to perform a semantic search for a specified query from a text's content across the internet. It utilizes the `https://exa.ai/` API to fetch and display the most relevant search results based on the query provided by the user.
This tool lets CrewAI agents search the web using [Exa](https://exa.ai/), the fastest and most accurate web search API. By default the tool returns token-efficient highlights of the most relevant results for any query; you can also opt in to full page content.
## Installation
To incorporate this tool into your project, follow the installation instructions below:
@@ -10,21 +10,23 @@ uv add crewai[tools] exa_py
```
## Example
The following example demonstrates how to initialize the tool and execute a search with a given query:
The following example demonstrates how to initialize the tool and run a search:
```python
from crewai_tools import EXASearchTool
from crewai_tools import ExaSearchTool
# Initialize the tool for internet searching capabilities
tool = EXASearchTool(api_key="your_api_key")
# Default: results with token-efficient highlights
tool = ExaSearchTool(api_key="your_api_key", highlights=True)
```
## Steps to Get Started
To effectively use the `EXASearchTool`, follow these steps:
To effectively use the `ExaSearchTool`, follow these steps:
1. **Package Installation**: Confirm that the `crewai[tools]` package is installed in your Python environment.
2. **API Key Acquisition**: Acquire a `https://exa.ai/` API key by registering for a free account at `https://exa.ai/`.
3. **Environment Configuration**: Store your obtained API key in an environment variable named `EXA_API_KEY` to facilitate its use by the tool.
2. **API Key Acquisition**: Get an Exa API key from the [Exa dashboard](https://dashboard.exa.ai/api-keys).
3. **Environment Configuration**: Store your API key in an environment variable named `EXA_API_KEY` so the tool can pick it up automatically.
## Conclusion
By integrating the `EXASearchTool` into Python projects, users gain the ability to conduct real-time, relevant searches across the internet directly from their applications. By adhering to the setup and usage guidelines provided, incorporating this tool into projects is streamlined and straightforward.
For details on choosing between highlights and full content, see the [Exa search best practices](https://exa.ai/docs/reference/search-best-practices).
## Note
`EXASearchTool` is a deprecated alias for `ExaSearchTool`. Existing imports continue to work but emit a deprecation warning; please migrate to `ExaSearchTool`.
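The alias mechanics described above can be sketched generically; `OldTool`/`NewTool` are illustrative stand-ins, not the actual classes:

```python
import warnings

class NewTool:
    """The renamed, preferred class."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs

class OldTool(NewTool):
    """Deprecated alias: still constructs a NewTool, but warns on use."""
    def __init__(self, **kwargs):
        warnings.warn(
            "OldTool is deprecated; use NewTool instead.",
            DeprecationWarning,
            stacklevel=2,  # attribute the warning to the caller's line
        )
        super().__init__(**kwargs)
```

Because the alias subclasses the new class, `isinstance` checks and existing imports keep working while the warning nudges callers to migrate.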

View File

@@ -3,12 +3,19 @@ from __future__ import annotations
from builtins import type as type_
import os
from typing import Any, TypedDict
import warnings
from crewai.tools import BaseTool, EnvVar
from pydantic import BaseModel, ConfigDict, Field
from typing_extensions import Required
try:
from exa_py import Exa
except ImportError:
Exa = None # type: ignore[assignment,misc]
class SearchParams(TypedDict, total=False):
"""Parameters for Exa search API."""
@@ -18,7 +25,7 @@ class SearchParams(TypedDict, total=False):
include_domains: list[str]
class EXABaseToolSchema(BaseModel):
class ExaBaseToolSchema(BaseModel):
search_query: str = Field(
..., description="Mandatory search query you want to use to search the internet"
)
@@ -31,14 +38,20 @@ class EXABaseToolSchema(BaseModel):
)
class EXASearchTool(BaseTool):
EXABaseToolSchema = ExaBaseToolSchema
class ExaSearchTool(BaseTool):
model_config = ConfigDict(arbitrary_types_allowed=True)
name: str = "EXASearchTool"
description: str = "Search the internet using Exa"
args_schema: type_[BaseModel] = EXABaseToolSchema
name: str = "ExaSearchTool"
description: str = (
"Search the web with Exa, the fastest and most accurate web search API."
)
args_schema: type_[BaseModel] = ExaBaseToolSchema
client: Any | None = None
content: bool | None = False
summary: bool | None = False
content: bool | dict[str, Any] | None = False
summary: bool | dict[str, Any] | None = False
highlights: bool | dict[str, Any] | None = True
type: str | None = "auto"
package_dependencies: list[str] = Field(default_factory=lambda: ["exa_py"])
api_key: str | None = Field(
@@ -68,17 +81,17 @@ class EXASearchTool(BaseTool):
def __init__(
self,
content: bool | None = False,
summary: bool | None = False,
content: bool | dict[str, Any] | None = False,
summary: bool | dict[str, Any] | None = False,
highlights: bool | dict[str, Any] | None = True,
type: str | None = "auto",
**kwargs: Any,
) -> None:
super().__init__(
**kwargs,
)
try:
from exa_py import Exa
except ImportError as e:
global Exa
if Exa is None:
import click
if click.confirm(
@@ -88,12 +101,13 @@ class EXASearchTool(BaseTool):
subprocess.run(["uv", "add", "exa_py"], check=True) # noqa: S607
# Re-import after installation
from exa_py import Exa
from exa_py import Exa as _Exa
Exa = _Exa # type: ignore[misc]
else:
raise ImportError(
"You are missing the 'exa_py' package. Would you like to install it?"
) from e
"You are missing the 'exa_py' package. Please install it to use ExaSearchTool."
)
client_kwargs: dict[str, str] = {}
if self.api_key:
@@ -101,8 +115,10 @@ class EXASearchTool(BaseTool):
if self.base_url:
client_kwargs["base_url"] = self.base_url
self.client = Exa(**client_kwargs)
self.client.headers["x-exa-integration"] = "crewai"
self.content = content
self.summary = summary
self.highlights = highlights
self.type = type
def _run(
@@ -126,10 +142,31 @@ class EXASearchTool(BaseTool):
if include_domains:
search_params["include_domains"] = include_domains
contents_kwargs: dict[str, Any] = {}
if self.content:
results = self.client.search_and_contents(
search_query, summary=self.summary, **search_params
contents_kwargs["text"] = self.content
if self.highlights:
contents_kwargs["highlights"] = self.highlights
if self.summary:
contents_kwargs["summary"] = self.summary
if contents_kwargs:
return self.client.search_and_contents(
search_query, **contents_kwargs, **search_params
)
else:
results = self.client.search(search_query, **search_params)
return results
return self.client.search(search_query, **search_params)
class EXASearchTool(ExaSearchTool):
    """Deprecated alias for :class:`ExaSearchTool`. Kept for backwards compatibility."""

    name: str = "ExaSearchTool"

    def __init__(self, *args: Any, **kwargs: Any) -> None:
        warnings.warn(
            "EXASearchTool is deprecated and will be removed in a future release; "
            "use ExaSearchTool instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)

View File

@@ -3,9 +3,7 @@ from typing import Any
from crewai.tools import BaseTool
from pydantic import BaseModel, Field
import requests
from crewai_tools.security.safe_path import validate_url
from crewai_tools.security.safe_path import safe_get
try:
@@ -83,8 +81,7 @@ class ScrapeElementFromWebsiteTool(BaseTool):
if website_url is None or css_element is None:
raise ValueError("Both website_url and css_element must be provided.")
website_url = validate_url(website_url)
page = requests.get(
page = safe_get(
website_url,
headers=self.headers,
cookies=self.cookies if self.cookies else {},

View File

@@ -3,9 +3,7 @@ import re
from typing import Any
from pydantic import Field
import requests
from crewai_tools.security.safe_path import validate_url
from crewai_tools.security.safe_path import safe_get
try:
@@ -75,8 +73,7 @@ class ScrapeWebsiteTool(BaseTool):
if website_url is None:
raise ValueError("Website URL must be provided.")
website_url = validate_url(website_url)
page = requests.get(
page = safe_get(
website_url,
timeout=15,
headers=self.headers,

View File

@@ -22,7 +22,7 @@ class TestWebPageLoader:
soup.return_value = script_style_elements or []
return soup
@patch("requests.get")
@patch("crewai_tools.rag.loaders.webpage_loader.safe_get")
@patch("crewai_tools.rag.loaders.webpage_loader.BeautifulSoup")
def test_load_basic_webpage(self, mock_bs, mock_get):
mock_get.return_value = self.setup_mock_response(
@@ -37,7 +37,7 @@ class TestWebPageLoader:
assert result.content == "Test content"
assert result.metadata["title"] == "Test Page"
@patch("requests.get")
@patch("crewai_tools.rag.loaders.webpage_loader.safe_get")
@patch("crewai_tools.rag.loaders.webpage_loader.BeautifulSoup")
def test_load_webpage_with_scripts_and_styles(self, mock_bs, mock_get):
html = """
@@ -62,7 +62,7 @@ class TestWebPageLoader:
for el in scripts + styles:
el.decompose.assert_called_once()
@patch("requests.get")
@patch("crewai_tools.rag.loaders.webpage_loader.safe_get")
@patch("crewai_tools.rag.loaders.webpage_loader.BeautifulSoup")
def test_text_cleaning_and_title_handling(self, mock_bs, mock_get):
mock_get.return_value = self.setup_mock_response(
@@ -77,7 +77,7 @@ class TestWebPageLoader:
assert result.content is not None
assert result.metadata["title"] == ""
@patch("requests.get")
@patch("crewai_tools.rag.loaders.webpage_loader.safe_get")
@patch("crewai_tools.rag.loaders.webpage_loader.BeautifulSoup")
def test_empty_or_missing_title(self, mock_bs, mock_get):
for title in [None, ""]:
@@ -90,7 +90,7 @@ class TestWebPageLoader:
result = loader.load(SourceContent("https://example.com"))
assert result.metadata["title"] == ""
@patch("requests.get")
@patch("crewai_tools.rag.loaders.webpage_loader.safe_get")
def test_custom_and_default_headers(self, mock_get):
mock_get.return_value = self.setup_mock_response(
"<html><body>Test</body></html>"
@@ -109,14 +109,14 @@ class TestWebPageLoader:
assert mock_get.call_args[1]["headers"] == custom_headers
@patch("requests.get")
@patch("crewai_tools.rag.loaders.webpage_loader.safe_get")
def test_error_handling(self, mock_get):
for error in [Exception("Fail"), ValueError("Bad"), ImportError("Oops")]:
mock_get.side_effect = error
with pytest.raises(ValueError, match="Error loading webpage"):
WebPageLoader().load(SourceContent("https://example.com"))
@patch("requests.get")
@patch("crewai_tools.rag.loaders.webpage_loader.safe_get")
def test_timeout_and_http_error(self, mock_get):
import requests
@@ -131,7 +131,7 @@ class TestWebPageLoader:
with pytest.raises(ValueError):
WebPageLoader().load(SourceContent("https://example.com/404"))
@patch("requests.get")
@patch("crewai_tools.rag.loaders.webpage_loader.safe_get")
@patch("crewai_tools.rag.loaders.webpage_loader.BeautifulSoup")
def test_doc_id_consistency(self, mock_bs, mock_get):
mock_get.return_value = self.setup_mock_response(
@@ -145,7 +145,7 @@ class TestWebPageLoader:
assert result1.doc_id == result2.doc_id
@patch("requests.get")
@patch("crewai_tools.rag.loaders.webpage_loader.safe_get")
@patch("crewai_tools.rag.loaders.webpage_loader.BeautifulSoup")
def test_status_code_and_content_type(self, mock_bs, mock_get):
for status in [200, 201, 301]:

View File

@@ -1,13 +1,13 @@
import os
from unittest.mock import patch
from unittest.mock import MagicMock, patch
from crewai_tools import EXASearchTool
from crewai_tools import EXASearchTool, ExaSearchTool
import pytest
@pytest.fixture
def exa_search_tool():
return EXASearchTool(api_key="test_api_key")
return ExaSearchTool(api_key="test_api_key")
@pytest.fixture(autouse=True)
@@ -22,11 +22,12 @@ def test_exa_search_tool_initialization():
"crewai_tools.tools.exa_tools.exa_search_tool.Exa"
) as mock_exa_class:
api_key = "test_api_key"
tool = EXASearchTool(api_key=api_key)
tool = ExaSearchTool(api_key=api_key)
assert tool.api_key == api_key
assert tool.content is False
assert tool.summary is False
assert tool.highlights is True
assert tool.type == "auto"
mock_exa_class.assert_called_once_with(api_key=api_key)
@@ -36,7 +37,7 @@ def test_exa_search_tool_initialization_with_env(mock_exa_api_key):
with patch(
"crewai_tools.tools.exa_tools.exa_search_tool.Exa"
) as mock_exa_class:
EXASearchTool()
ExaSearchTool()
mock_exa_class.assert_called_once_with(api_key="test_key_from_env")
@@ -47,12 +48,13 @@ def test_exa_search_tool_initialization_with_base_url():
) as mock_exa_class:
api_key = "test_api_key"
base_url = "https://custom.exa.api.com"
tool = EXASearchTool(api_key=api_key, base_url=base_url)
tool = ExaSearchTool(api_key=api_key, base_url=base_url)
assert tool.api_key == api_key
assert tool.base_url == base_url
assert tool.content is False
assert tool.summary is False
assert tool.highlights is True
assert tool.type == "auto"
mock_exa_class.assert_called_once_with(api_key=api_key, base_url=base_url)
@@ -67,7 +69,7 @@ def test_exa_search_tool_initialization_with_env_base_url(
mock_exa_api_key, mock_exa_base_url
):
with patch("crewai_tools.tools.exa_tools.exa_search_tool.Exa") as mock_exa_class:
EXASearchTool()
ExaSearchTool()
mock_exa_class.assert_called_once_with(
api_key="test_key_from_env", base_url="https://env.exa.api.com"
)
@@ -79,8 +81,33 @@ def test_exa_search_tool_initialization_without_base_url():
"crewai_tools.tools.exa_tools.exa_search_tool.Exa"
) as mock_exa_class:
api_key = "test_api_key"
tool = EXASearchTool(api_key=api_key)
tool = ExaSearchTool(api_key=api_key)
assert tool.api_key == api_key
assert tool.base_url is None
mock_exa_class.assert_called_once_with(api_key=api_key)
def test_exa_search_tool_highlights_uses_search_and_contents():
with patch("crewai_tools.tools.exa_tools.exa_search_tool.Exa") as mock_exa_class:
mock_client = MagicMock()
mock_exa_class.return_value = mock_client
tool = ExaSearchTool(
api_key="test_api_key", highlights={"max_characters": 4000}
)
tool._run(search_query="hello world")
mock_client.search_and_contents.assert_called_once_with(
"hello world",
highlights={"max_characters": 4000},
type="auto",
)
mock_client.search.assert_not_called()
def test_exasearchtool_alias_is_deprecated():
with patch("crewai_tools.tools.exa_tools.exa_search_tool.Exa"):
with pytest.warns(DeprecationWarning, match="ExaSearchTool"):
tool = EXASearchTool(api_key="test_api_key")
assert isinstance(tool, ExaSearchTool)
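The new `test_exa_search_tool_highlights_uses_search_and_contents` test pins down the dispatch rule: when a content option such as `highlights` is set, the tool calls `search_and_contents` instead of plain `search`. A minimal sketch of that routing, where `FakeExa` and `run_query` are hypothetical names standing in for the real client and the tool's `_run` (whose actual signature differs):

```python
class FakeExa:
    """Minimal stand-in that records which Exa endpoint gets called."""

    def __init__(self):
        self.calls = []

    def search(self, query, **kwargs):
        self.calls.append(("search", query, kwargs))

    def search_and_contents(self, query, **kwargs):
        self.calls.append(("search_and_contents", query, kwargs))


def run_query(client, query, *, highlights=None, content=False,
              summary=False, type="auto"):
    """Route to search_and_contents when any content option is requested."""
    kwargs = {"type": type}
    if highlights:
        kwargs["highlights"] = highlights
    if content:
        kwargs["text"] = content
    if summary:
        kwargs["summary"] = summary
    if highlights or content or summary:
        return client.search_and_contents(query, **kwargs)
    return client.search(query, **kwargs)
```

With `highlights={"max_characters": 4000}` this reaches `search_and_contents` and never touches `search`, mirroring the assertion pair in the test above.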

View File

@@ -6,7 +6,10 @@ import os
import pytest
from unittest.mock import MagicMock, patch
from crewai_tools.security.safe_path import (
safe_get,
validate_directory_path,
validate_file_path,
validate_url,
@@ -168,3 +171,62 @@ class TestValidateUrl:
# file:// would normally be blocked
result = validate_url("file:///etc/passwd")
assert result == "file:///etc/passwd"
# ---------------------------------------------------------------------------
# safe_get — redirect-aware SSRF protection
# ---------------------------------------------------------------------------
def _fake_getaddrinfo_factory(ip: str):
"""Return a getaddrinfo replacement that always resolves to *ip*."""
def _fake(host, port, *args, **kwargs):
return [(2, 1, 6, "", (ip, port or 80))]
return _fake
class TestSafeGet:
"""Tests for safe_get (validates IPs on every redirect hop)."""
@patch("crewai_tools.security.safe_path.socket.getaddrinfo",
side_effect=_fake_getaddrinfo_factory("93.184.216.34"))
@patch("requests.adapters.HTTPAdapter.send")
def test_allows_public_url(self, mock_send, mock_dns):
mock_response = MagicMock()
mock_response.status_code = 200
mock_response.is_redirect = False
mock_response.headers = {}
mock_send.return_value = mock_response
resp = safe_get("https://example.com/page")
assert resp.status_code == 200
@patch("crewai_tools.security.safe_path.socket.getaddrinfo",
side_effect=_fake_getaddrinfo_factory("127.0.0.1"))
def test_blocks_redirect_to_localhost(self, mock_dns):
with pytest.raises(ValueError, match="private/reserved IP"):
safe_get("http://evil.com/redirect")
@patch("crewai_tools.security.safe_path.socket.getaddrinfo",
side_effect=_fake_getaddrinfo_factory("169.254.169.254"))
def test_blocks_redirect_to_metadata(self, mock_dns):
with pytest.raises(ValueError, match="private/reserved IP"):
safe_get("http://evil.com/metadata")
@patch("crewai_tools.security.safe_path.socket.getaddrinfo",
side_effect=_fake_getaddrinfo_factory("10.0.0.1"))
def test_blocks_redirect_to_private_range(self, mock_dns):
with pytest.raises(ValueError, match="private/reserved IP"):
safe_get("http://evil.com/internal")
@patch("crewai_tools.security.safe_path.socket.getaddrinfo",
side_effect=_fake_getaddrinfo_factory("169.254.169.254"))
@patch("requests.adapters.HTTPAdapter.send")
def test_escape_hatch_bypasses_redirect_check(self, mock_send, mock_dns, monkeypatch):
monkeypatch.setenv("CREWAI_TOOLS_ALLOW_UNSAFE_PATHS", "true")
mock_response = MagicMock()
mock_response.status_code = 200
mock_response.is_redirect = False
mock_response.headers = {}
mock_send.return_value = mock_response
resp = safe_get("http://evil.com/metadata")
assert resp.status_code == 200
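The three blocking tests above cover the classic SSRF targets: loopback, the cloud metadata endpoint, and an RFC 1918 range. The real blocklist lives in `crewai_tools.security.safe_path`; as a rough illustration, the kind of per-hop check the adapter would run against each resolved address can be expressed with only the stdlib `ipaddress` module (this is a sketch, not the library's actual implementation):

```python
import ipaddress


def is_blocked_ip(ip: str) -> bool:
    """True when *ip* falls in a private/reserved range that SSRF
    protection should refuse: loopback, RFC 1918, link-local
    (including 169.254.169.254 metadata), reserved, or multicast."""
    addr = ipaddress.ip_address(ip)
    return (
        addr.is_private
        or addr.is_loopback
        or addr.is_link_local
        or addr.is_reserved
        or addr.is_multicast
    )
```

Because the check runs on the resolved IP of every hop rather than on the initial URL alone, a public hostname that redirects (or re-resolves) to an internal address is still caught.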

View File

@@ -9397,7 +9397,7 @@
}
},
{
"description": "Search the internet using Exa",
"description": "Search the web with Exa, the fastest and most accurate web search API.",
"env_vars": [
{
"default": null,
@@ -9412,7 +9412,7 @@
"required": false
}
],
"humanized_name": "EXASearchTool",
"humanized_name": "ExaSearchTool",
"init_params_schema": {
"$defs": {
"EnvVar": {
@@ -9451,6 +9451,7 @@
"type": "object"
}
},
"description": "Deprecated alias for :class:`ExaSearchTool`. Kept for backwards compatibility.",
"properties": {
"api_key": {
"anyOf": [
@@ -9493,6 +9494,10 @@
{
"type": "boolean"
},
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
@@ -9500,11 +9505,31 @@
"default": false,
"title": "Content"
},
"highlights": {
"anyOf": [
{
"type": "boolean"
},
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
],
"default": true,
"title": "Highlights"
},
"summary": {
"anyOf": [
{
"type": "boolean"
},
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
@@ -9586,7 +9611,225 @@
"required": [
"search_query"
],
"title": "EXABaseToolSchema",
"title": "ExaBaseToolSchema",
"type": "object"
}
},
{
"description": "Search the web with Exa, the fastest and most accurate web search API.",
"env_vars": [
{
"default": null,
"description": "API key for Exa services",
"name": "EXA_API_KEY",
"required": false
},
{
"default": null,
"description": "API url for the Exa services",
"name": "EXA_BASE_URL",
"required": false
}
],
"humanized_name": "ExaSearchTool",
"init_params_schema": {
"$defs": {
"EnvVar": {
"properties": {
"default": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Default"
},
"description": {
"title": "Description",
"type": "string"
},
"name": {
"title": "Name",
"type": "string"
},
"required": {
"default": true,
"title": "Required",
"type": "boolean"
}
},
"required": [
"name",
"description"
],
"title": "EnvVar",
"type": "object"
}
},
"properties": {
"api_key": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"description": "API key for Exa services",
"required": false,
"title": "Api Key"
},
"base_url": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"description": "API server url",
"required": false,
"title": "Base Url"
},
"client": {
"anyOf": [
{},
{
"type": "null"
}
],
"default": null,
"title": "Client"
},
"content": {
"anyOf": [
{
"type": "boolean"
},
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
],
"default": false,
"title": "Content"
},
"highlights": {
"anyOf": [
{
"type": "boolean"
},
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
],
"default": true,
"title": "Highlights"
},
"summary": {
"anyOf": [
{
"type": "boolean"
},
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
],
"default": false,
"title": "Summary"
},
"type": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": "auto",
"title": "Type"
}
},
"required": [],
"title": "ExaSearchTool",
"type": "object"
},
"name": "ExaSearchTool",
"package_dependencies": [
"exa_py"
],
"run_params_schema": {
"properties": {
"end_published_date": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "End date for the search",
"title": "End Published Date"
},
"include_domains": {
"anyOf": [
{
"items": {
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "List of domains to include in the search",
"title": "Include Domains"
},
"search_query": {
"description": "Mandatory search query you want to use to search the internet",
"title": "Search Query",
"type": "string"
},
"start_published_date": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Start date for the search",
"title": "Start Published Date"
}
},
"required": [
"search_query"
],
"title": "ExaBaseToolSchema",
"type": "object"
}
},

View File

@@ -55,7 +55,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
"crewai-tools==1.14.4a1",
"crewai-tools==1.14.5a2",
]
embeddings = [
"tiktoken>=0.8.0,<0.13"

View File

@@ -48,7 +48,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
__version__ = "1.14.4a1"
__version__ = "1.14.5a2"
_LAZY_IMPORTS: dict[str, tuple[str, str]] = {
"Memory": ("crewai.memory.unified_memory", "Memory"),

View File

@@ -386,8 +386,7 @@ def _execute_task_with_a2a(
return raw_result
finally:
task.description = original_description
if task.output_pydantic is not None:
task.output_pydantic = original_output_pydantic
task.output_pydantic = original_output_pydantic
task.response_model = original_response_model
@@ -1534,8 +1533,7 @@ async def _aexecute_task_with_a2a(
return raw_result
finally:
task.description = original_description
if task.output_pydantic is not None:
task.output_pydantic = original_output_pydantic
task.output_pydantic = original_output_pydantic
task.response_model = original_response_model

View File

@@ -1102,16 +1102,6 @@ class Agent(BaseAgent):
self.agent_executor.tools_handler = self.tools_handler
self.agent_executor.request_within_rpm_limit = rpm_limit_fn
if isinstance(self.agent_executor.llm, BaseLLM):
existing_stop = getattr(self.agent_executor.llm, "stop", [])
self.agent_executor.llm.stop = list(
set(
existing_stop + stop_words
if isinstance(existing_stop, list)
else stop_words
)
)
def get_delegation_tools(self, agents: Sequence[BaseAgent]) -> list[BaseTool]:
agent_tools = AgentTools(agents=agents)
return agent_tools.tools()

View File

@@ -49,6 +49,7 @@ from crewai.hooks.tool_hooks import (
)
from crewai.types.callback import SerializableCallable
from crewai.utilities.agent_utils import (
_llm_stop_words_applied,
aget_llm_response,
convert_tools_to_openai_schema,
enforce_rpm_limit,
@@ -141,15 +142,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
self.before_llm_call_hooks.extend(get_before_llm_call_hooks())
if not self.after_llm_call_hooks:
self.after_llm_call_hooks.extend(get_after_llm_call_hooks())
if self.llm and not isinstance(self.llm, str):
existing_stop = getattr(self.llm, "stop", [])
self.llm.stop = list(
set(
existing_stop + self.stop
if isinstance(existing_stop, list)
else self.stop
)
)
@property
def use_stop_words(self) -> bool:
@@ -210,21 +202,22 @@ class CrewAgentExecutor(BaseAgentExecutor):
self.ask_for_human_input = bool(inputs.get("ask_for_human_input", False))
try:
formatted_answer = self._invoke_loop()
except AssertionError:
if self.agent.verbose:
PRINTER.print(
content="Agent failed to reach a final answer. This is likely a bug - please report it.",
color="red",
)
raise
except Exception as e:
handle_unknown_error(PRINTER, e, verbose=self.agent.verbose)
raise
with _llm_stop_words_applied(self.llm, self):
try:
formatted_answer = self._invoke_loop()
except AssertionError:
if self.agent.verbose:
PRINTER.print(
content="Agent failed to reach a final answer. This is likely a bug - please report it.",
color="red",
)
raise
except Exception as e:
handle_unknown_error(PRINTER, e, verbose=self.agent.verbose)
raise
if self.ask_for_human_input:
formatted_answer = self._handle_human_feedback(formatted_answer)
if self.ask_for_human_input:
formatted_answer = self._handle_human_feedback(formatted_answer)
self._save_to_memory(formatted_answer)
return {"output": formatted_answer.output}
@@ -1082,21 +1075,22 @@ class CrewAgentExecutor(BaseAgentExecutor):
self.ask_for_human_input = bool(inputs.get("ask_for_human_input", False))
try:
formatted_answer = await self._ainvoke_loop()
except AssertionError:
if self.agent.verbose:
PRINTER.print(
content="Agent failed to reach a final answer. This is likely a bug - please report it.",
color="red",
)
raise
except Exception as e:
handle_unknown_error(PRINTER, e, verbose=self.agent.verbose)
raise
with _llm_stop_words_applied(self.llm, self):
try:
formatted_answer = await self._ainvoke_loop()
except AssertionError:
if self.agent.verbose:
PRINTER.print(
content="Agent failed to reach a final answer. This is likely a bug - please report it.",
color="red",
)
raise
except Exception as e:
handle_unknown_error(PRINTER, e, verbose=self.agent.verbose)
raise
if self.ask_for_human_input:
formatted_answer = await self._ahandle_human_feedback(formatted_answer)
if self.ask_for_human_input:
formatted_answer = await self._ahandle_human_feedback(formatted_answer)
self._save_to_memory(formatted_answer)
return {"output": formatted_answer.output}
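This refactor replaces init-time mutation of `llm.stop` with a context manager (`_llm_stop_words_applied` from `crewai.utilities.agent_utils`) wrapped around the invoke loop, so stop words no longer leak onto a shared LLM instance after execution. A rough sketch of what such a context manager could look like; the real helper's exact signature is an assumption here:

```python
from contextlib import contextmanager


@contextmanager
def llm_stop_words_applied(llm, stop_words):
    """Temporarily merge *stop_words* into llm.stop for the duration of
    a run, restoring the original value even if the run raises."""
    original = getattr(llm, "stop", None)
    base = original if isinstance(original, list) else []
    llm.stop = list(set(base) | set(stop_words))
    try:
        yield llm
    finally:
        llm.stop = original
```

The `finally` clause is the point of the change: even when `_invoke_loop` raises, the LLM leaves the block with the same `stop` list it entered with.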

View File

@@ -774,7 +774,7 @@ def calculator(expression: str) -> str:
```
### Built-in Tools (install with `uv add crewai-tools`)
Web/Search: SerperDevTool, ScrapeWebsiteTool, WebsiteSearchTool, EXASearchTool, FirecrawlSearchTool
Web/Search: SerperDevTool, ScrapeWebsiteTool, WebsiteSearchTool, ExaSearchTool, FirecrawlSearchTool
Documents: FileReadTool, DirectoryReadTool, PDFSearchTool, DOCXSearchTool, CSVSearchTool, JSONSearchTool, XMLSearchTool, MDXSearchTool
Code: CodeInterpreterTool, CodeDocsSearchTool, GithubSearchTool
Media: DALL-E Tool, YoutubeChannelSearchTool, YoutubeVideoSearchTool

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.4a1"
"crewai[tools]==1.14.5a2"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.4a1"
"crewai[tools]==1.14.5a2"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.4a1"
"crewai[tools]==1.14.5a2"
]
[tool.crewai]

View File

@@ -1283,8 +1283,8 @@ class Crew(FlowTrackable, BaseModel):
pending_tasks.append((task, async_task, task_index))
else:
if pending_tasks:
task_outputs = await self._aprocess_async_tasks(
pending_tasks, was_replayed
task_outputs.extend(
await self._aprocess_async_tasks(pending_tasks, was_replayed)
)
pending_tasks.clear()
@@ -1299,7 +1299,9 @@ class Crew(FlowTrackable, BaseModel):
self._store_execution_log(task, task_output, task_index, was_replayed)
if pending_tasks:
task_outputs = await self._aprocess_async_tasks(pending_tasks, was_replayed)
task_outputs.extend(
await self._aprocess_async_tasks(pending_tasks, was_replayed)
)
return self._create_crew_output(task_outputs)
@@ -1313,7 +1315,9 @@ class Crew(FlowTrackable, BaseModel):
) -> TaskOutput | None:
"""Handle conditional task evaluation using native async."""
if pending_tasks:
task_outputs = await self._aprocess_async_tasks(pending_tasks, was_replayed)
task_outputs.extend(
await self._aprocess_async_tasks(pending_tasks, was_replayed)
)
pending_tasks.clear()
return check_conditional_skip(
@@ -1489,7 +1493,9 @@ class Crew(FlowTrackable, BaseModel):
futures.append((task, future, task_index))
else:
if futures:
task_outputs = self._process_async_tasks(futures, was_replayed)
task_outputs.extend(
self._process_async_tasks(futures, was_replayed)
)
futures.clear()
context = self._get_context(task, task_outputs)
@@ -1503,7 +1509,7 @@ class Crew(FlowTrackable, BaseModel):
self._store_execution_log(task, task_output, task_index, was_replayed)
if futures:
task_outputs = self._process_async_tasks(futures, was_replayed)
task_outputs.extend(self._process_async_tasks(futures, was_replayed))
return self._create_crew_output(task_outputs)
@@ -1516,7 +1522,7 @@ class Crew(FlowTrackable, BaseModel):
was_replayed: bool,
) -> TaskOutput | None:
if futures:
task_outputs = self._process_async_tasks(futures, was_replayed)
task_outputs.extend(self._process_async_tasks(futures, was_replayed))
futures.clear()
return check_conditional_skip(

View File

@@ -108,6 +108,13 @@ from crewai.events.types.reasoning_events import (
AgentReasoningFailedEvent,
AgentReasoningStartedEvent,
)
from crewai.events.types.skill_events import (
SkillActivatedEvent,
SkillDiscoveryCompletedEvent,
SkillDiscoveryStartedEvent,
SkillLoadFailedEvent,
SkillLoadedEvent,
)
from crewai.events.types.system_events import SignalEvent, on_signal
from crewai.events.types.task_events import (
TaskCompletedEvent,
@@ -530,6 +537,30 @@ class TraceCollectionListener(BaseEventListener):
) -> None:
self._handle_action_event("knowledge_query_failed", source, event)
@event_bus.on(SkillDiscoveryStartedEvent)
def on_skill_discovery_started(
source: Any, event: SkillDiscoveryStartedEvent
) -> None:
self._handle_action_event("skill_discovery_started", source, event)
@event_bus.on(SkillDiscoveryCompletedEvent)
def on_skill_discovery_completed(
source: Any, event: SkillDiscoveryCompletedEvent
) -> None:
self._handle_action_event("skill_discovery_completed", source, event)
@event_bus.on(SkillLoadedEvent)
def on_skill_loaded(source: Any, event: SkillLoadedEvent) -> None:
self._handle_action_event("skill_loaded", source, event)
@event_bus.on(SkillActivatedEvent)
def on_skill_activated(source: Any, event: SkillActivatedEvent) -> None:
self._handle_action_event("skill_activated", source, event)
@event_bus.on(SkillLoadFailedEvent)
def on_skill_load_failed(source: Any, event: SkillLoadFailedEvent) -> None:
self._handle_action_event("skill_load_failed", source, event)
def _register_a2a_event_handlers(self, event_bus: CrewAIEventsBus) -> None:
"""Register handlers for A2A (Agent-to-Agent) events."""

View File

@@ -71,6 +71,7 @@ from crewai.hooks.types import (
from crewai.tools.base_tool import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.utilities.agent_utils import (
_llm_stop_words_applied,
check_native_tool_support,
enforce_rpm_limit,
extract_tool_call_info,
@@ -215,12 +216,6 @@ class AgentExecutor(Flow[AgentExecutorState], BaseAgentExecutor):
self.before_llm_call_hooks.extend(get_before_llm_call_hooks())
self.after_llm_call_hooks.extend(get_after_llm_call_hooks())
if self.llm:
existing_stop = getattr(self.llm, "stop", [])
if not isinstance(existing_stop, list):
existing_stop = []
self.llm.stop = list(set(existing_stop + self.stop_words))
self._state = AgentExecutorState()
self.max_method_calls = self.max_iter * 10
@@ -2601,17 +2596,18 @@ class AgentExecutor(Flow[AgentExecutorState], BaseAgentExecutor):
inputs.get("ask_for_human_input", False)
)
self.kickoff()
with _llm_stop_words_applied(self.llm, self):
self.kickoff()
formatted_answer = self.state.current_answer
formatted_answer = self.state.current_answer
if not isinstance(formatted_answer, AgentFinish):
raise RuntimeError(
"Agent execution ended without reaching a final answer."
)
if not isinstance(formatted_answer, AgentFinish):
raise RuntimeError(
"Agent execution ended without reaching a final answer."
)
if self.state.ask_for_human_input:
formatted_answer = self._handle_human_feedback(formatted_answer)
if self.state.ask_for_human_input:
formatted_answer = self._handle_human_feedback(formatted_answer)
self._save_to_memory(formatted_answer)
@@ -2691,18 +2687,20 @@ class AgentExecutor(Flow[AgentExecutorState], BaseAgentExecutor):
inputs.get("ask_for_human_input", False)
)
# Use async kickoff directly since we're already in an async context
await self.kickoff_async()
with _llm_stop_words_applied(self.llm, self):
await self.kickoff_async()
formatted_answer = self.state.current_answer
formatted_answer = self.state.current_answer
if not isinstance(formatted_answer, AgentFinish):
raise RuntimeError(
"Agent execution ended without reaching a final answer."
)
if not isinstance(formatted_answer, AgentFinish):
raise RuntimeError(
"Agent execution ended without reaching a final answer."
)
if self.state.ask_for_human_input:
formatted_answer = await self._ahandle_human_feedback(formatted_answer)
if self.state.ask_for_human_input:
formatted_answer = await self._ahandle_human_feedback(
formatted_answer
)
self._save_to_memory(formatted_answer)

View File

@@ -1074,7 +1074,6 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
_human_feedback_method_outputs: dict[str, Any] = PrivateAttr(default_factory=dict)
_input_history: list[InputHistoryEntry] = PrivateAttr(default_factory=list)
_state: Any = PrivateAttr(default=None)
_execution_id: str = PrivateAttr(default_factory=lambda: str(uuid4()))
def __class_getitem__(cls: type[Flow[T]], item: type[T]) -> type[Flow[T]]: # type: ignore[override]
class _FlowGeneric(cls): # type: ignore[valid-type,misc]
@@ -1865,27 +1864,6 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
except (AttributeError, TypeError):
return "" # Safely handle any unexpected attribute access issues
@property
def execution_id(self) -> str:
"""Stable identifier for this flow execution.
Separate from ``flow_id`` / ``state.id``, which consumers may
override via ``kickoff(inputs={"id": ...})`` to resume a persisted
flow. ``execution_id`` is never affected by ``inputs`` and stays
stable for the lifetime of a single run, so it is the correct key
for telemetry, tracing, and any external correlation that must
uniquely identify a single execution even when callers pass an
``id`` in ``inputs``.
Defaults to a fresh ``uuid4`` per ``Flow`` instance; assign to
override when an outer system already has an execution identity.
"""
return self._execution_id
@execution_id.setter
def execution_id(self, value: str) -> None:
self._execution_id = value
def _initialize_state(self, inputs: dict[str, Any]) -> None:
"""Initialize or update flow state with new inputs.
@@ -2054,6 +2032,7 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
inputs: dict[str, Any] | None = None,
input_files: dict[str, FileInput] | None = None,
from_checkpoint: CheckpointConfig | None = None,
restore_from_state_id: str | None = None,
) -> Any | FlowStreamingOutput:
"""Start the flow execution in a synchronous context.
@@ -2065,10 +2044,24 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
input_files: Optional dict of named file inputs for the flow.
from_checkpoint: Optional checkpoint config. If ``restore_from``
is set, the flow resumes from that checkpoint.
restore_from_state_id: Optional UUID of a previously-persisted flow
whose latest snapshot should hydrate this run's state. The new
run is assigned a fresh ``state.id`` (or ``inputs["id"]`` if
pinned), so its ``@persist`` writes land under a separate
persistence key and the source flow's history is preserved.
If the referenced state is not found, the kickoff falls back
silently to baseline behavior. Cannot be combined with
``from_checkpoint``; passing both raises ``ValueError``.
Returns:
The final output from the flow or FlowStreamingOutput if streaming.
"""
if from_checkpoint is not None and restore_from_state_id is not None:
raise ValueError(
"Cannot combine `from_checkpoint` and `restore_from_state_id`. "
"These parameters target different state systems "
"(Checkpointing and @persist) and cannot be used together."
)
restored = apply_checkpoint(self, from_checkpoint)
if restored is not None:
return restored.kickoff(inputs=inputs, input_files=input_files)
@@ -2090,7 +2083,11 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
def run_flow() -> None:
try:
self.stream = False
result = self.kickoff(inputs=inputs, input_files=input_files)
result = self.kickoff(
inputs=inputs,
input_files=input_files,
restore_from_state_id=restore_from_state_id,
)
result_holder.append(result)
except Exception as e:
# HumanFeedbackPending is expected control flow, not an error
@@ -2113,7 +2110,11 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
return streaming_output
async def _run_flow() -> Any:
return await self.kickoff_async(inputs, input_files)
return await self.kickoff_async(
inputs,
input_files,
restore_from_state_id=restore_from_state_id,
)
try:
asyncio.get_running_loop()
@@ -2128,6 +2129,7 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
inputs: dict[str, Any] | None = None,
input_files: dict[str, FileInput] | None = None,
from_checkpoint: CheckpointConfig | None = None,
restore_from_state_id: str | None = None,
) -> Any | FlowStreamingOutput:
"""Start the flow execution asynchronously.
@@ -2141,10 +2143,23 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
input_files: Optional dict of named file inputs for the flow.
from_checkpoint: Optional checkpoint config. If ``restore_from``
is set, the flow resumes from that checkpoint.
restore_from_state_id: Optional UUID of a previously-persisted flow
whose latest snapshot should hydrate this run's state. The new
run is assigned a fresh ``state.id`` (or ``inputs["id"]`` if
pinned), so subsequent ``@persist`` writes land under a
separate persistence key. If the referenced state is not
found, falls back silently to baseline. Cannot be combined
with ``from_checkpoint``; passing both raises ``ValueError``.
Returns:
The final output from the flow, which is the result of the last executed method.
"""
if from_checkpoint is not None and restore_from_state_id is not None:
raise ValueError(
"Cannot combine `from_checkpoint` and `restore_from_state_id`. "
"These parameters target different state systems "
"(Checkpointing and @persist) and cannot be used together."
)
restored = apply_checkpoint(self, from_checkpoint)
if restored is not None:
return await restored.kickoff_async(inputs=inputs, input_files=input_files)
@@ -2167,7 +2182,9 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
try:
self.stream = False
result = await self.kickoff_async(
inputs=inputs, input_files=input_files
inputs=inputs,
input_files=input_files,
restore_from_state_id=restore_from_state_id,
)
result_holder.append(result)
except Exception as e:
@@ -2199,9 +2216,9 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
flow_id_token = None
request_id_token = None
if current_flow_id.get() is None:
flow_id_token = current_flow_id.set(self.execution_id)
flow_id_token = current_flow_id.set(self.flow_id)
if current_flow_request_id.get() is None:
request_id_token = current_flow_request_id.set(self.execution_id)
request_id_token = current_flow_request_id.set(self.flow_id)
try:
# Reset flow state for fresh execution unless restoring from persistence
@@ -2224,16 +2241,54 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
if self._completed_methods:
self._is_execution_resuming = True
# Fork hydration: when restore_from_state_id is set and persistence is
# available, hydrate self._state from the source UUID's latest snapshot
# and reassign state.id to a fresh value so subsequent @persist writes
# don't extend the source flow's history. If the source state is not
# found, fall through silently to the existing inputs handling.
fork_succeeded = False
if restore_from_state_id is not None and self.persistence is not None:
stored_state = self.persistence.load_state(restore_from_state_id)
if stored_state:
self._log_flow_event(
f"Forking flow state from UUID: {restore_from_state_id}"
)
self._restore_state(stored_state)
# Pin to inputs["id"] when provided, otherwise mint a fresh
# UUID. NOTE: pinning inputs.id while forking shares a
# persistence key with another flow — usually you want only
# restore_from_state_id.
new_state_id = (inputs.get("id") if inputs else None) or str(
uuid4()
)
if isinstance(self._state, dict):
self._state["id"] = new_state_id
elif isinstance(self._state, BaseModel):
setattr(self._state, "id", new_state_id) # noqa: B010
fork_succeeded = True
else:
self._log_flow_event(
"No flow state found for restore_from_state_id: "
f"{restore_from_state_id}; proceeding without hydration",
color="yellow",
)
if inputs:
# Override the id in the state if it exists in inputs
if "id" in inputs:
# Override the id in the state if it exists in inputs.
# Skip when the fork already assigned state.id above.
if "id" in inputs and not fork_succeeded:
if isinstance(self._state, dict):
self._state["id"] = inputs["id"]
elif isinstance(self._state, BaseModel):
setattr(self._state, "id", inputs["id"]) # noqa: B010
# If persistence is enabled, attempt to restore the stored state using the provided id.
if "id" in inputs and self.persistence is not None:
# Skip when the fork already restored self._state above.
if (
"id" in inputs
and self.persistence is not None
and not fork_succeeded
):
restore_uuid = inputs["id"]
stored_state = self.persistence.load_state(restore_uuid)
if stored_state:
@@ -2416,6 +2471,7 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
inputs: dict[str, Any] | None = None,
input_files: dict[str, FileInput] | None = None,
from_checkpoint: CheckpointConfig | None = None,
restore_from_state_id: str | None = None,
) -> Any | FlowStreamingOutput:
"""Native async method to start the flow execution. Alias for kickoff_async.
@@ -2424,11 +2480,19 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
input_files: Optional dict of named file inputs for the flow.
from_checkpoint: Optional checkpoint config. If ``restore_from``
is set, the flow resumes from that checkpoint.
restore_from_state_id: Optional UUID of a previously-persisted flow
whose latest snapshot should hydrate this run's state. See
``kickoff_async`` for full semantics.
Returns:
The final output from the flow, which is the result of the last executed method.
"""
return await self.kickoff_async(inputs, input_files, from_checkpoint)
return await self.kickoff_async(
inputs,
input_files,
from_checkpoint,
restore_from_state_id=restore_from_state_id,
)
async def _replay_recorded_events(self) -> None:
"""Dispatch recorded ``MethodExecution*`` events from the event record."""

View File

@@ -50,7 +50,6 @@ LOG_MESSAGES: Final[dict[str, str]] = {
"save_error": "Failed to persist state for method {}: {}",
"state_missing": "Flow instance has no state",
"id_missing": "Flow state must have an 'id' field for persistence",
"key_missing": "Flow state is missing required persistence key '{}'",
}
@@ -64,7 +63,6 @@ class PersistenceDecorator:
method_name: str,
persistence_instance: FlowPersistence,
verbose: bool = False,
key: str | None = None,
) -> None:
"""Persist flow state with proper error handling and logging.
@@ -76,12 +74,9 @@ class PersistenceDecorator:
method_name: Name of the method that triggered persistence
persistence_instance: The persistence backend to use
verbose: Whether to log persistence operations
key: Optional state attribute/key to use as the persistence key.
When None, falls back to ``state.id``.
Raises:
ValueError: If flow has no state, state lacks an ID, or the
requested ``key`` is missing or falsy on state.
ValueError: If flow has no state or state lacks an ID
RuntimeError: If state persistence fails
AttributeError: If flow instance lacks required state attributes
"""
@@ -90,22 +85,19 @@ class PersistenceDecorator:
if state is None:
raise ValueError("Flow instance has no state")
lookup_key = key if key is not None else "id"
flow_uuid: str | None = None
if isinstance(state, dict):
flow_uuid = state.get(lookup_key)
flow_uuid = state.get("id")
elif hasattr(state, "_unwrap"):
unwrapped = state._unwrap()
if isinstance(unwrapped, dict):
flow_uuid = unwrapped.get(lookup_key)
flow_uuid = unwrapped.get("id")
else:
flow_uuid = getattr(unwrapped, lookup_key, None)
elif isinstance(state, BaseModel) or hasattr(state, lookup_key):
flow_uuid = getattr(state, lookup_key, None)
flow_uuid = getattr(unwrapped, "id", None)
elif isinstance(state, BaseModel) or hasattr(state, "id"):
flow_uuid = getattr(state, "id", None)
if not flow_uuid:
if key is not None:
raise ValueError(LOG_MESSAGES["key_missing"].format(key))
raise ValueError("Flow state must have an 'id' field for persistence")
# Log state saving only if verbose is True
@@ -135,7 +127,7 @@ class PersistenceDecorator:
logger.error(error_msg)
raise ValueError(error_msg) from e
except (TypeError, ValueError) as e:
error_msg = str(e) or LOG_MESSAGES["id_missing"]
error_msg = LOG_MESSAGES["id_missing"]
if verbose:
PRINTER.print(error_msg, color="red")
logger.error(error_msg)
@@ -143,9 +135,7 @@ class PersistenceDecorator:
def persist(
persistence: FlowPersistence | None = None,
verbose: bool = False,
key: str | None = None,
persistence: FlowPersistence | None = None, verbose: bool = False
) -> Callable[[type | Callable[..., T]], type | Callable[..., T]]:
"""Decorator to persist flow state.
@@ -158,16 +148,12 @@ def persist(
persistence: Optional FlowPersistence implementation to use.
If not provided, uses SQLiteFlowPersistence.
verbose: Whether to log persistence operations. Defaults to False.
key: Optional name of the state attribute (for Pydantic/object states)
or dict key (for dict states) to use as the persistence key. When
``None`` (default) the decorator falls back to ``state.id``.
Returns:
A decorator that can be applied to either a class or method
Raises:
ValueError: If the flow state doesn't have an 'id' field, or the
specified ``key`` is missing or falsy on state.
ValueError: If the flow state doesn't have an 'id' field
RuntimeError: If state persistence fails
Example:
@@ -176,10 +162,6 @@ def persist(
@start()
def begin(self):
pass
@persist(key="conversation_id") # Custom persistence key
class MyFlow(Flow[MyState]):
...
"""
def decorator(target: type | Callable[..., T]) -> type | Callable[..., T]:
@@ -225,7 +207,7 @@ def persist(
) -> Any:
result = await original_method(self, *args, **kwargs)
PersistenceDecorator.persist_state(
self, method_name, actual_persistence, verbose, key
self, method_name, actual_persistence, verbose
)
return result
@@ -255,7 +237,7 @@ def persist(
def method_wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
result = original_method(self, *args, **kwargs)
PersistenceDecorator.persist_state(
self, method_name, actual_persistence, verbose, key
self, method_name, actual_persistence, verbose
)
return result
@@ -294,7 +276,7 @@ def persist(
else:
result = method_coro
PersistenceDecorator.persist_state(
flow_instance, method.__name__, actual_persistence, verbose, key
flow_instance, method.__name__, actual_persistence, verbose
)
return cast(T, result)
@@ -313,7 +295,7 @@ def persist(
def method_sync_wrapper(flow_instance: Any, *args: Any, **kwargs: Any) -> T:
result = method(flow_instance, *args, **kwargs)
PersistenceDecorator.persist_state(
flow_instance, method.__name__, actual_persistence, verbose, key
flow_instance, method.__name__, actual_persistence, verbose
)
return result
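The hunk above reverts the configurable `key` lookup back to a plain `"id"` read across the three supported state shapes (plain dict, wrapper objects exposing `_unwrap()`, and attribute-bearing models). A minimal standalone sketch of that dispatch, with hypothetical names and the Pydantic dependency replaced by plain attribute access:

```python
from typing import Any


def extract_flow_id(state: Any) -> str:
    """Resolve the persistence id from a dict, a wrapper, or an attribute-bearing state."""
    flow_uuid = None
    if isinstance(state, dict):
        flow_uuid = state.get("id")
    elif hasattr(state, "_unwrap"):  # wrapper states expose the raw object
        unwrapped = state._unwrap()
        if isinstance(unwrapped, dict):
            flow_uuid = unwrapped.get("id")
        else:
            flow_uuid = getattr(unwrapped, "id", None)
    elif hasattr(state, "id"):
        flow_uuid = getattr(state, "id", None)
    if not flow_uuid:
        raise ValueError("Flow state must have an 'id' field for persistence")
    return flow_uuid
```

Note the final `if not flow_uuid` guard rejects falsy ids (empty string, `None`) uniformly, whichever branch produced them.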

View File

@@ -688,7 +688,9 @@ class LLM(BaseLLM):
"temperature": self.temperature,
"top_p": self.top_p,
"n": self.n,
"stop": (self.stop or None) if self.supports_stop_words() else None,
"stop": (self.stop_sequences or None)
if self.supports_stop_words()
else None,
"max_tokens": self.max_tokens or self.max_completion_tokens,
"presence_penalty": self.presence_penalty,
"frequency_penalty": self.frequency_penalty,
@@ -1235,8 +1237,12 @@ class LLM(BaseLLM):
# --- 4) Check for tool calls
tool_calls = response_message.tool_calls or []
# --- 5) If no tool calls or no available functions, return the text response directly as long as there is a text response
if (not tool_calls or not available_functions) and text_response:
# --- 5) If there are tool calls but no available functions, return the tool calls
if tool_calls and not available_functions:
return tool_calls
# --- 6) If there are no tool calls to execute, return the text response directly
if not tool_calls and text_response:
self._handle_emit_call_events(
response=text_response,
call_type=LLMCallType.LLM_CALL,
@@ -1247,11 +1253,6 @@ class LLM(BaseLLM):
)
return text_response
# --- 6) If there are tool calls but no available functions, return the tool calls
# This allows the caller (e.g., executor) to handle tool execution
if tool_calls and not available_functions:
return tool_calls
# --- 7) Handle tool calls if present (execute when available_functions provided)
if tool_calls and available_functions:
tool_result = self._handle_tool_call(
@@ -1384,7 +1385,10 @@ class LLM(BaseLLM):
tool_calls = response_message.tool_calls or []
if (not tool_calls or not available_functions) and text_response:
if tool_calls and not available_functions:
return tool_calls
if not tool_calls and text_response:
self._handle_emit_call_events(
response=text_response,
call_type=LLMCallType.LLM_CALL,
@@ -1395,11 +1399,6 @@ class LLM(BaseLLM):
)
return text_response
# If there are tool calls but no available functions, return the tool calls
# This allows the caller (e.g., executor) to handle tool execution
if tool_calls and not available_functions:
return tool_calls
# Handle tool calls if present (execute when available_functions provided)
if tool_calls and available_functions:
tool_result = self._handle_tool_call(
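The reordering in this hunk makes the "tool calls but no available functions" branch win over the plain-text branch, so a caller such as the executor receives the tool calls even when the model also emitted text alongside them. A simplified behavior sketch of the dispatch order (hypothetical, untyped stand-ins for the real response objects):

```python
def dispatch(tool_calls, available_functions, text_response):
    """Mirror the branch ordering: tool-call passthrough beats text."""
    # 1) Tool calls with no executable functions: hand them back to the caller
    #    (e.g. the executor) to run, even if text was also returned.
    if tool_calls and not available_functions:
        return ("tool_calls", tool_calls)
    # 2) No tool calls at all: return the text response directly.
    if not tool_calls and text_response:
        return ("text", text_response)
    # 3) Otherwise execute the tool calls with the provided functions.
    return ("execute", tool_calls)
```

Under the old ordering, `(tool_calls present, no functions, text present)` fell into the text branch first and the tool calls were silently dropped.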

View File

@@ -72,6 +72,9 @@ _JSON_EXTRACTION_PATTERN: Final[re.Pattern[str]] = re.compile(r"\{.*}", re.DOTAL
_current_call_id: contextvars.ContextVar[str | None] = contextvars.ContextVar(
"_current_call_id", default=None
)
_call_stop_override_var: contextvars.ContextVar[dict[int, list[str]] | None] = (
contextvars.ContextVar("_call_stop_override_var", default=None)
)
@contextmanager
@@ -85,6 +88,31 @@ def llm_call_context() -> Generator[str, None, None]:
_current_call_id.reset(token)
@contextmanager
def call_stop_override(
llm: BaseLLM, stop: list[str] | None
) -> Generator[None, None, None]:
"""Override the stop list for ``llm`` within the current call scope.
Only ``llm``'s reads via :attr:`BaseLLM.stop_sequences` see ``stop``;
other LLM instances (e.g. an agent's ``function_calling_llm``) keep their
own ``stop`` field. Passing ``None`` clears any prior override for ``llm``
in the same scope. The instance-level ``stop`` field is never mutated,
so the override is safe under concurrent execution.
"""
current = _call_stop_override_var.get()
new_overrides: dict[int, list[str]] = dict(current) if current else {}
if stop is None:
new_overrides.pop(id(llm), None)
else:
new_overrides[id(llm)] = stop
token = _call_stop_override_var.set(new_overrides)
try:
yield
finally:
_call_stop_override_var.reset(token)
def get_current_call_id() -> str:
"""Get current call_id from context"""
call_id = _current_call_id.get()
@@ -158,11 +186,18 @@ class BaseLLM(BaseModel, ABC):
@property
def stop_sequences(self) -> list[str]:
"""Alias for ``stop`` — kept for backward compatibility with provider APIs.
"""Stop list active for the current call.
Writes are handled by ``__setattr__``, which normalizes and redirects
``stop_sequences`` assignments to the ``stop`` field.
Returns the per-instance override set via :func:`call_stop_override`
when one is in effect for this LLM; otherwise the instance-level
``stop`` field. Kept under this name for backward compatibility with
provider APIs that already read ``stop_sequences``.
"""
overrides = _call_stop_override_var.get()
if overrides is not None:
override = overrides.get(id(self))
if override is not None:
return override
return self.stop
_token_usage: dict[str, int] = PrivateAttr(
@@ -341,7 +376,7 @@ class BaseLLM(BaseModel, ABC):
Returns:
True if stop words are configured and can be applied
"""
return bool(self.stop)
return bool(self.stop_sequences)
def _apply_stop_words(self, content: str) -> str:
"""Apply stop words to truncate response content.
@@ -363,14 +398,14 @@ class BaseLLM(BaseModel, ABC):
>>> llm._apply_stop_words(response)
"I need to search.\\n\\nAction: search"
"""
if not self.stop or not content:
stops = self.stop_sequences
if not stops or not content:
return content
# Find the earliest occurrence of any stop word
earliest_stop_pos = len(content)
found_stop_word = None
for stop_word in self.stop:
for stop_word in stops:
stop_pos = content.find(stop_word)
if stop_pos != -1 and stop_pos < earliest_stop_pos:
earliest_stop_pos = stop_pos

View File

@@ -679,8 +679,9 @@ class AzureCompletion(BaseLLM):
params["presence_penalty"] = self.presence_penalty
if self.max_tokens is not None:
params["max_tokens"] = self.max_tokens
if self.stop and self.supports_stop_words():
params["stop"] = self.stop
stops = self.stop_sequences
if stops and self.supports_stop_words():
params["stop"] = stops
# Handle tools/functions for Azure OpenAI models
if tools and self.is_openai_model:

View File

@@ -1328,9 +1328,11 @@ class GeminiCompletion(BaseLLM):
usage = response.usage_metadata
cached_tokens = getattr(usage, "cached_content_token_count", 0) or 0
thinking_tokens = getattr(usage, "thoughts_token_count", 0) or 0
candidates_tokens = getattr(usage, "candidates_token_count", 0) or 0
result: dict[str, Any] = {
"prompt_token_count": getattr(usage, "prompt_token_count", 0),
"candidates_token_count": getattr(usage, "candidates_token_count", 0),
"candidates_token_count": candidates_tokens,
"completion_tokens": candidates_tokens + thinking_tokens,
"total_token_count": getattr(usage, "total_token_count", 0),
"total_tokens": getattr(usage, "total_token_count", 0),
"cached_prompt_tokens": cached_tokens,

View File

@@ -53,7 +53,11 @@ from crewai.tasks.task_output import TaskOutput
from crewai.tools.base_tool import BaseTool
from crewai.utilities.config import process_config
from crewai.utilities.constants import NOT_SPECIFIED, _NotSpecified
from crewai.utilities.converter import Converter, convert_to_model
from crewai.utilities.converter import (
Converter,
async_convert_to_model,
convert_to_model,
)
from crewai.utilities.file_store import (
clear_task_files,
get_all_files,
@@ -681,7 +685,7 @@ class Task(BaseModel):
json_output = None
elif not self._guardrails and not self._guardrail:
raw = result
pydantic_output, json_output = self._export_output(result)
pydantic_output, json_output = await self._aexport_output(result)
else:
raw = result
pydantic_output, json_output = None, None
@@ -1110,7 +1114,7 @@ Follow these guidelines:
)
def _export_output(
self, result: str
self, result: str | BaseModel
) -> tuple[BaseModel | None, dict[str, Any] | None]:
pydantic_output: BaseModel | None = None
json_output: dict[str, Any] | None = None
@@ -1123,19 +1127,44 @@ Follow these guidelines:
self.agent,
self.converter_cls,
)
if isinstance(model_output, BaseModel):
pydantic_output = model_output
elif isinstance(model_output, dict):
json_output = model_output
elif isinstance(model_output, str):
try:
json_output = json.loads(model_output)
except json.JSONDecodeError:
json_output = None
pydantic_output, json_output = self._unpack_model_output(model_output)
return pydantic_output, json_output
async def _aexport_output(
self, result: str | BaseModel
) -> tuple[BaseModel | None, dict[str, Any] | None]:
"""Async equivalent of ``_export_output`` — uses ``acall`` so the event loop is not blocked."""
pydantic_output: BaseModel | None = None
json_output: dict[str, Any] | None = None
if self.output_pydantic or self.output_json:
model_output = await async_convert_to_model(
result,
self.output_pydantic,
self.output_json,
self.agent,
self.converter_cls,
)
pydantic_output, json_output = self._unpack_model_output(model_output)
return pydantic_output, json_output
@staticmethod
def _unpack_model_output(
model_output: dict[str, Any] | BaseModel | str,
) -> tuple[BaseModel | None, dict[str, Any] | None]:
if isinstance(model_output, BaseModel):
return model_output, None
if isinstance(model_output, dict):
return None, model_output
if isinstance(model_output, str):
try:
return None, json.loads(model_output)
except json.JSONDecodeError:
return None, None
return None, None
def _get_output_format(self) -> OutputFormat:
if self.output_json:
return OutputFormat.JSON
@@ -1364,7 +1393,7 @@ Follow these guidelines:
if isinstance(guardrail_result.result, str):
task_output.raw = guardrail_result.result
pydantic_output, json_output = self._export_output(
pydantic_output, json_output = await self._aexport_output(
guardrail_result.result
)
task_output.pydantic = pydantic_output
@@ -1421,7 +1450,7 @@ Follow these guidelines:
json_output = None
else:
raw = result
pydantic_output, json_output = self._export_output(result)
pydantic_output, json_output = await self._aexport_output(result)
task_output = TaskOutput(
name=self.name or self.description,

View File

@@ -1,8 +1,9 @@
from __future__ import annotations
import asyncio
from collections.abc import Callable, Sequence
from collections.abc import Callable, Iterator, Sequence
import concurrent.futures
import contextlib
import contextvars
from dataclasses import dataclass, field
from datetime import datetime
@@ -22,7 +23,7 @@ from crewai.agents.parser import (
parse,
)
from crewai.cli.config import Settings
from crewai.llms.base_llm import BaseLLM
from crewai.llms.base_llm import BaseLLM, call_stop_override
from crewai.tools import BaseTool as CrewAITool
from crewai.tools.base_tool import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
@@ -238,6 +239,38 @@ def extract_task_section(text: str) -> str:
return text
def _executor_stop_words(
executor_context: CrewAgentExecutor | AgentExecutor | LiteAgent | None,
) -> list[str]:
"""Return the executor's stop words, regardless of which field name it uses."""
if executor_context is None:
return []
stops = getattr(executor_context, "stop", None)
if stops is None:
stops = getattr(executor_context, "stop_words", None)
return list(stops) if stops else []
@contextlib.contextmanager
def _llm_stop_words_applied(
llm: LLM | BaseLLM,
executor_context: CrewAgentExecutor | AgentExecutor | LiteAgent | None,
) -> Iterator[None]:
"""Apply the executor's stop words to the LLM for the duration of one call.
Uses :func:`crewai.llms.base_llm.call_stop_override` so the LLM's stop
field is never mutated. Safe under concurrent execution: the override is
propagated via a :class:`contextvars.ContextVar` and is scoped to this
call's task / thread context.
"""
extra = _executor_stop_words(executor_context)
if not extra or not isinstance(llm, BaseLLM) or set(extra).issubset(llm.stop):
yield
return
with call_stop_override(llm, list(set(llm.stop + extra))):
yield
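`_executor_stop_words` papers over the fact that different executor types name the field `stop` or `stop_words`. A minimal sketch of that duck-typed read, usable against any executor-like object:

```python
from typing import Any


def executor_stop_words(executor: Any) -> list[str]:
    """Read stop words whether the executor exposes `stop` or `stop_words`."""
    if executor is None:
        return []
    stops = getattr(executor, "stop", None)
    if stops is None:
        stops = getattr(executor, "stop_words", None)
    return list(stops) if stops else []  # always hand back a fresh list
```

Returning a copy matters in the wrapper above it: the merged list passed to `call_stop_override` is built with `set(llm.stop + extra)`, so neither the executor's nor the LLM's own list is ever aliased or mutated.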
def has_reached_max_iterations(iterations: int, max_iterations: int) -> bool:
"""Check if the maximum number of iterations has been reached.
@@ -459,18 +492,15 @@ def get_llm_response(
"""
messages = _prepare_llm_call(executor_context, messages, printer, verbose=verbose)
try:
answer = llm.call(
messages,
tools=tools,
callbacks=callbacks,
available_functions=available_functions,
from_task=from_task,
from_agent=from_agent,
response_model=response_model,
)
except Exception as e:
raise e
answer = llm.call(
messages,
tools=tools,
callbacks=callbacks,
available_functions=available_functions,
from_task=from_task,
from_agent=from_agent,
response_model=response_model,
)
return _validate_and_finalize_llm_response(
answer, executor_context, printer, verbose=verbose
@@ -515,18 +545,15 @@ async def aget_llm_response(
"""
messages = _prepare_llm_call(executor_context, messages, printer, verbose=verbose)
try:
answer = await llm.acall(
messages,
tools=tools,
callbacks=callbacks,
available_functions=available_functions,
from_task=from_task,
from_agent=from_agent,
response_model=response_model,
)
except Exception as e:
raise e
answer = await llm.acall(
messages,
tools=tools,
callbacks=callbacks,
available_functions=available_functions,
from_task=from_task,
from_agent=from_agent,
response_model=response_model,
)
return _validate_and_finalize_llm_response(
answer, executor_context, printer, verbose=verbose
@@ -1565,11 +1592,12 @@ def execute_single_native_tool_call(
color="green",
)
# Check result_as_answer
is_result_as_answer = bool(
original_tool
and hasattr(original_tool, "result_as_answer")
and original_tool.result_as_answer
and not error_event_emitted
and not hook_blocked
)
return NativeToolCallResult(

View File

@@ -1,5 +1,6 @@
from __future__ import annotations
import asyncio
import json
import re
from typing import TYPE_CHECKING, Any, Final, TypedDict
@@ -41,6 +42,45 @@ class ConverterError(Exception):
class Converter(OutputConverter):
"""Class that converts text into either pydantic or json."""
def _build_messages(self) -> list[dict[str, str]]:
return [
{"role": "system", "content": self.instructions},
{"role": "user", "content": self.text},
]
def _coerce_response_to_pydantic(self, response: Any) -> BaseModel:
"""Validate an LLM response into the configured Pydantic model.
Pure post-processing — performs no I/O. Shared by ``to_pydantic`` and
``ato_pydantic`` so the validation/partial-JSON fallback logic stays in
a single place.
"""
if isinstance(response, BaseModel):
return response
try:
return self.model.model_validate_json(response)
except ValidationError:
partial = handle_partial_json(
result=response,
model=self.model,
is_json_output=False,
agent=None,
)
if isinstance(partial, BaseModel):
return partial
if isinstance(partial, dict):
return self.model.model_validate(partial)
if isinstance(partial, str):
try:
return self.model.model_validate_json(partial)
except Exception as parse_err:
raise ConverterError(
f"Failed to convert partial JSON result into Pydantic: {parse_err}"
) from parse_err
raise ConverterError(
"handle_partial_json returned an unexpected type."
) from None
def to_pydantic(self, current_attempt: int = 1) -> BaseModel:
"""Convert text to pydantic.
@@ -56,50 +96,12 @@ class Converter(OutputConverter):
try:
if self.llm.supports_function_calling():
response = self.llm.call(
messages=[
{"role": "system", "content": self.instructions},
{"role": "user", "content": self.text},
],
messages=self._build_messages(),
response_model=self.model,
)
if isinstance(response, BaseModel):
result = response
else:
result = self.model.model_validate_json(response)
else:
response = self.llm.call(
[
{"role": "system", "content": self.instructions},
{"role": "user", "content": self.text},
]
)
try:
# Try to directly validate the response JSON
result = self.model.model_validate_json(response)
except ValidationError:
# If direct validation fails, attempt to extract valid JSON
result = handle_partial_json( # type: ignore[assignment]
result=response,
model=self.model,
is_json_output=False,
agent=None,
)
# Ensure result is a BaseModel instance
if not isinstance(result, BaseModel):
if isinstance(result, dict):
result = self.model.model_validate(result)
elif isinstance(result, str):
try:
result = self.model.model_validate_json(result)
except Exception as parse_err:
raise ConverterError(
f"Failed to convert partial JSON result into Pydantic: {parse_err}"
) from parse_err
else:
raise ConverterError(
"handle_partial_json returned an unexpected type."
) from None
return result
response = self.llm.call(self._build_messages())
return self._coerce_response_to_pydantic(response)
except ValidationError as e:
if current_attempt < self.max_attempts:
return self.to_pydantic(current_attempt + 1)
@@ -113,6 +115,30 @@ class Converter(OutputConverter):
f"Failed to convert text into a Pydantic model due to error: {e}"
) from e
async def ato_pydantic(self, current_attempt: int = 1) -> BaseModel:
"""Async equivalent of ``to_pydantic`` — uses ``acall`` so the event loop is not blocked."""
try:
if self.llm.supports_function_calling():
response = await self.llm.acall(
messages=self._build_messages(),
response_model=self.model,
)
else:
response = await self.llm.acall(self._build_messages())
return self._coerce_response_to_pydantic(response)
except ValidationError as e:
if current_attempt < self.max_attempts:
return await self.ato_pydantic(current_attempt + 1)
raise ConverterError(
f"Failed to convert text into a Pydantic model due to validation error: {e}"
) from e
except Exception as e:
if current_attempt < self.max_attempts:
return await self.ato_pydantic(current_attempt + 1)
raise ConverterError(
f"Failed to convert text into a Pydantic model due to error: {e}"
) from e
def to_json(self, current_attempt: int = 1) -> str | ConverterError | Any: # type: ignore[override]
"""Convert text to json.
@@ -129,19 +155,28 @@ class Converter(OutputConverter):
try:
if self.llm.supports_function_calling():
return self._create_instructor().to_json()
return json.dumps(
self.llm.call(
[
{"role": "system", "content": self.instructions},
{"role": "user", "content": self.text},
]
)
)
return json.dumps(self.llm.call(self._build_messages()))
except Exception as e:
if current_attempt < self.max_attempts:
return self.to_json(current_attempt + 1)
return ConverterError(f"Failed to convert text into JSON, error: {e}.")
async def ato_json(self, current_attempt: int = 1) -> str | ConverterError | Any:
"""Async equivalent of ``to_json``.
The function-calling path delegates to ``InternalInstructor`` (currently
sync-only); we run it via ``asyncio.to_thread`` so the event loop stays
free.
"""
try:
if self.llm.supports_function_calling():
return await asyncio.to_thread(self._create_instructor().to_json)
return json.dumps(await self.llm.acall(self._build_messages()))
except Exception as e:
if current_attempt < self.max_attempts:
return await self.ato_json(current_attempt + 1)
return ConverterError(f"Failed to convert text into JSON, error: {e}.")
def _create_instructor(self) -> InternalInstructor[Any]:
"""Create an instructor."""
@@ -153,16 +188,18 @@ class Converter(OutputConverter):
def convert_to_model(
result: str,
result: str | BaseModel,
output_pydantic: type[BaseModel] | None,
output_json: type[BaseModel] | None,
agent: Agent | BaseAgent | None = None,
converter_cls: type[Converter] | None = None,
) -> dict[str, Any] | BaseModel | str:
"""Convert a result string to a Pydantic model or JSON.
"""Convert a result to a Pydantic model or JSON.
Args:
result: The result string to convert.
result: The result to convert. Usually a JSON string, but a Pydantic
instance is also accepted when an upstream caller already produced
a structured object.
output_pydantic: The Pydantic model class to convert to.
output_json: The Pydantic model class to convert to JSON.
agent: The agent instance.
@@ -175,6 +212,11 @@ def convert_to_model(
if model is None:
return result
if isinstance(result, BaseModel):
if isinstance(result, model):
return result.model_dump() if output_json else result
result = result.model_dump_json()
if converter_cls:
return convert_with_instructions(
result=result,
@@ -257,12 +299,21 @@ def handle_partial_json(
match = _JSON_PATTERN.search(result)
if match:
try:
exported_result = model.model_validate_json(match.group())
parsed = json.loads(match.group(), strict=False)
except json.JSONDecodeError:
return convert_with_instructions(
result=result,
model=model,
is_json_output=is_json_output,
agent=agent,
converter_cls=converter_cls,
)
try:
exported_result = model.model_validate(parsed)
if is_json_output:
return exported_result.model_dump()
return exported_result
except json.JSONDecodeError:
pass
except ValidationError:
raise
except Exception as e:
@@ -338,6 +389,144 @@ def convert_with_instructions(
return exported_result
async def async_convert_to_model(
result: str | BaseModel,
output_pydantic: type[BaseModel] | None,
output_json: type[BaseModel] | None,
agent: Agent | BaseAgent | None = None,
converter_cls: type[Converter] | None = None,
) -> dict[str, Any] | BaseModel | str:
"""Async equivalent of ``convert_to_model`` — uses native ``acall``.
Mirrors the dispatch semantics of the sync version exactly; the only
difference is that LLM-bearing branches are awaited.
"""
model = output_pydantic or output_json
if model is None:
return result
if isinstance(result, BaseModel):
if isinstance(result, model):
return result.model_dump() if output_json else result
result = result.model_dump_json()
if converter_cls:
return await async_convert_with_instructions(
result=result,
model=model,
is_json_output=bool(output_json),
agent=agent,
converter_cls=converter_cls,
)
try:
escaped_result = json.dumps(json.loads(result, strict=False))
return validate_model(
result=escaped_result, model=model, is_json_output=bool(output_json)
)
except (json.JSONDecodeError, ValidationError):
return await async_handle_partial_json(
result=result,
model=model,
is_json_output=bool(output_json),
agent=agent,
converter_cls=converter_cls,
)
except Exception as e:
if agent and getattr(agent, "verbose", True):
PRINTER.print(
content=f"Unexpected error during model conversion: {type(e).__name__}: {e}. Returning original result.",
color="red",
)
return result
async def async_handle_partial_json(
result: str,
model: type[BaseModel],
is_json_output: bool,
agent: Agent | BaseAgent | None,
converter_cls: type[Converter] | None = None,
) -> dict[str, Any] | BaseModel | str:
"""Async equivalent of ``handle_partial_json`` — defers LLM fallback to ``acall``."""
match = _JSON_PATTERN.search(result)
if match:
try:
parsed = json.loads(match.group(), strict=False)
except json.JSONDecodeError:
return await async_convert_with_instructions(
result=result,
model=model,
is_json_output=is_json_output,
agent=agent,
converter_cls=converter_cls,
)
try:
exported_result = model.model_validate(parsed)
if is_json_output:
return exported_result.model_dump()
return exported_result
except ValidationError:
raise
except Exception as e:
if agent and getattr(agent, "verbose", True):
PRINTER.print(
content=f"Unexpected error during partial JSON handling: {type(e).__name__}: {e}. Attempting alternative conversion method.",
color="red",
)
return await async_convert_with_instructions(
result=result,
model=model,
is_json_output=is_json_output,
agent=agent,
converter_cls=converter_cls,
)
async def async_convert_with_instructions(
result: str,
model: type[BaseModel],
is_json_output: bool,
agent: Agent | BaseAgent | None,
converter_cls: type[Converter] | None = None,
) -> dict[str, Any] | BaseModel | str:
"""Async equivalent of ``convert_with_instructions`` — calls ``ato_pydantic``/``ato_json``."""
if agent is None:
raise TypeError("Agent must be provided if converter_cls is not specified.")
llm = getattr(agent, "function_calling_llm", None) or agent.llm
if llm is None:
raise ValueError("Agent must have a valid LLM instance for conversion")
instructions = get_conversion_instructions(model=model, llm=llm)
converter = create_converter(
agent=agent,
converter_cls=converter_cls,
llm=llm,
text=result,
model=model,
instructions=instructions,
)
exported_result = (
await converter.ato_pydantic()
if not is_json_output
else await converter.ato_json()
)
if isinstance(exported_result, ConverterError):
if agent and getattr(agent, "verbose", True):
PRINTER.print(
content=f"Failed to convert result to model: {exported_result}",
color="red",
)
return result
return exported_result
def get_conversion_instructions(
model: type[BaseModel], llm: BaseLLM | LLM | str | Any
) -> str:

View File

@@ -2452,3 +2452,167 @@ def test_agent_mcps_accepts_legacy_prefix_with_tool():
mcps=["crewai-amp:notion#get_page"],
)
assert agent.mcps == ["crewai-amp:notion#get_page"]
class TestSharedLLMStopWords:
"""Regression tests for shared LLM stop words mutation (issue #5141).
Stop words from one executor must not leak into the shared LLM permanently
or pollute other agents sharing that LLM.
"""
@staticmethod
def _make_executor(llm: LLM, stop_words: list[str]) -> CrewAgentExecutor:
"""Build a CrewAgentExecutor with minimal deps."""
from crewai.agents.tools_handler import ToolsHandler
agent = Agent(role="r", goal="g", backstory="b")
task = Task(description="d", expected_output="o", agent=agent)
return CrewAgentExecutor(
agent=agent,
task=task,
llm=llm,
crew=None,
prompt={"prompt": "p {input} {tool_names} {tools}"},
max_iter=5,
tools=[],
tools_names="",
stop_words=stop_words,
tools_description="",
tools_handler=ToolsHandler(),
)
def test_executor_init_does_not_mutate_shared_llm(self) -> None:
"""Constructing executors must not touch the shared LLM's stop list."""
shared = LLM(model="gpt-4", stop=["Original:"])
original = list(shared.stop)
a = self._make_executor(shared, stop_words=["StopA:"])
b = self._make_executor(shared, stop_words=["StopB:"])
assert shared.stop == original
assert a.llm is shared
assert b.llm is shared
def test_effective_stop_reflects_override_inside_context(self) -> None:
"""Inside the helper, the effective stop list includes the executor's words."""
from crewai.utilities.agent_utils import _llm_stop_words_applied
shared = LLM(model="gpt-4", stop=["Original:"])
executor = self._make_executor(shared, stop_words=["Observation:"])
with _llm_stop_words_applied(shared, executor):
assert set(shared.stop_sequences) == {"Original:", "Observation:"}
assert shared.stop == ["Original:"]
assert shared.stop == ["Original:"]
assert shared.stop_sequences == ["Original:"]
def test_override_cleared_when_context_raises(self) -> None:
"""A failed call must still clear the per-call stop override."""
from crewai.utilities.agent_utils import _llm_stop_words_applied
shared = LLM(model="gpt-4", stop=["Original:"])
executor = self._make_executor(shared, stop_words=["Observation:"])
try:
with _llm_stop_words_applied(shared, executor):
raise RuntimeError("boom")
except RuntimeError:
pass
assert shared.stop == ["Original:"]
assert shared.stop_sequences == ["Original:"]
def test_override_applies_for_post_processing_when_api_lacks_stop_support(
self,
) -> None:
"""Models that lack API-level stop support still need the override.
Native providers (e.g. Azure on gpt-5/o-series) read ``stop_sequences``
in ``_apply_stop_words`` to truncate the response post-hoc even when
``supports_stop_words()`` returns False, so the override must be set
regardless of API-level support. (Issue raised by Cursor Bugbot.)
"""
from unittest.mock import patch
from crewai.utilities.agent_utils import _llm_stop_words_applied
shared = LLM(model="gpt-4", stop=["Original:"])
executor = self._make_executor(shared, stop_words=["Observation:"])
with patch.object(shared, "supports_stop_words", return_value=False):
with _llm_stop_words_applied(shared, executor):
assert set(shared.stop_sequences) == {"Original:", "Observation:"}
assert shared.stop == ["Original:"]
assert shared.stop_sequences == ["Original:"]
def test_concurrent_overrides_do_not_collide(self) -> None:
"""Concurrent agents on a shared LLM must each see their own effective stop."""
import asyncio
from crewai.utilities.agent_utils import _llm_stop_words_applied
shared = LLM(model="gpt-4", stop=["Original:"])
exec_a = self._make_executor(shared, stop_words=["StopA:"])
exec_b = self._make_executor(shared, stop_words=["StopB:"])
async def run(executor: CrewAgentExecutor, expected: str) -> set[str]:
with _llm_stop_words_applied(shared, executor):
await asyncio.sleep(0)
seen = set(shared.stop_sequences)
assert expected in seen
return seen
async def main() -> tuple[set[str], set[str]]:
return await asyncio.gather(
run(exec_a, "StopA:"), run(exec_b, "StopB:")
)
a_seen, b_seen = asyncio.run(main())
assert a_seen == {"Original:", "StopA:"}
assert b_seen == {"Original:", "StopB:"}
assert shared.stop == ["Original:"]
assert shared.stop_sequences == ["Original:"]
def test_override_does_not_leak_to_other_llm_instances(self) -> None:
"""Override for one LLM must not affect another LLM (e.g. function_calling_llm).
Regression for Cursor Bugbot: a global ContextVar would leak the
override to every BaseLLM that reads stop_sequences during the scope.
"""
from crewai.utilities.agent_utils import _llm_stop_words_applied
target = LLM(model="gpt-4", stop=["TargetStop:"])
other = LLM(model="gpt-4", stop=["OtherStop:"])
executor = self._make_executor(target, stop_words=["Observation:"])
with _llm_stop_words_applied(target, executor):
assert set(target.stop_sequences) == {"TargetStop:", "Observation:"}
assert other.stop_sequences == ["OtherStop:"]
assert target.stop_sequences == ["TargetStop:"]
assert other.stop_sequences == ["OtherStop:"]
def test_override_propagates_to_nested_direct_llm_calls(self) -> None:
"""Once invoke wraps with the override, nested direct llm.call sites
(StepExecutor, handle_max_iterations_exceeded) see the merged stops.
Regression for Cursor Bugbot: those direct call sites bypass
get_llm_response, so the override must be set at executor entry, not
only around get_llm_response.
"""
from crewai.utilities.agent_utils import _llm_stop_words_applied
shared = LLM(model="gpt-4", stop=["Original:"])
executor = self._make_executor(shared, stop_words=["Observation:"])
seen: list[set[str]] = []
def nested_direct_call() -> None:
seen.append(set(shared.stop_sequences))
with _llm_stop_words_applied(shared, executor):
nested_direct_call()
assert seen == [{"Original:", "Observation:"}]
assert shared.stop == ["Original:"]
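The behavior these tests pin down, a scoped per-instance stop-word override that restores the original list on exit, can be sketched with a plain context manager. All names here are hypothetical; the real `_llm_stop_words_applied` additionally keeps concurrent executors on a shared LLM isolated, which this simplified sketch omits:

```python
from contextlib import contextmanager

class FakeLLM:
    """Hypothetical stand-in for an LLM that carries stop sequences."""
    def __init__(self, stop):
        self.stop = list(stop)
        self.stop_sequences = list(stop)

@contextmanager
def stop_words_applied(llm, extra_stop_words):
    """Merge executor stop words into this LLM instance for the scope only."""
    original = list(llm.stop_sequences)
    merged = original + [w for w in extra_stop_words if w not in original]
    llm.stop_sequences = merged
    try:
        yield llm
    finally:
        # Restore on exit so the override never leaks past the scope.
        llm.stop_sequences = original

llm = FakeLLM(["Original:"])
with stop_words_applied(llm, ["Observation:"]):
    assert set(llm.stop_sequences) == {"Original:", "Observation:"}
assert llm.stop_sequences == ["Original:"]
```

Because the override is keyed to a single instance, a second LLM object is untouched during the scope, which is exactly what the leak-regression test above asserts.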


@@ -596,6 +596,35 @@ def test_gemini_token_usage_tracking():
assert usage.total_tokens > 0
def test_gemini_thoughts_tokens_counted_in_completion_and_total():
"""Gemini's thoughts_token_count must be folded into completion_tokens so the
tracked total matches the API's total_token_count for thinking models."""
from crewai.llms.providers.gemini.completion import GeminiCompletion
llm = GeminiCompletion(model="gemini-2.0-flash-001")
response = MagicMock()
response.usage_metadata = MagicMock(
prompt_token_count=100,
candidates_token_count=50,
thoughts_token_count=25,
total_token_count=175,
cached_content_token_count=0,
)
usage = llm._extract_token_usage(response)
assert usage["candidates_token_count"] == 50
assert usage["completion_tokens"] == 75
assert usage["reasoning_tokens"] == 25
llm._track_token_usage_internal(usage)
summary = llm.get_token_usage_summary()
assert summary.prompt_tokens == 100
assert summary.completion_tokens == 75
assert summary.total_tokens == 175
assert summary.reasoning_tokens == 25
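The accounting the test checks is plain arithmetic: Gemini reports thinking tokens separately, so completion tokens must be candidates plus thoughts for the tracked total to match the API's `total_token_count`. A minimal sketch (field names follow the test's mock; the helper itself is illustrative, not the shipped `_extract_token_usage`):

```python
def fold_gemini_usage(meta: dict) -> dict:
    """Fold thoughts_token_count into completion_tokens (illustrative helper)."""
    thoughts = meta.get("thoughts_token_count") or 0
    candidates = meta.get("candidates_token_count") or 0
    return {
        "prompt_tokens": meta.get("prompt_token_count", 0),
        # Visible output plus hidden thinking tokens.
        "completion_tokens": candidates + thoughts,
        "reasoning_tokens": thoughts,
        "total_tokens": meta.get("total_token_count", 0),
    }

usage = fold_gemini_usage({
    "prompt_token_count": 100,
    "candidates_token_count": 50,
    "thoughts_token_count": 25,
    "total_token_count": 175,
})
assert usage["completion_tokens"] == 75
# The invariant thinking models previously violated:
assert usage["prompt_tokens"] + usage["completion_tokens"] == usage["total_tokens"]
```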
@pytest.mark.vcr()
def test_gemini_tool_returning_float():
"""


@@ -1,12 +1,14 @@
"""Tests for async task execution."""
import pytest
from pydantic import BaseModel
from unittest.mock import AsyncMock, MagicMock, patch
from crewai.agent import Agent
from crewai.task import Task
from crewai.tasks.task_output import TaskOutput
from crewai.tasks.output_format import OutputFormat
from crewai.utilities.converter import Converter
@pytest.fixture
@@ -383,4 +385,73 @@ class TestAsyncTaskOutput:
assert result.description == "Test description"
assert result.expected_output == "Test expected"
assert result.raw == "Test result"
assert result.agent == "Test Agent"
class _AsyncOnlyOutput(BaseModel):
value: str
class TestAsyncOutputConversion:
"""Regression tests for native-async output conversion (issue #5230).
Ensures `_aexport_output` reaches the LLM via `acall` and never via the
blocking `call` method.
"""
@pytest.mark.asyncio
async def test_aexport_output_uses_acall_not_call(self) -> None:
mock_llm = MagicMock()
mock_llm.supports_function_calling.return_value = False
mock_llm.acall = AsyncMock(return_value='{"value": "ok"}')
mock_llm.call = MagicMock(
side_effect=AssertionError("call() must NOT be invoked from async path")
)
converter = Converter(
llm=mock_llm,
model=_AsyncOnlyOutput,
text="raw",
instructions="convert",
max_attempts=1,
)
result = await converter.ato_pydantic()
assert isinstance(result, _AsyncOnlyOutput)
assert result.value == "ok"
mock_llm.acall.assert_awaited_once()
mock_llm.call.assert_not_called()
@pytest.mark.asyncio
async def test_ato_json_function_calling_does_not_block_event_loop(self) -> None:
"""The function-calling JSON path must run via asyncio.to_thread.
``InternalInstructor`` is sync-only; `ato_json` should offload it so the
event loop is not blocked.
"""
mock_llm = MagicMock()
mock_llm.supports_function_calling.return_value = True
converter = Converter(
llm=mock_llm,
model=_AsyncOnlyOutput,
text="raw",
instructions="convert",
max_attempts=1,
)
sentinel = '{"value": "ok"}'
with patch.object(
converter, "_create_instructor"
) as mock_create, patch(
"crewai.utilities.converter.asyncio.to_thread", new_callable=AsyncMock
) as mock_to_thread:
instructor = MagicMock()
instructor.to_json = MagicMock(return_value=sentinel)
mock_create.return_value = instructor
mock_to_thread.return_value = sentinel
result = await converter.ato_json()
assert result == sentinel
mock_to_thread.assert_awaited_once_with(instructor.to_json)
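The pattern both tests enforce, routing async paths through `acall` and offloading sync-only helpers with `asyncio.to_thread` rather than calling them inline, looks like this in isolation (function names are illustrative):

```python
import asyncio

def sync_only_convert(text: str) -> str:
    """Stand-in for a blocking, sync-only converter such as InternalInstructor."""
    return text.upper()

async def aconvert(text: str) -> str:
    # Running the blocking call in a worker thread keeps the event loop free
    # for other coroutines instead of stalling it for the call's duration.
    return await asyncio.to_thread(sync_only_convert, text)

assert asyncio.run(aconvert("ok")) == "OK"
```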


@@ -1254,6 +1254,119 @@ async def test_async_task_execution_call_count(researcher, writer):
assert mock_execute_sync.call_count == 1
def test_mixed_sync_async_task_outputs_not_dropped(researcher, writer):
"""Sync outputs accumulated before a pending async batch must survive the flush."""
sync1_output = TaskOutput(description="sync1", raw="s1", agent="researcher")
async1_output = TaskOutput(description="async1", raw="a1", agent="researcher")
sync2_output = TaskOutput(description="sync2", raw="s2", agent="writer")
sync1 = Task(description="sync1", expected_output="x", agent=researcher)
async1 = Task(
description="async1",
expected_output="x",
agent=researcher,
async_execution=True,
)
sync2 = Task(description="sync2", expected_output="x", agent=writer)
sync1.output = sync1_output
async1.output = async1_output
sync2.output = sync2_output
crew = Crew(agents=[researcher, writer], tasks=[sync1, async1, sync2])
mock_future = MagicMock(spec=Future)
mock_future.result.return_value = async1_output
with (
patch.object(
Task, "execute_sync", side_effect=[sync1_output, sync2_output]
),
patch.object(Task, "execute_async", return_value=mock_future),
):
result = crew.kickoff()
assert [o.raw for o in result.tasks_output] == ["s1", "a1", "s2"]
@pytest.mark.asyncio
async def test_mixed_sync_async_task_outputs_not_dropped_native_async(
researcher, writer
):
"""Same regression as the sync path, exercised via akickoff (native async)."""
sync1_output = TaskOutput(description="sync1", raw="s1", agent="researcher")
async1_output = TaskOutput(description="async1", raw="a1", agent="researcher")
sync2_output = TaskOutput(description="sync2", raw="s2", agent="writer")
sync1 = Task(description="sync1", expected_output="x", agent=researcher)
async1 = Task(
description="async1",
expected_output="x",
agent=researcher,
async_execution=True,
)
sync2 = Task(description="sync2", expected_output="x", agent=writer)
sync1.output = sync1_output
async1.output = async1_output
sync2.output = sync2_output
crew = Crew(agents=[researcher, writer], tasks=[sync1, async1, sync2])
aexecute_outputs = iter([sync1_output, async1_output, sync2_output])
async def fake_aexecute_sync(*_args: Any, **_kwargs: Any) -> TaskOutput:
return next(aexecute_outputs)
with patch.object(Task, "aexecute_sync", side_effect=fake_aexecute_sync):
result = await crew.akickoff()
assert [o.raw for o in result.tasks_output] == ["s1", "a1", "s2"]
def test_pending_async_outputs_preserved_through_conditional_task(researcher, writer):
"""A conditional task encountered after a pending async batch must not silently drop the async output."""
sync1_output = TaskOutput(description="sync1", raw="s1", agent="researcher")
async1_output = TaskOutput(description="async1", raw="a1", agent="researcher")
def always_skip(_: TaskOutput) -> bool:
return False
sync1 = Task(description="sync1", expected_output="x", agent=researcher)
async1 = Task(
description="async1",
expected_output="x",
agent=researcher,
async_execution=True,
)
conditional = ConditionalTask(
description="conditional",
expected_output="x",
agent=writer,
condition=always_skip,
)
sync1.output = sync1_output
async1.output = async1_output
crew = Crew(
agents=[researcher, writer], tasks=[sync1, async1, conditional]
)
mock_future = MagicMock(spec=Future)
mock_future.result.return_value = async1_output
with (
patch.object(Task, "execute_sync", return_value=sync1_output),
patch.object(Task, "execute_async", return_value=mock_future),
):
result = crew.kickoff()
raws = [o.raw for o in result.tasks_output]
assert raws[:2] == ["s1", "a1"]
assert len(result.tasks_output) == 3
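The regression these tests cover is an accumulation rule in the scheduler: sync outputs gathered before a pending async batch must be extended, not replaced, when the batch is flushed. A toy version of that loop (structure is illustrative, not the shipped crew code):

```python
from concurrent.futures import Future

def collect_outputs(items):
    """Flush pending async futures without dropping earlier sync outputs."""
    outputs, pending = [], []
    for kind, value in items:
        if kind == "async":
            pending.append(value)  # a Future resolved out of band
            continue
        # A sync task flushes the pending batch first, preserving order,
        # then appends its own output after the flushed batch.
        outputs.extend(f.result() for f in pending)
        pending.clear()
        outputs.append(value)
    outputs.extend(f.result() for f in pending)
    return outputs

fut = Future()
fut.set_result("a1")
assert collect_outputs([("sync", "s1"), ("async", fut), ("sync", "s2")]) == ["s1", "a1", "s2"]
```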
@pytest.mark.vcr()
def test_kickoff_for_each_single_input():
"""Tests if kickoff_for_each works with a single input."""
@@ -4519,8 +4632,8 @@ def test_sets_flow_context_when_using_crewbase_pattern_inside_flow():
flow.kickoff()
assert captured_crew is not None
-assert captured_crew._flow_id == flow.execution_id  # type: ignore[attr-defined]
-assert captured_crew._request_id == flow.execution_id  # type: ignore[attr-defined]
+assert captured_crew._flow_id == flow.flow_id  # type: ignore[attr-defined]
+assert captured_crew._request_id == flow.flow_id  # type: ignore[attr-defined]
def test_sets_flow_context_when_outside_flow(researcher, writer):
@@ -4554,8 +4667,8 @@ def test_sets_flow_context_when_inside_flow(researcher, writer):
flow = MyFlow()
result = flow.kickoff()
-assert result._flow_id == flow.execution_id  # type: ignore[attr-defined]
-assert result._request_id == flow.execution_id  # type: ignore[attr-defined]
+assert result._flow_id == flow.flow_id  # type: ignore[attr-defined]
+assert result._request_id == flow.flow_id  # type: ignore[attr-defined]
def test_reset_knowledge_with_no_crew_knowledge(researcher, writer):


@@ -1,127 +0,0 @@
"""Regression tests for ``Flow.execution_id``.
``execution_id`` is the stable tracking identifier for a single flow run.
It must stay independent of ``state.id`` so that consumers passing an
``id`` in ``inputs`` (used for persistence restore) cannot destabilize
the identity used by telemetry, tracing, and external correlation.
"""
from __future__ import annotations
from typing import Any
import pytest
from crewai.flow.flow import Flow, FlowState, start
from crewai.flow.flow_context import current_flow_id, current_flow_request_id
class _CaptureState(FlowState):
captured_flow_id: str = ""
captured_state_id: str = ""
captured_current_flow_id: str = ""
captured_execution_id: str = ""
class _IdentityCaptureFlow(Flow[_CaptureState]):
initial_state = _CaptureState
@start()
def capture(self) -> None:
self.state.captured_flow_id = self.flow_id
self.state.captured_state_id = self.state.id
self.state.captured_current_flow_id = current_flow_id.get() or ""
self.state.captured_execution_id = self.execution_id
def test_execution_id_defaults_to_fresh_uuid_per_instance() -> None:
a = _IdentityCaptureFlow()
b = _IdentityCaptureFlow()
assert a.execution_id
assert b.execution_id
assert a.execution_id != b.execution_id
def test_execution_id_survives_consumer_id_in_inputs() -> None:
flow = _IdentityCaptureFlow()
original_execution_id = flow.execution_id
flow.kickoff(inputs={"id": "consumer-supplied-id"})
assert flow.state.id == "consumer-supplied-id"
assert flow.flow_id == "consumer-supplied-id"
assert flow.execution_id == original_execution_id
assert flow.execution_id != "consumer-supplied-id"
def test_two_runs_with_same_consumer_id_have_distinct_execution_ids() -> None:
flow_a = _IdentityCaptureFlow()
flow_b = _IdentityCaptureFlow()
colliding_id = "shared-consumer-id"
flow_a.kickoff(inputs={"id": colliding_id})
flow_b.kickoff(inputs={"id": colliding_id})
assert flow_a.state.id == colliding_id
assert flow_b.state.id == colliding_id
assert flow_a.execution_id != flow_b.execution_id
def test_execution_id_is_writable() -> None:
flow = _IdentityCaptureFlow()
flow.execution_id = "external-task-id"
assert flow.execution_id == "external-task-id"
flow.kickoff(inputs={"id": "consumer-supplied-id"})
assert flow.execution_id == "external-task-id"
assert flow.state.id == "consumer-supplied-id"
def test_current_flow_id_context_var_matches_execution_id() -> None:
flow = _IdentityCaptureFlow()
flow.execution_id = "external-task-id"
flow.kickoff(inputs={"id": "consumer-supplied-id"})
assert flow.state.captured_current_flow_id == "external-task-id"
assert flow.state.captured_flow_id == "consumer-supplied-id"
assert flow.state.captured_execution_id == "external-task-id"
def test_execution_id_not_included_in_serialized_state() -> None:
flow = _IdentityCaptureFlow()
flow.execution_id = "external-task-id"
flow.kickoff()
dumped = flow.state.model_dump()
assert "execution_id" not in dumped
assert "_execution_id" not in dumped
assert dumped["id"] == flow.state.id
def test_dict_state_flow_also_exposes_stable_execution_id() -> None:
class DictFlow(Flow[dict[str, Any]]):
initial_state = dict # type: ignore[assignment]
@start()
def noop(self) -> None:
pass
flow = DictFlow()
original = flow.execution_id
flow.kickoff(inputs={"id": "consumer-supplied-id"})
assert flow.state["id"] == "consumer-supplied-id"
assert flow.execution_id == original
@pytest.fixture(autouse=True)
def _reset_flow_context_vars():
yield
for var in (current_flow_id, current_flow_request_id):
try:
var.set(None)
except LookupError:
# ContextVar was never set in this context; nothing to reset.
pass


@@ -251,67 +251,240 @@ def test_persistence_with_base_model(tmp_path):
assert isinstance(flow.state._unwrap(), State)
def test_persist_custom_key_with_pydantic_state(tmp_path):
"""`@persist(key=...)` uses the named attribute on a Pydantic state."""
def test_fork_with_restore_from_state_id(tmp_path):
"""Fork: restore_from_state_id hydrates state from source flow_uuid; new run gets a
fresh state.id; source's history is preserved (the fork's @persist writes go under
the new state.id, not the source's)."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
class KeyedState(FlowState):
conversation_id: str = "conv-42"
message: str = ""
class KeyedFlow(Flow[KeyedState]):
class ForkableFlow(Flow[TestState]):
@start()
@persist(persistence, key="conversation_id")
def init_step(self):
self.state.message = "hello"
@persist(persistence)
def step(self):
self.state.counter += 1
flow = KeyedFlow(persistence=persistence)
flow.kickoff()
# Run 1: build up source state. counter goes 0 -> 1.
flow1 = ForkableFlow(persistence=persistence)
flow1.kickoff()
source_uuid = flow1.state.id
assert flow1.state.counter == 1
saved_state = persistence.load_state("conv-42")
assert saved_state is not None
assert saved_state["message"] == "hello"
# The default `state.id` lookup must NOT have been used as the key.
assert persistence.load_state(flow.state.id) is None
# Resume on the same uuid bumps counter to 2 in the SAME flow_uuid history.
flow1b = ForkableFlow(persistence=persistence)
flow1b.kickoff(inputs={"id": source_uuid})
assert flow1b.state.counter == 2
assert persistence.load_state(source_uuid)["counter"] == 2
# Fork: hydrate from source, but persist under a fresh state.id.
flow2 = ForkableFlow(persistence=persistence)
flow2.kickoff(restore_from_state_id=source_uuid)
# Fork has a different state.id from the source.
assert flow2.state.id != source_uuid
# Hydrated from source's latest snapshot (counter=2), then incremented to 3.
assert flow2.state.counter == 3
# Source's history is unchanged after the fork.
assert persistence.load_state(source_uuid)["counter"] == 2
# Fork's writes landed under its own state.id.
assert persistence.load_state(flow2.state.id)["counter"] == 3
def test_persist_custom_key_with_dict_state(tmp_path):
"""`@persist(key=...)` uses the named key on a dict state."""
def test_fork_with_pinned_state_id(tmp_path):
"""Fork into a pinned state.id (inputs.id supplied alongside restore_from_state_id):
the new run uses inputs.id as state.id and hydrates from restore_from_state_id."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
class DictKeyedFlow(Flow[Dict[str, str]]):
initial_state = dict()
class PinnableFlow(Flow[TestState]):
@start()
@persist(persistence, key="conversation_id")
def init_step(self):
self.state["conversation_id"] = "conv-dict-7"
self.state["message"] = "hi from dict"
@persist(persistence)
def step(self):
self.state.counter += 1
flow = DictKeyedFlow(persistence=persistence)
flow.kickoff()
flow1 = PinnableFlow(persistence=persistence)
flow1.kickoff()
source_uuid = flow1.state.id
assert flow1.state.counter == 1
saved_state = persistence.load_state("conv-dict-7")
assert saved_state is not None
assert saved_state["message"] == "hi from dict"
pinned_uuid = "pinned-fork-uuid-1234"
flow2 = PinnableFlow(persistence=persistence)
flow2.kickoff(
inputs={"id": pinned_uuid},
restore_from_state_id=source_uuid,
)
# state.id pinned to inputs.id, NOT the source uuid.
assert flow2.state.id == pinned_uuid
# Hydrated from source: counter started at 1, step incremented to 2.
assert flow2.state.counter == 2
# Source's history is unchanged.
assert persistence.load_state(source_uuid)["counter"] == 1
# Fork's writes are under the pinned uuid.
assert persistence.load_state(pinned_uuid)["counter"] == 2
def test_persist_custom_key_missing_raises(tmp_path):
"""A missing/falsy custom key must raise a clear ValueError."""
def test_restore_from_state_id_not_found_silent_fallback(tmp_path):
"""Lookup miss on restore_from_state_id silently falls through to default behavior."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
class MissingKeyFlow(Flow[Dict[str, str]]):
initial_state = dict()
class FallbackFlow(Flow[TestState]):
@start()
@persist(persistence, key="conversation_id")
def init_step(self):
# Intentionally do NOT set "conversation_id" on state.
self.state["message"] = "no key here"
@persist(persistence)
def step(self):
self.state.counter += 1
flow = MissingKeyFlow(persistence=persistence)
with pytest.raises(ValueError, match="conversation_id"):
flow.kickoff()
flow = FallbackFlow(persistence=persistence)
# No source UUID exists — should not raise.
flow.kickoff(restore_from_state_id="no-such-uuid")
# Default state path: counter starts at 0 and step increments to 1.
assert flow.state.counter == 1
# state.id is the auto-generated one, NOT the missing source.
assert flow.state.id != "no-such-uuid"
def test_restore_from_state_id_none_is_no_op(tmp_path):
"""restore_from_state_id=None (default) preserves baseline kickoff behavior."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
class BaselineFlow(Flow[TestState]):
@start()
@persist(persistence)
def step(self):
self.state.counter += 1
flow = BaselineFlow(persistence=persistence)
flow.kickoff(restore_from_state_id=None)
assert flow.state.counter == 1
def test_fork_conflict_with_from_checkpoint_raises():
"""Passing both from_checkpoint and restore_from_state_id raises ValueError, naming
both parameters."""
from crewai.state import CheckpointConfig
class ConflictFlow(Flow[TestState]):
@start()
def step(self):
pass
flow = ConflictFlow()
with pytest.raises(ValueError) as excinfo:
flow.kickoff(
from_checkpoint=CheckpointConfig(),
restore_from_state_id="some-uuid",
)
msg = str(excinfo.value)
assert "from_checkpoint" in msg
assert "restore_from_state_id" in msg
@pytest.mark.asyncio
async def test_fork_via_kickoff_async(tmp_path):
"""kickoff_async honors restore_from_state_id: hydrates from source, mints fresh
state.id, persists under the new id, source history preserved."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
class AsyncForkableFlow(Flow[TestState]):
@start()
@persist(persistence)
def step(self):
self.state.counter += 1
flow1 = AsyncForkableFlow(persistence=persistence)
await flow1.kickoff_async()
source_uuid = flow1.state.id
assert flow1.state.counter == 1
flow2 = AsyncForkableFlow(persistence=persistence)
await flow2.kickoff_async(restore_from_state_id=source_uuid)
assert flow2.state.id != source_uuid
assert flow2.state.counter == 2
assert persistence.load_state(source_uuid)["counter"] == 1
assert persistence.load_state(flow2.state.id)["counter"] == 2
@pytest.mark.asyncio
async def test_fork_via_akickoff(tmp_path):
"""akickoff is the public async alias and must accept restore_from_state_id with
the same semantics as kickoff_async."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
class AkickoffForkableFlow(Flow[TestState]):
@start()
@persist(persistence)
def step(self):
self.state.counter += 1
flow1 = AkickoffForkableFlow(persistence=persistence)
await flow1.akickoff()
source_uuid = flow1.state.id
assert flow1.state.counter == 1
flow2 = AkickoffForkableFlow(persistence=persistence)
await flow2.akickoff(restore_from_state_id=source_uuid)
assert flow2.state.id != source_uuid
assert flow2.state.counter == 2
assert persistence.load_state(source_uuid)["counter"] == 1
assert persistence.load_state(flow2.state.id)["counter"] == 2
@pytest.mark.asyncio
async def test_akickoff_pinned_fork(tmp_path):
"""akickoff with both inputs.id and restore_from_state_id pins state.id while
hydrating from the source."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
class PinnableAsyncFlow(Flow[TestState]):
@start()
@persist(persistence)
def step(self):
self.state.counter += 1
flow1 = PinnableAsyncFlow(persistence=persistence)
await flow1.akickoff()
source_uuid = flow1.state.id
pinned_uuid = "pinned-akickoff-fork-uuid"
flow2 = PinnableAsyncFlow(persistence=persistence)
await flow2.akickoff(
inputs={"id": pinned_uuid},
restore_from_state_id=source_uuid,
)
assert flow2.state.id == pinned_uuid
assert flow2.state.counter == 2
assert persistence.load_state(source_uuid)["counter"] == 1
assert persistence.load_state(pinned_uuid)["counter"] == 2
@pytest.mark.asyncio
async def test_akickoff_fork_conflict_with_from_checkpoint_raises():
"""akickoff must raise the same conflict ValueError as kickoff/kickoff_async when
both from_checkpoint and restore_from_state_id are set."""
from crewai.state import CheckpointConfig
class AsyncConflictFlow(Flow[TestState]):
@start()
def step(self):
pass
flow = AsyncConflictFlow()
with pytest.raises(ValueError) as excinfo:
await flow.akickoff(
from_checkpoint=CheckpointConfig(),
restore_from_state_id="some-uuid",
)
msg = str(excinfo.value)
assert "from_checkpoint" in msg
assert "restore_from_state_id" in msg
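The fork semantics exercised above reduce to one rule: hydrate from the source's latest snapshot, but write every new snapshot under a fresh (or explicitly pinned) state id, with a silent fallback to a clean state on a lookup miss. A toy model with an in-memory store (all names here are hypothetical stand-ins, not the Flow API):

```python
import uuid

class InMemoryPersistence:
    """Toy stand-in for SQLiteFlowPersistence, keyed by state id."""
    def __init__(self):
        self._store = {}
    def save_state(self, state):
        self._store[state["id"]] = dict(state)
    def load_state(self, state_id):
        snapshot = self._store.get(state_id)
        return dict(snapshot) if snapshot else None

def kickoff(p, restore_from_state_id=None, pinned_id=None):
    source = p.load_state(restore_from_state_id) if restore_from_state_id else None
    state = source if source else {"counter": 0}  # silent fallback on a miss
    state["id"] = pinned_id or str(uuid.uuid4())  # a fork never reuses the source id
    state["counter"] += 1                          # the flow's single step
    p.save_state(state)
    return state

p = InMemoryPersistence()
run1 = kickoff(p)                                    # counter: 0 -> 1
fork = kickoff(p, restore_from_state_id=run1["id"])  # hydrates, counter: 1 -> 2
assert fork["id"] != run1["id"]
assert fork["counter"] == 2
assert p.load_state(run1["id"])["counter"] == 1      # source history untouched
```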


@@ -177,6 +177,7 @@ def test_llm_passes_additional_params():
# Create mocks for response structure
mock_message = MagicMock()
mock_message.content = "Test response"
mock_message.tool_calls = None
mock_choice = MagicMock()
mock_choice.message = mock_message
mock_response = MagicMock()
@@ -1146,3 +1147,52 @@ async def test_usage_info_streaming_with_acall():
assert llm._token_usage["total_tokens"] > 0
assert len(result) > 0
def _build_response_with_text_and_tool_calls():
"""Mimic a litellm ModelResponse that contains both content and tool_calls."""
from litellm.types.utils import ChatCompletionMessageToolCall, Function
response_message = MagicMock()
response_message.content = "I will search for the given query."
response_message.tool_calls = [
ChatCompletionMessageToolCall(
id="call_123",
type="function",
function=Function(name="search", arguments='{"q": "x"}'),
)
]
choice = MagicMock(message=response_message)
response = MagicMock(choices=[choice], model_extra=None)
return response
def test_non_streaming_returns_tool_calls_when_text_also_present():
"""A response with both text and tool_calls must not drop the tool_calls
when available_functions is None (executor-managed tool execution path).
"""
llm = LLM(model="gpt-4o-mini", is_litellm=True)
response = _build_response_with_text_and_tool_calls()
with patch("crewai.llm.litellm.completion", return_value=response):
result = llm.call("anything", available_functions=None)
assert isinstance(result, list)
assert len(result) == 1
assert result[0].function.name == "search"
@pytest.mark.asyncio
async def test_non_streaming_async_returns_tool_calls_when_text_also_present():
llm = LLM(model="openai/gpt-4o-mini", is_litellm=True, stream=False)
response = _build_response_with_text_and_tool_calls()
async def _ret(*args, **kwargs):
return response
with patch("crewai.llm.litellm.acompletion", side_effect=_ret):
result = await llm.acall("anything", available_functions=None)
assert isinstance(result, list)
assert len(result) == 1
assert result[0].function.name == "search"
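The fix these two tests lock in is a small dispatch rule: when the provider returns both assistant text and `tool_calls`, and no `available_functions` were passed (the executor-managed path), the tool calls must win. Sketched outside litellm with plain namespaces (the helper is illustrative and omits the branch where the LLM executes tools itself):

```python
from types import SimpleNamespace

def extract_llm_result(message, available_functions=None):
    """Return tool_calls for executor-managed execution; text otherwise (sketch)."""
    tool_calls = getattr(message, "tool_calls", None)
    if tool_calls and available_functions is None:
        # Executor-managed path: surface the calls, never drop them for the text.
        return tool_calls
    return message.content

msg = SimpleNamespace(
    content="I will search for the given query.",
    tool_calls=[SimpleNamespace(function=SimpleNamespace(name="search"))],
)
result = extract_llm_result(msg)
assert isinstance(result, list)
assert result[0].function.name == "search"
assert extract_llm_result(SimpleNamespace(content="plain", tool_calls=None)) == "plain"
```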


@@ -690,6 +690,27 @@ def test_multiple_guardrails_with_pydantic_output():
assert parsed["processed"] is True
def test_export_output_accepts_pydantic_input():
"""Regression test for #5458: _export_output must not crash with TypeError
when called with a Pydantic instance (e.g. when an upstream caller passes
an already-converted model from a context task)."""
from pydantic import BaseModel
class StructuredResult(BaseModel):
value: str
task = create_smart_task(
description="Test pydantic export",
expected_output="Structured output",
output_pydantic=StructuredResult,
)
instance = StructuredResult(value="ok")
pydantic_output, json_output = task._export_output(instance)
assert pydantic_output is instance
assert json_output is None
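The regression boils down to an early pass-through guard: if the result is already an instance of the expected model, return it untouched instead of treating it as a raw string. A dataclass-based sketch of that guard (the real `_export_output` works on Pydantic models and also produces JSON output):

```python
from dataclasses import dataclass

@dataclass
class StructuredResult:
    value: str

def export_output(result, output_model=StructuredResult):
    """Return (model_output, json_output); pass model instances through (sketch)."""
    if isinstance(result, output_model):
        # Already converted upstream (e.g. by a context task): no re-parsing.
        return result, None
    # ...otherwise the raw string would be parsed/converted here (omitted).
    return None, None

instance = StructuredResult(value="ok")
model_out, json_out = export_output(instance)
assert model_out is instance
assert json_out is None
```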
def test_guardrails_vs_single_guardrail_mutual_exclusion():
"""Test that guardrails list nullifies single guardrail."""


@@ -17,6 +17,8 @@ from crewai.utilities.agent_utils import (
_format_messages_for_summary,
_split_messages_into_chunks,
convert_tools_to_openai_schema,
execute_single_native_tool_call,
NativeToolCallResult,
parse_tool_call_args,
summarize_messages,
)
@@ -1033,3 +1035,91 @@ class TestParseToolCallArgs:
_, error = parse_tool_call_args("{bad json}", "tool", "call_7")
assert error is not None
assert set(error.keys()) == {"call_id", "func_name", "result", "from_cache", "original_tool"}
class TestExecuteSingleNativeToolCall:
"""Tests for execute_single_native_tool_call."""
def test_result_as_answer_false_on_tool_error(self) -> None:
"""When a tool with result_as_answer=True raises, result_as_answer must be False.
Regression test for https://github.com/crewAIInc/crewAI/issues/5156
"""
from unittest.mock import MagicMock
class FailingTool(BaseTool):
name: str = "failing_tool"
description: str = "A tool that always fails"
result_as_answer: bool = True
def _run(self, **kwargs: Any) -> str:
raise RuntimeError("intentional failure")
tool = FailingTool()
tool_call = MagicMock()
tool_call.id = "call_1"
tool_call.function.name = "failing_tool"
tool_call.function.arguments = "{}"
result = execute_single_native_tool_call(
tool_call,
available_functions={"failing_tool": tool._run},
original_tools=[tool],
structured_tools=None,
tools_handler=None,
agent=None,
task=None,
crew=None,
event_source=MagicMock(),
printer=None,
verbose=False,
)
assert isinstance(result, NativeToolCallResult)
assert result.result_as_answer is False
assert "Error executing tool" in result.result
def test_result_as_answer_false_when_hook_blocks(self) -> None:
"""When a before-hook blocks a tool with result_as_answer=True, result_as_answer must be False."""
from unittest.mock import MagicMock
from crewai.hooks.tool_hooks import (
clear_before_tool_call_hooks,
register_before_tool_call_hook,
)
class BlockedTool(BaseTool):
name: str = "blocked_tool"
description: str = "A tool whose execution will be blocked by a hook"
result_as_answer: bool = True
def _run(self, **kwargs: Any) -> str:
return "should not run"
tool = BlockedTool()
tool_call = MagicMock()
tool_call.id = "call_1"
tool_call.function.name = "blocked_tool"
tool_call.function.arguments = "{}"
register_before_tool_call_hook(lambda _ctx: False)
try:
result = execute_single_native_tool_call(
tool_call,
available_functions={"blocked_tool": tool._run},
original_tools=[tool],
structured_tools=None,
tools_handler=None,
agent=None,
task=None,
crew=None,
event_source=MagicMock(),
printer=None,
verbose=False,
)
finally:
clear_before_tool_call_hooks()
assert isinstance(result, NativeToolCallResult)
assert result.result_as_answer is False
assert "blocked by hook" in result.result
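Both cases enforce the same invariant: `result_as_answer=True` may only short-circuit the agent loop when the tool actually produced a result; errors and hook blocks force it back to False. A minimal sketch of that control flow (names and message strings are illustrative):

```python
def run_tool_call(tool_fn, result_as_answer=True, before_hooks=()):
    """Never promote an error or a hook-block message to the final answer."""
    if any(hook() is False for hook in before_hooks):
        return {"result": "Tool execution blocked by hook", "result_as_answer": False}
    try:
        return {"result": tool_fn(), "result_as_answer": result_as_answer}
    except Exception as exc:
        # The error text goes back to the agent as an observation, not an answer.
        return {"result": f"Error executing tool: {exc}", "result_as_answer": False}

def failing_tool():
    raise RuntimeError("intentional failure")

outcome = run_tool_call(failing_tool)
assert outcome["result_as_answer"] is False
assert "Error executing tool" in outcome["result"]

blocked = run_tool_call(lambda: "should not run", before_hooks=[lambda: False])
assert blocked["result_as_answer"] is False
```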


@@ -87,6 +87,31 @@ def test_convert_to_model_with_no_model() -> None:
assert output == "Plain text"
def test_convert_to_model_with_basemodel_input_matching_pydantic() -> None:
instance = SimpleModel(name="John", age=30)
output = convert_to_model(instance, SimpleModel, None, None)
assert output is instance
def test_convert_to_model_with_basemodel_input_matching_json() -> None:
instance = SimpleModel(name="John", age=30)
output = convert_to_model(instance, None, SimpleModel, None)
assert output == {"name": "John", "age": 30}
def test_convert_to_model_with_basemodel_input_different_class() -> None:
class OtherModel(BaseModel):
name: str
age: int
extra: str = "default"
instance = OtherModel(name="John", age=30, extra="ignored")
output = convert_to_model(instance, SimpleModel, None, None)
assert isinstance(output, SimpleModel)
assert output.name == "John"
assert output.age == 30
def test_convert_to_model_with_special_characters() -> None:
json_string_test = """
{
@@ -177,6 +202,34 @@ def test_handle_partial_json_with_invalid_partial(mock_agent: Mock) -> None:
assert output == "Converted result"
def test_handle_partial_json_accepts_literal_control_chars_in_strings() -> None:
"""JSON values with literal newlines/tabs (lenient parsing) must still
validate, matching the prior model_validate_json behavior.
"""
result = 'prefix {"name": "Charlie\nDoe", "age": 35} suffix'
output = handle_partial_json(result, SimpleModel, False, None)
assert isinstance(output, SimpleModel)
assert output.name == "Charlie\nDoe"
assert output.age == 35
def test_handle_partial_json_falls_through_for_non_json_curly_blocks(
mock_agent: Mock,
) -> None:
"""A regex match that is not actually JSON (e.g. GraphQL) must fall through
to convert_with_instructions instead of raising a ValidationError.
"""
result = (
"type Query {\n countries: [Country]\n}\n\n"
"type Country {\n code: String\n name: String\n}"
)
with patch("crewai.utilities.converter.convert_with_instructions") as mock_convert:
mock_convert.return_value = "Converted result"
output = handle_partial_json(result, SimpleModel, False, mock_agent)
assert output == "Converted result"
mock_convert.assert_called_once()
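Both behaviors, lenient parsing of literal control characters and falling through on non-JSON curly blocks, can be demonstrated with the stdlib alone: `json.loads(..., strict=False)` is what permits raw newlines inside string values. The helper below is a sketch of the idea, not the shipped `handle_partial_json`:

```python
import json
import re

def extract_json_block(text):
    """Parse the first {...} span leniently; return None to signal fall-through."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        return None
    try:
        # strict=False accepts literal newlines/tabs inside string values.
        return json.loads(match.group(0), strict=False)
    except json.JSONDecodeError:
        return None  # e.g. a GraphQL schema; caller falls back to the LLM converter

assert extract_json_block('prefix {"name": "Charlie\nDoe", "age": 35} suffix') == {
    "name": "Charlie\nDoe",
    "age": 35,
}
assert extract_json_block("type Query {\n countries: [Country]\n}") is None
```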
# Tests for convert_with_instructions
@patch("crewai.utilities.converter.create_converter")
@patch("crewai.utilities.converter.get_conversion_instructions")


@@ -11,6 +11,8 @@ Installed automatically via the workspace (`uv sync`). Requires:
- `ENTERPRISE_REPO` env var — GitHub repo for enterprise releases
- `ENTERPRISE_VERSION_DIRS` env var — comma-separated directories to bump in the enterprise repo
- `ENTERPRISE_CREWAI_DEP_PATH` env var — path to the pyproject.toml with the `crewai[tools]` pin in the enterprise repo
- `ENTERPRISE_WORKFLOW_PATHS` env var — comma-separated workflow file paths in the enterprise repo whose `crewai[extras]==<version>` pins should be rewritten on each release (e.g. `.github/workflows/tests.yml`)
- `ENTERPRISE_EXTRA_PACKAGES` env var — comma-separated packages to also pin in enterprise pyproject files, in addition to `crewai` / `crewai[extras]`
## Commands


@@ -1,3 +1,3 @@
"""CrewAI development tools."""
__version__ = "1.14.4a1"
__version__ = "1.14.5a2"


@@ -1207,7 +1207,12 @@ _ENTERPRISE_WORKFLOW_PATHS: Final[tuple[str, ...]] = tuple(
def _update_enterprise_crewai_dep(pyproject_path: Path, version: str) -> bool:
-"""Update the crewai[tools] pin in an enterprise pyproject.toml.
+"""Update crewai pins in an enterprise pyproject.toml.
Pins ``crewai`` / ``crewai[extras]`` via ``_pin_crewai_deps`` and
additionally pins any dashed ``crewai-*`` packages configured via
``ENTERPRISE_EXTRA_PACKAGES`` (e.g. ``crewai-enterprise``), which
``_pin_crewai_deps`` does not cover.
Args:
pyproject_path: Path to the pyproject.toml file.
@@ -1219,20 +1224,57 @@ def _update_enterprise_crewai_dep(pyproject_path: Path, version: str) -> bool:
if not pyproject_path.exists():
return False
changed = False
content = pyproject_path.read_text()
new_content = _pin_crewai_deps(content, version)
if new_content != content:
pyproject_path.write_text(new_content)
return True
return False
changed = True
if update_pyproject_dependencies(
pyproject_path, version, extra_packages=list(_ENTERPRISE_EXTRA_PACKAGES)
):
changed = True
return changed
+
+
+def _update_workflow_crewai_pins(workflow_path: Path, version: str) -> bool:
+    """Rewrite ``crewai[extras]==<old>`` pins in a single workflow file.
+
+    Operates line-by-line on the raw file via ``_repin_crewai_install``
+    so only version numbers change and all formatting is preserved.
+
+    Args:
+        workflow_path: Path to a workflow YAML file.
+        version: New crewai version string.
+
+    Returns:
+        True if the file was modified.
+    """
+    if not workflow_path.exists():
+        return False
+    raw = workflow_path.read_text()
+    lines = raw.splitlines(keepends=True)
+    changed = False
+    for i, line in enumerate(lines):
+        if "crewai[" not in line:
+            continue
+        new_line = _repin_crewai_install(line, version)
+        if new_line != line:
+            lines[i] = new_line
+            changed = True
+    if not changed:
+        return False
+    workflow_path.write_text("".join(lines))
+    return True
 
 
 def _update_enterprise_workflows(repo_dir: Path, version: str) -> list[Path]:
     """Update crewai version pins in enterprise CI workflow files.
 
-    Applies ``_repin_crewai_install`` line-by-line on the raw file so
-    only version numbers change and all formatting is preserved.
-
     Args:
         repo_dir: Root of the cloned enterprise repo.
         version: New crewai version string.
@@ -1243,29 +1285,31 @@ def _update_enterprise_workflows(repo_dir: Path, version: str) -> list[Path]:
     updated: list[Path] = []
     for rel_path in _ENTERPRISE_WORKFLOW_PATHS:
         workflow = repo_dir / rel_path
-        if not workflow.exists():
-            continue
-        raw = workflow.read_text()
-        lines = raw.splitlines(keepends=True)
-        changed = False
-        for i, line in enumerate(lines):
-            if "crewai[" not in line:
-                continue
-            new_line = _repin_crewai_install(line, version)
-            if new_line != line:
-                lines[i] = new_line
-                changed = True
-        if changed:
-            new_raw = "".join(lines)
-        else:
-            new_raw = raw
-        if new_raw != raw:
-            workflow.write_text(new_raw)
+        if _update_workflow_crewai_pins(workflow, version):
             updated.append(workflow)
     return updated
+
+
+def _update_repo_workflows_crewai_pins(repo_dir: Path, version: str) -> list[Path]:
+    """Update crewai pins across all GitHub workflow files in a repo.
+
+    Args:
+        repo_dir: Root of the cloned repo.
+        version: New crewai version string.
+
+    Returns:
+        List of workflow paths that were modified.
+    """
+    workflows_dir = repo_dir / ".github" / "workflows"
+    if not workflows_dir.exists():
+        return []
+    updated: list[Path] = []
+    for workflow in sorted(workflows_dir.iterdir()):
+        if workflow.suffix not in (".yml", ".yaml"):
+            continue
+        if _update_workflow_crewai_pins(workflow, version):
+            updated.append(workflow)
+    return updated
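The `_repin_crewai_install` helper that does the actual rewriting is not shown in this diff; a plausible sketch of what such a line-level repin could look like (the regex and function name below are illustrative assumptions, not the repo's implementation):

```python
import re

# Matches crewai[<extras>]==<version> anywhere in a line, e.g. inside
# `pip install "crewai[extras]==1.14.4a1"`; only the version part is replaced,
# so surrounding quoting and indentation are preserved.
_CREWAI_PIN = re.compile(r"(crewai\[[^\]]+\]==)([0-9A-Za-z.\-]+)")


def repin_crewai_install(line: str, version: str) -> str:
    """Return `line` with any crewai[...]==X pin rewritten to `version`."""
    return _CREWAI_PIN.sub(lambda m: m.group(1) + version, line)


line = '        run: pip install "crewai[extras]==1.14.4a1"'
print(repin_crewai_install(line, "1.14.5a2"))
# → '        run: pip install "crewai[extras]==1.14.5a2"' (quotes not printed)
```

Operating on raw lines like this, rather than parsing and re-serializing the YAML, is what lets the workflow files keep their exact formatting.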
@@ -1314,8 +1358,10 @@ _PYPI_POLL_TIMEOUT: Final[int] = 600
 def _update_deployment_test_repo(version: str, is_prerelease: bool) -> None:
     """Update the deployment test repo to pin the new crewai version.
 
-    Clones the repo, updates the crewai[tools] pin in pyproject.toml,
-    regenerates the lockfile, commits, and pushes directly to main.
+    Clones the repo, updates the crewai[tools] pin in pyproject.toml
+    and any crewai[extras] pins in .github/workflows, regenerates the
+    lockfile, commits to a branch, pushes, opens a PR against main,
+    then polls until the PR is merged (or closed).
 
     Args:
         version: New crewai version string.
@@ -1333,50 +1379,91 @@ def _update_deployment_test_repo(version: str, is_prerelease: bool) -> None:
     pyproject = repo_dir / "pyproject.toml"
     content = pyproject.read_text()
     new_content = _pin_crewai_deps(content, version)
-    if new_content == content:
-        console.print(
-            "[yellow]Warning:[/yellow] No crewai[tools] pin found to update"
-        )
-        return
-    pyproject.write_text(new_content)
-    console.print(f"[green]✓[/green] Updated crewai[tools] pin to {version}")
+    pyproject_changed = new_content != content
+    if pyproject_changed:
+        pyproject.write_text(new_content)
+        console.print(f"[green]✓[/green] Updated crewai[tools] pin to {version}")
+    else:
+        console.print(
+            "[yellow]Warning:[/yellow] No crewai[tools] pin found to update"
+        )
+    updated_workflows = _update_repo_workflows_crewai_pins(repo_dir, version)
+    for wf in updated_workflows:
+        console.print(
+            f"[green]✓[/green] Updated crewai pin in {wf.relative_to(repo_dir)}"
+        )
+    if not pyproject_changed and not updated_workflows:
+        console.print("[yellow]Nothing to update; skipping commit and PR.[/yellow]")
+        return
+    paths_to_add: list[str] = [
+        str(wf.relative_to(repo_dir)) for wf in updated_workflows
+    ]
-    lock_cmd = [
-        "uv",
-        "lock",
-        "--refresh-package",
-        "crewai",
-        "--refresh-package",
-        "crewai-tools",
-    ]
-    if is_prerelease:
-        lock_cmd.append("--prerelease=allow")
-    max_retries = 10
-    for attempt in range(1, max_retries + 1):
-        try:
-            run_command(lock_cmd, cwd=repo_dir)
-            break
-        except subprocess.CalledProcessError:
-            if attempt == max_retries:
-                console.print(
-                    f"[red]Error:[/red] uv lock failed after {max_retries} attempts"
-                )
-                raise
-            console.print(
-                f"[yellow]uv lock failed (attempt {attempt}/{max_retries}),"
-                f" retrying in {_PYPI_POLL_INTERVAL}s...[/yellow]"
-            )
-            time.sleep(_PYPI_POLL_INTERVAL)
-    console.print("[green]✓[/green] Lockfile updated")
-    run_command(["git", "add", "pyproject.toml", "uv.lock"], cwd=repo_dir)
+    if pyproject_changed:
+        lock_cmd = [
+            "uv",
+            "lock",
+            "--refresh-package",
+            "crewai",
+            "--refresh-package",
+            "crewai-tools",
+        ]
+        if is_prerelease:
+            lock_cmd.append("--prerelease=allow")
+        max_retries = 10
+        for attempt in range(1, max_retries + 1):
+            try:
+                run_command(lock_cmd, cwd=repo_dir)
+                break
+            except subprocess.CalledProcessError:
+                if attempt == max_retries:
+                    console.print(
+                        f"[red]Error:[/red] uv lock failed after {max_retries} attempts"
+                    )
+                    raise
+                console.print(
+                    f"[yellow]uv lock failed (attempt {attempt}/{max_retries}),"
+                    f" retrying in {_PYPI_POLL_INTERVAL}s...[/yellow]"
+                )
+                time.sleep(_PYPI_POLL_INTERVAL)
+        console.print("[green]✓[/green] Lockfile updated")
+        paths_to_add.extend(["pyproject.toml", "uv.lock"])
+    branch = f"chore/bump-crewai-v{version}"
+    create_or_reset_branch(branch, cwd=repo_dir)
+    run_command(["git", "add", *paths_to_add], cwd=repo_dir)
     run_command(
         ["git", "commit", "-m", f"chore: bump crewai to {version}"],
         cwd=repo_dir,
     )
-    run_command(["git", "push"], cwd=repo_dir)
-    console.print(f"[green]✓[/green] Pushed to {_DEPLOYMENT_TEST_REPO}")
+    run_command(["git", "push", "-u", "origin", branch], cwd=repo_dir)
+    console.print(f"[green]✓[/green] Pushed branch {branch}")
+    pr_url = run_command(
+        [
+            "gh",
+            "pr",
+            "create",
+            "--base",
+            "main",
+            "--head",
+            branch,
+            "--title",
+            f"chore: bump crewai to {version}",
+            "--body",
+            "",
+        ],
+        cwd=repo_dir,
+    )
+    console.print(f"[green]✓[/green] Opened PR on {_DEPLOYMENT_TEST_REPO}")
+    console.print(f"[cyan]PR URL:[/cyan] {pr_url.strip()}")
+    _wait_for_pr_merged(branch, repo_dir)
@@ -1408,6 +1495,37 @@ def _wait_for_pypi(package: str, version: str) -> None:
     sys.exit(1)
+
+
+_PR_MERGE_POLL_INTERVAL: Final[int] = 30
+
+
+def _wait_for_pr_merged(branch: str, cwd: Path) -> None:
+    """Poll a PR until it is merged, exiting on close-without-merge.
+
+    Args:
+        branch: Head branch name of the PR to watch.
+        cwd: Working directory of the cloned repo (so ``gh`` resolves
+            the right remote).
+
+    Raises:
+        SystemExit: If the PR is closed without being merged.
+    """
+    console.print(f"[cyan]Waiting for PR on branch {branch} to be merged...[/cyan]")
+    while True:
+        state = run_command(
+            ["gh", "pr", "view", branch, "--json", "state", "--jq", ".state"],
+            cwd=cwd,
+        ).strip()
+        if state == "MERGED":
+            console.print(f"[green]✓[/green] PR for {branch} merged")
+            return
+        if state == "CLOSED":
+            console.print(
+                f"[red]Error:[/red] PR for {branch} was closed without merging"
+            )
+            sys.exit(1)
+        time.sleep(_PR_MERGE_POLL_INTERVAL)
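The merge-wait loop reduces to a three-state poll. A minimal, dependency-free sketch of the same pattern, with the `gh` call abstracted into a callable so the logic can be tested without GitHub (the names here are illustrative, not the repo's API):

```python
import time
from typing import Callable


def wait_for_state(
    get_state: Callable[[], str],
    poll_interval: float = 0.0,
) -> bool:
    """Poll until the PR state settles: True if MERGED, False if CLOSED."""
    while True:
        state = get_state()
        if state == "MERGED":
            return True
        if state == "CLOSED":
            return False
        time.sleep(poll_interval)  # OPEN (or unknown): keep polling


# Simulate a PR that stays open for two polls, then merges.
states = iter(["OPEN", "OPEN", "MERGED"])
print(wait_for_state(lambda: next(states)))
# → True
```

Note the loop has no timeout: like `_wait_for_pr_merged`, it relies on the PR eventually reaching a terminal state (or the CI job's own timeout killing the process).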
 
 
 def _release_enterprise(version: str, is_prerelease: bool, dry_run: bool) -> None:
     """Clone the enterprise repo, bump versions, and create a release PR.

uv.lock (generated)

@@ -1626,7 +1626,7 @@ requires-dist = [
{ name = "e2b-code-interpreter", marker = "extra == 'e2b'", specifier = "~=2.6.0" },
{ name = "exa-py", marker = "extra == 'exa-py'", specifier = ">=1.8.7" },
{ name = "firecrawl-py", marker = "extra == 'firecrawl-py'", specifier = ">=1.8.0" },
{ name = "gitpython", marker = "extra == 'github'", specifier = ">=3.1.41,<4" },
{ name = "gitpython", marker = "extra == 'github'", specifier = ">=3.1.47,<4" },
{ name = "hyperbrowser", marker = "extra == 'hyperbrowser'", specifier = ">=0.18.0" },
{ name = "langchain-apify", marker = "extra == 'apify'", specifier = ">=0.1.2,<1.0.0" },
{ name = "linkup-sdk", marker = "extra == 'linkup-sdk'", specifier = ">=0.2.2" },
@@ -2619,14 +2619,14 @@ wheels = [
 [[package]]
 name = "gitpython"
-version = "3.1.46"
+version = "3.1.47"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "gitdb" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/df/b5/59d16470a1f0dfe8c793f9ef56fd3826093fc52b3bd96d6b9d6c26c7e27b/gitpython-3.1.46.tar.gz", hash = "sha256:400124c7d0ef4ea03f7310ac2fbf7151e09ff97f2a3288d64a440c584a29c37f", size = 215371, upload-time = "2026-01-01T15:37:32.073Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/c1/bd/50db468e9b1310529a19fce651b3b0e753b5c07954d486cba31bbee9a5d5/gitpython-3.1.47.tar.gz", hash = "sha256:dba27f922bd2b42cb54c87a8ab3cb6beb6bf07f3d564e21ac848913a05a8a3cd", size = 216978, upload-time = "2026-04-22T02:44:44.059Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/6a/09/e21df6aef1e1ffc0c816f0522ddc3f6dcded766c3261813131c78a704470/gitpython-3.1.46-py3-none-any.whl", hash = "sha256:79812ed143d9d25b6d176a10bb511de0f9c67b1fa641d82097b0ab90398a2058", size = 208620, upload-time = "2026-01-01T15:37:30.574Z" },
+    { url = "https://files.pythonhosted.org/packages/f2/c5/a1bc0996af85757903cf2bf444a7824e68e0035ce63fb41d6f76f9def68b/gitpython-3.1.47-py3-none-any.whl", hash = "sha256:489f590edfd6d20571b2c0e72c6a6ac6915ee8b8cd04572330e3842207a78905", size = 209547, upload-time = "2026-04-22T02:44:41.271Z" },
 ]
 
 [[package]]