Compare commits

...

41 Commits

Author SHA1 Message Date
Lorenze Jay
e03da9c3cf Merge branch 'main' into docs/stop-execution-endpoint 2026-04-01 09:46:01 -07:00
Lucas Gomide
c8f3a96779 docs: fix RBAC permission levels to match actual UI options (#5210)
Some checks are pending
CodeQL Advanced / Analyze (actions) (push) Waiting to run
CodeQL Advanced / Analyze (python) (push) Waiting to run
Check Documentation Broken Links / Check broken links (push) Waiting to run
2026-04-01 10:35:06 -04:00
Iris Clawd
4b5a14d688 docs: document /stop/{kickoff_id} endpoint for cancelling executions 2026-04-01 13:36:44 +00:00
João Moura
18ada25f01 docs: update changelog and version for v1.13.0a5 (#5200)
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Check Documentation Broken Links / Check broken links (push) Has been cancelled
2026-04-01 04:00:09 -03:00
João Moura
146da8d73a feat: bump versions to 1.13.0a5 (#5199) 2026-04-01 03:59:07 -03:00
Greyson LaLonde
98c6109214 docs: update changelog and version for v1.13.0a4
Some checks failed
Build uv cache / build-cache (3.10) (push) Waiting to run
Build uv cache / build-cache (3.11) (push) Waiting to run
Build uv cache / build-cache (3.12) (push) Waiting to run
Build uv cache / build-cache (3.13) (push) Waiting to run
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Check Documentation Broken Links / Check broken links (push) Has been cancelled
2026-04-01 05:08:12 +08:00
Greyson LaLonde
54a9174c12 feat: bump versions to 1.13.0a4 2026-04-01 05:01:29 +08:00
Greyson LaLonde
c26ae969b3 docs: update changelog and version for v1.13.0a3 2026-04-01 04:16:25 +08:00
Greyson LaLonde
205555b786 feat: bump versions to 1.13.0a3 2026-04-01 04:02:29 +08:00
Greyson LaLonde
d6714a0e60 refactor: convert Flow to Pydantic BaseModel 2026-04-01 03:48:41 +08:00
dependabot[bot]
107bc7f7be chore(deps): bump the security-updates group across 1 directory with 2 updates (#5088)
Bumps the security-updates group with 2 updates in the / directory: [nltk](https://github.com/nltk/nltk) and [pypdf](https://github.com/py-pdf/pypdf).


Updates `nltk` from 3.9.3 to 3.9.4
- [Changelog](https://github.com/nltk/nltk/blob/develop/ChangeLog)
- [Commits](https://github.com/nltk/nltk/compare/3.9.3...3.9.4)

Updates `pypdf` from 6.9.1 to 6.9.2
- [Release notes](https://github.com/py-pdf/pypdf/releases)
- [Changelog](https://github.com/py-pdf/pypdf/blob/main/CHANGELOG.md)
- [Commits](https://github.com/py-pdf/pypdf/compare/6.9.1...6.9.2)

---
updated-dependencies:
- dependency-name: nltk
  dependency-version: 3.9.4
  dependency-type: indirect
  dependency-group: security-updates
- dependency-name: pypdf
  dependency-version: 6.9.2
  dependency-type: indirect
  dependency-group: security-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-31 14:03:42 -05:00
iris-clawd
b1f49b1356 docs: fix inaccuracies in agent-capabilities across all languages (#5191)
- Apps run locally (with CREWAI_PLATFORM_INTEGRATION_TOKEN env var), not remotely
- Apps auth is an integration token, not OAuth
- Updated comparison tables and card descriptions in en, pt-BR, ko, ar
2026-03-31 15:00:00 -03:00
iris-clawd
accae5ca43 docs: Add Agent Capabilities overview and improve Skills documentation (#5189)
* docs: add Agent Capabilities overview page and improve Skills docs

- New 'Agent Capabilities' page explaining all 5 extension types (Tools, MCPs, Apps, Skills, Knowledge) with comparison table and decision guide
- Rewrite Skills page with practical examples showing Skills + Tools patterns, common FAQ, and Skills vs Knowledge comparison
- Add cross-reference callout on Tools page linking to the capabilities overview
- Add agent-capabilities to Core Concepts navigation (after agents)

* docs: add pt-BR and ko translations for agent-capabilities and updated skills/tools

* docs: add Arabic (ar) translations for agent-capabilities and updated skills/tools
2026-03-31 14:47:38 -03:00
Lucas Gomide
68e943be68 feat: emit token usage data in LLMCallCompletedEvent 2026-04-01 00:18:36 +08:00
Greyson LaLonde
3283a00e31 fix(deps): cap lancedb below 0.30.1 for Windows compatibility
Some checks failed
Build uv cache / build-cache (3.10) (push) Has been cancelled
Build uv cache / build-cache (3.11) (push) Has been cancelled
Build uv cache / build-cache (3.12) (push) Has been cancelled
Build uv cache / build-cache (3.13) (push) Has been cancelled
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
lancedb 0.30.1 dropped the win_amd64 wheel, breaking installation on
Windows. Pin to <0.30.1 so uv resolves to a version that still ships
Windows binaries.
2026-03-31 16:59:45 +08:00
Greyson LaLonde
dfc0f9a317 refactor: replace InstanceOf[T] with plain type annotations
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* refactor: replace InstanceOf[T] with plain type annotations

InstanceOf[] is a Pydantic validation wrapper that adds runtime
isinstance checks. Plain type annotations are sufficient here since
the models already use arbitrary_types_allowed or the types are
BaseModel subclasses.
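
A minimal before/after sketch of the pattern (hypothetical model, not code from this PR):

```python
from pydantic import BaseModel, ConfigDict, InstanceOf


class Storage:
    """An arbitrary non-Pydantic type."""


# Before: InstanceOf[] wraps the annotation in a runtime isinstance() check.
class AgentBefore(BaseModel):
    model_config = ConfigDict(arbitrary_types_allowed=True)
    storage: InstanceOf[Storage]


# After: with arbitrary_types_allowed, a plain annotation already gets an
# isinstance() check, so the InstanceOf[] wrapper is redundant.
class AgentAfter(BaseModel):
    model_config = ConfigDict(arbitrary_types_allowed=True)
    storage: Storage
```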

* refactor: convert BaseKnowledgeStorage to BaseModel

* fix: update tests for BaseKnowledgeStorage BaseModel conversion

* fix: correct embedder config structure in test
2026-03-31 08:11:21 +08:00
Greyson LaLonde
ef79456968 chore: remove unused third_party LLM directory
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Build uv cache / build-cache (3.10) (push) Has been cancelled
Build uv cache / build-cache (3.11) (push) Has been cancelled
Build uv cache / build-cache (3.12) (push) Has been cancelled
Build uv cache / build-cache (3.13) (push) Has been cancelled
Nightly Canary Release / Check for new commits (push) Has been cancelled
Nightly Canary Release / Build nightly packages (push) Has been cancelled
Nightly Canary Release / Publish nightly to PyPI (push) Has been cancelled
2026-03-31 07:33:56 +08:00
Greyson LaLonde
6c7ea422e7 refactor: convert LLM classes to Pydantic BaseModel 2026-03-31 07:07:11 +08:00
Lorenze Jay
bb9bcd6823 refactor: remove unused and methods from (#5172)
This commit cleans up the  class by removing the  and  methods, which are no longer needed. The changes help streamline the code and improve maintainability.
2026-03-30 15:01:58 -07:00
Lucas Gomide
ac14b9127e fix: handle GPT-5.x models not supporting the stop API parameter (#5144)
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
GPT-5.x models reject the `stop` parameter at the API level with "Unsupported parameter: 'stop' is not supported with this model". This breaks CrewAI executions when routing through LiteLLM (e.g. via
OpenAI-compatible gateways like Asimov), because the LiteLLM fallback path always includes `stop` in the API request params.

The native OpenAI provider was unaffected because it never sends `stop` to the API — it applies stop words client-side via `_apply_stop_words()`. However, when the request goes through LiteLLM (custom endpoints, proxy gateways),
`stop` is sent as an API parameter and GPT-5.x rejects it.

Additionally, the existing retry logic that catches this error only matched the OpenAI API error format ("Unsupported parameter") but missed
LiteLLM's own pre-validation error format ("does not support parameters"), so the self-healing retry never triggered for LiteLLM-routed calls.
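
A hedged sketch of the self-healing retry described above, using the public `litellm.completion` API; the wrapper and error matching are illustrative, not CrewAI's actual internals:

```python
import litellm


def call_with_stop_fallback(**params):
    """Call the model, retrying once without `stop` if the model rejects it."""
    try:
        return litellm.completion(**params)
    except Exception as exc:
        message = str(exc)
        # Match both the OpenAI API error ("Unsupported parameter") and
        # LiteLLM's own pre-validation error ("does not support parameters").
        if "stop" in params and (
            "Unsupported parameter" in message
            or "does not support parameters" in message
        ):
            params.pop("stop")
            return litellm.completion(**params)
        raise
```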
2026-03-30 11:36:51 -04:00
Thiago Moretto
98b7626784 feat: extract and publish tool metadata to AMP (#4298)
* Exporting tool's metadata to AMP - initial work

* Fix payload (nest under `tools` key)

* Remove debug message + code simplification

* Printing out detected tools

* Extract module name

* fix: address PR review feedback for tool metadata extraction

- Use sha256 instead of md5 for module name hashing (lint S324)
- Filter required list to match filtered properties in JSON schema

* fix: Use sha256 instead of md5 for module name hashing (lint S324)

- Add missing mocks to metadata extraction failure test

* style: fix ruff formatting

* fix: resolve mypy type errors in utils.py

* fix: address bot review feedback on tool metadata

- Use `is not None` instead of truthiness check so empty tools list
  is sent to the API rather than being silently dropped as None
- Strip __init__ suffix from module path for tools in __init__.py files
- Extend _unwrap_schema to handle function-before, function-wrap, and
  definitions wrapper types

* fix: capture env_vars declared with Field(default_factory=...)

When env_vars uses default_factory, pydantic stores a callable in the
schema instead of a static default value. Fall back to calling the
factory when no static default is present.
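
A small sketch of that fallback, assuming a Pydantic v2 model; the helper name is hypothetical:

```python
from pydantic import BaseModel, Field
from pydantic_core import PydanticUndefined


class ExampleTool(BaseModel):
    env_vars: list[str] = Field(default_factory=lambda: ["API_KEY"])


def static_or_factory_default(model: type[BaseModel], field_name: str):
    """Return a field's static default, calling default_factory as a fallback."""
    field = model.model_fields[field_name]
    if field.default is not PydanticUndefined:
        return field.default
    if field.default_factory is not None:
        # Pydantic stored a callable in the schema, not a static value.
        return field.default_factory()
    return None


print(static_or_factory_default(ExampleTool, "env_vars"))  # ['API_KEY']
```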

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-03-30 09:21:53 -04:00
iris-clawd
e21c506214 docs: Add comprehensive SSO configuration guide (#5152)
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Check Documentation Broken Links / Check broken links (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Nightly Canary Release / Check for new commits (push) Has been cancelled
Nightly Canary Release / Build nightly packages (push) Has been cancelled
Nightly Canary Release / Publish nightly to PyPI (push) Has been cancelled
* docs: add comprehensive SSO configuration guide

Add SSO documentation page covering all supported identity providers
for both SaaS (AMP) and Factory deployments.

Includes:
- Provider overview (WorkOS, Entra ID, Okta, Auth0, Keycloak)
- SaaS vs Factory SSO availability
- Step-by-step setup guides per provider with env vars
- CLI authentication via Device Authorization Grant
- RBAC integration overview
- Troubleshooting common SSO issues
- Complete environment variables reference

Placed in the Manage nav group alongside RBAC.

* fix: add key icon to SSO docs page

* fix: broken links in SSO docs (installation, configuration)
2026-03-28 13:15:34 +08:00
Greyson LaLonde
9fe0c15549 docs: update changelog and version for v1.13.0rc1
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Check Documentation Broken Links / Check broken links (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Nightly Canary Release / Build nightly packages (push) Has been cancelled
Nightly Canary Release / Publish nightly to PyPI (push) Has been cancelled
Nightly Canary Release / Check for new commits (push) Has been cancelled
2026-03-27 11:30:45 +08:00
Greyson LaLonde
78d8ddb649 feat: bump versions to 1.13.0rc1 2026-03-27 11:26:04 +08:00
Greyson LaLonde
1b2062009a docs: update changelog and version for v1.13.0a2
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Check Documentation Broken Links / Check broken links (push) Has been cancelled
Nightly Canary Release / Check for new commits (push) Has been cancelled
Nightly Canary Release / Build nightly packages (push) Has been cancelled
Nightly Canary Release / Publish nightly to PyPI (push) Has been cancelled
2026-03-27 04:05:32 +08:00
Greyson LaLonde
886aa4ba8f feat: bump versions to 1.13.0a2 2026-03-27 04:00:59 +08:00
Greyson LaLonde
5bec000b21 feat: auto-update deployment test repo during release
After PyPI publish, clones crewAIInc/crew_deployment_test, bumps the
crewai[tools] pin to the new version, regenerates uv.lock, and pushes
to main. Includes retry logic for CDN propagation delays.
2026-03-27 03:54:10 +08:00
Greyson LaLonde
2965384907 feat: improve enterprise release resilience and UX
- Add --skip-to-enterprise flag to resume just Phase 3 after a failure
- Add --prerelease=allow to uv sync for alpha/beta/rc versions
- Retry uv sync up to 10 times to handle PyPI CDN propagation delay
- Update pyproject.toml [project] version field (fixes apps/api version)
- Print PR URL after creating enterprise bump PR
2026-03-27 03:36:56 +08:00
Greyson LaLonde
032ef06ef6 docs: update changelog and version for v1.13.0a1 2026-03-27 03:07:26 +08:00
Greyson LaLonde
0ce9567cfc feat: bump versions to 1.13.0a1 2026-03-27 03:00:29 +08:00
Greyson LaLonde
d7252bfee7 fix: pin Node to LTS 22 in docs broken links workflow
Mintlify doesn't support Node 25+, and `node-version: latest` was
pulling 25.8.2 causing the workflow to fail.
2026-03-27 02:36:11 +08:00
Greyson LaLonde
10fc3796bb fix: bust uv cache for freshly published packages in enterprise release 2026-03-27 02:21:31 +08:00
iris-clawd
52249683a7 docs: comprehensive RBAC permissions matrix and deployment guide (#5112)
- Add full feature permissions matrix (11 features × permission levels)
- Document Owner vs Member default permissions
- Add deployment guide: what permissions are needed to deploy from GitHub or Zip
- Document entity-level permissions (deployment permission types: run, traces, manage_settings, HITL, full_access)
- Document entity RBAC for env vars, LLM connections, and Git repositories
- Add common role patterns: Developer, Viewer/Stakeholder, Ops/Platform Admin
- Add quick-reference table for minimum deployment permissions

Addresses user feedback that RBAC was too restrictive and unclear:
members didn't know which permissions to configure for a developer profile.
2026-03-26 12:30:17 -04:00
João Moura
6193e082e1 docs: update changelog and version for v1.12.2 (#5103)
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Check Documentation Broken Links / Check broken links (push) Has been cancelled
2026-03-26 03:54:26 -03:00
João Moura
33f33c6fcc feat: bump versions to 1.12.2 (#5101) 2026-03-26 03:33:10 -03:00
alex-clawd
74976b157d fix: preserve method return value as flow output for @human_feedback with emit (#5099)
* fix: preserve method return value as flow output for @human_feedback with emit

When a @human_feedback decorated method with emit= is the final method in a
flow (no downstream listeners triggered), the flow's final output was
incorrectly set to the collapsed outcome string (e.g., 'approved') instead
of the method's actual return value (e.g., a state dict).

Root cause: _process_feedback() returns the collapsed_outcome string when
emit is set, and this string was being stored as the method's result in
_method_outputs.

The fix:
1. In human_feedback.py: After _process_feedback, stash the real method_output
   on the flow instance as _human_feedback_method_output when emit is set.

2. In flow.py: After appending a method result to _method_outputs, check if
   _human_feedback_method_output is set. If so, replace the last entry with
   the stashed real output and clear the stash.

This ensures:
- Routing still works correctly (collapsed outcome used for @listen matching)
- The flow's final result is the actual method return value
- If downstream listeners execute, their results become the final output

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* style: ruff format flow.py

* fix: use per-method dict stash for concurrency safety and None returns

Addresses review comments:
- Replace single flow-level slot with dict keyed by method name,
  safe under concurrent @human_feedback+emit execution
- Dict key presence (not value) indicates stashed output,
  correctly preserving None return values
- Added test for None return value preservation
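
A simplified sketch of the stash mechanism described above (illustrative names, not the actual Flow internals):

```python
class FlowSketch:
    def __init__(self) -> None:
        self._method_outputs: list[object] = []
        # Keyed by method name; key *presence*, not value, marks a stash,
        # so a legitimate None return value is preserved correctly.
        self._human_feedback_method_outputs: dict[str, object] = {}

    def record_result(self, method_name: str, result: object) -> None:
        # `result` may be the collapsed outcome string used for @listen routing.
        self._method_outputs.append(result)
        if method_name in self._human_feedback_method_outputs:
            # Replace the routing string with the method's real return value.
            real = self._human_feedback_method_outputs.pop(method_name)
            self._method_outputs[-1] = real
```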

---------

Co-authored-by: Joao Moura <joao@crewai.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-03-26 03:28:17 -03:00
Greyson LaLonde
bd03f6cf64 feat: add enterprise release phase to devtools release
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Check Documentation Broken Links / Check broken links (push) Has been cancelled
Nightly Canary Release / Check for new commits (push) Has been cancelled
Nightly Canary Release / Build nightly packages (push) Has been cancelled
Nightly Canary Release / Publish nightly to PyPI (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2026-03-26 12:22:37 +08:00
Rip&Tear
a91cd1a7d7 Revise security policy and reporting instructions (#5096)
* Revise security policy and reporting instructions

Updated the security reporting process and contact details.

* Update .github/security.md
---------
2026-03-26 10:50:21 +08:00
João Moura
66dee3195f docs: update changelog and version for v1.12.1 (#5095) 2026-03-25 22:52:11 -03:00
João Moura
034f576dc0 feat: bump versions to 1.12.1 (#5094)
* chore: bump version to 1.12.1 across all modules

* feat: bump versions to 1.12.1
2026-03-25 22:45:33 -03:00
Lucas Gomide
918654318b feat: add request_id to HumanFeedbackRequestedEvent (#5092)
* feat: add request_id to HumanFeedbackRequestedEvent

Allow platforms to attach a correlation identifier to human feedback requests so downstream consumers can deterministically match spans to their corresponding feedback records

* feat: add request_id to HumanFeedbackReceivedEvent for correlation

Without request_id on the received event, consumers cannot correlate
a feedback response back to its originating request. Both sides of the
request/response pair need the correlation identifier.
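
A hedged sketch of how a consumer might use the correlation id; the handler wiring is illustrative rather than the exact CrewAI event-bus API:

```python
pending_requests: dict[str, object] = {}


def on_feedback_requested(event) -> None:
    # event is a HumanFeedbackRequestedEvent carrying request_id
    pending_requests[event.request_id] = event


def on_feedback_received(event) -> None:
    # event is a HumanFeedbackReceivedEvent carrying the same request_id
    request = pending_requests.pop(event.request_id, None)
    if request is not None:
        print(f"Matched feedback to request {event.request_id}")
```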

---------

Co-authored-by: Alex <alex@crewai.com>
2026-03-25 22:43:24 -03:00
92 changed files with 11874 additions and 2182 deletions

.github/security.md vendored
View File

@@ -1,50 +1,12 @@
## CrewAI Security Policy
We are committed to protecting the confidentiality, integrity, and availability of the CrewAI ecosystem. This policy explains how to report potential vulnerabilities and what you can expect from us when you do.
### Scope
We welcome reports for vulnerabilities that could impact:
- CrewAI-maintained source code and repositories
- CrewAI-operated infrastructure and services
- Official CrewAI releases, packages, and distributions
Issues affecting clearly unaffiliated third-party services or user-generated content are out of scope, unless you can demonstrate a direct impact on CrewAI systems or customers.
We are committed to protecting the confidentiality, integrity, and availability of the
CrewAI ecosystem.
### How to Report
- **Please do not** disclose vulnerabilities via public GitHub issues, pull requests, or social media.
- Email detailed reports to **security@crewai.com** with the subject line `Security Report`.
- If you need to share large files or sensitive artifacts, mention it in your email and we will coordinate a secure transfer method.
Please submit reports to **crewai-vdp-ess@submit.bugcrowd.com**
### What to Include
Providing comprehensive information enables us to validate the issue quickly:
- **Vulnerability overview** — a concise description and classification (e.g., RCE, privilege escalation)
- **Affected components** — repository, branch, tag, or deployed service along with relevant file paths or endpoints
- **Reproduction steps** — detailed, step-by-step instructions; include logs, screenshots, or screen recordings when helpful
- **Proof-of-concept** — exploit details or code that demonstrates the impact (if available)
- **Impact analysis** — severity assessment, potential exploitation scenarios, and any prerequisites or special configurations
### Our Commitment
- **Acknowledgement:** We aim to acknowledge your report within two business days.
- **Communication:** We will keep you informed about triage results, remediation progress, and planned release timelines.
- **Resolution:** Confirmed vulnerabilities will be prioritized based on severity and fixed as quickly as possible.
- **Recognition:** We currently do not run a bug bounty program; any rewards or recognition are issued at CrewAI's discretion.
### Coordinated Disclosure
We ask that you allow us a reasonable window to investigate and remediate confirmed issues before any public disclosure. We will coordinate publication timelines with you whenever possible.
### Safe Harbor
We will not pursue or support legal action against individuals who, in good faith:
- Follow this policy and refrain from violating any applicable laws
- Avoid privacy violations, data destruction, or service disruption
- Limit testing to systems in scope and respect rate limits and terms of service
If you are unsure whether your testing is covered, please contact us at **security@crewai.com** before proceeding.
- **Please do not** disclose vulnerabilities via public GitHub issues, pull requests,
or social media
- Reports submitted via channels other than this Bugcrowd submission email will not be reviewed and will be dismissed

View File

@@ -23,7 +23,7 @@ jobs:
- name: Set up Node
uses: actions/setup-node@v4
with:
node-version: "latest"
node-version: "22"
- name: Install Mintlify CLI
run: npm i -g mintlify

View File

@@ -4,6 +4,190 @@ description: "Product updates, improvements, and fixes
icon: "clock"
mode: "wide"
---
<Update label="31 مارس 2026">
## v1.13.0a5
[عرض الإصدار على GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a5)
## ما الذي تغير
### الوثائق
- تحديث سجل التغييرات والإصدار لـ v1.13.0a4
## المساهمون
@greysonlalonde, @joaomdmoura
</Update>
<Update label="1 أبريل 2026">
## v1.13.0a4
[عرض الإصدار على GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a4)
## ما الذي تغير
### الوثائق
- تحديث سجل التغييرات والإصدار لـ v1.13.0a3
## المساهمون
@greysonlalonde
</Update>
<Update label="1 أبريل 2026">
## v1.13.0a3
[عرض الإصدار على GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a3)
## ما الذي تغير
### الميزات
- إصدار بيانات استخدام الرمز في LLMCallCompletedEvent
- استخراج ونشر بيانات الأداة إلى AMP
### إصلاح الأخطاء
- التعامل مع نماذج GPT-5.x التي لا تدعم معلمة API `stop`
### الوثائق
- إصلاح عدم الدقة في قدرات الوكيل عبر جميع اللغات
- إضافة نظرة عامة على قدرات الوكيل وتحسين وثائق المهارات
- إضافة دليل شامل لتكوين SSO
- تحديث سجل التغييرات والإصدار لـ v1.13.0rc1
### إعادة الهيكلة
- تحويل Flow إلى Pydantic BaseModel
- تحويل فئات LLM إلى Pydantic BaseModel
- استبدال InstanceOf[T] بتعليقات نوع عادية
- إزالة الطرق غير المستخدمة
## المساهمون
@dependabot[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @lucasgomide, @thiagomoretto
</Update>
<Update label="27 مارس 2026">
## v1.13.0rc1
[عرض الإصدار على GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0rc1)
## ما الذي تغير
### الوثائق
- تحديث سجل التغييرات والإصدار لـ v1.13.0a2
## المساهمون
@greysonlalonde
</Update>
<Update label="27 مارس 2026">
## v1.13.0a2
[عرض الإصدار على GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a2)
## ما الذي تغير
### الميزات
- تحديث تلقائي لمستودع اختبار النشر أثناء الإصدار
- تحسين مرونة إصدار المؤسسات وتجربة المستخدم
### الوثائق
- تحديث سجل التغييرات والإصدار للإصدار v1.13.0a1
## المساهمون
@greysonlalonde
</Update>
<Update label="27 مارس 2026">
## v1.13.0a1
[عرض الإصدار على GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a1)
## ما الذي تغير
### إصلاحات الأخطاء
- إصلاح الروابط المعطلة في سير العمل الوثائقي عن طريق تثبيت Node على LTS 22
- مسح ذاكرة التخزين المؤقت لـ uv للحزم المنشورة حديثًا في الإصدار المؤسسي
### الوثائق
- إضافة مصفوفة شاملة لأذونات RBAC ودليل النشر
- تحديث سجل التغييرات والإصدار للإصدار v1.12.2
## المساهمون
@greysonlalonde, @iris-clawd, @joaomdmoura
</Update>
<Update label="25 مارس 2026">
## v1.12.2
[عرض الإصدار على GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.12.2)
## ما الذي تغير
### الميزات
- إضافة مرحلة إصدار المؤسسات إلى إصدار أدوات المطورين
### إصلاحات الأخطاء
- الحفاظ على قيمة إرجاع الطريقة كإخراج تدفق لـ @human_feedback مع emit
### الوثائق
- تحديث سجل التغييرات والإصدار لـ v1.12.1
- مراجعة سياسة الأمان وتعليمات الإبلاغ
## المساهمون
@alex-clawd, @greysonlalonde, @joaomdmoura, @theCyberTech
</Update>
<Update label="25 مارس 2026">
## v1.12.1
[عرض الإصدار على GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.12.1)
## ما الذي تغير
### الميزات
- إضافة request_id إلى HumanFeedbackRequestedEvent
- إضافة Qdrant Edge كخلفية تخزين لنظام الذاكرة
- إضافة أمر docs-check لتحليل التغييرات وتوليد الوثائق مع الترجمات
- إضافة دعم اللغة العربية إلى سجل التغييرات وأدوات الإصدار
- إضافة ترجمة باللغة العربية الفصحى لجميع الوثائق
- إضافة أمر تسجيل الخروج في واجهة سطر الأوامر
- إضافة مهارات الوكيل
- تنفيذ root_scope تلقائيًا لعزل الذاكرة الهيكلية
- تنفيذ مزودين متوافقين مع OpenAI (OpenRouter، DeepSeek، Ollama، vLLM، Cerebras، Dashscope)
### إصلاحات الأخطاء
- إصلاح بيانات اعتماد غير صحيحة لدفع دفعات التتبع (404)
- حل العديد من الأخطاء في نظام تدفق HITL
- إصلاح حفظ ذاكرة الوكيل
- حل جميع أخطاء mypy الصارمة عبر حزمة crewai
- إصلاح استخدام __router_paths__ لطرق المستمع + الموجه في FlowMeta
- إصلاح خطأ القيمة عند عدم دعم الملفات
- تصحيح صياغة الحجر الصحي لـ litellm في الوثائق
- إصلاح جميع أخطاء mypy في crewai-files وإضافة جميع الحزم إلى فحوصات النوع في CI
- تثبيت الحد الأعلى لـ litellm على آخر إصدار تم اختباره (1.82.6)
### الوثائق
- تحديث سجل التغييرات والإصدار لـ v1.12.0
- إضافة CONTRIBUTING.md
- إضافة دليل لاستخدام CrewAI بدون LiteLLM
## المساهمون
@akaKuruma، @alex-clawd، @greysonlalonde، @iris-clawd، @joaomdmoura، @lorenzejay، @lucasgomide، @nicoferdi96
</Update>
<Update label="25 مارس 2026">
## v1.12.0

View File

@@ -0,0 +1,147 @@
---
title: "قدرات الوكيل"
description: "فهم الطرق الخمس لتوسيع وكلاء CrewAI: الأدوات، MCP، التطبيقات، المهارات، والمعرفة."
icon: puzzle-piece
mode: "wide"
---
## نظرة عامة
يمكن توسيع وكلاء CrewAI بـ **خمسة أنواع مميزة من القدرات**، كل منها يخدم غرضًا مختلفًا. فهم متى تستخدم كل نوع — وكيف يعملون معًا — هو المفتاح لبناء وكلاء فعّالين.
<CardGroup cols={2}>
<Card title="الأدوات" icon="wrench" href="/ar/concepts/tools" color="#3B82F6">
**دوال قابلة للاستدعاء** — تمنح الوكلاء القدرة على اتخاذ إجراءات. البحث على الويب، عمليات الملفات، استدعاءات API، تنفيذ الكود.
</Card>
<Card title="خوادم MCP" icon="plug" href="/ar/mcp/overview" color="#8B5CF6">
**خوادم أدوات عن بُعد** — تربط الوكلاء بخوادم أدوات خارجية عبر Model Context Protocol. نفس تأثير الأدوات، لكن مستضافة خارجيًا.
</Card>
<Card title="التطبيقات" icon="grid-2" color="#EC4899">
**تكاملات المنصة** — تربط الوكلاء بتطبيقات SaaS (Gmail، Slack، Jira، Salesforce) عبر منصة CrewAI. تعمل محليًا مع رمز تكامل المنصة.
</Card>
<Card title="المهارات" icon="bolt" href="/ar/concepts/skills" color="#F59E0B">
**خبرة المجال** — تحقن التعليمات والإرشادات والمواد المرجعية في إرشادات الوكلاء. المهارات تخبر الوكلاء *كيف يفكرون*.
</Card>
<Card title="المعرفة" icon="book" href="/ar/concepts/knowledge" color="#10B981">
**حقائق مُسترجعة** — توفر للوكلاء بيانات من المستندات والملفات وعناوين URL عبر البحث الدلالي (RAG). المعرفة تعطي الوكلاء *ما يحتاجون معرفته*.
</Card>
</CardGroup>
---
## التمييز الأساسي
أهم شيء يجب فهمه: **هذه القدرات تنقسم إلى فئتين**.
### قدرات الإجراء (الأدوات، MCP، التطبيقات)
تمنح الوكلاء القدرة على **فعل أشياء** — استدعاء APIs، قراءة الملفات، البحث على الويب، إرسال رسائل البريد الإلكتروني. عند التنفيذ، تتحول الأنواع الثلاثة إلى نفس التنسيق الداخلي (مثيلات `BaseTool`) وتظهر في قائمة أدوات موحدة يمكن للوكيل استدعاؤها.
```python
from crewai import Agent
from crewai_tools import SerperDevTool, FileReadTool
agent = Agent(
role="Researcher",
goal="Find and compile market data",
backstory="Expert market analyst",
tools=[SerperDevTool(), FileReadTool()], # أدوات محلية
mcps=["https://mcp.example.com/sse"], # أدوات خادم MCP عن بُعد
apps=["gmail", "google_sheets"], # تكاملات المنصة
)
```
### قدرات السياق (المهارات، المعرفة)
تُعدّل **إرشادات** الوكيل — بحقن الخبرة أو التعليمات أو البيانات المُسترجعة قبل أن يبدأ الوكيل في التفكير. لا تمنح الوكلاء إجراءات جديدة؛ بل تُشكّل كيف يفكر الوكلاء وما هي المعلومات التي يمكنهم الوصول إليها.
```python
from crewai import Agent
agent = Agent(
role="Security Auditor",
goal="Audit cloud infrastructure for vulnerabilities",
backstory="Expert in cloud security with 10 years of experience",
skills=["./skills/security-audit"], # تعليمات المجال
knowledge_sources=[pdf_source, url_source], # حقائق مُسترجعة
)
```
---
## متى تستخدم ماذا
| تحتاج إلى... | استخدم | مثال |
| :------------------------------------------------------- | :---------------- | :--------------------------------------- |
| الوكيل يبحث على الويب | **الأدوات** | `tools=[SerperDevTool()]` |
| الوكيل يستدعي API عن بُعد عبر MCP | **MCP** | `mcps=["https://api.example.com/sse"]` |
| الوكيل يرسل بريد إلكتروني عبر Gmail | **التطبيقات** | `apps=["gmail"]` |
| الوكيل يتبع إجراءات محددة | **المهارات** | `skills=["./skills/code-review"]` |
| الوكيل يرجع لمستندات الشركة | **المعرفة** | `knowledge_sources=[pdf_source]` |
| الوكيل يبحث على الويب ويتبع إرشادات المراجعة | **الأدوات + المهارات** | استخدم كليهما معًا |
---
## دمج القدرات
في الممارسة العملية، غالبًا ما يستخدم الوكلاء **أنواعًا متعددة من القدرات معًا**. إليك مثال واقعي:
```python
from crewai import Agent
from crewai_tools import SerperDevTool, FileReadTool, CodeInterpreterTool
# وكيل بحث مجهز بالكامل
researcher = Agent(
role="Senior Research Analyst",
goal="Produce comprehensive market analysis reports",
backstory="Expert analyst with deep industry knowledge",
# الإجراء: ما يمكن للوكيل فعله
tools=[
SerperDevTool(), # البحث على الويب
FileReadTool(), # قراءة الملفات المحلية
CodeInterpreterTool(), # تشغيل كود Python للتحليل
],
mcps=["https://data-api.example.com/sse"], # الوصول لـ API بيانات عن بُعد
apps=["google_sheets"], # الكتابة في Google Sheets
# السياق: ما يعرفه الوكيل
skills=["./skills/research-methodology"], # كيفية إجراء البحث
knowledge_sources=[company_docs], # بيانات خاصة بالشركة
)
```
---
## جدول المقارنة
| الميزة | الأدوات | MCP | التطبيقات | المهارات | المعرفة |
| :--- | :---: | :---: | :---: | :---: | :---: |
| **يمنح الوكيل إجراءات** | ✅ | ✅ | ✅ | ❌ | ❌ |
| **يُعدّل الإرشادات** | ❌ | ❌ | ❌ | ✅ | ✅ |
| **يتطلب كود** | نعم | إعداد فقط | إعداد فقط | Markdown فقط | إعداد فقط |
| **يعمل محليًا** | نعم | يعتمد | نعم (مع متغير بيئة) | غير متاح | نعم |
| **يحتاج مفاتيح API** | لكل أداة | لكل خادم | رمز التكامل | لا | المُضمّن فقط |
| **يُعيَّن على Agent** | `tools=[]` | `mcps=[]` | `apps=[]` | `skills=[]` | `knowledge_sources=[]` |
| **يُعيَّن على Crew** | ❌ | ❌ | ❌ | `skills=[]` | `knowledge_sources=[]` |
---
## تعمّق أكثر
هل أنت مستعد لمعرفة المزيد عن كل نوع من أنواع القدرات؟
<CardGroup cols={2}>
<Card title="الأدوات" icon="wrench" href="/ar/concepts/tools">
إنشاء أدوات مخصصة، استخدام كتالوج OSS مع أكثر من 75 خيارًا، تكوين التخزين المؤقت والتنفيذ غير المتزامن.
</Card>
<Card title="تكامل MCP" icon="plug" href="/ar/mcp/overview">
الاتصال بخوادم MCP عبر stdio أو SSE أو HTTP. تصفية الأدوات، تكوين المصادقة.
</Card>
<Card title="المهارات" icon="bolt" href="/ar/concepts/skills">
بناء حزم المهارات مع SKILL.md، حقن خبرة المجال، استخدام الكشف التدريجي.
</Card>
<Card title="المعرفة" icon="book" href="/ar/concepts/knowledge">
إضافة المعرفة من ملفات PDF وCSV وعناوين URL والمزيد. تكوين المُضمّنات والاسترجاع.
</Card>
</CardGroup>

View File

@@ -1,15 +1,217 @@
---
title: Skills
description: File-system based skill packages that inject context into agent prompts.
description: File-system based skill packages that inject domain expertise and instructions into agent prompts.
icon: bolt
mode: "wide"
---
## Overview
Skills are self-contained folders that provide agents with domain-specific instructions, references, and resources. Each skill is defined by a `SKILL.md` file containing YAML metadata and Markdown content.
Skills are self-contained folders that provide agents with **domain-specific instructions, guidelines, and reference material**. Each skill is defined by a `SKILL.md` file containing YAML metadata and Markdown content.
Skills use **progressive disclosure** — metadata is loaded first, full instructions only on activation, and resource catalogs only when needed.
On activation, a skill's instructions are injected directly into the agent's task prompt — giving the agent expertise without requiring any code changes.
<Note type="info" title="Skills vs Tools — The Key Distinction">
**Skills are not tools.** This is the most common point of confusion.
- **Skills** inject *instructions and context* into the agent's prompt. They tell the agent *how to think* about a problem.
- **Tools** give the agent *callable functions* for taking actions (searching, reading files, calling APIs).
You often need **both**: skills for expertise, tools for action. They are configured independently and complement each other.
</Note>
---
## Quickstart
### 1. Create a skill folder
```
skills/
└── code-review/
    ├── SKILL.md         # Required — the instructions
    ├── references/      # Optional — reference documents
    │   └── style-guide.md
    └── scripts/         # Optional — executable scripts
```
### 2. Write your SKILL.md
```markdown
---
name: code-review
description: Guidelines for conducting thorough code reviews with focus on security and performance.
metadata:
  author: your-team
  version: "1.0"
---
## Code Review Guidelines
When reviewing code, follow this checklist:
1. **Security**: check for injection vulnerabilities, authentication bypass, and data exposure
2. **Performance**: look for N+1 queries, unnecessary allocations, and blocking calls
3. **Readability**: ensure clear naming, appropriate comments, and consistent style
4. **Tests**: verify adequate test coverage for new functionality
### Severity Levels
- **Critical**: security vulnerabilities, data-loss risks → block the merge
- **Major**: performance problems, logic errors → request changes
- **Minor**: style issues, naming suggestions → approve with comments
```
### 3. Attach it to an agent
```python
from crewai import Agent
from crewai_tools import GithubSearchTool, FileReadTool

reviewer = Agent(
    role="Senior Code Reviewer",
    goal="Review pull requests for quality and security issues",
    backstory="Staff engineer with expertise in secure coding practices.",
    skills=["./skills"],  # Injects the review guidelines
    tools=[GithubSearchTool(), FileReadTool()],  # Lets the agent read code
)
```
The agent now has **expertise** (from the skill) and **capabilities** (from the tools) together.
---
## Skills + Tools: Working Together
Here are common patterns showing how skills and tools complement each other:
### Pattern 1: Skills only (domain expertise, no actions needed)
Use when the agent needs specific instructions but doesn't need to call external services:
```python
agent = Agent(
    role="Technical Writer",
    goal="Write clear API documentation",
    backstory="Expert technical writer",
    skills=["./skills/api-docs-style"],  # Writing guidelines and templates
    # No tools needed — the agent writes from the provided context
)
```
### Pattern 2: Tools only (actions, no special expertise)
Use when the agent needs to take actions but doesn't need domain-specific instructions:
```python
from crewai_tools import SerperDevTool, ScrapeWebsiteTool

agent = Agent(
    role="Web Researcher",
    goal="Find information about a topic",
    backstory="Skilled at finding information online",
    tools=[SerperDevTool(), ScrapeWebsiteTool()],  # Can search and scrape
    # No skills needed — general research needs no special guidelines
)
```
### Pattern 3: Skills + Tools (expertise and actions)
The most common real-world pattern. The skill provides *how* to approach the work; the tools provide *what* the agent can do:
```python
from crewai_tools import SerperDevTool, FileReadTool, CodeInterpreterTool

analyst = Agent(
    role="Security Analyst",
    goal="Audit infrastructure for vulnerabilities",
    backstory="Expert in cloud security and compliance",
    skills=["./skills/security-audit"],  # Audit methodology and checklists
    tools=[
        SerperDevTool(),  # Research known vulnerabilities
        FileReadTool(),  # Read configuration files
        CodeInterpreterTool(),  # Run analysis scripts
    ],
)
```
### Pattern 4: Skills + MCPs
Skills work with MCP servers the same way they work with tools:
```python
agent = Agent(
    role="Data Analyst",
    goal="Analyze customer data and generate reports",
    backstory="Expert data analyst with strong statistical background",
    skills=["./skills/data-analysis"],  # Analysis methodology
    mcps=["https://data-warehouse.example.com/sse"],  # Remote data access
)
```
### Pattern 5: Skills + Apps
Skills can guide how an agent uses platform integrations:
```python
agent = Agent(
    role="Customer Support Agent",
    goal="Respond to customer inquiries professionally",
    backstory="Experienced support representative",
    skills=["./skills/support-playbook"],  # Response templates and escalation rules
    apps=["gmail", "zendesk"],  # Can send emails and update tickets
)
```
---
## Crew-Level Skills
Skills can be set on the crew to apply to **all agents**:
```python
from crewai import Crew

crew = Crew(
    agents=[researcher, writer, reviewer],
    tasks=[research_task, write_task, review_task],
    skills=["./skills"],  # All agents get these skills
)
```
Agent-level skills take precedence — if the same skill is discovered at both levels, the agent's copy is used.
---
## SKILL.md Format
```markdown
---
name: my-skill
description: A short description of what this skill does and when to use it.
license: Apache-2.0              # optional
compatibility: crewai>=0.1.0     # optional
metadata:                        # optional
  author: your-name
  version: "1.0"
allowed-tools: web-search file-read  # optional, experimental
---
The instructions for the agent go here. This Markdown body is injected
into the agent's prompt when the skill is activated.
```
### Metadata Fields
| Field | Required | Description |
| :-------------- | :------- | :----------------------------------------------------------------------- |
| `name` | Yes | 1-64 characters. Lowercase alphanumerics and hyphens. Must match the folder name. |
| `description` | Yes | 1-1024 characters. Describes what the skill does and when to use it. |
| `license` | No | License name or a reference to a bundled license file. |
| `compatibility` | No | Max 500 characters. Environment requirements (products, packages, network). |
| `metadata` | No | Arbitrary string key-value mapping. |
| `allowed-tools` | No | Space-delimited list of pre-approved tools. Experimental. |
---
## Folder Structure
@@ -21,79 +223,25 @@ my-skill/
└── assets/          # Optional — static files (configs, data)
```
The folder name must match the `name` field in `SKILL.md`.
The folder name must match the `name` field in `SKILL.md`. The `scripts/`, `references/`, and `assets/` folders are available at the skill's `path` for agents that need to reference files directly.
## SKILL.md Format
```markdown
---
name: my-skill
description: Short description of what this skill does and when to use it.
license: Apache-2.0 # optional
compatibility: crewai>=0.1.0 # optional
metadata: # optional
author: your-name
version: "1.0"
allowed-tools: web-search file-read # optional, space-delimited
---
Instructions for the agent go here. This markdown body is injected
into the agent's prompt when the skill is activated.
```
## Preloaded Skills
### Metadata Fields
| Field | Required | Constraints |
| :-------------- | :------- | :----------------------------------------------------------------------- |
| `name` | Yes | 1-64 characters. Lowercase alphanumerics and hyphens. No leading/trailing/consecutive hyphens. Must match the folder name. |
| `description` | Yes | 1-1024 characters. Describes what the skill does and when to use it. |
| `license` | No | License name or a reference to a bundled license file. |
| `compatibility` | No | Max 500 characters. Environment requirements (products, packages, network). |
| `metadata` | No | Arbitrary string key-value mapping. |
| `allowed-tools` | No | Space-delimited list of pre-approved tools. Experimental. |
## Usage
### Agent-Level Skills
Pass skill folder paths to an agent:
```python
from crewai import Agent

agent = Agent(
    role="Researcher",
    goal="Find relevant information",
    backstory="An expert researcher.",
    skills=["./skills"],  # Discovers all skills in this folder
)
```
### Crew-Level Skills
Skill paths on the crew are merged into each agent:
```python
from crewai import Crew

crew = Crew(
    agents=[agent],
    tasks=[task],
    skills=["./skills"],
)
```
### Preloaded Skills
You can also pass `Skill` objects directly:
For more control, you can discover and activate skills programmatically:
```python
from pathlib import Path
from crewai.skills import discover_skills, activate_skill

# Discover all skills in a folder
skills = discover_skills(Path("./skills"))
# Activate them (loads the full SKILL.md content)
activated = [activate_skill(s) for s in skills]
# Pass to an agent
agent = Agent(
    role="Researcher",
    goal="Find relevant information",
@@ -102,13 +250,57 @@ agent = Agent(
)
```
---
## How Skills Are Loaded
Skills are loaded progressively — only the data required at each stage is read:
Skills use **progressive disclosure** — loading only what is needed at each stage:
| Stage | What Is Loaded | When |
| :--------- | :------------------------------------ | :------------------ |
| Discovery | Name, description, metadata fields | `discover_skills()` |
| Activation | Full SKILL.md body content | `activate_skill()` |
During normal agent execution, skills are discovered and activated automatically. The `scripts/`, `references/`, and `assets/` folders are available at the skill's `path` for agents that need to reference files directly.
During normal agent execution (passing folder paths via `skills=["./skills"]`), skills are discovered and activated automatically. Progressive loading only matters when using the programmatic API.
---
## Skills vs Knowledge
Both skills and knowledge modify the agent's prompt, but they serve different purposes:
| Aspect | Skills | Knowledge |
| :--- | :--- | :--- |
| **What it provides** | Instructions, procedures, guidelines | Facts, data, information |
| **How it is stored** | Markdown files (SKILL.md) | Embedded in a vector store (ChromaDB) |
| **How it is retrieved** | Full content injected into the prompt | Semantic search finds relevant chunks |
| **Best for** | Methodologies, checklists, style guides | Company documents, product information, reference data |
| **Set via** | `skills=["./skills"]` | `knowledge_sources=[source]` |
**Rule of thumb:** if the agent needs to follow a *process*, use a skill. If it needs to look up *data*, use knowledge.
---
## FAQ
<AccordionGroup>
<Accordion title="Do I need to set both skills and tools?">
It depends on the use case. Skills and tools are **independent** — you can use either one, both, or neither.
- **Skills only**: when the agent needs expertise but no external actions (e.g., writing against style guidelines)
- **Tools only**: when the agent needs actions but no special methodology (e.g., simple web research)
- **Both**: when the agent needs expertise and actions (e.g., a security audit with specific checklists and the ability to inspect code)
</Accordion>
<Accordion title="Do skills automatically provide tools?">
**No.** The `allowed-tools` field in SKILL.md is experimental metadata only — it does not create or inject any tools. You must always assign tools separately via `tools=[]`, `mcps=[]`, or `apps=[]`.
</Accordion>
<Accordion title="What happens if I set the same skill on both the agent and the crew?">
The agent-level skill takes precedence. Deduplication is by name — agent skills are processed first, so if the same skill name appears at both levels, the agent's copy is used.
</Accordion>
<Accordion title="What is the maximum size of SKILL.md content?">
There is a soft warning at 50,000 characters, but no hard limit. Keep skills focused and concise for best results — large prompt injections can distract the agent.
</Accordion>
</AccordionGroup>

View File

@@ -10,6 +10,10 @@ mode: "wide"
CrewAI tools empower agents with capabilities ranging from web searching and data analysis to collaboration and delegating tasks among coworkers.
This documentation outlines how to create, integrate, and leverage these tools within the CrewAI framework, including a focus on collaboration tools.
<Note type="info" title="Tools are one of the five agent capability types">
Tools give agents **callable functions** to take actions. They work alongside [MCPs](/ar/mcp/overview) (remote tool servers), [Apps](/ar/concepts/agent-capabilities) (platform integrations), [Skills](/ar/concepts/skills) (domain expertise), and [Knowledge](/ar/concepts/knowledge) (retrieved facts). See the [Agent Capabilities](/ar/concepts/agent-capabilities) overview to understand when to use each type.
</Note>
## What Is a Tool?
A tool in CrewAI is a skill or function that agents can use to perform various actions.

View File

@@ -7,11 +7,13 @@ mode: "wide"
## Overview
RBAC in CrewAI AMP enables secure, scalable access management through a combination of organization-level roles and automation-level visibility controls.
RBAC in CrewAI AMP enables secure, scalable access management through two layers:
1. **Feature permissions** — control what each role can do across the platform (manage, read, or no access)
2. **Entity-level permissions** — fine-grained access to individual automations, environment variables, LLM connections, and Git repositories
<Frame>
<img src="/images/enterprise/users_and_roles.png" alt="RBAC overview in CrewAI AMP" />
</Frame>
## Users and Roles
@@ -39,6 +41,13 @@ mode: "wide"
</Step>
</Steps>
### Predefined Roles
| Role | Description |
| :---------- | :-------------------------------------------------------------------- |
| **Owner** | Full access to all features and settings. Cannot be restricted. |
| **Member** | Read access to most features; manage access for environment variables, LLM connections, and Studio projects. Cannot modify organization settings or default settings. |
### Configuration Summary
| Area | Where to Configure | Options |
@@ -46,23 +55,80 @@ mode: "wide"
| Users and roles | Settings → Roles | Predefined: Owner, Member; custom roles |
| Automation visibility | Automation → Settings → Visibility | Private; allowlist of users/roles |
## Automation-Level Access Control
---
In addition to organization-level roles, CrewAI automations support fine-grained visibility settings that let you restrict access to specific automations by user or role.
## Feature Permissions Matrix
This is useful for:
Each role has a permission level for every feature area. The three levels are:
- **Manage** — full read/write access (create, edit, delete)
- **Read** — view-only access
- **No access** — the feature is hidden/inaccessible
| Feature | Owner | Member (default) | Available Levels | Description |
| :------------------------ | :------ | :--------------- | :--------------------------------- | :-------------------------------------------------------------- |
| `usage_dashboards` | Manage | Read | Manage / Read / No access | View usage metrics and analytics |
| `crews_dashboards` | Manage | Read | Manage / Read / No access | View deployment dashboards and access automation details |
| `invitations` | Manage | Read | Manage / Read / No access | Invite new members to the organization |
| `training_ui` | Manage | Read | Manage / Read / No access | Access training/fine-tuning interfaces |
| `tools` | Manage | Read | Manage / Read / No access | Create and manage tools |
| `agents` | Manage | Read | Manage / Read / No access | Create and manage agents |
| `environment_variables` | Manage | Manage | Manage / No access | Create and manage environment variables |
| `llm_connections` | Manage | Manage | Manage / No access | Configure LLM provider connections |
| `default_settings` | Manage | No access | Manage / No access | Modify organization-wide default settings |
| `organization_settings` | Manage | No access | Manage / No access | Manage billing, plans, and organization configuration |
| `studio_projects` | Manage | Manage | Manage / No access | Create and edit Studio projects |
<Tip>
When creating a custom role, most features can be set to **Manage**, **Read**, or **No access**. However, `environment_variables`, `llm_connections`, `default_settings`, `organization_settings`, and `studio_projects` support only **Manage** or **No access** — there is no read-only option for these features.
</Tip>
---
## Deploying from GitHub or Zip
One of the most common RBAC questions: _"What permissions does a team member need in order to deploy?"_
### Deploying from GitHub
To deploy an automation from a GitHub repository, a user needs:
1. **`crews_dashboards`**: at least `Read` — required to access the automations dashboard where deployments are created
2. **Git repository access** (if entity-level RBAC for Git repositories is enabled): the user's role must be granted access to the specific Git repository via entity-level permissions
3. **`studio_projects`: `Manage`** — if building the crew in Studio before deploying
### Deploying from Zip
To deploy an automation from a Zip file, a user needs:
1. **`crews_dashboards`**: at least `Read` — required to access the automations dashboard
2. **Zip deployments enabled**: the organization must not have disabled Zip deployments in organization settings
### Quick Reference: Minimum Permissions to Deploy
| Action | Required Feature Permissions | Additional Requirements |
| :------------------- | :----------------------------------- | :----------------------------------------------- |
| Deploy from GitHub | `crews_dashboards: Read` | Git repository entity access (if Git RBAC is enabled) |
| Deploy from Zip | `crews_dashboards: Read` | Zip deployments must be enabled org-wide |
| Build in Studio | `studio_projects: Manage` | — |
| Configure LLM keys | `llm_connections: Manage` | — |
| Set environment variables | `environment_variables: Manage` | Entity-level access (if entity RBAC is enabled) |
---
## Automation-Level Access Control (Entity Permissions)
In addition to organization-level roles, CrewAI supports fine-grained entity-level permissions that restrict access to individual resources.
### Automation Visibility
Automations support visibility settings that restrict access by user or role. This is useful for:
- Keeping sensitive or experimental automations private
- Managing visibility across large teams or external collaborators
- Testing automations in isolated contexts
Deployments can be configured as private, meaning only allowlisted users and roles will be able to:
- View the deployment
- Run it or interact with its API
- Access its logs, metrics, and settings
The organization owner always has access, regardless of visibility settings.
Deployments can be configured as private, meaning only allowlisted users and roles will be able to interact with them.
You can configure automation-level access control in Automation → Settings → the Visibility tab.
@@ -99,9 +165,92 @@ mode: "wide"
<Frame>
<img src="/images/enterprise/visibility.png" alt="Automation visibility settings in CrewAI AMP" />
</Frame>
### Deployment Permission Types
When granting entity-level access to a specific automation, you can assign the following permission types:
| Permission | What It Allows |
| :------------------- | :-------------------------------------------------- |
| `run` | Execute the automation and use its API |
| `traces` | View execution traces and logs |
| `manage_settings` | Edit, redeploy, roll back, or delete the automation |
| `human_in_the_loop` | Respond to human-in-the-loop (HITL) requests |
| `full_access` | All of the above |
### Entity-Level RBAC for Other Resources
When entity-level RBAC is enabled, access to these resources can also be controlled by user or role:
| Resource | Controlled By | Description |
| :-------------------- | :--------------------------------- | :------------------------------------------------------------- |
| Environment variables | Entity RBAC feature flag | Restrict which roles/users can view or manage specific environment variables |
| LLM connections | Entity RBAC feature flag | Restrict access to specific LLM provider configurations |
| Git repositories | Organization Git repository RBAC setting | Restrict which roles/users can access specific connected repositories |
---
## Common Role Patterns
While CrewAI ships with the Owner and Member roles, most teams benefit from creating custom roles. Here are common patterns:
### Developer Role
A role for team members who build and deploy automations but do not manage organization settings.
| Feature | Permission |
| :------------------------ | :---------- |
| `usage_dashboards` | Read |
| `crews_dashboards` | Manage |
| `invitations` | Read |
| `training_ui` | Read |
| `tools` | Manage |
| `agents` | Manage |
| `environment_variables` | Manage |
| `llm_connections` | Manage |
| `default_settings` | No access |
| `organization_settings` | No access |
| `studio_projects` | Manage |
### Viewer / Stakeholder Role
A role for non-technical stakeholders who need to monitor automations and view results.
| Feature | Permission |
| :------------------------ | :---------- |
| `usage_dashboards` | Read |
| `crews_dashboards` | Read |
| `invitations` | No access |
| `training_ui` | Read |
| `tools` | Read |
| `agents` | Read |
| `environment_variables` | No access |
| `llm_connections` | No access |
| `default_settings` | No access |
| `organization_settings` | No access |
| `studio_projects` | No access |
### Ops / Platform Admin Role
A role for platform operators who manage infrastructure settings but may not build agents.
| Feature | Permission |
| :------------------------ | :---------- |
| `usage_dashboards` | Manage |
| `crews_dashboards` | Manage |
| `invitations` | Manage |
| `training_ui` | Read |
| `tools` | Read |
| `agents` | Read |
| `environment_variables` | Manage |
| `llm_connections` | Manage |
| `default_settings` | Manage |
| `organization_settings` | Read |
| `studio_projects` | No access |
---
<Card title="Need help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for help with RBAC questions.
</Card>

File diff suppressed because it is too large

View File

@@ -4,6 +4,190 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Mar 31, 2026">
## v1.13.0a5
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a5)
## What's Changed
### Documentation
- Update changelog and version for v1.13.0a4
## Contributors
@greysonlalonde, @joaomdmoura
</Update>
<Update label="Apr 01, 2026">
## v1.13.0a4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a4)
## What's Changed
### Documentation
- Update changelog and version for v1.13.0a3
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 01, 2026">
## v1.13.0a3
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a3)
## What's Changed
### Features
- Emit token usage data in LLMCallCompletedEvent
- Extract and publish tool metadata to AMP
### Bug Fixes
- Handle GPT-5.x models not supporting the `stop` API parameter
### Documentation
- Fix inaccuracies in agent-capabilities across all languages
- Add Agent Capabilities overview and improve Skills documentation
- Add comprehensive SSO configuration guide
- Update changelog and version for v1.13.0rc1
### Refactoring
- Convert Flow to Pydantic BaseModel
- Convert LLM classes to Pydantic BaseModel
- Replace InstanceOf[T] with plain type annotations
- Remove unused methods
## Contributors
@dependabot[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @lucasgomide, @thiagomoretto
</Update>
<Update label="Mar 27, 2026">
## v1.13.0rc1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0rc1)
## What's Changed
### Documentation
- Update changelog and version for v1.13.0a2
## Contributors
@greysonlalonde
</Update>
<Update label="Mar 27, 2026">
## v1.13.0a2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a2)
## What's Changed
### Features
- Auto-update deployment test repo during release
- Improve enterprise release resilience and UX
### Documentation
- Update changelog and version for v1.13.0a1
## Contributors
@greysonlalonde
</Update>
<Update label="Mar 27, 2026">
## v1.13.0a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a1)
## What's Changed
### Bug Fixes
- Fix broken links in documentation workflow by pinning Node to LTS 22
- Bust the uv cache for freshly published packages in enterprise release
### Documentation
- Add comprehensive RBAC permissions matrix and deployment guide
- Update changelog and version for v1.12.2
## Contributors
@greysonlalonde, @iris-clawd, @joaomdmoura
</Update>
<Update label="Mar 25, 2026">
## v1.12.2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.12.2)
## What's Changed
### Features
- Add enterprise release phase to devtools release
### Bug Fixes
- Preserve method return value as flow output for @human_feedback with emit
### Documentation
- Update changelog and version for v1.12.1
- Revise security policy and reporting instructions
## Contributors
@alex-clawd, @greysonlalonde, @joaomdmoura, @theCyberTech
</Update>
<Update label="Mar 25, 2026">
## v1.12.1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.12.1)
## What's Changed
### Features
- Add request_id to HumanFeedbackRequestedEvent
- Add Qdrant Edge storage backend for memory system
- Add docs-check command to analyze changes and generate docs with translations
- Add Arabic language support to changelog and release tooling
- Add modern standard Arabic translation of all documentation
- Add logout command in CLI
- Add agent skills
- Implement automatic root_scope for hierarchical memory isolation
- Implement native OpenAI-compatible providers (OpenRouter, DeepSeek, Ollama, vLLM, Cerebras, Dashscope)
### Bug Fixes
- Fix bad credentials for traces batch push (404)
- Resolve multiple bugs in HITL flow system
- Fix agent memory saving
- Resolve all strict mypy errors across crewai package
- Fix use of __router_paths__ for listener+router methods in FlowMeta
- Fix value error on no file support
- Correct litellm quarantine wording in docs
- Fix all mypy errors in crewai-files and add all packages to CI type checks
- Pin litellm upper bound to last tested version (1.82.6)
### Documentation
- Update changelog and version for v1.12.0
- Add CONTRIBUTING.md
- Add guide for using CrewAI without LiteLLM
## Contributors
@akaKuruma, @alex-clawd, @greysonlalonde, @iris-clawd, @joaomdmoura, @lorenzejay, @lucasgomide, @nicoferdi96
</Update>
<Update label="Mar 25, 2026">
## v1.12.0

View File

@@ -0,0 +1,147 @@
---
title: "Agent Capabilities"
description: "Understand the five ways to extend CrewAI agents: Tools, MCPs, Apps, Skills, and Knowledge."
icon: puzzle-piece
mode: "wide"
---
## Overview
CrewAI agents can be extended with **five distinct capability types**, each serving a different purpose. Understanding when to use each one — and how they work together — is key to building effective agents.
<CardGroup cols={2}>
<Card title="Tools" icon="wrench" href="/en/concepts/tools" color="#3B82F6">
**Callable functions** — give agents the ability to take action. Web searches, file operations, API calls, code execution.
</Card>
<Card title="MCP Servers" icon="plug" href="/en/mcp/overview" color="#8B5CF6">
**Remote tool servers** — connect agents to external tool servers via the Model Context Protocol. Same effect as tools, but hosted externally.
</Card>
<Card title="Apps" icon="grid-2" color="#EC4899">
**Platform integrations** — connect agents to SaaS apps (Gmail, Slack, Jira, Salesforce) via CrewAI's platform. Runs locally with a platform integration token.
</Card>
<Card title="Skills" icon="bolt" href="/en/concepts/skills" color="#F59E0B">
**Domain expertise** — inject instructions, guidelines, and reference material into agent prompts. Skills tell agents *how to think*.
</Card>
<Card title="Knowledge" icon="book" href="/en/concepts/knowledge" color="#10B981">
**Retrieved facts** — provide agents with data from documents, files, and URLs via semantic search (RAG). Knowledge gives agents *what to know*.
</Card>
</CardGroup>
---
## The Key Distinction
The most important thing to understand: **these capabilities fall into two categories**.
### Action Capabilities (Tools, MCPs, Apps)
These give agents the ability to **do things** — call APIs, read files, search the web, send emails. At execution time, all three resolve into the same internal format (`BaseTool` instances) and appear in a unified tool list the agent can call.
```python
from crewai import Agent
from crewai_tools import SerperDevTool, FileReadTool
agent = Agent(
    role="Researcher",
    goal="Find and compile market data",
    backstory="Expert market analyst",
    tools=[SerperDevTool(), FileReadTool()],  # Local tools
    mcps=["https://mcp.example.com/sse"],     # Remote MCP server tools
    apps=["gmail", "google_sheets"],          # Platform integrations
)
```
### Context Capabilities (Skills, Knowledge)
These modify the agent's **prompt** — injecting expertise, instructions, or retrieved data before the agent starts reasoning. They don't give agents new actions; they shape how agents think and what information they have access to.
```python
from crewai import Agent
agent = Agent(
    role="Security Auditor",
    goal="Audit cloud infrastructure for vulnerabilities",
    backstory="Expert in cloud security with 10 years of experience",
    skills=["./skills/security-audit"],          # Domain instructions
    knowledge_sources=[pdf_source, url_source],  # Retrieved facts
)
```
---
## When to Use What
| You need... | Use | Example |
| :------------------------------------------------ | :---------------- | :--------------------------------------- |
| Agent to search the web | **Tools** | `tools=[SerperDevTool()]` |
| Agent to call a remote API via MCP | **MCPs** | `mcps=["https://api.example.com/sse"]` |
| Agent to send emails via Gmail | **Apps** | `apps=["gmail"]` |
| Agent to follow specific procedures | **Skills** | `skills=["./skills/code-review"]` |
| Agent to reference company docs | **Knowledge** | `knowledge_sources=[pdf_source]` |
| Agent to search the web AND follow review guidelines | **Tools + Skills** | Use both together |
---
## Combining Capabilities
In practice, agents often use **multiple capability types together**. Here's a realistic example:
```python
from crewai import Agent
from crewai_tools import SerperDevTool, FileReadTool, CodeInterpreterTool
# A fully-equipped research agent
researcher = Agent(
    role="Senior Research Analyst",
    goal="Produce comprehensive market analysis reports",
    backstory="Expert analyst with deep industry knowledge",
    # ACTION: What the agent can DO
    tools=[
        SerperDevTool(),        # Search the web
        FileReadTool(),         # Read local files
        CodeInterpreterTool(),  # Run Python code for analysis
    ],
    mcps=["https://data-api.example.com/sse"],  # Access remote data API
    apps=["google_sheets"],                     # Write to Google Sheets
    # CONTEXT: What the agent KNOWS
    skills=["./skills/research-methodology"],   # How to conduct research
    knowledge_sources=[company_docs],           # Company-specific data
)
```
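To see how such an agent is actually exercised, here is a minimal sketch of running it inside a crew. It assumes the standard `Task`/`Crew` API and that `company_docs` has been defined as a knowledge source beforehand; the task text is illustrative.
```python
from crewai import Crew, Task

report_task = Task(
    description="Analyze the EV market and summarize key trends",
    expected_output="A one-page market summary with sources",
    agent=researcher,  # the fully-equipped agent defined above
)

crew = Crew(agents=[researcher], tasks=[report_task])
result = crew.kickoff()
print(result)
```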
---
## Comparison Table
| Feature | Tools | MCPs | Apps | Skills | Knowledge |
| :--- | :---: | :---: | :---: | :---: | :---: |
| **Gives agent actions** | ✅ | ✅ | ✅ | ❌ | ❌ |
| **Modifies prompt** | ❌ | ❌ | ❌ | ✅ | ✅ |
| **Requires code** | Yes | Config only | Config only | Markdown only | Config only |
| **Runs locally** | Yes | Depends | Yes (with env var) | N/A | Yes |
| **Needs API keys** | Per tool | Per server | Integration token | No | Embedder only |
| **Set on Agent** | `tools=[]` | `mcps=[]` | `apps=[]` | `skills=[]` | `knowledge_sources=[]` |
| **Set on Crew** | ❌ | ❌ | ❌ | `skills=[]` | `knowledge_sources=[]` |
---
## Deep Dives
Ready to learn more about each capability type?
<CardGroup cols={2}>
<Card title="Tools" icon="wrench" href="/en/concepts/tools">
Create custom tools, browse the catalog of 75+ open-source tools, and configure caching and async execution.
</Card>
<Card title="MCP Integration" icon="plug" href="/en/mcp/overview">
Connect to MCP servers via stdio, SSE, or HTTP. Filter tools, configure auth.
</Card>
<Card title="Skills" icon="bolt" href="/en/concepts/skills">
Build skill packages with SKILL.md, inject domain expertise, use progressive disclosure.
</Card>
<Card title="Knowledge" icon="book" href="/en/concepts/knowledge">
Add knowledge from PDFs, CSVs, URLs, and more. Configure embedders and retrieval.
</Card>
</CardGroup>

View File

@@ -1,27 +1,186 @@
---
title: Skills
description: Filesystem-based skill packages that inject context into agent prompts.
description: Filesystem-based skill packages that inject domain expertise and instructions into agent prompts.
icon: bolt
mode: "wide"
---
## Overview
Skills are self-contained directories that provide agents with domain-specific instructions, references, and assets. Each skill is defined by a `SKILL.md` file with YAML frontmatter and a markdown body.
Skills are self-contained directories that provide agents with **domain-specific instructions, guidelines, and reference material**. Each skill is defined by a `SKILL.md` file with YAML frontmatter and a markdown body.
Skills use **progressive disclosure** — metadata is loaded first, full instructions only when activated, and resource catalogs only when needed.
When activated, a skill's instructions are injected directly into the agent's task prompt — giving the agent expertise without requiring any code changes.
## Directory Structure
<Note type="info" title="Skills vs Tools — The Key Distinction">
**Skills are NOT tools.** This is the most common point of confusion.
- **Skills** inject *instructions and context* into the agent's prompt. They tell the agent *how to think* about a problem.
- **Tools** give the agent *callable functions* to take action (search, read files, call APIs).
You often need **both**: skills for expertise, tools for action. They are configured independently and complement each other.
</Note>
---
## Quick Start
### 1. Create a Skill Directory
```
my-skill/
├── SKILL.md # Required — frontmatter + instructions
├── scripts/ # Optional — executable scripts
├── references/ # Optional — reference documents
└── assets/ # Optional — static files (configs, data)
skills/
└── code-review/
    ├── SKILL.md # Required — instructions
    ├── references/ # Optional — reference docs
    │   └── style-guide.md
    └── scripts/ # Optional — executable scripts
```
The directory name must match the `name` field in `SKILL.md`.
### 2. Write Your SKILL.md
```markdown
---
name: code-review
description: Guidelines for conducting thorough code reviews with focus on security and performance.
metadata:
  author: your-team
  version: "1.0"
---
## Code Review Guidelines
When reviewing code, follow this checklist:
1. **Security**: Check for injection vulnerabilities, auth bypasses, and data exposure
2. **Performance**: Look for N+1 queries, unnecessary allocations, and blocking calls
3. **Readability**: Ensure clear naming, appropriate comments, and consistent style
4. **Testing**: Verify adequate test coverage for new functionality
### Severity Levels
- **Critical**: Security vulnerabilities, data loss risks → block merge
- **Major**: Performance issues, logic errors → request changes
- **Minor**: Style issues, naming suggestions → approve with comments
```
### 3. Attach to an Agent
```python
from crewai import Agent
from crewai_tools import GithubSearchTool, FileReadTool
reviewer = Agent(
    role="Senior Code Reviewer",
    goal="Review pull requests for quality and security issues",
    backstory="Staff engineer with expertise in secure coding practices.",
    skills=["./skills"],                         # Injects review guidelines
    tools=[GithubSearchTool(), FileReadTool()],  # Lets agent read code
)
```
The agent now has both **expertise** (from the skill) and **capabilities** (from the tools).
---
## Skills + Tools: Working Together
Here are common patterns showing how skills and tools complement each other:
### Pattern 1: Skills Only (Domain Expertise, No Actions Needed)
Use when the agent needs specific instructions but doesn't need to call external services:
```python
agent = Agent(
    role="Technical Writer",
    goal="Write clear API documentation",
    backstory="Expert technical writer",
    skills=["./skills/api-docs-style"],  # Writing guidelines and templates
    # No tools needed — agent writes based on provided context
)
```
### Pattern 2: Tools Only (Actions, No Special Expertise)
Use when the agent needs to take action but doesn't need domain-specific instructions:
```python
from crewai_tools import SerperDevTool, ScrapeWebsiteTool
agent = Agent(
    role="Web Researcher",
    goal="Find information about a topic",
    backstory="Skilled at finding information online",
    tools=[SerperDevTool(), ScrapeWebsiteTool()],  # Can search and scrape
    # No skills needed — general research doesn't need special guidelines
)
```
### Pattern 3: Skills + Tools (Expertise AND Actions)
The most common real-world pattern. The skill provides *how* to approach the work; tools provide *what* the agent can do:
```python
from crewai_tools import SerperDevTool, FileReadTool, CodeInterpreterTool
analyst = Agent(
    role="Security Analyst",
    goal="Audit infrastructure for vulnerabilities",
    backstory="Expert in cloud security and compliance",
    skills=["./skills/security-audit"],  # Audit methodology and checklists
    tools=[
        SerperDevTool(),        # Research known vulnerabilities
        FileReadTool(),         # Read config files
        CodeInterpreterTool(),  # Run analysis scripts
    ],
)
```
### Pattern 4: Skills + MCPs
Skills work alongside MCP servers the same way they work with tools:
```python
agent = Agent(
    role="Data Analyst",
    goal="Analyze customer data and generate reports",
    backstory="Expert data analyst with strong statistical background",
    skills=["./skills/data-analysis"],              # Analysis methodology
    mcps=["https://data-warehouse.example.com/sse"],  # Remote data access
)
```
### Pattern 5: Skills + Apps
Skills can guide how an agent uses platform integrations:
```python
agent = Agent(
    role="Customer Support Agent",
    goal="Respond to customer inquiries professionally",
    backstory="Experienced support representative",
    skills=["./skills/support-playbook"],  # Response templates and escalation rules
    apps=["gmail", "zendesk"],             # Can send emails and update tickets
)
```
---
## Crew-Level Skills
Skills can be set on a crew to apply to **all agents**:
```python
from crewai import Crew
crew = Crew(
    agents=[researcher, writer, reviewer],
    tasks=[research_task, write_task, review_task],
    skills=["./skills"],  # All agents get these skills
)
```
Agent-level skills take priority — if the same skill is discovered at both levels, the agent's version is used.
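As a mental model, the precedence rule behaves like a name-keyed merge in which agent entries win. The following is an illustrative sketch of the documented behavior, not the library's actual implementation; `skill.name` stands in for whatever object `discover_skills()` returns.
```python
def merge_skills(agent_skills, crew_skills):
    # Agent skills are processed first, so a crew skill with the
    # same name never overwrites the agent's version.
    merged = {}
    for skill in [*agent_skills, *crew_skills]:
        merged.setdefault(skill.name, skill)
    return list(merged.values())
```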
---
## SKILL.md Format
@@ -34,7 +193,7 @@ compatibility: crewai>=0.1.0 # optional
metadata: # optional
  author: your-name
  version: "1.0"
allowed-tools: web-search file-read # optional, space-delimited
allowed-tools: web-search file-read # optional, experimental
---
Instructions for the agent go here. This markdown body is injected
@@ -43,57 +202,46 @@ into the agent's prompt when the skill is activated.
### Frontmatter Fields
| Field | Required | Constraints |
| Field | Required | Description |
| :-------------- | :------- | :----------------------------------------------------------------------- |
| `name` | Yes | 1–64 chars. Lowercase alphanumeric and hyphens. No leading/trailing/consecutive hyphens. Must match directory name. |
| `name` | Yes | 1–64 chars. Lowercase alphanumeric and hyphens. Must match directory name. |
| `description` | Yes | 1–1024 chars. Describes what the skill does and when to use it. |
| `license` | No | License name or reference to a bundled license file. |
| `compatibility` | No | Max 500 chars. Environment requirements (products, packages, network). |
| `metadata` | No | Arbitrary string key-value mapping. |
| `allowed-tools` | No | Space-delimited list of pre-approved tools. Experimental. |
## Usage
### Agent-level Skills
Pass skill directory paths to an agent:
```python
from crewai import Agent

agent = Agent(
    role="Researcher",
    goal="Find relevant information",
    backstory="An expert researcher.",
    skills=["./skills"],  # discovers all skills in this directory
)
```
### Crew-level Skills
Skill paths on a crew are merged into every agent:
```python
from crewai import Crew

crew = Crew(
    agents=[agent],
    tasks=[task],
    skills=["./skills"],
)
```
### Pre-loaded Skills
You can also pass `Skill` objects directly:
---
## Directory Structure
```
my-skill/
├── SKILL.md       # Required — frontmatter + instructions
├── scripts/       # Optional — executable scripts
├── references/    # Optional — reference documents
└── assets/        # Optional — static files (configs, data)
```
The directory name must match the `name` field in `SKILL.md`. The `scripts/`, `references/`, and `assets/` directories are available on the skill's `path` for agents that need to reference files directly.
---
## Pre-loading Skills
For more control, you can discover and activate skills programmatically:
```python
from pathlib import Path
from crewai.skills import discover_skills, activate_skill

# Discover all skills in a directory
skills = discover_skills(Path("./skills"))

# Activate them (loads full SKILL.md body)
activated = [activate_skill(s) for s in skills]

# Pass to an agent
agent = Agent(
    role="Researcher",
    goal="Find relevant information",
@@ -102,14 +250,57 @@ agent = Agent(
)
```
---
## How Skills Are Loaded
Skills load progressively — only the data needed at each stage is read:
Skills use **progressive disclosure** — only loading what's needed at each stage:
| Stage | What's loaded | When |
| :--------- | :------------------------------------ | :------------------ |
| Discovery | Name, description, frontmatter fields | `discover_skills()` |
| Activation | Full SKILL.md body text | `activate_skill()` |
During normal agent execution, skills are automatically discovered and activated. The `scripts/`, `references/`, and `assets/` directories are available on the skill's `path` for agents that need to reference files directly.
During normal agent execution (passing directory paths via `skills=["./skills"]`), skills are automatically discovered and activated. The progressive loading only matters when using the programmatic API.
---
## Skills vs Knowledge
Both skills and knowledge modify the agent's prompt, but they serve different purposes:
| Aspect | Skills | Knowledge |
| :--- | :--- | :--- |
| **What it provides** | Instructions, procedures, guidelines | Facts, data, information |
| **How it's stored** | Markdown files (SKILL.md) | Embedded in vector store (ChromaDB) |
| **How it's retrieved** | Entire body injected into prompt | Semantic search finds relevant chunks |
| **Best for** | Methodology, checklists, style guides | Company docs, product info, reference data |
| **Set via** | `skills=["./skills"]` | `knowledge_sources=[source]` |
**Rule of thumb:** If the agent needs to follow a *process*, use a skill. If the agent needs to reference *data*, use knowledge.
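Applied to a concrete agent, the rule of thumb looks like this. A sketch that assumes `policy_pdf_source` is a knowledge source defined elsewhere; the role and paths are illustrative:
```python
from crewai import Agent

auditor = Agent(
    role="Compliance Auditor",
    goal="Check quarterly reports against company policy",
    backstory="Detail-oriented compliance specialist",
    skills=["./skills/audit-process"],      # process: how to run the audit
    knowledge_sources=[policy_pdf_source],  # data: what the policies say
)
```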
---
## Common Questions
<AccordionGroup>
<Accordion title="Do I need to set skills AND tools?">
It depends on your use case. Skills and tools are **independent** — you can use either, both, or neither.
- **Skills alone**: When the agent needs expertise but no external actions (e.g., writing with style guidelines)
- **Tools alone**: When the agent needs actions but no special methodology (e.g., simple web search)
- **Both**: When the agent needs expertise AND actions (e.g., security audit with specific checklists AND ability to scan code)
</Accordion>
<Accordion title="Do skills automatically provide tools?">
**No.** The `allowed-tools` field in SKILL.md is experimental metadata only — it does not provision or inject any tools. You must always set tools separately via `tools=[]`, `mcps=[]`, or `apps=[]`.
</Accordion>
<Accordion title="What happens if I set the same skill on both an agent and its crew?">
The agent-level skill takes priority. Skills are deduplicated by name — the agent's skills are processed first, so if the same skill name appears at both levels, the agent's version is used.
</Accordion>
<Accordion title="How large can a SKILL.md body be?">
There's a soft warning at 50,000 characters, but no hard limit. Keep skills focused and concise for best results — large prompt injections can dilute the agent's attention.
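If you want to check this locally before shipping a skill, a quick script like the following works. It is a sketch that naively splits off the YAML frontmatter and assumes the documented 50,000-character threshold:
```python
from pathlib import Path

text = Path("skills/code-review/SKILL.md").read_text()
body = text.split("---", 2)[-1]  # naive frontmatter split
if len(body) > 50_000:
    print(f"Warning: skill body is {len(body):,} chars; consider splitting it")
```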
</Accordion>
</AccordionGroup>

View File

@@ -10,6 +10,10 @@ mode: "wide"
CrewAI tools empower agents with capabilities ranging from web searching and data analysis to collaboration and delegating tasks among coworkers.
This documentation outlines how to create, integrate, and leverage these tools within the CrewAI framework, including a new focus on collaboration tools.
<Note type="info" title="Tools are one of five agent capability types">
Tools give agents **callable functions** to take action. They work alongside [MCPs](/en/mcp/overview) (remote tool servers), [Apps](/en/concepts/agent-capabilities) (platform integrations), [Skills](/en/concepts/skills) (domain expertise), and [Knowledge](/en/concepts/knowledge) (retrieved facts). See the [Agent Capabilities](/en/concepts/agent-capabilities) overview to understand when to use each.
</Note>
## What is a Tool?
A tool in CrewAI is a skill or function that agents can utilize to perform various actions.

View File

@@ -7,11 +7,13 @@ mode: "wide"
## Overview
RBAC in CrewAI AMP enables secure, scalable access management through a combination of organization-level roles and automation-level visibility controls.
RBAC in CrewAI AMP enables secure, scalable access management through two layers:
1. **Feature permissions** — control what each role can do across the platform (manage, read, or no access)
2. **Entity-level permissions** — fine-grained access on individual automations, environment variables, LLM connections, and Git repositories
<Frame>
<img src="/images/enterprise/users_and_roles.png" alt="RBAC overview in CrewAI AMP" />
</Frame>
## Users and Roles
@@ -39,6 +41,13 @@ You can configure users and roles in Settings → Roles.
</Step>
</Steps>
### Predefined Roles
| Role | Description |
| :--------- | :-------------------------------------------------------------------------- |
| **Owner** | Full access to all features and settings. Cannot be restricted. |
| **Member** | Read access to most features, manage access to environment variables, LLM connections, and Studio projects. Cannot modify organization or default settings. |
### Configuration summary
| Area | Where to configure | Options |
@@ -46,23 +55,80 @@ You can configure users and roles in Settings → Roles.
| Users & Roles | Settings → Roles | Predefined: Owner, Member; Custom roles |
| Automation visibility | Automation → Settings → Visibility | Private; Whitelist users/roles |
## Automation-level Access Control
In addition to organization-wide roles, CrewAI Automations support fine-grained visibility settings that let you restrict access to specific automations by user or role.
This is useful for:
---
## Feature Permissions Matrix
Every role has a permission level for each feature area. The three levels are:
- **Manage** — full read/write access (create, edit, delete)
- **Read** — view-only access
- **No access** — feature is hidden/inaccessible
| Feature | Owner | Member (default) | Available levels | Description |
| :------------------------ | :------ | :--------------- | :------------------------ | :-------------------------------------------------------------- |
| `usage_dashboards` | Manage | Read | Manage / Read / No access | View usage metrics and analytics |
| `crews_dashboards` | Manage | Read | Manage / Read / No access | View deployment dashboards, access automation details |
| `invitations` | Manage | Read | Manage / Read / No access | Invite new members to the organization |
| `training_ui` | Manage | Read | Manage / Read / No access | Access training/fine-tuning interfaces |
| `tools` | Manage | Read | Manage / Read / No access | Create and manage tools |
| `agents` | Manage | Read | Manage / Read / No access | Create and manage agents |
| `environment_variables` | Manage | Manage | Manage / No access | Create and manage environment variables |
| `llm_connections` | Manage | Manage | Manage / No access | Configure LLM provider connections |
| `default_settings` | Manage | No access | Manage / No access | Modify organization-wide default settings |
| `organization_settings` | Manage | No access | Manage / No access | Manage billing, plans, and organization configuration |
| `studio_projects` | Manage | Manage | Manage / No access | Create and edit projects in Studio |
<Tip>
When creating a custom role, most features can be set to **Manage**, **Read**, or **No access**. However, `environment_variables`, `llm_connections`, `default_settings`, `organization_settings`, and `studio_projects` only support **Manage** or **No access** — there is no read-only option for these features.
</Tip>
---
## Deploying from GitHub or Zip
One of the most common RBAC questions is: _"What permissions does a team member need to deploy?"_
### Deploy from GitHub
To deploy an automation from a GitHub repository, a user needs:
1. **`crews_dashboards`**: at least `Read` — required to access the automations dashboard where deployments are created
2. **Git repository access** (if entity-level RBAC for Git repositories is enabled): the user's role must be granted access to the specific Git repository via entity-level permissions
3. **`studio_projects`: `Manage`** — if building the crew in Studio before deploying
### Deploy from Zip
To deploy an automation from a Zip file upload, a user needs:
1. **`crews_dashboards`**: at least `Read` — required to access the automations dashboard
2. **Zip deployments enabled**: the organization must not have disabled zip deployments in organization settings
### Quick Reference: Minimum Permissions for Deployment
| Action | Required feature permissions | Additional requirements |
| :------------------- | :------------------------------------ | :----------------------------------------------- |
| Deploy from GitHub | `crews_dashboards: Read` | Git repo entity access (if Git RBAC is enabled) |
| Deploy from Zip | `crews_dashboards: Read` | Zip deployments must be enabled at the org level |
| Build in Studio | `studio_projects: Manage` | — |
| Configure LLM keys | `llm_connections: Manage` | — |
| Set environment vars | `environment_variables: Manage` | Entity-level access (if entity RBAC is enabled) |
---
## Automation-level Access Control (Entity Permissions)
In addition to organization-wide roles, CrewAI supports fine-grained entity-level permissions that restrict access to individual resources.
### Automation Visibility
Automations support visibility settings that restrict access by user or role. This is useful for:
- Keeping sensitive or experimental automations private
- Managing visibility across large teams or external collaborators
- Testing automations in isolated contexts
Deployments can be configured as private, meaning only whitelisted users and roles will be able to:
- View the deployment
- Run it or interact with its API
- Access its logs, metrics, and settings
The organization owner always has access, regardless of visibility settings.
Deployments can be configured as private, meaning only whitelisted users and roles will be able to interact with them.
You can configure automation-level access control in the Automation → Settings → Visibility tab.
@@ -99,9 +165,92 @@ You can configure automation-level access control in Automation → Settings
<Frame>
<img src="/images/enterprise/visibility.png" alt="Automation Visibility settings in CrewAI AMP" />
</Frame>
### Deployment Permission Types
When granting entity-level access to a specific automation, you can assign these permission types:
| Permission | What it allows |
| :------------------- | :-------------------------------------------------- |
| `run` | Execute the automation and use its API |
| `traces` | View execution traces and logs |
| `manage_settings` | Edit, redeploy, rollback, or delete the automation |
| `human_in_the_loop` | Respond to human-in-the-loop (HITL) requests |
| `full_access` | All of the above |
### Entity-level RBAC for Other Resources
When entity-level RBAC is enabled, access to these resources can also be controlled per user or role:
| Resource | Controlled by | Description |
| :--------------------- | :------------------------------- | :---------------------------------------------------- |
| Environment variables | Entity RBAC feature flag | Restrict which roles/users can view or manage specific env vars |
| LLM connections | Entity RBAC feature flag | Restrict access to specific LLM provider configurations |
| Git repositories | Git repositories RBAC org setting | Restrict which roles/users can access specific connected repos |
---
## Common Role Patterns
While CrewAI ships with Owner and Member roles, most teams benefit from creating custom roles. Here are common patterns:
### Developer Role
A role for team members who build and deploy automations but don't manage organization settings.
| Feature | Permission |
| :------------------------ | :--------- |
| `usage_dashboards` | Read |
| `crews_dashboards` | Manage |
| `invitations` | Read |
| `training_ui` | Read |
| `tools` | Manage |
| `agents` | Manage |
| `environment_variables` | Manage |
| `llm_connections` | Manage |
| `default_settings` | No access |
| `organization_settings` | No access |
| `studio_projects` | Manage |
### Viewer / Stakeholder Role
A role for non-technical stakeholders who need to monitor automations and view results.
| Feature | Permission |
| :------------------------ | :--------- |
| `usage_dashboards` | Read |
| `crews_dashboards` | Read |
| `invitations` | No access |
| `training_ui` | Read |
| `tools` | Read |
| `agents` | Read |
| `environment_variables` | No access |
| `llm_connections` | No access |
| `default_settings` | No access |
| `organization_settings` | No access |
| `studio_projects` | No access |
### Ops / Platform Admin Role
A role for platform operators who manage infrastructure settings but may not build agents.
| Feature | Permission |
| :------------------------ | :--------- |
| `usage_dashboards` | Manage |
| `crews_dashboards` | Manage |
| `invitations` | Manage |
| `training_ui` | Read |
| `tools` | Read |
| `agents` | Read |
| `environment_variables` | Manage |
| `llm_connections` | Manage |
| `default_settings` | Manage |
| `organization_settings` | Read |
| `studio_projects` | No access |
---
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with RBAC questions.
</Card>

View File

@@ -0,0 +1,550 @@
---
title: Single Sign-On (SSO)
icon: "key"
description: Configure enterprise SSO authentication for CrewAI Platform — SaaS and Factory
---
## Overview
CrewAI Platform supports enterprise Single Sign-On (SSO) across both **SaaS (AMP)** and **Factory (self-hosted)** deployments. SSO enables your team to authenticate using your organization's existing identity provider, enforcing centralized access control, MFA policies, and user lifecycle management.
### Supported Providers
| Provider | SaaS | Factory | Protocol | CLI Support |
|---|---|---|---|---|
| **WorkOS** | ✅ (default) | ✅ | OAuth 2.0 / OIDC | ✅ |
| **Microsoft Entra ID** (Azure AD) | ✅ (enterprise) | ✅ | OAuth 2.0 / SAML 2.0 | ✅ |
| **Okta** | ✅ (enterprise) | ✅ | OAuth 2.0 / OIDC | ✅ |
| **Auth0** | ✅ (enterprise) | ✅ | OAuth 2.0 / OIDC | ✅ |
| **Keycloak** | — | ✅ | OAuth 2.0 / OIDC | ✅ |
### Key Capabilities
- **SAML 2.0 and OAuth 2.0 / OIDC** protocol support
- **Device Authorization Grant** flow for CLI authentication
- **Role-Based Access Control (RBAC)** with custom roles and per-resource permissions
- **MFA enforcement** delegated to your identity provider
- **User provisioning** through IdP assignment (users/groups)
---
## SaaS SSO
### Default Authentication
CrewAI's managed SaaS platform (AMP) uses **WorkOS** as the default authentication provider. When you sign up at [app.crewai.com](https://app.crewai.com), authentication is handled through `login.crewai.com` — no additional SSO configuration is required.
### Enterprise Custom SSO
Enterprise SaaS customers can configure SSO with their own identity provider (Entra ID, Okta, Auth0). Contact your CrewAI account team to enable custom SSO for your organization. Once configured:
1. Your team members authenticate through your organization's IdP
2. Access control and MFA policies are enforced by your IdP
3. The CrewAI CLI automatically detects your SSO configuration via `crewai enterprise configure`
### CLI Defaults (SaaS)
| Setting | Default Value |
|---|---|
| `enterprise_base_url` | `https://app.crewai.com` |
| `oauth2_provider` | `workos` |
| `oauth2_domain` | `login.crewai.com` |
---
## Factory SSO Setup
Factory (self-hosted) deployments require you to configure SSO by setting environment variables in your Helm `values.yaml` and registering an application in your identity provider.
### Microsoft Entra ID (Azure AD)
<Steps>
<Step title="Register an Application">
1. Go to [portal.azure.com](https://portal.azure.com) → **Microsoft Entra ID** → **App registrations** → **New registration**
2. Configure:
- **Name:** `CrewAI` (or your preferred name)
- **Supported account types:** Accounts in this organizational directory only
- **Redirect URI:** Select **Web**, enter `https://<your-domain>/auth/entra_id/callback`
3. Click **Register**
</Step>
<Step title="Collect Credentials">
From the app overview page, copy:
- **Application (client) ID** → `ENTRA_ID_CLIENT_ID`
- **Directory (tenant) ID** → `ENTRA_ID_TENANT_ID`
</Step>
<Step title="Create Client Secret">
1. Navigate to **Certificates & Secrets** → **New client secret**
2. Add a description and select expiration period
3. Copy the secret value immediately (it won't be shown again) → `ENTRA_ID_CLIENT_SECRET`
</Step>
<Step title="Grant Admin Consent">
1. Go to **Enterprise applications** → select your app
2. Under **Security** → **Permissions**, click **Grant admin consent**
3. Ensure **Microsoft Graph → User.Read** is granted
</Step>
<Step title="Configure App Roles (Recommended)">
Under **App registrations** → your app → **App roles**, create:
| Display Name | Value | Allowed Member Types |
|---|---|---|
| Member | `member` | Users/Groups |
| Factory Admin | `factory-admin` | Users/Groups |
<Note>
The `member` role grants login access. The `factory-admin` role grants admin panel access. Roles are included in the JWT automatically.
</Note>
</Step>
<Step title="Assign Users">
1. Under **Properties**, set **Assignment required?** to **Yes**
2. Under **Users and groups**, assign users/groups with the appropriate role
</Step>
<Step title="Set Environment Variables">
```yaml
envVars:
  AUTH_PROVIDER: "entra_id"
secrets:
  ENTRA_ID_CLIENT_ID: "<Application (client) ID>"
  ENTRA_ID_CLIENT_SECRET: "<Client Secret>"
  ENTRA_ID_TENANT_ID: "<Directory (tenant) ID>"
```
</Step>
<Step title="Enable CLI Support (Optional)">
To allow `crewai login` via Device Authorization Grant:
1. Under **Authentication** → **Advanced settings**, enable **Allow public client flows**
2. Under **Expose an API**, add an Application ID URI (e.g., `api://crewai-cli`)
3. Add a scope (e.g., `read`) with **Admins and users** consent
4. Under **Manifest**, set `accessTokenAcceptedVersion` to `2`
5. Add environment variables:
```yaml
secrets:
  ENTRA_ID_DEVICE_AUTHORIZATION_CLIENT_ID: "<Application (client) ID>"
  ENTRA_ID_CUSTOM_OPENID_SCOPE: "<scope URI, e.g. api://crewai-cli/read>"
```
</Step>
</Steps>
---
### Okta
<Steps>
<Step title="Create App Integration">
1. Open Okta Admin Console → **Applications** → **Create App Integration**
2. Select **OIDC - OpenID Connect** → **Web Application** → **Next**
3. Configure:
- **App integration name:** `CrewAI SSO`
- **Sign-in redirect URI:** `https://<your-domain>/auth/okta/callback`
- **Sign-out redirect URI:** `https://<your-domain>`
- **Assignments:** Choose who can access (everyone or specific groups)
4. Click **Save**
</Step>
<Step title="Collect Credentials">
From the app details page:
- **Client ID** → `OKTA_CLIENT_ID`
- **Client Secret** → `OKTA_CLIENT_SECRET`
- **Okta URL** (top-right corner, under your username) → `OKTA_SITE`
</Step>
<Step title="Configure Authorization Server">
1. Navigate to **Security** → **API**
2. Select your authorization server (default: `default`)
3. Under **Access Policies**, add a policy and rule:
- In the rule, under **Scopes requested**, select **The following scopes** → **OIDC default scopes**
4. Note the **Name** and **Audience** of the authorization server
<Warning>
The authorization server name and audience must match `OKTA_AUTHORIZATION_SERVER` and `OKTA_AUDIENCE` exactly. Mismatches cause `401 Unauthorized` or `Invalid token: Signature verification failed` errors.
</Warning>
</Step>
<Step title="Set Environment Variables">
```yaml
envVars:
  AUTH_PROVIDER: "okta"
secrets:
  OKTA_CLIENT_ID: "<Okta app client ID>"
  OKTA_CLIENT_SECRET: "<Okta client secret>"
  OKTA_SITE: "https://your-domain.okta.com"
  OKTA_AUTHORIZATION_SERVER: "default"
  OKTA_AUDIENCE: "api://default"
```
</Step>
<Step title="Enable CLI Support (Optional)">
1. Create a **new** app integration: **OIDC** → **Native Application**
2. Enable **Device Authorization** and **Refresh Token** grant types
3. Allow everyone in your organization to access
4. Add environment variable:
```yaml
secrets:
  OKTA_DEVICE_AUTHORIZATION_CLIENT_ID: "<Native app client ID>"
```
<Note>
Device Authorization requires a **Native Application** — it cannot use the Web Application created for browser-based SSO.
</Note>
</Step>
</Steps>
---
### Keycloak
<Steps>
<Step title="Create a Client">
1. Open Keycloak Admin Console → navigate to your realm
2. **Clients** → **Create client**:
- **Client type:** OpenID Connect
- **Client ID:** `crewai-factory` (suggested)
3. Capability config:
- **Client authentication:** On
- **Standard flow:** Checked
4. Login settings:
- **Root URL:** `https://<your-domain>`
- **Valid redirect URIs:** `https://<your-domain>/auth/keycloak/callback`
- **Valid post logout redirect URIs:** `https://<your-domain>`
5. Click **Save**
</Step>
<Step title="Collect Credentials">
- **Client ID** → `KEYCLOAK_CLIENT_ID`
- Under **Credentials** tab: **Client secret** → `KEYCLOAK_CLIENT_SECRET`
- **Realm name** → `KEYCLOAK_REALM`
- **Keycloak server URL** → `KEYCLOAK_SITE`
</Step>
<Step title="Set Environment Variables">
```yaml
envVars:
  AUTH_PROVIDER: "keycloak"
secrets:
  KEYCLOAK_CLIENT_ID: "<client ID>"
  KEYCLOAK_CLIENT_SECRET: "<client secret>"
  KEYCLOAK_SITE: "https://keycloak.yourdomain.com"
  KEYCLOAK_REALM: "<realm name>"
  KEYCLOAK_AUDIENCE: "account"
  # Only set if using a custom base path (pre-v17 migrations):
  # KEYCLOAK_BASE_URL: "/auth"
```
<Note>
Keycloak includes `account` as the default audience in access tokens. For most installations, `KEYCLOAK_AUDIENCE=account` works without additional configuration. See [Keycloak audience documentation](https://www.keycloak.org/docs/latest/authorization_services/index.html) if you need a custom audience.
</Note>
</Step>
<Step title="Enable CLI Support (Optional)">
1. Create a **second** client:
- **Client type:** OpenID Connect
- **Client ID:** `crewai-factory-cli` (suggested)
- **Client authentication:** Off (Device Authorization requires a public client)
- **Authentication flow:** Check **only** OAuth 2.0 Device Authorization Grant
2. Add environment variable:
```yaml
secrets:
  KEYCLOAK_DEVICE_AUTHORIZATION_CLIENT_ID: "<CLI client ID>"
```
</Step>
</Steps>
---
### WorkOS
<Steps>
<Step title="Configure in WorkOS Dashboard">
1. Create an application in the [WorkOS Dashboard](https://dashboard.workos.com)
2. Configure the redirect URI: `https://<your-domain>/auth/workos/callback`
3. Note the **Client ID** and **AuthKit domain**
4. Set up organizations in the WorkOS dashboard
</Step>
<Step title="Set Environment Variables">
```yaml
envVars:
  AUTH_PROVIDER: "workos"
secrets:
  WORKOS_CLIENT_ID: "<WorkOS client ID>"
  WORKOS_AUTHKIT_DOMAIN: "<your-authkit-domain.authkit.com>"
```
</Step>
</Steps>
---
### Auth0
<Steps>
<Step title="Create Application">
1. In the [Auth0 Dashboard](https://manage.auth0.com), create a new **Regular Web Application**
2. Configure:
- **Allowed Callback URLs:** `https://<your-domain>/auth/auth0/callback`
- **Allowed Logout URLs:** `https://<your-domain>`
3. Note the **Domain**, **Client ID**, and **Client Secret**
</Step>
<Step title="Set Environment Variables">
```yaml
envVars:
  AUTH_PROVIDER: "auth0"
secrets:
  AUTH0_CLIENT_ID: "<Auth0 client ID>"
  AUTH0_CLIENT_SECRET: "<Auth0 client secret>"
  AUTH0_DOMAIN: "<your-tenant.auth0.com>"
```
</Step>
<Step title="Enable CLI Support (Optional)">
1. Create a **Native** application in Auth0 for Device Authorization
2. Enable the **Device Authorization** grant type under application settings
3. Configure the CLI with the appropriate audience and client ID
</Step>
</Steps>
---
## CLI Authentication
The CrewAI CLI supports SSO authentication via the **Device Authorization Grant** flow. This allows developers to authenticate from their terminal without exposing credentials.
### Quick Setup
For Factory installations, the CLI can auto-configure all OAuth2 settings:
```bash
crewai enterprise configure https://your-factory-url.app
```
This command fetches the SSO configuration from your Factory instance and sets all required CLI parameters automatically.
Then authenticate:
```bash
crewai login
```
<Note>
Requires CrewAI CLI version **1.6.0** or higher for Entra ID, **0.159.0** or higher for Okta, and **1.9.0** or higher for Keycloak.
</Note>
### Manual CLI Configuration
If you need to configure the CLI manually, use `crewai config set`:
```bash
# Set the provider
crewai config set oauth2_provider okta
# Set provider-specific values
crewai config set oauth2_domain your-domain.okta.com
crewai config set oauth2_client_id your-client-id
crewai config set oauth2_audience api://default
# Set the enterprise base URL
crewai config set enterprise_base_url https://your-factory-url.app
```
### CLI Configuration Reference
| Setting | Description | Example |
|---|---|---|
| `enterprise_base_url` | Your CrewAI instance URL | `https://crewai.yourcompany.com` |
| `oauth2_provider` | Provider name | `workos`, `okta`, `auth0`, `entra_id`, `keycloak` |
| `oauth2_domain` | Provider domain | `your-domain.okta.com` |
| `oauth2_client_id` | OAuth2 client ID | `0oaqnwji7pGW7VT6T697` |
| `oauth2_audience` | API audience identifier | `api://default` |
View current configuration:
```bash
crewai config list
```
### How Device Authorization Works
1. Run `crewai login` — the CLI requests a device code from your IdP
2. A verification URL and code are displayed in your terminal
3. Your browser opens to the verification URL
4. Enter the code and authenticate with your IdP credentials
5. The CLI receives an access token and stores it locally
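For the curious, these steps map onto the standard OAuth 2.0 Device Authorization Grant (RFC 8628). The sketch below shows the generic protocol, not the CLI's internals; the issuer URL, endpoint paths, and client ID are illustrative Okta-style placeholders.
```python
import time
import requests

ISSUER = "https://your-domain.okta.com/oauth2/default"  # placeholder issuer
CLIENT_ID = "<device-authorization-client-id>"          # placeholder client

# Steps 1-2: request a device code; show the verification URL and user code.
grant = requests.post(
    f"{ISSUER}/v1/device/authorize",
    data={"client_id": CLIENT_ID, "scope": "openid profile offline_access"},
).json()
print(f"Visit {grant['verification_uri']} and enter code {grant['user_code']}")

# Steps 3-5: poll the token endpoint until the user approves in the browser.
while True:
    time.sleep(grant.get("interval", 5))
    resp = requests.post(
        f"{ISSUER}/v1/token",
        data={
            "client_id": CLIENT_ID,
            "device_code": grant["device_code"],
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        },
    )
    if resp.ok:
        token = resp.json()["access_token"]  # the CLI stores this locally
        break
    if resp.json().get("error") != "authorization_pending":
        raise RuntimeError(resp.json().get("error_description", "device flow failed"))
```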
---
## Role-Based Access Control (RBAC)
CrewAI Platform provides granular RBAC that integrates with your SSO provider.
### Permission Model
| Permission | Description |
|---|---|
| **Read** | View resources (dashboards, automations, logs) |
| **Write** | Create and modify resources |
| **Manage** | Full control including deletion and configuration |
### Resources
Permissions can be scoped to individual resources:
- **Usage Dashboard** — Platform usage metrics and analytics
- **Automations Dashboard** — Crew and flow management
- **Environment Variables** — Secret and configuration management
- **Individual Automations** — Per-automation access control
### Roles
- **Predefined roles** come out of the box with standard permission sets
- **Custom roles** can be created with any combination of permissions
- **Per-resource assignment** — limit specific automations to individual users or roles
### Factory Admin Access
For Factory deployments using Entra ID, admin access is controlled via App Roles:
- Assign the `factory-admin` role to users who need admin panel access
- Assign the `member` role for standard platform access
- Roles are communicated via JWT claims — no additional configuration needed after IdP setup
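Because the roles arrive as JWT claims, you can verify an assignment by decoding a token. A minimal sketch using PyJWT; signature verification is skipped because this is for local inspection only:
```python
import jwt  # pip install PyJWT

access_token = "<paste a token obtained from your IdP>"  # illustrative
claims = jwt.decode(access_token, options={"verify_signature": False})
print(claims.get("roles", []))  # e.g. ["member"] or ["factory-admin"]
```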
---
## Troubleshooting
### Invalid Redirect URI
**Symptom:** Authentication fails with a redirect URI mismatch error.
**Fix:** Ensure the redirect URI in your IdP exactly matches the expected callback URL:
| Provider | Callback URL |
|---|---|
| Entra ID | `https://<domain>/auth/entra_id/callback` |
| Okta | `https://<domain>/auth/okta/callback` |
| Keycloak | `https://<domain>/auth/keycloak/callback` |
| WorkOS | `https://<domain>/auth/workos/callback` |
| Auth0 | `https://<domain>/auth/auth0/callback` |
### CLI Login Fails (Device Authorization)
**Symptom:** `crewai login` returns an error or times out.
**Fix:**
- Verify that Device Authorization Grant is enabled in your IdP
- For Okta: ensure you have a **Native Application** (not Web) with Device Authorization grant
- For Entra ID: ensure **Allow public client flows** is enabled
- For Keycloak: ensure the CLI client has **Client authentication: Off** and only Device Authorization Grant enabled
- Check that `*_DEVICE_AUTHORIZATION_CLIENT_ID` environment variable is set on the server
### Token Validation Errors
**Symptom:** `Invalid token: Signature verification failed` or `401 Unauthorized` after login.
**Fix:**
- **Okta:** Verify `OKTA_AUTHORIZATION_SERVER` and `OKTA_AUDIENCE` match the authorization server's Name and Audience exactly
- **Entra ID:** Ensure `accessTokenAcceptedVersion` is set to `2` in the app manifest
- **Keycloak:** Verify `KEYCLOAK_AUDIENCE` matches the audience in your access tokens (default: `account`)
### Admin Consent Not Granted (Entra ID)
**Symptom:** Users can't log in, see "needs admin approval" message.
**Fix:** Go to **Enterprise applications** → your app → **Permissions** → **Grant admin consent**. Ensure `User.Read` is granted for Microsoft Graph.
### 403 Forbidden After Login
**Symptom:** User authenticates successfully but gets 403 errors.
**Fix:**
- Check that the user is assigned to the application in your IdP
- For Entra ID with **Assignment required = Yes**: ensure the user has a role assignment (Member or Factory Admin)
- For Okta: verify the user or their group is assigned under the app's **Assignments** tab
### CLI Can't Reach Factory Instance
**Symptom:** `crewai enterprise configure` fails to connect.
**Fix:**
- Verify the Factory URL is reachable from your machine
- Check that `enterprise_base_url` is set correctly: `crewai config list`
- Ensure TLS certificates are valid and trusted
---
## Environment Variables Reference
### Common
| Variable | Description |
|---|---|
| `AUTH_PROVIDER` | Authentication provider: `entra_id`, `okta`, `workos`, `auth0`, `keycloak`, `local` |
### Microsoft Entra ID
| Variable | Required | Description |
|---|---|---|
| `ENTRA_ID_CLIENT_ID` | ✅ | Application (client) ID from Azure |
| `ENTRA_ID_CLIENT_SECRET` | ✅ | Client secret from Azure |
| `ENTRA_ID_TENANT_ID` | ✅ | Directory (tenant) ID from Azure |
| `ENTRA_ID_DEVICE_AUTHORIZATION_CLIENT_ID` | CLI only | Client ID for Device Authorization Grant |
| `ENTRA_ID_CUSTOM_OPENID_SCOPE` | CLI only | Custom scope from "Expose an API" (e.g., `api://crewai-cli/read`) |
### Okta
| Variable | Required | Description |
|---|---|---|
| `OKTA_CLIENT_ID` | ✅ | Okta application client ID |
| `OKTA_CLIENT_SECRET` | ✅ | Okta client secret |
| `OKTA_SITE` | ✅ | Okta organization URL (e.g., `https://your-domain.okta.com`) |
| `OKTA_AUTHORIZATION_SERVER` | ✅ | Authorization server name (e.g., `default`) |
| `OKTA_AUDIENCE` | ✅ | Authorization server audience (e.g., `api://default`) |
| `OKTA_DEVICE_AUTHORIZATION_CLIENT_ID` | CLI only | Native app client ID for Device Authorization |
### WorkOS
| Variable | Required | Description |
|---|---|---|
| `WORKOS_CLIENT_ID` | ✅ | WorkOS application client ID |
| `WORKOS_AUTHKIT_DOMAIN` | ✅ | AuthKit domain (e.g., `your-domain.authkit.com`) |
### Auth0
| Variable | Required | Description |
|---|---|---|
| `AUTH0_CLIENT_ID` | ✅ | Auth0 application client ID |
| `AUTH0_CLIENT_SECRET` | ✅ | Auth0 client secret |
| `AUTH0_DOMAIN` | ✅ | Auth0 tenant domain (e.g., `your-tenant.auth0.com`) |
### Keycloak
| Variable | Required | Description |
|---|---|---|
| `KEYCLOAK_CLIENT_ID` | ✅ | Keycloak client ID |
| `KEYCLOAK_CLIENT_SECRET` | ✅ | Keycloak client secret |
| `KEYCLOAK_SITE` | ✅ | Keycloak server URL |
| `KEYCLOAK_REALM` | ✅ | Keycloak realm name |
| `KEYCLOAK_AUDIENCE` | ✅ | Token audience (default: `account`) |
| `KEYCLOAK_BASE_URL` | Optional | Base URL path (e.g., `/auth` for pre-v17 migrations) |
| `KEYCLOAK_DEVICE_AUTHORIZATION_CLIENT_ID` | CLI only | Public client ID for Device Authorization |
---
## Next Steps
- [Installation Guide](/installation) — Get started with CrewAI
- [Quickstart](/quickstart) — Build your first crew
- [RBAC Setup](/enterprise/features/rbac) — Detailed role and permission management

View File

@@ -146,6 +146,36 @@ curl -X GET \
https://your-crew-url.crewai.com/status/abcd1234-5678-90ef-ghij-klmnopqrstuv
```
## Stopping a Running Execution
You can stop or cancel a running crew or flow execution at any time using the stop endpoint. This is useful when you need to abort a long-running execution or cancel one that is no longer needed.
### Stop an Execution
Send a POST request with the `kickoff_id` of the execution you want to stop:
```bash
curl -X POST \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
https://your-crew-url.crewai.com/stop/abcd1234-5678-90ef-ghij-klmnopqrstuv
```
**Success Response:**
```json
{"status": "stopped", "kickoffId": "abcd1234-5678-90ef-ghij-klmnopqrstuv"}
```
**Error Response** (when the execution has already finished):
```json
{"detail": "Cannot stop execution. Current state: SUCCESS"}
```
<Note>
You cannot stop executions that have already completed (`SUCCESS`), failed (`FAILURE`), or been revoked (`REVOKED`). The API returns a `400` status code in those cases.
</Note>
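The same call from Python, for completeness: a sketch mirroring the cURL example above, using the `requests` library and the same placeholder URL and token.
```python
import requests

resp = requests.post(
    "https://your-crew-url.crewai.com/stop/abcd1234-5678-90ef-ghij-klmnopqrstuv",
    headers={"Authorization": "Bearer YOUR_CREW_TOKEN"},
)
if resp.status_code == 200:
    print(resp.json())  # {"status": "stopped", "kickoffId": "..."}
elif resp.status_code == 400:
    print(resp.json()["detail"])  # execution already in a terminal state
else:
    resp.raise_for_status()
```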
## Handling Executions
### Long-Running Executions

View File

@@ -36,6 +36,7 @@ info:
1. **Discover inputs** using `GET /inputs`
2. **Start execution** using `POST /kickoff`
3. **Monitor progress** using `GET /{kickoff_id}/status`
4. **Stop execution** (if needed) using `POST /stop/{kickoff_id}`
version: 1.0.0
contact:
name: CrewAI Support
@@ -284,6 +285,56 @@ paths:
"500":
$ref: "#/components/responses/ServerError"
/stop/{kickoff_id}:
post:
summary: Stop Crew Execution
description: |
**📋 Reference Example Only** - *This shows the request format. To test with your actual crew, copy the cURL example and replace the URL + token with your real values.*
Stops or cancels a running crew or flow execution. The execution must be in an active state
(not SUCCESS, FAILURE, or REVOKED).
operationId: stopCrewExecution
parameters:
- name: kickoff_id
in: path
required: true
description: The kickoff ID of the execution to stop
schema:
type: string
format: uuid
example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
responses:
"200":
description: Successfully stopped the execution
content:
application/json:
schema:
$ref: "#/components/schemas/StopExecutionResponse"
example:
status: "stopped"
kickoffId: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
"400":
description: Execution is already in a terminal state (SUCCESS, FAILURE, or REVOKED)
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
example:
detail: "Cannot stop execution. Current state: SUCCESS"
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
description: Kickoff ID not found
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
example:
error: "Execution not found"
message: "No execution found with ID: abcd1234-5678-90ef-ghij-klmnopqrstuv"
"500":
$ref: "#/components/responses/ServerError"
/resume:
post:
summary: Resume Crew Execution with Human Feedback
@@ -508,6 +559,19 @@ components:
description: Time taken to execute this task in seconds
example: 45.2
StopExecutionResponse:
type: object
properties:
status:
type: string
enum: ["stopped"]
description: Indicates the execution was successfully stopped
example: "stopped"
kickoffId:
type: string
description: The kickoff ID of the stopped execution
example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
Error:
type: object
properties:

View File

@@ -120,6 +120,46 @@ paths:
'500':
$ref: '#/components/responses/ServerError'
/stop/{kickoff_id}:
post:
summary: Stop Execution
description: |
**📋 Reference Example Only** - *Shows the request format. For real calls, copy the cURL example and replace the URL and token.*
Stops or cancels a running crew or flow execution. The execution must be in an active state
(not SUCCESS, FAILURE, or REVOKED).
operationId: stopCrewExecution
parameters:
- name: kickoff_id
in: path
required: true
schema:
type: string
format: uuid
responses:
'200':
description: Successfully stopped the execution
content:
application/json:
schema:
$ref: '#/components/schemas/StopExecutionResponse'
'400':
description: Execution is already in a terminal state (SUCCESS, FAILURE, or REVOKED)
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
description: Kickoff ID not found
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'500':
$ref: '#/components/responses/ServerError'
/resume:
post:
summary: Resume Crew Execution with Human Feedback
@@ -314,6 +354,15 @@ components:
execution_time:
type: number
StopExecutionResponse:
type: object
properties:
status:
type: string
enum: ["stopped"]
kickoffId:
type: string
Error:
type: object
properties:

View File

@@ -36,6 +36,7 @@ info:
1. **Discover inputs** using `GET /inputs`
2. **Start execution** using `POST /kickoff`
3. **Monitor progress** using `GET /{kickoff_id}/status`
4. **Stop execution** (if needed) using `POST /stop/{kickoff_id}`
version: 1.0.0
contact:
name: CrewAI Support
@@ -156,6 +157,46 @@ paths:
"500":
$ref: "#/components/responses/ServerError"
/stop/{kickoff_id}:
post:
summary: Stop Crew Execution
description: |
**📋 Reference Example Only** - *Shows the request format. To test with your actual crew, copy the cURL example and replace the URL + token.*
Stops or cancels a running crew or flow execution. The execution must be in an active state
(not SUCCESS, FAILURE, or REVOKED).
operationId: stopCrewExecution
parameters:
- name: kickoff_id
in: path
required: true
schema:
type: string
format: uuid
responses:
"200":
description: Execution stopped successfully
content:
application/json:
schema:
$ref: "#/components/schemas/StopExecutionResponse"
"400":
description: Execution already in a terminal state (SUCCESS, FAILURE, or REVOKED)
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
description: Kickoff ID not found
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
"500":
$ref: "#/components/responses/ServerError"
/resume:
post:
summary: Resume Crew Execution with Human Feedback
@@ -351,6 +392,15 @@ components:
execution_time:
type: number
StopExecutionResponse:
type: object
properties:
status:
type: string
enum: ["stopped"]
kickoffId:
type: string
Error:
type: object
properties:

View File

@@ -4,6 +4,190 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="2026년 3월 31일">
## v1.13.0a5
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a5)
## 변경 사항
### 문서
- v1.13.0a4에 대한 변경 로그 및 버전 업데이트
## 기여자
@greysonlalonde, @joaomdmoura
</Update>
<Update label="2026년 4월 1일">
## v1.13.0a4
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a4)
## 변경 사항
### 문서
- v1.13.0a3에 대한 변경 로그 및 버전 업데이트
## 기여자
@greysonlalonde
</Update>
<Update label="2026년 4월 1일">
## v1.13.0a3
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a3)
## 변경 사항
### 기능
- LLMCallCompletedEvent에서 토큰 사용 데이터 발행
- 도구 메타데이터를 AMP로 추출 및 게시
### 버그 수정
- `stop` API 매개변수를 지원하지 않는 GPT-5.x 모델 처리
### 문서
- 모든 언어에서 에이전트 기능의 부정확성 수정
- 에이전트 기능 개요 추가 및 기술 문서 개선
- 포괄적인 SSO 구성 가이드 추가
- v1.13.0rc1에 대한 변경 로그 및 버전 업데이트
### 리팩토링
- Flow를 Pydantic BaseModel로 변환
- LLM 클래스를 Pydantic BaseModel로 변환
- InstanceOf[T]를 일반 타입 주석으로 교체
- 사용되지 않는 메서드 제거
## 기여자
@dependabot[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @lucasgomide, @thiagomoretto
</Update>
<Update label="2026년 3월 27일">
## v1.13.0rc1
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0rc1)
## 변경 사항
### 문서
- v1.13.0a2의 변경 로그 및 버전 업데이트
## 기여자
@greysonlalonde
</Update>
<Update label="2026년 3월 27일">
## v1.13.0a2
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a2)
## 변경 사항
### 기능
- 릴리스 중 자동 업데이트 배포 테스트 리포지토리
- 기업 릴리스의 복원력 및 사용자 경험 개선
### 문서
- v1.13.0a1에 대한 변경 로그 및 버전 업데이트
## 기여자
@greysonlalonde
</Update>
<Update label="2026년 3월 27일">
## v1.13.0a1
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a1)
## 변경 사항
### 버그 수정
- Node를 LTS 22로 고정하여 문서 작업 흐름의 끊어진 링크 수정
- 기업 릴리스에서 새로 게시된 패키지의 uv 캐시 초기화
### 문서
- 포괄적인 RBAC 권한 매트릭스 및 배포 가이드 추가
- v1.12.2에 대한 변경 로그 및 버전 업데이트
## 기여자
@greysonlalonde, @iris-clawd, @joaomdmoura
</Update>
<Update label="2026년 3월 25일">
## v1.12.2
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.12.2)
## 변경 사항
### 기능
- devtools 릴리스에 기업 릴리스 단계 추가
### 버그 수정
- @human_feedback과 함께 emit을 사용할 때 메서드 반환 값을 흐름 출력으로 유지
### 문서
- v1.12.1에 대한 변경 로그 및 버전 업데이트
- 보안 정책 및 보고 지침 수정
## 기여자
@alex-clawd, @greysonlalonde, @joaomdmoura, @theCyberTech
</Update>
<Update label="2026년 3월 25일">
## v1.12.1
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.12.1)
## 변경 사항
### 기능
- HumanFeedbackRequestedEvent에 request_id 추가
- 메모리 시스템을 위한 Qdrant Edge 저장소 백엔드 추가
- 변경 사항을 분석하고 번역된 문서와 함께 문서를 생성하는 docs-check 명령어 추가
- 변경 로그 및 릴리스 도구에 아랍어 지원 추가
- 모든 문서에 대한 현대 표준 아랍어 번역 추가
- CLI에 로그아웃 명령어 추가
- 에이전트 기술 추가
- 계층적 메모리 격리를 위한 자동 root_scope 구현
- OpenAI 호환 네이티브 제공자 구현 (OpenRouter, DeepSeek, Ollama, vLLM, Cerebras, Dashscope)
### 버그 수정
- 트레이스 배치 푸시에 대한 잘못된 자격 증명 수정 (404)
- HITL 흐름 시스템의 여러 버그 해결
- 에이전트 메모리 저장 수정
- crewai 패키지 전반에 걸쳐 모든 엄격한 mypy 오류 해결
- FlowMeta의 listener+router 메서드에 대한 __router_paths__ 사용 수정
- 파일 지원이 없는 경우 값 오류 수정
- 문서에서 litellm 격리 단어 수정
- crewai-files의 모든 mypy 오류 수정 및 모든 패키지를 CI 유형 검사에 추가
- litellm의 상한을 마지막 테스트된 버전 (1.82.6)으로 고정
### 문서
- v1.12.0에 대한 변경 로그 및 버전 업데이트
- CONTRIBUTING.md 추가
- LiteLLM 없이 CrewAI를 사용하는 가이드 추가
## 기여자
@akaKuruma, @alex-clawd, @greysonlalonde, @iris-clawd, @joaomdmoura, @lorenzejay, @lucasgomide, @nicoferdi96
</Update>
<Update label="2026년 3월 25일">
## v1.12.0

View File

@@ -0,0 +1,147 @@
---
title: "에이전트 기능"
description: "CrewAI 에이전트를 확장하는 다섯 가지 방법 이해하기: 도구, MCP, 앱, 스킬, 지식."
icon: puzzle-piece
mode: "wide"
---
## 개요
CrewAI 에이전트는 **다섯 가지 고유한 기능 유형**으로 확장할 수 있으며, 각각 다른 목적을 가지고 있습니다. 각 유형을 언제 사용해야 하는지, 그리고 어떻게 함께 작동하는지 이해하는 것이 효과적인 에이전트를 구축하는 핵심입니다.
<CardGroup cols={2}>
<Card title="도구" icon="wrench" href="/ko/concepts/tools" color="#3B82F6">
**호출 가능한 함수** — 에이전트가 행동을 취할 수 있게 합니다. 웹 검색, 파일 작업, API 호출, 코드 실행.
</Card>
<Card title="MCP 서버" icon="plug" href="/ko/mcp/overview" color="#8B5CF6">
**원격 도구 서버** — Model Context Protocol을 통해 에이전트를 외부 도구 서버에 연결합니다. 도구와 같은 효과이지만 외부에서 호스팅됩니다.
</Card>
<Card title="앱" icon="grid-2" color="#EC4899">
**플랫폼 통합** — CrewAI 플랫폼을 통해 에이전트를 SaaS 앱(Gmail, Slack, Jira, Salesforce)에 연결합니다. 플랫폼 통합 토큰으로 로컬에서 실행됩니다.
</Card>
<Card title="스킬" icon="bolt" href="/ko/concepts/skills" color="#F59E0B">
**도메인 전문성** — 에이전트 프롬프트에 지침, 가이드라인 및 참조 자료를 주입합니다. 스킬은 에이전트에게 *어떻게 생각할지*를 알려줍니다.
</Card>
<Card title="지식" icon="book" href="/ko/concepts/knowledge" color="#10B981">
**검색된 사실** — 시맨틱 검색(RAG)을 통해 문서, 파일 및 URL에서 에이전트에게 데이터를 제공합니다. 지식은 에이전트에게 *무엇을 알아야 하는지*를 제공합니다.
</Card>
</CardGroup>
---
## 핵심 구분
가장 중요한 점: **이 기능들은 두 가지 범주로 나뉩니다**.
### 액션 기능 (도구, MCP, 앱)
에이전트에게 **무언가를 할 수 있는** 능력을 부여합니다 — API 호출, 파일 읽기, 웹 검색, 이메일 전송. 실행 시점에 세 가지 모두 동일한 내부 형식(`BaseTool` 인스턴스)으로 변환되며, 에이전트가 호출할 수 있는 통합 도구 목록에 나타납니다.
```python
from crewai import Agent
from crewai_tools import SerperDevTool, FileReadTool
agent = Agent(
role="Researcher",
goal="Find and compile market data",
backstory="Expert market analyst",
tools=[SerperDevTool(), FileReadTool()], # 로컬 도구
mcps=["https://mcp.example.com/sse"], # 원격 MCP 서버 도구
apps=["gmail", "google_sheets"], # 플랫폼 통합
)
```
### 컨텍스트 기능 (스킬, 지식)
에이전트의 **프롬프트**를 수정합니다 — 에이전트가 추론을 시작하기 전에 전문성, 지침 또는 검색된 데이터를 주입합니다. 에이전트에게 새로운 액션을 제공하는 것이 아니라, 에이전트가 어떻게 생각하고 어떤 정보에 접근할 수 있는지를 형성합니다.
```python
from crewai import Agent
agent = Agent(
role="Security Auditor",
goal="Audit cloud infrastructure for vulnerabilities",
backstory="Expert in cloud security with 10 years of experience",
skills=["./skills/security-audit"], # 도메인 지침
knowledge_sources=[pdf_source, url_source], # 검색된 사실
)
```
---
## 언제 무엇을 사용할까
| 필요한 것... | 사용할 것 | 예시 |
| :------------------------------------------------------- | :---------------- | :--------------------------------------- |
| 에이전트가 웹을 검색 | **도구** | `tools=[SerperDevTool()]` |
| 에이전트가 MCP를 통해 원격 API 호출 | **MCP** | `mcps=["https://api.example.com/sse"]` |
| 에이전트가 Gmail로 이메일 전송 | **앱** | `apps=["gmail"]` |
| 에이전트가 특정 절차를 따름 | **스킬** | `skills=["./skills/code-review"]` |
| 에이전트가 회사 문서 참조 | **지식** | `knowledge_sources=[pdf_source]` |
| 에이전트가 웹 검색 AND 리뷰 가이드라인 준수 | **도구 + 스킬** | 둘 다 함께 사용 |
---
## 기능 조합하기
실제로 에이전트는 종종 **여러 기능 유형을 함께** 사용합니다. 현실적인 예시입니다:
```python
from crewai import Agent
from crewai_tools import SerperDevTool, FileReadTool, CodeInterpreterTool
# 완전히 갖춘 리서치 에이전트
researcher = Agent(
role="Senior Research Analyst",
goal="Produce comprehensive market analysis reports",
backstory="Expert analyst with deep industry knowledge",
# 액션: 에이전트가 할 수 있는 것
tools=[
SerperDevTool(), # 웹 검색
FileReadTool(), # 로컬 파일 읽기
CodeInterpreterTool(), # 분석을 위한 Python 코드 실행
],
mcps=["https://data-api.example.com/sse"], # 원격 데이터 API 접근
apps=["google_sheets"], # Google Sheets에 쓰기
# 컨텍스트: 에이전트가 아는 것
skills=["./skills/research-methodology"], # 연구 수행 방법
knowledge_sources=[company_docs], # 회사 특화 데이터
)
```
---
## 비교 테이블
| 특성 | 도구 | MCP | 앱 | 스킬 | 지식 |
| :--- | :---: | :---: | :---: | :---: | :---: |
| **에이전트에게 액션 부여** | ✅ | ✅ | ✅ | ❌ | ❌ |
| **프롬프트 수정** | ❌ | ❌ | ❌ | ✅ | ✅ |
| **코드 필요** | 예 | 설정만 | 설정만 | 마크다운만 | 설정만 |
| **로컬 실행** | 예 | 경우에 따라 | 예 (환경 변수 필요) | N/A | 예 |
| **API 키 필요** | 도구별 | 서버별 | 통합 토큰 | 아니오 | 임베더만 |
| **Agent에 설정** | `tools=[]` | `mcps=[]` | `apps=[]` | `skills=[]` | `knowledge_sources=[]` |
| **Crew에 설정** | ❌ | ❌ | ❌ | `skills=[]` | `knowledge_sources=[]` |
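아래는 위 표의 차이를 코드로 보여주는 최소 스케치입니다. 앱 통합이 통합 토큰으로 인증한다는 점을 전제로 하며, 환경 변수 이름(`CREWAI_PLATFORM_INTEGRATION_TOKEN`)과 토큰 값, 앱 식별자는 예시용 가정입니다:
```python
import os
from crewai import Agent

# 가정: 앱 통합은 플랫폼 통합 토큰 환경 변수로 인증합니다 (예시 값)
os.environ.setdefault("CREWAI_PLATFORM_INTEGRATION_TOKEN", "YOUR_TOKEN")

agent = Agent(
    role="Ops Assistant",
    goal="Send status updates",
    backstory="Reliable operations helper",
    apps=["gmail"],                    # 액션: 통합 토큰 필요 (표의 '앱' 열)
    skills=["./skills/ops-playbook"],  # 컨텍스트: 프롬프트 수정, API 키 불필요
)
```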
---
## 상세 가이드
각 기능 유형에 대해 더 알아볼 준비가 되셨나요?
<CardGroup cols={2}>
<Card title="도구" icon="wrench" href="/ko/concepts/tools">
맞춤형 도구 생성, 75개 이상의 OSS 카탈로그 사용, 캐싱 및 비동기 실행 설정.
</Card>
<Card title="MCP 통합" icon="plug" href="/ko/mcp/overview">
stdio, SSE 또는 HTTP를 통해 MCP 서버에 연결. 도구 필터링, 인증 설정.
</Card>
<Card title="스킬" icon="bolt" href="/ko/concepts/skills">
SKILL.md로 스킬 패키지 구축, 도메인 전문성 주입, 점진적 공개 사용.
</Card>
<Card title="지식" icon="book" href="/ko/concepts/knowledge">
PDF, CSV, URL 등에서 지식 추가. 임베더 및 검색 설정.
</Card>
</CardGroup>

View File

@@ -1,27 +1,186 @@
---
title: 스킬
description: 에이전트 프롬프트에 컨텍스트를 주입하는 파일 시스템 기반 스킬 패키지.
description: 에이전트 프롬프트에 도메인 전문성과 지침을 주입하는 파일 시스템 기반 스킬 패키지.
icon: bolt
mode: "wide"
---
## 개요
스킬은 에이전트에게 도메인별 지침, 참조 자료, 에셋을 제공하는 자체 포함 디렉터리입니다. 각 스킬은 YAML 프론트매터와 마크다운 본문이 포함된 `SKILL.md` 파일로 정의됩니다.
스킬은 에이전트에게 **도메인별 지침, 가이드라인 및 참조 자료**를 제공하는 자체 포함 디렉터리입니다. 각 스킬은 YAML 프론트매터와 마크다운 본문이 포함된 `SKILL.md` 파일로 정의됩니다.
스킬은 **점진적 공개**를 사용합니다 — 메타데이터가 먼저 로드되고, 활성화 시에만 전체 지침이 로드되며, 필요할 때만 리소스 카탈로그가 로드됩니다.
활성화되면 스킬의 지침이 에이전트의 작업 프롬프트에 직접 주입됩니다 — 코드 변경 없이 에이전트에게 전문성을 부여합니다.
## 디렉터리 구조
<Note type="info" title="스킬 vs 도구 — 핵심 구분">
**스킬은 도구가 아닙니다.** 이것이 가장 흔한 혼동 포인트입니다.
- **스킬**은 에이전트의 프롬프트에 *지침과 컨텍스트*를 주입합니다. 에이전트에게 문제에 대해 *어떻게 생각할지*를 알려줍니다.
- **도구**는 에이전트에게 행동을 취할 수 있는 *호출 가능한 함수*를 제공합니다 (검색, 파일 읽기, API 호출).
흔히 **둘 다** 필요합니다: 전문성을 위한 스킬과 행동을 위한 도구. 이들은 독립적으로 구성되며 서로 보완합니다.
</Note>
---
## 빠른 시작
### 1. 스킬 디렉터리 생성
```
my-skill/
├── SKILL.md       # 필수 — 프론트매터 + 지침
├── scripts/       # 선택 — 실행 가능한 스크립트
├── references/    # 선택 — 참조 문서
└── assets/        # 선택 — 정적 파일 (설정, 데이터)
skills/
└── code-review/
    ├── SKILL.md       # 필수 — 지침
    ├── references/    # 선택 — 참조 문서
    │   └── style-guide.md
    └── scripts/       # 선택 — 실행 가능한 스크립트
```
디렉터리 이름은 `SKILL.md`의 `name` 필드와 일치해야 합니다.
### 2. SKILL.md 작성
```markdown
---
name: code-review
description: Guidelines for conducting thorough code reviews with focus on security and performance.
metadata:
author: your-team
version: "1.0"
---
## 코드 리뷰 가이드라인
코드를 리뷰할 때 이 체크리스트를 따르세요:
1. **보안**: 인젝션 취약점, 인증 우회, 데이터 노출 확인
2. **성능**: N+1 쿼리, 불필요한 할당, 블로킹 호출 확인
3. **가독성**: 명확한 네이밍, 적절한 주석, 일관된 스타일 보장
4. **테스트**: 새로운 기능에 대한 적절한 테스트 커버리지 확인
### 심각도 수준
- **크리티컬**: 보안 취약점, 데이터 손실 위험 → 머지 차단
- **메이저**: 성능 문제, 로직 오류 → 변경 요청
- **마이너**: 스타일 문제, 네이밍 제안 → 코멘트와 함께 승인
```
### 3. 에이전트에 연결
```python
from crewai import Agent
from crewai_tools import GithubSearchTool, FileReadTool
reviewer = Agent(
role="Senior Code Reviewer",
goal="Review pull requests for quality and security issues",
backstory="Staff engineer with expertise in secure coding practices.",
skills=["./skills"], # 리뷰 가이드라인 주입
tools=[GithubSearchTool(), FileReadTool()], # 에이전트가 코드를 읽을 수 있게 함
)
```
이제 에이전트는 **전문성** (스킬에서)과 **기능** (도구에서) 모두를 갖추게 됩니다.
---
## 스킬 + 도구: 함께 작동하기
스킬과 도구가 어떻게 보완하는지 보여주는 일반적인 패턴입니다:
### 패턴 1: 스킬만 (도메인 전문성, 액션 불필요)
에이전트가 특정 지침이 필요하지만 외부 서비스를 호출할 필요가 없을 때 사용:
```python
agent = Agent(
role="Technical Writer",
goal="Write clear API documentation",
backstory="Expert technical writer",
skills=["./skills/api-docs-style"], # 작성 가이드라인 및 템플릿
# 도구 불필요 — 에이전트가 제공된 컨텍스트를 기반으로 작성
)
```
### 패턴 2: 도구만 (액션, 특별한 전문성 불필요)
에이전트가 행동을 취해야 하지만 도메인별 지침이 필요 없을 때 사용:
```python
from crewai_tools import SerperDevTool, ScrapeWebsiteTool
agent = Agent(
role="Web Researcher",
goal="Find information about a topic",
backstory="Skilled at finding information online",
tools=[SerperDevTool(), ScrapeWebsiteTool()], # 검색 및 스크래핑 가능
# 스킬 불필요 — 일반 연구에는 특별한 가이드라인이 필요 없음
)
```
### 패턴 3: 스킬 + 도구 (전문성 AND 액션)
가장 일반적인 실제 패턴. 스킬은 작업에 *어떻게* 접근할지를 제공하고, 도구는 에이전트가 *무엇을* 할 수 있는지를 제공합니다:
```python
from crewai_tools import SerperDevTool, FileReadTool, CodeInterpreterTool
analyst = Agent(
role="Security Analyst",
goal="Audit infrastructure for vulnerabilities",
backstory="Expert in cloud security and compliance",
skills=["./skills/security-audit"], # 감사 방법론 및 체크리스트
tools=[
SerperDevTool(), # 알려진 취약점 조사
FileReadTool(), # 설정 파일 읽기
CodeInterpreterTool(), # 분석 스크립트 실행
],
)
```
### 패턴 4: 스킬 + MCP
스킬은 도구와 마찬가지로 MCP 서버와 함께 작동합니다:
```python
agent = Agent(
role="Data Analyst",
goal="Analyze customer data and generate reports",
backstory="Expert data analyst with strong statistical background",
skills=["./skills/data-analysis"], # 분석 방법론
mcps=["https://data-warehouse.example.com/sse"], # 원격 데이터 접근
)
```
### 패턴 5: 스킬 + 앱
스킬은 에이전트가 플랫폼 통합을 사용하는 방법을 안내할 수 있습니다:
```python
agent = Agent(
role="Customer Support Agent",
goal="Respond to customer inquiries professionally",
backstory="Experienced support representative",
skills=["./skills/support-playbook"], # 응답 템플릿 및 에스컬레이션 규칙
apps=["gmail", "zendesk"], # 이메일 전송 및 티켓 업데이트 가능
)
```
---
## 크루 레벨 스킬
스킬을 크루에 설정하여 **모든 에이전트**에 적용할 수 있습니다:
```python
from crewai import Crew
crew = Crew(
agents=[researcher, writer, reviewer],
tasks=[research_task, write_task, review_task],
skills=["./skills"], # 모든 에이전트가 이 스킬을 받음
)
```
에이전트 레벨 스킬이 우선합니다 — 동일한 스킬이 양쪽 레벨에서 발견되면 에이전트의 버전이 사용됩니다.
---
## SKILL.md 형식
@@ -34,7 +193,7 @@ compatibility: crewai>=0.1.0 # 선택
metadata: # 선택
author: your-name
version: "1.0"
allowed-tools: web-search file-read # 선택, 공백으로 구분
allowed-tools: web-search file-read # 선택, 실험적
---
에이전트를 위한 지침이 여기에 들어갑니다. 이 마크다운 본문은
@@ -43,57 +202,46 @@ allowed-tools: web-search file-read # 선택, 공백으로 구분
### 프론트매터 필드
| 필드 | 필수 | 제약 조건 |
| 필드 | 필수 | 설명 |
| :-------------- | :----- | :----------------------------------------------------------------------- |
| `name` | 예 | 1–64자. 소문자 영숫자와 하이픈. 선행/후행/연속 하이픈 불가. 디렉터리 이름과 일치 필수. |
| `name` | 예 | 1–64자. 소문자 영숫자와 하이픈. 디렉터리 이름과 일치 필수. |
| `description` | 예 | 1–1024자. 스킬이 무엇을 하고 언제 사용하는지 설명. |
| `license` | 아니오 | 라이선스 이름 또는 번들된 라이선스 파일 참조. |
| `compatibility` | 아니오 | 최대 500자. 환경 요구 사항 (제품, 패키지, 네트워크). |
| `metadata` | 아니오 | 임의의 문자열 키-값 매핑. |
| `allowed-tools` | 아니오 | 공백으로 구분된 사전 승인 도구 목록. 실험적. |
## 사용법
---
### 에이전트 레벨 스킬
## 디렉터리 구조
에이전트에 스킬 디렉터리 경로를 전달합니다:
```python
from crewai import Agent
agent = Agent(
role="Researcher",
goal="Find relevant information",
backstory="An expert researcher.",
skills=["./skills"], # 이 디렉터리의 모든 스킬을 검색
)
```
my-skill/
├── SKILL.md # 필수 — 프론트매터 + 지침
├── scripts/ # 선택 — 실행 가능한 스크립트
├── references/ # 선택 — 참조 문서
└── assets/ # 선택 — 정적 파일 (설정, 데이터)
```
### 크루 레벨 스킬
디렉터리 이름은 `SKILL.md`의 `name` 필드와 일치해야 합니다. `scripts/`, `references/`, `assets/` 디렉터리는 파일을 직접 참조해야 하는 에이전트를 위해 스킬의 `path`에서 사용할 수 있습니다.
크루의 스킬 경로는 모든 에이전트에 병합됩니다:
---
```python
from crewai import Crew
## 사전 로드된 스킬
crew = Crew(
agents=[agent],
tasks=[task],
skills=["./skills"],
)
```
### 사전 로드된 스킬
`Skill` 객체를 직접 전달할 수도 있습니다:
더 세밀한 제어를 위해 프로그래밍 방식으로 스킬을 검색하고 활성화할 수 있습니다:
```python
from pathlib import Path
from crewai.skills import discover_skills, activate_skill
# 디렉터리의 모든 스킬 검색
skills = discover_skills(Path("./skills"))
# 활성화 (전체 SKILL.md 본문 로드)
activated = [activate_skill(s) for s in skills]
# 에이전트에 전달
agent = Agent(
role="Researcher",
goal="Find relevant information",
@@ -102,13 +250,57 @@ agent = Agent(
)
```
---
## 스킬 로드 방식
스킬은 점진적으로 로드됩니다 — 각 단계에서 필요한 데이터만 읽습니다:
스킬은 **점진적 공개**를 사용합니다 — 각 단계에서 필요한 것만 로드합니다:
| 단계 | 로드되는 내용 | 시점 |
| :--------------- | :------------------------------------------------ | :----------------- |
| 검색 | 이름, 설명, 프론트매터 필드 | `discover_skills()` |
| 활성화 | 전체 SKILL.md 본문 텍스트 | `activate_skill()` |
| 단계 | 로드되는 내용 | 시점 |
| :------- | :------------------------------------ | :------------------ |
| 검색 | 이름, 설명, 프론트매터 필드 | `discover_skills()` |
| 활성화 | 전체 SKILL.md 본문 텍스트 | `activate_skill()` |
일반적인 에이전트 실행 중에 스킬은 자동으로 검색되고 활성화됩니다. `scripts/`, `references/`, `assets/` 디렉터리는 파일을 직접 참조해야 하는 에이전트를 위해 스킬의 `path`에서 사용할 수 있습니다.
일반적인 에이전트 실행 중(`skills=["./skills"]`로 디렉터리 경로 전달 시) 스킬은 자동으로 검색되고 활성화됩니다. 점진적 로딩은 프로그래밍 API를 사용할 때만 관련됩니다.
---
## 스킬 vs 지식
스킬과 지식 모두 에이전트의 프롬프트를 수정하지만, 서로 다른 목적을 가지고 있습니다:
| 측면 | 스킬 | 지식 |
| :--- | :--- | :--- |
| **제공하는 것** | 지침, 절차, 가이드라인 | 사실, 데이터, 정보 |
| **저장 방식** | 마크다운 파일 (SKILL.md) | 벡터 스토어에 임베딩 (ChromaDB) |
| **검색 방식** | 전체 본문이 프롬프트에 주입 | 시맨틱 검색으로 관련 청크 찾기 |
| **적합한 용도** | 방법론, 체크리스트, 스타일 가이드 | 회사 문서, 제품 정보, 참조 데이터 |
| **설정 방법** | `skills=["./skills"]` | `knowledge_sources=[source]` |
**경험 법칙:** 에이전트가 *프로세스*를 따라야 하면 스킬을 사용하세요. 에이전트가 *데이터*를 참조해야 하면 지식을 사용하세요.
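이 경험 법칙을 코드로 옮긴 최소 스케치입니다. 스킬 디렉터리와 PDF 경로는 예시용 가정입니다:
```python
from crewai import Agent
from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource

# 가정: 제품 매뉴얼 PDF가 로컬에 존재 (예시 경로)
manual = PDFKnowledgeSource(file_paths=["product-manual.pdf"])

agent = Agent(
    role="Support Engineer",
    goal="Resolve customer issues accurately",
    backstory="Experienced support engineer",
    skills=["./skills/triage-process"],  # 프로세스: 문제를 분류하는 절차
    knowledge_sources=[manual],          # 데이터: 제품에 대한 사실 정보
)
```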
---
## 자주 묻는 질문
<AccordionGroup>
<Accordion title="스킬과 도구를 모두 설정해야 하나요?">
사용 사례에 따라 다릅니다. 스킬과 도구는 **독립적**입니다 — 둘 중 하나, 둘 다, 또는 아무것도 사용하지 않을 수 있습니다.
- **스킬만**: 에이전트가 전문성은 필요하지만 외부 액션이 필요 없을 때 (예: 스타일 가이드라인으로 작성)
- **도구만**: 에이전트가 액션은 필요하지만 특별한 방법론이 필요 없을 때 (예: 간단한 웹 검색)
- **둘 다**: 에이전트가 전문성 AND 액션이 필요할 때 (예: 특정 체크리스트로 보안 감사 AND 코드 스캔 기능)
</Accordion>
<Accordion title="스킬이 자동으로 도구를 제공하나요?">
**아니요.** SKILL.md의 `allowed-tools` 필드는 실험적 메타데이터일 뿐 — 도구를 프로비저닝하거나 주입하지 않습니다. 항상 `tools=[]`, `mcps=[]` 또는 `apps=[]`를 통해 별도로 도구를 설정해야 합니다.
</Accordion>
<Accordion title="에이전트와 크루 모두에 같은 스킬을 설정하면 어떻게 되나요?">
에이전트 레벨 스킬이 우선합니다. 스킬은 이름으로 중복 제거됩니다 — 에이전트의 스킬이 먼저 처리되므로, 같은 스킬 이름이 양쪽 레벨에 나타나면 에이전트의 버전이 사용됩니다.
</Accordion>
<Accordion title="SKILL.md 본문의 최대 크기는 얼마인가요?">
50,000자에서 소프트 경고가 있지만 하드 리밋은 없습니다. 최상의 결과를 위해 스킬을 집중적이고 간결하게 유지하세요 — 너무 큰 프롬프트 주입은 에이전트의 주의를 분산시킬 수 있습니다.
</Accordion>
</AccordionGroup>

View File

@@ -10,6 +10,10 @@ mode: "wide"
CrewAI 도구는 에이전트에게 웹 검색, 데이터 분석부터 동료 간 협업 및 작업 위임에 이르기까지 다양한 기능을 제공합니다.
이 문서에서는 CrewAI 프레임워크 내에서 이러한 도구를 생성, 통합 및 활용하는 방법과, 협업 도구에 초점을 맞춘 새로운 기능에 대해 설명합니다.
<Note type="info" title="도구는 다섯 가지 에이전트 기능 유형 중 하나입니다">
도구는 에이전트에게 행동을 취할 수 있는 **호출 가능한 함수**를 제공합니다. [MCP](/ko/mcp/overview) (원격 도구 서버), [앱](/ko/concepts/agent-capabilities) (플랫폼 통합), [스킬](/ko/concepts/skills) (도메인 전문성), [지식](/ko/concepts/knowledge) (검색된 사실)과 함께 작동합니다. 각 유형을 언제 사용해야 하는지 알아보려면 [에이전트 기능](/ko/concepts/agent-capabilities) 개요를 참조하세요.
</Note>
## Tool이란 무엇인가?
CrewAI에서 tool은 에이전트가 다양한 작업을 수행하기 위해 활용할 수 있는 기술 또는 기능입니다.

View File

@@ -1,108 +1,260 @@
---
title: "역할 기반 접근 제어 (RBAC)"
description: "역할과 자동화별 가시성으로 crews, 도구, 데이터 접근을 제어합니다."
description: "역할, 범위, 세분화된 권한으로 crews, 도구, 데이터 접근을 제어합니다."
icon: "shield"
mode: "wide"
---
## 개요
CrewAI AOP의 RBAC는 **조직 수준 역할**과 **자동화(Automation) 수준 가시성**을 결합하여 안전하고 확장 가능한 접근 제어를 제공합니다.
CrewAI AMP의 RBAC는 두 가지 계층을 통해 안전하고 확장 가능한 접근 관리를 제공합니다:
1. **기능 권한** — 플랫폼 전반에서 각 역할이 수행할 수 있는 작업을 제어합니다 (관리, 읽기 또는 접근 불가)
2. **엔티티 수준 권한** — 개별 자동화, 환경 변수, LLM 연결, Git 저장소에 대한 세분화된 접근 제어
<Frame>
<img src="/images/enterprise/users_and_roles.png" alt="CrewAI AMP RBAC 개요" />
</Frame>
## 사용자와 역할
워크스페이스의 각 구성원은 역할이 있으며, 이는 기능 접근 범위를 결정합니다.
CrewAI 워크스페이스의 각 구성원에게는 역할이 할당되며, 이를 통해 다양한 기능에 대한 접근 범위가 결정됩니다.
가능한 작업:
- 사전 정의된 역할 사용 (Owner, Member)
- 권한을 세분화한 커스텀 역할 생성
- 설정 화면에서 언제든 역할 할당/변경
- 특정 권한에 맞춘 커스텀 역할 생성
- 설정 패널에서 언제든 역할 할당
설정 위치: Settings → Roles
<Steps>
<Step title="Roles 열기">
<b>Settings → Roles</b>로 이동합니다.
<Step title="Roles 설정 열기">
CrewAI AMP에서 <b>Settings → Roles</b>로 이동합니다.
</Step>
<Step title="역할 선택">
<b>Owner</b> 또는 <b>Member</b>를 사용하거나 <b>Create role</b>로 커스텀
역할을 만듭니다.
<Step title="역할 유형 선택">
사전 정의된 역할(<b>Owner</b>, <b>Member</b>)을 사용하거나{" "}
<b>Create role</b>을 클릭하여 커스텀 역할을 만듭니다.
</Step>
<Step title="멤버에 할당">
사용자들을 선택하고 역할을 지정합니다. 언제든 변경할 수 있습니다.
사용자를 선택하고 역할을 할당합니다. 언제든 변경할 수 있습니다.
</Step>
</Steps>
### 사전 정의된 역할
| 역할 | 설명 |
| :--------- | :------------------------------------------------------------------- |
| **Owner** | 모든 기능 및 설정에 대한 전체 접근 권한. 제한할 수 없습니다. |
| **Member** | 대부분의 기능에 대한 읽기 접근, 환경 변수, LLM 연결, Studio 프로젝트에 대한 관리 접근. 조직 설정이나 기본 설정은 수정할 수 없습니다. |
### 구성 요약
| 영역 | 위치 | 옵션 |
| 영역 | 설정 위치 | 옵션 |
| :------------ | :--------------------------------- | :-------------------------------- |
| 사용자 & 역할 | Settings → Roles | Owner, Member; 커스텀 역할 |
| 사용자 & 역할 | Settings → Roles | 사전 정의: Owner, Member; 커스텀 역할 |
| 자동화 가시성 | Automation → Settings → Visibility | Private; 사용자/역할 화이트리스트 |
## 자동화 수준 접근 제어
---
조직 역할과 별개로, **Automations**는 사용자/역할별로 특정 자동화 접근을 제한하는 가시성 설정을 제공합니다.
## 기능 권한 매트릭스
유용한 경우:
각 역할에는 기능 영역별 권한 수준이 있습니다. 세 가지 수준은 다음과 같습니다:
- 민감/실험 자동화를 비공개로 유지
- 대규모 팀/외부 협업에서 가시성 관리
- **Manage** — 전체 읽기/쓰기 접근 (생성, 편집, 삭제)
- **Read** — 읽기 전용 접근
- **No access** — 기능이 숨겨지거나 접근 불가
| 기능 | Owner | Member (기본값) | 사용 가능한 수준 | 설명 |
| :-------------------------- | :------ | :--------------- | :------------------------- | :------------------------------------------------------------- |
| `usage_dashboards` | Manage | Read | Manage / Read / No access | 사용 메트릭 및 분석 보기 |
| `crews_dashboards` | Manage | Read | Manage / Read / No access | 배포 대시보드 보기, 자동화 세부 정보 접근 |
| `invitations` | Manage | Read | Manage / Read / No access | 조직에 새 멤버 초대 |
| `training_ui` | Manage | Read | Manage / Read / No access | 훈련/파인튜닝 인터페이스 접근 |
| `tools` | Manage | Read | Manage / Read / No access | 도구 생성 및 관리 |
| `agents` | Manage | Read | Manage / Read / No access | 에이전트 생성 및 관리 |
| `environment_variables` | Manage | Manage | Manage / No access | 환경 변수 생성 및 관리 |
| `llm_connections` | Manage | Manage | Manage / No access | LLM 제공자 연결 구성 |
| `default_settings` | Manage | No access | Manage / No access | 조직 전체 기본 설정 수정 |
| `organization_settings` | Manage | No access | Manage / No access | 결제, 플랜 및 조직 구성 관리 |
| `studio_projects` | Manage | Manage | Manage / No access | Studio에서 프로젝트 생성 및 편집 |
<Tip>
커스텀 역할을 만들 때 대부분의 기능은 **Manage**, **Read** 또는 **No access**로 설정할 수 있습니다. 그러나 `environment_variables`, `llm_connections`, `default_settings`, `organization_settings`, `studio_projects`는 **Manage** 또는 **No access**만 지원합니다 — 이 기능들에는 읽기 전용 옵션이 없습니다.
</Tip>
---
## GitHub 또는 Zip에서 배포
가장 흔한 RBAC 질문 중 하나: _"팀원이 배포하려면 어떤 권한이 필요한가요?"_
### GitHub에서 배포
GitHub 저장소에서 자동화를 배포하려면 사용자에게 다음이 필요합니다:
1. **`crews_dashboards`**: 최소 `Read` — 배포가 생성되는 자동화 대시보드에 접근하는 데 필요
2. **Git 저장소 접근** (Git 저장소에 대한 엔티티 수준 RBAC가 활성화된 경우): 사용자의 역할에 엔티티 수준 권한을 통해 특정 Git 저장소에 대한 접근이 부여되어야 함
3. **`studio_projects`: `Manage`** — 배포 전에 Studio에서 crew를 빌드하는 경우
### Zip에서 배포
Zip 파일 업로드로 자동화를 배포하려면 사용자에게 다음이 필요합니다:
1. **`crews_dashboards`**: 최소 `Read` — 자동화 대시보드에 접근하는 데 필요
2. **Zip 배포 활성화**: 조직이 조직 설정에서 Zip 배포를 비활성화하지 않아야 함
### 빠른 참조: 배포에 필요한 최소 권한
| 작업 | 필요한 기능 권한 | 추가 요구사항 |
| :------------------- | :----------------------------------- | :----------------------------------------------- |
| GitHub에서 배포 | `crews_dashboards: Read` | Git 저장소 엔티티 접근 (Git RBAC 활성화 시) |
| Zip에서 배포 | `crews_dashboards: Read` | 조직 수준에서 Zip 배포가 활성화되어야 함 |
| Studio에서 빌드 | `studio_projects: Manage` | — |
| LLM 키 구성 | `llm_connections: Manage` | — |
| 환경 변수 설정 | `environment_variables: Manage` | 엔티티 수준 접근 (엔티티 RBAC 활성화 시) |
---
## 자동화 수준 접근 제어 (엔티티 권한)
조직 전체 역할 외에도, CrewAI는 개별 리소스에 대한 접근을 제한하는 세분화된 엔티티 수준 권한을 지원합니다.
### 자동화 가시성
자동화는 사용자 또는 역할별로 접근을 제한하는 가시성 설정을 지원합니다. 다음과 같은 경우에 유용합니다:
- 민감하거나 실험적인 자동화를 비공개로 유지
- 대규모 팀이나 외부 협업자의 가시성 관리
- 격리된 컨텍스트에서 자동화 테스트
Private 모드에서는 화이트리스트에 포함된 사용자/역할만 다음 작업이 가능합니다:
배포를 비공개로 구성할 수 있으며, 이 경우 화이트리스트에 포함된 사용자와 역할만 상호작용할 수 있습니다.
- 자동화 보기
- 실행/API 사용
- 로그, 메트릭, 설정 접근
조직 Owner는 항상 접근 가능하며, 가시성 설정에 영향을 받지 않습니다.
설정 위치: Automation → Settings → Visibility
설정 위치: Automation → Settings → Visibility 탭
<Steps>
<Step title="Visibility 탭 열기">
<b>Automation → Settings → Visibility</b>로 이동합니다.
</Step>
<Step title="가시성 설정">
<b>Private</b>를 선택합니다. Owner는 항상 접근 가능합니다.
접근을 제한하려면 <b>Private</b>를 선택합니다. 조직 Owner는 항상
접근 권한을 유지합니다.
</Step>
<Step title="허용 대상 추가">
보기/실행/로그·메트릭·설정 접근이 가능한 사용자/역할을 추가합니다.
<Step title="접근 허용 대상 추가">
보기, 실행, 로그/메트릭/설정 접근이 허용된 특정 사용자와 역할을
추가합니다.
</Step>
<Step title="저장 및 확인">
저장 후, 목록에 없는 사용자가 보거나 실행할 수 없는지 확인합니다.
변경 사항을 저장한 후, 화이트리스트에 없는 사용자가 자동화를 보거나 실행할 수
없는지 확인합니다.
</Step>
</Steps>
### Private 모드 접근 결과
### Private 가시성: 접근 결과
| 동작 | Owner | 화이트리스트 사용자/역할 | 비포함 |
| :--------------- | :---- | :----------------------- | :----- |
| 자동화 보기 | ✓ | ✓ | ✗ |
| 실행/API | ✓ | ✓ | ✗ |
| 로그/메트릭/설정 | ✓ | ✓ | ✗ |
| 동작 | Owner | 화이트리스트 사용자/역할 | 비포함 |
| :--------------------- | :---- | :----------------------- | :----- |
| 자동화 보기 | ✓ | ✓ | ✗ |
| 자동화/API 실행 | ✓ | ✓ | ✗ |
| 로그/메트릭/설정 접근 | ✓ | ✓ | ✗ |
<Tip>
Owner는 항상 접근 가능하며, Private 모드에서는 화이트리스트에 포함된
사용자/역할만 권한이 부여됩니다.
조직 Owner는 항상 접근 권한이 있습니다. Private 모드에서는 화이트리스트에 포함된
사용자/역할만 보기, 실행, 로그/메트릭/설정에 접근할 수 있습니다.
</Tip>
<Frame>
<img src="/images/enterprise/visibility.png" alt="CrewAI AMP 가시성 설정" />
<img src="/images/enterprise/visibility.png" alt="CrewAI AMP 자동화 가시성 설정" />
</Frame>
### 배포 권한 유형
특정 자동화에 엔티티 수준 접근을 부여할 때 다음 권한 유형을 할당할 수 있습니다:
| 권한 | 허용 범위 |
| :------------------- | :-------------------------------------------------- |
| `run` | 자동화 실행 및 API 사용 |
| `traces` | 실행 트레이스 및 로그 보기 |
| `manage_settings` | 자동화 편집, 재배포, 롤백 또는 삭제 |
| `human_in_the_loop` | HITL(human-in-the-loop) 요청에 응답 |
| `full_access` | 위의 모든 권한 |
### 기타 리소스에 대한 엔티티 수준 RBAC
엔티티 수준 RBAC가 활성화되면 다음 리소스에 대한 접근도 사용자 또는 역할별로 제어할 수 있습니다:
| 리소스 | 제어 방식 | 설명 |
| :----------------- | :---------------------------------- | :------------------------------------------------------------ |
| 환경 변수 | 엔티티 RBAC 기능 플래그 | 특정 환경 변수를 보거나 관리할 수 있는 역할/사용자 제한 |
| LLM 연결 | 엔티티 RBAC 기능 플래그 | 특정 LLM 제공자 구성에 대한 접근 제한 |
| Git 저장소 | Git 저장소 RBAC 조직 설정 | 특정 연결된 저장소에 접근할 수 있는 역할/사용자 제한 |
---
## 일반적인 역할 패턴
CrewAI는 Owner와 Member 역할을 기본 제공하지만, 대부분의 팀은 커스텀 역할을 만들어 활용합니다. 일반적인 패턴은 다음과 같습니다:
### Developer 역할
자동화를 빌드하고 배포하지만 조직 설정을 관리하지 않는 팀원을 위한 역할입니다.
| 기능 | 권한 |
| :-------------------------- | :---------- |
| `usage_dashboards` | Read |
| `crews_dashboards` | Manage |
| `invitations` | Read |
| `training_ui` | Read |
| `tools` | Manage |
| `agents` | Manage |
| `environment_variables` | Manage |
| `llm_connections` | Manage |
| `default_settings` | No access |
| `organization_settings` | No access |
| `studio_projects` | Manage |
### Viewer / Stakeholder 역할
자동화를 모니터링하고 결과를 확인해야 하는 비기술 이해관계자를 위한 역할입니다.
| 기능 | 권한 |
| :-------------------------- | :---------- |
| `usage_dashboards` | Read |
| `crews_dashboards` | Read |
| `invitations` | No access |
| `training_ui` | Read |
| `tools` | Read |
| `agents` | Read |
| `environment_variables` | No access |
| `llm_connections` | No access |
| `default_settings` | No access |
| `organization_settings` | No access |
| `studio_projects` | No access |
### Ops / Platform Admin 역할
인프라 설정을 관리하지만 에이전트를 빌드하지 않을 수 있는 플랫폼 운영자를 위한 역할입니다.
| 기능 | 권한 |
| :-------------------------- | :---------- |
| `usage_dashboards` | Manage |
| `crews_dashboards` | Manage |
| `invitations` | Manage |
| `training_ui` | Read |
| `tools` | Read |
| `agents` | Read |
| `environment_variables` | Manage |
| `llm_connections` | Manage |
| `default_settings` | Manage |
| `organization_settings` | Read |
| `studio_projects` | No access |
---
<Card
title="도움이 필요하신가요?"
icon="headset"
href="mailto:support@crewai.com"
>
RBAC 구성과 점검에 대한 지원이 필요하면 연락해 주세요.
RBAC 관련 질문은 지원팀에 문의해 주세요.
</Card>

View File

@@ -4,6 +4,190 @@ description: "Atualizações de produto, melhorias e correções do CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="31 mar 2026">
## v1.13.0a5
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a5)
## O que Mudou
### Documentação
- Atualizar changelog e versão para v1.13.0a4
## Contribuidores
@greysonlalonde, @joaomdmoura
</Update>
<Update label="01 abr 2026">
## v1.13.0a4
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a4)
## O que Mudou
### Documentação
- Atualizar changelog e versão para v1.13.0a3
## Contribuidores
@greysonlalonde
</Update>
<Update label="01 abr 2026">
## v1.13.0a3
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a3)
## O que Mudou
### Recursos
- Emitir dados de uso de token no LLMCallCompletedEvent
- Extrair e publicar metadados de ferramentas no AMP
### Correções de Bugs
- Lidar com modelos GPT-5.x que não suportam o parâmetro de API `stop`
### Documentação
- Corrigir imprecisões nas capacidades do agente em todas as línguas
- Adicionar visão geral das Capacidades do Agente e melhorar a documentação de Habilidades
- Adicionar um guia abrangente de configuração de SSO
- Atualizar o changelog e a versão para v1.13.0rc1
### Refatoração
- Converter Flow para Pydantic BaseModel
- Converter classes LLM para Pydantic BaseModel
- Substituir InstanceOf[T] por anotações de tipo simples
- Remover métodos não utilizados
## Contribuidores
@dependabot[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @lucasgomide, @thiagomoretto
</Update>
<Update label="27 mar 2026">
## v1.13.0rc1
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0rc1)
## O que Mudou
### Documentação
- Atualizar changelog e versão para v1.13.0a2
## Contribuidores
@greysonlalonde
</Update>
<Update label="27 mar 2026">
## v1.13.0a2
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a2)
## O que Mudou
### Recursos
- Repositório de teste de implantação de autoatualização durante o lançamento
- Melhorar a resiliência e a experiência do usuário na versão empresarial
### Documentação
- Atualizar changelog e versão para v1.13.0a1
## Contribuidores
@greysonlalonde
</Update>
<Update label="27 mar 2026">
## v1.13.0a1
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a1)
## O que Mudou
### Correções de Bugs
- Corrigir links quebrados no fluxo de documentação fixando o Node na LTS 22
- Limpar o cache uv para pacotes recém-publicados na versão empresarial
### Documentação
- Adicionar uma matriz abrangente de permissões RBAC e guia de implantação
- Atualizar o changelog e a versão para v1.12.2
## Contribuidores
@greysonlalonde, @iris-clawd, @joaomdmoura
</Update>
<Update label="25 mar 2026">
## v1.12.2
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.12.2)
## O que Mudou
### Recursos
- Adicionar fase de lançamento empresarial ao lançamento do devtools
### Correções de Bugs
- Preservar o valor de retorno do método como saída de fluxo para @human_feedback com emit
### Documentação
- Atualizar changelog e versão para v1.12.1
- Revisar política de segurança e instruções de relatório
## Contribuidores
@alex-clawd, @greysonlalonde, @joaomdmoura, @theCyberTech
</Update>
<Update label="25 mar 2026">
## v1.12.1
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.12.1)
## O que Mudou
### Recursos
- Adicionar request_id ao HumanFeedbackRequestedEvent
- Adicionar backend de armazenamento Qdrant Edge para sistema de memória
- Adicionar comando docs-check para analisar mudanças e gerar documentação com traduções
- Adicionar suporte ao idioma árabe para changelog e ferramentas de lançamento
- Adicionar tradução em árabe padrão moderno de toda a documentação
- Adicionar comando de logout na CLI
- Adicionar habilidades de agente
- Implementar root_scope automático para isolamento hierárquico de memória
- Implementar provedores nativos compatíveis com OpenAI (OpenRouter, DeepSeek, Ollama, vLLM, Cerebras, Dashscope)
### Correções de Bugs
- Corrigir credenciais incorretas para envio em lote de traces (404)
- Resolver múltiplos bugs no sistema de fluxo HITL
- Corrigir salvamento de memória do agente
- Resolver todos os erros estritos do mypy no pacote crewai
- Corrigir uso de __router_paths__ para métodos listener+router em FlowMeta
- Corrigir erro de valor em caso de suporte a nenhum arquivo
- Corrigir redação da quarentena do litellm na documentação
- Corrigir todos os erros do mypy em crewai-files e adicionar todos os pacotes às verificações de tipo do CI
- Fixar limite superior do litellm na última versão testada (1.82.6)
### Documentação
- Atualizar changelog e versão para v1.12.0
- Adicionar CONTRIBUTING.md
- Adicionar guia para usar CrewAI sem LiteLLM
## Contribuidores
@akaKuruma, @alex-clawd, @greysonlalonde, @iris-clawd, @joaomdmoura, @lorenzejay, @lucasgomide, @nicoferdi96
</Update>
<Update label="25 mar 2026">
## v1.12.0

View File

@@ -0,0 +1,147 @@
---
title: "Capacidades do Agente"
description: "Entenda as cinco formas de estender agentes CrewAI: Ferramentas, MCPs, Apps, Skills e Knowledge."
icon: puzzle-piece
mode: "wide"
---
## Visão Geral
Agentes CrewAI podem ser estendidos com **cinco tipos distintos de capacidades**, cada um servindo a um propósito diferente. Entender quando usar cada um — e como eles funcionam juntos — é fundamental para construir agentes eficazes.
<CardGroup cols={2}>
<Card title="Ferramentas" icon="wrench" href="/pt-BR/concepts/tools" color="#3B82F6">
**Funções chamáveis** — permitem que agentes tomem ações. Buscas na web, operações com arquivos, chamadas de API, execução de código.
</Card>
<Card title="Servidores MCP" icon="plug" href="/pt-BR/mcp/overview" color="#8B5CF6">
**Servidores de ferramentas remotos** — conectam agentes a servidores de ferramentas externos via Model Context Protocol. Mesmo efeito de ferramentas, mas hospedados externamente.
</Card>
<Card title="Apps" icon="grid-2" color="#EC4899">
**Integrações com plataformas** — conectam agentes a aplicativos SaaS (Gmail, Slack, Jira, Salesforce) via plataforma CrewAI. Executa localmente com um token de integração.
</Card>
<Card title="Skills" icon="bolt" href="/pt-BR/concepts/skills" color="#F59E0B">
**Expertise de domínio** — injetam instruções, diretrizes e material de referência nos prompts dos agentes. Skills dizem aos agentes *como pensar*.
</Card>
<Card title="Knowledge" icon="book" href="/pt-BR/concepts/knowledge" color="#10B981">
**Fatos recuperados** — fornecem aos agentes dados de documentos, arquivos e URLs via busca semântica (RAG). Knowledge dá aos agentes *o que saber*.
</Card>
</CardGroup>
---
## A Distinção Fundamental
O mais importante a entender: **essas capacidades se dividem em duas categorias**.
### Capacidades de Ação (Ferramentas, MCPs, Apps)
Estas dão aos agentes a capacidade de **fazer coisas** — chamar APIs, ler arquivos, buscar na web, enviar emails. No momento da execução, os três tipos se resolvem no mesmo formato interno (instâncias de `BaseTool`) e aparecem em uma lista unificada de ferramentas que o agente pode chamar.
```python
from crewai import Agent
from crewai_tools import SerperDevTool, FileReadTool
agent = Agent(
role="Researcher",
goal="Find and compile market data",
backstory="Expert market analyst",
tools=[SerperDevTool(), FileReadTool()], # Ferramentas locais
mcps=["https://mcp.example.com/sse"], # Ferramentas de servidor MCP remoto
apps=["gmail", "google_sheets"], # Integrações com plataformas
)
```
### Capacidades de Contexto (Skills, Knowledge)
Estas modificam o **prompt** do agente — injetando expertise, instruções ou dados recuperados antes do agente começar a raciocinar. Não dão aos agentes novas ações; elas moldam como os agentes pensam e a quais informações têm acesso.
```python
from crewai import Agent
agent = Agent(
role="Security Auditor",
goal="Audit cloud infrastructure for vulnerabilities",
backstory="Expert in cloud security with 10 years of experience",
skills=["./skills/security-audit"], # Instruções de domínio
knowledge_sources=[pdf_source, url_source], # Fatos recuperados
)
```
---
## Quando Usar o Quê
| Você precisa... | Use | Exemplo |
| :------------------------------------------------------- | :---------------- | :--------------------------------------- |
| Agente buscar na web | **Ferramentas** | `tools=[SerperDevTool()]` |
| Agente chamar uma API remota via MCP | **MCPs** | `mcps=["https://api.example.com/sse"]` |
| Agente enviar emails pelo Gmail | **Apps** | `apps=["gmail"]` |
| Agente seguir procedimentos específicos | **Skills** | `skills=["./skills/code-review"]` |
| Agente consultar documentos da empresa | **Knowledge** | `knowledge_sources=[pdf_source]` |
| Agente buscar na web E seguir diretrizes de revisão | **Ferramentas + Skills** | Use ambos juntos |
---
## Combinando Capacidades
Na prática, agentes frequentemente usam **múltiplos tipos de capacidades juntos**. Aqui está um exemplo realista:
```python
from crewai import Agent
from crewai_tools import SerperDevTool, FileReadTool, CodeInterpreterTool
# Um agente de pesquisa totalmente equipado
researcher = Agent(
role="Senior Research Analyst",
goal="Produce comprehensive market analysis reports",
backstory="Expert analyst with deep industry knowledge",
# AÇÃO: O que o agente pode FAZER
tools=[
SerperDevTool(), # Buscar na web
FileReadTool(), # Ler arquivos locais
CodeInterpreterTool(), # Executar código Python para análise
],
mcps=["https://data-api.example.com/sse"], # Acessar API de dados remota
apps=["google_sheets"], # Escrever no Google Sheets
# CONTEXTO: O que o agente SABE
skills=["./skills/research-methodology"], # Como conduzir pesquisas
knowledge_sources=[company_docs], # Dados específicos da empresa
)
```
---
## Tabela Comparativa
| Característica | Ferramentas | MCPs | Apps | Skills | Knowledge |
| :--- | :---: | :---: | :---: | :---: | :---: |
| **Dá ações ao agente** | ✅ | ✅ | ✅ | ❌ | ❌ |
| **Modifica o prompt** | ❌ | ❌ | ❌ | ✅ | ✅ |
| **Requer código** | Sim | Apenas config | Apenas config | Apenas Markdown | Apenas config |
| **Executa localmente** | Sim | Depende | Sim (com variável de ambiente) | N/A | Sim |
| **Precisa de chaves API** | Por ferramenta | Por servidor | Token de integração | Não | Apenas embedder |
| **Definido no Agent** | `tools=[]` | `mcps=[]` | `apps=[]` | `skills=[]` | `knowledge_sources=[]` |
| **Definido no Crew** | ❌ | ❌ | ❌ | `skills=[]` | `knowledge_sources=[]` |
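Abaixo, um esboço mínimo traduzindo as diferenças da tabela em código. Assume que integrações de Apps autenticam via um token de integração; o nome da variável de ambiente (`CREWAI_PLATFORM_INTEGRATION_TOKEN`), o token e o identificador do app são suposições ilustrativas:
```python
import os
from crewai import Agent

# Suposição: Apps autenticam com um token de integração da plataforma (valor de exemplo)
os.environ.setdefault("CREWAI_PLATFORM_INTEGRATION_TOKEN", "SEU_TOKEN")

agent = Agent(
    role="Ops Assistant",
    goal="Send status updates",
    backstory="Reliable operations helper",
    apps=["gmail"],                    # ação: requer token de integração (coluna "Apps")
    skills=["./skills/ops-playbook"],  # contexto: modifica o prompt, sem chave de API
)
```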
---
## Aprofundamentos
Pronto para aprender mais sobre cada tipo de capacidade?
<CardGroup cols={2}>
<Card title="Ferramentas" icon="wrench" href="/pt-BR/concepts/tools">
Crie ferramentas personalizadas, use o catálogo OSS com 75+ opções, configure cache e execução assíncrona.
</Card>
<Card title="Integração MCP" icon="plug" href="/pt-BR/mcp/overview">
Conecte-se a servidores MCP via stdio, SSE ou HTTP. Filtre ferramentas, configure autenticação.
</Card>
<Card title="Skills" icon="bolt" href="/pt-BR/concepts/skills">
Construa pacotes de skills com SKILL.md, injete expertise de domínio, use divulgação progressiva.
</Card>
<Card title="Knowledge" icon="book" href="/pt-BR/concepts/knowledge">
Adicione conhecimento de PDFs, CSVs, URLs e mais. Configure embedders e recuperação.
</Card>
</CardGroup>

View File

@@ -1,27 +1,186 @@
---
title: Skills
description: Pacotes de skills baseados em sistema de arquivos que injetam contexto nos prompts dos agentes.
description: Pacotes de skills baseados em sistema de arquivos que injetam expertise de domínio e instruções nos prompts dos agentes.
icon: bolt
mode: "wide"
---
## Visão Geral
Skills são diretórios autocontidos que fornecem aos agentes instruções, referências e assets específicos de domínio. Cada skill é definida por um arquivo `SKILL.md` com frontmatter YAML e um corpo em markdown.
Skills são diretórios autocontidos que fornecem aos agentes **instruções, diretrizes e material de referência específicos de domínio**. Cada skill é definida por um arquivo `SKILL.md` com frontmatter YAML e um corpo em markdown.
Skills usam **divulgação progressiva** — metadados são carregados primeiro, instruções completas apenas quando ativadas, e catálogos de recursos apenas quando necessário.
Quando ativada, as instruções de uma skill são injetadas diretamente no prompt da tarefa do agente — dando ao agente expertise sem exigir alterações de código.
## Estrutura de Diretório
<Note type="info" title="Skills vs Ferramentas — A Distinção Fundamental">
**Skills NÃO são ferramentas.** Este é o ponto de confusão mais comum.
- **Skills** injetam *instruções e contexto* no prompt do agente. Elas dizem ao agente *como pensar* sobre um problema.
- **Ferramentas** dão ao agente *funções chamáveis* para tomar ações (buscar, ler arquivos, chamar APIs).
Frequentemente você precisa de **ambos**: skills para expertise, ferramentas para ação. Eles são configurados independentemente e se complementam.
</Note>
---
## Início Rápido
### 1. Crie um Diretório de Skill
```
my-skill/
├── SKILL.md       # Obrigatório — frontmatter + instruções
├── scripts/       # Opcional — scripts executáveis
├── references/    # Opcional — documentos de referência
└── assets/        # Opcional — arquivos estáticos (configs, dados)
skills/
└── code-review/
    ├── SKILL.md       # Obrigatório — instruções
    ├── references/    # Opcional — documentos de referência
    │   └── style-guide.md
    └── scripts/       # Opcional — scripts executáveis
```
O nome do diretório deve corresponder ao campo `name` no `SKILL.md`.
### 2. Escreva seu SKILL.md
```markdown
---
name: code-review
description: Guidelines for conducting thorough code reviews with focus on security and performance.
metadata:
author: your-team
version: "1.0"
---
## Diretrizes de Code Review
Ao revisar código, siga esta checklist:
1. **Segurança**: Verifique vulnerabilidades de injeção, bypasses de autenticação e exposição de dados
2. **Performance**: Procure por queries N+1, alocações desnecessárias e chamadas bloqueantes
3. **Legibilidade**: Garanta nomenclatura clara, comentários apropriados e estilo consistente
4. **Testes**: Verifique cobertura adequada de testes para novas funcionalidades
### Níveis de Severidade
- **Crítico**: Vulnerabilidades de segurança, riscos de perda de dados → bloquear merge
- **Major**: Problemas de performance, erros de lógica → solicitar alterações
- **Minor**: Questões de estilo, sugestões de nomenclatura → aprovar com comentários
```
### 3. Anexe a um Agente
```python
from crewai import Agent
from crewai_tools import GithubSearchTool, FileReadTool
reviewer = Agent(
role="Senior Code Reviewer",
goal="Review pull requests for quality and security issues",
backstory="Staff engineer with expertise in secure coding practices.",
skills=["./skills"], # Injeta diretrizes de revisão
tools=[GithubSearchTool(), FileReadTool()], # Permite ao agente ler código
)
```
O agente agora tem tanto **expertise** (da skill) quanto **capacidades** (das ferramentas).
---
## Skills + Ferramentas: Trabalhando Juntos
Aqui estão padrões comuns mostrando como skills e ferramentas se complementam:
### Padrão 1: Apenas Skills (Expertise de Domínio, Sem Ações Necessárias)
Use quando o agente precisa de instruções específicas mas não precisa chamar serviços externos:
```python
agent = Agent(
role="Technical Writer",
goal="Write clear API documentation",
backstory="Expert technical writer",
skills=["./skills/api-docs-style"], # Diretrizes e templates de escrita
# Sem ferramentas necessárias — agente escreve baseado no contexto fornecido
)
```
### Padrão 2: Apenas Ferramentas (Ações, Sem Expertise Especial)
Use quando o agente precisa tomar ações mas não precisa de instruções específicas de domínio:
```python
from crewai_tools import SerperDevTool, ScrapeWebsiteTool
agent = Agent(
role="Web Researcher",
goal="Find information about a topic",
backstory="Skilled at finding information online",
tools=[SerperDevTool(), ScrapeWebsiteTool()], # Pode buscar e extrair dados
# Sem skills necessárias — pesquisa geral não precisa de diretrizes especiais
)
```
### Padrão 3: Skills + Ferramentas (Expertise E Ações)
O padrão mais comum no mundo real. A skill fornece *como* abordar o trabalho; ferramentas fornecem *o que* o agente pode fazer:
```python
from crewai_tools import SerperDevTool, FileReadTool, CodeInterpreterTool
analyst = Agent(
role="Security Analyst",
goal="Audit infrastructure for vulnerabilities",
backstory="Expert in cloud security and compliance",
skills=["./skills/security-audit"], # Metodologia e checklists de auditoria
tools=[
SerperDevTool(), # Pesquisar vulnerabilidades conhecidas
FileReadTool(), # Ler arquivos de configuração
CodeInterpreterTool(), # Executar scripts de análise
],
)
```
### Padrão 4: Skills + MCPs
Skills funcionam junto com servidores MCP da mesma forma que com ferramentas:
```python
agent = Agent(
role="Data Analyst",
goal="Analyze customer data and generate reports",
backstory="Expert data analyst with strong statistical background",
skills=["./skills/data-analysis"], # Metodologia de análise
mcps=["https://data-warehouse.example.com/sse"], # Acesso remoto a dados
)
```
### Padrão 5: Skills + Apps
Skills podem guiar como um agente usa integrações de plataforma:
```python
agent = Agent(
role="Customer Support Agent",
goal="Respond to customer inquiries professionally",
backstory="Experienced support representative",
skills=["./skills/support-playbook"], # Templates de resposta e regras de escalação
apps=["gmail", "zendesk"], # Pode enviar emails e atualizar tickets
)
```
---
## Skills no Nível do Crew
Skills podem ser definidas no crew para aplicar a **todos os agentes**:
```python
from crewai import Crew
crew = Crew(
agents=[researcher, writer, reviewer],
tasks=[research_task, write_task, review_task],
skills=["./skills"], # Todos os agentes recebem essas skills
)
```
Skills no nível do agente têm prioridade — se a mesma skill é descoberta em ambos os níveis, a versão do agente é usada.
---
## Formato do SKILL.md
@@ -34,7 +193,7 @@ compatibility: crewai>=0.1.0 # opcional
metadata: # opcional
author: your-name
version: "1.0"
allowed-tools: web-search file-read # opcional, delimitado por espaços
allowed-tools: web-search file-read # opcional, experimental
---
Instruções para o agente vão aqui. Este corpo em markdown é injetado
@@ -43,57 +202,46 @@ no prompt do agente quando a skill é ativada.
### Campos do Frontmatter
| Campo | Obrigatório | Restrições |
| Campo | Obrigatório | Descrição |
| :-------------- | :---------- | :----------------------------------------------------------------------- |
| `name` | Sim | 1–64 chars. Alfanumérico minúsculo e hifens. Sem hifens iniciais/finais/consecutivos. Deve corresponder ao nome do diretório. |
| `name` | Sim | 1–64 chars. Alfanumérico minúsculo e hifens. Deve corresponder ao nome do diretório. |
| `description` | Sim | 1–1024 chars. Descreve o que a skill faz e quando usá-la. |
| `license` | Não | Nome da licença ou referência a um arquivo de licença incluído. |
| `compatibility` | Não | Máx 500 chars. Requisitos de ambiente (produtos, pacotes, rede). |
| `metadata` | Não | Mapeamento arbitrário de chave-valor string. |
| `allowed-tools` | Não | Lista de ferramentas pré-aprovadas delimitada por espaços. Experimental. |
## Uso
---
### Skills no Nível do Agente
## Estrutura de Diretório
Passe caminhos de diretório de skills para um agente:
```python
from crewai import Agent
agent = Agent(
role="Researcher",
goal="Find relevant information",
backstory="An expert researcher.",
skills=["./skills"], # descobre todas as skills neste diretório
)
```
my-skill/
├── SKILL.md # Obrigatório — frontmatter + instruções
├── scripts/ # Opcional — scripts executáveis
├── references/ # Opcional — documentos de referência
└── assets/ # Opcional — arquivos estáticos (configs, dados)
```
### Skills no Nível do Crew
O nome do diretório deve corresponder ao campo `name` no `SKILL.md`. Os diretórios `scripts/`, `references/` e `assets/` estão disponíveis no `path` da skill para agentes que precisam referenciar arquivos diretamente.
Caminhos de skills no crew são mesclados em todos os agentes:
---
```python
from crewai import Crew
## Skills Pré-carregadas
crew = Crew(
agents=[agent],
tasks=[task],
skills=["./skills"],
)
```
### Skills Pré-carregadas
Você também pode passar objetos `Skill` diretamente:
Para mais controle, você pode descobrir e ativar skills programaticamente:
```python
from pathlib import Path
from crewai.skills import discover_skills, activate_skill
# Descobrir todas as skills em um diretório
skills = discover_skills(Path("./skills"))
# Ativá-las (carrega o corpo completo do SKILL.md)
activated = [activate_skill(s) for s in skills]
# Passar para um agente
agent = Agent(
role="Researcher",
goal="Find relevant information",
@@ -102,13 +250,57 @@ agent = Agent(
)
```
---
## Como as Skills São Carregadas
Skills carregam progressivamente — apenas os dados necessários em cada etapa são lidos:
Skills usam **divulgação progressiva** — carregando apenas o necessário em cada estágio:
| Etapa | O que é carregado | Quando |
| :--------------- | :------------------------------------------------ | :------------------ |
| Descoberta | Nome, descrição, campos do frontmatter | `discover_skills()` |
| Ativação | Texto completo do corpo do SKILL.md | `activate_skill()` |
| Estágio | O que é carregado | Quando |
| :--------- | :------------------------------------ | :------------------ |
| Descoberta | Nome, descrição, campos do frontmatter | `discover_skills()` |
| Ativação | Texto completo do corpo do SKILL.md | `activate_skill()` |
Durante a execução normal do agente, skills são automaticamente descobertas e ativadas. Os diretórios `scripts/`, `references/` e `assets/` estão disponíveis no `path` da skill para agentes que precisam referenciar arquivos diretamente.
Durante a execução normal do agente (passando caminhos de diretório via `skills=["./skills"]`), skills são automaticamente descobertas e ativadas. O carregamento progressivo só importa quando usando a API programática.
---
## Skills vs Knowledge
Tanto skills quanto knowledge modificam o prompt do agente, mas servem propósitos diferentes:
| Aspecto | Skills | Knowledge |
| :--- | :--- | :--- |
| **O que fornece** | Instruções, procedimentos, diretrizes | Fatos, dados, informações |
| **Como é armazenado** | Arquivos Markdown (SKILL.md) | Embarcado em banco vetorial (ChromaDB) |
| **Como é recuperado** | Corpo inteiro injetado no prompt | Busca semântica encontra trechos relevantes |
| **Melhor para** | Metodologia, checklists, guias de estilo | Documentos da empresa, info de produto, dados de referência |
| **Definido via** | `skills=["./skills"]` | `knowledge_sources=[source]` |
**Regra prática:** Se o agente precisa seguir um *processo*, use uma skill. Se o agente precisa consultar *dados*, use knowledge.
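Um esboço mínimo colocando a regra prática em código. O diretório de skills e o caminho do PDF são suposições ilustrativas:
```python
from crewai import Agent
from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource

# Suposição: manual do produto em PDF disponível localmente (caminho de exemplo)
manual = PDFKnowledgeSource(file_paths=["product-manual.pdf"])

agent = Agent(
    role="Support Engineer",
    goal="Resolve customer issues accurately",
    backstory="Experienced support engineer",
    skills=["./skills/triage-process"],  # processo: como triar problemas
    knowledge_sources=[manual],          # dados: fatos sobre o produto
)
```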
---
## Perguntas Frequentes
<AccordionGroup>
<Accordion title="Preciso definir skills E ferramentas?">
Depende do seu caso de uso. Skills e ferramentas são **independentes** — você pode usar qualquer um, ambos ou nenhum.
- **Apenas skills**: Quando o agente precisa de expertise mas não de ações externas (ex: escrever com diretrizes de estilo)
- **Apenas ferramentas**: Quando o agente precisa de ações mas não de metodologia especial (ex: busca simples na web)
- **Ambos**: Quando o agente precisa de expertise E ações (ex: auditoria de segurança com checklists específicas E capacidade de escanear código)
</Accordion>
<Accordion title="Skills fornecem ferramentas automaticamente?">
**Não.** O campo `allowed-tools` no SKILL.md é apenas metadado experimental — ele não provisiona nem injeta nenhuma ferramenta. Você deve sempre definir ferramentas separadamente via `tools=[]`, `mcps=[]` ou `apps=[]`.
</Accordion>
<Accordion title="O que acontece se eu definir a mesma skill tanto no agente quanto no crew?">
A skill no nível do agente tem prioridade. Skills são deduplicadas por nome — as skills do agente são processadas primeiro, então se o mesmo nome de skill aparece em ambos os níveis, a versão do agente é usada.
</Accordion>
<Accordion title="Qual o tamanho máximo do corpo do SKILL.md?">
Há um aviso suave em 50.000 caracteres, mas sem limite rígido. Mantenha skills focadas e concisas para melhores resultados — injeções de prompt muito grandes podem diluir a atenção do agente.
</Accordion>
</AccordionGroup>

View File

@@ -10,6 +10,10 @@ mode: "wide"
As ferramentas do CrewAI capacitam agentes com habilidades que vão desde busca na web e análise de dados até colaboração e delegação de tarefas entre colegas de trabalho.
Esta documentação descreve como criar, integrar e aproveitar essas ferramentas dentro do framework CrewAI, incluindo um novo foco em ferramentas de colaboração.
<Note type="info" title="Ferramentas são um dos cinco tipos de capacidades de agentes">
Ferramentas dão aos agentes **funções chamáveis** para tomar ações. Elas funcionam junto com [MCPs](/pt-BR/mcp/overview) (servidores de ferramentas remotos), [Apps](/pt-BR/concepts/agent-capabilities) (integrações com plataformas), [Skills](/pt-BR/concepts/skills) (expertise de domínio) e [Knowledge](/pt-BR/concepts/knowledge) (fatos recuperados). Veja a visão geral de [Capacidades do Agente](/pt-BR/concepts/agent-capabilities) para entender quando usar cada um.
</Note>
## O que é uma Ferramenta?
Uma ferramenta no CrewAI é uma habilidade ou função que os agentes podem utilizar para executar diversas ações.

View File

@@ -1,22 +1,24 @@
---
title: "Controle de Acesso Baseado em Funções (RBAC)"
description: "Controle o acesso a crews, ferramentas e dados com funções e visibilidade por automação."
description: "Controle o acesso a crews, ferramentas e dados com funções, escopos e permissões granulares."
icon: "shield"
mode: "wide"
---
## Visão Geral
O RBAC no CrewAI AMP permite gerenciar acesso de forma segura e escalável combinando **funções em nível de organização** com **controles de visibilidade em nível de automação**.
O RBAC no CrewAI AMP permite gerenciamento de acesso seguro e escalável através de duas camadas:
1. **Permissões de funcionalidade** — controlam o que cada função pode fazer na plataforma (gerenciar, ler ou sem acesso)
2. **Permissões em nível de entidade** — acesso granular em automações individuais, variáveis de ambiente, conexões LLM e repositórios Git
<Frame>
<img src="/images/enterprise/users_and_roles.png" alt="Visão geral de RBAC no CrewAI AMP" />
</Frame>
## Usuários e Funções
Cada membro da sua workspace possui uma função, que determina o acesso aos recursos.
Cada membro da sua workspace CrewAI recebe uma função, que determina seu acesso aos diversos recursos.
Você pode:
@@ -31,14 +33,21 @@ A configuração de usuários e funções é feita em Settings → Roles.
Vá em <b>Settings → Roles</b> no CrewAI AMP.
</Step>
<Step title="Escolher a função">
Use <b>Owner</b> ou <b>Member</b>, ou clique em <b>Create role</b> para
criar uma função personalizada.
Use uma função pré-definida (<b>Owner</b>, <b>Member</b>) ou clique em{" "}
<b>Create role</b> para criar uma personalizada.
</Step>
<Step title="Atribuir aos membros">
Selecione os usuários e atribua a função. Você pode alterar depois.
</Step>
</Steps>
### Funções Pré-definidas
| Função | Descrição |
| :--------- | :------------------------------------------------------------------------ |
| **Owner** | Acesso total a todas as funcionalidades e configurações. Não pode ser restrito. |
| **Member** | Acesso de leitura à maioria das funcionalidades, acesso de gerenciamento a variáveis de ambiente, conexões LLM e projetos Studio. Não pode modificar configurações da organização ou padrões. |
### Resumo de configuração
| Área | Onde configurar | Opções |
@@ -46,35 +55,93 @@ A configuração de usuários e funções é feita em Settings → Roles.
| Usuários & Funções | Settings → Roles | Pré-definidas: Owner, Member; Funções personalizadas |
| Visibilidade da automação | Automation → Settings → Visibility | Private; Lista de usuários/funções |
## Controle de Acesso em Nível de Automação
---
Além das funções na organização, as **Automations** suportam visibilidade refinada para restringir acesso por usuário ou função.
## Matriz de Permissões de Funcionalidades
Útil para:
Cada função possui um nível de permissão para cada área de funcionalidade. Os três níveis são:
- Manter automações sensíveis/experimentais privadas
- **Manage** — acesso total de leitura/escrita (criar, editar, excluir)
- **Read** — acesso somente leitura
- **No access** — funcionalidade oculta/inacessível
| Funcionalidade | Owner | Member (padrão) | Níveis disponíveis | Descrição |
| :------------------------ | :------ | :--------------- | :------------------------ | :-------------------------------------------------------------- |
| `usage_dashboards` | Manage | Read | Manage / Read / No access | Visualizar métricas e análises de uso |
| `crews_dashboards` | Manage | Read | Manage / Read / No access | Visualizar dashboards de deploy, acessar detalhes de automações |
| `invitations` | Manage | Read | Manage / Read / No access | Convidar novos membros para a organização |
| `training_ui` | Manage | Read | Manage / Read / No access | Acessar interfaces de treinamento/fine-tuning |
| `tools` | Manage | Read | Manage / Read / No access | Criar e gerenciar ferramentas |
| `agents` | Manage | Read | Manage / Read / No access | Criar e gerenciar agentes |
| `environment_variables` | Manage | Manage | Manage / No access | Criar e gerenciar variáveis de ambiente |
| `llm_connections` | Manage | Manage | Manage / No access | Configurar conexões de provedores LLM |
| `default_settings` | Manage | No access | Manage / No access | Modificar configurações padrão da organização |
| `organization_settings` | Manage | No access | Manage / No access | Gerenciar cobrança, planos e configuração da organização |
| `studio_projects` | Manage | Manage | Manage / No access | Criar e editar projetos no Studio |
<Tip>
Ao criar uma função personalizada, a maioria das funcionalidades pode ser definida como **Manage**, **Read** ou **No access**. No entanto, `environment_variables`, `llm_connections`, `default_settings`, `organization_settings` e `studio_projects` suportam apenas **Manage** ou **No access** — não há opção somente leitura para essas funcionalidades.
</Tip>
---
## Deploy via GitHub ou Zip
Uma das perguntas mais comuns sobre RBAC é: _"Quais permissões um membro da equipe precisa para fazer deploy?"_
### Deploy via GitHub
Para fazer deploy de uma automação a partir de um repositório GitHub, o usuário precisa de:
1. **`crews_dashboards`**: pelo menos `Read` — necessário para acessar o dashboard de automações onde os deploys são criados
2. **Acesso ao repositório Git** (se RBAC em nível de entidade para repositórios Git estiver habilitado): a função do usuário deve ter acesso ao repositório Git específico via permissões de entidade
3. **`studio_projects`: `Manage`** — se estiver construindo o crew no Studio antes do deploy
### Deploy via Zip
Para fazer deploy de uma automação via upload de arquivo Zip, o usuário precisa de:
1. **`crews_dashboards`**: pelo menos `Read` — necessário para acessar o dashboard de automações
2. **Deploys via Zip habilitados**: a organização não deve ter desabilitado deploys via Zip nas configurações da organização
### Referência Rápida: Permissões Mínimas para Deploy
| Ação | Permissões de funcionalidade necessárias | Requisitos adicionais |
| :------------------------- | :--------------------------------------- | :------------------------------------------------ |
| Deploy via GitHub | `crews_dashboards: Read` | Acesso à entidade do repositório Git (se habilitado) |
| Deploy via Zip | `crews_dashboards: Read` | Deploys via Zip devem estar habilitados na organização |
| Construir no Studio | `studio_projects: Manage` | — |
| Configurar chaves LLM | `llm_connections: Manage` | — |
| Definir variáveis de ambiente | `environment_variables: Manage` | Acesso em nível de entidade (se habilitado) |
---
## Automation-Level Access Control (Entity Permissions)
Beyond organization-level roles, CrewAI supports granular entity-level permissions that restrict access to individual resources.
### Automation Visibility
Automations support visibility settings that restrict access by user or role. This is useful for:
- Keeping sensitive or experimental automations private
- Managing visibility across large teams or with external collaborators
- Testing automations in an isolated context
Deployments can be marked private, in which case only whitelisted users and roles can:
- See the automation
- Run it and use its API
- Access its logs, metrics, and settings
The organization owner always has access, regardless of visibility.
Configure this in Automation → Settings → Visibility tab.
<Steps>
<Step title="Open the Visibility tab">
Go to <b>Automation → Settings → Visibility</b>.
</Step>
<Step title="Set visibility">
Select <b>Private</b> to restrict access. The organization owner
always retains access.
</Step>
<Step title="Grant access">
Add the users and roles that will be able to view, run, and access
logs/metrics/settings.
</Step>
<Step title="Save and verify">
@@ -97,9 +164,92 @@ Configure this in Automation → Settings → Visibility.
<Frame>
<img src="/images/enterprise/visibility.png" alt="Visibility configuration in CrewAI AMP" />
</Frame>
### Deploy Permission Types
When granting entity-level access to a specific automation, you can assign these permission types:
| Permission | What it allows |
| :------------------- | :-------------------------------------------------- |
| `run` | Run the automation and use its API |
| `traces` | View execution traces and logs |
| `manage_settings` | Edit, redeploy, roll back, or delete the automation |
| `human_in_the_loop` | Respond to human-in-the-loop (HITL) requests |
| `full_access` | All of the above |
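For example, a principal granted only `run` can trigger the automation's kickoff API but cannot change its settings. A minimal sketch, assuming the typical AMP deployment URL pattern and a bearer token (both are placeholders here):
```python
# Minimal sketch: the URL and token are placeholders. Calling this endpoint
# requires the `run` (or `full_access`) permission on the automation.
import requests

resp = requests.post(
    "https://<your-automation>.crewai.com/kickoff",
    headers={"Authorization": "Bearer <your-token>"},
    json={"inputs": {"topic": "quarterly report"}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # typically includes a kickoff_id to poll for status
```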
### Entity-Level RBAC for Other Resources
When entity-level RBAC is enabled, access to these resources can also be controlled per user or role:
| Resource | Controlled by | Description |
| :--------------------- | :------------------------------------- | :------------------------------------------------------------- |
| Environment variables | Entity RBAC feature flag | Restrict which roles/users can view or manage specific variables |
| LLM connections | Entity RBAC feature flag | Restrict access to specific LLM provider configurations |
| Git repositories | Git repositories RBAC setting | Restrict which roles/users can access specific connected repositories |
---
## Common Role Patterns
While CrewAI ships with the Owner and Member roles, most teams benefit from creating custom roles. Here are some common patterns:
### Developer Role
A role for team members who build and deploy automations but do not manage organization settings.
| Feature | Permission |
| :------------------------ | :--------- |
| `usage_dashboards` | Read |
| `crews_dashboards` | Manage |
| `invitations` | Read |
| `training_ui` | Read |
| `tools` | Manage |
| `agents` | Manage |
| `environment_variables` | Manage |
| `llm_connections` | Manage |
| `default_settings` | No access |
| `organization_settings` | No access |
| `studio_projects` | Manage |
### Viewer / Stakeholder Role
A role for non-technical stakeholders who need to monitor automations and view results.
| Feature | Permission |
| :------------------------ | :--------- |
| `usage_dashboards` | Read |
| `crews_dashboards` | Read |
| `invitations` | No access |
| `training_ui` | Read |
| `tools` | Read |
| `agents` | Read |
| `environment_variables` | No access |
| `llm_connections` | No access |
| `default_settings` | No access |
| `organization_settings` | No access |
| `studio_projects` | No access |
### Ops / Platform Admin Role
A role for platform operators who manage infrastructure settings but may not build agents themselves.
| Feature | Permission |
| :------------------------ | :--------- |
| `usage_dashboards` | Manage |
| `crews_dashboards` | Manage |
| `invitations` | Manage |
| `training_ui` | Read |
| `tools` | Read |
| `agents` | Read |
| `environment_variables` | Manage |
| `llm_connections` | Manage |
| `default_settings` | Manage |
| `organization_settings` | Read |
| `studio_projects` | No access |
---
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Talk to our team for help configuring RBAC.
</Card>

View File

@@ -152,4 +152,4 @@ __all__ = [
"wrap_file_source",
]
__version__ = "1.12.0"
__version__ = "1.13.0a5"

View File

@@ -11,7 +11,7 @@ dependencies = [
"pytube~=15.0.0",
"requests~=2.32.5",
"docker~=7.1.0",
"crewai==1.12.0a3",
"crewai==1.13.0a5",
"tiktoken~=0.8.0",
"beautifulsoup4~=4.13.4",
"python-docx~=1.2.0",

View File

@@ -309,4 +309,4 @@ __all__ = [
"ZapierActionTools",
]
__version__ = "1.12.0"
__version__ = "1.13.0a5"

View File

@@ -14281,10 +14281,349 @@
],
"title": "EnvVar",
"type": "object"
},
"JsonResponseFormat": {
"description": "Response format requesting raw JSON output (e.g. ``{\"type\": \"json_object\"}``).",
"properties": {
"type": {
"const": "json_object",
"title": "Type",
"type": "string"
}
},
"required": [
"type"
],
"title": "JsonResponseFormat",
"type": "object"
},
"LLM": {
"properties": {
"additional_params": {
"additionalProperties": true,
"title": "Additional Params",
"type": "object"
},
"api_base": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Api Base"
},
"api_key": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Api Key"
},
"api_version": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Api Version"
},
"base_url": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Base Url"
},
"callbacks": {
"anyOf": [
{
"items": {},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"title": "Callbacks"
},
"completion_cost": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"title": "Completion Cost"
},
"context_window_size": {
"default": 0,
"title": "Context Window Size",
"type": "integer"
},
"frequency_penalty": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"title": "Frequency Penalty"
},
"interceptor": {
"default": null,
"title": "Interceptor"
},
"is_anthropic": {
"default": false,
"title": "Is Anthropic",
"type": "boolean"
},
"is_litellm": {
"default": false,
"title": "Is Litellm",
"type": "boolean"
},
"logit_bias": {
"anyOf": [
{
"additionalProperties": {
"type": "number"
},
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"title": "Logit Bias"
},
"logprobs": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Logprobs"
},
"max_completion_tokens": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Max Completion Tokens"
},
"max_tokens": {
"anyOf": [
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"title": "Max Tokens"
},
"model": {
"title": "Model",
"type": "string"
},
"n": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "N"
},
"prefer_upload": {
"default": false,
"title": "Prefer Upload",
"type": "boolean"
},
"presence_penalty": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"title": "Presence Penalty"
},
"provider": {
"default": "openai",
"title": "Provider",
"type": "string"
},
"reasoning_effort": {
"anyOf": [
{
"enum": [
"none",
"low",
"medium",
"high"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Reasoning Effort"
},
"response_format": {
"anyOf": [
{
"$ref": "#/$defs/JsonResponseFormat"
},
{},
{
"type": "null"
}
],
"default": null,
"title": "Response Format"
},
"seed": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Seed"
},
"stop": {
"items": {
"type": "string"
},
"title": "Stop",
"type": "array"
},
"stream": {
"default": false,
"title": "Stream",
"type": "boolean"
},
"temperature": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"title": "Temperature"
},
"thinking": {
"default": null,
"title": "Thinking"
},
"timeout": {
"anyOf": [
{
"type": "number"
},
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Timeout"
},
"top_logprobs": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Top Logprobs"
},
"top_p": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"title": "Top P"
}
},
"required": [
"model"
],
"title": "LLM",
"type": "object"
}
},
"description": "A tool for performing Optical Character Recognition on images.\n\nThis tool leverages LLMs to extract text from images. It can process\nboth local image files and images available via URLs.\n\nAttributes:\n name (str): Name of the tool.\n description (str): Description of the tool's functionality.\n args_schema (Type[BaseModel]): Pydantic schema for input validation.\n\nPrivate Attributes:\n _llm (Optional[LLM]): Language model instance for making API calls.",
"properties": {},
"properties": {
"llm": {
"$ref": "#/$defs/LLM"
}
},
"title": "OCRTool",
"type": "object"
},
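Read as a whole, this hunk gives `OCRTool` a serializable `llm` field typed by the new `LLM` definition. A minimal sketch of what that enables, assuming the usual import paths (`crewai_tools.OCRTool`, `crewai.LLM`), which may differ by version:
```python
# Sketch based on the schema above; import paths are assumptions.
from crewai import LLM
from crewai_tools import OCRTool

ocr = OCRTool(llm=LLM(model="gpt-4o"))
schema = OCRTool.model_json_schema()
print("llm" in schema["properties"])  # True under the updated schema
```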

View File

@@ -43,7 +43,7 @@ dependencies = [
"uv~=0.9.13",
"aiosqlite~=0.21.0",
"pyyaml~=6.0",
"lancedb>=0.29.2",
"lancedb>=0.29.2,<0.30.1",
]
[project.urls]
@@ -54,7 +54,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
"crewai-tools==1.12.0a3",
"crewai-tools==1.13.0a5",
]
embeddings = [
"tiktoken~=0.8.0"

View File

@@ -4,6 +4,8 @@ from typing import Any
import urllib.request
import warnings
from pydantic import PydanticUserError
from crewai.agent.core import Agent
from crewai.agent.planning_config import PlanningConfig
from crewai.crew import Crew
@@ -42,7 +44,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
__version__ = "1.12.0"
__version__ = "1.13.0a5"
_telemetry_submitted = False
@@ -93,6 +95,38 @@ def __getattr__(name: str) -> Any:
raise AttributeError(f"module 'crewai' has no attribute {name!r}")
try:
from crewai.agents.tools_handler import ToolsHandler as _ToolsHandler
from crewai.experimental.agent_executor import AgentExecutor as _AgentExecutor
from crewai.hooks.llm_hooks import LLMCallHookContext as _LLMCallHookContext
from crewai.tools.tool_types import ToolResult as _ToolResult
from crewai.utilities.prompts import (
StandardPromptResult as _StandardPromptResult,
SystemPromptResult as _SystemPromptResult,
)
_AgentExecutor.model_rebuild(
force=True,
_types_namespace={
"Agent": Agent,
"ToolsHandler": _ToolsHandler,
"Crew": Crew,
"BaseLLM": BaseLLM,
"Task": Task,
"StandardPromptResult": _StandardPromptResult,
"SystemPromptResult": _SystemPromptResult,
"LLMCallHookContext": _LLMCallHookContext,
"ToolResult": _ToolResult,
},
)
except (ImportError, PydanticUserError):
import logging as _logging
_logging.getLogger(__name__).warning(
"AgentExecutor.model_rebuild() failed; forward refs may be unresolved.",
exc_info=True,
)
__all__ = [
"LLM",
"Agent",

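The `_types_namespace` rebuild above resolves forward references that cannot be imported at module load time without creating import cycles. A generic Pydantic v2 sketch of the same mechanism, unrelated to CrewAI's actual types:
```python
# Generic Pydantic v2 forward-reference pattern; not CrewAI-specific code.
from pydantic import BaseModel

class Node(BaseModel):
    value: int
    child: "Node | None" = None  # string annotation = forward reference

Node.model_rebuild()  # resolve "Node" before first validation
root = Node(value=1, child=Node(value=2))
print(root.child.value)  # 2
```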
View File

@@ -25,7 +25,6 @@ from pydantic import (
BaseModel,
ConfigDict,
Field,
InstanceOf,
PrivateAttr,
model_validator,
)
@@ -167,10 +166,10 @@ class Agent(BaseAgent):
default=True,
description="Use system prompt for the agent.",
)
llm: str | InstanceOf[BaseLLM] | None = Field(
llm: str | BaseLLM | None = Field(
description="Language model that will run the agent.", default=None
)
function_calling_llm: str | InstanceOf[BaseLLM] | None = Field(
function_calling_llm: str | BaseLLM | None = Field(
description="Language model that will run the agent.", default=None
)
system_template: str | None = Field(
@@ -1012,7 +1011,7 @@ class Agent(BaseAgent):
self.agent_executor.tools = tools
self.agent_executor.original_tools = raw_tools
self.agent_executor.prompt = prompt
self.agent_executor.stop = stop_words
self.agent_executor.stop_words = stop_words
self.agent_executor.tools_names = get_tool_names(tools)
self.agent_executor.tools_description = render_text_description_and_args(tools)
self.agent_executor.response_model = (

View File

@@ -12,7 +12,6 @@ from pydantic import (
UUID4,
BaseModel,
Field,
InstanceOf,
PrivateAttr,
field_validator,
model_validator,
@@ -185,7 +184,7 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
default=None,
description="Knowledge sources for the agent.",
)
knowledge_storage: InstanceOf[BaseKnowledgeStorage] | None = Field(
knowledge_storage: BaseKnowledgeStorage | None = Field(
default=None,
description="Custom knowledge storage for the agent.",
)

View File

@@ -73,6 +73,7 @@ class PlusAPI:
description: str | None,
encoded_file: str,
available_exports: list[dict[str, Any]] | None = None,
tools_metadata: list[dict[str, Any]] | None = None,
) -> httpx.Response:
params = {
"handle": handle,
@@ -81,6 +82,9 @@ class PlusAPI:
"file": encoded_file,
"description": description,
"available_exports": available_exports,
"tools_metadata": {"package": handle, "tools": tools_metadata}
if tools_metadata is not None
else None,
}
return self._make_request("POST", f"{self.TOOLS_RESOURCE}", json=params)
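Concretely, the wire payload implied by this change nests the metadata under the package handle. A hypothetical example (fields elided by the diff are omitted):
```python
# Hypothetical payload shape; values are illustrative, not real API output.
params = {
    "handle": "my-tool-package",
    "description": "Example tool package",
    "file": "data:application/x-gzip;base64,<encoded tarball>",
    "available_exports": [{"name": "WebSearchTool"}],
    "tools_metadata": {
        "package": "my-tool-package",
        "tools": [{"name": "WebSearchTool", "description": "Searches the web."}],
    },
}
```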

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.12.0a3"
"crewai[tools]==1.13.0a5"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.12.0a3"
"crewai[tools]==1.13.0a5"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.12.0a3"
"crewai[tools]==1.13.0a5"
]
[tool.crewai]

View File

@@ -17,6 +17,7 @@ from crewai.cli.constants import DEFAULT_CREWAI_ENTERPRISE_URL
from crewai.cli.utils import (
build_env_with_tool_repository_credentials,
extract_available_exports,
extract_tools_metadata,
get_project_description,
get_project_name,
get_project_version,
@@ -101,6 +102,18 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
console.print(
f"[green]Found these tools to publish: {', '.join([e['name'] for e in available_exports])}[/green]"
)
console.print("[bold blue]Extracting tool metadata...[/bold blue]")
try:
tools_metadata = extract_tools_metadata()
except Exception as e:
console.print(
f"[yellow]Warning: Could not extract tool metadata: {e}[/yellow]\n"
f"Publishing will continue without detailed metadata."
)
tools_metadata = []
self._print_tools_preview(tools_metadata)
self._print_current_organization()
with tempfile.TemporaryDirectory() as temp_build_dir:
@@ -118,7 +131,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
"Project build failed. Please ensure that the command `uv build --sdist` completes successfully.",
style="bold red",
)
raise SystemExit
raise SystemExit(1)
tarball_path = os.path.join(temp_build_dir, tarball_filename)
with open(tarball_path, "rb") as file:
@@ -134,6 +147,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
description=project_description,
encoded_file=f"data:application/x-gzip;base64,{encoded_tarball}",
available_exports=available_exports,
tools_metadata=tools_metadata,
)
self._validate_response(publish_response)
@@ -246,6 +260,55 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
)
raise SystemExit
def _print_tools_preview(self, tools_metadata: list[dict[str, Any]]) -> None:
if not tools_metadata:
console.print("[yellow]No tool metadata extracted.[/yellow]")
return
console.print(
f"\n[bold]Tools to be published ({len(tools_metadata)}):[/bold]\n"
)
for tool in tools_metadata:
console.print(f" [bold cyan]{tool.get('name', 'Unknown')}[/bold cyan]")
if tool.get("module"):
console.print(f" Module: {tool.get('module')}")
console.print(f" Name: {tool.get('humanized_name', 'N/A')}")
console.print(
f" Description: {tool.get('description', 'N/A')[:80]}{'...' if len(tool.get('description', '')) > 80 else ''}"
)
init_params = tool.get("init_params_schema", {}).get("properties", {})
if init_params:
required = tool.get("init_params_schema", {}).get("required", [])
console.print(" Init parameters:")
for param_name, param_info in init_params.items():
param_type = param_info.get("type", "any")
is_required = param_name in required
req_marker = "[red]*[/red]" if is_required else ""
default = (
f" = {param_info['default']}" if "default" in param_info else ""
)
console.print(
f" - {param_name}: {param_type}{default} {req_marker}"
)
env_vars = tool.get("env_vars", [])
if env_vars:
console.print(" Environment variables:")
for env_var in env_vars:
req_marker = "[red]*[/red]" if env_var.get("required") else ""
default = (
f" (default: {env_var['default']})"
if env_var.get("default")
else ""
)
console.print(
f" - {env_var['name']}: {env_var.get('description', 'N/A')}{default} {req_marker}"
)
console.print()
def _print_current_organization(self) -> None:
settings = Settings()
if settings.org_uuid:

View File

@@ -1,10 +1,15 @@
from functools import reduce
from collections.abc import Generator, Mapping
from contextlib import contextmanager
from functools import lru_cache, reduce
import hashlib
import importlib.util
import inspect
from inspect import getmro, isclass, isfunction, ismethod
import os
from pathlib import Path
import shutil
import sys
import types
from typing import Any, cast, get_type_hints
import click
@@ -544,43 +549,62 @@ def build_env_with_tool_repository_credentials(
return env
@contextmanager
def _load_module_from_file(
init_file: Path, module_name: str | None = None
) -> Generator[types.ModuleType | None, None, None]:
"""
Context manager for loading a module from file with automatic cleanup.
Yields the loaded module or None if loading fails.
"""
if module_name is None:
module_name = (
f"temp_module_{hashlib.sha256(str(init_file).encode()).hexdigest()[:8]}"
)
spec = importlib.util.spec_from_file_location(module_name, init_file)
if not spec or not spec.loader:
yield None
return
module = importlib.util.module_from_spec(spec)
sys.modules[module_name] = module
try:
spec.loader.exec_module(module)
yield module
finally:
sys.modules.pop(module_name, None)
def _load_tools_from_init(init_file: Path) -> list[dict[str, Any]]:
"""
Load and validate tools from a given __init__.py file.
"""
spec = importlib.util.spec_from_file_location("temp_module", init_file)
if not spec or not spec.loader:
return []
module = importlib.util.module_from_spec(spec)
sys.modules["temp_module"] = module
try:
spec.loader.exec_module(module)
with _load_module_from_file(init_file) as module:
if module is None:
return []
if not hasattr(module, "__all__"):
console.print(
f"Warning: No __all__ defined in {init_file}",
style="bold yellow",
)
raise SystemExit(1)
return [
{
"name": name,
}
for name in module.__all__
if hasattr(module, name) and is_valid_tool(getattr(module, name))
]
if not hasattr(module, "__all__"):
console.print(
f"Warning: No __all__ defined in {init_file}",
style="bold yellow",
)
raise SystemExit(1)
return [
{"name": name}
for name in module.__all__
if hasattr(module, name) and is_valid_tool(getattr(module, name))
]
except SystemExit:
raise
except Exception as e:
console.print(f"[red]Warning: Could not load {init_file}: {e!s}[/red]")
raise SystemExit(1) from e
finally:
sys.modules.pop("temp_module", None)
def _print_no_tools_warning() -> None:
"""
@@ -610,3 +634,242 @@ def _print_no_tools_warning() -> None:
" # ... implementation\n"
" return result\n"
)
def extract_tools_metadata(dir_path: str = "src") -> list[dict[str, Any]]:
"""
Extract rich metadata from tool classes in the project.
Returns a list of tool metadata dictionaries containing:
- name: Class name
- humanized_name: From name field default
- description: From description field default
- run_params_schema: JSON Schema for _run() params (from args_schema)
- init_params_schema: JSON Schema for __init__ params (filtered)
- env_vars: List of environment variable dicts
"""
tools_metadata: list[dict[str, Any]] = []
for init_file in Path(dir_path).glob("**/__init__.py"):
tools = _extract_tool_metadata_from_init(init_file)
tools_metadata.extend(tools)
return tools_metadata
def _extract_tool_metadata_from_init(init_file: Path) -> list[dict[str, Any]]:
"""
Load module from init file and extract metadata from valid tool classes.
"""
from crewai.tools.base_tool import BaseTool
try:
with _load_module_from_file(init_file) as module:
if module is None:
return []
exported_names = getattr(module, "__all__", None)
if not exported_names:
return []
tools_metadata = []
for name in exported_names:
obj = getattr(module, name, None)
if obj is None or not (
inspect.isclass(obj) and issubclass(obj, BaseTool)
):
continue
if tool_info := _extract_single_tool_metadata(obj):
tools_metadata.append(tool_info)
return tools_metadata
except Exception as e:
console.print(
f"[yellow]Warning: Could not extract metadata from {init_file}: {e}[/yellow]"
)
return []
def _extract_single_tool_metadata(tool_class: type) -> dict[str, Any] | None:
"""
Extract metadata from a single tool class.
"""
try:
core_schema = cast(Any, tool_class).__pydantic_core_schema__
if not core_schema:
return None
schema = _unwrap_schema(core_schema)
fields = schema.get("schema", {}).get("fields", {})
try:
file_path = inspect.getfile(tool_class)
relative_path = Path(file_path).relative_to(Path.cwd())
module_path = relative_path.with_suffix("")
if module_path.parts[0] == "src":
module_path = Path(*module_path.parts[1:])
if module_path.name == "__init__":
module_path = module_path.parent
module = ".".join(module_path.parts)
except (TypeError, ValueError):
module = tool_class.__module__
return {
"name": tool_class.__name__,
"module": module,
"humanized_name": _extract_field_default(
fields.get("name"), fallback=tool_class.__name__
),
"description": str(
_extract_field_default(fields.get("description"))
).strip(),
"run_params_schema": _extract_run_params_schema(fields.get("args_schema")),
"init_params_schema": _extract_init_params_schema(tool_class),
"env_vars": _extract_env_vars(fields.get("env_vars")),
}
except Exception:
return None
def _unwrap_schema(schema: Mapping[str, Any] | dict[str, Any]) -> dict[str, Any]:
"""
Unwrap nested schema structures to get to the actual schema definition.
"""
result: dict[str, Any] = dict(schema)
while (
result.get("type")
in {"function-after", "function-before", "function-wrap", "default"}
and "schema" in result
):
result = dict(result["schema"])
if result.get("type") == "definitions" and "schema" in result:
result = dict(result["schema"])
return result
def _extract_field_default(
field: dict[str, Any] | None, fallback: str | list[Any] = ""
) -> str | list[Any] | int:
"""
Extract the default value from a field schema.
"""
if not field:
return fallback
schema = field.get("schema", {})
default = schema.get("default")
return default if isinstance(default, (list, str, int)) else fallback
@lru_cache(maxsize=1)
def _get_schema_generator() -> type:
"""Get a SchemaGenerator that omits non-serializable defaults."""
from pydantic.json_schema import GenerateJsonSchema
from pydantic_core import PydanticOmit
class SchemaGenerator(GenerateJsonSchema):
def handle_invalid_for_json_schema(
self, schema: Any, error_info: Any
) -> dict[str, Any]:
raise PydanticOmit
return SchemaGenerator
def _extract_run_params_schema(
args_schema_field: dict[str, Any] | None,
) -> dict[str, Any]:
"""
Extract JSON Schema for the tool's run parameters from args_schema field.
"""
from pydantic import BaseModel
if not args_schema_field:
return {}
args_schema_class = args_schema_field.get("schema", {}).get("default")
if not (
inspect.isclass(args_schema_class) and issubclass(args_schema_class, BaseModel)
):
return {}
try:
return args_schema_class.model_json_schema(
schema_generator=_get_schema_generator()
)
except Exception:
return {}
_IGNORED_INIT_PARAMS = frozenset(
{
"name",
"description",
"env_vars",
"args_schema",
"description_updated",
"cache_function",
"result_as_answer",
"max_usage_count",
"current_usage_count",
"package_dependencies",
}
)
def _extract_init_params_schema(tool_class: type) -> dict[str, Any]:
"""
Extract JSON Schema for the tool's __init__ parameters, filtering out base fields.
"""
try:
json_schema: dict[str, Any] = cast(Any, tool_class).model_json_schema(
schema_generator=_get_schema_generator(), mode="serialization"
)
filtered_properties = {
key: value
for key, value in json_schema.get("properties", {}).items()
if key not in _IGNORED_INIT_PARAMS
}
json_schema["properties"] = filtered_properties
if "required" in json_schema:
json_schema["required"] = [
key for key in json_schema["required"] if key in filtered_properties
]
return json_schema
except Exception:
return {}
def _extract_env_vars(env_vars_field: dict[str, Any] | None) -> list[dict[str, Any]]:
"""
Extract environment variable definitions from env_vars field.
"""
from crewai.tools.base_tool import EnvVar
if not env_vars_field:
return []
schema = env_vars_field.get("schema", {})
default = schema.get("default")
if default is None:
default_factory = schema.get("default_factory")
if callable(default_factory):
try:
default = default_factory()
except Exception:
default = []
if not isinstance(default, list):
return []
return [
{
"name": env_var.name,
"description": env_var.description,
"required": env_var.required,
"default": env_var.default,
}
for env_var in default
if isinstance(env_var, EnvVar)
]
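To ground the extraction pipeline, here is a minimal sketch of a tool that `extract_tools_metadata()` would describe, assuming the usual `BaseTool`/`EnvVar` definitions in `crewai.tools.base_tool` (constructor details may vary by version):
```python
# Minimal sketch of a tool extract_tools_metadata() would pick up. The EnvVar
# arguments mirror the attributes read above (name/description/required/default).
from pydantic import BaseModel, Field
from crewai.tools.base_tool import BaseTool, EnvVar

class SearchInput(BaseModel):
    query: str = Field(description="Search query to run")

class WebSearchTool(BaseTool):
    name: str = "Web Search"                    # -> humanized_name
    description: str = "Searches the web."      # -> description
    args_schema: type[BaseModel] = SearchInput  # -> run_params_schema
    env_vars: list[EnvVar] = [
        EnvVar(name="SEARCH_API_KEY", description="API key", required=True)
    ]                                           # -> env_vars

    def _run(self, query: str) -> str:
        return f"results for {query}"
```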

View File

@@ -22,7 +22,6 @@ from pydantic import (
UUID4,
BaseModel,
Field,
InstanceOf,
Json,
PrivateAttr,
field_validator,
@@ -176,7 +175,7 @@ class Crew(FlowTrackable, BaseModel):
_rpm_controller: RPMController = PrivateAttr()
_logger: Logger = PrivateAttr()
_file_handler: FileHandler = PrivateAttr()
_cache_handler: InstanceOf[CacheHandler] = PrivateAttr(default_factory=CacheHandler)
_cache_handler: CacheHandler = PrivateAttr(default_factory=CacheHandler)
_memory: Memory | MemoryScope | MemorySlice | None = PrivateAttr(default=None)
_train: bool | None = PrivateAttr(default=False)
_train_iteration: int | None = PrivateAttr()
@@ -210,13 +209,13 @@ class Crew(FlowTrackable, BaseModel):
default=None,
description="Metrics for the LLM usage during all tasks execution.",
)
manager_llm: str | InstanceOf[BaseLLM] | None = Field(
manager_llm: str | BaseLLM | None = Field(
description="Language model that will run the agent.", default=None
)
manager_agent: BaseAgent | None = Field(
description="Custom agent that will be used as manager.", default=None
)
function_calling_llm: str | InstanceOf[LLM] | None = Field(
function_calling_llm: str | LLM | None = Field(
description="Language model that will run the agent.", default=None
)
config: Json[dict[str, Any]] | dict[str, Any] | None = Field(default=None)
@@ -267,7 +266,7 @@ class Crew(FlowTrackable, BaseModel):
default=False,
description="Plan the crew execution and add the plan to the crew.",
)
planning_llm: str | InstanceOf[BaseLLM] | Any | None = Field(
planning_llm: str | BaseLLM | Any | None = Field(
default=None,
description=(
"Language model that will run the AgentPlanner if planning is True."
@@ -288,7 +287,7 @@ class Crew(FlowTrackable, BaseModel):
"knowledge object."
),
)
chat_llm: str | InstanceOf[BaseLLM] | Any | None = Field(
chat_llm: str | BaseLLM | Any | None = Field(
default=None,
description="LLM used to handle chatting with the crew.",
)
@@ -1800,7 +1799,7 @@ class Crew(FlowTrackable, BaseModel):
def test(
self,
n_iterations: int,
eval_llm: str | InstanceOf[BaseLLM],
eval_llm: str | BaseLLM,
inputs: dict[str, Any] | None = None,
) -> None:
"""Test and evaluate the Crew with the given inputs for n iterations.

View File

@@ -178,12 +178,15 @@ class HumanFeedbackRequestedEvent(FlowEvent):
output: The method output shown to the human for review.
message: The message displayed when requesting feedback.
emit: Optional list of possible outcomes for routing.
request_id: Platform-assigned identifier for this feedback request,
used for correlating the request across system boundaries.
"""
method_name: str
output: Any
message: str
emit: list[str] | None = None
request_id: str | None = None
type: str = "human_feedback_requested"
@@ -198,9 +201,12 @@ class HumanFeedbackReceivedEvent(FlowEvent):
method_name: Name of the method that received feedback.
feedback: The raw text feedback provided by the human.
outcome: The collapsed outcome string (if emit was specified).
request_id: Platform-assigned identifier for this feedback request,
used for correlating the response back to its originating request.
"""
method_name: str
feedback: str
outcome: str | None = None
request_id: str | None = None
type: str = "human_feedback_received"

View File

@@ -57,6 +57,7 @@ class LLMCallCompletedEvent(LLMEventBase):
messages: str | list[dict[str, Any]] | None = None
response: Any
call_type: LLMCallType
usage: dict[str, Any] | None = None
class LLMCallFailedEvent(LLMEventBase):

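Similarly hedged, a handler reading the new optional `usage` payload (import paths assumed):
```python
# Sketch only: import paths are assumptions, not confirmed public API.
from crewai.events import crewai_event_bus
from crewai.events.types.llm_events import LLMCallCompletedEvent

@crewai_event_bus.on(LLMCallCompletedEvent)
def on_llm_done(source, event: LLMCallCompletedEvent) -> None:
    if event.usage:  # e.g. prompt/completion token counts, when provided
        print(event.usage)
```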
View File

@@ -11,10 +11,15 @@ import threading
from typing import TYPE_CHECKING, Any, Literal, TypeVar, cast
from uuid import uuid4
from pydantic import BaseModel, Field, GetCoreSchemaHandler
from pydantic_core import CoreSchema, core_schema
from pydantic import (
BaseModel,
Field,
PrivateAttr,
model_validator,
)
from rich.console import Console
from rich.text import Text
from typing_extensions import Self
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
from crewai.agents.parser import (
@@ -119,6 +124,7 @@ class AgentExecutorState(BaseModel):
(todos, observations, replan tracking) in a single validated model.
"""
id: str = Field(default_factory=lambda: str(uuid4()))
messages: list[LLMMessage] = Field(default_factory=list)
iterations: int = Field(default=0)
current_answer: AgentAction | AgentFinish | None = Field(default=None)
@@ -152,6 +158,9 @@ class AgentExecutorState(BaseModel):
class AgentExecutor(Flow[AgentExecutorState], CrewAgentExecutorMixin):
"""Agent Executor for both standalone agents and crew-bound agents.
_skip_auto_memory prevents Flow from eagerly allocating a Memory
instance — the executor uses agent/crew memory, not its own.
Inherits from:
- Flow[AgentExecutorState]: Provides flow orchestration capabilities
- CrewAgentExecutorMixin: Provides memory methods (short/long/external term)
@@ -159,136 +168,74 @@ class AgentExecutor(Flow[AgentExecutorState], CrewAgentExecutorMixin):
This executor can operate in two modes:
- Standalone mode: When crew and task are None (used by Agent.kickoff())
- Crew mode: When crew and task are provided (used by Agent.execute_task())
Note: Multiple instances may be created during agent initialization
(cache setup, RPM controller setup, etc.) but only the final instance
should execute tasks via invoke().
"""
def __init__(
self,
llm: BaseLLM,
agent: Agent,
prompt: SystemPromptResult | StandardPromptResult,
max_iter: int,
tools: list[CrewStructuredTool],
tools_names: str,
stop_words: list[str],
tools_description: str,
tools_handler: ToolsHandler,
task: Task | None = None,
crew: Crew | None = None,
step_callback: Any = None,
original_tools: list[BaseTool] | None = None,
function_calling_llm: BaseLLM | Any | None = None,
respect_context_window: bool = False,
request_within_rpm_limit: Callable[[], bool] | None = None,
callbacks: list[Any] | None = None,
response_model: type[BaseModel] | None = None,
i18n: I18N | None = None,
) -> None:
"""Initialize the flow-based agent executor.
_skip_auto_memory: bool = True
Args:
llm: Language model instance.
agent: Agent to execute.
prompt: Prompt templates.
max_iter: Maximum iterations.
tools: Available tools.
tools_names: Tool names string.
stop_words: Stop word list.
tools_description: Tool descriptions.
tools_handler: Tool handler instance.
task: Optional task to execute (None for standalone agent execution).
crew: Optional crew instance (None for standalone agent execution).
step_callback: Optional step callback.
original_tools: Original tool list.
function_calling_llm: Optional function calling LLM.
respect_context_window: Respect context limits.
request_within_rpm_limit: RPM limit check function.
callbacks: Optional callbacks list.
response_model: Optional Pydantic model for structured outputs.
"""
self._i18n: I18N = i18n or get_i18n()
self.llm = llm
self.task: Task | None = task
self.agent = agent
self.crew: Crew | None = crew
self.prompt = prompt
self.tools = tools
self.tools_names = tools_names
self.stop = stop_words
self.max_iter = max_iter
self.callbacks = callbacks or []
self._printer: Printer = Printer()
self.tools_handler = tools_handler
self.original_tools = original_tools or []
self.step_callback = step_callback
self.tools_description = tools_description
self.function_calling_llm = function_calling_llm
self.respect_context_window = respect_context_window
self.request_within_rpm_limit = request_within_rpm_limit
self.response_model = response_model
self.log_error_after = 3
self._console: Console = Console()
suppress_flow_events: bool = True # always suppress for executor
llm: BaseLLM = Field(exclude=True)
agent: Agent = Field(exclude=True)
prompt: SystemPromptResult | StandardPromptResult = Field(exclude=True)
max_iter: int = Field(default=25, exclude=True)
tools: list[CrewStructuredTool] = Field(default_factory=list, exclude=True)
tools_names: str = Field(default="", exclude=True)
stop_words: list[str] = Field(default_factory=list, exclude=True)
tools_description: str = Field(default="", exclude=True)
tools_handler: ToolsHandler | None = Field(default=None, exclude=True)
task: Task | None = Field(default=None, exclude=True)
crew: Crew | None = Field(default=None, exclude=True)
step_callback: Any = Field(default=None, exclude=True)
original_tools: list[BaseTool] = Field(default_factory=list, exclude=True)
function_calling_llm: BaseLLM | None = Field(default=None, exclude=True)
respect_context_window: bool = Field(default=False, exclude=True)
request_within_rpm_limit: Callable[[], bool] | None = Field(
default=None, exclude=True
)
callbacks: list[Any] = Field(default_factory=list, exclude=True)
response_model: type[BaseModel] | None = Field(default=None, exclude=True)
i18n: I18N | None = Field(default=None, exclude=True)
log_error_after: int = Field(default=3, exclude=True)
before_llm_call_hooks: list[BeforeLLMCallHookType | BeforeLLMCallHookCallable] = (
Field(default_factory=list, exclude=True)
)
after_llm_call_hooks: list[AfterLLMCallHookType | AfterLLMCallHookCallable] = Field(
default_factory=list, exclude=True
)
# Error context storage for recovery
self._last_parser_error: OutputParserError | None = None
self._last_context_error: Exception | None = None
_i18n: I18N = PrivateAttr(default_factory=get_i18n)
_printer: Printer = PrivateAttr(default_factory=Printer)
_console: Console = PrivateAttr(default_factory=Console)
_last_parser_error: OutputParserError | None = PrivateAttr(default=None)
_last_context_error: Exception | None = PrivateAttr(default=None)
_execution_lock: threading.Lock = PrivateAttr(default_factory=threading.Lock)
_finalize_lock: threading.Lock = PrivateAttr(default_factory=threading.Lock)
_finalize_called: bool = PrivateAttr(default=False)
_is_executing: bool = PrivateAttr(default=False)
_has_been_invoked: bool = PrivateAttr(default=False)
_instance_id: str = PrivateAttr(default_factory=lambda: str(uuid4())[:8])
_step_executor: Any = PrivateAttr(default=None)
_planner_observer: Any = PrivateAttr(default=None)
# Execution guard to prevent concurrent/duplicate executions
self._execution_lock = threading.Lock()
self._finalize_lock = threading.Lock()
self._finalize_called: bool = False
self._is_executing: bool = False
self._has_been_invoked: bool = False
self._flow_initialized: bool = False
self._instance_id = str(uuid4())[:8]
self.before_llm_call_hooks: list[
BeforeLLMCallHookType | BeforeLLMCallHookCallable
] = []
self.after_llm_call_hooks: list[
AfterLLMCallHookType | AfterLLMCallHookCallable
] = []
@model_validator(mode="after")
def _setup_executor(self) -> Self:
"""Configure executor after Pydantic field initialization."""
self._i18n = self.i18n or get_i18n()
self.before_llm_call_hooks.extend(get_before_llm_call_hooks())
self.after_llm_call_hooks.extend(get_after_llm_call_hooks())
if self.llm:
existing_stop = getattr(self.llm, "stop", [])
self.llm.stop = list(
set(
existing_stop + self.stop
if isinstance(existing_stop, list)
else self.stop
)
)
if not isinstance(existing_stop, list):
existing_stop = []
self.llm.stop = list(set(existing_stop + self.stop_words))
self._state = AgentExecutorState()
self.max_method_calls = self.max_iter * 10
# Plan-and-Execute components (Phase 2)
# Lazy-imported to avoid circular imports during module load
self._step_executor: Any = None
self._planner_observer: Any = None
def _ensure_flow_initialized(self) -> None:
"""Ensure Flow.__init__() has been called.
This is deferred from __init__ to prevent FlowCreatedEvent emission
during agent setup when multiple executor instances are created.
Only the instance that actually executes via invoke() will emit events.
"""
if not self._flow_initialized:
current_tracing = is_tracing_enabled_in_context()
# Now call Flow's __init__ which will replace self._state
# with Flow's managed state. Suppress flow events since this is
# an agent executor, not a user-facing flow.
super().__init__(
suppress_flow_events=True,
tracing=current_tracing if current_tracing else None,
max_method_calls=self.max_iter * 10,
)
self._flow_initialized = True
current_tracing = is_tracing_enabled_in_context()
self.tracing = current_tracing if current_tracing else None
self._flow_post_init()
return self
def _check_native_tool_support(self) -> bool:
"""Check if LLM supports native function calling."""
@@ -318,19 +265,13 @@ class AgentExecutor(Flow[AgentExecutorState], CrewAgentExecutorMixin):
@property
def state(self) -> AgentExecutorState:
"""Get state - returns temporary state if Flow not yet initialized.
Flow initialization is deferred to prevent event emission during agent setup.
Returns the temporary state until invoke() is called.
"""
if self._flow_initialized and hasattr(self, "_state_lock"):
return StateProxy(self._state, self._state_lock) # type: ignore[return-value]
return self._state
"""Get thread-safe state proxy."""
return StateProxy(self._state, self._state_lock) # type: ignore[return-value]
@property
def iterations(self) -> int:
"""Compatibility property for mixin - returns state iterations."""
return self._state.iterations
return self._state.iterations # type: ignore[no-any-return]
@iterations.setter
def iterations(self, value: int) -> None:
@@ -340,7 +281,7 @@ class AgentExecutor(Flow[AgentExecutorState], CrewAgentExecutorMixin):
@property
def messages(self) -> list[LLMMessage]:
"""Compatibility property - returns state messages."""
return self._state.messages
return self._state.messages # type: ignore[no-any-return]
@messages.setter
def messages(self, value: list[LLMMessage]) -> None:
@@ -1966,42 +1907,10 @@ class AgentExecutor(Flow[AgentExecutorState], CrewAgentExecutorMixin):
"original_tool": original_tool,
}
def _extract_tool_name(self, tool_call: Any) -> str:
"""Extract tool name from various tool call formats."""
if hasattr(tool_call, "function"):
return sanitize_tool_name(tool_call.function.name)
if hasattr(tool_call, "function_call") and tool_call.function_call:
return sanitize_tool_name(tool_call.function_call.name)
if hasattr(tool_call, "name"):
return sanitize_tool_name(tool_call.name)
if isinstance(tool_call, dict):
func_info = tool_call.get("function", {})
return sanitize_tool_name(
func_info.get("name", "") or tool_call.get("name", "unknown")
)
return "unknown"
@router(execute_native_tool)
def check_native_todo_completion(
self,
) -> Literal["todo_satisfied", "todo_not_satisfied"]:
"""Check if the native tool execution satisfied the active todo.
Similar to check_todo_completion but for native tool execution path.
"""
current_todo = self.state.todos.current_todo
if not current_todo:
return "todo_not_satisfied"
# For native tools, any tool execution satisfies the todo
return "todo_satisfied"
@listen("initialized")
def continue_iteration(self) -> Literal["check_iteration"]:
"""Bridge listener that connects iteration loop back to iteration check."""
if self._flow_initialized:
self._discard_or_listener(FlowMethodName("continue_iteration"))
self._discard_or_listener(FlowMethodName("continue_iteration"))
return "check_iteration"
@router(or_(initialize_reasoning, continue_iteration))
@@ -2629,8 +2538,6 @@ class AgentExecutor(Flow[AgentExecutorState], CrewAgentExecutorMixin):
if is_inside_event_loop():
return self.invoke_async(inputs)
self._ensure_flow_initialized()
with self._execution_lock:
if self._is_executing:
raise RuntimeError(
@@ -2721,8 +2628,6 @@ class AgentExecutor(Flow[AgentExecutorState], CrewAgentExecutorMixin):
Returns:
Dictionary with agent output.
"""
self._ensure_flow_initialized()
with self._execution_lock:
if self._is_executing:
raise RuntimeError(
@@ -3038,17 +2943,6 @@ class AgentExecutor(Flow[AgentExecutorState], CrewAgentExecutorMixin):
"""
return bool(self.crew and self.crew._train)
@classmethod
def __get_pydantic_core_schema__(
cls, _source_type: Any, _handler: GetCoreSchemaHandler
) -> CoreSchema:
"""Generate Pydantic core schema for Protocol compatibility.
Allows the executor to be used in Pydantic models without
requiring arbitrary_types_allowed=True.
"""
return core_schema.any_schema()
# Backward compatibility alias (deprecated)
CrewAgentExecutorFlow = AgentExecutor

View File

@@ -39,7 +39,14 @@ from uuid import uuid4
from opentelemetry import baggage
from opentelemetry.context import attach, detach
from pydantic import BaseModel, Field, ValidationError
from pydantic import (
BaseModel,
ConfigDict,
Field,
PrivateAttr,
ValidationError,
)
from pydantic._internal._model_construction import ModelMetaclass
from rich.console import Console
from rich.panel import Panel
@@ -81,6 +88,7 @@ from crewai.flow.flow_wrappers import (
SimpleFlowCondition,
StartMethod,
)
from crewai.flow.human_feedback import HumanFeedbackResult
from crewai.flow.input_provider import InputProvider
from crewai.flow.persistence.base import FlowPersistence
from crewai.flow.types import (
@@ -108,7 +116,6 @@ if TYPE_CHECKING:
from crewai_files import FileInput
from crewai.flow.async_feedback.types import PendingFeedbackContext
from crewai.flow.human_feedback import HumanFeedbackResult
from crewai.llms.base_llm import BaseLLM
from crewai.flow.visualization import build_flow_structure, render_interactive
@@ -728,7 +735,7 @@ class StateProxy(Generic[T]):
return result
class FlowMeta(type):
class FlowMeta(ModelMetaclass):
def __new__(
mcs,
name: str,
@@ -736,6 +743,45 @@ class FlowMeta(type):
namespace: dict[str, Any],
**kwargs: Any,
) -> type:
parent_fields: set[str] = set()
for base in bases:
if hasattr(base, "model_fields"):
parent_fields.update(base.model_fields)
annotations = namespace.get("__annotations__", {})
_skip_types = (classmethod, staticmethod, property)
for base in bases:
if isinstance(base, ModelMetaclass):
continue
for attr_name in getattr(base, "__annotations__", {}):
if attr_name not in annotations and attr_name not in namespace:
annotations[attr_name] = ClassVar
for attr_name, attr_value in namespace.items():
if isinstance(attr_value, property) and attr_name not in annotations:
for base in bases:
base_ann = getattr(base, "__annotations__", {})
if attr_name in base_ann:
annotations[attr_name] = ClassVar
for attr_name, attr_value in list(namespace.items()):
if attr_name in annotations or attr_name.startswith("_"):
continue
if attr_name in parent_fields:
annotations[attr_name] = Any
if isinstance(attr_value, BaseModel):
namespace[attr_name] = Field(
default_factory=lambda v=attr_value: v, exclude=True
)
continue
if callable(attr_value) or isinstance(
attr_value, (*_skip_types, FlowMethod)
):
continue
annotations[attr_name] = ClassVar[type(attr_value)]
namespace["__annotations__"] = annotations
cls = super().__new__(mcs, name, bases, namespace)
start_methods = []
@@ -820,85 +866,90 @@ class FlowMeta(type):
return cls
class Flow(Generic[T], metaclass=FlowMeta):
class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
"""Base class for all flows.
type parameter T must be either dict[str, Any] or a subclass of BaseModel."""
model_config = ConfigDict(
arbitrary_types_allowed=True,
ignored_types=(StartMethod, ListenMethod, RouterMethod),
revalidate_instances="never",
)
__hash__ = object.__hash__
_start_methods: ClassVar[list[FlowMethodName]] = []
_listeners: ClassVar[dict[FlowMethodName, SimpleFlowCondition | FlowCondition]] = {}
_routers: ClassVar[set[FlowMethodName]] = set()
_router_paths: ClassVar[dict[FlowMethodName, list[FlowMethodName]]] = {}
initial_state: type[T] | T | None = None
name: str | None = None
tracing: bool | None = None
stream: bool = False
memory: Memory | MemoryScope | MemorySlice | None = None
input_provider: InputProvider | None = None
def __class_getitem__(cls: type[Flow[T]], item: type[T]) -> type[Flow[T]]:
class _FlowGeneric(cls): # type: ignore
_initial_state_t = item
initial_state: Any = Field(default=None)
name: str | None = Field(default=None)
tracing: bool | None = Field(default=None)
stream: bool = Field(default=False)
memory: Memory | MemoryScope | MemorySlice | None = Field(default=None)
input_provider: InputProvider | None = Field(default=None)
suppress_flow_events: bool = Field(default=False)
human_feedback_history: list[HumanFeedbackResult] = Field(default_factory=list)
last_human_feedback: HumanFeedbackResult | None = Field(default=None)
persistence: Any = Field(default=None, exclude=True)
max_method_calls: int = Field(default=100, exclude=True)
_methods: dict[FlowMethodName, FlowMethod[Any, Any]] = PrivateAttr(
default_factory=dict
)
_method_execution_counts: dict[FlowMethodName, int] = PrivateAttr(
default_factory=dict
)
_pending_and_listeners: dict[PendingListenerKey, set[FlowMethodName]] = PrivateAttr(
default_factory=dict
)
_fired_or_listeners: set[FlowMethodName] = PrivateAttr(default_factory=set)
_method_outputs: list[Any] = PrivateAttr(default_factory=list)
_state_lock: threading.Lock = PrivateAttr(default_factory=threading.Lock)
_or_listeners_lock: threading.Lock = PrivateAttr(default_factory=threading.Lock)
_completed_methods: set[FlowMethodName] = PrivateAttr(default_factory=set)
_method_call_counts: dict[FlowMethodName, int] = PrivateAttr(default_factory=dict)
_is_execution_resuming: bool = PrivateAttr(default=False)
_event_futures: list[Future[None]] = PrivateAttr(default_factory=list)
_pending_feedback_context: PendingFeedbackContext | None = PrivateAttr(default=None)
_human_feedback_method_outputs: dict[str, Any] = PrivateAttr(default_factory=dict)
_input_history: list[InputHistoryEntry] = PrivateAttr(default_factory=list)
_state: Any = PrivateAttr(default=None)
def __class_getitem__(cls: type[Flow[T]], item: type[T]) -> type[Flow[T]]: # type: ignore[override]
class _FlowGeneric(cls): # type: ignore[valid-type,misc]
pass
_FlowGeneric.__name__ = f"{cls.__name__}[{item.__name__}]"
_FlowGeneric._initial_state_t = item
return _FlowGeneric
def __init__(
self,
persistence: FlowPersistence | None = None,
tracing: bool | None = None,
suppress_flow_events: bool = False,
max_method_calls: int = 100,
**kwargs: Any,
) -> None:
"""Initialize a new Flow instance.
def __setattr__(self, name: str, value: Any) -> None:
"""Allow arbitrary attribute assignment for backward compat with plain class."""
if name in self.model_fields or name in self.__private_attributes__:
super().__setattr__(name, value)
else:
object.__setattr__(self, name, value)
Args:
persistence: Optional persistence backend for storing flow states
tracing: Whether to enable tracing. True=always enable, False=always disable, None=check environment/user settings
suppress_flow_events: Whether to suppress flow event emissions (internal use)
max_method_calls: Maximum times a single method can be called per execution before raising RecursionError
**kwargs: Additional state values to initialize or override
"""
# Initialize basic instance attributes
self._methods: dict[FlowMethodName, FlowMethod[Any, Any]] = {}
self._method_execution_counts: dict[FlowMethodName, int] = {}
self._pending_and_listeners: dict[PendingListenerKey, set[FlowMethodName]] = {}
self._fired_or_listeners: set[FlowMethodName] = (
set()
) # Track OR listeners that already fired
self._method_outputs: list[Any] = [] # list to store all method outputs
self._state_lock = threading.Lock()
self._or_listeners_lock = threading.Lock()
self._completed_methods: set[FlowMethodName] = (
set()
) # Track completed methods for reload
self._method_call_counts: dict[FlowMethodName, int] = {}
self._max_method_calls = max_method_calls
self._persistence: FlowPersistence | None = persistence
self._is_execution_resuming: bool = False
self._event_futures: list[Future[None]] = []
def model_post_init(self, __context: Any) -> None:
self._flow_post_init()
# Human feedback storage
self.human_feedback_history: list[HumanFeedbackResult] = []
self.last_human_feedback: HumanFeedbackResult | None = None
self._pending_feedback_context: PendingFeedbackContext | None = None
self.suppress_flow_events: bool = suppress_flow_events
def _flow_post_init(self) -> None:
"""Heavy initialization: state creation, events, memory, method registration."""
if getattr(self, "_flow_post_init_done", False):
return
object.__setattr__(self, "_flow_post_init_done", True)
# User input history (for self.ask())
self._input_history: list[InputHistoryEntry] = []
if self._state is None:
self._state = self._create_initial_state()
# Initialize state with initial values
self._state = self._create_initial_state()
self.tracing = tracing
tracing_enabled = should_enable_tracing(override=self.tracing)
set_tracing_enabled(tracing_enabled)
trace_listener = TraceCollectionListener()
trace_listener.setup_listeners(crewai_event_bus)
# Apply any additional kwargs
if kwargs:
self._initialize_state(kwargs)
if not self.suppress_flow_events:
crewai_event_bus.emit(
@@ -1382,8 +1433,8 @@ class Flow(Generic[T], metaclass=FlowMeta):
self._pending_feedback_context = None
# Clear pending feedback from persistence
if self._persistence:
self._persistence.clear_pending_feedback(context.flow_id)
if self.persistence:
self.persistence.clear_pending_feedback(context.flow_id)
# Emit feedback received event
crewai_event_bus.emit(
@@ -1424,17 +1475,17 @@ class Flow(Generic[T], metaclass=FlowMeta):
if isinstance(e, HumanFeedbackPending):
self._pending_feedback_context = e.context
if self._persistence is None:
if self.persistence is None:
from crewai.flow.persistence import SQLiteFlowPersistence
self._persistence = SQLiteFlowPersistence()
self.persistence = SQLiteFlowPersistence()
state_data = (
self._state
if isinstance(self._state, dict)
else self._state.model_dump()
)
self._persistence.save_pending_feedback(
self.persistence.save_pending_feedback(
flow_uuid=e.context.flow_id,
context=e.context,
state_data=state_data,
@@ -1484,39 +1535,33 @@ class Flow(Generic[T], metaclass=FlowMeta):
"""
init_state = self.initial_state
# Handle case where initial_state is None but we have a type parameter
if init_state is None and hasattr(self, "_initial_state_t"):
state_type = self._initial_state_t
if isinstance(state_type, type):
if issubclass(state_type, FlowState):
# Create instance - FlowState auto-generates id via default_factory
instance = state_type()
# Ensure id is set - generate UUID if empty
if not getattr(instance, "id", None):
object.__setattr__(instance, "id", str(uuid4()))
return cast(T, instance)
if issubclass(state_type, BaseModel):
# Create a new type with FlowState first for proper id default
class StateWithId(FlowState, state_type): # type: ignore
pass
instance = StateWithId()
# Ensure id is set - generate UUID if empty
if not getattr(instance, "id", None):
object.__setattr__(instance, "id", str(uuid4()))
return cast(T, instance)
if state_type is dict:
return cast(T, {"id": str(uuid4())})
# Handle case where no initial state is provided
if init_state is None:
return cast(T, {"id": str(uuid4())})
# Handle case where initial_state is a type (class)
if isinstance(init_state, type):
state_class = init_state
if issubclass(state_class, FlowState):
return state_class()
return cast(T, state_class())
if issubclass(state_class, BaseModel):
model_fields = getattr(state_class, "model_fields", None)
if not model_fields or "id" not in model_fields:
@@ -1524,7 +1569,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
model_instance = state_class()
if not getattr(model_instance, "id", None):
object.__setattr__(model_instance, "id", str(uuid4()))
return model_instance
return cast(T, model_instance)
if init_state is dict:
return cast(T, {"id": str(uuid4())})
@@ -1535,32 +1580,21 @@ class Flow(Generic[T], metaclass=FlowMeta):
new_state["id"] = str(uuid4())
return cast(T, new_state)
# Handle BaseModel instance case
if isinstance(init_state, BaseModel):
model = cast(BaseModel, init_state)
if not hasattr(model, "id"):
raise ValueError("Flow state model must have an 'id' field")
# Create new instance with same values to avoid mutations
if hasattr(model, "model_dump"):
# Pydantic v2
model = init_state
if hasattr(model, "id"):
state_dict = model.model_dump()
elif hasattr(model, "dict"):
# Pydantic v1
state_dict = model.dict()
else:
# Fallback for other BaseModel implementations
state_dict = {
k: v for k, v in model.__dict__.items() if not k.startswith("_")
}
if not state_dict.get("id"):
state_dict["id"] = str(uuid4())
model_class = type(model)
return cast(T, model_class(**state_dict))
# Ensure id is set - generate UUID if empty
if not state_dict.get("id"):
state_dict["id"] = str(uuid4())
class StateWithId(FlowState, type(model)): # type: ignore
pass
# Create new instance of the same class
model_class = type(model)
return cast(T, model_class(**state_dict))
state_dict = model.model_dump()
state_dict["id"] = str(uuid4())
return cast(T, StateWithId(**state_dict))
raise TypeError(
f"Initial state must be dict or BaseModel, got {type(self.initial_state)}"
)
@@ -1573,17 +1607,17 @@ class Flow(Generic[T], metaclass=FlowMeta):
"""
if isinstance(self._state, BaseModel):
try:
return self._state.model_copy(deep=True)
return cast(T, self._state.model_copy(deep=True))
except (TypeError, AttributeError):
try:
state_dict = self._state.model_dump()
model_class = type(self._state)
return model_class(**state_dict)
return cast(T, model_class(**state_dict))
except Exception:
return self._state.model_copy(deep=False)
return cast(T, self._state.model_copy(deep=False))
else:
try:
return copy.deepcopy(self._state)
return cast(T, copy.deepcopy(self._state))
except (TypeError, AttributeError):
return cast(T, self._state.copy())
@@ -1659,7 +1693,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
elif isinstance(self._state, BaseModel):
# For BaseModel states, preserve existing fields unless overridden
try:
model = cast(BaseModel, self._state)
model = self._state
# Get current state as dict
if hasattr(model, "model_dump"):
current_state = model.model_dump()
@@ -1710,7 +1744,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
self._state.update(stored_state)
elif isinstance(self._state, BaseModel):
# For BaseModel states, create new instance with stored values
model = cast(BaseModel, self._state)
model = self._state
if hasattr(model, "model_validate"):
# Pydantic v2
self._state = cast(T, type(model).model_validate(stored_state))
@@ -1935,7 +1969,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
try:
# Reset flow state for fresh execution unless restoring from persistence
is_restoring = inputs and "id" in inputs and self._persistence is not None
is_restoring = inputs and "id" in inputs and self.persistence is not None
if not is_restoring:
# Clear completed methods and outputs for a fresh start
self._completed_methods.clear()
@@ -1961,9 +1995,9 @@ class Flow(Generic[T], metaclass=FlowMeta):
setattr(self._state, "id", inputs["id"]) # noqa: B010
# If persistence is enabled, attempt to restore the stored state using the provided id.
if "id" in inputs and self._persistence is not None:
if "id" in inputs and self.persistence is not None:
restore_uuid = inputs["id"]
stored_state = self._persistence.load_state(restore_uuid)
stored_state = self.persistence.load_state(restore_uuid)
if stored_state:
self._log_flow_event(
f"Loading flow state from memory for UUID: {restore_uuid}"
@@ -2033,17 +2067,17 @@ class Flow(Generic[T], metaclass=FlowMeta):
if isinstance(e, HumanFeedbackPending):
# Auto-save pending feedback (create default persistence if needed)
if self._persistence is None:
if self.persistence is None:
from crewai.flow.persistence import SQLiteFlowPersistence
self._persistence = SQLiteFlowPersistence()
self.persistence = SQLiteFlowPersistence()
state_data = (
self._state
if isinstance(self._state, dict)
else self._state.model_dump()
)
self._persistence.save_pending_feedback(
self.persistence.save_pending_feedback(
flow_uuid=e.context.flow_id,
context=e.context,
state_data=state_data,
@@ -2290,6 +2324,17 @@ class Flow(Generic[T], metaclass=FlowMeta):
result = await result
self._method_outputs.append(result)
# For @human_feedback methods with emit, the result is the collapsed outcome
# (e.g., "approved") used for routing. But we want the actual method output
# to be the stored result (for final flow output). Replace the last entry
# if a stashed output exists. Dict-based stash is concurrency-safe and
# handles None return values (presence in dict = stashed, not value).
if method_name in self._human_feedback_method_outputs:
self._method_outputs[-1] = self._human_feedback_method_outputs.pop(
method_name
)
self._method_execution_counts[method_name] = (
self._method_execution_counts.get(method_name, 0) + 1
)
@@ -2318,10 +2363,10 @@ class Flow(Generic[T], metaclass=FlowMeta):
if isinstance(e, HumanFeedbackPending):
e.context.method_name = method_name
if self._persistence is None:
if self.persistence is None:
from crewai.flow.persistence import SQLiteFlowPersistence
self._persistence = SQLiteFlowPersistence()
self.persistence = SQLiteFlowPersistence()
# Emit paused event (not failed)
if not self.suppress_flow_events:
@@ -2682,9 +2727,9 @@ class Flow(Generic[T], metaclass=FlowMeta):
- Catches and logs any exceptions during execution, preventing individual listener failures from breaking the entire flow
"""
count = self._method_call_counts.get(listener_name, 0) + 1
if count > self._max_method_calls:
if count > self.max_method_calls:
raise RecursionError(
f"Method '{listener_name}' has been called {self._max_method_calls} times in "
f"Method '{listener_name}' has been called {self.max_method_calls} times in "
f"this flow execution, which indicates an infinite loop. "
f"This commonly happens when a @listen label matches the "
f"method's own name."
@@ -2791,7 +2836,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
This is best-effort: if persistence is not configured, this is a no-op.
"""
if self._persistence is None:
if self.persistence is None:
return
try:
state_data = (
@@ -2799,7 +2844,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
if isinstance(self._state, dict)
else self._state.model_dump()
)
self._persistence.save_state(
self.persistence.save_state(
flow_uuid=self.flow_id,
method_name="_ask_checkpoint",
state_data=state_data,

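The checkpoint hunk above is deliberately best-effort: no configured persistence backend means a silent no-op, and the state serializes as-is for dicts or via model_dump() for Pydantic models. A minimal standalone sketch of that pattern (the save_state keyword arguments come from the hunk; the CheckpointingFlow name and everything else here are illustrative):

from typing import Any

from pydantic import BaseModel


class CheckpointingFlow:
    def __init__(self, persistence: Any | None = None) -> None:
        self.persistence = persistence  # public attribute, as in the diff
        self._state: dict[str, Any] | BaseModel = {}
        self.flow_id = "flow-123"

    def _save_checkpoint(self) -> None:
        # Best-effort: a missing backend makes this a no-op, never an error.
        if self.persistence is None:
            return
        state_data = (
            self._state
            if isinstance(self._state, dict)
            else self._state.model_dump()
        )
        self.persistence.save_state(
            flow_uuid=self.flow_id,
            method_name="_ask_checkpoint",
            state_data=state_data,
        )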

@@ -591,6 +591,13 @@ def human_feedback(
):
_distill_and_store_lessons(self, method_output, raw_feedback)
# Stash the real method output for final flow result when emit is set
# (result is the collapsed outcome string for routing, but we want to
# preserve the actual method output as the flow's final result)
# Uses per-method dict for concurrency safety and to handle None returns
if emit:
self._human_feedback_method_outputs[func.__name__] = method_output
return result
wrapper: Any = async_wrapper
@@ -615,6 +622,13 @@ def human_feedback(
):
_distill_and_store_lessons(self, method_output, raw_feedback)
# Stash the real method output for final flow result when emit is set
# (result is the collapsed outcome string for routing, but we want to
# preserve the actual method output as the flow's final result)
# Uses per-method dict for concurrency safety and to handle None returns
if emit:
self._human_feedback_method_outputs[func.__name__] = method_output
return result
wrapper = sync_wrapper

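A hypothetical standalone reduction of the stash pattern both wrappers use: presence of the key, not the value, marks a stashed output, so a method that returns None is still distinguishable from one that never stashed.

outputs: dict[str, object] = {}


def stash(method_name: str, method_output: object, emit: bool) -> None:
    # Only emitting methods stash; the dict is keyed per method name, so
    # concurrently running methods cannot clobber each other.
    if emit:
        outputs[method_name] = method_output


def resolve(method_name: str, routed_result: object) -> object:
    # Replace the collapsed routing outcome with the stashed output, if any.
    if method_name in outputs:
        return outputs.pop(method_name)
    return routed_result


stash("review", None, emit=True)
assert resolve("review", "approved") is None  # stashed None beats "approved"
assert resolve("other", "approved") == "approved"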

@@ -3,12 +3,15 @@ from __future__ import annotations
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, Any
from pydantic import BaseModel, ConfigDict
if TYPE_CHECKING:
from crewai.rag.types import SearchResult
class BaseKnowledgeStorage(ABC):
class BaseKnowledgeStorage(BaseModel, ABC):
model_config = ConfigDict(arbitrary_types_allowed=True)
"""Abstract base class for knowledge storage implementations."""
@abstractmethod

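Making the abstract base a Pydantic model means concrete storages get field validation while abstract methods are still enforced. A minimal sketch of that combination, with a hypothetical InMemoryStorage subclass:

from abc import ABC, abstractmethod

from pydantic import BaseModel, ConfigDict


class BaseStorage(BaseModel, ABC):
    # arbitrary_types_allowed lets fields hold non-Pydantic types like clients
    model_config = ConfigDict(arbitrary_types_allowed=True)

    @abstractmethod
    def search(self, query: str) -> list[str]: ...


class InMemoryStorage(BaseStorage):
    docs: list[str] = []

    def search(self, query: str) -> list[str]:
        return [d for d in self.docs if query in d]


assert InMemoryStorage(docs=["alpha", "beta"]).search("alp") == ["alpha"]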

@@ -3,6 +3,9 @@ import traceback
from typing import Any, cast
import warnings
from pydantic import Field, PrivateAttr, model_validator
from typing_extensions import Self
from crewai.knowledge.storage.base_knowledge_storage import BaseKnowledgeStorage
from crewai.rag.chromadb.config import ChromaDBConfig
from crewai.rag.chromadb.types import ChromaEmbeddingFunctionWrapper
@@ -22,31 +25,32 @@ class KnowledgeStorage(BaseKnowledgeStorage):
search efficiency.
"""
def __init__(
self,
embedder: ProviderSpec
| BaseEmbeddingsProvider[Any]
| type[BaseEmbeddingsProvider[Any]]
| None = None,
collection_name: str | None = None,
) -> None:
self.collection_name = collection_name
self._client: BaseClient | None = None
embedder: (
ProviderSpec
| BaseEmbeddingsProvider[Any]
| type[BaseEmbeddingsProvider[Any]]
| None
) = Field(default=None, exclude=True)
collection_name: str | None = None
_client: BaseClient | None = PrivateAttr(default=None)
@model_validator(mode="after")
def _init_client(self) -> Self:
warnings.filterwarnings(
"ignore",
message=r".*'model_fields'.*is deprecated.*",
module=r"^chromadb(\.|$)",
)
if embedder:
embedding_function = build_embedder(embedder) # type: ignore[arg-type]
if self.embedder:
embedding_function = build_embedder(self.embedder) # type: ignore[arg-type]
config = ChromaDBConfig(
embedding_function=cast(
ChromaEmbeddingFunctionWrapper, embedding_function
)
)
self._client = create_client(config)
return self
def _get_client(self) -> BaseClient:
"""Get the appropriate client - instance-specific or global."""

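The hunk above replaces an __init__ with declared fields plus an after-validator that builds the private client once validation has run. A sketch under those assumptions, using a stand-in Client rather than the real ChromaDB wiring:

from typing import Any

from pydantic import BaseModel, ConfigDict, Field, PrivateAttr, model_validator
from typing_extensions import Self


class Client:
    def __init__(self, embedder: Any) -> None:
        self.embedder = embedder


class Storage(BaseModel):
    model_config = ConfigDict(arbitrary_types_allowed=True)

    embedder: Any = Field(default=None, exclude=True)  # excluded from dumps
    collection_name: str | None = None
    _client: Client | None = PrivateAttr(default=None)

    @model_validator(mode="after")
    def _init_client(self) -> Self:
        # Runs after field validation, so self.embedder is already populated.
        if self.embedder is not None:
            self._client = Client(self.embedder)
        return self


assert Storage(embedder="fake")._client is not None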

@@ -22,7 +22,6 @@ from pydantic import (
UUID4,
BaseModel,
Field,
InstanceOf,
PrivateAttr,
field_validator,
model_validator,
@@ -204,7 +203,7 @@ class LiteAgent(FlowTrackable, BaseModel):
role: str = Field(description="Role of the agent")
goal: str = Field(description="Goal of the agent")
backstory: str = Field(description="Backstory of the agent")
llm: str | InstanceOf[BaseLLM] | Any | None = Field(
llm: str | BaseLLM | Any | None = Field(
default=None, description="Language model that will run the agent"
)
tools: list[BaseTool] = Field(

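A small illustration of the annotation change above: once BaseLLM is itself a Pydantic model, a plain class annotation validates instances directly, so the InstanceOf wrapper becomes unnecessary. FakeLLM and Agent below are stand-ins:

from pydantic import BaseModel


class FakeLLM(BaseModel):
    model: str


class Agent(BaseModel):
    llm: str | FakeLLM | None = None


assert isinstance(Agent(llm=FakeLLM(model="m")).llm, FakeLLM)
assert Agent(llm="gpt-x").llm == "gpt-x"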

@@ -20,8 +20,7 @@ from typing import (
)
from dotenv import load_dotenv
import httpx
from pydantic import BaseModel, Field
from pydantic import BaseModel, Field, model_validator
from typing_extensions import Self
from crewai.events.event_bus import crewai_event_bus
@@ -37,7 +36,12 @@ from crewai.events.types.tool_usage_events import (
ToolUsageFinishedEvent,
ToolUsageStartedEvent,
)
from crewai.llms.base_llm import BaseLLM, get_current_call_id, llm_call_context
from crewai.llms.base_llm import (
BaseLLM,
JsonResponseFormat,
get_current_call_id,
llm_call_context,
)
from crewai.llms.constants import (
ANTHROPIC_MODELS,
AZURE_MODELS,
@@ -63,8 +67,6 @@ except ImportError:
if TYPE_CHECKING:
from crewai.agent.core import Agent
from crewai.llms.hooks.base import BaseInterceptor
from crewai.llms.providers.anthropic.completion import AnthropicThinkingConfig
from crewai.task import Task
from crewai.tools.base_tool import BaseTool
from crewai.utilities.types import LLMMessage
@@ -342,6 +344,27 @@ class AccumulatedToolArgs(BaseModel):
class LLM(BaseLLM):
completion_cost: float | None = None
timeout: float | int | None = None
top_p: float | None = None
n: int | None = None
max_completion_tokens: int | None = None
max_tokens: int | float | None = None
presence_penalty: float | None = None
frequency_penalty: float | None = None
logit_bias: dict[int, float] | None = None
response_format: JsonResponseFormat | type[BaseModel] | None = None
seed: int | None = None
logprobs: int | None = None
top_logprobs: int | None = None
api_base: str | None = None
api_version: str | None = None
callbacks: list[Any] | None = None
reasoning_effort: Literal["none", "low", "medium", "high"] | None = None
stream: bool = False
interceptor: Any = None
thinking: Any = None
context_window_size: int = 0
is_anthropic: bool = False
def __new__(cls, model: str, is_litellm: bool = False, **kwargs: Any) -> LLM:
"""Factory method that routes to native SDK or falls back to LiteLLM.
@@ -436,10 +459,7 @@ class LLM(BaseLLM):
logger.error(error_msg)
raise ImportError(error_msg) from None
instance = object.__new__(cls)
super(LLM, instance).__init__(model=model, is_litellm=True, **kwargs)
instance.is_litellm = True
return instance
return object.__new__(cls)
@classmethod
def _matches_provider_pattern(cls, model: str, provider: str) -> bool:
@@ -624,89 +644,23 @@ class LLM(BaseLLM):
return None
def __init__(
self,
model: str,
timeout: float | int | None = None,
temperature: float | None = None,
top_p: float | None = None,
n: int | None = None,
stop: str | list[str] | None = None,
max_completion_tokens: int | None = None,
max_tokens: int | float | None = None,
presence_penalty: float | None = None,
frequency_penalty: float | None = None,
logit_bias: dict[int, float] | None = None,
response_format: type[BaseModel] | None = None,
seed: int | None = None,
logprobs: int | None = None,
top_logprobs: int | None = None,
base_url: str | None = None,
api_base: str | None = None,
api_version: str | None = None,
api_key: str | None = None,
callbacks: list[Any] | None = None,
reasoning_effort: Literal["none", "low", "medium", "high"] | None = None,
stream: bool = False,
interceptor: BaseInterceptor[httpx.Request, httpx.Response] | None = None,
thinking: AnthropicThinkingConfig | dict[str, Any] | None = None,
prefer_upload: bool = False,
**kwargs: Any,
) -> None:
"""Initialize LLM instance.
@model_validator(mode="before")
@classmethod
def _validate_llm_fields(cls, data: Any) -> Any:
if not isinstance(data, dict):
return data
model = data.get("model", "")
data["is_anthropic"] = cls._is_anthropic_model(model)
return data
Note: This __init__ method is only called for fallback instances.
Native provider instances handle their own initialization in their respective classes.
"""
super().__init__(
model=model,
temperature=temperature,
api_key=api_key,
base_url=base_url,
timeout=timeout,
**kwargs,
)
self.model = model
self.timeout = timeout
self.temperature = temperature
self.top_p = top_p
self.n = n
self.max_completion_tokens = max_completion_tokens
self.max_tokens = max_tokens
self.presence_penalty = presence_penalty
self.frequency_penalty = frequency_penalty
self.logit_bias = logit_bias
self.response_format = response_format
self.seed = seed
self.logprobs = logprobs
self.top_logprobs = top_logprobs
self.base_url = base_url
self.api_base = api_base
self.api_version = api_version
self.api_key = api_key
self.callbacks = callbacks
self.context_window_size = 0
self.reasoning_effort = reasoning_effort
self.prefer_upload = prefer_upload
self.additional_params = {
k: v for k, v in kwargs.items() if k not in ("is_litellm", "provider")
}
self.is_anthropic = self._is_anthropic_model(model)
self.stream = stream
self.interceptor = interceptor
litellm.drop_params = True
# Normalize self.stop to always be a list[str]
if stop is None:
self.stop: list[str] = []
elif isinstance(stop, str):
self.stop = [stop]
else:
self.stop = stop
self.set_callbacks(callbacks or [])
self.set_env_callbacks()
@model_validator(mode="after")
def _init_litellm(self) -> LLM:
self.is_litellm = True
if LITELLM_AVAILABLE:
litellm.drop_params = True
self.set_callbacks(self.callbacks or [])
self.set_env_callbacks()
return self
@staticmethod
def _is_anthropic_model(model: str) -> bool:
@@ -753,7 +707,7 @@ class LLM(BaseLLM):
"temperature": self.temperature,
"top_p": self.top_p,
"n": self.n,
"stop": self.stop or None,
"stop": (self.stop or None) if self.supports_stop_words() else None,
"max_tokens": self.max_tokens or self.max_completion_tokens,
"presence_penalty": self.presence_penalty,
"frequency_penalty": self.frequency_penalty,
@@ -1016,21 +970,25 @@ class LLM(BaseLLM):
)
result = instructor_instance.to_pydantic()
structured_response = result.model_dump_json()
usage_dict = self._usage_to_dict(usage_info)
self._handle_emit_call_events(
response=structured_response,
call_type=LLMCallType.LLM_CALL,
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage_dict,
)
return structured_response
usage_dict = self._usage_to_dict(usage_info)
self._handle_emit_call_events(
response=full_response,
call_type=LLMCallType.LLM_CALL,
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage_dict,
)
return full_response
@@ -1040,12 +998,14 @@ class LLM(BaseLLM):
return tool_result
# --- 10) Emit completion event and return response
usage_dict = self._usage_to_dict(usage_info)
self._handle_emit_call_events(
response=full_response,
call_type=LLMCallType.LLM_CALL,
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage_dict,
)
return full_response
@@ -1067,6 +1027,7 @@ class LLM(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=self._usage_to_dict(usage_info),
)
return full_response
@@ -1218,6 +1179,7 @@ class LLM(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=None,
)
return structured_response
@@ -1248,6 +1210,8 @@ class LLM(BaseLLM):
raise LLMContextLengthExceededError(error_msg) from e
raise
response_usage = self._usage_to_dict(getattr(response, "usage", None))
# --- 2) Handle structured output response (when response_model is provided)
if response_model is not None:
# When using instructor/response_model, litellm returns a Pydantic model instance
@@ -1259,6 +1223,7 @@ class LLM(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=response_usage,
)
return structured_response
@@ -1290,6 +1255,7 @@ class LLM(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=response_usage,
)
return text_response
@@ -1313,6 +1279,7 @@ class LLM(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=response_usage,
)
return text_response
@@ -1362,6 +1329,7 @@ class LLM(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=None,
)
return structured_response
@@ -1388,6 +1356,8 @@ class LLM(BaseLLM):
raise LLMContextLengthExceededError(error_msg) from e
raise
response_usage = self._usage_to_dict(getattr(response, "usage", None))
if response_model is not None:
if isinstance(response, BaseModel):
structured_response = response.model_dump_json()
@@ -1397,6 +1367,7 @@ class LLM(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=response_usage,
)
return structured_response
@@ -1426,6 +1397,7 @@ class LLM(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=response_usage,
)
return text_response
@@ -1448,6 +1420,7 @@ class LLM(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=response_usage,
)
return text_response
@@ -1594,12 +1567,14 @@ class LLM(BaseLLM):
if result is not None:
return result
usage_dict = self._usage_to_dict(usage_info)
self._handle_emit_call_events(
response=full_response,
call_type=LLMCallType.LLM_CALL,
from_task=from_task,
from_agent=from_agent,
messages=params.get("messages"),
usage=usage_dict,
)
return full_response
@@ -1621,6 +1596,7 @@ class LLM(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params.get("messages"),
usage=self._usage_to_dict(usage_info),
)
return full_response
raise
@@ -1825,9 +1801,11 @@ class LLM(BaseLLM):
# whether to summarize the content or abort based on the respect_context_window flag
raise
except Exception as e:
unsupported_stop = "Unsupported parameter" in str(
e
) and "'stop'" in str(e)
error_str = str(e)
unsupported_stop = "'stop'" in error_str and (
"Unsupported parameter" in error_str
or "does not support parameters" in error_str
)
if unsupported_stop:
if (
@@ -1961,9 +1939,11 @@ class LLM(BaseLLM):
except LLMContextLengthExceededError:
raise
except Exception as e:
unsupported_stop = "Unsupported parameter" in str(
e
) and "'stop'" in str(e)
error_str = str(e)
unsupported_stop = "'stop'" in error_str and (
"Unsupported parameter" in error_str
or "does not support parameters" in error_str
)
if unsupported_stop:
if (
@@ -2003,6 +1983,19 @@ class LLM(BaseLLM):
)
raise
@staticmethod
def _usage_to_dict(usage: Any) -> dict[str, Any] | None:
if usage is None:
return None
if isinstance(usage, dict):
return usage
if hasattr(usage, "model_dump"):
result: dict[str, Any] = usage.model_dump()
return result
if hasattr(usage, "__dict__"):
return {k: v for k, v in vars(usage).items() if not k.startswith("_")}
return None
def _handle_emit_call_events(
self,
response: Any,
@@ -2010,6 +2003,7 @@ class LLM(BaseLLM):
from_task: Task | None = None,
from_agent: Agent | None = None,
messages: str | list[LLMMessage] | None = None,
usage: dict[str, Any] | None = None,
) -> None:
"""Handle the events for the LLM call.
@@ -2019,6 +2013,7 @@ class LLM(BaseLLM):
from_task: Optional task object
from_agent: Optional agent object
messages: Optional messages object
usage: Optional token usage data
"""
crewai_event_bus.emit(
self,
@@ -2030,6 +2025,7 @@ class LLM(BaseLLM):
from_agent=from_agent,
model=self.model,
call_id=get_current_call_id(),
usage=usage,
),
)
@@ -2263,6 +2259,10 @@ class LLM(BaseLLM):
Note: This method is only used by the litellm fallback path.
Native providers override this method with their own implementation.
"""
model_lower = self.model.lower() if self.model else ""
if "gpt-5" in model_lower:
return False
if not LITELLM_AVAILABLE or get_supported_openai_params is None:
# When litellm is not available, assume stop words are supported
return True
@@ -2434,7 +2434,7 @@ class LLM(BaseLLM):
**filtered_params,
)
def __deepcopy__(self, memo: dict[int, Any] | None) -> LLM:
def __deepcopy__(self, memo: dict[int, Any] | None = None) -> LLM:
"""Create a deep copy of the LLM instance."""
import copy

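With Pydantic now owning __init__, the __new__ factory above only has to choose the class, and per-instance setup moves into validators. A hypothetical reduction of that routing pattern (the native/ prefix and both class names are stand-ins):

from pydantic import BaseModel, model_validator


class LLM(BaseModel):
    model: str
    is_litellm: bool = False

    def __new__(cls, **kwargs: object) -> "LLM":
        # Route only when constructing the base class directly; Python still
        # runs Pydantic's __init__ on the returned instance because the
        # target is a subclass of cls.
        if cls is LLM and str(kwargs.get("model", "")).startswith("native/"):
            return object.__new__(NativeLLM)
        return object.__new__(cls)

    @model_validator(mode="after")
    def _init_fallback(self) -> "LLM":
        if type(self) is LLM:
            self.is_litellm = True  # fallback path, as in the diff
        return self


class NativeLLM(LLM):
    pass


assert type(LLM(model="native/x")) is NativeLLM
assert LLM(model="gpt-x").is_litellm is True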

@@ -14,10 +14,18 @@ from datetime import datetime
import json
import logging
import re
from typing import TYPE_CHECKING, Any, Final
from typing import TYPE_CHECKING, Any, Final, Literal
import uuid
from pydantic import BaseModel
from pydantic import (
AliasChoices,
BaseModel,
ConfigDict,
Field,
PrivateAttr,
model_validator,
)
from typing_extensions import TypedDict
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.llm_events import (
@@ -51,6 +59,12 @@ if TYPE_CHECKING:
from crewai.utilities.types import LLMMessage
class JsonResponseFormat(TypedDict):
"""Response format requesting raw JSON output (e.g. ``{"type": "json_object"}``)."""
type: Literal["json_object"]
DEFAULT_CONTEXT_WINDOW_SIZE: Final[int] = 4096
DEFAULT_SUPPORTS_STOP_WORDS: Final[bool] = True
_JSON_EXTRACTION_PATTERN: Final[re.Pattern[str]] = re.compile(r"\{.*}", re.DOTALL)
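# Hedged usage sketch for the TypedDict above; the helper and the underscored
# stand-in type below are illustrative, not part of the diff.
from typing import Literal, TypedDict

from pydantic import BaseModel


class _JsonResponseFormat(TypedDict):
    type: Literal["json_object"]


def wants_raw_json(fmt: _JsonResponseFormat | type[BaseModel] | None) -> bool:
    # Raw-JSON mode is plain data; a Pydantic model class is a type object.
    return isinstance(fmt, dict) and fmt.get("type") == "json_object"


assert wants_raw_json({"type": "json_object"}) is True
assert wants_raw_json(None) is False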
@@ -82,7 +96,7 @@ def get_current_call_id() -> str:
return call_id
class BaseLLM(ABC):
class BaseLLM(BaseModel, ABC):
"""Abstract base class for LLM implementations.
This class defines the interface that all LLM implementations must follow.
@@ -101,56 +115,100 @@ class BaseLLM(ABC):
additional_params: Additional provider-specific parameters.
"""
model_config = ConfigDict(arbitrary_types_allowed=True, populate_by_name=True)
model: str
temperature: float | None = None
api_key: str | None = None
base_url: str | None = None
provider: str = Field(default="openai")
prefer_upload: bool = False
is_litellm: bool = False
stop: list[str] = Field(
default_factory=list,
validation_alias=AliasChoices("stop", "stop_sequences"),
)
additional_params: dict[str, Any] = Field(default_factory=dict)
def __init__(
self,
model: str,
temperature: float | None = None,
api_key: str | None = None,
base_url: str | None = None,
provider: str | None = None,
prefer_upload: bool = False,
**kwargs: Any,
) -> None:
"""Initialize the BaseLLM with default attributes.
def __setattr__(self, name: str, value: Any) -> None:
if name in ("stop", "stop_sequences"):
if value is None:
value = []
elif isinstance(value, str):
value = [value]
elif not isinstance(value, list):
value = list(value)
name = "stop"
try:
super().__setattr__(name, value)
except ValueError:
if name in self.model_fields:
raise # Re-raise validation errors on declared fields
# Fallback for attributes not declared as fields (e.g. mock patching)
object.__setattr__(self, name, value)
except AttributeError:
object.__setattr__(self, name, value)
Args:
model: The model identifier/name.
temperature: Optional temperature setting for response generation.
stop: Optional list of stop sequences for generation.
prefer_upload: Whether to prefer file upload over inline base64.
**kwargs: Additional provider-specific parameters.
def __delattr__(self, name: str) -> None:
try:
super().__delattr__(name)
except AttributeError:
object.__delattr__(self, name)
@property
def stop_sequences(self) -> list[str]:
"""Alias for ``stop`` — kept for backward compatibility with provider APIs.
Writes are handled by ``__setattr__``, which normalizes and redirects
``stop_sequences`` assignments to the ``stop`` field.
"""
if not model:
raise ValueError("Model name is required and cannot be empty")
return self.stop
self.model = model
self.temperature = temperature
self.api_key = api_key
self.base_url = base_url
self.prefer_upload = prefer_upload
# Store additional parameters for provider-specific use
self.additional_params = kwargs
self._provider = provider or "openai"
stop = kwargs.pop("stop", None)
if stop is None:
self.stop: list[str] = []
elif isinstance(stop, str):
self.stop = [stop]
elif isinstance(stop, list):
self.stop = stop
else:
self.stop = []
self._token_usage = {
_token_usage: dict[str, int] = PrivateAttr(
default_factory=lambda: {
"total_tokens": 0,
"prompt_tokens": 0,
"completion_tokens": 0,
"successful_requests": 0,
"cached_prompt_tokens": 0,
}
)
@model_validator(mode="before")
@classmethod
def _validate_init_fields(cls, data: Any) -> Any:
if not isinstance(data, dict):
return data
if not data.get("model"):
raise ValueError("Model name is required and cannot be empty")
# Normalize stop: accept str, list, or None; also accept stop_sequences alias
stop_seqs = data.pop("stop_sequences", None)
stop = stop_seqs if stop_seqs is not None else data.get("stop")
if stop is None:
data["stop"] = []
elif isinstance(stop, str):
data["stop"] = [stop]
elif isinstance(stop, list):
data["stop"] = stop
else:
data["stop"] = list(stop)
# Default provider
if not data.get("provider"):
data["provider"] = "openai"
# Collect unknown kwargs into additional_params
known_fields = set(cls.model_fields.keys())
extras = {k: v for k, v in data.items() if k not in known_fields}
for k in extras:
data.pop(k)
existing = data.get("additional_params") or {}
existing.update(extras)
data["additional_params"] = existing
return data
def to_config_dict(self) -> dict[str, Any]:
"""Serialize this LLM to a dict that can reconstruct it via ``LLM(**config)``.
@@ -174,16 +232,6 @@ class BaseLLM(ABC):
return config
@property
def provider(self) -> str:
"""Get the provider of the LLM."""
return self._provider
@provider.setter
def provider(self, value: str) -> None:
"""Set the provider of the LLM."""
self._provider = value
@abstractmethod
def call(
self,
@@ -412,6 +460,7 @@ class BaseLLM(ABC):
from_task: Task | None = None,
from_agent: Agent | None = None,
messages: str | list[LLMMessage] | None = None,
usage: dict[str, Any] | None = None,
) -> None:
"""Emit LLM call completed event."""
from crewai.utilities.serialization import to_serializable
@@ -426,6 +475,7 @@ class BaseLLM(ABC):
from_agent=from_agent,
model=self.model,
call_id=get_current_call_id(),
usage=usage,
),
)

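The stop/stop_sequences handling above accepts either name at construction time and at assignment time. A compact sketch of that dual-alias behavior, assuming only what the hunk shows (the StopModel name is illustrative):

from typing import Any

from pydantic import AliasChoices, BaseModel, ConfigDict, Field, model_validator


class StopModel(BaseModel):
    model_config = ConfigDict(populate_by_name=True)

    stop: list[str] = Field(
        default_factory=list,
        validation_alias=AliasChoices("stop", "stop_sequences"),
    )

    @model_validator(mode="before")
    @classmethod
    def _normalize(cls, data: Any) -> Any:
        # Accept str, list, or None under either name before validation.
        if isinstance(data, dict):
            raw = data.pop("stop_sequences", None) or data.get("stop")
            data["stop"] = [raw] if isinstance(raw, str) else (raw or [])
        return data

    def __setattr__(self, name: str, value: Any) -> None:
        # Redirect stop_sequences writes to the canonical stop field.
        if name in ("stop", "stop_sequences"):
            if value is None:
                value = []
            elif isinstance(value, str):
                value = [value]
            name = "stop"
        super().__setattr__(name, value)


m = StopModel(stop_sequences="END")  # alias accepted at init
assert m.stop == ["END"]
m.stop_sequences = None              # assignment redirected and normalized
assert m.stop == []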

@@ -3,12 +3,13 @@ from __future__ import annotations
import json
import logging
import os
from typing import TYPE_CHECKING, Any, Final, Literal, TypeGuard, cast
from typing import Any, Final, Literal, TypeGuard, cast
from pydantic import BaseModel
from pydantic import BaseModel, PrivateAttr, model_validator
from crewai.events.types.llm_events import LLMCallType
from crewai.llms.base_llm import BaseLLM, llm_call_context
from crewai.llms.base_llm import BaseLLM, JsonResponseFormat, llm_call_context
from crewai.llms.hooks.base import BaseInterceptor
from crewai.llms.hooks.transport import AsyncHTTPTransport, HTTPTransport
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
@@ -17,9 +18,6 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
from crewai.utilities.types import LLMMessage
if TYPE_CHECKING:
from crewai.llms.hooks.base import BaseInterceptor
try:
from anthropic import Anthropic, AsyncAnthropic, transform_schema
from anthropic.types import (
@@ -150,60 +148,47 @@ class AnthropicCompletion(BaseLLM):
offering native tool use, streaming support, and proper message formatting.
"""
def __init__(
self,
model: str = "claude-3-5-sonnet-20241022",
api_key: str | None = None,
base_url: str | None = None,
timeout: float | None = None,
max_retries: int = 2,
temperature: float | None = None,
max_tokens: int = 4096, # Required for Anthropic
top_p: float | None = None,
stop_sequences: list[str] | None = None,
stream: bool = False,
client_params: dict[str, Any] | None = None,
interceptor: BaseInterceptor[httpx.Request, httpx.Response] | None = None,
thinking: AnthropicThinkingConfig | None = None,
response_format: type[BaseModel] | None = None,
tool_search: AnthropicToolSearchConfig | bool | None = None,
**kwargs: Any,
):
"""Initialize Anthropic chat completion client.
model: str = "claude-3-5-sonnet-20241022"
timeout: float | None = None
max_retries: int = 2
max_tokens: int = 4096
top_p: float | None = None
stream: bool = False
client_params: dict[str, Any] | None = None
interceptor: BaseInterceptor[httpx.Request, httpx.Response] | None = None
thinking: AnthropicThinkingConfig | None = None
response_format: JsonResponseFormat | type[BaseModel] | None = None
tool_search: AnthropicToolSearchConfig | None = None
is_claude_3: bool = False
supports_tools: bool = True
Args:
model: Anthropic model name (e.g., 'claude-3-5-sonnet-20241022')
api_key: Anthropic API key (defaults to ANTHROPIC_API_KEY env var)
base_url: Custom base URL for Anthropic API
timeout: Request timeout in seconds
max_retries: Maximum number of retries
temperature: Sampling temperature (0-1)
max_tokens: Maximum tokens in response (required for Anthropic)
top_p: Nucleus sampling parameter
stop_sequences: Stop sequences (Anthropic uses stop_sequences, not stop)
stream: Enable streaming responses
client_params: Additional parameters for the Anthropic client
interceptor: HTTP interceptor for modifying requests/responses at transport level.
response_format: Pydantic model for structured output. When provided, responses
will be validated against this model schema.
tool_search: Enable Anthropic's server-side tool search. When True, uses "bm25"
variant by default. Pass an AnthropicToolSearchConfig to choose "regex" or
"bm25". When enabled, tools are automatically marked with defer_loading=True
and a tool search tool is injected into the tools list.
**kwargs: Additional parameters
"""
super().__init__(
model=model, temperature=temperature, stop=stop_sequences or [], **kwargs
)
_client: Any = PrivateAttr(default=None)
_async_client: Any = PrivateAttr(default=None)
_previous_thinking_blocks: list[Any] = PrivateAttr(default_factory=list)
# Client params
self.interceptor = interceptor
self.client_params = client_params
self.base_url = base_url
self.timeout = timeout
self.max_retries = max_retries
@model_validator(mode="before")
@classmethod
def _normalize_anthropic_fields(cls, data: Any) -> Any:
if not isinstance(data, dict):
return data
# Anthropic uses stop_sequences; normalize from stop kwarg
popped = data.pop("stop_sequences", None)
seqs = popped if popped is not None else (data.get("stop") or [])
if isinstance(seqs, str):
seqs = [seqs]
data["stop"] = seqs
data["is_claude_3"] = "claude-3" in data.get("model", "").lower()
# Normalize tool_search
ts = data.get("tool_search")
if ts is True:
data["tool_search"] = AnthropicToolSearchConfig()
elif ts is not None and not isinstance(ts, AnthropicToolSearchConfig):
data["tool_search"] = None
return data
self.client = Anthropic(**self._get_client_params())
@model_validator(mode="after")
def _init_clients(self) -> AnthropicCompletion:
self._client = Anthropic(**self._get_client_params())
async_client_params = self._get_client_params()
if self.interceptor:
@@ -211,51 +196,8 @@ class AnthropicCompletion(BaseLLM):
async_http_client = httpx.AsyncClient(transport=async_transport)
async_client_params["http_client"] = async_http_client
self.async_client = AsyncAnthropic(**async_client_params)
# Store completion parameters
self.max_tokens = max_tokens
self.top_p = top_p
self.stream = stream
self.stop_sequences = stop_sequences or []
self.thinking = thinking
self.previous_thinking_blocks: list[ThinkingBlock] = []
self.response_format = response_format
# Tool search config
self.tool_search: AnthropicToolSearchConfig | None
if tool_search is True:
self.tool_search = AnthropicToolSearchConfig()
elif isinstance(tool_search, AnthropicToolSearchConfig):
self.tool_search = tool_search
else:
self.tool_search = None
# Model-specific settings
self.is_claude_3 = "claude-3" in model.lower()
self.supports_tools = True
@property
def stop(self) -> list[str]:
"""Get stop sequences sent to the API."""
return self.stop_sequences
@stop.setter
def stop(self, value: list[str] | str | None) -> None:
"""Set stop sequences.
Synchronizes stop_sequences to ensure values set by CrewAgentExecutor
are properly sent to the Anthropic API.
Args:
value: Stop sequences as a list, single string, or None
"""
if value is None:
self.stop_sequences = []
elif isinstance(value, str):
self.stop_sequences = [value]
elif isinstance(value, list):
self.stop_sequences = value
else:
self.stop_sequences = []
self._async_client = AsyncAnthropic(**async_client_params)
return self
def to_config_dict(self) -> dict[str, Any]:
"""Extend base config with Anthropic-specific fields."""
@@ -751,11 +693,11 @@ class AnthropicCompletion(BaseLLM):
)
elif isinstance(content, list):
formatted_messages.append({"role": "assistant", "content": content})
elif self.thinking and self.previous_thinking_blocks:
elif self.thinking and self._previous_thinking_blocks:
structured_content = cast(
list[dict[str, Any]],
[
*self.previous_thinking_blocks,
*self._previous_thinking_blocks,
{"type": "text", "text": content if content else ""},
],
)
@@ -809,7 +751,7 @@ class AnthropicCompletion(BaseLLM):
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
response_model: type[BaseModel] | None = None,
response_model: JsonResponseFormat | type[BaseModel] | None = None,
) -> str | Any:
"""Handle non-streaming message completion."""
uses_file_api = _contains_file_id_reference(params.get("messages", []))
@@ -843,11 +785,11 @@ class AnthropicCompletion(BaseLLM):
try:
if betas:
params["betas"] = betas
response = self.client.beta.messages.create(
response = self._client.beta.messages.create(
**params, extra_body=extra_body
)
else:
response = self.client.messages.create(**params)
response = self._client.messages.create(**params)
except Exception as e:
if is_context_length_exceeded(e):
@@ -869,6 +811,7 @@ class AnthropicCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return structured_data
else:
@@ -884,6 +827,7 @@ class AnthropicCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return structured_data
@@ -906,6 +850,7 @@ class AnthropicCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return list(tool_uses)
@@ -928,7 +873,7 @@ class AnthropicCompletion(BaseLLM):
thinking_blocks.append(cast(ThinkingBlock, thinking_block))
if thinking_blocks:
self.previous_thinking_blocks = thinking_blocks
self._previous_thinking_blocks = thinking_blocks
content = self._apply_stop_words(content)
self._emit_call_completed_event(
@@ -937,6 +882,7 @@ class AnthropicCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
if usage.get("total_tokens", 0) > 0:
@@ -952,7 +898,7 @@ class AnthropicCompletion(BaseLLM):
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
response_model: type[BaseModel] | None = None,
response_model: JsonResponseFormat | type[BaseModel] | None = None,
) -> str | Any:
"""Handle streaming message completion."""
betas: list[str] = []
@@ -991,9 +937,9 @@ class AnthropicCompletion(BaseLLM):
current_tool_calls: dict[int, dict[str, Any]] = {}
stream_context = (
self.client.beta.messages.stream(**stream_params, extra_body=extra_body)
self._client.beta.messages.stream(**stream_params, extra_body=extra_body)
if betas
else self.client.messages.stream(**stream_params)
else self._client.messages.stream(**stream_params)
)
with stream_context as stream:
response_id = None
@@ -1072,7 +1018,7 @@ class AnthropicCompletion(BaseLLM):
thinking_blocks.append(cast(ThinkingBlock, thinking_block))
if thinking_blocks:
self.previous_thinking_blocks = thinking_blocks
self._previous_thinking_blocks = thinking_blocks
usage = self._extract_anthropic_token_usage(final_message)
self._track_token_usage_internal(usage)
@@ -1086,6 +1032,7 @@ class AnthropicCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return structured_data
for block in final_message.content:
@@ -1100,6 +1047,7 @@ class AnthropicCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return structured_data
@@ -1129,6 +1077,7 @@ class AnthropicCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return self._invoke_after_llm_call_hooks(
@@ -1269,7 +1218,7 @@ class AnthropicCompletion(BaseLLM):
try:
# Send tool results back to Claude for final response
final_response: Message = self.client.messages.create(**follow_up_params)
final_response: Message = self._client.messages.create(**follow_up_params)
# Track token usage for follow-up call
follow_up_usage = self._extract_anthropic_token_usage(final_response)
@@ -1288,7 +1237,7 @@ class AnthropicCompletion(BaseLLM):
thinking_blocks.append(cast(ThinkingBlock, thinking_block))
if thinking_blocks:
self.previous_thinking_blocks = thinking_blocks
self._previous_thinking_blocks = thinking_blocks
final_content = self._apply_stop_words(final_content)
@@ -1299,6 +1248,7 @@ class AnthropicCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=follow_up_params["messages"],
usage=follow_up_usage,
)
# Log combined token usage
@@ -1330,7 +1280,7 @@ class AnthropicCompletion(BaseLLM):
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
response_model: type[BaseModel] | None = None,
response_model: JsonResponseFormat | type[BaseModel] | None = None,
) -> str | Any:
"""Handle non-streaming async message completion."""
uses_file_api = _contains_file_id_reference(params.get("messages", []))
@@ -1364,11 +1314,11 @@ class AnthropicCompletion(BaseLLM):
try:
if betas:
params["betas"] = betas
response = await self.async_client.beta.messages.create(
response = await self._async_client.beta.messages.create(
**params, extra_body=extra_body
)
else:
response = await self.async_client.messages.create(**params)
response = await self._async_client.messages.create(**params)
except Exception as e:
if is_context_length_exceeded(e):
@@ -1390,6 +1340,7 @@ class AnthropicCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return structured_data
else:
@@ -1405,6 +1356,7 @@ class AnthropicCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return structured_data
@@ -1425,6 +1377,7 @@ class AnthropicCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return list(tool_uses)
@@ -1448,6 +1401,7 @@ class AnthropicCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
if usage.get("total_tokens", 0) > 0:
@@ -1461,7 +1415,7 @@ class AnthropicCompletion(BaseLLM):
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
response_model: type[BaseModel] | None = None,
response_model: JsonResponseFormat | type[BaseModel] | None = None,
) -> str | Any:
"""Handle async streaming message completion."""
betas: list[str] = []
@@ -1498,11 +1452,11 @@ class AnthropicCompletion(BaseLLM):
current_tool_calls: dict[int, dict[str, Any]] = {}
stream_context = (
self.async_client.beta.messages.stream(
self._async_client.beta.messages.stream(
**stream_params, extra_body=extra_body
)
if betas
else self.async_client.messages.stream(**stream_params)
else self._async_client.messages.stream(**stream_params)
)
async with stream_context as stream:
response_id = None
@@ -1585,6 +1539,7 @@ class AnthropicCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return structured_data
for block in final_message.content:
@@ -1599,6 +1554,7 @@ class AnthropicCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return structured_data
@@ -1627,6 +1583,7 @@ class AnthropicCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return full_response
@@ -1664,7 +1621,7 @@ class AnthropicCompletion(BaseLLM):
]
try:
final_response: Message = await self.async_client.messages.create(
final_response: Message = await self._async_client.messages.create(
**follow_up_params
)
@@ -1685,6 +1642,7 @@ class AnthropicCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=follow_up_params["messages"],
usage=follow_up_usage,
)
total_usage = {
@@ -1786,8 +1744,8 @@ class AnthropicCompletion(BaseLLM):
from crewai_files.uploaders.anthropic import AnthropicFileUploader
return AnthropicFileUploader(
client=self.client,
async_client=self.async_client,
client=self._client,
async_client=self._async_client,
)
except ImportError:
return None

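The before-validator above folds legacy spellings into canonical fields, for example tool_search=True becoming a default config object while junk values drop to None. A sketch of that normalization with a stand-in ToolSearchConfig:

from typing import Any

from pydantic import BaseModel, model_validator


class ToolSearchConfig(BaseModel):
    variant: str = "bm25"


class Completion(BaseModel):
    tool_search: ToolSearchConfig | None = None

    @model_validator(mode="before")
    @classmethod
    def _normalize(cls, data: Any) -> Any:
        if isinstance(data, dict):
            ts = data.get("tool_search")
            if ts is True:
                data["tool_search"] = ToolSearchConfig()  # default variant
            elif ts is not None and not isinstance(ts, (ToolSearchConfig, dict)):
                data["tool_search"] = None  # anything unrecognized is dropped
        return data


assert Completion(tool_search=True).tool_search.variant == "bm25"
assert Completion(tool_search="bogus").tool_search is None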

@@ -3,11 +3,13 @@ from __future__ import annotations
import json
import logging
import os
from typing import TYPE_CHECKING, Any, TypedDict
from typing import Any, TypedDict
from urllib.parse import urlparse
from pydantic import BaseModel
from pydantic import BaseModel, PrivateAttr, model_validator
from typing_extensions import Self
from crewai.llms.hooks.base import BaseInterceptor
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
@@ -16,10 +18,6 @@ from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.types import LLMMessage
if TYPE_CHECKING:
from crewai.llms.hooks.base import BaseInterceptor
try:
from azure.ai.inference import (
ChatCompletionsClient,
@@ -76,109 +74,84 @@ class AzureCompletion(BaseLLM):
offering native function calling, streaming support, and proper Azure authentication.
"""
def __init__(
self,
model: str,
api_key: str | None = None,
endpoint: str | None = None,
api_version: str | None = None,
timeout: float | None = None,
max_retries: int = 2,
temperature: float | None = None,
top_p: float | None = None,
frequency_penalty: float | None = None,
presence_penalty: float | None = None,
max_tokens: int | None = None,
stop: list[str] | None = None,
stream: bool = False,
interceptor: BaseInterceptor[Any, Any] | None = None,
response_format: type[BaseModel] | None = None,
**kwargs: Any,
):
"""Initialize Azure AI Inference chat completion client.
endpoint: str | None = None
api_version: str | None = None
timeout: float | None = None
max_retries: int = 2
top_p: float | None = None
frequency_penalty: float | None = None
presence_penalty: float | None = None
max_tokens: int | None = None
stream: bool = False
interceptor: BaseInterceptor[Any, Any] | None = None
response_format: type[BaseModel] | None = None
is_openai_model: bool = False
is_azure_openai_endpoint: bool = False
Args:
model: Azure deployment name or model name
api_key: Azure API key (defaults to AZURE_API_KEY env var)
endpoint: Azure endpoint URL (defaults to AZURE_ENDPOINT env var)
api_version: Azure API version (defaults to AZURE_API_VERSION env var)
timeout: Request timeout in seconds
max_retries: Maximum number of retries
temperature: Sampling temperature (0-2)
top_p: Nucleus sampling parameter
frequency_penalty: Frequency penalty (-2 to 2)
presence_penalty: Presence penalty (-2 to 2)
max_tokens: Maximum tokens in response
stop: Stop sequences
stream: Enable streaming responses
interceptor: HTTP interceptor (not yet supported for Azure).
response_format: Pydantic model for structured output. Used as default when
response_model is not passed to call()/acall() methods.
Only works with OpenAI models deployed on Azure.
**kwargs: Additional parameters
"""
if interceptor is not None:
_client: Any = PrivateAttr(default=None)
_async_client: Any = PrivateAttr(default=None)
@model_validator(mode="before")
@classmethod
def _normalize_azure_fields(cls, data: Any) -> Any:
if not isinstance(data, dict):
return data
if data.get("interceptor") is not None:
raise NotImplementedError(
"HTTP interceptors are not yet supported for Azure AI Inference provider. "
"Interceptors are currently supported for OpenAI and Anthropic providers only."
)
super().__init__(
model=model, temperature=temperature, stop=stop or [], **kwargs
)
self.api_key = api_key or os.getenv("AZURE_API_KEY")
self.endpoint = (
endpoint
or os.getenv("AZURE_ENDPOINT")
or os.getenv("AZURE_OPENAI_ENDPOINT")
or os.getenv("AZURE_API_BASE")
)
# Resolve env vars
data["api_key"] = data.get("api_key") or os.getenv("AZURE_API_KEY")
data["endpoint"] = (
data.get("endpoint")
or os.getenv("AZURE_ENDPOINT")
or os.getenv("AZURE_OPENAI_ENDPOINT")
or os.getenv("AZURE_API_BASE")
)
self.api_version = api_version or os.getenv("AZURE_API_VERSION") or "2024-06-01"
self.timeout = timeout
self.max_retries = max_retries
data["api_version"] = (
data.get("api_version") or os.getenv("AZURE_API_VERSION") or "2024-06-01"
)
if not self.api_key:
if not data["api_key"]:
raise ValueError(
"Azure API key is required. Set AZURE_API_KEY environment variable or pass api_key parameter."
)
if not self.endpoint:
if not data["endpoint"]:
raise ValueError(
"Azure endpoint is required. Set AZURE_ENDPOINT environment variable or pass endpoint parameter."
)
# Validate and potentially fix Azure OpenAI endpoint URL
self.endpoint = self._validate_and_fix_endpoint(self.endpoint, model)
model = data.get("model", "")
data["endpoint"] = AzureCompletion._validate_and_fix_endpoint(
data["endpoint"], model
)
data["is_openai_model"] = any(
prefix in model.lower() for prefix in ["gpt-", "o1-", "text-"]
)
parsed = urlparse(data["endpoint"])
hostname = parsed.hostname or ""
data["is_azure_openai_endpoint"] = (
hostname == "openai.azure.com" or hostname.endswith(".openai.azure.com")
) and "/openai/deployments/" in data["endpoint"]
return data
# Build client kwargs
client_kwargs = {
"endpoint": self.endpoint,
"credential": AzureKeyCredential(self.api_key),
}
@model_validator(mode="after")
def _init_clients(self) -> AzureCompletion:
if not self.api_key:
raise ValueError("Azure API key is required.")
client_kwargs: dict[str, Any] = {
"endpoint": self.endpoint,
"credential": AzureKeyCredential(self.api_key),
}
# Add api_version if specified (primarily for Azure OpenAI endpoints)
if self.api_version:
client_kwargs["api_version"] = self.api_version
self.client = ChatCompletionsClient(**client_kwargs) # type: ignore[arg-type]
self.async_client = AsyncChatCompletionsClient(**client_kwargs) # type: ignore[arg-type]
self.top_p = top_p
self.frequency_penalty = frequency_penalty
self.presence_penalty = presence_penalty
self.max_tokens = max_tokens
self.stream = stream
self.response_format = response_format
self.is_openai_model = any(
prefix in model.lower() for prefix in ["gpt-", "o1-", "text-"]
)
self.is_azure_openai_endpoint = (
"openai.azure.com" in self.endpoint
and "/openai/deployments/" in self.endpoint
)
self._client = ChatCompletionsClient(**client_kwargs)
self._async_client = AsyncChatCompletionsClient(**client_kwargs)
return self
def to_config_dict(self) -> dict[str, Any]:
"""Extend base config with Azure-specific fields."""
@@ -215,7 +188,11 @@ class AzureCompletion(BaseLLM):
Returns:
Validated and potentially corrected endpoint URL
"""
if "openai.azure.com" in endpoint and "/openai/deployments/" not in endpoint:
ep_host = urlparse(endpoint).hostname or ""
is_azure_openai = ep_host == "openai.azure.com" or ep_host.endswith(
".openai.azure.com"
)
if is_azure_openai and "/openai/deployments/" not in endpoint:
endpoint = endpoint.rstrip("/")
if not endpoint.endswith("/openai/deployments"):
@@ -592,6 +569,7 @@ class AzureCompletion(BaseLLM):
params: AzureCompletionParams,
from_task: Any | None = None,
from_agent: Any | None = None,
usage: dict[str, Any] | None = None,
) -> BaseModel:
"""Validate content against response model and emit completion event.
@@ -617,6 +595,7 @@ class AzureCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return structured_data
@@ -666,6 +645,7 @@ class AzureCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return list(message.tool_calls)
@@ -703,6 +683,7 @@ class AzureCompletion(BaseLLM):
params=params,
from_task=from_task,
from_agent=from_agent,
usage=usage,
)
content = self._apply_stop_words(content)
@@ -714,6 +695,7 @@ class AzureCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return self._invoke_after_llm_call_hooks(
@@ -731,7 +713,7 @@ class AzureCompletion(BaseLLM):
"""Handle non-streaming chat completion."""
try:
# Cast params to Any to avoid type checking issues with TypedDict unpacking
response: ChatCompletions = self.client.complete(**params) # type: ignore[assignment,arg-type]
response: ChatCompletions = self._client.complete(**params)
return self._process_completion_response(
response=response,
params=params,
@@ -817,7 +799,7 @@ class AzureCompletion(BaseLLM):
self,
full_response: str,
tool_calls: dict[int, dict[str, Any]],
usage_data: dict[str, int],
usage_data: dict[str, Any] | None,
params: AzureCompletionParams,
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
@@ -829,7 +811,7 @@ class AzureCompletion(BaseLLM):
Args:
full_response: The complete streamed response content
tool_calls: Dictionary of tool calls accumulated during streaming
usage_data: Token usage data from the stream
usage_data: Token usage data from the stream, or None if unavailable
params: Completion parameters containing messages
available_functions: Available functions for tool calling
from_task: Task that initiated the call
@@ -839,7 +821,8 @@ class AzureCompletion(BaseLLM):
Returns:
Final response content after processing, or structured output
"""
self._track_token_usage_internal(usage_data)
if usage_data:
self._track_token_usage_internal(usage_data)
# Handle structured output validation
if response_model and self.is_openai_model:
@@ -849,6 +832,7 @@ class AzureCompletion(BaseLLM):
params=params,
from_task=from_task,
from_agent=from_agent,
usage=usage_data,
)
# If there are tool_calls but no available_functions, return them
@@ -871,6 +855,7 @@ class AzureCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage_data,
)
return formatted_tool_calls
@@ -907,6 +892,7 @@ class AzureCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage_data,
)
return self._invoke_after_llm_call_hooks(
@@ -925,8 +911,8 @@ class AzureCompletion(BaseLLM):
full_response = ""
tool_calls: dict[int, dict[str, Any]] = {}
usage_data = {"total_tokens": 0}
for update in self.client.complete(**params): # type: ignore[arg-type]
usage_data: dict[str, Any] | None = None
for update in self._client.complete(**params):
if isinstance(update, StreamingChatCompletionsUpdate):
if update.usage:
usage = update.usage
@@ -967,7 +953,7 @@ class AzureCompletion(BaseLLM):
"""Handle non-streaming chat completion asynchronously."""
try:
# Cast params to Any to avoid type checking issues with TypedDict unpacking
response: ChatCompletions = await self.async_client.complete(**params) # type: ignore[assignment,arg-type]
response: ChatCompletions = await self._async_client.complete(**params)
return self._process_completion_response(
response=response,
params=params,
@@ -991,10 +977,10 @@ class AzureCompletion(BaseLLM):
full_response = ""
tool_calls: dict[int, dict[str, Any]] = {}
usage_data = {"total_tokens": 0}
usage_data: dict[str, Any] | None = None
stream = await self.async_client.complete(**params) # type: ignore[arg-type]
async for update in stream: # type: ignore[union-attr]
stream = await self._async_client.complete(**params)
async for update in stream:
if isinstance(update, StreamingChatCompletionsUpdate):
if hasattr(update, "usage") and update.usage:
usage = update.usage
@@ -1110,8 +1096,8 @@ class AzureCompletion(BaseLLM):
This ensures proper cleanup of the underlying aiohttp session
to avoid unclosed connector warnings.
"""
if hasattr(self.async_client, "close"):
await self.async_client.close()
if hasattr(self._async_client, "close"):
await self._async_client.close()
async def __aenter__(self) -> Self:
"""Async context manager entry."""

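The endpoint check above is a real tightening: the old substring test matched any URL merely containing openai.azure.com, while parsing the hostname and requiring an exact or dot-suffix match does not. A small sketch of the difference:

from urllib.parse import urlparse


def is_azure_openai_host(endpoint: str) -> bool:
    # Match the parsed hostname exactly or as a dot-suffix, never a substring.
    host = urlparse(endpoint).hostname or ""
    return host == "openai.azure.com" or host.endswith(".openai.azure.com")


assert is_azure_openai_host("https://myres.openai.azure.com/openai/deployments/gpt")
# The old substring check would have accepted this lookalike host:
assert not is_azure_openai_host("https://openai.azure.com.attacker.example/x")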

@@ -7,7 +7,7 @@ import logging
import os
from typing import TYPE_CHECKING, Any, TypedDict, cast
from pydantic import BaseModel
from pydantic import BaseModel, PrivateAttr, model_validator
from typing_extensions import Required
from crewai.events.types.llm_events import LLMCallType
@@ -33,7 +33,7 @@ if TYPE_CHECKING:
ToolTypeDef,
)
from crewai.llms.hooks.base import BaseInterceptor
try:
@@ -228,129 +228,97 @@ class BedrockCompletion(BaseLLM):
- Model-specific conversation format handling (e.g., Cohere requirements)
"""
def __init__(
self,
model: str = "anthropic.claude-3-5-sonnet-20241022-v2:0",
aws_access_key_id: str | None = None,
aws_secret_access_key: str | None = None,
aws_session_token: str | None = None,
region_name: str | None = None,
temperature: float | None = None,
max_tokens: int | None = None,
top_p: float | None = None,
top_k: int | None = None,
stop_sequences: Sequence[str] | None = None,
stream: bool = False,
guardrail_config: dict[str, Any] | None = None,
additional_model_request_fields: dict[str, Any] | None = None,
additional_model_response_field_paths: list[str] | None = None,
interceptor: BaseInterceptor[Any, Any] | None = None,
response_format: type[BaseModel] | None = None,
**kwargs: Any,
) -> None:
"""Initialize AWS Bedrock completion client.
model: str = "anthropic.claude-3-5-sonnet-20241022-v2:0"
aws_access_key_id: str | None = None
aws_secret_access_key: str | None = None
aws_session_token: str | None = None
region_name: str | None = None
max_tokens: int | None = None
top_p: float | None = None
top_k: int | None = None
stream: bool = False
guardrail_config: dict[str, Any] | None = None
additional_model_request_fields: dict[str, Any] | None = None
additional_model_response_field_paths: list[str] | None = None
interceptor: BaseInterceptor[Any, Any] | None = None
response_format: type[BaseModel] | None = None
is_claude_model: bool = False
supports_tools: bool = True
supports_streaming: bool = True
model_id: str = ""
Args:
model: The Bedrock model ID to use
aws_access_key_id: AWS access key (defaults to environment variable)
aws_secret_access_key: AWS secret key (defaults to environment variable)
aws_session_token: AWS session token for temporary credentials
region_name: AWS region name
temperature: Sampling temperature for response generation
max_tokens: Maximum tokens to generate
top_p: Nucleus sampling parameter
top_k: Top-k sampling parameter (Claude models only)
stop_sequences: List of sequences that stop generation
stream: Whether to use streaming responses
guardrail_config: Guardrail configuration for content filtering
additional_model_request_fields: Model-specific request parameters
additional_model_response_field_paths: Custom response field paths
interceptor: HTTP interceptor (not yet supported for Bedrock).
response_format: Pydantic model for structured output. Used as default when
response_model is not passed to call()/acall() methods.
**kwargs: Additional parameters
"""
if interceptor is not None:
_client: Any = PrivateAttr(default=None)
_async_exit_stack: Any = PrivateAttr(default=None)
_async_client_initialized: bool = PrivateAttr(default=False)
_async_client: Any = PrivateAttr(default=None)
@model_validator(mode="before")
@classmethod
def _normalize_bedrock_fields(cls, data: Any) -> Any:
if not isinstance(data, dict):
return data
if data.get("interceptor") is not None:
raise NotImplementedError(
"HTTP interceptors are not yet supported for AWS Bedrock provider. "
"Interceptors are currently supported for OpenAI and Anthropic providers only."
)
# Extract provider from kwargs to avoid duplicate argument
kwargs.pop("provider", None)
# Force provider to bedrock
data.pop("provider", None)
data["provider"] = "bedrock"
super().__init__(
model=model,
temperature=temperature,
stop=stop_sequences or [],
provider="bedrock",
**kwargs,
)
# Normalize stop_sequences from stop kwarg
popped = data.pop("stop_sequences", None)
seqs = popped if popped is not None else (data.get("stop") or [])
if isinstance(seqs, str):
seqs = [seqs]
elif isinstance(seqs, Sequence) and not isinstance(seqs, list):
seqs = list(seqs)
data["stop"] = seqs
# Configure client with timeouts and retries following AWS best practices
config = Config(
read_timeout=300,
retries={
"max_attempts": 3,
"mode": "adaptive",
},
tcp_keepalive=True,
)
self.region_name = (
region_name
or os.getenv("AWS_DEFAULT_REGION")
or os.getenv("AWS_REGION_NAME")
or "us-east-1"
)
self.aws_access_key_id = aws_access_key_id or os.getenv("AWS_ACCESS_KEY_ID")
self.aws_secret_access_key = aws_secret_access_key or os.getenv(
"AWS_SECRET_ACCESS_KEY"
)
self.aws_session_token = aws_session_token or os.getenv("AWS_SESSION_TOKEN")
# Resolve env vars
data["aws_access_key_id"] = data.get("aws_access_key_id") or os.getenv(
"AWS_ACCESS_KEY_ID"
)
data["aws_secret_access_key"] = data.get("aws_secret_access_key") or os.getenv(
"AWS_SECRET_ACCESS_KEY"
)
data["aws_session_token"] = data.get("aws_session_token") or os.getenv(
"AWS_SESSION_TOKEN"
)
data["region_name"] = (
data.get("region_name")
or os.getenv("AWS_DEFAULT_REGION")
or os.getenv("AWS_REGION_NAME")
or "us-east-1"
)
model = data.get("model", "anthropic.claude-3-5-sonnet-20241022-v2:0")
data["is_claude_model"] = "claude" in model.lower()
data["model_id"] = model
return data
# Initialize Bedrock client with proper configuration
@model_validator(mode="after")
def _init_clients(self) -> BedrockCompletion:
config = Config(
read_timeout=300,
retries={"max_attempts": 3, "mode": "adaptive"},
tcp_keepalive=True,
)
session = Session(
aws_access_key_id=self.aws_access_key_id,
aws_secret_access_key=self.aws_secret_access_key,
aws_session_token=self.aws_session_token,
region_name=self.region_name,
)
self.client = session.client("bedrock-runtime", config=config)
self._client = session.client("bedrock-runtime", config=config)
self._async_exit_stack = AsyncExitStack() if AIOBOTOCORE_AVAILABLE else None
self._async_client_initialized = False
# Store completion parameters
self.max_tokens = max_tokens
self.top_p = top_p
self.top_k = top_k
self.stream = stream
self.stop_sequences = stop_sequences
self.response_format = response_format
# Store advanced features (optional)
self.guardrail_config = guardrail_config
self.additional_model_request_fields = additional_model_request_fields
self.additional_model_response_field_paths = (
additional_model_response_field_paths
)
# Model-specific settings
self.is_claude_model = "claude" in model.lower()
self.supports_tools = True # Converse API supports tools for most models
self.supports_streaming = True
# Handle inference profiles for newer models
self.model_id = model
return self
def to_config_dict(self) -> dict[str, Any]:
"""Extend base config with Bedrock-specific fields."""
config = super().to_config_dict()
# NOTE: AWS credentials (access_key, secret_key, session_token) are
# intentionally excluded — they must come from env on resume.
if self.region_name and self.region_name != "us-east-1":
config["region_name"] = self.region_name
if self.max_tokens is not None:
@@ -363,30 +331,6 @@ class BedrockCompletion(BaseLLM):
config["guardrail_config"] = self.guardrail_config
return config
@property
def stop(self) -> list[str]:
"""Get stop sequences sent to the API."""
return [] if self.stop_sequences is None else list(self.stop_sequences)
@stop.setter
def stop(self, value: Sequence[str] | str | None) -> None:
"""Set stop sequences.
Synchronizes stop_sequences to ensure values set by CrewAgentExecutor
are properly sent to the Bedrock API.
Args:
value: Stop sequences as a Sequence, single string, or None
"""
if value is None:
self.stop_sequences = []
elif isinstance(value, str):
self.stop_sequences = [value]
elif isinstance(value, Sequence):
self.stop_sequences = list(value)
else:
self.stop_sequences = []
def call(
self,
messages: str | list[LLMMessage],
@@ -710,7 +654,7 @@ class BedrockCompletion(BaseLLM):
raise ValueError(f"Invalid message format at index {i}")
# Call Bedrock Converse API with proper error handling
response = self.client.converse(
response = self._client.converse(
modelId=self.model_id,
messages=cast(
"Sequence[MessageTypeDef | MessageOutputTypeDef]",
@@ -720,8 +664,9 @@ class BedrockCompletion(BaseLLM):
)
# Track token usage according to AWS response format
if "usage" in response:
self._track_token_usage_internal(response["usage"])
usage = response.get("usage")
if usage:
self._track_token_usage_internal(usage)
stop_reason = response.get("stopReason")
if stop_reason:
@@ -761,6 +706,7 @@ class BedrockCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=messages,
usage=usage,
)
return result
except Exception as e:
@@ -783,6 +729,7 @@ class BedrockCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=messages,
usage=usage,
)
return non_structured_output_tool_uses
@@ -862,6 +809,7 @@ class BedrockCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=messages,
usage=usage,
)
return self._invoke_after_llm_call_hooks(
@@ -992,15 +940,16 @@ class BedrockCompletion(BaseLLM):
tool_use_id: str | None = None
tool_use_index = 0
accumulated_tool_input = ""
usage_data: dict[str, Any] | None = None
try:
response = self.client.converse_stream(
response = self._client.converse_stream(
modelId=self.model_id,
messages=cast(
"Sequence[MessageTypeDef | MessageOutputTypeDef]",
cast(object, messages),
),
**body, # type: ignore[arg-type]
**body,
)
stream = response.get("stream")
@@ -1101,6 +1050,7 @@ class BedrockCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=messages,
usage=usage_data,
)
return result # type: ignore[return-value]
except Exception as e:
@@ -1168,6 +1118,7 @@ class BedrockCompletion(BaseLLM):
metadata = event["metadata"]
if "usage" in metadata:
usage_metrics = metadata["usage"]
usage_data = usage_metrics
self._track_token_usage_internal(usage_metrics)
logging.debug(f"Token usage: {usage_metrics}")
if "trace" in metadata:
@@ -1197,6 +1148,7 @@ class BedrockCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=messages,
usage=usage_data,
)
return full_response
@@ -1308,8 +1260,9 @@ class BedrockCompletion(BaseLLM):
**body,
)
if "usage" in response:
self._track_token_usage_internal(response["usage"])
usage = response.get("usage")
if usage:
self._track_token_usage_internal(usage)
stop_reason = response.get("stopReason")
if stop_reason:
@@ -1348,6 +1301,7 @@ class BedrockCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=messages,
usage=usage,
)
return result
except Exception as e:
@@ -1370,6 +1324,7 @@ class BedrockCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=messages,
usage=usage,
)
return non_structured_output_tool_uses
@@ -1444,6 +1399,7 @@ class BedrockCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=messages,
usage=usage,
)
return text_content
@@ -1564,6 +1520,7 @@ class BedrockCompletion(BaseLLM):
tool_use_id: str | None = None
tool_use_index = 0
accumulated_tool_input = ""
usage_data: dict[str, Any] | None = None
try:
async_client = await self._ensure_async_client()
@@ -1675,6 +1632,7 @@ class BedrockCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=messages,
usage=usage_data,
)
return result # type: ignore[return-value]
except Exception as e:
@@ -1747,6 +1705,7 @@ class BedrockCompletion(BaseLLM):
metadata = event["metadata"]
if "usage" in metadata:
usage_metrics = metadata["usage"]
usage_data = usage_metrics
self._track_token_usage_internal(usage_metrics)
logging.debug(f"Token usage: {usage_metrics}")
if "trace" in metadata:
@@ -1776,6 +1735,7 @@ class BedrockCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=messages,
usage=usage_data,
)
return self._invoke_after_llm_call_hooks(

View File

@@ -5,12 +5,13 @@ import json
import logging
import os
import re
from typing import TYPE_CHECKING, Any, Literal, cast
from typing import Any, Literal, cast
from pydantic import BaseModel
from pydantic import BaseModel, Field, PrivateAttr, model_validator
from crewai.events.types.llm_events import LLMCallType
from crewai.llms.base_llm import BaseLLM, llm_call_context
from crewai.llms.hooks.base import BaseInterceptor
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
@@ -19,10 +20,6 @@ from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.types import LLMMessage
if TYPE_CHECKING:
from crewai.llms.hooks.base import BaseInterceptor
try:
from google import genai
from google.genai import types
@@ -44,137 +41,84 @@ class GeminiCompletion(BaseLLM):
offering native function calling, streaming support, and proper Gemini formatting.
"""
def __init__(
self,
model: str = "gemini-2.0-flash-001",
api_key: str | None = None,
project: str | None = None,
location: str | None = None,
temperature: float | None = None,
top_p: float | None = None,
top_k: int | None = None,
max_output_tokens: int | None = None,
stop_sequences: list[str] | None = None,
stream: bool = False,
safety_settings: dict[str, Any] | None = None,
client_params: dict[str, Any] | None = None,
interceptor: BaseInterceptor[Any, Any] | None = None,
use_vertexai: bool | None = None,
response_format: type[BaseModel] | None = None,
thinking_config: types.ThinkingConfig | None = None,
**kwargs: Any,
):
"""Initialize Google Gemini chat completion client.
model: str = "gemini-2.0-flash-001"
project: str | None = None
location: str | None = None
top_p: float | None = None
top_k: int | None = None
max_output_tokens: int | None = None
stream: bool = False
safety_settings: dict[str, Any] = Field(default_factory=dict)
client_params: dict[str, Any] = Field(default_factory=dict)
interceptor: BaseInterceptor[Any, Any] | None = None
use_vertexai: bool = False
response_format: type[BaseModel] | None = None
thinking_config: Any = None
tools: list[dict[str, Any]] | None = None
supports_tools: bool = False
is_gemini_2_0: bool = False
Args:
model: Gemini model name (e.g., 'gemini-2.0-flash-001', 'gemini-1.5-pro')
api_key: Google API key for Gemini API authentication.
Defaults to GOOGLE_API_KEY or GEMINI_API_KEY env var.
NOTE: Cannot be used with Vertex AI (project parameter). Use Gemini API instead.
project: Google Cloud project ID for Vertex AI with ADC authentication.
Requires Application Default Credentials (gcloud auth application-default login).
NOTE: Vertex AI does NOT support API keys, only OAuth2/ADC.
If both api_key and project are set, api_key takes precedence.
location: Google Cloud location (for Vertex AI with ADC, defaults to 'us-central1')
temperature: Sampling temperature (0-2)
top_p: Nucleus sampling parameter
top_k: Top-k sampling parameter
max_output_tokens: Maximum tokens in response
stop_sequences: Stop sequences
stream: Enable streaming responses
safety_settings: Safety filter settings
client_params: Additional parameters to pass to the Google Gen AI Client constructor.
Supports parameters like http_options, credentials, debug_config, etc.
interceptor: HTTP interceptor (not yet supported for Gemini).
use_vertexai: Whether to use Vertex AI instead of Gemini API.
- True: Use Vertex AI (with ADC or Express mode with API key)
- False: Use Gemini API (explicitly override env var)
- None (default): Check GOOGLE_GENAI_USE_VERTEXAI env var
When using Vertex AI with API key (Express mode), http_options with
api_version="v1" is automatically configured.
response_format: Pydantic model for structured output. Used as default when
response_model is not passed to call()/acall() methods.
thinking_config: ThinkingConfig for thinking models (gemini-2.5+, gemini-3+).
Controls thought output via include_thoughts, thinking_budget,
and thinking_level. When None, thinking models automatically
get include_thoughts=True so thought content is surfaced.
**kwargs: Additional parameters
"""
if interceptor is not None:
_client: Any = PrivateAttr(default=None)
@model_validator(mode="before")
@classmethod
def _normalize_gemini_fields(cls, data: Any) -> Any:
if not isinstance(data, dict):
return data
if data.get("interceptor") is not None:
raise NotImplementedError(
"HTTP interceptors are not yet supported for Google Gemini provider. "
"Interceptors are currently supported for OpenAI and Anthropic providers only."
)
super().__init__(
model=model, temperature=temperature, stop=stop_sequences or [], **kwargs
)
# Normalize stop_sequences from stop kwarg
popped = data.pop("stop_sequences", None)
seqs = popped if popped is not None else (data.get("stop") or [])
if isinstance(seqs, str):
seqs = [seqs]
data["stop"] = seqs
# Resolve env vars
data["api_key"] = (
data.get("api_key")
or os.getenv("GOOGLE_API_KEY")
or os.getenv("GEMINI_API_KEY")
)
data["project"] = data.get("project") or os.getenv("GOOGLE_CLOUD_PROJECT")
data["location"] = (
data.get("location") or os.getenv("GOOGLE_CLOUD_LOCATION") or "us-central1"
)
# Store client params for later use
self.client_params = client_params or {}
# Get API configuration with environment variable fallbacks
self.api_key = (
api_key or os.getenv("GOOGLE_API_KEY") or os.getenv("GEMINI_API_KEY")
)
self.project = project or os.getenv("GOOGLE_CLOUD_PROJECT")
self.location = location or os.getenv("GOOGLE_CLOUD_LOCATION") or "us-central1"
if use_vertexai is None:
use_vertexai = os.getenv("GOOGLE_GENAI_USE_VERTEXAI", "").lower() == "true"
self.client = self._initialize_client(use_vertexai)
# Store completion parameters
self.top_p = top_p
self.top_k = top_k
self.max_output_tokens = max_output_tokens
self.stream = stream
self.safety_settings = safety_settings or {}
self.stop_sequences = stop_sequences or []
self.tools: list[dict[str, Any]] | None = None
self.response_format = response_format
use_vx = data.get("use_vertexai")
if use_vx is None:
use_vx = os.getenv("GOOGLE_GENAI_USE_VERTEXAI", "").lower() == "true"
data["use_vertexai"] = use_vx
# Model-specific settings
model = data.get("model", "gemini-2.0-flash-001")
version_match = re.search(r"gemini-(\d+(?:\.\d+)?)", model.lower())
self.supports_tools = bool(
data["supports_tools"] = bool(
version_match and float(version_match.group(1)) >= 1.5
)
self.is_gemini_2_0 = bool(
data["is_gemini_2_0"] = bool(
version_match and float(version_match.group(1)) >= 2.0
)
self.thinking_config = thinking_config
# Auto-enable thinking for gemini-2.5+
if (
self.thinking_config is None
data.get("thinking_config") is None
and version_match
and float(version_match.group(1)) >= 2.5
):
self.thinking_config = types.ThinkingConfig(include_thoughts=True)
data["thinking_config"] = types.ThinkingConfig(include_thoughts=True)
@property
def stop(self) -> list[str]:
"""Get stop sequences sent to the API."""
return self.stop_sequences
return data
@stop.setter
def stop(self, value: list[str] | str | None) -> None:
"""Set stop sequences.
Synchronizes stop_sequences to ensure values set by CrewAgentExecutor
are properly sent to the Gemini API.
Args:
value: Stop sequences as a list, single string, or None
"""
if value is None:
self.stop_sequences = []
elif isinstance(value, str):
self.stop_sequences = [value]
elif isinstance(value, list):
self.stop_sequences = value
else:
self.stop_sequences = []
@model_validator(mode="after")
def _init_client(self) -> GeminiCompletion:
self._client = self._initialize_client(self.use_vertexai)
return self
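The two auth paths the docstring describes are easiest to see side by side. A hedged construction sketch, assuming GeminiCompletion is imported from its provider module and google-genai is installed (the project ID and key are placeholders):

    # Gemini API path: explicit key, or GOOGLE_API_KEY / GEMINI_API_KEY from the env
    api_llm = GeminiCompletion(model="gemini-2.0-flash-001", api_key="AIza-placeholder")

    # Vertex AI path: no API key; relies on ADC (gcloud auth application-default login)
    vertex_llm = GeminiCompletion(
        model="gemini-2.5-pro",
        project="my-gcp-project",
        location="us-central1",
        use_vertexai=True,
    )
    print(vertex_llm.supports_tools)   # True: the version regex matched >= 1.5
    print(vertex_llm.thinking_config)  # auto-populated for 2.5+ (include_thoughts=True)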
def to_config_dict(self) -> dict[str, Any]:
"""Extend base config with Gemini/Vertex-specific fields."""
@@ -283,8 +227,8 @@ class GeminiCompletion(BaseLLM):
if (
hasattr(self, "client")
and hasattr(self.client, "vertexai")
and self.client.vertexai
and hasattr(self._client, "vertexai")
and self._client.vertexai
):
# Vertex AI configuration
params.update(
@@ -721,6 +665,7 @@ class GeminiCompletion(BaseLLM):
messages_for_event: list[LLMMessage],
from_task: Any | None = None,
from_agent: Any | None = None,
usage: dict[str, Any] | None = None,
) -> BaseModel:
"""Validate content against response model and emit completion event.
@@ -746,6 +691,7 @@ class GeminiCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=messages_for_event,
usage=usage,
)
return structured_data
@@ -761,6 +707,7 @@ class GeminiCompletion(BaseLLM):
response_model: type[BaseModel] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
usage: dict[str, Any] | None = None,
) -> str | BaseModel:
"""Finalize completion response with validation and event emission.
@@ -784,6 +731,7 @@ class GeminiCompletion(BaseLLM):
messages_for_event=messages_for_event,
from_task=from_task,
from_agent=from_agent,
usage=usage,
)
self._emit_call_completed_event(
@@ -792,6 +740,7 @@ class GeminiCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=messages_for_event,
usage=usage,
)
return self._invoke_after_llm_call_hooks(
@@ -805,6 +754,7 @@ class GeminiCompletion(BaseLLM):
contents: list[types.Content],
from_task: Any | None = None,
from_agent: Any | None = None,
usage: dict[str, Any] | None = None,
) -> BaseModel:
"""Validate and emit event for structured_output tool call.
@@ -829,6 +779,7 @@ class GeminiCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=self._convert_contents_to_dict(contents),
usage=usage,
)
return validated_data
except Exception as e:
@@ -847,6 +798,7 @@ class GeminiCompletion(BaseLLM):
from_task: Any | None = None,
from_agent: Any | None = None,
response_model: type[BaseModel] | None = None,
usage: dict[str, Any] | None = None,
) -> str | Any:
"""Process response, execute function calls, and finalize completion.
@@ -887,6 +839,7 @@ class GeminiCompletion(BaseLLM):
contents=contents,
from_task=from_task,
from_agent=from_agent,
usage=usage,
)
# Filter out structured_output from function calls returned to executor
@@ -908,6 +861,7 @@ class GeminiCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=self._convert_contents_to_dict(contents),
usage=usage,
)
return non_structured_output_parts
@@ -949,6 +903,7 @@ class GeminiCompletion(BaseLLM):
response_model=effective_response_model,
from_task=from_task,
from_agent=from_agent,
usage=usage,
)
def _process_stream_chunk(
@@ -956,10 +911,10 @@ class GeminiCompletion(BaseLLM):
chunk: GenerateContentResponse,
full_response: str,
function_calls: dict[int, dict[str, Any]],
usage_data: dict[str, int],
usage_data: dict[str, int] | None,
from_task: Any | None = None,
from_agent: Any | None = None,
) -> tuple[str, dict[int, dict[str, Any]], dict[str, int]]:
) -> tuple[str, dict[int, dict[str, Any]], dict[str, int] | None]:
"""Process a single streaming chunk.
Args:
@@ -1035,7 +990,7 @@ class GeminiCompletion(BaseLLM):
self,
full_response: str,
function_calls: dict[int, dict[str, Any]],
usage_data: dict[str, int],
usage_data: dict[str, int] | None,
contents: list[types.Content],
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
@@ -1047,7 +1002,7 @@ class GeminiCompletion(BaseLLM):
Args:
full_response: The complete streamed response content
function_calls: Dictionary of function calls accumulated during streaming
usage_data: Token usage data from the stream
usage_data: Token usage data from the stream, or None if unavailable
contents: Original contents for event conversion
available_functions: Available functions for function calling
from_task: Task that initiated the call
@@ -1057,7 +1012,8 @@ class GeminiCompletion(BaseLLM):
Returns:
Final response content after processing
"""
self._track_token_usage_internal(usage_data)
if usage_data:
self._track_token_usage_internal(usage_data)
if response_model and function_calls:
for call_data in function_calls.values():
@@ -1069,6 +1025,7 @@ class GeminiCompletion(BaseLLM):
contents=contents,
from_task=from_task,
from_agent=from_agent,
usage=usage_data,
)
non_structured_output_calls = {
@@ -1097,6 +1054,7 @@ class GeminiCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=self._convert_contents_to_dict(contents),
usage=usage_data,
)
return formatted_function_calls
@@ -1137,6 +1095,7 @@ class GeminiCompletion(BaseLLM):
response_model=effective_response_model,
from_task=from_task,
from_agent=from_agent,
usage=usage_data,
)
def _handle_completion(
@@ -1152,7 +1111,7 @@ class GeminiCompletion(BaseLLM):
try:
# The API accepts list[Content] but mypy is overly strict about variance
contents_for_api: Any = contents
response = self.client.models.generate_content(
response = self._client.models.generate_content(
model=self.model,
contents=contents_for_api,
config=config,
@@ -1174,6 +1133,7 @@ class GeminiCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
response_model=response_model,
usage=usage,
)
def _handle_streaming_completion(
@@ -1188,11 +1148,11 @@ class GeminiCompletion(BaseLLM):
"""Handle streaming content generation."""
full_response = ""
function_calls: dict[int, dict[str, Any]] = {}
usage_data = {"total_tokens": 0}
usage_data: dict[str, int] | None = None
# The API accepts list[Content] but mypy is overly strict about variance
contents_for_api: Any = contents
for chunk in self.client.models.generate_content_stream(
for chunk in self._client.models.generate_content_stream(
model=self.model,
contents=contents_for_api,
config=config,
@@ -1230,7 +1190,7 @@ class GeminiCompletion(BaseLLM):
try:
# The API accepts list[Content] but mypy is overly strict about variance
contents_for_api: Any = contents
response = await self.client.aio.models.generate_content(
response = await self._client.aio.models.generate_content(
model=self.model,
contents=contents_for_api,
config=config,
@@ -1252,6 +1212,7 @@ class GeminiCompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
response_model=response_model,
usage=usage,
)
async def _ahandle_streaming_completion(
@@ -1266,11 +1227,11 @@ class GeminiCompletion(BaseLLM):
"""Handle async streaming content generation."""
full_response = ""
function_calls: dict[int, dict[str, Any]] = {}
usage_data = {"total_tokens": 0}
usage_data: dict[str, int] | None = None
# The API accepts list[Content] but mypy is overly strict about variance
contents_for_api: Any = contents
stream = await self.client.aio.models.generate_content_stream(
stream = await self._client.aio.models.generate_content_stream(
model=self.model,
contents=contents_for_api,
config=config,
@@ -1474,6 +1435,6 @@ class GeminiCompletion(BaseLLM):
try:
from crewai_files.uploaders.gemini import GeminiFileUploader
return GeminiFileUploader(client=self.client)
return GeminiFileUploader(client=self._client)
except ImportError:
return None

View File

@@ -14,10 +14,11 @@ from openai.types.chat import ChatCompletion, ChatCompletionChunk
from openai.types.chat.chat_completion import Choice
from openai.types.chat.chat_completion_chunk import ChoiceDelta
from openai.types.responses import Response
from pydantic import BaseModel
from pydantic import BaseModel, PrivateAttr, model_validator
from crewai.events.types.llm_events import LLMCallType
from crewai.llms.base_llm import BaseLLM, llm_call_context
from crewai.llms.base_llm import BaseLLM, JsonResponseFormat, llm_call_context
from crewai.llms.hooks.base import BaseInterceptor
from crewai.llms.hooks.transport import AsyncHTTPTransport, HTTPTransport
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
@@ -29,7 +30,6 @@ from crewai.utilities.types import LLMMessage
if TYPE_CHECKING:
from crewai.agent.core import Agent
from crewai.llms.hooks.base import BaseInterceptor
from crewai.task import Task
from crewai.tools.base_tool import BaseTool
@@ -183,77 +183,69 @@ class OpenAICompletion(BaseLLM):
"computer_use": "computer_use_preview",
}
def __init__(
self,
model: str = "gpt-4o",
api_key: str | None = None,
base_url: str | None = None,
organization: str | None = None,
project: str | None = None,
timeout: float | None = None,
max_retries: int = 2,
default_headers: dict[str, str] | None = None,
default_query: dict[str, Any] | None = None,
client_params: dict[str, Any] | None = None,
temperature: float | None = None,
top_p: float | None = None,
frequency_penalty: float | None = None,
presence_penalty: float | None = None,
max_tokens: int | None = None,
max_completion_tokens: int | None = None,
seed: int | None = None,
stream: bool = False,
response_format: dict[str, Any] | type[BaseModel] | None = None,
logprobs: bool | None = None,
top_logprobs: int | None = None,
reasoning_effort: str | None = None,
provider: str | None = None,
interceptor: BaseInterceptor[httpx.Request, httpx.Response] | None = None,
api: Literal["completions", "responses"] = "completions",
instructions: str | None = None,
store: bool | None = None,
previous_response_id: str | None = None,
include: list[str] | None = None,
builtin_tools: list[str] | None = None,
parse_tool_outputs: bool = False,
auto_chain: bool = False,
auto_chain_reasoning: bool = False,
**kwargs: Any,
) -> None:
"""Initialize OpenAI completion client."""
model: str = "gpt-4o"
organization: str | None = None
project: str | None = None
timeout: float | None = None
max_retries: int = 2
default_headers: dict[str, str] | None = None
default_query: dict[str, Any] | None = None
client_params: dict[str, Any] | None = None
top_p: float | None = None
frequency_penalty: float | None = None
presence_penalty: float | None = None
max_tokens: int | None = None
max_completion_tokens: int | None = None
seed: int | None = None
stream: bool = False
response_format: JsonResponseFormat | type[BaseModel] | None = None
logprobs: bool | None = None
top_logprobs: int | None = None
reasoning_effort: str | None = None
interceptor: BaseInterceptor[httpx.Request, httpx.Response] | None = None
api: Literal["completions", "responses"] = "completions"
instructions: str | None = None
store: bool | None = None
previous_response_id: str | None = None
include: list[str] | None = None
builtin_tools: list[str] | None = None
parse_tool_outputs: bool = False
auto_chain: bool = False
auto_chain_reasoning: bool = False
api_base: str | None = None
is_o1_model: bool = False
is_gpt4_model: bool = False
_client: Any = PrivateAttr(default=None)
_async_client: Any = PrivateAttr(default=None)
_last_response_id: str | None = PrivateAttr(default=None)
_last_reasoning_items: list[Any] | None = PrivateAttr(default=None)
if provider is None:
provider = kwargs.pop("provider", "openai")
self.interceptor = interceptor
# Client configuration attributes
self.organization = organization
self.project = project
self.max_retries = max_retries
self.default_headers = default_headers
self.default_query = default_query
self.client_params = client_params
self.timeout = timeout
self.base_url = base_url
self.api_base = kwargs.pop("api_base", None)
super().__init__(
model=model,
temperature=temperature,
api_key=api_key or os.getenv("OPENAI_API_KEY"),
base_url=base_url,
timeout=timeout,
provider=provider,
**kwargs,
)
@model_validator(mode="before")
@classmethod
def _normalize_openai_fields(cls, data: Any) -> Any:
if not isinstance(data, dict):
return data
if not data.get("provider"):
data["provider"] = "openai"
data["api_key"] = data.get("api_key") or os.getenv("OPENAI_API_KEY")
# Extract api_base from kwargs if present
if "api_base" not in data:
data["api_base"] = None
model = data.get("model", "gpt-4o")
data["is_o1_model"] = "o1" in model.lower()
data["is_gpt4_model"] = "gpt-4" in model.lower()
return data
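The before-validator mirrors the Bedrock and Gemini ones, so the derived flags are queryable right after construction. A short sketch, assuming OPENAI_API_KEY is set so key resolution succeeds:

    llm = OpenAICompletion(model="o1-mini")
    print(llm.is_o1_model)    # True, from the "o1" substring check
    print(llm.is_gpt4_model)  # False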
@model_validator(mode="after")
def _init_clients(self) -> OpenAICompletion:
client_config = self._get_client_params()
if self.interceptor:
transport = HTTPTransport(interceptor=self.interceptor)
http_client = httpx.Client(transport=transport)
client_config["http_client"] = http_client
self.client = OpenAI(**client_config)
self._client = OpenAI(**client_config)
async_client_config = self._get_client_params()
if self.interceptor:
@@ -261,35 +253,8 @@ class OpenAICompletion(BaseLLM):
async_http_client = httpx.AsyncClient(transport=async_transport)
async_client_config["http_client"] = async_http_client
self.async_client = AsyncOpenAI(**async_client_config)
# Completion parameters
self.top_p = top_p
self.frequency_penalty = frequency_penalty
self.presence_penalty = presence_penalty
self.max_tokens = max_tokens
self.max_completion_tokens = max_completion_tokens
self.seed = seed
self.stream = stream
self.response_format = response_format
self.logprobs = logprobs
self.top_logprobs = top_logprobs
self.reasoning_effort = reasoning_effort
self.is_o1_model = "o1" in model.lower()
self.is_gpt4_model = "gpt-4" in model.lower()
# API selection and Responses API parameters
self.api = api
self.instructions = instructions
self.store = store
self.previous_response_id = previous_response_id
self.include = include
self.builtin_tools = builtin_tools
self.parse_tool_outputs = parse_tool_outputs
self.auto_chain = auto_chain
self.auto_chain_reasoning = auto_chain_reasoning
self._last_response_id: str | None = None
self._last_reasoning_items: list[Any] | None = None
self._async_client = AsyncOpenAI(**async_client_config)
return self
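_init_clients threads one interceptor through both the sync and async httpx clients. Stripped of the crewai types, the underlying httpx pattern looks roughly like this (PrintingTransport is a made-up name for illustration):

    import httpx

    class PrintingTransport(httpx.BaseTransport):
        """Wraps the default transport so every request can be inspected."""

        def __init__(self) -> None:
            self._inner = httpx.HTTPTransport()

        def handle_request(self, request: httpx.Request) -> httpx.Response:
            print(request.method, request.url)  # interceptor-style hook point
            return self._inner.handle_request(request)

    client = httpx.Client(transport=PrintingTransport())
    # Passing http_client=client to the OpenAI constructor routes all traffic
    # through the wrapper, analogous to what HTTPTransport does above.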
@property
def last_response_id(self) -> str | None:
@@ -818,7 +783,7 @@ class OpenAICompletion(BaseLLM):
) -> str | ResponsesAPIResult | Any:
"""Handle non-streaming Responses API call."""
try:
response: Response = self.client.responses.create(**params)
response: Response = self._client.responses.create(**params)
# Track response ID for auto-chaining
if self.auto_chain and response.id:
@@ -844,6 +809,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params.get("input", []),
usage=usage,
)
return parsed_result
@@ -856,6 +822,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params.get("input", []),
usage=usage,
)
return function_calls
@@ -893,6 +860,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params.get("input", []),
usage=usage,
)
return structured_result
except ValueError as e:
@@ -906,6 +874,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params.get("input", []),
usage=usage,
)
content = self._invoke_after_llm_call_hooks(
@@ -950,7 +919,7 @@ class OpenAICompletion(BaseLLM):
) -> str | ResponsesAPIResult | Any:
"""Handle async non-streaming Responses API call."""
try:
response: Response = await self.async_client.responses.create(**params)
response: Response = await self._async_client.responses.create(**params)
# Track response ID for auto-chaining
if self.auto_chain and response.id:
@@ -976,6 +945,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params.get("input", []),
usage=usage,
)
return parsed_result
@@ -988,6 +958,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params.get("input", []),
usage=usage,
)
return function_calls
@@ -1025,6 +996,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params.get("input", []),
usage=usage,
)
return structured_result
except ValueError as e:
@@ -1038,6 +1010,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params.get("input", []),
usage=usage,
)
except NotFoundError as e:
@@ -1080,8 +1053,9 @@ class OpenAICompletion(BaseLLM):
full_response = ""
function_calls: list[dict[str, Any]] = []
final_response: Response | None = None
usage: dict[str, Any] | None = None
stream = self.client.responses.create(**params)
stream = self._client.responses.create(**params)
response_id_stream = None
for event in stream:
@@ -1137,6 +1111,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params.get("input", []),
usage=usage,
)
return parsed_result
@@ -1173,6 +1148,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params.get("input", []),
usage=usage,
)
return structured_result
except ValueError as e:
@@ -1186,6 +1162,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params.get("input", []),
usage=usage,
)
return self._invoke_after_llm_call_hooks(
@@ -1204,8 +1181,9 @@ class OpenAICompletion(BaseLLM):
full_response = ""
function_calls: list[dict[str, Any]] = []
final_response: Response | None = None
usage: dict[str, Any] | None = None
stream = await self.async_client.responses.create(**params)
stream = await self._async_client.responses.create(**params)
response_id_stream = None
async for event in stream:
@@ -1261,6 +1239,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params.get("input", []),
usage=usage,
)
return parsed_result
@@ -1297,6 +1276,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params.get("input", []),
usage=usage,
)
return structured_result
except ValueError as e:
@@ -1310,6 +1290,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params.get("input", []),
usage=usage,
)
return full_response
@@ -1595,7 +1576,7 @@ class OpenAICompletion(BaseLLM):
parse_params = {
k: v for k, v in params.items() if k != "response_format"
}
parsed_response = self.client.beta.chat.completions.parse(
parsed_response = self._client.beta.chat.completions.parse(
**parse_params,
response_format=response_model,
)
@@ -1615,10 +1596,11 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return parsed_object
response: ChatCompletion = self.client.chat.completions.create(**params)
response: ChatCompletion = self._client.chat.completions.create(**params)
usage = self._extract_openai_token_usage(response)
@@ -1636,6 +1618,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return list(message.tool_calls)
@@ -1674,6 +1657,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return structured_result
except ValueError as e:
@@ -1687,6 +1671,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
if usage.get("total_tokens", 0) > 0:
@@ -1728,7 +1713,7 @@ class OpenAICompletion(BaseLLM):
self,
full_response: str,
tool_calls: dict[int, dict[str, Any]],
usage_data: dict[str, int],
usage_data: dict[str, Any] | None,
params: dict[str, Any],
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
@@ -1739,7 +1724,7 @@ class OpenAICompletion(BaseLLM):
Args:
full_response: The accumulated text response from the stream.
tool_calls: Accumulated tool calls from the stream, keyed by index.
usage_data: Token usage data from the stream.
usage_data: Token usage data from the stream, or None if unavailable.
params: The completion parameters containing messages.
available_functions: Available functions for tool calling.
from_task: Task that initiated the call.
@@ -1750,7 +1735,8 @@ class OpenAICompletion(BaseLLM):
tool execution result when available_functions is provided,
or the text response string.
"""
self._track_token_usage_internal(usage_data)
if usage_data:
self._track_token_usage_internal(usage_data)
if tool_calls and not available_functions:
tool_calls_list = [
@@ -1771,6 +1757,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage_data,
)
return tool_calls_list
@@ -1813,6 +1800,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage_data,
)
return full_response
@@ -1837,7 +1825,7 @@ class OpenAICompletion(BaseLLM):
}
stream: ChatCompletionStream[BaseModel]
with self.client.beta.chat.completions.stream(
with self._client.beta.chat.completions.stream(
**parse_params, response_format=response_model
) as stream:
for chunk in stream:
@@ -1866,6 +1854,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return parsed_result
@@ -1873,10 +1862,10 @@ class OpenAICompletion(BaseLLM):
return ""
completion_stream: Stream[ChatCompletionChunk] = (
self.client.chat.completions.create(**params)
self._client.chat.completions.create(**params)
)
usage_data = {"total_tokens": 0}
usage_data: dict[str, Any] | None = None
for completion_chunk in completion_stream:
response_id_stream = (
@@ -1970,7 +1959,7 @@ class OpenAICompletion(BaseLLM):
parse_params = {
k: v for k, v in params.items() if k != "response_format"
}
parsed_response = await self.async_client.beta.chat.completions.parse(
parsed_response = await self._async_client.beta.chat.completions.parse(
**parse_params,
response_format=response_model,
)
@@ -1990,10 +1979,11 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return parsed_object
response: ChatCompletion = await self.async_client.chat.completions.create(
response: ChatCompletion = await self._async_client.chat.completions.create(
**params
)
@@ -2013,6 +2003,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return list(message.tool_calls)
@@ -2051,6 +2042,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
return structured_result
except ValueError as e:
@@ -2064,6 +2056,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage,
)
if usage.get("total_tokens", 0) > 0:
@@ -2111,10 +2104,10 @@ class OpenAICompletion(BaseLLM):
if response_model:
completion_stream: AsyncIterator[
ChatCompletionChunk
] = await self.async_client.chat.completions.create(**params)
] = await self._async_client.chat.completions.create(**params)
accumulated_content = ""
usage_data = {"total_tokens": 0}
usage_data: dict[str, Any] | None = None
async for chunk in completion_stream:
response_id_stream = chunk.id if hasattr(chunk, "id") else None
@@ -2137,7 +2130,8 @@ class OpenAICompletion(BaseLLM):
response_id=response_id_stream,
)
self._track_token_usage_internal(usage_data)
if usage_data:
self._track_token_usage_internal(usage_data)
try:
parsed_object = response_model.model_validate_json(accumulated_content)
@@ -2148,6 +2142,7 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage_data,
)
return parsed_object
@@ -2159,14 +2154,15 @@ class OpenAICompletion(BaseLLM):
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
usage=usage_data,
)
return accumulated_content
stream: AsyncIterator[
ChatCompletionChunk
] = await self.async_client.chat.completions.create(**params)
] = await self._async_client.chat.completions.create(**params)
usage_data = {"total_tokens": 0}
usage_data = None
async for chunk in stream:
response_id_stream = chunk.id if hasattr(chunk, "id") else None
@@ -2245,6 +2241,9 @@ class OpenAICompletion(BaseLLM):
def supports_stop_words(self) -> bool:
"""Check if the model supports stop words."""
model_lower = self.model.lower() if self.model else ""
if "gpt-5" in model_lower:
return False
return not self.is_o1_model
def get_context_window_size(self) -> int:
@@ -2353,8 +2352,8 @@ class OpenAICompletion(BaseLLM):
from crewai_files.uploaders.openai import OpenAIFileUploader
return OpenAIFileUploader(
client=self.client,
async_client=self.async_client,
client=self._client,
async_client=self._async_client,
)
except ImportError:
return None

View File

@@ -16,6 +16,8 @@ from dataclasses import dataclass, field
import os
from typing import Any
from pydantic import model_validator
from crewai.llms.providers.openai.completion import OpenAICompletion
@@ -140,31 +142,13 @@ class OpenAICompatibleCompletion(OpenAICompletion):
)
"""
def __init__(
self,
model: str,
provider: str,
api_key: str | None = None,
base_url: str | None = None,
default_headers: dict[str, str] | None = None,
**kwargs: Any,
) -> None:
"""Initialize OpenAI-compatible completion client.
@model_validator(mode="before")
@classmethod
def _resolve_provider_config(cls, data: Any) -> Any:
if not isinstance(data, dict):
return data
Args:
model: The model identifier.
provider: The provider name (must be in OPENAI_COMPATIBLE_PROVIDERS).
api_key: Optional API key override. If not provided, uses the
provider's configured environment variable.
base_url: Optional base URL override. If not provided, uses the
provider's configured default or environment variable.
default_headers: Optional headers to merge with provider defaults.
**kwargs: Additional arguments passed to OpenAICompletion.
Raises:
ValueError: If the provider is not supported or required API key
is missing.
"""
provider = data.get("provider", "")
config = OPENAI_COMPATIBLE_PROVIDERS.get(provider)
if config is None:
supported = ", ".join(sorted(OPENAI_COMPATIBLE_PROVIDERS.keys()))
@@ -173,21 +157,15 @@ class OpenAICompatibleCompletion(OpenAICompletion):
f"Supported providers: {supported}"
)
resolved_api_key = self._resolve_api_key(api_key, config, provider)
resolved_base_url = self._resolve_base_url(base_url, config, provider)
resolved_headers = self._resolve_headers(default_headers, config)
super().__init__(
model=model,
provider=provider,
api_key=resolved_api_key,
base_url=resolved_base_url,
default_headers=resolved_headers,
**kwargs,
)
data["api_key"] = cls._resolve_api_key(data.get("api_key"), config, provider)
data["base_url"] = cls._resolve_base_url(data.get("base_url"), config, provider)
data["default_headers"] = cls._resolve_headers(
data.get("default_headers"), config
)
return data
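Callers are unaffected by the move into a before-validator; construction still resolves the key, base URL, and headers per provider. A hedged example, assuming "groq" is among the OPENAI_COMPATIBLE_PROVIDERS keys and its API-key env var is set:

    llm = OpenAICompatibleCompletion(model="llama-3.1-70b-versatile", provider="groq")

    # An unknown provider fails fast in the validator:
    OpenAICompatibleCompletion(model="x", provider="not-a-provider")
    # ValueError: ... Supported providers: <sorted list of names>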
@staticmethod
def _resolve_api_key(
self,
api_key: str | None,
config: ProviderConfig,
provider: str,
@@ -220,8 +198,8 @@ class OpenAICompatibleCompletion(OpenAICompletion):
return config.default_api_key
@staticmethod
def _resolve_base_url(
self,
base_url: str | None,
config: ProviderConfig,
provider: str,
@@ -249,8 +227,8 @@ class OpenAICompatibleCompletion(OpenAICompletion):
return resolved
@staticmethod
def _resolve_headers(
self,
headers: dict[str, str] | None,
config: ProviderConfig,
) -> dict[str, str] | None:

View File

@@ -1 +0,0 @@
"""Third-party LLM implementations for crewAI."""

View File

@@ -98,7 +98,7 @@ class EncodingFlow(Flow[EncodingState]):
_skip_auto_memory: bool = True
initial_state = EncodingState
initial_state: type[EncodingState] = EncodingState
def __init__(
self,

View File

@@ -65,7 +65,7 @@ class RecallFlow(Flow[RecallState]):
_skip_auto_memory: bool = True
initial_state = RecallState
initial_state: type[RecallState] = RecallState
def __init__(
self,

View File

@@ -148,6 +148,36 @@ class Memory(BaseModel):
_pending_saves: list[Future[Any]] = PrivateAttr(default_factory=list)
_pending_lock: threading.Lock = PrivateAttr(default_factory=threading.Lock)
def __deepcopy__(self, memo: dict[int, Any] | None = None) -> Memory:
"""Deepcopy that handles unpickleable private attrs (ThreadPoolExecutor, Lock)."""
import copy as _copy
cls = type(self)
new = cls.__new__(cls)
if memo is None:
memo = {}
memo[id(self)] = new
object.__setattr__(new, "__dict__", _copy.deepcopy(self.__dict__, memo))
object.__setattr__(
new, "__pydantic_fields_set__", _copy.copy(self.__pydantic_fields_set__)
)
object.__setattr__(
new, "__pydantic_extra__", _copy.deepcopy(self.__pydantic_extra__, memo)
)
# Private attrs: create fresh pool/lock instead of deepcopying
private = {}
for k, v in (self.__pydantic_private__ or {}).items():
if isinstance(v, (ThreadPoolExecutor, threading.Lock)):
attr = self.__private_attributes__[k]
private[k] = attr.get_default()
else:
try:
private[k] = _copy.deepcopy(v, memo)
except Exception:
private[k] = v
object.__setattr__(new, "__pydantic_private__", private)
return new
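The override exists because the default deepcopy chokes on thread primitives. A standalone demonstration of the failure it avoids:

    import copy
    import threading

    try:
        copy.deepcopy(threading.Lock())
    except TypeError as exc:
        print(exc)  # cannot pickle '_thread.lock' object

    # Memory.__deepcopy__ sidesteps this by rebuilding such private attrs
    # from their declared defaults instead of copying them.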
def model_post_init(self, __context: Any) -> None:
"""Initialize runtime state from field values."""
self._config = MemoryConfig(

View File

@@ -3,7 +3,7 @@ from __future__ import annotations
from collections import defaultdict
from typing import TYPE_CHECKING, Any
from pydantic import BaseModel, Field, InstanceOf
from pydantic import BaseModel, Field
from rich.box import HEAVY_EDGE
from rich.console import Console
from rich.table import Table
@@ -39,9 +39,9 @@ class CrewEvaluator:
def __init__(
self,
crew: Crew,
eval_llm: InstanceOf[BaseLLM] | str | None = None,
eval_llm: BaseLLM | str | None = None,
openai_model_name: str | None = None,
llm: InstanceOf[BaseLLM] | str | None = None,
llm: BaseLLM | str | None = None,
) -> None:
self.crew = crew
self.llm = eval_llm

View File

@@ -2,9 +2,10 @@
from __future__ import annotations
from typing import Annotated, Any, Literal, TypedDict
from typing import Annotated, Any, Literal
from pydantic import BaseModel, Field
from typing_extensions import TypedDict
from crewai.utilities.i18n import I18N, get_i18n

View File

@@ -1692,9 +1692,27 @@ def test_agent_with_knowledge_sources_works_with_copy():
) as mock_knowledge_storage:
from crewai.knowledge.storage.base_knowledge_storage import BaseKnowledgeStorage
mock_knowledge_storage_instance = mock_knowledge_storage.return_value
mock_knowledge_storage_instance.__class__ = BaseKnowledgeStorage
agent.knowledge_storage = mock_knowledge_storage_instance
class _StubStorage(BaseKnowledgeStorage):
def search(self, query, limit=5, metadata_filter=None, score_threshold=0.6):
return []
async def asearch(self, query, limit=5, metadata_filter=None, score_threshold=0.6):
return []
def save(self, documents):
pass
async def asave(self, documents):
pass
def reset(self):
pass
async def areset(self):
pass
mock_knowledge_storage.return_value = _StubStorage()
agent.knowledge_storage = _StubStorage()
agent_copy = agent.copy()

View File

@@ -4,13 +4,55 @@ Tests the Flow-based agent executor implementation including state management,
flow methods, routing logic, and error handling.
"""
from __future__ import annotations
import asyncio
import time
from typing import Any
from unittest.mock import AsyncMock, Mock, patch
import pytest
from crewai.agents.tools_handler import ToolsHandler as _ToolsHandler
from crewai.agents.step_executor import StepExecutor
def _build_executor(**kwargs: Any) -> AgentExecutor:
"""Create an AgentExecutor without validation — for unit tests.
Uses model_construct to skip Pydantic validators so plain Mock()
objects are accepted for typed fields like llm, agent, crew, task.
"""
executor = AgentExecutor.model_construct(**kwargs)
executor._state = AgentExecutorState()
executor._methods = {}
executor._method_outputs = []
executor._completed_methods = set()
executor._fired_or_listeners = set()
executor._pending_and_listeners = {}
executor._method_execution_counts = {}
executor._method_call_counts = {}
executor._event_futures = []
executor._human_feedback_method_outputs = {}
executor._input_history = []
executor._is_execution_resuming = False
import threading
executor._state_lock = threading.Lock()
executor._or_listeners_lock = threading.Lock()
executor._execution_lock = threading.Lock()
executor._finalize_lock = threading.Lock()
executor._finalize_called = False
executor._is_executing = False
executor._has_been_invoked = False
executor._last_parser_error = None
executor._last_context_error = None
executor._step_executor = None
executor._planner_observer = None
from crewai.utilities.printer import Printer
executor._printer = Printer()
from crewai.utilities.i18n import get_i18n
executor._i18n = kwargs.get("i18n") or get_i18n()
return executor
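model_construct is standard Pydantic v2 behavior: it builds an instance without running validators, which is what lets plain Mock objects through. A self-contained illustration:

    from pydantic import BaseModel

    class Settings(BaseModel):
        retries: int

    s = Settings.model_construct(retries="not-an-int")  # no ValidationError
    print(s.retries)  # "not-an-int", stored as-is with no coercion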
from crewai.agents.planner_observer import PlannerObserver
from crewai.experimental.agent_executor import (
AgentExecutorState,
@@ -75,6 +117,7 @@ class TestAgentExecutor:
"""Create mock dependencies for executor."""
llm = Mock()
llm.supports_stop_words.return_value = True
llm.stop = []
task = Mock()
task.description = "Test task"
@@ -94,7 +137,7 @@ class TestAgentExecutor:
prompt = {"prompt": "Test prompt with {input}, {tool_names}, {tools}"}
tools = []
tools_handler = Mock()
tools_handler = Mock(spec=_ToolsHandler)
return {
"llm": llm,
@@ -112,7 +155,7 @@ class TestAgentExecutor:
def test_executor_initialization(self, mock_dependencies):
"""Test AgentExecutor initialization."""
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
assert executor.llm == mock_dependencies["llm"]
assert executor.task == mock_dependencies["task"]
@@ -126,7 +169,7 @@ class TestAgentExecutor:
with patch.object(
AgentExecutor, "_show_start_logs"
) as mock_show_start:
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
result = executor.initialize_reasoning()
assert result == "initialized"
@@ -134,7 +177,7 @@ class TestAgentExecutor:
def test_check_max_iterations_not_reached(self, mock_dependencies):
"""Test routing when iterations < max."""
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
executor.state.iterations = 5
result = executor.check_max_iterations()
@@ -142,7 +185,7 @@ class TestAgentExecutor:
def test_check_max_iterations_reached(self, mock_dependencies):
"""Test routing when iterations >= max."""
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
executor.state.iterations = 10
result = executor.check_max_iterations()
@@ -150,7 +193,7 @@ class TestAgentExecutor:
def test_route_by_answer_type_action(self, mock_dependencies):
"""Test routing for AgentAction."""
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
executor.state.current_answer = AgentAction(
thought="thinking", tool="search", tool_input="query", text="action text"
)
@@ -160,7 +203,7 @@ class TestAgentExecutor:
def test_route_by_answer_type_finish(self, mock_dependencies):
"""Test routing for AgentFinish."""
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
executor.state.current_answer = AgentFinish(
thought="final thoughts", output="Final answer", text="complete"
)
@@ -170,7 +213,7 @@ class TestAgentExecutor:
def test_continue_iteration(self, mock_dependencies):
"""Test iteration continuation."""
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
result = executor.continue_iteration()
@@ -179,7 +222,7 @@ class TestAgentExecutor:
def test_finalize_success(self, mock_dependencies):
"""Test finalize with valid AgentFinish."""
with patch.object(AgentExecutor, "_show_logs") as mock_show_logs:
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
executor.state.current_answer = AgentFinish(
thought="final thinking", output="Done", text="complete"
)
@@ -192,7 +235,7 @@ class TestAgentExecutor:
def test_finalize_failure(self, mock_dependencies):
"""Test finalize skips when given AgentAction instead of AgentFinish."""
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
executor.state.current_answer = AgentAction(
thought="thinking", tool="search", tool_input="query", text="action text"
)
@@ -208,7 +251,7 @@ class TestAgentExecutor:
):
"""Finalize should skip synthesis when last todo is already a complete answer."""
with patch.object(AgentExecutor, "_show_logs") as mock_show_logs:
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
executor.state.todos.items = [
TodoItem(
step_number=1,
@@ -252,7 +295,7 @@ class TestAgentExecutor:
):
"""Finalize should still synthesize when response_model is configured."""
with patch.object(AgentExecutor, "_show_logs"):
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
executor.response_model = Mock()
executor.state.todos.items = [
TodoItem(
@@ -287,7 +330,7 @@ class TestAgentExecutor:
def test_format_prompt(self, mock_dependencies):
"""Test prompt formatting."""
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
inputs = {"input": "test input", "tool_names": "tool1, tool2", "tools": "desc"}
result = executor._format_prompt("Prompt {input} {tool_names} {tools}", inputs)
@@ -298,18 +341,18 @@ class TestAgentExecutor:
def test_is_training_mode_false(self, mock_dependencies):
"""Test training mode detection when not in training."""
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
assert executor._is_training_mode() is False
def test_is_training_mode_true(self, mock_dependencies):
"""Test training mode detection when in training."""
mock_dependencies["crew"]._train = True
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
assert executor._is_training_mode() is True
def test_append_message_to_state(self, mock_dependencies):
"""Test message appending to state."""
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
initial_count = len(executor.state.messages)
executor._append_message_to_state("test message")
@@ -322,7 +365,7 @@ class TestAgentExecutor:
callback = Mock()
mock_dependencies["step_callback"] = callback
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
answer = AgentFinish(thought="thinking", output="test", text="final")
executor._invoke_step_callback(answer)
@@ -332,7 +375,7 @@ class TestAgentExecutor:
def test_invoke_step_callback_none(self, mock_dependencies):
"""Test step callback when none provided."""
mock_dependencies["step_callback"] = None
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
# Should not raise error
executor._invoke_step_callback(
@@ -346,7 +389,7 @@ class TestAgentExecutor:
"""Test async step callback scheduling when already in an event loop."""
callback = AsyncMock()
mock_dependencies["step_callback"] = callback
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
answer = AgentFinish(thought="thinking", output="test", text="final")
with patch("crewai.experimental.agent_executor.asyncio.run") as mock_run:
@@ -364,6 +407,7 @@ class TestStepExecutorCriticalFixes:
def mock_dependencies(self):
"""Create mock dependencies for AgentExecutor tests in this class."""
llm = Mock()
llm.stop = []
llm.supports_stop_words.return_value = True
task = Mock()
@@ -393,6 +437,7 @@ class TestStepExecutorCriticalFixes:
@pytest.fixture
def step_executor(self):
llm = Mock()
llm.stop = []
llm.supports_stop_words.return_value = True
agent = Mock()
@@ -485,7 +530,7 @@ class TestStepExecutorCriticalFixes:
mock_handle_exception.return_value = None
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
executor._last_parser_error = OutputParserError("test error")
initial_iterations = executor.state.iterations
@@ -500,7 +545,7 @@ class TestStepExecutorCriticalFixes:
self, mock_handle_context, mock_dependencies
):
"""Test recovery from context length error."""
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
executor._last_context_error = Exception("context too long")
initial_iterations = executor.state.iterations
@@ -513,16 +558,16 @@ class TestStepExecutorCriticalFixes:
def test_use_stop_words_property(self, mock_dependencies):
"""Test use_stop_words property."""
mock_dependencies["llm"].supports_stop_words.return_value = True
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
assert executor.use_stop_words is True
mock_dependencies["llm"].supports_stop_words.return_value = False
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
assert executor.use_stop_words is False
def test_compatibility_properties(self, mock_dependencies):
"""Test compatibility properties for mixin."""
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
executor.state.messages = [{"role": "user", "content": "test"}]
executor.state.iterations = 5
@@ -538,6 +583,7 @@ class TestFlowErrorHandling:
def mock_dependencies(self):
"""Create mock dependencies."""
llm = Mock()
llm.stop = []
llm.supports_stop_words.return_value = True
task = Mock()
@@ -575,7 +621,7 @@ class TestFlowErrorHandling:
mock_enforce_rpm.return_value = None
mock_get_llm.side_effect = OutputParserError("parse failed")
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
result = executor.call_llm_and_parse()
assert result == "parser_error"
@@ -596,7 +642,7 @@ class TestFlowErrorHandling:
mock_get_llm.side_effect = Exception("context length")
mock_is_context_exceeded.return_value = True
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
result = executor.call_llm_and_parse()
assert result == "context_error"
@@ -610,6 +656,7 @@ class TestFlowInvoke:
def mock_dependencies(self):
"""Create mock dependencies."""
llm = Mock()
llm.stop = []
task = Mock()
task.description = "Test"
task.human_input = False
@@ -646,7 +693,7 @@ class TestFlowInvoke:
mock_dependencies,
):
"""Test successful invoke without human feedback."""
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
# Mock kickoff to set the final answer in state
def mock_kickoff_side_effect():
@@ -666,7 +713,7 @@ class TestFlowInvoke:
@patch.object(AgentExecutor, "kickoff")
def test_invoke_failure_no_agent_finish(self, mock_kickoff, mock_dependencies):
"""Test invoke fails without AgentFinish."""
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
executor.state.current_answer = AgentAction(
thought="thinking", tool="test", tool_input="test", text="action text"
)
@@ -689,7 +736,7 @@ class TestFlowInvoke:
"system": "System: {input}",
"user": "User: {input} {tool_names} {tools}",
}
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
def mock_kickoff_side_effect():
executor.state.current_answer = AgentFinish(
@@ -713,6 +760,7 @@ class TestNativeToolExecution:
@pytest.fixture
def mock_dependencies(self):
llm = Mock()
llm.stop = []
llm.supports_stop_words.return_value = True
task = Mock()
@@ -734,7 +782,7 @@ class TestNativeToolExecution:
prompt = {"prompt": "Test {input} {tool_names} {tools}"}
tools_handler = Mock()
tools_handler = Mock(spec=_ToolsHandler)
tools_handler.cache = None
return {
@@ -754,7 +802,7 @@ class TestNativeToolExecution:
def test_execute_native_tool_runs_parallel_for_multiple_calls(
self, mock_dependencies
):
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
def slow_one() -> str:
time.sleep(0.2)
@@ -790,7 +838,7 @@ class TestNativeToolExecution:
def test_execute_native_tool_falls_back_to_sequential_for_result_as_answer(
self, mock_dependencies
):
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
def slow_one() -> str:
time.sleep(0.2)
@@ -832,7 +880,7 @@ class TestNativeToolExecution:
def test_execute_native_tool_result_as_answer_short_circuits_remaining_calls(
self, mock_dependencies
):
executor = AgentExecutor(**mock_dependencies)
executor = _build_executor(**mock_dependencies)
call_counts = {"slow_one": 0, "slow_two": 0}
def slow_one() -> str:
@@ -879,30 +927,6 @@ class TestNativeToolExecution:
assert len(tool_messages) == 1
assert tool_messages[0]["tool_call_id"] == "call_1"
def test_check_native_todo_completion_requires_current_todo(
self, mock_dependencies
):
from crewai.utilities.planning_types import TodoList
executor = AgentExecutor(**mock_dependencies)
# No current todo → not satisfied
executor.state.todos = TodoList(items=[])
assert executor.check_native_todo_completion() == "todo_not_satisfied"
# With a current todo that has tool_to_use → satisfied
running = TodoItem(
step_number=1,
description="Use the expected tool",
tool_to_use="expected_tool",
status="running",
)
executor.state.todos = TodoList(items=[running])
assert executor.check_native_todo_completion() == "todo_satisfied"
# With a current todo without tool_to_use → still satisfied
running.tool_to_use = None
assert executor.check_native_todo_completion() == "todo_satisfied"
class TestPlannerObserver:

View File

@@ -1,7 +1,11 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are a helpful assistant that
uses tools. This is padding text to ensure the prompt is large enough for caching.
body: '{"input":[{"role":"user","content":"What is the weather in Tokyo?"}],"model":"gpt-4.1","instructions":"You
are a helpful assistant that uses tools. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
@@ -68,13 +72,9 @@ interactions:
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. "},{"role":"user","content":"What is the weather in Tokyo?"}],"model":"gpt-4.1","tool_choice":"auto","tools":[{"type":"function","function":{"name":"get_weather","description":"Get
the current weather for a location","strict":true,"parameters":{"type":"object","properties":{"location":{"type":"string","description":"The
city name"}},"required":["location"],"additionalProperties":false}}}]}'
text to ensure the prompt is large enough for caching. ","tools":[{"type":"function","name":"get_weather","description":"Get
the current weather for a location","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The
city name"}},"required":["location"]}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -87,7 +87,7 @@ interactions:
connection:
- keep-alive
content-length:
- '6158'
- '6065'
content-type:
- application/json
host:
@@ -109,26 +109,113 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.12
method: POST
uri: https://api.openai.com/v1/chat/completions
uri: https://api.openai.com/v1/responses
response:
body:
string: "{\n \"id\": \"chatcmpl-D7mXQCgT3p3ViImkiqDiZGqLREQtp\",\n \"object\":
\"chat.completion\",\n \"created\": 1770747248,\n \"model\": \"gpt-4.1-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_9ZqMavn3J1fBnQEaqpYol0Bd\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"get_weather\",\n
\ \"arguments\": \"{\\\"location\\\":\\\"Tokyo\\\"}\"\n }\n
\ }\n ],\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 1187,\n \"completion_tokens\":
14,\n \"total_tokens\": 1201,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
1152,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_8b22347a3e\"\n}\n"
string: "{\n \"id\": \"resp_0d68149bcc0d14810069caf464a4b48197bd9f098abb2f6303\",\n
\ \"object\": \"response\",\n \"created_at\": 1774908516,\n \"status\":
\"completed\",\n \"background\": false,\n \"billing\": {\n \"payer\":
\"developer\"\n },\n \"completed_at\": 1774908517,\n \"error\": null,\n
\ \"frequency_penalty\": 0.0,\n \"incomplete_details\": null,\n \"instructions\":
\"You are a helpful assistant that uses tools. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. \",\n \"max_output_tokens\":
null,\n \"max_tool_calls\": null,\n \"model\": \"gpt-4.1-2025-04-14\",\n
\ \"output\": [\n {\n \"id\": \"fc_0d68149bcc0d14810069caf46568088197a33be67f16a1fa09\",\n
\ \"type\": \"function_call\",\n \"status\": \"completed\",\n \"arguments\":
\"{\\\"location\\\":\\\"Tokyo\\\"}\",\n \"call_id\": \"call_74rwmYse0DE4JFaFGyAFx9bu\",\n
\ \"name\": \"get_weather\"\n }\n ],\n \"parallel_tool_calls\": true,\n
\ \"presence_penalty\": 0.0,\n \"previous_response_id\": null,\n \"prompt_cache_key\":
null,\n \"prompt_cache_retention\": null,\n \"reasoning\": {\n \"effort\":
null,\n \"summary\": null\n },\n \"safety_identifier\": null,\n \"service_tier\":
\"default\",\n \"store\": true,\n \"temperature\": 1.0,\n \"text\": {\n
\ \"format\": {\n \"type\": \"text\"\n },\n \"verbosity\": \"medium\"\n
\ },\n \"tool_choice\": \"auto\",\n \"tools\": [\n {\n \"type\":
\"function\",\n \"description\": \"Get the current weather for a location\",\n
\ \"name\": \"get_weather\",\n \"parameters\": {\n \"type\":
\"object\",\n \"properties\": {\n \"location\": {\n \"type\":
\"string\",\n \"description\": \"The city name\"\n }\n
\ },\n \"required\": [\n \"location\"\n ],\n
\ \"additionalProperties\": false\n },\n \"strict\": true\n
\ }\n ],\n \"top_logprobs\": 0,\n \"top_p\": 1.0,\n \"truncation\":
\"disabled\",\n \"usage\": {\n \"input_tokens\": 1185,\n \"input_tokens_details\":
{\n \"cached_tokens\": 0\n },\n \"output_tokens\": 15,\n \"output_tokens_details\":
{\n \"reasoning_tokens\": 0\n },\n \"total_tokens\": 1200\n },\n
\ \"user\": null,\n \"metadata\": {}\n}"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -137,7 +224,7 @@ interactions:
Content-Type:
- application/json
Date:
- Tue, 10 Feb 2026 18:14:08 GMT
- Mon, 30 Mar 2026 22:08:37 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -146,8 +233,6 @@ interactions:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
@@ -155,15 +240,13 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '484'
- '1085'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
set-cookie:
- SET-COOKIE-XXX
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
@@ -182,8 +265,12 @@ interactions:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are a helpful assistant that
uses tools. This is padding text to ensure the prompt is large enough for caching.
body: '{"input":[{"role":"user","content":"What is the weather in Paris?"}],"model":"gpt-4.1","instructions":"You
are a helpful assistant that uses tools. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
@@ -250,13 +337,9 @@ interactions:
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. "},{"role":"user","content":"What is the weather in Paris?"}],"model":"gpt-4.1","tool_choice":"auto","tools":[{"type":"function","function":{"name":"get_weather","description":"Get
the current weather for a location","strict":true,"parameters":{"type":"object","properties":{"location":{"type":"string","description":"The
city name"}},"required":["location"],"additionalProperties":false}}}]}'
text to ensure the prompt is large enough for caching. ","tools":[{"type":"function","name":"get_weather","description":"Get
the current weather for a location","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The
city name"}},"required":["location"]}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -269,7 +352,7 @@ interactions:
connection:
- keep-alive
content-length:
- '6158'
- '6065'
content-type:
- application/json
cookie:
@@ -293,26 +376,113 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.12
method: POST
uri: https://api.openai.com/v1/chat/completions
uri: https://api.openai.com/v1/responses
response:
body:
string: "{\n \"id\": \"chatcmpl-D7mXR8k9vk8TlGvGXlrQSI7iNeAN1\",\n \"object\":
\"chat.completion\",\n \"created\": 1770747249,\n \"model\": \"gpt-4.1-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_6PeUBlRPG8JcV2lspmLjJbnn\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"get_weather\",\n
\ \"arguments\": \"{\\\"location\\\":\\\"Paris\\\"}\"\n }\n
\ }\n ],\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 1187,\n \"completion_tokens\":
14,\n \"total_tokens\": 1201,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
1152,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_8b22347a3e\"\n}\n"
string: "{\n \"id\": \"resp_0525bf798202137e0069caf465ee3c8196aa7c83da1c369eb7\",\n
\ \"object\": \"response\",\n \"created_at\": 1774908517,\n \"status\":
\"completed\",\n \"background\": false,\n \"billing\": {\n \"payer\":
\"developer\"\n },\n \"completed_at\": 1774908518,\n \"error\": null,\n
\ \"frequency_penalty\": 0.0,\n \"incomplete_details\": null,\n \"instructions\":
\"You are a helpful assistant that uses tools. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. \",\n \"max_output_tokens\":
null,\n \"max_tool_calls\": null,\n \"model\": \"gpt-4.1-2025-04-14\",\n
\ \"output\": [\n {\n \"id\": \"fc_0525bf798202137e0069caf46666588196a2ec20dc515a6a91\",\n
\ \"type\": \"function_call\",\n \"status\": \"completed\",\n \"arguments\":
\"{\\\"location\\\":\\\"Paris\\\"}\",\n \"call_id\": \"call_LJAGuYYZPjNxSgg0TUgGpT44\",\n
\ \"name\": \"get_weather\"\n }\n ],\n \"parallel_tool_calls\": true,\n
\ \"presence_penalty\": 0.0,\n \"previous_response_id\": null,\n \"prompt_cache_key\":
null,\n \"prompt_cache_retention\": null,\n \"reasoning\": {\n \"effort\":
null,\n \"summary\": null\n },\n \"safety_identifier\": null,\n \"service_tier\":
\"default\",\n \"store\": true,\n \"temperature\": 1.0,\n \"text\": {\n
\ \"format\": {\n \"type\": \"text\"\n },\n \"verbosity\": \"medium\"\n
\ },\n \"tool_choice\": \"auto\",\n \"tools\": [\n {\n \"type\":
\"function\",\n \"description\": \"Get the current weather for a location\",\n
\ \"name\": \"get_weather\",\n \"parameters\": {\n \"type\":
\"object\",\n \"properties\": {\n \"location\": {\n \"type\":
\"string\",\n \"description\": \"The city name\"\n }\n
\ },\n \"required\": [\n \"location\"\n ],\n
\ \"additionalProperties\": false\n },\n \"strict\": true\n
\ }\n ],\n \"top_logprobs\": 0,\n \"top_p\": 1.0,\n \"truncation\":
\"disabled\",\n \"usage\": {\n \"input_tokens\": 1185,\n \"input_tokens_details\":
{\n \"cached_tokens\": 1152\n },\n \"output_tokens\": 15,\n \"output_tokens_details\":
{\n \"reasoning_tokens\": 0\n },\n \"total_tokens\": 1200\n },\n
\ \"user\": null,\n \"metadata\": {}\n}"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -321,7 +491,7 @@ interactions:
Content-Type:
- application/json
Date:
- Tue, 10 Feb 2026 18:14:09 GMT
- Mon, 30 Mar 2026 22:08:38 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -330,8 +500,6 @@ interactions:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
@@ -339,15 +507,11 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '528'
- '653'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
set-cookie:
- SET-COOKIE-XXX
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
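This cassette was re-recorded against the Responses API, and the diff shows the full shape change: the endpoint moves from /v1/chat/completions to /v1/responses, the system prompt moves out of messages into a top-level "instructions" field, user turns move under "input", and tool definitions lose the nested "function" wrapper. A trimmed side-by-side of the two request bodies recorded above (padding text elided):

# Old shape (POST /v1/chat/completions)
chat_completions_body = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant that uses tools. ..."},
        {"role": "user", "content": "What is the weather in Tokyo?"},
    ],
    "model": "gpt-4.1",
    "tool_choice": "auto",
    "tools": [{
        "type": "function",
        "function": {  # nested wrapper, dropped by the Responses API
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "strict": True,
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string", "description": "The city name"}},
                "required": ["location"],
                "additionalProperties": False,
            },
        },
    }],
}

# New shape (POST /v1/responses)
responses_body = {
    "input": [{"role": "user", "content": "What is the weather in Tokyo?"}],
    "model": "gpt-4.1",
    "instructions": "You are a helpful assistant that uses tools. ...",  # system prompt moved here
    "tools": [{
        "type": "function",  # flattened: name and parameters sit at the top level
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string", "description": "The city name"}},
            "required": ["location"],
        },
    }],
}

Note that the second (Paris) response above records cached_tokens: 1152 while the first (Tokyo) response records cached_tokens: 0 -- exactly the prompt-caching behavior the padding text exists to exercise.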

View File

@@ -0,0 +1,110 @@
interactions:
- request:
body: '{"messages":[{"role":"user","content":"What is the capital of France?"}],"model":"gpt-5"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '89'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.2
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-DO4LcSpy72yIXCYSIVOQEXWNXydgn\",\n \"object\":
\"chat.completion\",\n \"created\": 1774628956,\n \"model\": \"gpt-5-2025-08-07\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Paris.\",\n \"refusal\": null,\n
\ \"annotations\": []\n },\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 13,\n \"completion_tokens\":
11,\n \"total_tokens\": 24,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": null\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-Ray:
- 9e2fc5dce85582fb-GIG
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Fri, 27 Mar 2026 16:29:17 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
content-length:
- '772'
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '1343'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
set-cookie:
- SET-COOKIE-XXX
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1

View File

@@ -0,0 +1,108 @@
interactions:
- request:
body: '{"messages":[{"role":"user","content":"Say hello"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '74'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.2
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-DPS8YQSwQ3pZKZztIoIe1eYodMqh2\",\n \"object\":
\"chat.completion\",\n \"created\": 1774958730,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Hello! How can I assist you today?\",\n
\ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
9,\n \"completion_tokens\": 9,\n \"total_tokens\": 18,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_709f182cb4\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-Ray:
- 9e4f38fc5d9d82e8-GIG
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 31 Mar 2026 12:05:30 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
content-length:
- '839'
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '680'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
set-cookie:
- SET-COOKIE-XXX
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1
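The two new cassettes above (gpt-5 and gpt-4o-mini) follow the same scrubbing convention as the rest of the suite: volatile or sensitive headers are replaced with fixed placeholders (AUTHORIZATION-XXX, X-USER-AGENT-XXX, SET-COOKIE-XXX, and so on) so recordings stay stable and leak nothing. The repo's actual recording fixture is not shown in this diff; a minimal vcrpy-style sketch of how that kind of scrubbing is typically configured:

import vcr

scrubbed_vcr = vcr.VCR(
    # Replace sensitive request headers with fixed placeholders before recording.
    filter_headers=[
        ("authorization", "AUTHORIZATION-XXX"),
        ("user-agent", "X-USER-AGENT-XXX"),
        ("cookie", "COOKIE-XXX"),
    ],
)

with scrubbed_vcr.use_cassette("tests/cassettes/say_hello.yaml"):
    ...  # the HTTP call under test goes here; headers are scrubbed on write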

View File

@@ -136,6 +136,7 @@ class TestPlusAPI(unittest.TestCase):
"file": encoded_file,
"description": description,
"available_exports": None,
"tools_metadata": None,
}
mock_make_request.assert_called_once_with(
"POST", "/crewai_plus/api/v1/tools", json=params
@@ -173,6 +174,7 @@ class TestPlusAPI(unittest.TestCase):
"file": encoded_file,
"description": description,
"available_exports": None,
"tools_metadata": None,
}
self.assert_request_with_org_id(
@@ -201,6 +203,48 @@ class TestPlusAPI(unittest.TestCase):
"file": encoded_file,
"description": description,
"available_exports": None,
"tools_metadata": None,
}
mock_make_request.assert_called_once_with(
"POST", "/crewai_plus/api/v1/tools", json=params
)
self.assertEqual(response, mock_response)
@patch("crewai.cli.plus_api.PlusAPI._make_request")
def test_publish_tool_with_tools_metadata(self, mock_make_request):
mock_response = MagicMock()
mock_make_request.return_value = mock_response
handle = "test_tool_handle"
public = True
version = "1.0.0"
description = "Test tool description"
encoded_file = "encoded_test_file"
available_exports = [{"name": "MyTool"}]
tools_metadata = [
{
"name": "MyTool",
"humanized_name": "my_tool",
"description": "A test tool",
"run_params_schema": {"type": "object", "properties": {}},
"init_params_schema": {"type": "object", "properties": {}},
"env_vars": [{"name": "API_KEY", "description": "API key", "required": True, "default": None}],
}
]
response = self.api.publish_tool(
handle, public, version, description, encoded_file,
available_exports=available_exports,
tools_metadata=tools_metadata,
)
params = {
"handle": handle,
"public": public,
"version": version,
"file": encoded_file,
"description": description,
"available_exports": available_exports,
"tools_metadata": {"package": handle, "tools": tools_metadata},
}
mock_make_request.assert_called_once_with(
"POST", "/crewai_plus/api/v1/tools", json=params

View File

@@ -363,3 +363,290 @@ def test_get_crews_ignores_template_directories(
utils.get_crews()
assert not template_crew_detected
# Tests for extract_tools_metadata
def test_extract_tools_metadata_empty_project(temp_project_dir):
"""Test that extract_tools_metadata returns empty list for empty project."""
metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
assert metadata == []
def test_extract_tools_metadata_no_init_file(temp_project_dir):
"""Test that extract_tools_metadata returns empty list when no __init__.py exists."""
(temp_project_dir / "some_file.py").write_text("print('hello')")
metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
assert metadata == []
def test_extract_tools_metadata_empty_init_file(temp_project_dir):
"""Test that extract_tools_metadata returns empty list for empty __init__.py."""
create_init_file(temp_project_dir, "")
metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
assert metadata == []
def test_extract_tools_metadata_no_all_variable(temp_project_dir):
"""Test that extract_tools_metadata returns empty list when __all__ is not defined."""
create_init_file(
temp_project_dir,
"from crewai.tools import BaseTool\n\nclass MyTool(BaseTool):\n pass",
)
metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
assert metadata == []
def test_extract_tools_metadata_valid_base_tool_class(temp_project_dir):
"""Test that extract_tools_metadata extracts metadata from a valid BaseTool class."""
create_init_file(
temp_project_dir,
"""from crewai.tools import BaseTool
class MyTool(BaseTool):
name: str = "my_tool"
description: str = "A test tool"
__all__ = ['MyTool']
""",
)
metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
assert len(metadata) == 1
assert metadata[0]["name"] == "MyTool"
assert metadata[0]["humanized_name"] == "my_tool"
assert metadata[0]["description"] == "A test tool"
def test_extract_tools_metadata_with_args_schema(temp_project_dir):
"""Test that extract_tools_metadata extracts run_params_schema from args_schema."""
create_init_file(
temp_project_dir,
"""from crewai.tools import BaseTool
from pydantic import BaseModel
class MyToolInput(BaseModel):
query: str
limit: int = 10
class MyTool(BaseTool):
name: str = "my_tool"
description: str = "A test tool"
args_schema: type[BaseModel] = MyToolInput
__all__ = ['MyTool']
""",
)
metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
assert len(metadata) == 1
assert metadata[0]["name"] == "MyTool"
run_params = metadata[0]["run_params_schema"]
assert "properties" in run_params
assert "query" in run_params["properties"]
assert "limit" in run_params["properties"]
def test_extract_tools_metadata_with_env_vars(temp_project_dir):
"""Test that extract_tools_metadata extracts env_vars."""
create_init_file(
temp_project_dir,
"""from crewai.tools import BaseTool
from crewai.tools.base_tool import EnvVar
class MyTool(BaseTool):
name: str = "my_tool"
description: str = "A test tool"
env_vars: list[EnvVar] = [
EnvVar(name="MY_API_KEY", description="API key for service", required=True),
EnvVar(name="MY_OPTIONAL_VAR", description="Optional var", required=False, default="default_value"),
]
__all__ = ['MyTool']
""",
)
metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
assert len(metadata) == 1
env_vars = metadata[0]["env_vars"]
assert len(env_vars) == 2
assert env_vars[0]["name"] == "MY_API_KEY"
assert env_vars[0]["description"] == "API key for service"
assert env_vars[0]["required"] is True
assert env_vars[1]["name"] == "MY_OPTIONAL_VAR"
assert env_vars[1]["required"] is False
assert env_vars[1]["default"] == "default_value"
def test_extract_tools_metadata_with_env_vars_field_default_factory(temp_project_dir):
"""Test that extract_tools_metadata extracts env_vars declared with Field(default_factory=...)."""
create_init_file(
temp_project_dir,
"""from crewai.tools import BaseTool
from crewai.tools.base_tool import EnvVar
from pydantic import Field
class MyTool(BaseTool):
name: str = "my_tool"
description: str = "A test tool"
env_vars: list[EnvVar] = Field(
default_factory=lambda: [
EnvVar(name="MY_TOOL_API", description="API token for my tool", required=True),
]
)
__all__ = ['MyTool']
""",
)
metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
assert len(metadata) == 1
env_vars = metadata[0]["env_vars"]
assert len(env_vars) == 1
assert env_vars[0]["name"] == "MY_TOOL_API"
assert env_vars[0]["description"] == "API token for my tool"
assert env_vars[0]["required"] is True
def test_extract_tools_metadata_with_custom_init_params(temp_project_dir):
"""Test that extract_tools_metadata extracts init_params_schema with custom params."""
create_init_file(
temp_project_dir,
"""from crewai.tools import BaseTool
class MyTool(BaseTool):
name: str = "my_tool"
description: str = "A test tool"
api_endpoint: str = "https://api.example.com"
timeout: int = 30
__all__ = ['MyTool']
""",
)
metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
assert len(metadata) == 1
init_params = metadata[0]["init_params_schema"]
assert "properties" in init_params
# Custom params should be included
assert "api_endpoint" in init_params["properties"]
assert "timeout" in init_params["properties"]
# Base params should be filtered out
assert "name" not in init_params["properties"]
assert "description" not in init_params["properties"]
def test_extract_tools_metadata_multiple_tools(temp_project_dir):
"""Test that extract_tools_metadata extracts metadata from multiple tools."""
create_init_file(
temp_project_dir,
"""from crewai.tools import BaseTool
class FirstTool(BaseTool):
name: str = "first_tool"
description: str = "First test tool"
class SecondTool(BaseTool):
name: str = "second_tool"
description: str = "Second test tool"
__all__ = ['FirstTool', 'SecondTool']
""",
)
metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
assert len(metadata) == 2
names = [m["name"] for m in metadata]
assert "FirstTool" in names
assert "SecondTool" in names
def test_extract_tools_metadata_multiple_init_files(temp_project_dir):
"""Test that extract_tools_metadata extracts metadata from multiple __init__.py files."""
# Create tool in root __init__.py
create_init_file(
temp_project_dir,
"""from crewai.tools import BaseTool
class RootTool(BaseTool):
name: str = "root_tool"
description: str = "Root tool"
__all__ = ['RootTool']
""",
)
# Create nested package with another tool
nested_dir = temp_project_dir / "nested"
nested_dir.mkdir()
create_init_file(
nested_dir,
"""from crewai.tools import BaseTool
class NestedTool(BaseTool):
name: str = "nested_tool"
description: str = "Nested tool"
__all__ = ['NestedTool']
""",
)
metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
assert len(metadata) == 2
names = [m["name"] for m in metadata]
assert "RootTool" in names
assert "NestedTool" in names
def test_extract_tools_metadata_ignores_non_tool_exports(temp_project_dir):
"""Test that extract_tools_metadata ignores non-BaseTool exports."""
create_init_file(
temp_project_dir,
"""from crewai.tools import BaseTool
class MyTool(BaseTool):
name: str = "my_tool"
description: str = "A test tool"
def not_a_tool():
pass
SOME_CONSTANT = "value"
__all__ = ['MyTool', 'not_a_tool', 'SOME_CONSTANT']
""",
)
metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
assert len(metadata) == 1
assert metadata[0]["name"] == "MyTool"
def test_extract_tools_metadata_import_error_returns_empty(temp_project_dir):
"""Test that extract_tools_metadata returns empty list on import error."""
create_init_file(
temp_project_dir,
"""from nonexistent_module import something
class MyTool(BaseTool):
pass
__all__ = ['MyTool']
""",
)
# Should not raise, just return empty list
metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
assert metadata == []
def test_extract_tools_metadata_syntax_error_returns_empty(temp_project_dir):
"""Test that extract_tools_metadata returns empty list on syntax error."""
create_init_file(
temp_project_dir,
"""from crewai.tools import BaseTool
class MyTool(BaseTool):
# Missing closing parenthesis
def __init__(self, name:
pass
__all__ = ['MyTool']
""",
)
# Should not raise, just return empty list
metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
assert metadata == []
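Read together, these tests define the extractor's contract: recursively visit every __init__.py under the project, import it, read __all__, keep only BaseTool subclasses, derive run/init schemas and env vars, and degrade to an empty list on any import or syntax error. A condensed sketch under those assumptions (the real crewai.cli.utils.extract_tools_metadata may differ in structure):

import importlib.util
from pathlib import Path

def extract_tools_metadata_sketch(dir_path: str) -> list[dict]:
    from crewai.tools import BaseTool

    metadata: list[dict] = []
    for init_file in Path(dir_path).rglob("__init__.py"):
        try:
            spec = importlib.util.spec_from_file_location("tool_pkg", init_file)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
        except Exception:
            return []  # import and syntax errors degrade to an empty result
        for export in getattr(module, "__all__", []):
            obj = getattr(module, export, None)
            if not (isinstance(obj, type) and issubclass(obj, BaseTool)):
                continue  # functions and constants listed in __all__ are skipped
            tool = obj()
            metadata.append({
                "name": export,
                "humanized_name": tool.name,
                "description": tool.description,
                "run_params_schema": tool.args_schema.model_json_schema() if tool.args_schema else {},
                "init_params_schema": {},  # the real code also filters out BaseTool's own fields
                "env_vars": [ev.model_dump() for ev in tool.env_vars],
            })
    return metadata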

View File

@@ -185,9 +185,14 @@ def test_publish_when_not_in_sync(mock_is_synced, capsys, tool_command):
"crewai.cli.tools.main.extract_available_exports",
return_value=[{"name": "SampleTool"}],
)
@patch(
"crewai.cli.tools.main.extract_tools_metadata",
return_value=[{"name": "SampleTool", "humanized_name": "sample_tool", "description": "A sample tool", "run_params_schema": {}, "init_params_schema": {}, "env_vars": []}],
)
@patch("crewai.cli.tools.main.ToolCommand._print_current_organization")
def test_publish_when_not_in_sync_and_force(
mock_print_org,
mock_tools_metadata,
mock_available_exports,
mock_is_synced,
mock_publish,
@@ -222,6 +227,7 @@ def test_publish_when_not_in_sync_and_force(
description="A sample tool",
encoded_file=unittest.mock.ANY,
available_exports=[{"name": "SampleTool"}],
tools_metadata=[{"name": "SampleTool", "humanized_name": "sample_tool", "description": "A sample tool", "run_params_schema": {}, "init_params_schema": {}, "env_vars": []}],
)
mock_print_org.assert_called_once()
@@ -242,7 +248,12 @@ def test_publish_when_not_in_sync_and_force(
"crewai.cli.tools.main.extract_available_exports",
return_value=[{"name": "SampleTool"}],
)
@patch(
"crewai.cli.tools.main.extract_tools_metadata",
return_value=[{"name": "SampleTool", "humanized_name": "sample_tool", "description": "A sample tool", "run_params_schema": {}, "init_params_schema": {}, "env_vars": []}],
)
def test_publish_success(
mock_tools_metadata,
mock_available_exports,
mock_is_synced,
mock_publish,
@@ -277,6 +288,7 @@ def test_publish_success(
description="A sample tool",
encoded_file=unittest.mock.ANY,
available_exports=[{"name": "SampleTool"}],
tools_metadata=[{"name": "SampleTool", "humanized_name": "sample_tool", "description": "A sample tool", "run_params_schema": {}, "init_params_schema": {}, "env_vars": []}],
)
@@ -295,7 +307,12 @@ def test_publish_success(
"crewai.cli.tools.main.extract_available_exports",
return_value=[{"name": "SampleTool"}],
)
@patch(
"crewai.cli.tools.main.extract_tools_metadata",
return_value=[{"name": "SampleTool", "humanized_name": "sample_tool", "description": "A sample tool", "run_params_schema": {}, "init_params_schema": {}, "env_vars": []}],
)
def test_publish_failure(
mock_tools_metadata,
mock_available_exports,
mock_publish,
mock_open,
@@ -336,7 +353,12 @@ def test_publish_failure(
"crewai.cli.tools.main.extract_available_exports",
return_value=[{"name": "SampleTool"}],
)
@patch(
"crewai.cli.tools.main.extract_tools_metadata",
return_value=[{"name": "SampleTool", "humanized_name": "sample_tool", "description": "A sample tool", "run_params_schema": {}, "init_params_schema": {}, "env_vars": []}],
)
def test_publish_api_error(
mock_tools_metadata,
mock_available_exports,
mock_publish,
mock_open,
@@ -362,6 +384,63 @@ def test_publish_api_error(
mock_publish.assert_called_once()
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
@patch("crewai.cli.tools.main.subprocess.run")
@patch("crewai.cli.tools.main.os.listdir", return_value=["sample-tool-1.0.0.tar.gz"])
@patch(
"crewai.cli.tools.main.open",
new_callable=unittest.mock.mock_open,
read_data=b"sample tarball content",
)
@patch("crewai.cli.plus_api.PlusAPI.publish_tool")
@patch("crewai.cli.tools.main.git.Repository.is_synced", return_value=True)
@patch(
"crewai.cli.tools.main.extract_available_exports",
return_value=[{"name": "SampleTool"}],
)
@patch(
"crewai.cli.tools.main.extract_tools_metadata",
side_effect=Exception("Failed to extract metadata"),
)
def test_publish_metadata_extraction_failure_continues_with_warning(
mock_tools_metadata,
mock_available_exports,
mock_is_synced,
mock_publish,
mock_open,
mock_listdir,
mock_subprocess_run,
mock_get_project_description,
mock_get_project_version,
mock_get_project_name,
capsys,
tool_command,
):
"""Test that metadata extraction failure shows warning but continues publishing."""
mock_publish_response = MagicMock()
mock_publish_response.status_code = 200
mock_publish_response.json.return_value = {"handle": "sample-tool"}
mock_publish.return_value = mock_publish_response
tool_command.publish(is_public=True)
output = capsys.readouterr().out
assert "Warning: Could not extract tool metadata" in output
assert "Publishing will continue without detailed metadata" in output
assert "No tool metadata extracted" in output
mock_publish.assert_called_once_with(
handle="sample-tool",
is_public=True,
version="1.0.0",
description="A sample tool",
encoded_file=unittest.mock.ANY,
available_exports=[{"name": "SampleTool"}],
tools_metadata=[],
)
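This failure-path test shows that extraction errors are deliberately non-fatal: the CLI warns and publishes with tools_metadata=[] rather than aborting. A minimal sketch of the guard that behavior implies inside the publish flow (messages matched to the asserted substrings; the extractor callable is injected to keep the sketch self-contained):

def _safe_tools_metadata(extract, dir_path: str) -> list[dict]:
    try:
        metadata = extract(dir_path=dir_path)
    except Exception as exc:
        print(f"Warning: Could not extract tool metadata: {exc}")
        print("Publishing will continue without detailed metadata.")
        metadata = []
    if not metadata:
        print("No tool metadata extracted.")
    return metadata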
@patch("crewai.cli.tools.main.Settings")
def test_print_current_organization_with_org(mock_settings, capsys, tool_command):
mock_settings_instance = MagicMock()

View File

@@ -0,0 +1,176 @@
from typing import Any
from unittest.mock import patch
import pytest
from pydantic import BaseModel
from crewai.events.event_bus import CrewAIEventsBus
from crewai.events.types.llm_events import LLMCallCompletedEvent, LLMCallType
from crewai.llm import LLM
from crewai.llms.base_llm import BaseLLM
class TestLLMCallCompletedEventUsageField:
def test_accepts_usage_dict(self):
event = LLMCallCompletedEvent(
response="hello",
call_type=LLMCallType.LLM_CALL,
call_id="test-id",
usage={"prompt_tokens": 10, "completion_tokens": 20, "total_tokens": 30},
)
assert event.usage == {
"prompt_tokens": 10,
"completion_tokens": 20,
"total_tokens": 30,
}
def test_usage_defaults_to_none(self):
event = LLMCallCompletedEvent(
response="hello",
call_type=LLMCallType.LLM_CALL,
call_id="test-id",
)
assert event.usage is None
def test_accepts_none_usage(self):
event = LLMCallCompletedEvent(
response="hello",
call_type=LLMCallType.LLM_CALL,
call_id="test-id",
usage=None,
)
assert event.usage is None
def test_accepts_nested_usage_dict(self):
usage = {
"prompt_tokens": 100,
"completion_tokens": 200,
"total_tokens": 300,
"prompt_tokens_details": {"cached_tokens": 50},
}
event = LLMCallCompletedEvent(
response="hello",
call_type=LLMCallType.LLM_CALL,
call_id="test-id",
usage=usage,
)
assert event.usage["prompt_tokens_details"]["cached_tokens"] == 50
class TestUsageToDict:
def test_none_returns_none(self):
assert LLM._usage_to_dict(None) is None
def test_dict_passes_through(self):
usage = {"prompt_tokens": 10, "total_tokens": 30}
assert LLM._usage_to_dict(usage) is usage
def test_pydantic_model_uses_model_dump(self):
class Usage(BaseModel):
prompt_tokens: int = 10
completion_tokens: int = 20
total_tokens: int = 30
result = LLM._usage_to_dict(Usage())
assert result == {
"prompt_tokens": 10,
"completion_tokens": 20,
"total_tokens": 30,
}
def test_object_with_dict_attr(self):
class UsageObj:
def __init__(self):
self.prompt_tokens = 5
self.completion_tokens = 15
self.total_tokens = 20
result = LLM._usage_to_dict(UsageObj())
assert result == {
"prompt_tokens": 5,
"completion_tokens": 15,
"total_tokens": 20,
}
def test_object_with_dict_excludes_private_attrs(self):
class UsageObj:
def __init__(self):
self.total_tokens = 42
self._internal = "hidden"
result = LLM._usage_to_dict(UsageObj())
assert result == {"total_tokens": 42}
assert "_internal" not in result
def test_unsupported_type_returns_none(self):
assert LLM._usage_to_dict(42) is None
assert LLM._usage_to_dict("string") is None
class _StubLLM(BaseLLM):
"""Minimal concrete BaseLLM for testing event emission."""
model: str = "test-model"
def call(self, *args: Any, **kwargs: Any) -> str:
return ""
async def acall(self, *args: Any, **kwargs: Any) -> str:
return ""
def supports_function_calling(self) -> bool:
return False
def supports_stop_words(self) -> bool:
return True
class TestEmitCallCompletedEventPassesUsage:
@pytest.fixture
def mock_emit(self):
with patch.object(CrewAIEventsBus, "emit") as mock:
yield mock
@pytest.fixture
def llm(self):
return _StubLLM(model="test-model")
def test_usage_is_passed_to_event(self, mock_emit, llm):
usage_data = {"prompt_tokens": 10, "completion_tokens": 20, "total_tokens": 30}
llm._emit_call_completed_event(
response="hello",
call_type=LLMCallType.LLM_CALL,
messages="test prompt",
usage=usage_data,
)
mock_emit.assert_called_once()
event = mock_emit.call_args[1]["event"]
assert isinstance(event, LLMCallCompletedEvent)
assert event.usage == usage_data
def test_none_usage_is_passed_to_event(self, mock_emit, llm):
llm._emit_call_completed_event(
response="hello",
call_type=LLMCallType.LLM_CALL,
messages="test prompt",
usage=None,
)
mock_emit.assert_called_once()
event = mock_emit.call_args[1]["event"]
assert isinstance(event, LLMCallCompletedEvent)
assert event.usage is None
def test_usage_omitted_defaults_to_none(self, mock_emit, llm):
llm._emit_call_completed_event(
response="hello",
call_type=LLMCallType.LLM_CALL,
messages="test prompt",
)
mock_emit.assert_called_once()
event = mock_emit.call_args[1]["event"]
assert isinstance(event, LLMCallCompletedEvent)
assert event.usage is None
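The TestUsageToDict cases above fully specify the normalization contract: None passes through, dicts come back as the same object, Pydantic models are dumped, plain objects contribute only their public attributes, and anything else collapses to None. A standalone sketch consistent with those cases (LLM._usage_to_dict is the real, private implementation):

from typing import Any
from pydantic import BaseModel

def usage_to_dict(usage: Any) -> dict[str, Any] | None:
    if usage is None:
        return None
    if isinstance(usage, dict):
        return usage  # same object, no copy (the test asserts identity)
    if isinstance(usage, BaseModel):
        return usage.model_dump()
    if hasattr(usage, "__dict__"):
        # Plain objects: keep public attributes, drop _private ones.
        return {k: v for k, v in vars(usage).items() if not k.startswith("_")}
    return None  # ints, strings, and other unsupported types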

View File

@@ -132,12 +132,12 @@ def test_embedding_configuration_flow(
embedder_config = {
"provider": "sentence-transformer",
"model_name": "all-MiniLM-L6-v2",
"config": {"model_name": "all-MiniLM-L6-v2"},
}
KnowledgeStorage(embedder=embedder_config, collection_name="embedding_test")
storage = KnowledgeStorage(embedder=embedder_config, collection_name="embedding_test")
mock_get_embedding.assert_called_once_with(embedder_config)
mock_get_embedding.assert_called_once_with(storage.embedder)
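Two things changed here: the embedder dict now nests provider options under a "config" key, and the assertion compares against storage.embedder rather than the raw input, implying KnowledgeStorage normalizes the config it is handed before resolving the embedding function. The shape the updated test passes:

embedder_config = {
    "provider": "sentence-transformer",
    "config": {"model_name": "all-MiniLM-L6-v2"},  # options nested under "config"
}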
@patch("crewai.knowledge.storage.knowledge_storage.get_rag_client")

View File

@@ -125,8 +125,8 @@ def test_anthropic_specific_parameters():
assert isinstance(llm, AnthropicCompletion)
assert llm.stop_sequences == ["Human:", "Assistant:"]
assert llm.stream is True
assert llm.client.max_retries == 5
assert llm.client.timeout == 60
assert llm._client.max_retries == 5
assert llm._client.timeout == 60
def test_anthropic_completion_call():
@@ -563,8 +563,8 @@ def test_anthropic_environment_variable_api_key():
with patch.dict(os.environ, {"ANTHROPIC_API_KEY": "test-anthropic-key"}):
llm = LLM(model="anthropic/claude-3-5-sonnet-20241022")
assert llm.client is not None
assert hasattr(llm.client, 'messages')
assert llm._client is not None
assert hasattr(llm._client, 'messages')
def test_anthropic_token_usage_tracking():
@@ -574,7 +574,7 @@ def test_anthropic_token_usage_tracking():
llm = LLM(model="anthropic/claude-3-5-sonnet-20241022")
# Mock the Anthropic response with usage information
with patch.object(llm.client.messages, 'create') as mock_create:
with patch.object(llm._client.messages, 'create') as mock_create:
mock_response = MagicMock()
mock_response.content = [MagicMock(text="test response")]
mock_response.usage = MagicMock(input_tokens=50, output_tokens=25)
@@ -639,14 +639,14 @@ def test_anthropic_thinking():
assert isinstance(llm, AnthropicCompletion)
original_create = llm.client.messages.create
original_create = llm._client.messages.create
captured_params = {}
def capture_and_call(**kwargs):
captured_params.update(kwargs)
return original_create(**kwargs)
with patch.object(llm.client.messages, 'create', side_effect=capture_and_call):
with patch.object(llm._client.messages, 'create', side_effect=capture_and_call):
result = llm.call("What is the weather in Tokyo?")
assert result is not None
@@ -677,14 +677,14 @@ def test_anthropic_thinking_blocks_preserved_across_turns():
assert isinstance(llm, AnthropicCompletion)
# Capture all messages.create calls to verify thinking blocks are included
original_create = llm.client.messages.create
original_create = llm._client.messages.create
captured_calls = []
def capture_and_call(**kwargs):
captured_calls.append(kwargs)
return original_create(**kwargs)
with patch.object(llm.client.messages, 'create', side_effect=capture_and_call):
with patch.object(llm._client.messages, 'create', side_effect=capture_and_call):
# First call - establishes context and generates thinking blocks
messages = [{"role": "user", "content": "What is 2+2?"}]
first_result = llm.call(messages)
@@ -695,8 +695,8 @@ def test_anthropic_thinking_blocks_preserved_across_turns():
assert len(first_result) > 0
# Verify thinking blocks were stored after first response
assert len(llm.previous_thinking_blocks) > 0, "No thinking blocks stored after first call"
first_thinking = llm.previous_thinking_blocks[0]
assert len(llm._previous_thinking_blocks) > 0, "No thinking blocks stored after first call"
first_thinking = llm._previous_thinking_blocks[0]
assert first_thinking["type"] == "thinking"
assert "thinking" in first_thinking
assert "signature" in first_thinking

View File

@@ -66,7 +66,7 @@ def test_azure_tool_use_conversation_flow():
available_functions = {"get_weather": mock_weather_tool}
# Mock the Azure client responses
with patch.object(completion.client, 'complete') as mock_complete:
with patch.object(completion._client, 'complete') as mock_complete:
# Mock tool call in response with proper type
mock_tool_call = MagicMock(spec=ChatCompletionsToolCall)
mock_tool_call.function.name = "get_weather"
@@ -698,7 +698,7 @@ def test_azure_environment_variable_endpoint():
}):
llm = LLM(model="azure/gpt-4")
assert llm.client is not None
assert llm._client is not None
assert llm.endpoint == "https://test.openai.azure.com/openai/deployments/gpt-4"
@@ -709,7 +709,7 @@ def test_azure_token_usage_tracking():
llm = LLM(model="azure/gpt-4")
# Mock the Azure response with usage information
with patch.object(llm.client, 'complete') as mock_complete:
with patch.object(llm._client, 'complete') as mock_complete:
mock_message = MagicMock()
mock_message.content = "test response"
mock_message.tool_calls = None
@@ -747,7 +747,7 @@ def test_azure_http_error_handling():
llm = LLM(model="azure/gpt-4")
# Mock an HTTP error
with patch.object(llm.client, 'complete') as mock_complete:
with patch.object(llm._client, 'complete') as mock_complete:
mock_complete.side_effect = HttpResponseError(message="Rate limit exceeded", response=MagicMock(status_code=429))
with pytest.raises(HttpResponseError):
@@ -966,7 +966,7 @@ def test_azure_improved_error_messages():
llm = LLM(model="azure/gpt-4")
with patch.object(llm.client, 'complete') as mock_complete:
with patch.object(llm._client, 'complete') as mock_complete:
error_401 = HttpResponseError(message="Unauthorized")
error_401.status_code = 401
mock_complete.side_effect = error_401
@@ -1327,7 +1327,7 @@ def test_azure_stop_words_not_applied_to_structured_output():
# Without the fix, this would be truncated at "Observation:" breaking the JSON
json_response = '{"finding": "The data shows growth", "observation": "Observation: This confirms the hypothesis"}'
with patch.object(llm.client, 'complete') as mock_complete:
with patch.object(llm._client, 'complete') as mock_complete:
mock_message = MagicMock()
mock_message.content = json_response
mock_message.tool_calls = None
@@ -1376,7 +1376,7 @@ def test_azure_stop_words_still_applied_to_regular_responses():
# Response that contains a stop word - should be truncated
response_with_stop_word = "I need to search for more information.\n\nAction: search\nObservation: Found results"
with patch.object(llm.client, 'complete') as mock_complete:
with patch.object(llm._client, 'complete') as mock_complete:
mock_message = MagicMock()
mock_message.content = response_with_stop_word
mock_message.tool_calls = None

View File

@@ -674,7 +674,7 @@ def test_bedrock_token_usage_tracking():
llm = LLM(model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0")
# Mock the Bedrock response with usage information
with patch.object(llm.client, 'converse') as mock_converse:
with patch.object(llm._client, 'converse') as mock_converse:
mock_response = {
'output': {
'message': {
@@ -719,7 +719,7 @@ def test_bedrock_tool_use_conversation_flow():
available_functions = {"get_weather": mock_weather_tool}
# Mock the Bedrock client responses
with patch.object(llm.client, 'converse') as mock_converse:
with patch.object(llm._client, 'converse') as mock_converse:
# First response: tool use request
tool_use_response = {
'output': {
@@ -805,7 +805,7 @@ def test_bedrock_client_error_handling():
llm = LLM(model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0")
# Test ValidationException
with patch.object(llm.client, 'converse') as mock_converse:
with patch.object(llm._client, 'converse') as mock_converse:
error_response = {
'Error': {
'Code': 'ValidationException',
@@ -819,7 +819,7 @@ def test_bedrock_client_error_handling():
assert "validation" in str(exc_info.value).lower()
# Test ThrottlingException
with patch.object(llm.client, 'converse') as mock_converse:
with patch.object(llm._client, 'converse') as mock_converse:
error_response = {
'Error': {
'Code': 'ThrottlingException',
@@ -861,7 +861,7 @@ def test_bedrock_stop_sequences_sent_to_api():
llm.stop = ["\nObservation:", "\nThought:"]
# Patch the API call to capture parameters without making real call
with patch.object(llm.client, 'converse') as mock_converse:
with patch.object(llm._client, 'converse') as mock_converse:
mock_response = {
'output': {
'message': {

View File

@@ -556,8 +556,8 @@ def test_gemini_environment_variable_api_key():
with patch.dict(os.environ, {"GOOGLE_API_KEY": "test-google-key"}):
llm = LLM(model="google/gemini-2.0-flash-001")
assert llm.client is not None
assert hasattr(llm.client, 'models')
assert llm._client is not None
assert hasattr(llm._client, 'models')
assert llm.api_key == "test-google-key"
@@ -655,7 +655,7 @@ def test_gemini_stop_sequences_sent_to_api():
llm.stop = ["\nObservation:", "\nThought:"]
# Patch the API call to capture parameters without making real call
with patch.object(llm.client.models, 'generate_content') as mock_generate:
with patch.object(llm._client.models, 'generate_content') as mock_generate:
mock_response = MagicMock()
mock_response.text = "Hello"
mock_response.candidates = []

View File

@@ -371,11 +371,11 @@ def test_openai_client_setup_with_extra_arguments():
assert llm.top_p == 0.5
# Check that client parameters are properly configured
assert llm.client.max_retries == 3
assert llm.client.timeout == 30
assert llm._client.max_retries == 3
assert llm._client.timeout == 30
# Test that parameters are properly used in API calls
with patch.object(llm.client.chat.completions, 'create') as mock_create:
with patch.object(llm._client.chat.completions, 'create') as mock_create:
mock_create.return_value = MagicMock(
choices=[MagicMock(message=MagicMock(content="test response", tool_calls=None))],
usage=MagicMock(prompt_tokens=10, completion_tokens=20, total_tokens=30)
@@ -396,7 +396,7 @@ def test_extra_arguments_are_passed_to_openai_completion():
"""
llm = LLM(model="gpt-4o", temperature=0.7, max_tokens=1000, top_p=0.5, max_retries=3)
with patch.object(llm.client.chat.completions, 'create') as mock_create:
with patch.object(llm._client.chat.completions, 'create') as mock_create:
mock_create.return_value = MagicMock(
choices=[MagicMock(message=MagicMock(content="test response", tool_calls=None))],
usage=MagicMock(prompt_tokens=10, completion_tokens=20, total_tokens=30)
@@ -507,7 +507,7 @@ def test_openai_streaming_with_response_model():
llm = LLM(model="openai/gpt-4o", stream=True)
with patch.object(llm.client.beta.chat.completions, "stream") as mock_stream:
with patch.object(llm._client.beta.chat.completions, "stream") as mock_stream:
# Create mock chunks with content.delta event structure
mock_chunk1 = MagicMock()
mock_chunk1.type = "content.delta"
@@ -1523,6 +1523,69 @@ def test_openai_stop_words_not_applied_to_structured_output():
assert "Observation:" in result.observation
def test_openai_gpt5_models_do_not_support_stop_words():
"""
Test that GPT-5 family models do not support stop words via the API.
GPT-5 models reject the 'stop' parameter, so stop words must be
applied client-side only.
"""
gpt5_models = [
"gpt-5",
"gpt-5-mini",
"gpt-5-nano",
"gpt-5-pro",
"gpt-5.1",
"gpt-5.1-chat",
"gpt-5.2",
"gpt-5.2-chat",
]
for model_name in gpt5_models:
llm = OpenAICompletion(model=model_name)
assert llm.supports_stop_words() == False, (
f"Expected {model_name} to NOT support stop words"
)
def test_openai_non_gpt5_models_support_stop_words():
"""
Test that non-GPT-5 models still support stop words normally.
"""
supported_models = [
"gpt-4o",
"gpt-4o-mini",
"gpt-4.1",
"gpt-4.1-mini",
"gpt-4-turbo",
]
for model_name in supported_models:
llm = OpenAICompletion(model=model_name)
assert llm.supports_stop_words() == True, (
f"Expected {model_name} to support stop words"
)
def test_openai_gpt5_still_applies_stop_words_client_side():
"""
Test that GPT-5 models still truncate responses at stop words client-side
via _apply_stop_words(), even though they don't send 'stop' to the API.
"""
llm = OpenAICompletion(
model="gpt-5.2",
stop=["Observation:", "Final Answer:"],
)
assert llm.supports_stop_words() == False
response = "I need to search.\n\nAction: search\nObservation: Found results"
result = llm._apply_stop_words(response)
assert "Observation:" not in result
assert "Found results" not in result
assert "I need to search" in result
def test_openai_stop_words_still_applied_to_regular_responses():
"""
Test that stop words ARE still applied for regular (non-structured) responses.
@@ -1767,7 +1830,7 @@ def test_openai_responses_api_cached_prompt_tokens_with_tools():
}
]
llm = OpenAICompletion(model="gpt-4.1", api='response')
llm = OpenAICompletion(model="gpt-4.1", api='responses')
# First call with tool
llm.call(
@@ -1843,7 +1906,7 @@ def test_openai_streaming_returns_tool_calls_without_available_functions():
mock_chunk_3.id = "chatcmpl-1"
with patch.object(
llm.client.chat.completions, "create", return_value=iter([mock_chunk_1, mock_chunk_2, mock_chunk_3])
llm._client.chat.completions, "create", return_value=iter([mock_chunk_1, mock_chunk_2, mock_chunk_3])
):
result = llm.call(
messages=[{"role": "user", "content": "Calculate 1+1"}],
@@ -1934,7 +1997,7 @@ async def test_openai_async_streaming_returns_tool_calls_without_available_funct
return MockAsyncStream([mock_chunk_1, mock_chunk_2, mock_chunk_3])
with patch.object(
llm.async_client.chat.completions, "create", side_effect=mock_create
llm._async_client.chat.completions, "create", side_effect=mock_create
):
result = await llm.acall(
messages=[{"role": "user", "content": "Calculate 1+1"}],

View File

@@ -3,6 +3,8 @@
from unittest.mock import MagicMock, patch
import pytest
from pydantic import ValidationError
from crewai.knowledge.storage.knowledge_storage import ( # type: ignore[import-untyped]
KnowledgeStorage,
)
@@ -59,7 +61,7 @@ def test_knowledge_storage_invalid_embedding_config(mock_get_client: MagicMock)
"Unsupported provider: invalid_provider"
)
with pytest.raises(ValueError, match="Unsupported provider: invalid_provider"):
with pytest.raises(ValidationError):
KnowledgeStorage(
embedder={"provider": "invalid_provider"},
collection_name="invalid_embedding_test",

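The switch from `ValueError` to `ValidationError` follows from Pydantic's validation model: a `ValueError` raised inside a validator surfaces as a `ValidationError` at construction time. An illustrative reproduction; the `StorageConfig` model below is hypothetical, not the real `KnowledgeStorage`:

```
from pydantic import BaseModel, ValidationError, field_validator

class StorageConfig(BaseModel):
    provider: str

    @field_validator("provider")
    @classmethod
    def check_provider(cls, value: str) -> str:
        if value == "invalid_provider":
            # Pydantic wraps this ValueError into a ValidationError.
            raise ValueError(f"Unsupported provider: {value}")
        return value

try:
    StorageConfig(provider="invalid_provider")
except ValidationError as exc:
    assert "Unsupported provider" in str(exc)
```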
View File

@@ -873,7 +873,7 @@ class TestAutoPersistence:
# Create flow WITHOUT persistence
flow = TestFlow()
assert flow._persistence is None # No persistence initially
assert flow.persistence is None # No persistence initially
# kickoff should auto-create persistence when HumanFeedbackPending is raised
result = flow.kickoff()
@@ -882,11 +882,11 @@ class TestAutoPersistence:
assert isinstance(result, HumanFeedbackPending)
# Persistence should have been auto-created
assert flow._persistence is not None
assert flow.persistence is not None
# The pending feedback should be saved
flow_id = result.context.flow_id
loaded = flow._persistence.load_pending_feedback(flow_id)
loaded = flow.persistence.load_pending_feedback(flow_id)
assert loaded is not None
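
The assertions now read a public `persistence` accessor instead of the private `_persistence` attribute. One plausible shape for that accessor is a read-only property over the private field; a sketch, not necessarily the actual `Flow` implementation:

```
from typing import Any

class Flow:
    """Minimal sketch: a public read-only view over the private attribute."""

    def __init__(self) -> None:
        self._persistence: Any | None = None

    @property
    def persistence(self) -> Any | None:
        # Internal code still assigns self._persistence; tests and
        # callers read through the property.
        return self._persistence
```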

View File

@@ -246,7 +246,7 @@ class TestHumanFeedbackExecution:
@patch("builtins.input", return_value="")
@patch("builtins.print")
def test_empty_feedback_with_default_outcome(self, mock_print, mock_input):
"""Test empty feedback uses default_outcome."""
"""Test empty feedback uses default_outcome for routing, but flow returns method output."""
class TestFlow(Flow):
@start()
@@ -264,14 +264,16 @@ class TestHumanFeedbackExecution:
with patch.object(flow, "_request_human_feedback", return_value=""):
result = flow.kickoff()
assert result == "needs_work"
# Flow result is the method's return value, NOT the collapsed outcome
assert result == "Content"
assert flow.last_human_feedback is not None
# But the outcome is still correctly set for routing purposes
assert flow.last_human_feedback.outcome == "needs_work"
@patch("builtins.input", return_value="Approved!")
@patch("builtins.print")
def test_feedback_collapsing(self, mock_print, mock_input):
"""Test that feedback is collapsed to an outcome."""
"""Test that feedback is collapsed to an outcome for routing, but flow returns method output."""
class TestFlow(Flow):
@start()
@@ -291,8 +293,10 @@ class TestHumanFeedbackExecution:
):
result = flow.kickoff()
assert result == "approved"
# Flow result is the method's return value, NOT the collapsed outcome
assert result == "Content"
assert flow.last_human_feedback is not None
# But the outcome is still correctly set for routing purposes
assert flow.last_human_feedback.outcome == "approved"
@@ -591,3 +595,162 @@ class TestHumanFeedbackLearn:
assert config.learn is True
# llm defaults to "gpt-4o-mini" at the function level
assert config.llm == "gpt-4o-mini"
class TestHumanFeedbackFinalOutputPreservation:
"""Tests for preserving method return value as flow's final output when @human_feedback with emit is terminal.
This addresses the bug where the flow's final output was the collapsed outcome string (e.g., 'approved')
instead of the method's actual return value when a @human_feedback method with emit is the final method.
"""
@patch("builtins.input", return_value="Looks good!")
@patch("builtins.print")
def test_final_output_is_method_return_not_collapsed_outcome(
self, mock_print, mock_input
):
"""When @human_feedback with emit is the final method, flow output is the method's return value."""
class FinalHumanFeedbackFlow(Flow):
@start()
@human_feedback(
message="Review this content:",
emit=["approved", "rejected"],
llm="gpt-4o-mini",
)
def generate_and_review(self):
# This dict should be the final output, NOT the string 'approved'
return {"title": "My Article", "content": "Article content here", "status": "ready"}
flow = FinalHumanFeedbackFlow()
with (
patch.object(flow, "_request_human_feedback", return_value="Looks great, approved!"),
patch.object(flow, "_collapse_to_outcome", return_value="approved"),
):
result = flow.kickoff()
# The final output should be the actual method return value, not the collapsed outcome
assert isinstance(result, dict), f"Expected dict, got {type(result).__name__}: {result}"
assert result == {"title": "My Article", "content": "Article content here", "status": "ready"}
# But the outcome should still be tracked in last_human_feedback
assert flow.last_human_feedback is not None
assert flow.last_human_feedback.outcome == "approved"
@patch("builtins.input", return_value="approved")
@patch("builtins.print")
def test_routing_still_works_with_downstream_listener(self, mock_print, mock_input):
"""When @human_feedback has a downstream listener, routing still triggers the listener."""
publish_called = []
class RoutingFlow(Flow):
@start()
@human_feedback(
message="Review:",
emit=["approved", "rejected"],
llm="gpt-4o-mini",
)
def review(self):
return {"content": "original content"}
@listen("approved")
def publish(self):
publish_called.append(True)
return {"published": True, "timestamp": "2024-01-01"}
flow = RoutingFlow()
with (
patch.object(flow, "_request_human_feedback", return_value="LGTM"),
patch.object(flow, "_collapse_to_outcome", return_value="approved"),
):
result = flow.kickoff()
# The downstream listener should have been triggered
assert len(publish_called) == 1, "publish() should have been called"
# The final output should be from the listener, not the human_feedback method
assert result == {"published": True, "timestamp": "2024-01-01"}
@patch("builtins.input", return_value="")
@patch("builtins.print")
@pytest.mark.asyncio
async def test_async_human_feedback_final_output_preserved(self, mock_print, mock_input):
"""Async @human_feedback methods also preserve the real return value."""
class AsyncFinalFlow(Flow):
@start()
@human_feedback(
message="Review async content:",
emit=["approved", "rejected"],
llm="gpt-4o-mini",
default_outcome="approved",
)
async def async_generate(self):
return {"async_data": "value", "computed": 42}
flow = AsyncFinalFlow()
with (
patch.object(flow, "_request_human_feedback", return_value=""),
):
result = await flow.kickoff_async()
# The final output should be the dict, not "approved"
assert isinstance(result, dict), f"Expected dict, got {type(result).__name__}: {result}"
assert result == {"async_data": "value", "computed": 42}
assert flow.last_human_feedback.outcome == "approved"
@patch("builtins.input", return_value="feedback")
@patch("builtins.print")
def test_method_outputs_contains_real_output(self, mock_print, mock_input):
"""The _method_outputs list should contain the real method output, not the collapsed outcome."""
class OutputTrackingFlow(Flow):
@start()
@human_feedback(
message="Review:",
emit=["approved", "rejected"],
llm="gpt-4o-mini",
)
def generate(self):
return {"data": "real output"}
flow = OutputTrackingFlow()
with (
patch.object(flow, "_request_human_feedback", return_value="approved"),
patch.object(flow, "_collapse_to_outcome", return_value="approved"),
):
flow.kickoff()
# _method_outputs should contain the real output
assert len(flow._method_outputs) == 1
assert flow._method_outputs[0] == {"data": "real output"}
@patch("builtins.input", return_value="looks good")
@patch("builtins.print")
def test_none_return_value_is_preserved(self, mock_print, mock_input):
"""A method returning None should preserve None as flow output, not the outcome string."""
class NoneReturnFlow(Flow):
@start()
@human_feedback(
message="Review:",
emit=["approved", "rejected"],
llm="gpt-4o-mini",
)
def process(self):
# Method does work but returns None (implicit)
pass
flow = NoneReturnFlow()
with (
patch.object(flow, "_request_human_feedback", return_value=""),
patch.object(flow, "_collapse_to_outcome", return_value="approved"),
):
result = flow.kickoff()
# Final output should be None (the method's real return), not "approved"
assert result is None, f"Expected None, got {result!r}"
assert flow.last_human_feedback.outcome == "approved"
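
The invariant these tests enforce: the decorated method's return value is what `kickoff()` ultimately returns, while the collapsed outcome is stored separately and used only for routing. A reduced model of that separation, with hypothetical names rather than the actual decorator internals:

```
from dataclasses import dataclass, field
from typing import Any

@dataclass
class FeedbackState:
    """Hypothetical holder mirroring flow.last_human_feedback."""
    outcome: str

@dataclass
class MiniFlow:
    last_human_feedback: FeedbackState | None = None
    _method_outputs: list[Any] = field(default_factory=list)

def finish_human_feedback_step(flow: MiniFlow, method_output: Any, outcome: str) -> Any:
    # The collapsed outcome is recorded for routing only...
    flow.last_human_feedback = FeedbackState(outcome)
    # ...while the method's actual return value is what the flow emits.
    flow._method_outputs.append(method_output)
    return method_output

flow = MiniFlow()
result = finish_human_feedback_step(flow, {"title": "My Article"}, "approved")
assert result == {"title": "My Article"}
assert flow.last_human_feedback.outcome == "approved"
```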

View File

@@ -708,7 +708,7 @@ class TestEdgeCases:
@patch("builtins.input", return_value="")
@patch("builtins.print")
def test_empty_feedback_first_outcome_fallback(self, mock_print, mock_input):
"""Test that empty feedback without default uses first outcome."""
"""Test that empty feedback without default uses first outcome for routing, but returns method output."""
class FallbackFlow(Flow):
@start()
@@ -726,12 +726,15 @@ class TestEdgeCases:
with patch.object(flow, "_request_human_feedback", return_value=""):
result = flow.kickoff()
assert result == "first" # Falls back to first outcome
# Flow result is the method's return value, NOT the collapsed outcome
assert result == "content"
# But outcome is still set to first for routing purposes
assert flow.last_human_feedback.outcome == "first"
@patch("builtins.input", return_value="whitespace only ")
@patch("builtins.print")
def test_whitespace_only_feedback_treated_as_empty(self, mock_print, mock_input):
"""Test that whitespace-only feedback is treated as empty."""
"""Test that whitespace-only feedback is treated as empty for routing, but returns method output."""
class WhitespaceFlow(Flow):
@start()
@@ -749,7 +752,10 @@ class TestEdgeCases:
with patch.object(flow, "_request_human_feedback", return_value=" "):
result = flow.kickoff()
assert result == "reject" # Uses default because feedback is empty after strip
# Flow result is the method's return value, NOT the collapsed outcome
assert result == "content"
# But outcome is set to default because feedback is empty after strip
assert flow.last_human_feedback.outcome == "reject"
@patch("builtins.input", return_value="feedback")
@patch("builtins.print")

View File

@@ -682,6 +682,118 @@ def test_llm_call_when_stop_is_unsupported_when_additional_drop_params_is_provid
assert "Paris" in result
@pytest.mark.vcr()
def test_litellm_gpt5_call_succeeds_without_stop_error():
"""
Integration test: GPT-5 call succeeds when stop words are configured,
because stop is omitted from API params and applied client-side.
"""
llm = LLM(model="gpt-5", stop=["Observation:"], is_litellm=True)
result = llm.call("What is the capital of France?")
assert isinstance(result, str)
assert len(result) > 0
def test_litellm_gpt5_does_not_send_stop_in_params():
"""
Test that the LiteLLM fallback path does not include 'stop' in API params
for GPT-5.x models, since they reject it at the API level.
"""
llm = LLM(model="openai/gpt-5.2", stop=["Observation:"], is_litellm=True)
params = llm._prepare_completion_params(
messages=[{"role": "user", "content": "Hello"}]
)
assert params.get("stop") is None, (
"GPT-5.x models should not have 'stop' in API params"
)
def test_litellm_non_gpt5_sends_stop_in_params():
"""
Test that the LiteLLM fallback path still includes 'stop' in API params
for models that support it.
"""
llm = LLM(model="gpt-4o", stop=["Observation:"], is_litellm=True)
params = llm._prepare_completion_params(
messages=[{"role": "user", "content": "Hello"}]
)
assert params.get("stop") == ["Observation:"], (
"Non-GPT-5 models should have 'stop' in API params"
)
def test_litellm_retry_catches_litellm_unsupported_params_error(caplog):
"""
Test that the retry logic catches LiteLLM's UnsupportedParamsError format
("does not support parameters") in addition to the OpenAI API format.
"""
llm = LLM(model="openai/gpt-5.2", stop=["Observation:"], is_litellm=True)
litellm_error = Exception(
"litellm.UnsupportedParamsError: openai does not support parameters: "
"['stop'], for model=openai/gpt-5.2."
)
call_count = 0
try:
import litellm
except ImportError:
pytest.skip("litellm is not installed; skipping LiteLLM retry test")
def mock_completion(*args, **kwargs):
nonlocal call_count
call_count += 1
if call_count == 1:
raise litellm_error
return MagicMock(
choices=[MagicMock(message=MagicMock(content="Paris", tool_calls=None))],
usage={"prompt_tokens": 10, "completion_tokens": 5, "total_tokens": 15},
)
with patch("litellm.completion", side_effect=mock_completion):
with caplog.at_level(logging.INFO):
result = llm.call("What is the capital of France?")
assert "Retrying LLM call without the unsupported 'stop'" in caplog.text
assert "stop" in llm.additional_params.get("additional_drop_params", [])
def test_litellm_retry_catches_openai_api_stop_error(caplog):
"""
Test that the retry logic still catches the OpenAI API error format
("Unsupported parameter: 'stop'").
"""
llm = LLM(model="openai/gpt-5.2", stop=["Observation:"], is_litellm=True)
api_error = Exception(
"Unsupported parameter: 'stop' is not supported with this model."
)
call_count = 0
def mock_completion(*args, **kwargs):
nonlocal call_count
call_count += 1
if call_count == 1:
raise api_error
return MagicMock(
choices=[MagicMock(message=MagicMock(content="Paris", tool_calls=None))],
usage={"prompt_tokens": 10, "completion_tokens": 5, "total_tokens": 15},
)
with patch("litellm.completion", side_effect=mock_completion):
with caplog.at_level(logging.INFO):
llm.call("What is the capital of France?")
assert "Retrying LLM call without the unsupported 'stop'" in caplog.text
assert "stop" in llm.additional_params.get("additional_drop_params", [])
@pytest.fixture
def ollama_llm():
return LLM(model="ollama/llama3.2:3b", is_litellm=True)

View File

@@ -1,5 +1,5 @@
from typing import Any, ClassVar
from unittest.mock import Mock, patch
from unittest.mock import Mock, create_autospec, patch
import pytest
from crewai.agent import Agent
@@ -372,8 +372,11 @@ def test_internal_crew_with_mcp():
mock_adapter = Mock()
mock_adapter.tools = ToolCollection([simple_tool, another_simple_tool])
mock_llm = Mock()
mock_llm.__class__ = BaseLLM
class _StubLLM(BaseLLM):
def call(self, *a: Any, **kw: Any) -> str:
return ""
mock_llm = create_autospec(_StubLLM(model="stub"), instance=True)
with (
patch("crewai_tools.MCPServerAdapter", return_value=mock_adapter) as adapter_mock,

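The move from reassigning `mock.__class__` to `create_autospec` keeps the test double honest: a spec'd mock still passes `isinstance` checks, but rejects call signatures and attributes the real class does not have. A standalone illustration with a simplified `BaseLLM` stub:

```
from unittest.mock import create_autospec

class BaseLLM:
    def __init__(self, model: str) -> None:
        self.model = model

    def call(self, prompt: str) -> str:
        raise NotImplementedError

mock_llm = create_autospec(BaseLLM("stub"), instance=True)
assert isinstance(mock_llm, BaseLLM)  # spec'd mocks satisfy isinstance
mock_llm.call("hi")                   # matching signature: accepted
try:
    mock_llm.call()                   # missing argument: rejected by the spec
except TypeError:
    pass
```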
View File

@@ -879,6 +879,35 @@ def test_llm_emits_call_started_event():
assert started_events[0].task_id is None
@pytest.mark.vcr()
def test_llm_completed_event_includes_usage():
completed_events: list[LLMCallCompletedEvent] = []
condition = threading.Condition()
@crewai_event_bus.on(LLMCallCompletedEvent)
def handle_llm_call_completed(source, event):
with condition:
completed_events.append(event)
condition.notify()
llm = LLM(model="gpt-4o-mini")
llm.call("Say hello")
with condition:
success = condition.wait_for(
lambda: len(completed_events) >= 1,
timeout=10,
)
assert success, "Timeout waiting for LLMCallCompletedEvent"
event = completed_events[0]
assert event.usage is not None
assert isinstance(event.usage, dict)
assert event.usage.get("prompt_tokens", 0) > 0
assert event.usage.get("completion_tokens", 0) > 0
assert event.usage.get("total_tokens", 0) > 0
@pytest.mark.vcr()
def test_llm_emits_call_failed_event():
received_events = []

View File

@@ -8,18 +8,22 @@ Installed automatically via the workspace (`uv sync`). Requires:
- [GitHub CLI](https://cli.github.com/) (`gh`) — authenticated
- `OPENAI_API_KEY` env var — for release note generation and translation
- `ENTERPRISE_REPO` env var — GitHub repo for enterprise releases
- `ENTERPRISE_VERSION_DIRS` env var — comma-separated directories to bump in the enterprise repo
- `ENTERPRISE_CREWAI_DEP_PATH` env var — path to the pyproject.toml with the `crewai[tools]` pin in the enterprise repo
## Commands
### `devtools release <version>`
Full end-to-end release. Bumps versions, creates PRs, tags, and publishes a GitHub release.
Full end-to-end release. Bumps versions, creates PRs, tags, publishes a GitHub release, and releases the enterprise repo.
```
devtools release 1.10.3
devtools release 1.10.3a1 # pre-release
devtools release 1.10.3 --no-edit # skip editing release notes
devtools release 1.10.3 --dry-run # preview without changes
devtools release 1.10.3a1 # pre-release
devtools release 1.10.3 --no-edit # skip editing release notes
devtools release 1.10.3 --dry-run # preview without changes
devtools release 1.10.3 --skip-enterprise # skip enterprise release phase
```
**Flow:**
@@ -31,6 +35,10 @@ devtools release 1.10.3 --dry-run # preview without changes
5. Updates changelogs (en, pt-BR, ko) and docs version switcher
6. Creates docs PR against main, polls until merged
7. Tags main and creates GitHub release
8. Triggers PyPI publish workflow
9. Clones enterprise repo, bumps versions and `crewai[tools]` dep, runs `uv sync`
10. Creates enterprise bump PR, polls until merged
11. Tags and creates GitHub release on enterprise repo
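The enterprise steps (9–11) rely on the environment variables listed above; an illustrative configuration, with every value a placeholder:
```
ENTERPRISE_REPO=example-org/enterprise-repo
ENTERPRISE_VERSION_DIRS=lib/pkg-a,lib/pkg-b
ENTERPRISE_CREWAI_DEP_PATH=backend/pyproject.toml
OPENAI_API_KEY=sk-...
```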
### `devtools bump <version>`

View File

@@ -1,3 +1,3 @@
"""CrewAI development tools."""
__version__ = "1.12.0"
__version__ = "1.13.0a5"

View File

@@ -2,10 +2,13 @@
import os
from pathlib import Path
import re
import subprocess
import sys
import tempfile
import time
from typing import Final, Literal
from urllib.request import urlopen
import click
from dotenv import load_dotenv
@@ -153,12 +156,51 @@ def update_version_in_file(file_path: Path, new_version: str) -> bool:
return False
def update_pyproject_dependencies(file_path: Path, new_version: str) -> bool:
def update_pyproject_version(file_path: Path, new_version: str) -> bool:
"""Update the [project] version field in a pyproject.toml file.
Args:
file_path: Path to pyproject.toml file.
new_version: New version string.
Returns:
True if version was updated, False otherwise.
"""
if not file_path.exists():
return False
content = file_path.read_text()
new_content = re.sub(
r'^(version\s*=\s*")[^"]+(")',
rf"\g<1>{new_version}\2",
content,
count=1,
flags=re.MULTILINE,
)
if new_content != content:
file_path.write_text(new_content)
return True
return False
_DEFAULT_WORKSPACE_PACKAGES: Final[list[str]] = [
"crewai",
"crewai-tools",
"crewai-devtools",
]
def update_pyproject_dependencies(
file_path: Path,
new_version: str,
extra_packages: list[str] | None = None,
) -> bool:
"""Update workspace dependency versions in pyproject.toml.
Args:
file_path: Path to pyproject.toml file.
new_version: New version string.
extra_packages: Additional package names to update beyond the defaults.
Returns:
True if any dependencies were updated, False otherwise.
@@ -170,7 +212,7 @@ def update_pyproject_dependencies(file_path: Path, new_version: str) -> bool:
lines = content.splitlines()
updated = False
workspace_packages = ["crewai", "crewai-tools", "crewai-devtools"]
workspace_packages = _DEFAULT_WORKSPACE_PACKAGES + (extra_packages or [])
for i, line in enumerate(lines):
for pkg in workspace_packages:
@@ -431,12 +473,29 @@ def update_changelog(
return True
def update_template_dependencies(templates_dir: Path, new_version: str) -> list[Path]:
"""Update crewai dependency versions in CLI template pyproject.toml files.
def _pin_crewai_deps(content: str, version: str) -> str:
"""Replace crewai dependency version pins in a pyproject.toml string.
Handles both pinned (==) and minimum (>=) version specifiers,
as well as extras like [tools].
Args:
content: File content to transform.
version: New version string.
Returns:
Transformed content.
"""
return re.sub(
r'"crewai(\[tools\])?(==|>=)[^"]*"',
lambda m: f'"crewai{(m.group(1) or "")!s}=={version}"',
content,
)
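
A quick illustration of what the extracted helper does; version strings here are arbitrary:

```
content = 'dependencies = ["crewai[tools]>=1.12.0", "crewai==1.12.0"]'
assert _pin_crewai_deps(content, "1.13.0a5") == (
    'dependencies = ["crewai[tools]==1.13.0a5", "crewai==1.13.0a5"]'
)
```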
def update_template_dependencies(templates_dir: Path, new_version: str) -> list[Path]:
"""Update crewai dependency versions in CLI template pyproject.toml files.
Args:
templates_dir: Path to the CLI templates directory.
new_version: New version string.
@@ -444,16 +503,10 @@ def update_template_dependencies(templates_dir: Path, new_version: str) -> list[
Returns:
List of paths that were updated.
"""
import re
updated = []
for pyproject in templates_dir.rglob("pyproject.toml"):
content = pyproject.read_text()
new_content = re.sub(
r'"crewai(\[tools\])?(==|>=)[^"]*"',
lambda m: f'"crewai{(m.group(1) or "")!s}=={new_version}"',
content,
)
new_content = _pin_crewai_deps(content, new_version)
if new_content != content:
pyproject.write_text(new_content)
updated.append(pyproject)
@@ -607,24 +660,26 @@ def get_github_contributors(commit_range: str) -> list[str]:
# ---------------------------------------------------------------------------
def _poll_pr_until_merged(branch_name: str, label: str) -> None:
"""Poll a GitHub PR until it is merged. Exit if closed without merging."""
def _poll_pr_until_merged(
branch_name: str, label: str, repo: str | None = None
) -> None:
"""Poll a GitHub PR until it is merged. Exit if closed without merging.
Args:
branch_name: Branch name to look up the PR.
label: Human-readable label for status messages.
repo: Optional GitHub repo (owner/name) for cross-repo PRs.
"""
console.print(f"[cyan]Waiting for {label} to be merged...[/cyan]")
cmd = ["gh", "pr", "view", branch_name]
if repo:
cmd.extend(["--repo", repo])
cmd.extend(["--json", "state", "--jq", ".state"])
while True:
time.sleep(10)
try:
state = run_command(
[
"gh",
"pr",
"view",
branch_name,
"--json",
"state",
"--jq",
".state",
]
)
state = run_command(cmd)
except subprocess.CalledProcessError:
state = ""
@@ -984,8 +1039,360 @@ def _create_tag_and_release(
console.print(f"[green]✓[/green] Created GitHub {release_type} for {tag_name}")
def _trigger_pypi_publish(tag_name: str) -> None:
"""Trigger the PyPI publish GitHub Actions workflow."""
_ENTERPRISE_REPO: Final[str | None] = os.getenv("ENTERPRISE_REPO")
_ENTERPRISE_VERSION_DIRS: Final[tuple[str, ...]] = tuple(
d.strip() for d in os.getenv("ENTERPRISE_VERSION_DIRS", "").split(",") if d.strip()
)
_ENTERPRISE_CREWAI_DEP_PATH: Final[str | None] = os.getenv("ENTERPRISE_CREWAI_DEP_PATH")
_ENTERPRISE_EXTRA_PACKAGES: Final[tuple[str, ...]] = tuple(
p.strip()
for p in os.getenv("ENTERPRISE_EXTRA_PACKAGES", "").split(",")
if p.strip()
)
def _update_enterprise_crewai_dep(pyproject_path: Path, version: str) -> bool:
"""Update the crewai[tools] pin in an enterprise pyproject.toml.
Args:
pyproject_path: Path to the pyproject.toml file.
version: New crewai version string.
Returns:
True if the file was modified.
"""
if not pyproject_path.exists():
return False
content = pyproject_path.read_text()
new_content = _pin_crewai_deps(content, version)
if new_content != content:
pyproject_path.write_text(new_content)
return True
return False
_DEPLOYMENT_TEST_REPO: Final[str] = "crewAIInc/crew_deployment_test"
_PYPI_POLL_INTERVAL: Final[int] = 15
_PYPI_POLL_TIMEOUT: Final[int] = 600
def _update_deployment_test_repo(version: str, is_prerelease: bool) -> None:
"""Update the deployment test repo to pin the new crewai version.
Clones the repo, updates the crewai[tools] pin in pyproject.toml,
regenerates the lockfile, commits, and pushes directly to main.
Args:
version: New crewai version string.
is_prerelease: Whether this is a pre-release version.
"""
console.print(
f"\n[bold cyan]Updating {_DEPLOYMENT_TEST_REPO} to {version}[/bold cyan]"
)
with tempfile.TemporaryDirectory() as tmp:
repo_dir = Path(tmp) / "crew_deployment_test"
run_command(["gh", "repo", "clone", _DEPLOYMENT_TEST_REPO, str(repo_dir)])
console.print(f"[green]✓[/green] Cloned {_DEPLOYMENT_TEST_REPO}")
pyproject = repo_dir / "pyproject.toml"
content = pyproject.read_text()
new_content = re.sub(
r'"crewai\[tools\]==[^"]+"',
f'"crewai[tools]=={version}"',
content,
)
if new_content == content:
console.print(
"[yellow]Warning:[/yellow] No crewai[tools] pin found to update"
)
return
pyproject.write_text(new_content)
console.print(f"[green]✓[/green] Updated crewai[tools] pin to {version}")
lock_cmd = [
"uv",
"lock",
"--refresh-package",
"crewai",
"--refresh-package",
"crewai-tools",
]
if is_prerelease:
lock_cmd.append("--prerelease=allow")
max_retries = 10
for attempt in range(1, max_retries + 1):
try:
run_command(lock_cmd, cwd=repo_dir)
break
except subprocess.CalledProcessError:
if attempt == max_retries:
console.print(
f"[red]Error:[/red] uv lock failed after {max_retries} attempts"
)
raise
console.print(
f"[yellow]uv lock failed (attempt {attempt}/{max_retries}),"
f" retrying in {_PYPI_POLL_INTERVAL}s...[/yellow]"
)
time.sleep(_PYPI_POLL_INTERVAL)
console.print("[green]✓[/green] Lockfile updated")
run_command(["git", "add", "pyproject.toml", "uv.lock"], cwd=repo_dir)
run_command(
["git", "commit", "-m", f"chore: bump crewai to {version}"],
cwd=repo_dir,
)
run_command(["git", "push"], cwd=repo_dir)
console.print(f"[green]✓[/green] Pushed to {_DEPLOYMENT_TEST_REPO}")
def _wait_for_pypi(package: str, version: str) -> None:
"""Poll PyPI until a specific package version is available.
Args:
package: PyPI package name.
version: Version string to wait for.
"""
url = f"https://pypi.org/pypi/{package}/{version}/json"
deadline = time.monotonic() + _PYPI_POLL_TIMEOUT
console.print(f"[cyan]Waiting for {package}=={version} to appear on PyPI...[/cyan]")
while time.monotonic() < deadline:
try:
with urlopen(url) as resp: # noqa: S310
if resp.status == 200:
console.print(
f"[green]✓[/green] {package}=={version} is available on PyPI"
)
return
except Exception: # noqa: S110
pass
time.sleep(_PYPI_POLL_INTERVAL)
console.print(
f"[red]Error:[/red] Timed out waiting for {package}=={version} on PyPI"
)
sys.exit(1)
def _release_enterprise(version: str, is_prerelease: bool, dry_run: bool) -> None:
"""Clone the enterprise repo, bump versions, and create a release PR.
Expects ENTERPRISE_REPO, ENTERPRISE_VERSION_DIRS, and
ENTERPRISE_CREWAI_DEP_PATH to be validated before calling.
Args:
version: New version string.
is_prerelease: Whether this is a pre-release version.
dry_run: Show what would be done without making changes.
"""
if (
not _ENTERPRISE_REPO
or not _ENTERPRISE_VERSION_DIRS
or not _ENTERPRISE_CREWAI_DEP_PATH
):
console.print("[red]Error:[/red] Enterprise env vars not configured")
sys.exit(1)
enterprise_repo: str = _ENTERPRISE_REPO
enterprise_dep_path: str = _ENTERPRISE_CREWAI_DEP_PATH
console.print(
f"\n[bold cyan]Phase 3: Releasing {enterprise_repo} {version}[/bold cyan]"
)
if dry_run:
console.print(f"[dim][DRY RUN][/dim] Would clone {enterprise_repo}")
for d in _ENTERPRISE_VERSION_DIRS:
console.print(f"[dim][DRY RUN][/dim] Would update versions in {d}")
console.print(
f"[dim][DRY RUN][/dim] Would update crewai[tools] dep in "
f"{enterprise_dep_path}"
)
console.print(
"[dim][DRY RUN][/dim] Would create bump PR, wait for merge, "
"then tag and release"
)
return
with tempfile.TemporaryDirectory() as tmp:
repo_dir = Path(tmp) / enterprise_repo.split("/")[-1]
console.print(f"Cloning {enterprise_repo}...")
run_command(["gh", "repo", "clone", enterprise_repo, str(repo_dir)])
console.print(f"[green]✓[/green] Cloned {enterprise_repo}")
# --- bump versions ---
for rel_dir in _ENTERPRISE_VERSION_DIRS:
pkg_dir = repo_dir / rel_dir
if not pkg_dir.exists():
console.print(
f"[yellow]Warning:[/yellow] {rel_dir} not found, skipping"
)
continue
for vfile in find_version_files(pkg_dir):
if update_version_in_file(vfile, version):
console.print(
f"[green]✓[/green] Updated: {vfile.relative_to(repo_dir)}"
)
pyproject = pkg_dir / "pyproject.toml"
if pyproject.exists():
if update_pyproject_version(pyproject, version):
console.print(
f"[green]✓[/green] Updated version in: "
f"{pyproject.relative_to(repo_dir)}"
)
if update_pyproject_dependencies(
pyproject, version, extra_packages=list(_ENTERPRISE_EXTRA_PACKAGES)
):
console.print(
f"[green]✓[/green] Updated deps in: "
f"{pyproject.relative_to(repo_dir)}"
)
# --- update crewai[tools] pin ---
enterprise_pyproject = repo_dir / enterprise_dep_path
if _update_enterprise_crewai_dep(enterprise_pyproject, version):
console.print(
f"[green]✓[/green] Updated crewai[tools] dep in {enterprise_dep_path}"
)
_wait_for_pypi("crewai", version)
console.print("\nSyncing workspace...")
sync_cmd = [
"uv",
"sync",
"--refresh-package",
"crewai",
"--refresh-package",
"crewai-tools",
"--refresh-package",
"crewai-files",
]
if is_prerelease:
sync_cmd.append("--prerelease=allow")
max_retries = 10
for attempt in range(1, max_retries + 1):
try:
run_command(sync_cmd, cwd=repo_dir)
break
except subprocess.CalledProcessError:
if attempt == max_retries:
console.print(
f"[red]Error:[/red] uv sync failed after {max_retries} attempts"
)
raise
console.print(
f"[yellow]uv sync failed (attempt {attempt}/{max_retries}),"
f" retrying in {_PYPI_POLL_INTERVAL}s...[/yellow]"
)
time.sleep(_PYPI_POLL_INTERVAL)
console.print("[green]✓[/green] Workspace synced")
# --- branch, commit, push, PR ---
branch_name = f"feat/bump-version-{version}"
run_command(["git", "checkout", "-b", branch_name], cwd=repo_dir)
run_command(["git", "add", "."], cwd=repo_dir)
run_command(
["git", "commit", "-m", f"feat: bump versions to {version}"],
cwd=repo_dir,
)
console.print("[green]✓[/green] Changes committed")
run_command(["git", "push", "-u", "origin", branch_name], cwd=repo_dir)
console.print("[green]✓[/green] Branch pushed")
pr_url = run_command(
[
"gh",
"pr",
"create",
"--repo",
enterprise_repo,
"--base",
"main",
"--title",
f"feat: bump versions to {version}",
"--body",
"",
],
cwd=repo_dir,
)
console.print("[green]✓[/green] Enterprise bump PR created")
console.print(f"[cyan]PR URL:[/cyan] {pr_url}")
_poll_pr_until_merged(branch_name, "enterprise bump PR", repo=enterprise_repo)
# --- tag and release ---
run_command(["git", "checkout", "main"], cwd=repo_dir)
run_command(["git", "pull"], cwd=repo_dir)
tag_name = version
run_command(
["git", "tag", "-a", tag_name, "-m", f"Release {version}"],
cwd=repo_dir,
)
run_command(["git", "push", "origin", tag_name], cwd=repo_dir)
console.print(f"[green]✓[/green] Pushed tag {tag_name}")
gh_cmd = [
"gh",
"release",
"create",
tag_name,
"--repo",
enterprise_repo,
"--title",
tag_name,
"--notes",
f"Release {version}",
]
if is_prerelease:
gh_cmd.append("--prerelease")
run_command(gh_cmd)
release_type = "prerelease" if is_prerelease else "release"
console.print(
f"[green]✓[/green] Created GitHub {release_type} for "
f"{enterprise_repo} {tag_name}"
)
def _trigger_pypi_publish(tag_name: str, wait: bool = False) -> None:
"""Trigger the PyPI publish GitHub Actions workflow.
Args:
tag_name: The release tag to publish.
wait: Block until the workflow run completes.
"""
# Capture the latest run ID before triggering so we can detect the new one
prev_run_id = ""
if wait:
try:
prev_run_id = run_command(
[
"gh",
"run",
"list",
"--workflow=publish.yml",
"--limit=1",
"--json=databaseId",
"--jq=.[0].databaseId",
]
)
except subprocess.CalledProcessError:
console.print(
"[yellow]Note:[/yellow] Could not determine previous workflow run; "
"continuing without previous run ID"
)
with console.status("[cyan]Triggering PyPI publish workflow..."):
try:
run_command(
@@ -1003,6 +1410,42 @@ def _trigger_pypi_publish(tag_name: str) -> None:
sys.exit(1)
console.print("[green]✓[/green] Triggered PyPI publish workflow")
if wait:
console.print("[cyan]Waiting for PyPI publish workflow to complete...[/cyan]")
run_id = ""
deadline = time.monotonic() + 120
while time.monotonic() < deadline:
time.sleep(5)
try:
run_id = run_command(
[
"gh",
"run",
"list",
"--workflow=publish.yml",
"--limit=1",
"--json=databaseId",
"--jq=.[0].databaseId",
]
)
except subprocess.CalledProcessError:
continue
if run_id and run_id != prev_run_id:
break
if not run_id or run_id == prev_run_id:
console.print(
"[red]Error:[/red] Could not find the PyPI publish workflow run"
)
sys.exit(1)
try:
run_command(["gh", "run", "watch", run_id, "--exit-status"])
except subprocess.CalledProcessError as e:
console.print(f"[red]✗[/red] PyPI publish workflow failed: {e}")
sys.exit(1)
console.print("[green]✓[/green] PyPI publish workflow completed")
# ---------------------------------------------------------------------------
# CLI commands
@@ -1032,6 +1475,15 @@ def bump(version: str, dry_run: bool, no_push: bool, no_commit: bool) -> None:
no_push: Don't push changes to remote.
no_commit: Don't commit changes (just update files).
"""
console.print(
f"\n[yellow]Note:[/yellow] [bold]devtools bump[/bold] only bumps versions "
f"in this repo. It will not tag, publish to PyPI, or release enterprise.\n"
f"If you want a full end-to-end release, run "
f"[bold]devtools release {version}[/bold] instead."
)
if not Confirm.ask("Continue with bump only?", default=True):
sys.exit(0)
try:
check_gh_installed()
@@ -1136,6 +1588,16 @@ def tag(dry_run: bool, no_edit: bool) -> None:
dry_run: Show what would be done without making changes.
no_edit: Skip editing release notes.
"""
console.print(
"\n[yellow]Note:[/yellow] [bold]devtools tag[/bold] only tags and creates "
"a GitHub release for this repo. It will not bump versions, publish to "
"PyPI, or release enterprise.\n"
"If you want a full end-to-end release, run "
"[bold]devtools release <version>[/bold] instead."
)
if not Confirm.ask("Continue with tag only?", default=True):
sys.exit(0)
try:
cwd = Path.cwd()
lib_dir = cwd / "lib"
@@ -1226,24 +1688,75 @@ def tag(dry_run: bool, no_edit: bool) -> None:
"--dry-run", is_flag=True, help="Show what would be done without making changes"
)
@click.option("--no-edit", is_flag=True, help="Skip editing release notes")
def release(version: str, dry_run: bool, no_edit: bool) -> None:
@click.option(
"--skip-enterprise",
is_flag=True,
help="Skip the enterprise release phase",
)
@click.option(
"--skip-to-enterprise",
is_flag=True,
help="Skip phases 1 & 2, run only the enterprise release phase",
)
def release(
version: str,
dry_run: bool,
no_edit: bool,
skip_enterprise: bool,
skip_to_enterprise: bool,
) -> None:
"""Full release: bump versions, tag, and publish a GitHub release.
Combines bump and tag into a single workflow. Creates a version bump PR,
waits for it to be merged, then generates release notes, updates docs,
creates the tag, and publishes a GitHub release.
creates the tag, and publishes a GitHub release. Then bumps versions and
releases the enterprise repo.
Args:
version: New version to set (e.g., 1.0.0, 1.0.0a1).
dry_run: Show what would be done without making changes.
no_edit: Skip editing release notes.
skip_enterprise: Skip the enterprise release phase.
skip_to_enterprise: Skip phases 1 & 2, run only the enterprise release phase.
"""
try:
check_gh_installed()
if skip_enterprise and skip_to_enterprise:
console.print(
"[red]Error:[/red] Cannot use both --skip-enterprise "
"and --skip-to-enterprise"
)
sys.exit(1)
if not skip_enterprise or skip_to_enterprise:
missing: list[str] = []
if not _ENTERPRISE_REPO:
missing.append("ENTERPRISE_REPO")
if not _ENTERPRISE_VERSION_DIRS:
missing.append("ENTERPRISE_VERSION_DIRS")
if not _ENTERPRISE_CREWAI_DEP_PATH:
missing.append("ENTERPRISE_CREWAI_DEP_PATH")
if missing:
console.print(
f"[red]Error:[/red] Missing required environment variable(s): "
f"{', '.join(missing)}\n"
f"Set them or pass --skip-enterprise to skip the enterprise release."
)
sys.exit(1)
cwd = Path.cwd()
lib_dir = cwd / "lib"
is_prerelease = _is_prerelease(version)
if skip_to_enterprise:
_release_enterprise(version, is_prerelease, dry_run)
console.print(
f"\n[green]✓[/green] Enterprise release [bold]{version}[/bold] complete!"
)
return
if not dry_run:
console.print("Checking git status...")
check_git_clean()
@@ -1337,7 +1850,11 @@ def release(version: str, dry_run: bool, no_edit: bool) -> None:
if not dry_run:
_create_tag_and_release(tag_name, release_notes, is_prerelease)
_trigger_pypi_publish(tag_name)
_trigger_pypi_publish(tag_name, wait=True)
_update_deployment_test_repo(version, is_prerelease)
if not skip_enterprise:
_release_enterprise(version, is_prerelease, dry_run)
console.print(f"\n[green]✓[/green] Release [bold]{version}[/bold] complete!")

uv.lock (generated)
View File

@@ -1243,7 +1243,7 @@ requires-dist = [
{ name = "json-repair", specifier = "~=0.25.2" },
{ name = "json5", specifier = "~=0.10.0" },
{ name = "jsonref", specifier = "~=1.1.0" },
{ name = "lancedb", specifier = ">=0.29.2" },
{ name = "lancedb", specifier = ">=0.29.2,<0.30.1" },
{ name = "litellm", marker = "extra == 'litellm'", specifier = ">=1.74.9,<=1.82.6" },
{ name = "mcp", specifier = "~=1.26.0" },
{ name = "mem0ai", marker = "extra == 'mem0'", specifier = "~=0.1.94" },
@@ -4275,7 +4275,7 @@ wheels = [
[[package]]
name = "nltk"
version = "3.9.3"
version = "3.9.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "click" },
@@ -4283,9 +4283,9 @@ dependencies = [
{ name = "regex" },
{ name = "tqdm" },
]
sdist = { url = "https://files.pythonhosted.org/packages/e1/8f/915e1c12df07c70ed779d18ab83d065718a926e70d3ea33eb0cd66ffb7c0/nltk-3.9.3.tar.gz", hash = "sha256:cb5945d6424a98d694c2b9a0264519fab4363711065a46aa0ae7a2195b92e71f", size = 2923673, upload-time = "2026-02-24T12:05:53.833Z" }
sdist = { url = "https://files.pythonhosted.org/packages/74/a1/b3b4adf15585a5bc4c357adde150c01ebeeb642173ded4d871e89468767c/nltk-3.9.4.tar.gz", hash = "sha256:ed03bc098a40481310320808b2db712d95d13ca65b27372f8a403949c8b523d0", size = 2946864, upload-time = "2026-03-24T06:13:40.641Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c2/7e/9af5a710a1236e4772de8dfcc6af942a561327bb9f42b5b4a24d0cf100fd/nltk-3.9.3-py3-none-any.whl", hash = "sha256:60b3db6e9995b3dd976b1f0fa7dec22069b2677e759c28eb69b62ddd44870522", size = 1525385, upload-time = "2026-02-24T12:05:46.54Z" },
{ url = "https://files.pythonhosted.org/packages/9d/91/04e965f8e717ba0ab4bdca5c112deeab11c9e750d94c4d4602f050295d39/nltk-3.9.4-py3-none-any.whl", hash = "sha256:f2fa301c3a12718ce4a0e9305c5675299da5ad9e26068218b69d692fda84828f", size = 1552087, upload-time = "2026-03-24T06:13:38.47Z" },
]
[[package]]
@@ -6235,14 +6235,14 @@ wheels = [
[[package]]
name = "pypdf"
version = "6.9.1"
version = "6.9.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions", marker = "python_full_version < '3.11'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/f9/fb/dc2e8cb006e80b0020ed20d8649106fe4274e82d8e756ad3e24ade19c0df/pypdf-6.9.1.tar.gz", hash = "sha256:ae052407d33d34de0c86c5c729be6d51010bf36e03035a8f23ab449bca52377d", size = 5311551, upload-time = "2026-03-17T10:46:07.876Z" }
sdist = { url = "https://files.pythonhosted.org/packages/31/83/691bdb309306232362503083cb15777491045dd54f45393a317dc7d8082f/pypdf-6.9.2.tar.gz", hash = "sha256:7f850faf2b0d4ab936582c05da32c52214c2b089d61a316627b5bfb5b0dab46c", size = 5311837, upload-time = "2026-03-23T14:53:27.983Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f9/f4/75543fa802b86e72f87e9395440fe1a89a6d149887e3e55745715c3352ac/pypdf-6.9.1-py3-none-any.whl", hash = "sha256:f35a6a022348fae47e092a908339a8f3dc993510c026bb39a96718fc7185e89f", size = 333661, upload-time = "2026-03-17T10:46:06.286Z" },
{ url = "https://files.pythonhosted.org/packages/a5/7e/c85f41243086a8fe5d1baeba527cb26a1918158a565932b41e0f7c0b32e9/pypdf-6.9.2-py3-none-any.whl", hash = "sha256:662cf29bcb419a36a1365232449624ab40b7c2d0cfc28e54f42eeecd1fd7e844", size = 333744, upload-time = "2026-03-23T14:53:26.573Z" },
]
[[package]]