Mirror of https://github.com/crewAIInc/crewAI.git
Synced 2026-03-31 16:18:14 +00:00

Comparing 1.12.2 ... perf/reduc — 15 commits
| SHA1 |
|---|
| c57536e3d0 |
| ac14b9127e |
| 98b7626784 |
| e21c506214 |
| 9fe0c15549 |
| 78d8ddb649 |
| 1b2062009a |
| 886aa4ba8f |
| 5bec000b21 |
| 2965384907 |
| 032ef06ef6 |
| 0ce9567cfc |
| d7252bfee7 |
| 10fc3796bb |
| 52249683a7 |
2 changes: .github/workflows/docs-broken-links.yml (vendored)
```diff
@@ -23,7 +23,7 @@ jobs:
       - name: Set up Node
         uses: actions/setup-node@v4
         with:
-          node-version: "latest"
+          node-version: "22"

       - name: Install Mintlify CLI
         run: npm i -g mintlify
```
Arabic changelog (translated to English):

```diff
@@ -4,6 +4,63 @@ description: "Product updates, improvements, and bug fixes
 icon: "clock"
 mode: "wide"
 ---
+<Update label="Mar 27, 2026">
+## v1.13.0rc1
+
+[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0rc1)
+
+## What's Changed
+
+### Documentation
+- Update changelog and version for v1.13.0a2
+
+## Contributors
+
+@greysonlalonde
+
+</Update>
+
+<Update label="Mar 27, 2026">
+## v1.13.0a2
+
+[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a2)
+
+## What's Changed
+
+### Features
+- Automatically update the deployment test repo during release
+- Improve enterprise release resilience and user experience
+
+### Documentation
+- Update changelog and version for v1.13.0a1
+
+## Contributors
+
+@greysonlalonde
+
+</Update>
+
+<Update label="Mar 27, 2026">
+## v1.13.0a1
+
+[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a1)
+
+## What's Changed
+
+### Bug Fixes
+- Fix broken links in the documentation workflow by pinning Node to LTS 22
+- Clear the uv cache for freshly published packages in the enterprise release
+
+### Documentation
+- Add a comprehensive RBAC permissions matrix and deployment guide
+- Update changelog and version for v1.12.2
+
+## Contributors
+
+@greysonlalonde, @iris-clawd, @joaomdmoura
+
+</Update>
+
+<Update label="Mar 25, 2026">
+## v1.12.2
```
1303 changes: docs/docs.json (file diff suppressed because it is too large)
```diff
@@ -4,6 +4,63 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
 icon: "clock"
 mode: "wide"
 ---
+<Update label="Mar 27, 2026">
+## v1.13.0rc1
+
+[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0rc1)
+
+## What's Changed
+
+### Documentation
+- Update changelog and version for v1.13.0a2
+
+## Contributors
+
+@greysonlalonde
+
+</Update>
+
+<Update label="Mar 27, 2026">
+## v1.13.0a2
+
+[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a2)
+
+## What's Changed
+
+### Features
+- Auto-update deployment test repo during release
+- Improve enterprise release resilience and UX
+
+### Documentation
+- Update changelog and version for v1.13.0a1
+
+## Contributors
+
+@greysonlalonde
+
+</Update>
+
+<Update label="Mar 27, 2026">
+## v1.13.0a1
+
+[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a1)
+
+## What's Changed
+
+### Bug Fixes
+- Fix broken links in documentation workflow by pinning Node to LTS 22
+- Bust the uv cache for freshly published packages in enterprise release
+
+### Documentation
+- Add comprehensive RBAC permissions matrix and deployment guide
+- Update changelog and version for v1.12.2
+
+## Contributors
+
+@greysonlalonde, @iris-clawd, @joaomdmoura
+
+</Update>
+
+<Update label="Mar 25, 2026">
+## v1.12.2
```
```diff
@@ -7,11 +7,13 @@ mode: "wide"
 
 ## Overview
 
-RBAC in CrewAI AMP enables secure, scalable access management through a combination of organization‑level roles and automation‑level visibility controls.
+RBAC in CrewAI AMP enables secure, scalable access management through two layers:
+
+1. **Feature permissions** — control what each role can do across the platform (manage, read, or no access)
+2. **Entity-level permissions** — fine-grained access on individual automations, environment variables, LLM connections, and Git repositories
 
 <Frame>
   <img src="/images/enterprise/users_and_roles.png" alt="RBAC overview in CrewAI AMP" />
 
 </Frame>
 
 ## Users and Roles
```
```diff
@@ -39,6 +41,13 @@ You can configure users and roles in Settings → Roles.
 </Step>
 </Steps>
 
+### Predefined Roles
+
+| Role | Description |
+| :--------- | :-------------------------------------------------------------------------- |
+| **Owner** | Full access to all features and settings. Cannot be restricted. |
+| **Member** | Read access to most features, manage access to Studio projects. Cannot modify organization or default settings. |
+
 ### Configuration summary
 
 | Area | Where to configure | Options |
```
```diff
@@ -46,23 +55,80 @@ You can configure users and roles in Settings → Roles.
 | Users & Roles | Settings → Roles | Predefined: Owner, Member; Custom roles |
 | Automation visibility | Automation → Settings → Visibility | Private; Whitelist users/roles |
 
-## Automation‑level Access Control
+---
 
-In addition to organization‑wide roles, CrewAI Automations support fine‑grained visibility settings that let you restrict access to specific automations by user or role.
+## Feature Permissions Matrix
 
-This is useful for:
+Every role has a permission level for each feature area. The three levels are:
+
+- **Manage** — full read/write access (create, edit, delete)
+- **Read** — view-only access
+- **No access** — feature is hidden/inaccessible
+
+| Feature | Owner | Member (default) | Description |
+| :------------------------ | :------ | :--------------- | :-------------------------------------------------------------- |
+| `usage_dashboards` | Manage | Read | View usage metrics and analytics |
+| `crews_dashboards` | Manage | Read | View deployment dashboards, access automation details |
+| `invitations` | Manage | Read | Invite new members to the organization |
+| `training_ui` | Manage | Read | Access training/fine-tuning interfaces |
+| `tools` | Manage | Read | Create and manage tools |
+| `agents` | Manage | Read | Create and manage agents |
+| `environment_variables` | Manage | Read | Create and manage environment variables |
+| `llm_connections` | Manage | Read | Configure LLM provider connections |
+| `default_settings` | Manage | No access | Modify organization-wide default settings |
+| `organization_settings` | Manage | No access | Manage billing, plans, and organization configuration |
+| `studio_projects` | Manage | Manage | Create and edit projects in Studio |
+
+<Tip>
+When creating a custom role, you can set each feature independently to **Manage**, **Read**, or **No access** to match your team's needs.
+</Tip>
+
+---
+
+## Deploying from GitHub or Zip
+
+One of the most common RBAC questions is: _"What permissions does a team member need to deploy?"_
+
+### Deploy from GitHub
+
+To deploy an automation from a GitHub repository, a user needs:
+
+1. **`crews_dashboards`**: at least `Read` — required to access the automations dashboard where deployments are created
+2. **Git repository access** (if entity-level RBAC for Git repositories is enabled): the user's role must be granted access to the specific Git repository via entity-level permissions
+3. **`studio_projects`: `Manage`** — if building the crew in Studio before deploying
+
+### Deploy from Zip
+
+To deploy an automation from a Zip file upload, a user needs:
+
+1. **`crews_dashboards`**: at least `Read` — required to access the automations dashboard
+2. **Zip deployments enabled**: the organization must not have disabled zip deployments in organization settings
+
+### Quick Reference: Minimum Permissions for Deployment
+
+| Action | Required feature permissions | Additional requirements |
+| :------------------- | :------------------------------------ | :----------------------------------------------- |
+| Deploy from GitHub | `crews_dashboards: Read` | Git repo entity access (if Git RBAC is enabled) |
+| Deploy from Zip | `crews_dashboards: Read` | Zip deployments must be enabled at the org level |
+| Build in Studio | `studio_projects: Manage` | — |
+| Configure LLM keys | `llm_connections: Manage` | — |
+| Set environment vars | `environment_variables: Manage` | Entity-level access (if entity RBAC is enabled) |
+
+---
+
+## Automation‑level Access Control (Entity Permissions)
+
+In addition to organization‑wide roles, CrewAI supports fine‑grained entity-level permissions that restrict access to individual resources.
+
+### Automation Visibility
+
+Automations support visibility settings that restrict access by user or role. This is useful for:
 
 - Keeping sensitive or experimental automations private
 - Managing visibility across large teams or external collaborators
 - Testing automations in isolated contexts
 
-Deployments can be configured as private, meaning only whitelisted users and roles will be able to:
-
-- View the deployment
-- Run it or interact with its API
-- Access its logs, metrics, and settings
 
 The organization owner always has access, regardless of visibility settings.
+Deployments can be configured as private, meaning only whitelisted users and roles will be able to interact with them.
 
 You can configure automation‑level access control in Automation → Settings → Visibility tab.
```
```diff
@@ -99,9 +165,92 @@ You can configure automation‑level access control in Automation → Settings
 
 <Frame>
   <img src="/images/enterprise/visibility.png" alt="Automation Visibility settings in CrewAI AMP" />
 
 </Frame>
 
+### Deployment Permission Types
+
+When granting entity-level access to a specific automation, you can assign these permission types:
+
+| Permission | What it allows |
+| :------------------- | :-------------------------------------------------- |
+| `run` | Execute the automation and use its API |
+| `traces` | View execution traces and logs |
+| `manage_settings` | Edit, redeploy, rollback, or delete the automation |
+| `human_in_the_loop` | Respond to human-in-the-loop (HITL) requests |
+| `full_access` | All of the above |
+
+### Entity-level RBAC for Other Resources
+
+When entity-level RBAC is enabled, access to these resources can also be controlled per user or role:
+
+| Resource | Controlled by | Description |
+| :--------------------- | :-------------------------------- | :---------------------------------------------------- |
+| Environment variables | Entity RBAC feature flag | Restrict which roles/users can view or manage specific env vars |
+| LLM connections | Entity RBAC feature flag | Restrict access to specific LLM provider configurations |
+| Git repositories | Git repositories RBAC org setting | Restrict which roles/users can access specific connected repos |
+
+---
+
+## Common Role Patterns
+
+While CrewAI ships with Owner and Member roles, most teams benefit from creating custom roles. Here are common patterns:
+
+### Developer Role
+
+A role for team members who build and deploy automations but don't manage organization settings.
+
+| Feature | Permission |
+| :------------------------ | :--------- |
+| `usage_dashboards` | Read |
+| `crews_dashboards` | Manage |
+| `invitations` | Read |
+| `training_ui` | Read |
+| `tools` | Manage |
+| `agents` | Manage |
+| `environment_variables` | Manage |
+| `llm_connections` | Read |
+| `default_settings` | No access |
+| `organization_settings` | No access |
+| `studio_projects` | Manage |
+
+### Viewer / Stakeholder Role
+
+A role for non-technical stakeholders who need to monitor automations and view results.
+
+| Feature | Permission |
+| :------------------------ | :--------- |
+| `usage_dashboards` | Read |
+| `crews_dashboards` | Read |
+| `invitations` | No access |
+| `training_ui` | Read |
+| `tools` | Read |
+| `agents` | Read |
+| `environment_variables` | No access |
+| `llm_connections` | No access |
+| `default_settings` | No access |
+| `organization_settings` | No access |
+| `studio_projects` | Read |
+
+### Ops / Platform Admin Role
+
+A role for platform operators who manage infrastructure settings but may not build agents.
+
+| Feature | Permission |
+| :------------------------ | :--------- |
+| `usage_dashboards` | Manage |
+| `crews_dashboards` | Manage |
+| `invitations` | Manage |
+| `training_ui` | Read |
+| `tools` | Read |
+| `agents` | Read |
+| `environment_variables` | Manage |
+| `llm_connections` | Manage |
+| `default_settings` | Manage |
+| `organization_settings` | Read |
+| `studio_projects` | Read |
+
+---
+
 <Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
 Contact our support team for assistance with RBAC questions.
 </Card>
```
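The three permission levels in the feature matrix above form a strict ordering (Manage ⊃ Read ⊃ No access), and a deployment check such as "at least `Read` on `crews_dashboards`" is just a comparison against that ordering. A minimal Python sketch, with an illustrative in-memory matrix (the names and structure here are assumptions for exposition, not CrewAI's implementation):

```python
from enum import IntEnum

class Level(IntEnum):
    # Ordered so that a higher level always satisfies a lower requirement.
    NO_ACCESS = 0
    READ = 1
    MANAGE = 2

# Excerpt of the default roles from the matrix above (illustrative, not exhaustive).
MATRIX = {
    "owner": {"crews_dashboards": Level.MANAGE, "organization_settings": Level.MANAGE},
    "member": {
        "crews_dashboards": Level.READ,
        "organization_settings": Level.NO_ACCESS,
        "studio_projects": Level.MANAGE,
    },
}

def allowed(role: str, feature: str, required: Level) -> bool:
    """A role may act when its level for the feature meets the requirement."""
    return MATRIX.get(role, {}).get(feature, Level.NO_ACCESS) >= required
```

Under this sketch, a Member clears the "deploy from GitHub" bar (`crews_dashboards` ≥ Read) but cannot touch organization settings.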
550 changes: docs/en/enterprise/features/sso.mdx (new file)
@@ -0,0 +1,550 @@
---
title: Single Sign-On (SSO)
icon: "key"
description: Configure enterprise SSO authentication for CrewAI Platform — SaaS and Factory
---

## Overview

CrewAI Platform supports enterprise Single Sign-On (SSO) across both **SaaS (AMP)** and **Factory (self-hosted)** deployments. SSO enables your team to authenticate using your organization's existing identity provider, enforcing centralized access control, MFA policies, and user lifecycle management.

### Supported Providers

| Provider | SaaS | Factory | Protocol | CLI Support |
|---|---|---|---|---|
| **WorkOS** | ✅ (default) | ✅ | OAuth 2.0 / OIDC | ✅ |
| **Microsoft Entra ID** (Azure AD) | ✅ (enterprise) | ✅ | OAuth 2.0 / SAML 2.0 | ✅ |
| **Okta** | ✅ (enterprise) | ✅ | OAuth 2.0 / OIDC | ✅ |
| **Auth0** | ✅ (enterprise) | ✅ | OAuth 2.0 / OIDC | ✅ |
| **Keycloak** | — | ✅ | OAuth 2.0 / OIDC | ✅ |

### Key Capabilities

- **SAML 2.0 and OAuth 2.0 / OIDC** protocol support
- **Device Authorization Grant** flow for CLI authentication
- **Role-Based Access Control (RBAC)** with custom roles and per-resource permissions
- **MFA enforcement** delegated to your identity provider
- **User provisioning** through IdP assignment (users/groups)

---
## SaaS SSO

### Default Authentication

CrewAI's managed SaaS platform (AMP) uses **WorkOS** as the default authentication provider. When you sign up at [app.crewai.com](https://app.crewai.com), authentication is handled through `login.crewai.com` — no additional SSO configuration is required.

### Enterprise Custom SSO

Enterprise SaaS customers can configure SSO with their own identity provider (Entra ID, Okta, Auth0). Contact your CrewAI account team to enable custom SSO for your organization. Once configured:

1. Your team members authenticate through your organization's IdP
2. Access control and MFA policies are enforced by your IdP
3. The CrewAI CLI automatically detects your SSO configuration via `crewai enterprise configure`

### CLI Defaults (SaaS)

| Setting | Default Value |
|---|---|
| `enterprise_base_url` | `https://app.crewai.com` |
| `oauth2_provider` | `workos` |
| `oauth2_domain` | `login.crewai.com` |

---
## Factory SSO Setup

Factory (self-hosted) deployments require you to configure SSO by setting environment variables in your Helm `values.yaml` and registering an application in your identity provider.

### Microsoft Entra ID (Azure AD)

<Steps>
<Step title="Register an Application">
1. Go to [portal.azure.com](https://portal.azure.com) → **Microsoft Entra ID** → **App registrations** → **New registration**
2. Configure:
   - **Name:** `CrewAI` (or your preferred name)
   - **Supported account types:** Accounts in this organizational directory only
   - **Redirect URI:** Select **Web**, enter `https://<your-domain>/auth/entra_id/callback`
3. Click **Register**
</Step>

<Step title="Collect Credentials">
From the app overview page, copy:
- **Application (client) ID** → `ENTRA_ID_CLIENT_ID`
- **Directory (tenant) ID** → `ENTRA_ID_TENANT_ID`
</Step>

<Step title="Create Client Secret">
1. Navigate to **Certificates & Secrets** → **New client secret**
2. Add a description and select an expiration period
3. Copy the secret value immediately (it won't be shown again) → `ENTRA_ID_CLIENT_SECRET`
</Step>

<Step title="Grant Admin Consent">
1. Go to **Enterprise applications** → select your app
2. Under **Security** → **Permissions**, click **Grant admin consent**
3. Ensure **Microsoft Graph → User.Read** is granted
</Step>

<Step title="Configure App Roles (Recommended)">
Under **App registrations** → your app → **App roles**, create:

| Display Name | Value | Allowed Member Types |
|---|---|---|
| Member | `member` | Users/Groups |
| Factory Admin | `factory-admin` | Users/Groups |

<Note>
The `member` role grants login access. The `factory-admin` role grants admin panel access. Roles are included in the JWT automatically.
</Note>
</Step>

<Step title="Assign Users">
1. Under **Properties**, set **Assignment required?** to **Yes**
2. Under **Users and groups**, assign users/groups with the appropriate role
</Step>

<Step title="Set Environment Variables">
```yaml
envVars:
  AUTH_PROVIDER: "entra_id"

secrets:
  ENTRA_ID_CLIENT_ID: "<Application (client) ID>"
  ENTRA_ID_CLIENT_SECRET: "<Client Secret>"
  ENTRA_ID_TENANT_ID: "<Directory (tenant) ID>"
```
</Step>

<Step title="Enable CLI Support (Optional)">
To allow `crewai login` via Device Authorization Grant:

1. Under **Authentication** → **Advanced settings**, enable **Allow public client flows**
2. Under **Expose an API**, add an Application ID URI (e.g., `api://crewai-cli`)
3. Add a scope (e.g., `read`) with **Admins and users** consent
4. Under **Manifest**, set `accessTokenAcceptedVersion` to `2`
5. Add environment variables:

```yaml
secrets:
  ENTRA_ID_DEVICE_AUTHORIZATION_CLIENT_ID: "<Application (client) ID>"
  ENTRA_ID_CUSTOM_OPENID_SCOPE: "<scope URI, e.g. api://crewai-cli/read>"
```
</Step>
</Steps>

---
### Okta

<Steps>
<Step title="Create App Integration">
1. Open the Okta Admin Console → **Applications** → **Create App Integration**
2. Select **OIDC - OpenID Connect** → **Web Application** → **Next**
3. Configure:
   - **App integration name:** `CrewAI SSO`
   - **Sign-in redirect URI:** `https://<your-domain>/auth/okta/callback`
   - **Sign-out redirect URI:** `https://<your-domain>`
   - **Assignments:** Choose who can access (everyone or specific groups)
4. Click **Save**
</Step>

<Step title="Collect Credentials">
From the app details page:
- **Client ID** → `OKTA_CLIENT_ID`
- **Client Secret** → `OKTA_CLIENT_SECRET`
- **Okta URL** (top-right corner, under your username) → `OKTA_SITE`
</Step>

<Step title="Configure Authorization Server">
1. Navigate to **Security** → **API**
2. Select your authorization server (default: `default`)
3. Under **Access Policies**, add a policy and rule:
   - In the rule, under **Scopes requested**, select **The following scopes** → **OIDC default scopes**
4. Note the **Name** and **Audience** of the authorization server

<Warning>
The authorization server name and audience must match `OKTA_AUTHORIZATION_SERVER` and `OKTA_AUDIENCE` exactly. Mismatches cause `401 Unauthorized` or `Invalid token: Signature verification failed` errors.
</Warning>
</Step>

<Step title="Set Environment Variables">
```yaml
envVars:
  AUTH_PROVIDER: "okta"

secrets:
  OKTA_CLIENT_ID: "<Okta app client ID>"
  OKTA_CLIENT_SECRET: "<Okta client secret>"
  OKTA_SITE: "https://your-domain.okta.com"
  OKTA_AUTHORIZATION_SERVER: "default"
  OKTA_AUDIENCE: "api://default"
```
</Step>

<Step title="Enable CLI Support (Optional)">
1. Create a **new** app integration: **OIDC** → **Native Application**
2. Enable the **Device Authorization** and **Refresh Token** grant types
3. Allow everyone in your organization to access it
4. Add the environment variable:

```yaml
secrets:
  OKTA_DEVICE_AUTHORIZATION_CLIENT_ID: "<Native app client ID>"
```

<Note>
Device Authorization requires a **Native Application** — it cannot use the Web Application created for browser-based SSO.
</Note>
</Step>
</Steps>

---
### Keycloak

<Steps>
<Step title="Create a Client">
1. Open the Keycloak Admin Console → navigate to your realm
2. **Clients** → **Create client**:
   - **Client type:** OpenID Connect
   - **Client ID:** `crewai-factory` (suggested)
3. Capability config:
   - **Client authentication:** On
   - **Standard flow:** Checked
4. Login settings:
   - **Root URL:** `https://<your-domain>`
   - **Valid redirect URIs:** `https://<your-domain>/auth/keycloak/callback`
   - **Valid post logout redirect URIs:** `https://<your-domain>`
5. Click **Save**
</Step>

<Step title="Collect Credentials">
- **Client ID** → `KEYCLOAK_CLIENT_ID`
- Under the **Credentials** tab: **Client secret** → `KEYCLOAK_CLIENT_SECRET`
- **Realm name** → `KEYCLOAK_REALM`
- **Keycloak server URL** → `KEYCLOAK_SITE`
</Step>

<Step title="Set Environment Variables">
```yaml
envVars:
  AUTH_PROVIDER: "keycloak"

secrets:
  KEYCLOAK_CLIENT_ID: "<client ID>"
  KEYCLOAK_CLIENT_SECRET: "<client secret>"
  KEYCLOAK_SITE: "https://keycloak.yourdomain.com"
  KEYCLOAK_REALM: "<realm name>"
  KEYCLOAK_AUDIENCE: "account"
  # Only set if using a custom base path (pre-v17 migrations):
  # KEYCLOAK_BASE_URL: "/auth"
```

<Note>
Keycloak includes `account` as the default audience in access tokens. For most installations, `KEYCLOAK_AUDIENCE=account` works without additional configuration. See the [Keycloak audience documentation](https://www.keycloak.org/docs/latest/authorization_services/index.html) if you need a custom audience.
</Note>
</Step>

<Step title="Enable CLI Support (Optional)">
1. Create a **second** client:
   - **Client type:** OpenID Connect
   - **Client ID:** `crewai-factory-cli` (suggested)
   - **Client authentication:** Off (Device Authorization requires a public client)
   - **Authentication flow:** Check **only** OAuth 2.0 Device Authorization Grant
2. Add the environment variable:

```yaml
secrets:
  KEYCLOAK_DEVICE_AUTHORIZATION_CLIENT_ID: "<CLI client ID>"
```
</Step>
</Steps>

---
### WorkOS

<Steps>
<Step title="Configure in WorkOS Dashboard">
1. Create an application in the [WorkOS Dashboard](https://dashboard.workos.com)
2. Configure the redirect URI: `https://<your-domain>/auth/workos/callback`
3. Note the **Client ID** and **AuthKit domain**
4. Set up organizations in the WorkOS dashboard
</Step>

<Step title="Set Environment Variables">
```yaml
envVars:
  AUTH_PROVIDER: "workos"

secrets:
  WORKOS_CLIENT_ID: "<WorkOS client ID>"
  WORKOS_AUTHKIT_DOMAIN: "<your-authkit-domain.authkit.com>"
```
</Step>
</Steps>

---
### Auth0

<Steps>
<Step title="Create Application">
1. In the [Auth0 Dashboard](https://manage.auth0.com), create a new **Regular Web Application**
2. Configure:
   - **Allowed Callback URLs:** `https://<your-domain>/auth/auth0/callback`
   - **Allowed Logout URLs:** `https://<your-domain>`
3. Note the **Domain**, **Client ID**, and **Client Secret**
</Step>

<Step title="Set Environment Variables">
```yaml
envVars:
  AUTH_PROVIDER: "auth0"

secrets:
  AUTH0_CLIENT_ID: "<Auth0 client ID>"
  AUTH0_CLIENT_SECRET: "<Auth0 client secret>"
  AUTH0_DOMAIN: "<your-tenant.auth0.com>"
```
</Step>

<Step title="Enable CLI Support (Optional)">
1. Create a **Native** application in Auth0 for Device Authorization
2. Enable the **Device Authorization** grant type under application settings
3. Configure the CLI with the appropriate audience and client ID
</Step>
</Steps>

---
## CLI Authentication

The CrewAI CLI supports SSO authentication via the **Device Authorization Grant** flow. This allows developers to authenticate from their terminal without exposing credentials.

### Quick Setup

For Factory installations, the CLI can auto-configure all OAuth2 settings:

```bash
crewai enterprise configure https://your-factory-url.app
```

This command fetches the SSO configuration from your Factory instance and sets all required CLI parameters automatically.

Then authenticate:

```bash
crewai login
```

<Note>
Requires CrewAI CLI version **1.6.0** or higher for Entra ID, **0.159.0** or higher for Okta, and **1.9.0** or higher for Keycloak.
</Note>

### Manual CLI Configuration

If you need to configure the CLI manually, use `crewai config set`:

```bash
# Set the provider
crewai config set oauth2_provider okta

# Set provider-specific values
crewai config set oauth2_domain your-domain.okta.com
crewai config set oauth2_client_id your-client-id
crewai config set oauth2_audience api://default

# Set the enterprise base URL
crewai config set enterprise_base_url https://your-factory-url.app
```

### CLI Configuration Reference

| Setting | Description | Example |
|---|---|---|
| `enterprise_base_url` | Your CrewAI instance URL | `https://crewai.yourcompany.com` |
| `oauth2_provider` | Provider name | `workos`, `okta`, `auth0`, `entra_id`, `keycloak` |
| `oauth2_domain` | Provider domain | `your-domain.okta.com` |
| `oauth2_client_id` | OAuth2 client ID | `0oaqnwji7pGW7VT6T697` |
| `oauth2_audience` | API audience identifier | `api://default` |

View current configuration:

```bash
crewai config list
```

### How Device Authorization Works

1. Run `crewai login` — the CLI requests a device code from your IdP
2. A verification URL and code are displayed in your terminal
3. Your browser opens to the verification URL
4. Enter the code and authenticate with your IdP credentials
5. The CLI receives an access token and stores it locally
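The five steps above amount to a request-then-poll loop (the Device Authorization Grant, RFC 8628). A minimal Python sketch under stated assumptions: the two endpoint helpers are injected stand-ins for the IdP's `/device/code` and `/token` calls, not the CrewAI CLI's actual code.

```python
import time

def device_login(request_code, poll_token, interval=5, timeout=300):
    """Sketch of the Device Authorization Grant loop (RFC 8628).

    request_code() returns the IdP's device-code response (a dict with
    device_code, user_code, verification_uri); poll_token(device_code)
    returns an access token once the user approves, or None while the
    authorization is still pending.
    """
    grant = request_code()
    # Steps 2-4: the user visits the URL and enters the code in a browser.
    print(f"Visit {grant['verification_uri']} and enter code {grant['user_code']}")
    waited = 0.0
    while waited < timeout:
        token = poll_token(grant["device_code"])
        if token is not None:
            return token  # step 5: the caller stores this token locally
        time.sleep(interval)
        waited += interval
    raise TimeoutError("device authorization timed out")
```

A real client would also honor the `slow_down` and `authorization_pending` error codes the IdP returns while polling.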

---

## Role-Based Access Control (RBAC)

CrewAI Platform provides granular RBAC that integrates with your SSO provider.

### Permission Model

| Permission | Description |
|---|---|
| **Read** | View resources (dashboards, automations, logs) |
| **Write** | Create and modify resources |
| **Manage** | Full control including deletion and configuration |

### Resources

Permissions can be scoped to individual resources:

- **Usage Dashboard** — Platform usage metrics and analytics
- **Automations Dashboard** — Crew and flow management
- **Environment Variables** — Secret and configuration management
- **Individual Automations** — Per-automation access control

### Roles

- **Predefined roles** come out of the box with standard permission sets
- **Custom roles** can be created with any combination of permissions
- **Per-resource assignment** — limit specific automations to individual users or roles

### Factory Admin Access

For Factory deployments using Entra ID, admin access is controlled via App Roles:

- Assign the `factory-admin` role to users who need admin panel access
- Assign the `member` role for standard platform access
- Roles are communicated via JWT claims — no additional configuration needed after IdP setup
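Since Entra ID places assigned app roles in the token's standard `roles` claim, the server-side mapping can be sketched as a simple precedence check. This is a hypothetical illustration of the claim-to-access mapping, not CrewAI's implementation:

```python
def resolve_access(claims: dict) -> str:
    """Map Entra ID app-role claims to Factory access (illustrative).

    Assumes roles arrive in the JWT's `roles` claim; `factory-admin`
    outranks `member` when a user holds both.
    """
    roles = set(claims.get("roles", []))
    if "factory-admin" in roles:
        return "admin"    # admin panel access
    if "member" in roles:
        return "member"   # standard platform access
    return "denied"       # user not assigned to the application
```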

---

## Troubleshooting
|
||||
|
||||
### Invalid Redirect URI
|
||||
|
||||
**Symptom:** Authentication fails with a redirect URI mismatch error.
|
||||
|
||||
**Fix:** Ensure the redirect URI in your IdP exactly matches the expected callback URL:
|
||||
|
||||
| Provider | Callback URL |
|
||||
|---|---|
|
||||
| Entra ID | `https://<domain>/auth/entra_id/callback` |
|
||||
| Okta | `https://<domain>/auth/okta/callback` |
|
||||
| Keycloak | `https://<domain>/auth/keycloak/callback` |
|
||||
| WorkOS | `https://<domain>/auth/workos/callback` |
|
||||
| Auth0 | `https://<domain>/auth/auth0/callback` |
|
||||
|
||||
### CLI Login Fails (Device Authorization)

**Symptom:** `crewai login` returns an error or times out.

**Fix:**
- Verify that Device Authorization Grant is enabled in your IdP
- For Okta: ensure you have a **Native Application** (not Web) with the Device Authorization grant
- For Entra ID: ensure **Allow public client flows** is enabled
- For Keycloak: ensure the CLI client has **Client authentication: Off** and only Device Authorization Grant enabled
- Check that the `*_DEVICE_AUTHORIZATION_CLIENT_ID` environment variable is set on the server

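A quick way to confirm the grant is actually enabled is to check your IdP's OIDC discovery document (`https://<idp>/.well-known/openid-configuration`) for a device authorization endpoint. A hedged sketch using a fabricated sample document; in practice you would fetch the real one from your IdP:

```python
import json

# Fabricated subset of an OIDC discovery document for illustration
discovery = json.loads("""
{
  "issuer": "https://your-domain.okta.com/oauth2/default",
  "device_authorization_endpoint": "https://your-domain.okta.com/oauth2/default/v1/device/authorize",
  "grant_types_supported": ["authorization_code", "urn:ietf:params:oauth:grant-type:device_code"]
}
""")

# The device grant is advertised via its RFC 8628 URN
print("urn:ietf:params:oauth:grant-type:device_code" in discovery["grant_types_supported"])  # → True
print("device_authorization_endpoint" in discovery)  # → True
```

If the endpoint or grant URN is absent from the real discovery document, `crewai login` cannot complete regardless of client configuration.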
### Token Validation Errors

**Symptom:** `Invalid token: Signature verification failed` or `401 Unauthorized` after login.

**Fix:**
- **Okta:** Verify `OKTA_AUTHORIZATION_SERVER` and `OKTA_AUDIENCE` match the authorization server's Name and Audience exactly
- **Entra ID:** Ensure `accessTokenAcceptedVersion` is set to `2` in the app manifest
- **Keycloak:** Verify `KEYCLOAK_AUDIENCE` matches the audience in your access tokens (default: `account`)

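When chasing an audience mismatch, it helps to see what the token actually contains. A hedged sketch that prints the `aud` and `iss` claims without verification, so you can compare them against `OKTA_AUDIENCE` or `KEYCLOAK_AUDIENCE`; the sample token is fabricated for illustration:

```python
import base64
import json

def unverified_claims(token: str) -> dict:
    """Decode a JWT payload without signature verification (debugging only)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricated token payload for illustration
body = (
    base64.urlsafe_b64encode(
        json.dumps(
            {"aud": "api://default", "iss": "https://your-domain.okta.com/oauth2/default"}
        ).encode()
    )
    .decode()
    .rstrip("=")
)
token = f"eyJhbGciOiJSUzI1NiJ9.{body}.signature"

claims = unverified_claims(token)
print(claims["aud"])  # compare against OKTA_AUDIENCE / KEYCLOAK_AUDIENCE
print(claims["iss"])  # compare against the expected issuer URL
```

A mismatch between the printed `aud` and the configured audience variable is the most common cause of `Signature verification failed`-style rejections.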
### Admin Consent Not Granted (Entra ID)

**Symptom:** Users can't log in and see a "needs admin approval" message.

**Fix:** Go to **Enterprise applications** → your app → **Permissions** → **Grant admin consent**. Ensure `User.Read` is granted for Microsoft Graph.

### 403 Forbidden After Login

**Symptom:** A user authenticates successfully but gets 403 errors.

**Fix:**
- Check that the user is assigned to the application in your IdP
- For Entra ID with **Assignment required = Yes**: ensure the user has a role assignment (Member or Factory Admin)
- For Okta: verify the user or their group is assigned under the app's **Assignments** tab

### CLI Can't Reach Factory Instance

**Symptom:** `crewai enterprise configure` fails to connect.

**Fix:**
- Verify the Factory URL is reachable from your machine
- Check that `enterprise_base_url` is set correctly: `crewai config list`
- Ensure TLS certificates are valid and trusted

---

## Environment Variables Reference

### Common

| Variable | Description |
|---|---|
| `AUTH_PROVIDER` | Authentication provider: `entra_id`, `okta`, `workos`, `auth0`, `keycloak`, `local` |

### Microsoft Entra ID

| Variable | Required | Description |
|---|---|---|
| `ENTRA_ID_CLIENT_ID` | ✅ | Application (client) ID from Azure |
| `ENTRA_ID_CLIENT_SECRET` | ✅ | Client secret from Azure |
| `ENTRA_ID_TENANT_ID` | ✅ | Directory (tenant) ID from Azure |
| `ENTRA_ID_DEVICE_AUTHORIZATION_CLIENT_ID` | CLI only | Client ID for Device Authorization Grant |
| `ENTRA_ID_CUSTOM_OPENID_SCOPE` | CLI only | Custom scope from "Expose an API" (e.g., `api://crewai-cli/read`) |

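Putting the Entra ID variables together, a minimal server-side environment might look like the following sketch. All values are placeholders; substitute the IDs from your Azure app registration:

```shell
# Placeholder values - substitute your Azure app registration details
export AUTH_PROVIDER="entra_id"
export ENTRA_ID_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ENTRA_ID_CLIENT_SECRET="your-client-secret"
export ENTRA_ID_TENANT_ID="11111111-1111-1111-1111-111111111111"

# Only needed when using `crewai login` via the Device Authorization Grant
export ENTRA_ID_DEVICE_AUTHORIZATION_CLIENT_ID="22222222-2222-2222-2222-222222222222"
export ENTRA_ID_CUSTOM_OPENID_SCOPE="api://crewai-cli/read"
```

The same shape applies to the other providers below, with their respective variable prefixes.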
### Okta

| Variable | Required | Description |
|---|---|---|
| `OKTA_CLIENT_ID` | ✅ | Okta application client ID |
| `OKTA_CLIENT_SECRET` | ✅ | Okta client secret |
| `OKTA_SITE` | ✅ | Okta organization URL (e.g., `https://your-domain.okta.com`) |
| `OKTA_AUTHORIZATION_SERVER` | ✅ | Authorization server name (e.g., `default`) |
| `OKTA_AUDIENCE` | ✅ | Authorization server audience (e.g., `api://default`) |
| `OKTA_DEVICE_AUTHORIZATION_CLIENT_ID` | CLI only | Native app client ID for Device Authorization |

### WorkOS

| Variable | Required | Description |
|---|---|---|
| `WORKOS_CLIENT_ID` | ✅ | WorkOS application client ID |
| `WORKOS_AUTHKIT_DOMAIN` | ✅ | AuthKit domain (e.g., `your-domain.authkit.com`) |

### Auth0

| Variable | Required | Description |
|---|---|---|
| `AUTH0_CLIENT_ID` | ✅ | Auth0 application client ID |
| `AUTH0_CLIENT_SECRET` | ✅ | Auth0 client secret |
| `AUTH0_DOMAIN` | ✅ | Auth0 tenant domain (e.g., `your-tenant.auth0.com`) |

### Keycloak

| Variable | Required | Description |
|---|---|---|
| `KEYCLOAK_CLIENT_ID` | ✅ | Keycloak client ID |
| `KEYCLOAK_CLIENT_SECRET` | ✅ | Keycloak client secret |
| `KEYCLOAK_SITE` | ✅ | Keycloak server URL |
| `KEYCLOAK_REALM` | ✅ | Keycloak realm name |
| `KEYCLOAK_AUDIENCE` | ✅ | Token audience (default: `account`) |
| `KEYCLOAK_BASE_URL` | Optional | Base URL path (e.g., `/auth` for pre-v17 migrations) |
| `KEYCLOAK_DEVICE_AUTHORIZATION_CLIENT_ID` | CLI only | Public client ID for Device Authorization |

---

## Next Steps

- [Installation Guide](/installation) — Get started with CrewAI
- [Quickstart](/quickstart) — Build your first crew
- [RBAC Setup](/enterprise/features/rbac) — Detailed role and permission management
@@ -4,6 +4,63 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="March 27, 2026">
## v1.13.0rc1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0rc1)

## What Changed

### Documentation
- Update changelog and version for v1.13.0a2

## Contributors

@greysonlalonde

</Update>

<Update label="March 27, 2026">
## v1.13.0a2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a2)

## What Changed

### Features
- Auto-update the deploy test repository during release
- Improve enterprise release resilience and user experience

### Documentation
- Update changelog and version for v1.13.0a1

## Contributors

@greysonlalonde

</Update>

<Update label="March 27, 2026">
## v1.13.0a1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a1)

## What Changed

### Bug Fixes
- Fix broken links in the docs workflow by pinning Node to LTS 22
- Clear the uv cache for freshly published packages in the enterprise release

### Documentation
- Add a comprehensive RBAC permissions matrix and deployment guide
- Update changelog and version for v1.12.2

## Contributors

@greysonlalonde, @iris-clawd, @joaomdmoura

</Update>

<Update label="March 25, 2026">
## v1.12.2

@@ -4,6 +4,63 @@ description: "Product updates, improvements, and fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="March 27, 2026">
## v1.13.0rc1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0rc1)

## What Changed

### Documentation
- Update changelog and version for v1.13.0a2

## Contributors

@greysonlalonde

</Update>

<Update label="March 27, 2026">
## v1.13.0a2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a2)

## What Changed

### Features
- Auto-update the deploy test repository during release
- Improve enterprise release resilience and user experience

### Documentation
- Update changelog and version for v1.13.0a1

## Contributors

@greysonlalonde

</Update>

<Update label="March 27, 2026">
## v1.13.0a1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a1)

## What Changed

### Bug Fixes
- Fix broken links in the docs workflow by pinning Node to LTS 22
- Clear the uv cache for freshly published packages in the enterprise release

### Documentation
- Add a comprehensive RBAC permissions matrix and deployment guide
- Update changelog and version for v1.12.2

## Contributors

@greysonlalonde, @iris-clawd, @joaomdmoura

</Update>

<Update label="March 25, 2026">
## v1.12.2

@@ -152,4 +152,4 @@ __all__ = [
    "wrap_file_source",
]

__version__ = "1.12.2"
__version__ = "1.13.0rc1"

@@ -11,7 +11,7 @@ dependencies = [
    "pytube~=15.0.0",
    "requests~=2.32.5",
    "docker~=7.1.0",
    "crewai==1.12.2",
    "crewai==1.13.0rc1",
    "tiktoken~=0.8.0",
    "beautifulsoup4~=4.13.4",
    "python-docx~=1.2.0",

@@ -309,4 +309,4 @@ __all__ = [
    "ZapierActionTools",
]

__version__ = "1.12.2"
__version__ = "1.13.0rc1"

@@ -54,7 +54,7 @@ Repository = "https://github.com/crewAIInc/crewAI"

[project.optional-dependencies]
tools = [
    "crewai-tools==1.12.2",
    "crewai-tools==1.13.0rc1",
]
embeddings = [
    "tiktoken~=0.8.0"

@@ -42,7 +42,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:

_suppress_pydantic_deprecation_warnings()

__version__ = "1.12.2"
__version__ = "1.13.0rc1"
_telemetry_submitted = False

@@ -73,6 +73,7 @@ class PlusAPI:
        description: str | None,
        encoded_file: str,
        available_exports: list[dict[str, Any]] | None = None,
        tools_metadata: list[dict[str, Any]] | None = None,
    ) -> httpx.Response:
        params = {
            "handle": handle,
@@ -81,6 +82,9 @@ class PlusAPI:
            "file": encoded_file,
            "description": description,
            "available_exports": available_exports,
            "tools_metadata": {"package": handle, "tools": tools_metadata}
            if tools_metadata is not None
            else None,
        }
        return self._make_request("POST", f"{self.TOOLS_RESOURCE}", json=params)

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
    "crewai[tools]==1.12.2"
    "crewai[tools]==1.13.0rc1"
]

[project.scripts]

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
    "crewai[tools]==1.12.2"
    "crewai[tools]==1.13.0rc1"
]

[project.scripts]

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
    "crewai[tools]==1.12.2"
    "crewai[tools]==1.13.0rc1"
]

[tool.crewai]

@@ -17,6 +17,7 @@ from crewai.cli.constants import DEFAULT_CREWAI_ENTERPRISE_URL
from crewai.cli.utils import (
    build_env_with_tool_repository_credentials,
    extract_available_exports,
    extract_tools_metadata,
    get_project_description,
    get_project_name,
    get_project_version,
@@ -101,6 +102,18 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
        console.print(
            f"[green]Found these tools to publish: {', '.join([e['name'] for e in available_exports])}[/green]"
        )

        console.print("[bold blue]Extracting tool metadata...[/bold blue]")
        try:
            tools_metadata = extract_tools_metadata()
        except Exception as e:
            console.print(
                f"[yellow]Warning: Could not extract tool metadata: {e}[/yellow]\n"
                f"Publishing will continue without detailed metadata."
            )
            tools_metadata = []

        self._print_tools_preview(tools_metadata)
        self._print_current_organization()

        with tempfile.TemporaryDirectory() as temp_build_dir:
@@ -118,7 +131,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
                "Project build failed. Please ensure that the command `uv build --sdist` completes successfully.",
                style="bold red",
            )
            raise SystemExit
            raise SystemExit(1)

        tarball_path = os.path.join(temp_build_dir, tarball_filename)
        with open(tarball_path, "rb") as file:
@@ -134,6 +147,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
            description=project_description,
            encoded_file=f"data:application/x-gzip;base64,{encoded_tarball}",
            available_exports=available_exports,
            tools_metadata=tools_metadata,
        )

        self._validate_response(publish_response)
@@ -246,6 +260,55 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
            )
            raise SystemExit

    def _print_tools_preview(self, tools_metadata: list[dict[str, Any]]) -> None:
        if not tools_metadata:
            console.print("[yellow]No tool metadata extracted.[/yellow]")
            return

        console.print(
            f"\n[bold]Tools to be published ({len(tools_metadata)}):[/bold]\n"
        )

        for tool in tools_metadata:
            console.print(f" [bold cyan]{tool.get('name', 'Unknown')}[/bold cyan]")
            if tool.get("module"):
                console.print(f" Module: {tool.get('module')}")
            console.print(f" Name: {tool.get('humanized_name', 'N/A')}")
            console.print(
                f" Description: {tool.get('description', 'N/A')[:80]}{'...' if len(tool.get('description', '')) > 80 else ''}"
            )

            init_params = tool.get("init_params_schema", {}).get("properties", {})
            if init_params:
                required = tool.get("init_params_schema", {}).get("required", [])
                console.print(" Init parameters:")
                for param_name, param_info in init_params.items():
                    param_type = param_info.get("type", "any")
                    is_required = param_name in required
                    req_marker = "[red]*[/red]" if is_required else ""
                    default = (
                        f" = {param_info['default']}" if "default" in param_info else ""
                    )
                    console.print(
                        f" - {param_name}: {param_type}{default} {req_marker}"
                    )

            env_vars = tool.get("env_vars", [])
            if env_vars:
                console.print(" Environment variables:")
                for env_var in env_vars:
                    req_marker = "[red]*[/red]" if env_var.get("required") else ""
                    default = (
                        f" (default: {env_var['default']})"
                        if env_var.get("default")
                        else ""
                    )
                    console.print(
                        f" - {env_var['name']}: {env_var.get('description', 'N/A')}{default} {req_marker}"
                    )

        console.print()

    def _print_current_organization(self) -> None:
        settings = Settings()
        if settings.org_uuid:

@@ -1,10 +1,15 @@
from functools import reduce
from collections.abc import Generator, Mapping
from contextlib import contextmanager
from functools import lru_cache, reduce
import hashlib
import importlib.util
import inspect
from inspect import getmro, isclass, isfunction, ismethod
import os
from pathlib import Path
import shutil
import sys
import types
from typing import Any, cast, get_type_hints

import click
@@ -544,43 +549,62 @@ def build_env_with_tool_repository_credentials(
    return env


@contextmanager
def _load_module_from_file(
    init_file: Path, module_name: str | None = None
) -> Generator[types.ModuleType | None, None, None]:
    """
    Context manager for loading a module from file with automatic cleanup.

    Yields the loaded module or None if loading fails.
    """
    if module_name is None:
        module_name = (
            f"temp_module_{hashlib.sha256(str(init_file).encode()).hexdigest()[:8]}"
        )

    spec = importlib.util.spec_from_file_location(module_name, init_file)
    if not spec or not spec.loader:
        yield None
        return

    module = importlib.util.module_from_spec(spec)
    sys.modules[module_name] = module

    try:
        spec.loader.exec_module(module)
        yield module
    finally:
        sys.modules.pop(module_name, None)


def _load_tools_from_init(init_file: Path) -> list[dict[str, Any]]:
    """
    Load and validate tools from a given __init__.py file.
    """
    spec = importlib.util.spec_from_file_location("temp_module", init_file)

    if not spec or not spec.loader:
        return []

    module = importlib.util.module_from_spec(spec)
    sys.modules["temp_module"] = module

    try:
        spec.loader.exec_module(module)
    with _load_module_from_file(init_file) as module:
        if module is None:
            return []

        if not hasattr(module, "__all__"):
            console.print(
                f"Warning: No __all__ defined in {init_file}",
                style="bold yellow",
            )
            raise SystemExit(1)

        return [
            {
                "name": name,
            }
            for name in module.__all__
            if hasattr(module, name) and is_valid_tool(getattr(module, name))
        ]
            if not hasattr(module, "__all__"):
                console.print(
                    f"Warning: No __all__ defined in {init_file}",
                    style="bold yellow",
                )
                raise SystemExit(1)

            return [
                {"name": name}
                for name in module.__all__
                if hasattr(module, name) and is_valid_tool(getattr(module, name))
            ]
    except SystemExit:
        raise
    except Exception as e:
        console.print(f"[red]Warning: Could not load {init_file}: {e!s}[/red]")
        raise SystemExit(1) from e

    finally:
        sys.modules.pop("temp_module", None)


def _print_no_tools_warning() -> None:
    """
@@ -610,3 +634,242 @@ def _print_no_tools_warning() -> None:
        " # ... implementation\n"
        " return result\n"
    )


def extract_tools_metadata(dir_path: str = "src") -> list[dict[str, Any]]:
    """
    Extract rich metadata from tool classes in the project.

    Returns a list of tool metadata dictionaries containing:
    - name: Class name
    - humanized_name: From name field default
    - description: From description field default
    - run_params_schema: JSON Schema for _run() params (from args_schema)
    - init_params_schema: JSON Schema for __init__ params (filtered)
    - env_vars: List of environment variable dicts
    """
    tools_metadata: list[dict[str, Any]] = []

    for init_file in Path(dir_path).glob("**/__init__.py"):
        tools = _extract_tool_metadata_from_init(init_file)
        tools_metadata.extend(tools)

    return tools_metadata


def _extract_tool_metadata_from_init(init_file: Path) -> list[dict[str, Any]]:
    """
    Load module from init file and extract metadata from valid tool classes.
    """
    from crewai.tools.base_tool import BaseTool

    try:
        with _load_module_from_file(init_file) as module:
            if module is None:
                return []

            exported_names = getattr(module, "__all__", None)
            if not exported_names:
                return []

            tools_metadata = []
            for name in exported_names:
                obj = getattr(module, name, None)
                if obj is None or not (
                    inspect.isclass(obj) and issubclass(obj, BaseTool)
                ):
                    continue
                if tool_info := _extract_single_tool_metadata(obj):
                    tools_metadata.append(tool_info)

            return tools_metadata
    except Exception as e:
        console.print(
            f"[yellow]Warning: Could not extract metadata from {init_file}: {e}[/yellow]"
        )
        return []


def _extract_single_tool_metadata(tool_class: type) -> dict[str, Any] | None:
    """
    Extract metadata from a single tool class.
    """
    try:
        core_schema = cast(Any, tool_class).__pydantic_core_schema__
        if not core_schema:
            return None

        schema = _unwrap_schema(core_schema)
        fields = schema.get("schema", {}).get("fields", {})

        try:
            file_path = inspect.getfile(tool_class)
            relative_path = Path(file_path).relative_to(Path.cwd())
            module_path = relative_path.with_suffix("")
            if module_path.parts[0] == "src":
                module_path = Path(*module_path.parts[1:])
            if module_path.name == "__init__":
                module_path = module_path.parent
            module = ".".join(module_path.parts)
        except (TypeError, ValueError):
            module = tool_class.__module__

        return {
            "name": tool_class.__name__,
            "module": module,
            "humanized_name": _extract_field_default(
                fields.get("name"), fallback=tool_class.__name__
            ),
            "description": str(
                _extract_field_default(fields.get("description"))
            ).strip(),
            "run_params_schema": _extract_run_params_schema(fields.get("args_schema")),
            "init_params_schema": _extract_init_params_schema(tool_class),
            "env_vars": _extract_env_vars(fields.get("env_vars")),
        }

    except Exception:
        return None


def _unwrap_schema(schema: Mapping[str, Any] | dict[str, Any]) -> dict[str, Any]:
    """
    Unwrap nested schema structures to get to the actual schema definition.
    """
    result: dict[str, Any] = dict(schema)
    while (
        result.get("type")
        in {"function-after", "function-before", "function-wrap", "default"}
        and "schema" in result
    ):
        result = dict(result["schema"])
    if result.get("type") == "definitions" and "schema" in result:
        result = dict(result["schema"])
    return result


def _extract_field_default(
    field: dict[str, Any] | None, fallback: str | list[Any] = ""
) -> str | list[Any] | int:
    """
    Extract the default value from a field schema.
    """
    if not field:
        return fallback

    schema = field.get("schema", {})
    default = schema.get("default")
    return default if isinstance(default, (list, str, int)) else fallback


@lru_cache(maxsize=1)
def _get_schema_generator() -> type:
    """Get a SchemaGenerator that omits non-serializable defaults."""
    from pydantic.json_schema import GenerateJsonSchema
    from pydantic_core import PydanticOmit

    class SchemaGenerator(GenerateJsonSchema):
        def handle_invalid_for_json_schema(
            self, schema: Any, error_info: Any
        ) -> dict[str, Any]:
            raise PydanticOmit

    return SchemaGenerator


def _extract_run_params_schema(
    args_schema_field: dict[str, Any] | None,
) -> dict[str, Any]:
    """
    Extract JSON Schema for the tool's run parameters from args_schema field.
    """
    from pydantic import BaseModel

    if not args_schema_field:
        return {}

    args_schema_class = args_schema_field.get("schema", {}).get("default")
    if not (
        inspect.isclass(args_schema_class) and issubclass(args_schema_class, BaseModel)
    ):
        return {}

    try:
        return args_schema_class.model_json_schema(
            schema_generator=_get_schema_generator()
        )
    except Exception:
        return {}


_IGNORED_INIT_PARAMS = frozenset(
    {
        "name",
        "description",
        "env_vars",
        "args_schema",
        "description_updated",
        "cache_function",
        "result_as_answer",
        "max_usage_count",
        "current_usage_count",
        "package_dependencies",
    }
)


def _extract_init_params_schema(tool_class: type) -> dict[str, Any]:
    """
    Extract JSON Schema for the tool's __init__ parameters, filtering out base fields.
    """
    try:
        json_schema: dict[str, Any] = cast(Any, tool_class).model_json_schema(
            schema_generator=_get_schema_generator(), mode="serialization"
        )
        filtered_properties = {
            key: value
            for key, value in json_schema.get("properties", {}).items()
            if key not in _IGNORED_INIT_PARAMS
        }
        json_schema["properties"] = filtered_properties
        if "required" in json_schema:
            json_schema["required"] = [
                key for key in json_schema["required"] if key in filtered_properties
            ]
        return json_schema
    except Exception:
        return {}


def _extract_env_vars(env_vars_field: dict[str, Any] | None) -> list[dict[str, Any]]:
    """
    Extract environment variable definitions from env_vars field.
    """
    from crewai.tools.base_tool import EnvVar

    if not env_vars_field:
        return []

    schema = env_vars_field.get("schema", {})
    default = schema.get("default")
    if default is None:
        default_factory = schema.get("default_factory")
        if callable(default_factory):
            try:
                default = default_factory()
            except Exception:
                default = []

    if not isinstance(default, list):
        return []

    return [
        {
            "name": env_var.name,
            "description": env_var.description,
            "required": env_var.required,
            "default": env_var.default,
        }
        for env_var in default
        if isinstance(env_var, EnvVar)
    ]

@@ -85,6 +85,8 @@ class CrewAIEventsBus:
    _shutting_down: bool
    _pending_futures: set[Future[Any]]
    _futures_lock: threading.Lock
    _executor_initialized: bool
    _has_pending_events: bool

    def __new__(cls) -> Self:
        """Create or return the singleton instance.
@@ -102,8 +104,9 @@ class CrewAIEventsBus:
    def _initialize(self) -> None:
        """Initialize the event bus internal state.

        Creates handler dictionaries and starts a dedicated background
        event loop for async handler execution.
        Creates handler dictionaries. The thread pool executor and event loop
        are lazily initialized on first emit() to avoid overhead when events
        are never emitted.
        """
        self._shutting_down = False
        self._rwlock = RWLock()
@@ -115,19 +118,37 @@ class CrewAIEventsBus:
            type[BaseEvent], dict[Handler, list[Depends[Any]]]
        ] = {}
        self._execution_plan_cache: dict[type[BaseEvent], ExecutionPlan] = {}
        self._sync_executor = ThreadPoolExecutor(
            max_workers=10,
            thread_name_prefix="CrewAISyncHandler",
        )
        self._console = ConsoleFormatter()
        # Lazy initialization flags - executor and loop created on first emit
        self._executor_initialized = False
        self._has_pending_events = False

        self._loop = asyncio.new_event_loop()
        self._loop_thread = threading.Thread(
            target=self._run_loop,
            name="CrewAIEventsLoop",
            daemon=True,
        )
        self._loop_thread.start()
    def _ensure_executor_initialized(self) -> None:
        """Lazily initialize the thread pool executor and event loop.

        Called on first emit() to avoid startup overhead when events are never used.
        Thread-safe via double-checked locking.
        """
        if self._executor_initialized:
            return

        with self._instance_lock:
            if self._executor_initialized:
                return

            self._sync_executor = ThreadPoolExecutor(
                max_workers=10,
                thread_name_prefix="CrewAISyncHandler",
            )

            self._loop = asyncio.new_event_loop()
            self._loop_thread = threading.Thread(
                target=self._run_loop,
                name="CrewAIEventsLoop",
                daemon=True,
            )
            self._loop_thread.start()
            self._executor_initialized = True

    def _track_future(self, future: Future[Any]) -> Future[Any]:
        """Track a future and set up automatic cleanup when it completes.
@@ -396,6 +417,11 @@ class CrewAIEventsBus:
            ... await asyncio.wrap_future(future)  # In async test
            ... # or future.result(timeout=5.0) in sync code
        """
        # Lazily initialize executor and event loop on first emit
        self._ensure_executor_initialized()
        # Track that we have pending events for flush optimization
        self._has_pending_events = True

        event.previous_event_id = get_last_event_id()
        event.triggered_by_event_id = get_triggering_event_id()
        event.emission_sequence = get_next_emission_sequence()
@@ -474,6 +500,10 @@ class CrewAIEventsBus:
        Returns:
            True if all handlers completed, False if timeout occurred.
        """
        # Skip flush entirely if no events were ever emitted
        if not self._has_pending_events:
            return True

        with self._futures_lock:
            futures_to_wait = list(self._pending_futures)

@@ -629,6 +659,9 @@ class CrewAIEventsBus:

        with self._rwlock.w_locked():
            self._shutting_down = True
            # Check if executor was ever initialized (lazy init optimization)
            if not self._executor_initialized:
                return
        loop = getattr(self, "_loop", None)

        if loop is None or loop.is_closed():

@@ -17,7 +17,10 @@ from crewai.events.listeners.tracing.first_time_trace_handler import (
from crewai.events.listeners.tracing.trace_batch_manager import TraceBatchManager
from crewai.events.listeners.tracing.types import TraceEvent
from crewai.events.listeners.tracing.utils import (
    is_tracing_enabled,
    safe_serialize_to_dict,
    should_auto_collect_first_time_traces,
    should_enable_tracing,
)
from crewai.events.types.a2a_events import (
    A2AAgentCardFetchedEvent,
@@ -198,6 +201,12 @@ class TraceCollectionListener(BaseEventListener):
        if self._listeners_setup:
            return

        # Skip registration entirely if tracing is disabled and not first-time user
        # This avoids overhead of 50+ handler registrations when tracing won't be used
        if not should_enable_tracing() and not should_auto_collect_first_time_traces():
            self._listeners_setup = True
            return

        self._register_env_event_handlers(crewai_event_bus)
        self._register_flow_event_handlers(crewai_event_bus)
        self._register_context_event_handlers(crewai_event_bus)

@@ -481,6 +481,17 @@ def should_auto_collect_first_time_traces() -> bool:
    return is_first_execution()


def _is_interactive_terminal() -> bool:
    """Check if stdin is an interactive terminal.

    Returns False in non-interactive contexts (CI, API servers, Docker, etc.)
    to avoid blocking on prompts that no one can respond to.
    """
    import sys

    return sys.stdin.isatty()


def prompt_user_for_trace_viewing(timeout_seconds: int = 20) -> bool:
    """
    Prompt user if they want to see their traces with timeout.
@@ -492,6 +503,11 @@ def prompt_user_for_trace_viewing(timeout_seconds: int = 20) -> bool:
    if should_suppress_tracing_messages():
        return False

    # Skip prompt in non-interactive contexts (CI, API servers, Docker, etc.)
    # This avoids blocking for 20 seconds when no one can respond
    if not _is_interactive_terminal():
        return False

    try:
        import threading

@@ -753,7 +753,7 @@ class LLM(BaseLLM):
|
||||
"temperature": self.temperature,
|
||||
"top_p": self.top_p,
|
||||
"n": self.n,
|
||||
"stop": self.stop or None,
|
||||
"stop": (self.stop or None) if self.supports_stop_words() else None,
|
||||
"max_tokens": self.max_tokens or self.max_completion_tokens,
|
||||
"presence_penalty": self.presence_penalty,
|
||||
"frequency_penalty": self.frequency_penalty,
|
||||
@@ -1825,9 +1825,11 @@ class LLM(BaseLLM):
            # whether to summarize the content or abort based on the respect_context_window flag
            raise
        except Exception as e:
-           unsupported_stop = "Unsupported parameter" in str(
-               e
-           ) and "'stop'" in str(e)
+           error_str = str(e)
+           unsupported_stop = "'stop'" in error_str and (
+               "Unsupported parameter" in error_str
+               or "does not support parameters" in error_str
+           )

            if unsupported_stop:
                if (
@@ -1961,9 +1963,11 @@ class LLM(BaseLLM):
        except LLMContextLengthExceededError:
            raise
        except Exception as e:
-           unsupported_stop = "Unsupported parameter" in str(
-               e
-           ) and "'stop'" in str(e)
+           error_str = str(e)
+           unsupported_stop = "'stop'" in error_str and (
+               "Unsupported parameter" in error_str
+               or "does not support parameters" in error_str
+           )

            if unsupported_stop:
                if (
@@ -2263,6 +2267,10 @@ class LLM(BaseLLM):
        Note: This method is only used by the litellm fallback path.
        Native providers override this method with their own implementation.
        """
        model_lower = self.model.lower() if self.model else ""
        if "gpt-5" in model_lower:
            return False

        if not LITELLM_AVAILABLE or get_supported_openai_params is None:
            # When litellm is not available, assume stop words are supported
            return True
@@ -2245,6 +2245,9 @@ class OpenAICompletion(BaseLLM):

    def supports_stop_words(self) -> bool:
        """Check if the model supports stop words."""
        model_lower = self.model.lower() if self.model else ""
        if "gpt-5" in model_lower:
            return False
        return not self.is_o1_model

    def get_context_window_size(self) -> int:
@@ -0,0 +1,110 @@
interactions:
- request:
    body: '{"messages":[{"role":"user","content":"What is the capital of France?"}],"model":"gpt-5"}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '89'
      content-type:
      - application/json
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-raw-response:
      - 'true'
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.13.2
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-DO4LcSpy72yIXCYSIVOQEXWNXydgn\",\n \"object\":
        \"chat.completion\",\n \"created\": 1774628956,\n \"model\": \"gpt-5-2025-08-07\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"Paris.\",\n \"refusal\": null,\n
        \ \"annotations\": []\n },\n \"finish_reason\": \"stop\"\n
        \ }\n ],\n \"usage\": {\n \"prompt_tokens\": 13,\n \"completion_tokens\":
        11,\n \"total_tokens\": 24,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
        0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": null\n}\n"
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-Ray:
      - 9e2fc5dce85582fb-GIG
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Fri, 27 Mar 2026 16:29:17 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      content-length:
      - '772'
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '1343'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      set-cookie:
      - SET-COOKIE-XXX
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
version: 1
@@ -136,6 +136,7 @@ class TestPlusAPI(unittest.TestCase):
            "file": encoded_file,
            "description": description,
            "available_exports": None,
            "tools_metadata": None,
        }
        mock_make_request.assert_called_once_with(
            "POST", "/crewai_plus/api/v1/tools", json=params
@@ -173,6 +174,7 @@ class TestPlusAPI(unittest.TestCase):
            "file": encoded_file,
            "description": description,
            "available_exports": None,
            "tools_metadata": None,
        }

        self.assert_request_with_org_id(
@@ -201,6 +203,48 @@ class TestPlusAPI(unittest.TestCase):
            "file": encoded_file,
            "description": description,
            "available_exports": None,
            "tools_metadata": None,
        }
        mock_make_request.assert_called_once_with(
            "POST", "/crewai_plus/api/v1/tools", json=params
        )
        self.assertEqual(response, mock_response)

    @patch("crewai.cli.plus_api.PlusAPI._make_request")
    def test_publish_tool_with_tools_metadata(self, mock_make_request):
        mock_response = MagicMock()
        mock_make_request.return_value = mock_response
        handle = "test_tool_handle"
        public = True
        version = "1.0.0"
        description = "Test tool description"
        encoded_file = "encoded_test_file"
        available_exports = [{"name": "MyTool"}]
        tools_metadata = [
            {
                "name": "MyTool",
                "humanized_name": "my_tool",
                "description": "A test tool",
                "run_params_schema": {"type": "object", "properties": {}},
                "init_params_schema": {"type": "object", "properties": {}},
                "env_vars": [{"name": "API_KEY", "description": "API key", "required": True, "default": None}],
            }
        ]

        response = self.api.publish_tool(
            handle, public, version, description, encoded_file,
            available_exports=available_exports,
            tools_metadata=tools_metadata,
        )

        params = {
            "handle": handle,
            "public": public,
            "version": version,
            "file": encoded_file,
            "description": description,
            "available_exports": available_exports,
            "tools_metadata": {"package": handle, "tools": tools_metadata},
        }
        mock_make_request.assert_called_once_with(
            "POST", "/crewai_plus/api/v1/tools", json=params
@@ -363,3 +363,290 @@ def test_get_crews_ignores_template_directories(
        utils.get_crews()

    assert not template_crew_detected


# Tests for extract_tools_metadata


def test_extract_tools_metadata_empty_project(temp_project_dir):
    """Test that extract_tools_metadata returns empty list for empty project."""
    metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
    assert metadata == []


def test_extract_tools_metadata_no_init_file(temp_project_dir):
    """Test that extract_tools_metadata returns empty list when no __init__.py exists."""
    (temp_project_dir / "some_file.py").write_text("print('hello')")
    metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
    assert metadata == []


def test_extract_tools_metadata_empty_init_file(temp_project_dir):
    """Test that extract_tools_metadata returns empty list for empty __init__.py."""
    create_init_file(temp_project_dir, "")
    metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
    assert metadata == []


def test_extract_tools_metadata_no_all_variable(temp_project_dir):
    """Test that extract_tools_metadata returns empty list when __all__ is not defined."""
    create_init_file(
        temp_project_dir,
        "from crewai.tools import BaseTool\n\nclass MyTool(BaseTool):\n pass",
    )
    metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
    assert metadata == []


def test_extract_tools_metadata_valid_base_tool_class(temp_project_dir):
    """Test that extract_tools_metadata extracts metadata from a valid BaseTool class."""
    create_init_file(
        temp_project_dir,
        """from crewai.tools import BaseTool

class MyTool(BaseTool):
    name: str = "my_tool"
    description: str = "A test tool"

__all__ = ['MyTool']
""",
    )
    metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
    assert len(metadata) == 1
    assert metadata[0]["name"] == "MyTool"
    assert metadata[0]["humanized_name"] == "my_tool"
    assert metadata[0]["description"] == "A test tool"


def test_extract_tools_metadata_with_args_schema(temp_project_dir):
    """Test that extract_tools_metadata extracts run_params_schema from args_schema."""
    create_init_file(
        temp_project_dir,
        """from crewai.tools import BaseTool
from pydantic import BaseModel

class MyToolInput(BaseModel):
    query: str
    limit: int = 10

class MyTool(BaseTool):
    name: str = "my_tool"
    description: str = "A test tool"
    args_schema: type[BaseModel] = MyToolInput

__all__ = ['MyTool']
""",
    )
    metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
    assert len(metadata) == 1
    assert metadata[0]["name"] == "MyTool"
    run_params = metadata[0]["run_params_schema"]
    assert "properties" in run_params
    assert "query" in run_params["properties"]
    assert "limit" in run_params["properties"]


def test_extract_tools_metadata_with_env_vars(temp_project_dir):
    """Test that extract_tools_metadata extracts env_vars."""
    create_init_file(
        temp_project_dir,
        """from crewai.tools import BaseTool
from crewai.tools.base_tool import EnvVar

class MyTool(BaseTool):
    name: str = "my_tool"
    description: str = "A test tool"
    env_vars: list[EnvVar] = [
        EnvVar(name="MY_API_KEY", description="API key for service", required=True),
        EnvVar(name="MY_OPTIONAL_VAR", description="Optional var", required=False, default="default_value"),
    ]

__all__ = ['MyTool']
""",
    )
    metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
    assert len(metadata) == 1
    env_vars = metadata[0]["env_vars"]
    assert len(env_vars) == 2
    assert env_vars[0]["name"] == "MY_API_KEY"
    assert env_vars[0]["description"] == "API key for service"
    assert env_vars[0]["required"] is True
    assert env_vars[1]["name"] == "MY_OPTIONAL_VAR"
    assert env_vars[1]["required"] is False
    assert env_vars[1]["default"] == "default_value"


def test_extract_tools_metadata_with_env_vars_field_default_factory(temp_project_dir):
    """Test that extract_tools_metadata extracts env_vars declared with Field(default_factory=...)."""
    create_init_file(
        temp_project_dir,
        """from crewai.tools import BaseTool
from crewai.tools.base_tool import EnvVar
from pydantic import Field

class MyTool(BaseTool):
    name: str = "my_tool"
    description: str = "A test tool"
    env_vars: list[EnvVar] = Field(
        default_factory=lambda: [
            EnvVar(name="MY_TOOL_API", description="API token for my tool", required=True),
        ]
    )

__all__ = ['MyTool']
""",
    )
    metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
    assert len(metadata) == 1
    env_vars = metadata[0]["env_vars"]
    assert len(env_vars) == 1
    assert env_vars[0]["name"] == "MY_TOOL_API"
    assert env_vars[0]["description"] == "API token for my tool"
    assert env_vars[0]["required"] is True


def test_extract_tools_metadata_with_custom_init_params(temp_project_dir):
    """Test that extract_tools_metadata extracts init_params_schema with custom params."""
    create_init_file(
        temp_project_dir,
        """from crewai.tools import BaseTool

class MyTool(BaseTool):
    name: str = "my_tool"
    description: str = "A test tool"
    api_endpoint: str = "https://api.example.com"
    timeout: int = 30

__all__ = ['MyTool']
""",
    )
    metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
    assert len(metadata) == 1
    init_params = metadata[0]["init_params_schema"]
    assert "properties" in init_params
    # Custom params should be included
    assert "api_endpoint" in init_params["properties"]
    assert "timeout" in init_params["properties"]
    # Base params should be filtered out
    assert "name" not in init_params["properties"]
    assert "description" not in init_params["properties"]


def test_extract_tools_metadata_multiple_tools(temp_project_dir):
    """Test that extract_tools_metadata extracts metadata from multiple tools."""
    create_init_file(
        temp_project_dir,
        """from crewai.tools import BaseTool

class FirstTool(BaseTool):
    name: str = "first_tool"
    description: str = "First test tool"

class SecondTool(BaseTool):
    name: str = "second_tool"
    description: str = "Second test tool"

__all__ = ['FirstTool', 'SecondTool']
""",
    )
    metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
    assert len(metadata) == 2
    names = [m["name"] for m in metadata]
    assert "FirstTool" in names
    assert "SecondTool" in names


def test_extract_tools_metadata_multiple_init_files(temp_project_dir):
    """Test that extract_tools_metadata extracts metadata from multiple __init__.py files."""
    # Create tool in root __init__.py
    create_init_file(
        temp_project_dir,
        """from crewai.tools import BaseTool

class RootTool(BaseTool):
    name: str = "root_tool"
    description: str = "Root tool"

__all__ = ['RootTool']
""",
    )

    # Create nested package with another tool
    nested_dir = temp_project_dir / "nested"
    nested_dir.mkdir()
    create_init_file(
        nested_dir,
        """from crewai.tools import BaseTool

class NestedTool(BaseTool):
    name: str = "nested_tool"
    description: str = "Nested tool"

__all__ = ['NestedTool']
""",
    )

    metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
    assert len(metadata) == 2
    names = [m["name"] for m in metadata]
    assert "RootTool" in names
    assert "NestedTool" in names


def test_extract_tools_metadata_ignores_non_tool_exports(temp_project_dir):
    """Test that extract_tools_metadata ignores non-BaseTool exports."""
    create_init_file(
        temp_project_dir,
        """from crewai.tools import BaseTool

class MyTool(BaseTool):
    name: str = "my_tool"
    description: str = "A test tool"

def not_a_tool():
    pass

SOME_CONSTANT = "value"

__all__ = ['MyTool', 'not_a_tool', 'SOME_CONSTANT']
""",
    )
    metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
    assert len(metadata) == 1
    assert metadata[0]["name"] == "MyTool"


def test_extract_tools_metadata_import_error_returns_empty(temp_project_dir):
    """Test that extract_tools_metadata returns empty list on import error."""
    create_init_file(
        temp_project_dir,
        """from nonexistent_module import something

class MyTool(BaseTool):
    pass

__all__ = ['MyTool']
""",
    )
    # Should not raise, just return empty list
    metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
    assert metadata == []


def test_extract_tools_metadata_syntax_error_returns_empty(temp_project_dir):
    """Test that extract_tools_metadata returns empty list on syntax error."""
    create_init_file(
        temp_project_dir,
        """from crewai.tools import BaseTool

class MyTool(BaseTool):
    # Missing closing parenthesis
    def __init__(self, name:
        pass

__all__ = ['MyTool']
""",
    )
    # Should not raise, just return empty list
    metadata = utils.extract_tools_metadata(dir_path=str(temp_project_dir))
    assert metadata == []
@@ -185,9 +185,14 @@ def test_publish_when_not_in_sync(mock_is_synced, capsys, tool_command):
    "crewai.cli.tools.main.extract_available_exports",
    return_value=[{"name": "SampleTool"}],
)
@patch(
    "crewai.cli.tools.main.extract_tools_metadata",
    return_value=[{"name": "SampleTool", "humanized_name": "sample_tool", "description": "A sample tool", "run_params_schema": {}, "init_params_schema": {}, "env_vars": []}],
)
@patch("crewai.cli.tools.main.ToolCommand._print_current_organization")
def test_publish_when_not_in_sync_and_force(
    mock_print_org,
    mock_tools_metadata,
    mock_available_exports,
    mock_is_synced,
    mock_publish,
@@ -222,6 +227,7 @@ def test_publish_when_not_in_sync_and_force(
        description="A sample tool",
        encoded_file=unittest.mock.ANY,
        available_exports=[{"name": "SampleTool"}],
        tools_metadata=[{"name": "SampleTool", "humanized_name": "sample_tool", "description": "A sample tool", "run_params_schema": {}, "init_params_schema": {}, "env_vars": []}],
    )
    mock_print_org.assert_called_once()

@@ -242,7 +248,12 @@ def test_publish_when_not_in_sync_and_force(
    "crewai.cli.tools.main.extract_available_exports",
    return_value=[{"name": "SampleTool"}],
)
@patch(
    "crewai.cli.tools.main.extract_tools_metadata",
    return_value=[{"name": "SampleTool", "humanized_name": "sample_tool", "description": "A sample tool", "run_params_schema": {}, "init_params_schema": {}, "env_vars": []}],
)
def test_publish_success(
    mock_tools_metadata,
    mock_available_exports,
    mock_is_synced,
    mock_publish,
@@ -277,6 +288,7 @@ def test_publish_success(
        description="A sample tool",
        encoded_file=unittest.mock.ANY,
        available_exports=[{"name": "SampleTool"}],
        tools_metadata=[{"name": "SampleTool", "humanized_name": "sample_tool", "description": "A sample tool", "run_params_schema": {}, "init_params_schema": {}, "env_vars": []}],
    )


@@ -295,7 +307,12 @@ def test_publish_success(
    "crewai.cli.tools.main.extract_available_exports",
    return_value=[{"name": "SampleTool"}],
)
@patch(
    "crewai.cli.tools.main.extract_tools_metadata",
    return_value=[{"name": "SampleTool", "humanized_name": "sample_tool", "description": "A sample tool", "run_params_schema": {}, "init_params_schema": {}, "env_vars": []}],
)
def test_publish_failure(
    mock_tools_metadata,
    mock_available_exports,
    mock_publish,
    mock_open,
@@ -336,7 +353,12 @@ def test_publish_failure(
    "crewai.cli.tools.main.extract_available_exports",
    return_value=[{"name": "SampleTool"}],
)
@patch(
    "crewai.cli.tools.main.extract_tools_metadata",
    return_value=[{"name": "SampleTool", "humanized_name": "sample_tool", "description": "A sample tool", "run_params_schema": {}, "init_params_schema": {}, "env_vars": []}],
)
def test_publish_api_error(
    mock_tools_metadata,
    mock_available_exports,
    mock_publish,
    mock_open,
@@ -362,6 +384,63 @@ def test_publish_api_error(
    mock_publish.assert_called_once()


@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
@patch("crewai.cli.tools.main.subprocess.run")
@patch("crewai.cli.tools.main.os.listdir", return_value=["sample-tool-1.0.0.tar.gz"])
@patch(
    "crewai.cli.tools.main.open",
    new_callable=unittest.mock.mock_open,
    read_data=b"sample tarball content",
)
@patch("crewai.cli.plus_api.PlusAPI.publish_tool")
@patch("crewai.cli.tools.main.git.Repository.is_synced", return_value=True)
@patch(
    "crewai.cli.tools.main.extract_available_exports",
    return_value=[{"name": "SampleTool"}],
)
@patch(
    "crewai.cli.tools.main.extract_tools_metadata",
    side_effect=Exception("Failed to extract metadata"),
)
def test_publish_metadata_extraction_failure_continues_with_warning(
    mock_tools_metadata,
    mock_available_exports,
    mock_is_synced,
    mock_publish,
    mock_open,
    mock_listdir,
    mock_subprocess_run,
    mock_get_project_description,
    mock_get_project_version,
    mock_get_project_name,
    capsys,
    tool_command,
):
    """Test that metadata extraction failure shows warning but continues publishing."""
    mock_publish_response = MagicMock()
    mock_publish_response.status_code = 200
    mock_publish_response.json.return_value = {"handle": "sample-tool"}
    mock_publish.return_value = mock_publish_response

    tool_command.publish(is_public=True)

    output = capsys.readouterr().out
    assert "Warning: Could not extract tool metadata" in output
    assert "Publishing will continue without detailed metadata" in output
    assert "No tool metadata extracted" in output
    mock_publish.assert_called_once_with(
        handle="sample-tool",
        is_public=True,
        version="1.0.0",
        description="A sample tool",
        encoded_file=unittest.mock.ANY,
        available_exports=[{"name": "SampleTool"}],
        tools_metadata=[],
    )


@patch("crewai.cli.tools.main.Settings")
def test_print_current_organization_with_org(mock_settings, capsys, tool_command):
    mock_settings_instance = MagicMock()
@@ -1523,6 +1523,69 @@ def test_openai_stop_words_not_applied_to_structured_output():
    assert "Observation:" in result.observation


def test_openai_gpt5_models_do_not_support_stop_words():
    """
    Test that GPT-5 family models do not support stop words via the API.
    GPT-5 models reject the 'stop' parameter, so stop words must be
    applied client-side only.
    """
    gpt5_models = [
        "gpt-5",
        "gpt-5-mini",
        "gpt-5-nano",
        "gpt-5-pro",
        "gpt-5.1",
        "gpt-5.1-chat",
        "gpt-5.2",
        "gpt-5.2-chat",
    ]

    for model_name in gpt5_models:
        llm = OpenAICompletion(model=model_name)
        assert llm.supports_stop_words() == False, (
            f"Expected {model_name} to NOT support stop words"
        )


def test_openai_non_gpt5_models_support_stop_words():
    """
    Test that non-GPT-5 models still support stop words normally.
    """
    supported_models = [
        "gpt-4o",
        "gpt-4o-mini",
        "gpt-4.1",
        "gpt-4.1-mini",
        "gpt-4-turbo",
    ]

    for model_name in supported_models:
        llm = OpenAICompletion(model=model_name)
        assert llm.supports_stop_words() == True, (
            f"Expected {model_name} to support stop words"
        )


def test_openai_gpt5_still_applies_stop_words_client_side():
    """
    Test that GPT-5 models still truncate responses at stop words client-side
    via _apply_stop_words(), even though they don't send 'stop' to the API.
    """
    llm = OpenAICompletion(
        model="gpt-5.2",
        stop=["Observation:", "Final Answer:"],
    )

    assert llm.supports_stop_words() == False

    response = "I need to search.\n\nAction: search\nObservation: Found results"
    result = llm._apply_stop_words(response)

    assert "Observation:" not in result
    assert "Found results" not in result
    assert "I need to search" in result


def test_openai_stop_words_still_applied_to_regular_responses():
    """
    Test that stop words ARE still applied for regular (non-structured) responses.
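The client-side behavior these tests exercise can be sketched as an earliest-match truncation. This `apply_stop_words` helper is an illustration of the idea only, not the actual `_apply_stop_words` implementation:

```python
def apply_stop_words(text: str, stop: list[str]) -> str:
    # Cut the text at the earliest occurrence of any stop sequence,
    # mimicking what a provider would do server-side with 'stop'.
    cut = len(text)
    for word in stop:
        idx = text.find(word)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut].rstrip()


response = "I need to search.\n\nAction: search\nObservation: Found results"
print(apply_stop_words(response, ["Observation:", "Final Answer:"]))
```

The text after `Observation:` (and the marker itself) is dropped, which is exactly what the assertions in the tests above check for.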
@@ -682,6 +682,126 @@ def test_llm_call_when_stop_is_unsupported_when_additional_drop_params_is_provid
    assert "Paris" in result


@pytest.mark.vcr()
def test_litellm_gpt5_call_succeeds_without_stop_error():
    """
    Integration test: GPT-5 call succeeds when stop words are configured,
    because stop is omitted from API params and applied client-side.
    """
    llm = LLM(model="gpt-5", stop=["Observation:"], is_litellm=True)
    result = llm.call("What is the capital of France?")
    assert isinstance(result, str)
    assert len(result) > 0


def test_litellm_gpt5_does_not_send_stop_in_params():
    """
    Test that the LiteLLM fallback path does not include 'stop' in API params
    for GPT-5.x models, since they reject it at the API level.
    """
    llm = LLM(model="openai/gpt-5.2", stop=["Observation:"], is_litellm=True)

    params = llm._prepare_completion_params(
        messages=[{"role": "user", "content": "Hello"}]
    )

    assert params.get("stop") is None, (
        "GPT-5.x models should not have 'stop' in API params"
    )


def test_litellm_non_gpt5_sends_stop_in_params():
    """
    Test that the LiteLLM fallback path still includes 'stop' in API params
    for models that support it.
    """
    llm = LLM(model="gpt-4o", stop=["Observation:"], is_litellm=True)

    params = llm._prepare_completion_params(
        messages=[{"role": "user", "content": "Hello"}]
    )

    assert params.get("stop") == ["Observation:"], (
        "Non-GPT-5 models should have 'stop' in API params"
    )


def test_litellm_retry_catches_litellm_unsupported_params_error(caplog):
    """
    Test that the retry logic catches LiteLLM's UnsupportedParamsError format
    ("does not support parameters") in addition to the OpenAI API format.
    """
    llm = LLM(model="openai/gpt-5.2", stop=["Observation:"], is_litellm=True)

    litellm_error = Exception(
        "litellm.UnsupportedParamsError: openai does not support parameters: "
        "['stop'], for model=openai/gpt-5.2."
    )

    call_count = 0

    try:
        import litellm
    except ImportError:
        pytest.skip("litellm is not installed; skipping LiteLLM retry test")

    def mock_completion(*args, **kwargs):
        nonlocal call_count
        call_count += 1
        if call_count == 1:
            raise litellm_error
        return MagicMock(
            choices=[MagicMock(message=MagicMock(content="Paris", tool_calls=None))],
            usage=MagicMock(
                prompt_tokens=10,
                completion_tokens=5,
                total_tokens=15,
            ),
        )

    with patch("litellm.completion", side_effect=mock_completion):
        with caplog.at_level(logging.INFO):
            result = llm.call("What is the capital of France?")

    assert "Retrying LLM call without the unsupported 'stop'" in caplog.text
    assert "stop" in llm.additional_params.get("additional_drop_params", [])


def test_litellm_retry_catches_openai_api_stop_error(caplog):
    """
    Test that the retry logic still catches the OpenAI API error format
    ("Unsupported parameter: 'stop'").
    """
    llm = LLM(model="openai/gpt-5.2", stop=["Observation:"], is_litellm=True)

    api_error = Exception(
        "Unsupported parameter: 'stop' is not supported with this model."
    )

    call_count = 0

    def mock_completion(*args, **kwargs):
        nonlocal call_count
        call_count += 1
        if call_count == 1:
            raise api_error
        return MagicMock(
            choices=[MagicMock(message=MagicMock(content="Paris", tool_calls=None))],
            usage=MagicMock(
                prompt_tokens=10,
                completion_tokens=5,
                total_tokens=15,
            ),
        )

    with patch("litellm.completion", side_effect=mock_completion):
        with caplog.at_level(logging.INFO):
            llm.call("What is the capital of France?")

    assert "Retrying LLM call without the unsupported 'stop'" in caplog.text
    assert "stop" in llm.additional_params.get("additional_drop_params", [])


@pytest.fixture
def ollama_llm():
    return LLM(model="ollama/llama3.2:3b", is_litellm=True)
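The two error formats these retry tests cover can be matched with the same substring logic the diff adds to the `except` blocks; a standalone sketch (the helper name is ours):

```python
def is_unsupported_stop_error(exc: Exception) -> bool:
    # Detect both wordings handled by the retry logic above: the OpenAI
    # API's "Unsupported parameter: 'stop'" and LiteLLM's
    # "does not support parameters: ['stop']".
    error_str = str(exc)
    return "'stop'" in error_str and (
        "Unsupported parameter" in error_str
        or "does not support parameters" in error_str
    )


print(is_unsupported_stop_error(
    Exception("openai does not support parameters: ['stop'], for model=openai/gpt-5.2.")
))  # → True
```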
@@ -793,6 +793,10 @@ class TestTraceListenerSetup:
            patch(
                "crewai.events.listeners.tracing.utils._is_test_environment",
                return_value=False,
            ),
            patch(
                "crewai.events.listeners.tracing.utils._is_interactive_terminal",
                return_value=True,
            ),
            patch("threading.Thread") as mock_thread,
        ):
            from crewai.events.listeners.tracing.utils import (
@@ -1,3 +1,3 @@
 """CrewAI development tools."""

-__version__ = "1.12.2"
+__version__ = "1.13.0rc1"
@@ -156,6 +156,33 @@ def update_version_in_file(file_path: Path, new_version: str) -> bool:
    return False


def update_pyproject_version(file_path: Path, new_version: str) -> bool:
    """Update the [project] version field in a pyproject.toml file.

    Args:
        file_path: Path to pyproject.toml file.
        new_version: New version string.

    Returns:
        True if version was updated, False otherwise.
    """
    if not file_path.exists():
        return False

    content = file_path.read_text()
    new_content = re.sub(
        r'^(version\s*=\s*")[^"]+(")',
        rf"\g<1>{new_version}\2",
        content,
        count=1,
        flags=re.MULTILINE,
    )
    if new_content != content:
        file_path.write_text(new_content)
        return True
    return False
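The regex in `update_pyproject_version` can be exercised on its own; a sketch of the same substitution applied to an in-memory string instead of a file path (the helper name is ours):

```python
import re


def bump_project_version(content: str, new_version: str) -> str:
    # Same substitution as update_pyproject_version above: replace the
    # first `version = "..."` at the start of a line, keeping the quotes.
    return re.sub(
        r'^(version\s*=\s*")[^"]+(")',
        rf"\g<1>{new_version}\g<2>",
        content,
        count=1,
        flags=re.MULTILINE,
    )


toml = '[project]\nname = "demo"\nversion = "1.12.2"\n'
print(bump_project_version(toml, "1.13.0rc1"))
```

`count=1` plus the `^` anchor under `re.MULTILINE` keeps the substitution from touching version-like strings elsewhere in the file (e.g. inside dependency pins).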
_DEFAULT_WORKSPACE_PACKAGES: Final[list[str]] = [
|
||||
"crewai",
|
||||
"crewai-tools",
|
||||
```diff
@@ -1045,10 +1072,84 @@ def _update_enterprise_crewai_dep(pyproject_path: Path, version: str) -> bool:
     return False


+_DEPLOYMENT_TEST_REPO: Final[str] = "crewAIInc/crew_deployment_test"
+
+_PYPI_POLL_INTERVAL: Final[int] = 15
+_PYPI_POLL_TIMEOUT: Final[int] = 600
+
+
+def _update_deployment_test_repo(version: str, is_prerelease: bool) -> None:
+    """Update the deployment test repo to pin the new crewai version.
+
+    Clones the repo, updates the crewai[tools] pin in pyproject.toml,
+    regenerates the lockfile, commits, and pushes directly to main.
+
+    Args:
+        version: New crewai version string.
+        is_prerelease: Whether this is a pre-release version.
+    """
+    console.print(
+        f"\n[bold cyan]Updating {_DEPLOYMENT_TEST_REPO} to {version}[/bold cyan]"
+    )
+
+    with tempfile.TemporaryDirectory() as tmp:
+        repo_dir = Path(tmp) / "crew_deployment_test"
+        run_command(["gh", "repo", "clone", _DEPLOYMENT_TEST_REPO, str(repo_dir)])
+        console.print(f"[green]✓[/green] Cloned {_DEPLOYMENT_TEST_REPO}")
+
+        pyproject = repo_dir / "pyproject.toml"
+        content = pyproject.read_text()
+        new_content = re.sub(
+            r'"crewai\[tools\]==[^"]+"',
+            f'"crewai[tools]=={version}"',
+            content,
+        )
+        if new_content == content:
+            console.print(
+                "[yellow]Warning:[/yellow] No crewai[tools] pin found to update"
+            )
+            return
+        pyproject.write_text(new_content)
+        console.print(f"[green]✓[/green] Updated crewai[tools] pin to {version}")
+
+        lock_cmd = [
+            "uv",
+            "lock",
+            "--refresh-package",
+            "crewai",
+            "--refresh-package",
+            "crewai-tools",
+        ]
+        if is_prerelease:
+            lock_cmd.append("--prerelease=allow")
+
+        max_retries = 10
+        for attempt in range(1, max_retries + 1):
+            try:
+                run_command(lock_cmd, cwd=repo_dir)
+                break
+            except subprocess.CalledProcessError:
+                if attempt == max_retries:
+                    console.print(
+                        f"[red]Error:[/red] uv lock failed after {max_retries} attempts"
+                    )
+                    raise
+                console.print(
+                    f"[yellow]uv lock failed (attempt {attempt}/{max_retries}),"
+                    f" retrying in {_PYPI_POLL_INTERVAL}s...[/yellow]"
+                )
+                time.sleep(_PYPI_POLL_INTERVAL)
+        console.print("[green]✓[/green] Lockfile updated")
+
+        run_command(["git", "add", "pyproject.toml", "uv.lock"], cwd=repo_dir)
+        run_command(
+            ["git", "commit", "-m", f"chore: bump crewai to {version}"],
+            cwd=repo_dir,
+        )
+        run_command(["git", "push"], cwd=repo_dir)
+        console.print(f"[green]✓[/green] Pushed to {_DEPLOYMENT_TEST_REPO}")
+
+
 def _wait_for_pypi(package: str, version: str) -> None:
     """Poll PyPI until a specific package version is available.
```
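The `uv lock` loop above is a generic bounded-retry pattern: attempt, break on success, sleep and retry on failure, re-raise on the final attempt. Stripped of the release tooling it reduces to the following sketch (`TransientError` and `flaky` are illustrative stand-ins, not part of the script):

```python
import time


class TransientError(Exception):
    """Stand-in for subprocess.CalledProcessError in the release script."""


def retry(operation, max_retries: int = 10, delay: float = 0.0):
    """Run `operation`, retrying on TransientError up to `max_retries` times."""
    for attempt in range(1, max_retries + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_retries:
                raise  # out of attempts: propagate the last failure
            time.sleep(delay)  # the script waits _PYPI_POLL_INTERVAL here


calls = {"n": 0}


def flaky() -> str:
    # Fails twice (e.g. the new version not yet visible on PyPI), then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError
    return "locked"


assert retry(flaky, delay=0) == "locked"
assert calls["n"] == 3
```

The fixed-interval sleep mirrors the script's `_PYPI_POLL_INTERVAL`; an exponential backoff would be the usual alternative when the remote service is rate-limited.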
```diff
@@ -1141,6 +1242,11 @@ def _release_enterprise(version: str, is_prerelease: bool, dry_run: bool) -> Non

         pyproject = pkg_dir / "pyproject.toml"
         if pyproject.exists():
+            if update_pyproject_version(pyproject, version):
+                console.print(
+                    f"[green]✓[/green] Updated version in: "
+                    f"{pyproject.relative_to(repo_dir)}"
+                )
             if update_pyproject_dependencies(
                 pyproject, version, extra_packages=list(_ENTERPRISE_EXTRA_PACKAGES)
             ):
```
```diff
@@ -1159,7 +1265,35 @@ def _release_enterprise(version: str, is_prerelease: bool, dry_run: bool) -> Non
         _wait_for_pypi("crewai", version)

         console.print("\nSyncing workspace...")
-        run_command(["uv", "sync"], cwd=repo_dir)
+        sync_cmd = [
+            "uv",
+            "sync",
+            "--refresh-package",
+            "crewai",
+            "--refresh-package",
+            "crewai-tools",
+            "--refresh-package",
+            "crewai-files",
+        ]
+        if is_prerelease:
+            sync_cmd.append("--prerelease=allow")
+
+        max_retries = 10
+        for attempt in range(1, max_retries + 1):
+            try:
+                run_command(sync_cmd, cwd=repo_dir)
+                break
+            except subprocess.CalledProcessError:
+                if attempt == max_retries:
+                    console.print(
+                        f"[red]Error:[/red] uv sync failed after {max_retries} attempts"
+                    )
+                    raise
+                console.print(
+                    f"[yellow]uv sync failed (attempt {attempt}/{max_retries}),"
+                    f" retrying in {_PYPI_POLL_INTERVAL}s...[/yellow]"
+                )
+                time.sleep(_PYPI_POLL_INTERVAL)
+        console.print("[green]✓[/green] Workspace synced")

         # --- branch, commit, push, PR ---
```
```diff
@@ -1175,7 +1309,7 @@ def _release_enterprise(version: str, is_prerelease: bool, dry_run: bool) -> Non
         run_command(["git", "push", "-u", "origin", branch_name], cwd=repo_dir)
         console.print("[green]✓[/green] Branch pushed")

-        run_command(
+        pr_url = run_command(
             [
                 "gh",
                 "pr",
@@ -1192,6 +1326,7 @@ def _release_enterprise(version: str, is_prerelease: bool, dry_run: bool) -> Non
             cwd=repo_dir,
         )
         console.print("[green]✓[/green] Enterprise bump PR created")
+        console.print(f"[cyan]PR URL:[/cyan] {pr_url}")

         _poll_pr_until_merged(branch_name, "enterprise bump PR", repo=enterprise_repo)
```
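The `run_command(` to `pr_url = run_command(` change only pays off if the helper returns the command's stdout (`gh pr create` prints the new PR's URL). A minimal helper with that contract might look like this — an assumption about `run_command`'s behavior, sketched with the stdlib, not the script's actual implementation:

```python
import subprocess


def run_command(cmd: list[str], cwd=None) -> str:
    """Run a command, raise CalledProcessError on failure, return trimmed stdout."""
    result = subprocess.run(
        cmd, cwd=cwd, check=True, capture_output=True, text=True
    )
    return result.stdout.strip()


# `gh pr create` prints the new PR's URL; a plain `echo` stands in for it here.
url = run_command(["echo", "https://github.com/example/repo/pull/1"])
assert url.startswith("https://")
```

With `check=True`, a non-zero exit raises `subprocess.CalledProcessError`, which is exactly what the retry loops elsewhere in the script catch.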
```diff
@@ -1558,7 +1693,18 @@ def tag(dry_run: bool, no_edit: bool) -> None:
     is_flag=True,
     help="Skip the enterprise release phase",
 )
-def release(version: str, dry_run: bool, no_edit: bool, skip_enterprise: bool) -> None:
+@click.option(
+    "--skip-to-enterprise",
+    is_flag=True,
+    help="Skip phases 1 & 2, run only the enterprise release phase",
+)
+def release(
+    version: str,
+    dry_run: bool,
+    no_edit: bool,
+    skip_enterprise: bool,
+    skip_to_enterprise: bool,
+) -> None:
     """Full release: bump versions, tag, and publish a GitHub release.

     Combines bump and tag into a single workflow. Creates a version bump PR,
@@ -1571,11 +1717,19 @@ def release(version: str, dry_run: bool, no_edit: bool, skip_enterprise: bool) -
         dry_run: Show what would be done without making changes.
         no_edit: Skip editing release notes.
         skip_enterprise: Skip the enterprise release phase.
+        skip_to_enterprise: Skip phases 1 & 2, run only the enterprise release phase.
     """
     try:
         check_gh_installed()

-        if not skip_enterprise:
+        if skip_enterprise and skip_to_enterprise:
+            console.print(
+                "[red]Error:[/red] Cannot use both --skip-enterprise "
+                "and --skip-to-enterprise"
+            )
+            sys.exit(1)
+
+        if not skip_enterprise or skip_to_enterprise:
             missing: list[str] = []
             if not _ENTERPRISE_REPO:
                 missing.append("ENTERPRISE_REPO")
@@ -1594,6 +1748,15 @@ def release(version: str, dry_run: bool, no_edit: bool, skip_enterprise: bool) -
         cwd = Path.cwd()
         lib_dir = cwd / "lib"

+        is_prerelease = _is_prerelease(version)
+
+        if skip_to_enterprise:
+            _release_enterprise(version, is_prerelease, dry_run)
+            console.print(
+                f"\n[green]✓[/green] Enterprise release [bold]{version}[/bold] complete!"
+            )
+            return
+
         if not dry_run:
             console.print("Checking git status...")
             check_git_clean()
```
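The `--skip-enterprise` / `--skip-to-enterprise` guard above is a plain mutual-exclusion check; extracted from the click command it amounts to the following (a sketch — the function name is illustrative, not part of the script):

```python
def validate_flags(skip_enterprise: bool, skip_to_enterprise: bool) -> None:
    """Reject the contradictory combination of the two skip flags."""
    if skip_enterprise and skip_to_enterprise:
        # The script prints a console error and calls sys.exit(1);
        # SystemExit models the same outcome here.
        raise SystemExit(
            "Error: Cannot use both --skip-enterprise and --skip-to-enterprise"
        )


validate_flags(True, False)  # skip the enterprise phase: fine
validate_flags(False, True)  # run only the enterprise phase: fine
try:
    validate_flags(True, True)
except SystemExit as exc:
    assert "Cannot use both" in str(exc)
else:
    raise AssertionError("expected SystemExit for the contradictory flags")
```

Click also offers custom parameter callbacks for cross-option validation, but an explicit check at the top of the command body, as the script does, keeps the error message fully under the author's control.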
```diff
@@ -1687,7 +1850,8 @@ def release(version: str, dry_run: bool, no_edit: bool, skip_enterprise: bool) -

     if not dry_run:
         _create_tag_and_release(tag_name, release_notes, is_prerelease)
-        _trigger_pypi_publish(tag_name, wait=not skip_enterprise)
+        _trigger_pypi_publish(tag_name, wait=True)
+        _update_deployment_test_repo(version, is_prerelease)

     if not skip_enterprise:
         _release_enterprise(version, is_prerelease, dry_run)
```