Compare commits

...

16 Commits

Author SHA1 Message Date
Devin AI
06c4974cb3 fix: exclude empty stop parameter from LLM completion params
This fixes issue #4149 where models like gpt-5.1 don't support the
stop parameter at all. Previously, an empty list was always passed
to the API, causing an error.

Changes:
- Only include stop in params when it has values (non-empty)
- Respect additional_drop_params to ensure retry logic works correctly
- Add tests to verify the fix

Co-Authored-By: João <joao@crewai.com>
2025-12-24 07:12:18 +00:00
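
A minimal sketch of the fix described in this commit, assuming completion params are assembled into a dict before the API call (the function and names here are illustrative, not the actual crewai internals):

```python
def build_completion_params(model: str, stop: list[str] | None = None) -> dict:
    """Assemble LLM completion params, omitting `stop` when it is empty.

    Models such as gpt-5.1 reject the `stop` parameter outright, so an
    empty list must not be forwarded to the API (issue #4149).
    """
    params: dict = {"model": model}
    if stop:  # only include `stop` when it actually has values
        params["stop"] = stop
    return params


assert "stop" not in build_completion_params("gpt-5.1")
assert build_completion_params("gpt-4o", stop=["END"])["stop"] == ["END"]
```
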
Lucas Gomide
0c020991c4 docs: fix wrong trigger name in sample docs (#4147)
2025-12-23 08:41:51 -05:00
Heitor Carvalho
be70a04153 fix: correct error fetching for workos login polling (#4124)
2025-12-19 20:00:26 -03:00
Greyson LaLonde
0c359f4df8 feat: bump versions to 1.7.2
2025-12-19 15:47:00 -05:00
Lucas Gomide
fe288dbe73 Resolving some connection issues (#4129)
* fix: use CREWAI_PLUS_URL env var in precedence over PlusAPI configured value

* feat: bypass TLS certificate verification when calling platform

* test: fix test
2025-12-19 10:15:20 -05:00
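
The env-var precedence fix lands in the `PlusAPI` diff further down; its essence, as a hedged sketch (the default URL value is an assumption — the real constant lives in `crewai.cli.constants`):

```python
import os

# Assumed default; not confirmed by this compare view.
DEFAULT_CREWAI_ENTERPRISE_URL = "https://app.crewai.com"


def resolve_base_url(configured_url: str | None) -> str:
    # CREWAI_PLUS_URL takes precedence over the PlusAPI configured value,
    # falling back to the default enterprise URL.
    return (
        os.getenv("CREWAI_PLUS_URL")
        or configured_url
        or DEFAULT_CREWAI_ENTERPRISE_URL
    )
```
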
Heitor Carvalho
dc63bc2319 chore: remove CREWAI_BASE_URL and fetch url from settings instead
2025-12-18 15:41:38 -03:00
Greyson LaLonde
8d0effafec chore: add commitizen pre-commit hook
2025-12-17 15:49:24 -05:00
Greyson LaLonde
1cdbe79b34 chore: add deployment action, trigger for releases
2025-12-17 08:40:14 -05:00
Lorenze Jay
84328d9311 fixed api-reference/status docs page (#4109)
2025-12-16 15:31:30 -08:00
Lorenze Jay
88d3c0fa97 feat: bump versions to 1.7.1 (#4092)
* feat: bump versions to 1.7.1

* bump projects
2025-12-15 21:51:53 -08:00
Matt Aitchison
75ff7dce0c feat: add --no-commit flag to bump command (#4087)
Allows updating version files without creating a commit, branch, or PR.
2025-12-15 15:32:37 -06:00
Greyson LaLonde
38b0b125d3 feat: use json schema for tool argument serialization
- Replace Python representation with JsonSchema for tool arguments
- Remove deprecated PydanticSchemaParser in favor of direct schema generation
- Add handling for VAR_POSITIONAL and VAR_KEYWORD parameters
- Improve tool argument schema collection
2025-12-11 15:50:19 -05:00
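
Pydantic's built-in generator is the natural tool for this kind of change; a minimal sketch of serializing tool arguments as JSON Schema (illustrative, not the exact crewai code):

```python
from pydantic import BaseModel, Field


class SearchArgs(BaseModel):
    """Arguments for a hypothetical search tool."""

    query: str = Field(description="Search query")
    limit: int = Field(default=10, description="Maximum number of results")


# A JSON Schema dict replaces a hand-rolled Python repr of the signature,
# so consumers get types, defaults, and required fields in a standard form.
schema = SearchArgs.model_json_schema()
print(schema["required"])                        # ['query']
print(schema["properties"]["limit"]["default"])  # 10
```
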
Vini Brasil
9bd8ad51f7 Add docs for AOP Deploy API (#4076)
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-12-11 15:58:17 -03:00
Heitor Carvalho
0632a054ca chore: display error message from response when tool repository login fails (#4075)
2025-12-11 14:56:00 -03:00
Dragos Ciupureanu
feec6b440e fix: gracefully terminate the future when executing a task async
* fix: gracefully terminate the future when executing a task async

* core: add unit test

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-12-11 12:03:33 -05:00
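
The diff for this commit isn't visible in this excerpt, so the following is only the general pattern for gracefully terminating a task's future with `concurrent.futures`, not the crewai implementation:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout


def slow_task() -> str:
    time.sleep(10)
    return "done"


executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(slow_task)
try:
    print(future.result(timeout=1))
except FutureTimeout:
    # cancel() only stops work that hasn't started yet; shutdown with
    # cancel_futures=True drains queued tasks instead of hanging forever.
    future.cancel()
    executor.shutdown(wait=False, cancel_futures=True)
```
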
Greyson LaLonde
e43c7debbd fix: add idx for task ordering, tests
2025-12-11 10:18:15 -05:00
57 changed files with 1475 additions and 908 deletions

View File

@@ -1,9 +1,14 @@
name: Publish to PyPI
on:
release:
types: [ published ]
repository_dispatch:
types: [deployment-tests-passed]
workflow_dispatch:
inputs:
release_tag:
description: 'Release tag to publish'
required: false
type: string
jobs:
build:
@@ -12,7 +17,21 @@ jobs:
permissions:
contents: read
steps:
- name: Determine release tag
id: release
run: |
# Priority: workflow_dispatch input > repository_dispatch payload > default branch
if [ -n "${{ inputs.release_tag }}" ]; then
echo "tag=${{ inputs.release_tag }}" >> $GITHUB_OUTPUT
elif [ -n "${{ github.event.client_payload.release_tag }}" ]; then
echo "tag=${{ github.event.client_payload.release_tag }}" >> $GITHUB_OUTPUT
else
echo "tag=" >> $GITHUB_OUTPUT
fi
- uses: actions/checkout@v4
with:
ref: ${{ steps.release.outputs.tag || github.ref }}
- name: Set up Python
uses: actions/setup-python@v5

View File

@@ -0,0 +1,18 @@
name: Trigger Deployment Tests
on:
release:
types: [published]
jobs:
trigger:
name: Trigger deployment tests
runs-on: ubuntu-latest
steps:
- name: Trigger deployment tests
uses: peter-evans/repository-dispatch@v3
with:
token: ${{ secrets.CREWAI_DEPLOYMENTS_PAT }}
repository: ${{ secrets.CREWAI_DEPLOYMENTS_REPOSITORY }}
event-type: crewai-release
client-payload: '{"release_tag": "${{ github.event.release.tag_name }}", "release_name": "${{ github.event.release.name }}"}'

View File

@@ -24,4 +24,10 @@ repos:
rev: 0.9.3
hooks:
- id: uv-lock
- repo: https://github.com/commitizen-tools/commitizen
rev: v4.10.1
hooks:
- id: commitizen
- id: commitizen-branch
stages: [ pre-push ]

View File

@@ -16,16 +16,17 @@ Welcome to the CrewAI AOP API reference. This API allows you to programmatically
Navigate to your crew's detail page in the CrewAI AOP dashboard and copy your Bearer Token from the Status tab.
</Step>
<Step title="Discover Required Inputs">
Use the `GET /inputs` endpoint to see what parameters your crew expects.
</Step>
<Step title="Discover Required Inputs">
Use the `GET /inputs` endpoint to see what parameters your crew expects.
</Step>
<Step title="Start a Crew Execution">
Call `POST /kickoff` with your inputs to start the crew execution and receive a `kickoff_id`.
</Step>
<Step title="Start a Crew Execution">
Call `POST /kickoff` with your inputs to start the crew execution and receive
a `kickoff_id`.
</Step>
<Step title="Monitor Progress">
Use `GET /status/{kickoff_id}` to check execution status and retrieve results.
Use `GET /{kickoff_id}/status` to check execution status and retrieve results.
</Step>
</Steps>
@@ -40,13 +41,14 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
### Token Types
| Token Type | Scope | Use Case |
|:-----------|:--------|:----------|
| **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integration |
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |
| Token Type | Scope | Use Case |
| :-------------------- | :------------------------ | :----------------------------------------------------------- |
| **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integration |
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |
<Tip>
You can find both token types in the Status tab of your crew's detail page in the CrewAI AOP dashboard.
You can find both token types in the Status tab of your crew's detail page in
the CrewAI AOP dashboard.
</Tip>
## Base URL
@@ -63,29 +65,33 @@ Replace `your-crew-name` with your actual crew's URL from the dashboard.
1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /status/{kickoff_id}` until completion
3. **Monitoring**: Poll `GET /{kickoff_id}/status` until completion
4. **Results**: Extract the final output from the completed response
## Error Handling
The API uses standard HTTP status codes:
| Code | Meaning |
|------|:--------|
| `200` | Success |
| `400` | Bad Request - Invalid input format |
| `401` | Unauthorized - Invalid bearer token |
| `404` | Not Found - Resource doesn't exist |
| Code | Meaning |
| ----- | :----------------------------------------- |
| `200` | Success |
| `400` | Bad Request - Invalid input format |
| `401` | Unauthorized - Invalid bearer token |
| `404` | Not Found - Resource doesn't exist |
| `422` | Validation Error - Missing required inputs |
| `500` | Server Error - Contact support |
| `500` | Server Error - Contact support |
## Interactive Testing
<Info>
**Why no "Send" button?** Since each CrewAI AOP user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.
**Why no "Send" button?** Since each CrewAI AOP user has their own unique crew
URL, we use **reference mode** instead of an interactive playground to avoid
confusion. This shows you exactly what the requests should look like without
non-functional send buttons.
</Info>
Each endpoint page shows you:
- ✅ **Exact request format** with all parameters
- ✅ **Response examples** for success and error cases
- ✅ **Code samples** in multiple languages (cURL, Python, JavaScript, etc.)
@@ -103,6 +109,7 @@ Each endpoint page shows you:
</CardGroup>
**Example workflow:**
1. **Copy this cURL example** from any endpoint page
2. **Replace `your-actual-crew-name.crewai.com`** with your real crew URL
3. **Replace the Bearer token** with your real token from the dashboard
@@ -111,10 +118,18 @@ Each endpoint page shows you:
## Need Help?
<CardGroup cols={2}>
<Card title="Enterprise Support" icon="headset" href="mailto:support@crewai.com">
<Card
title="Enterprise Support"
icon="headset"
href="mailto:support@crewai.com"
>
Get help with API integration and troubleshooting
</Card>
<Card title="Enterprise Dashboard" icon="chart-line" href="https://app.crewai.com">
<Card
title="Enterprise Dashboard"
icon="chart-line"
href="https://app.crewai.com"
>
Manage your crews and view execution logs
</Card>
</CardGroup>
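
With the rename in this page's diff, clients poll `GET /{kickoff_id}/status` instead of `GET /status/{kickoff_id}`; a minimal polling sketch against the documented endpoints (the crew URL, token, and exact status strings are placeholders/assumptions):

```python
import time

import requests

BASE_URL = "https://your-crew-name.crewai.com"  # your crew URL from the dashboard
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}

# 1. Discover required inputs.
required = requests.get(f"{BASE_URL}/inputs", headers=HEADERS, timeout=30).json()

# 2. Kick off an execution with (placeholder) values for each input.
kickoff = requests.post(
    f"{BASE_URL}/kickoff",
    headers=HEADERS,
    json={"inputs": {name: "example" for name in required["inputs"]}},
    timeout=30,
).json()
kickoff_id = kickoff["kickoff_id"]

# 3. Poll the renamed status endpoint until the run leaves the running state.
while True:
    status = requests.get(
        f"{BASE_URL}/{kickoff_id}/status", headers=HEADERS, timeout=30
    ).json()
    if status["status"] != "running":  # assumed terminal: completed or error
        break
    time.sleep(5)
print(status)
```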

View File

@@ -1,8 +1,6 @@
---
title: "GET /status/{kickoff_id}"
title: "GET /{kickoff_id}/status"
description: "Get execution status"
openapi: "/enterprise-api.en.yaml GET /status/{kickoff_id}"
openapi: "/enterprise-api.en.yaml GET /{kickoff_id}/status"
mode: "wide"
---

View File

@@ -187,6 +187,97 @@ You can also deploy your crews directly through the CrewAI AOP web interface by
</Steps>
## Option 3: Redeploy Using API (CI/CD Integration)
For automated deployments in CI/CD pipelines, you can use the CrewAI API to trigger redeployments of existing crews. This is particularly useful for GitHub Actions, Jenkins, or other automation workflows.
<Steps>
<Step title="Get Your Personal Access Token">
Navigate to your CrewAI AOP account settings to generate an API token:
1. Go to [app.crewai.com](https://app.crewai.com)
2. Click on **Settings** → **Account** → **Personal Access Token**
3. Generate a new token and copy it securely
4. Store this token as a secret in your CI/CD system
</Step>
<Step title="Find Your Automation UUID">
Locate the unique identifier for your deployed crew:
1. Go to **Automations** in your CrewAI AOP dashboard
2. Select your existing automation/crew
3. Click on **Additional Details**
4. Copy the **UUID** - this identifies your specific crew deployment
</Step>
<Step title="Trigger Redeployment via API">
Use the Deploy API endpoint to trigger a redeployment:
```bash
curl -i -X POST \
-H "Authorization: Bearer YOUR_PERSONAL_ACCESS_TOKEN" \
https://app.crewai.com/crewai_plus/api/v1/crews/YOUR-AUTOMATION-UUID/deploy
# HTTP/2 200
# content-type: application/json
#
# {
# "uuid": "your-automation-uuid",
# "status": "Deploy Enqueued",
# "public_url": "https://your-crew-deployment.crewai.com",
# "token": "your-bearer-token"
# }
```
<Info>
If your automation was originally created from a connected Git repository, the API will automatically pull the latest changes before redeploying.
</Info>
</Step>
<Step title="GitHub Actions Integration Example">
Here's a GitHub Actions workflow with more complex deployment triggers:
```yaml
name: Deploy CrewAI Automation
on:
push:
branches: [ main ]
pull_request:
types: [ labeled ]
release:
types: [ published ]
jobs:
deploy:
runs-on: ubuntu-latest
if: |
(github.event_name == 'push' && github.ref == 'refs/heads/main') ||
(github.event_name == 'pull_request' && contains(github.event.pull_request.labels.*.name, 'deploy')) ||
(github.event_name == 'release')
steps:
- name: Trigger CrewAI Redeployment
run: |
curl -X POST \
-H "Authorization: Bearer ${{ secrets.CREWAI_PAT }}" \
https://app.crewai.com/crewai_plus/api/v1/crews/${{ secrets.CREWAI_AUTOMATION_UUID }}/deploy
```
<Tip>
Add `CREWAI_PAT` and `CREWAI_AUTOMATION_UUID` as repository secrets. For PR deployments, add a "deploy" label to trigger the workflow.
</Tip>
</Step>
</Steps>
## ⚠️ Environment Variable Security Requirements
<Warning>

View File

@@ -62,13 +62,13 @@ Test your Gmail trigger integration locally using the CrewAI CLI:
crewai triggers list
# Simulate a Gmail trigger with realistic payload
crewai triggers run gmail/new_email
crewai triggers run gmail/new_email_received
```
The `crewai triggers run` command will execute your crew with a complete Gmail payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run gmail/new_email` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
Use `crewai triggers run gmail/new_email_received` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
## Monitoring Executions
@@ -83,6 +83,6 @@ Track history and performance of triggered runs:
- Ensure Gmail is connected in Tools & Integrations
- Verify the Gmail Trigger is enabled on the Triggers tab
- Test locally with `crewai triggers run gmail/new_email` to see the exact payload structure
- Test locally with `crewai triggers run gmail/new_email_received` to see the exact payload structure
- Check the execution logs and confirm the payload is passed as `crewai_trigger_payload`
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -35,7 +35,7 @@ info:
1. **Discover inputs** using `GET /inputs`
2. **Start execution** using `POST /kickoff`
3. **Monitor progress** using `GET /status/{kickoff_id}`
3. **Monitor progress** using `GET /{kickoff_id}/status`
version: 1.0.0
contact:
name: CrewAI Support
@@ -63,7 +63,7 @@ paths:
Use this endpoint to discover what inputs you need to provide when starting a crew execution.
operationId: getRequiredInputs
responses:
'200':
"200":
description: Successfully retrieved required inputs
content:
application/json:
@@ -84,13 +84,21 @@ paths:
outreach_crew:
summary: Outreach crew inputs
value:
inputs: ["name", "title", "company", "industry", "our_product", "linkedin_url"]
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
$ref: '#/components/responses/NotFoundError'
'500':
$ref: '#/components/responses/ServerError'
inputs:
[
"name",
"title",
"company",
"industry",
"our_product",
"linkedin_url",
]
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
$ref: "#/components/responses/NotFoundError"
"500":
$ref: "#/components/responses/ServerError"
/kickoff:
post:
@@ -170,7 +178,7 @@ paths:
taskWebhookUrl: "https://api.example.com/webhooks/task"
crewWebhookUrl: "https://api.example.com/webhooks/crew"
responses:
'200':
"200":
description: Crew execution started successfully
content:
application/json:
@@ -182,24 +190,24 @@ paths:
format: uuid
description: Unique identifier for tracking this execution
example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
'400':
"400":
description: Invalid request body or missing required inputs
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'401':
$ref: '#/components/responses/UnauthorizedError'
'422':
$ref: "#/components/schemas/Error"
"401":
$ref: "#/components/responses/UnauthorizedError"
"422":
description: Validation error - ensure all required inputs are provided
content:
application/json:
schema:
$ref: '#/components/schemas/ValidationError'
'500':
$ref: '#/components/responses/ServerError'
$ref: "#/components/schemas/ValidationError"
"500":
$ref: "#/components/responses/ServerError"
/status/{kickoff_id}:
/{kickoff_id}/status:
get:
summary: Get Execution Status
description: |
@@ -222,15 +230,15 @@ paths:
format: uuid
example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
responses:
'200':
"200":
description: Successfully retrieved execution status
content:
application/json:
schema:
oneOf:
- $ref: '#/components/schemas/ExecutionRunning'
- $ref: '#/components/schemas/ExecutionCompleted'
- $ref: '#/components/schemas/ExecutionError'
- $ref: "#/components/schemas/ExecutionRunning"
- $ref: "#/components/schemas/ExecutionCompleted"
- $ref: "#/components/schemas/ExecutionError"
examples:
running:
summary: Execution in progress
@@ -262,19 +270,19 @@ paths:
status: "error"
error: "Task execution failed: Invalid API key for external service"
execution_time: 23.1
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
description: Kickoff ID not found
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Execution not found"
message: "No execution found with ID: abcd1234-5678-90ef-ghij-klmnopqrstuv"
'500':
$ref: '#/components/responses/ServerError'
"500":
$ref: "#/components/responses/ServerError"
/resume:
post:
@@ -354,7 +362,7 @@ paths:
taskWebhookUrl: "https://api.example.com/webhooks/task"
crewWebhookUrl: "https://api.example.com/webhooks/crew"
responses:
'200':
"200":
description: Execution resumed successfully
content:
application/json:
@@ -381,28 +389,28 @@ paths:
value:
status: "retrying"
message: "Task will be retried with your feedback"
'400':
"400":
description: Invalid request body or execution not in pending state
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Invalid Request"
message: "Execution is not in pending human input state"
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
description: Execution ID or Task ID not found
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Not Found"
message: "Execution ID not found"
'500':
$ref: '#/components/responses/ServerError'
"500":
$ref: "#/components/responses/ServerError"
components:
securitySchemes:
@@ -458,7 +466,7 @@ components:
tasks:
type: array
items:
$ref: '#/components/schemas/TaskResult'
$ref: "#/components/schemas/TaskResult"
execution_time:
type: number
description: Total execution time in seconds
@@ -536,7 +544,7 @@ components:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Unauthorized"
message: "Invalid or missing bearer token"
@@ -546,7 +554,7 @@ components:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Not Found"
message: "The requested resource was not found"
@@ -556,7 +564,7 @@ components:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Internal Server Error"
message: "An unexpected error occurred"

View File

@@ -35,7 +35,7 @@ info:
1. **Discover inputs** using `GET /inputs`
2. **Start execution** using `POST /kickoff`
3. **Monitor progress** using `GET /status/{kickoff_id}`
3. **Monitor progress** using `GET /{kickoff_id}/status`
version: 1.0.0
contact:
name: CrewAI Support
@@ -63,7 +63,7 @@ paths:
Use this endpoint to discover what inputs you need to provide when starting a crew execution.
operationId: getRequiredInputs
responses:
'200':
"200":
description: Successfully retrieved required inputs
content:
application/json:
@@ -84,13 +84,21 @@ paths:
outreach_crew:
summary: Outreach crew inputs
value:
inputs: ["name", "title", "company", "industry", "our_product", "linkedin_url"]
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
$ref: '#/components/responses/NotFoundError'
'500':
$ref: '#/components/responses/ServerError'
inputs:
[
"name",
"title",
"company",
"industry",
"our_product",
"linkedin_url",
]
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
$ref: "#/components/responses/NotFoundError"
"500":
$ref: "#/components/responses/ServerError"
/kickoff:
post:
@@ -170,7 +178,7 @@ paths:
taskWebhookUrl: "https://api.example.com/webhooks/task"
crewWebhookUrl: "https://api.example.com/webhooks/crew"
responses:
'200':
"200":
description: Crew execution started successfully
content:
application/json:
@@ -182,24 +190,24 @@ paths:
format: uuid
description: Unique identifier for tracking this execution
example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
'400':
"400":
description: Invalid request body or missing required inputs
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'401':
$ref: '#/components/responses/UnauthorizedError'
'422':
$ref: "#/components/schemas/Error"
"401":
$ref: "#/components/responses/UnauthorizedError"
"422":
description: Validation error - ensure all required inputs are provided
content:
application/json:
schema:
$ref: '#/components/schemas/ValidationError'
'500':
$ref: '#/components/responses/ServerError'
$ref: "#/components/schemas/ValidationError"
"500":
$ref: "#/components/responses/ServerError"
/status/{kickoff_id}:
/{kickoff_id}/status:
get:
summary: Get Execution Status
description: |
@@ -222,15 +230,15 @@ paths:
format: uuid
example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
responses:
'200':
"200":
description: Successfully retrieved execution status
content:
application/json:
schema:
oneOf:
- $ref: '#/components/schemas/ExecutionRunning'
- $ref: '#/components/schemas/ExecutionCompleted'
- $ref: '#/components/schemas/ExecutionError'
- $ref: "#/components/schemas/ExecutionRunning"
- $ref: "#/components/schemas/ExecutionCompleted"
- $ref: "#/components/schemas/ExecutionError"
examples:
running:
summary: Execution in progress
@@ -262,19 +270,19 @@ paths:
status: "error"
error: "Task execution failed: Invalid API key for external service"
execution_time: 23.1
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
description: Kickoff ID not found
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Execution not found"
message: "No execution found with ID: abcd1234-5678-90ef-ghij-klmnopqrstuv"
'500':
$ref: '#/components/responses/ServerError'
"500":
$ref: "#/components/responses/ServerError"
/resume:
post:
@@ -354,7 +362,7 @@ paths:
taskWebhookUrl: "https://api.example.com/webhooks/task"
crewWebhookUrl: "https://api.example.com/webhooks/crew"
responses:
'200':
"200":
description: Execution resumed successfully
content:
application/json:
@@ -381,28 +389,28 @@ paths:
value:
status: "retrying"
message: "Task will be retried with your feedback"
'400':
"400":
description: Invalid request body or execution not in pending state
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Invalid Request"
message: "Execution is not in pending human input state"
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
description: Execution ID or Task ID not found
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Not Found"
message: "Execution ID not found"
'500':
$ref: '#/components/responses/ServerError'
"500":
$ref: "#/components/responses/ServerError"
components:
securitySchemes:
@@ -458,7 +466,7 @@ components:
tasks:
type: array
items:
$ref: '#/components/schemas/TaskResult'
$ref: "#/components/schemas/TaskResult"
execution_time:
type: number
description: Total execution time in seconds
@@ -536,7 +544,7 @@ components:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Unauthorized"
message: "Invalid or missing bearer token"
@@ -546,7 +554,7 @@ components:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Not Found"
message: "The requested resource was not found"
@@ -556,7 +564,7 @@ components:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Internal Server Error"
message: "An unexpected error occurred"

View File

@@ -84,7 +84,7 @@ paths:
'500':
$ref: '#/components/responses/ServerError'
/status/{kickoff_id}:
/{kickoff_id}/status:
get:
summary: 실행 상태 조회
description: |

View File

@@ -35,7 +35,7 @@ info:
1. **Descubra os inputs** usando `GET /inputs`
2. **Inicie a execução** usando `POST /kickoff`
3. **Monitore o progresso** usando `GET /status/{kickoff_id}`
3. **Monitore o progresso** usando `GET /{kickoff_id}/status`
version: 1.0.0
contact:
name: CrewAI Suporte
@@ -56,7 +56,7 @@ paths:
Retorna a lista de parâmetros de entrada que sua crew espera.
operationId: getRequiredInputs
responses:
'200':
"200":
description: Inputs requeridos obtidos com sucesso
content:
application/json:
@@ -69,12 +69,12 @@ paths:
type: string
description: Nomes dos parâmetros de entrada
example: ["budget", "interests", "duration", "age"]
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
$ref: '#/components/responses/NotFoundError'
'500':
$ref: '#/components/responses/ServerError'
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
$ref: "#/components/responses/NotFoundError"
"500":
$ref: "#/components/responses/ServerError"
/kickoff:
post:
@@ -104,7 +104,7 @@ paths:
age: "35"
responses:
'200':
"200":
description: Execução iniciada com sucesso
content:
application/json:
@@ -115,12 +115,12 @@ paths:
type: string
format: uuid
example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
'401':
$ref: '#/components/responses/UnauthorizedError'
'500':
$ref: '#/components/responses/ServerError'
"401":
$ref: "#/components/responses/UnauthorizedError"
"500":
$ref: "#/components/responses/ServerError"
/status/{kickoff_id}:
/{kickoff_id}/status:
get:
summary: Obter Status da Execução
description: |
@@ -136,25 +136,25 @@ paths:
type: string
format: uuid
responses:
'200':
"200":
description: Status recuperado com sucesso
content:
application/json:
schema:
oneOf:
- $ref: '#/components/schemas/ExecutionRunning'
- $ref: '#/components/schemas/ExecutionCompleted'
- $ref: '#/components/schemas/ExecutionError'
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
- $ref: "#/components/schemas/ExecutionRunning"
- $ref: "#/components/schemas/ExecutionCompleted"
- $ref: "#/components/schemas/ExecutionError"
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
description: Kickoff ID não encontrado
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'500':
$ref: '#/components/responses/ServerError'
$ref: "#/components/schemas/Error"
"500":
$ref: "#/components/responses/ServerError"
/resume:
post:
@@ -234,7 +234,7 @@ paths:
taskWebhookUrl: "https://api.example.com/webhooks/task"
crewWebhookUrl: "https://api.example.com/webhooks/crew"
responses:
'200':
"200":
description: Execution resumed successfully
content:
application/json:
@@ -261,28 +261,28 @@ paths:
value:
status: "retrying"
message: "Task will be retried with your feedback"
'400':
"400":
description: Invalid request body or execution not in pending state
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Invalid Request"
message: "Execution is not in pending human input state"
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
description: Execution ID or Task ID not found
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Not Found"
message: "Execution ID not found"
'500':
$ref: '#/components/responses/ServerError'
"500":
$ref: "#/components/responses/ServerError"
components:
securitySchemes:
@@ -324,7 +324,7 @@ components:
tasks:
type: array
items:
$ref: '#/components/schemas/TaskResult'
$ref: "#/components/schemas/TaskResult"
execution_time:
type: number
@@ -380,16 +380,16 @@ components:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
NotFoundError:
description: Recurso não encontrado
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
ServerError:
description: Erro interno do servidor
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"

View File

@@ -16,16 +16,17 @@ CrewAI 엔터프라이즈 API 참고 자료에 오신 것을 환영합니다.
CrewAI AOP 대시보드에서 자신의 crew 상세 페이지로 이동하여 Status 탭에서 Bearer Token을 복사하세요.
</Step>
<Step title="필수 입력값 확인하기">
`GET /inputs` 엔드포인트를 사용하여 crew가 기대하는 파라미터를 확인하세요.
</Step>
<Step title="필수 입력값 확인하기">
`GET /inputs` 엔드포인트를 사용하여 crew가 기대하는 파라미터를 확인하세요.
</Step>
<Step title="Crew 실행 시작하기">
입력값과 함께 `POST /kickoff`를 호출하여 crew 실행을 시작하고 `kickoff_id`를 받으세요.
</Step>
<Step title="Crew 실행 시작하기">
입력값과 함께 `POST /kickoff`를 호출하여 crew 실행을 시작하고 `kickoff_id`를
받으세요.
</Step>
<Step title="진행 상황 모니터링">
`GET /status/{kickoff_id}`를 사용하여 실행 상태를 확인하고 결과를 조회하세요.
`GET /{kickoff_id}/status`를 사용하여 실행 상태를 확인하고 결과를 조회하세요.
</Step>
</Steps>
@@ -40,13 +41,14 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
### 토큰 유형
| 토큰 유형 | 범위 | 사용 사례 |
|:-----------|:--------|:----------|
| **Bearer Token** | 조직 단위 접근 | 전체 crew 운영, 서버 간 통합에 이상적 |
| **User Bearer Token** | 사용자 범위 접근 | 제한된 권한, 사용자별 작업에 적합 |
| 토큰 유형 | 범위 | 사용 사례 |
| :-------------------- | :--------------- | :------------------------------------ |
| **Bearer Token** | 조직 단위 접근 | 전체 crew 운영, 서버 간 통합에 이상적 |
| **User Bearer Token** | 사용자 범위 접근 | 제한된 권한, 사용자별 작업에 적합 |
<Tip>
두 토큰 유형 모두 CrewAI AOP 대시보드의 crew 상세 페이지 Status 탭에서 확인할 수 있습니다.
두 토큰 유형 모두 CrewAI AOP 대시보드의 crew 상세 페이지 Status 탭에서 확인할
수 있습니다.
</Tip>
## 기본 URL
@@ -63,29 +65,33 @@ https://your-crew-name.crewai.com
1. **탐색**: `GET /inputs`를 호출하여 crew가 필요한 것을 파악합니다.
2. **실행**: `POST /kickoff`를 통해 입력값을 제출하여 처리를 시작합니다.
3. **모니터링**: 완료될 때까지 `GET /status/{kickoff_id}`를 주기적으로 조회합니다.
3. **모니터링**: 완료될 때까지 `GET /{kickoff_id}/status`를 주기적으로 조회합니다.
4. **결과**: 완료된 응답에서 최종 출력을 추출합니다.
## 오류 처리
API는 표준 HTTP 상태 코드를 사용합니다:
| 코드 | 의미 |
|------|:--------|
| `200` | 성공 |
| `400` | 잘못된 요청 - 잘못된 입력 형식 |
| `401` | 인증 실패 - 잘못된 베어러 토큰 |
| 코드 | 의미 |
| ----- | :------------------------------------ |
| `200` | 성공 |
| `400` | 잘못된 요청 - 잘못된 입력 형식 |
| `401` | 인증 실패 - 잘못된 베어러 토큰 |
| `404` | 찾을 수 없음 - 리소스가 존재하지 않음 |
| `422` | 유효성 검사 오류 - 필수 입력 누락 |
| `500` | 서버 오류 - 지원팀에 문의하십시오 |
| `422` | 유효성 검사 오류 - 필수 입력 누락 |
| `500` | 서버 오류 - 지원팀에 문의하십시오 |
## 인터랙티브 테스트
<Info>
**왜 "전송" 버튼이 없나요?** 각 CrewAI AOP 사용자는 고유한 crew URL을 가지므로, 혼동을 피하기 위해 인터랙티브 플레이그라운드 대신 **참조 모드**를 사용합니다. 이를 통해 비작동 전송 버튼 없이 요청이 어떻게 생겼는지 정확히 보여줍니다.
**왜 "전송" 버튼이 없나요?** 각 CrewAI AOP 사용자는 고유한 crew URL을
가지므로, 혼동을 피하기 위해 인터랙티브 플레이그라운드 대신 **참조 모드**를
사용합니다. 이를 통해 비작동 전송 버튼 없이 요청이 어떻게 생겼는지 정확히
보여줍니다.
</Info>
각 엔드포인트 페이지에서는 다음을 확인할 수 있습니다:
- ✅ 모든 파라미터가 포함된 **정확한 요청 형식**
- ✅ 성공 및 오류 사례에 대한 **응답 예시**
- ✅ 여러 언어(cURL, Python, JavaScript 등)로 제공되는 **코드 샘플**
@@ -103,6 +109,7 @@ API는 표준 HTTP 상태 코드를 사용합니다:
</CardGroup>
**예시 작업 흐름:**
1. **cURL 예제를 복사**합니다 (엔드포인트 페이지에서)
2. **`your-actual-crew-name.crewai.com`**을(를) 실제 crew URL로 교체합니다
3. **Bearer 토큰을** 대시보드에서 복사한 실제 토큰으로 교체합니다
@@ -111,10 +118,18 @@ API는 표준 HTTP 상태 코드를 사용합니다:
## 도움이 필요하신가요?
<CardGroup cols={2}>
<Card title="Enterprise Support" icon="headset" href="mailto:support@crewai.com">
<Card
title="Enterprise Support"
icon="headset"
href="mailto:support@crewai.com"
>
API 통합 및 문제 해결에 대한 지원을 받으세요
</Card>
<Card title="Enterprise Dashboard" icon="chart-line" href="https://app.crewai.com">
<Card
title="Enterprise Dashboard"
icon="chart-line"
href="https://app.crewai.com"
>
crew를 관리하고 실행 로그를 확인하세요
</Card>
</CardGroup>

View File

@@ -1,8 +1,6 @@
---
title: "GET /status/{kickoff_id}"
title: "GET /{kickoff_id}/status"
description: "실행 상태 조회"
openapi: "/enterprise-api.ko.yaml GET /status/{kickoff_id}"
openapi: "/enterprise-api.ko.yaml GET /{kickoff_id}/status"
mode: "wide"
---

View File

@@ -62,13 +62,13 @@ CrewAI CLI를 사용하여 Gmail 트리거 통합을 로컬에서 테스트하
crewai triggers list
# 실제 payload로 Gmail 트리거 시뮬레이션
crewai triggers run gmail/new_email
crewai triggers run gmail/new_email_received
```
`crewai triggers run` 명령은 완전한 Gmail payload로 크루를 실행하여 배포 전에 파싱 로직을 테스트할 수 있게 해줍니다.
<Warning>
개발 중에는 `crewai triggers run gmail/new_email`을 사용하세요 (`crewai run`이 아님). 배포 후에는 크루가 자동으로 트리거 payload를 받습니다.
개발 중에는 `crewai triggers run gmail/new_email_received`을 사용하세요 (`crewai run`이 아님). 배포 후에는 크루가 자동으로 트리거 payload를 받습니다.
</Warning>
## Monitoring Executions
@@ -83,6 +83,6 @@ Track history and performance of triggered runs:
- Ensure Gmail is connected in Tools & Integrations
- Verify the Gmail Trigger is enabled on the Triggers tab
- `crewai triggers run gmail/new_email`로 로컬 테스트하여 정확한 payload 구조를 확인하세요
- `crewai triggers run gmail/new_email_received`로 로컬 테스트하여 정확한 payload 구조를 확인하세요
- Check the execution logs and confirm the payload is passed as `crewai_trigger_payload`
- 주의: 트리거 실행을 시뮬레이션하려면 `crewai triggers run`을 사용하세요 (`crewai run`이 아님)

View File

@@ -16,16 +16,17 @@ Bem-vindo à referência da API do CrewAI AOP. Esta API permite que você intera
Navegue até a página de detalhes do seu crew no painel do CrewAI AOP e copie seu Bearer Token na aba Status.
</Step>
<Step title="Descubra os Inputs Necessários">
Use o endpoint `GET /inputs` para ver quais parâmetros seu crew espera.
</Step>
<Step title="Descubra os Inputs Necessários">
Use o endpoint `GET /inputs` para ver quais parâmetros seu crew espera.
</Step>
<Step title="Inicie uma Execução de Crew">
Chame `POST /kickoff` com seus inputs para iniciar a execução do crew e receber um `kickoff_id`.
</Step>
<Step title="Inicie uma Execução de Crew">
Chame `POST /kickoff` com seus inputs para iniciar a execução do crew e
receber um `kickoff_id`.
</Step>
<Step title="Monitore o Progresso">
Use `GET /status/{kickoff_id}` para checar o status da execução e recuperar os resultados.
Use `GET /{kickoff_id}/status` para checar o status da execução e recuperar os resultados.
</Step>
</Steps>
@@ -40,13 +41,14 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
### Tipos de Token
| Tipo de Token | Escopo | Caso de Uso |
|:--------------------|:------------------------|:---------------------------------------------------------|
| **Bearer Token** | Acesso em nível de organização | Operações completas de crew, ideal para integração server-to-server |
| **User Bearer Token** | Acesso com escopo de usuário | Permissões limitadas, adequado para operações específicas de usuário |
| Tipo de Token | Escopo | Caso de Uso |
| :-------------------- | :----------------------------- | :------------------------------------------------------------------- |
| **Bearer Token** | Acesso em nível de organização | Operações completas de crew, ideal para integração server-to-server |
| **User Bearer Token** | Acesso com escopo de usuário | Permissões limitadas, adequado para operações específicas de usuário |
<Tip>
Você pode encontrar ambos os tipos de token na aba Status da página de detalhes do seu crew no painel do CrewAI AOP.
Você pode encontrar ambos os tipos de token na aba Status da página de
detalhes do seu crew no painel do CrewAI AOP.
</Tip>
## URL Base
@@ -63,29 +65,33 @@ Substitua `your-crew-name` pela URL real do seu crew no painel.
1. **Descoberta**: Chame `GET /inputs` para entender o que seu crew precisa
2. **Execução**: Envie os inputs via `POST /kickoff` para iniciar o processamento
3. **Monitoramento**: Faça polling em `GET /status/{kickoff_id}` até a conclusão
3. **Monitoramento**: Faça polling em `GET /{kickoff_id}/status` até a conclusão
4. **Resultados**: Extraia o output final da resposta concluída
## Tratamento de Erros
A API utiliza códigos de status HTTP padrão:
| Código | Significado |
|--------|:--------------------------------------|
| `200` | Sucesso |
| `400` | Requisição Inválida - Formato de input inválido |
| `401` | Não Autorizado - Bearer token inválido |
| `404` | Não Encontrado - Recurso não existe |
| Código | Significado |
| ------ | :----------------------------------------------- |
| `200` | Sucesso |
| `400` | Requisição Inválida - Formato de input inválido |
| `401` | Não Autorizado - Bearer token inválido |
| `404` | Não Encontrado - Recurso não existe |
| `422` | Erro de Validação - Inputs obrigatórios ausentes |
| `500` | Erro no Servidor - Contate o suporte |
| `500` | Erro no Servidor - Contate o suporte |
## Testes Interativos
<Info>
**Por que não há botão "Enviar"?** Como cada usuário do CrewAI AOP possui sua própria URL de crew, utilizamos o **modo referência** em vez de um playground interativo para evitar confusão. Isso mostra exatamente como as requisições devem ser feitas, sem botões de envio não funcionais.
**Por que não há botão "Enviar"?** Como cada usuário do CrewAI AOP possui sua
própria URL de crew, utilizamos o **modo referência** em vez de um playground
interativo para evitar confusão. Isso mostra exatamente como as requisições
devem ser feitas, sem botões de envio não funcionais.
</Info>
Cada página de endpoint mostra para você:
- ✅ **Formato exato da requisição** com todos os parâmetros
- ✅ **Exemplos de resposta** para casos de sucesso e erro
- ✅ **Exemplos de código** em várias linguagens (cURL, Python, JavaScript, etc.)
@@ -103,6 +109,7 @@ Cada página de endpoint mostra para você:
</CardGroup>
**Exemplo de fluxo:**
1. **Copie este exemplo cURL** de qualquer página de endpoint
2. **Substitua `your-actual-crew-name.crewai.com`** pela URL real do seu crew
3. **Substitua o Bearer token** pelo seu token real do painel
@@ -111,10 +118,18 @@ Cada página de endpoint mostra para você:
## Precisa de Ajuda?
<CardGroup cols={2}>
<Card title="Suporte Enterprise" icon="headset" href="mailto:support@crewai.com">
<Card
title="Suporte Enterprise"
icon="headset"
href="mailto:support@crewai.com"
>
Obtenha ajuda com integração da API e resolução de problemas
</Card>
<Card title="Painel Enterprise" icon="chart-line" href="https://app.crewai.com">
<Card
title="Painel Enterprise"
icon="chart-line"
href="https://app.crewai.com"
>
Gerencie seus crews e visualize logs de execução
</Card>
</CardGroup>

View File

@@ -1,8 +1,6 @@
---
title: "GET /status/{kickoff_id}"
title: "GET /{kickoff_id}/status"
description: "Obter o status da execução"
openapi: "/enterprise-api.pt-BR.yaml GET /status/{kickoff_id}"
openapi: "/enterprise-api.pt-BR.yaml GET /{kickoff_id}/status"
mode: "wide"
---

View File

@@ -62,13 +62,13 @@ Teste sua integração de trigger do Gmail localmente usando a CLI da CrewAI:
crewai triggers list
# Simule um trigger do Gmail com payload realista
crewai triggers run gmail/new_email
crewai triggers run gmail/new_email_received
```
O comando `crewai triggers run` executará sua crew com um payload completo do Gmail, permitindo que você teste sua lógica de parsing antes do deployment.
<Warning>
Use `crewai triggers run gmail/new_email` (não `crewai run`) para simular execução de trigger durante o desenvolvimento. Após o deployment, sua crew receberá automaticamente o payload do trigger.
Use `crewai triggers run gmail/new_email_received` (não `crewai run`) para simular execução de trigger durante o desenvolvimento. Após o deployment, sua crew receberá automaticamente o payload do trigger.
</Warning>
## Monitoring Executions
@@ -83,6 +83,6 @@ Track history and performance of triggered runs:
- Ensure Gmail is connected in Tools & Integrations
- Verify the Gmail Trigger is enabled on the Triggers tab
- Teste localmente com `crewai triggers run gmail/new_email` para ver a estrutura exata do payload
- Teste localmente com `crewai triggers run gmail/new_email_received` para ver a estrutura exata do payload
- Check the execution logs and confirm the payload is passed as `crewai_trigger_payload`
- Lembre-se: use `crewai triggers run` (não `crewai run`) para simular execução de trigger

View File

@@ -12,7 +12,7 @@ dependencies = [
"pytube~=15.0.0",
"requests~=2.32.5",
"docker~=7.1.0",
"crewai==1.7.0",
"crewai==1.7.2",
"lancedb~=0.5.4",
"tiktoken~=0.8.0",
"beautifulsoup4~=4.13.4",

View File

@@ -291,4 +291,4 @@ __all__ = [
"ZapierActionTools",
]
__version__ = "1.7.0"
__version__ = "1.7.2"

View File

@@ -1,5 +1,5 @@
"""Crewai Enterprise Tools."""
import os
import json
import re
from typing import Any, Optional, Union, cast, get_origin
@@ -432,7 +432,11 @@ class CrewAIPlatformActionTool(BaseTool):
payload = cleaned_kwargs
response = requests.post(
url=api_url, headers=headers, json=payload, timeout=60
url=api_url,
headers=headers,
json=payload,
timeout=60,
verify=os.environ.get("CREWAI_FACTORY", "false").lower() != "true",
)
data = response.json()

View File

@@ -1,5 +1,5 @@
from typing import Any
import os
from crewai.tools import BaseTool
import requests
@@ -37,6 +37,7 @@ class CrewaiPlatformToolBuilder:
headers=headers,
timeout=30,
params={"apps": ",".join(self._apps)},
verify=os.environ.get("CREWAI_FACTORY", "false").lower() != "true",
)
response.raise_for_status()
except Exception:

View File

@@ -1,4 +1,6 @@
from typing import Union, get_args, get_origin
from unittest.mock import patch, Mock
import os
from crewai_tools.tools.crewai_platform_tools.crewai_platform_action_tool import (
CrewAIPlatformActionTool,
@@ -249,3 +251,109 @@ class TestSchemaProcessing:
result_type = tool._process_schema_type(test_schema, "TestFieldAllOfMixed")
assert result_type is str
class TestCrewAIPlatformActionToolVerify:
"""Test suite for SSL verification behavior based on CREWAI_FACTORY environment variable"""
def setup_method(self):
self.action_schema = {
"function": {
"name": "test_action",
"parameters": {
"properties": {
"test_param": {
"type": "string",
"description": "Test parameter"
}
},
"required": []
}
}
}
def create_test_tool(self):
return CrewAIPlatformActionTool(
description="Test action tool",
action_name="test_action",
action_schema=self.action_schema
)
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token"}, clear=True)
@patch("crewai_tools.tools.crewai_platform_tools.crewai_platform_action_tool.requests.post")
def test_run_with_ssl_verification_default(self, mock_post):
"""Test that _run uses SSL verification by default when CREWAI_FACTORY is not set"""
mock_response = Mock()
mock_response.ok = True
mock_response.json.return_value = {"result": "success"}
mock_post.return_value = mock_response
tool = self.create_test_tool()
tool._run(test_param="test_value")
mock_post.assert_called_once()
call_args = mock_post.call_args
assert call_args.kwargs["verify"] is True
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token", "CREWAI_FACTORY": "false"}, clear=True)
@patch("crewai_tools.tools.crewai_platform_tools.crewai_platform_action_tool.requests.post")
def test_run_with_ssl_verification_factory_false(self, mock_post):
"""Test that _run uses SSL verification when CREWAI_FACTORY is 'false'"""
mock_response = Mock()
mock_response.ok = True
mock_response.json.return_value = {"result": "success"}
mock_post.return_value = mock_response
tool = self.create_test_tool()
tool._run(test_param="test_value")
mock_post.assert_called_once()
call_args = mock_post.call_args
assert call_args.kwargs["verify"] is True
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token", "CREWAI_FACTORY": "FALSE"}, clear=True)
@patch("crewai_tools.tools.crewai_platform_tools.crewai_platform_action_tool.requests.post")
def test_run_with_ssl_verification_factory_false_uppercase(self, mock_post):
"""Test that _run uses SSL verification when CREWAI_FACTORY is 'FALSE' (case-insensitive)"""
mock_response = Mock()
mock_response.ok = True
mock_response.json.return_value = {"result": "success"}
mock_post.return_value = mock_response
tool = self.create_test_tool()
tool._run(test_param="test_value")
mock_post.assert_called_once()
call_args = mock_post.call_args
assert call_args.kwargs["verify"] is True
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token", "CREWAI_FACTORY": "true"}, clear=True)
@patch("crewai_tools.tools.crewai_platform_tools.crewai_platform_action_tool.requests.post")
def test_run_without_ssl_verification_factory_true(self, mock_post):
"""Test that _run disables SSL verification when CREWAI_FACTORY is 'true'"""
mock_response = Mock()
mock_response.ok = True
mock_response.json.return_value = {"result": "success"}
mock_post.return_value = mock_response
tool = self.create_test_tool()
tool._run(test_param="test_value")
mock_post.assert_called_once()
call_args = mock_post.call_args
assert call_args.kwargs["verify"] is False
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token", "CREWAI_FACTORY": "TRUE"}, clear=True)
@patch("crewai_tools.tools.crewai_platform_tools.crewai_platform_action_tool.requests.post")
def test_run_without_ssl_verification_factory_true_uppercase(self, mock_post):
"""Test that _run disables SSL verification when CREWAI_FACTORY is 'TRUE' (case-insensitive)"""
mock_response = Mock()
mock_response.ok = True
mock_response.json.return_value = {"result": "success"}
mock_post.return_value = mock_response
tool = self.create_test_tool()
tool._run(test_param="test_value")
mock_post.assert_called_once()
call_args = mock_post.call_args
assert call_args.kwargs["verify"] is False

View File

@@ -258,3 +258,98 @@ class TestCrewaiPlatformToolBuilder(unittest.TestCase):
assert "simple_string" in description_text
assert "nested_object" in description_text
assert "array_prop" in description_text
class TestCrewaiPlatformToolBuilderVerify(unittest.TestCase):
"""Test suite for SSL verification behavior in CrewaiPlatformToolBuilder"""
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token"}, clear=True)
@patch(
"crewai_tools.tools.crewai_platform_tools.crewai_platform_tool_builder.requests.get"
)
def test_fetch_actions_with_ssl_verification_default(self, mock_get):
"""Test that _fetch_actions uses SSL verification by default when CREWAI_FACTORY is not set"""
mock_response = Mock()
mock_response.raise_for_status.return_value = None
mock_response.json.return_value = {"actions": {}}
mock_get.return_value = mock_response
builder = CrewaiPlatformToolBuilder(apps=["github"])
builder._fetch_actions()
mock_get.assert_called_once()
call_args = mock_get.call_args
assert call_args.kwargs["verify"] is True
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token", "CREWAI_FACTORY": "false"}, clear=True)
@patch(
"crewai_tools.tools.crewai_platform_tools.crewai_platform_tool_builder.requests.get"
)
def test_fetch_actions_with_ssl_verification_factory_false(self, mock_get):
"""Test that _fetch_actions uses SSL verification when CREWAI_FACTORY is 'false'"""
mock_response = Mock()
mock_response.raise_for_status.return_value = None
mock_response.json.return_value = {"actions": {}}
mock_get.return_value = mock_response
builder = CrewaiPlatformToolBuilder(apps=["github"])
builder._fetch_actions()
mock_get.assert_called_once()
call_args = mock_get.call_args
assert call_args.kwargs["verify"] is True
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token", "CREWAI_FACTORY": "FALSE"}, clear=True)
@patch(
"crewai_tools.tools.crewai_platform_tools.crewai_platform_tool_builder.requests.get"
)
def test_fetch_actions_with_ssl_verification_factory_false_uppercase(self, mock_get):
"""Test that _fetch_actions uses SSL verification when CREWAI_FACTORY is 'FALSE' (case-insensitive)"""
mock_response = Mock()
mock_response.raise_for_status.return_value = None
mock_response.json.return_value = {"actions": {}}
mock_get.return_value = mock_response
builder = CrewaiPlatformToolBuilder(apps=["github"])
builder._fetch_actions()
mock_get.assert_called_once()
call_args = mock_get.call_args
assert call_args.kwargs["verify"] is True
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token", "CREWAI_FACTORY": "true"}, clear=True)
@patch(
"crewai_tools.tools.crewai_platform_tools.crewai_platform_tool_builder.requests.get"
)
def test_fetch_actions_without_ssl_verification_factory_true(self, mock_get):
"""Test that _fetch_actions disables SSL verification when CREWAI_FACTORY is 'true'"""
mock_response = Mock()
mock_response.raise_for_status.return_value = None
mock_response.json.return_value = {"actions": {}}
mock_get.return_value = mock_response
builder = CrewaiPlatformToolBuilder(apps=["github"])
builder._fetch_actions()
mock_get.assert_called_once()
call_args = mock_get.call_args
assert call_args.kwargs["verify"] is False
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token", "CREWAI_FACTORY": "TRUE"}, clear=True)
@patch(
"crewai_tools.tools.crewai_platform_tools.crewai_platform_tool_builder.requests.get"
)
def test_fetch_actions_without_ssl_verification_factory_true_uppercase(self, mock_get):
"""Test that _fetch_actions disables SSL verification when CREWAI_FACTORY is 'TRUE' (case-insensitive)"""
mock_response = Mock()
mock_response.raise_for_status.return_value = None
mock_response.json.return_value = {"actions": {}}
mock_get.return_value = mock_response
builder = CrewaiPlatformToolBuilder(apps=["github"])
builder._fetch_actions()
mock_get.assert_called_once()
call_args = mock_get.call_args
assert call_args.kwargs["verify"] is False

View File

@@ -49,7 +49,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
"crewai-tools==1.7.0",
"crewai-tools==1.7.2",
]
embeddings = [
"tiktoken~=0.8.0"

View File

@@ -40,7 +40,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
__version__ = "1.7.0"
__version__ = "1.7.2"
_telemetry_submitted = False

View File

@@ -16,7 +16,7 @@ from crewai.events.types.knowledge_events import (
KnowledgeSearchQueryFailedEvent,
)
from crewai.knowledge.utils.knowledge_utils import extract_knowledge_context
from crewai.utilities.converter import generate_model_description
from crewai.utilities.pydantic_schema_utils import generate_model_description
if TYPE_CHECKING:

View File

@@ -5,10 +5,9 @@ from __future__ import annotations
from abc import ABC, abstractmethod
import json
import re
from typing import TYPE_CHECKING, Final, Literal
from crewai.utilities.converter import generate_model_description
from typing import TYPE_CHECKING, Any, Final, Literal
from crewai.utilities.pydantic_schema_utils import generate_model_description
if TYPE_CHECKING:
@@ -42,7 +41,7 @@ class BaseConverterAdapter(ABC):
"""
self.agent_adapter = agent_adapter
self._output_format: Literal["json", "pydantic"] | None = None
self._schema: str | None = None
self._schema: dict[str, Any] | None = None
@abstractmethod
def configure_structured_output(self, task: Task) -> None:
@@ -129,7 +128,7 @@ class BaseConverterAdapter(ABC):
@staticmethod
def _configure_format_from_task(
task: Task,
) -> tuple[Literal["json", "pydantic"] | None, str | None]:
) -> tuple[Literal["json", "pydantic"] | None, dict[str, Any] | None]:
"""Determine output format and schema from task requirements.
This is a helper method that examines the task's output requirements

View File

@@ -4,6 +4,7 @@ This module contains the OpenAIConverterAdapter class that handles structured
output conversion for OpenAI agents, supporting JSON and Pydantic model formats.
"""
import json
from typing import Any
from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
@@ -61,7 +62,7 @@ class OpenAIConverterAdapter(BaseConverterAdapter):
output_schema: str = (
get_i18n()
.slice("formatted_task_instructions")
.format(output_format=self._schema)
.format(output_format=json.dumps(self._schema, indent=2))
)
return f"{base_prompt}\n\n{output_schema}"

View File

@@ -149,7 +149,9 @@ class AuthenticationCommand:
return
if token_data["error"] not in ("authorization_pending", "slow_down"):
raise requests.HTTPError(token_data["error_description"])
raise requests.HTTPError(
token_data.get("error_description") or token_data.get("error")
)
time.sleep(device_code_data["interval"])
attempts += 1

View File

@@ -1,6 +1,6 @@
from typing import Any
from urllib.parse import urljoin
import os
import requests
from crewai.cli.config import Settings
@@ -33,9 +33,7 @@ class PlusAPI:
if settings.org_uuid:
self.headers["X-Crewai-Organization-Id"] = settings.org_uuid
self.base_url = (
str(settings.enterprise_base_url) or DEFAULT_CREWAI_ENTERPRISE_URL
)
self.base_url = os.getenv("CREWAI_PLUS_URL") or str(settings.enterprise_base_url) or DEFAULT_CREWAI_ENTERPRISE_URL
def _make_request(
self, method: str, endpoint: str, **kwargs: Any

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.7.0"
"crewai[tools]==1.7.2"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.7.0"
"crewai[tools]==1.7.2"
]
[project.scripts]

View File

@@ -1,4 +1,5 @@
import base64
from json import JSONDecodeError
import os
from pathlib import Path
import subprocess
@@ -11,6 +12,7 @@ from rich.console import Console
from crewai.cli import git
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli.config import Settings
from crewai.cli.constants import DEFAULT_CREWAI_ENTERPRISE_URL
from crewai.cli.utils import (
build_env_with_tool_repository_credentials,
extract_available_exports,
@@ -130,10 +132,13 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
self._validate_response(publish_response)
published_handle = publish_response.json()["handle"]
settings = Settings()
base_url = settings.enterprise_base_url or DEFAULT_CREWAI_ENTERPRISE_URL
console.print(
f"Successfully published `{published_handle}` ({project_version}).\n\n"
+ "⚠️ Security checks are running in the background. Your tool will be available once these are complete.\n"
+ f"You can monitor the status or access your tool here:\nhttps://app.crewai.com/crewai_plus/tools/{published_handle}",
+ f"You can monitor the status or access your tool here:\n{base_url}/crewai_plus/tools/{published_handle}",
style="bold green",
)
@@ -162,9 +167,19 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
if login_response.status_code != 200:
console.print(
"Authentication failed. Verify if the currently active organization access to the tool repository, and run 'crewai login' again. ",
"Authentication failed. Verify if the currently active organization can access the tool repository, and run 'crewai login' again.",
style="bold red",
)
try:
console.print(
f"[{login_response.status_code} error - {login_response.json().get('message', 'Unknown error')}]",
style="bold red italic",
)
except JSONDecodeError:
console.print(
f"[{login_response.status_code} error - Unknown error - Invalid JSON response]",
style="bold red italic",
)
raise SystemExit
login_response_json = login_response.json()
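The added try/except means a failed login with a non-JSON body (e.g. an HTML error page from a proxy) still produces a readable message instead of crashing on `response.json()`. A generic sketch of the pattern, with a hypothetical stand-in response object:

from json import JSONDecodeError

def describe_error(response) -> str:
    # Illustrative: mirror the guard added in the diff.
    try:
        message = response.json().get("message", "Unknown error")
    except JSONDecodeError:
        message = "Unknown error - Invalid JSON response"
    return f"[{response.status_code} error - {message}]"

class FakeResponse:  # hypothetical stand-in for requests.Response
    status_code = 502
    def json(self):
        raise JSONDecodeError("Expecting value", "<html>Bad Gateway</html>", 0)

print(describe_error(FakeResponse()))  # [502 error - Unknown error - Invalid JSON response]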

View File

@@ -1017,10 +1017,26 @@ class Crew(FlowTrackable, BaseModel):
tasks=self.tasks, planning_agent_llm=self.planning_llm
)._handle_crew_planning()
for task, step_plan in zip(
self.tasks, result.list_of_plans_per_task, strict=False
):
task.description += step_plan.plan
plan_map: dict[int, str] = {}
for step_plan in result.list_of_plans_per_task:
if step_plan.task_number in plan_map:
self._logger.log(
"warning",
f"Duplicate plan for Task Number {step_plan.task_number}, "
"using the first plan",
)
else:
plan_map[step_plan.task_number] = step_plan.plan
for idx, task in enumerate(self.tasks):
task_number = idx + 1
if task_number in plan_map:
task.description += plan_map[task_number]
else:
self._logger.log(
"warning",
f"No plan found for Task Number {task_number}",
)
def _store_execution_log(
self,
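Plans are no longer zipped to tasks by position; they are matched through an explicit 1-indexed `task_number`, with duplicates keeping the first plan and gaps logged. A standalone sketch of the matching logic, assuming plans arrive as (task_number, plan) pairs:

import logging

logger = logging.getLogger(__name__)

def apply_plans(descriptions: list[str], plans: list[tuple[int, str]]) -> list[str]:
    # Illustrative version of the new matching in Crew._handle_crew_planning.
    plan_map: dict[int, str] = {}
    for task_number, plan in plans:
        if task_number in plan_map:
            logger.warning("Duplicate plan for Task Number %s, using the first plan", task_number)
        else:
            plan_map[task_number] = plan
    out: list[str] = []
    for idx, description in enumerate(descriptions):
        number = idx + 1
        if number in plan_map:
            out.append(description + plan_map[number])
        else:
            logger.warning("No plan found for Task Number %s", number)
            out.append(description)
    return out

# Out-of-order plans still land on the right task:
assert apply_plans(["A", "B"], [(2, " plan-b"), (1, " plan-a")]) == ["A plan-a", "B plan-b"]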

View File

@@ -9,6 +9,8 @@ from rich.console import Console
from rich.panel import Panel
from crewai.cli.authentication.token import AuthError, get_auth_token
from crewai.cli.config import Settings
from crewai.cli.constants import DEFAULT_CREWAI_ENTERPRISE_URL
from crewai.cli.plus_api import PlusAPI
from crewai.cli.version import get_crewai_version
from crewai.events.listeners.tracing.types import TraceEvent
@@ -16,7 +18,6 @@ from crewai.events.listeners.tracing.utils import (
is_tracing_enabled_in_context,
should_auto_collect_first_time_traces,
)
from crewai.utilities.constants import CREWAI_BASE_URL
logger = getLogger(__name__)
@@ -326,10 +327,12 @@ class TraceBatchManager:
if response.status_code == 200:
access_code = response.json().get("access_code", None)
console = Console()
settings = Settings()
base_url = settings.enterprise_base_url or DEFAULT_CREWAI_ENTERPRISE_URL
return_link = (
f"{CREWAI_BASE_URL}/crewai_plus/trace_batches/{self.trace_batch_id}"
f"{base_url}/crewai_plus/trace_batches/{self.trace_batch_id}"
if not self.is_current_batch_ephemeral and access_code is None
else f"{CREWAI_BASE_URL}/crewai_plus/ephemeral_trace_batches/{self.trace_batch_id}?access_code={access_code}"
else f"{base_url}/crewai_plus/ephemeral_trace_batches/{self.trace_batch_id}?access_code={access_code}"
)
if self.is_current_batch_ephemeral:
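The link now uses the resolved enterprise base URL instead of a hardcoded constant; ephemeral batches get the access-coded variant. A small sketch of the selection, factored out of the expression in the diff:

def build_trace_link(base_url: str, batch_id: str, ephemeral: bool, access_code: str | None) -> str:
    # Illustrative reconstruction of the conditional return_link above.
    if not ephemeral and access_code is None:
        return f"{base_url}/crewai_plus/trace_batches/{batch_id}"
    return f"{base_url}/crewai_plus/ephemeral_trace_batches/{batch_id}?access_code={access_code}"

print(build_trace_link("https://app.crewai.com", "abc123", ephemeral=True, access_code="xyz"))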

View File

@@ -676,14 +676,13 @@ class LLM(BaseLLM):
formatted_messages = self._format_messages_for_provider(messages)
# --- 2) Prepare the parameters for the completion call
params = {
params: dict[str, Any] = {
"model": self.model,
"messages": formatted_messages,
"timeout": self.timeout,
"temperature": self.temperature,
"top_p": self.top_p,
"n": self.n,
"stop": self.stop,
"max_tokens": self.max_tokens or self.max_completion_tokens,
"presence_penalty": self.presence_penalty,
"frequency_penalty": self.frequency_penalty,
@@ -702,6 +701,12 @@ class LLM(BaseLLM):
**self.additional_params,
}
# Only include stop if it has values and is not in additional_drop_params
# Some models (e.g., gpt-5.1) don't support the stop parameter at all
drop_params = self.additional_params.get("additional_drop_params", [])
if self.stop and "stop" not in drop_params:
params["stop"] = self.stop
# Remove None values from params
return {k: v for k, v in params.items() if v is not None}
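In effect, `stop` went from an unconditional entry in the params dict to a guarded one. A condensed sketch of just the new filtering behavior (the params dict here is trimmed to the relevant keys):

from typing import Any

def filter_params(stop: list[str], additional_params: dict[str, Any]) -> dict[str, Any]:
    # Illustrative: `stop` is only forwarded when non-empty and not listed
    # in additional_drop_params, matching the diff above.
    params: dict[str, Any] = {"model": "gpt-4o", **additional_params}
    drop_params = additional_params.get("additional_drop_params", [])
    if stop and "stop" not in drop_params:
        params["stop"] = stop
    return {k: v for k, v in params.items() if v is not None}

assert "stop" not in filter_params([], {})                       # empty list omitted
assert filter_params(["Observation:"], {})["stop"] == ["Observation:"]
assert "stop" not in filter_params(["Observation:"], {"additional_drop_params": ["stop"]})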

View File

@@ -9,10 +9,10 @@ from pydantic import BaseModel
from typing_extensions import Self
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.converter import generate_model_description
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
)
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.types import LLMMessage

View File

@@ -18,10 +18,10 @@ from crewai.events.types.llm_events import LLMCallType
from crewai.llms.base_llm import BaseLLM
from crewai.llms.hooks.transport import AsyncHTTPTransport, HTTPTransport
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.converter import generate_model_description
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
)
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.types import LLMMessage

View File

@@ -494,8 +494,11 @@ class Task(BaseModel):
future: Future[TaskOutput],
) -> None:
"""Execute the task asynchronously with context handling."""
result = self._execute_core(agent, context, tools)
future.set_result(result)
try:
result = self._execute_core(agent, context, tools)
future.set_result(result)
except Exception as e:
future.set_exception(e)
async def aexecute_sync(
self,
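Storing the failure on the Future via `set_exception` means the caller's `.result()` re-raises it instead of the exception dying in the worker thread. A minimal standalone illustration of the same pattern:

from concurrent.futures import Future

def run_and_report(future: Future, work) -> None:
    # Illustrative: same shape as the diff — success and failure both end
    # up on the Future for the waiting caller to observe.
    try:
        future.set_result(work())
    except Exception as e:
        future.set_exception(e)

f: Future = Future()
run_and_report(f, lambda: 1 / 0)
try:
    f.result()
except ZeroDivisionError:
    print("exception propagated to the caller")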

View File

@@ -3,15 +3,13 @@ from __future__ import annotations
from abc import ABC, abstractmethod
import asyncio
from collections.abc import Awaitable, Callable
from inspect import signature
from inspect import Parameter, signature
import json
from typing import (
Any,
Generic,
ParamSpec,
TypeVar,
cast,
get_args,
get_origin,
overload,
)
@@ -27,6 +25,7 @@ from typing_extensions import TypeIs
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.utilities.printer import Printer
from crewai.utilities.pydantic_schema_utils import generate_model_description
_printer = Printer()
@@ -103,20 +102,40 @@ class BaseTool(BaseModel, ABC):
if v != cls._ArgsSchemaPlaceholder:
return v
return cast(
type[PydanticBaseModel],
type(
f"{cls.__name__}Schema",
(PydanticBaseModel,),
{
"__annotations__": {
k: v
for k, v in cls._run.__annotations__.items()
if k != "return"
},
},
),
)
run_sig = signature(cls._run)
fields: dict[str, Any] = {}
for param_name, param in run_sig.parameters.items():
if param_name in ("self", "return"):
continue
if param.kind in (Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD):
continue
annotation = param.annotation if param.annotation != param.empty else Any
if param.default is param.empty:
fields[param_name] = (annotation, ...)
else:
fields[param_name] = (annotation, param.default)
if not fields:
arun_sig = signature(cls._arun)
for param_name, param in arun_sig.parameters.items():
if param_name in ("self", "return"):
continue
if param.kind in (Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD):
continue
annotation = (
param.annotation if param.annotation != param.empty else Any
)
if param.default is param.empty:
fields[param_name] = (annotation, ...)
else:
fields[param_name] = (annotation, param.default)
return create_model(f"{cls.__name__}Schema", **fields)
@field_validator("max_usage_count", mode="before")
@classmethod
@@ -226,24 +245,23 @@ class BaseTool(BaseModel, ABC):
args_schema = getattr(tool, "args_schema", None)
if args_schema is None:
# Infer args_schema from the function signature if not provided
func_signature = signature(tool.func)
annotations = func_signature.parameters
args_fields: dict[str, Any] = {}
for name, param in annotations.items():
if name != "self":
param_annotation = (
param.annotation if param.annotation != param.empty else Any
)
field_info = Field(
default=...,
description="",
)
args_fields[name] = (param_annotation, field_info)
if args_fields:
args_schema = create_model(f"{tool.name}Input", **args_fields)
fields: dict[str, Any] = {}
for name, param in func_signature.parameters.items():
if name == "self":
continue
if param.kind in (Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD):
continue
param_annotation = (
param.annotation if param.annotation != param.empty else Any
)
if param.default is param.empty:
fields[name] = (param_annotation, ...)
else:
fields[name] = (param_annotation, param.default)
if fields:
args_schema = create_model(f"{tool.name}Input", **fields)
else:
# Create a default schema with no fields if no parameters are found
args_schema = create_model(
f"{tool.name}Input", __base__=PydanticBaseModel
)
@@ -257,53 +275,37 @@ class BaseTool(BaseModel, ABC):
def _set_args_schema(self) -> None:
if self.args_schema is None:
class_name = f"{self.__class__.__name__}Schema"
self.args_schema = cast(
type[PydanticBaseModel],
type(
class_name,
(PydanticBaseModel,),
{
"__annotations__": {
k: v
for k, v in self._run.__annotations__.items()
if k != "return"
},
},
),
run_sig = signature(self._run)
fields: dict[str, Any] = {}
for param_name, param in run_sig.parameters.items():
if param_name in ("self", "return"):
continue
if param.kind in (Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD):
continue
annotation = (
param.annotation if param.annotation != param.empty else Any
)
if param.default is param.empty:
fields[param_name] = (annotation, ...)
else:
fields[param_name] = (annotation, param.default)
self.args_schema = create_model(
f"{self.__class__.__name__}Schema", **fields
)
def _generate_description(self) -> None:
args_schema = {
name: {
"description": field.description,
"type": BaseTool._get_arg_annotations(field.annotation),
}
for name, field in self.args_schema.model_fields.items()
}
self.description = f"Tool Name: {self.name}\nTool Arguments: {args_schema}\nTool Description: {self.description}"
@staticmethod
def _get_arg_annotations(annotation: type[Any] | None) -> str:
if annotation is None:
return "None"
origin = get_origin(annotation)
args = get_args(annotation)
if origin is None:
return (
annotation.__name__
if hasattr(annotation, "__name__")
else str(annotation)
)
if args:
args_str = ", ".join(BaseTool._get_arg_annotations(arg) for arg in args)
return str(f"{origin.__name__}[{args_str}]")
return str(origin.__name__)
"""Generate the tool description with a JSON schema for arguments."""
schema = generate_model_description(self.args_schema)
args_json = json.dumps(schema["json_schema"]["schema"], indent=2)
self.description = (
f"Tool Name: {self.name}\n"
f"Tool Arguments: {args_json}\n"
f"Tool Description: {self.description}"
)
class Tool(BaseTool, Generic[P, R]):
@@ -406,24 +408,23 @@ class Tool(BaseTool, Generic[P, R]):
args_schema = getattr(tool, "args_schema", None)
if args_schema is None:
# Infer args_schema from the function signature if not provided
func_signature = signature(tool.func)
annotations = func_signature.parameters
args_fields: dict[str, Any] = {}
for name, param in annotations.items():
if name != "self":
param_annotation = (
param.annotation if param.annotation != param.empty else Any
)
field_info = Field(
default=...,
description="",
)
args_fields[name] = (param_annotation, field_info)
if args_fields:
args_schema = create_model(f"{tool.name}Input", **args_fields)
fields: dict[str, Any] = {}
for name, param in func_signature.parameters.items():
if name == "self":
continue
if param.kind in (Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD):
continue
param_annotation = (
param.annotation if param.annotation != param.empty else Any
)
if param.default is param.empty:
fields[name] = (param_annotation, ...)
else:
fields[name] = (param_annotation, param.default)
if fields:
args_schema = create_model(f"{tool.name}Input", **fields)
else:
# Create a default schema with no fields if no parameters are found
args_schema = create_model(
f"{tool.name}Input", __base__=PydanticBaseModel
)
@@ -502,32 +503,38 @@ def tool(
def _make_tool(f: Callable[P2, R2]) -> Tool[P2, R2]:
if f.__doc__ is None:
raise ValueError("Function must have a docstring")
func_annotations = getattr(f, "__annotations__", None)
if func_annotations is None:
if f.__annotations__ is None:
raise ValueError("Function must have type annotations")
func_sig = signature(f)
fields: dict[str, Any] = {}
for param_name, param in func_sig.parameters.items():
if param_name == "return":
continue
if param.kind in (Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD):
continue
annotation = (
param.annotation if param.annotation != param.empty else Any
)
if param.default is param.empty:
fields[param_name] = (annotation, ...)
else:
fields[param_name] = (annotation, param.default)
class_name = "".join(tool_name.split()).title()
tool_args_schema = cast(
type[PydanticBaseModel],
type(
class_name,
(PydanticBaseModel,),
{
"__annotations__": {
k: v for k, v in func_annotations.items() if k != "return"
},
},
),
)
args_schema = create_model(class_name, **fields)
return Tool(
name=tool_name,
description=f.__doc__,
func=f,
args_schema=tool_args_schema,
args_schema=args_schema,
result_as_answer=result_as_answer,
max_usage_count=max_usage_count,
current_usage_count=0,
)
return _make_tool
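The common thread in this file: args schemas are now built from `inspect.signature` with `pydantic.create_model`, so defaults are preserved and `*args`/`**kwargs` are skipped, where the old `__annotations__`-based `type()` call dropped both. A distilled sketch of the inference, under the assumption that a plain callable is being wrapped:

from inspect import Parameter, signature
from typing import Any

from pydantic import BaseModel, create_model

def schema_from_callable(name: str, func) -> type[BaseModel]:
    # Illustrative distillation of the new inference: walk the signature,
    # skip self and var-args, keep annotations and defaults.
    fields: dict[str, Any] = {}
    for param_name, param in signature(func).parameters.items():
        if param_name == "self":
            continue
        if param.kind in (Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD):
            continue
        annotation = param.annotation if param.annotation is not Parameter.empty else Any
        default = ... if param.default is Parameter.empty else param.default
        fields[param_name] = (annotation, default)
    return create_model(name, **fields)

def search(query: str, limit: int = 5, *extra: str) -> str:
    return query

Schema = schema_from_callable("SearchInput", search)
print(Schema.model_json_schema()["properties"])  # query required, limit optional
print(Schema(query="llms").limit)                # -> 5, default preserved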

View File

@@ -30,4 +30,3 @@ NOT_SPECIFIED: Final[
"allows us to distinguish between 'not passed at all' and 'explicitly passed None' or '[]'.",
]
] = _NotSpecified()
CREWAI_BASE_URL: Final[str] = "https://app.crewai.com"

View File

@@ -1,7 +1,5 @@
from __future__ import annotations
from collections.abc import Callable
from copy import deepcopy
import json
import re
from typing import TYPE_CHECKING, Any, Final, TypedDict
@@ -13,6 +11,7 @@ from crewai.agents.agent_builder.utilities.base_output_converter import OutputCo
from crewai.utilities.i18n import get_i18n
from crewai.utilities.internal_instructor import InternalInstructor
from crewai.utilities.printer import Printer
from crewai.utilities.pydantic_schema_utils import generate_model_description
if TYPE_CHECKING:
@@ -421,221 +420,3 @@ def create_converter(
raise Exception("No output converter found or set.")
return converter # type: ignore[no-any-return]
def resolve_refs(schema: dict[str, Any]) -> dict[str, Any]:
"""Recursively resolve all local $refs in the given JSON Schema using $defs as the source.
This is needed because Pydantic generates $ref-based schemas that
some consumers (e.g. LLMs, tool frameworks) don't handle well.
Args:
schema: JSON Schema dict that may contain "$refs" and "$defs".
Returns:
A new schema dictionary with all local $refs replaced by their definitions.
"""
defs = schema.get("$defs", {})
schema_copy = deepcopy(schema)
def _resolve(node: Any) -> Any:
if isinstance(node, dict):
ref = node.get("$ref")
if isinstance(ref, str) and ref.startswith("#/$defs/"):
def_name = ref.replace("#/$defs/", "")
if def_name in defs:
return _resolve(deepcopy(defs[def_name]))
raise KeyError(f"Definition '{def_name}' not found in $defs.")
return {k: _resolve(v) for k, v in node.items()}
if isinstance(node, list):
return [_resolve(i) for i in node]
return node
return _resolve(schema_copy) # type: ignore[no-any-return]
def add_key_in_dict_recursively(
d: dict[str, Any], key: str, value: Any, criteria: Callable[[dict[str, Any]], bool]
) -> dict[str, Any]:
"""Recursively adds a key/value pair to all nested dicts matching `criteria`."""
if isinstance(d, dict):
if criteria(d) and key not in d:
d[key] = value
for v in d.values():
add_key_in_dict_recursively(v, key, value, criteria)
elif isinstance(d, list):
for i in d:
add_key_in_dict_recursively(i, key, value, criteria)
return d
def fix_discriminator_mappings(schema: dict[str, Any]) -> dict[str, Any]:
"""Replace '#/$defs/...' references in discriminator.mapping with just the model name."""
output = schema.get("properties", {}).get("output")
if not output:
return schema
disc = output.get("discriminator")
if not disc or "mapping" not in disc:
return schema
disc["mapping"] = {k: v.split("/")[-1] for k, v in disc["mapping"].items()}
return schema
def add_const_to_oneof_variants(schema: dict[str, Any]) -> dict[str, Any]:
"""Add const fields to oneOf variants for discriminated unions.
The json_schema_to_pydantic library requires each oneOf variant to have
a const field for the discriminator property. This function adds those
const fields based on the discriminator mapping.
Args:
schema: JSON Schema dict that may contain discriminated unions
Returns:
Modified schema with const fields added to oneOf variants
"""
def _process_oneof(node: dict[str, Any]) -> dict[str, Any]:
"""Process a single node that might contain a oneOf with discriminator."""
if not isinstance(node, dict):
return node
if "oneOf" in node and "discriminator" in node:
discriminator = node["discriminator"]
property_name = discriminator.get("propertyName")
mapping = discriminator.get("mapping", {})
if property_name and mapping:
one_of_variants = node.get("oneOf", [])
for variant in one_of_variants:
if isinstance(variant, dict) and "properties" in variant:
variant_title = variant.get("title", "")
matched_disc_value = None
for disc_value, schema_name in mapping.items():
if variant_title == schema_name or variant_title.endswith(
schema_name
):
matched_disc_value = disc_value
break
if matched_disc_value is not None:
props = variant["properties"]
if property_name in props:
props[property_name]["const"] = matched_disc_value
for key, value in node.items():
if isinstance(value, dict):
node[key] = _process_oneof(value)
elif isinstance(value, list):
node[key] = [
_process_oneof(item) if isinstance(item, dict) else item
for item in value
]
return node
return _process_oneof(deepcopy(schema))
def convert_oneof_to_anyof(schema: dict[str, Any]) -> dict[str, Any]:
"""Convert oneOf to anyOf for OpenAI compatibility.
OpenAI's Structured Outputs support anyOf better than oneOf.
This recursively converts all oneOf occurrences to anyOf.
Args:
schema: JSON schema dictionary.
Returns:
Modified schema with anyOf instead of oneOf.
"""
if isinstance(schema, dict):
if "oneOf" in schema:
schema["anyOf"] = schema.pop("oneOf")
for value in schema.values():
if isinstance(value, dict):
convert_oneof_to_anyof(value)
elif isinstance(value, list):
for item in value:
if isinstance(item, dict):
convert_oneof_to_anyof(item)
return schema
def ensure_all_properties_required(schema: dict[str, Any]) -> dict[str, Any]:
"""Ensure all properties are in the required array for OpenAI strict mode.
OpenAI's strict structured outputs require all properties to be listed
in the required array. This recursively updates all objects to include
all their properties in required.
Args:
schema: JSON schema dictionary.
Returns:
Modified schema with all properties marked as required.
"""
if isinstance(schema, dict):
if schema.get("type") == "object" and "properties" in schema:
properties = schema["properties"]
if properties:
schema["required"] = list(properties.keys())
for value in schema.values():
if isinstance(value, dict):
ensure_all_properties_required(value)
elif isinstance(value, list):
for item in value:
if isinstance(item, dict):
ensure_all_properties_required(item)
return schema
def generate_model_description(model: type[BaseModel]) -> dict[str, Any]:
"""Generate JSON schema description of a Pydantic model.
This function takes a Pydantic model class and returns its JSON schema,
which includes full type information, discriminators, and all metadata.
The schema is dereferenced to inline all $ref references for better LLM understanding.
Args:
model: A Pydantic model class.
Returns:
A JSON schema dictionary representation of the model.
"""
json_schema = model.model_json_schema(ref_template="#/$defs/{model}")
json_schema = add_key_in_dict_recursively(
json_schema,
key="additionalProperties",
value=False,
criteria=lambda d: d.get("type") == "object"
and "additionalProperties" not in d,
)
json_schema = resolve_refs(json_schema)
json_schema.pop("$defs", None)
json_schema = fix_discriminator_mappings(json_schema)
json_schema = convert_oneof_to_anyof(json_schema)
json_schema = ensure_all_properties_required(json_schema)
return {
"type": "json_schema",
"json_schema": {
"name": model.__name__,
"strict": True,
"schema": json_schema,
},
}

View File

@@ -1,14 +1,15 @@
from __future__ import annotations
from typing import TYPE_CHECKING, cast
import json
from typing import TYPE_CHECKING, Any, cast
from pydantic import BaseModel, Field
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.task_events import TaskEvaluationEvent
from crewai.llm import LLM
from crewai.utilities.converter import Converter
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
from crewai.utilities.i18n import get_i18n
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.training_converter import TrainingConverter
@@ -62,7 +63,7 @@ class TaskEvaluator:
Args:
original_agent: The agent to evaluate.
"""
self.llm = cast(LLM, original_agent.llm)
self.llm = original_agent.llm
self.original_agent = original_agent
def evaluate(self, task: Task, output: str) -> TaskEvaluation:
@@ -79,7 +80,8 @@ class TaskEvaluator:
- Investigate the Converter.to_pydantic signature, returns BaseModel strictly?
"""
crewai_event_bus.emit(
self, TaskEvaluationEvent(evaluation_type="task_evaluation", task=task)
self,
TaskEvaluationEvent(evaluation_type="task_evaluation", task=task), # type: ignore[no-untyped-call]
)
evaluation_query = (
f"Assess the quality of the task completed based on the description, expected output, and actual results.\n\n"
@@ -94,9 +96,14 @@ class TaskEvaluator:
instructions = "Convert all responses into valid JSON output."
if not self.llm.supports_function_calling():
model_schema = PydanticSchemaParser(model=TaskEvaluation).get_schema()
instructions = f"{instructions}\n\nReturn only valid JSON with the following schema:\n```json\n{model_schema}\n```"
if not self.llm.supports_function_calling(): # type: ignore[union-attr]
schema_dict = generate_model_description(TaskEvaluation)
output_schema: str = (
get_i18n()
.slice("formatted_task_instructions")
.format(output_format=json.dumps(schema_dict, indent=2))
)
instructions = f"{instructions}\n\n{output_schema}"
converter = Converter(
llm=self.llm,
@@ -108,7 +115,7 @@ class TaskEvaluator:
return cast(TaskEvaluation, converter.to_pydantic())
def evaluate_training_data(
self, training_data: dict, agent_id: str
self, training_data: dict[str, Any], agent_id: str
) -> TrainingTaskEvaluation:
"""
Evaluate the training data based on the llm output, human feedback, and improved output.
@@ -121,7 +128,8 @@ class TaskEvaluator:
- Investigate the Converter.to_pydantic signature, returns BaseModel strictly?
"""
crewai_event_bus.emit(
self, TaskEvaluationEvent(evaluation_type="training_data_evaluation")
self,
TaskEvaluationEvent(evaluation_type="training_data_evaluation"), # type: ignore[no-untyped-call]
)
output_training_data = training_data[agent_id]
@@ -164,11 +172,14 @@ class TaskEvaluator:
)
instructions = "I'm gonna convert this raw text into valid JSON."
if not self.llm.supports_function_calling():
model_schema = PydanticSchemaParser(
model=TrainingTaskEvaluation
).get_schema()
instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
if not self.llm.supports_function_calling(): # type: ignore[union-attr]
schema_dict = generate_model_description(TrainingTaskEvaluation)
output_schema: str = (
get_i18n()
.slice("formatted_task_instructions")
.format(output_format=json.dumps(schema_dict, indent=2))
)
instructions = f"{instructions}\n\n{output_schema}"
converter = TrainingConverter(
llm=self.llm,

View File

@@ -15,9 +15,12 @@ logger = logging.getLogger(__name__)
class PlanPerTask(BaseModel):
"""Represents a plan for a specific task."""
task: str = Field(..., description="The task for which the plan is created")
task_number: int = Field(
description="The 1-indexed task number this plan corresponds to",
ge=1,
)
task: str = Field(description="The task for which the plan is created")
plan: str = Field(
...,
description="The step by step plan on how the agents can execute their tasks using the available tools with mastery",
)
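`task_number` is both required and constrained (`ge=1`), so out-of-range or missing values fail at parse time. A quick illustration with a minimal stand-in model matching the fields above (description text abbreviated):

from pydantic import BaseModel, Field, ValidationError

class PlanPerTask(BaseModel):
    # Minimal stand-in mirroring the diff's fields.
    task_number: int = Field(description="The 1-indexed task number this plan corresponds to", ge=1)
    task: str = Field(description="The task for which the plan is created")
    plan: str = Field(description="The step by step plan for the task")

PlanPerTask(task_number=1, task="Task1", plan="Plan 1")          # ok
try:
    PlanPerTask(task="Task1", plan="Plan 1")                     # missing task_number
except ValidationError:
    print("task_number is now required")
try:
    PlanPerTask(task_number=0, task="Task1", plan="Plan 1")      # violates ge=1
except ValidationError:
    print("task_number must be >= 1")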

View File

@@ -1,103 +0,0 @@
from typing import Any, Union, get_args, get_origin
from pydantic import BaseModel, Field
class PydanticSchemaParser(BaseModel):
model: type[BaseModel] = Field(..., description="The Pydantic model to parse.")
def get_schema(self) -> str:
"""Public method to get the schema of a Pydantic model.
Returns:
String representation of the model schema.
"""
return "{\n" + self._get_model_schema(self.model) + "\n}"
def _get_model_schema(self, model: type[BaseModel], depth: int = 0) -> str:
"""Recursively get the schema of a Pydantic model, handling nested models and lists.
Args:
model: The Pydantic model to process.
depth: The current depth of recursion for indentation purposes.
Returns:
A string representation of the model schema.
"""
indent: str = " " * 4 * depth
lines: list[str] = [
f"{indent} {field_name}: {self._get_field_type_for_annotation(field.annotation, depth + 1)}"
for field_name, field in model.model_fields.items()
]
return ",\n".join(lines)
def _format_list_type(self, list_item_type: Any, depth: int) -> str:
"""Format a List type, handling nested models if necessary.
Args:
list_item_type: The type of items in the list.
depth: The current depth of recursion for indentation purposes.
Returns:
A string representation of the List type.
"""
if isinstance(list_item_type, type) and issubclass(list_item_type, BaseModel):
nested_schema = self._get_model_schema(list_item_type, depth + 1)
nested_indent = " " * 4 * depth
return f"List[\n{nested_indent}{{\n{nested_schema}\n{nested_indent}}}\n{nested_indent}]"
return f"List[{list_item_type.__name__}]"
def _format_union_type(self, field_type: Any, depth: int) -> str:
"""Format a Union type, handling Optional and nested types.
Args:
field_type: The Union type to format.
depth: The current depth of recursion for indentation purposes.
Returns:
A string representation of the Union type.
"""
args = get_args(field_type)
if type(None) in args:
# It's an Optional type
non_none_args = [arg for arg in args if arg is not type(None)]
if len(non_none_args) == 1:
inner_type = self._get_field_type_for_annotation(
non_none_args[0], depth
)
return f"Optional[{inner_type}]"
# Union with None and multiple other types
inner_types = ", ".join(
self._get_field_type_for_annotation(arg, depth) for arg in non_none_args
)
return f"Optional[Union[{inner_types}]]"
# General Union type
inner_types = ", ".join(
self._get_field_type_for_annotation(arg, depth) for arg in args
)
return f"Union[{inner_types}]"
def _get_field_type_for_annotation(self, annotation: Any, depth: int) -> str:
"""Recursively get the string representation of a field's type annotation.
Args:
annotation: The type annotation to process.
depth: The current depth of recursion for indentation purposes.
Returns:
A string representation of the type annotation.
"""
origin: Any = get_origin(annotation)
if origin is list:
list_item_type = get_args(annotation)[0]
return self._format_list_type(list_item_type, depth)
if origin is dict:
key_type, value_type = get_args(annotation)
return f"Dict[{key_type.__name__}, {value_type.__name__}]"
if origin is Union:
return self._format_union_type(annotation, depth)
if isinstance(annotation, type) and issubclass(annotation, BaseModel):
nested_schema = self._get_model_schema(annotation, depth)
nested_indent = " " * 4 * depth
return f"{annotation.__name__}\n{nested_indent}{{\n{nested_schema}\n{nested_indent}}}"
return annotation.__name__

View File

@@ -0,0 +1,245 @@
"""Utilities for generating JSON schemas from Pydantic models.
This module provides functions for converting Pydantic models to JSON schemas
suitable for use with LLMs and tool definitions.
"""
from collections.abc import Callable
from copy import deepcopy
from typing import Any
from pydantic import BaseModel
def resolve_refs(schema: dict[str, Any]) -> dict[str, Any]:
"""Recursively resolve all local $refs in the given JSON Schema using $defs as the source.
This is needed because Pydantic generates $ref-based schemas that
some consumers (e.g. LLMs, tool frameworks) don't handle well.
Args:
schema: JSON Schema dict that may contain "$refs" and "$defs".
Returns:
A new schema dictionary with all local $refs replaced by their definitions.
"""
defs = schema.get("$defs", {})
schema_copy = deepcopy(schema)
def _resolve(node: Any) -> Any:
if isinstance(node, dict):
ref = node.get("$ref")
if isinstance(ref, str) and ref.startswith("#/$defs/"):
def_name = ref.replace("#/$defs/", "")
if def_name in defs:
return _resolve(deepcopy(defs[def_name]))
raise KeyError(f"Definition '{def_name}' not found in $defs.")
return {k: _resolve(v) for k, v in node.items()}
if isinstance(node, list):
return [_resolve(i) for i in node]
return node
return _resolve(schema_copy) # type: ignore[no-any-return]
def add_key_in_dict_recursively(
d: dict[str, Any], key: str, value: Any, criteria: Callable[[dict[str, Any]], bool]
) -> dict[str, Any]:
"""Recursively adds a key/value pair to all nested dicts matching `criteria`.
Args:
d: The dictionary to modify.
key: The key to add.
value: The value to add.
criteria: A function that returns True for dicts that should receive the key.
Returns:
The modified dictionary.
"""
if isinstance(d, dict):
if criteria(d) and key not in d:
d[key] = value
for v in d.values():
add_key_in_dict_recursively(v, key, value, criteria)
elif isinstance(d, list):
for i in d:
add_key_in_dict_recursively(i, key, value, criteria)
return d
def fix_discriminator_mappings(schema: dict[str, Any]) -> dict[str, Any]:
"""Replace '#/$defs/...' references in discriminator.mapping with just the model name.
Args:
schema: JSON schema dictionary.
Returns:
Modified schema with fixed discriminator mappings.
"""
output = schema.get("properties", {}).get("output")
if not output:
return schema
disc = output.get("discriminator")
if not disc or "mapping" not in disc:
return schema
disc["mapping"] = {k: v.split("/")[-1] for k, v in disc["mapping"].items()}
return schema
def add_const_to_oneof_variants(schema: dict[str, Any]) -> dict[str, Any]:
"""Add const fields to oneOf variants for discriminated unions.
The json_schema_to_pydantic library requires each oneOf variant to have
a const field for the discriminator property. This function adds those
const fields based on the discriminator mapping.
Args:
schema: JSON Schema dict that may contain discriminated unions
Returns:
Modified schema with const fields added to oneOf variants
"""
def _process_oneof(node: dict[str, Any]) -> dict[str, Any]:
"""Process a single node that might contain a oneOf with discriminator."""
if not isinstance(node, dict):
return node
if "oneOf" in node and "discriminator" in node:
discriminator = node["discriminator"]
property_name = discriminator.get("propertyName")
mapping = discriminator.get("mapping", {})
if property_name and mapping:
one_of_variants = node.get("oneOf", [])
for variant in one_of_variants:
if isinstance(variant, dict) and "properties" in variant:
variant_title = variant.get("title", "")
matched_disc_value = None
for disc_value, schema_name in mapping.items():
if variant_title == schema_name or variant_title.endswith(
schema_name
):
matched_disc_value = disc_value
break
if matched_disc_value is not None:
props = variant["properties"]
if property_name in props:
props[property_name]["const"] = matched_disc_value
for key, value in node.items():
if isinstance(value, dict):
node[key] = _process_oneof(value)
elif isinstance(value, list):
node[key] = [
_process_oneof(item) if isinstance(item, dict) else item
for item in value
]
return node
return _process_oneof(deepcopy(schema))
def convert_oneof_to_anyof(schema: dict[str, Any]) -> dict[str, Any]:
"""Convert oneOf to anyOf for OpenAI compatibility.
OpenAI's Structured Outputs support anyOf better than oneOf.
This recursively converts all oneOf occurrences to anyOf.
Args:
schema: JSON schema dictionary.
Returns:
Modified schema with anyOf instead of oneOf.
"""
if isinstance(schema, dict):
if "oneOf" in schema:
schema["anyOf"] = schema.pop("oneOf")
for value in schema.values():
if isinstance(value, dict):
convert_oneof_to_anyof(value)
elif isinstance(value, list):
for item in value:
if isinstance(item, dict):
convert_oneof_to_anyof(item)
return schema
def ensure_all_properties_required(schema: dict[str, Any]) -> dict[str, Any]:
"""Ensure all properties are in the required array for OpenAI strict mode.
OpenAI's strict structured outputs require all properties to be listed
in the required array. This recursively updates all objects to include
all their properties in required.
Args:
schema: JSON schema dictionary.
Returns:
Modified schema with all properties marked as required.
"""
if isinstance(schema, dict):
if schema.get("type") == "object" and "properties" in schema:
properties = schema["properties"]
if properties:
schema["required"] = list(properties.keys())
for value in schema.values():
if isinstance(value, dict):
ensure_all_properties_required(value)
elif isinstance(value, list):
for item in value:
if isinstance(item, dict):
ensure_all_properties_required(item)
return schema
def generate_model_description(model: type[BaseModel]) -> dict[str, Any]:
"""Generate JSON schema description of a Pydantic model.
This function takes a Pydantic model class and returns its JSON schema,
which includes full type information, discriminators, and all metadata.
The schema is dereferenced to inline all $ref references for better LLM understanding.
Args:
model: A Pydantic model class.
Returns:
A JSON schema dictionary representation of the model.
"""
json_schema = model.model_json_schema(ref_template="#/$defs/{model}")
json_schema = add_key_in_dict_recursively(
json_schema,
key="additionalProperties",
value=False,
criteria=lambda d: d.get("type") == "object"
and "additionalProperties" not in d,
)
json_schema = resolve_refs(json_schema)
json_schema.pop("$defs", None)
json_schema = fix_discriminator_mappings(json_schema)
json_schema = convert_oneof_to_anyof(json_schema)
json_schema = ensure_all_properties_required(json_schema)
return {
"type": "json_schema",
"json_schema": {
"name": model.__name__,
"strict": True,
"schema": json_schema,
},
}
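End to end, `generate_model_description` returns an OpenAI-style `{"type": "json_schema", ...}` wrapper whose inner schema has `$ref`s inlined, `additionalProperties: false` on objects, and every property marked required. A usage sketch, assuming crewAI at this revision is installed (the import path is taken from the diff):

import json

from pydantic import BaseModel

from crewai.utilities.pydantic_schema_utils import generate_model_description

class Address(BaseModel):
    city: str

class Person(BaseModel):
    name: str
    address: Address

desc = generate_model_description(Person)
assert desc["type"] == "json_schema"
assert desc["json_schema"]["name"] == "Person"
schema = desc["json_schema"]["schema"]
assert "$defs" not in schema                               # refs were inlined
assert schema["properties"]["address"]["type"] == "object"
assert schema["required"] == ["name", "address"]           # strict mode: all required
print(json.dumps(schema, indent=2))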

View File

@@ -1,7 +1,7 @@
import os
import unittest
from unittest.mock import ANY, MagicMock, patch
from crewai.cli.constants import DEFAULT_CREWAI_ENTERPRISE_URL
from crewai.cli.plus_api import PlusAPI
@@ -35,7 +35,7 @@ class TestPlusAPI(unittest.TestCase):
):
mock_make_request.assert_called_once_with(
method,
f"{DEFAULT_CREWAI_ENTERPRISE_URL}{endpoint}",
f"{os.getenv('CREWAI_PLUS_URL')}{endpoint}",
headers={
"Authorization": ANY,
"Content-Type": ANY,
@@ -53,7 +53,7 @@ class TestPlusAPI(unittest.TestCase):
):
mock_settings = MagicMock()
mock_settings.org_uuid = self.org_uuid
mock_settings.enterprise_base_url = DEFAULT_CREWAI_ENTERPRISE_URL
mock_settings.enterprise_base_url = os.getenv('CREWAI_PLUS_URL')
mock_settings_class.return_value = mock_settings
# re-initialize Client
self.api = PlusAPI(self.api_key)
@@ -84,7 +84,7 @@ class TestPlusAPI(unittest.TestCase):
def test_get_agent_with_org_uuid(self, mock_make_request, mock_settings_class):
mock_settings = MagicMock()
mock_settings.org_uuid = self.org_uuid
mock_settings.enterprise_base_url = DEFAULT_CREWAI_ENTERPRISE_URL
mock_settings.enterprise_base_url = os.getenv('CREWAI_PLUS_URL')
mock_settings_class.return_value = mock_settings
# re-initialize Client
self.api = PlusAPI(self.api_key)
@@ -115,7 +115,7 @@ class TestPlusAPI(unittest.TestCase):
def test_get_tool_with_org_uuid(self, mock_make_request, mock_settings_class):
mock_settings = MagicMock()
mock_settings.org_uuid = self.org_uuid
mock_settings.enterprise_base_url = DEFAULT_CREWAI_ENTERPRISE_URL
mock_settings.enterprise_base_url = os.getenv('CREWAI_PLUS_URL')
mock_settings_class.return_value = mock_settings
# re-initialize Client
self.api = PlusAPI(self.api_key)
@@ -163,7 +163,7 @@ class TestPlusAPI(unittest.TestCase):
def test_publish_tool_with_org_uuid(self, mock_make_request, mock_settings_class):
mock_settings = MagicMock()
mock_settings.org_uuid = self.org_uuid
mock_settings.enterprise_base_url = DEFAULT_CREWAI_ENTERPRISE_URL
mock_settings.enterprise_base_url = os.getenv('CREWAI_PLUS_URL')
mock_settings_class.return_value = mock_settings
# re-initialize Client
self.api = PlusAPI(self.api_key)
@@ -320,6 +320,7 @@ class TestPlusAPI(unittest.TestCase):
)
@patch("crewai.cli.plus_api.Settings")
@patch.dict(os.environ, {"CREWAI_PLUS_URL": ""})
def test_custom_base_url(self, mock_settings_class):
mock_settings = MagicMock()
mock_settings.enterprise_base_url = "https://custom-url.com/api"
@@ -329,3 +330,11 @@ class TestPlusAPI(unittest.TestCase):
custom_api.base_url,
"https://custom-url.com/api",
)
@patch.dict(os.environ, {"CREWAI_PLUS_URL": "https://custom-url-from-env.com"})
def test_custom_base_url_from_env(self):
custom_api = PlusAPI("test_key")
self.assertEqual(
custom_api.base_url,
"https://custom-url-from-env.com",
)

View File

@@ -877,3 +877,62 @@ def test_validate_model_in_constants():
LLM._validate_model_in_constants("anthropic.claude-future-v1:0", "bedrock")
is True
)
def test_prepare_completion_params_excludes_empty_stop():
"""Test that _prepare_completion_params excludes stop when it's empty.
This is a regression test for issue #4149 where models like gpt-5.1
don't support the stop parameter at all, and passing an empty list
would cause an error.
"""
llm = LLM(model="gpt-4o", is_litellm=True)
# By default, stop is initialized to an empty list
assert llm.stop == []
params = llm._prepare_completion_params("Hello")
# stop should not be in params when it's empty
assert "stop" not in params
def test_prepare_completion_params_includes_stop_when_provided():
"""Test that _prepare_completion_params includes stop when it has values."""
llm = LLM(model="gpt-4o", stop=["Observation:"], is_litellm=True)
assert llm.stop == ["Observation:"]
params = llm._prepare_completion_params("Hello")
# stop should be in params when it has values
assert "stop" in params
assert params["stop"] == ["Observation:"]
def test_prepare_completion_params_excludes_stop_when_in_drop_params():
"""Test that _prepare_completion_params excludes stop when it's in additional_drop_params.
This ensures the retry logic works correctly when a model doesn't support stop.
"""
llm = LLM(
model="gpt-4o",
stop=["Observation:"],
additional_drop_params=["stop"],
is_litellm=True,
)
assert llm.stop == ["Observation:"]
params = llm._prepare_completion_params("Hello")
# stop should not be in params when it's in additional_drop_params
assert "stop" not in params
def test_prepare_completion_params_excludes_stop_with_existing_drop_params():
"""Test that stop is excluded when additional_drop_params already has other params."""
llm = LLM(
model="gpt-4o",
stop=["Observation:"],
additional_drop_params=["another_param", "stop"],
is_litellm=True,
)
params = llm._prepare_completion_params("Hello")
# stop should not be in params
assert "stop" not in params

View File

@@ -1727,3 +1727,24 @@ def test_task_output_includes_messages():
assert hasattr(task2_output, "messages")
assert isinstance(task2_output.messages, list)
assert len(task2_output.messages) > 0
def test_async_execution_fails():
researcher = Agent(
role="Researcher",
goal="Make the best research and analysis on content about AI and AI agents",
backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and is now working on doing research and analysis for a new customer.",
allow_delegation=False,
)
task = Task(
description="Give me a list of 5 interesting ideas to explore for na article, what makes them unique and interesting.",
expected_output="Bullet point list of 5 interesting ideas.",
async_execution=True,
agent=researcher,
)
with patch.object(Task, "_execute_core", side_effect=RuntimeError("boom!")):
with pytest.raises(RuntimeError, match="boom!"):
execution = task.execute_async(agent=researcher)
execution.result()

View File

@@ -17,10 +17,11 @@ def test_creating_a_tool_using_annotation():
# Assert all the right attributes were defined
assert my_tool.name == "Name of my tool"
assert (
my_tool.description
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, your agent will need this information to use it."
)
assert "Tool Name: Name of my tool" in my_tool.description
assert "Tool Arguments:" in my_tool.description
assert '"question"' in my_tool.description
assert '"type": "string"' in my_tool.description
assert "Tool Description: Clear description for what this tool is useful for" in my_tool.description
assert my_tool.args_schema.model_json_schema()["properties"] == {
"question": {"title": "Question", "type": "string"}
}
@@ -31,10 +32,9 @@ def test_creating_a_tool_using_annotation():
converted_tool = my_tool.to_structured_tool()
assert converted_tool.name == "Name of my tool"
assert (
converted_tool.description
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, your agent will need this information to use it."
)
assert "Tool Name: Name of my tool" in converted_tool.description
assert "Tool Arguments:" in converted_tool.description
assert '"question"' in converted_tool.description
assert converted_tool.args_schema.model_json_schema()["properties"] == {
"question": {"title": "Question", "type": "string"}
}
@@ -56,10 +56,11 @@ def test_creating_a_tool_using_baseclass():
# Assert all the right attributes were defined
assert my_tool.name == "Name of my tool"
assert (
my_tool.description
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, your agent will need this information to use it."
)
assert "Tool Name: Name of my tool" in my_tool.description
assert "Tool Arguments:" in my_tool.description
assert '"question"' in my_tool.description
assert '"type": "string"' in my_tool.description
assert "Tool Description: Clear description for what this tool is useful for" in my_tool.description
assert my_tool.args_schema.model_json_schema()["properties"] == {
"question": {"title": "Question", "type": "string"}
}
@@ -68,10 +69,9 @@ def test_creating_a_tool_using_baseclass():
converted_tool = my_tool.to_structured_tool()
assert converted_tool.name == "Name of my tool"
assert (
converted_tool.description
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, your agent will need this information to use it."
)
assert "Tool Name: Name of my tool" in converted_tool.description
assert "Tool Arguments:" in converted_tool.description
assert '"question"' in converted_tool.description
assert converted_tool.args_schema.model_json_schema()["properties"] == {
"question": {"title": "Question", "type": "string"}
}

View File

@@ -107,25 +107,20 @@ def test_tool_usage_render():
rendered = tool_usage._render()
# Updated checks to match the actual output
# Check that the rendered output contains the expected tool information
assert "Tool Name: Random Number Generator" in rendered
assert "Tool Arguments:" in rendered
assert (
"'min_value': {'description': 'The minimum value of the range (inclusive)', 'type': 'int'}"
in rendered
)
assert (
"'max_value': {'description': 'The maximum value of the range (inclusive)', 'type': 'int'}"
in rendered
)
assert (
"Tool Description: Generates a random number within a specified range"
in rendered
)
assert (
"Tool Name: Random Number Generator\nTool Arguments: {'min_value': {'description': 'The minimum value of the range (inclusive)', 'type': 'int'}, 'max_value': {'description': 'The maximum value of the range (inclusive)', 'type': 'int'}}\nTool Description: Generates a random number within a specified range"
in rendered
)
# Check that the JSON schema format is used (proper JSON schema types)
assert '"min_value"' in rendered
assert '"max_value"' in rendered
assert '"type": "integer"' in rendered
assert '"description": "The minimum value of the range (inclusive)"' in rendered
assert '"description": "The maximum value of the range (inclusive)"' in rendered
def test_validate_tool_input_booleans_and_none():

View File

@@ -1,4 +1,3 @@
from unittest import mock
from unittest.mock import MagicMock, patch
from crewai.utilities.converter import ConverterError
@@ -44,26 +43,26 @@ def test_evaluate_training_data(converter_mock):
)
assert result == function_return_value
converter_mock.assert_has_calls(
[
mock.call(
llm=original_agent.llm,
text="Assess the quality of the training data based on the llm output, human feedback , and llm "
"output improved result.\n\nIteration: data1\nInitial Output:\nInitial output 1\n\nHuman Feedback:\nHuman feedback "
"1\n\nImproved Output:\nImproved output 1\n\n------------------------------------------------\n\nIteration: data2\nInitial Output:\nInitial output 2\n\nHuman "
"Feedback:\nHuman feedback 2\n\nImproved Output:\nImproved output 2\n\n------------------------------------------------\n\nPlease provide:\n- Provide "
"a list of clear, actionable instructions derived from the Human Feedbacks to enhance the Agent's "
"performance. Analyze the differences between Initial Outputs and Improved Outputs to generate specific "
"action items for future tasks. Ensure all key and specificpoints from the human feedback are "
"incorporated into these instructions.\n- A score from 0 to 10 evaluating on completion, quality, and "
"overall performance from the improved output to the initial output based on the human feedback\n",
model=TrainingTaskEvaluation,
instructions="I'm gonna convert this raw text into valid JSON.\n\nThe json should have the "
"following structure, with the following keys:\n{\n suggestions: List[str],\n quality: float,\n final_summary: str\n}",
),
mock.call().to_pydantic(),
]
)
# Verify the converter was called with correct arguments
converter_mock.assert_called_once()
call_kwargs = converter_mock.call_args.kwargs
assert call_kwargs["llm"] == original_agent.llm
assert call_kwargs["model"] == TrainingTaskEvaluation
assert "Iteration: data1" in call_kwargs["text"]
assert "Iteration: data2" in call_kwargs["text"]
instructions = call_kwargs["instructions"]
assert "I'm gonna convert this raw text into valid JSON." in instructions
assert "OpenAPI schema" in instructions
assert '"type": "json_schema"' in instructions
assert '"name": "TrainingTaskEvaluation"' in instructions
assert '"suggestions"' in instructions
assert '"quality"' in instructions
assert '"final_summary"' in instructions
converter_mock.return_value.to_pydantic.assert_called_once()
@patch("crewai.utilities.converter.Converter.to_pydantic")

View File

@@ -16,7 +16,6 @@ from crewai.utilities.converter import (
handle_partial_json,
validate_model,
)
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
from pydantic import BaseModel
import pytest

View File

@@ -1,7 +1,11 @@
from unittest.mock import patch
"""Tests for the planning handler module."""
from unittest.mock import MagicMock, patch
import pytest
from crewai.agent import Agent
from crewai.crew import Crew
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
from crewai.task import Task
from crewai.tasks.task_output import TaskOutput
@@ -13,7 +17,7 @@ from crewai.utilities.planning_handler import (
)
class InternalCrewPlanner:
class TestInternalCrewPlanner:
@pytest.fixture
def crew_planner(self):
tasks = [
@@ -49,9 +53,9 @@ class InternalCrewPlanner:
def test_handle_crew_planning(self, crew_planner):
list_of_plans_per_task = [
PlanPerTask(task="Task1", plan="Plan 1"),
PlanPerTask(task="Task2", plan="Plan 2"),
PlanPerTask(task="Task3", plan="Plan 3"),
PlanPerTask(task_number=1, task="Task1", plan="Plan 1"),
PlanPerTask(task_number=2, task="Task2", plan="Plan 2"),
PlanPerTask(task_number=3, task="Task3", plan="Plan 3"),
]
with patch.object(Task, "execute_sync") as execute:
execute.return_value = TaskOutput(
@@ -97,12 +101,12 @@ class InternalCrewPlanner:
# Knowledge field should not be present when empty
assert '"agent_knowledge"' not in tasks_summary
@patch("crewai.knowledge.storage.knowledge_storage.chromadb")
def test_create_tasks_summary_with_knowledge_and_tools(self, mock_chroma):
@patch("crewai.knowledge.knowledge.Knowledge.add_sources")
@patch("crewai.knowledge.storage.knowledge_storage.KnowledgeStorage")
def test_create_tasks_summary_with_knowledge_and_tools(
self, mock_storage, mock_add_sources
):
"""Test task summary generation with both knowledge and tools present."""
# Mock ChromaDB collection
mock_collection = mock_chroma.return_value.get_or_create_collection.return_value
mock_collection.add.return_value = None
# Create mock tools with proper string descriptions and structured tool support
class MockTool(BaseTool):
@@ -166,7 +170,9 @@ class InternalCrewPlanner:
description="Description",
agent="agent",
pydantic=PlannerTaskPydanticOutput(
list_of_plans_per_task=[PlanPerTask(task="Task1", plan="Plan 1")]
list_of_plans_per_task=[
PlanPerTask(task_number=1, task="Task1", plan="Plan 1")
]
),
)
result = crew_planner_different_llm._handle_crew_planning()
@@ -177,3 +183,181 @@ class InternalCrewPlanner:
crew_planner_different_llm.tasks
)
execute.assert_called_once()
def test_plan_per_task_requires_task_number(self):
"""Test that PlanPerTask model requires task_number field."""
with pytest.raises(ValueError):
PlanPerTask(task="Task1", plan="Plan 1")
def test_plan_per_task_with_task_number(self):
"""Test PlanPerTask model with task_number field."""
plan = PlanPerTask(task_number=5, task="Task5", plan="Plan for task 5")
assert plan.task_number == 5
assert plan.task == "Task5"
assert plan.plan == "Plan for task 5"
class TestCrewPlanningIntegration:
"""Tests for Crew._handle_crew_planning integration with task_number matching."""
def test_crew_planning_with_out_of_order_plans(self):
"""Test that plans are correctly matched to tasks even when returned out of order.
This test verifies the fix for issue #3953 where plans returned by the LLM
in a different order than the tasks would be incorrectly assigned.
"""
agent1 = Agent(role="Agent 1", goal="Goal 1", backstory="Backstory 1")
agent2 = Agent(role="Agent 2", goal="Goal 2", backstory="Backstory 2")
agent3 = Agent(role="Agent 3", goal="Goal 3", backstory="Backstory 3")
task1 = Task(
description="First task description",
expected_output="Output 1",
agent=agent1,
)
task2 = Task(
description="Second task description",
expected_output="Output 2",
agent=agent2,
)
task3 = Task(
description="Third task description",
expected_output="Output 3",
agent=agent3,
)
crew = Crew(
agents=[agent1, agent2, agent3],
tasks=[task1, task2, task3],
planning=True,
)
out_of_order_plans = [
PlanPerTask(task_number=3, task="Task 3", plan=" [PLAN FOR TASK 3]"),
PlanPerTask(task_number=1, task="Task 1", plan=" [PLAN FOR TASK 1]"),
PlanPerTask(task_number=2, task="Task 2", plan=" [PLAN FOR TASK 2]"),
]
mock_planner_result = PlannerTaskPydanticOutput(
list_of_plans_per_task=out_of_order_plans
)
with patch.object(
CrewPlanner, "_handle_crew_planning", return_value=mock_planner_result
):
crew._handle_crew_planning()
assert "[PLAN FOR TASK 1]" in task1.description
assert "[PLAN FOR TASK 2]" in task2.description
assert "[PLAN FOR TASK 3]" in task3.description
assert "[PLAN FOR TASK 3]" not in task1.description
assert "[PLAN FOR TASK 1]" not in task2.description
assert "[PLAN FOR TASK 2]" not in task3.description
def test_crew_planning_with_missing_plan(self):
"""Test that missing plans are handled gracefully with a warning."""
agent1 = Agent(role="Agent 1", goal="Goal 1", backstory="Backstory 1")
agent2 = Agent(role="Agent 2", goal="Goal 2", backstory="Backstory 2")
task1 = Task(
description="First task description",
expected_output="Output 1",
agent=agent1,
)
task2 = Task(
description="Second task description",
expected_output="Output 2",
agent=agent2,
)
crew = Crew(
agents=[agent1, agent2],
tasks=[task1, task2],
planning=True,
)
original_task1_desc = task1.description
original_task2_desc = task2.description
incomplete_plans = [
PlanPerTask(task_number=1, task="Task 1", plan=" [PLAN FOR TASK 1]"),
]
mock_planner_result = PlannerTaskPydanticOutput(
list_of_plans_per_task=incomplete_plans
)
with patch.object(
CrewPlanner, "_handle_crew_planning", return_value=mock_planner_result
):
crew._handle_crew_planning()
assert "[PLAN FOR TASK 1]" in task1.description
assert task2.description == original_task2_desc
def test_crew_planning_preserves_original_description(self):
"""Test that planning appends to the original task description."""
agent = Agent(role="Agent 1", goal="Goal 1", backstory="Backstory 1")
task = Task(
description="Original task description",
expected_output="Output 1",
agent=agent,
)
crew = Crew(
agents=[agent],
tasks=[task],
planning=True,
)
plans = [
PlanPerTask(task_number=1, task="Task 1", plan=" - Additional plan steps"),
]
mock_planner_result = PlannerTaskPydanticOutput(list_of_plans_per_task=plans)
with patch.object(
CrewPlanner, "_handle_crew_planning", return_value=mock_planner_result
):
crew._handle_crew_planning()
assert "Original task description" in task.description
assert "Additional plan steps" in task.description
def test_crew_planning_with_duplicate_task_numbers(self):
"""Test that duplicate task numbers use the first plan and log a warning."""
agent = Agent(role="Agent 1", goal="Goal 1", backstory="Backstory 1")
task = Task(
description="Task description",
expected_output="Output 1",
agent=agent,
)
crew = Crew(
agents=[agent],
tasks=[task],
planning=True,
)
# Two plans with the same task_number - should use the first one
duplicate_plans = [
PlanPerTask(task_number=1, task="Task 1", plan=" [FIRST PLAN]"),
PlanPerTask(task_number=1, task="Task 1", plan=" [SECOND PLAN]"),
]
mock_planner_result = PlannerTaskPydanticOutput(
list_of_plans_per_task=duplicate_plans
)
with patch.object(
CrewPlanner, "_handle_crew_planning", return_value=mock_planner_result
):
crew._handle_crew_planning()
# Should use the first plan, not the second
assert "[FIRST PLAN]" in task.description
assert "[SECOND PLAN]" not in task.description

View File

@@ -1,94 +0,0 @@
from typing import Any, Dict, List, Optional, Set, Tuple, Union
import pytest
from pydantic import BaseModel, Field
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
def test_simple_model():
class SimpleModel(BaseModel):
field1: int
field2: str
parser = PydanticSchemaParser(model=SimpleModel)
schema = parser.get_schema()
expected_schema = """{
field1: int,
field2: str
}"""
assert schema.strip() == expected_schema.strip()
def test_nested_model():
class NestedModel(BaseModel):
nested_field: int
class ParentModel(BaseModel):
parent_field: str
nested: NestedModel
parser = PydanticSchemaParser(model=ParentModel)
schema = parser.get_schema()
expected_schema = """{
parent_field: str,
nested: NestedModel
{
nested_field: int
}
}"""
assert schema.strip() == expected_schema.strip()
def test_model_with_list():
class ListModel(BaseModel):
list_field: List[int]
parser = PydanticSchemaParser(model=ListModel)
schema = parser.get_schema()
expected_schema = """{
list_field: List[int]
}"""
assert schema.strip() == expected_schema.strip()
def test_model_with_optional_field():
class OptionalModel(BaseModel):
optional_field: Optional[str]
parser = PydanticSchemaParser(model=OptionalModel)
schema = parser.get_schema()
expected_schema = """{
optional_field: Optional[str]
}"""
assert schema.strip() == expected_schema.strip()
def test_model_with_union():
class UnionModel(BaseModel):
union_field: Union[int, str]
parser = PydanticSchemaParser(model=UnionModel)
schema = parser.get_schema()
expected_schema = """{
union_field: Union[int, str]
}"""
assert schema.strip() == expected_schema.strip()
def test_model_with_dict():
class DictModel(BaseModel):
dict_field: Dict[str, int]
parser = PydanticSchemaParser(model=DictModel)
schema = parser.get_schema()
expected_schema = """{
dict_field: Dict[str, int]
}"""
assert schema.strip() == expected_schema.strip()

View File

@@ -1,3 +1,3 @@
"""CrewAI development tools."""
__version__ = "1.7.0"
__version__ = "1.7.2"

View File

@@ -323,13 +323,17 @@ def cli() -> None:
"--dry-run", is_flag=True, help="Show what would be done without making changes"
)
@click.option("--no-push", is_flag=True, help="Don't push changes to remote")
def bump(version: str, dry_run: bool, no_push: bool) -> None:
@click.option(
"--no-commit", is_flag=True, help="Don't commit changes (just update files)"
)
def bump(version: str, dry_run: bool, no_push: bool, no_commit: bool) -> None:
"""Bump version across all packages in lib/.
Args:
version: New version to set (e.g., 1.0.0, 1.0.0a1).
dry_run: Show what would be done without making changes.
no_push: Don't push changes to remote.
no_commit: Don't commit changes (just update files).
"""
try:
# Check prerequisites
@@ -397,51 +401,60 @@ def bump(version: str, dry_run: bool, no_push: bool) -> None:
else:
console.print("[dim][DRY RUN][/dim] Would run: uv sync")
branch_name = f"feat/bump-version-{version}"
if not dry_run:
console.print(f"\nCreating branch {branch_name}...")
run_command(["git", "checkout", "-b", branch_name])
console.print("[green]✓[/green] Branch created")
console.print("\nCommitting changes...")
run_command(["git", "add", "."])
run_command(["git", "commit", "-m", f"feat: bump versions to {version}"])
console.print("[green]✓[/green] Changes committed")
if not no_push:
console.print("\nPushing branch...")
run_command(["git", "push", "-u", "origin", branch_name])
console.print("[green]✓[/green] Branch pushed")
if no_commit:
console.print("\nSkipping git operations (--no-commit flag set)")
else:
console.print(f"[dim][DRY RUN][/dim] Would create branch: {branch_name}")
console.print(
f"[dim][DRY RUN][/dim] Would commit: feat: bump versions to {version}"
)
if not no_push:
console.print(f"[dim][DRY RUN][/dim] Would push branch: {branch_name}")
branch_name = f"feat/bump-version-{version}"
if not dry_run:
console.print(f"\nCreating branch {branch_name}...")
run_command(["git", "checkout", "-b", branch_name])
console.print("[green]✓[/green] Branch created")
if not dry_run and not no_push:
console.print("\nCreating pull request...")
run_command(
[
"gh",
"pr",
"create",
"--base",
"main",
"--title",
f"feat: bump versions to {version}",
"--body",
"",
]
)
console.print("[green]✓[/green] Pull request created")
elif dry_run:
console.print(
f"[dim][DRY RUN][/dim] Would create PR: feat: bump versions to {version}"
)
else:
console.print("\nSkipping PR creation (--no-push flag set)")
console.print("\nCommitting changes...")
run_command(["git", "add", "."])
run_command(
["git", "commit", "-m", f"feat: bump versions to {version}"]
)
console.print("[green]✓[/green] Changes committed")
if not no_push:
console.print("\nPushing branch...")
run_command(["git", "push", "-u", "origin", branch_name])
console.print("[green]✓[/green] Branch pushed")
else:
console.print(
f"[dim][DRY RUN][/dim] Would create branch: {branch_name}"
)
console.print(
f"[dim][DRY RUN][/dim] Would commit: feat: bump versions to {version}"
)
if not no_push:
console.print(
f"[dim][DRY RUN][/dim] Would push branch: {branch_name}"
)
if not dry_run and not no_push:
console.print("\nCreating pull request...")
run_command(
[
"gh",
"pr",
"create",
"--base",
"main",
"--title",
f"feat: bump versions to {version}",
"--body",
"",
]
)
console.print("[green]✓[/green] Pull request created")
elif dry_run:
console.print(
f"[dim][DRY RUN][/dim] Would create PR: feat: bump versions to {version}"
)
else:
console.print("\nSkipping PR creation (--no-push flag set)")
console.print(f"\n[green]✓[/green] Version bump to {version} complete!")
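The restructure gates all git operations behind `--no-commit` and push/PR creation behind `--no-push`. A condensed, illustrative trace of the control flow as I read the diff (the real command shells out via run_command; this stub just records steps):

def bump_flow(version: str, dry_run: bool, no_push: bool, no_commit: bool) -> list[str]:
    # Illustrative: --no-commit skips branch/commit/push/PR entirely;
    # --no-push still commits but skips push and PR creation.
    steps: list[str] = []
    if no_commit:
        steps.append("skip git operations")
        return steps
    branch = f"feat/bump-version-{version}"
    if dry_run:
        steps += [f"would create branch {branch}", "would commit"]
        if not no_push:
            steps += [f"would push {branch}", "would open PR"]
        return steps
    steps += [f"create branch {branch}", "commit"]
    if not no_push:
        steps += [f"push {branch}", "open PR"]
    return steps

print(bump_flow("1.7.2", dry_run=True, no_push=False, no_commit=False))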