Mirror of https://github.com/crewAIInc/crewAI.git
Synced 2026-01-07 15:18:29 +00:00

Compare commits — 1 commit on branch devin/1765

| Author | SHA1 | Date |
|---|---|---|
|  | aded8ef74a |  |

23  .github/workflows/publish.yml (vendored)
@@ -1,14 +1,9 @@
name: Publish to PyPI

on:
  repository_dispatch:
    types: [deployment-tests-passed]
  release:
    types: [ published ]
  workflow_dispatch:
    inputs:
      release_tag:
        description: 'Release tag to publish'
        required: false
        type: string

jobs:
  build:

@@ -17,21 +12,7 @@ jobs:
    permissions:
      contents: read
    steps:
      - name: Determine release tag
        id: release
        run: |
          # Priority: workflow_dispatch input > repository_dispatch payload > default branch
          if [ -n "${{ inputs.release_tag }}" ]; then
            echo "tag=${{ inputs.release_tag }}" >> $GITHUB_OUTPUT
          elif [ -n "${{ github.event.client_payload.release_tag }}" ]; then
            echo "tag=${{ github.event.client_payload.release_tag }}" >> $GITHUB_OUTPUT
          else
            echo "tag=" >> $GITHUB_OUTPUT
          fi

      - uses: actions/checkout@v4
        with:
          ref: ${{ steps.release.outputs.tag || github.ref }}

      - name: Set up Python
        uses: actions/setup-python@v5
18  .github/workflows/trigger-deployment-tests.yml (vendored)

@@ -1,18 +0,0 @@
name: Trigger Deployment Tests

on:
  release:
    types: [published]

jobs:
  trigger:
    name: Trigger deployment tests
    runs-on: ubuntu-latest
    steps:
      - name: Trigger deployment tests
        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.CREWAI_DEPLOYMENTS_PAT }}
          repository: ${{ secrets.CREWAI_DEPLOYMENTS_REPOSITORY }}
          event-type: crewai-release
          client-payload: '{"release_tag": "${{ github.event.release.tag_name }}", "release_name": "${{ github.event.release.name }}"}'
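For reference, the `peter-evans/repository-dispatch` action used in the deleted workflow above wraps GitHub's repository dispatch REST endpoint. A minimal Python sketch of the same call, assuming a PAT with access to the listening repository; the repository name and release values are placeholders:

```python
import os

import requests

# Placeholders: the repository that listens for the crewai-release event.
owner_repo = "crewAIInc/your-deployments-repo"
token = os.environ["CREWAI_DEPLOYMENTS_PAT"]

resp = requests.post(
    f"https://api.github.com/repos/{owner_repo}/dispatches",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "event_type": "crewai-release",
        "client_payload": {"release_tag": "v1.7.0", "release_name": "1.7.0"},
    },
    timeout=30,
)
# GitHub replies 204 No Content when the dispatch event is accepted.
resp.raise_for_status()
```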
@@ -16,17 +16,16 @@ Welcome to the CrewAI AOP API reference. This API allows you to programmatically
    Navigate to your crew's detail page in the CrewAI AOP dashboard and copy your Bearer Token from the Status tab.
  </Step>

  <Step title="Discover Required Inputs">
    Use the `GET /inputs` endpoint to see what parameters your crew expects.
  </Step>
  <Step title="Discover Required Inputs">
    Use the `GET /inputs` endpoint to see what parameters your crew expects.
  </Step>

  <Step title="Start a Crew Execution">
    Call `POST /kickoff` with your inputs to start the crew execution and receive
    a `kickoff_id`.
  </Step>
  <Step title="Start a Crew Execution">
    Call `POST /kickoff` with your inputs to start the crew execution and receive a `kickoff_id`.
  </Step>

  <Step title="Monitor Progress">
    Use `GET /{kickoff_id}/status` to check execution status and retrieve results.
    Use `GET /status/{kickoff_id}` to check execution status and retrieve results.
  </Step>
</Steps>

@@ -41,14 +40,13 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \

### Token Types

| Token Type            | Scope                     | Use Case                                                      |
| :-------------------- | :------------------------ | :------------------------------------------------------------ |
| **Bearer Token**      | Organization-level access | Full crew operations, ideal for server-to-server integration  |
| **User Bearer Token** | User-scoped access        | Limited permissions, suitable for user-specific operations    |
| Token Type | Scope | Use Case |
|:-----------|:--------|:----------|
| **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integration |
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |

<Tip>
  You can find both token types in the Status tab of your crew's detail page in
  the CrewAI AOP dashboard.
  You can find both token types in the Status tab of your crew's detail page in the CrewAI AOP dashboard.
</Tip>

## Base URL

@@ -65,33 +63,29 @@ Replace `your-crew-name` with your actual crew's URL from the dashboard.

1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /{kickoff_id}/status` until completion
3. **Monitoring**: Poll `GET /status/{kickoff_id}` until completion
4. **Results**: Extract the final output from the completed response
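The four-step integration pattern above can be sketched in a few lines of Python. This is an illustration only: the crew URL, token, and input values are placeholders, the field names follow the OpenAPI examples in this change, and it assumes the renamed `GET /status/{kickoff_id}` route.

```python
import time

import requests

# Placeholders: substitute your crew URL and Bearer token from the dashboard.
BASE_URL = "https://your-actual-crew-name.crewai.com"
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}

# 1. Discovery: ask the crew which inputs it expects.
print(requests.get(f"{BASE_URL}/inputs", headers=HEADERS).json())

# 2. Execution: start the crew and capture the kickoff_id.
started = requests.post(
    f"{BASE_URL}/kickoff",
    headers=HEADERS,
    json={"inputs": {"topic": "example"}},  # assumed example input
).json()
kickoff_id = started["kickoff_id"]

# 3. Monitoring: poll the renamed status route until the run finishes.
while True:
    status = requests.get(f"{BASE_URL}/status/{kickoff_id}", headers=HEADERS).json()
    if status.get("status") in ("completed", "error"):
        break
    time.sleep(5)

# 4. Results: the completed payload carries the final output.
print(status)
```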
## Error Handling

The API uses standard HTTP status codes:

| Code  | Meaning                                     |
| ----- | :------------------------------------------ |
| `200` | Success                                     |
| `400` | Bad Request - Invalid input format          |
| `401` | Unauthorized - Invalid bearer token         |
| `404` | Not Found - Resource doesn't exist          |
| Code | Meaning |
|------|:--------|
| `200` | Success |
| `400` | Bad Request - Invalid input format |
| `401` | Unauthorized - Invalid bearer token |
| `404` | Not Found - Resource doesn't exist |
| `422` | Validation Error - Missing required inputs |
| `500` | Server Error - Contact support |
| `500` | Server Error - Contact support |

## Interactive Testing

<Info>
  **Why no "Send" button?** Since each CrewAI AOP user has their own unique crew
  URL, we use **reference mode** instead of an interactive playground to avoid
  confusion. This shows you exactly what the requests should look like without
  non-functional send buttons.
  **Why no "Send" button?** Since each CrewAI AOP user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.
</Info>

Each endpoint page shows you:

- ✅ **Exact request format** with all parameters
- ✅ **Response examples** for success and error cases
- ✅ **Code samples** in multiple languages (cURL, Python, JavaScript, etc.)

@@ -109,7 +103,6 @@ Each endpoint page shows you:
</CardGroup>

**Example workflow:**

1. **Copy this cURL example** from any endpoint page
2. **Replace `your-actual-crew-name.crewai.com`** with your real crew URL
3. **Replace the Bearer token** with your real token from the dashboard

@@ -118,18 +111,10 @@ Each endpoint page shows you:
## Need Help?

<CardGroup cols={2}>
  <Card
    title="Enterprise Support"
    icon="headset"
    href="mailto:support@crewai.com"
  >
  <Card title="Enterprise Support" icon="headset" href="mailto:support@crewai.com">
    Get help with API integration and troubleshooting
  </Card>
  <Card
    title="Enterprise Dashboard"
    icon="chart-line"
    href="https://app.crewai.com"
  >
  <Card title="Enterprise Dashboard" icon="chart-line" href="https://app.crewai.com">
    Manage your crews and view execution logs
  </Card>
</CardGroup>

@@ -1,6 +1,8 @@
---
title: "GET /{kickoff_id}/status"
title: "GET /status/{kickoff_id}"
description: "Get execution status"
openapi: "/enterprise-api.en.yaml GET /{kickoff_id}/status"
openapi: "/enterprise-api.en.yaml GET /status/{kickoff_id}"
mode: "wide"
---
@@ -187,97 +187,6 @@ You can also deploy your crews directly through the CrewAI AOP web interface by

</Steps>

## Option 3: Redeploy Using API (CI/CD Integration)

For automated deployments in CI/CD pipelines, you can use the CrewAI API to trigger redeployments of existing crews. This is particularly useful for GitHub Actions, Jenkins, or other automation workflows.

<Steps>
  <Step title="Get Your Personal Access Token">

    Navigate to your CrewAI AOP account settings to generate an API token:

    1. Go to [app.crewai.com](https://app.crewai.com)
    2. Click on **Settings** → **Account** → **Personal Access Token**
    3. Generate a new token and copy it securely
    4. Store this token as a secret in your CI/CD system

  </Step>

  <Step title="Find Your Automation UUID">

    Locate the unique identifier for your deployed crew:

    1. Go to **Automations** in your CrewAI AOP dashboard
    2. Select your existing automation/crew
    3. Click on **Additional Details**
    4. Copy the **UUID** - this identifies your specific crew deployment

  </Step>

  <Step title="Trigger Redeployment via API">

    Use the Deploy API endpoint to trigger a redeployment:

    ```bash
    curl -i -X POST \
      -H "Authorization: Bearer YOUR_PERSONAL_ACCESS_TOKEN" \
      https://app.crewai.com/crewai_plus/api/v1/crews/YOUR-AUTOMATION-UUID/deploy

    # HTTP/2 200
    # content-type: application/json
    #
    # {
    #   "uuid": "your-automation-uuid",
    #   "status": "Deploy Enqueued",
    #   "public_url": "https://your-crew-deployment.crewai.com",
    #   "token": "your-bearer-token"
    # }
    ```

    <Info>
    If your automation was first created connected to Git, the API will automatically pull the latest changes from your repository before redeploying.
    </Info>

  </Step>

  <Step title="GitHub Actions Integration Example">

    Here's a GitHub Actions workflow with more complex deployment triggers:

    ```yaml
    name: Deploy CrewAI Automation

    on:
      push:
        branches: [ main ]
      pull_request:
        types: [ labeled ]
      release:
        types: [ published ]

    jobs:
      deploy:
        runs-on: ubuntu-latest
        if: |
          (github.event_name == 'push' && github.ref == 'refs/heads/main') ||
          (github.event_name == 'pull_request' && contains(github.event.pull_request.labels.*.name, 'deploy')) ||
          (github.event_name == 'release')
        steps:
          - name: Trigger CrewAI Redeployment
            run: |
              curl -X POST \
                -H "Authorization: Bearer ${{ secrets.CREWAI_PAT }}" \
                https://app.crewai.com/crewai_plus/api/v1/crews/${{ secrets.CREWAI_AUTOMATION_UUID }}/deploy
    ```

    <Tip>
    Add `CREWAI_PAT` and `CREWAI_AUTOMATION_UUID` as repository secrets. For PR deployments, add a "deploy" label to trigger the workflow.
    </Tip>

  </Step>
</Steps>
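For CI systems other than GitHub Actions, the same redeploy endpoint shown in Option 3 can be scripted directly. A minimal Python sketch, assuming the `crewai_plus/api/v1/crews/{uuid}/deploy` route and the response fields from the cURL example above; the token and UUID are placeholders read from the environment.

```python
import os

import requests

# Placeholders: store these in your CI system's secret store.
token = os.environ["CREWAI_PAT"]
automation_uuid = os.environ["CREWAI_AUTOMATION_UUID"]

resp = requests.post(
    f"https://app.crewai.com/crewai_plus/api/v1/crews/{automation_uuid}/deploy",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

payload = resp.json()
# Fields per the example response above: uuid, status, public_url, token.
print(payload["status"], payload["public_url"])
```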
## ⚠️ Environment Variable Security Requirements

<Warning>
@@ -35,7 +35,7 @@ info:

    1. **Discover inputs** using `GET /inputs`
    2. **Start execution** using `POST /kickoff`
    3. **Monitor progress** using `GET /{kickoff_id}/status`
    3. **Monitor progress** using `GET /status/{kickoff_id}`
  version: 1.0.0
  contact:
    name: CrewAI Support

@@ -63,7 +63,7 @@ paths:
        Use this endpoint to discover what inputs you need to provide when starting a crew execution.
      operationId: getRequiredInputs
      responses:
        "200":
        '200':
          description: Successfully retrieved required inputs
          content:
            application/json:

@@ -84,21 +84,13 @@ paths:
                outreach_crew:
                  summary: Outreach crew inputs
                  value:
                    inputs:
                      [
                        "name",
                        "title",
                        "company",
                        "industry",
                        "our_product",
                        "linkedin_url",
                      ]
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "404":
          $ref: "#/components/responses/NotFoundError"
        "500":
          $ref: "#/components/responses/ServerError"
                    inputs: ["name", "title", "company", "industry", "our_product", "linkedin_url"]
        '401':
          $ref: '#/components/responses/UnauthorizedError'
        '404':
          $ref: '#/components/responses/NotFoundError'
        '500':
          $ref: '#/components/responses/ServerError'

  /kickoff:
    post:

@@ -178,7 +170,7 @@ paths:
              taskWebhookUrl: "https://api.example.com/webhooks/task"
              crewWebhookUrl: "https://api.example.com/webhooks/crew"
      responses:
        "200":
        '200':
          description: Crew execution started successfully
          content:
            application/json:

@@ -190,24 +182,24 @@ paths:
                    format: uuid
                    description: Unique identifier for tracking this execution
                    example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
        "400":
        '400':
          description: Invalid request body or missing required inputs
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "422":
                $ref: '#/components/schemas/Error'
        '401':
          $ref: '#/components/responses/UnauthorizedError'
        '422':
          description: Validation error - ensure all required inputs are provided
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ValidationError"
        "500":
          $ref: "#/components/responses/ServerError"
                $ref: '#/components/schemas/ValidationError'
        '500':
          $ref: '#/components/responses/ServerError'

  /{kickoff_id}/status:
  /status/{kickoff_id}:
    get:
      summary: Get Execution Status
      description: |

@@ -230,15 +222,15 @@ paths:
            format: uuid
          example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
      responses:
        "200":
        '200':
          description: Successfully retrieved execution status
          content:
            application/json:
              schema:
                oneOf:
                  - $ref: "#/components/schemas/ExecutionRunning"
                  - $ref: "#/components/schemas/ExecutionCompleted"
                  - $ref: "#/components/schemas/ExecutionError"
                  - $ref: '#/components/schemas/ExecutionRunning'
                  - $ref: '#/components/schemas/ExecutionCompleted'
                  - $ref: '#/components/schemas/ExecutionError'
              examples:
                running:
                  summary: Execution in progress

@@ -270,19 +262,19 @@ paths:
                    status: "error"
                    error: "Task execution failed: Invalid API key for external service"
                    execution_time: 23.1
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "404":
        '401':
          $ref: '#/components/responses/UnauthorizedError'
        '404':
          description: Kickoff ID not found
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
                $ref: '#/components/schemas/Error'
              example:
                error: "Execution not found"
                message: "No execution found with ID: abcd1234-5678-90ef-ghij-klmnopqrstuv"
        "500":
          $ref: "#/components/responses/ServerError"
        '500':
          $ref: '#/components/responses/ServerError'

  /resume:
    post:

@@ -362,7 +354,7 @@ paths:
              taskWebhookUrl: "https://api.example.com/webhooks/task"
              crewWebhookUrl: "https://api.example.com/webhooks/crew"
      responses:
        "200":
        '200':
          description: Execution resumed successfully
          content:
            application/json:

@@ -389,28 +381,28 @@ paths:
                  value:
                    status: "retrying"
                    message: "Task will be retried with your feedback"
        "400":
        '400':
          description: Invalid request body or execution not in pending state
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
                $ref: '#/components/schemas/Error'
              example:
                error: "Invalid Request"
                message: "Execution is not in pending human input state"
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "404":
        '401':
          $ref: '#/components/responses/UnauthorizedError'
        '404':
          description: Execution ID or Task ID not found
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
                $ref: '#/components/schemas/Error'
              example:
                error: "Not Found"
                message: "Execution ID not found"
        "500":
          $ref: "#/components/responses/ServerError"
        '500':
          $ref: '#/components/responses/ServerError'

components:
  securitySchemes:

@@ -466,7 +458,7 @@ components:
        tasks:
          type: array
          items:
            $ref: "#/components/schemas/TaskResult"
            $ref: '#/components/schemas/TaskResult'
        execution_time:
          type: number
          description: Total execution time in seconds

@@ -544,7 +536,7 @@ components:
      content:
        application/json:
          schema:
            $ref: "#/components/schemas/Error"
            $ref: '#/components/schemas/Error'
          example:
            error: "Unauthorized"
            message: "Invalid or missing bearer token"

@@ -554,7 +546,7 @@ components:
      content:
        application/json:
          schema:
            $ref: "#/components/schemas/Error"
            $ref: '#/components/schemas/Error'
          example:
            error: "Not Found"
            message: "The requested resource was not found"

@@ -564,7 +556,7 @@ components:
      content:
        application/json:
          schema:
            $ref: "#/components/schemas/Error"
            $ref: '#/components/schemas/Error'
          example:
            error: "Internal Server Error"
            message: "An unexpected error occurred"
@@ -84,7 +84,7 @@ paths:
        '500':
          $ref: '#/components/responses/ServerError'

  /{kickoff_id}/status:
  /status/{kickoff_id}:
    get:
      summary: 실행 상태 조회
      description: |
@@ -35,7 +35,7 @@ info:

    1. **Descubra os inputs** usando `GET /inputs`
    2. **Inicie a execução** usando `POST /kickoff`
    3. **Monitore o progresso** usando `GET /{kickoff_id}/status`
    3. **Monitore o progresso** usando `GET /status/{kickoff_id}`
  version: 1.0.0
  contact:
    name: CrewAI Suporte

@@ -56,7 +56,7 @@ paths:
        Retorna a lista de parâmetros de entrada que sua crew espera.
      operationId: getRequiredInputs
      responses:
        "200":
        '200':
          description: Inputs requeridos obtidos com sucesso
          content:
            application/json:

@@ -69,12 +69,12 @@ paths:
                  type: string
                description: Nomes dos parâmetros de entrada
                example: ["budget", "interests", "duration", "age"]
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "404":
          $ref: "#/components/responses/NotFoundError"
        "500":
          $ref: "#/components/responses/ServerError"
        '401':
          $ref: '#/components/responses/UnauthorizedError'
        '404':
          $ref: '#/components/responses/NotFoundError'
        '500':
          $ref: '#/components/responses/ServerError'

  /kickoff:
    post:

@@ -104,7 +104,7 @@ paths:
              age: "35"

      responses:
        "200":
        '200':
          description: Execução iniciada com sucesso
          content:
            application/json:

@@ -115,12 +115,12 @@ paths:
                    type: string
                    format: uuid
                    example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "500":
          $ref: "#/components/responses/ServerError"
        '401':
          $ref: '#/components/responses/UnauthorizedError'
        '500':
          $ref: '#/components/responses/ServerError'

  /{kickoff_id}/status:
  /status/{kickoff_id}:
    get:
      summary: Obter Status da Execução
      description: |

@@ -136,25 +136,25 @@ paths:
            type: string
            format: uuid
      responses:
        "200":
        '200':
          description: Status recuperado com sucesso
          content:
            application/json:
              schema:
                oneOf:
                  - $ref: "#/components/schemas/ExecutionRunning"
                  - $ref: "#/components/schemas/ExecutionCompleted"
                  - $ref: "#/components/schemas/ExecutionError"
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "404":
                  - $ref: '#/components/schemas/ExecutionRunning'
                  - $ref: '#/components/schemas/ExecutionCompleted'
                  - $ref: '#/components/schemas/ExecutionError'
        '401':
          $ref: '#/components/responses/UnauthorizedError'
        '404':
          description: Kickoff ID não encontrado
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
        "500":
          $ref: "#/components/responses/ServerError"
                $ref: '#/components/schemas/Error'
        '500':
          $ref: '#/components/responses/ServerError'

  /resume:
    post:

@@ -234,7 +234,7 @@ paths:
              taskWebhookUrl: "https://api.example.com/webhooks/task"
              crewWebhookUrl: "https://api.example.com/webhooks/crew"
      responses:
        "200":
        '200':
          description: Execution resumed successfully
          content:
            application/json:

@@ -261,28 +261,28 @@ paths:
                  value:
                    status: "retrying"
                    message: "Task will be retried with your feedback"
        "400":
        '400':
          description: Invalid request body or execution not in pending state
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
                $ref: '#/components/schemas/Error'
              example:
                error: "Invalid Request"
                message: "Execution is not in pending human input state"
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "404":
        '401':
          $ref: '#/components/responses/UnauthorizedError'
        '404':
          description: Execution ID or Task ID not found
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
                $ref: '#/components/schemas/Error'
              example:
                error: "Not Found"
                message: "Execution ID not found"
        "500":
          $ref: "#/components/responses/ServerError"
        '500':
          $ref: '#/components/responses/ServerError'

components:
  securitySchemes:

@@ -324,7 +324,7 @@ components:
        tasks:
          type: array
          items:
            $ref: "#/components/schemas/TaskResult"
            $ref: '#/components/schemas/TaskResult'
        execution_time:
          type: number

@@ -380,16 +380,16 @@ components:
      content:
        application/json:
          schema:
            $ref: "#/components/schemas/Error"
            $ref: '#/components/schemas/Error'
    NotFoundError:
      description: Recurso não encontrado
      content:
        application/json:
          schema:
            $ref: "#/components/schemas/Error"
            $ref: '#/components/schemas/Error'
    ServerError:
      description: Erro interno do servidor
      content:
        application/json:
          schema:
            $ref: "#/components/schemas/Error"
            $ref: '#/components/schemas/Error'
@@ -16,17 +16,16 @@ CrewAI 엔터프라이즈 API 참고 자료에 오신 것을 환영합니다.
    CrewAI AOP 대시보드에서 자신의 crew 상세 페이지로 이동하여 Status 탭에서 Bearer Token을 복사하세요.
  </Step>

  <Step title="필수 입력값 확인하기">
    `GET /inputs` 엔드포인트를 사용하여 crew가 기대하는 파라미터를 확인하세요.
  </Step>
  <Step title="필수 입력값 확인하기">
    `GET /inputs` 엔드포인트를 사용하여 crew가 기대하는 파라미터를 확인하세요.
  </Step>

  <Step title="Crew 실행 시작하기">
    입력값과 함께 `POST /kickoff`를 호출하여 crew 실행을 시작하고 `kickoff_id`를
    받으세요.
  </Step>
  <Step title="Crew 실행 시작하기">
    입력값과 함께 `POST /kickoff`를 호출하여 crew 실행을 시작하고 `kickoff_id`를 받으세요.
  </Step>

  <Step title="진행 상황 모니터링">
    `GET /{kickoff_id}/status`를 사용하여 실행 상태를 확인하고 결과를 조회하세요.
    `GET /status/{kickoff_id}`를 사용하여 실행 상태를 확인하고 결과를 조회하세요.
  </Step>
</Steps>

@@ -41,14 +40,13 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \

### 토큰 유형

| 토큰 유형             | 범위             | 사용 사례                              |
| :-------------------- | :--------------- | :------------------------------------ |
| **Bearer Token**      | 조직 단위 접근   | 전체 crew 운영, 서버 간 통합에 이상적 |
| **User Bearer Token** | 사용자 범위 접근 | 제한된 권한, 사용자별 작업에 적합     |
| 토큰 유형 | 범위 | 사용 사례 |
|:-----------|:--------|:----------|
| **Bearer Token** | 조직 단위 접근 | 전체 crew 운영, 서버 간 통합에 이상적 |
| **User Bearer Token** | 사용자 범위 접근 | 제한된 권한, 사용자별 작업에 적합 |

<Tip>
  두 토큰 유형 모두 CrewAI AOP 대시보드의 crew 상세 페이지 Status 탭에서 확인할
  수 있습니다.
  두 토큰 유형 모두 CrewAI AOP 대시보드의 crew 상세 페이지 Status 탭에서 확인할 수 있습니다.
</Tip>

## 기본 URL

@@ -65,33 +63,29 @@ https://your-crew-name.crewai.com

1. **탐색**: `GET /inputs`를 호출하여 crew가 필요한 것을 파악합니다.
2. **실행**: `POST /kickoff`를 통해 입력값을 제출하여 처리를 시작합니다.
3. **모니터링**: 완료될 때까지 `GET /{kickoff_id}/status`를 주기적으로 조회합니다.
3. **모니터링**: 완료될 때까지 `GET /status/{kickoff_id}`를 주기적으로 조회합니다.
4. **결과**: 완료된 응답에서 최종 출력을 추출합니다.

## 오류 처리

API는 표준 HTTP 상태 코드를 사용합니다:

| 코드  | 의미                                   |
| ----- | :------------------------------------ |
| `200` | 성공                                   |
| `400` | 잘못된 요청 - 잘못된 입력 형식         |
| `401` | 인증 실패 - 잘못된 베어러 토큰         |
| 코드 | 의미 |
|------|:--------|
| `200` | 성공 |
| `400` | 잘못된 요청 - 잘못된 입력 형식 |
| `401` | 인증 실패 - 잘못된 베어러 토큰 |
| `404` | 찾을 수 없음 - 리소스가 존재하지 않음 |
| `422` | 유효성 검사 오류 - 필수 입력 누락 |
| `500` | 서버 오류 - 지원팀에 문의하십시오 |
| `422` | 유효성 검사 오류 - 필수 입력 누락 |
| `500` | 서버 오류 - 지원팀에 문의하십시오 |

## 인터랙티브 테스트

<Info>
  **왜 "전송" 버튼이 없나요?** 각 CrewAI AOP 사용자는 고유한 crew URL을
  가지므로, 혼동을 피하기 위해 인터랙티브 플레이그라운드 대신 **참조 모드**를
  사용합니다. 이를 통해 비작동 전송 버튼 없이 요청이 어떻게 생겼는지 정확히
  보여줍니다.
  **왜 "전송" 버튼이 없나요?** 각 CrewAI AOP 사용자는 고유한 crew URL을 가지므로, 혼동을 피하기 위해 인터랙티브 플레이그라운드 대신 **참조 모드**를 사용합니다. 이를 통해 비작동 전송 버튼 없이 요청이 어떻게 생겼는지 정확히 보여줍니다.
</Info>

각 엔드포인트 페이지에서는 다음을 확인할 수 있습니다:

- ✅ 모든 파라미터가 포함된 **정확한 요청 형식**
- ✅ 성공 및 오류 사례에 대한 **응답 예시**
- ✅ 여러 언어(cURL, Python, JavaScript 등)로 제공되는 **코드 샘플**

@@ -109,7 +103,6 @@ API는 표준 HTTP 상태 코드를 사용합니다:
</CardGroup>

**예시 작업 흐름:**

1. **cURL 예제를 복사**합니다 (엔드포인트 페이지에서)
2. **`your-actual-crew-name.crewai.com`**을(를) 실제 crew URL로 교체합니다
3. **Bearer 토큰을** 대시보드에서 복사한 실제 토큰으로 교체합니다

@@ -118,18 +111,10 @@ API는 표준 HTTP 상태 코드를 사용합니다:
## 도움이 필요하신가요?

<CardGroup cols={2}>
  <Card
    title="Enterprise Support"
    icon="headset"
    href="mailto:support@crewai.com"
  >
  <Card title="Enterprise Support" icon="headset" href="mailto:support@crewai.com">
    API 통합 및 문제 해결에 대한 지원을 받으세요
  </Card>
  <Card
    title="Enterprise Dashboard"
    icon="chart-line"
    href="https://app.crewai.com"
  >
  <Card title="Enterprise Dashboard" icon="chart-line" href="https://app.crewai.com">
    crew를 관리하고 실행 로그를 확인하세요
  </Card>
</CardGroup>

@@ -1,6 +1,8 @@
---
title: "GET /{kickoff_id}/status"
title: "GET /status/{kickoff_id}"
description: "실행 상태 조회"
openapi: "/enterprise-api.ko.yaml GET /{kickoff_id}/status"
openapi: "/enterprise-api.ko.yaml GET /status/{kickoff_id}"
mode: "wide"
---
@@ -16,17 +16,16 @@ Bem-vindo à referência da API do CrewAI AOP. Esta API permite que você intera
    Navegue até a página de detalhes do seu crew no painel do CrewAI AOP e copie seu Bearer Token na aba Status.
  </Step>

  <Step title="Descubra os Inputs Necessários">
    Use o endpoint `GET /inputs` para ver quais parâmetros seu crew espera.
  </Step>
  <Step title="Descubra os Inputs Necessários">
    Use o endpoint `GET /inputs` para ver quais parâmetros seu crew espera.
  </Step>

  <Step title="Inicie uma Execução de Crew">
    Chame `POST /kickoff` com seus inputs para iniciar a execução do crew e
    receber um `kickoff_id`.
  </Step>
  <Step title="Inicie uma Execução de Crew">
    Chame `POST /kickoff` com seus inputs para iniciar a execução do crew e receber um `kickoff_id`.
  </Step>

  <Step title="Monitore o Progresso">
    Use `GET /{kickoff_id}/status` para checar o status da execução e recuperar os resultados.
    Use `GET /status/{kickoff_id}` para checar o status da execução e recuperar os resultados.
  </Step>
</Steps>

@@ -41,14 +40,13 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \

### Tipos de Token

| Tipo de Token         | Escopo                         | Caso de Uso                                                           |
| :-------------------- | :----------------------------- | :-------------------------------------------------------------------- |
| **Bearer Token**      | Acesso em nível de organização | Operações completas de crew, ideal para integração server-to-server  |
| **User Bearer Token** | Acesso com escopo de usuário   | Permissões limitadas, adequado para operações específicas de usuário |
| Tipo de Token | Escopo | Caso de Uso |
|:--------------------|:------------------------|:---------------------------------------------------------|
| **Bearer Token** | Acesso em nível de organização | Operações completas de crew, ideal para integração server-to-server |
| **User Bearer Token** | Acesso com escopo de usuário | Permissões limitadas, adequado para operações específicas de usuário |

<Tip>
  Você pode encontrar ambos os tipos de token na aba Status da página de
  detalhes do seu crew no painel do CrewAI AOP.
  Você pode encontrar ambos os tipos de token na aba Status da página de detalhes do seu crew no painel do CrewAI AOP.
</Tip>

## URL Base

@@ -65,33 +63,29 @@ Substitua `your-crew-name` pela URL real do seu crew no painel.

1. **Descoberta**: Chame `GET /inputs` para entender o que seu crew precisa
2. **Execução**: Envie os inputs via `POST /kickoff` para iniciar o processamento
3. **Monitoramento**: Faça polling em `GET /{kickoff_id}/status` até a conclusão
3. **Monitoramento**: Faça polling em `GET /status/{kickoff_id}` até a conclusão
4. **Resultados**: Extraia o output final da resposta concluída

## Tratamento de Erros

A API utiliza códigos de status HTTP padrão:

| Código | Significado                                      |
| ------ | :----------------------------------------------- |
| `200`  | Sucesso                                           |
| `400`  | Requisição Inválida - Formato de input inválido  |
| `401`  | Não Autorizado - Bearer token inválido           |
| `404`  | Não Encontrado - Recurso não existe              |
| Código | Significado |
|--------|:--------------------------------------|
| `200` | Sucesso |
| `400` | Requisição Inválida - Formato de input inválido |
| `401` | Não Autorizado - Bearer token inválido |
| `404` | Não Encontrado - Recurso não existe |
| `422` | Erro de Validação - Inputs obrigatórios ausentes |
| `500` | Erro no Servidor - Contate o suporte |
| `500` | Erro no Servidor - Contate o suporte |

## Testes Interativos

<Info>
  **Por que não há botão "Enviar"?** Como cada usuário do CrewAI AOP possui sua
  própria URL de crew, utilizamos o **modo referência** em vez de um playground
  interativo para evitar confusão. Isso mostra exatamente como as requisições
  devem ser feitas, sem botões de envio não funcionais.
  **Por que não há botão "Enviar"?** Como cada usuário do CrewAI AOP possui sua própria URL de crew, utilizamos o **modo referência** em vez de um playground interativo para evitar confusão. Isso mostra exatamente como as requisições devem ser feitas, sem botões de envio não funcionais.
</Info>

Cada página de endpoint mostra para você:

- ✅ **Formato exato da requisição** com todos os parâmetros
- ✅ **Exemplos de resposta** para casos de sucesso e erro
- ✅ **Exemplos de código** em várias linguagens (cURL, Python, JavaScript, etc.)

@@ -109,7 +103,6 @@ Cada página de endpoint mostra para você:
</CardGroup>

**Exemplo de fluxo:**

1. **Copie este exemplo cURL** de qualquer página de endpoint
2. **Substitua `your-actual-crew-name.crewai.com`** pela URL real do seu crew
3. **Substitua o Bearer token** pelo seu token real do painel

@@ -118,18 +111,10 @@ Cada página de endpoint mostra para você:
## Precisa de Ajuda?

<CardGroup cols={2}>
  <Card
    title="Suporte Enterprise"
    icon="headset"
    href="mailto:support@crewai.com"
  >
  <Card title="Suporte Enterprise" icon="headset" href="mailto:support@crewai.com">
    Obtenha ajuda com integração da API e resolução de problemas
  </Card>
  <Card
    title="Painel Enterprise"
    icon="chart-line"
    href="https://app.crewai.com"
  >
  <Card title="Painel Enterprise" icon="chart-line" href="https://app.crewai.com">
    Gerencie seus crews e visualize logs de execução
  </Card>
</CardGroup>

@@ -1,6 +1,8 @@
---
title: "GET /{kickoff_id}/status"
title: "GET /status/{kickoff_id}"
description: "Obter o status da execução"
openapi: "/enterprise-api.pt-BR.yaml GET /{kickoff_id}/status"
openapi: "/enterprise-api.pt-BR.yaml GET /status/{kickoff_id}"
mode: "wide"
---
@@ -12,7 +12,7 @@ dependencies = [
    "pytube~=15.0.0",
    "requests~=2.32.5",
    "docker~=7.1.0",
    "crewai==1.7.1",
    "crewai==1.7.0",
    "lancedb~=0.5.4",
    "tiktoken~=0.8.0",
    "beautifulsoup4~=4.13.4",

@@ -291,4 +291,4 @@ __all__ = [
    "ZapierActionTools",
]

__version__ = "1.7.1"
__version__ = "1.7.0"
@@ -49,7 +49,7 @@ Repository = "https://github.com/crewAIInc/crewAI"

[project.optional-dependencies]
tools = [
    "crewai-tools==1.7.1",
    "crewai-tools==1.7.0",
]
embeddings = [
    "tiktoken~=0.8.0"

@@ -40,7 +40,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:

_suppress_pydantic_deprecation_warnings()

__version__ = "1.7.1"
__version__ = "1.7.0"
_telemetry_submitted = False
@@ -16,7 +16,7 @@ from crewai.events.types.knowledge_events (
    KnowledgeSearchQueryFailedEvent,
)
from crewai.knowledge.utils.knowledge_utils import extract_knowledge_context
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.converter import generate_model_description


if TYPE_CHECKING:
@@ -5,9 +5,10 @@ from __future__ import annotations

from abc import ABC, abstractmethod
import json
import re
from typing import TYPE_CHECKING, Any, Final, Literal
from typing import TYPE_CHECKING, Final, Literal

from crewai.utilities.converter import generate_model_description

from crewai.utilities.pydantic_schema_utils import generate_model_description


if TYPE_CHECKING:

@@ -41,7 +42,7 @@ class BaseConverterAdapter(ABC):
        """
        self.agent_adapter = agent_adapter
        self._output_format: Literal["json", "pydantic"] | None = None
        self._schema: dict[str, Any] | None = None
        self._schema: str | None = None

    @abstractmethod
    def configure_structured_output(self, task: Task) -> None:

@@ -128,7 +129,7 @@ class BaseConverterAdapter(ABC):
    @staticmethod
    def _configure_format_from_task(
        task: Task,
    ) -> tuple[Literal["json", "pydantic"] | None, dict[str, Any] | None]:
    ) -> tuple[Literal["json", "pydantic"] | None, str | None]:
        """Determine output format and schema from task requirements.

        This is a helper method that examines the task's output requirements
@@ -4,7 +4,6 @@ This module contains the OpenAIConverterAdapter class that handles structured
output conversion for OpenAI agents, supporting JSON and Pydantic model formats.
"""

import json
from typing import Any

from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter

@@ -62,7 +61,7 @@ class OpenAIConverterAdapter(BaseConverterAdapter):
        output_schema: str = (
            get_i18n()
            .slice("formatted_task_instructions")
            .format(output_format=json.dumps(self._schema, indent=2))
            .format(output_format=self._schema)
        )

        return f"{base_prompt}\n\n{output_schema}"
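A rough sketch of the behavior the adapter hunks above describe: the schema is kept as the string returned by `generate_model_description` and interpolated straight into the task instructions, rather than being stored as a dict and `json.dumps`-ed at prompt time. The `ReportOutput` model and the instruction wording below are illustrative only.

```python
from pydantic import BaseModel

from crewai.utilities.converter import generate_model_description


class ReportOutput(BaseModel):
    """Hypothetical structured output for a task."""

    title: str
    bullet_points: list[str]


# generate_model_description already returns a string, so it can be dropped
# directly into the formatted task instructions.
schema_text = generate_model_description(ReportOutput)
prompt_suffix = (
    "Ensure your final answer contains only the content in the following "
    f"format: {schema_text}"
)
print(prompt_suffix)
```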
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
    "crewai[tools]==1.7.1"
    "crewai[tools]==1.7.0"
]

[project.scripts]

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
    "crewai[tools]==1.7.1"
    "crewai[tools]==1.7.0"
]

[project.scripts]
@@ -1,5 +1,4 @@
import base64
from json import JSONDecodeError
import os
from pathlib import Path
import subprocess

@@ -163,19 +162,9 @@ class ToolCommand(BaseCommand, PlusAPIMixin):

        if login_response.status_code != 200:
            console.print(
                "Authentication failed. Verify if the currently active organization can access the tool repository, and run 'crewai login' again.",
                "Authentication failed. Verify if the currently active organization access to the tool repository, and run 'crewai login' again. ",
                style="bold red",
            )
            try:
                console.print(
                    f"[{login_response.status_code} error - {login_response.json().get('message', 'Unknown error')}]",
                    style="bold red italic",
                )
            except JSONDecodeError:
                console.print(
                    f"[{login_response.status_code} error - Unknown error - Invalid JSON response]",
                    style="bold red italic",
                )
            raise SystemExit

        login_response_json = login_response.json()
@@ -1017,26 +1017,10 @@ class Crew(FlowTrackable, BaseModel):
            tasks=self.tasks, planning_agent_llm=self.planning_llm
        )._handle_crew_planning()

        plan_map: dict[int, str] = {}
        for step_plan in result.list_of_plans_per_task:
            if step_plan.task_number in plan_map:
                self._logger.log(
                    "warning",
                    f"Duplicate plan for Task Number {step_plan.task_number}, "
                    "using the first plan",
                )
            else:
                plan_map[step_plan.task_number] = step_plan.plan

        for idx, task in enumerate(self.tasks):
            task_number = idx + 1
            if task_number in plan_map:
                task.description += plan_map[task_number]
            else:
                self._logger.log(
                    "warning",
                    f"No plan found for Task Number {task_number}",
                )
        for task, step_plan in zip(
            self.tasks, result.list_of_plans_per_task, strict=False
        ):
            task.description += step_plan.plan

    def _store_execution_log(
        self,
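The two variants in the hunk above differ in how plans are matched to tasks: the `plan_map` keyed by `task_number` warns about duplicates and missing plans, while `zip(..., strict=False)` pairs plans to tasks purely by position and silently stops at the shorter sequence. A toy illustration of the positional behavior (names are made up):

```python
tasks = ["research task", "summary task", "report task"]
plans = ["Plan: gather sources", "Plan: condense findings"]  # one plan short

# Positional pairing: the third task simply gets no plan, and no warning is logged.
for task, plan in zip(tasks, plans, strict=False):
    print(f"{task} <- {plan}")
```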
@@ -19,9 +19,9 @@ class SignalType(IntEnum):

    SIGTERM = signal.SIGTERM
    SIGINT = signal.SIGINT
    SIGHUP = getattr(signal, "SIGHUP", 1)
    SIGTSTP = getattr(signal, "SIGTSTP", 20)
    SIGCONT = getattr(signal, "SIGCONT", 18)
    SIGHUP = signal.SIGHUP
    SIGTSTP = signal.SIGTSTP
    SIGCONT = signal.SIGCONT


class SigTermEvent(BaseEvent):
@@ -3,20 +3,23 @@ from __future__ import annotations

import json
import logging
import os
import time
from typing import TYPE_CHECKING, Any, TypedDict

from pydantic import BaseModel
from typing_extensions import Self

from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.converter import generate_model_description
from crewai.utilities.exceptions.context_window_exceeding_exception import (
    LLMContextLengthExceededError,
)
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.types import LLMMessage


if TYPE_CHECKING:
    from azure.core.credentials import AccessToken, TokenCredential

    from crewai.llms.hooks.base import BaseInterceptor


@@ -51,6 +54,39 @@ except ImportError:
    ) from None


class _StaticTokenCredential:
    """A simple TokenCredential implementation for static Azure AD tokens.

    This class wraps a static token string and provides it as a TokenCredential
    that can be used with Azure SDK clients. The token is assumed to be valid
    and the user is responsible for token rotation.
    """

    def __init__(self, token: str) -> None:
        """Initialize with a static token.

        Args:
            token: The Azure AD bearer token string.
        """
        self._token = token

    def get_token(
        self, *scopes: str, **kwargs: Any
    ) -> AccessToken:
        """Get the static token as an AccessToken.

        Args:
            *scopes: Token scopes (ignored for static tokens).
            **kwargs: Additional arguments (ignored).

        Returns:
            AccessToken with the static token and a far-future expiry.
        """
        from azure.core.credentials import AccessToken

        return AccessToken(self._token, int(time.time()) + 3600)
class AzureCompletionParams(TypedDict, total=False):
    """Type definition for Azure chat completion parameters."""


@@ -92,6 +128,9 @@ class AzureCompletion(BaseLLM):
        stop: list[str] | None = None,
        stream: bool = False,
        interceptor: BaseInterceptor[Any, Any] | None = None,
        credential: TokenCredential | None = None,
        azure_ad_token: str | None = None,
        use_default_credential: bool = False,
        **kwargs: Any,
    ):
        """Initialize Azure AI Inference chat completion client.

@@ -111,7 +150,36 @@ class AzureCompletion(BaseLLM):
            stop: Stop sequences
            stream: Enable streaming responses
            interceptor: HTTP interceptor (not yet supported for Azure).
            credential: Azure TokenCredential for Azure AD authentication (e.g.,
                DefaultAzureCredential, ManagedIdentityCredential). Takes precedence
                over other authentication methods.
            azure_ad_token: Static Azure AD token string (defaults to AZURE_AD_TOKEN
                env var). Use this for scenarios where you have a pre-fetched token.
            use_default_credential: If True, automatically use DefaultAzureCredential
                for Azure AD authentication. Requires azure-identity package.
            **kwargs: Additional parameters

        Authentication Priority:
            1. credential parameter (explicit TokenCredential)
            2. azure_ad_token parameter or AZURE_AD_TOKEN env var
            3. api_key parameter or AZURE_API_KEY env var
            4. use_default_credential=True (DefaultAzureCredential)

        Example:
            # Using API key (existing behavior)
            llm = LLM(model="azure/gpt-4", api_key="...", endpoint="...")

            # Using Azure AD token from environment
            os.environ["AZURE_AD_TOKEN"] = token_provider()
            llm = LLM(model="azure/gpt-4", endpoint="...")

            # Using DefaultAzureCredential (Managed Identity, Azure CLI, etc.)
            llm = LLM(model="azure/gpt-4", endpoint="...", use_default_credential=True)

            # Using explicit TokenCredential
            from azure.identity import ManagedIdentityCredential
            llm = LLM(model="azure/gpt-4", endpoint="...",
                      credential=ManagedIdentityCredential())
        """
        if interceptor is not None:
            raise NotImplementedError(

@@ -124,6 +192,9 @@ class AzureCompletion(BaseLLM):
            )

        self.api_key = api_key or os.getenv("AZURE_API_KEY")
        self.azure_ad_token = azure_ad_token or os.getenv("AZURE_AD_TOKEN")
        self._explicit_credential = credential
        self.use_default_credential = use_default_credential
        self.endpoint = (
            endpoint
            or os.getenv("AZURE_ENDPOINT")

@@ -134,10 +205,6 @@ class AzureCompletion(BaseLLM):
        self.timeout = timeout
        self.max_retries = max_retries

        if not self.api_key:
            raise ValueError(
                "Azure API key is required. Set AZURE_API_KEY environment variable or pass api_key parameter."
            )
        if not self.endpoint:
            raise ValueError(
                "Azure endpoint is required. Set AZURE_ENDPOINT environment variable or pass endpoint parameter."

@@ -146,19 +213,22 @@ class AzureCompletion(BaseLLM):
        # Validate and potentially fix Azure OpenAI endpoint URL
        self.endpoint = self._validate_and_fix_endpoint(self.endpoint, model)

        # Select credential based on priority
        selected_credential = self._select_credential()

        # Build client kwargs
        client_kwargs = {
        client_kwargs: dict[str, Any] = {
            "endpoint": self.endpoint,
            "credential": AzureKeyCredential(self.api_key),
            "credential": selected_credential,
        }

        # Add api_version if specified (primarily for Azure OpenAI endpoints)
        if self.api_version:
            client_kwargs["api_version"] = self.api_version

        self.client = ChatCompletionsClient(**client_kwargs)  # type: ignore[arg-type]
        self.client = ChatCompletionsClient(**client_kwargs)

        self.async_client = AsyncChatCompletionsClient(**client_kwargs)  # type: ignore[arg-type]
        self.async_client = AsyncChatCompletionsClient(**client_kwargs)

        self.top_p = top_p
        self.frequency_penalty = frequency_penalty

@@ -175,6 +245,47 @@ class AzureCompletion(BaseLLM):
            and "/openai/deployments/" in self.endpoint
        )

    def _select_credential(self) -> AzureKeyCredential | TokenCredential:
        """Select the appropriate credential based on configuration priority.

        Priority order:
            1. Explicit credential parameter (TokenCredential)
            2. azure_ad_token parameter or AZURE_AD_TOKEN env var
            3. api_key parameter or AZURE_API_KEY env var
            4. use_default_credential=True (DefaultAzureCredential)

        Returns:
            The selected credential for Azure authentication.

        Raises:
            ValueError: If no valid credentials are configured.
        """
        if self._explicit_credential is not None:
            return self._explicit_credential

        if self.azure_ad_token:
            return _StaticTokenCredential(self.azure_ad_token)

        if self.api_key:
            return AzureKeyCredential(self.api_key)

        if self.use_default_credential:
            try:
                from azure.identity import DefaultAzureCredential

                return DefaultAzureCredential()
            except ImportError:
                raise ImportError(
                    "azure-identity package is required for use_default_credential=True. "
                    'Install it with: uv add "azure-identity"'
                ) from None

        raise ValueError(
            "Azure credentials are required. Provide one of: "
            "api_key / AZURE_API_KEY, azure_ad_token / AZURE_AD_TOKEN, "
            "a TokenCredential via 'credential', or set use_default_credential=True."
        )

    @staticmethod
    def _validate_and_fix_endpoint(endpoint: str, model: str) -> str:
        """Validate and fix Azure endpoint URL format.
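As a companion to the docstring examples above, here is a rough sketch of how the `AZURE_AD_TOKEN` environment variable could be populated before constructing the LLM. It assumes the `azure-identity` package and the standard Cognitive Services scope; adjust the scope and endpoint to your deployment.

```python
import os

from azure.identity import DefaultAzureCredential

from crewai import LLM

# Fetch a bearer token once and hand it over as a static Azure AD token.
# The application remains responsible for refreshing it before expiry.
credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")
os.environ["AZURE_AD_TOKEN"] = token.token

llm = LLM(model="azure/gpt-4", endpoint=os.environ["AZURE_ENDPOINT"])
```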
@@ -18,10 +18,10 @@ from crewai.events.types.llm_events import LLMCallType
from crewai.llms.base_llm import BaseLLM
from crewai.llms.hooks.transport import AsyncHTTPTransport, HTTPTransport
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.converter import generate_model_description
from crewai.utilities.exceptions.context_window_exceeding_exception import (
    LLMContextLengthExceededError,
)
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.types import LLMMessage
@@ -494,11 +494,8 @@ class Task(BaseModel):
        future: Future[TaskOutput],
    ) -> None:
        """Execute the task asynchronously with context handling."""
        try:
            result = self._execute_core(agent, context, tools)
            future.set_result(result)
        except Exception as e:
            future.set_exception(e)
        result = self._execute_core(agent, context, tools)
        future.set_result(result)

    async def aexecute_sync(
        self,
@@ -174,12 +174,9 @@ class Telemetry:

        self._register_signal_handler(signal.SIGTERM, SigTermEvent, shutdown=True)
        self._register_signal_handler(signal.SIGINT, SigIntEvent, shutdown=True)
        if hasattr(signal, "SIGHUP"):
            self._register_signal_handler(signal.SIGHUP, SigHupEvent, shutdown=False)
        if hasattr(signal, "SIGTSTP"):
            self._register_signal_handler(signal.SIGTSTP, SigTStpEvent, shutdown=False)
        if hasattr(signal, "SIGCONT"):
            self._register_signal_handler(signal.SIGCONT, SigContEvent, shutdown=False)
        self._register_signal_handler(signal.SIGHUP, SigHupEvent, shutdown=False)
        self._register_signal_handler(signal.SIGTSTP, SigTStpEvent, shutdown=False)
        self._register_signal_handler(signal.SIGCONT, SigContEvent, shutdown=False)

    def _register_signal_handler(
        self,

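The hasattr guards on one side of this hunk exist because SIGHUP, SIGTSTP, and SIGCONT are POSIX-only; a small illustrative sketch of the portable pattern (not the Telemetry code itself):

import signal

# Probe before registering: these names are absent on Windows builds of Python.
for name in ("SIGHUP", "SIGTSTP", "SIGCONT"):
    sig = getattr(signal, name, None)
    if sig is not None:
        signal.signal(sig, lambda signum, frame: None)  # placeholder handler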
@@ -3,13 +3,15 @@ from __future__ import annotations
|
||||
from abc import ABC, abstractmethod
|
||||
import asyncio
|
||||
from collections.abc import Awaitable, Callable
|
||||
from inspect import Parameter, signature
|
||||
import json
|
||||
from inspect import signature
|
||||
from typing import (
|
||||
Any,
|
||||
Generic,
|
||||
ParamSpec,
|
||||
TypeVar,
|
||||
cast,
|
||||
get_args,
|
||||
get_origin,
|
||||
overload,
|
||||
)
|
||||
|
||||
@@ -25,7 +27,6 @@ from typing_extensions import TypeIs
|
||||
|
||||
from crewai.tools.structured_tool import CrewStructuredTool
|
||||
from crewai.utilities.printer import Printer
|
||||
from crewai.utilities.pydantic_schema_utils import generate_model_description
|
||||
|
||||
|
||||
_printer = Printer()
|
||||
@@ -102,40 +103,20 @@ class BaseTool(BaseModel, ABC):
|
||||
if v != cls._ArgsSchemaPlaceholder:
|
||||
return v
|
||||
|
||||
run_sig = signature(cls._run)
|
||||
fields: dict[str, Any] = {}
|
||||
|
||||
for param_name, param in run_sig.parameters.items():
|
||||
if param_name in ("self", "return"):
|
||||
continue
|
||||
if param.kind in (Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD):
|
||||
continue
|
||||
|
||||
annotation = param.annotation if param.annotation != param.empty else Any
|
||||
|
||||
if param.default is param.empty:
|
||||
fields[param_name] = (annotation, ...)
|
||||
else:
|
||||
fields[param_name] = (annotation, param.default)
|
||||
|
||||
if not fields:
|
||||
arun_sig = signature(cls._arun)
|
||||
for param_name, param in arun_sig.parameters.items():
|
||||
if param_name in ("self", "return"):
|
||||
continue
|
||||
if param.kind in (Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD):
|
||||
continue
|
||||
|
||||
annotation = (
|
||||
param.annotation if param.annotation != param.empty else Any
|
||||
)
|
||||
|
||||
if param.default is param.empty:
|
||||
fields[param_name] = (annotation, ...)
|
||||
else:
|
||||
fields[param_name] = (annotation, param.default)
|
||||
|
||||
return create_model(f"{cls.__name__}Schema", **fields)
|
||||
return cast(
|
||||
type[PydanticBaseModel],
|
||||
type(
|
||||
f"{cls.__name__}Schema",
|
||||
(PydanticBaseModel,),
|
||||
{
|
||||
"__annotations__": {
|
||||
k: v
|
||||
for k, v in cls._run.__annotations__.items()
|
||||
if k != "return"
|
||||
},
|
||||
},
|
||||
),
|
||||
)
|
||||
|
||||
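To make the signature-inference branch above concrete, a hedged sketch of a subclass it applies to (assuming the usual crewai.tools import path; the schema assertion mirrors the base-class tool test near the end of this diff):

from crewai.tools import BaseTool

class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = "Clear description for what this tool is useful for"

    def _run(self, question: str) -> str:
        # No explicit args_schema, so it is inferred from this signature:
        # a required "question" field typed as a string.
        return question

assert MyCustomTool().args_schema.model_json_schema()["properties"] == {
    "question": {"title": "Question", "type": "string"}
}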
@field_validator("max_usage_count", mode="before")
|
||||
@classmethod
|
||||
@@ -245,23 +226,24 @@ class BaseTool(BaseModel, ABC):
|
||||
args_schema = getattr(tool, "args_schema", None)
|
||||
|
||||
if args_schema is None:
|
||||
# Infer args_schema from the function signature if not provided
|
||||
func_signature = signature(tool.func)
|
||||
fields: dict[str, Any] = {}
|
||||
for name, param in func_signature.parameters.items():
|
||||
if name == "self":
|
||||
continue
|
||||
if param.kind in (Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD):
|
||||
continue
|
||||
param_annotation = (
|
||||
param.annotation if param.annotation != param.empty else Any
|
||||
)
|
||||
if param.default is param.empty:
|
||||
fields[name] = (param_annotation, ...)
|
||||
else:
|
||||
fields[name] = (param_annotation, param.default)
|
||||
if fields:
|
||||
args_schema = create_model(f"{tool.name}Input", **fields)
|
||||
annotations = func_signature.parameters
|
||||
args_fields: dict[str, Any] = {}
|
||||
for name, param in annotations.items():
|
||||
if name != "self":
|
||||
param_annotation = (
|
||||
param.annotation if param.annotation != param.empty else Any
|
||||
)
|
||||
field_info = Field(
|
||||
default=...,
|
||||
description="",
|
||||
)
|
||||
args_fields[name] = (param_annotation, field_info)
|
||||
if args_fields:
|
||||
args_schema = create_model(f"{tool.name}Input", **args_fields)
|
||||
else:
|
||||
# Create a default schema with no fields if no parameters are found
|
||||
args_schema = create_model(
|
||||
f"{tool.name}Input", __base__=PydanticBaseModel
|
||||
)
|
||||
@@ -275,37 +257,53 @@ class BaseTool(BaseModel, ABC):
|
||||
|
||||
def _set_args_schema(self) -> None:
|
||||
if self.args_schema is None:
|
||||
run_sig = signature(self._run)
|
||||
fields: dict[str, Any] = {}
|
||||
|
||||
for param_name, param in run_sig.parameters.items():
|
||||
if param_name in ("self", "return"):
|
||||
continue
|
||||
if param.kind in (Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD):
|
||||
continue
|
||||
|
||||
annotation = (
|
||||
param.annotation if param.annotation != param.empty else Any
|
||||
)
|
||||
|
||||
if param.default is param.empty:
|
||||
fields[param_name] = (annotation, ...)
|
||||
else:
|
||||
fields[param_name] = (annotation, param.default)
|
||||
|
||||
self.args_schema = create_model(
|
||||
f"{self.__class__.__name__}Schema", **fields
|
||||
class_name = f"{self.__class__.__name__}Schema"
|
||||
self.args_schema = cast(
|
||||
type[PydanticBaseModel],
|
||||
type(
|
||||
class_name,
|
||||
(PydanticBaseModel,),
|
||||
{
|
||||
"__annotations__": {
|
||||
k: v
|
||||
for k, v in self._run.__annotations__.items()
|
||||
if k != "return"
|
||||
},
|
||||
},
|
||||
),
|
||||
)
|
||||
|
||||
def _generate_description(self) -> None:
|
||||
"""Generate the tool description with a JSON schema for arguments."""
|
||||
schema = generate_model_description(self.args_schema)
|
||||
args_json = json.dumps(schema["json_schema"]["schema"], indent=2)
|
||||
self.description = (
|
||||
f"Tool Name: {self.name}\n"
|
||||
f"Tool Arguments: {args_json}\n"
|
||||
f"Tool Description: {self.description}"
|
||||
)
|
||||
args_schema = {
|
||||
name: {
|
||||
"description": field.description,
|
||||
"type": BaseTool._get_arg_annotations(field.annotation),
|
||||
}
|
||||
for name, field in self.args_schema.model_fields.items()
|
||||
}
|
||||
|
||||
self.description = f"Tool Name: {self.name}\nTool Arguments: {args_schema}\nTool Description: {self.description}"
|
||||
|
||||
@staticmethod
|
||||
def _get_arg_annotations(annotation: type[Any] | None) -> str:
|
||||
if annotation is None:
|
||||
return "None"
|
||||
|
||||
origin = get_origin(annotation)
|
||||
args = get_args(annotation)
|
||||
|
||||
if origin is None:
|
||||
return (
|
||||
annotation.__name__
|
||||
if hasattr(annotation, "__name__")
|
||||
else str(annotation)
|
||||
)
|
||||
|
||||
if args:
|
||||
args_str = ", ".join(BaseTool._get_arg_annotations(arg) for arg in args)
|
||||
return str(f"{origin.__name__}[{args_str}]")
|
||||
|
||||
return str(origin.__name__)
|
||||
|
||||
|
||||
class Tool(BaseTool, Generic[P, R]):
|
||||
@@ -408,23 +406,24 @@ class Tool(BaseTool, Generic[P, R]):
|
||||
args_schema = getattr(tool, "args_schema", None)
|
||||
|
||||
if args_schema is None:
|
||||
# Infer args_schema from the function signature if not provided
|
||||
func_signature = signature(tool.func)
|
||||
fields: dict[str, Any] = {}
|
||||
for name, param in func_signature.parameters.items():
|
||||
if name == "self":
|
||||
continue
|
||||
if param.kind in (Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD):
|
||||
continue
|
||||
param_annotation = (
|
||||
param.annotation if param.annotation != param.empty else Any
|
||||
)
|
||||
if param.default is param.empty:
|
||||
fields[name] = (param_annotation, ...)
|
||||
else:
|
||||
fields[name] = (param_annotation, param.default)
|
||||
if fields:
|
||||
args_schema = create_model(f"{tool.name}Input", **fields)
|
||||
annotations = func_signature.parameters
|
||||
args_fields: dict[str, Any] = {}
|
||||
for name, param in annotations.items():
|
||||
if name != "self":
|
||||
param_annotation = (
|
||||
param.annotation if param.annotation != param.empty else Any
|
||||
)
|
||||
field_info = Field(
|
||||
default=...,
|
||||
description="",
|
||||
)
|
||||
args_fields[name] = (param_annotation, field_info)
|
||||
if args_fields:
|
||||
args_schema = create_model(f"{tool.name}Input", **args_fields)
|
||||
else:
|
||||
# Create a default schema with no fields if no parameters are found
|
||||
args_schema = create_model(
|
||||
f"{tool.name}Input", __base__=PydanticBaseModel
|
||||
)
|
||||
@@ -503,38 +502,32 @@ def tool(
|
||||
def _make_tool(f: Callable[P2, R2]) -> Tool[P2, R2]:
|
||||
if f.__doc__ is None:
|
||||
raise ValueError("Function must have a docstring")
|
||||
if f.__annotations__ is None:
|
||||
|
||||
func_annotations = getattr(f, "__annotations__", None)
|
||||
if func_annotations is None:
|
||||
raise ValueError("Function must have type annotations")
|
||||
|
||||
func_sig = signature(f)
|
||||
fields: dict[str, Any] = {}
|
||||
|
||||
for param_name, param in func_sig.parameters.items():
|
||||
if param_name == "return":
|
||||
continue
|
||||
if param.kind in (Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD):
|
||||
continue
|
||||
|
||||
annotation = (
|
||||
param.annotation if param.annotation != param.empty else Any
|
||||
)
|
||||
|
||||
if param.default is param.empty:
|
||||
fields[param_name] = (annotation, ...)
|
||||
else:
|
||||
fields[param_name] = (annotation, param.default)
|
||||
|
||||
class_name = "".join(tool_name.split()).title()
|
||||
args_schema = create_model(class_name, **fields)
|
||||
tool_args_schema = cast(
|
||||
type[PydanticBaseModel],
|
||||
type(
|
||||
class_name,
|
||||
(PydanticBaseModel,),
|
||||
{
|
||||
"__annotations__": {
|
||||
k: v for k, v in func_annotations.items() if k != "return"
|
||||
},
|
||||
},
|
||||
),
|
||||
)
|
||||
|
||||
return Tool(
|
||||
name=tool_name,
|
||||
description=f.__doc__,
|
||||
func=f,
|
||||
args_schema=args_schema,
|
||||
args_schema=tool_args_schema,
|
||||
result_as_answer=result_as_answer,
|
||||
max_usage_count=max_usage_count,
|
||||
current_usage_count=0,
|
||||
)
|
||||
|
||||
return _make_tool
|
||||
|
||||
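A brief usage sketch of the decorator this hunk touches (assuming the usual crewai.tools import path; the tool mirrors the one in the annotation test further down, and the body is illustrative):

from crewai.tools import tool

@tool("Name of my tool")
def my_tool(question: str) -> str:
    """Clear description for what this tool is useful for."""
    return question

# The decorator requires a docstring and type annotations, then builds an
# args_schema from the signature, so the generated schema exposes a required
# "question" string field.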
@@ -169,18 +169,7 @@ def handle_max_iterations_exceeded(
        )
        raise ValueError("Invalid response from LLM call - None or empty.")

    try:
        formatted = format_answer(answer=answer)
    except OutputParserError:
        printer.print(
            content="Failed to parse forced final answer. Returning raw response.",
            color="yellow",
        )
        return AgentFinish(
            thought="Failed to parse LLM response during max iterations",
            output=answer,
            text=answer,
        )
    formatted = format_answer(answer=answer)

    # If format_answer returned an AgentAction, convert it to AgentFinish
    if isinstance(formatted, AgentFinish):
@@ -217,15 +206,9 @@ def format_answer(answer: str) -> AgentAction | AgentFinish:

    Returns:
        Either an AgentAction or AgentFinish

    Raises:
        OutputParserError: If parsing fails due to malformed LLM output format.
            This allows the retry logic in _invoke_loop() to handle the error.
    """
    try:
        return parse(answer)
    except OutputParserError:
        raise
    except Exception:
        return AgentFinish(
            thought="Failed to parse LLM response",

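One side of this hunk documents that OutputParserError propagates to the caller; a minimal sketch of that contract (the malformed sample mirrors the test module removed at the end of this diff):

from crewai.agents.parser import AgentFinish, OutputParserError
from crewai.utilities.agent_utils import format_answer

well_formed = "Thought: I found the answer\nFinal Answer: The result is 42"
assert isinstance(format_answer(answer=well_formed), AgentFinish)

malformed = "Thought\nAction\nVideo Analysis Tool\nAction Input:\n{\"query\": \"...\"}"
try:
    format_answer(answer=malformed)
except OutputParserError:
    # Propagating lets an invoke loop re-prompt the LLM instead of silently
    # wrapping the raw text in an AgentFinish.
    pass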
@@ -1,5 +1,7 @@
|
||||
from __future__ import annotations
|
||||
|
||||
from collections.abc import Callable
|
||||
from copy import deepcopy
|
||||
import json
|
||||
import re
|
||||
from typing import TYPE_CHECKING, Any, Final, TypedDict
|
||||
@@ -11,7 +13,6 @@ from crewai.agents.agent_builder.utilities.base_output_converter import OutputCo
|
||||
from crewai.utilities.i18n import get_i18n
|
||||
from crewai.utilities.internal_instructor import InternalInstructor
|
||||
from crewai.utilities.printer import Printer
|
||||
from crewai.utilities.pydantic_schema_utils import generate_model_description
|
||||
|
||||
|
||||
if TYPE_CHECKING:
|
||||
@@ -420,3 +421,221 @@ def create_converter(
|
||||
raise Exception("No output converter found or set.")
|
||||
|
||||
return converter # type: ignore[no-any-return]
|
||||
|
||||
|
||||
def resolve_refs(schema: dict[str, Any]) -> dict[str, Any]:
|
||||
"""Recursively resolve all local $refs in the given JSON Schema using $defs as the source.
|
||||
|
||||
This is needed because Pydantic generates $ref-based schemas that
|
||||
some consumers (e.g. LLMs, tool frameworks) don't handle well.
|
||||
|
||||
Args:
|
||||
schema: JSON Schema dict that may contain "$refs" and "$defs".
|
||||
|
||||
Returns:
|
||||
A new schema dictionary with all local $refs replaced by their definitions.
|
||||
"""
|
||||
defs = schema.get("$defs", {})
|
||||
schema_copy = deepcopy(schema)
|
||||
|
||||
def _resolve(node: Any) -> Any:
|
||||
if isinstance(node, dict):
|
||||
ref = node.get("$ref")
|
||||
if isinstance(ref, str) and ref.startswith("#/$defs/"):
|
||||
def_name = ref.replace("#/$defs/", "")
|
||||
if def_name in defs:
|
||||
return _resolve(deepcopy(defs[def_name]))
|
||||
raise KeyError(f"Definition '{def_name}' not found in $defs.")
|
||||
return {k: _resolve(v) for k, v in node.items()}
|
||||
|
||||
if isinstance(node, list):
|
||||
return [_resolve(i) for i in node]
|
||||
|
||||
return node
|
||||
|
||||
return _resolve(schema_copy) # type: ignore[no-any-return]
|
||||
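A small sketch of the inlining this function performs (the Item model name is made up):

schema = {
    "$defs": {"Item": {"type": "object", "properties": {"name": {"type": "string"}}}},
    "type": "object",
    "properties": {"item": {"$ref": "#/$defs/Item"}},
}
resolved = resolve_refs(schema)
# resolved["properties"]["item"] is now the full Item object schema rather than
# a "$ref" pointer; the top-level "$defs" entry is left for the caller to drop.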
|
||||
|
||||
def add_key_in_dict_recursively(
|
||||
d: dict[str, Any], key: str, value: Any, criteria: Callable[[dict[str, Any]], bool]
|
||||
) -> dict[str, Any]:
|
||||
"""Recursively adds a key/value pair to all nested dicts matching `criteria`."""
|
||||
if isinstance(d, dict):
|
||||
if criteria(d) and key not in d:
|
||||
d[key] = value
|
||||
for v in d.values():
|
||||
add_key_in_dict_recursively(v, key, value, criteria)
|
||||
elif isinstance(d, list):
|
||||
for i in d:
|
||||
add_key_in_dict_recursively(i, key, value, criteria)
|
||||
return d
|
||||
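This helper is how the schema pipeline below stamps additionalProperties onto every object node; a short sketch:

schema = {"type": "object", "properties": {"inner": {"type": "object", "properties": {}}}}
add_key_in_dict_recursively(
    schema,
    key="additionalProperties",
    value=False,
    criteria=lambda d: d.get("type") == "object" and "additionalProperties" not in d,
)
# Both the outer object and the nested "inner" object now carry
# "additionalProperties": False.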
|
||||
|
||||
def fix_discriminator_mappings(schema: dict[str, Any]) -> dict[str, Any]:
|
||||
"""Replace '#/$defs/...' references in discriminator.mapping with just the model name."""
|
||||
output = schema.get("properties", {}).get("output")
|
||||
if not output:
|
||||
return schema
|
||||
|
||||
disc = output.get("discriminator")
|
||||
if not disc or "mapping" not in disc:
|
||||
return schema
|
||||
|
||||
disc["mapping"] = {k: v.split("/")[-1] for k, v in disc["mapping"].items()}
|
||||
return schema
|
||||
|
||||
|
||||
def add_const_to_oneof_variants(schema: dict[str, Any]) -> dict[str, Any]:
|
||||
"""Add const fields to oneOf variants for discriminated unions.
|
||||
|
||||
The json_schema_to_pydantic library requires each oneOf variant to have
|
||||
a const field for the discriminator property. This function adds those
|
||||
const fields based on the discriminator mapping.
|
||||
|
||||
Args:
|
||||
schema: JSON Schema dict that may contain discriminated unions
|
||||
|
||||
Returns:
|
||||
Modified schema with const fields added to oneOf variants
|
||||
"""
|
||||
|
||||
def _process_oneof(node: dict[str, Any]) -> dict[str, Any]:
|
||||
"""Process a single node that might contain a oneOf with discriminator."""
|
||||
if not isinstance(node, dict):
|
||||
return node
|
||||
|
||||
if "oneOf" in node and "discriminator" in node:
|
||||
discriminator = node["discriminator"]
|
||||
property_name = discriminator.get("propertyName")
|
||||
mapping = discriminator.get("mapping", {})
|
||||
|
||||
if property_name and mapping:
|
||||
one_of_variants = node.get("oneOf", [])
|
||||
|
||||
for variant in one_of_variants:
|
||||
if isinstance(variant, dict) and "properties" in variant:
|
||||
variant_title = variant.get("title", "")
|
||||
|
||||
matched_disc_value = None
|
||||
for disc_value, schema_name in mapping.items():
|
||||
if variant_title == schema_name or variant_title.endswith(
|
||||
schema_name
|
||||
):
|
||||
matched_disc_value = disc_value
|
||||
break
|
||||
|
||||
if matched_disc_value is not None:
|
||||
props = variant["properties"]
|
||||
if property_name in props:
|
||||
props[property_name]["const"] = matched_disc_value
|
||||
|
||||
for key, value in node.items():
|
||||
if isinstance(value, dict):
|
||||
node[key] = _process_oneof(value)
|
||||
elif isinstance(value, list):
|
||||
node[key] = [
|
||||
_process_oneof(item) if isinstance(item, dict) else item
|
||||
for item in value
|
||||
]
|
||||
|
||||
return node
|
||||
|
||||
return _process_oneof(deepcopy(schema))
|
||||
|
||||
|
||||
def convert_oneof_to_anyof(schema: dict[str, Any]) -> dict[str, Any]:
|
||||
"""Convert oneOf to anyOf for OpenAI compatibility.
|
||||
|
||||
OpenAI's Structured Outputs support anyOf better than oneOf.
|
||||
This recursively converts all oneOf occurrences to anyOf.
|
||||
|
||||
Args:
|
||||
schema: JSON schema dictionary.
|
||||
|
||||
Returns:
|
||||
Modified schema with anyOf instead of oneOf.
|
||||
"""
|
||||
if isinstance(schema, dict):
|
||||
if "oneOf" in schema:
|
||||
schema["anyOf"] = schema.pop("oneOf")
|
||||
|
||||
for value in schema.values():
|
||||
if isinstance(value, dict):
|
||||
convert_oneof_to_anyof(value)
|
||||
elif isinstance(value, list):
|
||||
for item in value:
|
||||
if isinstance(item, dict):
|
||||
convert_oneof_to_anyof(item)
|
||||
|
||||
return schema
|
||||
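A tiny sketch of the in-place rewrite, nested nodes included:

schema = {"properties": {"value": {"oneOf": [{"type": "string"}, {"type": "integer"}]}}}
convert_oneof_to_anyof(schema)
# schema["properties"]["value"] now holds the same two variants under "anyOf".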
|
||||
|
||||
def ensure_all_properties_required(schema: dict[str, Any]) -> dict[str, Any]:
|
||||
"""Ensure all properties are in the required array for OpenAI strict mode.
|
||||
|
||||
OpenAI's strict structured outputs require all properties to be listed
|
||||
in the required array. This recursively updates all objects to include
|
||||
all their properties in required.
|
||||
|
||||
Args:
|
||||
schema: JSON schema dictionary.
|
||||
|
||||
Returns:
|
||||
Modified schema with all properties marked as required.
|
||||
"""
|
||||
if isinstance(schema, dict):
|
||||
if schema.get("type") == "object" and "properties" in schema:
|
||||
properties = schema["properties"]
|
||||
if properties:
|
||||
schema["required"] = list(properties.keys())
|
||||
|
||||
for value in schema.values():
|
||||
if isinstance(value, dict):
|
||||
ensure_all_properties_required(value)
|
||||
elif isinstance(value, list):
|
||||
for item in value:
|
||||
if isinstance(item, dict):
|
||||
ensure_all_properties_required(item)
|
||||
|
||||
return schema
|
||||
|
||||
|
||||
def generate_model_description(model: type[BaseModel]) -> dict[str, Any]:
    """Generate JSON schema description of a Pydantic model.

    This function takes a Pydantic model class and returns its JSON schema,
    which includes full type information, discriminators, and all metadata.
    The schema is dereferenced to inline all $ref references for better LLM understanding.

    Args:
        model: A Pydantic model class.

    Returns:
        A JSON schema dictionary representation of the model.
    """

    json_schema = model.model_json_schema(ref_template="#/$defs/{model}")

    json_schema = add_key_in_dict_recursively(
        json_schema,
        key="additionalProperties",
        value=False,
        criteria=lambda d: d.get("type") == "object"
        and "additionalProperties" not in d,
    )

    json_schema = resolve_refs(json_schema)

    json_schema.pop("$defs", None)
    json_schema = fix_discriminator_mappings(json_schema)
    json_schema = convert_oneof_to_anyof(json_schema)
    json_schema = ensure_all_properties_required(json_schema)

    return {
        "type": "json_schema",
        "json_schema": {
            "name": model.__name__,
            "strict": True,
            "schema": json_schema,
        },
    }
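For orientation, a sketch of what the pipeline above yields for a small model (field titles and key order depend on the Pydantic version, so treat the commented output as approximate):

from pydantic import BaseModel

class Answer(BaseModel):
    text: str
    score: float

desc = generate_model_description(Answer)
# desc is roughly:
# {
#   "type": "json_schema",
#   "json_schema": {
#     "name": "Answer",
#     "strict": True,
#     "schema": {
#       "title": "Answer",
#       "type": "object",
#       "additionalProperties": False,
#       "properties": {
#         "text": {"title": "Text", "type": "string"},
#         "score": {"title": "Score", "type": "number"},
#       },
#       "required": ["text", "score"],
#     },
#   },
# }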
@@ -1,15 +1,14 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
from typing import TYPE_CHECKING, Any, cast
|
||||
from typing import TYPE_CHECKING, cast
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
from crewai.events.event_bus import crewai_event_bus
|
||||
from crewai.events.types.task_events import TaskEvaluationEvent
|
||||
from crewai.llm import LLM
|
||||
from crewai.utilities.converter import Converter
|
||||
from crewai.utilities.i18n import get_i18n
|
||||
from crewai.utilities.pydantic_schema_utils import generate_model_description
|
||||
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
|
||||
from crewai.utilities.training_converter import TrainingConverter
|
||||
|
||||
|
||||
@@ -63,7 +62,7 @@ class TaskEvaluator:
|
||||
Args:
|
||||
original_agent: The agent to evaluate.
|
||||
"""
|
||||
self.llm = original_agent.llm
|
||||
self.llm = cast(LLM, original_agent.llm)
|
||||
self.original_agent = original_agent
|
||||
|
||||
def evaluate(self, task: Task, output: str) -> TaskEvaluation:
|
||||
@@ -80,8 +79,7 @@ class TaskEvaluator:
|
||||
- Investigate the Converter.to_pydantic signature, returns BaseModel strictly?
|
||||
"""
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
TaskEvaluationEvent(evaluation_type="task_evaluation", task=task), # type: ignore[no-untyped-call]
|
||||
self, TaskEvaluationEvent(evaluation_type="task_evaluation", task=task)
|
||||
)
|
||||
evaluation_query = (
|
||||
f"Assess the quality of the task completed based on the description, expected output, and actual results.\n\n"
|
||||
@@ -96,14 +94,9 @@ class TaskEvaluator:
|
||||
|
||||
instructions = "Convert all responses into valid JSON output."
|
||||
|
||||
if not self.llm.supports_function_calling(): # type: ignore[union-attr]
|
||||
schema_dict = generate_model_description(TaskEvaluation)
|
||||
output_schema: str = (
|
||||
get_i18n()
|
||||
.slice("formatted_task_instructions")
|
||||
.format(output_format=json.dumps(schema_dict, indent=2))
|
||||
)
|
||||
instructions = f"{instructions}\n\n{output_schema}"
|
||||
if not self.llm.supports_function_calling():
|
||||
model_schema = PydanticSchemaParser(model=TaskEvaluation).get_schema()
|
||||
instructions = f"{instructions}\n\nReturn only valid JSON with the following schema:\n```json\n{model_schema}\n```"
|
||||
|
||||
converter = Converter(
|
||||
llm=self.llm,
|
||||
@@ -115,7 +108,7 @@ class TaskEvaluator:
|
||||
return cast(TaskEvaluation, converter.to_pydantic())
|
||||
|
||||
def evaluate_training_data(
|
||||
self, training_data: dict[str, Any], agent_id: str
|
||||
self, training_data: dict, agent_id: str
|
||||
) -> TrainingTaskEvaluation:
|
||||
"""
|
||||
Evaluate the training data based on the llm output, human feedback, and improved output.
|
||||
@@ -128,8 +121,7 @@ class TaskEvaluator:
|
||||
- Investigate the Converter.to_pydantic signature, returns BaseModel strictly?
|
||||
"""
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
TaskEvaluationEvent(evaluation_type="training_data_evaluation"), # type: ignore[no-untyped-call]
|
||||
self, TaskEvaluationEvent(evaluation_type="training_data_evaluation")
|
||||
)
|
||||
|
||||
output_training_data = training_data[agent_id]
|
||||
@@ -172,14 +164,11 @@ class TaskEvaluator:
|
||||
)
|
||||
instructions = "I'm gonna convert this raw text into valid JSON."
|
||||
|
||||
if not self.llm.supports_function_calling(): # type: ignore[union-attr]
|
||||
schema_dict = generate_model_description(TrainingTaskEvaluation)
|
||||
output_schema: str = (
|
||||
get_i18n()
|
||||
.slice("formatted_task_instructions")
|
||||
.format(output_format=json.dumps(schema_dict, indent=2))
|
||||
)
|
||||
instructions = f"{instructions}\n\n{output_schema}"
|
||||
if not self.llm.supports_function_calling():
|
||||
model_schema = PydanticSchemaParser(
|
||||
model=TrainingTaskEvaluation
|
||||
).get_schema()
|
||||
instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
|
||||
|
||||
converter = TrainingConverter(
|
||||
llm=self.llm,
|
||||
|
||||
@@ -15,12 +15,9 @@ logger = logging.getLogger(__name__)
|
||||
class PlanPerTask(BaseModel):
|
||||
"""Represents a plan for a specific task."""
|
||||
|
||||
task_number: int = Field(
|
||||
description="The 1-indexed task number this plan corresponds to",
|
||||
ge=1,
|
||||
)
|
||||
task: str = Field(description="The task for which the plan is created")
|
||||
task: str = Field(..., description="The task for which the plan is created")
|
||||
plan: str = Field(
|
||||
...,
|
||||
description="The step by step plan on how the agents can execute their tasks using the available tools with mastery",
|
||||
)
|
||||
|
||||
|
||||
103
lib/crewai/src/crewai/utilities/pydantic_schema_parser.py
Normal file
@@ -0,0 +1,103 @@
|
||||
from typing import Any, Union, get_args, get_origin
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
|
||||
class PydanticSchemaParser(BaseModel):
|
||||
model: type[BaseModel] = Field(..., description="The Pydantic model to parse.")
|
||||
|
||||
def get_schema(self) -> str:
|
||||
"""Public method to get the schema of a Pydantic model.
|
||||
|
||||
Returns:
|
||||
String representation of the model schema.
|
||||
"""
|
||||
return "{\n" + self._get_model_schema(self.model) + "\n}"
|
||||
|
||||
def _get_model_schema(self, model: type[BaseModel], depth: int = 0) -> str:
|
||||
"""Recursively get the schema of a Pydantic model, handling nested models and lists.
|
||||
|
||||
Args:
|
||||
model: The Pydantic model to process.
|
||||
depth: The current depth of recursion for indentation purposes.
|
||||
|
||||
Returns:
|
||||
A string representation of the model schema.
|
||||
"""
|
||||
indent: str = " " * 4 * depth
|
||||
lines: list[str] = [
|
||||
f"{indent} {field_name}: {self._get_field_type_for_annotation(field.annotation, depth + 1)}"
|
||||
for field_name, field in model.model_fields.items()
|
||||
]
|
||||
return ",\n".join(lines)
|
||||
|
||||
def _format_list_type(self, list_item_type: Any, depth: int) -> str:
|
||||
"""Format a List type, handling nested models if necessary.
|
||||
|
||||
Args:
|
||||
list_item_type: The type of items in the list.
|
||||
depth: The current depth of recursion for indentation purposes.
|
||||
|
||||
Returns:
|
||||
A string representation of the List type.
|
||||
"""
|
||||
if isinstance(list_item_type, type) and issubclass(list_item_type, BaseModel):
|
||||
nested_schema = self._get_model_schema(list_item_type, depth + 1)
|
||||
nested_indent = " " * 4 * depth
|
||||
return f"List[\n{nested_indent}{{\n{nested_schema}\n{nested_indent}}}\n{nested_indent}]"
|
||||
return f"List[{list_item_type.__name__}]"
|
||||
|
||||
def _format_union_type(self, field_type: Any, depth: int) -> str:
|
||||
"""Format a Union type, handling Optional and nested types.
|
||||
|
||||
Args:
|
||||
field_type: The Union type to format.
|
||||
depth: The current depth of recursion for indentation purposes.
|
||||
|
||||
Returns:
|
||||
A string representation of the Union type.
|
||||
"""
|
||||
args = get_args(field_type)
|
||||
if type(None) in args:
|
||||
# It's an Optional type
|
||||
non_none_args = [arg for arg in args if arg is not type(None)]
|
||||
if len(non_none_args) == 1:
|
||||
inner_type = self._get_field_type_for_annotation(
|
||||
non_none_args[0], depth
|
||||
)
|
||||
return f"Optional[{inner_type}]"
|
||||
# Union with None and multiple other types
|
||||
inner_types = ", ".join(
|
||||
self._get_field_type_for_annotation(arg, depth) for arg in non_none_args
|
||||
)
|
||||
return f"Optional[Union[{inner_types}]]"
|
||||
# General Union type
|
||||
inner_types = ", ".join(
|
||||
self._get_field_type_for_annotation(arg, depth) for arg in args
|
||||
)
|
||||
return f"Union[{inner_types}]"
|
||||
|
||||
def _get_field_type_for_annotation(self, annotation: Any, depth: int) -> str:
|
||||
"""Recursively get the string representation of a field's type annotation.
|
||||
|
||||
Args:
|
||||
annotation: The type annotation to process.
|
||||
depth: The current depth of recursion for indentation purposes.
|
||||
|
||||
Returns:
|
||||
A string representation of the type annotation.
|
||||
"""
|
||||
origin: Any = get_origin(annotation)
|
||||
if origin is list:
|
||||
list_item_type = get_args(annotation)[0]
|
||||
return self._format_list_type(list_item_type, depth)
|
||||
if origin is dict:
|
||||
key_type, value_type = get_args(annotation)
|
||||
return f"Dict[{key_type.__name__}, {value_type.__name__}]"
|
||||
if origin is Union:
|
||||
return self._format_union_type(annotation, depth)
|
||||
if isinstance(annotation, type) and issubclass(annotation, BaseModel):
|
||||
nested_schema = self._get_model_schema(annotation, depth)
|
||||
nested_indent = " " * 4 * depth
|
||||
return f"{annotation.__name__}\n{nested_indent}{{\n{nested_schema}\n{nested_indent}}}"
|
||||
return annotation.__name__
|
||||
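For reference, a sketch of the string this parser produces; the three fields below stand in for the real TrainingTaskEvaluation model, matching the converter test asserted later in this diff:

from pydantic import BaseModel

class TrainingTaskEvaluation(BaseModel):  # stand-in with the fields the test expects
    suggestions: list[str]
    quality: float
    final_summary: str

schema = PydanticSchemaParser(model=TrainingTaskEvaluation).get_schema()
# schema is the plain-text block:
# {
#  suggestions: List[str],
#  quality: float,
#  final_summary: str
# }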
@@ -1,245 +0,0 @@
|
||||
"""Utilities for generating JSON schemas from Pydantic models.
|
||||
|
||||
This module provides functions for converting Pydantic models to JSON schemas
|
||||
suitable for use with LLMs and tool definitions.
|
||||
"""
|
||||
|
||||
from collections.abc import Callable
|
||||
from copy import deepcopy
|
||||
from typing import Any
|
||||
|
||||
from pydantic import BaseModel
|
||||
|
||||
|
||||
def resolve_refs(schema: dict[str, Any]) -> dict[str, Any]:
|
||||
"""Recursively resolve all local $refs in the given JSON Schema using $defs as the source.
|
||||
|
||||
This is needed because Pydantic generates $ref-based schemas that
|
||||
some consumers (e.g. LLMs, tool frameworks) don't handle well.
|
||||
|
||||
Args:
|
||||
schema: JSON Schema dict that may contain "$refs" and "$defs".
|
||||
|
||||
Returns:
|
||||
A new schema dictionary with all local $refs replaced by their definitions.
|
||||
"""
|
||||
defs = schema.get("$defs", {})
|
||||
schema_copy = deepcopy(schema)
|
||||
|
||||
def _resolve(node: Any) -> Any:
|
||||
if isinstance(node, dict):
|
||||
ref = node.get("$ref")
|
||||
if isinstance(ref, str) and ref.startswith("#/$defs/"):
|
||||
def_name = ref.replace("#/$defs/", "")
|
||||
if def_name in defs:
|
||||
return _resolve(deepcopy(defs[def_name]))
|
||||
raise KeyError(f"Definition '{def_name}' not found in $defs.")
|
||||
return {k: _resolve(v) for k, v in node.items()}
|
||||
|
||||
if isinstance(node, list):
|
||||
return [_resolve(i) for i in node]
|
||||
|
||||
return node
|
||||
|
||||
return _resolve(schema_copy) # type: ignore[no-any-return]
|
||||
|
||||
|
||||
def add_key_in_dict_recursively(
|
||||
d: dict[str, Any], key: str, value: Any, criteria: Callable[[dict[str, Any]], bool]
|
||||
) -> dict[str, Any]:
|
||||
"""Recursively adds a key/value pair to all nested dicts matching `criteria`.
|
||||
|
||||
Args:
|
||||
d: The dictionary to modify.
|
||||
key: The key to add.
|
||||
value: The value to add.
|
||||
criteria: A function that returns True for dicts that should receive the key.
|
||||
|
||||
Returns:
|
||||
The modified dictionary.
|
||||
"""
|
||||
if isinstance(d, dict):
|
||||
if criteria(d) and key not in d:
|
||||
d[key] = value
|
||||
for v in d.values():
|
||||
add_key_in_dict_recursively(v, key, value, criteria)
|
||||
elif isinstance(d, list):
|
||||
for i in d:
|
||||
add_key_in_dict_recursively(i, key, value, criteria)
|
||||
return d
|
||||
|
||||
|
||||
def fix_discriminator_mappings(schema: dict[str, Any]) -> dict[str, Any]:
|
||||
"""Replace '#/$defs/...' references in discriminator.mapping with just the model name.
|
||||
|
||||
Args:
|
||||
schema: JSON schema dictionary.
|
||||
|
||||
Returns:
|
||||
Modified schema with fixed discriminator mappings.
|
||||
"""
|
||||
output = schema.get("properties", {}).get("output")
|
||||
if not output:
|
||||
return schema
|
||||
|
||||
disc = output.get("discriminator")
|
||||
if not disc or "mapping" not in disc:
|
||||
return schema
|
||||
|
||||
disc["mapping"] = {k: v.split("/")[-1] for k, v in disc["mapping"].items()}
|
||||
return schema
|
||||
|
||||
|
||||
def add_const_to_oneof_variants(schema: dict[str, Any]) -> dict[str, Any]:
|
||||
"""Add const fields to oneOf variants for discriminated unions.
|
||||
|
||||
The json_schema_to_pydantic library requires each oneOf variant to have
|
||||
a const field for the discriminator property. This function adds those
|
||||
const fields based on the discriminator mapping.
|
||||
|
||||
Args:
|
||||
schema: JSON Schema dict that may contain discriminated unions
|
||||
|
||||
Returns:
|
||||
Modified schema with const fields added to oneOf variants
|
||||
"""
|
||||
|
||||
def _process_oneof(node: dict[str, Any]) -> dict[str, Any]:
|
||||
"""Process a single node that might contain a oneOf with discriminator."""
|
||||
if not isinstance(node, dict):
|
||||
return node
|
||||
|
||||
if "oneOf" in node and "discriminator" in node:
|
||||
discriminator = node["discriminator"]
|
||||
property_name = discriminator.get("propertyName")
|
||||
mapping = discriminator.get("mapping", {})
|
||||
|
||||
if property_name and mapping:
|
||||
one_of_variants = node.get("oneOf", [])
|
||||
|
||||
for variant in one_of_variants:
|
||||
if isinstance(variant, dict) and "properties" in variant:
|
||||
variant_title = variant.get("title", "")
|
||||
|
||||
matched_disc_value = None
|
||||
for disc_value, schema_name in mapping.items():
|
||||
if variant_title == schema_name or variant_title.endswith(
|
||||
schema_name
|
||||
):
|
||||
matched_disc_value = disc_value
|
||||
break
|
||||
|
||||
if matched_disc_value is not None:
|
||||
props = variant["properties"]
|
||||
if property_name in props:
|
||||
props[property_name]["const"] = matched_disc_value
|
||||
|
||||
for key, value in node.items():
|
||||
if isinstance(value, dict):
|
||||
node[key] = _process_oneof(value)
|
||||
elif isinstance(value, list):
|
||||
node[key] = [
|
||||
_process_oneof(item) if isinstance(item, dict) else item
|
||||
for item in value
|
||||
]
|
||||
|
||||
return node
|
||||
|
||||
return _process_oneof(deepcopy(schema))
|
||||
|
||||
|
||||
def convert_oneof_to_anyof(schema: dict[str, Any]) -> dict[str, Any]:
|
||||
"""Convert oneOf to anyOf for OpenAI compatibility.
|
||||
|
||||
OpenAI's Structured Outputs support anyOf better than oneOf.
|
||||
This recursively converts all oneOf occurrences to anyOf.
|
||||
|
||||
Args:
|
||||
schema: JSON schema dictionary.
|
||||
|
||||
Returns:
|
||||
Modified schema with anyOf instead of oneOf.
|
||||
"""
|
||||
if isinstance(schema, dict):
|
||||
if "oneOf" in schema:
|
||||
schema["anyOf"] = schema.pop("oneOf")
|
||||
|
||||
for value in schema.values():
|
||||
if isinstance(value, dict):
|
||||
convert_oneof_to_anyof(value)
|
||||
elif isinstance(value, list):
|
||||
for item in value:
|
||||
if isinstance(item, dict):
|
||||
convert_oneof_to_anyof(item)
|
||||
|
||||
return schema
|
||||
|
||||
|
||||
def ensure_all_properties_required(schema: dict[str, Any]) -> dict[str, Any]:
|
||||
"""Ensure all properties are in the required array for OpenAI strict mode.
|
||||
|
||||
OpenAI's strict structured outputs require all properties to be listed
|
||||
in the required array. This recursively updates all objects to include
|
||||
all their properties in required.
|
||||
|
||||
Args:
|
||||
schema: JSON schema dictionary.
|
||||
|
||||
Returns:
|
||||
Modified schema with all properties marked as required.
|
||||
"""
|
||||
if isinstance(schema, dict):
|
||||
if schema.get("type") == "object" and "properties" in schema:
|
||||
properties = schema["properties"]
|
||||
if properties:
|
||||
schema["required"] = list(properties.keys())
|
||||
|
||||
for value in schema.values():
|
||||
if isinstance(value, dict):
|
||||
ensure_all_properties_required(value)
|
||||
elif isinstance(value, list):
|
||||
for item in value:
|
||||
if isinstance(item, dict):
|
||||
ensure_all_properties_required(item)
|
||||
|
||||
return schema
|
||||
|
||||
|
||||
def generate_model_description(model: type[BaseModel]) -> dict[str, Any]:
|
||||
"""Generate JSON schema description of a Pydantic model.
|
||||
|
||||
This function takes a Pydantic model class and returns its JSON schema,
|
||||
which includes full type information, discriminators, and all metadata.
|
||||
The schema is dereferenced to inline all $ref references for better LLM understanding.
|
||||
|
||||
Args:
|
||||
model: A Pydantic model class.
|
||||
|
||||
Returns:
|
||||
A JSON schema dictionary representation of the model.
|
||||
"""
|
||||
json_schema = model.model_json_schema(ref_template="#/$defs/{model}")
|
||||
|
||||
json_schema = add_key_in_dict_recursively(
|
||||
json_schema,
|
||||
key="additionalProperties",
|
||||
value=False,
|
||||
criteria=lambda d: d.get("type") == "object"
|
||||
and "additionalProperties" not in d,
|
||||
)
|
||||
|
||||
json_schema = resolve_refs(json_schema)
|
||||
|
||||
json_schema.pop("$defs", None)
|
||||
json_schema = fix_discriminator_mappings(json_schema)
|
||||
json_schema = convert_oneof_to_anyof(json_schema)
|
||||
json_schema = ensure_all_properties_required(json_schema)
|
||||
|
||||
return {
|
||||
"type": "json_schema",
|
||||
"json_schema": {
|
||||
"name": model.__name__,
|
||||
"strict": True,
|
||||
"schema": json_schema,
|
||||
},
|
||||
}
|
||||
@@ -27,9 +27,9 @@ class TestSignalType:
|
||||
"""Verify SignalType maps to correct signal numbers."""
|
||||
assert SignalType.SIGTERM == signal.SIGTERM
|
||||
assert SignalType.SIGINT == signal.SIGINT
|
||||
assert SignalType.SIGHUP == getattr(signal, "SIGHUP", 1)
|
||||
assert SignalType.SIGTSTP == getattr(signal, "SIGTSTP", 20)
|
||||
assert SignalType.SIGCONT == getattr(signal, "SIGCONT", 18)
|
||||
assert SignalType.SIGHUP == signal.SIGHUP
|
||||
assert SignalType.SIGTSTP == signal.SIGTSTP
|
||||
assert SignalType.SIGCONT == signal.SIGCONT
|
||||
|
||||
|
||||
class TestSignalEvents:
|
||||
|
||||
@@ -389,12 +389,12 @@ def test_azure_raises_error_when_endpoint_missing():
|
||||
|
||||
|
||||
def test_azure_raises_error_when_api_key_missing():
|
||||
"""Test that AzureCompletion raises ValueError when API key is missing"""
|
||||
"""Test that AzureCompletion raises ValueError when no credentials are provided"""
|
||||
from crewai.llms.providers.azure.completion import AzureCompletion
|
||||
|
||||
# Clear environment variables
|
||||
with patch.dict(os.environ, {}, clear=True):
|
||||
with pytest.raises(ValueError, match="Azure API key is required"):
|
||||
with pytest.raises(ValueError, match="Azure credentials are required"):
|
||||
AzureCompletion(model="gpt-4", endpoint="https://test.openai.azure.com")
|
||||
|
||||
|
||||
@@ -1127,3 +1127,288 @@ def test_azure_streaming_returns_usage_metrics():
|
||||
assert result.token_usage.prompt_tokens > 0
|
||||
assert result.token_usage.completion_tokens > 0
|
||||
assert result.token_usage.successful_requests >= 1
|
||||
|
||||
|
||||
def test_azure_ad_token_authentication():
|
||||
"""
|
||||
Test that Azure AD token authentication works via AZURE_AD_TOKEN env var.
|
||||
"""
|
||||
from crewai.llms.providers.azure.completion import AzureCompletion, _StaticTokenCredential
|
||||
|
||||
with patch.dict(os.environ, {
|
||||
"AZURE_AD_TOKEN": "test-ad-token",
|
||||
"AZURE_ENDPOINT": "https://test.openai.azure.com"
|
||||
}, clear=True):
|
||||
llm = LLM(model="azure/gpt-4")
|
||||
|
||||
assert isinstance(llm, AzureCompletion)
|
||||
assert llm.azure_ad_token == "test-ad-token"
|
||||
assert llm.api_key is None
|
||||
|
||||
|
||||
def test_azure_ad_token_parameter():
|
||||
"""
|
||||
Test that azure_ad_token parameter works for Azure AD authentication.
|
||||
"""
|
||||
from crewai.llms.providers.azure.completion import AzureCompletion
|
||||
|
||||
llm = LLM(
|
||||
model="azure/gpt-4",
|
||||
azure_ad_token="my-ad-token",
|
||||
endpoint="https://test.openai.azure.com"
|
||||
)
|
||||
|
||||
assert isinstance(llm, AzureCompletion)
|
||||
assert llm.azure_ad_token == "my-ad-token"
|
||||
|
||||
|
||||
def test_azure_credential_parameter():
|
||||
"""
|
||||
Test that credential parameter works for passing TokenCredential directly.
|
||||
"""
|
||||
from crewai.llms.providers.azure.completion import AzureCompletion
|
||||
|
||||
class MockTokenCredential:
|
||||
def get_token(self, *scopes, **kwargs):
|
||||
from azure.core.credentials import AccessToken
|
||||
return AccessToken("mock-token", 9999999999)
|
||||
|
||||
mock_credential = MockTokenCredential()
|
||||
|
||||
llm = LLM(
|
||||
model="azure/gpt-4",
|
||||
credential=mock_credential,
|
||||
endpoint="https://test.openai.azure.com"
|
||||
)
|
||||
|
||||
assert isinstance(llm, AzureCompletion)
|
||||
assert llm._explicit_credential is mock_credential
|
||||
|
||||
|
||||
def test_azure_use_default_credential():
|
||||
"""
|
||||
Test that use_default_credential=True uses DefaultAzureCredential.
|
||||
"""
|
||||
from crewai.llms.providers.azure.completion import AzureCompletion
|
||||
|
||||
try:
|
||||
from azure.identity import DefaultAzureCredential
|
||||
azure_identity_available = True
|
||||
except ImportError:
|
||||
azure_identity_available = False
|
||||
|
||||
if azure_identity_available:
|
||||
with patch('azure.identity.DefaultAzureCredential') as mock_default_cred:
|
||||
mock_default_cred.return_value = MagicMock()
|
||||
|
||||
with patch.dict(os.environ, {
|
||||
"AZURE_ENDPOINT": "https://test.openai.azure.com"
|
||||
}, clear=True):
|
||||
llm = LLM(
|
||||
model="azure/gpt-4",
|
||||
use_default_credential=True
|
||||
)
|
||||
|
||||
assert isinstance(llm, AzureCompletion)
|
||||
assert llm.use_default_credential is True
|
||||
mock_default_cred.assert_called_once()
|
||||
else:
|
||||
with patch.dict(os.environ, {
|
||||
"AZURE_ENDPOINT": "https://test.openai.azure.com"
|
||||
}, clear=True):
|
||||
with pytest.raises(ImportError, match="azure-identity package is required"):
|
||||
LLM(
|
||||
model="azure/gpt-4",
|
||||
use_default_credential=True
|
||||
)
|
||||
|
||||
|
||||
def test_azure_credential_priority_explicit_credential_first():
|
||||
"""
|
||||
Test that explicit credential takes priority over other auth methods.
|
||||
"""
|
||||
from crewai.llms.providers.azure.completion import AzureCompletion
|
||||
|
||||
class MockTokenCredential:
|
||||
def get_token(self, *scopes, **kwargs):
|
||||
from azure.core.credentials import AccessToken
|
||||
return AccessToken("mock-token", 9999999999)
|
||||
|
||||
mock_credential = MockTokenCredential()
|
||||
|
||||
with patch.dict(os.environ, {
|
||||
"AZURE_API_KEY": "test-key",
|
||||
"AZURE_AD_TOKEN": "test-ad-token",
|
||||
"AZURE_ENDPOINT": "https://test.openai.azure.com"
|
||||
}):
|
||||
llm = LLM(
|
||||
model="azure/gpt-4",
|
||||
credential=mock_credential,
|
||||
api_key="another-key",
|
||||
azure_ad_token="another-token"
|
||||
)
|
||||
|
||||
assert isinstance(llm, AzureCompletion)
|
||||
assert llm._explicit_credential is mock_credential
|
||||
|
||||
|
||||
def test_azure_credential_priority_ad_token_over_api_key():
|
||||
"""
|
||||
Test that azure_ad_token takes priority over api_key.
|
||||
"""
|
||||
from crewai.llms.providers.azure.completion import AzureCompletion, _StaticTokenCredential
|
||||
|
||||
with patch.dict(os.environ, {
|
||||
"AZURE_ENDPOINT": "https://test.openai.azure.com"
|
||||
}, clear=True):
|
||||
llm = LLM(
|
||||
model="azure/gpt-4",
|
||||
azure_ad_token="my-ad-token",
|
||||
api_key="my-api-key"
|
||||
)
|
||||
|
||||
assert isinstance(llm, AzureCompletion)
|
||||
assert llm.azure_ad_token == "my-ad-token"
|
||||
assert llm.api_key == "my-api-key"
|
||||
|
||||
|
||||
def test_azure_raises_error_when_no_credentials():
|
||||
"""
|
||||
Test that AzureCompletion raises ValueError when no credentials are provided.
|
||||
"""
|
||||
from crewai.llms.providers.azure.completion import AzureCompletion
|
||||
|
||||
with patch.dict(os.environ, {
|
||||
"AZURE_ENDPOINT": "https://test.openai.azure.com"
|
||||
}, clear=True):
|
||||
with pytest.raises(ValueError, match="Azure credentials are required"):
|
||||
AzureCompletion(model="gpt-4", endpoint="https://test.openai.azure.com")
|
||||
|
||||
|
||||
def test_azure_static_token_credential():
|
||||
"""
|
||||
Test that _StaticTokenCredential properly wraps a static token.
|
||||
"""
|
||||
from crewai.llms.providers.azure.completion import _StaticTokenCredential
|
||||
from azure.core.credentials import AccessToken
|
||||
|
||||
token = "my-static-token"
|
||||
credential = _StaticTokenCredential(token)
|
||||
|
||||
access_token = credential.get_token("https://cognitiveservices.azure.com/.default")
|
||||
|
||||
assert isinstance(access_token, AccessToken)
|
||||
assert access_token.token == token
|
||||
assert access_token.expires_on > 0
|
||||
|
||||
|
||||
def test_azure_ad_token_env_var_used_when_no_api_key():
|
||||
"""
|
||||
Test that AZURE_AD_TOKEN env var is used when AZURE_API_KEY is not set.
|
||||
This reproduces the scenario from GitHub issue #4069.
|
||||
"""
|
||||
from crewai.llms.providers.azure.completion import AzureCompletion
|
||||
|
||||
with patch.dict(os.environ, {
|
||||
"AZURE_AD_TOKEN": "token-from-provider",
|
||||
"AZURE_ENDPOINT": "https://my-endpoint.openai.azure.com"
|
||||
}, clear=True):
|
||||
llm = LLM(
|
||||
model="azure/gpt-4o-mini",
|
||||
api_version="2024-02-01"
|
||||
)
|
||||
|
||||
assert isinstance(llm, AzureCompletion)
|
||||
assert llm.azure_ad_token == "token-from-provider"
|
||||
assert llm.api_key is None
|
||||
|
||||
|
||||
def test_azure_backward_compatibility_api_key():
|
||||
"""
|
||||
Test that existing API key authentication still works (backward compatibility).
|
||||
"""
|
||||
from crewai.llms.providers.azure.completion import AzureCompletion
|
||||
from azure.core.credentials import AzureKeyCredential
|
||||
|
||||
with patch.dict(os.environ, {
|
||||
"AZURE_API_KEY": "test-api-key",
|
||||
"AZURE_ENDPOINT": "https://test.openai.azure.com"
|
||||
}, clear=True):
|
||||
llm = LLM(model="azure/gpt-4")
|
||||
|
||||
assert isinstance(llm, AzureCompletion)
|
||||
assert llm.api_key == "test-api-key"
|
||||
assert llm.azure_ad_token is None
|
||||
|
||||
|
||||
def test_azure_select_credential_returns_correct_type():
|
||||
"""
|
||||
Test that _select_credential returns the correct credential type based on config.
|
||||
"""
|
||||
from crewai.llms.providers.azure.completion import AzureCompletion, _StaticTokenCredential
|
||||
from azure.core.credentials import AzureKeyCredential
|
||||
|
||||
with patch.dict(os.environ, {
|
||||
"AZURE_ENDPOINT": "https://test.openai.azure.com"
|
||||
}, clear=True):
|
||||
llm_api_key = AzureCompletion(
|
||||
model="gpt-4",
|
||||
api_key="test-key",
|
||||
endpoint="https://test.openai.azure.com"
|
||||
)
|
||||
credential = llm_api_key._select_credential()
|
||||
assert isinstance(credential, AzureKeyCredential)
|
||||
|
||||
llm_ad_token = AzureCompletion(
|
||||
model="gpt-4",
|
||||
azure_ad_token="test-ad-token",
|
||||
endpoint="https://test.openai.azure.com"
|
||||
)
|
||||
credential = llm_ad_token._select_credential()
|
||||
assert isinstance(credential, _StaticTokenCredential)
|
||||
|
||||
|
||||
def test_azure_use_default_credential_import_error():
|
||||
"""
|
||||
Test that use_default_credential raises ImportError when azure-identity is not available.
|
||||
"""
|
||||
from crewai.llms.providers.azure.completion import AzureCompletion
|
||||
import builtins
|
||||
|
||||
original_import = builtins.__import__
|
||||
|
||||
def mock_import(name, *args, **kwargs):
|
||||
if name == 'azure.identity':
|
||||
raise ImportError("No module named 'azure.identity'")
|
||||
return original_import(name, *args, **kwargs)
|
||||
|
||||
with patch.dict(os.environ, {
|
||||
"AZURE_ENDPOINT": "https://test.openai.azure.com"
|
||||
}, clear=True):
|
||||
with patch.object(builtins, '__import__', side_effect=mock_import):
|
||||
with pytest.raises(ImportError, match="azure-identity package is required"):
|
||||
AzureCompletion(
|
||||
model="gpt-4",
|
||||
endpoint="https://test.openai.azure.com",
|
||||
use_default_credential=True
|
||||
)
|
||||
|
||||
|
||||
def test_azure_improved_error_message_no_credentials():
|
||||
"""
|
||||
Test that the error message when no credentials are provided is helpful.
|
||||
"""
|
||||
from crewai.llms.providers.azure.completion import AzureCompletion
|
||||
|
||||
with patch.dict(os.environ, {}, clear=True):
|
||||
with pytest.raises(ValueError) as excinfo:
|
||||
AzureCompletion(model="gpt-4", endpoint="https://test.openai.azure.com")
|
||||
|
||||
error_message = str(excinfo.value)
|
||||
assert "Azure credentials are required" in error_message
|
||||
assert "api_key" in error_message
|
||||
assert "AZURE_API_KEY" in error_message
|
||||
assert "azure_ad_token" in error_message
|
||||
assert "AZURE_AD_TOKEN" in error_message
|
||||
assert "credential" in error_message
|
||||
assert "use_default_credential" in error_message
|
||||
|
||||
@@ -1727,24 +1727,3 @@ def test_task_output_includes_messages():
|
||||
assert hasattr(task2_output, "messages")
|
||||
assert isinstance(task2_output.messages, list)
|
||||
assert len(task2_output.messages) > 0
|
||||
|
||||
|
||||
def test_async_execution_fails():
|
||||
researcher = Agent(
|
||||
role="Researcher",
|
||||
goal="Make the best research and analysis on content about AI and AI agents",
|
||||
backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and is now working on doing research and analysis for a new customer.",
|
||||
allow_delegation=False,
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Give me a list of 5 interesting ideas to explore for na article, what makes them unique and interesting.",
|
||||
expected_output="Bullet point list of 5 interesting ideas.",
|
||||
async_execution=True,
|
||||
agent=researcher,
|
||||
)
|
||||
|
||||
with patch.object(Task, "_execute_core", side_effect=RuntimeError("boom!")):
|
||||
with pytest.raises(RuntimeError, match="boom!"):
|
||||
execution = task.execute_async(agent=researcher)
|
||||
execution.result()
|
||||
|
||||
@@ -17,11 +17,10 @@ def test_creating_a_tool_using_annotation():
|
||||
|
||||
# Assert all the right attributes were defined
|
||||
assert my_tool.name == "Name of my tool"
|
||||
assert "Tool Name: Name of my tool" in my_tool.description
|
||||
assert "Tool Arguments:" in my_tool.description
|
||||
assert '"question"' in my_tool.description
|
||||
assert '"type": "string"' in my_tool.description
|
||||
assert "Tool Description: Clear description for what this tool is useful for" in my_tool.description
|
||||
assert (
|
||||
my_tool.description
|
||||
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, your agent will need this information to use it."
|
||||
)
|
||||
assert my_tool.args_schema.model_json_schema()["properties"] == {
|
||||
"question": {"title": "Question", "type": "string"}
|
||||
}
|
||||
@@ -32,9 +31,10 @@ def test_creating_a_tool_using_annotation():
|
||||
converted_tool = my_tool.to_structured_tool()
|
||||
assert converted_tool.name == "Name of my tool"
|
||||
|
||||
assert "Tool Name: Name of my tool" in converted_tool.description
|
||||
assert "Tool Arguments:" in converted_tool.description
|
||||
assert '"question"' in converted_tool.description
|
||||
assert (
|
||||
converted_tool.description
|
||||
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, your agent will need this information to use it."
|
||||
)
|
||||
assert converted_tool.args_schema.model_json_schema()["properties"] == {
|
||||
"question": {"title": "Question", "type": "string"}
|
||||
}
|
||||
@@ -56,11 +56,10 @@ def test_creating_a_tool_using_baseclass():
|
||||
# Assert all the right attributes were defined
|
||||
assert my_tool.name == "Name of my tool"
|
||||
|
||||
assert "Tool Name: Name of my tool" in my_tool.description
|
||||
assert "Tool Arguments:" in my_tool.description
|
||||
assert '"question"' in my_tool.description
|
||||
assert '"type": "string"' in my_tool.description
|
||||
assert "Tool Description: Clear description for what this tool is useful for" in my_tool.description
|
||||
assert (
|
||||
my_tool.description
|
||||
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, your agent will need this information to use it."
|
||||
)
|
||||
assert my_tool.args_schema.model_json_schema()["properties"] == {
|
||||
"question": {"title": "Question", "type": "string"}
|
||||
}
|
||||
@@ -69,9 +68,10 @@ def test_creating_a_tool_using_baseclass():
|
||||
converted_tool = my_tool.to_structured_tool()
|
||||
assert converted_tool.name == "Name of my tool"
|
||||
|
||||
assert "Tool Name: Name of my tool" in converted_tool.description
|
||||
assert "Tool Arguments:" in converted_tool.description
|
||||
assert '"question"' in converted_tool.description
|
||||
assert (
|
||||
converted_tool.description
|
||||
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, your agent will need this information to use it."
|
||||
)
|
||||
assert converted_tool.args_schema.model_json_schema()["properties"] == {
|
||||
"question": {"title": "Question", "type": "string"}
|
||||
}
|
||||
|
||||
@@ -107,20 +107,25 @@ def test_tool_usage_render():
|
||||
|
||||
rendered = tool_usage._render()
|
||||
|
||||
# Check that the rendered output contains the expected tool information
|
||||
# Updated checks to match the actual output
|
||||
assert "Tool Name: Random Number Generator" in rendered
|
||||
assert "Tool Arguments:" in rendered
|
||||
assert (
|
||||
"'min_value': {'description': 'The minimum value of the range (inclusive)', 'type': 'int'}"
|
||||
in rendered
|
||||
)
|
||||
assert (
|
||||
"'max_value': {'description': 'The maximum value of the range (inclusive)', 'type': 'int'}"
|
||||
in rendered
|
||||
)
|
||||
assert (
|
||||
"Tool Description: Generates a random number within a specified range"
|
||||
in rendered
|
||||
)
|
||||
|
||||
# Check that the JSON schema format is used (proper JSON schema types)
|
||||
assert '"min_value"' in rendered
|
||||
assert '"max_value"' in rendered
|
||||
assert '"type": "integer"' in rendered
|
||||
assert '"description": "The minimum value of the range (inclusive)"' in rendered
|
||||
assert '"description": "The maximum value of the range (inclusive)"' in rendered
|
||||
assert (
|
||||
"Tool Name: Random Number Generator\nTool Arguments: {'min_value': {'description': 'The minimum value of the range (inclusive)', 'type': 'int'}, 'max_value': {'description': 'The maximum value of the range (inclusive)', 'type': 'int'}}\nTool Description: Generates a random number within a specified range"
|
||||
in rendered
|
||||
)
|
||||
|
||||
|
||||
def test_validate_tool_input_booleans_and_none():
|
||||
|
||||
@@ -1,3 +1,4 @@
from unittest import mock
from unittest.mock import MagicMock, patch

from crewai.utilities.converter import ConverterError
@@ -43,26 +44,26 @@ def test_evaluate_training_data(converter_mock):
    )

    assert result == function_return_value

    # Verify the converter was called with correct arguments
    converter_mock.assert_called_once()
    call_kwargs = converter_mock.call_args.kwargs

    assert call_kwargs["llm"] == original_agent.llm
    assert call_kwargs["model"] == TrainingTaskEvaluation
    assert "Iteration: data1" in call_kwargs["text"]
    assert "Iteration: data2" in call_kwargs["text"]

    instructions = call_kwargs["instructions"]
    assert "I'm gonna convert this raw text into valid JSON." in instructions
    assert "OpenAPI schema" in instructions
    assert '"type": "json_schema"' in instructions
    assert '"name": "TrainingTaskEvaluation"' in instructions
    assert '"suggestions"' in instructions
    assert '"quality"' in instructions
    assert '"final_summary"' in instructions

    converter_mock.return_value.to_pydantic.assert_called_once()
    converter_mock.assert_has_calls(
        [
            mock.call(
                llm=original_agent.llm,
                text="Assess the quality of the training data based on the llm output, human feedback , and llm "
                "output improved result.\n\nIteration: data1\nInitial Output:\nInitial output 1\n\nHuman Feedback:\nHuman feedback "
                "1\n\nImproved Output:\nImproved output 1\n\n------------------------------------------------\n\nIteration: data2\nInitial Output:\nInitial output 2\n\nHuman "
                "Feedback:\nHuman feedback 2\n\nImproved Output:\nImproved output 2\n\n------------------------------------------------\n\nPlease provide:\n- Provide "
                "a list of clear, actionable instructions derived from the Human Feedbacks to enhance the Agent's "
                "performance. Analyze the differences between Initial Outputs and Improved Outputs to generate specific "
                "action items for future tasks. Ensure all key and specificpoints from the human feedback are "
                "incorporated into these instructions.\n- A score from 0 to 10 evaluating on completion, quality, and "
                "overall performance from the improved output to the initial output based on the human feedback\n",
                model=TrainingTaskEvaluation,
                instructions="I'm gonna convert this raw text into valid JSON.\n\nThe json should have the "
                "following structure, with the following keys:\n{\n suggestions: List[str],\n quality: float,\n final_summary: str\n}",
            ),
            mock.call().to_pydantic(),
        ]
    )


@patch("crewai.utilities.converter.Converter.to_pydantic")

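For orientation only (not part of the diff): a minimal sketch of the Converter call pattern the mocked assertions above describe, namely the llm, text, model, and instructions keyword arguments followed by to_pydantic(). The model definition and helper below are illustrative assumptions.

# Illustrative sketch only; mirrors the keyword arguments asserted in the test above.
from typing import List, Optional

from pydantic import BaseModel

from crewai.utilities.converter import Converter


class TrainingTaskEvaluation(BaseModel):
    """Hypothetical stand-in for the evaluation model used by the test."""

    suggestions: List[str]
    quality: float
    final_summary: str


def evaluate(llm, raw_text: str) -> TrainingTaskEvaluation:
    # Build the converter with the same keyword arguments the test inspects,
    # then ask it to coerce the raw LLM text into the pydantic model.
    converter = Converter(
        llm=llm,
        text=raw_text,
        model=TrainingTaskEvaluation,
        instructions="I'm gonna convert this raw text into valid JSON.",
    )
    return converter.to_pydantic()
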
@@ -1,272 +0,0 @@
"""Tests for agent_utils module.

These tests cover the format_answer() and handle_max_iterations_exceeded() functions,
specifically testing the fix for issue #4113 where OutputParserError was being
swallowed instead of being re-raised for retry logic.
"""

from unittest.mock import MagicMock, patch

import pytest

from crewai.agents.parser import (
    AgentAction,
    AgentFinish,
    OutputParserError,
)
from crewai.utilities.agent_utils import (
    format_answer,
    handle_max_iterations_exceeded,
    process_llm_response,
)


class TestFormatAnswer:
    """Tests for the format_answer function."""

    def test_format_answer_with_valid_action(self) -> None:
        """Test that format_answer correctly parses a valid action."""
        answer = "Thought: Let's search\nAction: search\nAction Input: query"
        result = format_answer(answer)
        assert isinstance(result, AgentAction)
        assert result.tool == "search"
        assert result.tool_input == "query"

    def test_format_answer_with_valid_final_answer(self) -> None:
        """Test that format_answer correctly parses a valid final answer."""
        answer = "Thought: I found the answer\nFinal Answer: The result is 42"
        result = format_answer(answer)
        assert isinstance(result, AgentFinish)
        assert result.output == "The result is 42"

    def test_format_answer_raises_output_parser_error_for_malformed_output(
        self,
    ) -> None:
        """Test that format_answer re-raises OutputParserError for malformed output.

        This is the core fix for issue #4113. Previously, format_answer would catch
        all exceptions and return AgentFinish, which broke the retry logic.
        """
        malformed_answer = """Thought
The user wants to verify something.
Action
Video Analysis Tool
Action Input:
{"query": "Is there something?"}"""

        with pytest.raises(OutputParserError):
            format_answer(malformed_answer)

    def test_format_answer_raises_output_parser_error_missing_action(self) -> None:
        """Test that format_answer re-raises OutputParserError when Action is missing."""
        answer = "Thought: Let's search\nAction Input: query"
        with pytest.raises(OutputParserError) as exc_info:
            format_answer(answer)
        assert "Action:" in str(exc_info.value)

    def test_format_answer_raises_output_parser_error_missing_action_input(
        self,
    ) -> None:
        """Test that format_answer re-raises OutputParserError when Action Input is missing."""
        answer = "Thought: Let's search\nAction: search"
        with pytest.raises(OutputParserError) as exc_info:
            format_answer(answer)
        assert "Action Input:" in str(exc_info.value)

    def test_format_answer_returns_agent_finish_for_generic_exception(self) -> None:
        """Test that format_answer returns AgentFinish for non-OutputParserError exceptions."""
        with patch(
            "crewai.utilities.agent_utils.parse",
            side_effect=ValueError("Unexpected error"),
        ):
            result = format_answer("some answer")
        assert isinstance(result, AgentFinish)
        assert result.thought == "Failed to parse LLM response"
        assert result.output == "some answer"


class TestProcessLlmResponse:
    """Tests for the process_llm_response function."""

    def test_process_llm_response_raises_output_parser_error(self) -> None:
        """Test that process_llm_response propagates OutputParserError."""
        malformed_answer = "Thought\nMissing colons\nAction\nSome Tool"
        with pytest.raises(OutputParserError):
            process_llm_response(malformed_answer, use_stop_words=True)

    def test_process_llm_response_with_valid_action(self) -> None:
        """Test that process_llm_response correctly processes a valid action."""
        answer = "Thought: Let's search\nAction: search\nAction Input: query"
        result = process_llm_response(answer, use_stop_words=True)
        assert isinstance(result, AgentAction)
        assert result.tool == "search"

    def test_process_llm_response_with_valid_final_answer(self) -> None:
        """Test that process_llm_response correctly processes a valid final answer."""
        answer = "Thought: Done\nFinal Answer: The result"
        result = process_llm_response(answer, use_stop_words=True)
        assert isinstance(result, AgentFinish)
        assert result.output == "The result"


class TestHandleMaxIterationsExceeded:
    """Tests for the handle_max_iterations_exceeded function."""

    def test_handle_max_iterations_exceeded_with_valid_final_answer(self) -> None:
        """Test that handle_max_iterations_exceeded returns AgentFinish for valid output."""
        mock_llm = MagicMock()
        mock_llm.call.return_value = "Thought: Done\nFinal Answer: The final result"
        mock_printer = MagicMock()
        mock_i18n = MagicMock()
        mock_i18n.errors.return_value = "Please provide final answer"

        result = handle_max_iterations_exceeded(
            formatted_answer=None,
            printer=mock_printer,
            i18n=mock_i18n,
            messages=[],
            llm=mock_llm,
            callbacks=[],
        )

        assert isinstance(result, AgentFinish)
        assert result.output == "The final result"

    def test_handle_max_iterations_exceeded_with_valid_action_converts_to_finish(
        self,
    ) -> None:
        """Test that handle_max_iterations_exceeded converts AgentAction to AgentFinish."""
        mock_llm = MagicMock()
        mock_llm.call.return_value = (
            "Thought: Using tool\nAction: search\nAction Input: query"
        )
        mock_printer = MagicMock()
        mock_i18n = MagicMock()
        mock_i18n.errors.return_value = "Please provide final answer"

        result = handle_max_iterations_exceeded(
            formatted_answer=None,
            printer=mock_printer,
            i18n=mock_i18n,
            messages=[],
            llm=mock_llm,
            callbacks=[],
        )

        assert isinstance(result, AgentFinish)

    def test_handle_max_iterations_exceeded_catches_output_parser_error(self) -> None:
        """Test that handle_max_iterations_exceeded catches OutputParserError and returns AgentFinish.

        This prevents infinite loops when the forced final answer is malformed.
        Without this safeguard, the OutputParserError would bubble up to _invoke_loop(),
        which would retry, hit max iterations again, and loop forever.
        """
        malformed_response = """Thought
Missing colons everywhere
Action
Some Tool
Action Input:
{"query": "test"}"""

        mock_llm = MagicMock()
        mock_llm.call.return_value = malformed_response
        mock_printer = MagicMock()
        mock_i18n = MagicMock()
        mock_i18n.errors.return_value = "Please provide final answer"

        result = handle_max_iterations_exceeded(
            formatted_answer=None,
            printer=mock_printer,
            i18n=mock_i18n,
            messages=[],
            llm=mock_llm,
            callbacks=[],
        )

        assert isinstance(result, AgentFinish)
        assert result.output == malformed_response
        assert "Failed to parse LLM response during max iterations" in result.thought
        mock_printer.print.assert_any_call(
            content="Failed to parse forced final answer. Returning raw response.",
            color="yellow",
        )

    def test_handle_max_iterations_exceeded_with_previous_formatted_answer(
        self,
    ) -> None:
        """Test that handle_max_iterations_exceeded uses previous answer text."""
        mock_llm = MagicMock()
        mock_llm.call.return_value = "Thought: Done\nFinal Answer: New result"
        mock_printer = MagicMock()
        mock_i18n = MagicMock()
        mock_i18n.errors.return_value = "Please provide final answer"

        previous_answer = AgentAction(
            thought="Previous thought",
            tool="search",
            tool_input="query",
            text="Previous text",
        )

        result = handle_max_iterations_exceeded(
            formatted_answer=previous_answer,
            printer=mock_printer,
            i18n=mock_i18n,
            messages=[],
            llm=mock_llm,
            callbacks=[],
        )

        assert isinstance(result, AgentFinish)
        assert result.output == "New result"

    def test_handle_max_iterations_exceeded_raises_on_empty_response(self) -> None:
        """Test that handle_max_iterations_exceeded raises ValueError for empty response."""
        mock_llm = MagicMock()
        mock_llm.call.return_value = ""
        mock_printer = MagicMock()
        mock_i18n = MagicMock()
        mock_i18n.errors.return_value = "Please provide final answer"

        with pytest.raises(ValueError, match="Invalid response from LLM call"):
            handle_max_iterations_exceeded(
                formatted_answer=None,
                printer=mock_printer,
                i18n=mock_i18n,
                messages=[],
                llm=mock_llm,
                callbacks=[],
            )


class TestRetryLogicIntegration:
    """Integration tests to verify the retry logic works correctly with the fix."""

    def test_malformed_output_allows_retry_in_format_answer(self) -> None:
        """Test that malformed output raises OutputParserError which can be caught for retry.

        This simulates what happens in _invoke_loop() when the LLM returns malformed output.
        The OutputParserError should be raised so the loop can catch it and retry.
        """
        malformed_outputs = [
            "Thought\nMissing colon after Thought",
            "Thought: OK\nAction\nMissing colon after Action",
            "Thought: OK\nAction: tool\nAction Input\nMissing colon",
            "Random text without any structure",
        ]

        for malformed in malformed_outputs:
            with pytest.raises(OutputParserError):
                format_answer(malformed)

    def test_valid_output_does_not_raise(self) -> None:
        """Test that valid outputs are parsed correctly without raising."""
        valid_outputs = [
            ("Thought: Let's search\nAction: search\nAction Input: query", AgentAction),
            ("Thought: Done\nFinal Answer: The result", AgentFinish),
        ]

        for output, expected_type in valid_outputs:
            result = format_answer(output)
            assert isinstance(result, expected_type)
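For orientation only (not part of the diff): a minimal sketch of the retry pattern these deleted tests exercised. Parse failures surface as OutputParserError so a calling loop can re-prompt instead of silently returning the raw text; call_llm and the feedback message are hypothetical placeholders, not the actual _invoke_loop implementation.

# Minimal sketch, not the real _invoke_loop: it only illustrates why
# OutputParserError must propagate out of format_answer for retries to work.
from typing import Callable, List, Union

from crewai.agents.parser import AgentAction, AgentFinish, OutputParserError
from crewai.utilities.agent_utils import format_answer


def parse_with_retries(
    call_llm: Callable[[List[str]], str],  # hypothetical helper returning raw LLM text
    prompt: str,
    max_attempts: int = 3,
) -> Union[AgentAction, AgentFinish]:
    messages = [prompt]
    for _ in range(max_attempts):
        raw = call_llm(messages)
        try:
            # Raises OutputParserError when the Thought/Action/Final Answer
            # structure is malformed, instead of silently returning the raw text.
            return format_answer(raw)
        except OutputParserError as exc:
            # Feed the parser error back so the model can fix its format.
            messages.append(f"Your last reply could not be parsed: {exc}")
    raise RuntimeError("LLM never produced a parseable answer")
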
@@ -16,6 +16,7 @@ from crewai.utilities.converter import (
    handle_partial_json,
    validate_model,
)
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
from pydantic import BaseModel
import pytest


@@ -1,11 +1,7 @@
"""Tests for the planning handler module."""

from unittest.mock import MagicMock, patch
from unittest.mock import patch

import pytest

from crewai.agent import Agent
from crewai.crew import Crew
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
from crewai.task import Task
from crewai.tasks.task_output import TaskOutput
@@ -17,7 +13,7 @@ from crewai.utilities.planning_handler import (
)


class TestInternalCrewPlanner:
class InternalCrewPlanner:
    @pytest.fixture
    def crew_planner(self):
        tasks = [
@@ -53,9 +49,9 @@ class TestInternalCrewPlanner:

    def test_handle_crew_planning(self, crew_planner):
        list_of_plans_per_task = [
            PlanPerTask(task_number=1, task="Task1", plan="Plan 1"),
            PlanPerTask(task_number=2, task="Task2", plan="Plan 2"),
            PlanPerTask(task_number=3, task="Task3", plan="Plan 3"),
            PlanPerTask(task="Task1", plan="Plan 1"),
            PlanPerTask(task="Task2", plan="Plan 2"),
            PlanPerTask(task="Task3", plan="Plan 3"),
        ]
        with patch.object(Task, "execute_sync") as execute:
            execute.return_value = TaskOutput(
@@ -101,12 +97,12 @@ class TestInternalCrewPlanner:
        # Knowledge field should not be present when empty
        assert '"agent_knowledge"' not in tasks_summary

    @patch("crewai.knowledge.knowledge.Knowledge.add_sources")
    @patch("crewai.knowledge.storage.knowledge_storage.KnowledgeStorage")
    def test_create_tasks_summary_with_knowledge_and_tools(
        self, mock_storage, mock_add_sources
    ):
    @patch("crewai.knowledge.storage.knowledge_storage.chromadb")
    def test_create_tasks_summary_with_knowledge_and_tools(self, mock_chroma):
        """Test task summary generation with both knowledge and tools present."""
        # Mock ChromaDB collection
        mock_collection = mock_chroma.return_value.get_or_create_collection.return_value
        mock_collection.add.return_value = None

        # Create mock tools with proper string descriptions and structured tool support
        class MockTool(BaseTool):
@@ -170,9 +166,7 @@ class TestInternalCrewPlanner:
                description="Description",
                agent="agent",
                pydantic=PlannerTaskPydanticOutput(
                    list_of_plans_per_task=[
                        PlanPerTask(task_number=1, task="Task1", plan="Plan 1")
                    ]
                    list_of_plans_per_task=[PlanPerTask(task="Task1", plan="Plan 1")]
                ),
            )
            result = crew_planner_different_llm._handle_crew_planning()
@@ -183,181 +177,3 @@ class TestInternalCrewPlanner:
                crew_planner_different_llm.tasks
            )
            execute.assert_called_once()

    def test_plan_per_task_requires_task_number(self):
        """Test that PlanPerTask model requires task_number field."""
        with pytest.raises(ValueError):
            PlanPerTask(task="Task1", plan="Plan 1")

    def test_plan_per_task_with_task_number(self):
        """Test PlanPerTask model with task_number field."""
        plan = PlanPerTask(task_number=5, task="Task5", plan="Plan for task 5")
        assert plan.task_number == 5
        assert plan.task == "Task5"
        assert plan.plan == "Plan for task 5"


class TestCrewPlanningIntegration:
    """Tests for Crew._handle_crew_planning integration with task_number matching."""

    def test_crew_planning_with_out_of_order_plans(self):
        """Test that plans are correctly matched to tasks even when returned out of order.

        This test verifies the fix for issue #3953 where plans returned by the LLM
        in a different order than the tasks would be incorrectly assigned.
        """
        agent1 = Agent(role="Agent 1", goal="Goal 1", backstory="Backstory 1")
        agent2 = Agent(role="Agent 2", goal="Goal 2", backstory="Backstory 2")
        agent3 = Agent(role="Agent 3", goal="Goal 3", backstory="Backstory 3")

        task1 = Task(
            description="First task description",
            expected_output="Output 1",
            agent=agent1,
        )
        task2 = Task(
            description="Second task description",
            expected_output="Output 2",
            agent=agent2,
        )
        task3 = Task(
            description="Third task description",
            expected_output="Output 3",
            agent=agent3,
        )

        crew = Crew(
            agents=[agent1, agent2, agent3],
            tasks=[task1, task2, task3],
            planning=True,
        )

        out_of_order_plans = [
            PlanPerTask(task_number=3, task="Task 3", plan=" [PLAN FOR TASK 3]"),
            PlanPerTask(task_number=1, task="Task 1", plan=" [PLAN FOR TASK 1]"),
            PlanPerTask(task_number=2, task="Task 2", plan=" [PLAN FOR TASK 2]"),
        ]

        mock_planner_result = PlannerTaskPydanticOutput(
            list_of_plans_per_task=out_of_order_plans
        )

        with patch.object(
            CrewPlanner, "_handle_crew_planning", return_value=mock_planner_result
        ):
            crew._handle_crew_planning()

        assert "[PLAN FOR TASK 1]" in task1.description
        assert "[PLAN FOR TASK 2]" in task2.description
        assert "[PLAN FOR TASK 3]" in task3.description

        assert "[PLAN FOR TASK 3]" not in task1.description
        assert "[PLAN FOR TASK 1]" not in task2.description
        assert "[PLAN FOR TASK 2]" not in task3.description

    def test_crew_planning_with_missing_plan(self):
        """Test that missing plans are handled gracefully with a warning."""
        agent1 = Agent(role="Agent 1", goal="Goal 1", backstory="Backstory 1")
        agent2 = Agent(role="Agent 2", goal="Goal 2", backstory="Backstory 2")

        task1 = Task(
            description="First task description",
            expected_output="Output 1",
            agent=agent1,
        )
        task2 = Task(
            description="Second task description",
            expected_output="Output 2",
            agent=agent2,
        )

        crew = Crew(
            agents=[agent1, agent2],
            tasks=[task1, task2],
            planning=True,
        )

        original_task1_desc = task1.description
        original_task2_desc = task2.description

        incomplete_plans = [
            PlanPerTask(task_number=1, task="Task 1", plan=" [PLAN FOR TASK 1]"),
        ]

        mock_planner_result = PlannerTaskPydanticOutput(
            list_of_plans_per_task=incomplete_plans
        )

        with patch.object(
            CrewPlanner, "_handle_crew_planning", return_value=mock_planner_result
        ):
            crew._handle_crew_planning()

        assert "[PLAN FOR TASK 1]" in task1.description

        assert task2.description == original_task2_desc

    def test_crew_planning_preserves_original_description(self):
        """Test that planning appends to the original task description."""
        agent = Agent(role="Agent 1", goal="Goal 1", backstory="Backstory 1")

        task = Task(
            description="Original task description",
            expected_output="Output 1",
            agent=agent,
        )

        crew = Crew(
            agents=[agent],
            tasks=[task],
            planning=True,
        )

        plans = [
            PlanPerTask(task_number=1, task="Task 1", plan=" - Additional plan steps"),
        ]

        mock_planner_result = PlannerTaskPydanticOutput(list_of_plans_per_task=plans)

        with patch.object(
            CrewPlanner, "_handle_crew_planning", return_value=mock_planner_result
        ):
            crew._handle_crew_planning()

        assert "Original task description" in task.description
        assert "Additional plan steps" in task.description

    def test_crew_planning_with_duplicate_task_numbers(self):
        """Test that duplicate task numbers use the first plan and log a warning."""
        agent = Agent(role="Agent 1", goal="Goal 1", backstory="Backstory 1")

        task = Task(
            description="Task description",
            expected_output="Output 1",
            agent=agent,
        )

        crew = Crew(
            agents=[agent],
            tasks=[task],
            planning=True,
        )

        # Two plans with the same task_number - should use the first one
        duplicate_plans = [
            PlanPerTask(task_number=1, task="Task 1", plan=" [FIRST PLAN]"),
            PlanPerTask(task_number=1, task="Task 1", plan=" [SECOND PLAN]"),
        ]

        mock_planner_result = PlannerTaskPydanticOutput(
            list_of_plans_per_task=duplicate_plans
        )

        with patch.object(
            CrewPlanner, "_handle_crew_planning", return_value=mock_planner_result
        ):
            crew._handle_crew_planning()

        # Should use the first plan, not the second
        assert "[FIRST PLAN]" in task.description
        assert "[SECOND PLAN]" not in task.description

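For orientation only (not part of the diff): a small behavioural sketch of the plan-to-task matching these integration tests pin down. Out-of-order plans are matched by task_number, tasks without a plan keep their original description, and duplicate numbers resolve to the first plan. This illustrates the expected behaviour; it is not the code in crew.py.

# Behavioural sketch only: match plans to tasks by 1-based task_number.
def apply_plans(tasks, plans):
    plans_by_number = {}
    for plan in plans:
        # First occurrence wins when task_number is duplicated.
        plans_by_number.setdefault(plan.task_number, plan)

    for index, task in enumerate(tasks, start=1):
        plan = plans_by_number.get(index)
        if plan is None:
            # No plan for this task: leave the description untouched.
            continue
        task.description += plan.plan
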
94
lib/crewai/tests/utilities/test_pydantic_schema_parser.py
Normal file
@@ -0,0 +1,94 @@
from typing import Any, Dict, List, Optional, Set, Tuple, Union

import pytest
from pydantic import BaseModel, Field

from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser


def test_simple_model():
    class SimpleModel(BaseModel):
        field1: int
        field2: str

    parser = PydanticSchemaParser(model=SimpleModel)
    schema = parser.get_schema()

    expected_schema = """{
    field1: int,
    field2: str
}"""
    assert schema.strip() == expected_schema.strip()


def test_nested_model():
    class NestedModel(BaseModel):
        nested_field: int

    class ParentModel(BaseModel):
        parent_field: str
        nested: NestedModel

    parser = PydanticSchemaParser(model=ParentModel)
    schema = parser.get_schema()

    expected_schema = """{
    parent_field: str,
    nested: NestedModel
    {
        nested_field: int
    }
}"""
    assert schema.strip() == expected_schema.strip()


def test_model_with_list():
    class ListModel(BaseModel):
        list_field: List[int]

    parser = PydanticSchemaParser(model=ListModel)
    schema = parser.get_schema()

    expected_schema = """{
    list_field: List[int]
}"""
    assert schema.strip() == expected_schema.strip()


def test_model_with_optional_field():
    class OptionalModel(BaseModel):
        optional_field: Optional[str]

    parser = PydanticSchemaParser(model=OptionalModel)
    schema = parser.get_schema()

    expected_schema = """{
    optional_field: Optional[str]
}"""
    assert schema.strip() == expected_schema.strip()


def test_model_with_union():
    class UnionModel(BaseModel):
        union_field: Union[int, str]

    parser = PydanticSchemaParser(model=UnionModel)
    schema = parser.get_schema()

    expected_schema = """{
    union_field: Union[int, str]
}"""
    assert schema.strip() == expected_schema.strip()


def test_model_with_dict():
    class DictModel(BaseModel):
        dict_field: Dict[str, int]

    parser = PydanticSchemaParser(model=DictModel)
    schema = parser.get_schema()

    expected_schema = """{
    dict_field: Dict[str, int]
}"""
    assert schema.strip() == expected_schema.strip()
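For orientation only (not part of the diff): a minimal usage sketch of the parser these new tests cover; the Report model is an illustrative assumption.

# Minimal usage sketch mirroring the tests above.
from typing import List, Optional

from pydantic import BaseModel

from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser


class Report(BaseModel):
    """Hypothetical model used only for this example."""

    title: str
    bullet_points: List[str]
    summary: Optional[str]


# get_schema() returns a human-readable, brace-delimited description of the
# model's fields, which is the string the tests above compare against.
print(PydanticSchemaParser(model=Report).get_schema())
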
@@ -1,3 +1,3 @@
"""CrewAI development tools."""

__version__ = "1.7.1"
__version__ = "1.7.0"

@@ -323,17 +323,13 @@ def cli() -> None:
    "--dry-run", is_flag=True, help="Show what would be done without making changes"
)
@click.option("--no-push", is_flag=True, help="Don't push changes to remote")
@click.option(
    "--no-commit", is_flag=True, help="Don't commit changes (just update files)"
)
def bump(version: str, dry_run: bool, no_push: bool, no_commit: bool) -> None:
def bump(version: str, dry_run: bool, no_push: bool) -> None:
    """Bump version across all packages in lib/.

    Args:
        version: New version to set (e.g., 1.0.0, 1.0.0a1).
        dry_run: Show what would be done without making changes.
        no_push: Don't push changes to remote.
        no_commit: Don't commit changes (just update files).
    """
    try:
        # Check prerequisites
@@ -401,60 +397,51 @@ def bump(version: str, dry_run: bool, no_push: bool, no_commit: bool) -> None:
        else:
            console.print("[dim][DRY RUN][/dim] Would run: uv sync")

        if no_commit:
            console.print("\nSkipping git operations (--no-commit flag set)")
        branch_name = f"feat/bump-version-{version}"
        if not dry_run:
            console.print(f"\nCreating branch {branch_name}...")
            run_command(["git", "checkout", "-b", branch_name])
            console.print("[green]✓[/green] Branch created")

            console.print("\nCommitting changes...")
            run_command(["git", "add", "."])
            run_command(["git", "commit", "-m", f"feat: bump versions to {version}"])
            console.print("[green]✓[/green] Changes committed")

            if not no_push:
                console.print("\nPushing branch...")
                run_command(["git", "push", "-u", "origin", branch_name])
                console.print("[green]✓[/green] Branch pushed")
        else:
            branch_name = f"feat/bump-version-{version}"
            if not dry_run:
                console.print(f"\nCreating branch {branch_name}...")
                run_command(["git", "checkout", "-b", branch_name])
                console.print("[green]✓[/green] Branch created")
            console.print(f"[dim][DRY RUN][/dim] Would create branch: {branch_name}")
            console.print(
                f"[dim][DRY RUN][/dim] Would commit: feat: bump versions to {version}"
            )
            if not no_push:
                console.print(f"[dim][DRY RUN][/dim] Would push branch: {branch_name}")

                console.print("\nCommitting changes...")
                run_command(["git", "add", "."])
                run_command(
                    ["git", "commit", "-m", f"feat: bump versions to {version}"]
                )
                console.print("[green]✓[/green] Changes committed")

                if not no_push:
                    console.print("\nPushing branch...")
                    run_command(["git", "push", "-u", "origin", branch_name])
                    console.print("[green]✓[/green] Branch pushed")
            else:
                console.print(
                    f"[dim][DRY RUN][/dim] Would create branch: {branch_name}"
                )
                console.print(
                    f"[dim][DRY RUN][/dim] Would commit: feat: bump versions to {version}"
                )
                if not no_push:
                    console.print(
                        f"[dim][DRY RUN][/dim] Would push branch: {branch_name}"
                    )

        if not dry_run and not no_push:
            console.print("\nCreating pull request...")
            run_command(
                [
                    "gh",
                    "pr",
                    "create",
                    "--base",
                    "main",
                    "--title",
                    f"feat: bump versions to {version}",
                    "--body",
                    "",
                ]
            )
            console.print("[green]✓[/green] Pull request created")
        elif dry_run:
            console.print(
                f"[dim][DRY RUN][/dim] Would create PR: feat: bump versions to {version}"
            )
        else:
            console.print("\nSkipping PR creation (--no-push flag set)")
        if not dry_run and not no_push:
            console.print("\nCreating pull request...")
            run_command(
                [
                    "gh",
                    "pr",
                    "create",
                    "--base",
                    "main",
                    "--title",
                    f"feat: bump versions to {version}",
                    "--body",
                    "",
                ]
            )
            console.print("[green]✓[/green] Pull request created")
        elif dry_run:
            console.print(
                f"[dim][DRY RUN][/dim] Would create PR: feat: bump versions to {version}"
            )
        else:
            console.print("\nSkipping PR creation (--no-push flag set)")

        console.print(f"\n[green]✓[/green] Version bump to {version} complete!")
