Compare commits: v1.0.0a2...devin/1760 (23 commits)
| SHA1 |
|---|
| 4cb8df09dd |
| 0938279b89 |
| 42f2b4d551 |
| 0229390ad1 |
| f0fb349ddf |
| bf2e2a42da |
| 814c962196 |
| 2ebb2e845f |
| 7b550ebfe8 |
| 29919c2d81 |
| b71c88814f |
| cb8bcfe214 |
| 13a514f8be |
| 316b1cea69 |
| 6f2e39c0dd |
| 8d93361cb3 |
| 54ec245d84 |
| f589ab9b80 |
| fadb59e0f0 |
| 1a60848425 |
| 0135163040 |
| dac5d6d664 |
| f0f94f2540 |
(Image diffs: 50 image assets modified, sizes 17-55 KiB; viewer metadata omitted.)
.github/codeql/codeql-config.yml (new file, 21 lines)

```yaml
name: "CodeQL Config"

paths-ignore:
  # Ignore template files - these are boilerplate code that shouldn't be analyzed
  - "src/crewai/cli/templates/**"
  # Ignore test cassettes - these are test fixtures/recordings
  - "tests/cassettes/**"
  # Ignore cache and build artifacts
  - ".cache/**"
  # Ignore documentation build artifacts
  - "docs/.cache/**"

paths:
  # Include all Python source code
  - "src/**"
  # Include tests (but exclude cassettes)
  - "tests/**"

# Configure specific queries or packs if needed
# queries:
#   - uses: security-and-quality
```
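The interaction of `paths` and `paths-ignore` can be sanity-checked with a rough glob simulation. This is an illustration only, not CodeQL's actual matcher: Python's `fnmatch` lets `*` cross `/`, which here happens to approximate the intended `**` behavior.

```python
from fnmatch import fnmatch

# Filters copied from .github/codeql/codeql-config.yml
PATHS = ["src/**", "tests/**"]
PATHS_IGNORE = [
    "src/crewai/cli/templates/**",
    "tests/cassettes/**",
    ".cache/**",
    "docs/.cache/**",
]


def is_analyzed(path: str) -> bool:
    """Rough approximation: a file is scanned if it matches an include
    pattern and no ignore pattern (ignore wins over include)."""
    if any(fnmatch(path, pat) for pat in PATHS_IGNORE):
        return False
    return any(fnmatch(path, pat) for pat in PATHS)


print(is_analyzed("src/crewai/agent.py"))               # True
print(is_analyzed("src/crewai/cli/templates/main.py"))  # False: template boilerplate
print(is_analyzed("tests/cassettes/recording.yaml"))    # False: test fixture
```

The net effect is that templates, cassettes, and cache directories are excluded even though they sit under the included `src/**` and `tests/**` trees.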
.github/security.md (27 lines removed, 50 added)

```diff
@@ -1,27 +1,50 @@
-## CrewAI Security Vulnerability Reporting Policy
+## CrewAI Security Policy
 
-CrewAI prioritizes the security of our software products, services, and GitHub repositories. To promptly address vulnerabilities, follow these steps for reporting security issues:
+We are committed to protecting the confidentiality, integrity, and availability of the CrewAI ecosystem. This policy explains how to report potential vulnerabilities and what you can expect from us when you do.
 
-### Reporting Process
-Do **not** report vulnerabilities via public GitHub issues.
+### Scope
 
-Email all vulnerability reports directly to:
-**security@crewai.com**
+We welcome reports for vulnerabilities that could impact:
 
-### Required Information
-To help us quickly validate and remediate the issue, your report must include:
+- CrewAI-maintained source code and repositories
+- CrewAI-operated infrastructure and services
+- Official CrewAI releases, packages, and distributions
 
-- **Vulnerability Type:** Clearly state the vulnerability type (e.g., SQL injection, XSS, privilege escalation).
-- **Affected Source Code:** Provide full file paths and direct URLs (branch, tag, or commit).
-- **Reproduction Steps:** Include detailed, step-by-step instructions. Screenshots are recommended.
-- **Special Configuration:** Document any special settings or configurations required to reproduce.
-- **Proof-of-Concept (PoC):** Provide exploit or PoC code (if available).
-- **Impact Assessment:** Clearly explain the severity and potential exploitation scenarios.
+Issues affecting clearly unaffiliated third-party services or user-generated content are out of scope, unless you can demonstrate a direct impact on CrewAI systems or customers.
 
-### Our Response
-- We will acknowledge receipt of your report promptly via your provided email.
-- Confirmed vulnerabilities will receive priority remediation based on severity.
-- Patches will be released as swiftly as possible following verification.
+### How to Report
 
-### Reward Notice
-Currently, we do not offer a bug bounty program. Rewards, if issued, are discretionary.
+- **Please do not** disclose vulnerabilities via public GitHub issues, pull requests, or social media.
+- Email detailed reports to **security@crewai.com** with the subject line `Security Report`.
+- If you need to share large files or sensitive artifacts, mention it in your email and we will coordinate a secure transfer method.
+
+### What to Include
+
+Providing comprehensive information enables us to validate the issue quickly:
+
+- **Vulnerability overview** — a concise description and classification (e.g., RCE, privilege escalation)
+- **Affected components** — repository, branch, tag, or deployed service along with relevant file paths or endpoints
+- **Reproduction steps** — detailed, step-by-step instructions; include logs, screenshots, or screen recordings when helpful
+- **Proof-of-concept** — exploit details or code that demonstrates the impact (if available)
+- **Impact analysis** — severity assessment, potential exploitation scenarios, and any prerequisites or special configurations
+
+### Our Commitment
+
+- **Acknowledgement:** We aim to acknowledge your report within two business days.
+- **Communication:** We will keep you informed about triage results, remediation progress, and planned release timelines.
+- **Resolution:** Confirmed vulnerabilities will be prioritized based on severity and fixed as quickly as possible.
+- **Recognition:** We currently do not run a bug bounty program; any rewards or recognition are issued at CrewAI's discretion.
+
+### Coordinated Disclosure
+
+We ask that you allow us a reasonable window to investigate and remediate confirmed issues before any public disclosure. We will coordinate publication timelines with you whenever possible.
+
+### Safe Harbor
+
+We will not pursue or support legal action against individuals who, in good faith:
+
+- Follow this policy and refrain from violating any applicable laws
+- Avoid privacy violations, data destruction, or service disruption
+- Limit testing to systems in scope and respect rate limits and terms of service
+
+If you are unsure whether your testing is covered, please contact us at **security@crewai.com** before proceeding.
```
.github/workflows/build-uv-cache.yml (2 lines added)

```diff
@@ -7,6 +7,8 @@ on:
     paths:
       - "uv.lock"
      - "pyproject.toml"
+  schedule:
+    - cron: "0 0 */5 * *" # Run every 5 days at midnight UTC to prevent cache expiration
   workflow_dispatch:
 
 permissions:
```
.github/workflows/codeql.yml (1 line added)

```diff
@@ -73,6 +73,7 @@ jobs:
       with:
         languages: ${{ matrix.language }}
         build-mode: ${{ matrix.build-mode }}
+        config-file: ./.github/codeql/codeql-config.yml
         # If you wish to specify custom queries, you can do so here or in a config file.
         # By default, queries listed here will override any specified in a config file.
         # Prefix the list here with "+" to use these queries and those in the config file.
```
crewAI.excalidraw (1737 lines changed; diff not rendered)

Docs navigation config (file name not shown in this capture):

```diff
@@ -397,6 +397,7 @@
         "en/enterprise/guides/kickoff-crew",
         "en/enterprise/guides/update-crew",
         "en/enterprise/guides/enable-crew-studio",
+        "en/enterprise/guides/capture_telemetry_logs",
         "en/enterprise/guides/azure-openai-setup",
         "en/enterprise/guides/tool-repository",
         "en/enterprise/guides/react-component-export",
@@ -421,6 +422,7 @@
         "en/api-reference/introduction",
         "en/api-reference/inputs",
         "en/api-reference/kickoff",
+        "en/api-reference/resume",
         "en/api-reference/status"
       ]
     }
@@ -827,6 +829,7 @@
         "pt-BR/api-reference/introduction",
         "pt-BR/api-reference/inputs",
         "pt-BR/api-reference/kickoff",
+        "pt-BR/api-reference/resume",
         "pt-BR/api-reference/status"
       ]
     }
@@ -1239,6 +1242,7 @@
         "ko/api-reference/introduction",
         "ko/api-reference/inputs",
         "ko/api-reference/kickoff",
+        "ko/api-reference/resume",
         "ko/api-reference/status"
       ]
     }
```
docs/en/api-reference/resume.mdx (new file, 6 lines)

```mdx
---
title: "POST /resume"
description: "Resume crew execution with human feedback"
openapi: "/enterprise-api.en.yaml POST /resume"
mode: "wide"
---
```
docs/en/enterprise/guides/capture_telemetry_logs.mdx (new file, 35 lines)

```mdx
---
title: "Open Telemetry Logs"
description: "Understand how to capture telemetry logs from your CrewAI AMP deployments"
icon: "magnifying-glass-chart"
mode: "wide"
---

CrewAI AMP provides a powerful way to capture telemetry logs from your deployments. This allows you to monitor the performance of your agents and workflows, and to debug issues that may arise.

## Prerequisites

<CardGroup cols={2}>
  <Card title="ENTERPRISE OTEL SETUP enabled" icon="users">
    Your organization should have ENTERPRISE OTEL SETUP enabled
  </Card>
  <Card title="OTEL collector setup" icon="server">
    Your organization should have an OTEL collector set up, or a provider such as Datadog log intake
  </Card>
</CardGroup>

## How to capture telemetry logs

1. Go to the Settings / Organization tab
2. Configure your OTEL collector settings
3. Save

Example: setting up OTEL log collection to Datadog.

<Frame>
  ![Open Telemetry Logs](/images/enterprise/capture_telemetry_logs.png)
</Frame>
```
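The Datadog example above usually amounts to pointing an OpenTelemetry Collector at Datadog's log intake. The following is a minimal collector sketch, not CrewAI-provided configuration: it assumes the `datadog` exporter from opentelemetry-collector-contrib and an API key supplied via a `DD_API_KEY` environment variable.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [datadog]
```

With this running, the deployment's OTLP log stream lands in Datadog's Logs product; swap the exporter for any other collector-supported backend as needed.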
Human-in-the-Loop guide (docs, en; 22 lines added):

````diff
@@ -40,6 +40,28 @@ Human-In-The-Loop (HITL) is a powerful approach that combines artificial intelli
 <Frame>
     <img src="/images/enterprise/crew-resume-endpoint.png" alt="Crew Resume Endpoint" />
 </Frame>
 
+<Warning>
+**Critical: Webhook URLs Must Be Provided Again**:
+You **must** provide the same webhook URLs (`taskWebhookUrl`, `stepWebhookUrl`, `crewWebhookUrl`) in the resume call that you used in the kickoff call. Webhook configurations are **NOT** automatically carried over from kickoff - they must be explicitly included in the resume request to continue receiving notifications for task completion, agent steps, and crew completion.
+</Warning>
+
+Example resume call with webhooks:
+```bash
+curl -X POST {BASE_URL}/resume \
+  -H "Authorization: Bearer YOUR_API_TOKEN" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "execution_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
+    "task_id": "research_task",
+    "human_feedback": "Great work! Please add more details.",
+    "is_approve": true,
+    "taskWebhookUrl": "https://your-server.com/webhooks/task",
+    "stepWebhookUrl": "https://your-server.com/webhooks/step",
+    "crewWebhookUrl": "https://your-server.com/webhooks/crew"
+  }'
+```
+
 <Warning>
 **Feedback Impact on Task Execution**:
 It's crucial to exercise care when providing feedback, as the entire feedback content will be incorporated as additional context for further task executions.
@@ -76,4 +98,4 @@ HITL workflows are particularly valuable for:
 - Complex decision-making scenarios
 - Sensitive or high-stakes operations
 - Creative tasks requiring human judgment
-- Compliance and regulatory reviews
+- Compliance and regulatory reviews
````
A second Human-in-the-Loop guide (locale variant) receives the same addition:

````diff
@@ -79,6 +79,28 @@ Human-in-the-Loop (HITL) is a powerful approach that combines artificial intelli
 <Frame>
     <img src="/images/enterprise/crew-resume-endpoint.png" alt="Crew Resume Endpoint" />
 </Frame>
 
+<Warning>
+**Critical: Webhook URLs Must Be Provided Again**:
+You **must** provide the same webhook URLs (`taskWebhookUrl`, `stepWebhookUrl`, `crewWebhookUrl`) in the resume call that you used in the kickoff call. Webhook configurations are **NOT** automatically carried over from kickoff - they must be explicitly included in the resume request to continue receiving notifications for task completion, agent steps, and crew completion.
+</Warning>
+
+Example resume call with webhooks:
+```bash
+curl -X POST {BASE_URL}/resume \
+  -H "Authorization: Bearer YOUR_API_TOKEN" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "execution_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
+    "task_id": "research_task",
+    "human_feedback": "Great work! Please add more details.",
+    "is_approve": true,
+    "taskWebhookUrl": "https://your-server.com/webhooks/task",
+    "stepWebhookUrl": "https://your-server.com/webhooks/step",
+    "crewWebhookUrl": "https://your-server.com/webhooks/crew"
+  }'
+```
+
 <Warning>
 **Feedback Impact on Task Execution**:
 It's crucial to exercise care when providing feedback, as the entire feedback content will be incorporated as additional context for further task executions.
````
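Each webhook URL passed to kickoff and resume receives JSON POSTs from the platform. A minimal receiver sketch follows; the endpoint paths are hypothetical and the actual payload schema is defined by CrewAI AMP, so this only demonstrates the plumbing.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def handle_webhook(path: str, body: bytes) -> tuple[int, dict]:
    """Parse a callback body; return (http_status, parsed_payload)."""
    try:
        payload = json.loads(body or b"{}")
    except json.JSONDecodeError:
        return 400, {}
    # Here you would persist or act on the notification depending on
    # which callback fired: /webhooks/task, /webhooks/step, /webhooks/crew
    return 200, payload


class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        status, _ = handle_webhook(self.path, self.rfile.read(length))
        self.send_response(status)
        self.end_headers()


if __name__ == "__main__":
    # Serve the three callback paths on one port
    HTTPServer(("0.0.0.0", 8000), WebhookHandler).serve_forever()
```

Because resume does not inherit webhook configuration from kickoff, the same three URLs must reach this server in both calls.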
Enterprise API OpenAPI spec (en; 134 lines added):

```diff
@@ -276,6 +276,134 @@ paths:
       '500':
         $ref: '#/components/responses/ServerError'
 
+  /resume:
+    post:
+      summary: Resume Crew Execution with Human Feedback
+      description: |
+        **📋 Reference Example Only** - *This shows the request format. To test with your actual crew, copy the cURL example and replace the URL + token with your real values.*
+
+        Resume a paused crew execution with human feedback for Human-in-the-Loop (HITL) workflows.
+        When a task with `human_input=True` completes, the crew execution pauses and waits for human feedback.
+
+        **IMPORTANT**: You must provide the same webhook URLs (`taskWebhookUrl`, `stepWebhookUrl`, `crewWebhookUrl`)
+        that were used in the original kickoff call. Webhook configurations are NOT automatically carried over -
+        they must be explicitly provided in the resume request to continue receiving notifications.
+      operationId: resumeCrewExecution
+      requestBody:
+        required: true
+        content:
+          application/json:
+            schema:
+              type: object
+              required:
+                - execution_id
+                - task_id
+                - human_feedback
+                - is_approve
+              properties:
+                execution_id:
+                  type: string
+                  format: uuid
+                  description: The unique identifier for the crew execution (from kickoff)
+                  example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
+                task_id:
+                  type: string
+                  description: The ID of the task that requires human feedback
+                  example: "research_task"
+                human_feedback:
+                  type: string
+                  description: Your feedback on the task output. This will be incorporated as additional context for subsequent task executions.
+                  example: "Great research! Please add more details about recent developments in the field."
+                is_approve:
+                  type: boolean
+                  description: "Whether you approve the task output: true = positive feedback (continue), false = negative feedback (retry task)"
+                  example: true
+                taskWebhookUrl:
+                  type: string
+                  format: uri
+                  description: Callback URL executed after each task completion. MUST be provided to continue receiving task notifications.
+                  example: "https://your-server.com/webhooks/task"
+                stepWebhookUrl:
+                  type: string
+                  format: uri
+                  description: Callback URL executed after each agent thought/action. MUST be provided to continue receiving step notifications.
+                  example: "https://your-server.com/webhooks/step"
+                crewWebhookUrl:
+                  type: string
+                  format: uri
+                  description: Callback URL executed when the crew execution completes. MUST be provided to receive completion notification.
+                  example: "https://your-server.com/webhooks/crew"
+            examples:
+              approve_and_continue:
+                summary: Approve task and continue execution
+                value:
+                  execution_id: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
+                  task_id: "research_task"
+                  human_feedback: "Excellent research! Proceed to the next task."
+                  is_approve: true
+                  taskWebhookUrl: "https://api.example.com/webhooks/task"
+                  stepWebhookUrl: "https://api.example.com/webhooks/step"
+                  crewWebhookUrl: "https://api.example.com/webhooks/crew"
+              request_revision:
+                summary: Request task revision with feedback
+                value:
+                  execution_id: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
+                  task_id: "analysis_task"
+                  human_feedback: "Please include more quantitative data and cite your sources."
+                  is_approve: false
+                  taskWebhookUrl: "https://api.example.com/webhooks/task"
+                  crewWebhookUrl: "https://api.example.com/webhooks/crew"
+      responses:
+        '200':
+          description: Execution resumed successfully
+          content:
+            application/json:
+              schema:
+                type: object
+                properties:
+                  status:
+                    type: string
+                    enum: ["resumed", "retrying", "completed"]
+                    description: Status of the resumed execution
+                    example: "resumed"
+                  message:
+                    type: string
+                    description: Human-readable message about the resume operation
+                    example: "Execution resumed successfully"
+              examples:
+                resumed:
+                  summary: Execution resumed with positive feedback
+                  value:
+                    status: "resumed"
+                    message: "Execution resumed successfully"
+                retrying:
+                  summary: Task will be retried with negative feedback
+                  value:
+                    status: "retrying"
+                    message: "Task will be retried with your feedback"
+        '400':
+          description: Invalid request body or execution not in pending state
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+              example:
+                error: "Invalid Request"
+                message: "Execution is not in pending human input state"
+        '401':
+          $ref: '#/components/responses/UnauthorizedError'
+        '404':
+          description: Execution ID or Task ID not found
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+              example:
+                error: "Not Found"
+                message: "Execution ID not found"
+        '500':
+          $ref: '#/components/responses/ServerError'
 
 components:
   securitySchemes:
     BearerAuth:
```
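The schema above can be exercised with a small client helper. This is a sketch under assumptions, not an official SDK: the base URL and bearer token are yours to supply, and `urllib` is used to avoid third-party dependencies.

```python
import json
import urllib.request


def build_resume_payload(execution_id, task_id, human_feedback, is_approve,
                         webhooks=None):
    """Assemble a /resume request body matching the schema above.

    `webhooks` maps the optional taskWebhookUrl / stepWebhookUrl /
    crewWebhookUrl keys; remember these are NOT carried over from kickoff
    and must be re-sent here to keep receiving notifications.
    """
    payload = {
        "execution_id": execution_id,
        "task_id": task_id,
        "human_feedback": human_feedback,
        "is_approve": is_approve,
    }
    payload.update(webhooks or {})
    return payload


def resume(base_url, token, payload):
    """POST the payload to {base_url}/resume with bearer auth."""
    req = urllib.request.Request(
        f"{base_url}/resume",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # Per the spec, the body is {"status": ..., "message": ...}
        return json.load(resp)
```

A negative `is_approve` with feedback text triggers the `retrying` path described in the 200 response examples.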
A second locale's enterprise API spec gains the same `/resume` definition (134 lines added):

```diff
@@ -120,6 +120,134 @@ paths:
       '500':
         $ref: '#/components/responses/ServerError'
 
+  /resume:
+    post:
+      summary: Resume Crew Execution with Human Feedback
+      description: |
+        **📋 Reference Example Only** - *This shows the request format. To test with your actual crew, copy the cURL example and replace the URL + token with your real values.*
+
+        Resume a paused crew execution with human feedback for Human-in-the-Loop (HITL) workflows.
+        When a task with `human_input=True` completes, the crew execution pauses and waits for human feedback.
+
+        **IMPORTANT**: You must provide the same webhook URLs (`taskWebhookUrl`, `stepWebhookUrl`, `crewWebhookUrl`)
+        that were used in the original kickoff call. Webhook configurations are NOT automatically carried over -
+        they must be explicitly provided in the resume request to continue receiving notifications.
+      operationId: resumeCrewExecution
+      requestBody:
+        required: true
+        content:
+          application/json:
+            schema:
+              type: object
+              required:
+                - execution_id
+                - task_id
+                - human_feedback
+                - is_approve
+              properties:
+                execution_id:
+                  type: string
+                  format: uuid
+                  description: The unique identifier for the crew execution (from kickoff)
+                  example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
+                task_id:
+                  type: string
+                  description: The ID of the task that requires human feedback
+                  example: "research_task"
+                human_feedback:
+                  type: string
+                  description: Your feedback on the task output. This will be incorporated as additional context for subsequent task executions.
+                  example: "Great research! Please add more details about recent developments in the field."
+                is_approve:
+                  type: boolean
+                  description: "Whether you approve the task output: true = positive feedback (continue), false = negative feedback (retry task)"
+                  example: true
+                taskWebhookUrl:
+                  type: string
+                  format: uri
+                  description: Callback URL executed after each task completion. MUST be provided to continue receiving task notifications.
+                  example: "https://your-server.com/webhooks/task"
+                stepWebhookUrl:
+                  type: string
+                  format: uri
+                  description: Callback URL executed after each agent thought/action. MUST be provided to continue receiving step notifications.
+                  example: "https://your-server.com/webhooks/step"
+                crewWebhookUrl:
+                  type: string
+                  format: uri
+                  description: Callback URL executed when the crew execution completes. MUST be provided to receive completion notification.
+                  example: "https://your-server.com/webhooks/crew"
+            examples:
+              approve_and_continue:
+                summary: Approve task and continue execution
+                value:
+                  execution_id: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
+                  task_id: "research_task"
+                  human_feedback: "Excellent research! Proceed to the next task."
+                  is_approve: true
+                  taskWebhookUrl: "https://api.example.com/webhooks/task"
+                  stepWebhookUrl: "https://api.example.com/webhooks/step"
+                  crewWebhookUrl: "https://api.example.com/webhooks/crew"
+              request_revision:
+                summary: Request task revision with feedback
+                value:
+                  execution_id: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
+                  task_id: "analysis_task"
+                  human_feedback: "Please include more quantitative data and cite your sources."
+                  is_approve: false
+                  taskWebhookUrl: "https://api.example.com/webhooks/task"
+                  crewWebhookUrl: "https://api.example.com/webhooks/crew"
+      responses:
+        '200':
+          description: Execution resumed successfully
+          content:
+            application/json:
+              schema:
+                type: object
+                properties:
+                  status:
+                    type: string
+                    enum: ["resumed", "retrying", "completed"]
+                    description: Status of the resumed execution
+                    example: "resumed"
+                  message:
+                    type: string
+                    description: Human-readable message about the resume operation
+                    example: "Execution resumed successfully"
+              examples:
+                resumed:
+                  summary: Execution resumed with positive feedback
+                  value:
+                    status: "resumed"
+                    message: "Execution resumed successfully"
+                retrying:
+                  summary: Task will be retried with negative feedback
+                  value:
+                    status: "retrying"
+                    message: "Task will be retried with your feedback"
+        '400':
+          description: Invalid request body or execution not in pending state
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+              example:
+                error: "Invalid Request"
+                message: "Execution is not in pending human input state"
+        '401':
+          $ref: '#/components/responses/UnauthorizedError'
+        '404':
+          description: Execution ID or Task ID not found
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+              example:
+                error: "Not Found"
+                message: "Execution ID not found"
+        '500':
+          $ref: '#/components/responses/ServerError'
 
 components:
   securitySchemes:
     BearerAuth:
```
@@ -156,6 +156,134 @@ paths:
        '500':
          $ref: '#/components/responses/ServerError'

  /resume:
    post:
      summary: Resume Crew Execution with Human Feedback
      description: |
        **📋 Reference Example Only** - *This shows the request format. To test with your actual crew, copy the cURL example and replace the URL + token with your real values.*

        Resume a paused crew execution with human feedback for Human-in-the-Loop (HITL) workflows.
        When a task with `human_input=True` completes, the crew execution pauses and waits for human feedback.

        **IMPORTANT**: You must provide the same webhook URLs (`taskWebhookUrl`, `stepWebhookUrl`, `crewWebhookUrl`)
        that were used in the original kickoff call. Webhook configurations are NOT automatically carried over -
        they must be explicitly provided in the resume request to continue receiving notifications.
      operationId: resumeCrewExecution
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required:
                - execution_id
                - task_id
                - human_feedback
                - is_approve
              properties:
                execution_id:
                  type: string
                  format: uuid
                  description: The unique identifier for the crew execution (from kickoff)
                  example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
                task_id:
                  type: string
                  description: The ID of the task that requires human feedback
                  example: "research_task"
                human_feedback:
                  type: string
                  description: Your feedback on the task output. This will be incorporated as additional context for subsequent task executions.
                  example: "Great research! Please add more details about recent developments in the field."
                is_approve:
                  type: boolean
                  description: "Whether you approve the task output: true = positive feedback (continue), false = negative feedback (retry task)"
                  example: true
                taskWebhookUrl:
                  type: string
                  format: uri
                  description: Callback URL executed after each task completion. MUST be provided to continue receiving task notifications.
                  example: "https://your-server.com/webhooks/task"
                stepWebhookUrl:
                  type: string
                  format: uri
                  description: Callback URL executed after each agent thought/action. MUST be provided to continue receiving step notifications.
                  example: "https://your-server.com/webhooks/step"
                crewWebhookUrl:
                  type: string
                  format: uri
                  description: Callback URL executed when the crew execution completes. MUST be provided to receive completion notification.
                  example: "https://your-server.com/webhooks/crew"
            examples:
              approve_and_continue:
                summary: Approve task and continue execution
                value:
                  execution_id: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
                  task_id: "research_task"
                  human_feedback: "Excellent research! Proceed to the next task."
                  is_approve: true
                  taskWebhookUrl: "https://api.example.com/webhooks/task"
                  stepWebhookUrl: "https://api.example.com/webhooks/step"
                  crewWebhookUrl: "https://api.example.com/webhooks/crew"
              request_revision:
                summary: Request task revision with feedback
                value:
                  execution_id: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
                  task_id: "analysis_task"
                  human_feedback: "Please include more quantitative data and cite your sources."
                  is_approve: false
                  taskWebhookUrl: "https://api.example.com/webhooks/task"
                  crewWebhookUrl: "https://api.example.com/webhooks/crew"
      responses:
        '200':
          description: Execution resumed successfully
          content:
            application/json:
              schema:
                type: object
                properties:
                  status:
                    type: string
                    enum: ["resumed", "retrying", "completed"]
                    description: Status of the resumed execution
                    example: "resumed"
                  message:
                    type: string
                    description: Human-readable message about the resume operation
                    example: "Execution resumed successfully"
              examples:
                resumed:
                  summary: Execution resumed with positive feedback
                  value:
                    status: "resumed"
                    message: "Execution resumed successfully"
                retrying:
                  summary: Task will be retried with negative feedback
                  value:
                    status: "retrying"
                    message: "Task will be retried with your feedback"
        '400':
          description: Invalid request body or execution not in pending state
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
              example:
                error: "Invalid Request"
                message: "Execution is not in pending human input state"
        '401':
          $ref: '#/components/responses/UnauthorizedError'
        '404':
          description: Execution ID or Task ID not found
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
              example:
                error: "Not Found"
                message: "Execution ID not found"
        '500':
          $ref: '#/components/responses/ServerError'

components:
  securitySchemes:
    BearerAuth:
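For reference, the request described by the spec above can be assembled and sent from Python with only the standard library. This is a minimal sketch, assuming placeholder values for `BASE_URL` and `API_TOKEN`; `build_resume_payload` is a hypothetical helper, and the actual POST is left commented out so nothing is sent:

```python
import json
import urllib.request

# Placeholders -- replace with your deployment's URL and API token.
BASE_URL = "https://your-crew.crewai.com"
API_TOKEN = "YOUR_API_TOKEN"

def build_resume_payload(execution_id: str, task_id: str,
                         human_feedback: str, is_approve: bool,
                         **webhooks: str) -> dict:
    """Assemble a /resume request body, re-attaching the webhook URLs
    (they are NOT carried over from the kickoff call)."""
    payload = {
        "execution_id": execution_id,
        "task_id": task_id,
        "human_feedback": human_feedback,
        "is_approve": is_approve,
    }
    # taskWebhookUrl, stepWebhookUrl, crewWebhookUrl
    payload.update(webhooks)
    return payload

payload = build_resume_payload(
    "abcd1234-5678-90ef-ghij-klmnopqrstuv",
    "research_task",
    "Great research! Please add more details.",
    True,
    taskWebhookUrl="https://your-server.com/webhooks/task",
    crewWebhookUrl="https://your-server.com/webhooks/crew",
)

# Build the POST request (constructing it does not touch the network).
req = urllib.request.Request(
    f"{BASE_URL}/resume",
    data=json.dumps(payload).encode(),
    headers={"Authorization": f"Bearer {API_TOKEN}",
             "Content-Type": "application/json"},
    method="POST",
)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```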
BIN  docs/images/crewai-otel-export.png  (new file, 317 KiB)

6  docs/ko/api-reference/resume.mdx  (new file)
@@ -0,0 +1,6 @@
---
title: "POST /resume"
description: "인간 피드백으로 crew 실행 재개"
openapi: "/enterprise-api.ko.yaml POST /resume"
mode: "wide"
---
@@ -40,6 +40,28 @@ mode: "wide"
<Frame>
  <img src="/images/enterprise/crew-resume-endpoint.png" alt="Crew Resume Endpoint" />
</Frame>

<Warning>
**중요: Webhook URL을 다시 제공해야 합니다**:
kickoff 호출에서 사용한 것과 동일한 webhook URL(`taskWebhookUrl`, `stepWebhookUrl`, `crewWebhookUrl`)을 resume 호출에서 **반드시** 제공해야 합니다. Webhook 설정은 kickoff에서 자동으로 전달되지 **않으므로**, 작업 완료, 에이전트 단계, crew 완료에 대한 알림을 계속 받으려면 resume 요청에 명시적으로 포함해야 합니다.
</Warning>

Webhook을 포함한 resume 호출 예시:
```bash
curl -X POST {BASE_URL}/resume \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "execution_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
    "task_id": "research_task",
    "human_feedback": "훌륭한 작업입니다! 더 자세한 내용을 추가해주세요.",
    "is_approve": true,
    "taskWebhookUrl": "https://your-server.com/webhooks/task",
    "stepWebhookUrl": "https://your-server.com/webhooks/step",
    "crewWebhookUrl": "https://your-server.com/webhooks/crew"
  }'
```

<Warning>
**피드백이 작업 실행에 미치는 영향**:
피드백 전체 내용이 이후 작업 실행을 위한 추가 컨텍스트로 통합되므로 피드백 제공 시 신중함이 매우 중요합니다.
@@ -76,4 +98,4 @@ HITL 워크플로우는 특히 다음과 같은 경우에 유용합니다:
- 복잡한 의사 결정 시나리오
- 민감하거나 위험도가 높은 작업
- 인간의 판단이 필요한 창의적 작업
- 준수 및 규제 검토
- 준수 및 규제 검토
@@ -40,6 +40,28 @@ mode: "wide"
<Frame>
  <img src="/images/enterprise/crew-resume-endpoint.png" alt="Crew Resume Endpoint" />
</Frame>

<Warning>
**중요: Webhook URL을 다시 제공해야 합니다**:
kickoff 호출에서 사용한 것과 동일한 webhook URL(`taskWebhookUrl`, `stepWebhookUrl`, `crewWebhookUrl`)을 resume 호출에서 **반드시** 제공해야 합니다. Webhook 설정은 kickoff에서 자동으로 전달되지 **않으므로**, 작업 완료, 에이전트 단계, crew 완료에 대한 알림을 계속 받으려면 resume 요청에 명시적으로 포함해야 합니다.
</Warning>

Webhook을 포함한 resume 호출 예시:
```bash
curl -X POST {BASE_URL}/resume \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "execution_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
    "task_id": "research_task",
    "human_feedback": "훌륭한 작업입니다! 더 자세한 내용을 추가해주세요.",
    "is_approve": true,
    "taskWebhookUrl": "https://your-server.com/webhooks/task",
    "stepWebhookUrl": "https://your-server.com/webhooks/step",
    "crewWebhookUrl": "https://your-server.com/webhooks/crew"
  }'
```

<Warning>
**피드백이 작업 실행에 미치는 영향**:
피드백의 전체 내용이 추가 컨텍스트로서 이후 작업 실행에 통합되므로, 피드백 제공 시 신중을 기하는 것이 매우 중요합니다.
@@ -76,4 +98,4 @@ HITL 워크플로우는 다음과 같은 경우에 특히 유용합니다:
- 복잡한 의사결정 시나리오
- 민감하거나 고위험 작업
- 인간의 판단이 필요한 창의적 과제
- 컴플라이언스 및 규제 검토
- 컴플라이언스 및 규제 검토
6  docs/pt-BR/api-reference/resume.mdx  (new file)

@@ -0,0 +1,6 @@
---
title: "POST /resume"
description: "Retomar execução do crew com feedback humano"
openapi: "/enterprise-api.pt-BR.yaml POST /resume"
mode: "wide"
---
@@ -40,6 +40,28 @@ Human-In-The-Loop (HITL) é uma abordagem poderosa que combina inteligência art
<Frame>
  <img src="/images/enterprise/crew-resume-endpoint.png" alt="Crew Resume Endpoint" />
</Frame>

<Warning>
**Crítico: URLs de Webhook Devem Ser Fornecidas Novamente**:
Você **deve** fornecer as mesmas URLs de webhook (`taskWebhookUrl`, `stepWebhookUrl`, `crewWebhookUrl`) na chamada de resume que você usou na chamada de kickoff. As configurações de webhook **NÃO** são automaticamente transferidas do kickoff - elas devem ser explicitamente incluídas na solicitação de resume para continuar recebendo notificações de conclusão de tarefa, etapas do agente e conclusão do crew.
</Warning>

Exemplo de chamada resume com webhooks:
```bash
curl -X POST {BASE_URL}/resume \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "execution_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
    "task_id": "research_task",
    "human_feedback": "Ótimo trabalho! Por favor, adicione mais detalhes.",
    "is_approve": true,
    "taskWebhookUrl": "https://your-server.com/webhooks/task",
    "stepWebhookUrl": "https://your-server.com/webhooks/step",
    "crewWebhookUrl": "https://your-server.com/webhooks/crew"
  }'
```

<Warning>
**Impacto do Feedback na Execução da Tarefa**:
É crucial ter cuidado ao fornecer o feedback, pois todo o conteúdo do feedback será incorporado como contexto adicional para as próximas execuções da tarefa.
@@ -76,4 +98,4 @@ Workflows HITL são particularmente valiosos para:
- Cenários de tomada de decisão complexa
- Operações sensíveis ou de alto risco
- Tarefas criativas que exigem julgamento humano
- Revisões de conformidade e regulatórias
- Revisões de conformidade e regulatórias
@@ -40,6 +40,28 @@ Human-in-the-Loop (HITL) é uma abordagem poderosa que combina a inteligência a
<Frame>
  <img src="/images/enterprise/crew-resume-endpoint.png" alt="Endpoint de Retomada Crew" />
</Frame>

<Warning>
**Crítico: URLs de Webhook Devem Ser Fornecidas Novamente**:
Você **deve** fornecer as mesmas URLs de webhook (`taskWebhookUrl`, `stepWebhookUrl`, `crewWebhookUrl`) na chamada de resume que você usou na chamada de kickoff. As configurações de webhook **NÃO** são automaticamente transferidas do kickoff - elas devem ser explicitamente incluídas na solicitação de resume para continuar recebendo notificações de conclusão de tarefa, etapas do agente e conclusão do crew.
</Warning>

Exemplo de chamada resume com webhooks:
```bash
curl -X POST {BASE_URL}/resume \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "execution_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
    "task_id": "research_task",
    "human_feedback": "Ótimo trabalho! Por favor, adicione mais detalhes.",
    "is_approve": true,
    "taskWebhookUrl": "https://your-server.com/webhooks/task",
    "stepWebhookUrl": "https://your-server.com/webhooks/step",
    "crewWebhookUrl": "https://your-server.com/webhooks/crew"
  }'
```

<Warning>
**Impacto do Feedback na Execução da Tarefa**:
É fundamental ter cuidado ao fornecer feedback, pois todo o conteúdo do feedback será incorporado como contexto adicional para execuções futuras da tarefa.
@@ -76,4 +98,4 @@ Workflows HITL são particularmente valiosos para:
- Cenários de tomada de decisão complexa
- Operações sensíveis ou de alto risco
- Tarefas criativas que requerem julgamento humano
- Revisões de conformidade e regulamentação
- Revisões de conformidade e regulamentação
@@ -49,7 +49,7 @@ Repository = "https://github.com/crewAIInc/crewAI"

[project.optional-dependencies]
tools = [
    "crewai-tools>=0.74.0",
    "crewai-tools>=0.76.0",
]
embeddings = [
    "tiktoken~=0.8.0"
@@ -40,7 +40,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:

_suppress_pydantic_deprecation_warnings()

__version__ = "0.201.1"
__version__ = "0.203.1"
_telemetry_submitted = False
@@ -53,6 +53,7 @@ from crewai.utilities.converter import generate_model_description
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.utilities.training_handler import CrewTrainingHandler
from crewai.utilities.types import LLMMessage


class Agent(BaseAgent):
@@ -174,7 +175,7 @@ class Agent(BaseAgent):
    )

    @model_validator(mode="before")
    def validate_from_repository(cls, v):  # noqa: N805
    def validate_from_repository(cls, v):  # noqa: N805
        if v is not None and (from_repository := v.get("from_repository")):
            return load_agent_from_repository(from_repository) | v
        return v
@@ -347,15 +348,16 @@ class Agent(BaseAgent):
        )

        if self.knowledge or (self.crew and self.crew.knowledge):
            crewai_event_bus.emit(
                self,
                event=KnowledgeRetrievalStartedEvent(
                    agent=self,
                ),
            )
            try:
                self.knowledge_search_query = self._get_knowledge_search_query(
                    task_prompt
                    task_prompt, task
                )
                crewai_event_bus.emit(
                    self,
                    event=KnowledgeRetrievalStartedEvent(
                        from_task=task,
                        from_agent=self,
                    ),
                )
                if self.knowledge_search_query:
                    # Querying agent specific knowledge
@@ -385,7 +387,8 @@ class Agent(BaseAgent):
                    self,
                    event=KnowledgeRetrievalCompletedEvent(
                        query=self.knowledge_search_query,
                        agent=self,
                        from_task=task,
                        from_agent=self,
                        retrieved_knowledge=(
                            (self.agent_knowledge_context or "")
                            + (
@@ -403,8 +406,9 @@ class Agent(BaseAgent):
                    self,
                    event=KnowledgeSearchQueryFailedEvent(
                        query=self.knowledge_search_query or "",
                        agent=self,
                        error=str(e),
                        from_task=task,
                        from_agent=self,
                    ),
                )
@@ -702,7 +706,7 @@ class Agent(BaseAgent):

        try:
            subprocess.run(
                ["/usr/bin/docker", "info"],
                ["docker", "info"],  # noqa: S607
                check=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
@@ -728,13 +732,14 @@ class Agent(BaseAgent):
    def set_fingerprint(self, fingerprint: Fingerprint):
        self.security_config.fingerprint = fingerprint

    def _get_knowledge_search_query(self, task_prompt: str) -> str | None:
    def _get_knowledge_search_query(self, task_prompt: str, task: Task) -> str | None:
        """Generate a search query for the knowledge base based on the task description."""
        crewai_event_bus.emit(
            self,
            event=KnowledgeQueryStartedEvent(
                task_prompt=task_prompt,
                agent=self,
                from_task=task,
                from_agent=self,
            ),
        )
        query = self.i18n.slice("knowledge_search_query").format(
@@ -749,8 +754,9 @@ class Agent(BaseAgent):
            crewai_event_bus.emit(
                self,
                event=KnowledgeQueryFailedEvent(
                    agent=self,
                    error="LLM is not compatible with knowledge search queries",
                    from_task=task,
                    from_agent=self,
                ),
            )
            return None
@@ -769,7 +775,8 @@ class Agent(BaseAgent):
            self,
            event=KnowledgeQueryCompletedEvent(
                query=query,
                agent=self,
                from_task=task,
                from_agent=self,
            ),
        )
        return rewritten_query
@@ -777,15 +784,16 @@ class Agent(BaseAgent):
        crewai_event_bus.emit(
            self,
            event=KnowledgeQueryFailedEvent(
                agent=self,
                error=str(e),
                from_task=task,
                from_agent=self,
            ),
        )
        return None

    def kickoff(
        self,
        messages: str | list[dict[str, str]],
        messages: str | list[LLMMessage],
        response_format: type[Any] | None = None,
    ) -> LiteAgentOutput:
        """
@@ -825,7 +833,7 @@ class Agent(BaseAgent):

    async def kickoff_async(
        self,
        messages: str | list[dict[str, str]],
        messages: str | list[LLMMessage],
        response_format: type[Any] | None = None,
    ) -> LiteAgentOutput:
        """
@@ -855,6 +863,7 @@ class Agent(BaseAgent):
            response_format=response_format,
            i18n=self.i18n,
            original_agent=self,
            guardrail=self.guardrail,
        )

        return await lite_agent.kickoff_async(messages)
@@ -30,6 +30,7 @@ def validate_jwt_token(
        algorithms=["RS256"],
        audience=audience,
        issuer=issuer,
        leeway=10.0,
        options={
            "verify_signature": True,
            "verify_exp": True,
@@ -1,10 +1,11 @@
import os
import subprocess
from enum import Enum

import click
from packaging import version

from crewai.cli.utils import read_toml
from crewai.cli.utils import build_env_with_tool_repository_credentials, read_toml
from crewai.cli.version import get_crewai_version


@@ -55,8 +56,22 @@ def execute_command(crew_type: CrewType) -> None:
    """
    command = ["uv", "run", "kickoff" if crew_type == CrewType.FLOW else "run_crew"]

    env = os.environ.copy()
    try:
        subprocess.run(command, capture_output=False, text=True, check=True)  # noqa: S603
        pyproject_data = read_toml()
        sources = pyproject_data.get("tool", {}).get("uv", {}).get("sources", {})

        for source_config in sources.values():
            if isinstance(source_config, dict):
                index = source_config.get("index")
                if index:
                    index_env = build_env_with_tool_repository_credentials(index)
                    env.update(index_env)
    except Exception:  # noqa: S110
        pass

    try:
        subprocess.run(command, capture_output=False, text=True, check=True, env=env)  # noqa: S603

    except subprocess.CalledProcessError as e:
        handle_error(e, crew_type)
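The credential-injection loop added above can be sketched in isolation. `_fake_credentials_for_index` below is a stand-in for `build_env_with_tool_repository_credentials`, whose real output is not shown in this diff; the `UV_INDEX_<NAME>_USERNAME`/`_PASSWORD` variable names are an assumption based on uv's index-credential convention:

```python
import os

def _fake_credentials_for_index(index_name: str) -> dict[str, str]:
    # Stand-in: uv can read named-index credentials from environment
    # variables of the form UV_INDEX_<NAME>_USERNAME / _PASSWORD.
    key = index_name.upper().replace("-", "_")
    return {
        f"UV_INDEX_{key}_USERNAME": "token",
        f"UV_INDEX_{key}_PASSWORD": "secret",
    }

def build_subprocess_env(pyproject_data: dict) -> dict[str, str]:
    """Copy the current environment and add credentials for every
    [tool.uv.sources] entry that points at a named index."""
    env = os.environ.copy()
    sources = pyproject_data.get("tool", {}).get("uv", {}).get("sources", {})
    for source_config in sources.values():
        if isinstance(source_config, dict):
            index = source_config.get("index")
            if index:
                env.update(_fake_credentials_for_index(index))
    return env

# Hypothetical pyproject.toml data with one private tool source.
pyproject = {"tool": {"uv": {"sources": {"my-tool": {"index": "acme-tools"}}}}}
env = build_subprocess_env(pyproject)
```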
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
    "crewai[tools]>=0.201.1,<1.0.0"
    "crewai[tools]>=0.203.1,<1.0.0"
]

[project.scripts]

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
    "crewai[tools]>=0.201.1,<1.0.0",
    "crewai[tools]>=0.203.1,<1.0.0",
]

[project.scripts]

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
    "crewai[tools]>=0.201.1"
    "crewai[tools]>=0.203.1"
]

[tool.crewai]
@@ -32,6 +32,13 @@ from crewai.events.types.flow_events import (
    MethodExecutionFinishedEvent,
    MethodExecutionStartedEvent,
)
from crewai.events.types.knowledge_events import (
    KnowledgeQueryCompletedEvent,
    KnowledgeQueryFailedEvent,
    KnowledgeQueryStartedEvent,
    KnowledgeRetrievalCompletedEvent,
    KnowledgeRetrievalStartedEvent,
)
from crewai.events.types.llm_events import (
    LLMCallCompletedEvent,
    LLMCallFailedEvent,
@@ -310,6 +317,26 @@ class TraceCollectionListener(BaseEventListener):
        def on_agent_reasoning_failed(source, event):
            self._handle_action_event("agent_reasoning_failed", source, event)

        @event_bus.on(KnowledgeRetrievalStartedEvent)
        def on_knowledge_retrieval_started(source, event):
            self._handle_action_event("knowledge_retrieval_started", source, event)

        @event_bus.on(KnowledgeRetrievalCompletedEvent)
        def on_knowledge_retrieval_completed(source, event):
            self._handle_action_event("knowledge_retrieval_completed", source, event)

        @event_bus.on(KnowledgeQueryStartedEvent)
        def on_knowledge_query_started(source, event):
            self._handle_action_event("knowledge_query_started", source, event)

        @event_bus.on(KnowledgeQueryCompletedEvent)
        def on_knowledge_query_completed(source, event):
            self._handle_action_event("knowledge_query_completed", source, event)

        @event_bus.on(KnowledgeQueryFailedEvent)
        def on_knowledge_query_failed(source, event):
            self._handle_action_event("knowledge_query_failed", source, event)

    def _initialize_crew_batch(self, source: Any, event: Any):
        """Initialize trace batch"""
        user_context = self._get_user_context()
@@ -358,7 +358,8 @@ def prompt_user_for_trace_viewing(timeout_seconds: int = 20) -> bool:
        try:
            response = input().strip().lower()
            result[0] = response in ["y", "yes"]
        except (EOFError, KeyboardInterrupt):
        except (EOFError, KeyboardInterrupt, OSError, LookupError):
            # Handle all input-related errors silently
            result[0] = False

    input_thread = threading.Thread(target=get_input, daemon=True)
@@ -371,6 +372,7 @@ def prompt_user_for_trace_viewing(timeout_seconds: int = 20) -> bool:
        return result[0]

    except Exception:
        # Suppress any warnings or errors and assume "no"
        return False
@@ -1,51 +1,60 @@
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import Any

from crewai.events.base_events import BaseEvent


class KnowledgeRetrievalStartedEvent(BaseEvent):
class KnowledgeEventBase(BaseEvent):
    task_id: str | None = None
    task_name: str | None = None
    from_task: Any | None = None
    from_agent: Any | None = None
    agent_role: str | None = None
    agent_id: str | None = None

    def __init__(self, **data):
        super().__init__(**data)
        self._set_agent_params(data)
        self._set_task_params(data)


class KnowledgeRetrievalStartedEvent(KnowledgeEventBase):
    """Event emitted when a knowledge retrieval is started."""

    type: str = "knowledge_search_query_started"
    agent: BaseAgent


class KnowledgeRetrievalCompletedEvent(BaseEvent):
class KnowledgeRetrievalCompletedEvent(KnowledgeEventBase):
    """Event emitted when a knowledge retrieval is completed."""

    query: str
    type: str = "knowledge_search_query_completed"
    agent: BaseAgent
    retrieved_knowledge: str


class KnowledgeQueryStartedEvent(BaseEvent):
class KnowledgeQueryStartedEvent(KnowledgeEventBase):
    """Event emitted when a knowledge query is started."""

    task_prompt: str
    type: str = "knowledge_query_started"
    agent: BaseAgent


class KnowledgeQueryFailedEvent(BaseEvent):
class KnowledgeQueryFailedEvent(KnowledgeEventBase):
    """Event emitted when a knowledge query fails."""

    type: str = "knowledge_query_failed"
    agent: BaseAgent
    error: str


class KnowledgeQueryCompletedEvent(BaseEvent):
class KnowledgeQueryCompletedEvent(KnowledgeEventBase):
    """Event emitted when a knowledge query is completed."""

    query: str
    type: str = "knowledge_query_completed"
    agent: BaseAgent


class KnowledgeSearchQueryFailedEvent(BaseEvent):
class KnowledgeSearchQueryFailedEvent(KnowledgeEventBase):
    """Event emitted when a knowledge search query fails."""

    query: str
    type: str = "knowledge_search_query_failed"
    agent: BaseAgent
    error: str
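The refactor above hoists the shared task/agent fields into a `KnowledgeEventBase`. A dependency-free sketch of the same pattern, using plain classes instead of the real pydantic `BaseEvent` (the internals of `_set_agent_params`/`_set_task_params` are not shown in the diff, so the attribute derivation below is an assumption):

```python
from typing import Any

class EventBase:
    """Minimal stand-in for crewai's BaseEvent; only what the sketch needs."""
    def __init__(self, **data: Any) -> None:
        for key, value in data.items():
            setattr(self, key, value)

class KnowledgeEventSketch(EventBase):
    """Shared fields for knowledge events, mirroring KnowledgeEventBase:
    derive task_id/task_name and agent_role/agent_id from the passed objects."""
    def __init__(self, **data: Any) -> None:
        super().__init__(**data)
        task = data.get("from_task")
        agent = data.get("from_agent")
        self.task_id = getattr(task, "id", None)
        self.task_name = getattr(task, "name", None)
        self.agent_role = getattr(agent, "role", None)
        self.agent_id = getattr(agent, "id", None)

# Hypothetical task/agent objects with the attributes the base class reads.
class FakeTask:
    id = "t-1"
    name = "research_task"

class FakeAgent:
    id = "a-1"
    role = "Researcher"

event = KnowledgeEventSketch(from_task=FakeTask(), from_agent=FakeAgent())
```

Every concrete event class then inherits these fields instead of redeclaring them.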
@@ -5,7 +5,21 @@ from typing import Any
from crewai.events.base_events import BaseEvent


class LLMGuardrailStartedEvent(BaseEvent):
class LLMGuardrailBaseEvent(BaseEvent):
    task_id: str | None = None
    task_name: str | None = None
    from_task: Any | None = None
    from_agent: Any | None = None
    agent_role: str | None = None
    agent_id: str | None = None

    def __init__(self, **data):
        super().__init__(**data)
        self._set_agent_params(data)
        self._set_task_params(data)


class LLMGuardrailStartedEvent(LLMGuardrailBaseEvent):
    """Event emitted when a guardrail task starts

    Attributes:
@@ -29,7 +43,7 @@ class LLMGuardrailStartedEvent(BaseEvent):
        self.guardrail = getsource(self.guardrail).strip()


class LLMGuardrailCompletedEvent(BaseEvent):
class LLMGuardrailCompletedEvent(LLMGuardrailBaseEvent):
    """Event emitted when a guardrail task completes

    Attributes:
@@ -44,3 +58,16 @@ class LLMGuardrailCompletedEvent(BaseEvent):
    result: Any
    error: str | None = None
    retry_count: int


class LLMGuardrailFailedEvent(LLMGuardrailBaseEvent):
    """Event emitted when a guardrail task fails

    Attributes:
        error: The error message
        retry_count: The number of times the guardrail has been retried
    """

    type: str = "llm_guardrail_failed"
    error: str
    retry_count: int
@@ -1377,7 +1377,7 @@ class ConsoleFormatter:
        if isinstance(formatted_answer, AgentAction):
            thought = re.sub(r"\n+", "\n", formatted_answer.thought)
            formatted_json = json.dumps(
                formatted_answer.tool_input,
                json.loads(formatted_answer.tool_input),
                indent=2,
                ensure_ascii=False,
            )
@@ -31,7 +31,7 @@ from crewai.flow.flow_visualizer import plot_flow
from crewai.flow.persistence.base import FlowPersistence
from crewai.flow.types import FlowExecutionData
from crewai.flow.utils import get_possible_return_constants
from crewai.utilities.printer import Printer
from crewai.utilities.printer import Printer, PrinterColor

logger = logging.getLogger(__name__)
@@ -105,7 +105,7 @@ def start(condition: str | dict | Callable | None = None) -> Callable:
    condition : Optional[Union[str, dict, Callable]], optional
        Defines when the start method should execute. Can be:
        - str: Name of a method that triggers this start
        - dict: Contains "type" ("AND"/"OR") and "methods" (list of triggers)
        - dict: Result from or_() or and_(), including nested conditions
        - Callable: A method reference that triggers this start
        Default is None, meaning unconditional start.

@@ -140,13 +140,18 @@ def start(condition: str | dict | Callable | None = None) -> Callable:
        if isinstance(condition, str):
            func.__trigger_methods__ = [condition]
            func.__condition_type__ = "OR"
        elif (
            isinstance(condition, dict)
            and "type" in condition
            and "methods" in condition
        ):
            func.__trigger_methods__ = condition["methods"]
            func.__condition_type__ = condition["type"]
        elif isinstance(condition, dict) and "type" in condition:
            if "conditions" in condition:
                func.__trigger_condition__ = condition
                func.__trigger_methods__ = _extract_all_methods(condition)
                func.__condition_type__ = condition["type"]
            elif "methods" in condition:
                func.__trigger_methods__ = condition["methods"]
                func.__condition_type__ = condition["type"]
            else:
                raise ValueError(
                    "Condition dict must contain 'conditions' or 'methods'"
                )
        elif callable(condition) and hasattr(condition, "__name__"):
            func.__trigger_methods__ = [condition.__name__]
            func.__condition_type__ = "OR"
@@ -172,7 +177,7 @@ def listen(condition: str | dict | Callable) -> Callable:
    condition : Union[str, dict, Callable]
        Specifies when the listener should execute. Can be:
        - str: Name of a method that triggers this listener
        - dict: Contains "type" ("AND"/"OR") and "methods" (list of triggers)
        - dict: Result from or_() or and_(), including nested conditions
        - Callable: A method reference that triggers this listener

    Returns
@@ -200,13 +205,18 @@ def listen(condition: str | dict | Callable) -> Callable:
        if isinstance(condition, str):
            func.__trigger_methods__ = [condition]
            func.__condition_type__ = "OR"
        elif (
            isinstance(condition, dict)
            and "type" in condition
            and "methods" in condition
        ):
            func.__trigger_methods__ = condition["methods"]
            func.__condition_type__ = condition["type"]
        elif isinstance(condition, dict) and "type" in condition:
            if "conditions" in condition:
                func.__trigger_condition__ = condition
                func.__trigger_methods__ = _extract_all_methods(condition)
                func.__condition_type__ = condition["type"]
            elif "methods" in condition:
                func.__trigger_methods__ = condition["methods"]
                func.__condition_type__ = condition["type"]
            else:
                raise ValueError(
                    "Condition dict must contain 'conditions' or 'methods'"
                )
        elif callable(condition) and hasattr(condition, "__name__"):
            func.__trigger_methods__ = [condition.__name__]
            func.__condition_type__ = "OR"
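The new `conditions` branch stores the nested tree on `func.__trigger_condition__`. How such a tree might later be evaluated against the set of already-fired methods can be sketched as follows; this evaluator is illustrative only, since crewai's actual evaluation logic lives outside this diff:

```python
def condition_met(condition, completed: set[str]) -> bool:
    """Evaluate a nested trigger-condition tree against the set of
    methods that have already fired. Strings are method names; dicts
    carry "type" ("AND"/"OR") plus "conditions" or legacy "methods"."""
    if isinstance(condition, str):
        return condition in completed
    children = condition.get("conditions") or condition.get("methods") or []
    results = [condition_met(child, completed) for child in children]
    return all(results) if condition["type"] == "AND" else any(results)

# Equivalent to or_(and_("step1", "step2"), "step3") in the new format.
nested = {"type": "OR", "conditions": [
    {"type": "AND", "conditions": ["step1", "step2"]},
    "step3",
]}
```

With the old flattened `{"type": "OR", "methods": [...]}` format this nesting could not be expressed, which is what the change enables.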
@@ -233,7 +243,7 @@ def router(condition: str | dict | Callable) -> Callable:
|
||||
condition : Union[str, dict, Callable]
|
||||
Specifies when the router should execute. Can be:
|
||||
- str: Name of a method that triggers this router
|
||||
- dict: Contains "type" ("AND"/"OR") and "methods" (list of triggers)
|
||||
- dict: Result from or_() or and_(), including nested conditions
|
||||
- Callable: A method reference that triggers this router
|
||||
|
||||
Returns
|
||||
@@ -266,13 +276,18 @@ def router(condition: str | dict | Callable) -> Callable:
|
||||
if isinstance(condition, str):
|
||||
func.__trigger_methods__ = [condition]
|
||||
func.__condition_type__ = "OR"
|
||||
elif (
|
||||
isinstance(condition, dict)
|
||||
and "type" in condition
|
||||
and "methods" in condition
|
||||
):
|
||||
func.__trigger_methods__ = condition["methods"]
|
||||
func.__condition_type__ = condition["type"]
|
||||
elif isinstance(condition, dict) and "type" in condition:
|
||||
if "conditions" in condition:
|
||||
func.__trigger_condition__ = condition
|
||||
func.__trigger_methods__ = _extract_all_methods(condition)
|
||||
func.__condition_type__ = condition["type"]
|
||||
elif "methods" in condition:
|
||||
func.__trigger_methods__ = condition["methods"]
|
||||
func.__condition_type__ = condition["type"]
|
||||
else:
|
||||
raise ValueError(
|
||||
"Condition dict must contain 'conditions' or 'methods'"
|
||||
)
|
||||
elif callable(condition) and hasattr(condition, "__name__"):
|
||||
func.__trigger_methods__ = [condition.__name__]
|
||||
func.__condition_type__ = "OR"
|
||||
@@ -298,14 +313,15 @@ def or_(*conditions: str | dict | Callable) -> dict:
|
||||
*conditions : Union[str, dict, Callable]
|
||||
Variable number of conditions that can be:
|
||||
- str: Method names
|
||||
- dict: Existing condition dictionaries
|
||||
- dict: Existing condition dictionaries (nested conditions)
|
||||
- Callable: Method references
|
||||
|
||||
Returns
|
||||
-------
|
||||
dict
|
||||
A condition dictionary with format:
|
||||
{"type": "OR", "methods": list_of_method_names}
|
||||
{"type": "OR", "conditions": list_of_conditions}
|
||||
where each condition can be a string (method name) or a nested dict
|
||||
|
||||
Raises
|
||||
------
|
||||
@@ -317,18 +333,22 @@ def or_(*conditions: str | dict | Callable) -> dict:
    >>> @listen(or_("success", "timeout"))
    >>> def handle_completion(self):
    ...     pass
+
+    >>> @listen(or_(and_("step1", "step2"), "step3"))
+    >>> def handle_nested(self):
+    ...     pass
    """
-    methods = []
+    processed_conditions: list[str | dict[str, Any]] = []
    for condition in conditions:
-        if isinstance(condition, dict) and "methods" in condition:
-            methods.extend(condition["methods"])
+        if isinstance(condition, dict):
+            processed_conditions.append(condition)
        elif isinstance(condition, str):
-            methods.append(condition)
+            processed_conditions.append(condition)
        elif callable(condition):
-            methods.append(getattr(condition, "__name__", repr(condition)))
+            processed_conditions.append(getattr(condition, "__name__", repr(condition)))
        else:
            raise ValueError("Invalid condition in or_()")
-    return {"type": "OR", "methods": methods}
+    return {"type": "OR", "conditions": processed_conditions}
 
 
 def and_(*conditions: str | dict | Callable) -> dict:
@@ -344,14 +364,15 @@ def and_(*conditions: str | dict | Callable) -> dict:
     *conditions : Union[str, dict, Callable]
         Variable number of conditions that can be:
         - str: Method names
-        - dict: Existing condition dictionaries
+        - dict: Existing condition dictionaries (nested conditions)
         - Callable: Method references
 
     Returns
     -------
     dict
         A condition dictionary with format:
-        {"type": "AND", "methods": list_of_method_names}
+        {"type": "AND", "conditions": list_of_conditions}
+        where each condition can be a string (method name) or a nested dict
 
     Raises
     ------
@@ -363,18 +384,69 @@ def and_(*conditions: str | dict | Callable) -> dict:
    >>> @listen(and_("validated", "processed"))
    >>> def handle_complete_data(self):
    ...     pass
+
+    >>> @listen(and_(or_("step1", "step2"), "step3"))
+    >>> def handle_nested(self):
+    ...     pass
    """
-    methods = []
+    processed_conditions: list[str | dict[str, Any]] = []
    for condition in conditions:
-        if isinstance(condition, dict) and "methods" in condition:
-            methods.extend(condition["methods"])
+        if isinstance(condition, dict):
+            processed_conditions.append(condition)
        elif isinstance(condition, str):
-            methods.append(condition)
+            processed_conditions.append(condition)
        elif callable(condition):
-            methods.append(getattr(condition, "__name__", repr(condition)))
+            processed_conditions.append(getattr(condition, "__name__", repr(condition)))
        else:
            raise ValueError("Invalid condition in and_()")
-    return {"type": "AND", "methods": methods}
+    return {"type": "AND", "conditions": processed_conditions}
 
 
+def _normalize_condition(condition: str | dict | list) -> dict:
+    """Normalize a condition to standard format with 'conditions' key.
+
+    Args:
+        condition: Can be a string (method name), dict (condition), or list
+
+    Returns:
+        Normalized dict with 'type' and 'conditions' keys
+    """
+    if isinstance(condition, str):
+        return {"type": "OR", "conditions": [condition]}
+    if isinstance(condition, dict):
+        if "conditions" in condition:
+            return condition
+        if "methods" in condition:
+            return {"type": condition["type"], "conditions": condition["methods"]}
+        return condition
+    if isinstance(condition, list):
+        return {"type": "OR", "conditions": condition}
+    return {"type": "OR", "conditions": [condition]}
+
+
+def _extract_all_methods(condition: str | dict | list) -> list[str]:
+    """Extract all method names from a condition (including nested).
+
+    Args:
+        condition: Can be a string, dict, or list
+
+    Returns:
+        List of all method names in the condition tree
+    """
+    if isinstance(condition, str):
+        return [condition]
+    if isinstance(condition, dict):
+        normalized = _normalize_condition(condition)
+        methods = []
+        for sub_cond in normalized.get("conditions", []):
+            methods.extend(_extract_all_methods(sub_cond))
+        return methods
+    if isinstance(condition, list):
+        methods = []
+        for item in condition:
+            methods.extend(_extract_all_methods(item))
+        return methods
+    return []
+
+
 class FlowMeta(type):
@@ -402,7 +474,10 @@ class FlowMeta(type):
            if hasattr(attr_value, "__trigger_methods__"):
                methods = attr_value.__trigger_methods__
                condition_type = getattr(attr_value, "__condition_type__", "OR")
-                listeners[attr_name] = (condition_type, methods)
+                if hasattr(attr_value, "__trigger_condition__"):
+                    listeners[attr_name] = attr_value.__trigger_condition__
+                else:
+                    listeners[attr_name] = (condition_type, methods)
 
                if (
                    hasattr(attr_value, "__is_router__")
@@ -822,6 +897,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
            # Clear completed methods and outputs for a fresh start
            self._completed_methods.clear()
            self._method_outputs.clear()
+            self._pending_and_listeners.clear()
        else:
            # We're restoring from persistence, set the flag
            self._is_execution_resuming = True
@@ -1086,10 +1162,16 @@ class Flow(Generic[T], metaclass=FlowMeta):
        for method_name in self._start_methods:
            # Check if this start method is triggered by the current trigger
            if method_name in self._listeners:
-                condition_type, trigger_methods = self._listeners[
-                    method_name
-                ]
-                if current_trigger in trigger_methods:
+                condition_data = self._listeners[method_name]
+                should_trigger = False
+                if isinstance(condition_data, tuple):
+                    _, trigger_methods = condition_data
+                    should_trigger = current_trigger in trigger_methods
+                elif isinstance(condition_data, dict):
+                    all_methods = _extract_all_methods(condition_data)
+                    should_trigger = current_trigger in all_methods
+
+                if should_trigger:
                    # Only execute if this is a cycle (method was already completed)
                    if method_name in self._completed_methods:
                        # For router-triggered start methods in cycles, temporarily clear resumption flag
@@ -1099,6 +1181,51 @@ class Flow(Generic[T], metaclass=FlowMeta):
                        await self._execute_start_method(method_name)
                        self._is_execution_resuming = was_resuming
 
+    def _evaluate_condition(
+        self, condition: str | dict, trigger_method: str, listener_name: str
+    ) -> bool:
+        """Recursively evaluate a condition (simple or nested).
+
+        Args:
+            condition: Can be a string (method name) or dict (nested condition)
+            trigger_method: The method that just completed
+            listener_name: Name of the listener being evaluated
+
+        Returns:
+            True if the condition is satisfied, False otherwise
+        """
+        if isinstance(condition, str):
+            return condition == trigger_method
+
+        if isinstance(condition, dict):
+            normalized = _normalize_condition(condition)
+            cond_type = normalized.get("type", "OR")
+            sub_conditions = normalized.get("conditions", [])
+
+            if cond_type == "OR":
+                return any(
+                    self._evaluate_condition(sub_cond, trigger_method, listener_name)
+                    for sub_cond in sub_conditions
+                )
+
+            if cond_type == "AND":
+                pending_key = f"{listener_name}:{id(condition)}"
+
+                if pending_key not in self._pending_and_listeners:
+                    all_methods = set(_extract_all_methods(condition))
+                    self._pending_and_listeners[pending_key] = all_methods
+
+                if trigger_method in self._pending_and_listeners[pending_key]:
+                    self._pending_and_listeners[pending_key].discard(trigger_method)
+
+                if not self._pending_and_listeners[pending_key]:
+                    self._pending_and_listeners.pop(pending_key, None)
+                    return True
+
+                return False
+
+            return False
+
+        return False
+
    def _find_triggered_methods(
        self, trigger_method: str, router_only: bool
    ) -> list[str]:
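The evaluation semantics can be modeled with a small self-contained class. `ConditionTracker` is an illustrative name, not part of crewai; like the real `_evaluate_condition` above, it keys pending AND state by listener name plus the condition dict's `id()`, so the same AND subtree accumulates triggers across calls while OR fires immediately:

```python
def _extract_all_methods(condition):
    """Collect method names from a nested condition (mirrors the diff's helper)."""
    if isinstance(condition, str):
        return [condition]
    if isinstance(condition, dict):
        methods = []
        for sub in condition.get("conditions", []):
            methods.extend(_extract_all_methods(sub))
        return methods
    return []


class ConditionTracker:
    """Stateful condition evaluation: OR fires on any matching trigger;
    AND fires once every method in its subtree has fired at least once."""

    def __init__(self):
        self._pending_and: dict[str, set[str]] = {}

    def evaluate(self, condition, trigger_method: str, listener_name: str) -> bool:
        if isinstance(condition, str):
            return condition == trigger_method
        if isinstance(condition, dict):
            cond_type = condition.get("type", "OR")
            subs = condition.get("conditions", [])
            if cond_type == "OR":
                return any(
                    self.evaluate(s, trigger_method, listener_name) for s in subs
                )
            if cond_type == "AND":
                key = f"{listener_name}:{id(condition)}"
                if key not in self._pending_and:
                    self._pending_and[key] = set(_extract_all_methods(condition))
                self._pending_and[key].discard(trigger_method)
                if not self._pending_and[key]:
                    self._pending_and.pop(key, None)
                    return True
                return False
        return False


tracker = ConditionTracker()
cond = {"type": "AND", "conditions": ["step1", "step2"]}
print(tracker.evaluate(cond, "step1", "handler"))  # False: step2 still pending
print(tracker.evaluate(cond, "step2", "handler"))  # True: AND now satisfied
```

This sketch assumes conditions are already in the normalized `{"type", "conditions"}` shape; the real method first runs them through `_normalize_condition`.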
@@ -1106,7 +1233,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
        Finds all methods that should be triggered based on conditions.
 
        This internal method evaluates both OR and AND conditions to determine
-        which methods should be executed next in the flow.
+        which methods should be executed next in the flow. Supports nested conditions.
 
        Parameters
        ----------
@@ -1123,14 +1250,13 @@ class Flow(Generic[T], metaclass=FlowMeta):
 
        Notes
        -----
-        - Handles both OR and AND conditions:
-          * OR: Triggers if any condition is met
-          * AND: Triggers only when all conditions are met
+        - Handles both OR and AND conditions, including nested combinations
        - Maintains state for AND conditions using _pending_and_listeners
        - Separates router and normal listener evaluation
        """
        triggered = []
-        for listener_name, (condition_type, methods) in self._listeners.items():
+
+        for listener_name, condition_data in self._listeners.items():
            is_router = listener_name in self._routers
 
            if router_only != is_router:
@@ -1139,23 +1265,29 @@ class Flow(Generic[T], metaclass=FlowMeta):
            if not router_only and listener_name in self._start_methods:
                continue
 
-            if condition_type == "OR":
-                # If the trigger_method matches any in methods, run this
-                if trigger_method in methods:
-                    triggered.append(listener_name)
-            elif condition_type == "AND":
-                # Initialize pending methods for this listener if not already done
-                if listener_name not in self._pending_and_listeners:
-                    self._pending_and_listeners[listener_name] = set(methods)
-                # Remove the trigger method from pending methods
-                if trigger_method in self._pending_and_listeners[listener_name]:
-                    self._pending_and_listeners[listener_name].discard(trigger_method)
-
-                if not self._pending_and_listeners[listener_name]:
-                    # All required methods have been executed
-                    triggered.append(listener_name)
-                    # Reset pending methods for this listener
-                    self._pending_and_listeners.pop(listener_name, None)
-
+            if isinstance(condition_data, tuple):
+                condition_type, methods = condition_data
+
+                if condition_type == "OR":
+                    if trigger_method in methods:
+                        triggered.append(listener_name)
+                elif condition_type == "AND":
+                    if listener_name not in self._pending_and_listeners:
+                        self._pending_and_listeners[listener_name] = set(methods)
+                    if trigger_method in self._pending_and_listeners[listener_name]:
+                        self._pending_and_listeners[listener_name].discard(
+                            trigger_method
+                        )
+
+                    if not self._pending_and_listeners[listener_name]:
+                        triggered.append(listener_name)
+                        self._pending_and_listeners.pop(listener_name, None)
+
+            elif isinstance(condition_data, dict):
+                if self._evaluate_condition(
+                    condition_data, trigger_method, listener_name
+                ):
+                    triggered.append(listener_name)
+                    self._pending_and_listeners.pop(listener_name, None)
 
        return triggered
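The retained tuple path can be exercised standalone (logic copied from the branch above; the function name `find_triggered` and the explicit `pending_and` argument are illustrative, standing in for the `Flow` instance state):

```python
def find_triggered(listeners, pending_and, trigger_method):
    """Sketch of the tuple branch of _find_triggered_methods: OR tuples fire
    on any matching trigger; AND tuples fire once their pending set empties."""
    triggered = []
    for listener_name, (condition_type, methods) in listeners.items():
        if condition_type == "OR":
            if trigger_method in methods:
                triggered.append(listener_name)
        elif condition_type == "AND":
            # Lazily seed the pending set, then tick off the trigger.
            if listener_name not in pending_and:
                pending_and[listener_name] = set(methods)
            pending_and[listener_name].discard(trigger_method)
            if not pending_and[listener_name]:
                triggered.append(listener_name)
                pending_and.pop(listener_name, None)
    return triggered


listeners = {"join": ("AND", ["a", "b"]), "either": ("OR", ["a", "c"])}
pending: dict[str, set[str]] = {}
print(find_triggered(listeners, pending, "a"))  # ['either']
print(find_triggered(listeners, pending, "b"))  # ['join']
```

The dict branch of the real method defers to `_evaluate_condition` for nested trees, so legacy `(type, methods)` tuples and new condition dicts coexist in the same `_listeners` mapping.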
@@ -1218,7 +1350,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
            raise
 
    def _log_flow_event(
-        self, message: str, color: str = "yellow", level: str = "info"
+        self, message: str, color: PrinterColor | None = "yellow", level: str = "info"
    ) -> None:
        """Centralized logging method for flow events.
 
@@ -4,6 +4,7 @@ import uuid
 from collections.abc import Callable
 from typing import (
     Any,
+    Literal,
     cast,
     get_args,
     get_origin,
@@ -62,6 +63,7 @@ from crewai.utilities.llm_utils import create_llm
 from crewai.utilities.printer import Printer
 from crewai.utilities.token_counter_callback import TokenCalcHandler
 from crewai.utilities.tool_utils import execute_tool_and_check_finality
+from crewai.utilities.types import LLMMessage
 
 
 class LiteAgentOutput(BaseModel):
@@ -180,7 +182,7 @@ class LiteAgent(FlowTrackable, BaseModel):
    _token_process: TokenProcess = PrivateAttr(default_factory=TokenProcess)
    _cache_handler: CacheHandler = PrivateAttr(default_factory=CacheHandler)
    _key: str = PrivateAttr(default_factory=lambda: str(uuid.uuid4()))
-    _messages: list[dict[str, str]] = PrivateAttr(default_factory=list)
+    _messages: list[LLMMessage] = PrivateAttr(default_factory=list)
    _iterations: int = PrivateAttr(default=0)
    _printer: Printer = PrivateAttr(default_factory=Printer)
    _guardrail: Callable | None = PrivateAttr(default=None)
@@ -219,7 +221,6 @@ class LiteAgent(FlowTrackable, BaseModel):
            raise TypeError(
                f"Guardrail requires LLM instance of type BaseLLM, got {type(self.llm).__name__}"
            )
-
            self._guardrail = LLMGuardrail(description=self.guardrail, llm=self.llm)
 
        return self
@@ -276,7 +277,7 @@ class LiteAgent(FlowTrackable, BaseModel):
        """Return the original role for compatibility with tool interfaces."""
        return self.role
 
-    def kickoff(self, messages: str | list[dict[str, str]]) -> LiteAgentOutput:
+    def kickoff(self, messages: str | list[LLMMessage]) -> LiteAgentOutput:
        """
        Execute the agent with the given messages.
 
@@ -368,6 +369,7 @@ class LiteAgent(FlowTrackable, BaseModel):
                guardrail=self._guardrail,
                retry_count=self._guardrail_retry_count,
                event_source=self,
+                from_agent=self,
            )
 
            if not guardrail_result.success:
@@ -414,9 +416,7 @@ class LiteAgent(FlowTrackable, BaseModel):
 
        return output
 
-    async def kickoff_async(
-        self, messages: str | list[dict[str, str]]
-    ) -> LiteAgentOutput:
+    async def kickoff_async(self, messages: str | list[LLMMessage]) -> LiteAgentOutput:
        """
        Execute the agent asynchronously with the given messages.
 
@@ -461,9 +461,7 @@ class LiteAgent(FlowTrackable, BaseModel):
 
        return base_prompt
 
-    def _format_messages(
-        self, messages: str | list[dict[str, str]]
-    ) -> list[dict[str, str]]:
+    def _format_messages(self, messages: str | list[LLMMessage]) -> list[LLMMessage]:
        """Format messages for the LLM."""
        if isinstance(messages, str):
            messages = [{"role": "user", "content": messages}]
@@ -471,7 +469,9 @@ class LiteAgent(FlowTrackable, BaseModel):
        system_prompt = self._get_default_system_prompt()
 
        # Add system message at the beginning
-        formatted_messages = [{"role": "system", "content": system_prompt}]
+        formatted_messages: list[LLMMessage] = [
+            {"role": "system", "content": system_prompt}
+        ]
 
        # Add the rest of the messages
        formatted_messages.extend(messages)
@@ -583,6 +583,8 @@ class LiteAgent(FlowTrackable, BaseModel):
            ),
        )
 
-    def _append_message(self, text: str, role: str = "assistant") -> None:
+    def _append_message(
+        self, text: str, role: Literal["user", "assistant", "system"] = "assistant"
+    ) -> None:
        """Append a message to the message list with the given role."""
-        self._messages.append(format_message_for_llm(text, role=role))
+        self._messages.append(cast(LLMMessage, format_message_for_llm(text, role=role)))
@@ -7,7 +7,7 @@ import uuid
 import warnings
 from collections.abc import Callable
 from concurrent.futures import Future
-from copy import copy
+from copy import copy as shallow_copy
 from hashlib import md5
 from pathlib import Path
 from typing import (
@@ -462,6 +462,8 @@ class Task(BaseModel):
            guardrail=self._guardrail,
            retry_count=self.retry_count,
            event_source=self,
+            from_task=self,
+            from_agent=agent,
        )
        if not guardrail_result.success:
            if self.retry_count >= self.guardrail_max_retries:
@@ -670,7 +672,9 @@ Follow these guidelines:
        copied_data = {k: v for k, v in copied_data.items() if v is not None}
 
        cloned_context = (
-            [task_mapping[context_task.key] for context_task in self.context]
+            self.context
+            if self.context is NOT_SPECIFIED
+            else [task_mapping[context_task.key] for context_task in self.context]
            if isinstance(self.context, list)
            else None
        )
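The NOT_SPECIFIED guard can be sketched with a simple sentinel. The `_NotSpecified` class and `clone_context` function here are illustrative (crewai defines its own `NOT_SPECIFIED` constant, and the real code maps `Task` objects via `task_mapping[context_task.key]` rather than raw keys):

```python
class _NotSpecified:
    """Hypothetical sentinel: lets copy() distinguish 'context never set'
    from an explicit value such as None or a list."""

    def __repr__(self) -> str:
        return "NOT_SPECIFIED"


NOT_SPECIFIED = _NotSpecified()


def clone_context(context, task_mapping):
    """Mirror of the cloned_context expression in Task.copy(), simplified to
    map plain keys instead of Task objects."""
    return (
        # Never-set context is passed through untouched...
        context
        if context is NOT_SPECIFIED
        # ...an explicit list is remapped to the cloned tasks...
        else [task_mapping[key] for key in context]
        if isinstance(context, list)
        # ...and anything else collapses to None.
        else None
    )


print(clone_context(NOT_SPECIFIED, {}))           # NOT_SPECIFIED
print(clone_context(["t1"], {"t1": "t1-clone"}))  # ['t1-clone']
```

Without the sentinel check, a task whose context was never set would be cloned with `context=None`, silently turning "use the default context" into "use no context".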
@@ -679,7 +683,7 @@ Follow these guidelines:
            return next((agent for agent in agents if agent.role == role), None)
 
        cloned_agent = get_agent_by_role(self.agent.role) if self.agent else None
-        cloned_tools = copy(self.tools) if self.tools else []
+        cloned_tools = shallow_copy(self.tools) if self.tools else []
 
        return self.__class__(
            **copied_data,
@@ -14,6 +14,7 @@ from pydantic import (
 from pydantic import BaseModel as PydanticBaseModel
 
 from crewai.tools.structured_tool import CrewStructuredTool
+from crewai.utilities.asyncio_utils import run_coroutine_sync
 
 
 class EnvVar(BaseModel):
@@ -90,7 +91,7 @@ class BaseTool(BaseModel, ABC):
 
        # If _run is async, we safely run it
        if asyncio.iscoroutine(result):
-            result = asyncio.run(result)
+            result = run_coroutine_sync(result)
 
        self.current_usage_count += 1
 