Compare commits
42 Commits
| SHA1 |
|---|
| a83c57a2f2 |
| 08e15ab267 |
| 9728388ea7 |
| 4371cf5690 |
| d28daa26cd |
| a850813f2b |
| 5944a39629 |
| c594859ed0 |
| 2ee27efca7 |
| f6e13eb890 |
| e7b3ce27ca |
| dba27cf8b5 |
| 6469f224f6 |
| f3a63be215 |
| 01d8c189f0 |
| cc83c1ead5 |
| 7578901f6d |
| d1343b96ed |
| 42f2b4d551 |
| 0229390ad1 |
| f0fb349ddf |
| bf2e2a42da |
| 814c962196 |
| 2ebb2e845f |
| 7b550ebfe8 |
| 29919c2d81 |
| b71c88814f |
| cb8bcfe214 |
| 13a514f8be |
| 316b1cea69 |
| 6f2e39c0dd |
| 8d93361cb3 |
| 54ec245d84 |
| f589ab9b80 |
| fadb59e0f0 |
| 1a60848425 |
| 0135163040 |
| dac5d6d664 |
| f0f94f2540 |
| bf9e0423f2 |
| f47e0c82c4 |
| eabced321c |
50 image files changed (previous versions ranged from 17 KiB to 55 KiB).
`.github/codeql/codeql-config.yml` (new file, 28 changes, vendored)

@@ -0,0 +1,28 @@
name: "CodeQL Config"

paths-ignore:
# Ignore template files - these are boilerplate code that shouldn't be analyzed
- "lib/crewai/src/crewai/cli/templates/**"
# Ignore test cassettes - these are test fixtures/recordings
- "lib/crewai/tests/cassettes/**"
- "lib/crewai-tools/tests/cassettes/**"
# Ignore cache and build artifacts
- ".cache/**"
# Ignore documentation build artifacts
- "docs/.cache/**"
# Ignore experimental code
- "lib/crewai/src/crewai/experimental/a2a/**"

paths:
# Include all Python source code from workspace packages
- "lib/crewai/src/**"
- "lib/crewai-tools/src/**"
- "lib/devtools/src/**"
# Include tests (but exclude cassettes via paths-ignore)
- "lib/crewai/tests/**"
- "lib/crewai-tools/tests/**"
- "lib/devtools/tests/**"

# Configure specific queries or packs if needed
# queries:
# - uses: security-and-quality
`.github/security.md` (63 changes, vendored)

@@ -1,27 +1,50 @@
## CrewAI Security Vulnerability Reporting Policy
## CrewAI Security Policy

CrewAI prioritizes the security of our software products, services, and GitHub repositories. To promptly address vulnerabilities, follow these steps for reporting security issues:
We are committed to protecting the confidentiality, integrity, and availability of the CrewAI ecosystem. This policy explains how to report potential vulnerabilities and what you can expect from us when you do.

### Reporting Process
Do **not** report vulnerabilities via public GitHub issues.
### Scope

Email all vulnerability reports directly to:
**security@crewai.com**
We welcome reports for vulnerabilities that could impact:

### Required Information
To help us quickly validate and remediate the issue, your report must include:
- CrewAI-maintained source code and repositories
- CrewAI-operated infrastructure and services
- Official CrewAI releases, packages, and distributions

- **Vulnerability Type:** Clearly state the vulnerability type (e.g., SQL injection, XSS, privilege escalation).
- **Affected Source Code:** Provide full file paths and direct URLs (branch, tag, or commit).
- **Reproduction Steps:** Include detailed, step-by-step instructions. Screenshots are recommended.
- **Special Configuration:** Document any special settings or configurations required to reproduce.
- **Proof-of-Concept (PoC):** Provide exploit or PoC code (if available).
- **Impact Assessment:** Clearly explain the severity and potential exploitation scenarios.
Issues affecting clearly unaffiliated third-party services or user-generated content are out of scope, unless you can demonstrate a direct impact on CrewAI systems or customers.

### Our Response
- We will acknowledge receipt of your report promptly via your provided email.
- Confirmed vulnerabilities will receive priority remediation based on severity.
- Patches will be released as swiftly as possible following verification.
### How to Report

### Reward Notice
Currently, we do not offer a bug bounty program. Rewards, if issued, are discretionary.
- **Please do not** disclose vulnerabilities via public GitHub issues, pull requests, or social media.
- Email detailed reports to **security@crewai.com** with the subject line `Security Report`.
- If you need to share large files or sensitive artifacts, mention it in your email and we will coordinate a secure transfer method.

### What to Include

Providing comprehensive information enables us to validate the issue quickly:

- **Vulnerability overview** — a concise description and classification (e.g., RCE, privilege escalation)
- **Affected components** — repository, branch, tag, or deployed service along with relevant file paths or endpoints
- **Reproduction steps** — detailed, step-by-step instructions; include logs, screenshots, or screen recordings when helpful
- **Proof-of-concept** — exploit details or code that demonstrates the impact (if available)
- **Impact analysis** — severity assessment, potential exploitation scenarios, and any prerequisites or special configurations

### Our Commitment

- **Acknowledgement:** We aim to acknowledge your report within two business days.
- **Communication:** We will keep you informed about triage results, remediation progress, and planned release timelines.
- **Resolution:** Confirmed vulnerabilities will be prioritized based on severity and fixed as quickly as possible.
- **Recognition:** We currently do not run a bug bounty program; any rewards or recognition are issued at CrewAI's discretion.

### Coordinated Disclosure

We ask that you allow us a reasonable window to investigate and remediate confirmed issues before any public disclosure. We will coordinate publication timelines with you whenever possible.

### Safe Harbor

We will not pursue or support legal action against individuals who, in good faith:

- Follow this policy and refrain from violating any applicable laws
- Avoid privacy violations, data destruction, or service disruption
- Limit testing to systems in scope and respect rate limits and terms of service

If you are unsure whether your testing is covered, please contact us at **security@crewai.com** before proceeding.
`.github/workflows/build-uv-cache.yml` (2 changes, vendored)

@@ -7,6 +7,8 @@ on:
paths:
- "uv.lock"
- "pyproject.toml"
schedule:
- cron: "0 0 */5 * *" # Run every 5 days at midnight UTC to prevent cache expiration
workflow_dispatch:

permissions:
`.github/workflows/codeql.yml` (1 change, vendored)

@@ -73,6 +73,7 @@ jobs:
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
config-file: ./.github/codeql/codeql-config.yml
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
`.github/workflows/linter.yml` (1 change, vendored)

@@ -55,6 +55,7 @@ jobs:
echo "${{ steps.changed-files.outputs.files }}" \
| tr ' ' '\n' \
| grep -v 'src/crewai/cli/templates/' \
| grep -v '/tests/' \
| xargs -I{} uv run ruff check "{}"

- name: Save uv caches
`.github/workflows/publish.yml` (16 changes, vendored)

@@ -7,7 +7,6 @@ on:

jobs:
build:
if: github.event.release.prerelease == true
name: Build packages
runs-on: ubuntu-latest
permissions:

@@ -35,7 +34,6 @@ jobs:
path: dist/

publish:
if: github.event.release.prerelease == true
name: Publish to PyPI
needs: build
runs-on: ubuntu-latest

@@ -65,7 +63,19 @@ jobs:
env:
UV_PUBLISH_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
run: |
failed=0
for package in dist/*; do
if [[ "$package" == *"crewai_devtools"* ]]; then
echo "Skipping private package: $package"
continue
fi
echo "Publishing $package"
uv publish "$package"
if ! uv publish "$package"; then
echo "Failed to publish $package"
failed=1
fi
done
if [ $failed -eq 1 ]; then
echo "Some packages failed to publish"
exit 1
fi
@@ -3,19 +3,24 @@ repos:
hooks:
- id: ruff
name: ruff
entry: uv run ruff check
entry: bash -c 'source .venv/bin/activate && uv run ruff check --config pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
exclude: ^lib/crewai/
- id: ruff-format
name: ruff-format
entry: uv run ruff format
entry: bash -c 'source .venv/bin/activate && uv run ruff format --config pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
exclude: ^lib/crewai/
- id: mypy
name: mypy
entry: uv run mypy
entry: bash -c 'source .venv/bin/activate && uv run mypy --config-file pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
exclude: ^lib/crewai/
- repo: https://github.com/astral-sh/uv-pre-commit
rev: 0.9.3
hooks:
- id: uv-lock
`README.md` (28 changes)

@@ -62,9 +62,9 @@
With over 100,000 developers certified through our community courses at [learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
standard for enterprise-ready AI automation.

# CrewAI Enterprise Suite
# CrewAI AMP Suite

CrewAI Enterprise Suite is a comprehensive bundle tailored for organizations that require secure, scalable, and easy-to-manage agent-driven automation.
CrewAI AMP Suite is a comprehensive bundle tailored for organizations that require secure, scalable, and easy-to-manage agent-driven automation.

You can try one part of the suite the [Crew Control Plane for free](https://app.crewai.com)

@@ -76,9 +76,9 @@ You can try one part of the suite the [Crew Control Plane for free](https://app.
- **Advanced Security**: Built-in robust security and compliance measures ensuring safe deployment and management.
- **Actionable Insights**: Real-time analytics and reporting to optimize performance and decision-making.
- **24/7 Support**: Dedicated enterprise support to ensure uninterrupted operation and quick resolution of issues.
- **On-premise and Cloud Deployment Options**: Deploy CrewAI Enterprise on-premise or in the cloud, depending on your security and compliance requirements.
- **On-premise and Cloud Deployment Options**: Deploy CrewAI AMP on-premise or in the cloud, depending on your security and compliance requirements.

CrewAI Enterprise is designed for enterprises seeking a powerful, reliable solution to transform complex business processes into efficient,
CrewAI AMP is designed for enterprises seeking a powerful, reliable solution to transform complex business processes into efficient,
intelligent automations.

## Table of contents

@@ -674,9 +674,9 @@ CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/b

### Enterprise Features

- [What additional features does CrewAI Enterprise offer?](#q-what-additional-features-does-crewai-enterprise-offer)
- [Is CrewAI Enterprise available for cloud and on-premise deployments?](#q-is-crewai-enterprise-available-for-cloud-and-on-premise-deployments)
- [Can I try CrewAI Enterprise for free?](#q-can-i-try-crewai-enterprise-for-free)
- [What additional features does CrewAI AMP offer?](#q-what-additional-features-does-crewai-amp-offer)
- [Is CrewAI AMP available for cloud and on-premise deployments?](#q-is-crewai-amp-available-for-cloud-and-on-premise-deployments)
- [Can I try CrewAI AMP for free?](#q-can-i-try-crewai-amp-for-free)

### Q: What exactly is CrewAI?

@@ -732,17 +732,17 @@ A: Check out practical examples in the [CrewAI-examples repository](https://gith

A: Contributions are warmly welcomed! Fork the repository, create your branch, implement your changes, and submit a pull request. See the Contribution section of the README for detailed guidelines.

### Q: What additional features does CrewAI Enterprise offer?
### Q: What additional features does CrewAI AMP offer?

A: CrewAI Enterprise provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.
A: CrewAI AMP provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.

### Q: Is CrewAI Enterprise available for cloud and on-premise deployments?
### Q: Is CrewAI AMP available for cloud and on-premise deployments?

A: Yes, CrewAI Enterprise supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.
A: Yes, CrewAI AMP supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.

### Q: Can I try CrewAI Enterprise for free?
### Q: Can I try CrewAI AMP for free?

A: Yes, you can explore part of the CrewAI Enterprise Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.
A: Yes, you can explore part of the CrewAI AMP Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.

### Q: Does CrewAI support fine-tuning or training custom models?

@@ -762,7 +762,7 @@ A: CrewAI is highly scalable, supporting simple automations and large-scale ente

### Q: Does CrewAI offer debugging and monitoring tools?

A: Yes, CrewAI Enterprise includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.
A: Yes, CrewAI AMP includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.

### Q: What programming languages does CrewAI support?
`crewAI.excalidraw` (1737 changes)

@@ -9,7 +9,22 @@
},
"favicon": "/images/favicon.svg",
"contextual": {
"options": ["copy", "view", "chatgpt", "claude"]
"options": [
"copy",
"view",
"chatgpt",
"claude",
"perplexity",
"mcp",
"cursor",
"vscode",
{
"title": "Request a feature",
"description": "Join the discussion on GitHub to request a new feature",
"icon": "plus",
"href": "https://github.com/crewAIInc/crewAI/issues/new/choose"
}
]
},
"navigation": {
"languages": [
@@ -119,6 +134,7 @@
"group": "MCP Integration",
"pages": [
"en/mcp/overview",
"en/mcp/dsl-integration",
"en/mcp/stdio",
"en/mcp/sse",
"en/mcp/streamable-http",
@@ -256,8 +272,10 @@
{
"group": "Observability",
"pages": [
"en/observability/tracing",
"en/observability/overview",
"en/observability/arize-phoenix",
"en/observability/braintrust",
"en/observability/langdb",
"en/observability/langfuse",
"en/observability/langtrace",
@@ -287,6 +305,7 @@
"en/learn/force-tool-output-as-result",
"en/learn/hierarchical-process",
"en/learn/human-input-on-execution",
"en/learn/human-in-the-loop",
"en/learn/kickoff-async",
"en/learn/kickoff-for-each",
"en/learn/llm-connections",
@@ -303,7 +322,7 @@
]
},
{
"tab": "Enterprise",
"tab": "AMP",
"icon": "briefcase",
"groups": [
{
@@ -343,10 +362,20 @@
"en/enterprise/integrations/github",
"en/enterprise/integrations/gmail",
"en/enterprise/integrations/google_calendar",
"en/enterprise/integrations/google_contacts",
"en/enterprise/integrations/google_docs",
"en/enterprise/integrations/google_drive",
"en/enterprise/integrations/google_sheets",
"en/enterprise/integrations/google_slides",
"en/enterprise/integrations/hubspot",
"en/enterprise/integrations/jira",
"en/enterprise/integrations/linear",
"en/enterprise/integrations/microsoft_excel",
"en/enterprise/integrations/microsoft_onedrive",
"en/enterprise/integrations/microsoft_outlook",
"en/enterprise/integrations/microsoft_sharepoint",
"en/enterprise/integrations/microsoft_teams",
"en/enterprise/integrations/microsoft_word",
"en/enterprise/integrations/notion",
"en/enterprise/integrations/salesforce",
"en/enterprise/integrations/shopify",
@@ -379,6 +408,7 @@
"en/enterprise/guides/kickoff-crew",
"en/enterprise/guides/update-crew",
"en/enterprise/guides/enable-crew-studio",
"en/enterprise/guides/capture_telemetry_logs",
"en/enterprise/guides/azure-openai-setup",
"en/enterprise/guides/tool-repository",
"en/enterprise/guides/react-component-export",
@@ -403,6 +433,7 @@
"en/api-reference/introduction",
"en/api-reference/inputs",
"en/api-reference/kickoff",
"en/api-reference/resume",
"en/api-reference/status"
]
}
@@ -540,6 +571,7 @@
"group": "Integração MCP",
"pages": [
"pt-BR/mcp/overview",
"pt-BR/mcp/dsl-integration",
"pt-BR/mcp/stdio",
"pt-BR/mcp/sse",
"pt-BR/mcp/streamable-http",
@@ -667,6 +699,7 @@
"pages": [
"pt-BR/observability/overview",
"pt-BR/observability/arize-phoenix",
"pt-BR/observability/braintrust",
"pt-BR/observability/langdb",
"pt-BR/observability/langfuse",
"pt-BR/observability/langtrace",
@@ -695,6 +728,7 @@
"pt-BR/learn/force-tool-output-as-result",
"pt-BR/learn/hierarchical-process",
"pt-BR/learn/human-input-on-execution",
"pt-BR/learn/human-in-the-loop",
"pt-BR/learn/kickoff-async",
"pt-BR/learn/kickoff-for-each",
"pt-BR/learn/llm-connections",
@@ -711,7 +745,7 @@
]
},
{
"tab": "Enterprise",
"tab": "AMP",
"icon": "briefcase",
"groups": [
{
@@ -751,10 +785,20 @@
"pt-BR/enterprise/integrations/github",
"pt-BR/enterprise/integrations/gmail",
"pt-BR/enterprise/integrations/google_calendar",
"pt-BR/enterprise/integrations/google_contacts",
"pt-BR/enterprise/integrations/google_docs",
"pt-BR/enterprise/integrations/google_drive",
"pt-BR/enterprise/integrations/google_sheets",
"pt-BR/enterprise/integrations/google_slides",
"pt-BR/enterprise/integrations/hubspot",
"pt-BR/enterprise/integrations/jira",
"pt-BR/enterprise/integrations/linear",
"pt-BR/enterprise/integrations/microsoft_excel",
"pt-BR/enterprise/integrations/microsoft_onedrive",
"pt-BR/enterprise/integrations/microsoft_outlook",
"pt-BR/enterprise/integrations/microsoft_sharepoint",
"pt-BR/enterprise/integrations/microsoft_teams",
"pt-BR/enterprise/integrations/microsoft_word",
"pt-BR/enterprise/integrations/notion",
"pt-BR/enterprise/integrations/salesforce",
"pt-BR/enterprise/integrations/shopify",
@@ -783,6 +827,12 @@
"group": "Triggers",
"pages": [
"pt-BR/enterprise/guides/automation-triggers",
"pt-BR/enterprise/guides/gmail-trigger",
"pt-BR/enterprise/guides/google-calendar-trigger",
"pt-BR/enterprise/guides/google-drive-trigger",
"pt-BR/enterprise/guides/outlook-trigger",
"pt-BR/enterprise/guides/onedrive-trigger",
"pt-BR/enterprise/guides/microsoft-teams-trigger",
"pt-BR/enterprise/guides/slack-trigger",
"pt-BR/enterprise/guides/hubspot-trigger",
"pt-BR/enterprise/guides/salesforce-trigger",
@@ -807,6 +857,7 @@
"pt-BR/api-reference/introduction",
"pt-BR/api-reference/inputs",
"pt-BR/api-reference/kickoff",
"pt-BR/api-reference/resume",
"pt-BR/api-reference/status"
]
}
@@ -940,6 +991,7 @@
"group": "MCP 통합",
"pages": [
"ko/mcp/overview",
"ko/mcp/dsl-integration",
"ko/mcp/stdio",
"ko/mcp/sse",
"ko/mcp/streamable-http",
@@ -1049,7 +1101,6 @@
"ko/tools/cloud-storage/overview",
"ko/tools/cloud-storage/s3readertool",
"ko/tools/cloud-storage/s3writertool",
"ko/tools/cloud-storage/bedrockinvokeagenttool",
"ko/tools/cloud-storage/bedrockkbretriever"
]
},
@@ -1080,6 +1131,7 @@
"pages": [
"ko/observability/overview",
"ko/observability/arize-phoenix",
"ko/observability/braintrust",
"ko/observability/langdb",
"ko/observability/langfuse",
"ko/observability/langtrace",
@@ -1108,6 +1160,7 @@
"ko/learn/force-tool-output-as-result",
"ko/learn/hierarchical-process",
"ko/learn/human-input-on-execution",
"ko/learn/human-in-the-loop",
"ko/learn/kickoff-async",
"ko/learn/kickoff-for-each",
"ko/learn/llm-connections",
@@ -1164,10 +1217,20 @@
"ko/enterprise/integrations/github",
"ko/enterprise/integrations/gmail",
"ko/enterprise/integrations/google_calendar",
"ko/enterprise/integrations/google_contacts",
"ko/enterprise/integrations/google_docs",
"ko/enterprise/integrations/google_drive",
"ko/enterprise/integrations/google_sheets",
"ko/enterprise/integrations/google_slides",
"ko/enterprise/integrations/hubspot",
"ko/enterprise/integrations/jira",
"ko/enterprise/integrations/linear",
"ko/enterprise/integrations/microsoft_excel",
"ko/enterprise/integrations/microsoft_onedrive",
"ko/enterprise/integrations/microsoft_outlook",
"ko/enterprise/integrations/microsoft_sharepoint",
"ko/enterprise/integrations/microsoft_teams",
"ko/enterprise/integrations/microsoft_word",
"ko/enterprise/integrations/notion",
"ko/enterprise/integrations/salesforce",
"ko/enterprise/integrations/shopify",
@@ -1196,6 +1259,12 @@
"group": "트리거",
"pages": [
"ko/enterprise/guides/automation-triggers",
"ko/enterprise/guides/gmail-trigger",
"ko/enterprise/guides/google-calendar-trigger",
"ko/enterprise/guides/google-drive-trigger",
"ko/enterprise/guides/outlook-trigger",
"ko/enterprise/guides/onedrive-trigger",
"ko/enterprise/guides/microsoft-teams-trigger",
"ko/enterprise/guides/slack-trigger",
"ko/enterprise/guides/hubspot-trigger",
"ko/enterprise/guides/salesforce-trigger",
@@ -1218,6 +1287,7 @@
"ko/api-reference/introduction",
"ko/api-reference/inputs",
"ko/api-reference/kickoff",
"ko/api-reference/resume",
"ko/api-reference/status"
]
}
@@ -1,29 +1,29 @@
---
title: "Introduction"
description: "Complete reference for the CrewAI Enterprise REST API"
description: "Complete reference for the CrewAI AMP REST API"
icon: "code"
mode: "wide"
---

# CrewAI Enterprise API
# CrewAI AMP API

Welcome to the CrewAI Enterprise API reference. This API allows you to programmatically interact with your deployed crews, enabling integration with your applications, workflows, and services.
Welcome to the CrewAI AMP API reference. This API allows you to programmatically interact with your deployed crews, enabling integration with your applications, workflows, and services.

## Quick Start

<Steps>
<Step title="Get Your API Credentials">
Navigate to your crew's detail page in the CrewAI Enterprise dashboard and copy your Bearer Token from the Status tab.
Navigate to your crew's detail page in the CrewAI AMP dashboard and copy your Bearer Token from the Status tab.
</Step>

<Step title="Discover Required Inputs">
Use the `GET /inputs` endpoint to see what parameters your crew expects.
</Step>

<Step title="Start a Crew Execution">
Call `POST /kickoff` with your inputs to start the crew execution and receive a `kickoff_id`.
</Step>

<Step title="Monitor Progress">
Use `GET /status/{kickoff_id}` to check execution status and retrieve results.
</Step>

@@ -46,7 +46,7 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |

<Tip>
You can find both token types in the Status tab of your crew's detail page in the CrewAI Enterprise dashboard.
You can find both token types in the Status tab of your crew's detail page in the CrewAI AMP dashboard.
</Tip>

## Base URL

@@ -62,7 +62,7 @@ Replace `your-crew-name` with your actual crew's URL from the dashboard.

## Typical Workflow

1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /status/{kickoff_id}` until completion
4. **Results**: Extract the final output from the completed response
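
A minimal Python sketch of this four-step workflow using `requests`; the crew URL, bearer token, input payload, and the response field names (`kickoff_id`, `state`) are placeholders or assumptions, so check your crew's endpoint pages for the exact schema:

```python
import time
import requests

BASE_URL = "https://your-crew-name.crewai.com"       # your crew URL from the dashboard
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}

# 1. Discovery: see which inputs the crew expects
print(requests.get(f"{BASE_URL}/inputs", headers=HEADERS).json())

# 2. Execution: start the crew and capture the kickoff_id
kickoff = requests.post(
    f"{BASE_URL}/kickoff",
    headers=HEADERS,
    json={"inputs": {"topic": "AI agents"}},         # hypothetical input payload
).json()
kickoff_id = kickoff["kickoff_id"]                   # assumed response field

# 3. Monitoring: poll the status endpoint until the run finishes
while True:
    status = requests.get(f"{BASE_URL}/status/{kickoff_id}", headers=HEADERS).json()
    if status.get("state") in ("SUCCESS", "FAILED"):  # assumed status values
        break
    time.sleep(5)

# 4. Results: the final output is part of the completed status response
print(status)
```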

@@ -82,12 +82,12 @@ The API uses standard HTTP status codes:
## Interactive Testing

<Info>
**Why no "Send" button?** Since each CrewAI Enterprise user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.
**Why no "Send" button?** Since each CrewAI AMP user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.
</Info>

Each endpoint page shows you:
- ✅ **Exact request format** with all parameters
- ✅ **Response examples** for success and error cases
- ✅ **Code samples** in multiple languages (cURL, Python, JavaScript, etc.)
- ✅ **Authentication examples** with proper Bearer token format

@@ -104,7 +104,7 @@ Each endpoint page shows you:

**Example workflow:**
1. **Copy this cURL example** from any endpoint page
2. **Replace `your-actual-crew-name.crewai.com`** with your real crew URL
3. **Replace the Bearer token** with your real token from the dashboard
4. **Run the request** in your terminal or API client
`docs/en/api-reference/resume.mdx` (new file, 6 changes)

@@ -0,0 +1,6 @@
---
title: "POST /resume"
description: "Resume crew execution with human feedback"
openapi: "/enterprise-api.en.yaml POST /resume"
mode: "wide"
---
@@ -20,7 +20,7 @@ Think of an agent as a specialized team member with specific skills, expertise,
</Tip>

<Note type="info" title="Enterprise Enhancement: Visual Agent Builder">
CrewAI Enterprise includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time.
CrewAI AMP includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time.

![Visual Agent Builder Screenshot](/images/enterprise/crew-studio-interface.png)
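
For reference, the same kind of agent can also be defined directly in code with the core `Agent` fields (role, goal, backstory); the values below are purely illustrative:

```python
from crewai import Agent

# A minimal agent definition; all field values are illustrative
researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments in AI agents",
    backstory="A meticulous analyst who distills complex findings into clear insights",
    verbose=True,
)
```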
@@ -5,7 +5,7 @@ icon: terminal
mode: "wide"
---

<Warning>Since release 0.140.0, CrewAI Enterprise started a process of migrating their login provider. As such, the authentication flow via CLI was updated. Users that use Google to login, or that created their account after July 3rd, 2025 will be unable to log in with older versions of the `crewai` library.</Warning>
<Warning>Since release 0.140.0, CrewAI AMP started a process of migrating their login provider. As such, the authentication flow via CLI was updated. Users that use Google to login, or that created their account after July 3rd, 2025 will be unable to log in with older versions of the `crewai` library.</Warning>

## Overview

@@ -186,9 +186,9 @@ def crew(self) -> Crew:

### 10. Deploy

Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).
Deploy the crew or flow to [CrewAI AMP](https://app.crewai.com).

- **Authentication**: You need to be authenticated to deploy to CrewAI Enterprise.
- **Authentication**: You need to be authenticated to deploy to CrewAI AMP.
You can login or create an account with:
```shell Terminal
crewai login
@@ -203,7 +203,7 @@ Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).

### 11. Organization Management

Manage your CrewAI Enterprise organizations.
Manage your CrewAI AMP organizations.

```shell Terminal
crewai org [COMMAND] [OPTIONS]
@@ -227,17 +227,17 @@ crewai org switch <organization_id>
```

<Note>
You must be authenticated to CrewAI Enterprise to use these organization management commands.
You must be authenticated to CrewAI AMP to use these organization management commands.
</Note>

- **Create a deployment** (continued):
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).

- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI Enterprise.
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI AMP.
```shell Terminal
crewai deploy push
```
- Initiates the deployment process on the CrewAI Enterprise platform.
- Initiates the deployment process on the CrewAI AMP platform.
- Upon successful initiation, it will output the Deployment created successfully! message along with the Deployment Name and a unique Deployment ID (UUID).

- **Deployment Status**: You can check the status of your deployment with:
@@ -262,7 +262,7 @@ You must be authenticated to CrewAI Enterprise to use these organization managem
```shell Terminal
crewai deploy remove
```
This deletes the deployment from the CrewAI Enterprise platform.
This deletes the deployment from the CrewAI AMP platform.

- **Help Command**: You can get help with the CLI with:
```shell Terminal
@@ -270,7 +270,7 @@ You must be authenticated to CrewAI Enterprise to use these organization managem
```
This shows the help message for the CrewAI Deploy CLI.

Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI Enterprise](http://app.crewai.com) using the CLI.
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI AMP](http://app.crewai.com) using the CLI.

<iframe
className="w-full aspect-video rounded-xl"
@@ -283,7 +283,7 @@ Watch this video tutorial for a step-by-step demonstration of deploying your cre

### 11. Login

Authenticate with CrewAI Enterprise using a secure device code flow (no email entry required).
Authenticate with CrewAI AMP using a secure device code flow (no email entry required).

```shell Terminal
crewai login
@@ -354,7 +354,7 @@ crewai config reset

#### Available Configuration Parameters

- `enterprise_base_url`: Base URL of the CrewAI Enterprise instance
- `enterprise_base_url`: Base URL of the CrewAI AMP instance
- `oauth2_provider`: OAuth2 provider used for authentication (e.g., workos, okta, auth0)
- `oauth2_audience`: OAuth2 audience value, typically used to identify the target API or resource
- `oauth2_client_id`: OAuth2 client ID issued by the provider, used during authentication requests
@@ -370,7 +370,7 @@ crewai config list
Example output:
| Setting | Value | Description |
| :------------------ | :----------------------- | :---------------------------------------------------------- |
| enterprise_base_url | https://app.crewai.com | Base URL of the CrewAI Enterprise instance |
| enterprise_base_url | https://app.crewai.com | Base URL of the CrewAI AMP instance |
| org_name | Not set | Name of the currently active organization |
| org_uuid | Not set | UUID of the currently active organization |
| oauth2_provider | workos | OAuth2 provider (e.g., workos, okta, auth0) |
@@ -20,7 +20,7 @@ CrewAI uses an event bus architecture to emit events throughout the execution li
When specific actions occur in CrewAI (like a Crew starting execution, an Agent completing a task, or a tool being used), the system emits corresponding events. You can register handlers for these events to execute custom code when they occur.

<Note type="info" title="Enterprise Enhancement: Prompt Tracing">
CrewAI Enterprise provides a built-in Prompt Tracing feature that leverages the event system to track, store, and visualize all prompts, completions, and associated metadata. This provides powerful debugging capabilities and transparency into your agent operations.
CrewAI AMP provides a built-in Prompt Tracing feature that leverages the event system to track, store, and visualize all prompts, completions, and associated metadata. This provides powerful debugging capabilities and transparency into your agent operations.

![Prompt Tracing Dashboard](/images/enterprise/traces-overview.png)
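
A minimal sketch of registering a handler with the event bus, assuming the `BaseEventListener` / `crewai_event_bus` API from the CrewAI event-listener docs (exact module paths and event classes may differ between versions):

```python
from crewai.utilities.events import CrewKickoffStartedEvent
from crewai.utilities.events.base_event_listener import BaseEventListener


class LoggingListener(BaseEventListener):
    """Registers a handler that runs whenever a Crew starts executing."""

    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(CrewKickoffStartedEvent)
        def on_crew_started(source, event):
            # Custom code runs here each time the event is emitted
            print(f"Received {type(event).__name__} from {type(source).__name__}")


# Instantiating the listener registers its handlers with the event bus
logging_listener = LoggingListener()
```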

@@ -7,7 +7,7 @@ mode: "wide"

## Overview

CrewAI integrates with multiple LLM providers through LiteLLM, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
CrewAI integrates with multiple LLM providers through providers native sdks, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.

## What are LLMs?

@@ -113,44 +113,104 @@ In this section, you'll find detailed examples that help you select, configure,

<AccordionGroup>
<Accordion title="OpenAI">
Set the following environment variables in your `.env` file:
CrewAI provides native integration with OpenAI through the OpenAI Python SDK.

```toml Code
# Required
OPENAI_API_KEY=sk-...

# Optional
OPENAI_API_BASE=<custom-base-url>
OPENAI_ORGANIZATION=<your-org-id>
OPENAI_BASE_URL=<custom-base-url>
```

Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM

llm = LLM(
model="openai/gpt-4", # call model by provider/model_name
temperature=0.8,
max_tokens=150,
model="openai/gpt-4o",
api_key="your-api-key", # Or set OPENAI_API_KEY
temperature=0.7,
max_tokens=4000
)
```

**Advanced Configuration:**
```python Code
from crewai import LLM

llm = LLM(
model="openai/gpt-4o",
api_key="your-api-key",
base_url="https://api.openai.com/v1", # Optional custom endpoint
organization="org-...", # Optional organization ID
project="proj_...", # Optional project ID
temperature=0.7,
max_tokens=4000,
max_completion_tokens=4000, # For newer models
top_p=0.9,
frequency_penalty=0.1,
presence_penalty=0.1,
stop=["END"],
seed=42
seed=42, # For reproducible outputs
stream=True, # Enable streaming
timeout=60.0, # Request timeout in seconds
max_retries=3, # Maximum retry attempts
logprobs=True, # Return log probabilities
top_logprobs=5, # Number of most likely tokens
reasoning_effort="medium" # For o1 models: low, medium, high
)
```
OpenAI is one of the leading providers of LLMs with a wide range of models and features.
**Structured Outputs:**
```python Code
from pydantic import BaseModel
from crewai import LLM

class ResponseFormat(BaseModel):
    name: str
    age: int
    summary: str

llm = LLM(
model="openai/gpt-4o",
)
```

**Supported Environment Variables:**
- `OPENAI_API_KEY`: Your OpenAI API key (required)
- `OPENAI_BASE_URL`: Custom base URL for OpenAI API (optional)

**Features:**
- Native function calling support (except o1 models)
- Structured outputs with JSON schema
- Streaming support for real-time responses
- Token usage tracking
- Stop sequences support (except o1 models)
- Log probabilities for token-level insights
- Reasoning effort control for o1 models

**Supported Models:**

| Model | Context Window | Best For |
|---------------------|------------------|-----------------------------------------------|
| GPT-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| GPT-4 Turbo | 128,000 tokens | Long-form content, document analysis |
| GPT-4o & GPT-4o-mini | 128,000 tokens | Cost-effective large context processing |
| o3-mini | 200,000 tokens | Fast reasoning, complex reasoning |
| o1-mini | 128,000 tokens | Fast reasoning, complex reasoning |
| o1-preview | 128,000 tokens | Fast reasoning, complex reasoning |
| o1 | 200,000 tokens | Fast reasoning, complex reasoning |
| gpt-4.1 | 1M tokens | Latest model with enhanced capabilities |
| gpt-4.1-mini | 1M tokens | Efficient version with large context |
| gpt-4.1-nano | 1M tokens | Ultra-efficient variant |
| gpt-4o | 128,000 tokens | Optimized for speed and intelligence |
| gpt-4o-mini | 200,000 tokens | Cost-effective with large context |
| gpt-4-turbo | 128,000 tokens | Long-form content, document analysis |
| gpt-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| o1 | 200,000 tokens | Advanced reasoning, complex problem-solving |
| o1-preview | 128,000 tokens | Preview of reasoning capabilities |
| o1-mini | 128,000 tokens | Efficient reasoning model |
| o3-mini | 200,000 tokens | Lightweight reasoning model |
| o4-mini | 200,000 tokens | Next-gen efficient reasoning |

**Note:** To use OpenAI, install the required dependencies:
```bash
uv add "crewai[openai]"
```
</Accordion>

<Accordion title="Meta-Llama">
@@ -187,69 +247,186 @@ In this section, you'll find detailed examples that help you select, configure,
</Accordion>

<Accordion title="Anthropic">
CrewAI provides native integration with Anthropic through the Anthropic Python SDK.

```toml Code
# Required
ANTHROPIC_API_KEY=sk-ant-...

# Optional
ANTHROPIC_API_BASE=<custom-base-url>
```

Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM

llm = LLM(
model="anthropic/claude-3-sonnet-20240229-v1:0",
temperature=0.7
model="anthropic/claude-3-5-sonnet-20241022",
api_key="your-api-key", # Or set ANTHROPIC_API_KEY
max_tokens=4096 # Required for Anthropic
)
```

**Advanced Configuration:**
```python Code
from crewai import LLM

llm = LLM(
model="anthropic/claude-3-5-sonnet-20241022",
api_key="your-api-key",
base_url="https://api.anthropic.com", # Optional custom endpoint
temperature=0.7,
max_tokens=4096, # Required parameter
top_p=0.9,
stop_sequences=["END", "STOP"], # Anthropic uses stop_sequences
stream=True, # Enable streaming
timeout=60.0, # Request timeout in seconds
max_retries=3 # Maximum retry attempts
)
```

**Supported Environment Variables:**
- `ANTHROPIC_API_KEY`: Your Anthropic API key (required)

**Features:**
- Native tool use support for Claude 3+ models
- Streaming support for real-time responses
- Automatic system message handling
- Stop sequences for controlled output
- Token usage tracking
- Multi-turn tool use conversations

**Important Notes:**
- `max_tokens` is a **required** parameter for all Anthropic models
- Claude uses `stop_sequences` instead of `stop`
- System messages are handled separately from conversation messages
- First message must be from the user (automatically handled)
- Messages must alternate between user and assistant

**Supported Models:**

| Model | Context Window | Best For |
|------------------------------|----------------|-----------------------------------------------|
| claude-3-7-sonnet | 200,000 tokens | Advanced reasoning and agentic tasks |
| claude-3-5-sonnet-20241022 | 200,000 tokens | Latest Sonnet with best performance |
| claude-3-5-haiku | 200,000 tokens | Fast, compact model for quick responses |
| claude-3-opus | 200,000 tokens | Most capable for complex tasks |
| claude-3-sonnet | 200,000 tokens | Balanced intelligence and speed |
| claude-3-haiku | 200,000 tokens | Fastest for simple tasks |
| claude-2.1 | 200,000 tokens | Extended context, reduced hallucinations |
| claude-2 | 100,000 tokens | Versatile model for various tasks |
| claude-instant | 100,000 tokens | Fast, cost-effective for everyday tasks |

**Note:** To use Anthropic, install the required dependencies:
```bash
uv add "crewai[anthropic]"
```
</Accordion>

<Accordion title="Google (Gemini API)">
Set your API key in your `.env` file. If you need a key, or need to find an
existing key, check [AI Studio](https://aistudio.google.com/apikey).
CrewAI provides native integration with Google Gemini through the Google Gen AI Python SDK.

Set your API key in your `.env` file. If you need a key, check [AI Studio](https://aistudio.google.com/apikey).

```toml .env
# https://ai.google.dev/gemini-api/docs/api-key
# Required (one of the following)
GOOGLE_API_KEY=<your-api-key>
GEMINI_API_KEY=<your-api-key>

# Optional - for Vertex AI
GOOGLE_CLOUD_PROJECT=<your-project-id>
GOOGLE_CLOUD_LOCATION=<location> # Defaults to us-central1
GOOGLE_GENAI_USE_VERTEXAI=true # Set to use Vertex AI
```

Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM

llm = LLM(
model="gemini/gemini-2.0-flash",
temperature=0.7,
api_key="your-api-key", # Or set GOOGLE_API_KEY/GEMINI_API_KEY
temperature=0.7
)
```

### Gemini models
**Advanced Configuration:**
```python Code
from crewai import LLM

llm = LLM(
model="gemini/gemini-2.5-flash",
api_key="your-api-key",
temperature=0.7,
top_p=0.9,
top_k=40, # Top-k sampling parameter
max_output_tokens=8192,
stop_sequences=["END", "STOP"],
stream=True, # Enable streaming
safety_settings={
"HARM_CATEGORY_HARASSMENT": "BLOCK_NONE",
"HARM_CATEGORY_HATE_SPEECH": "BLOCK_NONE"
}
)
```

**Vertex AI Configuration:**
```python Code
from crewai import LLM

llm = LLM(
model="gemini/gemini-1.5-pro",
project="your-gcp-project-id",
location="us-central1" # GCP region
)
```

**Supported Environment Variables:**
- `GOOGLE_API_KEY` or `GEMINI_API_KEY`: Your Google API key (required for Gemini API)
- `GOOGLE_CLOUD_PROJECT`: Google Cloud project ID (for Vertex AI)
- `GOOGLE_CLOUD_LOCATION`: GCP location (defaults to `us-central1`)
- `GOOGLE_GENAI_USE_VERTEXAI`: Set to `true` to use Vertex AI

**Features:**
- Native function calling support for Gemini 1.5+ and 2.x models
- Streaming support for real-time responses
- Multimodal capabilities (text, images, video)
- Safety settings configuration
- Support for both Gemini API and Vertex AI
- Automatic system instruction handling
- Token usage tracking

**Gemini Models:**

Google offers a range of powerful models optimized for different use cases.

| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
| gemini-2.5-flash | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro | 1M tokens | Enhanced thinking and reasoning, multimodal understanding |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking |
| gemini-2.0-flash-thinking | 32,768 tokens | Advanced reasoning with thinking process |
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
| gemini-1.5-pro | 2M tokens | Best performing, logical reasoning, coding |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
| gemini-1.5-flash-8b | 1M tokens | Fastest, most cost-efficient |
| gemini-1.0-pro | 32,768 tokens | Earlier generation model |

**Gemma Models:**

The Gemini API also supports [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.

| Model | Context Window | Best For |
|----------------|----------------|------------------------------------|
| gemma-3-1b | 32,000 tokens | Ultra-lightweight tasks |
| gemma-3-4b | 128,000 tokens | Efficient general-purpose tasks |
| gemma-3-12b | 128,000 tokens | Balanced performance and efficiency|
| gemma-3-27b | 128,000 tokens | High-performance tasks |

**Note:** To use Google Gemini, install the required dependencies:
```bash
uv add "crewai[google-genai]"
```

The full list of models is available in the [Gemini model docs](https://ai.google.dev/gemini-api/docs/models).

### Gemma

The Gemini API also allows you to use your API key to access [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.

| Model | Context Window |
|----------------|----------------|
| gemma-3-1b-it | 32k tokens |
| gemma-3-4b-it | 32k tokens |
| gemma-3-12b-it | 32k tokens |
| gemma-3-27b-it | 128k tokens |

</Accordion>
<Accordion title="Google (Vertex AI)">
Get credentials from your Google Cloud Console and save it to a JSON file, then load it with the following code:
@@ -291,43 +468,146 @@ In this section, you'll find detailed examples that help you select, configure,
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Azure">
|
||||
CrewAI provides native integration with Azure AI Inference and Azure OpenAI through the Azure AI Inference Python SDK.
|
||||
|
||||
```toml Code
|
||||
# Required
|
||||
AZURE_API_KEY=<your-api-key>
|
||||
AZURE_API_BASE=<your-resource-url>
|
||||
AZURE_API_VERSION=<api-version>
|
||||
AZURE_ENDPOINT=<your-endpoint-url>
|
||||
|
||||
# Optional
|
||||
AZURE_AD_TOKEN=<your-azure-ad-token>
|
||||
AZURE_API_TYPE=<your-azure-api-type>
|
||||
AZURE_API_VERSION=<api-version> # Defaults to 2024-06-01
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
**Endpoint URL Formats:**
|
||||
|
||||
For Azure OpenAI deployments:
|
||||
```
|
||||
https://<resource-name>.openai.azure.com/openai/deployments/<deployment-name>
|
||||
```
|
||||
|
||||
For Azure AI Inference endpoints:
|
||||
```
|
||||
https://<resource-name>.inference.azure.com
|
||||
```
|
||||
|
||||
**Basic Usage:**
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="azure/gpt-4",
|
||||
api_version="2023-05-15"
|
||||
api_key="<your-api-key>", # Or set AZURE_API_KEY
|
||||
endpoint="<your-endpoint-url>",
|
||||
api_version="2024-06-01"
|
||||
)
|
||||
```
|
||||
|
||||
**Advanced Configuration:**
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="azure/gpt-4o",
|
||||
temperature=0.7,
|
||||
max_tokens=4000,
|
||||
top_p=0.9,
|
||||
frequency_penalty=0.0,
|
||||
presence_penalty=0.0,
|
||||
stop=["END"],
|
||||
stream=True,
|
||||
timeout=60.0,
|
||||
max_retries=3
|
||||
)
|
||||
```
|
||||
|
||||
**Supported Environment Variables:**
|
||||
- `AZURE_API_KEY`: Your Azure API key (required)
|
||||
- `AZURE_ENDPOINT`: Your Azure endpoint URL (required, also checks `AZURE_OPENAI_ENDPOINT` and `AZURE_API_BASE`)
|
||||
- `AZURE_API_VERSION`: API version (optional, defaults to `2024-06-01`)
|
||||
|
||||
**Features:**
|
||||
- Native function calling support for Azure OpenAI models (gpt-4, gpt-4o, gpt-3.5-turbo, etc.)
|
||||
- Streaming support for real-time responses
|
||||
- Automatic endpoint URL validation and correction
|
||||
- Comprehensive error handling with retry logic
|
||||
- Token usage tracking
|
||||
|
||||
**Note:** To use Azure AI Inference, install the required dependencies:
|
||||
```bash
|
||||
uv add "crewai[azure-ai-inference]"
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="AWS Bedrock">
|
||||
CrewAI provides native integration with AWS Bedrock through the boto3 SDK using the Converse API.
|
||||
|
||||
```toml Code
|
||||
# Required
|
||||
AWS_ACCESS_KEY_ID=<your-access-key>
|
||||
AWS_SECRET_ACCESS_KEY=<your-secret-key>
|
||||
AWS_DEFAULT_REGION=<your-region>
|
||||
|
||||
# Optional
|
||||
AWS_SESSION_TOKEN=<your-session-token> # For temporary credentials
|
||||
AWS_DEFAULT_REGION=<your-region> # Defaults to us-east-1
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
**Basic Usage:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
|
||||
model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
|
||||
region_name="us-east-1"
|
||||
)
|
||||
```
|
||||
|
||||
Before using Amazon Bedrock, make sure you have boto3 installed in your environment
|
||||
**Advanced Configuration:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API, enabling secure and responsible AI application development.
|
||||
llm = LLM(
|
||||
model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
|
||||
aws_access_key_id="your-access-key", # Or set AWS_ACCESS_KEY_ID
|
||||
aws_secret_access_key="your-secret-key", # Or set AWS_SECRET_ACCESS_KEY
|
||||
aws_session_token="your-session-token", # For temporary credentials
|
||||
region_name="us-east-1",
|
||||
temperature=0.7,
|
||||
max_tokens=4096,
|
||||
top_p=0.9,
|
||||
top_k=250, # For Claude models
|
||||
stop_sequences=["END", "STOP"],
|
||||
stream=True, # Enable streaming
|
||||
guardrail_config={ # Optional content filtering
|
||||
"guardrailIdentifier": "your-guardrail-id",
|
||||
"guardrailVersion": "1"
|
||||
},
|
||||
additional_model_request_fields={ # Model-specific parameters
|
||||
"top_k": 250
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
**Supported Environment Variables:**
- `AWS_ACCESS_KEY_ID`: AWS access key (required)
- `AWS_SECRET_ACCESS_KEY`: AWS secret key (required)
- `AWS_SESSION_TOKEN`: AWS session token for temporary credentials (optional)
- `AWS_DEFAULT_REGION`: AWS region (defaults to `us-east-1`)

**Features:**
- Native tool calling support via Converse API
- Streaming and non-streaming responses
- Comprehensive error handling with retry logic
- Guardrail configuration for content filtering
- Model-specific parameters via `additional_model_request_fields`
- Token usage tracking and stop reason logging
- Support for all Bedrock foundation models
- Automatic conversation format handling

**Important Notes:**
- Uses the modern Converse API for unified model access
- Automatic handling of model-specific conversation requirements
- System messages are handled separately from conversation
- First message must be from user (automatically handled)
- Some models (like Cohere) require conversation to end with user message

[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API.

| Model | Context Window | Best For |
|-------------------------|----------------------|-------------------------------------------------------------------|
@@ -357,7 +637,12 @@ In this section, you'll find detailed examples that help you select, configure,
| Jamba-Instruct | Up to 256k tokens | Model with extended context window optimized for cost-effective text generation, summarization, and Q&A. |
| Mistral 7B Instruct | Up to 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| Mistral 8x7B Instruct | Up to 32k tokens | An MOE LLM that follows instructions, completes requests, and generates creative text. |
| DeepSeek R1 | 32,768 tokens | Advanced reasoning model |

**Note:** To use AWS Bedrock, install the required dependencies:
```bash
uv add "crewai[bedrock]"
```
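
Not every model in the table is enabled in every account or region. As a quick check, independent of CrewAI, the sketch below uses the standard boto3 `bedrock` control-plane client to list the model IDs your credentials can actually see:

```python Code
import boto3

# Lists foundation models visible to your credentials in the chosen region.
bedrock = boto3.client("bedrock", region_name="us-east-1")

for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["modelId"])
```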
</Accordion>
|
||||
|
||||
<Accordion title="Amazon SageMaker">
|
||||
@@ -899,7 +1184,7 @@ Learn how to get the most out of your LLM configuration:
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Drop Additional Parameters">
|
||||
CrewAI internally uses Litellm for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
|
||||
CrewAI internally uses native SDKs for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
|
||||
For example, if you don't need to send the <code>stop</code> parameter, you can simply omit it from your LLM call:
|
||||
|
||||
```python
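
# Illustrative sketch (not from the original snippet): the same kind of LLM
# configuration with the optional `stop` parameter simply left out.
from crewai import LLM

llm = LLM(
    model="openai/gpt-4o",  # example model string; use your own provider/model
    temperature=0.7,
    # no `stop` here - omitted parameters are simply not sent to the provider
)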
|
||||
|
||||
@@ -14,7 +14,7 @@ Tasks provide all necessary details for execution, such as a description, the ag
|
||||
Tasks within CrewAI can be collaborative, requiring multiple agents to work together. This is managed through the task properties and orchestrated by the Crew's process, enhancing teamwork and efficiency.
|
||||
|
||||
<Note type="info" title="Enterprise Enhancement: Visual Task Builder">
|
||||
CrewAI Enterprise includes a Visual Task Builder in Crew Studio that simplifies complex task creation and chaining. Design your task flows visually and test them in real-time without writing code.
|
||||
CrewAI AMP includes a Visual Task Builder in Crew Studio that simplifies complex task creation and chaining. Design your task flows visually and test them in real-time without writing code.
|
||||
|
||||

|
||||
|
||||
|
||||
@@ -17,7 +17,7 @@ This includes tools from the [CrewAI Toolkit](https://github.com/joaomdmoura/cre
|
||||
enabling everything from simple searches to complex interactions and effective teamwork among agents.
|
||||
|
||||
<Note type="info" title="Enterprise Enhancement: Tools Repository">
|
||||
CrewAI Enterprise provides a comprehensive Tools Repository with pre-built integrations for common business systems and APIs. Deploy agents with enterprise tools in minutes instead of days.
|
||||
CrewAI AMP provides a comprehensive Tools Repository with pre-built integrations for common business systems and APIs. Deploy agents with enterprise tools in minutes instead of days.
|
||||
|
||||
The Enterprise Tools Repository includes:
|
||||
- Pre-built connectors for popular enterprise systems
|
||||
@@ -208,7 +208,7 @@ from crewai.tools import BaseTool
|
||||
class AsyncCustomTool(BaseTool):
|
||||
name: str = "async_custom_tool"
|
||||
description: str = "An asynchronous custom tool"
|
||||
|
||||
|
||||
async def _run(self, query: str = "") -> str:
|
||||
"""Asynchronously run the tool"""
|
||||
# Your async implementation here
|
||||
|
||||
@@ -102,5 +102,3 @@ Once deployed, you can view the automation details and have the **Options** drop
|
||||
Stream real-time events and updates to your systems.
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
|
||||
|
||||
@@ -86,5 +86,3 @@ Once published, you can view the automation details and have the **Options** dro
|
||||
Export a React Component.
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
|
||||
|
||||
@@ -28,7 +28,7 @@ The Marketplace provides a curated surface for discovering integrations, interna
|
||||

|
||||
</Frame>
|
||||
|
||||
You can also download the templates directly from the marketplace by clicking on the `Download` button so
|
||||
you can use them locally or refine them to your needs.
|
||||
|
||||
## Related
|
||||
@@ -44,5 +44,3 @@ you can use them locally or refine them to your needs.
|
||||
Store, share, and reuse agent definitions across teams and projects.
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
|
||||
|
||||
@@ -7,16 +7,16 @@ mode: "wide"
|
||||
|
||||
## Overview
|
||||
|
||||
RBAC in CrewAI Enterprise enables secure, scalable access management through a combination of organization‑level roles and automation‑level visibility controls.
|
||||
RBAC in CrewAI AMP enables secure, scalable access management through a combination of organization‑level roles and automation‑level visibility controls.
|
||||
|
||||
<Frame>
|
||||
<img src="/images/enterprise/users_and_roles.png" alt="RBAC overview in CrewAI Enterprise" />
|
||||
|
||||
<img src="/images/enterprise/users_and_roles.png" alt="RBAC overview in CrewAI AMP" />
|
||||
|
||||
</Frame>
|
||||
|
||||
## Users and Roles
|
||||
|
||||
Each member in your CrewAI workspace is assigned a role, which determines their access across various features.
|
||||
|
||||
You can:
|
||||
|
||||
@@ -28,7 +28,7 @@ You can configure users and roles in Settings → Roles.
|
||||
|
||||
<Steps>
|
||||
<Step title="Open Roles settings">
|
||||
Go to <b>Settings → Roles</b> in CrewAI Enterprise.
|
||||
Go to <b>Settings → Roles</b> in CrewAI AMP.
|
||||
</Step>
|
||||
<Step title="Choose a role type">
|
||||
Use a predefined role (<b>Owner</b>, <b>Member</b>) or click <b>Create role</b> to define a custom one.
|
||||
@@ -93,12 +93,10 @@ The organization owner always has access. In private mode, only whitelisted user
|
||||
</Tip>
|
||||
|
||||
<Frame>
|
||||
<img src="/images/enterprise/visibility.png" alt="Automation Visibility settings in CrewAI Enterprise" />
|
||||
|
||||
<img src="/images/enterprise/visibility.png" alt="Automation Visibility settings in CrewAI AMP" />
|
||||
|
||||
</Frame>
|
||||
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with RBAC questions.
|
||||
</Card>
|
||||
|
||||
|
||||
|
||||
@@ -43,7 +43,7 @@ Tools & Integrations is the central hub for connecting third‑party apps and ma
|
||||
1. Go to <Link href="https://app.crewai.com/crewai_plus/connectors">Integrations</Link>
|
||||
2. Click <b>Connect</b> on the desired service
|
||||
3. Complete the OAuth flow and grant scopes
|
||||
4. Copy your Enterprise Token from the <b>Integration</b> tab
|
||||
4. Copy your Enterprise Token from <Link href="https://app.crewai.com/crewai_plus/settings/integrations">Integration Settings</Link>
|
||||
|
||||
<Frame>
|
||||

|
||||
@@ -57,29 +57,37 @@ Tools & Integrations is the central hub for connecting third‑party apps and ma
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
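
A small, optional guard (plain standard-library Python, shown here as a sketch) can fail fast if the token was not picked up before you build agents with `apps=[]`. If you rely on a `.env` file, load it first (for example with python-dotenv) before running this check.

```python
import os

token = os.getenv("CREWAI_PLATFORM_INTEGRATION_TOKEN")
if not token:
    raise RuntimeError(
        "CREWAI_PLATFORM_INTEGRATION_TOKEN is not set; "
        "export it or add it to your .env file before using Agent(apps=[...])."
    )
```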
|
||||
|
||||
### Usage Example
|
||||
|
||||
<Tip>
|
||||
All services you have authenticated will be available as tools. Add `CrewaiEnterpriseTools` to your agent and you’re set.
|
||||
Use the new streamlined approach to integrate enterprise apps. Simply specify the app and its actions directly in the Agent configuration.
|
||||
</Tip>
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import CrewaiEnterpriseTools
|
||||
|
||||
# Get enterprise tools (Gmail tool will be included)
|
||||
enterprise_tools = CrewaiEnterpriseTools(
|
||||
enterprise_token="your_enterprise_token"
|
||||
)
|
||||
# print the tools
|
||||
print(enterprise_tools)
|
||||
|
||||
# Create an agent with Gmail capabilities
|
||||
email_agent = Agent(
|
||||
role="Email Manager",
|
||||
goal="Manage and organize email communications",
|
||||
backstory="An AI assistant specialized in email management and communication.",
|
||||
tools=enterprise_tools
|
||||
apps=['gmail', 'gmail/send_email'] # Using canonical name 'gmail'
|
||||
)
|
||||
|
||||
# Task to send an email
|
||||
@@ -102,21 +110,14 @@ Tools & Integrations is the central hub for connecting third‑party apps and ma
|
||||
### Filtering Tools
|
||||
|
||||
```python
|
||||
from crewai_tools import CrewaiEnterpriseTools
|
||||
|
||||
enterprise_tools = CrewaiEnterpriseTools(
|
||||
actions_list=["gmail_find_email"] # only gmail_find_email tool will be available
|
||||
)
|
||||
|
||||
|
||||
gmail_tool = enterprise_tools["gmail_find_email"]
|
||||
|
||||
from crewai import Agent, Task, Crew
|
||||
|
||||
# Create agent with specific Gmail actions only
|
||||
gmail_agent = Agent(
|
||||
role="Gmail Manager",
|
||||
goal="Manage gmail communications and notifications",
|
||||
backstory="An AI assistant that helps coordinate gmail communications.",
|
||||
tools=[gmail_tool]
|
||||
apps=['gmail/fetch_emails'] # Using canonical name with specific action
|
||||
)
|
||||
|
||||
notification_task = Task(
|
||||
@@ -188,10 +189,10 @@ Tools & Integrations is the central hub for connecting third‑party apps and ma
|
||||
|
||||
## Internal Tools
|
||||
|
||||
Create custom tools locally, publish them on CrewAI Enterprise Tool Repository and use them in your agents.
|
||||
Create custom tools locally, publish them on CrewAI AMP Tool Repository and use them in your agents.
|
||||
|
||||
<Tip>
|
||||
Before running the commands below, make sure you log in to your CrewAI Enterprise account by running this command:
|
||||
Before running the commands below, make sure you log in to your CrewAI AMP account by running this command:
|
||||
```bash
|
||||
crewai login
|
||||
```
|
||||
@@ -209,13 +210,13 @@ Tools & Integrations is the central hub for connecting third‑party apps and ma
|
||||
```
|
||||
</Step>
|
||||
<Step title="Publish">
|
||||
Publish the tool to the CrewAI Enterprise Tool Repository.
|
||||
Publish the tool to the CrewAI AMP Tool Repository.
|
||||
```bash
|
||||
crewai tool publish
|
||||
```
|
||||
</Step>
|
||||
<Step title="Install">
|
||||
Install the tool from the CrewAI Enterprise Tool Repository.
|
||||
Install the tool from the CrewAI AMP Tool Repository.
|
||||
```bash
|
||||
crewai tool install your-tool
|
||||
```
|
||||
@@ -247,4 +248,3 @@ Tools & Integrations is the central hub for connecting third‑party apps and ma
|
||||
Automate workflows and integrate with external platforms and services.
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
|
||||
@@ -11,7 +11,7 @@ Traces provide comprehensive visibility into your crew executions, helping you m
|
||||
|
||||
## What are Traces?
|
||||
|
||||
Traces in CrewAI Enterprise are detailed execution records that capture every aspect of your crew's operation, from initial inputs to final outputs. They record:
|
||||
Traces in CrewAI AMP are detailed execution records that capture every aspect of your crew's operation, from initial inputs to final outputs. They record:
|
||||
|
||||
- Agent thoughts and reasoning
|
||||
- Task execution details
|
||||
@@ -28,9 +28,9 @@ Traces in CrewAI Enterprise are detailed execution records that capture every as
|
||||
|
||||
<Steps>
|
||||
<Step title="Navigate to the Traces Tab">
|
||||
Once in your CrewAI Enterprise dashboard, click on the **Traces** to view all execution records.
|
||||
Once in your CrewAI AMP dashboard, click on the **Traces** to view all execution records.
|
||||
</Step>
|
||||
|
||||
|
||||
<Step title="Select an Execution">
|
||||
You'll see a list of all crew executions, sorted by date. Click on any execution to view its detailed trace.
|
||||
</Step>
|
||||
@@ -112,7 +112,7 @@ Traces are invaluable for troubleshooting issues with your crews:
|
||||
<Steps>
|
||||
<Step title="Identify Failure Points">
|
||||
When a crew execution doesn't produce the expected results, examine the trace to find where things went wrong. Look for:
|
||||
|
||||
|
||||
- Failed tasks
|
||||
- Unexpected agent decisions
|
||||
- Tool usage errors
|
||||
@@ -122,19 +122,19 @@ Traces are invaluable for troubleshooting issues with your crews:
|
||||

|
||||
</Frame>
|
||||
</Step>
|
||||
|
||||
|
||||
<Step title="Optimize Performance">
|
||||
Use execution metrics to identify performance bottlenecks:
|
||||
|
||||
|
||||
- Tasks that took longer than expected
|
||||
- Excessive token usage
|
||||
- Redundant tool operations
|
||||
- Unnecessary API calls
|
||||
</Step>
|
||||
|
||||
|
||||
<Step title="Improve Cost Efficiency">
|
||||
Analyze token usage and cost estimates to optimize your crew's efficiency:
|
||||
|
||||
|
||||
- Consider using smaller models for simpler tasks
|
||||
- Refine prompts to be more concise
|
||||
- Cache frequently accessed information
|
||||
@@ -153,5 +153,5 @@ CrewAI batches trace uploads to reduce overhead on high-volume runs:
|
||||
This yields more stable tracing under load while preserving detailed task/agent telemetry.
|
||||
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with trace analysis or any other CrewAI Enterprise features.
|
||||
</Card>
|
||||
Contact our support team for assistance with trace analysis or any other CrewAI AMP features.
|
||||
</Card>
|
||||
|
||||
@@ -7,8 +7,8 @@ mode: "wide"
|
||||
|
||||
## Overview
|
||||
|
||||
Enterprise Event Streaming lets you receive real-time webhook updates about your crews and flows deployed to
|
||||
CrewAI Enterprise, such as model calls, tool usage, and flow steps.
|
||||
Enterprise Event Streaming lets you receive real-time webhook updates about your crews and flows deployed to
|
||||
CrewAI AMP, such as model calls, tool usage, and flow steps.
|
||||
|
||||
## Usage
|
||||
|
||||
@@ -151,7 +151,7 @@ CrewAI supports both system events and custom events in Enterprise Event Streami
|
||||
### Reasoning Events:
|
||||
|
||||
- `agent_reasoning_started`
|
||||
- `agent_reasoning_completed`
|
||||
- `agent_reasoning_failed`
|
||||
|
||||
Event names match the internal event bus. See GitHub for the full list of events.
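
The exact event schema depends on your configuration, but a local receiver for the webhook stream can be as small as the sketch below (standard library only; the port and the `type` field used here are placeholders, not the official schema):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        event = json.loads(body or b"{}")
        # Log whatever identifies the event; real payloads carry more fields.
        print(event.get("type", "unknown-event"), event)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), EventHandler).serve_forever()
```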
|
||||
@@ -165,4 +165,4 @@ You can emit your own custom events, and they will be delivered through the webh
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with webhook integration or troubleshooting.
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
@@ -1,11 +1,11 @@
|
||||
---
|
||||
title: "Triggers Overview"
|
||||
description: "Understand how CrewAI Enterprise triggers work, how to manage them, and where to find integration-specific playbooks"
|
||||
description: "Understand how CrewAI AMP triggers work, how to manage them, and where to find integration-specific playbooks"
|
||||
icon: "face-smile"
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
CrewAI Enterprise triggers connect your automations to real-time events across the tools your teams already use. Instead of polling systems or relying on manual kickoffs, triggers listen for changes—new emails, calendar updates, CRM status changes—and immediately launch the crew or flow you specify.
|
||||
CrewAI AMP triggers connect your automations to real-time events across the tools your teams already use. Instead of polling systems or relying on manual kickoffs, triggers listen for changes—new emails, calendar updates, CRM status changes—and immediately launch the crew or flow you specify.
|
||||
|
||||
<Frame>
|
||||

|
||||
@@ -117,27 +117,50 @@ Before wiring a trigger into production, make sure you:
|
||||
- Decide whether to pass trigger context automatically using `allow_crewai_trigger_context`
|
||||
- Set up monitoring—webhook logs, CrewAI execution history, and optional external alerting
|
||||
|
||||
### Payload & Crew Examples Repository
|
||||
### Testing Triggers Locally with CLI
|
||||
|
||||
We maintain a comprehensive repository with end-to-end trigger examples to help you build and test your automations:
|
||||
The CrewAI CLI provides powerful commands to help you develop and test trigger-driven automations without deploying to production.
|
||||
|
||||
This repository contains:
|
||||
#### List Available Triggers
|
||||
|
||||
- **Realistic payload samples** for every supported trigger integration
|
||||
- **Ready-to-run crew implementations** that parse each payload and turn it into a business workflow
|
||||
- **Multiple scenarios per integration** (e.g., new events, updates, deletions) so you can match the shape of your data
|
||||
View all available triggers for your connected integrations:
|
||||
|
||||
| Integration | When it fires | Payload Samples | Crew Examples |
|
||||
| :-- | :-- | :-- | :-- |
|
||||
| Gmail | New messages, thread updates | [New alerts, thread updates](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/gmail) | [`new-email-crew.py`, `gmail-alert-crew.py`](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/gmail) |
|
||||
| Google Calendar | Event created / updated / started / ended / cancelled | [Event lifecycle payloads](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/google_calendar) | [`calendar-event-crew.py`, `calendar-meeting-crew.py`, `calendar-working-location-crew.py`](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/google_calendar) |
|
||||
| Google Drive | File created / updated / deleted | [File lifecycle payloads](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/google_drive) | [`drive-file-crew.py`, `drive-file-deletion-crew.py`](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/google_drive) |
|
||||
| Outlook | New email, calendar event removed | [Outlook payloads](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/outlook) | [`outlook-message-crew.py`, `outlook-event-removal-crew.py`](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/outlook) |
|
||||
| OneDrive | File operations (create, update, share, delete) | [OneDrive payloads](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/onedrive) | [`onedrive-file-crew.py`](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/onedrive) |
|
||||
| HubSpot | Record created / updated (contacts, companies, deals) | [HubSpot payloads](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/hubspot) | [`hubspot-company-crew.py`, `hubspot-contact-crew.py`, `hubspot-record-crew.py`](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/hubspot) |
|
||||
| Microsoft Teams | Chat thread created | [Teams chat payload](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/microsoft-teams) | [`teams-chat-created-crew.py`](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/microsoft-teams) |
|
||||
```bash
|
||||
crewai triggers list
|
||||
```
|
||||
|
||||
This command displays all triggers available based on your connected integrations, showing:
|
||||
- Integration name and connection status
|
||||
- Available trigger types
|
||||
- Trigger names and descriptions
|
||||
|
||||
#### Simulate Trigger Execution
|
||||
|
||||
Test your crew with realistic trigger payloads before deployment:
|
||||
|
||||
```bash
|
||||
crewai triggers run <trigger_name>
|
||||
```
|
||||
|
||||
For example:
|
||||
|
||||
```bash
|
||||
crewai triggers run microsoft_onedrive/file_changed
|
||||
```
|
||||
|
||||
This command:
|
||||
- Executes your crew locally
|
||||
- Passes a complete, realistic trigger payload
|
||||
- Simulates exactly how your crew will be called in production
|
||||
|
||||
<Warning>
|
||||
**Important Development Notes:**
|
||||
- Use `crewai triggers run <trigger>` to simulate trigger execution during development
|
||||
- Using `crewai run` will NOT simulate trigger calls and won't pass the trigger payload
|
||||
- After deployment, your crew will be executed with the actual trigger payload
|
||||
- If your crew expects parameters that aren't in the trigger payload, execution may fail
|
||||
</Warning>
|
||||
|
||||
Use these samples to understand payload shape, copy the matching crew, and then replace the test payload with your live trigger data.
|
||||
|
||||
### Triggers with Crew
|
||||
|
||||
@@ -241,15 +264,20 @@ def delegate_to_crew(self, crewai_trigger_payload: dict = None):
|
||||
## Troubleshooting
|
||||
|
||||
**Trigger not firing:**
|
||||
- Verify the trigger is enabled
|
||||
- Check integration connection status
|
||||
- Verify the trigger is enabled in your deployment's Triggers tab
|
||||
- Check integration connection status under Tools & Integrations
|
||||
- Ensure all required environment variables are properly configured
|
||||
|
||||
**Execution failures:**
|
||||
- Check the execution logs for error details
|
||||
- If you are developing, make sure the inputs include the `crewai_trigger_payload` parameter with the correct payload
|
||||
- Use `crewai triggers run <trigger_name>` to test locally and see the exact payload structure
|
||||
- Verify your crew can handle the `crewai_trigger_payload` parameter
|
||||
- Ensure your crew doesn't expect parameters that aren't included in the trigger payload
|
||||
|
||||
**Development issues:**
|
||||
- Always test with `crewai triggers run <trigger>` before deploying to see the complete payload
|
||||
- Remember that `crewai run` does NOT simulate trigger calls—use `crewai triggers run` instead
|
||||
- Use `crewai triggers list` to verify which triggers are available for your connected integrations
|
||||
- After deployment, your crew will receive the actual trigger payload, so test thoroughly locally first
|
||||
|
||||
Automation triggers transform your CrewAI deployments into responsive, event-driven systems that can seamlessly integrate with your existing business processes and tools.
|
||||
|
||||
<Card title="CrewAI Enterprise Trigger Examples" href="https://github.com/crewAIInc/crewai-enterprise-trigger-examples" icon="github">
|
||||
Check them out on GitHub!
|
||||
</Card>
|
||||
@@ -19,8 +19,8 @@ This guide walks you through connecting Azure OpenAI with Crew Studio for seamle
|
||||
</Frame>
|
||||
</Step>
|
||||
|
||||
<Step title="Configure CrewAI Enterprise Connection">
|
||||
4. In another tab, open `CrewAI Enterprise > LLM Connections`. Name your LLM Connection, select Azure as the provider, and choose the same model you selected in Azure.
|
||||
<Step title="Configure CrewAI AMP Connection">
|
||||
4. In another tab, open `CrewAI AMP > LLM Connections`. Name your LLM Connection, select Azure as the provider, and choose the same model you selected in Azure.
|
||||
5. On the same page, add environment variables from step 3:
|
||||
- One named `AZURE_DEPLOYMENT_TARGET_URL` (using the Target URI). The URL should look like this: https://your-deployment.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-08-01-preview
|
||||
- Another named `AZURE_API_KEY` (using the Key).
|
||||
@@ -28,7 +28,7 @@ This guide walks you through connecting Azure OpenAI with Crew Studio for seamle
|
||||
</Step>
|
||||
|
||||
<Step title="Set Default Configuration">
|
||||
7. In `CrewAI Enterprise > Settings > Defaults > Crew Studio LLM Settings`, set the new LLM Connection and model as defaults.
|
||||
7. In `CrewAI AMP > Settings > Defaults > Crew Studio LLM Settings`, set the new LLM Connection and model as defaults.
|
||||
</Step>
|
||||
|
||||
<Step title="Configure Network Access">
|
||||
@@ -49,4 +49,4 @@ If you encounter issues:
|
||||
- Verify the Target URI format matches the expected pattern
|
||||
- Check that the API key is correct and has proper permissions
|
||||
- Ensure network access is configured to allow CrewAI connections
|
||||
- Confirm the deployment model matches what you've configured in CrewAI
|
||||
|
||||
@@ -7,7 +7,7 @@ mode: "wide"
|
||||
|
||||
## Overview
|
||||
|
||||
[CrewAI Enterprise](https://app.crewai.com) streamlines the process of **creating**, **deploying**, and **managing** your AI agents in production environments.
|
||||
[CrewAI AMP](https://app.crewai.com) streamlines the process of **creating**, **deploying**, and **managing** your AI agents in production environments.
|
||||
|
||||
## Getting Started
|
||||
|
||||
|
||||
docs/en/enterprise/guides/capture_telemetry_logs.mdx (new file)
@@ -0,0 +1,35 @@
|
||||
---
|
||||
title: "Open Telemetry Logs"
|
||||
description: "Understand how to capture telemetry logs from your CrewAI AMP deployments"
|
||||
icon: "magnifying-glass-chart"
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
CrewAI AMP provides a powerful way to capture telemetry logs from your deployments. This allows you to monitor the performance of your agents and workflows, and to debug issues that may arise.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card title="ENTERPRISE OTEL SETUP enabled" icon="users">
|
||||
Your organization should have ENTERPRISE OTEL SETUP enabled
|
||||
</Card>
|
||||
<Card title="OTEL collector setup" icon="server">
|
||||
Your organization should have an OTEL collector set up, or a log-intake provider such as Datadog configured
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
|
||||
## How to capture telemetry logs

1. Go to the **Settings → Organization** tab
2. Configure your OTEL collector settings
3. Save
|
||||
|
||||
|
||||
|
||||
Example: setting up OTEL log collection to Datadog.
|
||||
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
@@ -1,12 +1,12 @@
|
||||
---
|
||||
title: "Deploy Crew"
|
||||
description: "Deploying a Crew on CrewAI Enterprise"
|
||||
description: "Deploying a Crew on CrewAI AMP"
|
||||
icon: "rocket"
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
<Note>
|
||||
After creating a crew locally or through Crew Studio, the next step is deploying it to the CrewAI Enterprise platform. This guide covers multiple deployment methods to help you choose the best approach for your workflow.
|
||||
After creating a crew locally or through Crew Studio, the next step is deploying it to the CrewAI AMP platform. This guide covers multiple deployment methods to help you choose the best approach for your workflow.
|
||||
</Note>
|
||||
|
||||
## Prerequisites
|
||||
@@ -39,10 +39,10 @@ The CLI provides the fastest way to deploy locally developed crews to the Enterp
|
||||
</Step>
|
||||
|
||||
<Step title="Authenticate with the Enterprise Platform">
|
||||
First, you need to authenticate your CLI with the CrewAI Enterprise platform:
|
||||
First, you need to authenticate your CLI with the CrewAI AMP platform:
|
||||
|
||||
```bash
|
||||
# If you already have a CrewAI Enterprise account, or want to create one:
|
||||
# If you already have a CrewAI AMP account, or want to create one:
|
||||
crewai login
|
||||
```
|
||||
|
||||
@@ -124,7 +124,7 @@ The CrewAI CLI offers several commands to manage your deployments:
|
||||
|
||||
## Option 2: Deploy Directly via Web Interface
|
||||
|
||||
You can also deploy your crews directly through the CrewAI Enterprise web interface by connecting your GitHub account. This approach doesn't require using the CLI on your local machine.
|
||||
You can also deploy your crews directly through the CrewAI AMP web interface by connecting your GitHub account. This approach doesn't require using the CLI on your local machine.
|
||||
|
||||
<Steps>
|
||||
|
||||
@@ -134,9 +134,9 @@ You can also deploy your crews directly through the CrewAI Enterprise web interf
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Connecting GitHub to CrewAI Enterprise">
|
||||
<Step title="Connecting GitHub to CrewAI AMP">
|
||||
|
||||
1. Log in to [CrewAI Enterprise](https://app.crewai.com)
|
||||
1. Log in to [CrewAI AMP](https://app.crewai.com)
|
||||
2. Click on the button "Connect GitHub"
|
||||
|
||||
<Frame>
|
||||
@@ -190,7 +190,7 @@ You can also deploy your crews directly through the CrewAI Enterprise web interf
|
||||
## ⚠️ Environment Variable Security Requirements
|
||||
|
||||
<Warning>
|
||||
**Important**: CrewAI Enterprise has security restrictions on environment variable names that can cause deployment failures if not followed.
|
||||
**Important**: CrewAI AMP has security restrictions on environment variable names that can cause deployment failures if not followed.
|
||||
</Warning>
|
||||
|
||||
### Blocked Environment Variable Patterns
|
||||
|
||||
@@ -1,17 +1,17 @@
|
||||
---
|
||||
title: "Enable Crew Studio"
|
||||
description: "Enabling Crew Studio on CrewAI Enterprise"
|
||||
description: "Enabling Crew Studio on CrewAI AMP"
|
||||
icon: "comments"
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
<Tip>
|
||||
Crew Studio is a powerful **no-code/low-code** tool that allows you to quickly scaffold or build Crews through a conversational interface.
|
||||
</Tip>
|
||||
|
||||
## What is Crew Studio?
|
||||
|
||||
Crew Studio is an innovative way to create AI agent crews without writing code.
|
||||
|
||||
<Frame>
|
||||

|
||||
@@ -24,7 +24,7 @@ With Crew Studio, you can:
|
||||
- Select appropriate tools
|
||||
- Configure necessary inputs
|
||||
- Generate downloadable code for customization
|
||||
- Deploy directly to the CrewAI Enterprise platform
|
||||
- Deploy directly to the CrewAI AMP platform
|
||||
|
||||
## Configuration Steps
|
||||
|
||||
@@ -32,14 +32,14 @@ Before you can start using Crew Studio, you need to configure your LLM connectio
|
||||
|
||||
<Steps>
|
||||
<Step title="Set Up LLM Connection">
|
||||
Go to the **LLM Connections** tab in your CrewAI Enterprise dashboard and create a new LLM connection.
|
||||
Go to the **LLM Connections** tab in your CrewAI AMP dashboard and create a new LLM connection.
|
||||
|
||||
<Note>
|
||||
Feel free to use any LLM provider you want that is supported by CrewAI.
|
||||
</Note>
|
||||
|
||||
|
||||
Configure your LLM connection:
|
||||
|
||||
|
||||
- Enter a `Connection Name` (e.g., `OpenAI`)
|
||||
- Select your model provider: `openai` or `azure`
|
||||
- Select models you'd like to use in your Studio-generated Crews
|
||||
@@ -48,28 +48,28 @@ Before you can start using Crew Studio, you need to configure your LLM connectio
|
||||
- For OpenAI: Add `OPENAI_API_KEY` with your API key
|
||||
- For Azure OpenAI: Refer to [this article](https://blog.crewai.com/configuring-azure-openai-with-crewai-a-comprehensive-guide/) for configuration details
|
||||
- Click `Add Connection` to save your configuration
|
||||
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
</Step>
|
||||
|
||||
|
||||
<Step title="Verify Connection Added">
|
||||
Once you complete the setup, you'll see your new connection added to the list of available connections.
|
||||
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
</Step>
|
||||
|
||||
|
||||
<Step title="Configure LLM Defaults">
|
||||
In the main menu, go to **Settings → Defaults** and configure the LLM Defaults settings:
|
||||
|
||||
|
||||
- Select default models for agents and other components
|
||||
- Set default configurations for Crew Studio
|
||||
|
||||
|
||||
Click `Save Settings` to apply your changes.
|
||||
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
@@ -82,38 +82,38 @@ Now that you've configured your LLM connection and default settings, you're read
|
||||
|
||||
<Steps>
|
||||
<Step title="Access Studio">
|
||||
Navigate to the **Studio** section in your CrewAI Enterprise dashboard.
|
||||
Navigate to the **Studio** section in your CrewAI AMP dashboard.
|
||||
</Step>
|
||||
|
||||
|
||||
<Step title="Start a Conversation">
|
||||
Start a conversation with the Crew Assistant by describing the problem you want to solve:
|
||||
|
||||
|
||||
```md
|
||||
I need a crew that can research the latest AI developments and create a summary report.
|
||||
```
|
||||
|
||||
|
||||
The Crew Assistant will ask clarifying questions to better understand your requirements.
|
||||
</Step>
|
||||
|
||||
|
||||
<Step title="Review Generated Crew">
|
||||
Review the generated crew configuration, including:
|
||||
|
||||
|
||||
- Agents and their roles
|
||||
- Tasks to be performed
|
||||
- Required inputs
|
||||
- Tools to be used
|
||||
|
||||
|
||||
This is your opportunity to refine the configuration before proceeding.
|
||||
</Step>
|
||||
|
||||
|
||||
<Step title="Deploy or Download">
|
||||
Once you're satisfied with the configuration, you can:
|
||||
|
||||
|
||||
- Download the generated code for local customization
|
||||
- Deploy the crew directly to the CrewAI Enterprise platform
|
||||
- Deploy the crew directly to the CrewAI AMP platform
|
||||
- Modify the configuration and regenerate the crew
|
||||
</Step>
|
||||
|
||||
|
||||
<Step title="Test Your Crew">
|
||||
After deployment, test your crew with sample inputs to ensure it performs as expected.
|
||||
</Step>
|
||||
@@ -130,38 +130,37 @@ Here's a typical workflow for creating a crew with Crew Studio:
|
||||
<Steps>
|
||||
<Step title="Describe Your Problem">
|
||||
Start by describing your problem:
|
||||
|
||||
|
||||
```md
|
||||
I need a crew that can analyze financial news and provide investment recommendations
|
||||
```
|
||||
</Step>
|
||||
|
||||
|
||||
<Step title="Answer Questions">
|
||||
Respond to clarifying questions from the Crew Assistant to refine your requirements.
|
||||
</Step>
|
||||
|
||||
|
||||
<Step title="Review the Plan">
|
||||
Review the generated crew plan, which might include:
|
||||
|
||||
|
||||
- A Research Agent to gather financial news
|
||||
- An Analysis Agent to interpret the data
|
||||
- A Recommendations Agent to provide investment advice
|
||||
</Step>
|
||||
|
||||
|
||||
<Step title="Approve or Modify">
|
||||
Approve the plan or request changes if necessary.
|
||||
</Step>
|
||||
|
||||
|
||||
<Step title="Download or Deploy">
|
||||
Download the code for customization or deploy directly to the platform.
|
||||
</Step>
|
||||
|
||||
|
||||
<Step title="Test and Refine">
|
||||
Test your crew with sample inputs and refine as needed.
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with Crew Studio or any other CrewAI Enterprise features.
|
||||
Contact our support team for assistance with Crew Studio or any other CrewAI AMP features.
|
||||
</Card>
|
||||
|
||||
|
||||
@@ -15,7 +15,7 @@ Use the Gmail Trigger to kick off your deployed crews when Gmail events happen i
|
||||
|
||||
## Enabling the Gmail Trigger
|
||||
|
||||
1. Open your deployment in CrewAI Enterprise
|
||||
1. Open your deployment in CrewAI AMP
|
||||
2. Go to the **Triggers** tab
|
||||
3. Locate **Gmail** and switch the toggle to enable
|
||||
|
||||
@@ -51,16 +51,25 @@ class GmailProcessingCrew:
|
||||
)
|
||||
```
|
||||
|
||||
The Gmail payload will be available via the standard context mechanisms. See the payload samples repository for structure and fields.
|
||||
The Gmail payload will be available via the standard context mechanisms.
|
||||
|
||||
### Sample payloads & crews
|
||||
### Testing Locally
|
||||
|
||||
The [CrewAI Enterprise Trigger Examples repository](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/gmail) includes:
|
||||
Test your Gmail trigger integration locally using the CrewAI CLI:
|
||||
|
||||
- `new-email-payload-1.json` / `new-email-payload-2.json` — production-style new message alerts with matching crews in `new-email-crew.py`
|
||||
- `thread-updated-sample-1.json` — follow-up messages on an existing thread, processed by `gmail-alert-crew.py`
|
||||
```bash
|
||||
# View all available triggers
|
||||
crewai triggers list
|
||||
|
||||
Use these samples to validate your parsing logic locally before wiring the trigger to your live Gmail accounts.
|
||||
# Simulate a Gmail trigger with realistic payload
|
||||
crewai triggers run gmail/new_email
|
||||
```
|
||||
|
||||
The `crewai triggers run` command will execute your crew with a complete Gmail payload, allowing you to test your parsing logic before deployment.
|
||||
|
||||
<Warning>
|
||||
Use `crewai triggers run gmail/new_email` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
|
||||
</Warning>
|
||||
|
||||
## Monitoring Executions
|
||||
|
||||
@@ -70,16 +79,10 @@ Track history and performance of triggered runs:
|
||||
<img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
|
||||
</Frame>
|
||||
|
||||
## Payload Reference
|
||||
|
||||
See the sample payloads and field descriptions:
|
||||
|
||||
<Card title="Gmail samples in Trigger Examples Repo" href="https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/gmail" icon="envelopes-bulk">
|
||||
Gmail samples in Trigger Examples Repo
|
||||
</Card>
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
- Ensure Gmail is connected in Tools & Integrations
|
||||
- Verify the Gmail Trigger is enabled on the Triggers tab
|
||||
- Test locally with `crewai triggers run gmail/new_email` to see the exact payload structure
|
||||
- Check the execution logs and confirm the payload is passed as `crewai_trigger_payload`
|
||||
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution
|
||||
|
||||
@@ -15,7 +15,7 @@ Use the Google Calendar trigger to launch automations whenever calendar events c
|
||||
|
||||
## Enabling the Google Calendar Trigger
|
||||
|
||||
1. Open your deployment in CrewAI Enterprise
|
||||
1. Open your deployment in CrewAI AMP
|
||||
2. Go to the **Triggers** tab
|
||||
3. Locate **Google Calendar** and switch the toggle to enable
|
||||
|
||||
@@ -39,16 +39,23 @@ print(result.raw)
|
||||
|
||||
Use `crewai_trigger_payload` exactly as it is delivered by the trigger so the crew can extract the proper fields.
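
For local experiments, a minimal passthrough looks like the sketch below. The payload literal is a stand-in; use `crewai triggers run google_calendar/event_changed` to see the real structure, and note that a default LLM (for example via `OPENAI_API_KEY`) is assumed to be configured.

```python
from crewai import Agent, Task, Crew

analyst = Agent(
    role="Calendar Analyst",
    goal="Summarize calendar changes",
    backstory="Turns raw calendar events into short updates.",
)
summarize = Task(
    description="Summarize the calendar change in {crewai_trigger_payload}.",
    expected_output="A one-paragraph summary of the change.",
    agent=analyst,
)
crew = Crew(agents=[analyst], tasks=[summarize])

# Placeholder payload - inspect the real structure with `crewai triggers run`.
sample_payload = {"summary": "Quarterly planning", "status": "confirmed"}
result = crew.kickoff(inputs={"crewai_trigger_payload": sample_payload})
print(result.raw)
```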
|
||||
|
||||
## Sample payloads & crews
|
||||
## Testing Locally
|
||||
|
||||
The [Google Calendar examples](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/google_calendar) show how to handle multiple event types:
|
||||
Test your Google Calendar trigger integration locally using the CrewAI CLI:
|
||||
|
||||
- `new-event.json` → standard event creation handled by `calendar-event-crew.py`
|
||||
- `event-updated.json` / `event-started.json` / `event-ended.json` → in-flight updates processed by `calendar-meeting-crew.py`
|
||||
- `event-canceled.json` → cancellation workflow that alerts attendees via `calendar-meeting-crew.py`
|
||||
- Working location events use `calendar-working-location-crew.py` to extract on-site schedules
|
||||
```bash
|
||||
# View all available triggers
|
||||
crewai triggers list
|
||||
|
||||
Each crew transforms raw event metadata (attendees, rooms, working locations) into the summaries your teams need.
|
||||
# Simulate a Google Calendar trigger with realistic payload
|
||||
crewai triggers run google_calendar/event_changed
|
||||
```
|
||||
|
||||
The `crewai triggers run` command will execute your crew with a complete Calendar payload, allowing you to test your parsing logic before deployment.
|
||||
|
||||
<Warning>
|
||||
Use `crewai triggers run google_calendar/event_changed` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
|
||||
</Warning>
|
||||
|
||||
## Monitoring Executions
|
||||
|
||||
@@ -61,5 +68,7 @@ The **Executions** list in the deployment dashboard tracks every triggered run a
|
||||
## Troubleshooting
|
||||
|
||||
- Ensure the correct Google account is connected and the trigger is enabled
|
||||
- Test locally with `crewai triggers run google_calendar/event_changed` to see the exact payload structure
|
||||
- Confirm your workflow handles all-day events (payloads use `start.date` and `end.date` instead of timestamps)
|
||||
- Check execution logs if reminders or attendee arrays are missing—calendar permissions can limit fields in the payload
|
||||
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution
|
||||
|
||||
@@ -15,7 +15,7 @@ Trigger your automations when files are created, updated, or removed in Google D
|
||||
|
||||
## Enabling the Google Drive Trigger
|
||||
|
||||
1. Open your deployment in CrewAI Enterprise
|
||||
1. Open your deployment in CrewAI AMP
|
||||
2. Go to the **Triggers** tab
|
||||
3. Locate **Google Drive** and switch the toggle to enable
|
||||
|
||||
@@ -36,15 +36,23 @@ crew.kickoff({
|
||||
})
|
||||
```
|
||||
|
||||
## Sample payloads & crews
|
||||
## Testing Locally
|
||||
|
||||
Explore the [Google Drive examples](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/google_drive) to cover different operations:
|
||||
Test your Google Drive trigger integration locally using the CrewAI CLI:
|
||||
|
||||
- `new-file.json` → new uploads processed by `drive-file-crew.py`
|
||||
- `updated-file.json` → file edits and metadata changes handled by `drive-file-crew.py`
|
||||
- `deleted-file.json` → deletion events routed through `drive-file-deletion-crew.py`
|
||||
```bash
|
||||
# View all available triggers
|
||||
crewai triggers list
|
||||
|
||||
Each crew highlights the file name, operation type, owner, permissions, and security considerations so downstream systems can respond appropriately.
|
||||
# Simulate a Google Drive trigger with realistic payload
|
||||
crewai triggers run google_drive/file_changed
|
||||
```
|
||||
|
||||
The `crewai triggers run` command will execute your crew with a complete Drive payload, allowing you to test your parsing logic before deployment.
|
||||
|
||||
<Warning>
|
||||
Use `crewai triggers run google_drive/file_changed` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
|
||||
</Warning>
|
||||
|
||||
## Monitoring Executions
|
||||
|
||||
@@ -57,5 +65,7 @@ Track history and performance of triggered runs with the **Executions** list in
|
||||
## Troubleshooting
|
||||
|
||||
- Verify Google Drive is connected and the trigger toggle is enabled
|
||||
- Test locally with `crewai triggers run google_drive/file_changed` to see the exact payload structure
|
||||
- If a payload is missing permission data, ensure the connected account has access to the file or folder
|
||||
- The trigger sends file IDs only; use the Drive API if you need to fetch binary content during the crew run
|
||||
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution
|
||||
|
||||
@@ -5,22 +5,22 @@ icon: "hubspot"
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
This guide provides a step-by-step process to set up HubSpot triggers for CrewAI Enterprise, enabling you to initiate crews directly from HubSpot Workflows.
|
||||
This guide provides a step-by-step process to set up HubSpot triggers for CrewAI AMP, enabling you to initiate crews directly from HubSpot Workflows.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- A CrewAI Enterprise account
|
||||
- A CrewAI AMP account
|
||||
- A HubSpot account with the [HubSpot Workflows](https://knowledge.hubspot.com/workflows/create-workflows) feature
|
||||
|
||||
## Setup Steps
|
||||
|
||||
<Steps>
|
||||
<Step title="Connect your HubSpot account with CrewAI Enterprise">
|
||||
- Log in to your `CrewAI Enterprise account > Triggers`
|
||||
<Step title="Connect your HubSpot account with CrewAI AMP">
|
||||
- Log in to your `CrewAI AMP account > Triggers`
|
||||
- Select `HubSpot` from the list of available triggers
|
||||
- Choose the HubSpot account you want to connect with CrewAI Enterprise
|
||||
- Follow the on-screen prompts to authorize CrewAI Enterprise access to your HubSpot account
|
||||
- A confirmation message will appear once HubSpot is successfully connected with CrewAI Enterprise
|
||||
- Choose the HubSpot account you want to connect with CrewAI AMP
|
||||
- Follow the on-screen prompts to authorize CrewAI AMP access to your HubSpot account
|
||||
- A confirmation message will appear once HubSpot is successfully connected with CrewAI AMP
|
||||
</Step>
|
||||
<Step title="Create a HubSpot Workflow">
|
||||
- Log in to your `HubSpot account > Automations > Workflows > New workflow`
|
||||
@@ -49,16 +49,4 @@ This guide provides a step-by-step process to set up HubSpot triggers for CrewAI
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
## Additional Resources
|
||||
|
||||
### Sample payloads & crews
|
||||
|
||||
You can jump-start development with the [HubSpot examples in the trigger repository](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/hubspot):
|
||||
|
||||
- `record-created-contact.json`, `record-updated-contact.json` → contact lifecycle events handled by `hubspot-contact-crew.py`
|
||||
- `record-created-company.json`, `record-updated-company.json` → company enrichment flows in `hubspot-company-crew.py`
|
||||
- `record-created-deals.json`, `record-updated-deals.json` → deal pipeline automation in `hubspot-record-crew.py`
|
||||
|
||||
Each crew demonstrates how to parse HubSpot record fields, enrich context, and return structured insights.
|
||||
|
||||
For more detailed information on available actions and customization options, refer to the [HubSpot Workflows Documentation](https://knowledge.hubspot.com/workflows/create-workflows).
|
||||
|
||||
@@ -40,6 +40,28 @@ Human-In-The-Loop (HITL) is a powerful approach that combines artificial intelli
|
||||
<Frame>
|
||||
<img src="/images/enterprise/crew-resume-endpoint.png" alt="Crew Resume Endpoint" />
|
||||
</Frame>
|
||||
|
||||
<Warning>
|
||||
**Critical: Webhook URLs Must Be Provided Again**:
|
||||
You **must** provide the same webhook URLs (`taskWebhookUrl`, `stepWebhookUrl`, `crewWebhookUrl`) in the resume call that you used in the kickoff call. Webhook configurations are **NOT** automatically carried over from kickoff - they must be explicitly included in the resume request to continue receiving notifications for task completion, agent steps, and crew completion.
|
||||
</Warning>
|
||||
|
||||
Example resume call with webhooks:
|
||||
```bash
|
||||
curl -X POST {BASE_URL}/resume \
|
||||
-H "Authorization: Bearer YOUR_API_TOKEN" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"execution_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
|
||||
"task_id": "research_task",
|
||||
"human_feedback": "Great work! Please add more details.",
|
||||
"is_approve": true,
|
||||
"taskWebhookUrl": "https://your-server.com/webhooks/task",
|
||||
"stepWebhookUrl": "https://your-server.com/webhooks/step",
|
||||
"crewWebhookUrl": "https://your-server.com/webhooks/crew"
|
||||
}'
|
||||
```
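
If you drive HITL from Python instead of curl, the resume call can mirror the request above. This sketch only restates that request; `BASE_URL`, the token, and the IDs are placeholders for your deployment's values.

```python
import requests

BASE_URL = "https://your-crew-url.crewai.com"  # your deployment's base URL
TOKEN = "YOUR_API_TOKEN"

payload = {
    "execution_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
    "task_id": "research_task",
    "human_feedback": "Great work! Please add more details.",
    "is_approve": True,
    # Webhooks must be re-sent on resume - they are not carried over from kickoff.
    "taskWebhookUrl": "https://your-server.com/webhooks/task",
    "stepWebhookUrl": "https://your-server.com/webhooks/step",
    "crewWebhookUrl": "https://your-server.com/webhooks/crew",
}

resp = requests.post(
    f"{BASE_URL}/resume",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
```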
|
||||
|
||||
<Warning>
|
||||
**Feedback Impact on Task Execution**:
|
||||
It's crucial to exercise care when providing feedback, as the entire feedback content will be incorporated as additional context for further task executions.
|
||||
@@ -76,4 +98,4 @@ HITL workflows are particularly valuable for:
|
||||
- Complex decision-making scenarios
|
||||
- Sensitive or high-stakes operations
|
||||
- Creative tasks requiring human judgment
|
||||
- Compliance and regulatory reviews
|
||||
|
||||
@@ -1,19 +1,19 @@
|
||||
---
|
||||
title: "Kickoff Crew"
|
||||
description: "Kickoff a Crew on CrewAI Enterprise"
|
||||
description: "Kickoff a Crew on CrewAI AMP"
|
||||
icon: "flag-checkered"
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
Once you've deployed your crew to the CrewAI Enterprise platform, you can kickoff executions through the web interface or the API. This guide covers both approaches.
|
||||
Once you've deployed your crew to the CrewAI AMP platform, you can kickoff executions through the web interface or the API. This guide covers both approaches.
|
||||
|
||||
## Method 1: Using the Web Interface
|
||||
|
||||
### Step 1: Navigate to Your Deployed Crew
|
||||
|
||||
1. Log in to [CrewAI Enterprise](https://app.crewai.com)
|
||||
1. Log in to [CrewAI AMP](https://app.crewai.com)
|
||||
2. Click on the crew name from your projects list
|
||||
3. You'll be taken to the crew's detail page
|
||||
|
||||
@@ -83,7 +83,7 @@ Once execution is complete:
|
||||
|
||||
## Method 2: Using the API
|
||||
|
||||
You can also kickoff crews programmatically using the CrewAI Enterprise REST API.
|
||||
You can also kickoff crews programmatically using the CrewAI AMP REST API.
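
As a rough sketch of a programmatic kickoff (the `/kickoff` path, bearer-token header, and `inputs` shape shown here are assumptions based on the resume example elsewhere in these docs; confirm them against your deployment's API reference):

```python
import requests

BASE_URL = "https://your-crew-url.crewai.com"  # your deployment's base URL
TOKEN = "YOUR_API_TOKEN"                       # see Authentication below

resp = requests.post(
    f"{BASE_URL}/kickoff",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"inputs": {"topic": "AI developments"}},  # assumed input shape
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # typically returns an execution identifier to poll or resume
```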
|
||||
|
||||
### Authentication
|
||||
|
||||
@@ -184,4 +184,3 @@ If an execution fails:
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with execution issues or questions about the Enterprise platform.
|
||||
</Card>
|
||||
|
||||
|
||||