Compare commits
181 commits: 0.193.0...devin/1766

Commit SHA1s (author and date columns were empty in this export):
029eedfddb, be70a04153, 0c359f4df8, fe288dbe73, dc63bc2319, 8d0effafec, 1cdbe79b34, 84328d9311, 88d3c0fa97, 75ff7dce0c,
38b0b125d3, 9bd8ad51f7, 0632a054ca, feec6b440e, e43c7debbd, 8ef9fe2cab, 807f97114f, bdafe0fac7, 8e99d490b0, 34b909367b,
22684b513e, 3e3b9df761, 177294f588, beef712646, 6125b866fd, f2f994612c, 7fff2b654c, 34e09162ba, 24d1fad7ab, 9b8f31fa07,
d898d7c02c, f04c40babf, c456e5c5fa, 633e279b51, a25778974d, 09f1ba6956, 20704742e2, 59180e9c9f, 3ce019b07b, 2355ec0733,
c925d2d519, bc4e6a3127, 37526c693b, c59173a762, 4d8eec96e8, 2025a26fc3, bed9a3847a, 5239dc9859, 52444ad390, f070595e65,
69c5eace2d, d88ac338d5, 4ae8c36815, b049b73f2e, d2b9c54931, a928cde6ee, 9c84475691, f3c5d1e351, a978267fa2, b759654e7d,
9da1f0c0aa, a559cedbd1, bcc3e358cb, d160f0874a, 9fcf55198f, f46a846ddc, b546982690, d7bdac12a2, 528d812263, ffd717c51a,
fbe4aa4bd1, c205d2e8de, fcb5b19b2e, 01f0111d52, 6b52587c67, 629f7f34ce, 0f1c173d02, 19c5b9a35e, 1ed307b58c, d29867bbb6,
b2c278ed22, f6aed9798b, 40a2d387a1, 6f36d7003b, 9e5906c52f, fc521839e4, e4cc9a664c, 7e6171d5bc, 61ad1fb112, 54710a8711,
5abf976373, 329567153b, 60332e0b19, 40932af3fa, e134e5305b, e229ef4e19, 2e9eb8c32d, 4ebb5114ed, 70b083945f, 410db1ff39,
5d6b4c922b, b07c0fc45c, 97853199c7, 494ed7e671, a83c57a2f2, 08e15ab267, 9728388ea7, 4371cf5690, d28daa26cd, a850813f2b,
5944a39629, c594859ed0, 2ee27efca7, f6e13eb890, e7b3ce27ca, dba27cf8b5, 6469f224f6, f3a63be215, 01d8c189f0, cc83c1ead5,
7578901f6d, d1343b96ed, 42f2b4d551, 0229390ad1, f0fb349ddf, bf2e2a42da, 814c962196, 2ebb2e845f, 7b550ebfe8, 29919c2d81,
b71c88814f, cb8bcfe214, 13a514f8be, 316b1cea69, 6f2e39c0dd, 8d93361cb3, 54ec245d84, f589ab9b80, fadb59e0f0, 1a60848425,
0135163040, dac5d6d664, f0f94f2540, bf9e0423f2, f47e0c82c4, eabced321c, b77074e48e, 7d5cd4d3e2, 73e932bfee, 12fa7e2ff1,
091d1267d8, b5b10a8cde, 2485ed93d6, ce5ea9be6f, e070c1400c, 6537e3737d, 346faf229f, a0b757a12c, 1dbe8aab52, 4ac65eb0a6,
3e97393f58, 34bed359a6, feeed505bb, cb0efd05b4, db5f565dea, 58413b663a, 37636f0dd7, 0e370593f1, aa8dc9d77f, 9c1096dbdc,
47044450c0, 0ee438c39d, cbb9965bf7, 4951d30dd9, 7426969736, d879be8b66, 24b84a4b68, 8e571ea8a7, 2cfc4d37b8, f4abc41235,
de5d3c3ad1
[50 image assets deleted in this comparison; individual sizes range from 17 KiB to 55 KiB. Filenames and dimensions were not preserved in this view.]
.env.test (new file, 161 lines added)
@@ -0,0 +1,161 @@
# =============================================================================
# Test Environment Variables
# =============================================================================
# This file contains all environment variables needed to run tests locally
# in a way that mimics the GitHub Actions CI environment.

# =============================================================================

# -----------------------------------------------------------------------------
# LLM Provider API Keys
# -----------------------------------------------------------------------------
OPENAI_API_KEY=fake-api-key
ANTHROPIC_API_KEY=fake-anthropic-key
GEMINI_API_KEY=fake-gemini-key
AZURE_API_KEY=fake-azure-key
OPENROUTER_API_KEY=fake-openrouter-key

# -----------------------------------------------------------------------------
# AWS Credentials
# -----------------------------------------------------------------------------
AWS_ACCESS_KEY_ID=fake-aws-access-key
AWS_SECRET_ACCESS_KEY=fake-aws-secret-key
AWS_DEFAULT_REGION=us-east-1
AWS_REGION_NAME=us-east-1

# -----------------------------------------------------------------------------
# Azure OpenAI Configuration
# -----------------------------------------------------------------------------
AZURE_ENDPOINT=https://fake-azure-endpoint.openai.azure.com
AZURE_OPENAI_ENDPOINT=https://fake-azure-endpoint.openai.azure.com
AZURE_OPENAI_API_KEY=fake-azure-openai-key
AZURE_API_VERSION=2024-02-15-preview
OPENAI_API_VERSION=2024-02-15-preview

# -----------------------------------------------------------------------------
# Google Cloud Configuration
# -----------------------------------------------------------------------------
#GOOGLE_CLOUD_PROJECT=fake-gcp-project
#GOOGLE_CLOUD_LOCATION=us-central1

# -----------------------------------------------------------------------------
# OpenAI Configuration
# -----------------------------------------------------------------------------
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_API_BASE=https://api.openai.com/v1

# -----------------------------------------------------------------------------
# Search & Scraping Tool API Keys
# -----------------------------------------------------------------------------
SERPER_API_KEY=fake-serper-key
EXA_API_KEY=fake-exa-key
BRAVE_API_KEY=fake-brave-key
FIRECRAWL_API_KEY=fake-firecrawl-key
TAVILY_API_KEY=fake-tavily-key
SERPAPI_API_KEY=fake-serpapi-key
SERPLY_API_KEY=fake-serply-key
LINKUP_API_KEY=fake-linkup-key
PARALLEL_API_KEY=fake-parallel-key

# -----------------------------------------------------------------------------
# Exa Configuration
# -----------------------------------------------------------------------------
EXA_BASE_URL=https://api.exa.ai

# -----------------------------------------------------------------------------
# Web Scraping & Automation
# -----------------------------------------------------------------------------
BRIGHT_DATA_API_KEY=fake-brightdata-key
BRIGHT_DATA_ZONE=fake-zone
BRIGHTDATA_API_URL=https://api.brightdata.com
BRIGHTDATA_DEFAULT_TIMEOUT=600
BRIGHTDATA_DEFAULT_POLLING_INTERVAL=1

OXYLABS_USERNAME=fake-oxylabs-user
OXYLABS_PASSWORD=fake-oxylabs-pass

SCRAPFLY_API_KEY=fake-scrapfly-key
SCRAPEGRAPH_API_KEY=fake-scrapegraph-key

BROWSERBASE_API_KEY=fake-browserbase-key
BROWSERBASE_PROJECT_ID=fake-browserbase-project

HYPERBROWSER_API_KEY=fake-hyperbrowser-key
MULTION_API_KEY=fake-multion-key
APIFY_API_TOKEN=fake-apify-token

# -----------------------------------------------------------------------------
# Database & Vector Store Credentials
# -----------------------------------------------------------------------------
SINGLESTOREDB_URL=mysql://fake:fake@localhost:3306/fake
SINGLESTOREDB_HOST=localhost
SINGLESTOREDB_PORT=3306
SINGLESTOREDB_USER=fake-user
SINGLESTOREDB_PASSWORD=fake-password
SINGLESTOREDB_DATABASE=fake-database
SINGLESTOREDB_CONNECT_TIMEOUT=30

SNOWFLAKE_USER=fake-snowflake-user
SNOWFLAKE_PASSWORD=fake-snowflake-password
SNOWFLAKE_ACCOUNT=fake-snowflake-account
SNOWFLAKE_WAREHOUSE=fake-snowflake-warehouse
SNOWFLAKE_DATABASE=fake-snowflake-database
SNOWFLAKE_SCHEMA=fake-snowflake-schema

WEAVIATE_URL=http://localhost:8080
WEAVIATE_API_KEY=fake-weaviate-key

EMBEDCHAIN_DB_URI=sqlite:///test.db

# Databricks Credentials
DATABRICKS_HOST=https://fake-databricks.cloud.databricks.com
DATABRICKS_TOKEN=fake-databricks-token
DATABRICKS_CONFIG_PROFILE=fake-profile

# MongoDB Credentials
MONGODB_URI=mongodb://fake:fake@localhost:27017/fake

# -----------------------------------------------------------------------------
# CrewAI Platform & Enterprise
# -----------------------------------------------------------------------------
# setting CREWAI_PLATFORM_INTEGRATION_TOKEN causes these test to fail:
#=========================== short test summary info ============================
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_platform_context_manager_basic_usage - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_context_var_isolation_between_tests - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_multiple_sequential_context_managers - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#CREWAI_PLATFORM_INTEGRATION_TOKEN=fake-platform-token
CREWAI_PERSONAL_ACCESS_TOKEN=fake-personal-token
CREWAI_PLUS_URL=https://fake.crewai.com

# -----------------------------------------------------------------------------
# Other Service API Keys
# -----------------------------------------------------------------------------
ZAPIER_API_KEY=fake-zapier-key
PATRONUS_API_KEY=fake-patronus-key
MINDS_API_KEY=fake-minds-key
HF_TOKEN=fake-hf-token

# -----------------------------------------------------------------------------
# Feature Flags/Testing Modes
# -----------------------------------------------------------------------------
CREWAI_DISABLE_TELEMETRY=true
OTEL_SDK_DISABLED=true
CREWAI_TESTING=true
CREWAI_TRACING_ENABLED=false

# -----------------------------------------------------------------------------
# Testing/CI Configuration
# -----------------------------------------------------------------------------
# VCR recording mode: "none" (default), "new_episodes", "all", "once"
PYTEST_VCR_RECORD_MODE=none

# Set to "true" by GitHub when running in GitHub Actions
# GITHUB_ACTIONS=false

# -----------------------------------------------------------------------------
# Python Configuration
# -----------------------------------------------------------------------------
PYTHONUNBUFFERED=1
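A minimal sketch (not part of the diff) of how these fake credentials can be loaded for a local test session; it mirrors what the workspace conftest.py shown later in this comparison does, and assumes python-dotenv is installed:

```python
from pathlib import Path

from dotenv import load_dotenv

# Load the fake CI-style credentials first, then let a real local .env
# (if present) override them, matching the conftest.py ordering.
load_dotenv(Path(".env.test"), override=True)
load_dotenv(override=True)
```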
.github/codeql/codeql-config.yml (vendored, new file, 28 lines added)
@@ -0,0 +1,28 @@
name: "CodeQL Config"

paths-ignore:
  # Ignore template files - these are boilerplate code that shouldn't be analyzed
  - "lib/crewai/src/crewai/cli/templates/**"
  # Ignore test cassettes - these are test fixtures/recordings
  - "lib/crewai/tests/cassettes/**"
  - "lib/crewai-tools/tests/cassettes/**"
  # Ignore cache and build artifacts
  - ".cache/**"
  # Ignore documentation build artifacts
  - "docs/.cache/**"
  # Ignore experimental code
  - "lib/crewai/src/crewai/experimental/a2a/**"

paths:
  # Include all Python source code from workspace packages
  - "lib/crewai/src/**"
  - "lib/crewai-tools/src/**"
  - "lib/devtools/src/**"
  # Include tests (but exclude cassettes via paths-ignore)
  - "lib/crewai/tests/**"
  - "lib/crewai-tools/tests/**"
  - "lib/devtools/tests/**"

# Configure specific queries or packs if needed
# queries:
#   - uses: security-and-quality
.github/dependabot.yml (vendored, new file, 11 lines added)
@@ -0,0 +1,11 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file

version: 2
updates:
  - package-ecosystem: uv # See documentation for possible values
    directory: "/" # Location of package manifests
    schedule:
      interval: "weekly"
.github/security.md (vendored, 63 lines changed)
@@ -1,27 +1,50 @@
## CrewAI Security Vulnerability Reporting Policy
## CrewAI Security Policy

CrewAI prioritizes the security of our software products, services, and GitHub repositories. To promptly address vulnerabilities, follow these steps for reporting security issues:
We are committed to protecting the confidentiality, integrity, and availability of the CrewAI ecosystem. This policy explains how to report potential vulnerabilities and what you can expect from us when you do.

### Reporting Process
Do **not** report vulnerabilities via public GitHub issues.
### Scope

Email all vulnerability reports directly to:
**security@crewai.com**
We welcome reports for vulnerabilities that could impact:

### Required Information
To help us quickly validate and remediate the issue, your report must include:
- CrewAI-maintained source code and repositories
- CrewAI-operated infrastructure and services
- Official CrewAI releases, packages, and distributions

- **Vulnerability Type:** Clearly state the vulnerability type (e.g., SQL injection, XSS, privilege escalation).
- **Affected Source Code:** Provide full file paths and direct URLs (branch, tag, or commit).
- **Reproduction Steps:** Include detailed, step-by-step instructions. Screenshots are recommended.
- **Special Configuration:** Document any special settings or configurations required to reproduce.
- **Proof-of-Concept (PoC):** Provide exploit or PoC code (if available).
- **Impact Assessment:** Clearly explain the severity and potential exploitation scenarios.
Issues affecting clearly unaffiliated third-party services or user-generated content are out of scope, unless you can demonstrate a direct impact on CrewAI systems or customers.

### Our Response
- We will acknowledge receipt of your report promptly via your provided email.
- Confirmed vulnerabilities will receive priority remediation based on severity.
- Patches will be released as swiftly as possible following verification.
### How to Report

### Reward Notice
Currently, we do not offer a bug bounty program. Rewards, if issued, are discretionary.
- **Please do not** disclose vulnerabilities via public GitHub issues, pull requests, or social media.
- Email detailed reports to **security@crewai.com** with the subject line `Security Report`.
- If you need to share large files or sensitive artifacts, mention it in your email and we will coordinate a secure transfer method.

### What to Include

Providing comprehensive information enables us to validate the issue quickly:

- **Vulnerability overview** — a concise description and classification (e.g., RCE, privilege escalation)
- **Affected components** — repository, branch, tag, or deployed service along with relevant file paths or endpoints
- **Reproduction steps** — detailed, step-by-step instructions; include logs, screenshots, or screen recordings when helpful
- **Proof-of-concept** — exploit details or code that demonstrates the impact (if available)
- **Impact analysis** — severity assessment, potential exploitation scenarios, and any prerequisites or special configurations

### Our Commitment

- **Acknowledgement:** We aim to acknowledge your report within two business days.
- **Communication:** We will keep you informed about triage results, remediation progress, and planned release timelines.
- **Resolution:** Confirmed vulnerabilities will be prioritized based on severity and fixed as quickly as possible.
- **Recognition:** We currently do not run a bug bounty program; any rewards or recognition are issued at CrewAI's discretion.

### Coordinated Disclosure

We ask that you allow us a reasonable window to investigate and remediate confirmed issues before any public disclosure. We will coordinate publication timelines with you whenever possible.

### Safe Harbor

We will not pursue or support legal action against individuals who, in good faith:

- Follow this policy and refrain from violating any applicable laws
- Avoid privacy violations, data destruction, or service disruption
- Limit testing to systems in scope and respect rate limits and terms of service

If you are unsure whether your testing is covered, please contact us at **security@crewai.com** before proceeding.
.github/workflows/build-uv-cache.yml (vendored, 2 lines changed)
@@ -7,6 +7,8 @@ on:
paths:
- "uv.lock"
- "pyproject.toml"
schedule:
- cron: "0 0 */5 * *" # Run every 5 days at midnight UTC to prevent cache expiration
workflow_dispatch:

permissions:
.github/workflows/codeql.yml (vendored, 5 lines changed)
@@ -15,11 +15,11 @@ on:
push:
branches: [ "main" ]
paths-ignore:
- "src/crewai/cli/templates/**"
- "lib/crewai/src/crewai/cli/templates/**"
pull_request:
branches: [ "main" ]
paths-ignore:
- "src/crewai/cli/templates/**"
- "lib/crewai/src/crewai/cli/templates/**"

jobs:
analyze:
@@ -73,6 +73,7 @@ jobs:
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
config-file: ./.github/codeql/codeql-config.yml
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
.github/workflows/docs-broken-links.yml (vendored, new file, 35 lines added)
@@ -0,0 +1,35 @@
name: Check Documentation Broken Links

on:
  pull_request:
    paths:
      - "docs/**"
      - "docs.json"
  push:
    branches:
      - main
    paths:
      - "docs/**"
      - "docs.json"
  workflow_dispatch:

jobs:
  check-links:
    name: Check broken links
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Node
        uses: actions/setup-node@v4
        with:
          node-version: "latest"

      - name: Install Mintlify CLI
        run: npm i -g mintlify

      - name: Run broken link checker
        run: |
          # Auto-answer the prompt with yes command
          yes "" | mintlify broken-links || test $? -eq 141
        working-directory: ./docs
.github/workflows/linter.yml (vendored, 9 lines changed)
@@ -52,10 +52,11 @@ jobs:
- name: Run Ruff on Changed Files
if: ${{ steps.changed-files.outputs.files != '' }}
run: |
echo "${{ steps.changed-files.outputs.files }}" \
| tr ' ' '\n' \
| grep -v 'src/crewai/cli/templates/' \
| xargs -I{} uv run ruff check "{}"
echo "${{ steps.changed-files.outputs.files }}" \
| tr ' ' '\n' \
| grep -v 'src/crewai/cli/templates/' \
| grep -v '/tests/' \
| xargs -I{} uv run ruff check "{}"

- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
.github/workflows/publish.yml (vendored, new file, 100 lines added)
@@ -0,0 +1,100 @@
name: Publish to PyPI

on:
  repository_dispatch:
    types: [deployment-tests-passed]
  workflow_dispatch:
    inputs:
      release_tag:
        description: 'Release tag to publish'
        required: false
        type: string

jobs:
  build:
    name: Build packages
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - name: Determine release tag
        id: release
        run: |
          # Priority: workflow_dispatch input > repository_dispatch payload > default branch
          if [ -n "${{ inputs.release_tag }}" ]; then
            echo "tag=${{ inputs.release_tag }}" >> $GITHUB_OUTPUT
          elif [ -n "${{ github.event.client_payload.release_tag }}" ]; then
            echo "tag=${{ github.event.client_payload.release_tag }}" >> $GITHUB_OUTPUT
          else
            echo "tag=" >> $GITHUB_OUTPUT
          fi

      - uses: actions/checkout@v4
        with:
          ref: ${{ steps.release.outputs.tag || github.ref }}

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install uv
        uses: astral-sh/setup-uv@v4

      - name: Build packages
        run: |
          uv build --all-packages
          rm dist/.gitignore

      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/

  publish:
    name: Publish to PyPI
    needs: build
    runs-on: ubuntu-latest
    environment:
      name: pypi
      url: https://pypi.org/p/crewai
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v4

      - name: Install uv
        uses: astral-sh/setup-uv@v6
        with:
          version: "0.8.4"
          python-version: "3.12"
          enable-cache: false

      - name: Download artifacts
        uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist

      - name: Publish to PyPI
        env:
          UV_PUBLISH_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
        run: |
          failed=0
          for package in dist/*; do
            if [[ "$package" == *"crewai_devtools"* ]]; then
              echo "Skipping private package: $package"
              continue
            fi
            echo "Publishing $package"
            if ! uv publish "$package"; then
              echo "Failed to publish $package"
              failed=1
            fi
          done
          if [ $failed -eq 1 ]; then
            echo "Some packages failed to publish"
            exit 1
          fi
.github/workflows/tests.yml (vendored, 27 lines changed)
@@ -5,10 +5,6 @@ on: [pull_request]
permissions:
contents: read

env:
OPENAI_API_KEY: fake-api-key
PYTHONUNBUFFERED: 1

jobs:
tests:
name: tests (${{ matrix.python-version }})
@@ -56,13 +52,13 @@ jobs:
- name: Run tests (group ${{ matrix.group }} of 8)
run: |
PYTHON_VERSION_SAFE=$(echo "${{ matrix.python-version }}" | tr '.' '_')
DURATION_FILE=".test_durations_py${PYTHON_VERSION_SAFE}"

DURATION_FILE="../../.test_durations_py${PYTHON_VERSION_SAFE}"

# Temporarily always skip cached durations to fix test splitting
# When durations don't match, pytest-split runs duplicate tests instead of splitting
echo "Using even test splitting (duration cache disabled until fix merged)"
DURATIONS_ARG=""

# Original logic (disabled temporarily):
# if [ ! -f "$DURATION_FILE" ]; then
# echo "No cached durations found, tests will be split evenly"
@@ -74,18 +70,25 @@
# echo "No test changes detected, using cached test durations for optimal splitting"
# DURATIONS_ARG="--durations-path=${DURATION_FILE}"
# fi

uv run pytest \
--block-network \
--timeout=30 \

cd lib/crewai && uv run pytest \
-vv \
--splits 8 \
--group ${{ matrix.group }} \
$DURATIONS_ARG \
--durations=10 \
-n auto \
--maxfail=3

- name: Run tool tests (group ${{ matrix.group }} of 8)
run: |
cd lib/crewai-tools && uv run pytest \
-vv \
--splits 8 \
--group ${{ matrix.group }} \
--durations=10 \
--maxfail=3

- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
.github/workflows/trigger-deployment-tests.yml (vendored, new file, 18 lines added)
@@ -0,0 +1,18 @@
name: Trigger Deployment Tests

on:
  release:
    types: [published]

jobs:
  trigger:
    name: Trigger deployment tests
    runs-on: ubuntu-latest
    steps:
      - name: Trigger deployment tests
        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.CREWAI_DEPLOYMENTS_PAT }}
          repository: ${{ secrets.CREWAI_DEPLOYMENTS_REPOSITORY }}
          event-type: crewai-release
          client-payload: '{"release_tag": "${{ github.event.release.tag_name }}", "release_name": "${{ github.event.release.name }}"}'
.gitignore (vendored, 1 line changed)
@@ -2,7 +2,6 @@
.pytest_cache
__pycache__
dist/
lib/
.env
assets/*
.idea
@@ -3,17 +3,31 @@ repos:
hooks:
- id: ruff
name: ruff
entry: uv run ruff check
entry: bash -c 'source .venv/bin/activate && uv run ruff check --config pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
- id: ruff-format
name: ruff-format
entry: uv run ruff format
entry: bash -c 'source .venv/bin/activate && uv run ruff format --config pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
- id: mypy
name: mypy
entry: uv run mypy
entry: bash -c 'source .venv/bin/activate && uv run mypy --config-file pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
exclude: ^tests/
exclude: ^(lib/crewai/src/crewai/cli/templates/|lib/crewai/tests/|lib/crewai-tools/tests/)
- repo: https://github.com/astral-sh/uv-pre-commit
rev: 0.9.3
hooks:
- id: uv-lock
- repo: https://github.com/commitizen-tools/commitizen
rev: v4.10.1
hooks:
- id: commitizen
- id: commitizen-branch
stages: [ pre-push ]
README.md (28 lines changed)
@@ -62,9 +62,9 @@
With over 100,000 developers certified through our community courses at [learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
standard for enterprise-ready AI automation.

# CrewAI Enterprise Suite
# CrewAI AOP Suite

CrewAI Enterprise Suite is a comprehensive bundle tailored for organizations that require secure, scalable, and easy-to-manage agent-driven automation.
CrewAI AOP Suite is a comprehensive bundle tailored for organizations that require secure, scalable, and easy-to-manage agent-driven automation.

You can try one part of the suite the [Crew Control Plane for free](https://app.crewai.com)

@@ -76,9 +76,9 @@ You can try one part of the suite the [Crew Control Plane for free](https://app.
- **Advanced Security**: Built-in robust security and compliance measures ensuring safe deployment and management.
- **Actionable Insights**: Real-time analytics and reporting to optimize performance and decision-making.
- **24/7 Support**: Dedicated enterprise support to ensure uninterrupted operation and quick resolution of issues.
- **On-premise and Cloud Deployment Options**: Deploy CrewAI Enterprise on-premise or in the cloud, depending on your security and compliance requirements.
- **On-premise and Cloud Deployment Options**: Deploy CrewAI AOP on-premise or in the cloud, depending on your security and compliance requirements.

CrewAI Enterprise is designed for enterprises seeking a powerful, reliable solution to transform complex business processes into efficient,
CrewAI AOP is designed for enterprises seeking a powerful, reliable solution to transform complex business processes into efficient,
intelligent automations.

## Table of contents

@@ -674,9 +674,9 @@ CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/b
### Enterprise Features

- [What additional features does CrewAI Enterprise offer?](#q-what-additional-features-does-crewai-enterprise-offer)
- [Is CrewAI Enterprise available for cloud and on-premise deployments?](#q-is-crewai-enterprise-available-for-cloud-and-on-premise-deployments)
- [Can I try CrewAI Enterprise for free?](#q-can-i-try-crewai-enterprise-for-free)
- [What additional features does CrewAI AOP offer?](#q-what-additional-features-does-crewai-amp-offer)
- [Is CrewAI AOP available for cloud and on-premise deployments?](#q-is-crewai-amp-available-for-cloud-and-on-premise-deployments)
- [Can I try CrewAI AOP for free?](#q-can-i-try-crewai-amp-for-free)

### Q: What exactly is CrewAI?

@@ -732,17 +732,17 @@ A: Check out practical examples in the [CrewAI-examples repository](https://gith

A: Contributions are warmly welcomed! Fork the repository, create your branch, implement your changes, and submit a pull request. See the Contribution section of the README for detailed guidelines.

### Q: What additional features does CrewAI Enterprise offer?
### Q: What additional features does CrewAI AOP offer?

A: CrewAI Enterprise provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.
A: CrewAI AOP provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.

### Q: Is CrewAI Enterprise available for cloud and on-premise deployments?
### Q: Is CrewAI AOP available for cloud and on-premise deployments?

A: Yes, CrewAI Enterprise supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.
A: Yes, CrewAI AOP supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.

### Q: Can I try CrewAI Enterprise for free?
### Q: Can I try CrewAI AOP for free?

A: Yes, you can explore part of the CrewAI Enterprise Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.
A: Yes, you can explore part of the CrewAI AOP Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.

### Q: Does CrewAI support fine-tuning or training custom models?

@@ -762,7 +762,7 @@ A: CrewAI is highly scalable, supporting simple automations and large-scale ente

### Q: Does CrewAI offer debugging and monitoring tools?

A: Yes, CrewAI Enterprise includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.
A: Yes, CrewAI AOP includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.

### Q: What programming languages does CrewAI support?
conftest.py (new file, 197 lines added)
@@ -0,0 +1,197 @@
"""Pytest configuration for crewAI workspace."""

from collections.abc import Generator
import os
from pathlib import Path
import tempfile
from typing import Any

from dotenv import load_dotenv
import pytest
from vcr.request import Request  # type: ignore[import-untyped]


env_test_path = Path(__file__).parent / ".env.test"
load_dotenv(env_test_path, override=True)
load_dotenv(override=True)


@pytest.fixture(autouse=True, scope="function")
def cleanup_event_handlers() -> Generator[None, Any, None]:
    """Clean up event bus handlers after each test to prevent test pollution."""
    yield

    try:
        from crewai.events.event_bus import crewai_event_bus

        with crewai_event_bus._rwlock.w_locked():
            crewai_event_bus._sync_handlers.clear()
            crewai_event_bus._async_handlers.clear()
    except Exception:  # noqa: S110
        pass


@pytest.fixture(autouse=True, scope="function")
def setup_test_environment() -> Generator[None, Any, None]:
    """Setup test environment for crewAI workspace."""
    with tempfile.TemporaryDirectory() as temp_dir:
        storage_dir = Path(temp_dir) / "crewai_test_storage"
        storage_dir.mkdir(parents=True, exist_ok=True)

        if not storage_dir.exists() or not storage_dir.is_dir():
            raise RuntimeError(
                f"Failed to create test storage directory: {storage_dir}"
            )

        try:
            test_file = storage_dir / ".permissions_test"
            test_file.touch()
            test_file.unlink()
        except (OSError, IOError) as e:
            raise RuntimeError(
                f"Test storage directory {storage_dir} is not writable: {e}"
            ) from e

        os.environ["CREWAI_STORAGE_DIR"] = str(storage_dir)
        os.environ["CREWAI_TESTING"] = "true"

        try:
            yield
        finally:
            os.environ.pop("CREWAI_TESTING", "true")
            os.environ.pop("CREWAI_STORAGE_DIR", None)
            os.environ.pop("CREWAI_DISABLE_TELEMETRY", "true")
            os.environ.pop("OTEL_SDK_DISABLED", "true")
            os.environ.pop("OPENAI_BASE_URL", "https://api.openai.com/v1")
            os.environ.pop("OPENAI_API_BASE", "https://api.openai.com/v1")


HEADERS_TO_FILTER = {
    "authorization": "AUTHORIZATION-XXX",
    "content-security-policy": "CSP-FILTERED",
    "cookie": "COOKIE-XXX",
    "set-cookie": "SET-COOKIE-XXX",
    "permissions-policy": "PERMISSIONS-POLICY-XXX",
    "referrer-policy": "REFERRER-POLICY-XXX",
    "strict-transport-security": "STS-XXX",
    "x-content-type-options": "X-CONTENT-TYPE-XXX",
    "x-frame-options": "X-FRAME-OPTIONS-XXX",
    "x-permitted-cross-domain-policies": "X-PERMITTED-XXX",
    "x-request-id": "X-REQUEST-ID-XXX",
    "x-runtime": "X-RUNTIME-XXX",
    "x-xss-protection": "X-XSS-PROTECTION-XXX",
    "x-stainless-arch": "X-STAINLESS-ARCH-XXX",
    "x-stainless-os": "X-STAINLESS-OS-XXX",
    "x-stainless-read-timeout": "X-STAINLESS-READ-TIMEOUT-XXX",
    "cf-ray": "CF-RAY-XXX",
    "etag": "ETAG-XXX",
    "Strict-Transport-Security": "STS-XXX",
    "access-control-expose-headers": "ACCESS-CONTROL-XXX",
    "openai-organization": "OPENAI-ORG-XXX",
    "openai-project": "OPENAI-PROJECT-XXX",
    "x-ratelimit-limit-requests": "X-RATELIMIT-LIMIT-REQUESTS-XXX",
    "x-ratelimit-limit-tokens": "X-RATELIMIT-LIMIT-TOKENS-XXX",
    "x-ratelimit-remaining-requests": "X-RATELIMIT-REMAINING-REQUESTS-XXX",
    "x-ratelimit-remaining-tokens": "X-RATELIMIT-REMAINING-TOKENS-XXX",
    "x-ratelimit-reset-requests": "X-RATELIMIT-RESET-REQUESTS-XXX",
    "x-ratelimit-reset-tokens": "X-RATELIMIT-RESET-TOKENS-XXX",
    "x-goog-api-key": "X-GOOG-API-KEY-XXX",
    "api-key": "X-API-KEY-XXX",
    "User-Agent": "X-USER-AGENT-XXX",
    "apim-request-id:": "X-API-CLIENT-REQUEST-ID-XXX",
    "azureml-model-session": "AZUREML-MODEL-SESSION-XXX",
    "x-ms-client-request-id": "X-MS-CLIENT-REQUEST-ID-XXX",
    "x-ms-region": "X-MS-REGION-XXX",
    "apim-request-id": "APIM-REQUEST-ID-XXX",
    "x-api-key": "X-API-KEY-XXX",
    "anthropic-organization-id": "ANTHROPIC-ORGANIZATION-ID-XXX",
    "request-id": "REQUEST-ID-XXX",
    "anthropic-ratelimit-input-tokens-limit": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-LIMIT-XXX",
    "anthropic-ratelimit-input-tokens-remaining": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-REMAINING-XXX",
    "anthropic-ratelimit-input-tokens-reset": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-RESET-XXX",
    "anthropic-ratelimit-output-tokens-limit": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-LIMIT-XXX",
    "anthropic-ratelimit-output-tokens-remaining": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-REMAINING-XXX",
    "anthropic-ratelimit-output-tokens-reset": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-RESET-XXX",
    "anthropic-ratelimit-tokens-limit": "ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX",
    "anthropic-ratelimit-tokens-remaining": "ANTHROPIC-RATELIMIT-TOKENS-REMAINING-XXX",
    "anthropic-ratelimit-tokens-reset": "ANTHROPIC-RATELIMIT-TOKENS-RESET-XXX",
    "x-amz-date": "X-AMZ-DATE-XXX",
    "amz-sdk-invocation-id": "AMZ-SDK-INVOCATION-ID-XXX",
    "accept-encoding": "ACCEPT-ENCODING-XXX",
    "x-amzn-requestid": "X-AMZN-REQUESTID-XXX",
    "x-amzn-RequestId": "X-AMZN-REQUESTID-XXX",
}


def _filter_request_headers(request: Request) -> Request:  # type: ignore[no-any-unimported]
    """Filter sensitive headers from request before recording."""
    for header_name, replacement in HEADERS_TO_FILTER.items():
        for variant in [header_name, header_name.upper(), header_name.title()]:
            if variant in request.headers:
                request.headers[variant] = [replacement]

    request.method = request.method.upper()
    return request


def _filter_response_headers(response: dict[str, Any]) -> dict[str, Any]:
    """Filter sensitive headers from response before recording."""
    # Remove Content-Encoding to prevent decompression issues on replay
    for encoding_header in ["Content-Encoding", "content-encoding"]:
        response["headers"].pop(encoding_header, None)

    for header_name, replacement in HEADERS_TO_FILTER.items():
        for variant in [header_name, header_name.upper(), header_name.title()]:
            if variant in response["headers"]:
                response["headers"][variant] = [replacement]
    return response


@pytest.fixture(scope="module")
def vcr_cassette_dir(request: Any) -> str:
    """Generate cassette directory path based on test module location.

    Organizes cassettes to mirror test directory structure within each package:
    lib/crewai/tests/llms/google/test_google.py -> lib/crewai/tests/cassettes/llms/google/
    lib/crewai-tools/tests/tools/test_search.py -> lib/crewai-tools/tests/cassettes/tools/
    """
    test_file = Path(request.fspath)

    for parent in test_file.parents:
        if parent.name in ("crewai", "crewai-tools") and parent.parent.name == "lib":
            package_root = parent
            break
    else:
        package_root = test_file.parent

    tests_root = package_root / "tests"
    test_dir = test_file.parent

    if test_dir != tests_root:
        relative_path = test_dir.relative_to(tests_root)
        cassette_dir = tests_root / "cassettes" / relative_path
    else:
        cassette_dir = tests_root / "cassettes"

    cassette_dir.mkdir(parents=True, exist_ok=True)

    return str(cassette_dir)


@pytest.fixture(scope="module")
def vcr_config(vcr_cassette_dir: str) -> dict[str, Any]:
    """Configure VCR with organized cassette storage."""
    config = {
        "cassette_library_dir": vcr_cassette_dir,
        "record_mode": os.getenv("PYTEST_VCR_RECORD_MODE", "once"),
        "filter_headers": [(k, v) for k, v in HEADERS_TO_FILTER.items()],
        "before_record_request": _filter_request_headers,
        "before_record_response": _filter_response_headers,
        "filter_query_parameters": ["key"],
        "match_on": ["method", "scheme", "host", "port", "path"],
    }

    if os.getenv("GITHUB_ACTIONS") == "true":
        config["record_mode"] = "none"

    return config
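A minimal sketch (not part of the diff) of how the `vcr_config` and `vcr_cassette_dir` fixtures above are typically consumed, assuming a VCR pytest plugin such as pytest-recording or pytest-vcr provides the `vcr` marker; the test name and URL are hypothetical:

```python
import urllib.request

import pytest


@pytest.mark.vcr  # records the HTTP exchange to a cassette on first run, replays it afterwards
def test_example_http_call() -> None:
    # On replay, the cassette under tests/cassettes/... is used instead of the network.
    with urllib.request.urlopen("https://api.example.com/health") as resp:
        assert resp.status == 200
```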
crewAI.excalidraw (1737 lines changed; diff not shown in this view)
docs/docs.json (282 lines changed)
@@ -9,7 +9,22 @@
},
"favicon": "/images/favicon.svg",
"contextual": {
"options": ["copy", "view", "chatgpt", "claude"]
"options": [
"copy",
"view",
"chatgpt",
"claude",
"perplexity",
"mcp",
"cursor",
"vscode",
{
"title": "Request a feature",
"description": "Join the discussion on GitHub to request a new feature",
"icon": "plus",
"href": "https://github.com/crewAIInc/crewAI/issues/new/choose"
}
]
},
"navigation": {
"languages": [
@@ -40,6 +55,16 @@
]
},
"tabs": [
{
"tab": "Home",
"icon": "house",
"groups": [
{
"group": "Welcome",
"pages": ["index"]
}
]
},
{
"tab": "Documentation",
"icon": "book-open",
@@ -109,6 +134,7 @@
"group": "MCP Integration",
"pages": [
"en/mcp/overview",
"en/mcp/dsl-integration",
"en/mcp/stdio",
"en/mcp/sse",
"en/mcp/streamable-http",
@@ -225,9 +251,10 @@
"group": "Integrations",
"icon": "plug",
"pages": [
"en/tools/tool-integrations/overview",
"en/tools/tool-integrations/bedrockinvokeagenttool",
"en/tools/tool-integrations/crewaiautomationtool"
"en/tools/integration/overview",
"en/tools/integration/bedrockinvokeagenttool",
"en/tools/integration/crewaiautomationtool",
"en/tools/integration/mergeagenthandlertool"
]
},
{
@@ -246,8 +273,11 @@
{
"group": "Observability",
"pages": [
"en/observability/tracing",
"en/observability/overview",
"en/observability/arize-phoenix",
"en/observability/braintrust",
"en/observability/datadog",
"en/observability/langdb",
"en/observability/langfuse",
"en/observability/langtrace",
@@ -277,13 +307,17 @@
"en/learn/force-tool-output-as-result",
"en/learn/hierarchical-process",
"en/learn/human-input-on-execution",
"en/learn/human-in-the-loop",
"en/learn/kickoff-async",
"en/learn/kickoff-for-each",
"en/learn/llm-connections",
"en/learn/multimodal-agents",
"en/learn/replay-tasks-from-latest-crew-kickoff",
"en/learn/sequential-process",
"en/learn/using-annotations"
"en/learn/using-annotations",
"en/learn/execution-hooks",
"en/learn/llm-hooks",
"en/learn/tool-hooks"
]
},
{
@@ -293,7 +327,7 @@
]
},
{
"tab": "Enterprise",
"tab": "AOP",
"icon": "briefcase",
"groups": [
{
@@ -301,15 +335,27 @@
"pages": ["en/enterprise/introduction"]
},
{
"group": "Features",
"group": "Build",
"pages": [
"en/enterprise/features/automations",
"en/enterprise/features/crew-studio",
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations"
]
},
{
"group": "Operate",
"pages": [
"en/enterprise/features/rbac",
"en/enterprise/features/tool-repository",
"en/enterprise/features/webhook-streaming",
"en/enterprise/features/traces",
"en/enterprise/features/hallucination-guardrail",
"en/enterprise/features/integrations",
"en/enterprise/features/agent-repositories"
"en/enterprise/features/webhook-streaming",
"en/enterprise/features/hallucination-guardrail"
]
},
{
"group": "Manage",
"pages": [
"en/enterprise/features/rbac"
]
},
{
@@ -321,10 +367,20 @@
"en/enterprise/integrations/github",
"en/enterprise/integrations/gmail",
"en/enterprise/integrations/google_calendar",
"en/enterprise/integrations/google_contacts",
"en/enterprise/integrations/google_docs",
"en/enterprise/integrations/google_drive",
"en/enterprise/integrations/google_sheets",
"en/enterprise/integrations/google_slides",
"en/enterprise/integrations/hubspot",
"en/enterprise/integrations/jira",
"en/enterprise/integrations/linear",
"en/enterprise/integrations/microsoft_excel",
"en/enterprise/integrations/microsoft_onedrive",
"en/enterprise/integrations/microsoft_outlook",
"en/enterprise/integrations/microsoft_sharepoint",
"en/enterprise/integrations/microsoft_teams",
"en/enterprise/integrations/microsoft_word",
"en/enterprise/integrations/notion",
"en/enterprise/integrations/salesforce",
"en/enterprise/integrations/shopify",
@@ -333,6 +389,22 @@
"en/enterprise/integrations/zendesk"
]
},
{
"group": "Triggers",
"pages": [
"en/enterprise/guides/automation-triggers",
"en/enterprise/guides/gmail-trigger",
"en/enterprise/guides/google-calendar-trigger",
"en/enterprise/guides/google-drive-trigger",
"en/enterprise/guides/outlook-trigger",
"en/enterprise/guides/onedrive-trigger",
"en/enterprise/guides/microsoft-teams-trigger",
"en/enterprise/guides/slack-trigger",
"en/enterprise/guides/hubspot-trigger",
"en/enterprise/guides/salesforce-trigger",
"en/enterprise/guides/zapier-trigger"
]
},
{
"group": "How-To Guides",
"pages": [
@@ -341,16 +413,13 @@
"en/enterprise/guides/kickoff-crew",
"en/enterprise/guides/update-crew",
"en/enterprise/guides/enable-crew-studio",
"en/enterprise/guides/capture_telemetry_logs",
"en/enterprise/guides/azure-openai-setup",
"en/enterprise/guides/automation-triggers",
"en/enterprise/guides/hubspot-trigger",
"en/enterprise/guides/tool-repository",
"en/enterprise/guides/react-component-export",
"en/enterprise/guides/salesforce-trigger",
"en/enterprise/guides/slack-trigger",
"en/enterprise/guides/team-management",
"en/enterprise/guides/webhook-automation",
"en/enterprise/guides/human-in-the-loop",
"en/enterprise/guides/zapier-trigger"
"en/enterprise/guides/webhook-automation"
]
},
{
@@ -369,6 +438,7 @@
"en/api-reference/introduction",
"en/api-reference/inputs",
"en/api-reference/kickoff",
"en/api-reference/resume",
"en/api-reference/status"
]
}
@@ -423,6 +493,16 @@
]
},
"tabs": [
{
"tab": "Início",
"icon": "house",
"groups": [
{
"group": "Bem-vindo",
"pages": ["pt-BR/index"]
}
]
},
{
"tab": "Documentação",
"icon": "book-open",
@@ -496,6 +576,7 @@
"group": "Integração MCP",
"pages": [
"pt-BR/mcp/overview",
"pt-BR/mcp/dsl-integration",
"pt-BR/mcp/stdio",
"pt-BR/mcp/sse",
"pt-BR/mcp/streamable-http",
@@ -598,12 +679,12 @@
]
},
{
"group": "Integrações",
"group": "Integrations",
"icon": "plug",
"pages": [
"pt-BR/tools/tool-integrations/overview",
"pt-BR/tools/tool-integrations/bedrockinvokeagenttool",
"pt-BR/tools/tool-integrations/crewaiautomationtool"
"pt-BR/tools/integration/overview",
"pt-BR/tools/integration/bedrockinvokeagenttool",
"pt-BR/tools/integration/crewaiautomationtool"
]
},
{
@@ -623,6 +704,8 @@
"pages": [
"pt-BR/observability/overview",
"pt-BR/observability/arize-phoenix",
"pt-BR/observability/braintrust",
"pt-BR/observability/datadog",
"pt-BR/observability/langdb",
"pt-BR/observability/langfuse",
"pt-BR/observability/langtrace",
@@ -651,13 +734,17 @@
"pt-BR/learn/force-tool-output-as-result",
"pt-BR/learn/hierarchical-process",
"pt-BR/learn/human-input-on-execution",
"pt-BR/learn/human-in-the-loop",
"pt-BR/learn/kickoff-async",
"pt-BR/learn/kickoff-for-each",
"pt-BR/learn/llm-connections",
"pt-BR/learn/multimodal-agents",
"pt-BR/learn/replay-tasks-from-latest-crew-kickoff",
"pt-BR/learn/sequential-process",
"pt-BR/learn/using-annotations"
"pt-BR/learn/using-annotations",
"pt-BR/learn/execution-hooks",
"pt-BR/learn/llm-hooks",
"pt-BR/learn/tool-hooks"
]
},
{
@@ -667,7 +754,7 @@
]
},
{
"tab": "Enterprise",
"tab": "AOP",
"icon": "briefcase",
"groups": [
{
@@ -675,14 +762,27 @@
"pages": ["pt-BR/enterprise/introduction"]
},
{
"group": "Funcionalidades",
"group": "Construir",
"pages": [
"pt-BR/enterprise/features/automations",
"pt-BR/enterprise/features/crew-studio",
"pt-BR/enterprise/features/marketplace",
"pt-BR/enterprise/features/agent-repositories",
"pt-BR/enterprise/features/tools-and-integrations"
]
},
{
"group": "Operar",
"pages": [
"pt-BR/enterprise/features/rbac",
"pt-BR/enterprise/features/tool-repository",
"pt-BR/enterprise/features/webhook-streaming",
"pt-BR/enterprise/features/traces",
"pt-BR/enterprise/features/hallucination-guardrail",
"pt-BR/enterprise/features/integrations"
"pt-BR/enterprise/features/webhook-streaming",
"pt-BR/enterprise/features/hallucination-guardrail"
]
},
{
"group": "Gerenciar",
"pages": [
"pt-BR/enterprise/features/rbac"
]
},
{
@@ -694,10 +794,20 @@
"pt-BR/enterprise/integrations/github",
"pt-BR/enterprise/integrations/gmail",
"pt-BR/enterprise/integrations/google_calendar",
"pt-BR/enterprise/integrations/google_contacts",
"pt-BR/enterprise/integrations/google_docs",
"pt-BR/enterprise/integrations/google_drive",
"pt-BR/enterprise/integrations/google_sheets",
"pt-BR/enterprise/integrations/google_slides",
"pt-BR/enterprise/integrations/hubspot",
"pt-BR/enterprise/integrations/jira",
"pt-BR/enterprise/integrations/linear",
"pt-BR/enterprise/integrations/microsoft_excel",
"pt-BR/enterprise/integrations/microsoft_onedrive",
"pt-BR/enterprise/integrations/microsoft_outlook",
"pt-BR/enterprise/integrations/microsoft_sharepoint",
"pt-BR/enterprise/integrations/microsoft_teams",
"pt-BR/enterprise/integrations/microsoft_word",
"pt-BR/enterprise/integrations/notion",
"pt-BR/enterprise/integrations/salesforce",
"pt-BR/enterprise/integrations/shopify",
@@ -715,14 +825,26 @@
"pt-BR/enterprise/guides/update-crew",
"pt-BR/enterprise/guides/enable-crew-studio",
"pt-BR/enterprise/guides/azure-openai-setup",
"pt-BR/enterprise/guides/automation-triggers",
"pt-BR/enterprise/guides/hubspot-trigger",
"pt-BR/enterprise/guides/tool-repository",
"pt-BR/enterprise/guides/react-component-export",
"pt-BR/enterprise/guides/salesforce-trigger",
"pt-BR/enterprise/guides/slack-trigger",
"pt-BR/enterprise/guides/team-management",
"pt-BR/enterprise/guides/webhook-automation",
"pt-BR/enterprise/guides/human-in-the-loop",
"pt-BR/enterprise/guides/webhook-automation"
]
},
{
"group": "Triggers",
"pages": [
"pt-BR/enterprise/guides/automation-triggers",
"pt-BR/enterprise/guides/gmail-trigger",
"pt-BR/enterprise/guides/google-calendar-trigger",
"pt-BR/enterprise/guides/google-drive-trigger",
"pt-BR/enterprise/guides/outlook-trigger",
"pt-BR/enterprise/guides/onedrive-trigger",
"pt-BR/enterprise/guides/microsoft-teams-trigger",
"pt-BR/enterprise/guides/slack-trigger",
"pt-BR/enterprise/guides/hubspot-trigger",
"pt-BR/enterprise/guides/salesforce-trigger",
"pt-BR/enterprise/guides/zapier-trigger"
]
},
@@ -744,6 +866,7 @@
"pt-BR/api-reference/introduction",
"pt-BR/api-reference/inputs",
"pt-BR/api-reference/kickoff",
"pt-BR/api-reference/resume",
"pt-BR/api-reference/status"
]
}
@@ -798,6 +921,16 @@
]
},
"tabs": [
{
"tab": "홈",
"icon": "house",
"groups": [
{
"group": "환영합니다",
"pages": ["ko/index"]
}
]
},
{
"tab": "기술 문서",
"icon": "book-open",
@@ -867,6 +1000,7 @@
"group": "MCP 통합",
"pages": [
"ko/mcp/overview",
"ko/mcp/dsl-integration",
"ko/mcp/stdio",
"ko/mcp/sse",
"ko/mcp/streamable-http",
@@ -976,17 +1110,16 @@
"ko/tools/cloud-storage/overview",
"ko/tools/cloud-storage/s3readertool",
"ko/tools/cloud-storage/s3writertool",
"ko/tools/cloud-storage/bedrockinvokeagenttool",
"ko/tools/cloud-storage/bedrockkbretriever"
]
},
{
"group": "통합",
"group": "Integrations",
"icon": "plug",
"pages": [
"ko/tools/tool-integrations/overview",
"ko/tools/tool-integrations/bedrockinvokeagenttool",
"ko/tools/tool-integrations/crewaiautomationtool"
"ko/tools/integration/overview",
"ko/tools/integration/bedrockinvokeagenttool",
"ko/tools/integration/crewaiautomationtool"
]
},
{
@@ -1007,6 +1140,8 @@
"pages": [
"ko/observability/overview",
"ko/observability/arize-phoenix",
"ko/observability/braintrust",
"ko/observability/datadog",
"ko/observability/langdb",
"ko/observability/langfuse",
"ko/observability/langtrace",
@@ -1035,13 +1170,17 @@
"ko/learn/force-tool-output-as-result",
"ko/learn/hierarchical-process",
"ko/learn/human-input-on-execution",
"ko/learn/human-in-the-loop",
"ko/learn/kickoff-async",
"ko/learn/kickoff-for-each",
"ko/learn/llm-connections",
"ko/learn/multimodal-agents",
"ko/learn/replay-tasks-from-latest-crew-kickoff",
"ko/learn/sequential-process",
"ko/learn/using-annotations"
"ko/learn/using-annotations",
"ko/learn/execution-hooks",
"ko/learn/llm-hooks",
"ko/learn/tool-hooks"
]
},
{
@@ -1059,15 +1198,27 @@
"pages": ["ko/enterprise/introduction"]
},
{
"group": "특징",
"group": "빌드",
"pages": [
"ko/enterprise/features/automations",
"ko/enterprise/features/crew-studio",
"ko/enterprise/features/marketplace",
"ko/enterprise/features/agent-repositories",
"ko/enterprise/features/tools-and-integrations"
]
},
{
"group": "운영",
"pages": [
"ko/enterprise/features/rbac",
"ko/enterprise/features/tool-repository",
"ko/enterprise/features/webhook-streaming",
"ko/enterprise/features/traces",
"ko/enterprise/features/hallucination-guardrail",
"ko/enterprise/features/integrations",
"ko/enterprise/features/agent-repositories"
"ko/enterprise/features/webhook-streaming",
"ko/enterprise/features/hallucination-guardrail"
]
},
{
"group": "관리",
"pages": [
"ko/enterprise/features/rbac"
]
},
{
@@ -1079,10 +1230,20 @@
"ko/enterprise/integrations/github",
"ko/enterprise/integrations/gmail",
"ko/enterprise/integrations/google_calendar",
"ko/enterprise/integrations/google_contacts",
"ko/enterprise/integrations/google_docs",
"ko/enterprise/integrations/google_drive",
"ko/enterprise/integrations/google_sheets",
"ko/enterprise/integrations/google_slides",
"ko/enterprise/integrations/hubspot",
"ko/enterprise/integrations/jira",
"ko/enterprise/integrations/linear",
"ko/enterprise/integrations/microsoft_excel",
"ko/enterprise/integrations/microsoft_onedrive",
"ko/enterprise/integrations/microsoft_outlook",
"ko/enterprise/integrations/microsoft_sharepoint",
"ko/enterprise/integrations/microsoft_teams",
"ko/enterprise/integrations/microsoft_word",
"ko/enterprise/integrations/notion",
"ko/enterprise/integrations/salesforce",
"ko/enterprise/integrations/shopify",
@@ -1100,14 +1261,26 @@
"ko/enterprise/guides/update-crew",
|
||||
"ko/enterprise/guides/enable-crew-studio",
|
||||
"ko/enterprise/guides/azure-openai-setup",
|
||||
"ko/enterprise/guides/automation-triggers",
|
||||
"ko/enterprise/guides/hubspot-trigger",
|
||||
"ko/enterprise/guides/tool-repository",
|
||||
"ko/enterprise/guides/react-component-export",
|
||||
"ko/enterprise/guides/salesforce-trigger",
|
||||
"ko/enterprise/guides/slack-trigger",
|
||||
"ko/enterprise/guides/team-management",
|
||||
"ko/enterprise/guides/webhook-automation",
|
||||
"ko/enterprise/guides/human-in-the-loop",
|
||||
"ko/enterprise/guides/webhook-automation"
|
||||
]
|
||||
},
|
||||
{
|
||||
"group": "트리거",
|
||||
"pages": [
|
||||
"ko/enterprise/guides/automation-triggers",
|
||||
"ko/enterprise/guides/gmail-trigger",
|
||||
"ko/enterprise/guides/google-calendar-trigger",
|
||||
"ko/enterprise/guides/google-drive-trigger",
|
||||
"ko/enterprise/guides/outlook-trigger",
|
||||
"ko/enterprise/guides/onedrive-trigger",
|
||||
"ko/enterprise/guides/microsoft-teams-trigger",
|
||||
"ko/enterprise/guides/slack-trigger",
|
||||
"ko/enterprise/guides/hubspot-trigger",
|
||||
"ko/enterprise/guides/salesforce-trigger",
|
||||
"ko/enterprise/guides/zapier-trigger"
|
||||
]
|
||||
},
|
||||
@@ -1127,6 +1300,7 @@
|
||||
"ko/api-reference/introduction",
|
||||
"ko/api-reference/inputs",
|
||||
"ko/api-reference/kickoff",
|
||||
"ko/api-reference/resume",
|
||||
"ko/api-reference/status"
|
||||
]
|
||||
}
|
||||
|
||||
@@ -1,31 +1,32 @@
|
||||
---
|
||||
title: "Introduction"
|
||||
description: "Complete reference for the CrewAI Enterprise REST API"
|
||||
description: "Complete reference for the CrewAI AOP REST API"
|
||||
icon: "code"
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
# CrewAI Enterprise API
|
||||
# CrewAI AOP API
|
||||
|
||||
Welcome to the CrewAI Enterprise API reference. This API allows you to programmatically interact with your deployed crews, enabling integration with your applications, workflows, and services.
|
||||
Welcome to the CrewAI AOP API reference. This API allows you to programmatically interact with your deployed crews, enabling integration with your applications, workflows, and services.
|
||||
|
||||
## Quick Start
|
||||
|
||||
<Steps>
|
||||
<Step title="Get Your API Credentials">
|
||||
Navigate to your crew's detail page in the CrewAI Enterprise dashboard and copy your Bearer Token from the Status tab.
|
||||
Navigate to your crew's detail page in the CrewAI AOP dashboard and copy your Bearer Token from the Status tab.
|
||||
</Step>
|
||||
|
||||
<Step title="Discover Required Inputs">
|
||||
Use the `GET /inputs` endpoint to see what parameters your crew expects.
|
||||
</Step>
|
||||
|
||||
<Step title="Start a Crew Execution">
|
||||
Call `POST /kickoff` with your inputs to start the crew execution and receive a `kickoff_id`.
|
||||
</Step>
|
||||
|
||||
|
||||
<Step title="Discover Required Inputs">
|
||||
Use the `GET /inputs` endpoint to see what parameters your crew expects.
|
||||
</Step>
|
||||
|
||||
<Step title="Start a Crew Execution">
|
||||
Call `POST /kickoff` with your inputs to start the crew execution and receive
|
||||
a `kickoff_id`.
|
||||
</Step>
|
||||
|
||||
<Step title="Monitor Progress">
|
||||
Use `GET /status/{kickoff_id}` to check execution status and retrieve results.
|
||||
Use `GET /{kickoff_id}/status` to check execution status and retrieve results.
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
@@ -40,13 +41,14 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
|
||||
|
||||
### Token Types
|
||||
|
||||
| Token Type | Scope | Use Case |
|
||||
|:-----------|:--------|:----------|
|
||||
| **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integration |
|
||||
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |
|
||||
| Token Type | Scope | Use Case |
|
||||
| :-------------------- | :------------------------ | :----------------------------------------------------------- |
|
||||
| **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integration |
|
||||
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |
|
||||
|
||||
<Tip>
|
||||
You can find both token types in the Status tab of your crew's detail page in the CrewAI Enterprise dashboard.
|
||||
You can find both token types in the Status tab of your crew's detail page in
|
||||
the CrewAI AOP dashboard.
|
||||
</Tip>
|
||||
|
||||
## Base URL
|
||||
@@ -62,32 +64,36 @@ Replace `your-crew-name` with your actual crew's URL from the dashboard.
|
||||
## Typical Workflow
|
||||
|
||||
1. **Discovery**: Call `GET /inputs` to understand what your crew needs
|
||||
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
|
||||
3. **Monitoring**: Poll `GET /status/{kickoff_id}` until completion
|
||||
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
|
||||
3. **Monitoring**: Poll `GET /{kickoff_id}/status` until completion
|
||||
4. **Results**: Extract the final output from the completed response (see the sketch below)
|
||||
|
||||
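For readers who want a copy-pasteable starting point, here is a minimal Python sketch of that workflow using the `requests` library. The crew URL, token, input payload shape, and status field names are placeholders and assumptions — check the individual endpoint pages for the exact request and response formats.

```python
import time

import requests

BASE_URL = "https://your-crew-name.crewai.com"  # placeholder: your crew URL
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}  # placeholder token

# 1. Discovery: see which inputs the crew expects
print(requests.get(f"{BASE_URL}/inputs", headers=HEADERS).json())

# 2. Execution: start the crew and capture the kickoff_id
kickoff = requests.post(
    f"{BASE_URL}/kickoff",
    headers=HEADERS,
    json={"inputs": {"topic": "AI in healthcare"}},  # assumed payload shape
).json()
kickoff_id = kickoff["kickoff_id"]

# 3. Monitoring: poll until the execution reaches a terminal state
#    (the exact state values are an assumption — see the /status endpoint page)
while True:
    status = requests.get(f"{BASE_URL}/{kickoff_id}/status", headers=HEADERS).json()
    if status.get("state") in ("SUCCESS", "FAILED"):
        break
    time.sleep(5)

# 4. Results: extract the final output from the completed response
print(status)
```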
## Error Handling
|
||||
|
||||
The API uses standard HTTP status codes:
|
||||
|
||||
| Code | Meaning |
|
||||
|------|:--------|
|
||||
| `200` | Success |
|
||||
| `400` | Bad Request - Invalid input format |
|
||||
| `401` | Unauthorized - Invalid bearer token |
|
||||
| `404` | Not Found - Resource doesn't exist |
|
||||
| Code | Meaning |
|
||||
| ----- | :----------------------------------------- |
|
||||
| `200` | Success |
|
||||
| `400` | Bad Request - Invalid input format |
|
||||
| `401` | Unauthorized - Invalid bearer token |
|
||||
| `404` | Not Found - Resource doesn't exist |
|
||||
| `422` | Validation Error - Missing required inputs |
|
||||
| `500` | Server Error - Contact support |
|
||||
| `500` | Server Error - Contact support |
|
||||
|
||||
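As a rough illustration of how a client might branch on these codes (not an exhaustive handler — the URL, token, and payload are placeholders):

```python
import requests

resp = requests.post(
    "https://your-crew-name.crewai.com/kickoff",
    headers={"Authorization": "Bearer YOUR_CREW_TOKEN"},
    json={"inputs": {}},  # deliberately missing required inputs
)

if resp.status_code == 422:
    print("Validation error - supply the required inputs:", resp.json())
elif resp.status_code == 401:
    print("Unauthorized - check the bearer token")
elif resp.ok:
    print("Kickoff accepted:", resp.json())
else:
    resp.raise_for_status()  # surface other 4xx/5xx responses as exceptions
```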
## Interactive Testing
|
||||
|
||||
<Info>
|
||||
**Why no "Send" button?** Since each CrewAI Enterprise user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.
|
||||
**Why no "Send" button?** Since each CrewAI AOP user has their own unique crew
|
||||
URL, we use **reference mode** instead of an interactive playground to avoid
|
||||
confusion. This shows you exactly what the requests should look like without
|
||||
non-functional send buttons.
|
||||
</Info>
|
||||
|
||||
Each endpoint page shows you:
|
||||
|
||||
- ✅ **Exact request format** with all parameters
|
||||
- ✅ **Response examples** for success and error cases
|
||||
- ✅ **Response examples** for success and error cases
|
||||
- ✅ **Code samples** in multiple languages (cURL, Python, JavaScript, etc.)
|
||||
- ✅ **Authentication examples** with proper Bearer token format
|
||||
|
||||
@@ -103,18 +109,27 @@ Each endpoint page shows you:
|
||||
</CardGroup>
|
||||
|
||||
**Example workflow:**
|
||||
|
||||
1. **Copy this cURL example** from any endpoint page
|
||||
2. **Replace `your-actual-crew-name.crewai.com`** with your real crew URL
|
||||
2. **Replace `your-actual-crew-name.crewai.com`** with your real crew URL
|
||||
3. **Replace the Bearer token** with your real token from the dashboard
|
||||
4. **Run the request** in your terminal or API client
|
||||
|
||||
## Need Help?
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card title="Enterprise Support" icon="headset" href="mailto:support@crewai.com">
|
||||
<Card
|
||||
title="Enterprise Support"
|
||||
icon="headset"
|
||||
href="mailto:support@crewai.com"
|
||||
>
|
||||
Get help with API integration and troubleshooting
|
||||
</Card>
|
||||
<Card title="Enterprise Dashboard" icon="chart-line" href="https://app.crewai.com">
|
||||
<Card
|
||||
title="Enterprise Dashboard"
|
||||
icon="chart-line"
|
||||
href="https://app.crewai.com"
|
||||
>
|
||||
Manage your crews and view execution logs
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
docs/en/api-reference/resume.mdx (new file, +6)
@@ -0,0 +1,6 @@
|
||||
---
|
||||
title: "POST /resume"
|
||||
description: "Resume crew execution with human feedback"
|
||||
openapi: "/enterprise-api.en.yaml POST /resume"
|
||||
mode: "wide"
|
||||
---
|
||||
@@ -1,8 +1,6 @@
|
||||
---
|
||||
title: "GET /status/{kickoff_id}"
|
||||
title: "GET /{kickoff_id}/status"
|
||||
description: "Get execution status"
|
||||
openapi: "/enterprise-api.en.yaml GET /status/{kickoff_id}"
|
||||
openapi: "/enterprise-api.en.yaml GET /{kickoff_id}/status"
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
|
||||
|
||||
@@ -20,7 +20,7 @@ Think of an agent as a specialized team member with specific skills, expertise,
|
||||
</Tip>
|
||||
|
||||
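To make that concrete, a minimal agent definition looks like the following — the role, goal, and backstory values are illustrative, not a recommendation:

```python Code
from crewai import Agent

# An illustrative "specialized team member"
researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover emerging trends in a given topic",
    backstory="A meticulous analyst who distills scattered sources into clear insights",
    verbose=True
)
```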
<Note type="info" title="Enterprise Enhancement: Visual Agent Builder">
|
||||
CrewAI Enterprise includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time.
|
||||
CrewAI AOP includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time.
|
||||
|
||||

|
||||
|
||||
|
||||
@@ -5,7 +5,7 @@ icon: terminal
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
<Warning>Since release 0.140.0, CrewAI Enterprise started a process of migrating their login provider. As such, the authentication flow via CLI was updated. Users that use Google to login, or that created their account after July 3rd, 2025 will be unable to log in with older versions of the `crewai` library.</Warning>
|
||||
<Warning>Since release 0.140.0, CrewAI AOP started a process of migrating their login provider. As such, the authentication flow via CLI was updated. Users that use Google to login, or that created their account after July 3rd, 2025 will be unable to log in with older versions of the `crewai` library.</Warning>
|
||||
|
||||
## Overview
|
||||
|
||||
@@ -186,9 +186,9 @@ def crew(self) -> Crew:
|
||||
|
||||
### 10. Deploy
|
||||
|
||||
Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).
|
||||
Deploy the crew or flow to [CrewAI AOP](https://app.crewai.com).
|
||||
|
||||
- **Authentication**: You need to be authenticated to deploy to CrewAI Enterprise.
|
||||
- **Authentication**: You need to be authenticated to deploy to CrewAI AOP.
|
||||
You can login or create an account with:
|
||||
```shell Terminal
|
||||
crewai login
|
||||
@@ -203,7 +203,7 @@ Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).
|
||||
|
||||
### 11. Organization Management
|
||||
|
||||
Manage your CrewAI Enterprise organizations.
|
||||
Manage your CrewAI AOP organizations.
|
||||
|
||||
```shell Terminal
|
||||
crewai org [COMMAND] [OPTIONS]
|
||||
@@ -227,17 +227,17 @@ crewai org switch <organization_id>
|
||||
```
|
||||
|
||||
<Note>
|
||||
You must be authenticated to CrewAI Enterprise to use these organization management commands.
|
||||
You must be authenticated to CrewAI AOP to use these organization management commands.
|
||||
</Note>
|
||||
|
||||
- **Create a deployment** (continued):
|
||||
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
|
||||
|
||||
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI Enterprise.
|
||||
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI AOP.
|
||||
```shell Terminal
|
||||
crewai deploy push
|
||||
```
|
||||
- Initiates the deployment process on the CrewAI Enterprise platform.
|
||||
- Initiates the deployment process on the CrewAI AOP platform.
|
||||
- Upon successful initiation, it will output the Deployment created successfully! message along with the Deployment Name and a unique Deployment ID (UUID).
|
||||
|
||||
- **Deployment Status**: You can check the status of your deployment with:
|
||||
@@ -262,7 +262,7 @@ You must be authenticated to CrewAI Enterprise to use these organization managem
|
||||
```shell Terminal
|
||||
crewai deploy remove
|
||||
```
|
||||
This deletes the deployment from the CrewAI Enterprise platform.
|
||||
This deletes the deployment from the CrewAI AOP platform.
|
||||
|
||||
- **Help Command**: You can get help with the CLI with:
|
||||
```shell Terminal
|
||||
@@ -270,22 +270,20 @@ You must be authenticated to CrewAI Enterprise to use these organization managem
|
||||
```
|
||||
This shows the help message for the CrewAI Deploy CLI.
|
||||
|
||||
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI Enterprise](http://app.crewai.com) using the CLI.
|
||||
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI AOP](http://app.crewai.com) using the CLI.
|
||||
|
||||
<iframe
|
||||
width="100%"
|
||||
height="400"
|
||||
className="w-full aspect-video rounded-xl"
|
||||
src="https://www.youtube.com/embed/3EqSV-CYDZA"
|
||||
title="CrewAI Deployment Guide"
|
||||
frameborder="0"
|
||||
style={{ borderRadius: '10px' }}
|
||||
frameBorder="0"
|
||||
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
|
||||
allowfullscreen
|
||||
allowFullScreen
|
||||
></iframe>
|
||||
|
||||
### 11. Login
|
||||
|
||||
Authenticate with CrewAI Enterprise using a secure device code flow (no email entry required).
|
||||
Authenticate with CrewAI AOP using a secure device code flow (no email entry required).
|
||||
|
||||
```shell Terminal
|
||||
crewai login
|
||||
@@ -356,7 +354,7 @@ crewai config reset
|
||||
|
||||
#### Available Configuration Parameters
|
||||
|
||||
- `enterprise_base_url`: Base URL of the CrewAI Enterprise instance
|
||||
- `enterprise_base_url`: Base URL of the CrewAI AOP instance
|
||||
- `oauth2_provider`: OAuth2 provider used for authentication (e.g., workos, okta, auth0)
|
||||
- `oauth2_audience`: OAuth2 audience value, typically used to identify the target API or resource
|
||||
- `oauth2_client_id`: OAuth2 client ID issued by the provider, used during authentication requests
|
||||
@@ -372,7 +370,7 @@ crewai config list
|
||||
Example output:
|
||||
| Setting | Value | Description |
|
||||
| :------------------ | :----------------------- | :---------------------------------------------------------- |
|
||||
| enterprise_base_url | https://app.crewai.com | Base URL of the CrewAI Enterprise instance |
|
||||
| enterprise_base_url | https://app.crewai.com | Base URL of the CrewAI AOP instance |
|
||||
| org_name | Not set | Name of the currently active organization |
|
||||
| org_uuid | Not set | UUID of the currently active organization |
|
||||
| oauth2_provider | workos | OAuth2 provider (e.g., workos, okta, auth0) |
|
||||
@@ -404,6 +402,81 @@ crewai config reset
|
||||
After resetting configuration, re-run `crewai login` to authenticate again.
|
||||
</Tip>
|
||||
|
||||
### 14. Trace Management
|
||||
|
||||
Manage trace collection preferences for your Crew and Flow executions.
|
||||
|
||||
```shell Terminal
|
||||
crewai traces [COMMAND]
|
||||
```
|
||||
|
||||
#### Commands:
|
||||
|
||||
- `enable`: Enable trace collection for crew/flow executions
|
||||
```shell Terminal
|
||||
crewai traces enable
|
||||
```
|
||||
|
||||
- `disable`: Disable trace collection for crew/flow executions
|
||||
```shell Terminal
|
||||
crewai traces disable
|
||||
```
|
||||
|
||||
- `status`: Show current trace collection status
|
||||
```shell Terminal
|
||||
crewai traces status
|
||||
```
|
||||
|
||||
#### How Tracing Works
|
||||
|
||||
Trace collection is controlled by checking three settings in priority order:
|
||||
|
||||
1. **Explicit flag in code** (highest priority - can enable OR disable):
|
||||
```python
|
||||
crew = Crew(agents=[...], tasks=[...], tracing=True) # Always enable
|
||||
crew = Crew(agents=[...], tasks=[...], tracing=False) # Always disable
|
||||
crew = Crew(agents=[...], tasks=[...]) # Check lower priorities (default)
|
||||
```
|
||||
- `tracing=True` will **always enable** tracing (overrides everything)
|
||||
- `tracing=False` will **always disable** tracing (overrides everything)
|
||||
- `tracing=None` or omitted will check lower priority settings
|
||||
|
||||
2. **Environment variable** (second priority):
|
||||
```env
|
||||
CREWAI_TRACING_ENABLED=true
|
||||
```
|
||||
- Checked only if `tracing` is not explicitly set to `True` or `False` in code
|
||||
- Set to `true` or `1` to enable tracing
|
||||
|
||||
3. **User preference** (lowest priority):
|
||||
```shell Terminal
|
||||
crewai traces enable
|
||||
```
|
||||
- Checked only if `tracing` is not set in code and `CREWAI_TRACING_ENABLED` is not set to `true`
|
||||
- Running `crewai traces enable` is sufficient to enable tracing by itself
|
||||
|
||||
<Note>
|
||||
**To enable tracing**, use any one of these methods:
|
||||
- Set `tracing=True` in your Crew/Flow code, OR
|
||||
- Add `CREWAI_TRACING_ENABLED=true` to your `.env` file, OR
|
||||
- Run `crewai traces enable`
|
||||
|
||||
**To disable tracing**, use any one of these methods:
|
||||
- Set `tracing=False` in your Crew/Flow code (overrides everything), OR
|
||||
- Remove the `CREWAI_TRACING_ENABLED` env var or set it to `false`, OR
|
||||
- Run `crewai traces disable`
|
||||
|
||||
Higher priority settings override lower ones.
|
||||
</Note>
|
||||
|
||||
<Tip>
|
||||
For more information about tracing, see the [Tracing documentation](/observability/tracing).
|
||||
</Tip>
|
||||
|
||||
<Tip>
|
||||
CrewAI CLI handles authentication to the Tool Repository automatically when adding packages to your project. Just append `crewai` before any `uv` command to use it. E.g. `crewai uv add requests`. For more information, see [Tool Repository](https://docs.crewai.com/enterprise/features/tool-repository) docs.
|
||||
</Tip>
|
||||
|
||||
<Note>
|
||||
Configuration settings are stored in `~/.config/crewai/settings.json`. Some settings like organization name and UUID are read-only and managed through authentication and organization commands. Tool repository related settings are hidden and cannot be set directly by users.
|
||||
</Note>
|
||||
|
||||
@@ -33,6 +33,7 @@ A crew in crewAI represents a collaborative group of agents working together to
|
||||
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
|
||||
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |
|
||||
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | Knowledge sources available at the crew level, accessible to all the agents. |
|
||||
| **Stream** _(optional)_ | `stream` | Enable streaming output to receive real-time updates during crew execution. Returns a `CrewStreamingOutput` object that can be iterated for chunks. Defaults to `False`. |
|
||||
|
||||
<Tip>
|
||||
**Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
|
||||
@@ -306,12 +307,27 @@ print(result)
|
||||
|
||||
### Different Ways to Kick Off a Crew
|
||||
|
||||
Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.
|
||||
Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process.
|
||||
|
||||
#### Synchronous Methods
|
||||
|
||||
- `kickoff()`: Starts the execution process according to the defined process flow.
|
||||
- `kickoff_for_each()`: Executes tasks sequentially for each provided input event or item in the collection.
|
||||
|
||||
#### Asynchronous Methods
|
||||
|
||||
CrewAI offers two approaches for async execution:
|
||||
|
||||
| Method | Type | Description |
|
||||
|--------|------|-------------|
|
||||
| `akickoff()` | Native async | True async/await throughout the entire execution chain |
|
||||
| `akickoff_for_each()` | Native async | Native async execution for each input in a list |
|
||||
| `kickoff_async()` | Thread-based | Wraps synchronous execution in `asyncio.to_thread` |
|
||||
| `kickoff_for_each_async()` | Thread-based | Thread-based async for each input in a list |
|
||||
|
||||
<Note>
|
||||
For high-concurrency workloads, `akickoff()` and `akickoff_for_each()` are recommended as they use native async for task execution, memory operations, and knowledge retrieval.
|
||||
</Note>
|
||||
|
||||
```python Code
|
||||
# Start the crew's task execution
|
||||
@@ -324,19 +340,53 @@ results = my_crew.kickoff_for_each(inputs=inputs_array)
|
||||
for result in results:
|
||||
print(result)
|
||||
|
||||
# Example of using kickoff_async
|
||||
# Example of using native async with akickoff
|
||||
inputs = {'topic': 'AI in healthcare'}
|
||||
async_result = await my_crew.akickoff(inputs=inputs)
|
||||
print(async_result)
|
||||
|
||||
# Example of using native async with akickoff_for_each
|
||||
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
|
||||
async_results = await my_crew.akickoff_for_each(inputs=inputs_array)
|
||||
for async_result in async_results:
|
||||
print(async_result)
|
||||
|
||||
# Example of using thread-based kickoff_async
|
||||
inputs = {'topic': 'AI in healthcare'}
|
||||
async_result = await my_crew.kickoff_async(inputs=inputs)
|
||||
print(async_result)
|
||||
|
||||
# Example of using kickoff_for_each_async
|
||||
# Example of using thread-based kickoff_for_each_async
|
||||
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
|
||||
async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
|
||||
for async_result in async_results:
|
||||
print(async_result)
|
||||
```
|
||||
|
||||
These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs.
|
||||
These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs. For detailed async examples, see the [Kickoff Crew Asynchronously](/en/learn/kickoff-async) guide.
|
||||
|
||||
### Streaming Crew Execution
|
||||
|
||||
For real-time visibility into crew execution, you can enable streaming to receive output as it's generated:
|
||||
|
||||
```python Code
|
||||
# Enable streaming
|
||||
crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[task],
|
||||
stream=True
|
||||
)
|
||||
|
||||
# Iterate over streaming output
|
||||
streaming = crew.kickoff(inputs={"topic": "AI"})
|
||||
for chunk in streaming:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
# Access final result
|
||||
result = streaming.result
|
||||
```
|
||||
|
||||
Learn more about streaming in the [Streaming Crew Execution](/en/learn/streaming-crew-execution) guide.
|
||||
|
||||
### Replaying from a Specific Task
|
||||
|
||||
|
||||
@@ -20,7 +20,7 @@ CrewAI uses an event bus architecture to emit events throughout the execution li
|
||||
When specific actions occur in CrewAI (like a Crew starting execution, an Agent completing a task, or a tool being used), the system emits corresponding events. You can register handlers for these events to execute custom code when they occur.
|
||||
|
||||
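For example, a custom listener can subscribe to crew kickoff events. The import paths and event names below follow the `BaseEventListener` pattern referenced later on this page, so treat this as a sketch and verify the names against the event listener reference:

```python Code
from crewai.utilities.events import CrewKickoffStartedEvent, CrewKickoffCompletedEvent
from crewai.utilities.events.base_event_listener import BaseEventListener

class CrewLoggingListener(BaseEventListener):
    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(CrewKickoffStartedEvent)
        def on_crew_started(source, event):
            print(f"Crew '{event.crew_name}' started execution")

        @crewai_event_bus.on(CrewKickoffCompletedEvent)
        def on_crew_completed(source, event):
            print(f"Crew '{event.crew_name}' finished execution")

# Instantiating the listener registers its handlers on the event bus
crew_logging_listener = CrewLoggingListener()
```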
<Note type="info" title="Enterprise Enhancement: Prompt Tracing">
|
||||
CrewAI Enterprise provides a built-in Prompt Tracing feature that leverages the event system to track, store, and visualize all prompts, completions, and associated metadata. This provides powerful debugging capabilities and transparency into your agent operations.
|
||||
CrewAI AOP provides a built-in Prompt Tracing feature that leverages the event system to track, store, and visualize all prompts, completions, and associated metadata. This provides powerful debugging capabilities and transparency into your agent operations.
|
||||
|
||||

|
||||
|
||||
|
||||
@@ -875,14 +875,13 @@ By exploring these examples, you can gain insights into how to leverage CrewAI F
|
||||
Also, check out our YouTube video on how to use flows in CrewAI below!
|
||||
|
||||
<iframe
|
||||
width="560"
|
||||
height="315"
|
||||
className="w-full aspect-video rounded-xl"
|
||||
src="https://www.youtube.com/embed/MTb5my6VOT8"
|
||||
title="YouTube video player"
|
||||
frameborder="0"
|
||||
title="CrewAI Flows overview"
|
||||
frameBorder="0"
|
||||
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
|
||||
referrerpolicy="strict-origin-when-cross-origin"
|
||||
allowfullscreen
|
||||
referrerPolicy="strict-origin-when-cross-origin"
|
||||
allowFullScreen
|
||||
></iframe>
|
||||
|
||||
## Running Flows
|
||||
@@ -898,6 +897,31 @@ flow = ExampleFlow()
|
||||
result = flow.kickoff()
|
||||
```
|
||||
|
||||
### Streaming Flow Execution
|
||||
|
||||
For real-time visibility into flow execution, you can enable streaming to receive output as it's generated:
|
||||
|
||||
```python
|
||||
class StreamingFlow(Flow):
|
||||
stream = True # Enable streaming
|
||||
|
||||
@start()
|
||||
def research(self):
|
||||
# Your flow implementation
|
||||
pass
|
||||
|
||||
# Iterate over streaming output
|
||||
flow = StreamingFlow()
|
||||
streaming = flow.kickoff()
|
||||
for chunk in streaming:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
# Access final result
|
||||
result = streaming.result
|
||||
```
|
||||
|
||||
Learn more about streaming in the [Streaming Flow Execution](/en/learn/streaming-flow-execution) guide.
|
||||
|
||||
### Using the CLI
|
||||
|
||||
Starting from version 0.103.0, you can run flows using the `crewai run` command:
|
||||
|
||||
@@ -388,8 +388,8 @@ crew = Crew(
|
||||
agents=[sales_agent, tech_agent, support_agent],
|
||||
tasks=[...],
|
||||
embedder={ # Fallback embedder for agents without their own
|
||||
"provider": "google",
|
||||
"config": {"model": "text-embedding-004"}
|
||||
"provider": "google-generativeai",
|
||||
"config": {"model_name": "gemini-embedding-001"}
|
||||
}
|
||||
)
|
||||
|
||||
@@ -629,9 +629,9 @@ agent = Agent(
|
||||
backstory="Expert researcher",
|
||||
knowledge_sources=[knowledge_source],
|
||||
embedder={
|
||||
"provider": "google",
|
||||
"provider": "google-generativeai",
|
||||
"config": {
|
||||
"model": "models/text-embedding-004",
|
||||
"model_name": "gemini-embedding-001",
|
||||
"api_key": "your-google-key"
|
||||
}
|
||||
}
|
||||
@@ -739,7 +739,7 @@ class KnowledgeMonitorListener(BaseEventListener):
|
||||
knowledge_monitor = KnowledgeMonitorListener()
|
||||
```
|
||||
|
||||
For more information on using events, see the [Event Listeners](https://docs.crewai.com/concepts/event-listener) documentation.
|
||||
For more information on using events, see the [Event Listeners](/en/concepts/event-listener) documentation.
|
||||
|
||||
### Custom Knowledge Sources
|
||||
|
||||
|
||||
@@ -7,7 +7,7 @@ mode: "wide"
|
||||
|
||||
## Overview
|
||||
|
||||
CrewAI integrates with multiple LLM providers through LiteLLM, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
|
||||
CrewAI integrates with multiple LLM providers through provider-native SDKs, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
|
||||
|
||||
|
||||
## What are LLMs?
|
||||
@@ -113,44 +113,104 @@ In this section, you'll find detailed examples that help you select, configure,
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="OpenAI">
|
||||
Set the following environment variables in your `.env` file:
|
||||
CrewAI provides native integration with OpenAI through the OpenAI Python SDK.
|
||||
|
||||
```toml Code
|
||||
# Required
|
||||
OPENAI_API_KEY=sk-...
|
||||
|
||||
# Optional
|
||||
OPENAI_API_BASE=<custom-base-url>
|
||||
OPENAI_ORGANIZATION=<your-org-id>
|
||||
OPENAI_BASE_URL=<custom-base-url>
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
**Basic Usage:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="openai/gpt-4", # call model by provider/model_name
|
||||
temperature=0.8,
|
||||
max_tokens=150,
|
||||
model="openai/gpt-4o",
|
||||
api_key="your-api-key", # Or set OPENAI_API_KEY
|
||||
temperature=0.7,
|
||||
max_tokens=4000
|
||||
)
|
||||
```
|
||||
|
||||
**Advanced Configuration:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="openai/gpt-4o",
|
||||
api_key="your-api-key",
|
||||
base_url="https://api.openai.com/v1", # Optional custom endpoint
|
||||
organization="org-...", # Optional organization ID
|
||||
project="proj_...", # Optional project ID
|
||||
temperature=0.7,
|
||||
max_tokens=4000,
|
||||
max_completion_tokens=4000, # For newer models
|
||||
top_p=0.9,
|
||||
frequency_penalty=0.1,
|
||||
presence_penalty=0.1,
|
||||
stop=["END"],
|
||||
seed=42
|
||||
seed=42, # For reproducible outputs
|
||||
stream=True, # Enable streaming
|
||||
timeout=60.0, # Request timeout in seconds
|
||||
max_retries=3, # Maximum retry attempts
|
||||
logprobs=True, # Return log probabilities
|
||||
top_logprobs=5, # Number of most likely tokens
|
||||
reasoning_effort="medium" # For o1 models: low, medium, high
|
||||
)
|
||||
```
|
||||
|
||||
OpenAI is one of the leading providers of LLMs with a wide range of models and features.
|
||||
**Structured Outputs:**
|
||||
```python Code
|
||||
from pydantic import BaseModel
|
||||
from crewai import LLM
|
||||
|
||||
class ResponseFormat(BaseModel):
|
||||
name: str
|
||||
age: int
|
||||
summary: str
|
||||
|
||||
llm = LLM(
|
||||
model="openai/gpt-4o",
|
||||
response_format=ResponseFormat
)
|
||||
```
|
||||
|
||||
**Supported Environment Variables:**
|
||||
- `OPENAI_API_KEY`: Your OpenAI API key (required)
|
||||
- `OPENAI_BASE_URL`: Custom base URL for OpenAI API (optional)
|
||||
|
||||
**Features:**
|
||||
- Native function calling support (except o1 models)
|
||||
- Structured outputs with JSON schema
|
||||
- Streaming support for real-time responses
|
||||
- Token usage tracking
|
||||
- Stop sequences support (except o1 models)
|
||||
- Log probabilities for token-level insights
|
||||
- Reasoning effort control for o1 models
|
||||
|
||||
**Supported Models:**
|
||||
|
||||
| Model | Context Window | Best For |
|
||||
|---------------------|------------------|-----------------------------------------------|
|
||||
| GPT-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
|
||||
| GPT-4 Turbo | 128,000 tokens | Long-form content, document analysis |
|
||||
| GPT-4o & GPT-4o-mini | 128,000 tokens | Cost-effective large context processing |
|
||||
| o3-mini | 200,000 tokens | Fast reasoning, complex reasoning |
|
||||
| o1-mini | 128,000 tokens | Fast reasoning, complex reasoning |
|
||||
| o1-preview | 128,000 tokens | Fast reasoning, complex reasoning |
|
||||
| o1 | 200,000 tokens | Fast reasoning, complex reasoning |
|
||||
| gpt-4.1 | 1M tokens | Latest model with enhanced capabilities |
|
||||
| gpt-4.1-mini | 1M tokens | Efficient version with large context |
|
||||
| gpt-4.1-nano | 1M tokens | Ultra-efficient variant |
|
||||
| gpt-4o | 128,000 tokens | Optimized for speed and intelligence |
|
||||
| gpt-4o-mini | 200,000 tokens | Cost-effective with large context |
|
||||
| gpt-4-turbo | 128,000 tokens | Long-form content, document analysis |
|
||||
| gpt-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
|
||||
| o1 | 200,000 tokens | Advanced reasoning, complex problem-solving |
|
||||
| o1-preview | 128,000 tokens | Preview of reasoning capabilities |
|
||||
| o1-mini | 128,000 tokens | Efficient reasoning model |
|
||||
| o3-mini | 200,000 tokens | Lightweight reasoning model |
|
||||
| o4-mini | 200,000 tokens | Next-gen efficient reasoning |
|
||||
|
||||
**Note:** To use OpenAI, install the required dependencies:
|
||||
```bash
|
||||
uv add "crewai[openai]"
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Meta-Llama">
|
||||
@@ -187,69 +247,230 @@ In this section, you'll find detailed examples that help you select, configure,
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Anthropic">
|
||||
CrewAI provides native integration with Anthropic through the Anthropic Python SDK.
|
||||
|
||||
```toml Code
|
||||
# Required
|
||||
ANTHROPIC_API_KEY=sk-ant-...
|
||||
|
||||
# Optional
|
||||
ANTHROPIC_API_BASE=<custom-base-url>
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
**Basic Usage:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="anthropic/claude-3-sonnet-20240229-v1:0",
|
||||
temperature=0.7
|
||||
model="anthropic/claude-3-5-sonnet-20241022",
|
||||
api_key="your-api-key", # Or set ANTHROPIC_API_KEY
|
||||
max_tokens=4096 # Required for Anthropic
|
||||
)
|
||||
```
|
||||
|
||||
**Advanced Configuration:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="anthropic/claude-3-5-sonnet-20241022",
|
||||
api_key="your-api-key",
|
||||
base_url="https://api.anthropic.com", # Optional custom endpoint
|
||||
temperature=0.7,
|
||||
max_tokens=4096, # Required parameter
|
||||
top_p=0.9,
|
||||
stop_sequences=["END", "STOP"], # Anthropic uses stop_sequences
|
||||
stream=True, # Enable streaming
|
||||
timeout=60.0, # Request timeout in seconds
|
||||
max_retries=3 # Maximum retry attempts
|
||||
)
|
||||
```
|
||||
|
||||
**Extended Thinking (Claude Sonnet 4 and Beyond):**
|
||||
|
||||
CrewAI supports Anthropic's Extended Thinking feature, which allows Claude to think through problems in a more human-like way before responding. This is particularly useful for complex reasoning, analysis, and problem-solving tasks.
|
||||
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
# Enable extended thinking with default settings
|
||||
llm = LLM(
|
||||
model="anthropic/claude-sonnet-4",
|
||||
thinking={"type": "enabled"},
|
||||
max_tokens=10000
|
||||
)
|
||||
|
||||
# Configure thinking with budget control
|
||||
llm = LLM(
|
||||
model="anthropic/claude-sonnet-4",
|
||||
thinking={
|
||||
"type": "enabled",
|
||||
"budget_tokens": 5000 # Limit thinking tokens
|
||||
},
|
||||
max_tokens=10000
|
||||
)
|
||||
```
|
||||
|
||||
**Thinking Configuration Options:**
|
||||
- `type`: Set to `"enabled"` to activate extended thinking mode
|
||||
- `budget_tokens` (optional): Maximum tokens to use for thinking (helps control costs)
|
||||
|
||||
**Models Supporting Extended Thinking:**
|
||||
- `claude-sonnet-4` and newer models
|
||||
- `claude-3-7-sonnet` (with extended thinking capabilities)
|
||||
|
||||
**When to Use Extended Thinking:**
|
||||
- Complex reasoning and multi-step problem solving
|
||||
- Mathematical calculations and proofs
|
||||
- Code analysis and debugging
|
||||
- Strategic planning and decision making
|
||||
- Research and analytical tasks
|
||||
|
||||
**Note:** Extended thinking consumes additional tokens but can significantly improve response quality for complex tasks.
|
||||
|
||||
**Supported Environment Variables:**
|
||||
- `ANTHROPIC_API_KEY`: Your Anthropic API key (required)
|
||||
|
||||
**Features:**
|
||||
- Native tool use support for Claude 3+ models
|
||||
- Extended Thinking support for Claude Sonnet 4+
|
||||
- Streaming support for real-time responses
|
||||
- Automatic system message handling
|
||||
- Stop sequences for controlled output
|
||||
- Token usage tracking
|
||||
- Multi-turn tool use conversations
|
||||
|
||||
**Important Notes:**
|
||||
- `max_tokens` is a **required** parameter for all Anthropic models
|
||||
- Claude uses `stop_sequences` instead of `stop`
|
||||
- System messages are handled separately from conversation messages
|
||||
- First message must be from the user (automatically handled)
|
||||
- Messages must alternate between user and assistant
|
||||
|
||||
**Supported Models:**
|
||||
|
||||
| Model | Context Window | Best For |
|
||||
|------------------------------|----------------|-----------------------------------------------|
|
||||
| claude-sonnet-4 | 200,000 tokens | Latest with extended thinking capabilities |
|
||||
| claude-3-7-sonnet | 200,000 tokens | Advanced reasoning and agentic tasks |
|
||||
| claude-3-5-sonnet-20241022 | 200,000 tokens | Latest Sonnet with best performance |
|
||||
| claude-3-5-haiku | 200,000 tokens | Fast, compact model for quick responses |
|
||||
| claude-3-opus | 200,000 tokens | Most capable for complex tasks |
|
||||
| claude-3-sonnet | 200,000 tokens | Balanced intelligence and speed |
|
||||
| claude-3-haiku | 200,000 tokens | Fastest for simple tasks |
|
||||
| claude-2.1 | 200,000 tokens | Extended context, reduced hallucinations |
|
||||
| claude-2 | 100,000 tokens | Versatile model for various tasks |
|
||||
| claude-instant | 100,000 tokens | Fast, cost-effective for everyday tasks |
|
||||
|
||||
**Note:** To use Anthropic, install the required dependencies:
|
||||
```bash
|
||||
uv add "crewai[anthropic]"
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Google (Gemini API)">
|
||||
Set your API key in your `.env` file. If you need a key, or need to find an
|
||||
existing key, check [AI Studio](https://aistudio.google.com/apikey).
|
||||
CrewAI provides native integration with Google Gemini through the Google Gen AI Python SDK.
|
||||
|
||||
Set your API key in your `.env` file. If you need a key, check [AI Studio](https://aistudio.google.com/apikey).
|
||||
|
||||
```toml .env
|
||||
# https://ai.google.dev/gemini-api/docs/api-key
|
||||
# Required (one of the following)
|
||||
GOOGLE_API_KEY=<your-api-key>
|
||||
GEMINI_API_KEY=<your-api-key>
|
||||
|
||||
# Optional - for Vertex AI
|
||||
GOOGLE_CLOUD_PROJECT=<your-project-id>
|
||||
GOOGLE_CLOUD_LOCATION=<location> # Defaults to us-central1
|
||||
GOOGLE_GENAI_USE_VERTEXAI=true # Set to use Vertex AI
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
**Basic Usage:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="gemini/gemini-2.0-flash",
|
||||
temperature=0.7,
|
||||
api_key="your-api-key", # Or set GOOGLE_API_KEY/GEMINI_API_KEY
|
||||
temperature=0.7
|
||||
)
|
||||
```
|
||||
|
||||
### Gemini models
|
||||
**Advanced Configuration:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="gemini/gemini-2.5-flash",
|
||||
api_key="your-api-key",
|
||||
temperature=0.7,
|
||||
top_p=0.9,
|
||||
top_k=40, # Top-k sampling parameter
|
||||
max_output_tokens=8192,
|
||||
stop_sequences=["END", "STOP"],
|
||||
stream=True, # Enable streaming
|
||||
safety_settings={
|
||||
"HARM_CATEGORY_HARASSMENT": "BLOCK_NONE",
|
||||
"HARM_CATEGORY_HATE_SPEECH": "BLOCK_NONE"
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
**Vertex AI Configuration:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="gemini/gemini-1.5-pro",
|
||||
project="your-gcp-project-id",
|
||||
location="us-central1" # GCP region
|
||||
)
|
||||
```
|
||||
|
||||
**Supported Environment Variables:**
|
||||
- `GOOGLE_API_KEY` or `GEMINI_API_KEY`: Your Google API key (required for Gemini API)
|
||||
- `GOOGLE_CLOUD_PROJECT`: Google Cloud project ID (for Vertex AI)
|
||||
- `GOOGLE_CLOUD_LOCATION`: GCP location (defaults to `us-central1`)
|
||||
- `GOOGLE_GENAI_USE_VERTEXAI`: Set to `true` to use Vertex AI
|
||||
|
||||
**Features:**
|
||||
- Native function calling support for Gemini 1.5+ and 2.x models
|
||||
- Streaming support for real-time responses
|
||||
- Multimodal capabilities (text, images, video)
|
||||
- Safety settings configuration
|
||||
- Support for both Gemini API and Vertex AI
|
||||
- Automatic system instruction handling
|
||||
- Token usage tracking
|
||||
|
||||
**Gemini Models:**
|
||||
|
||||
Google offers a range of powerful models optimized for different use cases.
|
||||
|
||||
| Model | Context Window | Best For |
|
||||
|--------------------------------|----------------|-------------------------------------------------------------------|
|
||||
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
|
||||
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
|
||||
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
|
||||
| gemini-2.5-flash | 1M tokens | Adaptive thinking, cost efficiency |
|
||||
| gemini-2.5-pro | 1M tokens | Enhanced thinking and reasoning, multimodal understanding |
|
||||
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking |
|
||||
| gemini-2.0-flash-thinking | 32,768 tokens | Advanced reasoning with thinking process |
|
||||
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
|
||||
| gemini-1.5-pro | 2M tokens | Best performing, logical reasoning, coding |
|
||||
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
|
||||
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
|
||||
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
|
||||
| gemini-1.5-flash-8b | 1M tokens | Fastest, most cost-efficient |
|
||||
| gemini-1.0-pro | 32,768 tokens | Earlier generation model |
|
||||
|
||||
**Gemma Models:**
|
||||
|
||||
The Gemini API also supports [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.
|
||||
|
||||
| Model | Context Window | Best For |
|
||||
|----------------|----------------|------------------------------------|
|
||||
| gemma-3-1b | 32,000 tokens | Ultra-lightweight tasks |
|
||||
| gemma-3-4b | 128,000 tokens | Efficient general-purpose tasks |
|
||||
| gemma-3-12b | 128,000 tokens | Balanced performance and efficiency|
|
||||
| gemma-3-27b | 128,000 tokens | High-performance tasks |
|
||||
|
||||
**Note:** To use Google Gemini, install the required dependencies:
|
||||
```bash
|
||||
uv add "crewai[google-genai]"
|
||||
```
|
||||
|
||||
The full list of models is available in the [Gemini model docs](https://ai.google.dev/gemini-api/docs/models).
|
||||
|
||||
### Gemma
|
||||
|
||||
The Gemini API also allows you to use your API key to access [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.
|
||||
|
||||
| Model | Context Window |
|
||||
|----------------|----------------|
|
||||
| gemma-3-1b-it | 32k tokens |
|
||||
| gemma-3-4b-it | 32k tokens |
|
||||
| gemma-3-12b-it | 32k tokens |
|
||||
| gemma-3-27b-it | 128k tokens |
|
||||
|
||||
</Accordion>
|
||||
<Accordion title="Google (Vertex AI)">
|
||||
Get credentials from your Google Cloud Console and save it to a JSON file, then load it with the following code:
|
||||
@@ -291,43 +512,146 @@ In this section, you'll find detailed examples that help you select, configure,
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Azure">
|
||||
CrewAI provides native integration with Azure AI Inference and Azure OpenAI through the Azure AI Inference Python SDK.
|
||||
|
||||
```toml Code
|
||||
# Required
|
||||
AZURE_API_KEY=<your-api-key>
|
||||
AZURE_API_BASE=<your-resource-url>
|
||||
AZURE_API_VERSION=<api-version>
|
||||
AZURE_ENDPOINT=<your-endpoint-url>
|
||||
|
||||
# Optional
|
||||
AZURE_AD_TOKEN=<your-azure-ad-token>
|
||||
AZURE_API_TYPE=<your-azure-api-type>
|
||||
AZURE_API_VERSION=<api-version> # Defaults to 2024-06-01
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
**Endpoint URL Formats:**
|
||||
|
||||
For Azure OpenAI deployments:
|
||||
```
|
||||
https://<resource-name>.openai.azure.com/openai/deployments/<deployment-name>
|
||||
```
|
||||
|
||||
For Azure AI Inference endpoints:
|
||||
```
|
||||
https://<resource-name>.inference.azure.com
|
||||
```
|
||||
|
||||
**Basic Usage:**
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="azure/gpt-4",
|
||||
api_version="2023-05-15"
|
||||
api_key="<your-api-key>", # Or set AZURE_API_KEY
|
||||
endpoint="<your-endpoint-url>",
|
||||
api_version="2024-06-01"
|
||||
)
|
||||
```
|
||||
|
||||
**Advanced Configuration:**
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="azure/gpt-4o",
|
||||
temperature=0.7,
|
||||
max_tokens=4000,
|
||||
top_p=0.9,
|
||||
frequency_penalty=0.0,
|
||||
presence_penalty=0.0,
|
||||
stop=["END"],
|
||||
stream=True,
|
||||
timeout=60.0,
|
||||
max_retries=3
|
||||
)
|
||||
```
|
||||
|
||||
**Supported Environment Variables:**
|
||||
- `AZURE_API_KEY`: Your Azure API key (required)
|
||||
- `AZURE_ENDPOINT`: Your Azure endpoint URL (required, also checks `AZURE_OPENAI_ENDPOINT` and `AZURE_API_BASE`)
|
||||
- `AZURE_API_VERSION`: API version (optional, defaults to `2024-06-01`)
|
||||
|
||||
**Features:**
|
||||
- Native function calling support for Azure OpenAI models (gpt-4, gpt-4o, gpt-3.5-turbo, etc.)
|
||||
- Streaming support for real-time responses
|
||||
- Automatic endpoint URL validation and correction
|
||||
- Comprehensive error handling with retry logic
|
||||
- Token usage tracking
|
||||
|
||||
**Note:** To use Azure AI Inference, install the required dependencies:
|
||||
```bash
|
||||
uv add "crewai[azure-ai-inference]"
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="AWS Bedrock">
|
||||
CrewAI provides native integration with AWS Bedrock through the boto3 SDK using the Converse API.
|
||||
|
||||
```toml Code
|
||||
# Required
|
||||
AWS_ACCESS_KEY_ID=<your-access-key>
|
||||
AWS_SECRET_ACCESS_KEY=<your-secret-key>
|
||||
AWS_DEFAULT_REGION=<your-region>
|
||||
|
||||
# Optional
|
||||
AWS_SESSION_TOKEN=<your-session-token> # For temporary credentials
|
||||
AWS_DEFAULT_REGION=<your-region> # Defaults to us-east-1
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
**Basic Usage:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
|
||||
model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
|
||||
region_name="us-east-1"
|
||||
)
|
||||
```
|
||||
|
||||
Before using Amazon Bedrock, make sure you have boto3 installed in your environment
|
||||
**Advanced Configuration:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API, enabling secure and responsible AI application development.
|
||||
llm = LLM(
|
||||
model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
|
||||
aws_access_key_id="your-access-key", # Or set AWS_ACCESS_KEY_ID
|
||||
aws_secret_access_key="your-secret-key", # Or set AWS_SECRET_ACCESS_KEY
|
||||
aws_session_token="your-session-token", # For temporary credentials
|
||||
region_name="us-east-1",
|
||||
temperature=0.7,
|
||||
max_tokens=4096,
|
||||
top_p=0.9,
|
||||
top_k=250, # For Claude models
|
||||
stop_sequences=["END", "STOP"],
|
||||
stream=True, # Enable streaming
|
||||
guardrail_config={ # Optional content filtering
|
||||
"guardrailIdentifier": "your-guardrail-id",
|
||||
"guardrailVersion": "1"
|
||||
},
|
||||
additional_model_request_fields={ # Model-specific parameters
|
||||
"top_k": 250
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
**Supported Environment Variables:**
|
||||
- `AWS_ACCESS_KEY_ID`: AWS access key (required)
|
||||
- `AWS_SECRET_ACCESS_KEY`: AWS secret key (required)
|
||||
- `AWS_SESSION_TOKEN`: AWS session token for temporary credentials (optional)
|
||||
- `AWS_DEFAULT_REGION`: AWS region (defaults to `us-east-1`)
|
||||
|
||||
**Features:**
|
||||
- Native tool calling support via Converse API
|
||||
- Streaming and non-streaming responses
|
||||
- Comprehensive error handling with retry logic
|
||||
- Guardrail configuration for content filtering
|
||||
- Model-specific parameters via `additional_model_request_fields`
|
||||
- Token usage tracking and stop reason logging
|
||||
- Support for all Bedrock foundation models
|
||||
- Automatic conversation format handling
|
||||
|
||||
**Important Notes:**
|
||||
- Uses the modern Converse API for unified model access
|
||||
- Automatic handling of model-specific conversation requirements
|
||||
- System messages are handled separately from conversation
|
||||
- First message must be from user (automatically handled)
|
||||
- Some models (like Cohere) require conversation to end with user message
|
||||
|
||||
[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API.
|
||||
|
||||
| Model | Context Window | Best For |
|
||||
|-------------------------|----------------------|-------------------------------------------------------------------|
|
||||
@@ -357,7 +681,12 @@ In this section, you'll find detailed examples that help you select, configure,
|
||||
| Jamba-Instruct | Up to 256k tokens | Model with extended context window optimized for cost-effective text generation, summarization, and Q&A. |
|
||||
| Mistral 7B Instruct | Up to 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
|
||||
| Mistral 8x7B Instruct | Up to 32k tokens | An MOE LLM that follows instructions, completes requests, and generates creative text. |
|
||||
| DeepSeek R1 | 32,768 tokens | Advanced reasoning model |
|
||||
|
||||
**Note:** To use AWS Bedrock, install the required dependencies:
|
||||
```bash
|
||||
uv add "crewai[bedrock]"
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Amazon SageMaker">
|
||||
@@ -750,7 +1079,7 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
|
||||
```
|
||||
|
||||
<Tip>
|
||||
[Click here](https://docs.crewai.com/concepts/event-listener#event-listeners) for more details
|
||||
[Click here](/en/concepts/event-listener#event-listeners) for more details
|
||||
</Tip>
|
||||
</Tab>
|
||||
|
||||
@@ -804,6 +1133,50 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## Async LLM Calls
|
||||
|
||||
CrewAI supports asynchronous LLM calls for improved performance and concurrency in your AI workflows. Async calls allow you to run multiple LLM requests concurrently without blocking, making them ideal for high-throughput applications and parallel agent operations.
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Basic Usage">
|
||||
Use the `acall` method for asynchronous LLM requests:
|
||||
|
||||
```python
|
||||
import asyncio
|
||||
from crewai import LLM
|
||||
|
||||
async def main():
|
||||
llm = LLM(model="openai/gpt-4o")
|
||||
|
||||
# Single async call
|
||||
response = await llm.acall("What is the capital of France?")
|
||||
print(response)
|
||||
|
||||
asyncio.run(main())
|
||||
```
|
||||
|
||||
The `acall` method supports all the same parameters as the synchronous `call` method, including messages, tools, and callbacks.
|
||||
</Tab>
|
||||
|
||||
<Tab title="With Streaming">
|
||||
Combine async calls with streaming for real-time concurrent responses:
|
||||
|
||||
```python
|
||||
import asyncio
|
||||
from crewai import LLM
|
||||
|
||||
async def stream_async():
|
||||
llm = LLM(model="openai/gpt-4o", stream=True)
|
||||
|
||||
response = await llm.acall("Write a short story about AI")
|
||||
|
||||
print(response)
|
||||
|
||||
asyncio.run(stream_async())
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
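Because `acall` returns an awaitable, you can also fan several independent requests out concurrently with `asyncio.gather` — a rough sketch with placeholder prompts:

```python
import asyncio

from crewai import LLM

async def fan_out():
    llm = LLM(model="openai/gpt-4o")

    prompts = [
        "Summarize the state of AI in healthcare",
        "Summarize the state of AI in finance",
        "Summarize the state of AI in education",
    ]

    # Run the three requests concurrently instead of one after another
    responses = await asyncio.gather(*(llm.acall(p) for p in prompts))

    for prompt, response in zip(prompts, responses):
        print(f"{prompt}\n{response}\n")

asyncio.run(fan_out())
```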
## Structured LLM Calls
|
||||
|
||||
CrewAI supports structured responses from LLM calls by allowing you to define a `response_format` using a Pydantic model. This enables the framework to automatically parse and validate the output, making it easier to integrate the response into your application without manual post-processing.
|
||||
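A minimal sketch of a structured call — the Pydantic model, prompt, and printed result are illustrative only:

```python
from pydantic import BaseModel

from crewai import LLM

class CityInfo(BaseModel):
    name: str
    country: str
    population: int

# Passing a Pydantic model as response_format asks the framework to
# parse and validate the LLM output against that schema
llm = LLM(model="openai/gpt-4o", response_format=CityInfo)

result = llm.call("Give me basic facts about Tokyo")
print(result)
```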
@@ -899,7 +1272,7 @@ Learn how to get the most out of your LLM configuration:
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Drop Additional Parameters">
|
||||
CrewAI internally uses Litellm for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
|
||||
CrewAI internally uses native SDKs for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
|
||||
For example, if you don't need to send the <code>stop</code> parameter, you can simply omit it from your LLM call:
|
||||
|
||||
```python
|
||||
@@ -915,6 +1288,52 @@ Learn how to get the most out of your LLM configuration:
|
||||
)
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Transport Interceptors">
|
||||
CrewAI provides message interceptors for several providers, allowing you to hook into request/response cycles at the transport layer.
|
||||
|
||||
**Supported Providers:**
|
||||
- ✅ OpenAI
|
||||
- ✅ Anthropic
|
||||
|
||||
**Basic Usage:**
|
||||
```python
|
||||
import httpx
|
||||
from crewai import LLM
|
||||
from crewai.llms.hooks import BaseInterceptor
|
||||
|
||||
class CustomInterceptor(BaseInterceptor[httpx.Request, httpx.Response]):
|
||||
"""Custom interceptor to modify requests and responses."""
|
||||
|
||||
def on_outbound(self, request: httpx.Request) -> httpx.Request:
|
||||
"""Print request before sending to the LLM provider."""
|
||||
print(request)
|
||||
return request
|
||||
|
||||
def on_inbound(self, response: httpx.Response) -> httpx.Response:
|
||||
"""Process response after receiving from the LLM provider."""
|
||||
print(f"Status: {response.status_code}")
|
||||
print(f"Response time: {response.elapsed}")
|
||||
return response
|
||||
|
||||
# Use the interceptor with an LLM
|
||||
llm = LLM(
|
||||
model="openai/gpt-4o",
|
||||
interceptor=CustomInterceptor()
|
||||
)
|
||||
```
|
||||
|
||||
**Important Notes:**
|
||||
- Both methods must return an object of the same type as the one they received.
|
||||
- Modifying received objects may result in unexpected behavior or application crashes.
|
||||
- Not all providers support interceptors; check the supported providers list above.
|
||||
|
||||
<Info>
|
||||
Interceptors operate at the transport layer. This is particularly useful for:
|
||||
- Message transformation and filtering
|
||||
- Debugging API interactions
|
||||
</Info>
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
## Common Issues and Solutions
|
||||
|
||||
@@ -341,7 +341,7 @@ crew = Crew(
|
||||
embedder={
|
||||
"provider": "openai",
|
||||
"config": {
|
||||
"model": "text-embedding-3-small" # or "text-embedding-3-large"
|
||||
"model_name": "text-embedding-3-small" # or "text-embedding-3-large"
|
||||
}
|
||||
}
|
||||
)
|
||||
@@ -353,7 +353,7 @@ crew = Crew(
|
||||
"provider": "openai",
|
||||
"config": {
|
||||
"api_key": "your-openai-api-key", # Optional: override env var
|
||||
"model": "text-embedding-3-large",
|
||||
"model_name": "text-embedding-3-large",
|
||||
"dimensions": 1536, # Optional: reduce dimensions for smaller storage
|
||||
"organization_id": "your-org-id" # Optional: for organization accounts
|
||||
}
|
||||
@@ -375,7 +375,7 @@ crew = Crew(
|
||||
"api_base": "https://your-resource.openai.azure.com/",
|
||||
"api_type": "azure",
|
||||
"api_version": "2023-05-15",
|
||||
"model": "text-embedding-3-small",
|
||||
"model_name": "text-embedding-3-small",
|
||||
"deployment_id": "your-deployment-name" # Azure deployment name
|
||||
}
|
||||
}
|
||||
@@ -390,10 +390,10 @@ Use Google's text embedding models for integration with Google Cloud services.
|
||||
crew = Crew(
|
||||
memory=True,
|
||||
embedder={
|
||||
"provider": "google",
|
||||
"provider": "google-generativeai",
|
||||
"config": {
|
||||
"api_key": "your-google-api-key",
|
||||
"model": "text-embedding-004" # or "text-embedding-preview-0409"
|
||||
"model_name": "gemini-embedding-001" # or "text-embedding-005", "text-multilingual-embedding-002"
|
||||
}
|
||||
}
|
||||
)
|
||||
@@ -461,7 +461,7 @@ crew = Crew(
|
||||
"provider": "cohere",
|
||||
"config": {
|
||||
"api_key": "your-cohere-api-key",
|
||||
"model": "embed-english-v3.0" # or "embed-multilingual-v3.0"
|
||||
"model_name": "embed-english-v3.0" # or "embed-multilingual-v3.0"
|
||||
}
|
||||
}
|
||||
)
|
||||
@@ -478,7 +478,7 @@ crew = Crew(
|
||||
"provider": "voyageai",
|
||||
"config": {
|
||||
"api_key": "your-voyage-api-key",
|
||||
"model": "voyage-large-2", # or "voyage-code-2" for code
|
||||
"model": "voyage-3", # or "voyage-3-lite", "voyage-code-3"
|
||||
"input_type": "document" # or "query"
|
||||
}
|
||||
}
|
||||
@@ -515,8 +515,7 @@ crew = Crew(
|
||||
"provider": "huggingface",
|
||||
"config": {
|
||||
"api_key": "your-hf-token", # Optional for public models
|
||||
"model": "sentence-transformers/all-MiniLM-L6-v2",
|
||||
"api_url": "https://api-inference.huggingface.co" # or your custom endpoint
|
||||
"model": "sentence-transformers/all-MiniLM-L6-v2"
|
||||
}
|
||||
}
|
||||
)
|
||||
@@ -912,10 +911,10 @@ crew = Crew(
|
||||
crew = Crew(
|
||||
memory=True,
|
||||
embedder={
|
||||
"provider": "google",
|
||||
"provider": "google-generativeai",
|
||||
"config": {
|
||||
"api_key": "your-api-key",
|
||||
"model": "text-embedding-004"
|
||||
"model_name": "gemini-embedding-001"
|
||||
}
|
||||
}
|
||||
)
|
||||
|
||||
@@ -14,7 +14,7 @@ Tasks provide all necessary details for execution, such as a description, the ag
|
||||
Tasks within CrewAI can be collaborative, requiring multiple agents to work together. This is managed through the task properties and orchestrated by the Crew's process, enhancing teamwork and efficiency.
|
||||
|
||||
<Note type="info" title="Enterprise Enhancement: Visual Task Builder">
|
||||
CrewAI Enterprise includes a Visual Task Builder in Crew Studio that simplifies complex task creation and chaining. Design your task flows visually and test them in real-time without writing code.
|
||||
CrewAI AOP includes a Visual Task Builder in Crew Studio that simplifies complex task creation and chaining. Design your task flows visually and test them in real-time without writing code.
|
||||
|
||||

|
||||
|
||||
@@ -60,6 +60,7 @@ crew = Crew(
|
||||
| **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | A Pydantic model for task output. |
|
||||
| **Callback** _(optional)_ | `callback` | `Optional[Any]` | Function/object to be executed after task completion. |
|
||||
| **Guardrail** _(optional)_ | `guardrail` | `Optional[Callable]` | Function to validate task output before proceeding to the next task. |
|
||||
| **Guardrails** _(optional)_ | `guardrails` | `Optional[List[Callable] \| List[str]]` | List of guardrails to validate task output before proceeding to the next task. |
|
||||
| **Guardrail Max Retries** _(optional)_ | `guardrail_max_retries` | `Optional[int]` | Maximum number of retries when guardrail validation fails. Defaults to 3. |
|
||||
|
||||
<Note type="warning" title="Deprecated: max_retries">
|
||||
@@ -223,6 +224,7 @@ By default, the `TaskOutput` will only include the `raw` output. A `TaskOutput`
|
||||
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the task. |
|
||||
| **Agent** | `agent` | `str` | The agent that executed the task. |
|
||||
| **Output Format** | `output_format` | `OutputFormat` | The format of the task output, with options including RAW, JSON, and Pydantic. The default is RAW. |
|
||||
| **Messages** | `messages` | `list[LLMMessage]` | The messages from the last task execution. |
|
||||
|
||||
### Task Methods and Properties
|
||||
|
||||
@@ -341,7 +343,11 @@ Task guardrails provide a way to validate and transform task outputs before they
|
||||
are passed to the next task. This feature helps ensure data quality and provides
|
||||
feedback to agents when their output doesn't meet specific criteria.
|
||||
|
||||
Guardrails are implemented as Python functions that contain custom validation logic, giving you complete control over the validation process and ensuring reliable, deterministic results.
|
||||
CrewAI supports two types of guardrails:
|
||||
|
||||
1. **Function-based guardrails**: Python functions with custom validation logic, giving you complete control over the validation process and ensuring reliable, deterministic results.
|
||||
|
||||
2. **LLM-based guardrails**: String descriptions that use the agent's LLM to validate outputs based on natural language criteria. These are ideal for complex or subjective validation requirements.
|
||||
|
||||
### Function-Based Guardrails
|
||||
|
||||
@@ -355,12 +361,12 @@ def validate_blog_content(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
"""Validate blog content meets requirements."""
|
||||
try:
|
||||
# Check word count
|
||||
word_count = len(result.split())
|
||||
word_count = len(result.raw.split())
|
||||
if word_count > 200:
|
||||
return (False, "Blog content exceeds 200 words")
|
||||
|
||||
# Additional validation logic here
|
||||
return (True, result.strip())
|
||||
return (True, result.raw.strip())
|
||||
except Exception as e:
|
||||
return (False, "Unexpected error during validation")
|
||||
|
||||
@@ -372,6 +378,147 @@ blog_task = Task(
|
||||
)
|
||||
```
|
||||
|
||||
### LLM-Based Guardrails (String Descriptions)
|
||||
|
||||
Instead of writing custom validation functions, you can use string descriptions that leverage LLM-based validation. When you provide a string to the `guardrail` or `guardrails` parameter, CrewAI automatically creates an `LLMGuardrail` that uses the agent's LLM to validate the output based on your description.
|
||||
|
||||
**Requirements**:
|
||||
- The task must have an `agent` assigned (the guardrail uses the agent's LLM)
|
||||
- Provide a clear, descriptive string explaining the validation criteria
|
||||
|
||||
```python Code
|
||||
from crewai import Task
|
||||
|
||||
# Single LLM-based guardrail
|
||||
blog_task = Task(
|
||||
description="Write a blog post about AI",
|
||||
expected_output="A blog post under 200 words",
|
||||
agent=blog_agent,
|
||||
guardrail="The blog post must be under 200 words and contain no technical jargon"
|
||||
)
|
||||
```
|
||||
|
||||
LLM-based guardrails are particularly useful for:
|
||||
- **Complex validation logic** that's difficult to express programmatically
|
||||
- **Subjective criteria** like tone, style, or quality assessments
|
||||
- **Natural language requirements** that are easier to describe than code
|
||||
|
||||
The LLM guardrail will:
|
||||
1. Analyze the task output against your description
|
||||
2. Return `(True, output)` if the output complies with the criteria
|
||||
3. Return `(False, feedback)` with specific feedback if validation fails
|
||||
|
||||
**Example with detailed validation criteria**:
|
||||
|
||||
```python Code
|
||||
research_task = Task(
|
||||
description="Research the latest developments in quantum computing",
|
||||
expected_output="A comprehensive research report",
|
||||
agent=researcher_agent,
|
||||
guardrail="""
|
||||
The research report must:
|
||||
- Be at least 1000 words long
|
||||
- Include at least 5 credible sources
|
||||
- Cover both technical and practical applications
|
||||
- Be written in a professional, academic tone
|
||||
- Avoid speculation or unverified claims
|
||||
"""
|
||||
)
|
||||
```
|
||||
|
||||
### Multiple Guardrails
|
||||
|
||||
You can apply multiple guardrails to a task using the `guardrails` parameter. Multiple guardrails are executed sequentially, with each guardrail receiving the output from the previous one. This allows you to chain validation and transformation steps.
|
||||
|
||||
The `guardrails` parameter accepts:
|
||||
- A list of guardrail functions or string descriptions
|
||||
- A single guardrail function or string (same as `guardrail`)
|
||||
|
||||
**Note**: If `guardrails` is provided, it takes precedence over `guardrail`. The `guardrail` parameter will be ignored when `guardrails` is set.
|
||||
|
||||
```python Code
|
||||
from typing import Tuple, Any
|
||||
from crewai import TaskOutput, Task
|
||||
|
||||
def validate_word_count(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
"""Validate word count is within limits."""
|
||||
word_count = len(result.raw.split())
|
||||
if word_count < 100:
|
||||
return (False, f"Content too short: {word_count} words. Need at least 100 words.")
|
||||
if word_count > 500:
|
||||
return (False, f"Content too long: {word_count} words. Maximum is 500 words.")
|
||||
return (True, result.raw)
|
||||
|
||||
def validate_no_profanity(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
"""Check for inappropriate language."""
|
||||
profanity_words = ["badword1", "badword2"] # Example list
|
||||
content_lower = result.raw.lower()
|
||||
for word in profanity_words:
|
||||
if word in content_lower:
|
||||
return (False, f"Inappropriate language detected: {word}")
|
||||
return (True, result.raw)
|
||||
|
||||
def format_output(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
"""Format and clean the output."""
|
||||
formatted = result.raw.strip()
|
||||
# Capitalize first letter
|
||||
formatted = formatted[0].upper() + formatted[1:] if formatted else formatted
|
||||
return (True, formatted)
|
||||
|
||||
# Apply multiple guardrails sequentially
|
||||
blog_task = Task(
|
||||
description="Write a blog post about AI",
|
||||
expected_output="A well-formatted blog post between 100-500 words",
|
||||
agent=blog_agent,
|
||||
guardrails=[
|
||||
validate_word_count, # First: validate length
|
||||
validate_no_profanity, # Second: check content
|
||||
format_output # Third: format the result
|
||||
],
|
||||
guardrail_max_retries=3
|
||||
)
|
||||
```
|
||||
|
||||
In this example, the guardrails execute in order:
|
||||
1. `validate_word_count` checks the word count
|
||||
2. `validate_no_profanity` checks for inappropriate language (using the output from step 1)
|
||||
3. `format_output` formats the final result (using the output from step 2)
|
||||
|
||||
If any guardrail fails, the error is sent back to the agent, and the task is retried up to `guardrail_max_retries` times.
|
||||
|
||||
**Mixing function-based and LLM-based guardrails**:
|
||||
|
||||
You can combine both function-based and string-based guardrails in the same list:
|
||||
|
||||
```python Code
|
||||
from typing import Tuple, Any
|
||||
from crewai import TaskOutput, Task
|
||||
|
||||
def validate_word_count(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
"""Validate word count is within limits."""
|
||||
word_count = len(result.raw.split())
|
||||
if word_count < 100:
|
||||
return (False, f"Content too short: {word_count} words. Need at least 100 words.")
|
||||
if word_count > 500:
|
||||
return (False, f"Content too long: {word_count} words. Maximum is 500 words.")
|
||||
return (True, result.raw)
|
||||
|
||||
# Mix function-based and LLM-based guardrails
|
||||
blog_task = Task(
|
||||
description="Write a blog post about AI",
|
||||
expected_output="A well-formatted blog post between 100-500 words",
|
||||
agent=blog_agent,
|
||||
guardrails=[
|
||||
validate_word_count, # Function-based: precise word count check
|
||||
"The content must be engaging and suitable for a general audience", # LLM-based: subjective quality check
|
||||
"The writing style should be clear, concise, and free of technical jargon" # LLM-based: style validation
|
||||
],
|
||||
guardrail_max_retries=3
|
||||
)
|
||||
```
|
||||
|
||||
This approach combines the precision of programmatic validation with the flexibility of LLM-based assessment for subjective criteria.
|
||||
|
||||
### Guardrail Function Requirements
|
||||
|
||||
1. **Function Signature**:
|
||||
@@ -897,14 +1044,13 @@ except RuntimeError as e:
|
||||
Check out the video below to see how to use structured outputs in CrewAI:
|
||||
|
||||
<iframe
|
||||
width="560"
|
||||
height="315"
|
||||
className="w-full aspect-video rounded-xl"
|
||||
src="https://www.youtube.com/embed/dNpKQk5uxHw"
|
||||
title="YouTube video player"
|
||||
frameborder="0"
|
||||
title="Structured outputs in CrewAI"
|
||||
frameBorder="0"
|
||||
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
|
||||
referrerpolicy="strict-origin-when-cross-origin"
|
||||
allowfullscreen
|
||||
referrerPolicy="strict-origin-when-cross-origin"
|
||||
allowFullScreen
|
||||
></iframe>
|
||||
|
||||
## Conclusion
|
||||
|
||||
@@ -17,7 +17,7 @@ This includes tools from the [CrewAI Toolkit](https://github.com/joaomdmoura/cre
|
||||
enabling everything from simple searches to complex interactions and effective teamwork among agents.
|
||||
|
||||
<Note type="info" title="Enterprise Enhancement: Tools Repository">
|
||||
CrewAI Enterprise provides a comprehensive Tools Repository with pre-built integrations for common business systems and APIs. Deploy agents with enterprise tools in minutes instead of days.
|
||||
CrewAI AOP provides a comprehensive Tools Repository with pre-built integrations for common business systems and APIs. Deploy agents with enterprise tools in minutes instead of days.
|
||||
|
||||
The Enterprise Tools Repository includes:
|
||||
- Pre-built connectors for popular enterprise systems
|
||||
@@ -208,7 +208,7 @@ from crewai.tools import BaseTool
|
||||
class AsyncCustomTool(BaseTool):
|
||||
name: str = "async_custom_tool"
|
||||
description: str = "An asynchronous custom tool"
|
||||
|
||||
|
||||
async def _run(self, query: str = "") -> str:
|
||||
"""Asynchronously run the tool"""
|
||||
# Your async implementation here
|
||||
|
||||
@@ -1,12 +1,16 @@
|
||||
---
|
||||
title: 'Agent Repositories'
|
||||
description: 'Learn how to use Agent Repositories to share and reuse your agents across teams and projects'
|
||||
icon: 'database'
|
||||
icon: 'people-group'
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
Agent Repositories allow enterprise users to store, share, and reuse agent definitions across teams and projects. This feature enables organizations to maintain a centralized library of standardized agents, promoting consistency and reducing duplication of effort.
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
## Benefits of Agent Repositories
|
||||
|
||||
- **Standardization**: Maintain consistent agent definitions across your organization
|
||||
@@ -14,25 +18,21 @@ Agent Repositories allow enterprise users to store, share, and reuse agent defin
|
||||
- **Governance**: Implement organization-wide policies for agent configurations
|
||||
- **Collaboration**: Enable teams to share and build upon each other's work
|
||||
|
||||
## Using Agent Repositories
|
||||
|
||||
### Prerequisites
|
||||
## Creating and Using Agent Repositories
|
||||
|
||||
1. You must have a CrewAI account; try the [free plan](https://app.crewai.com).
|
||||
2. You need to be authenticated using the CrewAI CLI.
|
||||
3. If you have more than one organization, make sure you are switched to the correct organization using the CLI command:
|
||||
2. Create agents with specific roles and goals for your workflows.
|
||||
3. Configure tools and capabilities for each specialized assistant.
|
||||
4. Deploy agents across projects via visual interface or API integration.
|
||||
|
||||
```bash
|
||||
crewai org switch <org_id>
|
||||
```
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
### Creating and Managing Agents in Repositories
|
||||
|
||||
To create and manage agents in repositories, use the Enterprise Dashboard.
|
||||
|
||||
### Loading Agents from Repositories
|
||||
|
||||
You can load agents from repositories in your code using the `from_repository` parameter:
|
||||
You can load agents from repositories in your code using the `from_repository` parameter to run locally:
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
@@ -42,7 +42,6 @@ from crewai import Agent
|
||||
researcher = Agent(
|
||||
from_repository="market-research-agent"
|
||||
)
|
||||
|
||||
```
|
||||
|
||||
### Overriding Repository Settings
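A minimal sketch, assuming that attributes passed to the `Agent` constructor take precedence over the values stored in the repository definition (the overridden fields below are illustrative):

```python
from crewai import Agent

# Load the shared definition, then override selected fields locally.
# Assumption: locally supplied attributes replace the repository-stored values.
researcher = Agent(
    from_repository="market-research-agent",
    goal="Focus the research on the European smart-home market",
    verbose=True,
)
```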
|
||||
|
||||
docs/en/enterprise/features/automations.mdx (new file, 104 lines)
@@ -0,0 +1,104 @@
|
||||
---
|
||||
title: Automations
|
||||
description: "Manage, deploy, and monitor your live crews (automations) in one place."
|
||||
icon: "rocket"
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
Automations is the live operations hub for your deployed crews. Use it to deploy from GitHub or a ZIP file, manage environment variables, re‑deploy when needed, and monitor the status of each automation.
|
||||
|
||||
<Frame>
|
||||

|
||||
|
||||
</Frame>
|
||||
|
||||
## Deployment Methods
|
||||
|
||||
### Deploy from GitHub
|
||||
|
||||
Use this for version‑controlled projects and continuous deployment.
|
||||
|
||||
<Steps>
|
||||
<Step title="Connect GitHub">
|
||||
Click <b>Configure GitHub</b> and authorize access.
|
||||
</Step>
|
||||
<Step title="Select Repository & Branch">
|
||||
Choose the <b>Repository</b> and <b>Branch</b> you want to deploy from.
|
||||
</Step>
|
||||
<Step title="Enable Auto‑deploy (optional)">
|
||||
Turn on <b>Automatically deploy new commits</b> to ship updates on every push.
|
||||
</Step>
|
||||
<Step title="Add Environment Variables">
|
||||
Add secrets individually or use <b>Bulk View</b> for multiple variables.
|
||||
</Step>
|
||||
<Step title="Deploy">
|
||||
Click <b>Deploy</b> to create your live automation.
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
### Deploy from ZIP
|
||||
|
||||
Ship quickly without Git—upload a compressed package of your project.
|
||||
|
||||
<Steps>
|
||||
<Step title="Choose File">
|
||||
Select the ZIP archive from your computer.
|
||||
</Step>
|
||||
<Step title="Add Environment Variables">
|
||||
Provide any required variables or keys.
|
||||
</Step>
|
||||
<Step title="Deploy">
|
||||
Click <b>Deploy</b> to create your live automation.
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
## Automations Dashboard
|
||||
|
||||
The table lists all live automations with key details:
|
||||
|
||||
- **CREW**: Automation name
|
||||
- **STATUS**: Online / Failed / In Progress
|
||||
- **URL**: Endpoint for kickoff/status
|
||||
- **TOKEN**: Automation token
|
||||
- **ACTIONS**: Re‑deploy, delete, and more
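The **URL** and **TOKEN** columns are what external systems use to trigger an automation programmatically. A hedged sketch of a kickoff request using Python and `requests`; the `/kickoff` path, payload shape, and placeholder values are assumptions, so check your automation's API details for the exact contract:

```python
import requests

AUTOMATION_URL = "https://your-automation.crewai.com"  # value from the URL column
AUTOMATION_TOKEN = "your-automation-token"             # value from the TOKEN column

# Assumption: deployed automations expose a /kickoff endpoint that accepts
# crew inputs as JSON and authorizes requests with the automation token.
response = requests.post(
    f"{AUTOMATION_URL}/kickoff",
    headers={"Authorization": f"Bearer {AUTOMATION_TOKEN}"},
    json={"inputs": {"topic": "AI agent frameworks"}},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```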
|
||||
|
||||
Use the top‑right controls to filter and search:
|
||||
|
||||
- Search by name
|
||||
- Filter by <b>Status</b>
|
||||
- Filter by <b>Source</b> (GitHub / Studio / ZIP)
|
||||
|
||||
Once deployed, you can view the automation details and use the **Options** dropdown menu to `chat with this crew`, `Export React Component`, and `Export as MCP`.
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
## Best Practices
|
||||
|
||||
- Prefer GitHub deployments for version control and CI/CD
|
||||
- Use re‑deploy to roll forward after code or config updates, or enable auto-deploy to ship updates on every push
|
||||
|
||||
## Related
|
||||
|
||||
<CardGroup cols={3}>
|
||||
<Card title="Deploy a Crew" href="/en/enterprise/guides/deploy-crew" icon="rocket">
|
||||
Deploy a Crew from GitHub or ZIP file.
|
||||
</Card>
|
||||
<Card title="Automation Triggers" href="/en/enterprise/guides/automation-triggers" icon="trigger">
|
||||
Trigger automations via webhooks or API.
|
||||
</Card>
|
||||
<Card title="Webhook Automation" href="/en/enterprise/guides/webhook-automation" icon="webhook">
|
||||
Stream real-time events and updates to your systems.
|
||||
</Card>
|
||||
</CardGroup>
|
||||
docs/en/enterprise/features/crew-studio.mdx (new file, 88 lines)
@@ -0,0 +1,88 @@
|
||||
---
|
||||
title: Crew Studio
|
||||
description: "Build new automations with AI assistance, a visual editor, and integrated testing."
|
||||
icon: "pencil"
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
Crew Studio is an interactive, AI‑assisted workspace for creating new automations from scratch using natural language and a visual workflow editor.
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
## Prompt‑based Creation
|
||||
|
||||
- Describe the automation you want; the AI generates agents, tasks, and tools.
|
||||
- Use voice input via the microphone icon if preferred.
|
||||
- Start from built‑in prompts for common use cases.
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
## Visual Editor
|
||||
|
||||
The canvas represents the workflow as nodes and edges, with three supporting panels that let you configure it without writing code, a.k.a. "**vibe coding AI Agents**".
|
||||
|
||||
You can drag and drop agents, tasks, and tools onto the canvas, or use the chat section to build them. Both approaches share state and can be used interchangeably.
|
||||
|
||||
- **AI Thoughts (left)**: streaming reasoning as the workflow is designed
|
||||
- **Canvas (center)**: agents and tasks as connected nodes
|
||||
- **Resources (right)**: drag‑and‑drop components (agents, tasks, tools)
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
## Execution & Debugging
|
||||
|
||||
Switch to the <b>Execution</b> view to run and observe the workflow:
|
||||
|
||||
- Event timeline
|
||||
- Detailed logs (Details, Messages, Raw Data)
|
||||
- Local test runs before publishing
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
## Publish & Export
|
||||
|
||||
- <b>Publish</b> to deploy a live automation
|
||||
- <b>Download</b> source as a ZIP for local development or customization
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
Once published, you can view the automation details and use the **Options** dropdown menu to `chat with this crew`, `Export React Component`, and `Export as MCP`.
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
## Best Practices
|
||||
|
||||
- Iterate quickly in Studio; publish only when stable
|
||||
- Keep tools constrained to minimum permissions needed
|
||||
- Use Traces to validate behavior and performance
|
||||
|
||||
## Related
|
||||
|
||||
<CardGroup cols={4}>
|
||||
<Card title="Enable Crew Studio" href="/en/enterprise/guides/enable-crew-studio" icon="palette">
|
||||
Enable Crew Studio.
|
||||
</Card>
|
||||
<Card title="Build a Crew" href="/en/enterprise/guides/build-crew" icon="paintbrush">
|
||||
Build a Crew.
|
||||
</Card>
|
||||
<Card title="Deploy a Crew" href="/en/enterprise/guides/deploy-crew" icon="rocket">
|
||||
Deploy a Crew from GitHub or ZIP file.
|
||||
</Card>
|
||||
<Card title="Export a React Component" href="/en/enterprise/guides/react-component-export" icon="download">
|
||||
Export a React Component.
|
||||
</Card>
|
||||
</CardGroup>
|
||||
@@ -1,186 +0,0 @@
|
||||
---
|
||||
title: Integrations
|
||||
description: "Connected applications for your agents to take actions."
|
||||
icon: "plug"
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
Enable your agents to authenticate with any OAuth enabled provider and take actions. From Salesforce and HubSpot to Google and GitHub, we've got you covered with 16+ integrated services.
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
## Supported Integrations
|
||||
|
||||
### **Communication & Collaboration**
|
||||
- **Gmail** - Manage emails and drafts
|
||||
- **Slack** - Workspace notifications and alerts
|
||||
- **Microsoft** - Office 365 and Teams integration
|
||||
|
||||
### **Project Management**
|
||||
- **Jira** - Issue tracking and project management
|
||||
- **ClickUp** - Task and productivity management
|
||||
- **Asana** - Team task and project coordination
|
||||
- **Notion** - Page and database management
|
||||
- **Linear** - Software project and bug tracking
|
||||
- **GitHub** - Repository and issue management
|
||||
|
||||
### **Customer Relationship Management**
|
||||
- **Salesforce** - CRM account and opportunity management
|
||||
- **HubSpot** - Sales pipeline and contact management
|
||||
- **Zendesk** - Customer support ticket management
|
||||
|
||||
### **Business & Finance**
|
||||
- **Stripe** - Payment processing and customer management
|
||||
- **Shopify** - E-commerce store and product management
|
||||
|
||||
### **Productivity & Storage**
|
||||
- **Google Sheets** - Spreadsheet data synchronization
|
||||
- **Google Calendar** - Event and schedule management
|
||||
- **Box** - File storage and document management
|
||||
|
||||
and more to come!
|
||||
|
||||
## Prerequisites
|
||||
|
||||
Before using Authentication Integrations, ensure you have:
|
||||
|
||||
- A [CrewAI Enterprise](https://app.crewai.com) account. You can get started with a free trial.
|
||||
|
||||
|
||||
## Setting Up Integrations
|
||||
|
||||
### 1. Connect Your Account
|
||||
|
||||
1. Navigate to [CrewAI Enterprise](https://app.crewai.com)
|
||||
2. Go to **Integrations** tab - https://app.crewai.com/crewai_plus/connectors
|
||||
3. Click **Connect** on your desired service from the Authentication Integrations section
|
||||
4. Complete the OAuth authentication flow
|
||||
5. Grant necessary permissions for your use case
|
||||
6. All set! Get your Enterprise Token from your [CrewAI Enterprise](https://app.crewai.com) in **Integration** tab
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
### 2. Install Integration Tools
|
||||
|
||||
All you need is the latest version of `crewai-tools` package.
|
||||
|
||||
```bash
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
## Usage Examples
|
||||
|
||||
### Basic Usage
|
||||
<Tip>
|
||||
All the services you are authenticated into will be available as tools. So all you need to do is add the `CrewaiEnterpriseTools` to your agent and you are good to go.
|
||||
</Tip>
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import CrewaiEnterpriseTools
|
||||
|
||||
# Get enterprise tools (Gmail tool will be included)
|
||||
enterprise_tools = CrewaiEnterpriseTools(
|
||||
enterprise_token="your_enterprise_token"
|
||||
)
|
||||
# print the tools
|
||||
print(enterprise_tools)
|
||||
|
||||
# Create an agent with Gmail capabilities
|
||||
email_agent = Agent(
|
||||
role="Email Manager",
|
||||
goal="Manage and organize email communications",
|
||||
backstory="An AI assistant specialized in email management and communication.",
|
||||
tools=enterprise_tools
|
||||
)
|
||||
|
||||
# Task to send an email
|
||||
email_task = Task(
|
||||
description="Draft and send a follow-up email to john@example.com about the project update",
|
||||
agent=email_agent,
|
||||
expected_output="Confirmation that email was sent successfully"
|
||||
)
|
||||
|
||||
# Run the task
|
||||
crew = Crew(
|
||||
agents=[email_agent],
|
||||
tasks=[email_task]
|
||||
)
|
||||
|
||||
# Run the crew
|
||||
crew.kickoff()
|
||||
```
|
||||
|
||||
### Filtering Tools
|
||||
|
||||
```python
|
||||
from crewai_tools import CrewaiEnterpriseTools
|
||||
|
||||
enterprise_tools = CrewaiEnterpriseTools(
|
||||
actions_list=["gmail_find_email"] # only gmail_find_email tool will be available
|
||||
)
|
||||
gmail_tool = enterprise_tools["gmail_find_email"]
|
||||
|
||||
gmail_agent = Agent(
|
||||
role="Gmail Manager",
|
||||
goal="Manage gmail communications and notifications",
|
||||
backstory="An AI assistant that helps coordinate gmail communications.",
|
||||
tools=[gmail_tool]
|
||||
)
|
||||
|
||||
notification_task = Task(
|
||||
description="Find the email from john@example.com",
|
||||
agent=gmail_agent,
|
||||
expected_output="Email found from john@example.com"
|
||||
)
|
||||
|
||||
# Run the task
|
||||
crew = Crew(
|
||||
agents=[gmail_agent],
|
||||
tasks=[notification_task]
|
||||
)
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
### Security
|
||||
- **Principle of Least Privilege**: Only grant the minimum permissions required for your agents' tasks
|
||||
- **Regular Audits**: Periodically review connected integrations and their permissions
|
||||
- **Secure Credentials**: Never hardcode credentials; use CrewAI's secure authentication flow
|
||||
|
||||
|
||||
### Filtering Tools
|
||||
On a deployed crew, you can specify which actions are available for each integration from the settings page of the service you connected to.
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
|
||||
### Scoped Deployments for multi user organizations
|
||||
You can deploy your crew and scope each integration to a specific user. For example, a crew that connects to Google can use a specific user's Gmail account.
|
||||
|
||||
<Tip>
|
||||
This is useful for multi user organizations where you want to scope the integration to a specific user.
|
||||
</Tip>
|
||||
|
||||
|
||||
Use the `user_bearer_token` to scope the integration to a specific user so that when the crew is kicked off, it uses that user's bearer token to authenticate with the integration. If the user is not logged in, the crew will not use any connected integrations. Use the default bearer token to authenticate with the integrations that are deployed with the crew.
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
|
||||
|
||||
### Getting Help
|
||||
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with integration setup or troubleshooting.
|
||||
</Card>
|
||||
docs/en/enterprise/features/marketplace.mdx (new file, 46 lines)
@@ -0,0 +1,46 @@
|
||||
---
|
||||
title: Marketplace
|
||||
description: "Discover, install, and govern reusable assets for your enterprise crews."
|
||||
icon: "store"
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
The Marketplace provides a curated surface for discovering integrations, internal tools, and reusable assets that accelerate crew development.
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
## Discoverability
|
||||
|
||||
- Browse by category and capability
|
||||
- Search for assets by name or keyword
|
||||
|
||||
## Install & Enable
|
||||
|
||||
- One‑click install for approved assets
|
||||
- Enable or disable per crew as needed
|
||||
- Configure required environment variables and scopes
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
You can also download the templates directly from the marketplace by clicking on the `Download` button so
|
||||
you can use them locally or refine them to your needs.
|
||||
|
||||
## Related
|
||||
|
||||
<CardGroup cols={3}>
|
||||
<Card title="Tools & Integrations" href="/en/enterprise/features/tools-and-integrations" icon="wrench">
|
||||
Connect external apps and manage internal tools your agents can use.
|
||||
</Card>
|
||||
<Card title="Tool Repository" href="/en/enterprise/guides/tool-repository#tool-repository" icon="toolbox">
|
||||
Publish and install tools to enhance your crews' capabilities.
|
||||
</Card>
|
||||
<Card title="Agents Repository" href="/en/enterprise/features/agent-repositories" icon="people-group">
|
||||
Store, share, and reuse agent definitions across teams and projects.
|
||||
</Card>
|
||||
</CardGroup>
|
||||
@@ -7,16 +7,16 @@ mode: "wide"
|
||||
|
||||
## Overview
|
||||
|
||||
RBAC in CrewAI Enterprise enables secure, scalable access management through a combination of organization‑level roles and automation‑level visibility controls.
|
||||
RBAC in CrewAI AOP enables secure, scalable access management through a combination of organization‑level roles and automation‑level visibility controls.
|
||||
|
||||
<Frame>
|
||||
<img src="/images/enterprise/users_and_roles.png" alt="RBAC overview in CrewAI Enterprise" />
|
||||
|
||||
<img src="/images/enterprise/users_and_roles.png" alt="RBAC overview in CrewAI AOP" />
|
||||
|
||||
</Frame>
|
||||
|
||||
## Users and Roles
|
||||
|
||||
Each member in your CrewAI workspace is assigned a role, which determines their access across various features.
|
||||
Each member in your CrewAI workspace is assigned a role, which determines their access across various features.
|
||||
|
||||
You can:
|
||||
|
||||
@@ -28,7 +28,7 @@ You can configure users and roles in Settings → Roles.
|
||||
|
||||
<Steps>
|
||||
<Step title="Open Roles settings">
|
||||
Go to <b>Settings → Roles</b> in CrewAI Enterprise.
|
||||
Go to <b>Settings → Roles</b> in CrewAI AOP.
|
||||
</Step>
|
||||
<Step title="Choose a role type">
|
||||
Use a predefined role (<b>Owner</b>, <b>Member</b>) or click <b>Create role</b> to define a custom one.
|
||||
@@ -93,12 +93,10 @@ The organization owner always has access. In private mode, only whitelisted user
|
||||
</Tip>
|
||||
|
||||
<Frame>
|
||||
<img src="/images/enterprise/visibility.png" alt="Automation Visibility settings in CrewAI Enterprise" />
|
||||
|
||||
<img src="/images/enterprise/visibility.png" alt="Automation Visibility settings in CrewAI AOP" />
|
||||
|
||||
</Frame>
|
||||
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with RBAC questions.
|
||||
</Card>
|
||||
|
||||
|
||||
|
||||
docs/en/enterprise/features/tools-and-integrations.mdx (new file, 250 lines)
@@ -0,0 +1,250 @@
|
||||
---
|
||||
title: Tools & Integrations
|
||||
description: "Connect external apps and manage internal tools your agents can use."
|
||||
icon: "wrench"
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
Tools & Integrations is the central hub for connecting third‑party apps and managing internal tools that your agents can use at runtime.
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
## Explore
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Integrations" icon="plug">
|
||||
|
||||
## Agent Apps (Integrations)
|
||||
|
||||
Connect enterprise‑grade applications (e.g., Gmail, Google Drive, HubSpot, Slack) via OAuth to enable agent actions.
|
||||
|
||||
<Steps>
|
||||
<Step title="Connect">
|
||||
Click <b>Connect</b> on an app and complete OAuth.
|
||||
</Step>
|
||||
<Step title="Configure">
|
||||
Optionally adjust scopes, triggers, and action availability.
|
||||
</Step>
|
||||
<Step title="Use in Agents">
|
||||
Connected services become available as tools for your agents.
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
### Connect your Account
|
||||
|
||||
1. Go to <Link href="https://app.crewai.com/crewai_plus/connectors">Integrations</Link>
|
||||
2. Click <b>Connect</b> on the desired service
|
||||
3. Complete the OAuth flow and grant scopes
|
||||
4. Copy your Enterprise Token from <Link href="https://app.crewai.com/crewai_plus/settings/integrations">Integration Settings</Link>
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
### Install Integration Tools
|
||||
|
||||
To use the integrations locally, you need to install the latest `crewai-tools` package.
|
||||
|
||||
```bash
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
### Usage Example
|
||||
|
||||
<Tip>
|
||||
Use the new streamlined approach to integrate enterprise apps. Simply specify the app and its actions directly in the Agent configuration.
|
||||
</Tip>
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
|
||||
# Create an agent with Gmail capabilities
|
||||
email_agent = Agent(
|
||||
role="Email Manager",
|
||||
goal="Manage and organize email communications",
|
||||
backstory="An AI assistant specialized in email management and communication.",
|
||||
apps=['gmail', 'gmail/send_email'] # Using canonical name 'gmail'
|
||||
)
|
||||
|
||||
# Task to send an email
|
||||
email_task = Task(
|
||||
description="Draft and send a follow-up email to john@example.com about the project update",
|
||||
agent=email_agent,
|
||||
expected_output="Confirmation that email was sent successfully"
|
||||
)
|
||||
|
||||
# Run the task
|
||||
crew = Crew(
|
||||
agents=[email_agent],
|
||||
tasks=[email_task]
|
||||
)
|
||||
|
||||
# Run the crew
|
||||
crew.kickoff()
|
||||
```
|
||||
|
||||
### Filtering Tools
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
|
||||
# Create agent with specific Gmail actions only
|
||||
gmail_agent = Agent(
|
||||
role="Gmail Manager",
|
||||
goal="Manage gmail communications and notifications",
|
||||
backstory="An AI assistant that helps coordinate gmail communications.",
|
||||
apps=['gmail/fetch_emails'] # Using canonical name with specific action
|
||||
)
|
||||
|
||||
notification_task = Task(
|
||||
description="Find the email from john@example.com",
|
||||
agent=gmail_agent,
|
||||
expected_output="Email found from john@example.com"
|
||||
)
|
||||
|
||||
crew = Crew(
|
||||
agents=[gmail_agent],
|
||||
tasks=[notification_task]
|
||||
)
|
||||
```
|
||||
|
||||
On a deployed crew, you can specify which actions are available for each integration from the service settings page.
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
### Scoped Deployments (multi‑user orgs)
|
||||
|
||||
You can scope each integration to a specific user. For example, a crew that connects to Google can use a specific user’s Gmail account.
|
||||
|
||||
<Tip>
|
||||
Useful when different teams/users must keep data access separated.
|
||||
</Tip>
|
||||
|
||||
Use the `user_bearer_token` to scope authentication to the requesting user. If the user isn’t logged in, the crew won’t use connected integrations. Otherwise it falls back to the default bearer token configured for the deployment.
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
<div id="catalog"></div>
|
||||
### Catalog
|
||||
|
||||
#### Communication & Collaboration
|
||||
- Gmail — Manage emails and drafts
|
||||
- Slack — Workspace notifications and alerts
|
||||
- Microsoft — Office 365 and Teams integration
|
||||
|
||||
#### Project Management
|
||||
- Jira — Issue tracking and project management
|
||||
- ClickUp — Task and productivity management
|
||||
- Asana — Team task and project coordination
|
||||
- Notion — Page and database management
|
||||
- Linear — Software project and bug tracking
|
||||
- GitHub — Repository and issue management
|
||||
|
||||
#### Customer Relationship Management
|
||||
- Salesforce — CRM account and opportunity management
|
||||
- HubSpot — Sales pipeline and contact management
|
||||
- Zendesk — Customer support ticket management
|
||||
|
||||
#### Business & Finance
|
||||
- Stripe — Payment processing and customer management
|
||||
- Shopify — E‑commerce store and product management
|
||||
|
||||
#### Productivity & Storage
|
||||
- Google Sheets — Spreadsheet data synchronization
|
||||
- Google Calendar — Event and schedule management
|
||||
- Box — File storage and document management
|
||||
|
||||
…and more to come!
|
||||
|
||||
</Tab>
|
||||
<Tab title="Internal Tools" icon="toolbox">
|
||||
|
||||
## Internal Tools
|
||||
|
||||
Create custom tools locally, publish them to the CrewAI AOP Tool Repository, and use them in your agents.
|
||||
|
||||
<Tip>
|
||||
Before running the commands below, make sure you log in to your CrewAI AOP account by running this command:
|
||||
```bash
|
||||
crewai login
|
||||
```
|
||||
</Tip>
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
<Steps>
|
||||
<Step title="Create">
|
||||
Create a new tool locally.
|
||||
```bash
|
||||
crewai tool create your-tool
|
||||
```
|
||||
</Step>
|
||||
<Step title="Publish">
|
||||
Publish the tool to the CrewAI AOP Tool Repository.
|
||||
```bash
|
||||
crewai tool publish
|
||||
```
|
||||
</Step>
|
||||
<Step title="Install">
|
||||
Install the tool from the CrewAI AOP Tool Repository.
|
||||
```bash
|
||||
crewai tool install your-tool
|
||||
```
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
Manage:
|
||||
|
||||
- Name and description
|
||||
- Visibility (Private / Public)
|
||||
- Required environment variables
|
||||
- Version history and downloads
|
||||
- Team and role access
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## Related
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card title="Tool Repository" href="/en/enterprise/guides/tool-repository#tool-repository" icon="toolbox">
|
||||
Create, publish, and version custom tools for your organization.
|
||||
</Card>
|
||||
<Card title="Webhook Automation" href="/en/enterprise/guides/webhook-automation" icon="bolt">
|
||||
Automate workflows and integrate with external platforms and services.
|
||||
</Card>
|
||||
</CardGroup>
|
||||