Compare commits

..

4 Commits

theCyberTech / 9fbc602b3e / 2025-09-26 15:15:43 +08:00
Revert "Update CodeQL workflow to include custom config file"
This reverts commit 9c54bfce1b.

Greyson LaLonde / aa15b38d41 / 2025-09-23 23:57:04 -04:00
ci: add canary workflow trigger for branch testing

theCyberTech / 9c54bfce1b / 2025-09-24 00:21:31 +08:00
Update CodeQL workflow to include custom config file
This commit adds a reference to a custom CodeQL configuration file (.github/codeql-config.yml) to the GitHub Actions workflow for CodeQL analysis, enabling more tailored queries and analysis settings during code scanning. A sketch of such a reference follows this list.

theCyberTech / 2c80ac6283 / 2025-09-22 15:23:26 +08:00
Add Canary Crew for Github Action
Initial commit for the Canary Crew project using crewAI. Includes the GitHub Actions workflow, project configuration, agent and task YAML files, the main execution and utility scripts, a custom tool example, a user knowledge file, and documentation. Enables multi-agent AI research and reporting with markdown output.
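For context on the reverted CodeQL change above, here is a minimal sketch of how a config-file reference typically sits on the CodeQL init step. The step name and action version are assumptions; the config path matches the one visible in the workflow diff further down.

```yaml
# Sketch only: the CodeQL init step pointing at a repository-local config file.
- name: Initialize CodeQL
  uses: github/codeql-action/init@v3   # assumed action version
  with:
    languages: ${{ matrix.language }}
    build-mode: ${{ matrix.build-mode }}
    # Path filters and query settings live in this file.
    config-file: ./.github/codeql/codeql-config.yml
```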
2114 changed files with 102026 additions and 259519 deletions

[Binary files changed; image previews not shown. The added images range from roughly 17 KiB to 55 KiB each.]

161
.env.test
View File

@@ -1,161 +0,0 @@
# =============================================================================
# Test Environment Variables
# =============================================================================
# This file contains all environment variables needed to run tests locally
# in a way that mimics the GitHub Actions CI environment.
# =============================================================================
# -----------------------------------------------------------------------------
# LLM Provider API Keys
# -----------------------------------------------------------------------------
OPENAI_API_KEY=fake-api-key
ANTHROPIC_API_KEY=fake-anthropic-key
GEMINI_API_KEY=fake-gemini-key
AZURE_API_KEY=fake-azure-key
OPENROUTER_API_KEY=fake-openrouter-key
# -----------------------------------------------------------------------------
# AWS Credentials
# -----------------------------------------------------------------------------
AWS_ACCESS_KEY_ID=fake-aws-access-key
AWS_SECRET_ACCESS_KEY=fake-aws-secret-key
AWS_DEFAULT_REGION=us-east-1
AWS_REGION_NAME=us-east-1
# -----------------------------------------------------------------------------
# Azure OpenAI Configuration
# -----------------------------------------------------------------------------
AZURE_ENDPOINT=https://fake-azure-endpoint.openai.azure.com
AZURE_OPENAI_ENDPOINT=https://fake-azure-endpoint.openai.azure.com
AZURE_OPENAI_API_KEY=fake-azure-openai-key
AZURE_API_VERSION=2024-02-15-preview
OPENAI_API_VERSION=2024-02-15-preview
# -----------------------------------------------------------------------------
# Google Cloud Configuration
# -----------------------------------------------------------------------------
#GOOGLE_CLOUD_PROJECT=fake-gcp-project
#GOOGLE_CLOUD_LOCATION=us-central1
# -----------------------------------------------------------------------------
# OpenAI Configuration
# -----------------------------------------------------------------------------
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_API_BASE=https://api.openai.com/v1
# -----------------------------------------------------------------------------
# Search & Scraping Tool API Keys
# -----------------------------------------------------------------------------
SERPER_API_KEY=fake-serper-key
EXA_API_KEY=fake-exa-key
BRAVE_API_KEY=fake-brave-key
FIRECRAWL_API_KEY=fake-firecrawl-key
TAVILY_API_KEY=fake-tavily-key
SERPAPI_API_KEY=fake-serpapi-key
SERPLY_API_KEY=fake-serply-key
LINKUP_API_KEY=fake-linkup-key
PARALLEL_API_KEY=fake-parallel-key
# -----------------------------------------------------------------------------
# Exa Configuration
# -----------------------------------------------------------------------------
EXA_BASE_URL=https://api.exa.ai
# -----------------------------------------------------------------------------
# Web Scraping & Automation
# -----------------------------------------------------------------------------
BRIGHT_DATA_API_KEY=fake-brightdata-key
BRIGHT_DATA_ZONE=fake-zone
BRIGHTDATA_API_URL=https://api.brightdata.com
BRIGHTDATA_DEFAULT_TIMEOUT=600
BRIGHTDATA_DEFAULT_POLLING_INTERVAL=1
OXYLABS_USERNAME=fake-oxylabs-user
OXYLABS_PASSWORD=fake-oxylabs-pass
SCRAPFLY_API_KEY=fake-scrapfly-key
SCRAPEGRAPH_API_KEY=fake-scrapegraph-key
BROWSERBASE_API_KEY=fake-browserbase-key
BROWSERBASE_PROJECT_ID=fake-browserbase-project
HYPERBROWSER_API_KEY=fake-hyperbrowser-key
MULTION_API_KEY=fake-multion-key
APIFY_API_TOKEN=fake-apify-token
# -----------------------------------------------------------------------------
# Database & Vector Store Credentials
# -----------------------------------------------------------------------------
SINGLESTOREDB_URL=mysql://fake:fake@localhost:3306/fake
SINGLESTOREDB_HOST=localhost
SINGLESTOREDB_PORT=3306
SINGLESTOREDB_USER=fake-user
SINGLESTOREDB_PASSWORD=fake-password
SINGLESTOREDB_DATABASE=fake-database
SINGLESTOREDB_CONNECT_TIMEOUT=30
SNOWFLAKE_USER=fake-snowflake-user
SNOWFLAKE_PASSWORD=fake-snowflake-password
SNOWFLAKE_ACCOUNT=fake-snowflake-account
SNOWFLAKE_WAREHOUSE=fake-snowflake-warehouse
SNOWFLAKE_DATABASE=fake-snowflake-database
SNOWFLAKE_SCHEMA=fake-snowflake-schema
WEAVIATE_URL=http://localhost:8080
WEAVIATE_API_KEY=fake-weaviate-key
EMBEDCHAIN_DB_URI=sqlite:///test.db
# Databricks Credentials
DATABRICKS_HOST=https://fake-databricks.cloud.databricks.com
DATABRICKS_TOKEN=fake-databricks-token
DATABRICKS_CONFIG_PROFILE=fake-profile
# MongoDB Credentials
MONGODB_URI=mongodb://fake:fake@localhost:27017/fake
# -----------------------------------------------------------------------------
# CrewAI Platform & Enterprise
# -----------------------------------------------------------------------------
# setting CREWAI_PLATFORM_INTEGRATION_TOKEN causes these test to fail:
#=========================== short test summary info ============================
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_platform_context_manager_basic_usage - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_context_var_isolation_between_tests - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_multiple_sequential_context_managers - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#CREWAI_PLATFORM_INTEGRATION_TOKEN=fake-platform-token
CREWAI_PERSONAL_ACCESS_TOKEN=fake-personal-token
CREWAI_PLUS_URL=https://fake.crewai.com
# -----------------------------------------------------------------------------
# Other Service API Keys
# -----------------------------------------------------------------------------
ZAPIER_API_KEY=fake-zapier-key
PATRONUS_API_KEY=fake-patronus-key
MINDS_API_KEY=fake-minds-key
HF_TOKEN=fake-hf-token
# -----------------------------------------------------------------------------
# Feature Flags/Testing Modes
# -----------------------------------------------------------------------------
CREWAI_DISABLE_TELEMETRY=true
OTEL_SDK_DISABLED=true
CREWAI_TESTING=true
CREWAI_TRACING_ENABLED=false
# -----------------------------------------------------------------------------
# Testing/CI Configuration
# -----------------------------------------------------------------------------
# VCR recording mode: "none" (default), "new_episodes", "all", "once"
PYTEST_VCR_RECORD_MODE=none
# Set to "true" by GitHub when running in GitHub Actions
# GITHUB_ACTIONS=false
# -----------------------------------------------------------------------------
# Python Configuration
# -----------------------------------------------------------------------------
PYTHONUNBUFFERED=1

View File

@@ -1,28 +0,0 @@
name: "CodeQL Config"
paths-ignore:
# Ignore template files - these are boilerplate code that shouldn't be analyzed
- "lib/crewai/src/crewai/cli/templates/**"
# Ignore test cassettes - these are test fixtures/recordings
- "lib/crewai/tests/cassettes/**"
- "lib/crewai-tools/tests/cassettes/**"
# Ignore cache and build artifacts
- ".cache/**"
# Ignore documentation build artifacts
- "docs/.cache/**"
# Ignore experimental code
- "lib/crewai/src/crewai/experimental/a2a/**"
paths:
# Include all Python source code from workspace packages
- "lib/crewai/src/**"
- "lib/crewai-tools/src/**"
- "lib/devtools/src/**"
# Include tests (but exclude cassettes via paths-ignore)
- "lib/crewai/tests/**"
- "lib/crewai-tools/tests/**"
- "lib/devtools/tests/**"
# Configure specific queries or packs if needed
# queries:
# - uses: security-and-quality

View File

@@ -1,11 +0,0 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
version: 2
updates:
- package-ecosystem: uv # See documentation for possible values
directory: "/" # Location of package manifests
schedule:
interval: "weekly"

63
.github/security.md vendored
View File

@@ -1,50 +1,27 @@
## CrewAI Security Policy
## CrewAI Security Vulnerability Reporting Policy
We are committed to protecting the confidentiality, integrity, and availability of the CrewAI ecosystem. This policy explains how to report potential vulnerabilities and what you can expect from us when you do.
CrewAI prioritizes the security of our software products, services, and GitHub repositories. To promptly address vulnerabilities, follow these steps for reporting security issues:
### Scope
### Reporting Process
Do **not** report vulnerabilities via public GitHub issues.
We welcome reports for vulnerabilities that could impact:
Email all vulnerability reports directly to:
**security@crewai.com**
- CrewAI-maintained source code and repositories
- CrewAI-operated infrastructure and services
- Official CrewAI releases, packages, and distributions
### Required Information
To help us quickly validate and remediate the issue, your report must include:
Issues affecting clearly unaffiliated third-party services or user-generated content are out of scope, unless you can demonstrate a direct impact on CrewAI systems or customers.
- **Vulnerability Type:** Clearly state the vulnerability type (e.g., SQL injection, XSS, privilege escalation).
- **Affected Source Code:** Provide full file paths and direct URLs (branch, tag, or commit).
- **Reproduction Steps:** Include detailed, step-by-step instructions. Screenshots are recommended.
- **Special Configuration:** Document any special settings or configurations required to reproduce.
- **Proof-of-Concept (PoC):** Provide exploit or PoC code (if available).
- **Impact Assessment:** Clearly explain the severity and potential exploitation scenarios.
### How to Report
### Our Response
- We will acknowledge receipt of your report promptly via your provided email.
- Confirmed vulnerabilities will receive priority remediation based on severity.
- Patches will be released as swiftly as possible following verification.
- **Please do not** disclose vulnerabilities via public GitHub issues, pull requests, or social media.
- Email detailed reports to **security@crewai.com** with the subject line `Security Report`.
- If you need to share large files or sensitive artifacts, mention it in your email and we will coordinate a secure transfer method.
### What to Include
Providing comprehensive information enables us to validate the issue quickly:
- **Vulnerability overview** — a concise description and classification (e.g., RCE, privilege escalation)
- **Affected components** — repository, branch, tag, or deployed service along with relevant file paths or endpoints
- **Reproduction steps** — detailed, step-by-step instructions; include logs, screenshots, or screen recordings when helpful
- **Proof-of-concept** — exploit details or code that demonstrates the impact (if available)
- **Impact analysis** — severity assessment, potential exploitation scenarios, and any prerequisites or special configurations
### Our Commitment
- **Acknowledgement:** We aim to acknowledge your report within two business days.
- **Communication:** We will keep you informed about triage results, remediation progress, and planned release timelines.
- **Resolution:** Confirmed vulnerabilities will be prioritized based on severity and fixed as quickly as possible.
- **Recognition:** We currently do not run a bug bounty program; any rewards or recognition are issued at CrewAI's discretion.
### Coordinated Disclosure
We ask that you allow us a reasonable window to investigate and remediate confirmed issues before any public disclosure. We will coordinate publication timelines with you whenever possible.
### Safe Harbor
We will not pursue or support legal action against individuals who, in good faith:
- Follow this policy and refrain from violating any applicable laws
- Avoid privacy violations, data destruction, or service disruption
- Limit testing to systems in scope and respect rate limits and terms of service
If you are unsure whether your testing is covered, please contact us at **security@crewai.com** before proceeding.
### Reward Notice
Currently, we do not offer a bug bounty program. Rewards, if issued, are discretionary.

View File

@@ -7,8 +7,6 @@ on:
paths:
- "uv.lock"
- "pyproject.toml"
schedule:
- cron: "0 0 */5 * *" # Run every 5 days at midnight UTC to prevent cache expiration
workflow_dispatch:
permissions:

50
.github/workflows/canary.yml vendored Normal file
View File

@@ -0,0 +1,50 @@
name: Canary Crew Check
on:
push:
branches:
- main
- Canary-Crew-Github-Action
pull_request:
branches:
- main
permissions:
contents: read
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
jobs:
canary-run:
name: Run Canary Crew
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: "3.11"
- name: Install canary dependencies
working-directory: canary
run: uv sync
- name: Run canary crew
working-directory: canary
run: uv run crewai run
- name: Upload canary report
if: always()
uses: actions/upload-artifact@v4
with:
name: canary-report
path: canary/report.md
if-no-files-found: ignore

View File

@@ -15,11 +15,11 @@ on:
push:
branches: [ "main" ]
paths-ignore:
- "lib/crewai/src/crewai/cli/templates/**"
- "src/crewai/cli/templates/**"
pull_request:
branches: [ "main" ]
paths-ignore:
- "lib/crewai/src/crewai/cli/templates/**"
- "src/crewai/cli/templates/**"
jobs:
analyze:
@@ -73,7 +73,6 @@ jobs:
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
config-file: ./.github/codeql/codeql-config.yml
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.

View File

@@ -1,35 +0,0 @@
name: Check Documentation Broken Links
on:
pull_request:
paths:
- "docs/**"
- "docs.json"
push:
branches:
- main
paths:
- "docs/**"
- "docs.json"
workflow_dispatch:
jobs:
check-links:
name: Check broken links
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Node
uses: actions/setup-node@v4
with:
node-version: "latest"
- name: Install Mintlify CLI
run: npm i -g mintlify
- name: Run broken link checker
run: |
# Auto-answer the prompt with yes command
yes "" | mintlify broken-links || test $? -eq 141
working-directory: ./docs

View File

@@ -52,11 +52,10 @@ jobs:
- name: Run Ruff on Changed Files
if: ${{ steps.changed-files.outputs.files != '' }}
run: |
echo "${{ steps.changed-files.outputs.files }}" \
| tr ' ' '\n' \
| grep -v 'src/crewai/cli/templates/' \
| grep -v '/tests/' \
| xargs -I{} uv run ruff check "{}"
echo "${{ steps.changed-files.outputs.files }}" \
| tr ' ' '\n' \
| grep -v 'src/crewai/cli/templates/' \
| xargs -I{} uv run ruff check "{}"
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'

View File

@@ -1,100 +0,0 @@
name: Publish to PyPI
on:
repository_dispatch:
types: [deployment-tests-passed]
workflow_dispatch:
inputs:
release_tag:
description: 'Release tag to publish'
required: false
type: string
jobs:
build:
name: Build packages
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- name: Determine release tag
id: release
run: |
# Priority: workflow_dispatch input > repository_dispatch payload > default branch
if [ -n "${{ inputs.release_tag }}" ]; then
echo "tag=${{ inputs.release_tag }}" >> $GITHUB_OUTPUT
elif [ -n "${{ github.event.client_payload.release_tag }}" ]; then
echo "tag=${{ github.event.client_payload.release_tag }}" >> $GITHUB_OUTPUT
else
echo "tag=" >> $GITHUB_OUTPUT
fi
- uses: actions/checkout@v4
with:
ref: ${{ steps.release.outputs.tag || github.ref }}
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install uv
uses: astral-sh/setup-uv@v4
- name: Build packages
run: |
uv build --all-packages
rm dist/.gitignore
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: dist
path: dist/
publish:
name: Publish to PyPI
needs: build
runs-on: ubuntu-latest
environment:
name: pypi
url: https://pypi.org/p/crewai
permissions:
id-token: write
contents: read
steps:
- uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: "3.12"
enable-cache: false
- name: Download artifacts
uses: actions/download-artifact@v4
with:
name: dist
path: dist
- name: Publish to PyPI
env:
UV_PUBLISH_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
run: |
failed=0
for package in dist/*; do
if [[ "$package" == *"crewai_devtools"* ]]; then
echo "Skipping private package: $package"
continue
fi
echo "Publishing $package"
if ! uv publish "$package"; then
echo "Failed to publish $package"
failed=1
fi
done
if [ $failed -eq 1 ]; then
echo "Some packages failed to publish"
exit 1
fi

View File

@@ -5,6 +5,10 @@ on: [pull_request]
permissions:
contents: read
env:
OPENAI_API_KEY: fake-api-key
PYTHONUNBUFFERED: 1
jobs:
tests:
name: tests (${{ matrix.python-version }})
@@ -52,13 +56,13 @@ jobs:
- name: Run tests (group ${{ matrix.group }} of 8)
run: |
PYTHON_VERSION_SAFE=$(echo "${{ matrix.python-version }}" | tr '.' '_')
DURATION_FILE="../../.test_durations_py${PYTHON_VERSION_SAFE}"
DURATION_FILE=".test_durations_py${PYTHON_VERSION_SAFE}"
# Temporarily always skip cached durations to fix test splitting
# When durations don't match, pytest-split runs duplicate tests instead of splitting
echo "Using even test splitting (duration cache disabled until fix merged)"
DURATIONS_ARG=""
# Original logic (disabled temporarily):
# if [ ! -f "$DURATION_FILE" ]; then
# echo "No cached durations found, tests will be split evenly"
@@ -70,25 +74,18 @@ jobs:
# echo "No test changes detected, using cached test durations for optimal splitting"
# DURATIONS_ARG="--durations-path=${DURATION_FILE}"
# fi
cd lib/crewai && uv run pytest \
uv run pytest \
--block-network \
--timeout=30 \
-vv \
--splits 8 \
--group ${{ matrix.group }} \
$DURATIONS_ARG \
--durations=10 \
-n auto \
--maxfail=3
- name: Run tool tests (group ${{ matrix.group }} of 8)
run: |
cd lib/crewai-tools && uv run pytest \
-vv \
--splits 8 \
--group ${{ matrix.group }} \
--durations=10 \
--maxfail=3
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4

View File

@@ -1,18 +0,0 @@
name: Trigger Deployment Tests
on:
release:
types: [published]
jobs:
trigger:
name: Trigger deployment tests
runs-on: ubuntu-latest
steps:
- name: Trigger deployment tests
uses: peter-evans/repository-dispatch@v3
with:
token: ${{ secrets.CREWAI_DEPLOYMENTS_PAT }}
repository: ${{ secrets.CREWAI_DEPLOYMENTS_REPOSITORY }}
event-type: crewai-release
client-payload: '{"release_tag": "${{ github.event.release.tag_name }}", "release_name": "${{ github.event.release.name }}"}'

1
.gitignore vendored
View File

@@ -2,6 +2,7 @@
.pytest_cache
__pycache__
dist/
lib/
.env
assets/*
.idea

View File

@@ -3,31 +3,17 @@ repos:
hooks:
- id: ruff
name: ruff
entry: bash -c 'source .venv/bin/activate && uv run ruff check --config pyproject.toml "$@"' --
entry: uv run ruff check
language: system
pass_filenames: true
types: [python]
- id: ruff-format
name: ruff-format
entry: bash -c 'source .venv/bin/activate && uv run ruff format --config pyproject.toml "$@"' --
entry: uv run ruff format
language: system
pass_filenames: true
types: [python]
- id: mypy
name: mypy
entry: bash -c 'source .venv/bin/activate && uv run mypy --config-file pyproject.toml "$@"' --
entry: uv run mypy
language: system
pass_filenames: true
types: [python]
exclude: ^(lib/crewai/src/crewai/cli/templates/|lib/crewai/tests/|lib/crewai-tools/tests/)
- repo: https://github.com/astral-sh/uv-pre-commit
rev: 0.9.3
hooks:
- id: uv-lock
- repo: https://github.com/commitizen-tools/commitizen
rev: v4.10.1
hooks:
- id: commitizen
- id: commitizen-branch
stages: [ pre-push ]
exclude: ^tests/

View File

@@ -57,14 +57,14 @@
> It empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario.
- **CrewAI Crews**: Optimize for autonomy and collaborative intelligence.
- **CrewAI Flows**: The **enterprise and production architecture** for building and deploying multi-agent systems. Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively
- **CrewAI Flows**: Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively
With over 100,000 developers certified through our community courses at [learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
standard for enterprise-ready AI automation.
# CrewAI AMP Suite
# CrewAI Enterprise Suite
CrewAI AMP Suite is a comprehensive bundle tailored for organizations that require secure, scalable, and easy-to-manage agent-driven automation.
CrewAI Enterprise Suite is a comprehensive bundle tailored for organizations that require secure, scalable, and easy-to-manage agent-driven automation.
You can try one part of the suite the [Crew Control Plane for free](https://app.crewai.com)
@@ -76,9 +76,9 @@ You can try one part of the suite the [Crew Control Plane for free](https://app.
- **Advanced Security**: Built-in robust security and compliance measures ensuring safe deployment and management.
- **Actionable Insights**: Real-time analytics and reporting to optimize performance and decision-making.
- **24/7 Support**: Dedicated enterprise support to ensure uninterrupted operation and quick resolution of issues.
- **On-premise and Cloud Deployment Options**: Deploy CrewAI AMP on-premise or in the cloud, depending on your security and compliance requirements.
- **On-premise and Cloud Deployment Options**: Deploy CrewAI Enterprise on-premise or in the cloud, depending on your security and compliance requirements.
CrewAI AMP is designed for enterprises seeking a powerful, reliable solution to transform complex business processes into efficient,
CrewAI Enterprise is designed for enterprises seeking a powerful, reliable solution to transform complex business processes into efficient,
intelligent automations.
## Table of contents
@@ -124,8 +124,7 @@ Setup and run your first CrewAI agents by following this tutorial.
[![CrewAI Getting Started Tutorial](https://img.youtube.com/vi/-kSOTtYzgEw/hqdefault.jpg)](https://www.youtube.com/watch?v=-kSOTtYzgEw "CrewAI Getting Started Tutorial")
###
Learning Resources
Learning Resources
Learn CrewAI through our comprehensive courses:
@@ -142,7 +141,6 @@ CrewAI offers two powerful, complementary approaches that work seamlessly togeth
- Dynamic task delegation and collaboration
- Specialized roles with defined goals and expertise
- Flexible problem-solving approaches
2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:
- Fine-grained control over execution paths for real-world scenarios
@@ -168,13 +166,13 @@ Ensure you have Python >=3.10 <3.14 installed on your system. CrewAI uses [UV](h
First, install CrewAI:
```shell
uv pip install crewai
pip install crewai
```
If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command:
```shell
uv pip install 'crewai[tools]'
pip install 'crewai[tools]'
```
The command above installs the basic package and also adds extra components which require more dependencies to function.
@@ -187,15 +185,14 @@ If you encounter issues during installation or usage, here are some common solut
1. **ModuleNotFoundError: No module named 'tiktoken'**
- Install tiktoken explicitly: `uv pip install 'crewai[embeddings]'`
- If using embedchain or other tools: `uv pip install 'crewai[tools]'`
- Install tiktoken explicitly: `pip install 'crewai[embeddings]'`
- If using embedchain or other tools: `pip install 'crewai[tools]'`
2. **Failed building wheel for tiktoken**
- Ensure Rust compiler is installed (see installation steps above)
- For Windows: Verify Visual C++ Build Tools are installed
- Try upgrading pip: `uv pip install --upgrade pip`
- If issues persist, use a pre-built wheel: `uv pip install tiktoken --prefer-binary`
- Try upgrading pip: `pip install --upgrade pip`
- If issues persist, use a pre-built wheel: `pip install tiktoken --prefer-binary`
### 2. Setting Up Your Crew with the YAML Configuration
@@ -273,7 +270,7 @@ reporting_analyst:
**tasks.yaml**
````yaml
```yaml
# src/my_project/config/tasks.yaml
research_task:
description: >
@@ -293,7 +290,7 @@ reporting_task:
Formatted as markdown without '```'
agent: reporting_analyst
output_file: report.md
````
```
**crew.py**
@@ -559,7 +556,7 @@ Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-
- **LangGraph**: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.
_P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb))._
*P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb)).*
- **Autogen**: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
@@ -614,7 +611,7 @@ uv build
### Installing Locally
```bash
uv pip install dist/*.tar.gz
pip install dist/*.tar.gz
```
## Telemetry
@@ -677,9 +674,9 @@ CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/b
### Enterprise Features
- [What additional features does CrewAI AMP offer?](#q-what-additional-features-does-crewai-amp-offer)
- [Is CrewAI AMP available for cloud and on-premise deployments?](#q-is-crewai-amp-available-for-cloud-and-on-premise-deployments)
- [Can I try CrewAI AMP for free?](#q-can-i-try-crewai-amp-for-free)
- [What additional features does CrewAI Enterprise offer?](#q-what-additional-features-does-crewai-enterprise-offer)
- [Is CrewAI Enterprise available for cloud and on-premise deployments?](#q-is-crewai-enterprise-available-for-cloud-and-on-premise-deployments)
- [Can I try CrewAI Enterprise for free?](#q-can-i-try-crewai-enterprise-for-free)
### Q: What exactly is CrewAI?
@@ -690,13 +687,13 @@ A: CrewAI is a standalone, lean, and fast Python framework built specifically fo
A: Install CrewAI using pip:
```shell
uv pip install crewai
pip install crewai
```
For additional tools, use:
```shell
uv pip install 'crewai[tools]'
pip install 'crewai[tools]'
```
### Q: Does CrewAI depend on LangChain?
@@ -735,17 +732,17 @@ A: Check out practical examples in the [CrewAI-examples repository](https://gith
A: Contributions are warmly welcomed! Fork the repository, create your branch, implement your changes, and submit a pull request. See the Contribution section of the README for detailed guidelines.
### Q: What additional features does CrewAI AMP offer?
### Q: What additional features does CrewAI Enterprise offer?
A: CrewAI AMP provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.
A: CrewAI Enterprise provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.
### Q: Is CrewAI AMP available for cloud and on-premise deployments?
### Q: Is CrewAI Enterprise available for cloud and on-premise deployments?
A: Yes, CrewAI AMP supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.
A: Yes, CrewAI Enterprise supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.
### Q: Can I try CrewAI AMP for free?
### Q: Can I try CrewAI Enterprise for free?
A: Yes, you can explore part of the CrewAI AMP Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.
A: Yes, you can explore part of the CrewAI Enterprise Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.
### Q: Does CrewAI support fine-tuning or training custom models?
@@ -765,7 +762,7 @@ A: CrewAI is highly scalable, supporting simple automations and large-scale ente
### Q: Does CrewAI offer debugging and monitoring tools?
A: Yes, CrewAI AMP includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.
A: Yes, CrewAI Enterprise includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.
### Q: What programming languages does CrewAI support?

5
canary/.gitignore vendored Normal file
View File

@@ -0,0 +1,5 @@
.env
__pycache__/
.DS_Store
report.md

54
canary/README.md Normal file
View File

@@ -0,0 +1,54 @@
# Canary Crew
Welcome to the Canary Crew project, powered by [crewAI](https://crewai.com). This template is designed to help you set up a multi-agent AI system with ease, leveraging the powerful and flexible framework provided by crewAI. Our goal is to enable your agents to collaborate effectively on complex tasks, maximizing their collective intelligence and capabilities.
## Installation
Ensure you have Python >=3.10 <3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install uv:
```bash
pip install uv
```
Next, navigate to your project directory and install the dependencies:
(Optional) Lock the dependencies and install them by using the CLI command:
```bash
crewai install
```
### Customizing
**Add your `OPENAI_API_KEY` into the `.env` file**
- Modify `src/canary/config/agents.yaml` to define your agents
- Modify `src/canary/config/tasks.yaml` to define your tasks
- Modify `src/canary/crew.py` to add your own logic, tools and specific args
- Modify `src/canary/main.py` to add custom inputs for your agents and tasks
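
As a rough illustration of the shape `agents.yaml` usually takes in a crewAI project: the `researcher` and `reporting_analyst` keys come from `crew.py` shown later in this diff, `{topic}` is interpolated from the inputs in `main.py`, and the role, goal, and backstory text below is invented.

```yaml
# src/canary/config/agents.yaml (illustrative sketch, not the actual file)
researcher:
  role: >
    {topic} Senior Researcher
  goal: >
    Uncover the most recent and relevant developments in {topic}
  backstory: >
    A seasoned analyst with a knack for surfacing the findings that matter.

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Turn research findings into a clear, detailed report
  backstory: >
    A meticulous writer who converts raw notes into structured markdown.
```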
## Running the Project
To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project:
```bash
$ crewai run
```
This command initializes the canary Crew, assembling the agents and assigning them tasks as defined in your configuration.
This example, unmodified, will create a `report.md` file in the root folder with the output of research on LLMs.
## Understanding Your Crew
The canary Crew is composed of multiple AI agents, each with unique roles, goals, and tools. These agents collaborate on a series of tasks, defined in `config/tasks.yaml`, leveraging their collective skills to achieve complex objectives. The `config/agents.yaml` file outlines the capabilities and configurations of each agent in your crew.
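
In the same spirit, a hedged sketch of the corresponding `tasks.yaml` entries: the task keys and `agent` assignments mirror what `crew.py` wires up (the `report.md` output file is set in `crew.py` itself), while the descriptions and expected outputs are placeholders.

```yaml
# src/canary/config/tasks.yaml (illustrative sketch, not the actual file)
research_task:
  description: >
    Research the latest developments in {topic} as of {current_year}.
  expected_output: >
    A bullet list of the most relevant findings about {topic}.
  agent: researcher

reporting_task:
  description: >
    Expand the research findings into a full report on {topic}.
  expected_output: >
    A complete report formatted as markdown.
  agent: reporting_analyst
```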
## Support
For support, questions, or feedback regarding the Canary Crew or crewAI:
- Visit our [documentation](https://docs.crewai.com)
- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)
- [Join our Discord](https://discord.com/invite/X4JWnZnxPb)
- [Chat with our docs](https://chatg.pt/DWjSBZn)
Let's create wonders together with the power and simplicity of crewAI.

23
canary/pyproject.toml Normal file
View File

@@ -0,0 +1,23 @@
[project]
name = "canary"
version = "0.1.0"
description = "canary using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.120.1,<1.0.0"
]
[project.scripts]
canary = "canary.main:run"
run_crew = "canary.main:run"
train = "canary.main:train"
replay = "canary.main:replay"
test = "canary.main:test"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.crewai]
type = "crew"

64
canary/src/canary/crew.py Normal file
View File

@@ -0,0 +1,64 @@
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List
# If you want to run a snippet of code before or after the crew starts,
# you can use the @before_kickoff and @after_kickoff decorators
# https://docs.crewai.com/concepts/crews#example-crew-class-with-decorators
@CrewBase
class Canary():
"""Canary crew"""
agents: List[BaseAgent]
tasks: List[Task]
# Learn more about YAML configuration files here:
# Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
# Tasks: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
# If you would like to add tools to your agents, you can learn more about it here:
# https://docs.crewai.com/concepts/agents#agent-tools
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'], # type: ignore[index]
verbose=True
)
@agent
def reporting_analyst(self) -> Agent:
return Agent(
config=self.agents_config['reporting_analyst'], # type: ignore[index]
verbose=True
)
# To learn more about structured task outputs,
# task dependencies, and task callbacks, check out the documentation:
# https://docs.crewai.com/concepts/tasks#overview-of-a-task
@task
def research_task(self) -> Task:
return Task(
config=self.tasks_config['research_task'], # type: ignore[index]
)
@task
def reporting_task(self) -> Task:
return Task(
config=self.tasks_config['reporting_task'], # type: ignore[index]
output_file='report.md'
)
@crew
def crew(self) -> Crew:
"""Creates the Canary crew"""
# To learn how to add knowledge sources to your crew, check out the documentation:
# https://docs.crewai.com/concepts/knowledge#what-is-knowledge
return Crew(
agents=self.agents, # Automatically created by the @agent decorator
tasks=self.tasks, # Automatically created by the @task decorator
process=Process.sequential,
verbose=True,
# process=Process.hierarchical, # In case you wanna use that instead https://docs.crewai.com/how-to/Hierarchical/
)

68
canary/src/canary/main.py Normal file
View File

@@ -0,0 +1,68 @@
#!/usr/bin/env python
import sys
import warnings
from datetime import datetime
from canary.crew import Canary
warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd")
# This main file is intended to be a way for you to run your
# crew locally, so refrain from adding unnecessary logic into this file.
# Replace with inputs you want to test with, it will automatically
# interpolate any tasks and agents information
def run():
"""
Run the crew.
"""
inputs = {
'topic': 'AI LLMs',
'current_year': str(datetime.now().year)
}
try:
Canary().crew().kickoff(inputs=inputs)
except Exception as e:
raise Exception(f"An error occurred while running the crew: {e}")
def train():
"""
Train the crew for a given number of iterations.
"""
inputs = {
"topic": "AI LLMs",
'current_year': str(datetime.now().year)
}
try:
Canary().crew().train(n_iterations=int(sys.argv[1]), filename=sys.argv[2], inputs=inputs)
except Exception as e:
raise Exception(f"An error occurred while training the crew: {e}")
def replay():
"""
Replay the crew execution from a specific task.
"""
try:
Canary().crew().replay(task_id=sys.argv[1])
except Exception as e:
raise Exception(f"An error occurred while replaying the crew: {e}")
def test():
"""
Test the crew execution and returns the results.
"""
inputs = {
"topic": "AI LLMs",
"current_year": str(datetime.now().year)
}
try:
Canary().crew().test(n_iterations=int(sys.argv[1]), eval_llm=sys.argv[2], inputs=inputs)
except Exception as e:
raise Exception(f"An error occurred while testing the crew: {e}")

3513
canary/uv.lock generated Normal file

File diff suppressed because it is too large.

View File

@@ -1,199 +0,0 @@
"""Pytest configuration for crewAI workspace."""
from collections.abc import Generator
import os
from pathlib import Path
import tempfile
from typing import Any
from dotenv import load_dotenv
import pytest
from vcr.request import Request # type: ignore[import-untyped]
env_test_path = Path(__file__).parent / ".env.test"
load_dotenv(env_test_path, override=True)
load_dotenv(override=True)
@pytest.fixture(autouse=True, scope="function")
def cleanup_event_handlers() -> Generator[None, Any, None]:
"""Clean up event bus handlers after each test to prevent test pollution."""
yield
try:
from crewai.events.event_bus import crewai_event_bus
with crewai_event_bus._rwlock.w_locked():
crewai_event_bus._sync_handlers.clear()
crewai_event_bus._async_handlers.clear()
except Exception: # noqa: S110
pass
@pytest.fixture(autouse=True, scope="function")
def setup_test_environment() -> Generator[None, Any, None]:
"""Setup test environment for crewAI workspace."""
with tempfile.TemporaryDirectory() as temp_dir:
storage_dir = Path(temp_dir) / "crewai_test_storage"
storage_dir.mkdir(parents=True, exist_ok=True)
if not storage_dir.exists() or not storage_dir.is_dir():
raise RuntimeError(
f"Failed to create test storage directory: {storage_dir}"
)
try:
test_file = storage_dir / ".permissions_test"
test_file.touch()
test_file.unlink()
except (OSError, IOError) as e:
raise RuntimeError(
f"Test storage directory {storage_dir} is not writable: {e}"
) from e
os.environ["CREWAI_STORAGE_DIR"] = str(storage_dir)
os.environ["CREWAI_TESTING"] = "true"
try:
yield
finally:
os.environ.pop("CREWAI_TESTING", "true")
os.environ.pop("CREWAI_STORAGE_DIR", None)
os.environ.pop("CREWAI_DISABLE_TELEMETRY", "true")
os.environ.pop("OTEL_SDK_DISABLED", "true")
os.environ.pop("OPENAI_BASE_URL", "https://api.openai.com/v1")
os.environ.pop("OPENAI_API_BASE", "https://api.openai.com/v1")
HEADERS_TO_FILTER = {
"authorization": "AUTHORIZATION-XXX",
"content-security-policy": "CSP-FILTERED",
"cookie": "COOKIE-XXX",
"set-cookie": "SET-COOKIE-XXX",
"permissions-policy": "PERMISSIONS-POLICY-XXX",
"referrer-policy": "REFERRER-POLICY-XXX",
"strict-transport-security": "STS-XXX",
"x-content-type-options": "X-CONTENT-TYPE-XXX",
"x-frame-options": "X-FRAME-OPTIONS-XXX",
"x-permitted-cross-domain-policies": "X-PERMITTED-XXX",
"x-request-id": "X-REQUEST-ID-XXX",
"x-runtime": "X-RUNTIME-XXX",
"x-xss-protection": "X-XSS-PROTECTION-XXX",
"x-stainless-arch": "X-STAINLESS-ARCH-XXX",
"x-stainless-os": "X-STAINLESS-OS-XXX",
"x-stainless-read-timeout": "X-STAINLESS-READ-TIMEOUT-XXX",
"cf-ray": "CF-RAY-XXX",
"etag": "ETAG-XXX",
"Strict-Transport-Security": "STS-XXX",
"access-control-expose-headers": "ACCESS-CONTROL-XXX",
"openai-organization": "OPENAI-ORG-XXX",
"openai-project": "OPENAI-PROJECT-XXX",
"x-ratelimit-limit-requests": "X-RATELIMIT-LIMIT-REQUESTS-XXX",
"x-ratelimit-limit-tokens": "X-RATELIMIT-LIMIT-TOKENS-XXX",
"x-ratelimit-remaining-requests": "X-RATELIMIT-REMAINING-REQUESTS-XXX",
"x-ratelimit-remaining-tokens": "X-RATELIMIT-REMAINING-TOKENS-XXX",
"x-ratelimit-reset-requests": "X-RATELIMIT-RESET-REQUESTS-XXX",
"x-ratelimit-reset-tokens": "X-RATELIMIT-RESET-TOKENS-XXX",
"x-goog-api-key": "X-GOOG-API-KEY-XXX",
"api-key": "X-API-KEY-XXX",
"User-Agent": "X-USER-AGENT-XXX",
"apim-request-id:": "X-API-CLIENT-REQUEST-ID-XXX",
"azureml-model-session": "AZUREML-MODEL-SESSION-XXX",
"x-ms-client-request-id": "X-MS-CLIENT-REQUEST-ID-XXX",
"x-ms-region": "X-MS-REGION-XXX",
"apim-request-id": "APIM-REQUEST-ID-XXX",
"x-api-key": "X-API-KEY-XXX",
"anthropic-organization-id": "ANTHROPIC-ORGANIZATION-ID-XXX",
"request-id": "REQUEST-ID-XXX",
"anthropic-ratelimit-input-tokens-limit": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-LIMIT-XXX",
"anthropic-ratelimit-input-tokens-remaining": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-REMAINING-XXX",
"anthropic-ratelimit-input-tokens-reset": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-RESET-XXX",
"anthropic-ratelimit-output-tokens-limit": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-LIMIT-XXX",
"anthropic-ratelimit-output-tokens-remaining": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-REMAINING-XXX",
"anthropic-ratelimit-output-tokens-reset": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-RESET-XXX",
"anthropic-ratelimit-tokens-limit": "ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX",
"anthropic-ratelimit-tokens-remaining": "ANTHROPIC-RATELIMIT-TOKENS-REMAINING-XXX",
"anthropic-ratelimit-tokens-reset": "ANTHROPIC-RATELIMIT-TOKENS-RESET-XXX",
"x-amz-date": "X-AMZ-DATE-XXX",
"amz-sdk-invocation-id": "AMZ-SDK-INVOCATION-ID-XXX",
"accept-encoding": "ACCEPT-ENCODING-XXX",
"x-amzn-requestid": "X-AMZN-REQUESTID-XXX",
"x-amzn-RequestId": "X-AMZN-REQUESTID-XXX",
"x-a2a-notification-token": "X-A2A-NOTIFICATION-TOKEN-XXX",
"x-a2a-version": "X-A2A-VERSION-XXX",
}
def _filter_request_headers(request: Request) -> Request: # type: ignore[no-any-unimported]
"""Filter sensitive headers from request before recording."""
for header_name, replacement in HEADERS_TO_FILTER.items():
for variant in [header_name, header_name.upper(), header_name.title()]:
if variant in request.headers:
request.headers[variant] = [replacement]
request.method = request.method.upper()
return request
def _filter_response_headers(response: dict[str, Any]) -> dict[str, Any]:
"""Filter sensitive headers from response before recording."""
# Remove Content-Encoding to prevent decompression issues on replay
for encoding_header in ["Content-Encoding", "content-encoding"]:
response["headers"].pop(encoding_header, None)
for header_name, replacement in HEADERS_TO_FILTER.items():
for variant in [header_name, header_name.upper(), header_name.title()]:
if variant in response["headers"]:
response["headers"][variant] = [replacement]
return response
@pytest.fixture(scope="module")
def vcr_cassette_dir(request: Any) -> str:
"""Generate cassette directory path based on test module location.
Organizes cassettes to mirror test directory structure within each package:
lib/crewai/tests/llms/google/test_google.py -> lib/crewai/tests/cassettes/llms/google/
lib/crewai-tools/tests/tools/test_search.py -> lib/crewai-tools/tests/cassettes/tools/
"""
test_file = Path(request.fspath)
for parent in test_file.parents:
if parent.name in ("crewai", "crewai-tools") and parent.parent.name == "lib":
package_root = parent
break
else:
package_root = test_file.parent
tests_root = package_root / "tests"
test_dir = test_file.parent
if test_dir != tests_root:
relative_path = test_dir.relative_to(tests_root)
cassette_dir = tests_root / "cassettes" / relative_path
else:
cassette_dir = tests_root / "cassettes"
cassette_dir.mkdir(parents=True, exist_ok=True)
return str(cassette_dir)
@pytest.fixture(scope="module")
def vcr_config(vcr_cassette_dir: str) -> dict[str, Any]:
"""Configure VCR with organized cassette storage."""
config = {
"cassette_library_dir": vcr_cassette_dir,
"record_mode": os.getenv("PYTEST_VCR_RECORD_MODE", "once"),
"filter_headers": [(k, v) for k, v in HEADERS_TO_FILTER.items()],
"before_record_request": _filter_request_headers,
"before_record_response": _filter_response_headers,
"filter_query_parameters": ["key"],
"match_on": ["method", "scheme", "host", "port", "path"],
}
if os.getenv("GITHUB_ACTIONS") == "true":
config["record_mode"] = "none"
return config

1737
crewAI.excalidraw Normal file

File diff suppressed because it is too large.

View File

@@ -9,22 +9,7 @@
},
"favicon": "/images/favicon.svg",
"contextual": {
"options": [
"copy",
"view",
"chatgpt",
"claude",
"perplexity",
"mcp",
"cursor",
"vscode",
{
"title": "Request a feature",
"description": "Join the discussion on GitHub to request a new feature",
"icon": "plus",
"href": "https://github.com/crewAIInc/crewAI/issues/new/choose"
}
]
"options": ["copy", "view", "chatgpt", "claude"]
},
"navigation": {
"languages": [
@@ -55,29 +40,13 @@
]
},
"tabs": [
{
"tab": "Home",
"icon": "house",
"groups": [
{
"group": "Welcome",
"pages": [
"index"
]
}
]
},
{
"tab": "Documentation",
"icon": "book-open",
"groups": [
{
"group": "Get Started",
"pages": [
"en/introduction",
"en/installation",
"en/quickstart"
]
"pages": ["en/introduction", "en/installation", "en/quickstart"]
},
{
"group": "Guides",
@@ -85,23 +54,17 @@
{
"group": "Strategy",
"icon": "compass",
"pages": [
"en/guides/concepts/evaluating-use-cases"
]
"pages": ["en/guides/concepts/evaluating-use-cases"]
},
{
"group": "Agents",
"icon": "user",
"pages": [
"en/guides/agents/crafting-effective-agents"
]
"pages": ["en/guides/agents/crafting-effective-agents"]
},
{
"group": "Crews",
"icon": "users",
"pages": [
"en/guides/crews/first-crew"
]
"pages": ["en/guides/crews/first-crew"]
},
{
"group": "Flows",
@@ -128,7 +91,6 @@
"en/concepts/tasks",
"en/concepts/crews",
"en/concepts/flows",
"en/concepts/production-architecture",
"en/concepts/knowledge",
"en/concepts/llms",
"en/concepts/processes",
@@ -147,7 +109,6 @@
"group": "MCP Integration",
"pages": [
"en/mcp/overview",
"en/mcp/dsl-integration",
"en/mcp/stdio",
"en/mcp/sse",
"en/mcp/streamable-http",
@@ -264,10 +225,9 @@
"group": "Integrations",
"icon": "plug",
"pages": [
"en/tools/integration/overview",
"en/tools/integration/bedrockinvokeagenttool",
"en/tools/integration/crewaiautomationtool",
"en/tools/integration/mergeagenthandlertool"
"en/tools/tool-integrations/overview",
"en/tools/tool-integrations/bedrockinvokeagenttool",
"en/tools/tool-integrations/crewaiautomationtool"
]
},
{
@@ -286,11 +246,8 @@
{
"group": "Observability",
"pages": [
"en/observability/tracing",
"en/observability/overview",
"en/observability/arize-phoenix",
"en/observability/braintrust",
"en/observability/datadog",
"en/observability/langdb",
"en/observability/langfuse",
"en/observability/langtrace",
@@ -320,61 +277,39 @@
"en/learn/force-tool-output-as-result",
"en/learn/hierarchical-process",
"en/learn/human-input-on-execution",
"en/learn/human-in-the-loop",
"en/learn/human-feedback-in-flows",
"en/learn/kickoff-async",
"en/learn/kickoff-for-each",
"en/learn/llm-connections",
"en/learn/multimodal-agents",
"en/learn/replay-tasks-from-latest-crew-kickoff",
"en/learn/sequential-process",
"en/learn/using-annotations",
"en/learn/execution-hooks",
"en/learn/llm-hooks",
"en/learn/tool-hooks"
"en/learn/using-annotations"
]
},
{
"group": "Telemetry",
"pages": [
"en/telemetry"
]
"pages": ["en/telemetry"]
}
]
},
{
"tab": "AMP",
"tab": "Enterprise",
"icon": "briefcase",
"groups": [
{
"group": "Getting Started",
"pages": [
"en/enterprise/introduction"
]
"pages": ["en/enterprise/introduction"]
},
{
"group": "Build",
"group": "Features",
"pages": [
"en/enterprise/features/automations",
"en/enterprise/features/crew-studio",
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions"
]
},
{
"group": "Operate",
"pages": [
"en/enterprise/features/traces",
"en/enterprise/features/rbac",
"en/enterprise/features/tool-repository",
"en/enterprise/features/webhook-streaming",
"en/enterprise/features/hallucination-guardrail"
]
},
{
"group": "Manage",
"pages": [
"en/enterprise/features/rbac"
"en/enterprise/features/traces",
"en/enterprise/features/hallucination-guardrail",
"en/enterprise/features/integrations",
"en/enterprise/features/agent-repositories"
]
},
{
@@ -386,20 +321,10 @@
"en/enterprise/integrations/github",
"en/enterprise/integrations/gmail",
"en/enterprise/integrations/google_calendar",
"en/enterprise/integrations/google_contacts",
"en/enterprise/integrations/google_docs",
"en/enterprise/integrations/google_drive",
"en/enterprise/integrations/google_sheets",
"en/enterprise/integrations/google_slides",
"en/enterprise/integrations/hubspot",
"en/enterprise/integrations/jira",
"en/enterprise/integrations/linear",
"en/enterprise/integrations/microsoft_excel",
"en/enterprise/integrations/microsoft_onedrive",
"en/enterprise/integrations/microsoft_outlook",
"en/enterprise/integrations/microsoft_sharepoint",
"en/enterprise/integrations/microsoft_teams",
"en/enterprise/integrations/microsoft_word",
"en/enterprise/integrations/notion",
"en/enterprise/integrations/salesforce",
"en/enterprise/integrations/shopify",
@@ -408,22 +333,6 @@
"en/enterprise/integrations/zendesk"
]
},
{
"group": "Triggers",
"pages": [
"en/enterprise/guides/automation-triggers",
"en/enterprise/guides/gmail-trigger",
"en/enterprise/guides/google-calendar-trigger",
"en/enterprise/guides/google-drive-trigger",
"en/enterprise/guides/outlook-trigger",
"en/enterprise/guides/onedrive-trigger",
"en/enterprise/guides/microsoft-teams-trigger",
"en/enterprise/guides/slack-trigger",
"en/enterprise/guides/hubspot-trigger",
"en/enterprise/guides/salesforce-trigger",
"en/enterprise/guides/zapier-trigger"
]
},
{
"group": "How-To Guides",
"pages": [
@@ -432,20 +341,21 @@
"en/enterprise/guides/kickoff-crew",
"en/enterprise/guides/update-crew",
"en/enterprise/guides/enable-crew-studio",
"en/enterprise/guides/capture_telemetry_logs",
"en/enterprise/guides/azure-openai-setup",
"en/enterprise/guides/tool-repository",
"en/enterprise/guides/automation-triggers",
"en/enterprise/guides/hubspot-trigger",
"en/enterprise/guides/react-component-export",
"en/enterprise/guides/salesforce-trigger",
"en/enterprise/guides/slack-trigger",
"en/enterprise/guides/team-management",
"en/enterprise/guides/webhook-automation",
"en/enterprise/guides/human-in-the-loop",
"en/enterprise/guides/webhook-automation"
"en/enterprise/guides/zapier-trigger"
]
},
{
"group": "Resources",
"pages": [
"en/enterprise/resources/frequently-asked-questions"
]
"pages": ["en/enterprise/resources/frequently-asked-questions"]
}
]
},
@@ -459,7 +369,6 @@
"en/api-reference/introduction",
"en/api-reference/inputs",
"en/api-reference/kickoff",
"en/api-reference/resume",
"en/api-reference/status"
]
}
@@ -471,10 +380,7 @@
"groups": [
{
"group": "Examples",
"pages": [
"en/examples/example",
"en/examples/cookbooks"
]
"pages": ["en/examples/example", "en/examples/cookbooks"]
}
]
},
@@ -484,9 +390,7 @@
"groups": [
{
"group": "Release Notes",
"pages": [
"en/changelog"
]
"pages": ["en/changelog"]
}
]
}
@@ -519,18 +423,6 @@
]
},
"tabs": [
{
"tab": "Início",
"icon": "house",
"groups": [
{
"group": "Bem-vindo",
"pages": [
"pt-BR/index"
]
}
]
},
{
"tab": "Documentação",
"icon": "book-open",
@@ -549,23 +441,17 @@
{
"group": "Estratégia",
"icon": "compass",
"pages": [
"pt-BR/guides/concepts/evaluating-use-cases"
]
"pages": ["pt-BR/guides/concepts/evaluating-use-cases"]
},
{
"group": "Agentes",
"icon": "user",
"pages": [
"pt-BR/guides/agents/crafting-effective-agents"
]
"pages": ["pt-BR/guides/agents/crafting-effective-agents"]
},
{
"group": "Crews",
"icon": "users",
"pages": [
"pt-BR/guides/crews/first-crew"
]
"pages": ["pt-BR/guides/crews/first-crew"]
},
{
"group": "Flows",
@@ -592,7 +478,6 @@
"pt-BR/concepts/tasks",
"pt-BR/concepts/crews",
"pt-BR/concepts/flows",
"pt-BR/concepts/production-architecture",
"pt-BR/concepts/knowledge",
"pt-BR/concepts/llms",
"pt-BR/concepts/processes",
@@ -611,7 +496,6 @@
"group": "Integração MCP",
"pages": [
"pt-BR/mcp/overview",
"pt-BR/mcp/dsl-integration",
"pt-BR/mcp/stdio",
"pt-BR/mcp/sse",
"pt-BR/mcp/streamable-http",
@@ -714,12 +598,12 @@
]
},
{
"group": "Integrations",
"group": "Integrações",
"icon": "plug",
"pages": [
"pt-BR/tools/integration/overview",
"pt-BR/tools/integration/bedrockinvokeagenttool",
"pt-BR/tools/integration/crewaiautomationtool"
"pt-BR/tools/tool-integrations/overview",
"pt-BR/tools/tool-integrations/bedrockinvokeagenttool",
"pt-BR/tools/tool-integrations/crewaiautomationtool"
]
},
{
@@ -737,11 +621,8 @@
{
"group": "Observabilidade",
"pages": [
"pt-BR/observability/tracing",
"pt-BR/observability/overview",
"pt-BR/observability/arize-phoenix",
"pt-BR/observability/braintrust",
"pt-BR/observability/datadog",
"pt-BR/observability/langdb",
"pt-BR/observability/langfuse",
"pt-BR/observability/langtrace",
@@ -770,61 +651,38 @@
"pt-BR/learn/force-tool-output-as-result",
"pt-BR/learn/hierarchical-process",
"pt-BR/learn/human-input-on-execution",
"pt-BR/learn/human-in-the-loop",
"pt-BR/learn/human-feedback-in-flows",
"pt-BR/learn/kickoff-async",
"pt-BR/learn/kickoff-for-each",
"pt-BR/learn/llm-connections",
"pt-BR/learn/multimodal-agents",
"pt-BR/learn/replay-tasks-from-latest-crew-kickoff",
"pt-BR/learn/sequential-process",
"pt-BR/learn/using-annotations",
"pt-BR/learn/execution-hooks",
"pt-BR/learn/llm-hooks",
"pt-BR/learn/tool-hooks"
"pt-BR/learn/using-annotations"
]
},
{
"group": "Telemetria",
"pages": [
"pt-BR/telemetry"
]
"pages": ["pt-BR/telemetry"]
}
]
},
{
"tab": "AMP",
"tab": "Enterprise",
"icon": "briefcase",
"groups": [
{
"group": "Começando",
"pages": [
"pt-BR/enterprise/introduction"
]
"pages": ["pt-BR/enterprise/introduction"]
},
{
"group": "Construir",
"group": "Funcionalidades",
"pages": [
"pt-BR/enterprise/features/automations",
"pt-BR/enterprise/features/crew-studio",
"pt-BR/enterprise/features/marketplace",
"pt-BR/enterprise/features/agent-repositories",
"pt-BR/enterprise/features/tools-and-integrations",
"pt-BR/enterprise/features/pii-trace-redactions"
]
},
{
"group": "Operar",
"pages": [
"pt-BR/enterprise/features/traces",
"pt-BR/enterprise/features/rbac",
"pt-BR/enterprise/features/tool-repository",
"pt-BR/enterprise/features/webhook-streaming",
"pt-BR/enterprise/features/hallucination-guardrail"
]
},
{
"group": "Gerenciar",
"pages": [
"pt-BR/enterprise/features/rbac"
"pt-BR/enterprise/features/traces",
"pt-BR/enterprise/features/hallucination-guardrail",
"pt-BR/enterprise/features/integrations"
]
},
{
@@ -836,20 +694,10 @@
"pt-BR/enterprise/integrations/github",
"pt-BR/enterprise/integrations/gmail",
"pt-BR/enterprise/integrations/google_calendar",
"pt-BR/enterprise/integrations/google_contacts",
"pt-BR/enterprise/integrations/google_docs",
"pt-BR/enterprise/integrations/google_drive",
"pt-BR/enterprise/integrations/google_sheets",
"pt-BR/enterprise/integrations/google_slides",
"pt-BR/enterprise/integrations/hubspot",
"pt-BR/enterprise/integrations/jira",
"pt-BR/enterprise/integrations/linear",
"pt-BR/enterprise/integrations/microsoft_excel",
"pt-BR/enterprise/integrations/microsoft_onedrive",
"pt-BR/enterprise/integrations/microsoft_outlook",
"pt-BR/enterprise/integrations/microsoft_sharepoint",
"pt-BR/enterprise/integrations/microsoft_teams",
"pt-BR/enterprise/integrations/microsoft_word",
"pt-BR/enterprise/integrations/notion",
"pt-BR/enterprise/integrations/salesforce",
"pt-BR/enterprise/integrations/shopify",
@@ -867,26 +715,14 @@
"pt-BR/enterprise/guides/update-crew",
"pt-BR/enterprise/guides/enable-crew-studio",
"pt-BR/enterprise/guides/azure-openai-setup",
"pt-BR/enterprise/guides/tool-repository",
"pt-BR/enterprise/guides/react-component-export",
"pt-BR/enterprise/guides/team-management",
"pt-BR/enterprise/guides/human-in-the-loop",
"pt-BR/enterprise/guides/webhook-automation"
]
},
{
"group": "Triggers",
"pages": [
"pt-BR/enterprise/guides/automation-triggers",
"pt-BR/enterprise/guides/gmail-trigger",
"pt-BR/enterprise/guides/google-calendar-trigger",
"pt-BR/enterprise/guides/google-drive-trigger",
"pt-BR/enterprise/guides/outlook-trigger",
"pt-BR/enterprise/guides/onedrive-trigger",
"pt-BR/enterprise/guides/microsoft-teams-trigger",
"pt-BR/enterprise/guides/slack-trigger",
"pt-BR/enterprise/guides/hubspot-trigger",
"pt-BR/enterprise/guides/react-component-export",
"pt-BR/enterprise/guides/salesforce-trigger",
"pt-BR/enterprise/guides/slack-trigger",
"pt-BR/enterprise/guides/team-management",
"pt-BR/enterprise/guides/webhook-automation",
"pt-BR/enterprise/guides/human-in-the-loop",
"pt-BR/enterprise/guides/zapier-trigger"
]
},
@@ -908,7 +744,6 @@
"pt-BR/api-reference/introduction",
"pt-BR/api-reference/inputs",
"pt-BR/api-reference/kickoff",
"pt-BR/api-reference/resume",
"pt-BR/api-reference/status"
]
}
@@ -920,10 +755,7 @@
"groups": [
{
"group": "Exemplos",
"pages": [
"pt-BR/examples/example",
"pt-BR/examples/cookbooks"
]
"pages": ["pt-BR/examples/example", "pt-BR/examples/cookbooks"]
}
]
},
@@ -933,9 +765,7 @@
"groups": [
{
"group": "Notas de Versão",
"pages": [
"pt-BR/changelog"
]
"pages": ["pt-BR/changelog"]
}
]
}
@@ -968,29 +798,13 @@
]
},
"tabs": [
{
"tab": "홈",
"icon": "house",
"groups": [
{
"group": "환영합니다",
"pages": [
"ko/index"
]
}
]
},
{
"tab": "기술 문서",
"icon": "book-open",
"groups": [
{
"group": "시작 안내",
"pages": [
"ko/introduction",
"ko/installation",
"ko/quickstart"
]
"pages": ["ko/introduction", "ko/installation", "ko/quickstart"]
},
{
"group": "가이드",
@@ -998,23 +812,17 @@
{
"group": "전략",
"icon": "compass",
"pages": [
"ko/guides/concepts/evaluating-use-cases"
]
"pages": ["ko/guides/concepts/evaluating-use-cases"]
},
{
"group": "에이전트 (Agents)",
"icon": "user",
"pages": [
"ko/guides/agents/crafting-effective-agents"
]
"pages": ["ko/guides/agents/crafting-effective-agents"]
},
{
"group": "크루 (Crews)",
"icon": "users",
"pages": [
"ko/guides/crews/first-crew"
]
"pages": ["ko/guides/crews/first-crew"]
},
{
"group": "플로우 (Flows)",
@@ -1041,7 +849,6 @@
"ko/concepts/tasks",
"ko/concepts/crews",
"ko/concepts/flows",
"ko/concepts/production-architecture",
"ko/concepts/knowledge",
"ko/concepts/llms",
"ko/concepts/processes",
@@ -1060,7 +867,6 @@
"group": "MCP 통합",
"pages": [
"ko/mcp/overview",
"ko/mcp/dsl-integration",
"ko/mcp/stdio",
"ko/mcp/sse",
"ko/mcp/streamable-http",
@@ -1170,16 +976,17 @@
"ko/tools/cloud-storage/overview",
"ko/tools/cloud-storage/s3readertool",
"ko/tools/cloud-storage/s3writertool",
"ko/tools/cloud-storage/bedrockinvokeagenttool",
"ko/tools/cloud-storage/bedrockkbretriever"
]
},
{
"group": "Integrations",
"group": "통합",
"icon": "plug",
"pages": [
"ko/tools/integration/overview",
"ko/tools/integration/bedrockinvokeagenttool",
"ko/tools/integration/crewaiautomationtool"
"ko/tools/tool-integrations/overview",
"ko/tools/tool-integrations/bedrockinvokeagenttool",
"ko/tools/tool-integrations/crewaiautomationtool"
]
},
{
@@ -1198,11 +1005,8 @@
{
"group": "Observability",
"pages": [
"ko/observability/tracing",
"ko/observability/overview",
"ko/observability/arize-phoenix",
"ko/observability/braintrust",
"ko/observability/datadog",
"ko/observability/langdb",
"ko/observability/langfuse",
"ko/observability/langtrace",
@@ -1231,25 +1035,18 @@
"ko/learn/force-tool-output-as-result",
"ko/learn/hierarchical-process",
"ko/learn/human-input-on-execution",
"ko/learn/human-in-the-loop",
"ko/learn/human-feedback-in-flows",
"ko/learn/kickoff-async",
"ko/learn/kickoff-for-each",
"ko/learn/llm-connections",
"ko/learn/multimodal-agents",
"ko/learn/replay-tasks-from-latest-crew-kickoff",
"ko/learn/sequential-process",
"ko/learn/using-annotations",
"ko/learn/execution-hooks",
"ko/learn/llm-hooks",
"ko/learn/tool-hooks"
"ko/learn/using-annotations"
]
},
{
"group": "Telemetry",
"pages": [
"ko/telemetry"
]
"pages": ["ko/telemetry"]
}
]
},
@@ -1259,33 +1056,18 @@
"groups": [
{
"group": "시작 안내",
"pages": [
"ko/enterprise/introduction"
]
"pages": ["ko/enterprise/introduction"]
},
{
"group": "빌드",
"group": "특징",
"pages": [
"ko/enterprise/features/automations",
"ko/enterprise/features/crew-studio",
"ko/enterprise/features/marketplace",
"ko/enterprise/features/agent-repositories",
"ko/enterprise/features/tools-and-integrations",
"ko/enterprise/features/pii-trace-redactions"
]
},
{
"group": "운영",
"pages": [
"ko/enterprise/features/traces",
"ko/enterprise/features/rbac",
"ko/enterprise/features/tool-repository",
"ko/enterprise/features/webhook-streaming",
"ko/enterprise/features/hallucination-guardrail"
]
},
{
"group": "관리",
"pages": [
"ko/enterprise/features/rbac"
"ko/enterprise/features/traces",
"ko/enterprise/features/hallucination-guardrail",
"ko/enterprise/features/integrations",
"ko/enterprise/features/agent-repositories"
]
},
{
@@ -1297,20 +1079,10 @@
"ko/enterprise/integrations/github",
"ko/enterprise/integrations/gmail",
"ko/enterprise/integrations/google_calendar",
"ko/enterprise/integrations/google_contacts",
"ko/enterprise/integrations/google_docs",
"ko/enterprise/integrations/google_drive",
"ko/enterprise/integrations/google_sheets",
"ko/enterprise/integrations/google_slides",
"ko/enterprise/integrations/hubspot",
"ko/enterprise/integrations/jira",
"ko/enterprise/integrations/linear",
"ko/enterprise/integrations/microsoft_excel",
"ko/enterprise/integrations/microsoft_onedrive",
"ko/enterprise/integrations/microsoft_outlook",
"ko/enterprise/integrations/microsoft_sharepoint",
"ko/enterprise/integrations/microsoft_teams",
"ko/enterprise/integrations/microsoft_word",
"ko/enterprise/integrations/notion",
"ko/enterprise/integrations/salesforce",
"ko/enterprise/integrations/shopify",
@@ -1328,34 +1100,20 @@
"ko/enterprise/guides/update-crew",
"ko/enterprise/guides/enable-crew-studio",
"ko/enterprise/guides/azure-openai-setup",
"ko/enterprise/guides/tool-repository",
"ko/enterprise/guides/react-component-export",
"ko/enterprise/guides/team-management",
"ko/enterprise/guides/human-in-the-loop",
"ko/enterprise/guides/webhook-automation"
]
},
{
"group": "트리거",
"pages": [
"ko/enterprise/guides/automation-triggers",
"ko/enterprise/guides/gmail-trigger",
"ko/enterprise/guides/google-calendar-trigger",
"ko/enterprise/guides/google-drive-trigger",
"ko/enterprise/guides/outlook-trigger",
"ko/enterprise/guides/onedrive-trigger",
"ko/enterprise/guides/microsoft-teams-trigger",
"ko/enterprise/guides/slack-trigger",
"ko/enterprise/guides/hubspot-trigger",
"ko/enterprise/guides/react-component-export",
"ko/enterprise/guides/salesforce-trigger",
"ko/enterprise/guides/slack-trigger",
"ko/enterprise/guides/team-management",
"ko/enterprise/guides/webhook-automation",
"ko/enterprise/guides/human-in-the-loop",
"ko/enterprise/guides/zapier-trigger"
]
},
{
"group": "학습 자원",
"pages": [
"ko/enterprise/resources/frequently-asked-questions"
]
"pages": ["ko/enterprise/resources/frequently-asked-questions"]
}
]
},
@@ -1369,7 +1127,6 @@
"ko/api-reference/introduction",
"ko/api-reference/inputs",
"ko/api-reference/kickoff",
"ko/api-reference/resume",
"ko/api-reference/status"
]
}
@@ -1381,10 +1138,7 @@
"groups": [
{
"group": "예시",
"pages": [
"ko/examples/example",
"ko/examples/cookbooks"
]
"pages": ["ko/examples/example", "ko/examples/cookbooks"]
}
]
},
@@ -1394,9 +1148,7 @@
"groups": [
{
"group": "릴리스 노트",
"pages": [
"ko/changelog"
]
"pages": ["ko/changelog"]
}
]
}

View File

@@ -1,32 +1,31 @@
---
title: "Introduction"
description: "Complete reference for the CrewAI AMP REST API"
description: "Complete reference for the CrewAI Enterprise REST API"
icon: "code"
mode: "wide"
---
# CrewAI AMP API
# CrewAI Enterprise API
Welcome to the CrewAI AMP API reference. This API allows you to programmatically interact with your deployed crews, enabling integration with your applications, workflows, and services.
Welcome to the CrewAI Enterprise API reference. This API allows you to programmatically interact with your deployed crews, enabling integration with your applications, workflows, and services.
## Quick Start
<Steps>
<Step title="Get Your API Credentials">
Navigate to your crew's detail page in the CrewAI AMP dashboard and copy your Bearer Token from the Status tab.
Navigate to your crew's detail page in the CrewAI Enterprise dashboard and copy your Bearer Token from the Status tab.
</Step>
<Step title="Discover Required Inputs">
Use the `GET /inputs` endpoint to see what parameters your crew expects.
</Step>
<Step title="Start a Crew Execution">
Call `POST /kickoff` with your inputs to start the crew execution and receive
a `kickoff_id`.
</Step>
<Step title="Discover Required Inputs">
Use the `GET /inputs` endpoint to see what parameters your crew expects.
</Step>
<Step title="Start a Crew Execution">
Call `POST /kickoff` with your inputs to start the crew execution and receive a `kickoff_id`.
</Step>
<Step title="Monitor Progress">
Use `GET /{kickoff_id}/status` to check execution status and retrieve results.
Use `GET /status/{kickoff_id}` to check execution status and retrieve results.
</Step>
</Steps>
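As a rough sketch of these steps in Python with the `requests` library — the crew URL and bearer token below are placeholders you would replace with the values from your own dashboard:

```python
import requests

BASE_URL = "https://your-crew-name.crewai.com"        # placeholder: your crew URL from the dashboard
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}  # placeholder: Bearer Token from the Status tab

# Step 1: discover which inputs the crew expects
inputs = requests.get(f"{BASE_URL}/inputs", headers=HEADERS)
inputs.raise_for_status()
print(inputs.json())
```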
@@ -41,14 +40,13 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
### Token Types
| Token Type | Scope | Use Case |
| :-------------------- | :------------------------ | :----------------------------------------------------------- |
| **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integration |
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |
| Token Type | Scope | Use Case |
|:-----------|:--------|:----------|
| **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integration |
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |
<Tip>
You can find both token types in the Status tab of your crew's detail page in
the CrewAI AMP dashboard.
You can find both token types in the Status tab of your crew's detail page in the CrewAI Enterprise dashboard.
</Tip>
## Base URL
@@ -64,36 +62,32 @@ Replace `your-crew-name` with your actual crew's URL from the dashboard.
## Typical Workflow
1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /{kickoff_id}/status` until completion
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /status/{kickoff_id}` until completion
4. **Results**: Extract the final output from the completed response
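The same workflow sketched end to end in Python, continuing the placeholder URL and token from the discovery example above. The request body shape and status field names are illustrative, so check the kickoff and status endpoint pages for the exact schema:

```python
import time
import requests

BASE_URL = "https://your-crew-name.crewai.com"        # placeholder crew URL
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}  # placeholder bearer token

# 2. Execution: submit inputs to start a run (body shape is illustrative)
kickoff = requests.post(f"{BASE_URL}/kickoff", headers=HEADERS, json={"inputs": {"topic": "AI"}})
kickoff.raise_for_status()
kickoff_id = kickoff.json()["kickoff_id"]

# 3. Monitoring: poll the status endpoint until the run finishes
while True:
    status = requests.get(f"{BASE_URL}/status/{kickoff_id}", headers=HEADERS).json()
    if status.get("state") in ("SUCCESS", "FAILED"):   # field name and values are illustrative
        break
    time.sleep(5)

# 4. Results: the completed response contains the final output
print(status)
```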
## Error Handling
The API uses standard HTTP status codes:
| Code | Meaning |
| ----- | :----------------------------------------- |
| `200` | Success |
| `400` | Bad Request - Invalid input format |
| `401` | Unauthorized - Invalid bearer token |
| `404` | Not Found - Resource doesn't exist |
| Code | Meaning |
|------|:--------|
| `200` | Success |
| `400` | Bad Request - Invalid input format |
| `401` | Unauthorized - Invalid bearer token |
| `404` | Not Found - Resource doesn't exist |
| `422` | Validation Error - Missing required inputs |
| `500` | Server Error - Contact support |
| `500` | Server Error - Contact support |
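One minimal sketch of mapping these codes to client-side handling, reusing the placeholder URL and token from the examples above:

```python
import requests

BASE_URL = "https://your-crew-name.crewai.com"        # placeholder crew URL
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}  # placeholder bearer token

response = requests.post(f"{BASE_URL}/kickoff", headers=HEADERS, json={"inputs": {}})

if response.status_code == 200:
    print("Kickoff accepted:", response.json())
elif response.status_code in (400, 422):
    # Invalid or missing inputs: re-check GET /inputs for the expected parameters
    print("Input problem:", response.text)
elif response.status_code == 401:
    print("Unauthorized: verify the bearer token from the dashboard")
elif response.status_code == 404:
    print("Not found: verify the crew URL")
else:
    print("Server error, contact support:", response.status_code)
```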
## Interactive Testing
<Info>
**Why no "Send" button?** Since each CrewAI AMP user has their own unique crew
URL, we use **reference mode** instead of an interactive playground to avoid
confusion. This shows you exactly what the requests should look like without
non-functional send buttons.
**Why no "Send" button?** Since each CrewAI Enterprise user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.
</Info>
Each endpoint page shows you:
- ✅ **Exact request format** with all parameters
- ✅ **Response examples** for success and error cases
- ✅ **Response examples** for success and error cases
- ✅ **Code samples** in multiple languages (cURL, Python, JavaScript, etc.)
- ✅ **Authentication examples** with proper Bearer token format
@@ -109,27 +103,18 @@ Each endpoint page shows you:
</CardGroup>
**Example workflow:**
1. **Copy this cURL example** from any endpoint page
2. **Replace `your-actual-crew-name.crewai.com`** with your real crew URL
2. **Replace `your-actual-crew-name.crewai.com`** with your real crew URL
3. **Replace the Bearer token** with your real token from the dashboard
4. **Run the request** in your terminal or API client
## Need Help?
<CardGroup cols={2}>
<Card
title="Enterprise Support"
icon="headset"
href="mailto:support@crewai.com"
>
<Card title="Enterprise Support" icon="headset" href="mailto:support@crewai.com">
Get help with API integration and troubleshooting
</Card>
<Card
title="Enterprise Dashboard"
icon="chart-line"
href="https://app.crewai.com"
>
<Card title="Enterprise Dashboard" icon="chart-line" href="https://app.crewai.com">
Manage your crews and view execution logs
</Card>
</CardGroup>

View File

@@ -1,6 +0,0 @@
---
title: "POST /resume"
description: "Resume crew execution with human feedback"
openapi: "/enterprise-api.en.yaml POST /resume"
mode: "wide"
---

View File

@@ -1,6 +1,8 @@
---
title: "GET /{kickoff_id}/status"
title: "GET /status/{kickoff_id}"
description: "Get execution status"
openapi: "/enterprise-api.en.yaml GET /{kickoff_id}/status"
openapi: "/enterprise-api.en.yaml GET /status/{kickoff_id}"
mode: "wide"
---

File diff suppressed because it is too large

View File

@@ -8,7 +8,6 @@ mode: "wide"
## Overview of an Agent
In the CrewAI framework, an `Agent` is an autonomous unit that can:
- Perform specific tasks
- Make decisions based on its role and goal
- Use tools to accomplish objectives
@@ -17,19 +16,15 @@ In the CrewAI framework, an `Agent` is an autonomous unit that can:
- Delegate tasks when allowed
<Tip>
Think of an agent as a specialized team member with specific skills,
expertise, and responsibilities. For example, a `Researcher` agent might excel
at gathering and analyzing information, while a `Writer` agent might be better
at creating content.
Think of an agent as a specialized team member with specific skills, expertise, and responsibilities. For example, a `Researcher` agent might excel at gathering and analyzing information, while a `Writer` agent might be better at creating content.
</Tip>
<Note type="info" title="Enterprise Enhancement: Visual Agent Builder">
CrewAI AMP includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time.
CrewAI Enterprise includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time.
![Visual Agent Builder Screenshot](/images/enterprise/crew-studio-interface.png)
The Visual Agent Builder enables:
- Intuitive agent configuration with form-based interfaces
- Real-time testing and validation
- Template library with pre-configured agent types
@@ -38,36 +33,36 @@ The Visual Agent Builder enables:
## Agent Attributes
| Attribute | Parameter | Type | Description |
| :-------------------------------------- | :----------------------- | :------------------------------------ | :------------------------------------------------------------------------------------------------------- |
| **Role** | `role` | `str` | Defines the agent's function and expertise within the crew. |
| **Goal** | `goal` | `str` | The individual objective that guides the agent's decision-making. |
| **Backstory** | `backstory` | `str` | Provides context and personality to the agent, enriching interactions. |
| **LLM** _(optional)_ | `llm` | `Union[str, LLM, Any]` | Language model that powers the agent. Defaults to the model specified in `OPENAI_MODEL_NAME` or "gpt-4". |
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | Capabilities or functions available to the agent. Defaults to an empty list. |
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | `Optional[Any]` | Language model for tool calling, overrides crew's LLM if specified. |
| **Max Iterations** _(optional)_ | `max_iter` | `int` | Maximum iterations before the agent must provide its best answer. Default is 20. |
| **Max RPM** _(optional)_ | `max_rpm` | `Optional[int]` | Maximum requests per minute to avoid rate limits. |
| **Max Execution Time** _(optional)_ | `max_execution_time` | `Optional[int]` | Maximum time (in seconds) for task execution. |
| **Verbose** _(optional)_ | `verbose` | `bool` | Enable detailed execution logs for debugging. Default is False. |
| **Allow Delegation** _(optional)_ | `allow_delegation` | `bool` | Allow the agent to delegate tasks to other agents. Default is False. |
| **Step Callback** _(optional)_ | `step_callback` | `Optional[Any]` | Function called after each agent step, overrides crew callback. |
| **Cache** _(optional)_ | `cache` | `bool` | Enable caching for tool usage. Default is True. |
| **System Template** _(optional)_ | `system_template` | `Optional[str]` | Custom system prompt template for the agent. |
| **Prompt Template** _(optional)_ | `prompt_template` | `Optional[str]` | Custom prompt template for the agent. |
| **Response Template** _(optional)_ | `response_template` | `Optional[str]` | Custom response template for the agent. |
| **Allow Code Execution** _(optional)_ | `allow_code_execution` | `Optional[bool]` | Enable code execution for the agent. Default is False. |
| **Max Retry Limit** _(optional)_ | `max_retry_limit` | `int` | Maximum number of retries when an error occurs. Default is 2. |
| **Respect Context Window** _(optional)_ | `respect_context_window` | `bool` | Keep messages under context window size by summarizing. Default is True. |
| **Code Execution Mode** _(optional)_ | `code_execution_mode` | `Literal["safe", "unsafe"]` | Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct). Default is 'safe'. |
| **Multimodal** _(optional)_ | `multimodal` | `bool` | Whether the agent supports multimodal capabilities. Default is False. |
| **Inject Date** _(optional)_ | `inject_date` | `bool` | Whether to automatically inject the current date into tasks. Default is False. |
| **Date Format** _(optional)_ | `date_format` | `str` | Format string for date when inject_date is enabled. Default is "%Y-%m-%d" (ISO format). |
| **Reasoning** _(optional)_ | `reasoning` | `bool` | Whether the agent should reflect and create a plan before executing a task. Default is False. |
| **Max Reasoning Attempts** _(optional)_ | `max_reasoning_attempts` | `Optional[int]` | Maximum number of reasoning attempts before executing the task. If None, will try until ready. |
| **Embedder** _(optional)_ | `embedder` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | `Optional[List[BaseKnowledgeSource]]` | Knowledge sources available to the agent. |
| **Use System Prompt** _(optional)_ | `use_system_prompt` | `Optional[bool]` | Whether to use system prompt (for o1 model support). Default is True. |
| Attribute | Parameter | Type | Description |
| :-------------------------------------- | :----------------------- | :---------------------------- | :------------------------------------------------------------------------------------------------------------------- |
| **Role** | `role` | `str` | Defines the agent's function and expertise within the crew. |
| **Goal** | `goal` | `str` | The individual objective that guides the agent's decision-making. |
| **Backstory** | `backstory` | `str` | Provides context and personality to the agent, enriching interactions. |
| **LLM** _(optional)_ | `llm` | `Union[str, LLM, Any]` | Language model that powers the agent. Defaults to the model specified in `OPENAI_MODEL_NAME` or "gpt-4". |
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | Capabilities or functions available to the agent. Defaults to an empty list. |
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | `Optional[Any]` | Language model for tool calling, overrides crew's LLM if specified. |
| **Max Iterations** _(optional)_ | `max_iter` | `int` | Maximum iterations before the agent must provide its best answer. Default is 20. |
| **Max RPM** _(optional)_ | `max_rpm` | `Optional[int]` | Maximum requests per minute to avoid rate limits. |
| **Max Execution Time** _(optional)_ | `max_execution_time` | `Optional[int]` | Maximum time (in seconds) for task execution. |
| **Verbose** _(optional)_ | `verbose` | `bool` | Enable detailed execution logs for debugging. Default is False. |
| **Allow Delegation** _(optional)_ | `allow_delegation` | `bool` | Allow the agent to delegate tasks to other agents. Default is False. |
| **Step Callback** _(optional)_ | `step_callback` | `Optional[Any]` | Function called after each agent step, overrides crew callback. |
| **Cache** _(optional)_ | `cache` | `bool` | Enable caching for tool usage. Default is True. |
| **System Template** _(optional)_ | `system_template` | `Optional[str]` | Custom system prompt template for the agent. |
| **Prompt Template** _(optional)_ | `prompt_template` | `Optional[str]` | Custom prompt template for the agent. |
| **Response Template** _(optional)_ | `response_template` | `Optional[str]` | Custom response template for the agent. |
| **Allow Code Execution** _(optional)_ | `allow_code_execution` | `Optional[bool]` | Enable code execution for the agent. Default is False. |
| **Max Retry Limit** _(optional)_ | `max_retry_limit` | `int` | Maximum number of retries when an error occurs. Default is 2. |
| **Respect Context Window** _(optional)_ | `respect_context_window` | `bool` | Keep messages under context window size by summarizing. Default is True. |
| **Code Execution Mode** _(optional)_ | `code_execution_mode` | `Literal["safe", "unsafe"]` | Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct). Default is 'safe'. |
| **Multimodal** _(optional)_ | `multimodal` | `bool` | Whether the agent supports multimodal capabilities. Default is False. |
| **Inject Date** _(optional)_ | `inject_date` | `bool` | Whether to automatically inject the current date into tasks. Default is False. |
| **Date Format** _(optional)_ | `date_format` | `str` | Format string for date when inject_date is enabled. Default is "%Y-%m-%d" (ISO format). |
| **Reasoning** _(optional)_ | `reasoning` | `bool` | Whether the agent should reflect and create a plan before executing a task. Default is False. |
| **Max Reasoning Attempts** _(optional)_ | `max_reasoning_attempts` | `Optional[int]` | Maximum number of reasoning attempts before executing the task. If None, will try until ready. |
| **Embedder** _(optional)_ | `embedder` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | `Optional[List[BaseKnowledgeSource]]` | Knowledge sources available to the agent. |
| **Use System Prompt** _(optional)_ | `use_system_prompt` | `Optional[bool]` | Whether to use system prompt (for o1 model support). Default is True. |
## Creating Agents
@@ -142,8 +137,7 @@ class LatestAiDevelopmentCrew():
```
<Note>
The names you use in your YAML files (`agents.yaml`) should match the method
names in your Python code.
The names you use in your YAML files (`agents.yaml`) should match the method names in your Python code.
</Note>
### Direct Code Definition
@@ -190,7 +184,6 @@ agent = Agent(
Let's break down some key parameter combinations for common use cases:
#### Basic Research Agent
```python Code
research_agent = Agent(
role="Research Analyst",
@@ -202,7 +195,6 @@ research_agent = Agent(
```
#### Code Development Agent
```python Code
dev_agent = Agent(
role="Senior Python Developer",
@@ -216,7 +208,6 @@ dev_agent = Agent(
```
#### Long-Running Analysis Agent
```python Code
analysis_agent = Agent(
role="Data Analyst",
@@ -230,7 +221,6 @@ analysis_agent = Agent(
```
#### Custom Template Agent
```python Code
custom_agent = Agent(
role="Customer Service Representative",
@@ -246,7 +236,6 @@ custom_agent = Agent(
```
#### Date-Aware Agent with Reasoning
```python Code
strategic_agent = Agent(
role="Market Analyst",
@@ -261,7 +250,6 @@ strategic_agent = Agent(
```
#### Reasoning Agent
```python Code
reasoning_agent = Agent(
role="Strategic Planner",
@@ -275,7 +263,6 @@ reasoning_agent = Agent(
```
#### Multimodal Agent
```python Code
multimodal_agent = Agent(
role="Visual Content Analyst",
@@ -289,64 +276,52 @@ multimodal_agent = Agent(
### Parameter Details
#### Critical Parameters
- `role`, `goal`, and `backstory` are required and shape the agent's behavior
- `llm` determines the language model used (default: OpenAI's GPT-4)
#### Memory and Context
- `memory`: Enable to maintain conversation history
- `respect_context_window`: Prevents token limit issues
- `knowledge_sources`: Add domain-specific knowledge bases
#### Execution Control
- `max_iter`: Maximum attempts before giving best answer
- `max_execution_time`: Timeout in seconds
- `max_rpm`: Rate limiting for API calls
- `max_retry_limit`: Retries on error
#### Code Execution
- `allow_code_execution`: Must be True to run code
- `code_execution_mode`:
- `"safe"`: Uses Docker (recommended for production)
- `"unsafe"`: Direct execution (use only in trusted environments)
<Note>
This runs a default Docker image. If you want to configure the Docker image,
check out the Code Interpreter Tool in the tools section. Add the code
interpreter tool to the agent via its `tools` parameter.
</Note>
This runs a default Docker image. If you want to configure the Docker image, check out the Code Interpreter Tool in the tools section.
Add the code interpreter tool to the agent via its `tools` parameter.
</Note>
#### Advanced Features
- `multimodal`: Enable multimodal capabilities for processing text and visual content
- `reasoning`: Enable agent to reflect and create plans before executing tasks
- `inject_date`: Automatically inject current date into task descriptions
#### Templates
- `system_template`: Defines agent's core behavior
- `prompt_template`: Structures input format
- `response_template`: Formats agent responses
<Note>
When using custom templates, ensure that both `system_template` and
`prompt_template` are defined. The `response_template` is optional but
recommended for consistent output formatting.
When using custom templates, ensure that both `system_template` and `prompt_template` are defined. The `response_template` is optional but recommended for consistent output formatting.
</Note>
<Note>
When using custom templates, you can use variables like `{role}`, `{goal}`,
and `{backstory}` in your templates. These will be automatically populated
during execution.
When using custom templates, you can use variables like `{role}`, `{goal}`, and `{backstory}` in your templates. These will be automatically populated during execution.
</Note>
## Agent Tools
Agents can be equipped with various tools to enhance their capabilities. CrewAI supports tools from:
- [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools)
- [LangChain Tools](https://python.langchain.com/docs/integrations/tools)
@@ -385,8 +360,7 @@ analyst = Agent(
```
<Note>
When `memory` is enabled, the agent will maintain context across multiple
interactions, improving its ability to handle complex, multi-step tasks.
When `memory` is enabled, the agent will maintain context across multiple interactions, improving its ability to handle complex, multi-step tasks.
</Note>
## Context Window Management
@@ -416,7 +390,6 @@ smart_agent = Agent(
```
**What happens when context limits are exceeded:**
- ⚠️ **Warning message**: `"Context length exceeded. Summarizing content to fit the model context window."`
- 🔄 **Automatic summarization**: CrewAI intelligently summarizes the conversation history
- ✅ **Continued execution**: Task execution continues seamlessly with the summarized context
@@ -438,7 +411,6 @@ strict_agent = Agent(
```
**What happens when context limits are exceeded:**
- ❌ **Error message**: `"Context length exceeded. Consider using smaller text or RAG tools from crewai_tools."`
- 🛑 **Execution stops**: Task execution halts immediately
- 🔧 **Manual intervention required**: You need to modify your approach
@@ -446,7 +418,6 @@ strict_agent = Agent(
### Choosing the Right Setting
#### Use `respect_context_window=True` (Default) when:
- **Processing large documents** that might exceed context limits
- **Long-running conversations** where some summarization is acceptable
- **Research tasks** where general context is more important than exact details
@@ -465,7 +436,6 @@ document_processor = Agent(
```
#### Use `respect_context_window=False` when:
- **Precision is critical** and information loss is unacceptable
- **Legal or medical tasks** requiring complete context
- **Code review** where missing details could introduce bugs
@@ -488,7 +458,6 @@ precision_agent = Agent(
When dealing with very large datasets, consider these strategies:
#### 1. Use RAG Tools
```python Code
from crewai_tools import RagTool
@@ -506,7 +475,6 @@ rag_agent = Agent(
```
#### 2. Use Knowledge Sources
```python Code
# Use knowledge sources instead of large prompts
knowledge_agent = Agent(
@@ -530,7 +498,6 @@ knowledge_agent = Agent(
### Troubleshooting Context Issues
**If you're getting context limit errors:**
```python Code
# Quick fix: Enable automatic handling
agent.respect_context_window = True
@@ -544,7 +511,6 @@ agent.tools = [RagTool()]
```
**If automatic summarization loses important information:**
```python Code
# Disable auto-summarization and use RAG instead
agent = Agent(
@@ -558,10 +524,7 @@ agent = Agent(
```
<Note>
The context window management feature works automatically in the background.
You don't need to call any special functions - just set
`respect_context_window` to your preferred behavior and CrewAI handles the
rest!
The context window management feature works automatically in the background. You don't need to call any special functions - just set `respect_context_window` to your preferred behavior and CrewAI handles the rest!
</Note>
## Direct Agent Interaction with `kickoff()`
@@ -593,10 +556,10 @@ print(result.raw)
### Parameters and Return Values
| Parameter | Type | Description |
| :---------------- | :--------------------------------- | :------------------------------------------------------------------------ |
| `messages` | `Union[str, List[Dict[str, str]]]` | Either a string query or a list of message dictionaries with role/content |
| `response_format` | `Optional[Type[Any]]` | Optional Pydantic model for structured output |
| Parameter | Type | Description |
| :---------------- | :---------------------------------- | :------------------------------------------------------------------------ |
| `messages` | `Union[str, List[Dict[str, str]]]` | Either a string query or a list of message dictionaries with role/content |
| `response_format` | `Optional[Type[Any]]` | Optional Pydantic model for structured output |
The method returns a `LiteAgentOutput` object with the following properties:
@@ -658,34 +621,28 @@ asyncio.run(main())
```
<Note>
The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler
execution flow while preserving all of the agent's configuration (role, goal,
backstory, tools, etc.).
The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler execution flow while preserving all of the agent's configuration (role, goal, backstory, tools, etc.).
</Note>
## Important Considerations and Best Practices
### Security and Code Execution
- When using `allow_code_execution`, be cautious with user input and always validate it
- Use `code_execution_mode: "safe"` (Docker) in production environments
- Consider setting appropriate `max_execution_time` limits to prevent infinite loops
### Performance Optimization
- Use `respect_context_window: true` to prevent token limit issues
- Set appropriate `max_rpm` to avoid rate limiting
- Enable `cache: true` to improve performance for repetitive tasks
- Adjust `max_iter` and `max_retry_limit` based on task complexity
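As one possible way to combine the security and performance recommendations above, here is a minimal sketch; the role, goal, and backstory strings are illustrative:

```python Code
from crewai import Agent

production_agent = Agent(
    role="Data Processor",                    # illustrative role
    goal="Process incoming records safely",   # illustrative goal
    backstory="Handles untrusted input in a production pipeline.",
    allow_code_execution=True,
    code_execution_mode="safe",     # Docker-based execution, recommended for production
    max_execution_time=300,         # cap runtime to prevent infinite loops
    respect_context_window=True,    # summarize instead of exceeding the token limit
    max_rpm=10,                     # stay under provider rate limits
    cache=True,                     # reuse tool results for repetitive tasks
    max_iter=20,
    max_retry_limit=2,
)
```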
### Memory and Context Management
- Leverage `knowledge_sources` for domain-specific information
- Configure `embedder` when using custom embedding models
- Use custom templates (`system_template`, `prompt_template`, `response_template`) for fine-grained control over agent behavior
### Advanced Features
- Enable `reasoning: true` for agents that need to plan and reflect before executing complex tasks
- Set appropriate `max_reasoning_attempts` to control planning iterations (None for unlimited attempts)
- Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks
@@ -693,7 +650,6 @@ asyncio.run(main())
- Enable `multimodal: true` for agents that need to process both text and visual content
### Agent Collaboration
- Enable `allow_delegation: true` when agents need to work together
- Use `step_callback` to monitor and log agent interactions
- Consider using different LLMs for different purposes:
@@ -701,7 +657,6 @@ asyncio.run(main())
- `function_calling_llm` for efficient tool usage
### Date Awareness and Reasoning
- Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks
- Customize the date format with `date_format` using standard Python datetime format codes
- Valid format codes include: %Y (year), %m (month), %d (day), %B (full month name), etc.
@@ -709,26 +664,22 @@ asyncio.run(main())
- Enable `reasoning: true` for complex tasks that benefit from upfront planning and reflection
### Model Compatibility
- Set `use_system_prompt: false` for older models that don't support system messages
- Ensure your chosen `llm` supports the features you need (like function calling)
## Troubleshooting Common Issues
1. **Rate Limiting**: If you're hitting API rate limits:
- Implement appropriate `max_rpm`
- Use caching for repetitive operations
- Consider batching requests
2. **Context Window Errors**: If you're exceeding context limits:
- Enable `respect_context_window`
- Use more efficient prompts
- Clear agent memory periodically
3. **Code Execution Issues**: If code execution fails:
- Verify Docker is installed for safe mode
- Check execution permissions
- Review code sandbox settings

View File

@@ -5,12 +5,7 @@ icon: terminal
mode: "wide"
---
<Warning>
Since release 0.140.0, CrewAI AMP started a process of migrating its login
provider. As such, the authentication flow via the CLI was updated. Users who use
Google to log in, or who created their account after July 3rd, 2025, will be
unable to log in with older versions of the `crewai` library.
</Warning>
<Warning>Since release 0.140.0, CrewAI Enterprise started a process of migrating its login provider. As such, the authentication flow via the CLI was updated. Users who use Google to log in, or who created their account after July 3rd, 2025, will be unable to log in with older versions of the `crewai` library.</Warning>
## Overview
@@ -46,7 +41,6 @@ crewai create [OPTIONS] TYPE NAME
- `NAME`: Name of the crew or flow
Example:
```shell Terminal
crewai create crew my_new_crew
crewai create flow my_new_flow
@@ -63,7 +57,6 @@ crewai version [OPTIONS]
- `--tools`: (Optional) Show the installed version of CrewAI tools
Example:
```shell Terminal
crewai version
crewai version --tools
@@ -81,7 +74,6 @@ crewai train [OPTIONS]
- `-f, --filename TEXT`: Path to a custom file for training (default: "trained_agents_data.pkl")
Example:
```shell Terminal
crewai train -n 10 -f my_training_data.pkl
```
@@ -97,7 +89,6 @@ crewai replay [OPTIONS]
- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks
Example:
```shell Terminal
crewai replay -t task_123456
```
@@ -127,7 +118,6 @@ crewai reset-memories [OPTIONS]
- `-a, --all`: Reset ALL memories
Example:
```shell Terminal
crewai reset-memories --long --short
crewai reset-memories --all
@@ -145,7 +135,6 @@ crewai test [OPTIONS]
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")
Example:
```shell Terminal
crewai test -n 5 -m gpt-3.5-turbo
```
@@ -159,16 +148,12 @@ crewai run
```
<Note>
Starting from version 0.103.0, the `crewai run` command can be used to run
both standard crews and flows. For flows, it automatically detects the type
from pyproject.toml and runs the appropriate command. This is now the
recommended way to run both crews and flows.
Starting from version 0.103.0, the `crewai run` command can be used to run both standard crews and flows. For flows, it automatically detects the type from pyproject.toml and runs the appropriate command. This is now the recommended way to run both crews and flows.
</Note>
<Note>
Make sure to run these commands from the directory where your CrewAI project
is set up. Some commands may require additional configuration or setup within
your project structure.
Make sure to run these commands from the directory where your CrewAI project is set up.
Some commands may require additional configuration or setup within your project structure.
</Note>
### 9. Chat
@@ -180,7 +165,6 @@ After receiving the results, you can continue interacting with the assistant for
```shell Terminal
crewai chat
```
<Note>
Ensure you execute these commands from your CrewAI project's root directory.
</Note>
@@ -198,30 +182,28 @@ def crew(self) -> Crew:
chat_llm="gpt-4o", # LLM for chat orchestration
)
```
</Note>
### 10. Deploy
Deploy the crew or flow to [CrewAI AMP](https://app.crewai.com).
Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).
- **Authentication**: You need to be authenticated to deploy to CrewAI AMP.
You can log in or create an account with:
```shell Terminal
crewai login
```
- **Authentication**: You need to be authenticated to deploy to CrewAI Enterprise.
You can log in or create an account with:
```shell Terminal
crewai login
```
- **Create a deployment**: Once you are authenticated, you can create a deployment for your crew or flow from the root of your local project.
```shell Terminal
crewai deploy create
```
- Reads your local project configuration.
- Prompts you to confirm the environment variables (like `OPENAI_API_KEY`, `SERPER_API_KEY`) found locally. These will be securely stored with the deployment on the Enterprise platform. Ensure your sensitive keys are correctly configured locally (e.g., in a `.env` file) before running this.
```shell Terminal
crewai deploy create
```
- Reads your local project configuration.
- Prompts you to confirm the environment variables (like `OPENAI_API_KEY`, `SERPER_API_KEY`) found locally. These will be securely stored with the deployment on the Enterprise platform. Ensure your sensitive keys are correctly configured locally (e.g., in a `.env` file) before running this.
### 11. Organization Management
Manage your CrewAI AMP organizations.
Manage your CrewAI Enterprise organizations.
```shell Terminal
crewai org [COMMAND] [OPTIONS]
@@ -230,113 +212,98 @@ crewai org [COMMAND] [OPTIONS]
#### Commands:
- `list`: List all organizations you belong to
```shell Terminal
crewai org list
```
- `current`: Display your currently active organization
```shell Terminal
crewai org current
```
- `switch`: Switch to a specific organization
```shell Terminal
crewai org switch <organization_id>
```
<Note>
You must be authenticated to CrewAI AMP to use these organization management
commands.
You must be authenticated to CrewAI Enterprise to use these organization management commands.
</Note>
- **Create a deployment** (continued):
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI AMP.
```shell Terminal
crewai deploy push
```
- Initiates the deployment process on the CrewAI AMP platform.
- Upon successful initiation, it will output the Deployment created successfully! message along with the Deployment Name and a unique Deployment ID (UUID).
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI Enterprise.
```shell Terminal
crewai deploy push
```
- Initiates the deployment process on the CrewAI Enterprise platform.
- Upon successful initiation, it will output the Deployment created successfully! message along with the Deployment Name and a unique Deployment ID (UUID).
- **Deployment Status**: You can check the status of your deployment with:
```shell Terminal
crewai deploy status
```
This fetches the latest deployment status of your most recent deployment attempt (e.g., `Building Images for Crew`, `Deploy Enqueued`, `Online`).
```shell Terminal
crewai deploy status
```
This fetches the latest deployment status of your most recent deployment attempt (e.g., `Building Images for Crew`, `Deploy Enqueued`, `Online`).
- **Deployment Logs**: You can check the logs of your deployment with:
```shell Terminal
crewai deploy logs
```
This streams the deployment logs to your terminal.
```shell Terminal
crewai deploy logs
```
This streams the deployment logs to your terminal.
- **List deployments**: You can list all your deployments with:
```shell Terminal
crewai deploy list
```
This lists all your deployments.
```shell Terminal
crewai deploy list
```
This lists all your deployments.
- **Delete a deployment**: You can delete a deployment with:
```shell Terminal
crewai deploy remove
```
This deletes the deployment from the CrewAI AMP platform.
```shell Terminal
crewai deploy remove
```
This deletes the deployment from the CrewAI Enterprise platform.
- **Help Command**: You can get help with the CLI with:
```shell Terminal
crewai deploy --help
```
This shows the help message for the CrewAI Deploy CLI.
```shell Terminal
crewai deploy --help
```
This shows the help message for the CrewAI Deploy CLI.
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI AMP](http://app.crewai.com) using the CLI.
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI Enterprise](http://app.crewai.com) using the CLI.
<iframe
className="w-full aspect-video rounded-xl"
width="100%"
height="400"
src="https://www.youtube.com/embed/3EqSV-CYDZA"
title="CrewAI Deployment Guide"
frameBorder="0"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen
allowfullscreen
></iframe>
### 11. Login
Authenticate with CrewAI AMP using a secure device code flow (no email entry required).
Authenticate with CrewAI Enterprise using a secure device code flow (no email entry required).
```shell Terminal
crewai login
```
What happens:
- A verification URL and short code are displayed in your terminal
- Your browser opens to the verification URL
- Enter/confirm the code to complete authentication
Notes:
- The OAuth2 provider and domain are configured via `crewai config` (defaults use `login.crewai.com`)
- After successful login, the CLI also attempts to authenticate to the Tool Repository automatically
- If you reset your configuration, run `crewai login` again to re-authenticate
### 12. API Keys
When running the `crewai create crew` command, the CLI will show you a list of available LLM providers to choose from, followed by model selection for your chosen provider.
When running the ```crewai create crew``` command, the CLI will show you a list of available LLM providers to choose from, followed by model selection for your chosen provider.
Once you've selected an LLM provider and model, you will be prompted for API keys.
@@ -344,11 +311,11 @@ Once you've selected an LLM provider and model, you will be prompted for API key
Here's a list of the most popular LLM providers suggested by the CLI:
- OpenAI
- Groq
- Anthropic
- Google Gemini
- SambaNova
* OpenAI
* Groq
* Anthropic
* Google Gemini
* SambaNova
When you select a provider, the CLI will then show you available models for that provider and prompt you to enter your API key.
@@ -360,7 +327,7 @@ When you select a provider, the CLI will prompt you to enter the Key name and th
See the following link for each provider's key name:
- [LiteLLM Providers](https://docs.litellm.ai/docs/providers)
* [LiteLLM Providers](https://docs.litellm.ai/docs/providers)
### 13. Configuration Management
@@ -373,26 +340,23 @@ crewai config [COMMAND] [OPTIONS]
#### Commands:
- `list`: Display all CLI configuration parameters
```shell Terminal
crewai config list
```
- `set`: Set a CLI configuration parameter
```shell Terminal
crewai config set <key> <value>
```
- `reset`: Reset all CLI configuration parameters to default values
```shell Terminal
crewai config reset
```
#### Available Configuration Parameters
- `enterprise_base_url`: Base URL of the CrewAI AMP instance
- `enterprise_base_url`: Base URL of the CrewAI Enterprise instance
- `oauth2_provider`: OAuth2 provider used for authentication (e.g., workos, okta, auth0)
- `oauth2_audience`: OAuth2 audience value, typically used to identify the target API or resource
- `oauth2_client_id`: OAuth2 client ID issued by the provider, used during authentication requests
@@ -401,141 +365,45 @@ crewai config reset
#### Examples
Display current configuration:
```shell Terminal
crewai config list
```
Example output:
| Setting | Value | Description |
| Setting | Value | Description |
| :------------------ | :----------------------- | :---------------------------------------------------------- |
| enterprise_base_url | https://app.crewai.com | Base URL of the CrewAI AMP instance |
| org_name | Not set | Name of the currently active organization |
| org_uuid | Not set | UUID of the currently active organization |
| oauth2_provider | workos | OAuth2 provider (e.g., workos, okta, auth0) |
| oauth2_audience | client_01YYY | Audience identifying the target API/resource |
| oauth2_client_id | client_01XXX | OAuth2 client ID issued by the provider |
| oauth2_domain | login.crewai.com | Provider domain (e.g., your-org.auth0.com) |
| enterprise_base_url | https://app.crewai.com | Base URL of the CrewAI Enterprise instance |
| org_name | Not set | Name of the currently active organization |
| org_uuid | Not set | UUID of the currently active organization |
| oauth2_provider | workos | OAuth2 provider (e.g., workos, okta, auth0) |
| oauth2_audience | client_01YYY | Audience identifying the target API/resource |
| oauth2_client_id | client_01XXX | OAuth2 client ID issued by the provider |
| oauth2_domain | login.crewai.com | Provider domain (e.g., your-org.auth0.com) |
Set the enterprise base URL:
```shell Terminal
crewai config set enterprise_base_url https://my-enterprise.crewai.com
```
Set OAuth2 provider:
```shell Terminal
crewai config set oauth2_provider auth0
```
Set OAuth2 domain:
```shell Terminal
crewai config set oauth2_domain my-company.auth0.com
```
Reset all configuration to defaults:
```shell Terminal
crewai config reset
```
<Tip>
After resetting configuration, re-run `crewai login` to authenticate again.
</Tip>
### 14. Trace Management
Manage trace collection preferences for your Crew and Flow executions.
```shell Terminal
crewai traces [COMMAND]
```
#### Commands:
- `enable`: Enable trace collection for crew/flow executions
```shell Terminal
crewai traces enable
```
- `disable`: Disable trace collection for crew/flow executions
```shell Terminal
crewai traces disable
```
- `status`: Show current trace collection status
```shell Terminal
crewai traces status
```
#### How Tracing Works
Trace collection is controlled by checking three settings in priority order:
1. **Explicit flag in code** (highest priority - can enable OR disable):
```python
crew = Crew(agents=[...], tasks=[...], tracing=True) # Always enable
crew = Crew(agents=[...], tasks=[...], tracing=False) # Always disable
crew = Crew(agents=[...], tasks=[...]) # Check lower priorities (default)
```
- `tracing=True` will **always enable** tracing (overrides everything)
- `tracing=False` will **always disable** tracing (overrides everything)
- `tracing=None` or omitted will check lower priority settings
2. **Environment variable** (second priority):
```env
CREWAI_TRACING_ENABLED=true
```
- Checked only if `tracing` is not explicitly set to `True` or `False` in code
- Set to `true` or `1` to enable tracing
3. **User preference** (lowest priority):
```shell Terminal
crewai traces enable
```
- Checked only if `tracing` is not set in code and `CREWAI_TRACING_ENABLED` is not set to `true`
- Running `crewai traces enable` is sufficient to enable tracing by itself
<Note>
**To enable tracing**, use any one of these methods:
- Set `tracing=True` in your Crew/Flow code, OR
- Add `CREWAI_TRACING_ENABLED=true` to your `.env` file, OR
- Run `crewai traces enable`
**To disable tracing**, use any ONE of these methods:
- Set `tracing=False` in your Crew/Flow code (overrides everything), OR
- Remove or set to `false` the `CREWAI_TRACING_ENABLED` env var, OR
- Run `crewai traces disable`
Higher priority settings override lower ones.
</Note>
<Tip>
For more information about tracing, see the [Tracing
documentation](/observability/tracing).
</Tip>
<Tip>
CrewAI CLI handles authentication to the Tool Repository automatically when
adding packages to your project. Just append `crewai` before any `uv` command
to use it. E.g. `crewai uv add requests`. For more information, see [Tool
Repository](https://docs.crewai.com/enterprise/features/tool-repository) docs.
After resetting configuration, re-run `crewai login` to authenticate again.
</Tip>
<Note>
Configuration settings are stored in `~/.config/crewai/settings.json`. Some
settings like organization name and UUID are read-only and managed through
authentication and organization commands. Tool repository related settings are
hidden and cannot be set directly by users.
Configuration settings are stored in `~/.config/crewai/settings.json`. Some settings like organization name and UUID are read-only and managed through authentication and organization commands. Tool repository related settings are hidden and cannot be set directly by users.
</Note>

View File

@@ -33,7 +33,6 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | Knowledge sources available at the crew level, accessible to all the agents. |
| **Stream** _(optional)_ | `stream` | Enable streaming output to receive real-time updates during crew execution. Returns a `CrewStreamingOutput` object that can be iterated for chunks. Defaults to `False`. |
<Tip>
**Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
@@ -307,27 +306,12 @@ print(result)
### Different Ways to Kick Off a Crew
Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process.
#### Synchronous Methods
Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.
- `kickoff()`: Starts the execution process according to the defined process flow.
- `kickoff_for_each()`: Executes tasks sequentially for each provided input event or item in the collection.
#### Asynchronous Methods
CrewAI offers two approaches for async execution:
| Method | Type | Description |
|--------|------|-------------|
| `akickoff()` | Native async | True async/await throughout the entire execution chain |
| `akickoff_for_each()` | Native async | Native async execution for each input in a list |
| `kickoff_async()` | Thread-based | Wraps synchronous execution in `asyncio.to_thread` |
| `kickoff_for_each_async()` | Thread-based | Thread-based async for each input in a list |
<Note>
For high-concurrency workloads, `akickoff()` and `akickoff_for_each()` are recommended as they use native async for task execution, memory operations, and knowledge retrieval.
</Note>
- `kickoff_async()`: Initiates the workflow asynchronously.
- `kickoff_for_each_async()`: Executes tasks concurrently for each provided input event or item, leveraging asynchronous processing.
```python Code
# Start the crew's task execution
@@ -340,53 +324,19 @@ results = my_crew.kickoff_for_each(inputs=inputs_array)
for result in results:
print(result)
# Example of using native async with akickoff
inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.akickoff(inputs=inputs)
print(async_result)
# Example of using native async with akickoff_for_each
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.akickoff_for_each(inputs=inputs_array)
for async_result in async_results:
print(async_result)
# Example of using thread-based kickoff_async
# Example of using kickoff_async
inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.kickoff_async(inputs=inputs)
print(async_result)
# Example of using thread-based kickoff_for_each_async
# Example of using kickoff_for_each_async
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
for async_result in async_results:
print(async_result)
```
These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs. For detailed async examples, see the [Kickoff Crew Asynchronously](/en/learn/kickoff-async) guide.
### Streaming Crew Execution
For real-time visibility into crew execution, you can enable streaming to receive output as it's generated:
```python Code
# Enable streaming
crew = Crew(
agents=[researcher],
tasks=[task],
stream=True
)
# Iterate over streaming output
streaming = crew.kickoff(inputs={"topic": "AI"})
for chunk in streaming:
print(chunk.content, end="", flush=True)
# Access final result
result = streaming.result
```
Learn more about streaming in the [Streaming Crew Execution](/en/learn/streaming-crew-execution) guide.
These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs.
### Replaying from a Specific Task

View File

@@ -1,6 +1,6 @@
---
title: "Event Listeners"
description: "Tap into CrewAI events to build custom integrations and monitoring"
title: 'Event Listeners'
description: 'Tap into CrewAI events to build custom integrations and monitoring'
icon: spinner
mode: "wide"
---
@@ -20,12 +20,11 @@ CrewAI uses an event bus architecture to emit events throughout the execution li
When specific actions occur in CrewAI (like a Crew starting execution, an Agent completing a task, or a tool being used), the system emits corresponding events. You can register handlers for these events to execute custom code when they occur.
<Note type="info" title="Enterprise Enhancement: Prompt Tracing">
CrewAI AMP provides a built-in Prompt Tracing feature that leverages the event system to track, store, and visualize all prompts, completions, and associated metadata. This provides powerful debugging capabilities and transparency into your agent operations.
CrewAI Enterprise provides a built-in Prompt Tracing feature that leverages the event system to track, store, and visualize all prompts, completions, and associated metadata. This provides powerful debugging capabilities and transparency into your agent operations.
![Prompt Tracing Dashboard](/images/enterprise/traces-overview.png)
With Prompt Tracing you can:
- View the complete history of all prompts sent to your LLM
- Track token usage and costs
- Debug agent reasoning failures
@@ -275,6 +274,7 @@ The structure of the event object depends on the event type, but all events inhe
Additional fields vary by event type. For example, `CrewKickoffCompletedEvent` includes `crew_name` and `output` fields.
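For example, a minimal listener reading those fields might look like the sketch below (the `crewai.utilities.events` import path is assumed here and may differ across versions):
```python Code
from crewai.utilities.events import CrewKickoffCompletedEvent
from crewai.utilities.events.base_event_listener import BaseEventListener

class KickoffLogger(BaseEventListener):
    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(CrewKickoffCompletedEvent)
        def on_crew_completed(source, event):
            # crew_name and output are fields specific to this event type
            print(f"Crew '{event.crew_name}' finished with output: {event.output}")

# Instantiate the listener so its handlers are registered with the event bus
kickoff_logger = KickoffLogger()
```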
## Advanced Usage: Scoped Handlers
For temporary event handling (useful for testing or specific operations), you can use the `scoped_handlers` context manager:
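A minimal sketch of that pattern, under the same import-path assumption as above:
```python Code
from crewai.utilities.events import crewai_event_bus, CrewKickoffStartedEvent

with crewai_event_bus.scoped_handlers():
    @crewai_event_bus.on(CrewKickoffStartedEvent)
    def temp_handler(source, event):
        print("Temporary handler: crew kickoff started")

    # Run crews here; the handler above is only active inside this block.

# On exit, handlers registered inside the block are removed.
```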

Some files were not shown because too many files have changed in this diff.