Compare commits


4 Commits

2409 changed files with 95026 additions and 259957 deletions

[Binary files not shown: roughly 60 changed binary image files; the newly added images range from about 17 KiB to 55 KiB.]

.env.test (deleted, 161 lines)

@@ -1,161 +0,0 @@
# =============================================================================
# Test Environment Variables
# =============================================================================
# This file contains all environment variables needed to run tests locally
# in a way that mimics the GitHub Actions CI environment.
# =============================================================================
# -----------------------------------------------------------------------------
# LLM Provider API Keys
# -----------------------------------------------------------------------------
OPENAI_API_KEY=fake-api-key
ANTHROPIC_API_KEY=fake-anthropic-key
GEMINI_API_KEY=fake-gemini-key
AZURE_API_KEY=fake-azure-key
OPENROUTER_API_KEY=fake-openrouter-key
# -----------------------------------------------------------------------------
# AWS Credentials
# -----------------------------------------------------------------------------
AWS_ACCESS_KEY_ID=fake-aws-access-key
AWS_SECRET_ACCESS_KEY=fake-aws-secret-key
AWS_DEFAULT_REGION=us-east-1
AWS_REGION_NAME=us-east-1
# -----------------------------------------------------------------------------
# Azure OpenAI Configuration
# -----------------------------------------------------------------------------
AZURE_ENDPOINT=https://fake-azure-endpoint.openai.azure.com
AZURE_OPENAI_ENDPOINT=https://fake-azure-endpoint.openai.azure.com
AZURE_OPENAI_API_KEY=fake-azure-openai-key
AZURE_API_VERSION=2024-02-15-preview
OPENAI_API_VERSION=2024-02-15-preview
# -----------------------------------------------------------------------------
# Google Cloud Configuration
# -----------------------------------------------------------------------------
#GOOGLE_CLOUD_PROJECT=fake-gcp-project
#GOOGLE_CLOUD_LOCATION=us-central1
# -----------------------------------------------------------------------------
# OpenAI Configuration
# -----------------------------------------------------------------------------
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_API_BASE=https://api.openai.com/v1
# -----------------------------------------------------------------------------
# Search & Scraping Tool API Keys
# -----------------------------------------------------------------------------
SERPER_API_KEY=fake-serper-key
EXA_API_KEY=fake-exa-key
BRAVE_API_KEY=fake-brave-key
FIRECRAWL_API_KEY=fake-firecrawl-key
TAVILY_API_KEY=fake-tavily-key
SERPAPI_API_KEY=fake-serpapi-key
SERPLY_API_KEY=fake-serply-key
LINKUP_API_KEY=fake-linkup-key
PARALLEL_API_KEY=fake-parallel-key
# -----------------------------------------------------------------------------
# Exa Configuration
# -----------------------------------------------------------------------------
EXA_BASE_URL=https://api.exa.ai
# -----------------------------------------------------------------------------
# Web Scraping & Automation
# -----------------------------------------------------------------------------
BRIGHT_DATA_API_KEY=fake-brightdata-key
BRIGHT_DATA_ZONE=fake-zone
BRIGHTDATA_API_URL=https://api.brightdata.com
BRIGHTDATA_DEFAULT_TIMEOUT=600
BRIGHTDATA_DEFAULT_POLLING_INTERVAL=1
OXYLABS_USERNAME=fake-oxylabs-user
OXYLABS_PASSWORD=fake-oxylabs-pass
SCRAPFLY_API_KEY=fake-scrapfly-key
SCRAPEGRAPH_API_KEY=fake-scrapegraph-key
BROWSERBASE_API_KEY=fake-browserbase-key
BROWSERBASE_PROJECT_ID=fake-browserbase-project
HYPERBROWSER_API_KEY=fake-hyperbrowser-key
MULTION_API_KEY=fake-multion-key
APIFY_API_TOKEN=fake-apify-token
# -----------------------------------------------------------------------------
# Database & Vector Store Credentials
# -----------------------------------------------------------------------------
SINGLESTOREDB_URL=mysql://fake:fake@localhost:3306/fake
SINGLESTOREDB_HOST=localhost
SINGLESTOREDB_PORT=3306
SINGLESTOREDB_USER=fake-user
SINGLESTOREDB_PASSWORD=fake-password
SINGLESTOREDB_DATABASE=fake-database
SINGLESTOREDB_CONNECT_TIMEOUT=30
SNOWFLAKE_USER=fake-snowflake-user
SNOWFLAKE_PASSWORD=fake-snowflake-password
SNOWFLAKE_ACCOUNT=fake-snowflake-account
SNOWFLAKE_WAREHOUSE=fake-snowflake-warehouse
SNOWFLAKE_DATABASE=fake-snowflake-database
SNOWFLAKE_SCHEMA=fake-snowflake-schema
WEAVIATE_URL=http://localhost:8080
WEAVIATE_API_KEY=fake-weaviate-key
EMBEDCHAIN_DB_URI=sqlite:///test.db
# Databricks Credentials
DATABRICKS_HOST=https://fake-databricks.cloud.databricks.com
DATABRICKS_TOKEN=fake-databricks-token
DATABRICKS_CONFIG_PROFILE=fake-profile
# MongoDB Credentials
MONGODB_URI=mongodb://fake:fake@localhost:27017/fake
# -----------------------------------------------------------------------------
# CrewAI Platform & Enterprise
# -----------------------------------------------------------------------------
# setting CREWAI_PLATFORM_INTEGRATION_TOKEN causes these test to fail:
#=========================== short test summary info ============================
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_platform_context_manager_basic_usage - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_context_var_isolation_between_tests - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_multiple_sequential_context_managers - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#CREWAI_PLATFORM_INTEGRATION_TOKEN=fake-platform-token
CREWAI_PERSONAL_ACCESS_TOKEN=fake-personal-token
CREWAI_PLUS_URL=https://fake.crewai.com
# -----------------------------------------------------------------------------
# Other Service API Keys
# -----------------------------------------------------------------------------
ZAPIER_API_KEY=fake-zapier-key
PATRONUS_API_KEY=fake-patronus-key
MINDS_API_KEY=fake-minds-key
HF_TOKEN=fake-hf-token
# -----------------------------------------------------------------------------
# Feature Flags/Testing Modes
# -----------------------------------------------------------------------------
CREWAI_DISABLE_TELEMETRY=true
OTEL_SDK_DISABLED=true
CREWAI_TESTING=true
CREWAI_TRACING_ENABLED=false
# -----------------------------------------------------------------------------
# Testing/CI Configuration
# -----------------------------------------------------------------------------
# VCR recording mode: "none" (default), "new_episodes", "all", "once"
PYTEST_VCR_RECORD_MODE=none
# Set to "true" by GitHub when running in GitHub Actions
# GITHUB_ACTIONS=false
# -----------------------------------------------------------------------------
# Python Configuration
# -----------------------------------------------------------------------------
PYTHONUNBUFFERED=1


@@ -1,28 +0,0 @@
name: "CodeQL Config"
paths-ignore:
# Ignore template files - these are boilerplate code that shouldn't be analyzed
- "lib/crewai/src/crewai/cli/templates/**"
# Ignore test cassettes - these are test fixtures/recordings
- "lib/crewai/tests/cassettes/**"
- "lib/crewai-tools/tests/cassettes/**"
# Ignore cache and build artifacts
- ".cache/**"
# Ignore documentation build artifacts
- "docs/.cache/**"
# Ignore experimental code
- "lib/crewai/src/crewai/experimental/a2a/**"
paths:
# Include all Python source code from workspace packages
- "lib/crewai/src/**"
- "lib/crewai-tools/src/**"
- "lib/devtools/src/**"
# Include tests (but exclude cassettes via paths-ignore)
- "lib/crewai/tests/**"
- "lib/crewai-tools/tests/**"
- "lib/devtools/tests/**"
# Configure specific queries or packs if needed
# queries:
# - uses: security-and-quality


@@ -1,11 +0,0 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
version: 2
updates:
- package-ecosystem: uv # See documentation for possible values
directory: "/" # Location of package manifests
schedule:
interval: "weekly"

.github/security.md (63 lines changed)

@@ -1,50 +1,27 @@
- ## CrewAI Security Policy
+ ## CrewAI Security Vulnerability Reporting Policy
- We are committed to protecting the confidentiality, integrity, and availability of the CrewAI ecosystem. This policy explains how to report potential vulnerabilities and what you can expect from us when you do.
+ CrewAI prioritizes the security of our software products, services, and GitHub repositories. To promptly address vulnerabilities, follow these steps for reporting security issues:
- ### Scope
+ ### Reporting Process
+ Do **not** report vulnerabilities via public GitHub issues.
- We welcome reports for vulnerabilities that could impact:
+ Email all vulnerability reports directly to:
+ **security@crewai.com**
- - CrewAI-maintained source code and repositories
+ ### Required Information
- - CrewAI-operated infrastructure and services
+ To help us quickly validate and remediate the issue, your report must include:
- - Official CrewAI releases, packages, and distributions
- Issues affecting clearly unaffiliated third-party services or user-generated content are out of scope, unless you can demonstrate a direct impact on CrewAI systems or customers.
+ - **Vulnerability Type:** Clearly state the vulnerability type (e.g., SQL injection, XSS, privilege escalation).
+ - **Affected Source Code:** Provide full file paths and direct URLs (branch, tag, or commit).
+ - **Reproduction Steps:** Include detailed, step-by-step instructions. Screenshots are recommended.
+ - **Special Configuration:** Document any special settings or configurations required to reproduce.
+ - **Proof-of-Concept (PoC):** Provide exploit or PoC code (if available).
+ - **Impact Assessment:** Clearly explain the severity and potential exploitation scenarios.
- ### How to Report
+ ### Our Response
+ - We will acknowledge receipt of your report promptly via your provided email.
+ - Confirmed vulnerabilities will receive priority remediation based on severity.
+ - Patches will be released as swiftly as possible following verification.
- - **Please do not** disclose vulnerabilities via public GitHub issues, pull requests, or social media.
+ ### Reward Notice
- - Email detailed reports to **security@crewai.com** with the subject line `Security Report`.
+ Currently, we do not offer a bug bounty program. Rewards, if issued, are discretionary.
- - If you need to share large files or sensitive artifacts, mention it in your email and we will coordinate a secure transfer method.
- ### What to Include
- Providing comprehensive information enables us to validate the issue quickly:
- - **Vulnerability overview** — a concise description and classification (e.g., RCE, privilege escalation)
- - **Affected components** — repository, branch, tag, or deployed service along with relevant file paths or endpoints
- - **Reproduction steps** — detailed, step-by-step instructions; include logs, screenshots, or screen recordings when helpful
- - **Proof-of-concept** — exploit details or code that demonstrates the impact (if available)
- - **Impact analysis** — severity assessment, potential exploitation scenarios, and any prerequisites or special configurations
- ### Our Commitment
- - **Acknowledgement:** We aim to acknowledge your report within two business days.
- - **Communication:** We will keep you informed about triage results, remediation progress, and planned release timelines.
- - **Resolution:** Confirmed vulnerabilities will be prioritized based on severity and fixed as quickly as possible.
- - **Recognition:** We currently do not run a bug bounty program; any rewards or recognition are issued at CrewAI's discretion.
- ### Coordinated Disclosure
- We ask that you allow us a reasonable window to investigate and remediate confirmed issues before any public disclosure. We will coordinate publication timelines with you whenever possible.
- ### Safe Harbor
- We will not pursue or support legal action against individuals who, in good faith:
- - Follow this policy and refrain from violating any applicable laws
- - Avoid privacy violations, data destruction, or service disruption
- - Limit testing to systems in scope and respect rate limits and terms of service
- If you are unsure whether your testing is covered, please contact us at **security@crewai.com** before proceeding.


@@ -1,48 +0,0 @@
name: Build uv cache
on:
push:
branches:
- main
paths:
- "uv.lock"
- "pyproject.toml"
schedule:
- cron: "0 0 */5 * *" # Run every 5 days at midnight UTC to prevent cache expiration
workflow_dispatch:
permissions:
contents: read
jobs:
build-cache:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.10", "3.11", "3.12", "3.13"]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
- name: Install dependencies and populate cache
run: |
echo "Building global UV cache for Python ${{ matrix.python-version }}..."
uv sync --all-groups --all-extras --no-install-project
echo "Cache populated successfully"
- name: Save uv caches
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}


@@ -1,103 +0,0 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL Advanced"
on:
push:
branches: [ "main" ]
paths-ignore:
- "lib/crewai/src/crewai/cli/templates/**"
pull_request:
branches: [ "main" ]
paths-ignore:
- "lib/crewai/src/crewai/cli/templates/**"
jobs:
analyze:
name: Analyze (${{ matrix.language }})
# Runner size impacts CodeQL analysis time. To learn more, please see:
# - https://gh.io/recommended-hardware-resources-for-running-codeql
# - https://gh.io/supported-runners-and-hardware-resources
# - https://gh.io/using-larger-runners (GitHub.com only)
# Consider using larger runners or machines with greater resources for possible analysis time improvements.
runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
permissions:
# required for all workflows
security-events: write
# required to fetch internal or private CodeQL packs
packages: read
# only required for workflows in private repositories
actions: read
contents: read
strategy:
fail-fast: false
matrix:
include:
- language: actions
build-mode: none
- language: python
build-mode: none
# CodeQL supports the following values keywords for 'language': 'actions', 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'rust', 'swift'
# Use `c-cpp` to analyze code written in C, C++ or both
# Use 'java-kotlin' to analyze code written in Java, Kotlin or both
# Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
# To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
# see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
# If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
# your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
- name: Checkout repository
uses: actions/checkout@v4
# Add any setup steps before running the `github/codeql-action/init` action.
# This includes steps like installing compilers or runtimes (`actions/setup-node`
# or others). This is typically only required for manual builds.
# - name: Setup runtime (example)
# uses: actions/setup-example@v1
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
config-file: ./.github/codeql/codeql-config.yml
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# If the analyze step fails for one of the languages you are analyzing with
# "We were unable to automatically build your code", modify the matrix above
# to set the build mode to "manual" for that language. Then modify this step
# to build your code.
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
- if: matrix.build-mode == 'manual'
shell: bash
run: |
echo 'If you are using a "manual" build mode for one or more of the' \
'languages you are analyzing, replace this with the commands to build' \
'your code, for example:'
echo ' make bootstrap'
echo ' make release'
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:${{matrix.language}}"


@@ -1,35 +0,0 @@
name: Check Documentation Broken Links
on:
pull_request:
paths:
- "docs/**"
- "docs.json"
push:
branches:
- main
paths:
- "docs/**"
- "docs.json"
workflow_dispatch:
jobs:
check-links:
name: Check broken links
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Node
uses: actions/setup-node@v4
with:
node-version: "latest"
- name: Install Mintlify CLI
run: npm i -g mintlify
- name: Run broken link checker
run: |
# Auto-answer the prompt with yes command
yes "" | mintlify broken-links || test $? -eq 141
working-directory: ./docs


@@ -2,9 +2,6 @@ name: Lint
  on: [pull_request]
- permissions:
- contents: read
  jobs:
  lint:
  runs-on: ubuntu-latest
@@ -18,27 +15,8 @@ jobs:
  - name: Fetch Target Branch
  run: git fetch origin $TARGET_BRANCH --depth=1
- - name: Restore global uv cache
+ - name: Install Ruff
- id: cache-restore
+ run: pip install ruff
- uses: actions/cache/restore@v4
- with:
- path: |
- ~/.cache/uv
- ~/.local/share/uv
- .venv
- key: uv-main-py3.11-${{ hashFiles('uv.lock') }}
- restore-keys: |
- uv-main-py3.11-
- - name: Install uv
- uses: astral-sh/setup-uv@v6
- with:
- version: "0.8.4"
- python-version: "3.11"
- enable-cache: false
- - name: Install dependencies
- run: uv sync --all-groups --all-extras --no-install-project
  - name: Get Changed Python Files
  id: changed-files
@@ -55,15 +33,4 @@ jobs:
  echo "${{ steps.changed-files.outputs.files }}" \
  | tr ' ' '\n' \
  | grep -v 'src/crewai/cli/templates/' \
- | grep -v '/tests/' \
+ | xargs -I{} ruff check "{}"
- | xargs -I{} uv run ruff check "{}"
- - name: Save uv caches
- if: steps.cache-restore.outputs.cache-hit != 'true'
- uses: actions/cache/save@v4
- with:
- path: |
- ~/.cache/uv
- ~/.local/share/uv
- .venv
- key: uv-main-py3.11-${{ hashFiles('uv.lock') }}


@@ -1,81 +0,0 @@
name: Publish to PyPI
on:
release:
types: [ published ]
workflow_dispatch:
jobs:
build:
name: Build packages
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install uv
uses: astral-sh/setup-uv@v4
- name: Build packages
run: |
uv build --all-packages
rm dist/.gitignore
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: dist
path: dist/
publish:
name: Publish to PyPI
needs: build
runs-on: ubuntu-latest
environment:
name: pypi
url: https://pypi.org/p/crewai
permissions:
id-token: write
contents: read
steps:
- uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: "3.12"
enable-cache: false
- name: Download artifacts
uses: actions/download-artifact@v4
with:
name: dist
path: dist
- name: Publish to PyPI
env:
UV_PUBLISH_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
run: |
failed=0
for package in dist/*; do
if [[ "$package" == *"crewai_devtools"* ]]; then
echo "Skipping private package: $package"
continue
fi
echo "Publishing $package"
if ! uv publish "$package"; then
echo "Failed to publish $package"
failed=1
fi
done
if [ $failed -eq 1 ]; then
echo "Some packages failed to publish"
exit 1
fi

.github/workflows/security-checker.yml (new file, 23 lines)

@@ -0,0 +1,23 @@
name: Security Checker
on: [pull_request]
jobs:
security-check:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11.9"
- name: Install dependencies
run: pip install bandit
- name: Run Bandit
run: bandit -c pyproject.toml -r src/ -ll


@@ -3,7 +3,11 @@ name: Run Tests
  on: [pull_request]
  permissions:
- contents: read
+ contents: write
+ env:
+ OPENAI_API_KEY: fake-api-key
+ PYTHONUNBUFFERED: 1
  jobs:
  tests:
@@ -18,83 +22,29 @@ jobs:
  steps:
  - name: Checkout code
  uses: actions/checkout@v4
- with:
- fetch-depth: 0 # Fetch all history for proper diff
- - name: Restore global uv cache
- id: cache-restore
- uses: actions/cache/restore@v4
- with:
- path: |
- ~/.cache/uv
- ~/.local/share/uv
- .venv
- key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
- restore-keys: |
- uv-main-py${{ matrix.python-version }}-
  - name: Install uv
- uses: astral-sh/setup-uv@v6
+ uses: astral-sh/setup-uv@v3
  with:
- version: "0.8.4"
+ enable-cache: true
- python-version: ${{ matrix.python-version }}
+ cache-dependency-glob: |
- enable-cache: false
+ **/pyproject.toml
+ **/uv.lock
+ - name: Set up Python ${{ matrix.python-version }}
+ run: uv python install ${{ matrix.python-version }}
  - name: Install the project
- run: uv sync --all-groups --all-extras
+ run: uv sync --dev --all-extras
- - name: Restore test durations
- uses: actions/cache/restore@v4
- with:
- path: .test_durations_py*
- key: test-durations-py${{ matrix.python-version }}
  - name: Run tests (group ${{ matrix.group }} of 8)
  run: |
- PYTHON_VERSION_SAFE=$(echo "${{ matrix.python-version }}" | tr '.' '_')
+ uv run pytest \
- DURATION_FILE="../../.test_durations_py${PYTHON_VERSION_SAFE}"
+ --block-network \
+ --timeout=30 \
- # Temporarily always skip cached durations to fix test splitting
- # When durations don't match, pytest-split runs duplicate tests instead of splitting
- echo "Using even test splitting (duration cache disabled until fix merged)"
- DURATIONS_ARG=""
- # Original logic (disabled temporarily):
- # if [ ! -f "$DURATION_FILE" ]; then
- # echo "No cached durations found, tests will be split evenly"
- # DURATIONS_ARG=""
- # elif git diff origin/${{ github.base_ref }}...HEAD --name-only 2>/dev/null | grep -q "^tests/.*\.py$"; then
- # echo "Test files have changed, skipping cached durations to avoid mismatches"
- # DURATIONS_ARG=""
- # else
- # echo "No test changes detected, using cached test durations for optimal splitting"
- # DURATIONS_ARG="--durations-path=${DURATION_FILE}"
- # fi
- cd lib/crewai && uv run pytest \
- -vv \
- --splits 8 \
- --group ${{ matrix.group }} \
- $DURATIONS_ARG \
- --durations=10 \
- --maxfail=3
- - name: Run tool tests (group ${{ matrix.group }} of 8)
- run: |
- cd lib/crewai-tools && uv run pytest \
  -vv \
  --splits 8 \
  --group ${{ matrix.group }} \
  --durations=10 \
+ -n auto \
  --maxfail=3
- - name: Save uv caches
- if: steps.cache-restore.outputs.cache-hit != 'true'
- uses: actions/cache/save@v4
- with:
- path: |
- ~/.cache/uv
- ~/.local/share/uv
- .venv
- key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}


@@ -3,99 +3,24 @@ name: Run Type Checks
  on: [pull_request]
  permissions:
- contents: read
+ contents: write
  jobs:
- type-checker-matrix:
+ type-checker:
- name: type-checker (${{ matrix.python-version }})
  runs-on: ubuntu-latest
- strategy:
- fail-fast: false
- matrix:
- python-version: ["3.10", "3.11", "3.12", "3.13"]
  steps:
  - name: Checkout code
  uses: actions/checkout@v4
+ - name: Setup Python
+ uses: actions/setup-python@v5
  with:
- fetch-depth: 0 # Fetch all history for proper diff
+ python-version: "3.11.9"
- - name: Restore global uv cache
+ - name: Install Requirements
- id: cache-restore
- uses: actions/cache/restore@v4
- with:
- path: |
- ~/.cache/uv
- ~/.local/share/uv
- .venv
- key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
- restore-keys: |
- uv-main-py${{ matrix.python-version }}-
- - name: Install uv
- uses: astral-sh/setup-uv@v6
- with:
- version: "0.8.4"
- python-version: ${{ matrix.python-version }}
- enable-cache: false
- - name: Install dependencies
- run: uv sync --all-groups --all-extras
- - name: Get changed Python files
- id: changed-files
  run: |
- # Get the list of changed Python files compared to the base branch
+ pip install mypy
- echo "Fetching changed files..."
- git diff --name-only --diff-filter=ACMRT origin/${{ github.base_ref }}...HEAD -- '*.py' > changed_files.txt
- # Filter for files in src/ directory only (excluding tests/)
+ - name: Run type checks
- grep -E "^src/" changed_files.txt > filtered_changed_files.txt || true
+ run: mypy src
- # Check if there are any changed files
- if [ -s filtered_changed_files.txt ]; then
- echo "Changed Python files in src/:"
- cat filtered_changed_files.txt
- echo "has_changes=true" >> $GITHUB_OUTPUT
- # Convert newlines to spaces for mypy command
- echo "files=$(cat filtered_changed_files.txt | tr '\n' ' ')" >> $GITHUB_OUTPUT
- else
- echo "No Python files changed in src/"
- echo "has_changes=false" >> $GITHUB_OUTPUT
- fi
- - name: Run type checks on changed files
- if: steps.changed-files.outputs.has_changes == 'true'
- run: |
- echo "Running mypy on changed files with Python ${{ matrix.python-version }}..."
- uv run mypy ${{ steps.changed-files.outputs.files }}
- - name: No files to check
- if: steps.changed-files.outputs.has_changes == 'false'
- run: echo "No Python files in src/ were modified - skipping type checks"
- - name: Save uv caches
- if: steps.cache-restore.outputs.cache-hit != 'true'
- uses: actions/cache/save@v4
- with:
- path: |
- ~/.cache/uv
- ~/.local/share/uv
- .venv
- key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
- # Summary job to provide single status for branch protection
- type-checker:
- name: type-checker
- runs-on: ubuntu-latest
- needs: type-checker-matrix
- if: always()
- steps:
- - name: Check matrix results
- run: |
- if [ "${{ needs.type-checker-matrix.result }}" == "success" ] || [ "${{ needs.type-checker-matrix.result }}" == "skipped" ]; then
- echo "✅ All type checks passed"
- else
- echo "❌ Type checks failed"
- exit 1
- fi


@@ -1,71 +0,0 @@
name: Update Test Durations
on:
push:
branches:
- main
paths:
- 'tests/**/*.py'
workflow_dispatch:
permissions:
contents: read
jobs:
update-durations:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
env:
OPENAI_API_KEY: fake-api-key
PYTHONUNBUFFERED: 1
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Restore global uv cache
id: cache-restore
uses: actions/cache/restore@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
restore-keys: |
uv-main-py${{ matrix.python-version }}-
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
- name: Install the project
run: uv sync --all-groups --all-extras
- name: Run all tests and store durations
run: |
PYTHON_VERSION_SAFE=$(echo "${{ matrix.python-version }}" | tr '.' '_')
uv run pytest --store-durations --durations-path=.test_durations_py${PYTHON_VERSION_SAFE} -n auto
continue-on-error: true
- name: Save durations to cache
if: always()
uses: actions/cache/save@v4
with:
path: .test_durations_py*
key: test-durations-py${{ matrix.python-version }}
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}

.gitignore (1 line changed)

@@ -2,6 +2,7 @@
  .pytest_cache
  __pycache__
  dist/
+ lib/
  .env
  assets/*
  .idea


@@ -1,27 +1,7 @@
  repos:
- - repo: local
+ - repo: https://github.com/astral-sh/ruff-pre-commit
+ rev: v0.8.2
  hooks:
  - id: ruff
- name: ruff
+ args: ["--fix"]
- entry: bash -c 'source .venv/bin/activate && uv run ruff check --config pyproject.toml "$@"' --
- language: system
- pass_filenames: true
- types: [python]
  - id: ruff-format
- name: ruff-format
- entry: bash -c 'source .venv/bin/activate && uv run ruff format --config pyproject.toml "$@"' --
- language: system
- pass_filenames: true
- types: [python]
- - id: mypy
- name: mypy
- entry: bash -c 'source .venv/bin/activate && uv run mypy --config-file pyproject.toml "$@"' --
- language: system
- pass_filenames: true
- types: [python]
- exclude: ^(lib/crewai/src/crewai/cli/templates/|lib/crewai/tests/|lib/crewai-tools/tests/)
- - repo: https://github.com/astral-sh/uv-pre-commit
- rev: 0.9.3
- hooks:
- - id: uv-lock

.ruff.toml (new file, 4 lines)

@@ -0,0 +1,4 @@
exclude = [
"templates",
"__init__.py",
]


@@ -62,9 +62,9 @@
  With over 100,000 developers certified through our community courses at [learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
  standard for enterprise-ready AI automation.
- # CrewAI AOP Suite
+ # CrewAI Enterprise Suite
- CrewAI AOP Suite is a comprehensive bundle tailored for organizations that require secure, scalable, and easy-to-manage agent-driven automation.
+ CrewAI Enterprise Suite is a comprehensive bundle tailored for organizations that require secure, scalable, and easy-to-manage agent-driven automation.
  You can try one part of the suite the [Crew Control Plane for free](https://app.crewai.com)
@@ -76,9 +76,9 @@ You can try one part of the suite the [Crew Control Plane for free](https://app.
  - **Advanced Security**: Built-in robust security and compliance measures ensuring safe deployment and management.
  - **Actionable Insights**: Real-time analytics and reporting to optimize performance and decision-making.
  - **24/7 Support**: Dedicated enterprise support to ensure uninterrupted operation and quick resolution of issues.
- - **On-premise and Cloud Deployment Options**: Deploy CrewAI AOP on-premise or in the cloud, depending on your security and compliance requirements.
+ - **On-premise and Cloud Deployment Options**: Deploy CrewAI Enterprise on-premise or in the cloud, depending on your security and compliance requirements.
- CrewAI AOP is designed for enterprises seeking a powerful, reliable solution to transform complex business processes into efficient,
+ CrewAI Enterprise is designed for enterprises seeking a powerful, reliable solution to transform complex business processes into efficient,
  intelligent automations.
  ## Table of contents
@@ -418,10 +418,10 @@ Choose CrewAI to easily build powerful, adaptable, and production-ready AI autom
  You can test different real life examples of AI crews in the [CrewAI-examples repo](https://github.com/crewAIInc/crewAI-examples?tab=readme-ov-file):
- - [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/landing_page_generator)
+ - [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/landing_page_generator)
  - [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution)
- - [Trip Planner](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/trip_planner)
+ - [Trip Planner](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner)
- - [Stock Analysis](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/stock_analysis)
+ - [Stock Analysis](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis)
  ### Quick Tutorial
@@ -429,19 +429,19 @@ You can test different real life examples of AI crews in the [CrewAI-examples re
  ### Write Job Descriptions
- [Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/job-posting) or watch a video below:
+ [Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/job-posting) or watch a video below:
  [![Jobs postings](https://img.youtube.com/vi/u98wEMz-9to/maxresdefault.jpg)](https://www.youtube.com/watch?v=u98wEMz-9to "Jobs postings")
  ### Trip Planner
- [Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/trip_planner) or watch a video below:
+ [Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner) or watch a video below:
  [![Trip Planner](https://img.youtube.com/vi/xis7rWp-hjs/maxresdefault.jpg)](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner")
  ### Stock Analysis
- [Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/stock_analysis) or watch a video below:
+ [Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis) or watch a video below:
  [![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/maxresdefault.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")
@@ -674,9 +674,9 @@ CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/b
  ### Enterprise Features
- - [What additional features does CrewAI AOP offer?](#q-what-additional-features-does-crewai-amp-offer)
+ - [What additional features does CrewAI Enterprise offer?](#q-what-additional-features-does-crewai-enterprise-offer)
- - [Is CrewAI AOP available for cloud and on-premise deployments?](#q-is-crewai-amp-available-for-cloud-and-on-premise-deployments)
+ - [Is CrewAI Enterprise available for cloud and on-premise deployments?](#q-is-crewai-enterprise-available-for-cloud-and-on-premise-deployments)
- - [Can I try CrewAI AOP for free?](#q-can-i-try-crewai-amp-for-free)
+ - [Can I try CrewAI Enterprise for free?](#q-can-i-try-crewai-enterprise-for-free)
  ### Q: What exactly is CrewAI?
@@ -732,17 +732,17 @@ A: Check out practical examples in the [CrewAI-examples repository](https://gith
  A: Contributions are warmly welcomed! Fork the repository, create your branch, implement your changes, and submit a pull request. See the Contribution section of the README for detailed guidelines.
- ### Q: What additional features does CrewAI AOP offer?
+ ### Q: What additional features does CrewAI Enterprise offer?
- A: CrewAI AOP provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.
+ A: CrewAI Enterprise provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.
- ### Q: Is CrewAI AOP available for cloud and on-premise deployments?
+ ### Q: Is CrewAI Enterprise available for cloud and on-premise deployments?
- A: Yes, CrewAI AOP supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.
+ A: Yes, CrewAI Enterprise supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.
- ### Q: Can I try CrewAI AOP for free?
+ ### Q: Can I try CrewAI Enterprise for free?
- A: Yes, you can explore part of the CrewAI AOP Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.
+ A: Yes, you can explore part of the CrewAI Enterprise Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.
  ### Q: Does CrewAI support fine-tuning or training custom models?
@@ -762,7 +762,7 @@ A: CrewAI is highly scalable, supporting simple automations and large-scale ente
  ### Q: Does CrewAI offer debugging and monitoring tools?
- A: Yes, CrewAI AOP includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.
+ A: Yes, CrewAI Enterprise includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.
  ### Q: What programming languages does CrewAI support?


@@ -1,197 +0,0 @@
"""Pytest configuration for crewAI workspace."""
from collections.abc import Generator
import os
from pathlib import Path
import tempfile
from typing import Any
from dotenv import load_dotenv
import pytest
from vcr.request import Request # type: ignore[import-untyped]
env_test_path = Path(__file__).parent / ".env.test"
load_dotenv(env_test_path, override=True)
load_dotenv(override=True)
@pytest.fixture(autouse=True, scope="function")
def cleanup_event_handlers() -> Generator[None, Any, None]:
"""Clean up event bus handlers after each test to prevent test pollution."""
yield
try:
from crewai.events.event_bus import crewai_event_bus
with crewai_event_bus._rwlock.w_locked():
crewai_event_bus._sync_handlers.clear()
crewai_event_bus._async_handlers.clear()
except Exception: # noqa: S110
pass
@pytest.fixture(autouse=True, scope="function")
def setup_test_environment() -> Generator[None, Any, None]:
"""Setup test environment for crewAI workspace."""
with tempfile.TemporaryDirectory() as temp_dir:
storage_dir = Path(temp_dir) / "crewai_test_storage"
storage_dir.mkdir(parents=True, exist_ok=True)
if not storage_dir.exists() or not storage_dir.is_dir():
raise RuntimeError(
f"Failed to create test storage directory: {storage_dir}"
)
try:
test_file = storage_dir / ".permissions_test"
test_file.touch()
test_file.unlink()
except (OSError, IOError) as e:
raise RuntimeError(
f"Test storage directory {storage_dir} is not writable: {e}"
) from e
os.environ["CREWAI_STORAGE_DIR"] = str(storage_dir)
os.environ["CREWAI_TESTING"] = "true"
try:
yield
finally:
os.environ.pop("CREWAI_TESTING", "true")
os.environ.pop("CREWAI_STORAGE_DIR", None)
os.environ.pop("CREWAI_DISABLE_TELEMETRY", "true")
os.environ.pop("OTEL_SDK_DISABLED", "true")
os.environ.pop("OPENAI_BASE_URL", "https://api.openai.com/v1")
os.environ.pop("OPENAI_API_BASE", "https://api.openai.com/v1")
HEADERS_TO_FILTER = {
"authorization": "AUTHORIZATION-XXX",
"content-security-policy": "CSP-FILTERED",
"cookie": "COOKIE-XXX",
"set-cookie": "SET-COOKIE-XXX",
"permissions-policy": "PERMISSIONS-POLICY-XXX",
"referrer-policy": "REFERRER-POLICY-XXX",
"strict-transport-security": "STS-XXX",
"x-content-type-options": "X-CONTENT-TYPE-XXX",
"x-frame-options": "X-FRAME-OPTIONS-XXX",
"x-permitted-cross-domain-policies": "X-PERMITTED-XXX",
"x-request-id": "X-REQUEST-ID-XXX",
"x-runtime": "X-RUNTIME-XXX",
"x-xss-protection": "X-XSS-PROTECTION-XXX",
"x-stainless-arch": "X-STAINLESS-ARCH-XXX",
"x-stainless-os": "X-STAINLESS-OS-XXX",
"x-stainless-read-timeout": "X-STAINLESS-READ-TIMEOUT-XXX",
"cf-ray": "CF-RAY-XXX",
"etag": "ETAG-XXX",
"Strict-Transport-Security": "STS-XXX",
"access-control-expose-headers": "ACCESS-CONTROL-XXX",
"openai-organization": "OPENAI-ORG-XXX",
"openai-project": "OPENAI-PROJECT-XXX",
"x-ratelimit-limit-requests": "X-RATELIMIT-LIMIT-REQUESTS-XXX",
"x-ratelimit-limit-tokens": "X-RATELIMIT-LIMIT-TOKENS-XXX",
"x-ratelimit-remaining-requests": "X-RATELIMIT-REMAINING-REQUESTS-XXX",
"x-ratelimit-remaining-tokens": "X-RATELIMIT-REMAINING-TOKENS-XXX",
"x-ratelimit-reset-requests": "X-RATELIMIT-RESET-REQUESTS-XXX",
"x-ratelimit-reset-tokens": "X-RATELIMIT-RESET-TOKENS-XXX",
"x-goog-api-key": "X-GOOG-API-KEY-XXX",
"api-key": "X-API-KEY-XXX",
"User-Agent": "X-USER-AGENT-XXX",
"apim-request-id:": "X-API-CLIENT-REQUEST-ID-XXX",
"azureml-model-session": "AZUREML-MODEL-SESSION-XXX",
"x-ms-client-request-id": "X-MS-CLIENT-REQUEST-ID-XXX",
"x-ms-region": "X-MS-REGION-XXX",
"apim-request-id": "APIM-REQUEST-ID-XXX",
"x-api-key": "X-API-KEY-XXX",
"anthropic-organization-id": "ANTHROPIC-ORGANIZATION-ID-XXX",
"request-id": "REQUEST-ID-XXX",
"anthropic-ratelimit-input-tokens-limit": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-LIMIT-XXX",
"anthropic-ratelimit-input-tokens-remaining": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-REMAINING-XXX",
"anthropic-ratelimit-input-tokens-reset": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-RESET-XXX",
"anthropic-ratelimit-output-tokens-limit": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-LIMIT-XXX",
"anthropic-ratelimit-output-tokens-remaining": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-REMAINING-XXX",
"anthropic-ratelimit-output-tokens-reset": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-RESET-XXX",
"anthropic-ratelimit-tokens-limit": "ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX",
"anthropic-ratelimit-tokens-remaining": "ANTHROPIC-RATELIMIT-TOKENS-REMAINING-XXX",
"anthropic-ratelimit-tokens-reset": "ANTHROPIC-RATELIMIT-TOKENS-RESET-XXX",
"x-amz-date": "X-AMZ-DATE-XXX",
"amz-sdk-invocation-id": "AMZ-SDK-INVOCATION-ID-XXX",
"accept-encoding": "ACCEPT-ENCODING-XXX",
"x-amzn-requestid": "X-AMZN-REQUESTID-XXX",
"x-amzn-RequestId": "X-AMZN-REQUESTID-XXX",
}
def _filter_request_headers(request: Request) -> Request: # type: ignore[no-any-unimported]
"""Filter sensitive headers from request before recording."""
for header_name, replacement in HEADERS_TO_FILTER.items():
for variant in [header_name, header_name.upper(), header_name.title()]:
if variant in request.headers:
request.headers[variant] = [replacement]
request.method = request.method.upper()
return request
def _filter_response_headers(response: dict[str, Any]) -> dict[str, Any]:
"""Filter sensitive headers from response before recording."""
# Remove Content-Encoding to prevent decompression issues on replay
for encoding_header in ["Content-Encoding", "content-encoding"]:
response["headers"].pop(encoding_header, None)
for header_name, replacement in HEADERS_TO_FILTER.items():
for variant in [header_name, header_name.upper(), header_name.title()]:
if variant in response["headers"]:
response["headers"][variant] = [replacement]
return response
@pytest.fixture(scope="module")
def vcr_cassette_dir(request: Any) -> str:
"""Generate cassette directory path based on test module location.
Organizes cassettes to mirror test directory structure within each package:
lib/crewai/tests/llms/google/test_google.py -> lib/crewai/tests/cassettes/llms/google/
lib/crewai-tools/tests/tools/test_search.py -> lib/crewai-tools/tests/cassettes/tools/
"""
test_file = Path(request.fspath)
for parent in test_file.parents:
if parent.name in ("crewai", "crewai-tools") and parent.parent.name == "lib":
package_root = parent
break
else:
package_root = test_file.parent
tests_root = package_root / "tests"
test_dir = test_file.parent
if test_dir != tests_root:
relative_path = test_dir.relative_to(tests_root)
cassette_dir = tests_root / "cassettes" / relative_path
else:
cassette_dir = tests_root / "cassettes"
cassette_dir.mkdir(parents=True, exist_ok=True)
return str(cassette_dir)
@pytest.fixture(scope="module")
def vcr_config(vcr_cassette_dir: str) -> dict[str, Any]:
"""Configure VCR with organized cassette storage."""
config = {
"cassette_library_dir": vcr_cassette_dir,
"record_mode": os.getenv("PYTEST_VCR_RECORD_MODE", "once"),
"filter_headers": [(k, v) for k, v in HEADERS_TO_FILTER.items()],
"before_record_request": _filter_request_headers,
"before_record_response": _filter_response_headers,
"filter_query_parameters": ["key"],
"match_on": ["method", "scheme", "host", "port", "path"],
}
if os.getenv("GITHUB_ACTIONS") == "true":
config["record_mode"] = "none"
return config

crewAI.excalidraw (new file, 1737 lines)

File diff suppressed because it is too large.


@@ -1,6 +1,6 @@
  {
  "$schema": "https://mintlify.com/docs.json",
- "theme": "aspen",
+ "theme": "mint",
  "name": "CrewAI",
  "colors": {
  "primary": "#EB6658",
@@ -9,22 +9,7 @@
  },
  "favicon": "/images/favicon.svg",
  "contextual": {
- "options": [
+ "options": ["copy", "view", "chatgpt", "claude"]
- "copy",
- "view",
- "chatgpt",
- "claude",
- "perplexity",
- "mcp",
- "cursor",
- "vscode",
- {
- "title": "Request a feature",
- "description": "Join the discussion on GitHub to request a new feature",
- "icon": "plus",
- "href": "https://github.com/crewAIInc/crewAI/issues/new/choose"
- }
- ]
  },
  "navigation": {
  "languages": [
@@ -42,32 +27,21 @@
  "href": "https://community.crewai.com",
  "icon": "discourse"
  },
- {
- "anchor": "Blog",
- "href": "https://blog.crewai.com",
- "icon": "newspaper"
- },
  {
  "anchor": "Crew GPT",
  "href": "https://chatgpt.com/g/g-qqTuUWsBY-crewai-assistant",
  "icon": "robot"
+ },
+ {
+ "anchor": "Releases",
+ "href": "https://github.com/crewAIInc/crewAI/releases",
+ "icon": "tag"
  }
  ]
  },
  "tabs": [
- {
- "tab": "Home",
- "icon": "house",
- "groups": [
- {
- "group": "Welcome",
- "pages": ["index"]
- }
- ]
- },
  {
  "tab": "Documentation",
- "icon": "book-open",
  "groups": [
  {
  "group": "Get Started",
@@ -78,22 +52,18 @@
  "pages": [
  {
  "group": "Strategy",
- "icon": "compass",
  "pages": ["en/guides/concepts/evaluating-use-cases"]
  },
  {
  "group": "Agents",
- "icon": "user",
  "pages": ["en/guides/agents/crafting-effective-agents"]
  },
  {
  "group": "Crews",
- "icon": "users",
  "pages": ["en/guides/crews/first-crew"]
  },
  {
  "group": "Flows",
- "icon": "code-branch",
  "pages": [
  "en/guides/flows/first-flow",
  "en/guides/flows/mastering-flow-state"
@@ -101,7 +71,6 @@
  },
  {
  "group": "Advanced",
- "icon": "gear",
  "pages": [
  "en/guides/advanced/customizing-prompts",
  "en/guides/advanced/fingerprinting"
@@ -134,7 +103,6 @@
  "group": "MCP Integration",
  "pages": [
  "en/mcp/overview",
- "en/mcp/dsl-integration",
  "en/mcp/stdio",
  "en/mcp/sse",
  "en/mcp/streamable-http",
@@ -148,7 +116,6 @@
  "en/tools/overview",
  {
  "group": "File & Document",
- "icon": "folder-open",
  "pages": [
  "en/tools/file-document/overview",
  "en/tools/file-document/filereadtool",
@@ -168,7 +135,6 @@
  },
  {
  "group": "Web Scraping & Browsing",
- "icon": "globe",
  "pages": [
  "en/tools/web-scraping/overview",
  "en/tools/web-scraping/scrapewebsitetool",
@@ -188,7 +154,6 @@
  },
  {
  "group": "Search & Research",
- "icon": "magnifying-glass",
  "pages": [
  "en/tools/search-research/overview",
  "en/tools/search-research/serperdevtool",
@@ -210,7 +175,6 @@
  },
  {
  "group": "Database & Data",
- "icon": "database",
  "pages": [
  "en/tools/database-data/overview",
  "en/tools/database-data/mysqltool",
@@ -225,7 +189,6 @@
  },
  {
  "group": "AI & Machine Learning",
- "icon": "brain",
  "pages": [
  "en/tools/ai-ml/overview",
  "en/tools/ai-ml/dalletool",
@@ -239,27 +202,16 @@
  },
  {
  "group": "Cloud & Storage",
- "icon": "cloud",
  "pages": [
  "en/tools/cloud-storage/overview",
  "en/tools/cloud-storage/s3readertool",
  "en/tools/cloud-storage/s3writertool",
+ "en/tools/cloud-storage/bedrockinvokeagenttool",
  "en/tools/cloud-storage/bedrockkbretriever"
  ]
  },
  {
- "group": "Integrations",
+ "group": "Automation & Integration",
- "icon": "plug",
- "pages": [
- "en/tools/integration/overview",
- "en/tools/integration/bedrockinvokeagenttool",
- "en/tools/integration/crewaiautomationtool",
- "en/tools/integration/mergeagenthandlertool"
- ]
- },
- {
- "group": "Automation",
- "icon": "bolt",
  "pages": [
  "en/tools/automation/overview",
  "en/tools/automation/apifyactorstool",
@@ -273,11 +225,8 @@
  {
  "group": "Observability",
  "pages": [
- "en/observability/tracing",
  "en/observability/overview",
  "en/observability/arize-phoenix",
- "en/observability/braintrust",
- "en/observability/datadog",
  "en/observability/langdb",
  "en/observability/langfuse",
  "en/observability/langtrace",
@@ -307,17 +256,13 @@
  "en/learn/force-tool-output-as-result",
  "en/learn/hierarchical-process",
  "en/learn/human-input-on-execution",
- "en/learn/human-in-the-loop",
  "en/learn/kickoff-async",
  "en/learn/kickoff-for-each",
  "en/learn/llm-connections",
  "en/learn/multimodal-agents",
  "en/learn/replay-tasks-from-latest-crew-kickoff",
  "en/learn/sequential-process",
- "en/learn/using-annotations",
+ "en/learn/using-annotations"
- "en/learn/execution-hooks",
- "en/learn/llm-hooks",
- "en/learn/tool-hooks"
  ]
  },
  {
@@ -327,35 +272,22 @@
  ]
  },
  {
- "tab": "AOP",
+ "tab": "Enterprise",
- "icon": "briefcase",
  "groups": [
  {
  "group": "Getting Started",
  "pages": ["en/enterprise/introduction"]
  },
  {
- "group": "Build",
+ "group": "Features",
  "pages": [
- "en/enterprise/features/automations",
+ "en/enterprise/features/rbac",
- "en/enterprise/features/crew-studio",
+ "en/enterprise/features/tool-repository",
- "en/enterprise/features/marketplace",
- "en/enterprise/features/agent-repositories",
- "en/enterprise/features/tools-and-integrations"
- ]
- },
- {
- "group": "Operate",
- "pages": [
- "en/enterprise/features/traces",
  "en/enterprise/features/webhook-streaming",
- "en/enterprise/features/hallucination-guardrail"
+ "en/enterprise/features/traces",
- ]
+ "en/enterprise/features/hallucination-guardrail",
- },
+ "en/enterprise/features/integrations",
- {
+ "en/enterprise/features/agent-repositories"
- "group": "Manage",
- "pages": [
- "en/enterprise/features/rbac"
  ]
  },
  {
@@ -367,20 +299,10 @@
  "en/enterprise/integrations/github",
  "en/enterprise/integrations/gmail",
  "en/enterprise/integrations/google_calendar",
- "en/enterprise/integrations/google_contacts",
- "en/enterprise/integrations/google_docs",
- "en/enterprise/integrations/google_drive",
  "en/enterprise/integrations/google_sheets",
- "en/enterprise/integrations/google_slides",
  "en/enterprise/integrations/hubspot",
  "en/enterprise/integrations/jira",
  "en/enterprise/integrations/linear",
- "en/enterprise/integrations/microsoft_excel",
- "en/enterprise/integrations/microsoft_onedrive",
- "en/enterprise/integrations/microsoft_outlook",
- "en/enterprise/integrations/microsoft_sharepoint",
- "en/enterprise/integrations/microsoft_teams",
- "en/enterprise/integrations/microsoft_word",
  "en/enterprise/integrations/notion",
  "en/enterprise/integrations/salesforce",
  "en/enterprise/integrations/shopify",
@@ -389,22 +311,6 @@
  "en/enterprise/integrations/zendesk"
  ]
  },
- {
- "group": "Triggers",
- "pages": [
- "en/enterprise/guides/automation-triggers",
- "en/enterprise/guides/gmail-trigger",
- "en/enterprise/guides/google-calendar-trigger",
- "en/enterprise/guides/google-drive-trigger",
- "en/enterprise/guides/outlook-trigger",
- "en/enterprise/guides/onedrive-trigger",
- "en/enterprise/guides/microsoft-teams-trigger",
- "en/enterprise/guides/slack-trigger",
- "en/enterprise/guides/hubspot-trigger",
- "en/enterprise/guides/salesforce-trigger",
- "en/enterprise/guides/zapier-trigger"
- ]
- },
  {
  "group": "How-To Guides",
  "pages": [
@@ -413,13 +319,15 @@
  "en/enterprise/guides/kickoff-crew",
  "en/enterprise/guides/update-crew",
  "en/enterprise/guides/enable-crew-studio",
+ "en/enterprise/guides/capture_telemetry_logs",
  "en/enterprise/guides/azure-openai-setup",
- "en/enterprise/guides/tool-repository",
+ "en/enterprise/guides/hubspot-trigger",
  "en/enterprise/guides/react-component-export",
+ "en/enterprise/guides/salesforce-trigger",
+ "en/enterprise/guides/slack-trigger",
  "en/enterprise/guides/team-management",
+ "en/enterprise/guides/webhook-automation",
  "en/enterprise/guides/human-in-the-loop",
"en/enterprise/guides/webhook-automation" "en/enterprise/guides/zapier-trigger"
] ]
}, },
{ {
@@ -430,7 +338,6 @@
}, },
{ {
"tab": "API Reference", "tab": "API Reference",
"icon": "magnifying-glass",
"groups": [ "groups": [
{ {
"group": "Getting Started", "group": "Getting Started",
@@ -438,7 +345,6 @@
"en/api-reference/introduction", "en/api-reference/introduction",
"en/api-reference/inputs", "en/api-reference/inputs",
"en/api-reference/kickoff", "en/api-reference/kickoff",
"en/api-reference/resume",
"en/api-reference/status" "en/api-reference/status"
] ]
} }
@@ -446,23 +352,12 @@
}, },
{ {
"tab": "Examples", "tab": "Examples",
"icon": "code",
"groups": [ "groups": [
{ {
"group": "Examples", "group": "Examples",
"pages": ["en/examples/example", "en/examples/cookbooks"] "pages": ["en/examples/example", "en/examples/cookbooks"]
} }
] ]
},
{
"tab": "Changelog",
"icon": "clock",
"groups": [
{
"group": "Release Notes",
"pages": ["en/changelog"]
}
]
} }
] ]
}, },
@@ -480,32 +375,21 @@
"href": "https://community.crewai.com", "href": "https://community.crewai.com",
"icon": "discourse" "icon": "discourse"
}, },
{
"anchor": "Blog",
"href": "https://blog.crewai.com",
"icon": "newspaper"
},
{ {
"anchor": "Crew GPT", "anchor": "Crew GPT",
"href": "https://chatgpt.com/g/g-qqTuUWsBY-crewai-assistant", "href": "https://chatgpt.com/g/g-qqTuUWsBY-crewai-assistant",
"icon": "robot" "icon": "robot"
},
{
"anchor": "Lançamentos",
"href": "https://github.com/crewAIInc/crewAI/releases",
"icon": "tag"
} }
] ]
}, },
"tabs": [ "tabs": [
{
"tab": "Início",
"icon": "house",
"groups": [
{
"group": "Bem-vindo",
"pages": ["pt-BR/index"]
}
]
},
{ {
"tab": "Documentação", "tab": "Documentação",
"icon": "book-open",
"groups": [ "groups": [
{ {
"group": "Começando", "group": "Começando",
@@ -520,22 +404,18 @@
"pages": [ "pages": [
{ {
"group": "Estratégia", "group": "Estratégia",
"icon": "compass",
"pages": ["pt-BR/guides/concepts/evaluating-use-cases"] "pages": ["pt-BR/guides/concepts/evaluating-use-cases"]
}, },
{ {
"group": "Agentes", "group": "Agentes",
"icon": "user",
"pages": ["pt-BR/guides/agents/crafting-effective-agents"] "pages": ["pt-BR/guides/agents/crafting-effective-agents"]
}, },
{ {
"group": "Crews", "group": "Crews",
"icon": "users",
"pages": ["pt-BR/guides/crews/first-crew"] "pages": ["pt-BR/guides/crews/first-crew"]
}, },
{ {
"group": "Flows", "group": "Flows",
"icon": "code-branch",
"pages": [ "pages": [
"pt-BR/guides/flows/first-flow", "pt-BR/guides/flows/first-flow",
"pt-BR/guides/flows/mastering-flow-state" "pt-BR/guides/flows/mastering-flow-state"
@@ -543,7 +423,6 @@
}, },
{ {
"group": "Avançado", "group": "Avançado",
"icon": "gear",
"pages": [ "pages": [
"pt-BR/guides/advanced/customizing-prompts", "pt-BR/guides/advanced/customizing-prompts",
"pt-BR/guides/advanced/fingerprinting" "pt-BR/guides/advanced/fingerprinting"
@@ -576,7 +455,6 @@
"group": "Integração MCP", "group": "Integração MCP",
"pages": [ "pages": [
"pt-BR/mcp/overview", "pt-BR/mcp/overview",
"pt-BR/mcp/dsl-integration",
"pt-BR/mcp/stdio", "pt-BR/mcp/stdio",
"pt-BR/mcp/sse", "pt-BR/mcp/sse",
"pt-BR/mcp/streamable-http", "pt-BR/mcp/streamable-http",
@@ -590,7 +468,6 @@
"pt-BR/tools/overview", "pt-BR/tools/overview",
{ {
"group": "Arquivo & Documento", "group": "Arquivo & Documento",
"icon": "folder-open",
"pages": [ "pages": [
"pt-BR/tools/file-document/overview", "pt-BR/tools/file-document/overview",
"pt-BR/tools/file-document/filereadtool", "pt-BR/tools/file-document/filereadtool",
@@ -608,7 +485,6 @@
}, },
{ {
"group": "Web Scraping & Navegação", "group": "Web Scraping & Navegação",
"icon": "globe",
"pages": [ "pages": [
"pt-BR/tools/web-scraping/overview", "pt-BR/tools/web-scraping/overview",
"pt-BR/tools/web-scraping/scrapewebsitetool", "pt-BR/tools/web-scraping/scrapewebsitetool",
@@ -627,7 +503,6 @@
}, },
{ {
"group": "Pesquisa", "group": "Pesquisa",
"icon": "magnifying-glass",
"pages": [ "pages": [
"pt-BR/tools/search-research/overview", "pt-BR/tools/search-research/overview",
"pt-BR/tools/search-research/serperdevtool", "pt-BR/tools/search-research/serperdevtool",
@@ -643,7 +518,6 @@
}, },
{ {
"group": "Dados", "group": "Dados",
"icon": "database",
"pages": [ "pages": [
"pt-BR/tools/database-data/overview", "pt-BR/tools/database-data/overview",
"pt-BR/tools/database-data/mysqltool", "pt-BR/tools/database-data/mysqltool",
@@ -656,7 +530,6 @@
}, },
{ {
"group": "IA & Machine Learning", "group": "IA & Machine Learning",
"icon": "brain",
"pages": [ "pages": [
"pt-BR/tools/ai-ml/overview", "pt-BR/tools/ai-ml/overview",
"pt-BR/tools/ai-ml/dalletool", "pt-BR/tools/ai-ml/dalletool",
@@ -670,26 +543,16 @@
}, },
{ {
"group": "Cloud & Armazenamento", "group": "Cloud & Armazenamento",
"icon": "cloud",
"pages": [ "pages": [
"pt-BR/tools/cloud-storage/overview", "pt-BR/tools/cloud-storage/overview",
"pt-BR/tools/cloud-storage/s3readertool", "pt-BR/tools/cloud-storage/s3readertool",
"pt-BR/tools/cloud-storage/s3writertool", "pt-BR/tools/cloud-storage/s3writertool",
"pt-BR/tools/cloud-storage/bedrockinvokeagenttool",
"pt-BR/tools/cloud-storage/bedrockkbretriever" "pt-BR/tools/cloud-storage/bedrockkbretriever"
] ]
}, },
{ {
"group": "Integrations", "group": "Automação & Integração",
"icon": "plug",
"pages": [
"pt-BR/tools/integration/overview",
"pt-BR/tools/integration/bedrockinvokeagenttool",
"pt-BR/tools/integration/crewaiautomationtool"
]
},
{
"group": "Automação",
"icon": "bolt",
"pages": [ "pages": [
"pt-BR/tools/automation/overview", "pt-BR/tools/automation/overview",
"pt-BR/tools/automation/apifyactorstool", "pt-BR/tools/automation/apifyactorstool",
@@ -704,8 +567,6 @@
"pages": [ "pages": [
"pt-BR/observability/overview", "pt-BR/observability/overview",
"pt-BR/observability/arize-phoenix", "pt-BR/observability/arize-phoenix",
"pt-BR/observability/braintrust",
"pt-BR/observability/datadog",
"pt-BR/observability/langdb", "pt-BR/observability/langdb",
"pt-BR/observability/langfuse", "pt-BR/observability/langfuse",
"pt-BR/observability/langtrace", "pt-BR/observability/langtrace",
@@ -734,17 +595,13 @@
"pt-BR/learn/force-tool-output-as-result", "pt-BR/learn/force-tool-output-as-result",
"pt-BR/learn/hierarchical-process", "pt-BR/learn/hierarchical-process",
"pt-BR/learn/human-input-on-execution", "pt-BR/learn/human-input-on-execution",
"pt-BR/learn/human-in-the-loop",
"pt-BR/learn/kickoff-async", "pt-BR/learn/kickoff-async",
"pt-BR/learn/kickoff-for-each", "pt-BR/learn/kickoff-for-each",
"pt-BR/learn/llm-connections", "pt-BR/learn/llm-connections",
"pt-BR/learn/multimodal-agents", "pt-BR/learn/multimodal-agents",
"pt-BR/learn/replay-tasks-from-latest-crew-kickoff", "pt-BR/learn/replay-tasks-from-latest-crew-kickoff",
"pt-BR/learn/sequential-process", "pt-BR/learn/sequential-process",
"pt-BR/learn/using-annotations", "pt-BR/learn/using-annotations"
"pt-BR/learn/execution-hooks",
"pt-BR/learn/llm-hooks",
"pt-BR/learn/tool-hooks"
] ]
}, },
{ {
@@ -754,35 +611,21 @@
] ]
}, },
{ {
"tab": "AOP", "tab": "Enterprise",
"icon": "briefcase",
"groups": [ "groups": [
{ {
"group": "Começando", "group": "Começando",
"pages": ["pt-BR/enterprise/introduction"] "pages": ["pt-BR/enterprise/introduction"]
}, },
{ {
"group": "Construir", "group": "Funcionalidades",
"pages": [ "pages": [
"pt-BR/enterprise/features/automations", "pt-BR/enterprise/features/rbac",
"pt-BR/enterprise/features/crew-studio", "pt-BR/enterprise/features/tool-repository",
"pt-BR/enterprise/features/marketplace",
"pt-BR/enterprise/features/agent-repositories",
"pt-BR/enterprise/features/tools-and-integrations"
]
},
{
"group": "Operar",
"pages": [
"pt-BR/enterprise/features/traces",
"pt-BR/enterprise/features/webhook-streaming", "pt-BR/enterprise/features/webhook-streaming",
"pt-BR/enterprise/features/hallucination-guardrail" "pt-BR/enterprise/features/traces",
] "pt-BR/enterprise/features/hallucination-guardrail",
}, "pt-BR/enterprise/features/integrations"
{
"group": "Gerenciar",
"pages": [
"pt-BR/enterprise/features/rbac"
] ]
}, },
{ {
@@ -794,20 +637,10 @@
"pt-BR/enterprise/integrations/github", "pt-BR/enterprise/integrations/github",
"pt-BR/enterprise/integrations/gmail", "pt-BR/enterprise/integrations/gmail",
"pt-BR/enterprise/integrations/google_calendar", "pt-BR/enterprise/integrations/google_calendar",
"pt-BR/enterprise/integrations/google_contacts",
"pt-BR/enterprise/integrations/google_docs",
"pt-BR/enterprise/integrations/google_drive",
"pt-BR/enterprise/integrations/google_sheets", "pt-BR/enterprise/integrations/google_sheets",
"pt-BR/enterprise/integrations/google_slides",
"pt-BR/enterprise/integrations/hubspot", "pt-BR/enterprise/integrations/hubspot",
"pt-BR/enterprise/integrations/jira", "pt-BR/enterprise/integrations/jira",
"pt-BR/enterprise/integrations/linear", "pt-BR/enterprise/integrations/linear",
"pt-BR/enterprise/integrations/microsoft_excel",
"pt-BR/enterprise/integrations/microsoft_onedrive",
"pt-BR/enterprise/integrations/microsoft_outlook",
"pt-BR/enterprise/integrations/microsoft_sharepoint",
"pt-BR/enterprise/integrations/microsoft_teams",
"pt-BR/enterprise/integrations/microsoft_word",
"pt-BR/enterprise/integrations/notion", "pt-BR/enterprise/integrations/notion",
"pt-BR/enterprise/integrations/salesforce", "pt-BR/enterprise/integrations/salesforce",
"pt-BR/enterprise/integrations/shopify", "pt-BR/enterprise/integrations/shopify",
@@ -825,26 +658,13 @@
"pt-BR/enterprise/guides/update-crew", "pt-BR/enterprise/guides/update-crew",
"pt-BR/enterprise/guides/enable-crew-studio", "pt-BR/enterprise/guides/enable-crew-studio",
"pt-BR/enterprise/guides/azure-openai-setup", "pt-BR/enterprise/guides/azure-openai-setup",
"pt-BR/enterprise/guides/tool-repository",
"pt-BR/enterprise/guides/react-component-export",
"pt-BR/enterprise/guides/team-management",
"pt-BR/enterprise/guides/human-in-the-loop",
"pt-BR/enterprise/guides/webhook-automation"
]
},
{
"group": "Triggers",
"pages": [
"pt-BR/enterprise/guides/automation-triggers",
"pt-BR/enterprise/guides/gmail-trigger",
"pt-BR/enterprise/guides/google-calendar-trigger",
"pt-BR/enterprise/guides/google-drive-trigger",
"pt-BR/enterprise/guides/outlook-trigger",
"pt-BR/enterprise/guides/onedrive-trigger",
"pt-BR/enterprise/guides/microsoft-teams-trigger",
"pt-BR/enterprise/guides/slack-trigger",
"pt-BR/enterprise/guides/hubspot-trigger", "pt-BR/enterprise/guides/hubspot-trigger",
"pt-BR/enterprise/guides/react-component-export",
"pt-BR/enterprise/guides/salesforce-trigger", "pt-BR/enterprise/guides/salesforce-trigger",
"pt-BR/enterprise/guides/slack-trigger",
"pt-BR/enterprise/guides/team-management",
"pt-BR/enterprise/guides/webhook-automation",
"pt-BR/enterprise/guides/human-in-the-loop",
"pt-BR/enterprise/guides/zapier-trigger" "pt-BR/enterprise/guides/zapier-trigger"
] ]
}, },
@@ -858,7 +678,6 @@
}, },
{ {
"tab": "Referência da API", "tab": "Referência da API",
"icon": "magnifying-glass",
"groups": [ "groups": [
{ {
"group": "Começando", "group": "Começando",
@@ -866,7 +685,6 @@
"pt-BR/api-reference/introduction", "pt-BR/api-reference/introduction",
"pt-BR/api-reference/inputs", "pt-BR/api-reference/inputs",
"pt-BR/api-reference/kickoff", "pt-BR/api-reference/kickoff",
"pt-BR/api-reference/resume",
"pt-BR/api-reference/status" "pt-BR/api-reference/status"
] ]
} }
@@ -874,23 +692,12 @@
}, },
{ {
"tab": "Exemplos", "tab": "Exemplos",
"icon": "code",
"groups": [ "groups": [
{ {
"group": "Exemplos", "group": "Exemplos",
"pages": ["pt-BR/examples/example", "pt-BR/examples/cookbooks"] "pages": ["pt-BR/examples/example", "pt-BR/examples/cookbooks"]
} }
] ]
},
{
"tab": "Notas de Versão",
"icon": "clock",
"groups": [
{
"group": "Notas de Versão",
"pages": ["pt-BR/changelog"]
}
]
} }
] ]
}, },
@@ -908,32 +715,21 @@
"href": "https://community.crewai.com", "href": "https://community.crewai.com",
"icon": "discourse" "icon": "discourse"
}, },
{
"anchor": "블로그",
"href": "https://blog.crewai.com",
"icon": "newspaper"
},
{ {
"anchor": "Crew GPT", "anchor": "Crew GPT",
"href": "https://chatgpt.com/g/g-qqTuUWsBY-crewai-assistant", "href": "https://chatgpt.com/g/g-qqTuUWsBY-crewai-assistant",
"icon": "robot" "icon": "robot"
},
{
"anchor": "릴리스",
"href": "https://github.com/crewAIInc/crewAI/releases",
"icon": "tag"
} }
] ]
}, },
"tabs": [ "tabs": [
{
"tab": "홈",
"icon": "house",
"groups": [
{
"group": "환영합니다",
"pages": ["ko/index"]
}
]
},
{ {
"tab": "기술 문서", "tab": "기술 문서",
"icon": "book-open",
"groups": [ "groups": [
{ {
"group": "시작 안내", "group": "시작 안내",
@@ -944,22 +740,18 @@
"pages": [ "pages": [
{ {
"group": "전략", "group": "전략",
"icon": "compass",
"pages": ["ko/guides/concepts/evaluating-use-cases"] "pages": ["ko/guides/concepts/evaluating-use-cases"]
}, },
{ {
"group": "에이전트 (Agents)", "group": "에이전트 (Agents)",
"icon": "user",
"pages": ["ko/guides/agents/crafting-effective-agents"] "pages": ["ko/guides/agents/crafting-effective-agents"]
}, },
{ {
"group": "크루 (Crews)", "group": "크루 (Crews)",
"icon": "users",
"pages": ["ko/guides/crews/first-crew"] "pages": ["ko/guides/crews/first-crew"]
}, },
{ {
"group": "플로우 (Flows)", "group": "플로우 (Flows)",
"icon": "code-branch",
"pages": [ "pages": [
"ko/guides/flows/first-flow", "ko/guides/flows/first-flow",
"ko/guides/flows/mastering-flow-state" "ko/guides/flows/mastering-flow-state"
@@ -967,7 +759,6 @@
}, },
{ {
"group": "고급", "group": "고급",
"icon": "gear",
"pages": [ "pages": [
"ko/guides/advanced/customizing-prompts", "ko/guides/advanced/customizing-prompts",
"ko/guides/advanced/fingerprinting" "ko/guides/advanced/fingerprinting"
@@ -1000,7 +791,6 @@
"group": "MCP 통합", "group": "MCP 통합",
"pages": [ "pages": [
"ko/mcp/overview", "ko/mcp/overview",
"ko/mcp/dsl-integration",
"ko/mcp/stdio", "ko/mcp/stdio",
"ko/mcp/sse", "ko/mcp/sse",
"ko/mcp/streamable-http", "ko/mcp/streamable-http",
@@ -1014,7 +804,6 @@
"ko/tools/overview", "ko/tools/overview",
{ {
"group": "파일 & 문서", "group": "파일 & 문서",
"icon": "folder-open",
"pages": [ "pages": [
"ko/tools/file-document/overview", "ko/tools/file-document/overview",
"ko/tools/file-document/filereadtool", "ko/tools/file-document/filereadtool",
@@ -1034,7 +823,6 @@
}, },
{ {
"group": "웹 스크래핑 & 브라우징", "group": "웹 스크래핑 & 브라우징",
"icon": "globe",
"pages": [ "pages": [
"ko/tools/web-scraping/overview", "ko/tools/web-scraping/overview",
"ko/tools/web-scraping/scrapewebsitetool", "ko/tools/web-scraping/scrapewebsitetool",
@@ -1054,7 +842,6 @@
}, },
{ {
"group": "검색 및 연구", "group": "검색 및 연구",
"icon": "magnifying-glass",
"pages": [ "pages": [
"ko/tools/search-research/overview", "ko/tools/search-research/overview",
"ko/tools/search-research/serperdevtool", "ko/tools/search-research/serperdevtool",
@@ -1076,7 +863,6 @@
}, },
{ {
"group": "데이터베이스 & 데이터", "group": "데이터베이스 & 데이터",
"icon": "database",
"pages": [ "pages": [
"ko/tools/database-data/overview", "ko/tools/database-data/overview",
"ko/tools/database-data/mysqltool", "ko/tools/database-data/mysqltool",
@@ -1091,7 +877,6 @@
}, },
{ {
"group": "인공지능 & 머신러닝", "group": "인공지능 & 머신러닝",
"icon": "brain",
"pages": [ "pages": [
"ko/tools/ai-ml/overview", "ko/tools/ai-ml/overview",
"ko/tools/ai-ml/dalletool", "ko/tools/ai-ml/dalletool",
@@ -1105,26 +890,16 @@
}, },
{ {
"group": "클라우드 & 스토리지", "group": "클라우드 & 스토리지",
"icon": "cloud",
"pages": [ "pages": [
"ko/tools/cloud-storage/overview", "ko/tools/cloud-storage/overview",
"ko/tools/cloud-storage/s3readertool", "ko/tools/cloud-storage/s3readertool",
"ko/tools/cloud-storage/s3writertool", "ko/tools/cloud-storage/s3writertool",
"ko/tools/cloud-storage/bedrockinvokeagenttool",
"ko/tools/cloud-storage/bedrockkbretriever" "ko/tools/cloud-storage/bedrockkbretriever"
] ]
}, },
{ {
"group": "Integrations", "group": "자동화 & 통합",
"icon": "plug",
"pages": [
"ko/tools/integration/overview",
"ko/tools/integration/bedrockinvokeagenttool",
"ko/tools/integration/crewaiautomationtool"
]
},
{
"group": "자동화",
"icon": "bolt",
"pages": [ "pages": [
"ko/tools/automation/overview", "ko/tools/automation/overview",
"ko/tools/automation/apifyactorstool", "ko/tools/automation/apifyactorstool",
@@ -1140,8 +915,6 @@
"pages": [ "pages": [
"ko/observability/overview", "ko/observability/overview",
"ko/observability/arize-phoenix", "ko/observability/arize-phoenix",
"ko/observability/braintrust",
"ko/observability/datadog",
"ko/observability/langdb", "ko/observability/langdb",
"ko/observability/langfuse", "ko/observability/langfuse",
"ko/observability/langtrace", "ko/observability/langtrace",
@@ -1170,17 +943,13 @@
"ko/learn/force-tool-output-as-result", "ko/learn/force-tool-output-as-result",
"ko/learn/hierarchical-process", "ko/learn/hierarchical-process",
"ko/learn/human-input-on-execution", "ko/learn/human-input-on-execution",
"ko/learn/human-in-the-loop",
"ko/learn/kickoff-async", "ko/learn/kickoff-async",
"ko/learn/kickoff-for-each", "ko/learn/kickoff-for-each",
"ko/learn/llm-connections", "ko/learn/llm-connections",
"ko/learn/multimodal-agents", "ko/learn/multimodal-agents",
"ko/learn/replay-tasks-from-latest-crew-kickoff", "ko/learn/replay-tasks-from-latest-crew-kickoff",
"ko/learn/sequential-process", "ko/learn/sequential-process",
"ko/learn/using-annotations", "ko/learn/using-annotations"
"ko/learn/execution-hooks",
"ko/learn/llm-hooks",
"ko/learn/tool-hooks"
] ]
}, },
{ {
@@ -1191,34 +960,21 @@
}, },
{ {
"tab": "엔터프라이즈", "tab": "엔터프라이즈",
"icon": "briefcase",
"groups": [ "groups": [
{ {
"group": "시작 안내", "group": "시작 안내",
"pages": ["ko/enterprise/introduction"] "pages": ["ko/enterprise/introduction"]
}, },
{ {
"group": "빌드", "group": "특징",
"pages": [ "pages": [
"ko/enterprise/features/automations", "ko/enterprise/features/rbac",
"ko/enterprise/features/crew-studio", "ko/enterprise/features/tool-repository",
"ko/enterprise/features/marketplace",
"ko/enterprise/features/agent-repositories",
"ko/enterprise/features/tools-and-integrations"
]
},
{
"group": "운영",
"pages": [
"ko/enterprise/features/traces",
"ko/enterprise/features/webhook-streaming", "ko/enterprise/features/webhook-streaming",
"ko/enterprise/features/hallucination-guardrail" "ko/enterprise/features/traces",
] "ko/enterprise/features/hallucination-guardrail",
}, "ko/enterprise/features/integrations",
{ "ko/enterprise/features/agent-repositories"
"group": "관리",
"pages": [
"ko/enterprise/features/rbac"
] ]
}, },
{ {
@@ -1230,20 +986,10 @@
"ko/enterprise/integrations/github", "ko/enterprise/integrations/github",
"ko/enterprise/integrations/gmail", "ko/enterprise/integrations/gmail",
"ko/enterprise/integrations/google_calendar", "ko/enterprise/integrations/google_calendar",
"ko/enterprise/integrations/google_contacts",
"ko/enterprise/integrations/google_docs",
"ko/enterprise/integrations/google_drive",
"ko/enterprise/integrations/google_sheets", "ko/enterprise/integrations/google_sheets",
"ko/enterprise/integrations/google_slides",
"ko/enterprise/integrations/hubspot", "ko/enterprise/integrations/hubspot",
"ko/enterprise/integrations/jira", "ko/enterprise/integrations/jira",
"ko/enterprise/integrations/linear", "ko/enterprise/integrations/linear",
"ko/enterprise/integrations/microsoft_excel",
"ko/enterprise/integrations/microsoft_onedrive",
"ko/enterprise/integrations/microsoft_outlook",
"ko/enterprise/integrations/microsoft_sharepoint",
"ko/enterprise/integrations/microsoft_teams",
"ko/enterprise/integrations/microsoft_word",
"ko/enterprise/integrations/notion", "ko/enterprise/integrations/notion",
"ko/enterprise/integrations/salesforce", "ko/enterprise/integrations/salesforce",
"ko/enterprise/integrations/shopify", "ko/enterprise/integrations/shopify",
@@ -1261,26 +1007,13 @@
"ko/enterprise/guides/update-crew", "ko/enterprise/guides/update-crew",
"ko/enterprise/guides/enable-crew-studio", "ko/enterprise/guides/enable-crew-studio",
"ko/enterprise/guides/azure-openai-setup", "ko/enterprise/guides/azure-openai-setup",
"ko/enterprise/guides/tool-repository",
"ko/enterprise/guides/react-component-export",
"ko/enterprise/guides/team-management",
"ko/enterprise/guides/human-in-the-loop",
"ko/enterprise/guides/webhook-automation"
]
},
{
"group": "트리거",
"pages": [
"ko/enterprise/guides/automation-triggers",
"ko/enterprise/guides/gmail-trigger",
"ko/enterprise/guides/google-calendar-trigger",
"ko/enterprise/guides/google-drive-trigger",
"ko/enterprise/guides/outlook-trigger",
"ko/enterprise/guides/onedrive-trigger",
"ko/enterprise/guides/microsoft-teams-trigger",
"ko/enterprise/guides/slack-trigger",
"ko/enterprise/guides/hubspot-trigger", "ko/enterprise/guides/hubspot-trigger",
"ko/enterprise/guides/react-component-export",
"ko/enterprise/guides/salesforce-trigger", "ko/enterprise/guides/salesforce-trigger",
"ko/enterprise/guides/slack-trigger",
"ko/enterprise/guides/team-management",
"ko/enterprise/guides/webhook-automation",
"ko/enterprise/guides/human-in-the-loop",
"ko/enterprise/guides/zapier-trigger" "ko/enterprise/guides/zapier-trigger"
] ]
}, },
@@ -1292,7 +1025,6 @@
}, },
{ {
"tab": "API 레퍼런스", "tab": "API 레퍼런스",
"icon": "magnifying-glass",
"groups": [ "groups": [
{ {
"group": "시작 안내", "group": "시작 안내",
@@ -1300,7 +1032,6 @@
"ko/api-reference/introduction", "ko/api-reference/introduction",
"ko/api-reference/inputs", "ko/api-reference/inputs",
"ko/api-reference/kickoff", "ko/api-reference/kickoff",
"ko/api-reference/resume",
"ko/api-reference/status" "ko/api-reference/status"
] ]
} }
@@ -1308,23 +1039,12 @@
}, },
{ {
"tab": "예시", "tab": "예시",
"icon": "code",
"groups": [ "groups": [
{ {
"group": "예시", "group": "예시",
"pages": ["ko/examples/example", "ko/examples/cookbooks"] "pages": ["ko/examples/example", "ko/examples/cookbooks"]
} }
] ]
},
{
"tab": "변경 로그",
"icon": "clock",
"groups": [
{
"group": "릴리스 노트",
"pages": ["ko/changelog"]
}
]
} }
] ]
} }
@@ -1334,23 +1054,15 @@
"light": "/images/crew_only_logo.png", "light": "/images/crew_only_logo.png",
"dark": "/images/crew_only_logo.png" "dark": "/images/crew_only_logo.png"
}, },
"fonts": {
"family": "Inter"
},
"appearance": { "appearance": {
"default": "system", "default": "dark",
"strict": false, "strict": false
"layout": "sidenav"
},
"background": {
"decoration": "grid"
}, },
"navbar": { "navbar": {
"links": [ "links": [
{ {
"label": "Start Cloud Trial", "label": "Start Cloud Trial",
"href": "https://app.crewai.com", "href": "https://app.crewai.com"
"icon": "arrow-up-right-from-square"
} }
], ],
"primary": { "primary": {
@@ -1369,20 +1081,7 @@
} }
}, },
"seo": { "seo": {
"indexing": "all", "indexing": "all"
"metatags": {
"og:type": "website",
"og:site_name": "CrewAI Documentation",
"og:image": "https://docs.crewai.com/images/crew_only_logo.png",
"twitter:card": "summary_large_image",
"twitter:site": "@crewAIInc",
"keywords": "AI agents, multi-agent systems, CrewAI, artificial intelligence, automation, Python framework, agent collaboration, AI workflows"
}
},
"feedback": {
"enabled": true,
"thumbsRating": true,
"suggestEdit": true
}, },
"redirects": [ "redirects": [
{ {
@@ -1403,7 +1102,7 @@
}, },
{ {
"source": "/changelog", "source": "/changelog",
"destination": "/en/changelog" "destination": "https://github.com/crewAIInc/crewAI/releases"
}, },
{ {
"source": "/telemetry", "source": "/telemetry",

View File

@@ -2,7 +2,6 @@
title: "GET /inputs" title: "GET /inputs"
description: "Get required inputs for your crew" description: "Get required inputs for your crew"
openapi: "/enterprise-api.en.yaml GET /inputs" openapi: "/enterprise-api.en.yaml GET /inputs"
mode: "wide"
--- ---

View File

@@ -1,19 +1,18 @@
--- ---
title: "Introduction" title: "Introduction"
description: "Complete reference for the CrewAI AOP REST API" description: "Complete reference for the CrewAI Enterprise REST API"
icon: "code" icon: "code"
mode: "wide"
--- ---
# CrewAI AOP API # CrewAI Enterprise API
Welcome to the CrewAI AOP API reference. This API allows you to programmatically interact with your deployed crews, enabling integration with your applications, workflows, and services. Welcome to the CrewAI Enterprise API reference. This API allows you to programmatically interact with your deployed crews, enabling integration with your applications, workflows, and services.
## Quick Start ## Quick Start
<Steps> <Steps>
<Step title="Get Your API Credentials"> <Step title="Get Your API Credentials">
Navigate to your crew's detail page in the CrewAI AOP dashboard and copy your Bearer Token from the Status tab. Navigate to your crew's detail page in the CrewAI Enterprise dashboard and copy your Bearer Token from the Status tab.
</Step> </Step>
<Step title="Discover Required Inputs"> <Step title="Discover Required Inputs">
@@ -46,7 +45,7 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations | | **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |
<Tip> <Tip>
You can find both token types in the Status tab of your crew's detail page in the CrewAI AOP dashboard. You can find both token types in the Status tab of your crew's detail page in the CrewAI Enterprise dashboard.
</Tip> </Tip>
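For orientation, here is a rough end-to-end sketch of the three endpoints above — not an official snippet. It assumes the Python `requests` package, your crew's unique base URL (covered below), a valid Bearer Token, and the payload and response field names documented on the /kickoff and /status pages.
```python Code
import requests

# Hypothetical placeholders: copy your crew's URL and Bearer Token from the Status tab.
BASE_URL = "https://your-crew-url.crewai.com"
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}

# 1. Discover which inputs the crew expects.
required_inputs = requests.get(f"{BASE_URL}/inputs", headers=HEADERS).json()
print(required_inputs)

# 2. Start an execution (payload shape assumed; see the /kickoff page for the exact schema).
kickoff = requests.post(
    f"{BASE_URL}/kickoff",
    headers=HEADERS,
    json={"inputs": {"topic": "AI in healthcare"}},
).json()

# 3. Poll execution status with the returned id (field name assumed from the /status/{kickoff_id} path).
status = requests.get(f"{BASE_URL}/status/{kickoff['kickoff_id']}", headers=HEADERS).json()
print(status)
```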
## Base URL ## Base URL
@@ -82,7 +81,7 @@ The API uses standard HTTP status codes:
## Interactive Testing ## Interactive Testing
<Info> <Info>
**Why no "Send" button?** Since each CrewAI AOP user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons. **Why no "Send" button?** Since each CrewAI Enterprise user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.
</Info> </Info>
Each endpoint page shows you: Each endpoint page shows you:

View File

@@ -2,7 +2,6 @@
title: "POST /kickoff" title: "POST /kickoff"
description: "Start a crew execution" description: "Start a crew execution"
openapi: "/enterprise-api.en.yaml POST /kickoff" openapi: "/enterprise-api.en.yaml POST /kickoff"
mode: "wide"
--- ---

View File

@@ -1,6 +0,0 @@
---
title: "POST /resume"
description: "Resume crew execution with human feedback"
openapi: "/enterprise-api.en.yaml POST /resume"
mode: "wide"
---

View File

@@ -2,7 +2,6 @@
title: "GET /status/{kickoff_id}" title: "GET /status/{kickoff_id}"
description: "Get execution status" description: "Get execution status"
openapi: "/enterprise-api.en.yaml GET /status/{kickoff_id}" openapi: "/enterprise-api.en.yaml GET /status/{kickoff_id}"
mode: "wide"
--- ---

File diff suppressed because it is too large

View File

@@ -2,7 +2,6 @@
title: Agents title: Agents
description: Detailed guide on creating and managing agents within the CrewAI framework. description: Detailed guide on creating and managing agents within the CrewAI framework.
icon: robot icon: robot
mode: "wide"
--- ---
## Overview of an Agent ## Overview of an Agent
@@ -20,7 +19,7 @@ Think of an agent as a specialized team member with specific skills, expertise,
</Tip> </Tip>
<Note type="info" title="Enterprise Enhancement: Visual Agent Builder"> <Note type="info" title="Enterprise Enhancement: Visual Agent Builder">
CrewAI AOP includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time. CrewAI Enterprise includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time.
![Visual Agent Builder Screenshot](/images/enterprise/crew-studio-interface.png) ![Visual Agent Builder Screenshot](/images/enterprise/crew-studio-interface.png)

View File

@@ -2,10 +2,9 @@
title: CLI title: CLI
description: Learn how to use the CrewAI CLI to interact with CrewAI. description: Learn how to use the CrewAI CLI to interact with CrewAI.
icon: terminal icon: terminal
mode: "wide"
--- ---
<Warning>Since release 0.140.0, CrewAI AOP started a process of migrating their login provider. As such, the authentication flow via CLI was updated. Users that use Google to login, or that created their account after July 3rd, 2025 will be unable to log in with older versions of the `crewai` library.</Warning> <Warning>Since release 0.140.0, CrewAI Enterprise started a process of migrating their login provider. As such, the authentication flow via CLI was updated. Users that use Google to login, or that created their account after July 3rd, 2025 will be unable to log in with older versions of the `crewai` library.</Warning>
## Overview ## Overview
@@ -186,9 +185,9 @@ def crew(self) -> Crew:
### 10. Deploy ### 10. Deploy
Deploy the crew or flow to [CrewAI AOP](https://app.crewai.com). Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).
- **Authentication**: You need to be authenticated to deploy to CrewAI AOP. - **Authentication**: You need to be authenticated to deploy to CrewAI Enterprise.
You can login or create an account with: You can login or create an account with:
```shell Terminal ```shell Terminal
crewai login crewai login
@@ -203,7 +202,7 @@ Deploy the crew or flow to [CrewAI AOP](https://app.crewai.com).
### 11. Organization Management ### 11. Organization Management
Manage your CrewAI AOP organizations. Manage your CrewAI Enterprise organizations.
```shell Terminal ```shell Terminal
crewai org [COMMAND] [OPTIONS] crewai org [COMMAND] [OPTIONS]
@@ -227,17 +226,17 @@ crewai org switch <organization_id>
``` ```
<Note> <Note>
You must be authenticated to CrewAI AOP to use these organization management commands. You must be authenticated to CrewAI Enterprise to use these organization management commands.
</Note> </Note>
- **Create a deployment** (continued): - **Create a deployment** (continued):
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically). - Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI AOP. - **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI Enterprise.
```shell Terminal ```shell Terminal
crewai deploy push crewai deploy push
``` ```
- Initiates the deployment process on the CrewAI AOP platform. - Initiates the deployment process on the CrewAI Enterprise platform.
- Upon successful initiation, it will output the Deployment created successfully! message along with the Deployment Name and a unique Deployment ID (UUID). - Upon successful initiation, it will output the Deployment created successfully! message along with the Deployment Name and a unique Deployment ID (UUID).
- **Deployment Status**: You can check the status of your deployment with: - **Deployment Status**: You can check the status of your deployment with:
@@ -262,7 +261,7 @@ You must be authenticated to CrewAI AOP to use these organization management com
```shell Terminal ```shell Terminal
crewai deploy remove crewai deploy remove
``` ```
This deletes the deployment from the CrewAI AOP platform. This deletes the deployment from the CrewAI Enterprise platform.
- **Help Command**: You can get help with the CLI with: - **Help Command**: You can get help with the CLI with:
```shell Terminal ```shell Terminal
@@ -270,36 +269,20 @@ You must be authenticated to CrewAI AOP to use these organization management com
``` ```
This shows the help message for the CrewAI Deploy CLI. This shows the help message for the CrewAI Deploy CLI.
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI AOP](http://app.crewai.com) using the CLI. Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI Enterprise](http://app.crewai.com) using the CLI.
<iframe <iframe
className="w-full aspect-video rounded-xl" width="100%"
height="400"
src="https://www.youtube.com/embed/3EqSV-CYDZA" src="https://www.youtube.com/embed/3EqSV-CYDZA"
title="CrewAI Deployment Guide" title="CrewAI Deployment Guide"
frameBorder="0" frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen allowfullscreen
></iframe> ></iframe>
### 11. Login ### 11. API Keys
Authenticate with CrewAI AOP using a secure device code flow (no email entry required).
```shell Terminal
crewai login
```
What happens:
- A verification URL and short code are displayed in your terminal
- Your browser opens to the verification URL
- Enter/confirm the code to complete authentication
Notes:
- The OAuth2 provider and domain are configured via `crewai config` (defaults use `login.crewai.com`)
- After successful login, the CLI also attempts to authenticate to the Tool Repository automatically
- If you reset your configuration, run `crewai login` again to re-authenticate
### 12. API Keys
When running ```crewai create crew``` command, the CLI will show you a list of available LLM providers to choose from, followed by model selection for your chosen provider. When running ```crewai create crew``` command, the CLI will show you a list of available LLM providers to choose from, followed by model selection for your chosen provider.
@@ -327,7 +310,7 @@ See the following link for each provider's key name:
* [LiteLLM Providers](https://docs.litellm.ai/docs/providers) * [LiteLLM Providers](https://docs.litellm.ai/docs/providers)
### 13. Configuration Management ### 12. Configuration Management
Manage CLI configuration settings for CrewAI. Manage CLI configuration settings for CrewAI.
@@ -354,7 +337,7 @@ crewai config reset
#### Available Configuration Parameters #### Available Configuration Parameters
- `enterprise_base_url`: Base URL of the CrewAI AOP instance - `enterprise_base_url`: Base URL of the CrewAI Enterprise instance
- `oauth2_provider`: OAuth2 provider used for authentication (e.g., workos, okta, auth0) - `oauth2_provider`: OAuth2 provider used for authentication (e.g., workos, okta, auth0)
- `oauth2_audience`: OAuth2 audience value, typically used to identify the target API or resource - `oauth2_audience`: OAuth2 audience value, typically used to identify the target API or resource
- `oauth2_client_id`: OAuth2 client ID issued by the provider, used during authentication requests - `oauth2_client_id`: OAuth2 client ID issued by the provider, used during authentication requests
@@ -368,15 +351,19 @@ crewai config list
``` ```
Example output: Example output:
| Setting | Value | Description |
| :------------------ | :----------------------- | :---------------------------------------------------------- |
| enterprise_base_url | https://app.crewai.com | Base URL of the CrewAI AOP instance |
| org_name | Not set | Name of the currently active organization |
| org_uuid | Not set | UUID of the currently active organization |
| oauth2_provider | workos | OAuth2 provider (e.g., workos, okta, auth0) |
| oauth2_audience | client_01YYY | Audience identifying the target API/resource |
| oauth2_client_id | client_01XXX | OAuth2 client ID issued by the provider |
| oauth2_domain | login.crewai.com | Provider domain (e.g., your-org.auth0.com) |
```
CrewAI CLI Configuration
┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Setting             ┃ Value                  ┃ Description                                                                   ┃
┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ enterprise_base_url │ https://app.crewai.com │ Base URL of the CrewAI Enterprise instance                                    │
│ org_name            │ Not set                │ Name of the currently active organization                                     │
│ org_uuid            │ Not set                │ UUID of the currently active organization                                     │
│ oauth2_provider     │ workos                 │ OAuth2 provider used for authentication (e.g., workos, okta, auth0).          │
│ oauth2_audience     │ client_01YYY           │ OAuth2 audience value, typically used to identify the target API or resource. │
│ oauth2_client_id    │ client_01XXX           │ OAuth2 client ID issued by the provider, used during authentication requests. │
│ oauth2_domain       │ login.crewai.com       │ OAuth2 provider's domain (e.g., your-org.auth0.com) used for issuing tokens.  │
```
Set the enterprise base URL: Set the enterprise base URL:
```shell Terminal ```shell Terminal
@@ -398,85 +385,6 @@ Reset all configuration to defaults:
crewai config reset crewai config reset
``` ```
<Tip>
After resetting configuration, re-run `crewai login` to authenticate again.
</Tip>
### 14. Trace Management
Manage trace collection preferences for your Crew and Flow executions.
```shell Terminal
crewai traces [COMMAND]
```
#### Commands:
- `enable`: Enable trace collection for crew/flow executions
```shell Terminal
crewai traces enable
```
- `disable`: Disable trace collection for crew/flow executions
```shell Terminal
crewai traces disable
```
- `status`: Show current trace collection status
```shell Terminal
crewai traces status
```
#### How Tracing Works
Trace collection is controlled by checking three settings in priority order:
1. **Explicit flag in code** (highest priority - can enable OR disable):
```python
crew = Crew(agents=[...], tasks=[...], tracing=True) # Always enable
crew = Crew(agents=[...], tasks=[...], tracing=False) # Always disable
crew = Crew(agents=[...], tasks=[...]) # Check lower priorities (default)
```
- `tracing=True` will **always enable** tracing (overrides everything)
- `tracing=False` will **always disable** tracing (overrides everything)
- `tracing=None` or omitted will check lower priority settings
2. **Environment variable** (second priority):
```env
CREWAI_TRACING_ENABLED=true
```
- Checked only if `tracing` is not explicitly set to `True` or `False` in code
- Set to `true` or `1` to enable tracing
3. **User preference** (lowest priority):
```shell Terminal
crewai traces enable
```
- Checked only if `tracing` is not set in code and `CREWAI_TRACING_ENABLED` is not set to `true`
- Running `crewai traces enable` is sufficient to enable tracing by itself
<Note>
**To enable tracing**, use any one of these methods:
- Set `tracing=True` in your Crew/Flow code, OR
- Add `CREWAI_TRACING_ENABLED=true` to your `.env` file, OR
- Run `crewai traces enable`
**To disable tracing**, use any ONE of these methods:
- Set `tracing=False` in your Crew/Flow code (overrides everything), OR
- Remove or set to `false` the `CREWAI_TRACING_ENABLED` env var, OR
- Run `crewai traces disable`
Higher priority settings override lower ones.
</Note>
<Tip>
For more information about tracing, see the [Tracing documentation](/observability/tracing).
</Tip>
<Tip>
CrewAI CLI handles authentication to the Tool Repository automatically when adding packages to your project. Just append `crewai` before any `uv` command to use it. E.g. `crewai uv add requests`. For more information, see [Tool Repository](https://docs.crewai.com/enterprise/features/tool-repository) docs.
</Tip>
<Note> <Note>
Configuration settings are stored in `~/.config/crewai/settings.json`. Some settings like organization name and UUID are read-only and managed through authentication and organization commands. Tool repository related settings are hidden and cannot be set directly by users. Configuration settings are stored in `~/.config/crewai/settings.json`. Some settings like organization name and UUID are read-only and managed through authentication and organization commands. Tool repository related settings are hidden and cannot be set directly by users.
</Note> </Note>

View File

@@ -2,7 +2,6 @@
title: Collaboration title: Collaboration
description: How to enable agents to work together, delegate tasks, and communicate effectively within CrewAI teams. description: How to enable agents to work together, delegate tasks, and communicate effectively within CrewAI teams.
icon: screen-users icon: screen-users
mode: "wide"
--- ---
## Overview ## Overview

View File

@@ -2,7 +2,6 @@
title: Crews title: Crews
description: Understanding and utilizing crews in the crewAI framework with comprehensive attributes and functionalities. description: Understanding and utilizing crews in the crewAI framework with comprehensive attributes and functionalities.
icon: people-group icon: people-group
mode: "wide"
--- ---
## Overview ## Overview
@@ -33,7 +32,6 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. | | **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. | | **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | Knowledge sources available at the crew level, accessible to all the agents. | | **Knowledge Sources** _(optional)_ | `knowledge_sources` | Knowledge sources available at the crew level, accessible to all the agents. |
| **Stream** _(optional)_ | `stream` | Enable streaming output to receive real-time updates during crew execution. Returns a `CrewStreamingOutput` object that can be iterated for chunks. Defaults to `False`. |
<Tip> <Tip>
**Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it. **Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
@@ -307,27 +305,12 @@ print(result)
### Different Ways to Kick Off a Crew ### Different Ways to Kick Off a Crew
Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process. Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.
#### Synchronous Methods
- `kickoff()`: Starts the execution process according to the defined process flow. - `kickoff()`: Starts the execution process according to the defined process flow.
- `kickoff_for_each()`: Executes tasks sequentially for each provided input event or item in the collection. - `kickoff_for_each()`: Executes tasks sequentially for each provided input event or item in the collection.
- `kickoff_async()`: Initiates the workflow asynchronously.
#### Asynchronous Methods - `kickoff_for_each_async()`: Executes tasks concurrently for each provided input event or item, leveraging asynchronous processing.
CrewAI offers two approaches for async execution:
| Method | Type | Description |
|--------|------|-------------|
| `akickoff()` | Native async | True async/await throughout the entire execution chain |
| `akickoff_for_each()` | Native async | Native async execution for each input in a list |
| `kickoff_async()` | Thread-based | Wraps synchronous execution in `asyncio.to_thread` |
| `kickoff_for_each_async()` | Thread-based | Thread-based async for each input in a list |
<Note>
For high-concurrency workloads, `akickoff()` and `akickoff_for_each()` are recommended as they use native async for task execution, memory operations, and knowledge retrieval.
</Note>
```python Code ```python Code
# Start the crew's task execution # Start the crew's task execution
@@ -340,53 +323,19 @@ results = my_crew.kickoff_for_each(inputs=inputs_array)
for result in results: for result in results:
print(result) print(result)
# Example of using native async with akickoff # Example of using kickoff_async
inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.akickoff(inputs=inputs)
print(async_result)
# Example of using native async with akickoff_for_each
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.akickoff_for_each(inputs=inputs_array)
for async_result in async_results:
print(async_result)
# Example of using thread-based kickoff_async
inputs = {'topic': 'AI in healthcare'} inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.kickoff_async(inputs=inputs) async_result = await my_crew.kickoff_async(inputs=inputs)
print(async_result) print(async_result)
# Example of using thread-based kickoff_for_each_async # Example of using kickoff_for_each_async
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}] inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array) async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
for async_result in async_results: for async_result in async_results:
print(async_result) print(async_result)
``` ```
These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs. For detailed async examples, see the [Kickoff Crew Asynchronously](/en/learn/kickoff-async) guide. These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs.
### Streaming Crew Execution
For real-time visibility into crew execution, you can enable streaming to receive output as it's generated:
```python Code
# Enable streaming
crew = Crew(
agents=[researcher],
tasks=[task],
stream=True
)
# Iterate over streaming output
streaming = crew.kickoff(inputs={"topic": "AI"})
for chunk in streaming:
print(chunk.content, end="", flush=True)
# Access final result
result = streaming.result
```
Learn more about streaming in the [Streaming Crew Execution](/en/learn/streaming-crew-execution) guide.
### Replaying from a Specific Task ### Replaying from a Specific Task

View File

@@ -2,7 +2,6 @@
title: 'Event Listeners' title: 'Event Listeners'
description: 'Tap into CrewAI events to build custom integrations and monitoring' description: 'Tap into CrewAI events to build custom integrations and monitoring'
icon: spinner icon: spinner
mode: "wide"
--- ---
## Overview ## Overview
@@ -20,7 +19,7 @@ CrewAI uses an event bus architecture to emit events throughout the execution li
When specific actions occur in CrewAI (like a Crew starting execution, an Agent completing a task, or a tool being used), the system emits corresponding events. You can register handlers for these events to execute custom code when they occur. When specific actions occur in CrewAI (like a Crew starting execution, an Agent completing a task, or a tool being used), the system emits corresponding events. You can register handlers for these events to execute custom code when they occur.
<Note type="info" title="Enterprise Enhancement: Prompt Tracing"> <Note type="info" title="Enterprise Enhancement: Prompt Tracing">
CrewAI AOP provides a built-in Prompt Tracing feature that leverages the event system to track, store, and visualize all prompts, completions, and associated metadata. This provides powerful debugging capabilities and transparency into your agent operations. CrewAI Enterprise provides a built-in Prompt Tracing feature that leverages the event system to track, store, and visualize all prompts, completions, and associated metadata. This provides powerful debugging capabilities and transparency into your agent operations.
![Prompt Tracing Dashboard](/images/enterprise/traces-overview.png) ![Prompt Tracing Dashboard](/images/enterprise/traces-overview.png)
@@ -45,12 +44,12 @@ To create a custom event listener, you need to:
Here's a simple example of a custom event listener class: Here's a simple example of a custom event listener class:
```python ```python
from crewai.events import ( from crewai.utilities.events import (
CrewKickoffStartedEvent, CrewKickoffStartedEvent,
CrewKickoffCompletedEvent, CrewKickoffCompletedEvent,
AgentExecutionCompletedEvent, AgentExecutionCompletedEvent,
) )
from crewai.events import BaseEventListener from crewai.utilities.events.base_event_listener import BaseEventListener
class MyCustomListener(BaseEventListener): class MyCustomListener(BaseEventListener):
def __init__(self): def __init__(self):
@@ -147,7 +146,7 @@ my_project/
```python ```python
# my_custom_listener.py # my_custom_listener.py
from crewai.events import BaseEventListener from crewai.utilities.events.base_event_listener import BaseEventListener
# ... import events ... # ... import events ...
class MyCustomListener(BaseEventListener): class MyCustomListener(BaseEventListener):
@@ -280,7 +279,7 @@ Additional fields vary by event type. For example, `CrewKickoffCompletedEvent` i
For temporary event handling (useful for testing or specific operations), you can use the `scoped_handlers` context manager: For temporary event handling (useful for testing or specific operations), you can use the `scoped_handlers` context manager:
```python ```python
from crewai.events import crewai_event_bus, CrewKickoffStartedEvent from crewai.utilities.events import crewai_event_bus, CrewKickoffStartedEvent
with crewai_event_bus.scoped_handlers(): with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(CrewKickoffStartedEvent) @crewai_event_bus.on(CrewKickoffStartedEvent)

View File

@@ -2,7 +2,6 @@
title: Flows title: Flows
description: Learn how to create and manage AI workflows using CrewAI Flows. description: Learn how to create and manage AI workflows using CrewAI Flows.
icon: arrow-progress icon: arrow-progress
mode: "wide"
--- ---
## Overview ## Overview
@@ -98,13 +97,7 @@ The state's unique ID and stored data can be useful for tracking flow executions
### @start() ### @start()
The `@start()` decorator marks entry points for a Flow. You can: The `@start()` decorator is used to mark a method as the starting point of a Flow. When a Flow is started, all the methods decorated with `@start()` are executed in parallel. You can have multiple start methods in a Flow, and they will all be executed when the Flow is started.
- Declare multiple unconditional starts: `@start()`
- Gate a start on a prior method or router label: `@start("method_or_label")`
- Provide a callable condition to control when a start should fire
All satisfied `@start()` methods will execute (often in parallel) when the Flow begins or resumes.
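For illustration, a minimal sketch (method names are hypothetical; the import path follows the Flows examples elsewhere in these docs) showing two unconditional `@start()` methods that both fire on kickoff:
```python Code
from crewai.flow.flow import Flow, start


class ExampleFlow(Flow):
    # Both start methods run when the flow begins.
    @start()
    def gather_topic(self):
        return "AI in healthcare"

    @start()
    def gather_audience(self):
        return "clinicians"


flow = ExampleFlow()
result = flow.kickoff()
print(result)  # final flow output
```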
### @listen() ### @listen()
@@ -875,13 +868,14 @@ By exploring these examples, you can gain insights into how to leverage CrewAI F
Also, check out our YouTube video on how to use flows in CrewAI below! Also, check out our YouTube video on how to use flows in CrewAI below!
<iframe <iframe
className="w-full aspect-video rounded-xl" width="560"
height="315"
src="https://www.youtube.com/embed/MTb5my6VOT8" src="https://www.youtube.com/embed/MTb5my6VOT8"
title="CrewAI Flows overview" title="YouTube video player"
frameBorder="0" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerPolicy="strict-origin-when-cross-origin" referrerpolicy="strict-origin-when-cross-origin"
allowFullScreen allowfullscreen
></iframe> ></iframe>
## Running Flows ## Running Flows
@@ -897,31 +891,6 @@ flow = ExampleFlow()
result = flow.kickoff() result = flow.kickoff()
``` ```
### Streaming Flow Execution
For real-time visibility into flow execution, you can enable streaming to receive output as it's generated:
```python
class StreamingFlow(Flow):
stream = True # Enable streaming
@start()
def research(self):
# Your flow implementation
pass
# Iterate over streaming output
flow = StreamingFlow()
streaming = flow.kickoff()
for chunk in streaming:
print(chunk.content, end="", flush=True)
# Access final result
result = streaming.result
```
Learn more about streaming in the [Streaming Flow Execution](/en/learn/streaming-flow-execution) guide.
### Using the CLI ### Using the CLI
Starting from version 0.103.0, you can run flows using the `crewai run` command: Starting from version 0.103.0, you can run flows using the `crewai run` command:

View File

@@ -2,7 +2,6 @@
title: Knowledge title: Knowledge
description: What is knowledge in CrewAI and how to use it. description: What is knowledge in CrewAI and how to use it.
icon: book icon: book
mode: "wide"
--- ---
## Overview ## Overview
@@ -25,41 +24,6 @@ For file-based Knowledge Sources, make sure to place your files in a `knowledge`
Also, use relative paths from the `knowledge` directory when creating the source. Also, use relative paths from the `knowledge` directory when creating the source.
</Tip> </Tip>
### Vector store (RAG) client configuration
CrewAI exposes a provider-neutral RAG client abstraction for vector stores. The default provider is ChromaDB, and Qdrant is supported as well. You can switch providers using configuration utilities.
Supported today:
- ChromaDB (default)
- Qdrant
```python Code
from crewai.rag.config.utils import set_rag_config, get_rag_client, clear_rag_config

# ChromaDB (default)
from crewai.rag.chromadb.config import ChromaDBConfig

set_rag_config(ChromaDBConfig())
chromadb_client = get_rag_client()

# Qdrant
from crewai.rag.qdrant.config import QdrantConfig

set_rag_config(QdrantConfig())
qdrant_client = get_rag_client()

# Example operations (same API for any provider)
client = qdrant_client  # or chromadb_client
client.create_collection(collection_name="docs")
client.add_documents(
    collection_name="docs",
    documents=[{"id": "1", "content": "CrewAI enables collaborative AI agents."}],
)
results = client.search(collection_name="docs", query="collaborative agents", limit=3)

clear_rag_config()  # optional reset
```
This RAG client is separate from Knowledge's built-in storage. Use it when you need direct vector-store control or custom retrieval pipelines.
### Basic String Knowledge Example
```python Code
@@ -388,8 +352,8 @@ crew = Crew(
    agents=[sales_agent, tech_agent, support_agent],
    tasks=[...],
    embedder={  # Fallback embedder for agents without their own
        "provider": "google-generativeai",
        "config": {"model_name": "gemini-embedding-001"}
    }
)
@@ -629,9 +593,9 @@ agent = Agent(
backstory="Expert researcher", backstory="Expert researcher",
knowledge_sources=[knowledge_source], knowledge_sources=[knowledge_source],
embedder={ embedder={
"provider": "google-generativeai", "provider": "google",
"config": { "config": {
"model_name": "gemini-embedding-001", "model": "models/text-embedding-004",
"api_key": "your-google-key" "api_key": "your-google-key"
} }
} }
@@ -717,11 +681,11 @@ CrewAI emits events during the knowledge retrieval process that you can listen f
#### Example: Monitoring Knowledge Retrieval
```python
from crewai.events import (
    KnowledgeRetrievalStartedEvent,
    KnowledgeRetrievalCompletedEvent,
    BaseEventListener,
)

class KnowledgeMonitorListener(BaseEventListener):
    def setup_listeners(self, crewai_event_bus):
@@ -739,7 +703,7 @@ class KnowledgeMonitorListener(BaseEventListener):
knowledge_monitor = KnowledgeMonitorListener()
```
For more information on using events, see the [Event Listeners](/en/concepts/event-listener) documentation.
### Custom Knowledge Sources


@@ -2,12 +2,11 @@
title: 'LLMs'
description: 'A comprehensive guide to configuring and using Large Language Models (LLMs) in your CrewAI projects'
icon: 'microchip-ai'
mode: "wide"
---
## Overview
CrewAI integrates with multiple LLM providers through the providers' native SDKs, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
## What are LLMs?
@@ -113,104 +112,44 @@ In this section, you'll find detailed examples that help you select, configure,
<AccordionGroup>
<Accordion title="OpenAI">
CrewAI provides native integration with OpenAI through the OpenAI Python SDK. Set the following environment variables in your `.env` file:
```toml Code
# Required
OPENAI_API_KEY=sk-...

# Optional
OPENAI_BASE_URL=<custom-base-url>
OPENAI_ORGANIZATION=<your-org-id>
```
**Basic Usage:**
```python Code
from crewai import LLM

llm = LLM(
    model="openai/gpt-4o",
    api_key="your-api-key",  # Or set OPENAI_API_KEY
    temperature=0.7,
    max_tokens=4000
)
```
**Advanced Configuration:**
```python Code
from crewai import LLM

llm = LLM(
    model="openai/gpt-4o",
    api_key="your-api-key",
    base_url="https://api.openai.com/v1",  # Optional custom endpoint
    organization="org-...",  # Optional organization ID
    project="proj_...",  # Optional project ID
    temperature=0.7,
    max_tokens=4000,
    max_completion_tokens=4000,  # For newer models
    top_p=0.9,
    frequency_penalty=0.1,
    presence_penalty=0.1,
    stop=["END"],
    seed=42,  # For reproducible outputs
    stream=True,  # Enable streaming
    timeout=60.0,  # Request timeout in seconds
    max_retries=3,  # Maximum retry attempts
    logprobs=True,  # Return log probabilities
    top_logprobs=5,  # Number of most likely tokens
    reasoning_effort="medium"  # For o1 models: low, medium, high
)
```
**Structured Outputs:**
```python Code
from pydantic import BaseModel
from crewai import LLM

class ResponseFormat(BaseModel):
    name: str
    age: int
    summary: str

llm = LLM(
    model="openai/gpt-4o",
    response_format=ResponseFormat
)
```
**Supported Environment Variables:**
- `OPENAI_API_KEY`: Your OpenAI API key (required)
- `OPENAI_BASE_URL`: Custom base URL for OpenAI API (optional)
**Features:**
- Native function calling support (except o1 models)
- Structured outputs with JSON schema
- Streaming support for real-time responses
- Token usage tracking
- Stop sequences support (except o1 models)
- Log probabilities for token-level insights
- Reasoning effort control for o1 models
**Supported Models:**
| Model | Context Window | Best For |
|---------------------|------------------|-----------------------------------------------|
| gpt-4.1 | 1M tokens | Latest model with enhanced capabilities |
| gpt-4.1-mini | 1M tokens | Efficient version with large context |
| gpt-4.1-nano | 1M tokens | Ultra-efficient variant |
| gpt-4o | 128,000 tokens | Optimized for speed and intelligence |
| gpt-4o-mini | 200,000 tokens | Cost-effective with large context |
| gpt-4-turbo | 128,000 tokens | Long-form content, document analysis |
| gpt-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| o1 | 200,000 tokens | Advanced reasoning, complex problem-solving |
| o1-preview | 128,000 tokens | Preview of reasoning capabilities |
| o1-mini | 128,000 tokens | Efficient reasoning model |
| o3-mini | 200,000 tokens | Lightweight reasoning model |
| o4-mini | 200,000 tokens | Next-gen efficient reasoning |
**Note:** To use OpenAI, install the required dependencies:
```bash
uv add "crewai[openai]"
```
</Accordion>
<Accordion title="Meta-Llama">
@@ -247,230 +186,69 @@ In this section, you'll find detailed examples that help you select, configure,
</Accordion>
<Accordion title="Anthropic">
CrewAI provides native integration with Anthropic through the Anthropic Python SDK.
```toml Code
# Required
ANTHROPIC_API_KEY=sk-ant-...

# Optional
ANTHROPIC_API_BASE=<custom-base-url>
```
**Basic Usage:**
```python Code
from crewai import LLM

llm = LLM(
    model="anthropic/claude-3-5-sonnet-20241022",
    api_key="your-api-key",  # Or set ANTHROPIC_API_KEY
    max_tokens=4096  # Required for Anthropic
)
```
**Advanced Configuration:**
```python Code
from crewai import LLM

llm = LLM(
    model="anthropic/claude-3-5-sonnet-20241022",
    api_key="your-api-key",
    base_url="https://api.anthropic.com",  # Optional custom endpoint
    temperature=0.7,
    max_tokens=4096,  # Required parameter
    top_p=0.9,
    stop_sequences=["END", "STOP"],  # Anthropic uses stop_sequences
    stream=True,  # Enable streaming
    timeout=60.0,  # Request timeout in seconds
    max_retries=3  # Maximum retry attempts
)
```
**Extended Thinking (Claude Sonnet 4 and Beyond):**
CrewAI supports Anthropic's Extended Thinking feature, which allows Claude to think through problems in a more human-like way before responding. This is particularly useful for complex reasoning, analysis, and problem-solving tasks.
```python Code
from crewai import LLM

# Enable extended thinking with default settings
llm = LLM(
    model="anthropic/claude-sonnet-4",
    thinking={"type": "enabled"},
    max_tokens=10000
)

# Configure thinking with budget control
llm = LLM(
    model="anthropic/claude-sonnet-4",
    thinking={
        "type": "enabled",
        "budget_tokens": 5000  # Limit thinking tokens
    },
    max_tokens=10000
)
```
**Thinking Configuration Options:**
- `type`: Set to `"enabled"` to activate extended thinking mode
- `budget_tokens` (optional): Maximum tokens to use for thinking (helps control costs)
**Models Supporting Extended Thinking:**
- `claude-sonnet-4` and newer models
- `claude-3-7-sonnet` (with extended thinking capabilities)
**When to Use Extended Thinking:**
- Complex reasoning and multi-step problem solving
- Mathematical calculations and proofs
- Code analysis and debugging
- Strategic planning and decision making
- Research and analytical tasks
**Note:** Extended thinking consumes additional tokens but can significantly improve response quality for complex tasks.
**Supported Environment Variables:**
- `ANTHROPIC_API_KEY`: Your Anthropic API key (required)
**Features:**
- Native tool use support for Claude 3+ models
- Extended Thinking support for Claude Sonnet 4+
- Streaming support for real-time responses
- Automatic system message handling
- Stop sequences for controlled output
- Token usage tracking
- Multi-turn tool use conversations
**Important Notes:**
- `max_tokens` is a **required** parameter for all Anthropic models
- Claude uses `stop_sequences` instead of `stop`
- System messages are handled separately from conversation messages
- First message must be from the user (automatically handled)
- Messages must alternate between user and assistant
**Supported Models:**
| Model | Context Window | Best For |
|------------------------------|----------------|-----------------------------------------------|
| claude-sonnet-4 | 200,000 tokens | Latest with extended thinking capabilities |
| claude-3-7-sonnet | 200,000 tokens | Advanced reasoning and agentic tasks |
| claude-3-5-sonnet-20241022 | 200,000 tokens | Latest Sonnet with best performance |
| claude-3-5-haiku | 200,000 tokens | Fast, compact model for quick responses |
| claude-3-opus | 200,000 tokens | Most capable for complex tasks |
| claude-3-sonnet | 200,000 tokens | Balanced intelligence and speed |
| claude-3-haiku | 200,000 tokens | Fastest for simple tasks |
| claude-2.1 | 200,000 tokens | Extended context, reduced hallucinations |
| claude-2 | 100,000 tokens | Versatile model for various tasks |
| claude-instant | 100,000 tokens | Fast, cost-effective for everyday tasks |
**Note:** To use Anthropic, install the required dependencies:
```bash
uv add "crewai[anthropic]"
```
</Accordion>
<Accordion title="Google (Gemini API)">
CrewAI provides native integration with Google Gemini through the Google Gen AI Python SDK.
Set your API key in your `.env` file. If you need a key, check [AI Studio](https://aistudio.google.com/apikey).
```toml .env
# Required (one of the following)
GOOGLE_API_KEY=<your-api-key>
GEMINI_API_KEY=<your-api-key>

# Optional - for Vertex AI
GOOGLE_CLOUD_PROJECT=<your-project-id>
GOOGLE_CLOUD_LOCATION=<location>  # Defaults to us-central1
GOOGLE_GENAI_USE_VERTEXAI=true  # Set to use Vertex AI
```
**Basic Usage:**
```python Code
from crewai import LLM

llm = LLM(
    model="gemini/gemini-2.0-flash",
    api_key="your-api-key",  # Or set GOOGLE_API_KEY/GEMINI_API_KEY
    temperature=0.7
)
```
**Advanced Configuration:**
```python Code
from crewai import LLM

llm = LLM(
    model="gemini/gemini-2.5-flash",
    api_key="your-api-key",
    temperature=0.7,
    top_p=0.9,
    top_k=40,  # Top-k sampling parameter
    max_output_tokens=8192,
    stop_sequences=["END", "STOP"],
    stream=True,  # Enable streaming
    safety_settings={
        "HARM_CATEGORY_HARASSMENT": "BLOCK_NONE",
        "HARM_CATEGORY_HATE_SPEECH": "BLOCK_NONE"
    }
)
```
**Vertex AI Configuration:**
```python Code
from crewai import LLM

llm = LLM(
    model="gemini/gemini-1.5-pro",
    project="your-gcp-project-id",
    location="us-central1"  # GCP region
)
```
**Supported Environment Variables:**
- `GOOGLE_API_KEY` or `GEMINI_API_KEY`: Your Google API key (required for Gemini API)
- `GOOGLE_CLOUD_PROJECT`: Google Cloud project ID (for Vertex AI)
- `GOOGLE_CLOUD_LOCATION`: GCP location (defaults to `us-central1`)
- `GOOGLE_GENAI_USE_VERTEXAI`: Set to `true` to use Vertex AI
**Features:**
- Native function calling support for Gemini 1.5+ and 2.x models
- Streaming support for real-time responses
- Multimodal capabilities (text, images, video)
- Safety settings configuration
- Support for both Gemini API and Vertex AI
- Automatic system instruction handling
- Token usage tracking
**Gemini Models:**
Google offers a range of powerful models optimized for different use cases.
| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro | 1M tokens | Enhanced thinking and reasoning, multimodal understanding |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking |
| gemini-2.0-flash-thinking | 32,768 tokens | Advanced reasoning with thinking process |
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
| gemini-1.5-pro | 2M tokens | Best performing, logical reasoning, coding |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8b | 1M tokens | Fastest, most cost-efficient |
| gemini-1.0-pro | 32,768 tokens | Earlier generation model |
**Gemma Models:**
The Gemini API also supports [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.
| Model | Context Window | Best For |
|----------------|----------------|------------------------------------|
| gemma-3-1b | 32,000 tokens | Ultra-lightweight tasks |
| gemma-3-4b | 128,000 tokens | Efficient general-purpose tasks |
| gemma-3-12b | 128,000 tokens | Balanced performance and efficiency|
| gemma-3-27b | 128,000 tokens | High-performance tasks |
**Note:** To use Google Gemini, install the required dependencies:
```bash
uv add "crewai[google-genai]"
```
The full list of models is available in the [Gemini model docs](https://ai.google.dev/gemini-api/docs/models).
### Gemma
The Gemini API also allows you to use your API key to access [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.
| Model | Context Window |
|----------------|----------------|
| gemma-3-1b-it | 32k tokens |
| gemma-3-4b-it | 32k tokens |
| gemma-3-12b-it | 32k tokens |
| gemma-3-27b-it | 128k tokens |
</Accordion>
<Accordion title="Google (Vertex AI)">
Get credentials from your Google Cloud Console and save it to a JSON file, then load it with the following code:
@@ -512,146 +290,43 @@ In this section, you'll find detailed examples that help you select, configure,
</Accordion>
<Accordion title="Azure">
CrewAI provides native integration with Azure AI Inference and Azure OpenAI through the Azure AI Inference Python SDK.
```toml Code
# Required
AZURE_API_KEY=<your-api-key>
AZURE_ENDPOINT=<your-endpoint-url>

# Optional
AZURE_API_VERSION=<api-version>  # Defaults to 2024-06-01
```
**Endpoint URL Formats:**
For Azure OpenAI deployments:
```
https://<resource-name>.openai.azure.com/openai/deployments/<deployment-name>
```
For Azure AI Inference endpoints:
```
https://<resource-name>.inference.azure.com
```
**Basic Usage:**
```python Code
llm = LLM(
    model="azure/gpt-4",
    api_key="<your-api-key>",  # Or set AZURE_API_KEY
    endpoint="<your-endpoint-url>",
    api_version="2024-06-01"
)
```
**Advanced Configuration:**
```python Code
llm = LLM(
    model="azure/gpt-4o",
    temperature=0.7,
    max_tokens=4000,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    stop=["END"],
    stream=True,
    timeout=60.0,
    max_retries=3
)
```
**Supported Environment Variables:**
- `AZURE_API_KEY`: Your Azure API key (required)
- `AZURE_ENDPOINT`: Your Azure endpoint URL (required, also checks `AZURE_OPENAI_ENDPOINT` and `AZURE_API_BASE`)
- `AZURE_API_VERSION`: API version (optional, defaults to `2024-06-01`)
**Features:**
- Native function calling support for Azure OpenAI models (gpt-4, gpt-4o, gpt-3.5-turbo, etc.)
- Streaming support for real-time responses
- Automatic endpoint URL validation and correction
- Comprehensive error handling with retry logic
- Token usage tracking
**Note:** To use Azure AI Inference, install the required dependencies:
```bash
uv add "crewai[azure-ai-inference]"
```
</Accordion>
<Accordion title="AWS Bedrock">
CrewAI provides native integration with AWS Bedrock through the boto3 SDK using the Converse API.
```toml Code
# Required
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>

# Optional
AWS_SESSION_TOKEN=<your-session-token>  # For temporary credentials
AWS_DEFAULT_REGION=<your-region>  # Defaults to us-east-1
```
**Basic Usage:**
```python Code
from crewai import LLM

llm = LLM(
    model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
    region_name="us-east-1"
)
```
**Advanced Configuration:**
```python Code
from crewai import LLM

llm = LLM(
    model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
    aws_access_key_id="your-access-key",  # Or set AWS_ACCESS_KEY_ID
    aws_secret_access_key="your-secret-key",  # Or set AWS_SECRET_ACCESS_KEY
    aws_session_token="your-session-token",  # For temporary credentials
    region_name="us-east-1",
    temperature=0.7,
    max_tokens=4096,
    top_p=0.9,
    top_k=250,  # For Claude models
    stop_sequences=["END", "STOP"],
    stream=True,  # Enable streaming
    guardrail_config={  # Optional content filtering
        "guardrailIdentifier": "your-guardrail-id",
        "guardrailVersion": "1"
    },
    additional_model_request_fields={  # Model-specific parameters
        "top_k": 250
    }
)
```
**Supported Environment Variables:**
- `AWS_ACCESS_KEY_ID`: AWS access key (required)
- `AWS_SECRET_ACCESS_KEY`: AWS secret key (required)
- `AWS_SESSION_TOKEN`: AWS session token for temporary credentials (optional)
- `AWS_DEFAULT_REGION`: AWS region (defaults to `us-east-1`)
**Features:**
- Native tool calling support via Converse API
- Streaming and non-streaming responses
- Comprehensive error handling with retry logic
- Guardrail configuration for content filtering
- Model-specific parameters via `additional_model_request_fields`
- Token usage tracking and stop reason logging
- Support for all Bedrock foundation models
- Automatic conversation format handling
**Important Notes:**
- Uses the modern Converse API for unified model access
- Automatic handling of model-specific conversation requirements
- System messages are handled separately from conversation
- First message must be from user (automatically handled)
- Some models (like Cohere) require conversation to end with user message
[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API.
| Model | Context Window | Best For |
|-------------------------|----------------------|-------------------------------------------------------------------|
@@ -681,12 +356,7 @@ In this section, you'll find detailed examples that help you select, configure,
| Jamba-Instruct | Up to 256k tokens | Model with extended context window optimized for cost-effective text generation, summarization, and Q&A. |
| Mistral 7B Instruct | Up to 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| Mistral 8x7B Instruct | Up to 32k tokens | An MOE LLM that follows instructions, completes requests, and generates creative text. |
| DeepSeek R1 | 32,768 tokens | Advanced reasoning model |
**Note:** To use AWS Bedrock, install the required dependencies:
```bash
uv add "crewai[bedrock]"
```
</Accordion>
<Accordion title="Amazon SageMaker">
@@ -1063,10 +733,10 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
CrewAI emits events for each chunk received during streaming:
```python
from crewai.events import (
    LLMStreamChunkEvent
)
from crewai.events import BaseEventListener

class MyCustomListener(BaseEventListener):
    def setup_listeners(self, crewai_event_bus):
@@ -1079,7 +749,7 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
```
<Tip>
[Click here](/en/concepts/event-listener#event-listeners) for more details
</Tip>
</Tab>
@@ -1088,8 +758,8 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
```python
from crewai import LLM, Agent, Task, Crew
from crewai.events import LLMStreamChunkEvent
from crewai.events import BaseEventListener

class MyCustomListener(BaseEventListener):
    def setup_listeners(self, crewai_event_bus):
@@ -1133,50 +803,6 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
</Tab>
</Tabs>
## Async LLM Calls
CrewAI supports asynchronous LLM calls for improved performance and concurrency in your AI workflows. Async calls allow you to run multiple LLM requests concurrently without blocking, making them ideal for high-throughput applications and parallel agent operations.
<Tabs>
<Tab title="Basic Usage">
Use the `acall` method for asynchronous LLM requests:
```python
import asyncio
from crewai import LLM

async def main():
    llm = LLM(model="openai/gpt-4o")

    # Single async call
    response = await llm.acall("What is the capital of France?")
    print(response)

asyncio.run(main())
```
The `acall` method supports all the same parameters as the synchronous `call` method, including messages, tools, and callbacks.
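Because `acall` is non-blocking, several requests can also be issued concurrently. A minimal sketch using `asyncio.gather` (the prompts are illustrative):
```python
import asyncio
from crewai import LLM

async def main():
    llm = LLM(model="openai/gpt-4o")
    # Fire multiple requests at once and wait for all of them
    responses = await asyncio.gather(
        llm.acall("Summarize the Eiffel Tower in one sentence."),
        llm.acall("Summarize the Colosseum in one sentence."),
    )
    for response in responses:
        print(response)

asyncio.run(main())
```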
</Tab>
<Tab title="With Streaming">
Combine async calls with streaming for real-time concurrent responses:
```python
import asyncio
from crewai import LLM

async def stream_async():
    llm = LLM(model="openai/gpt-4o", stream=True)
    response = await llm.acall("Write a short story about AI")
    print(response)

asyncio.run(stream_async())
```
</Tab>
</Tabs>
## Structured LLM Calls
CrewAI supports structured responses from LLM calls by allowing you to define a `response_format` using a Pydantic model. This enables the framework to automatically parse and validate the output, making it easier to integrate the response into your application without manual post-processing.
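A minimal sketch of the pattern (the model choice and Pydantic fields here are illustrative, not taken from the original docs):
```python
from pydantic import BaseModel
from crewai import LLM

class Dog(BaseModel):
    name: str
    age: int
    breed: str

# Pass the Pydantic model as the response_format
llm = LLM(model="openai/gpt-4o", response_format=Dog)

response = llm.call(
    "Meet Kona! She is 3 years old and is a black german shepherd."
)
print(response)  # parsed and validated against the Dog schema
```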
@@ -1272,7 +898,7 @@ Learn how to get the most out of your LLM configuration:
</Accordion>
<Accordion title="Drop Additional Parameters">
CrewAI internally uses the providers' native SDKs for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
For example, if you don't need to send the <code>stop</code> parameter, you can simply omit it from your LLM call:
```python
@@ -1288,52 +914,6 @@ Learn how to get the most out of your LLM configuration:
)
```
</Accordion>
<Accordion title="Transport Interceptors">
CrewAI provides message interceptors for several providers, allowing you to hook into request/response cycles at the transport layer.
**Supported Providers:**
- ✅ OpenAI
- ✅ Anthropic
**Basic Usage:**
```python
import httpx
from crewai import LLM
from crewai.llms.hooks import BaseInterceptor

class CustomInterceptor(BaseInterceptor[httpx.Request, httpx.Response]):
    """Custom interceptor to modify requests and responses."""

    def on_outbound(self, request: httpx.Request) -> httpx.Request:
        """Print request before sending to the LLM provider."""
        print(request)
        return request

    def on_inbound(self, response: httpx.Response) -> httpx.Response:
        """Process response after receiving from the LLM provider."""
        print(f"Status: {response.status_code}")
        print(f"Response time: {response.elapsed}")
        return response

# Use the interceptor with an LLM
llm = LLM(
    model="openai/gpt-4o",
    interceptor=CustomInterceptor()
)
```
**Important Notes:**
- Both methods must return an object of the same type as the one they received.
- Modifying received objects may result in unexpected behavior or application crashes.
- Not all providers support interceptors; check the supported providers list above.
<Info>
Interceptors operate at the transport layer. This is particularly useful for:
- Message transformation and filtering
- Debugging API interactions
</Info>
</Accordion>
</AccordionGroup>
## Common Issues and Solutions


@@ -2,12 +2,11 @@
title: Memory
description: Leveraging memory systems in the CrewAI framework to enhance agent capabilities.
icon: database
mode: "wide"
---
## Overview
The CrewAI framework provides a sophisticated memory system designed to significantly enhance AI agent capabilities. CrewAI offers **two distinct memory approaches** that serve different use cases:
1. **Basic Memory System** - Built-in short-term, long-term, and entity memory
2. **External Memory** - Standalone external memory providers
@@ -341,7 +340,7 @@ crew = Crew(
    embedder={
        "provider": "openai",
        "config": {
            "model_name": "text-embedding-3-small"  # or "text-embedding-3-large"
        }
    }
)
@@ -353,7 +352,7 @@ crew = Crew(
"provider": "openai", "provider": "openai",
"config": { "config": {
"api_key": "your-openai-api-key", # Optional: override env var "api_key": "your-openai-api-key", # Optional: override env var
"model_name": "text-embedding-3-large", "model": "text-embedding-3-large",
"dimensions": 1536, # Optional: reduce dimensions for smaller storage "dimensions": 1536, # Optional: reduce dimensions for smaller storage
"organization_id": "your-org-id" # Optional: for organization accounts "organization_id": "your-org-id" # Optional: for organization accounts
} }
@@ -375,7 +374,7 @@ crew = Crew(
"api_base": "https://your-resource.openai.azure.com/", "api_base": "https://your-resource.openai.azure.com/",
"api_type": "azure", "api_type": "azure",
"api_version": "2023-05-15", "api_version": "2023-05-15",
"model_name": "text-embedding-3-small", "model": "text-embedding-3-small",
"deployment_id": "your-deployment-name" # Azure deployment name "deployment_id": "your-deployment-name" # Azure deployment name
} }
} }
@@ -390,10 +389,10 @@ Use Google's text embedding models for integration with Google Cloud services.
crew = Crew(
    memory=True,
    embedder={
        "provider": "google-generativeai",
        "config": {
            "api_key": "your-google-api-key",
            "model_name": "gemini-embedding-001"  # or "text-embedding-005", "text-multilingual-embedding-002"
        }
    }
)
@@ -461,7 +460,7 @@ crew = Crew(
"provider": "cohere", "provider": "cohere",
"config": { "config": {
"api_key": "your-cohere-api-key", "api_key": "your-cohere-api-key",
"model_name": "embed-english-v3.0" # or "embed-multilingual-v3.0" "model": "embed-english-v3.0" # or "embed-multilingual-v3.0"
} }
} }
) )
@@ -478,7 +477,7 @@ crew = Crew(
"provider": "voyageai", "provider": "voyageai",
"config": { "config": {
"api_key": "your-voyage-api-key", "api_key": "your-voyage-api-key",
"model": "voyage-3", # or "voyage-3-lite", "voyage-code-3" "model": "voyage-large-2", # or "voyage-code-2" for code
"input_type": "document" # or "query" "input_type": "document" # or "query"
} }
} }
@@ -515,7 +514,8 @@ crew = Crew(
"provider": "huggingface", "provider": "huggingface",
"config": { "config": {
"api_key": "your-hf-token", # Optional for public models "api_key": "your-hf-token", # Optional for public models
"model": "sentence-transformers/all-MiniLM-L6-v2" "model": "sentence-transformers/all-MiniLM-L6-v2",
"api_url": "https://api-inference.huggingface.co" # or your custom endpoint
} }
} }
) )
@@ -738,17 +738,6 @@ print(f"OpenAI: {openai_time:.2f}s")
print(f"Ollama: {ollama_time:.2f}s") print(f"Ollama: {ollama_time:.2f}s")
``` ```
### Entity Memory batching behavior
Entity Memory supports batching when saving multiple entities at once. When you pass a list of `EntityMemoryItem`, the system:
- Emits a single MemorySaveStartedEvent with `entity_count`
- Saves each entity internally, collecting any partial errors
- Emits MemorySaveCompletedEvent with aggregate metadata (saved count, errors)
- Raises a partial-save exception if some entities failed (includes counts)
This improves performance and observability when writing many entities in one operation.
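As a rough sketch (not from the original docs; the metadata keys are assumptions based on the description above), a listener can surface those aggregate counts:
```python
from crewai.events import BaseEventListener, MemorySaveCompletedEvent

class EntityBatchLogger(BaseEventListener):
    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(MemorySaveCompletedEvent)
        def on_save_completed(source, event):
            # Aggregate metadata is only expected for batched entity saves
            metadata = getattr(event, "metadata", None) or {}
            if "saved_count" in metadata:
                print(f"Saved {metadata['saved_count']} entities, "
                      f"{len(metadata.get('errors', []))} errors")

entity_batch_logger = EntityBatchLogger()
```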
## 2. External Memory
External Memory provides a standalone memory system that operates independently from the crew's built-in memory. This is ideal for specialized memory providers or cross-application memory sharing.
@@ -911,10 +900,10 @@ crew = Crew(
crew = Crew(
    memory=True,
    embedder={
        "provider": "google-generativeai",
        "config": {
            "api_key": "your-api-key",
            "model_name": "gemini-embedding-001"
        }
    }
)
@@ -1052,8 +1041,8 @@ CrewAI emits the following memory-related events:
Track memory operation timing to optimize your application:
```python
from crewai.events import (
    BaseEventListener,
    MemoryQueryCompletedEvent,
    MemorySaveCompletedEvent
)
@@ -1087,8 +1076,8 @@ memory_monitor = MemoryPerformanceMonitor()
Log memory operations for debugging and insights:
```python
from crewai.events import (
    BaseEventListener,
    MemorySaveStartedEvent,
    MemoryQueryStartedEvent,
    MemoryRetrievalCompletedEvent
@@ -1128,8 +1117,8 @@ memory_logger = MemoryLogger()
Capture and respond to memory errors:
```python
from crewai.events import (
    BaseEventListener,
    MemorySaveFailedEvent,
    MemoryQueryFailedEvent
)
@@ -1178,8 +1167,8 @@ error_tracker = MemoryErrorTracker(notify_email="admin@example.com")
Memory events can be forwarded to analytics and monitoring platforms to track performance metrics, detect anomalies, and visualize memory usage patterns:
```python
from crewai.events import (
    BaseEventListener,
    MemoryQueryCompletedEvent,
    MemorySaveCompletedEvent
) )


@@ -2,7 +2,6 @@
title: Planning
description: Learn how to add planning to your CrewAI Crew and improve their performance.
icon: ruler-combined
mode: "wide"
---
## Overview


@@ -2,7 +2,6 @@
title: Processes
description: Detailed guide on workflow management through processes in CrewAI, with updated implementation details.
icon: bars-staggered
mode: "wide"
---
## Overview


@@ -2,7 +2,6 @@
title: Reasoning
description: "Learn how to enable and use agent reasoning to improve task execution."
icon: brain
mode: "wide"
---
## Overview
