Mirror of https://github.com/crewAIInc/crewAI.git (synced 2026-01-08 15:48:29 +00:00)

Compare commits: 140 commits
| Author | SHA1 | Date |
|---|---|---|
|  | 46846bcace |  |
|  | d71e91e8f2 |  |
|  | 9a212b8e29 |  |
|  | 67953b3a6a |  |
|  | a760923c50 |  |
|  | 1c4f44af80 |  |
|  | 09014215a9 |  |
|  | 0ccc155457 |  |
|  | 8945457883 |  |
|  | b787d7e591 |  |
|  | 25c0c030ce |  |
|  | f8deb0fd18 |  |
|  | f3c17a249b |  |
|  | 467ee2917e |  |
|  | b9dd166a6b |  |
|  | c73b36a4c5 |  |
|  | 0c020991c4 |  |
|  | be70a04153 |  |
|  | 0c359f4df8 |  |
|  | fe288dbe73 |  |
|  | dc63bc2319 |  |
|  | 8d0effafec |  |
|  | 1cdbe79b34 |  |
|  | 84328d9311 |  |
|  | 88d3c0fa97 |  |
|  | 75ff7dce0c |  |
|  | 38b0b125d3 |  |
|  | 9bd8ad51f7 |  |
|  | 0632a054ca |  |
|  | feec6b440e |  |
|  | e43c7debbd |  |
|  | 8ef9fe2cab |  |
|  | 807f97114f |  |
|  | bdafe0fac7 |  |
|  | 8e99d490b0 |  |
|  | 34b909367b |  |
|  | 22684b513e |  |
|  | 3e3b9df761 |  |
|  | 177294f588 |  |
|  | beef712646 |  |
|  | 6125b866fd |  |
|  | f2f994612c |  |
|  | 7fff2b654c |  |
|  | 34e09162ba |  |
|  | 24d1fad7ab |  |
|  | 9b8f31fa07 |  |
|  | d898d7c02c |  |
|  | f04c40babf |  |
|  | c456e5c5fa |  |
|  | 633e279b51 |  |
|  | a25778974d |  |
|  | 09f1ba6956 |  |
|  | 20704742e2 |  |
|  | 59180e9c9f |  |
|  | 3ce019b07b |  |
|  | 2355ec0733 |  |
|  | c925d2d519 |  |
|  | bc4e6a3127 |  |
|  | 37526c693b |  |
|  | c59173a762 |  |
|  | 4d8eec96e8 |  |
|  | 2025a26fc3 |  |
|  | bed9a3847a |  |
|  | 5239dc9859 |  |
|  | 52444ad390 |  |
|  | f070595e65 |  |
|  | 69c5eace2d |  |
|  | d88ac338d5 |  |
|  | 4ae8c36815 |  |
|  | b049b73f2e |  |
|  | d2b9c54931 |  |
|  | a928cde6ee |  |
|  | 9c84475691 |  |
|  | f3c5d1e351 |  |
|  | a978267fa2 |  |
|  | b759654e7d |  |
|  | 9da1f0c0aa |  |
|  | a559cedbd1 |  |
|  | bcc3e358cb |  |
|  | d160f0874a |  |
|  | 9fcf55198f |  |
|  | f46a846ddc |  |
|  | b546982690 |  |
|  | d7bdac12a2 |  |
|  | 528d812263 |  |
|  | ffd717c51a |  |
|  | fbe4aa4bd1 |  |
|  | c205d2e8de |  |
|  | fcb5b19b2e |  |
|  | 01f0111d52 |  |
|  | 6b52587c67 |  |
|  | 629f7f34ce |  |
|  | 0f1c173d02 |  |
|  | 19c5b9a35e |  |
|  | 1ed307b58c |  |
|  | d29867bbb6 |  |
|  | b2c278ed22 |  |
|  | f6aed9798b |  |
|  | 40a2d387a1 |  |
|  | 6f36d7003b |  |
|  | 9e5906c52f |  |
|  | fc521839e4 |  |
|  | e4cc9a664c |  |
|  | 7e6171d5bc |  |
|  | 61ad1fb112 |  |
|  | 54710a8711 |  |
|  | 5abf976373 |  |
|  | 329567153b |  |
|  | 60332e0b19 |  |
|  | 40932af3fa |  |
|  | e134e5305b |  |
|  | e229ef4e19 |  |
|  | 2e9eb8c32d |  |
|  | 4ebb5114ed |  |
|  | 70b083945f |  |
|  | 410db1ff39 |  |
|  | 5d6b4c922b |  |
|  | b07c0fc45c |  |
|  | 97853199c7 |  |
|  | 494ed7e671 |  |
|  | a83c57a2f2 |  |
|  | 08e15ab267 |  |
|  | 9728388ea7 |  |
|  | 4371cf5690 |  |
|  | d28daa26cd |  |
|  | a850813f2b |  |
|  | 5944a39629 |  |
|  | c594859ed0 |  |
|  | 2ee27efca7 |  |
|  | f6e13eb890 |  |
|  | e7b3ce27ca |  |
|  | dba27cf8b5 |  |
|  | 6469f224f6 |  |
|  | f3a63be215 |  |
|  | 01d8c189f0 |  |
|  | cc83c1ead5 |  |
|  | 7578901f6d |  |
|  | d1343b96ed |  |
|  | 42f2b4d551 |  |
|  | 0229390ad1 |  |
.env.test (new file, 161 lines)
@@ -0,0 +1,161 @@
# =============================================================================
# Test Environment Variables
# =============================================================================
# This file contains all environment variables needed to run tests locally
# in a way that mimics the GitHub Actions CI environment.

# =============================================================================

# -----------------------------------------------------------------------------
# LLM Provider API Keys
# -----------------------------------------------------------------------------
OPENAI_API_KEY=fake-api-key
ANTHROPIC_API_KEY=fake-anthropic-key
GEMINI_API_KEY=fake-gemini-key
AZURE_API_KEY=fake-azure-key
OPENROUTER_API_KEY=fake-openrouter-key

# -----------------------------------------------------------------------------
# AWS Credentials
# -----------------------------------------------------------------------------
AWS_ACCESS_KEY_ID=fake-aws-access-key
AWS_SECRET_ACCESS_KEY=fake-aws-secret-key
AWS_DEFAULT_REGION=us-east-1
AWS_REGION_NAME=us-east-1

# -----------------------------------------------------------------------------
# Azure OpenAI Configuration
# -----------------------------------------------------------------------------
AZURE_ENDPOINT=https://fake-azure-endpoint.openai.azure.com
AZURE_OPENAI_ENDPOINT=https://fake-azure-endpoint.openai.azure.com
AZURE_OPENAI_API_KEY=fake-azure-openai-key
AZURE_API_VERSION=2024-02-15-preview
OPENAI_API_VERSION=2024-02-15-preview

# -----------------------------------------------------------------------------
# Google Cloud Configuration
# -----------------------------------------------------------------------------
#GOOGLE_CLOUD_PROJECT=fake-gcp-project
#GOOGLE_CLOUD_LOCATION=us-central1

# -----------------------------------------------------------------------------
# OpenAI Configuration
# -----------------------------------------------------------------------------
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_API_BASE=https://api.openai.com/v1

# -----------------------------------------------------------------------------
# Search & Scraping Tool API Keys
# -----------------------------------------------------------------------------
SERPER_API_KEY=fake-serper-key
EXA_API_KEY=fake-exa-key
BRAVE_API_KEY=fake-brave-key
FIRECRAWL_API_KEY=fake-firecrawl-key
TAVILY_API_KEY=fake-tavily-key
SERPAPI_API_KEY=fake-serpapi-key
SERPLY_API_KEY=fake-serply-key
LINKUP_API_KEY=fake-linkup-key
PARALLEL_API_KEY=fake-parallel-key

# -----------------------------------------------------------------------------
# Exa Configuration
# -----------------------------------------------------------------------------
EXA_BASE_URL=https://api.exa.ai

# -----------------------------------------------------------------------------
# Web Scraping & Automation
# -----------------------------------------------------------------------------
BRIGHT_DATA_API_KEY=fake-brightdata-key
BRIGHT_DATA_ZONE=fake-zone
BRIGHTDATA_API_URL=https://api.brightdata.com
BRIGHTDATA_DEFAULT_TIMEOUT=600
BRIGHTDATA_DEFAULT_POLLING_INTERVAL=1

OXYLABS_USERNAME=fake-oxylabs-user
OXYLABS_PASSWORD=fake-oxylabs-pass

SCRAPFLY_API_KEY=fake-scrapfly-key
SCRAPEGRAPH_API_KEY=fake-scrapegraph-key

BROWSERBASE_API_KEY=fake-browserbase-key
BROWSERBASE_PROJECT_ID=fake-browserbase-project

HYPERBROWSER_API_KEY=fake-hyperbrowser-key
MULTION_API_KEY=fake-multion-key
APIFY_API_TOKEN=fake-apify-token

# -----------------------------------------------------------------------------
# Database & Vector Store Credentials
# -----------------------------------------------------------------------------
SINGLESTOREDB_URL=mysql://fake:fake@localhost:3306/fake
SINGLESTOREDB_HOST=localhost
SINGLESTOREDB_PORT=3306
SINGLESTOREDB_USER=fake-user
SINGLESTOREDB_PASSWORD=fake-password
SINGLESTOREDB_DATABASE=fake-database
SINGLESTOREDB_CONNECT_TIMEOUT=30

SNOWFLAKE_USER=fake-snowflake-user
SNOWFLAKE_PASSWORD=fake-snowflake-password
SNOWFLAKE_ACCOUNT=fake-snowflake-account
SNOWFLAKE_WAREHOUSE=fake-snowflake-warehouse
SNOWFLAKE_DATABASE=fake-snowflake-database
SNOWFLAKE_SCHEMA=fake-snowflake-schema

WEAVIATE_URL=http://localhost:8080
WEAVIATE_API_KEY=fake-weaviate-key

EMBEDCHAIN_DB_URI=sqlite:///test.db

# Databricks Credentials
DATABRICKS_HOST=https://fake-databricks.cloud.databricks.com
DATABRICKS_TOKEN=fake-databricks-token
DATABRICKS_CONFIG_PROFILE=fake-profile

# MongoDB Credentials
MONGODB_URI=mongodb://fake:fake@localhost:27017/fake

# -----------------------------------------------------------------------------
# CrewAI Platform & Enterprise
# -----------------------------------------------------------------------------
# setting CREWAI_PLATFORM_INTEGRATION_TOKEN causes these test to fail:
#=========================== short test summary info ============================
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_platform_context_manager_basic_usage - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_context_var_isolation_between_tests - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_multiple_sequential_context_managers - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#CREWAI_PLATFORM_INTEGRATION_TOKEN=fake-platform-token
CREWAI_PERSONAL_ACCESS_TOKEN=fake-personal-token
CREWAI_PLUS_URL=https://fake.crewai.com

# -----------------------------------------------------------------------------
# Other Service API Keys
# -----------------------------------------------------------------------------
ZAPIER_API_KEY=fake-zapier-key
PATRONUS_API_KEY=fake-patronus-key
MINDS_API_KEY=fake-minds-key
HF_TOKEN=fake-hf-token

# -----------------------------------------------------------------------------
# Feature Flags/Testing Modes
# -----------------------------------------------------------------------------
CREWAI_DISABLE_TELEMETRY=true
OTEL_SDK_DISABLED=true
CREWAI_TESTING=true
CREWAI_TRACING_ENABLED=false

# -----------------------------------------------------------------------------
# Testing/CI Configuration
# -----------------------------------------------------------------------------
# VCR recording mode: "none" (default), "new_episodes", "all", "once"
PYTEST_VCR_RECORD_MODE=none

# Set to "true" by GitHub when running in GitHub Actions
# GITHUB_ACTIONS=false

# -----------------------------------------------------------------------------
# Python Configuration
# -----------------------------------------------------------------------------
PYTHONUNBUFFERED=1
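A minimal sketch of how a local run can pick up these placeholder values with python-dotenv, mirroring what the workspace-level `conftest.py` further down in this diff does; the relative path assumes the script sits next to `.env.test` at the repository root.

```python
import os
from pathlib import Path

from dotenv import load_dotenv  # provided by the python-dotenv package

# Load the fake credentials first, then let a developer's own .env (if any) override them,
# which is the same order conftest.py uses.
repo_root = Path(__file__).parent
load_dotenv(repo_root / ".env.test", override=True)
load_dotenv(override=True)

# Any test that reads credentials now sees harmless placeholder values.
print(os.environ.get("OPENAI_API_KEY"))  # typically "fake-api-key"
```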
.github/codeql/codeql-config.yml (vendored, 19 changed lines)
@@ -2,19 +2,26 @@ name: "CodeQL Config"

paths-ignore:
  # Ignore template files - these are boilerplate code that shouldn't be analyzed
  - "src/crewai/cli/templates/**"
  - "lib/crewai/src/crewai/cli/templates/**"
  # Ignore test cassettes - these are test fixtures/recordings
  - "tests/cassettes/**"
  - "lib/crewai/tests/cassettes/**"
  - "lib/crewai-tools/tests/cassettes/**"
  # Ignore cache and build artifacts
  - ".cache/**"
  # Ignore documentation build artifacts
  - "docs/.cache/**"
  # Ignore experimental code
  - "lib/crewai/src/crewai/experimental/a2a/**"

paths:
  # Include all Python source code
  - "src/**"
  # Include tests (but exclude cassettes)
  - "tests/**"
  # Include all Python source code from workspace packages
  - "lib/crewai/src/**"
  - "lib/crewai-tools/src/**"
  - "lib/devtools/src/**"
  # Include tests (but exclude cassettes via paths-ignore)
  - "lib/crewai/tests/**"
  - "lib/crewai-tools/tests/**"
  - "lib/devtools/tests/**"

# Configure specific queries or packs if needed
# queries:
.github/dependabot.yml (vendored, new file, 11 lines)
@@ -0,0 +1,11 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file

version: 2
updates:
  - package-ecosystem: uv # See documentation for possible values
    directory: "/" # Location of package manifests
    schedule:
      interval: "weekly"
.github/workflows/docs-broken-links.yml (vendored, new file, 35 lines)
@@ -0,0 +1,35 @@
name: Check Documentation Broken Links

on:
  pull_request:
    paths:
      - "docs/**"
      - "docs.json"
  push:
    branches:
      - main
    paths:
      - "docs/**"
      - "docs.json"
  workflow_dispatch:

jobs:
  check-links:
    name: Check broken links
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Node
        uses: actions/setup-node@v4
        with:
          node-version: "latest"

      - name: Install Mintlify CLI
        run: npm i -g mintlify

      - name: Run broken link checker
        run: |
          # Auto-answer the prompt with yes command
          yes "" | mintlify broken-links || test $? -eq 141
        working-directory: ./docs
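The `|| test $? -eq 141` guard tolerates an exit status of 141, which by shell convention corresponds to termination by SIGPIPE (128 + 13) when one side of the pipe goes away early. A rough Python equivalent of the same tolerance is sketched below; it assumes the Mintlify CLI is installed and that the script runs from the repository root.

```python
import subprocess

# Run the same pipeline the workflow step uses and treat a SIGPIPE-style
# exit status (141 = 128 + SIGPIPE) the same as success.
result = subprocess.run('yes "" | mintlify broken-links', shell=True, cwd="docs")
if result.returncode not in (0, 141):
    raise SystemExit(result.returncode)
```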
.github/workflows/publish.yml (vendored, 27 changed lines)
@@ -1,19 +1,37 @@
name: Publish to PyPI

on:
  release:
    types: [ published ]
  repository_dispatch:
    types: [deployment-tests-passed]
  workflow_dispatch:
    inputs:
      release_tag:
        description: 'Release tag to publish'
        required: false
        type: string

jobs:
  build:
    if: github.event.release.prerelease == true
    name: Build packages
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - name: Determine release tag
        id: release
        run: |
          # Priority: workflow_dispatch input > repository_dispatch payload > default branch
          if [ -n "${{ inputs.release_tag }}" ]; then
            echo "tag=${{ inputs.release_tag }}" >> $GITHUB_OUTPUT
          elif [ -n "${{ github.event.client_payload.release_tag }}" ]; then
            echo "tag=${{ github.event.client_payload.release_tag }}" >> $GITHUB_OUTPUT
          else
            echo "tag=" >> $GITHUB_OUTPUT
          fi

      - uses: actions/checkout@v4
        with:
          ref: ${{ steps.release.outputs.tag || github.ref }}

      - name: Set up Python
        uses: actions/setup-python@v5
@@ -25,7 +43,7 @@ jobs:

      - name: Build packages
        run: |
          uv build --prerelease="allow" --all-packages
          uv build --all-packages
          rm dist/.gitignore

      - name: Upload artifacts
@@ -35,7 +53,6 @@ jobs:
          path: dist/

  publish:
    if: github.event.release.prerelease == true
    name: Publish to PyPI
    needs: build
    runs-on: ubuntu-latest
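The "Determine release tag" step encodes a simple precedence: an explicit `workflow_dispatch` input wins, then the `repository_dispatch` payload, and otherwise the checkout falls back to `github.ref`. A small sketch of that precedence as a plain function; the function and argument names are invented for illustration.

```python
def resolve_release_tag(dispatch_input: str | None, client_payload_tag: str | None) -> str:
    """Mirror the workflow's priority: manual input > dispatch payload > empty (default ref)."""
    if dispatch_input:
        return dispatch_input
    if client_payload_tag:
        return client_payload_tag
    return ""  # an empty tag makes the checkout step fall back to github.ref


print(resolve_release_tag(None, "1.2.3"))  # -> "1.2.3"
```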
.github/workflows/tests.yml (vendored, 18 changed lines)
@@ -5,18 +5,6 @@ on: [pull_request]
permissions:
  contents: read

env:
  OPENAI_API_KEY: fake-api-key
  PYTHONUNBUFFERED: 1
  BRAVE_API_KEY: fake-brave-key
  SNOWFLAKE_USER: fake-snowflake-user
  SNOWFLAKE_PASSWORD: fake-snowflake-password
  SNOWFLAKE_ACCOUNT: fake-snowflake-account
  SNOWFLAKE_WAREHOUSE: fake-snowflake-warehouse
  SNOWFLAKE_DATABASE: fake-snowflake-database
  SNOWFLAKE_SCHEMA: fake-snowflake-schema
  EMBEDCHAIN_DB_URI: sqlite:///test.db

jobs:
  tests:
    name: tests (${{ matrix.python-version }})
@@ -84,26 +72,20 @@ jobs:
          # fi

          cd lib/crewai && uv run pytest \
            --block-network \
            --timeout=30 \
            -vv \
            --splits 8 \
            --group ${{ matrix.group }} \
            $DURATIONS_ARG \
            --durations=10 \
            -n auto \
            --maxfail=3

      - name: Run tool tests (group ${{ matrix.group }} of 8)
        run: |
          cd lib/crewai-tools && uv run pytest \
            --block-network \
            --timeout=30 \
            -vv \
            --splits 8 \
            --group ${{ matrix.group }} \
            --durations=10 \
            -n auto \
            --maxfail=3
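The same invocation can be reproduced locally through `pytest.main`, assuming the plugins behind these flags are installed in the workspace (pytest-recording for `--block-network`, pytest-timeout, pytest-split for `--splits`/`--group`, pytest-xdist for `-n auto`); the group number below is just an example and the script is meant to be run from `lib/crewai`.

```python
import pytest

# Run split group 1 of 8 of the core suite with the network blocked,
# roughly as the CI job above does.
exit_code = pytest.main([
    "--block-network",   # pytest-recording: fail on unexpected live HTTP
    "--timeout=30",      # pytest-timeout
    "-vv",
    "--splits", "8",     # pytest-split
    "--group", "1",
    "--durations=10",
    "-n", "auto",        # pytest-xdist
    "--maxfail=3",
])
raise SystemExit(int(exit_code))
```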
.github/workflows/trigger-deployment-tests.yml (vendored, new file, 18 lines)
@@ -0,0 +1,18 @@
name: Trigger Deployment Tests

on:
  release:
    types: [published]

jobs:
  trigger:
    name: Trigger deployment tests
    runs-on: ubuntu-latest
    steps:
      - name: Trigger deployment tests
        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.CREWAI_DEPLOYMENTS_PAT }}
          repository: ${{ secrets.CREWAI_DEPLOYMENTS_REPOSITORY }}
          event-type: crewai-release
          client-payload: '{"release_tag": "${{ github.event.release.tag_name }}", "release_name": "${{ github.event.release.name }}"}'
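Under the hood, `peter-evans/repository-dispatch` posts to GitHub's repository dispatch REST endpoint. For reference, a hedged sketch of the equivalent direct call with `requests`; the token, repository, and tag values are placeholders, not the secrets used in this workflow.

```python
import requests

token = "ghp_example_token"                      # placeholder PAT with access to the target repo
repository = "example-org/crewai-deployments"    # placeholder owner/repo

response = requests.post(
    f"https://api.github.com/repos/{repository}/dispatches",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "event_type": "crewai-release",
        "client_payload": {"release_tag": "v1.0.0", "release_name": "v1.0.0"},
    },
    timeout=30,
)
response.raise_for_status()  # GitHub returns 204 No Content on success
```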
@@ -3,19 +3,31 @@ repos:
    hooks:
      - id: ruff
        name: ruff
        entry: uv run ruff check
        entry: bash -c 'source .venv/bin/activate && uv run ruff check --config pyproject.toml "$@"' --
        language: system
        pass_filenames: true
        types: [python]
        exclude: ^lib/crewai/
      - id: ruff-format
        name: ruff-format
        entry: uv run ruff format
        entry: bash -c 'source .venv/bin/activate && uv run ruff format --config pyproject.toml "$@"' --
        language: system
        pass_filenames: true
        types: [python]
        exclude: ^lib/crewai/
      - id: mypy
        name: mypy
        entry: uv run mypy
        entry: bash -c 'source .venv/bin/activate && uv run mypy --config-file pyproject.toml "$@"' --
        language: system
        pass_filenames: true
        types: [python]
        exclude: ^lib/crewai/
        exclude: ^(lib/crewai/src/crewai/cli/templates/|lib/crewai/tests/|lib/crewai-tools/tests/)
  - repo: https://github.com/astral-sh/uv-pre-commit
    rev: 0.9.3
    hooks:
      - id: uv-lock
  - repo: https://github.com/commitizen-tools/commitizen
    rev: v4.10.1
    hooks:
      - id: commitizen
      - id: commitizen-branch
        stages: [ pre-push ]
README.md (29 changed lines)
@@ -57,7 +57,7 @@
> It empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario.

- **CrewAI Crews**: Optimize for autonomy and collaborative intelligence.
- **CrewAI Flows**: Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively
- **CrewAI Flows**: The **enterprise and production architecture** for building and deploying multi-agent systems. Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively

With over 100,000 developers certified through our community courses at [learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
standard for enterprise-ready AI automation.
@@ -124,6 +124,7 @@ Setup and run your first CrewAI agents by following this tutorial.
[](https://www.youtube.com/watch?v=-kSOTtYzgEw "CrewAI Getting Started Tutorial")

###

Learning Resources

Learn CrewAI through our comprehensive courses:
@@ -141,6 +142,7 @@ CrewAI offers two powerful, complementary approaches that work seamlessly togeth
- Dynamic task delegation and collaboration
- Specialized roles with defined goals and expertise
- Flexible problem-solving approaches

2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:

- Fine-grained control over execution paths for real-world scenarios
@@ -166,13 +168,13 @@ Ensure you have Python >=3.10 <3.14 installed on your system. CrewAI uses [UV](h
First, install CrewAI:

```shell
pip install crewai
uv pip install crewai
```

If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command:

```shell
pip install 'crewai[tools]'
uv pip install 'crewai[tools]'
```

The command above installs the basic package and also adds extra components which require more dependencies to function.
@@ -185,14 +187,15 @@ If you encounter issues during installation or usage, here are some common solut

1. **ModuleNotFoundError: No module named 'tiktoken'**

   - Install tiktoken explicitly: `pip install 'crewai[embeddings]'`
   - If using embedchain or other tools: `pip install 'crewai[tools]'`
   - Install tiktoken explicitly: `uv pip install 'crewai[embeddings]'`
   - If using embedchain or other tools: `uv pip install 'crewai[tools]'`

2. **Failed building wheel for tiktoken**

   - Ensure Rust compiler is installed (see installation steps above)
   - For Windows: Verify Visual C++ Build Tools are installed
   - Try upgrading pip: `pip install --upgrade pip`
   - If issues persist, use a pre-built wheel: `pip install tiktoken --prefer-binary`
   - Try upgrading pip: `uv pip install --upgrade pip`
   - If issues persist, use a pre-built wheel: `uv pip install tiktoken --prefer-binary`

### 2. Setting Up Your Crew with the YAML Configuration

@@ -270,7 +273,7 @@ reporting_analyst:

**tasks.yaml**

```yaml
````yaml
# src/my_project/config/tasks.yaml
research_task:
  description: >
@@ -290,7 +293,7 @@ reporting_task:
    Formatted as markdown without '```'
  agent: reporting_analyst
  output_file: report.md
```
````

**crew.py**

@@ -556,7 +559,7 @@ Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-

- **LangGraph**: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.

*P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb)).*
_P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb))._

- **Autogen**: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
@@ -611,7 +614,7 @@ uv build
### Installing Locally

```bash
pip install dist/*.tar.gz
uv pip install dist/*.tar.gz
```

## Telemetry

@@ -687,13 +690,13 @@ A: CrewAI is a standalone, lean, and fast Python framework built specifically fo
A: Install CrewAI using pip:

```shell
pip install crewai
uv pip install crewai
```

For additional tools, use:

```shell
pip install 'crewai[tools]'
uv pip install 'crewai[tools]'
```

### Q: Does CrewAI depend on LangChain?
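After either install command in this README, a quick sanity check that the package imports and reports its installed version; `importlib.metadata` works for any installed distribution, so no package-specific attribute is assumed.

```python
from importlib.metadata import version

import crewai  # fails fast if the install is broken

print("crewai", version("crewai"))
```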
conftest.py (new file, 199 lines)
@@ -0,0 +1,199 @@
"""Pytest configuration for crewAI workspace."""

from collections.abc import Generator
import os
from pathlib import Path
import tempfile
from typing import Any

from dotenv import load_dotenv
import pytest
from vcr.request import Request  # type: ignore[import-untyped]


env_test_path = Path(__file__).parent / ".env.test"
load_dotenv(env_test_path, override=True)
load_dotenv(override=True)


@pytest.fixture(autouse=True, scope="function")
def cleanup_event_handlers() -> Generator[None, Any, None]:
    """Clean up event bus handlers after each test to prevent test pollution."""
    yield

    try:
        from crewai.events.event_bus import crewai_event_bus

        with crewai_event_bus._rwlock.w_locked():
            crewai_event_bus._sync_handlers.clear()
            crewai_event_bus._async_handlers.clear()
    except Exception:  # noqa: S110
        pass


@pytest.fixture(autouse=True, scope="function")
def setup_test_environment() -> Generator[None, Any, None]:
    """Setup test environment for crewAI workspace."""
    with tempfile.TemporaryDirectory() as temp_dir:
        storage_dir = Path(temp_dir) / "crewai_test_storage"
        storage_dir.mkdir(parents=True, exist_ok=True)

        if not storage_dir.exists() or not storage_dir.is_dir():
            raise RuntimeError(
                f"Failed to create test storage directory: {storage_dir}"
            )

        try:
            test_file = storage_dir / ".permissions_test"
            test_file.touch()
            test_file.unlink()
        except (OSError, IOError) as e:
            raise RuntimeError(
                f"Test storage directory {storage_dir} is not writable: {e}"
            ) from e

        os.environ["CREWAI_STORAGE_DIR"] = str(storage_dir)
        os.environ["CREWAI_TESTING"] = "true"

        try:
            yield
        finally:
            os.environ.pop("CREWAI_TESTING", "true")
            os.environ.pop("CREWAI_STORAGE_DIR", None)
            os.environ.pop("CREWAI_DISABLE_TELEMETRY", "true")
            os.environ.pop("OTEL_SDK_DISABLED", "true")
            os.environ.pop("OPENAI_BASE_URL", "https://api.openai.com/v1")
            os.environ.pop("OPENAI_API_BASE", "https://api.openai.com/v1")


HEADERS_TO_FILTER = {
    "authorization": "AUTHORIZATION-XXX",
    "content-security-policy": "CSP-FILTERED",
    "cookie": "COOKIE-XXX",
    "set-cookie": "SET-COOKIE-XXX",
    "permissions-policy": "PERMISSIONS-POLICY-XXX",
    "referrer-policy": "REFERRER-POLICY-XXX",
    "strict-transport-security": "STS-XXX",
    "x-content-type-options": "X-CONTENT-TYPE-XXX",
    "x-frame-options": "X-FRAME-OPTIONS-XXX",
    "x-permitted-cross-domain-policies": "X-PERMITTED-XXX",
    "x-request-id": "X-REQUEST-ID-XXX",
    "x-runtime": "X-RUNTIME-XXX",
    "x-xss-protection": "X-XSS-PROTECTION-XXX",
    "x-stainless-arch": "X-STAINLESS-ARCH-XXX",
    "x-stainless-os": "X-STAINLESS-OS-XXX",
    "x-stainless-read-timeout": "X-STAINLESS-READ-TIMEOUT-XXX",
    "cf-ray": "CF-RAY-XXX",
    "etag": "ETAG-XXX",
    "Strict-Transport-Security": "STS-XXX",
    "access-control-expose-headers": "ACCESS-CONTROL-XXX",
    "openai-organization": "OPENAI-ORG-XXX",
    "openai-project": "OPENAI-PROJECT-XXX",
    "x-ratelimit-limit-requests": "X-RATELIMIT-LIMIT-REQUESTS-XXX",
    "x-ratelimit-limit-tokens": "X-RATELIMIT-LIMIT-TOKENS-XXX",
    "x-ratelimit-remaining-requests": "X-RATELIMIT-REMAINING-REQUESTS-XXX",
    "x-ratelimit-remaining-tokens": "X-RATELIMIT-REMAINING-TOKENS-XXX",
    "x-ratelimit-reset-requests": "X-RATELIMIT-RESET-REQUESTS-XXX",
    "x-ratelimit-reset-tokens": "X-RATELIMIT-RESET-TOKENS-XXX",
    "x-goog-api-key": "X-GOOG-API-KEY-XXX",
    "api-key": "X-API-KEY-XXX",
    "User-Agent": "X-USER-AGENT-XXX",
    "apim-request-id:": "X-API-CLIENT-REQUEST-ID-XXX",
    "azureml-model-session": "AZUREML-MODEL-SESSION-XXX",
    "x-ms-client-request-id": "X-MS-CLIENT-REQUEST-ID-XXX",
    "x-ms-region": "X-MS-REGION-XXX",
    "apim-request-id": "APIM-REQUEST-ID-XXX",
    "x-api-key": "X-API-KEY-XXX",
    "anthropic-organization-id": "ANTHROPIC-ORGANIZATION-ID-XXX",
    "request-id": "REQUEST-ID-XXX",
    "anthropic-ratelimit-input-tokens-limit": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-LIMIT-XXX",
    "anthropic-ratelimit-input-tokens-remaining": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-REMAINING-XXX",
    "anthropic-ratelimit-input-tokens-reset": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-RESET-XXX",
    "anthropic-ratelimit-output-tokens-limit": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-LIMIT-XXX",
    "anthropic-ratelimit-output-tokens-remaining": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-REMAINING-XXX",
    "anthropic-ratelimit-output-tokens-reset": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-RESET-XXX",
    "anthropic-ratelimit-tokens-limit": "ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX",
    "anthropic-ratelimit-tokens-remaining": "ANTHROPIC-RATELIMIT-TOKENS-REMAINING-XXX",
    "anthropic-ratelimit-tokens-reset": "ANTHROPIC-RATELIMIT-TOKENS-RESET-XXX",
    "x-amz-date": "X-AMZ-DATE-XXX",
    "amz-sdk-invocation-id": "AMZ-SDK-INVOCATION-ID-XXX",
    "accept-encoding": "ACCEPT-ENCODING-XXX",
    "x-amzn-requestid": "X-AMZN-REQUESTID-XXX",
    "x-amzn-RequestId": "X-AMZN-REQUESTID-XXX",
    "x-a2a-notification-token": "X-A2A-NOTIFICATION-TOKEN-XXX",
    "x-a2a-version": "X-A2A-VERSION-XXX",
}


def _filter_request_headers(request: Request) -> Request:  # type: ignore[no-any-unimported]
    """Filter sensitive headers from request before recording."""
    for header_name, replacement in HEADERS_TO_FILTER.items():
        for variant in [header_name, header_name.upper(), header_name.title()]:
            if variant in request.headers:
                request.headers[variant] = [replacement]

    request.method = request.method.upper()
    return request


def _filter_response_headers(response: dict[str, Any]) -> dict[str, Any]:
    """Filter sensitive headers from response before recording."""
    # Remove Content-Encoding to prevent decompression issues on replay
    for encoding_header in ["Content-Encoding", "content-encoding"]:
        response["headers"].pop(encoding_header, None)

    for header_name, replacement in HEADERS_TO_FILTER.items():
        for variant in [header_name, header_name.upper(), header_name.title()]:
            if variant in response["headers"]:
                response["headers"][variant] = [replacement]
    return response


@pytest.fixture(scope="module")
def vcr_cassette_dir(request: Any) -> str:
    """Generate cassette directory path based on test module location.

    Organizes cassettes to mirror test directory structure within each package:
        lib/crewai/tests/llms/google/test_google.py -> lib/crewai/tests/cassettes/llms/google/
        lib/crewai-tools/tests/tools/test_search.py -> lib/crewai-tools/tests/cassettes/tools/
    """
    test_file = Path(request.fspath)

    for parent in test_file.parents:
        if parent.name in ("crewai", "crewai-tools") and parent.parent.name == "lib":
            package_root = parent
            break
    else:
        package_root = test_file.parent

    tests_root = package_root / "tests"
    test_dir = test_file.parent

    if test_dir != tests_root:
        relative_path = test_dir.relative_to(tests_root)
        cassette_dir = tests_root / "cassettes" / relative_path
    else:
        cassette_dir = tests_root / "cassettes"

    cassette_dir.mkdir(parents=True, exist_ok=True)

    return str(cassette_dir)


@pytest.fixture(scope="module")
def vcr_config(vcr_cassette_dir: str) -> dict[str, Any]:
    """Configure VCR with organized cassette storage."""
    config = {
        "cassette_library_dir": vcr_cassette_dir,
        "record_mode": os.getenv("PYTEST_VCR_RECORD_MODE", "once"),
        "filter_headers": [(k, v) for k, v in HEADERS_TO_FILTER.items()],
        "before_record_request": _filter_request_headers,
        "before_record_response": _filter_response_headers,
        "filter_query_parameters": ["key"],
        "match_on": ["method", "scheme", "host", "port", "path"],
    }

    if os.getenv("GITHUB_ACTIONS") == "true":
        config["record_mode"] = "none"

    return config
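With this configuration, an individual test opts into cassette replay through pytest-recording's `vcr` marker, and the module-scoped `vcr_config`/`vcr_cassette_dir` fixtures above are applied automatically. The test below is only an illustrative sketch; the URL and assertion are not part of the suite.

```python
import pytest
import requests


@pytest.mark.vcr  # pytest-recording: record/replay HTTP through the VCR config above
def test_example_http_call() -> None:
    # A first run with PYTEST_VCR_RECORD_MODE=once records a cassette under
    # tests/cassettes/...; later runs (and CI, where record_mode is "none")
    # replay it without touching the network.
    response = requests.get("https://api.example.com/health")
    assert response.status_code == 200
```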
@@ -116,6 +116,7 @@
    "en/concepts/tasks",
    "en/concepts/crews",
    "en/concepts/flows",
    "en/concepts/production-architecture",
    "en/concepts/knowledge",
    "en/concepts/llms",
    "en/concepts/processes",
@@ -134,6 +135,7 @@
  "group": "MCP Integration",
  "pages": [
    "en/mcp/overview",
    "en/mcp/dsl-integration",
    "en/mcp/stdio",
    "en/mcp/sse",
    "en/mcp/streamable-http",
@@ -252,7 +254,8 @@
  "pages": [
    "en/tools/integration/overview",
    "en/tools/integration/bedrockinvokeagenttool",
    "en/tools/integration/crewaiautomationtool"
    "en/tools/integration/crewaiautomationtool",
    "en/tools/integration/mergeagenthandlertool"
  ]
},
{
@@ -275,6 +278,7 @@
    "en/observability/overview",
    "en/observability/arize-phoenix",
    "en/observability/braintrust",
    "en/observability/datadog",
    "en/observability/langdb",
    "en/observability/langfuse",
    "en/observability/langtrace",
@@ -305,13 +309,17 @@
    "en/learn/hierarchical-process",
    "en/learn/human-input-on-execution",
    "en/learn/human-in-the-loop",
    "en/learn/human-feedback-in-flows",
    "en/learn/kickoff-async",
    "en/learn/kickoff-for-each",
    "en/learn/llm-connections",
    "en/learn/multimodal-agents",
    "en/learn/replay-tasks-from-latest-crew-kickoff",
    "en/learn/sequential-process",
    "en/learn/using-annotations"
    "en/learn/using-annotations",
    "en/learn/execution-hooks",
    "en/learn/llm-hooks",
    "en/learn/tool-hooks"
  ]
},
{
@@ -552,6 +560,7 @@
    "pt-BR/concepts/tasks",
    "pt-BR/concepts/crews",
    "pt-BR/concepts/flows",
    "pt-BR/concepts/production-architecture",
    "pt-BR/concepts/knowledge",
    "pt-BR/concepts/llms",
    "pt-BR/concepts/processes",
@@ -570,6 +579,7 @@
  "group": "Integração MCP",
  "pages": [
    "pt-BR/mcp/overview",
    "pt-BR/mcp/dsl-integration",
    "pt-BR/mcp/stdio",
    "pt-BR/mcp/sse",
    "pt-BR/mcp/streamable-http",
@@ -695,9 +705,11 @@
{
  "group": "Observabilidade",
  "pages": [
    "pt-BR/observability/tracing",
    "pt-BR/observability/overview",
    "pt-BR/observability/arize-phoenix",
    "pt-BR/observability/braintrust",
    "pt-BR/observability/datadog",
    "pt-BR/observability/langdb",
    "pt-BR/observability/langfuse",
    "pt-BR/observability/langtrace",
@@ -727,13 +739,17 @@
    "pt-BR/learn/hierarchical-process",
    "pt-BR/learn/human-input-on-execution",
    "pt-BR/learn/human-in-the-loop",
    "pt-BR/learn/human-feedback-in-flows",
    "pt-BR/learn/kickoff-async",
    "pt-BR/learn/kickoff-for-each",
    "pt-BR/learn/llm-connections",
    "pt-BR/learn/multimodal-agents",
    "pt-BR/learn/replay-tasks-from-latest-crew-kickoff",
    "pt-BR/learn/sequential-process",
    "pt-BR/learn/using-annotations"
    "pt-BR/learn/using-annotations",
    "pt-BR/learn/execution-hooks",
    "pt-BR/learn/llm-hooks",
    "pt-BR/learn/tool-hooks"
  ]
},
{
@@ -971,6 +987,7 @@
    "ko/concepts/tasks",
    "ko/concepts/crews",
    "ko/concepts/flows",
    "ko/concepts/production-architecture",
    "ko/concepts/knowledge",
    "ko/concepts/llms",
    "ko/concepts/processes",
@@ -989,6 +1006,7 @@
  "group": "MCP 통합",
  "pages": [
    "ko/mcp/overview",
    "ko/mcp/dsl-integration",
    "ko/mcp/stdio",
    "ko/mcp/sse",
    "ko/mcp/streamable-http",
@@ -1126,9 +1144,11 @@
{
  "group": "Observability",
  "pages": [
    "ko/observability/tracing",
    "ko/observability/overview",
    "ko/observability/arize-phoenix",
    "ko/observability/braintrust",
    "ko/observability/datadog",
    "ko/observability/langdb",
    "ko/observability/langfuse",
    "ko/observability/langtrace",
@@ -1158,13 +1178,17 @@
    "ko/learn/hierarchical-process",
    "ko/learn/human-input-on-execution",
    "ko/learn/human-in-the-loop",
    "ko/learn/human-feedback-in-flows",
    "ko/learn/kickoff-async",
    "ko/learn/kickoff-for-each",
    "ko/learn/llm-connections",
    "ko/learn/multimodal-agents",
    "ko/learn/replay-tasks-from-latest-crew-kickoff",
    "ko/learn/sequential-process",
    "ko/learn/using-annotations"
    "ko/learn/using-annotations",
    "ko/learn/execution-hooks",
    "ko/learn/llm-hooks",
    "ko/learn/tool-hooks"
  ]
},
{
@@ -21,11 +21,12 @@ Welcome to the CrewAI AMP API reference. This API allows you to programmatically
  </Step>

  <Step title="Start a Crew Execution">
    Call `POST /kickoff` with your inputs to start the crew execution and receive a `kickoff_id`.
    Call `POST /kickoff` with your inputs to start the crew execution and receive
    a `kickoff_id`.
  </Step>

  <Step title="Monitor Progress">
    Use `GET /status/{kickoff_id}` to check execution status and retrieve results.
    Use `GET /{kickoff_id}/status` to check execution status and retrieve results.
  </Step>
</Steps>

@@ -41,12 +42,13 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
### Token Types

| Token Type | Scope | Use Case |
|:-----------|:--------|:----------|
| :-------------------- | :------------------------ | :----------------------------------------------------------- |
| **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integration |
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |

<Tip>
You can find both token types in the Status tab of your crew's detail page in the CrewAI AMP dashboard.
You can find both token types in the Status tab of your crew's detail page in
the CrewAI AMP dashboard.
</Tip>

## Base URL
@@ -63,7 +65,7 @@ Replace `your-crew-name` with your actual crew's URL from the dashboard.

1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /status/{kickoff_id}` until completion
3. **Monitoring**: Poll `GET /{kickoff_id}/status` until completion
4. **Results**: Extract the final output from the completed response

## Error Handling
@@ -71,7 +73,7 @@ Replace `your-crew-name` with your actual crew's URL from the dashboard.
The API uses standard HTTP status codes:

| Code | Meaning |
|------|:--------|
| ----- | :----------------------------------------- |
| `200` | Success |
| `400` | Bad Request - Invalid input format |
| `401` | Unauthorized - Invalid bearer token |
@@ -82,10 +84,14 @@ The API uses standard HTTP status codes:
## Interactive Testing

<Info>
**Why no "Send" button?** Since each CrewAI AMP user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.
**Why no "Send" button?** Since each CrewAI AMP user has their own unique crew
URL, we use **reference mode** instead of an interactive playground to avoid
confusion. This shows you exactly what the requests should look like without
non-functional send buttons.
</Info>

Each endpoint page shows you:

- ✅ **Exact request format** with all parameters
- ✅ **Response examples** for success and error cases
- ✅ **Code samples** in multiple languages (cURL, Python, JavaScript, etc.)
@@ -103,6 +109,7 @@ Each endpoint page shows you:
</CardGroup>

**Example workflow:**

1. **Copy this cURL example** from any endpoint page
2. **Replace `your-actual-crew-name.crewai.com`** with your real crew URL
3. **Replace the Bearer token** with your real token from the dashboard
@@ -111,10 +118,18 @@ Each endpoint page shows you:
## Need Help?

<CardGroup cols={2}>
  <Card title="Enterprise Support" icon="headset" href="mailto:support@crewai.com">
  <Card
    title="Enterprise Support"
    icon="headset"
    href="mailto:support@crewai.com"
  >
    Get help with API integration and troubleshooting
  </Card>
  <Card title="Enterprise Dashboard" icon="chart-line" href="https://app.crewai.com">
  <Card
    title="Enterprise Dashboard"
    icon="chart-line"
    href="https://app.crewai.com"
  >
    Manage your crews and view execution logs
  </Card>
</CardGroup>
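Putting the documented flow together, a sketch in Python that starts a run and polls the status endpoint described above; the crew URL, bearer token, input fields, and the `state` field checked in the loop are placeholders to adapt to your own crew's API.

```python
import time

import requests

BASE_URL = "https://your-actual-crew-name.crewai.com"  # placeholder crew URL
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}   # placeholder token

# Execution: start the crew and capture the kickoff_id
kickoff = requests.post(
    f"{BASE_URL}/kickoff",
    headers=HEADERS,
    json={"inputs": {"topic": "AI agents"}},  # shape depends on what GET /inputs reports
    timeout=30,
)
kickoff.raise_for_status()
kickoff_id = kickoff.json()["kickoff_id"]

# Monitoring: poll GET /{kickoff_id}/status until the run finishes
while True:
    status = requests.get(
        f"{BASE_URL}/{kickoff_id}/status", headers=HEADERS, timeout=30
    ).json()
    if status.get("state") in ("SUCCESS", "FAILED"):  # field name and values are illustrative
        break
    time.sleep(5)

print(status)
```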
@@ -1,8 +1,6 @@
---
title: "GET /status/{kickoff_id}"
title: "GET /{kickoff_id}/status"
description: "Get execution status"
openapi: "/enterprise-api.en.yaml GET /status/{kickoff_id}"
openapi: "/enterprise-api.en.yaml GET /{kickoff_id}/status"
mode: "wide"
---
@@ -8,6 +8,7 @@ mode: "wide"
## Overview of an Agent

In the CrewAI framework, an `Agent` is an autonomous unit that can:

- Perform specific tasks
- Make decisions based on its role and goal
- Use tools to accomplish objectives
@@ -16,7 +17,10 @@ In the CrewAI framework, an `Agent` is an autonomous unit that can:
- Delegate tasks when allowed

<Tip>
Think of an agent as a specialized team member with specific skills, expertise, and responsibilities. For example, a `Researcher` agent might excel at gathering and analyzing information, while a `Writer` agent might be better at creating content.
Think of an agent as a specialized team member with specific skills,
expertise, and responsibilities. For example, a `Researcher` agent might excel
at gathering and analyzing information, while a `Writer` agent might be better
at creating content.
</Tip>

<Note type="info" title="Enterprise Enhancement: Visual Agent Builder">
@@ -25,6 +29,7 @@ CrewAI AMP includes a Visual Agent Builder that simplifies agent creation and co


The Visual Agent Builder enables:

- Intuitive agent configuration with form-based interfaces
- Real-time testing and validation
- Template library with pre-configured agent types
@@ -34,7 +39,7 @@ The Visual Agent Builder enables:
## Agent Attributes

| Attribute | Parameter | Type | Description |
| :-------------------------------------- | :----------------------- | :---------------------------- | :------------------------------------------------------------------------------------------------------------------- |
| :-------------------------------------- | :----------------------- | :------------------------------------ | :------------------------------------------------------------------------------------------------------- |
| **Role** | `role` | `str` | Defines the agent's function and expertise within the crew. |
| **Goal** | `goal` | `str` | The individual objective that guides the agent's decision-making. |
| **Backstory** | `backstory` | `str` | Provides context and personality to the agent, enriching interactions. |
@@ -137,7 +142,8 @@ class LatestAiDevelopmentCrew():
```

<Note>
The names you use in your YAML files (`agents.yaml`) should match the method names in your Python code.
The names you use in your YAML files (`agents.yaml`) should match the method
names in your Python code.
</Note>

### Direct Code Definition
@@ -184,6 +190,7 @@ agent = Agent(
Let's break down some key parameter combinations for common use cases:

#### Basic Research Agent

```python Code
research_agent = Agent(
    role="Research Analyst",
@@ -195,6 +202,7 @@ research_agent = Agent(
```

#### Code Development Agent

```python Code
dev_agent = Agent(
    role="Senior Python Developer",
@@ -208,6 +216,7 @@ dev_agent = Agent(
```

#### Long-Running Analysis Agent

```python Code
analysis_agent = Agent(
    role="Data Analyst",
@@ -221,6 +230,7 @@ analysis_agent = Agent(
```

#### Custom Template Agent

```python Code
custom_agent = Agent(
    role="Customer Service Representative",
@@ -236,6 +246,7 @@ custom_agent = Agent(
```

#### Date-Aware Agent with Reasoning

```python Code
strategic_agent = Agent(
    role="Market Analyst",
@@ -250,6 +261,7 @@ strategic_agent = Agent(
```

#### Reasoning Agent

```python Code
reasoning_agent = Agent(
    role="Strategic Planner",
@@ -263,6 +275,7 @@ reasoning_agent = Agent(
```

#### Multimodal Agent

```python Code
multimodal_agent = Agent(
    role="Visual Content Analyst",
@@ -276,52 +289,64 @@ multimodal_agent = Agent(
### Parameter Details

#### Critical Parameters

- `role`, `goal`, and `backstory` are required and shape the agent's behavior
- `llm` determines the language model used (default: OpenAI's GPT-4)

#### Memory and Context

- `memory`: Enable to maintain conversation history
- `respect_context_window`: Prevents token limit issues
- `knowledge_sources`: Add domain-specific knowledge bases

#### Execution Control

- `max_iter`: Maximum attempts before giving best answer
- `max_execution_time`: Timeout in seconds
- `max_rpm`: Rate limiting for API calls
- `max_retry_limit`: Retries on error

#### Code Execution

- `allow_code_execution`: Must be True to run code
- `code_execution_mode`:
  - `"safe"`: Uses Docker (recommended for production)
  - `"unsafe"`: Direct execution (use only in trusted environments)

<Note>
This runs a default Docker image. If you want to configure the Docker image, check out the Code Interpreter Tool in the tools section.
Add the code interpreter tool as a tool parameter on the agent.
This runs a default Docker image. If you want to configure the Docker image,
check out the Code Interpreter Tool in the tools section. Add the code
interpreter tool as a tool parameter on the agent.
</Note>

#### Advanced Features

- `multimodal`: Enable multimodal capabilities for processing text and visual content
- `reasoning`: Enable agent to reflect and create plans before executing tasks
- `inject_date`: Automatically inject current date into task descriptions

#### Templates

- `system_template`: Defines agent's core behavior
- `prompt_template`: Structures input format
- `response_template`: Formats agent responses

<Note>
When using custom templates, ensure that both `system_template` and `prompt_template` are defined. The `response_template` is optional but recommended for consistent output formatting.
When using custom templates, ensure that both `system_template` and
`prompt_template` are defined. The `response_template` is optional but
recommended for consistent output formatting.
</Note>

<Note>
When using custom templates, you can use variables like `{role}`, `{goal}`, and `{backstory}` in your templates. These will be automatically populated during execution.
When using custom templates, you can use variables like `{role}`, `{goal}`,
and `{backstory}` in your templates. These will be automatically populated
during execution.
</Note>
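As an illustration of the two notes above, a sketch of an agent configured with all three templates; the template strings themselves are invented for the example, and the `{role}`, `{goal}`, and `{backstory}` placeholders are the ones the documentation says are populated at execution time.

```python
from crewai import Agent

templated_agent = Agent(
    role="Customer Support Analyst",
    goal="Resolve support questions clearly and accurately",
    backstory="An experienced support engineer who has triaged thousands of tickets.",
    # system_template and prompt_template are defined together, per the note above;
    # the placeholders below are filled in automatically during execution.
    system_template="You are {role}. Your goal is: {goal}. Background: {backstory}",
    prompt_template="Task for {role}: please address the request below.",
    response_template="Answer from {role}:",  # optional, kept for consistent formatting
)
```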
## Agent Tools

Agents can be equipped with various tools to enhance their capabilities. CrewAI supports tools from:

- [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools)
- [LangChain Tools](https://python.langchain.com/docs/integrations/tools)

@@ -360,7 +385,8 @@ analyst = Agent(
```

<Note>
When `memory` is enabled, the agent will maintain context across multiple interactions, improving its ability to handle complex, multi-step tasks.
When `memory` is enabled, the agent will maintain context across multiple
interactions, improving its ability to handle complex, multi-step tasks.
</Note>

## Context Window Management
@@ -390,6 +416,7 @@ smart_agent = Agent(
```

**What happens when context limits are exceeded:**

- ⚠️ **Warning message**: `"Context length exceeded. Summarizing content to fit the model context window."`
- 🔄 **Automatic summarization**: CrewAI intelligently summarizes the conversation history
- ✅ **Continued execution**: Task execution continues seamlessly with the summarized context
@@ -411,6 +438,7 @@ strict_agent = Agent(
```

**What happens when context limits are exceeded:**

- ❌ **Error message**: `"Context length exceeded. Consider using smaller text or RAG tools from crewai_tools."`
- 🛑 **Execution stops**: Task execution halts immediately
- 🔧 **Manual intervention required**: You need to modify your approach
@@ -418,6 +446,7 @@ strict_agent = Agent(
### Choosing the Right Setting

#### Use `respect_context_window=True` (Default) when:

- **Processing large documents** that might exceed context limits
- **Long-running conversations** where some summarization is acceptable
- **Research tasks** where general context is more important than exact details
@@ -436,6 +465,7 @@ document_processor = Agent(
```

#### Use `respect_context_window=False` when:

- **Precision is critical** and information loss is unacceptable
- **Legal or medical tasks** requiring complete context
- **Code review** where missing details could introduce bugs
@@ -458,6 +488,7 @@ precision_agent = Agent(
When dealing with very large datasets, consider these strategies:

#### 1. Use RAG Tools

```python Code
from crewai_tools import RagTool

@@ -475,6 +506,7 @@ rag_agent = Agent(
```

#### 2. Use Knowledge Sources

```python Code
# Use knowledge sources instead of large prompts
knowledge_agent = Agent(
@@ -498,6 +530,7 @@ knowledge_agent = Agent(
### Troubleshooting Context Issues

**If you're getting context limit errors:**

```python Code
# Quick fix: Enable automatic handling
agent.respect_context_window = True
@@ -511,6 +544,7 @@ agent.tools = [RagTool()]
```

**If automatic summarization loses important information:**

```python Code
# Disable auto-summarization and use RAG instead
agent = Agent(
@@ -524,7 +558,10 @@ agent = Agent(
```

<Note>
The context window management feature works automatically in the background. You don't need to call any special functions - just set `respect_context_window` to your preferred behavior and CrewAI handles the rest!
The context window management feature works automatically in the background.
You don't need to call any special functions - just set
`respect_context_window` to your preferred behavior and CrewAI handles the
rest!
</Note>

## Direct Agent Interaction with `kickoff()`
@@ -557,7 +594,7 @@ print(result.raw)
### Parameters and Return Values

| Parameter | Type | Description |
| :---------------- | :---------------------------------- | :------------------------------------------------------------------------ |
| :---------------- | :--------------------------------- | :------------------------------------------------------------------------ |
| `messages` | `Union[str, List[Dict[str, str]]]` | Either a string query or a list of message dictionaries with role/content |
| `response_format` | `Optional[Type[Any]]` | Optional Pydantic model for structured output |

@@ -621,28 +658,34 @@ asyncio.run(main())
```

<Note>
The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler execution flow while preserving all of the agent's configuration (role, goal, backstory, tools, etc.).
The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler
execution flow while preserving all of the agent's configuration (role, goal,
backstory, tools, etc.).
</Note>
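A compact sketch of the parameters in the table above: calling `kickoff()` with a plain string and a Pydantic `response_format`. The model class and its fields are invented for the example; as in the earlier snippet, the raw text is read from `result.raw`.

```python
from pydantic import BaseModel

from crewai import Agent


class ResearchFindings(BaseModel):
    summary: str
    key_points: list[str]


analyst = Agent(
    role="Research Analyst",
    goal="Summarize topics concisely",
    backstory="A careful analyst who answers directly when asked.",
)

result = analyst.kickoff(
    "Summarize the current state of open-source agent frameworks.",
    response_format=ResearchFindings,  # request structured output matching the model
)
print(result.raw)  # raw text output, as in the documentation's earlier example
```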
## Important Considerations and Best Practices
|
||||
|
||||
### Security and Code Execution
|
||||
|
||||
- When using `allow_code_execution`, be cautious with user input and always validate it
|
||||
- Use `code_execution_mode: "safe"` (Docker) in production environments
|
||||
- Consider setting appropriate `max_execution_time` limits to prevent infinite loops
|
||||
|
||||
### Performance Optimization
|
||||
|
||||
- Use `respect_context_window: true` to prevent token limit issues
|
||||
- Set appropriate `max_rpm` to avoid rate limiting
|
||||
- Enable `cache: true` to improve performance for repetitive tasks
|
||||
- Adjust `max_iter` and `max_retry_limit` based on task complexity
|
||||
|
||||
### Memory and Context Management
|
||||
|
||||
- Leverage `knowledge_sources` for domain-specific information
|
||||
- Configure `embedder` when using custom embedding models
|
||||
- Use custom templates (`system_template`, `prompt_template`, `response_template`) for fine-grained control over agent behavior
|
||||
|
||||
### Advanced Features
|
||||
|
||||
- Enable `reasoning: true` for agents that need to plan and reflect before executing complex tasks
|
||||
- Set appropriate `max_reasoning_attempts` to control planning iterations (None for unlimited attempts)
|
||||
- Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks
|
||||
@@ -650,6 +693,7 @@ The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler e
|
||||
- Enable `multimodal: true` for agents that need to process both text and visual content; several of these flags appear in the sketch below
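
A sketch combining several of these flags (values are illustrative):

```python Code
from crewai import Agent

planner = Agent(
    role="Strategy Planner",
    goal="Plan multi-step analyses before executing them",
    backstory="A methodical planner who reflects before acting",
    reasoning=True,             # plan and reflect before executing
    max_reasoning_attempts=3,   # cap planning iterations (None for unlimited)
    inject_date=True,           # make the current date available to the agent
    date_format="%Y-%m-%d",     # standard Python datetime format codes
    multimodal=False,           # set True for agents that also process images
)
```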
|
||||
|
||||
### Agent Collaboration
|
||||
|
||||
- Enable `allow_delegation: true` when agents need to work together
|
||||
- Use `step_callback` to monitor and log agent interactions
|
||||
- Consider using different LLMs for different purposes:
|
||||
@@ -657,6 +701,7 @@ The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler e
|
||||
- `function_calling_llm` for efficient tool usage (see the collaboration sketch below)
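
As a sketch, a delegating lead agent with a lightweight model for tool calls and a simple step logger (the callback is just for illustration):

```python Code
from crewai import Agent

def log_step(step_output):
    # Minimal step callback: print whatever the agent produced at each step
    print("Agent step:", step_output)

lead = Agent(
    role="Team Lead",
    goal="Coordinate research and writing across the crew",
    backstory="An experienced coordinator",
    allow_delegation=True,               # can hand work to other agents
    step_callback=log_step,              # monitor intermediate steps
    function_calling_llm="gpt-4o-mini",  # cheaper model dedicated to tool calls
)
```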
|
||||
|
||||
### Date Awareness and Reasoning
|
||||
|
||||
- Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks
|
||||
- Customize the date format with `date_format` using standard Python datetime format codes
|
||||
- Valid format codes include: %Y (year), %m (month), %d (day), %B (full month name), etc.
|
||||
@@ -664,22 +709,26 @@ The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler e
|
||||
- Enable `reasoning: true` for complex tasks that benefit from upfront planning and reflection
|
||||
|
||||
### Model Compatibility
|
||||
|
||||
- Set `use_system_prompt: false` for older models that don't support system messages
|
||||
- Ensure your chosen `llm` supports the features you need (like function calling), as in the compatibility sketch below
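
For instance, a hedged compatibility sketch (the model string is a placeholder for whichever legacy model you are targeting):

```python Code
from crewai import Agent

legacy_agent = Agent(
    role="Summarizer",
    goal="Summarize documents with an older model",
    backstory="Runs against legacy infrastructure",
    llm="your-provider/legacy-model",  # placeholder; a model without system-message support
    use_system_prompt=False,           # skip system messages the model cannot handle
)
```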
|
||||
|
||||
## Troubleshooting Common Issues
|
||||
|
||||
1. **Rate Limiting**: If you're hitting API rate limits:
|
||||
|
||||
- Implement appropriate `max_rpm`
|
||||
- Use caching for repetitive operations
|
||||
- Consider batching requests
|
||||
|
||||
2. **Context Window Errors**: If you're exceeding context limits:
|
||||
|
||||
- Enable `respect_context_window`
|
||||
- Use more efficient prompts
|
||||
- Clear agent memory periodically
|
||||
|
||||
3. **Code Execution Issues**: If code execution fails:
|
||||
|
||||
- Verify Docker is installed for safe mode
|
||||
- Check execution permissions
|
||||
- Review code sandbox settings
|
||||
|
||||
@@ -5,7 +5,12 @@ icon: terminal
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
<Warning>Since release 0.140.0, CrewAI AMP started a process of migrating their login provider. As such, the authentication flow via CLI was updated. Users that use Google to login, or that created their account after July 3rd, 2025 will be unable to log in with older versions of the `crewai` library.</Warning>
|
||||
<Warning>
|
||||
Since release 0.140.0, CrewAI AMP started a process of migrating their login
|
||||
provider. As such, the authentication flow via CLI was updated. Users that use
|
||||
Google to login, or that created their account after July 3rd, 2025 will be
|
||||
unable to log in with older versions of the `crewai` library.
|
||||
</Warning>
|
||||
|
||||
## Overview
|
||||
|
||||
@@ -41,6 +46,7 @@ crewai create [OPTIONS] TYPE NAME
|
||||
- `NAME`: Name of the crew or flow
|
||||
|
||||
Example:
|
||||
|
||||
```shell Terminal
|
||||
crewai create crew my_new_crew
|
||||
crewai create flow my_new_flow
|
||||
@@ -57,6 +63,7 @@ crewai version [OPTIONS]
|
||||
- `--tools`: (Optional) Show the installed version of CrewAI tools
|
||||
|
||||
Example:
|
||||
|
||||
```shell Terminal
|
||||
crewai version
|
||||
crewai version --tools
|
||||
@@ -74,6 +81,7 @@ crewai train [OPTIONS]
|
||||
- `-f, --filename TEXT`: Path to a custom file for training (default: "trained_agents_data.pkl")
|
||||
|
||||
Example:
|
||||
|
||||
```shell Terminal
|
||||
crewai train -n 10 -f my_training_data.pkl
|
||||
```
|
||||
@@ -89,6 +97,7 @@ crewai replay [OPTIONS]
|
||||
- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks
|
||||
|
||||
Example:
|
||||
|
||||
```shell Terminal
|
||||
crewai replay -t task_123456
|
||||
```
|
||||
@@ -118,6 +127,7 @@ crewai reset-memories [OPTIONS]
|
||||
- `-a, --all`: Reset ALL memories
|
||||
|
||||
Example:
|
||||
|
||||
```shell Terminal
|
||||
crewai reset-memories --long --short
|
||||
crewai reset-memories --all
|
||||
@@ -135,6 +145,7 @@ crewai test [OPTIONS]
|
||||
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")
|
||||
|
||||
Example:
|
||||
|
||||
```shell Terminal
|
||||
crewai test -n 5 -m gpt-3.5-turbo
|
||||
```
|
||||
@@ -148,12 +159,16 @@ crewai run
|
||||
```
|
||||
|
||||
<Note>
|
||||
Starting from version 0.103.0, the `crewai run` command can be used to run both standard crews and flows. For flows, it automatically detects the type from pyproject.toml and runs the appropriate command. This is now the recommended way to run both crews and flows.
|
||||
Starting from version 0.103.0, the `crewai run` command can be used to run
|
||||
both standard crews and flows. For flows, it automatically detects the type
|
||||
from pyproject.toml and runs the appropriate command. This is now the
|
||||
recommended way to run both crews and flows.
|
||||
</Note>
|
||||
|
||||
<Note>
|
||||
Make sure to run these commands from the directory where your CrewAI project is set up.
|
||||
Some commands may require additional configuration or setup within your project structure.
|
||||
Make sure to run these commands from the directory where your CrewAI project
|
||||
is set up. Some commands may require additional configuration or setup within
|
||||
your project structure.
|
||||
</Note>
|
||||
|
||||
### 9. Chat
|
||||
@@ -165,6 +180,7 @@ After receiving the results, you can continue interacting with the assistant for
|
||||
```shell Terminal
|
||||
crewai chat
|
||||
```
|
||||
|
||||
<Note>
|
||||
Ensure you execute these commands from your CrewAI project's root directory.
|
||||
</Note>
|
||||
@@ -182,6 +198,7 @@ def crew(self) -> Crew:
|
||||
chat_llm="gpt-4o", # LLM for chat orchestration
|
||||
)
|
||||
```
|
||||
|
||||
</Note>
|
||||
|
||||
### 10. Deploy
|
||||
@@ -190,6 +207,7 @@ Deploy the crew or flow to [CrewAI AMP](https://app.crewai.com).
|
||||
|
||||
- **Authentication**: You need to be authenticated to deploy to CrewAI AMP.
|
||||
You can login or create an account with:
|
||||
|
||||
```shell Terminal
|
||||
crewai login
|
||||
```
|
||||
@@ -212,56 +230,71 @@ crewai org [COMMAND] [OPTIONS]
|
||||
#### Commands:
|
||||
|
||||
- `list`: List all organizations you belong to
|
||||
|
||||
```shell Terminal
|
||||
crewai org list
|
||||
```
|
||||
|
||||
- `current`: Display your currently active organization
|
||||
|
||||
```shell Terminal
|
||||
crewai org current
|
||||
```
|
||||
|
||||
- `switch`: Switch to a specific organization
|
||||
|
||||
```shell Terminal
|
||||
crewai org switch <organization_id>
|
||||
```
|
||||
|
||||
<Note>
|
||||
You must be authenticated to CrewAI AMP to use these organization management commands.
|
||||
You must be authenticated to CrewAI AMP to use these organization management
|
||||
commands.
|
||||
</Note>
|
||||
|
||||
- **Create a deployment** (continued):
|
||||
|
||||
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
|
||||
|
||||
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI AMP.
|
||||
|
||||
```shell Terminal
|
||||
crewai deploy push
|
||||
```
|
||||
|
||||
- Initiates the deployment process on the CrewAI AMP platform.
|
||||
- Upon successful initiation, it will output the Deployment created successfully! message along with the Deployment Name and a unique Deployment ID (UUID).
|
||||
|
||||
- **Deployment Status**: You can check the status of your deployment with:
|
||||
|
||||
```shell Terminal
|
||||
crewai deploy status
|
||||
```
|
||||
|
||||
This fetches the latest deployment status of your most recent deployment attempt (e.g., `Building Images for Crew`, `Deploy Enqueued`, `Online`).
|
||||
|
||||
- **Deployment Logs**: You can check the logs of your deployment with:
|
||||
|
||||
```shell Terminal
|
||||
crewai deploy logs
|
||||
```
|
||||
|
||||
This streams the deployment logs to your terminal.
|
||||
|
||||
- **List deployments**: You can list all your deployments with:
|
||||
|
||||
```shell Terminal
|
||||
crewai deploy list
|
||||
```
|
||||
|
||||
This lists all your deployments.
|
||||
|
||||
- **Delete a deployment**: You can delete a deployment with:
|
||||
|
||||
```shell Terminal
|
||||
crewai deploy remove
|
||||
```
|
||||
|
||||
This deletes the deployment from the CrewAI AMP platform.
|
||||
|
||||
- **Help Command**: You can get help with the CLI with:
|
||||
@@ -290,18 +323,20 @@ crewai login
|
||||
```
|
||||
|
||||
What happens:
|
||||
|
||||
- A verification URL and short code are displayed in your terminal
|
||||
- Your browser opens to the verification URL
|
||||
- Enter/confirm the code to complete authentication
|
||||
|
||||
Notes:
|
||||
|
||||
- The OAuth2 provider and domain are configured via `crewai config` (defaults use `login.crewai.com`)
|
||||
- After successful login, the CLI also attempts to authenticate to the Tool Repository automatically
|
||||
- If you reset your configuration, run `crewai login` again to re-authenticate
|
||||
|
||||
### 12. API Keys
|
||||
|
||||
When running ```crewai create crew``` command, the CLI will show you a list of available LLM providers to choose from, followed by model selection for your chosen provider.
|
||||
When running the `crewai create crew` command, the CLI will show you a list of available LLM providers to choose from, followed by model selection for your chosen provider.
|
||||
|
||||
Once you've selected an LLM provider and model, you will be prompted for API keys.
|
||||
|
||||
@@ -309,11 +344,11 @@ Once you've selected an LLM provider and model, you will be prompted for API key
|
||||
|
||||
Here's a list of the most popular LLM providers suggested by the CLI:
|
||||
|
||||
* OpenAI
|
||||
* Groq
|
||||
* Anthropic
|
||||
* Google Gemini
|
||||
* SambaNova
|
||||
- OpenAI
|
||||
- Groq
|
||||
- Anthropic
|
||||
- Google Gemini
|
||||
- SambaNova
|
||||
|
||||
When you select a provider, the CLI will then show you available models for that provider and prompt you to enter your API key.
|
||||
|
||||
@@ -325,7 +360,7 @@ When you select a provider, the CLI will prompt you to enter the Key name and th
|
||||
|
||||
See the following link for each provider's key name:
|
||||
|
||||
* [LiteLLM Providers](https://docs.litellm.ai/docs/providers)
|
||||
- [LiteLLM Providers](https://docs.litellm.ai/docs/providers)
|
||||
|
||||
### 13. Configuration Management
|
||||
|
||||
@@ -338,16 +373,19 @@ crewai config [COMMAND] [OPTIONS]
|
||||
#### Commands:
|
||||
|
||||
- `list`: Display all CLI configuration parameters
|
||||
|
||||
```shell Terminal
|
||||
crewai config list
|
||||
```
|
||||
|
||||
- `set`: Set a CLI configuration parameter
|
||||
|
||||
```shell Terminal
|
||||
crewai config set <key> <value>
|
||||
```
|
||||
|
||||
- `reset`: Reset all CLI configuration parameters to default values
|
||||
|
||||
```shell Terminal
|
||||
crewai config reset
|
||||
```
|
||||
@@ -363,6 +401,7 @@ crewai config reset
|
||||
#### Examples
|
||||
|
||||
Display current configuration:
|
||||
|
||||
```shell Terminal
|
||||
crewai config list
|
||||
```
|
||||
@@ -379,21 +418,25 @@ Example output:
|
||||
| oauth2_domain | login.crewai.com | Provider domain (e.g., your-org.auth0.com) |
|
||||
|
||||
Set the enterprise base URL:
|
||||
|
||||
```shell Terminal
|
||||
crewai config set enterprise_base_url https://my-enterprise.crewai.com
|
||||
```
|
||||
|
||||
Set OAuth2 provider:
|
||||
|
||||
```shell Terminal
|
||||
crewai config set oauth2_provider auth0
|
||||
```
|
||||
|
||||
Set OAuth2 domain:
|
||||
|
||||
```shell Terminal
|
||||
crewai config set oauth2_domain my-company.auth0.com
|
||||
```
|
||||
|
||||
Reset all configuration to defaults:
|
||||
|
||||
```shell Terminal
|
||||
crewai config reset
|
||||
```
|
||||
@@ -402,10 +445,97 @@ crewai config reset
|
||||
After resetting configuration, re-run `crewai login` to authenticate again.
|
||||
</Tip>
|
||||
|
||||
### 14. Trace Management
|
||||
|
||||
Manage trace collection preferences for your Crew and Flow executions.
|
||||
|
||||
```shell Terminal
|
||||
crewai traces [COMMAND]
|
||||
```
|
||||
|
||||
#### Commands:
|
||||
|
||||
- `enable`: Enable trace collection for crew/flow executions
|
||||
|
||||
```shell Terminal
|
||||
crewai traces enable
|
||||
```
|
||||
|
||||
- `disable`: Disable trace collection for crew/flow executions
|
||||
|
||||
```shell Terminal
|
||||
crewai traces disable
|
||||
```
|
||||
|
||||
- `status`: Show current trace collection status
|
||||
|
||||
```shell Terminal
|
||||
crewai traces status
|
||||
```
|
||||
|
||||
#### How Tracing Works
|
||||
|
||||
Trace collection is controlled by checking three settings in priority order:
|
||||
|
||||
1. **Explicit flag in code** (highest priority - can enable OR disable):
|
||||
|
||||
```python
|
||||
crew = Crew(agents=[...], tasks=[...], tracing=True) # Always enable
|
||||
crew = Crew(agents=[...], tasks=[...], tracing=False) # Always disable
|
||||
crew = Crew(agents=[...], tasks=[...]) # Check lower priorities (default)
|
||||
```
|
||||
|
||||
- `tracing=True` will **always enable** tracing (overrides everything)
|
||||
- `tracing=False` will **always disable** tracing (overrides everything)
|
||||
- `tracing=None` or omitted will check lower priority settings
|
||||
|
||||
2. **Environment variable** (second priority):
|
||||
|
||||
```env
|
||||
CREWAI_TRACING_ENABLED=true
|
||||
```
|
||||
|
||||
- Checked only if `tracing` is not explicitly set to `True` or `False` in code
|
||||
- Set to `true` or `1` to enable tracing
|
||||
|
||||
3. **User preference** (lowest priority):
|
||||
```shell Terminal
|
||||
crewai traces enable
|
||||
```
|
||||
- Checked only if `tracing` is not set in code and `CREWAI_TRACING_ENABLED` is not set to `true`
|
||||
- Running `crewai traces enable` is sufficient to enable tracing by itself
|
||||
|
||||
<Note>
|
||||
**To enable tracing**, use any one of these methods:
|
||||
- Set `tracing=True` in your Crew/Flow code, OR
|
||||
- Add `CREWAI_TRACING_ENABLED=true` to your `.env` file, OR
|
||||
- Run `crewai traces enable`
|
||||
|
||||
**To disable tracing**, use any one of these methods:

- Set `tracing=False` in your Crew/Flow code (overrides everything), OR
- Remove the `CREWAI_TRACING_ENABLED` env var or set it to `false`, OR
|
||||
- Run `crewai traces disable`
|
||||
|
||||
Higher priority settings override lower ones.
|
||||
|
||||
</Note>
|
||||
|
||||
<Tip>
|
||||
CrewAI CLI handles authentication to the Tool Repository automatically when adding packages to your project. Just append `crewai` before any `uv` command to use it. E.g. `crewai uv add requests`. For more information, see [Tool Repository](https://docs.crewai.com/enterprise/features/tool-repository) docs.
|
||||
For more information about tracing, see the [Tracing
|
||||
documentation](/observability/tracing).
|
||||
</Tip>
|
||||
|
||||
<Tip>
|
||||
CrewAI CLI handles authentication to the Tool Repository automatically when
|
||||
adding packages to your project. Just append `crewai` before any `uv` command
|
||||
to use it. E.g. `crewai uv add requests`. For more information, see [Tool
|
||||
Repository](https://docs.crewai.com/enterprise/features/tool-repository) docs.
|
||||
</Tip>
|
||||
|
||||
<Note>
|
||||
Configuration settings are stored in `~/.config/crewai/settings.json`. Some settings like organization name and UUID are read-only and managed through authentication and organization commands. Tool repository related settings are hidden and cannot be set directly by users.
|
||||
Configuration settings are stored in `~/.config/crewai/settings.json`. Some
|
||||
settings like organization name and UUID are read-only and managed through
|
||||
authentication and organization commands. Tool repository related settings are
|
||||
hidden and cannot be set directly by users.
|
||||
</Note>
|
||||
|
||||
@@ -33,6 +33,7 @@ A crew in crewAI represents a collaborative group of agents working together to
|
||||
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
|
||||
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |
|
||||
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | Knowledge sources available at the crew level, accessible to all the agents. |
|
||||
| **Stream** _(optional)_ | `stream` | Enable streaming output to receive real-time updates during crew execution. Returns a `CrewStreamingOutput` object that can be iterated for chunks. Defaults to `False`. |
|
||||
|
||||
<Tip>
|
||||
**Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
|
||||
@@ -306,12 +307,27 @@ print(result)
|
||||
|
||||
### Different Ways to Kick Off a Crew
|
||||
|
||||
Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.
|
||||
Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process.
|
||||
|
||||
#### Synchronous Methods
|
||||
|
||||
- `kickoff()`: Starts the execution process according to the defined process flow.
- `kickoff_for_each()`: Executes tasks sequentially for each provided input in the collection.
|
||||
|
||||
#### Asynchronous Methods
|
||||
|
||||
CrewAI offers two approaches for async execution:
|
||||
|
||||
| Method | Type | Description |
|
||||
|--------|------|-------------|
|
||||
| `akickoff()` | Native async | True async/await throughout the entire execution chain |
|
||||
| `akickoff_for_each()` | Native async | Native async execution for each input in a list |
|
||||
| `kickoff_async()` | Thread-based | Wraps synchronous execution in `asyncio.to_thread` |
|
||||
| `kickoff_for_each_async()` | Thread-based | Thread-based async for each input in a list |
|
||||
|
||||
<Note>
|
||||
For high-concurrency workloads, `akickoff()` and `akickoff_for_each()` are recommended as they use native async for task execution, memory operations, and knowledge retrieval.
|
||||
</Note>
|
||||
|
||||
```python Code
|
||||
# Start the crew's task execution
|
||||
@@ -324,19 +340,53 @@ results = my_crew.kickoff_for_each(inputs=inputs_array)
|
||||
for result in results:
|
||||
print(result)
|
||||
|
||||
# Example of using kickoff_async
|
||||
# Example of using native async with akickoff
|
||||
inputs = {'topic': 'AI in healthcare'}
|
||||
async_result = await my_crew.akickoff(inputs=inputs)
|
||||
print(async_result)
|
||||
|
||||
# Example of using native async with akickoff_for_each
|
||||
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
|
||||
async_results = await my_crew.akickoff_for_each(inputs=inputs_array)
|
||||
for async_result in async_results:
|
||||
print(async_result)
|
||||
|
||||
# Example of using thread-based kickoff_async
|
||||
inputs = {'topic': 'AI in healthcare'}
|
||||
async_result = await my_crew.kickoff_async(inputs=inputs)
|
||||
print(async_result)
|
||||
|
||||
# Example of using kickoff_for_each_async
|
||||
# Example of using thread-based kickoff_for_each_async
|
||||
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
|
||||
async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
|
||||
for async_result in async_results:
|
||||
print(async_result)
|
||||
```
|
||||
|
||||
These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs.
|
||||
These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs. For detailed async examples, see the [Kickoff Crew Asynchronously](/en/learn/kickoff-async) guide.
|
||||
|
||||
### Streaming Crew Execution
|
||||
|
||||
For real-time visibility into crew execution, you can enable streaming to receive output as it's generated:
|
||||
|
||||
```python Code
|
||||
# Enable streaming
|
||||
crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[task],
|
||||
stream=True
|
||||
)
|
||||
|
||||
# Iterate over streaming output
|
||||
streaming = crew.kickoff(inputs={"topic": "AI"})
|
||||
for chunk in streaming:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
# Access final result
|
||||
result = streaming.result
|
||||
```
|
||||
|
||||
Learn more about streaming in the [Streaming Crew Execution](/en/learn/streaming-crew-execution) guide.
|
||||
|
||||
### Replaying from a Specific Task
|
||||
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
---
|
||||
title: 'Event Listeners'
|
||||
description: 'Tap into CrewAI events to build custom integrations and monitoring'
|
||||
title: "Event Listeners"
|
||||
description: "Tap into CrewAI events to build custom integrations and monitoring"
|
||||
icon: spinner
|
||||
mode: "wide"
|
||||
---
|
||||
@@ -25,6 +25,7 @@ CrewAI AMP provides a built-in Prompt Tracing feature that leverages the event s
|
||||

|
||||
|
||||
With Prompt Tracing you can:
|
||||
|
||||
- View the complete history of all prompts sent to your LLM
|
||||
- Track token usage and costs
|
||||
- Debug agent reasoning failures
|
||||
@@ -274,7 +275,6 @@ The structure of the event object depends on the event type, but all events inhe
|
||||
|
||||
Additional fields vary by event type. For example, `CrewKickoffCompletedEvent` includes `crew_name` and `output` fields.
|
||||
|
||||
|
||||
## Advanced Usage: Scoped Handlers
|
||||
|
||||
For temporary event handling (useful for testing or specific operations), you can use the `scoped_handlers` context manager:
|
||||
|
||||
@@ -572,6 +572,55 @@ The `third_method` and `fourth_method` listen to the output of the `second_metho
|
||||
|
||||
When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.
|
||||
|
||||
### Human in the Loop (human feedback)
|
||||
|
||||
The `@human_feedback` decorator enables human-in-the-loop workflows by pausing flow execution to collect feedback from a human. This is useful for approval gates, quality review, and decision points that require human judgment.
|
||||
|
||||
```python Code
|
||||
from crewai.flow.flow import Flow, start, listen
|
||||
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult
|
||||
|
||||
class ReviewFlow(Flow):
|
||||
@start()
|
||||
@human_feedback(
|
||||
message="Do you approve this content?",
|
||||
emit=["approved", "rejected", "needs_revision"],
|
||||
llm="gpt-4o-mini",
|
||||
default_outcome="needs_revision",
|
||||
)
|
||||
def generate_content(self):
|
||||
return "Content to be reviewed..."
|
||||
|
||||
@listen("approved")
|
||||
def on_approval(self, result: HumanFeedbackResult):
|
||||
print(f"Approved! Feedback: {result.feedback}")
|
||||
|
||||
@listen("rejected")
|
||||
def on_rejection(self, result: HumanFeedbackResult):
|
||||
print(f"Rejected. Reason: {result.feedback}")
|
||||
```
|
||||
|
||||
When `emit` is specified, the human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes, which then triggers the corresponding `@listen` decorator.
|
||||
|
||||
You can also use `@human_feedback` without routing to simply collect feedback:
|
||||
|
||||
```python Code
|
||||
@start()
|
||||
@human_feedback(message="Any comments on this output?")
|
||||
def my_method(self):
|
||||
return "Output for review"
|
||||
|
||||
@listen(my_method)
|
||||
def next_step(self, result: HumanFeedbackResult):
|
||||
# Access feedback via result.feedback
|
||||
# Access original output via result.output
|
||||
pass
|
||||
```
|
||||
|
||||
Access all feedback collected during a flow via `self.last_human_feedback` (most recent) or `self.human_feedback_history` (all feedback as a list).
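
A short sketch of reading these attributes in a later step (this assumes the stored items expose the same `feedback` field as `HumanFeedbackResult`):

```python Code
from crewai.flow.flow import Flow, listen, start
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult

class FeedbackAuditFlow(Flow):
    @start()
    @human_feedback(message="Any comments on this draft?")
    def draft(self):
        return "Draft output for review"

    @listen(draft)
    def audit(self, result: HumanFeedbackResult):
        # Most recent feedback collected in this run
        if self.last_human_feedback:
            print("Latest:", self.last_human_feedback.feedback)
        # Every piece of feedback collected so far, in order
        for item in self.human_feedback_history:
            print("-", item.feedback)
```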
|
||||
|
||||
For a complete guide on human feedback in flows, including **async/non-blocking feedback** with custom providers (Slack, webhooks, etc.), see [Human Feedback in Flows](/en/learn/human-feedback-in-flows).
|
||||
|
||||
## Adding Agents to Flows
|
||||
|
||||
Agents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use an Agent within a flow to perform market research:
|
||||
@@ -897,6 +946,31 @@ flow = ExampleFlow()
|
||||
result = flow.kickoff()
|
||||
```
|
||||
|
||||
### Streaming Flow Execution
|
||||
|
||||
For real-time visibility into flow execution, you can enable streaming to receive output as it's generated:
|
||||
|
||||
```python
|
||||
class StreamingFlow(Flow):
|
||||
stream = True # Enable streaming
|
||||
|
||||
@start()
|
||||
def research(self):
|
||||
# Your flow implementation
|
||||
pass
|
||||
|
||||
# Iterate over streaming output
|
||||
flow = StreamingFlow()
|
||||
streaming = flow.kickoff()
|
||||
for chunk in streaming:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
# Access final result
|
||||
result = streaming.result
|
||||
```
|
||||
|
||||
Learn more about streaming in the [Streaming Flow Execution](/en/learn/streaming-flow-execution) guide.
|
||||
|
||||
### Using the CLI
|
||||
|
||||
Starting from version 0.103.0, you can run flows using the `crewai run` command:
|
||||
|
||||
@@ -388,8 +388,8 @@ crew = Crew(
|
||||
agents=[sales_agent, tech_agent, support_agent],
|
||||
tasks=[...],
|
||||
embedder={ # Fallback embedder for agents without their own
|
||||
"provider": "google",
|
||||
"config": {"model": "text-embedding-004"}
|
||||
"provider": "google-generativeai",
|
||||
"config": {"model_name": "gemini-embedding-001"}
|
||||
}
|
||||
)
|
||||
|
||||
@@ -629,9 +629,9 @@ agent = Agent(
|
||||
backstory="Expert researcher",
|
||||
knowledge_sources=[knowledge_source],
|
||||
embedder={
|
||||
"provider": "google",
|
||||
"provider": "google-generativeai",
|
||||
"config": {
|
||||
"model": "models/text-embedding-004",
|
||||
"model_name": "gemini-embedding-001",
|
||||
"api_key": "your-google-key"
|
||||
}
|
||||
}
|
||||
@@ -739,7 +739,7 @@ class KnowledgeMonitorListener(BaseEventListener):
|
||||
knowledge_monitor = KnowledgeMonitorListener()
|
||||
```
|
||||
|
||||
For more information on using events, see the [Event Listeners](https://docs.crewai.com/concepts/event-listener) documentation.
|
||||
For more information on using events, see the [Event Listeners](/en/concepts/event-listener) documentation.
|
||||
|
||||
### Custom Knowledge Sources
|
||||
|
||||
|
||||
@@ -7,7 +7,7 @@ mode: "wide"
|
||||
|
||||
## Overview
|
||||
|
||||
CrewAI integrates with multiple LLM providers through LiteLLM, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
|
||||
CrewAI integrates with multiple LLM providers through the providers' native SDKs, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
|
||||
|
||||
|
||||
## What are LLMs?
|
||||
@@ -113,44 +113,104 @@ In this section, you'll find detailed examples that help you select, configure,
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="OpenAI">
|
||||
Set the following environment variables in your `.env` file:
|
||||
CrewAI provides native integration with OpenAI through the OpenAI Python SDK.
|
||||
|
||||
```toml Code
|
||||
# Required
|
||||
OPENAI_API_KEY=sk-...
|
||||
|
||||
# Optional
|
||||
OPENAI_API_BASE=<custom-base-url>
|
||||
OPENAI_ORGANIZATION=<your-org-id>
|
||||
OPENAI_BASE_URL=<custom-base-url>
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
**Basic Usage:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="openai/gpt-4", # call model by provider/model_name
|
||||
temperature=0.8,
|
||||
max_tokens=150,
|
||||
model="openai/gpt-4o",
|
||||
api_key="your-api-key", # Or set OPENAI_API_KEY
|
||||
temperature=0.7,
|
||||
max_tokens=4000
|
||||
)
|
||||
```
|
||||
|
||||
**Advanced Configuration:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="openai/gpt-4o",
|
||||
api_key="your-api-key",
|
||||
base_url="https://api.openai.com/v1", # Optional custom endpoint
|
||||
organization="org-...", # Optional organization ID
|
||||
project="proj_...", # Optional project ID
|
||||
temperature=0.7,
|
||||
max_tokens=4000,
|
||||
max_completion_tokens=4000, # For newer models
|
||||
top_p=0.9,
|
||||
frequency_penalty=0.1,
|
||||
presence_penalty=0.1,
|
||||
stop=["END"],
|
||||
seed=42
|
||||
seed=42, # For reproducible outputs
|
||||
stream=True, # Enable streaming
|
||||
timeout=60.0, # Request timeout in seconds
|
||||
max_retries=3, # Maximum retry attempts
|
||||
logprobs=True, # Return log probabilities
|
||||
top_logprobs=5, # Number of most likely tokens
|
||||
reasoning_effort="medium" # For o1 models: low, medium, high
|
||||
)
|
||||
```
|
||||
|
||||
OpenAI is one of the leading providers of LLMs with a wide range of models and features.
|
||||
**Structured Outputs:**
|
||||
```python Code
|
||||
from pydantic import BaseModel
|
||||
from crewai import LLM
|
||||
|
||||
class ResponseFormat(BaseModel):
|
||||
name: str
|
||||
age: int
|
||||
summary: str
|
||||
|
||||
llm = LLM(
    model="openai/gpt-4o",
    response_format=ResponseFormat,  # parse and validate output against the model above
)
|
||||
```
|
||||
|
||||
**Supported Environment Variables:**
|
||||
- `OPENAI_API_KEY`: Your OpenAI API key (required)
|
||||
- `OPENAI_BASE_URL`: Custom base URL for OpenAI API (optional)
|
||||
|
||||
**Features:**
|
||||
- Native function calling support (except o1 models)
|
||||
- Structured outputs with JSON schema
|
||||
- Streaming support for real-time responses
|
||||
- Token usage tracking
|
||||
- Stop sequences support (except o1 models)
|
||||
- Log probabilities for token-level insights
|
||||
- Reasoning effort control for o1 models
|
||||
|
||||
**Supported Models:**
|
||||
|
||||
| Model | Context Window | Best For |
|
||||
|---------------------|------------------|-----------------------------------------------|
|
||||
| GPT-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
|
||||
| GPT-4 Turbo | 128,000 tokens | Long-form content, document analysis |
|
||||
| GPT-4o & GPT-4o-mini | 128,000 tokens | Cost-effective large context processing |
|
||||
| o3-mini | 200,000 tokens | Fast reasoning, complex reasoning |
|
||||
| o1-mini | 128,000 tokens | Fast reasoning, complex reasoning |
|
||||
| o1-preview | 128,000 tokens | Fast reasoning, complex reasoning |
|
||||
| o1 | 200,000 tokens | Fast reasoning, complex reasoning |
|
||||
| gpt-4.1 | 1M tokens | Latest model with enhanced capabilities |
|
||||
| gpt-4.1-mini | 1M tokens | Efficient version with large context |
|
||||
| gpt-4.1-nano | 1M tokens | Ultra-efficient variant |
|
||||
| gpt-4o | 128,000 tokens | Optimized for speed and intelligence |
|
||||
| gpt-4o-mini | 200,000 tokens | Cost-effective with large context |
|
||||
| gpt-4-turbo | 128,000 tokens | Long-form content, document analysis |
|
||||
| gpt-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
|
||||
| o1 | 200,000 tokens | Advanced reasoning, complex problem-solving |
|
||||
| o1-preview | 128,000 tokens | Preview of reasoning capabilities |
|
||||
| o1-mini | 128,000 tokens | Efficient reasoning model |
|
||||
| o3-mini | 200,000 tokens | Lightweight reasoning model |
|
||||
| o4-mini | 200,000 tokens | Next-gen efficient reasoning |
|
||||
|
||||
**Note:** To use OpenAI, install the required dependencies:
|
||||
```bash
|
||||
uv add "crewai[openai]"
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Meta-Llama">
|
||||
@@ -187,69 +247,230 @@ In this section, you'll find detailed examples that help you select, configure,
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Anthropic">
|
||||
CrewAI provides native integration with Anthropic through the Anthropic Python SDK.
|
||||
|
||||
```toml Code
|
||||
# Required
|
||||
ANTHROPIC_API_KEY=sk-ant-...
|
||||
|
||||
# Optional
|
||||
ANTHROPIC_API_BASE=<custom-base-url>
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
**Basic Usage:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="anthropic/claude-3-sonnet-20240229-v1:0",
|
||||
temperature=0.7
|
||||
model="anthropic/claude-3-5-sonnet-20241022",
|
||||
api_key="your-api-key", # Or set ANTHROPIC_API_KEY
|
||||
max_tokens=4096 # Required for Anthropic
|
||||
)
|
||||
```
|
||||
|
||||
**Advanced Configuration:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="anthropic/claude-3-5-sonnet-20241022",
|
||||
api_key="your-api-key",
|
||||
base_url="https://api.anthropic.com", # Optional custom endpoint
|
||||
temperature=0.7,
|
||||
max_tokens=4096, # Required parameter
|
||||
top_p=0.9,
|
||||
stop_sequences=["END", "STOP"], # Anthropic uses stop_sequences
|
||||
stream=True, # Enable streaming
|
||||
timeout=60.0, # Request timeout in seconds
|
||||
max_retries=3 # Maximum retry attempts
|
||||
)
|
||||
```
|
||||
|
||||
**Extended Thinking (Claude Sonnet 4 and Beyond):**
|
||||
|
||||
CrewAI supports Anthropic's Extended Thinking feature, which allows Claude to think through problems in a more human-like way before responding. This is particularly useful for complex reasoning, analysis, and problem-solving tasks.
|
||||
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
# Enable extended thinking with default settings
|
||||
llm = LLM(
|
||||
model="anthropic/claude-sonnet-4",
|
||||
thinking={"type": "enabled"},
|
||||
max_tokens=10000
|
||||
)
|
||||
|
||||
# Configure thinking with budget control
|
||||
llm = LLM(
|
||||
model="anthropic/claude-sonnet-4",
|
||||
thinking={
|
||||
"type": "enabled",
|
||||
"budget_tokens": 5000 # Limit thinking tokens
|
||||
},
|
||||
max_tokens=10000
|
||||
)
|
||||
```
|
||||
|
||||
**Thinking Configuration Options:**
|
||||
- `type`: Set to `"enabled"` to activate extended thinking mode
|
||||
- `budget_tokens` (optional): Maximum tokens to use for thinking (helps control costs)
|
||||
|
||||
**Models Supporting Extended Thinking:**
|
||||
- `claude-sonnet-4` and newer models
|
||||
- `claude-3-7-sonnet` (with extended thinking capabilities)
|
||||
|
||||
**When to Use Extended Thinking:**
|
||||
- Complex reasoning and multi-step problem solving
|
||||
- Mathematical calculations and proofs
|
||||
- Code analysis and debugging
|
||||
- Strategic planning and decision making
|
||||
- Research and analytical tasks
|
||||
|
||||
**Note:** Extended thinking consumes additional tokens but can significantly improve response quality for complex tasks.
|
||||
|
||||
**Supported Environment Variables:**
|
||||
- `ANTHROPIC_API_KEY`: Your Anthropic API key (required)
|
||||
|
||||
**Features:**
|
||||
- Native tool use support for Claude 3+ models
|
||||
- Extended Thinking support for Claude Sonnet 4+
|
||||
- Streaming support for real-time responses
|
||||
- Automatic system message handling
|
||||
- Stop sequences for controlled output
|
||||
- Token usage tracking
|
||||
- Multi-turn tool use conversations
|
||||
|
||||
**Important Notes:**
|
||||
- `max_tokens` is a **required** parameter for all Anthropic models
|
||||
- Claude uses `stop_sequences` instead of `stop`
|
||||
- System messages are handled separately from conversation messages
|
||||
- First message must be from the user (automatically handled)
|
||||
- Messages must alternate between user and assistant
|
||||
|
||||
**Supported Models:**
|
||||
|
||||
| Model | Context Window | Best For |
|
||||
|------------------------------|----------------|-----------------------------------------------|
|
||||
| claude-sonnet-4 | 200,000 tokens | Latest with extended thinking capabilities |
|
||||
| claude-3-7-sonnet | 200,000 tokens | Advanced reasoning and agentic tasks |
|
||||
| claude-3-5-sonnet-20241022 | 200,000 tokens | Latest Sonnet with best performance |
|
||||
| claude-3-5-haiku | 200,000 tokens | Fast, compact model for quick responses |
|
||||
| claude-3-opus | 200,000 tokens | Most capable for complex tasks |
|
||||
| claude-3-sonnet | 200,000 tokens | Balanced intelligence and speed |
|
||||
| claude-3-haiku | 200,000 tokens | Fastest for simple tasks |
|
||||
| claude-2.1 | 200,000 tokens | Extended context, reduced hallucinations |
|
||||
| claude-2 | 100,000 tokens | Versatile model for various tasks |
|
||||
| claude-instant | 100,000 tokens | Fast, cost-effective for everyday tasks |
|
||||
|
||||
**Note:** To use Anthropic, install the required dependencies:
|
||||
```bash
|
||||
uv add "crewai[anthropic]"
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Google (Gemini API)">
|
||||
Set your API key in your `.env` file. If you need a key, or need to find an
|
||||
existing key, check [AI Studio](https://aistudio.google.com/apikey).
|
||||
CrewAI provides native integration with Google Gemini through the Google Gen AI Python SDK.
|
||||
|
||||
Set your API key in your `.env` file. If you need a key, check [AI Studio](https://aistudio.google.com/apikey).
|
||||
|
||||
```toml .env
|
||||
# https://ai.google.dev/gemini-api/docs/api-key
|
||||
# Required (one of the following)
|
||||
GOOGLE_API_KEY=<your-api-key>
|
||||
GEMINI_API_KEY=<your-api-key>
|
||||
|
||||
# Optional - for Vertex AI
|
||||
GOOGLE_CLOUD_PROJECT=<your-project-id>
|
||||
GOOGLE_CLOUD_LOCATION=<location> # Defaults to us-central1
|
||||
GOOGLE_GENAI_USE_VERTEXAI=true # Set to use Vertex AI
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
**Basic Usage:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="gemini/gemini-2.0-flash",
|
||||
temperature=0.7,
|
||||
api_key="your-api-key", # Or set GOOGLE_API_KEY/GEMINI_API_KEY
|
||||
temperature=0.7
|
||||
)
|
||||
```
|
||||
|
||||
### Gemini models
|
||||
**Advanced Configuration:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="gemini/gemini-2.5-flash",
|
||||
api_key="your-api-key",
|
||||
temperature=0.7,
|
||||
top_p=0.9,
|
||||
top_k=40, # Top-k sampling parameter
|
||||
max_output_tokens=8192,
|
||||
stop_sequences=["END", "STOP"],
|
||||
stream=True, # Enable streaming
|
||||
safety_settings={
|
||||
"HARM_CATEGORY_HARASSMENT": "BLOCK_NONE",
|
||||
"HARM_CATEGORY_HATE_SPEECH": "BLOCK_NONE"
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
**Vertex AI Configuration:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="gemini/gemini-1.5-pro",
|
||||
project="your-gcp-project-id",
|
||||
location="us-central1" # GCP region
|
||||
)
|
||||
```
|
||||
|
||||
**Supported Environment Variables:**
|
||||
- `GOOGLE_API_KEY` or `GEMINI_API_KEY`: Your Google API key (required for Gemini API)
|
||||
- `GOOGLE_CLOUD_PROJECT`: Google Cloud project ID (for Vertex AI)
|
||||
- `GOOGLE_CLOUD_LOCATION`: GCP location (defaults to `us-central1`)
|
||||
- `GOOGLE_GENAI_USE_VERTEXAI`: Set to `true` to use Vertex AI
|
||||
|
||||
**Features:**
|
||||
- Native function calling support for Gemini 1.5+ and 2.x models
|
||||
- Streaming support for real-time responses
|
||||
- Multimodal capabilities (text, images, video)
|
||||
- Safety settings configuration
|
||||
- Support for both Gemini API and Vertex AI
|
||||
- Automatic system instruction handling
|
||||
- Token usage tracking
|
||||
|
||||
**Gemini Models:**
|
||||
|
||||
Google offers a range of powerful models optimized for different use cases.
|
||||
|
||||
| Model | Context Window | Best For |
|
||||
|--------------------------------|----------------|-------------------------------------------------------------------|
|
||||
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
|
||||
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
|
||||
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
|
||||
| gemini-2.5-flash | 1M tokens | Adaptive thinking, cost efficiency |
|
||||
| gemini-2.5-pro | 1M tokens | Enhanced thinking and reasoning, multimodal understanding |
|
||||
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking |
|
||||
| gemini-2.0-flash-thinking | 32,768 tokens | Advanced reasoning with thinking process |
|
||||
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
|
||||
| gemini-1.5-pro | 2M tokens | Best performing, logical reasoning, coding |
|
||||
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
|
||||
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
|
||||
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
|
||||
| gemini-1.5-flash-8b | 1M tokens | Fastest, most cost-efficient |
|
||||
| gemini-1.0-pro | 32,768 tokens | Earlier generation model |
|
||||
|
||||
**Gemma Models:**
|
||||
|
||||
The Gemini API also supports [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.
|
||||
|
||||
| Model | Context Window | Best For |
|
||||
|----------------|----------------|------------------------------------|
|
||||
| gemma-3-1b | 32,000 tokens | Ultra-lightweight tasks |
|
||||
| gemma-3-4b | 128,000 tokens | Efficient general-purpose tasks |
|
||||
| gemma-3-12b | 128,000 tokens | Balanced performance and efficiency|
|
||||
| gemma-3-27b | 128,000 tokens | High-performance tasks |
|
||||
|
||||
**Note:** To use Google Gemini, install the required dependencies:
|
||||
```bash
|
||||
uv add "crewai[google-genai]"
|
||||
```
|
||||
|
||||
The full list of models is available in the [Gemini model docs](https://ai.google.dev/gemini-api/docs/models).
|
||||
|
||||
### Gemma
|
||||
|
||||
The Gemini API also allows you to use your API key to access [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.
|
||||
|
||||
| Model | Context Window |
|
||||
|----------------|----------------|
|
||||
| gemma-3-1b-it | 32k tokens |
|
||||
| gemma-3-4b-it | 32k tokens |
|
||||
| gemma-3-12b-it | 32k tokens |
|
||||
| gemma-3-27b-it | 128k tokens |
|
||||
|
||||
</Accordion>
|
||||
<Accordion title="Google (Vertex AI)">
|
||||
Get credentials from your Google Cloud Console and save it to a JSON file, then load it with the following code:
|
||||
@@ -291,43 +512,146 @@ In this section, you'll find detailed examples that help you select, configure,
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Azure">
|
||||
CrewAI provides native integration with Azure AI Inference and Azure OpenAI through the Azure AI Inference Python SDK.
|
||||
|
||||
```toml Code
|
||||
# Required
|
||||
AZURE_API_KEY=<your-api-key>
|
||||
AZURE_API_BASE=<your-resource-url>
|
||||
AZURE_API_VERSION=<api-version>
|
||||
AZURE_ENDPOINT=<your-endpoint-url>
|
||||
|
||||
# Optional
|
||||
AZURE_AD_TOKEN=<your-azure-ad-token>
|
||||
AZURE_API_TYPE=<your-azure-api-type>
|
||||
AZURE_API_VERSION=<api-version> # Defaults to 2024-06-01
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
**Endpoint URL Formats:**
|
||||
|
||||
For Azure OpenAI deployments:
|
||||
```
|
||||
https://<resource-name>.openai.azure.com/openai/deployments/<deployment-name>
|
||||
```
|
||||
|
||||
For Azure AI Inference endpoints:
|
||||
```
|
||||
https://<resource-name>.inference.azure.com
|
||||
```
|
||||
|
||||
**Basic Usage:**
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="azure/gpt-4",
|
||||
api_version="2023-05-15"
|
||||
api_key="<your-api-key>", # Or set AZURE_API_KEY
|
||||
endpoint="<your-endpoint-url>",
|
||||
api_version="2024-06-01"
|
||||
)
|
||||
```
|
||||
|
||||
**Advanced Configuration:**
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="azure/gpt-4o",
|
||||
temperature=0.7,
|
||||
max_tokens=4000,
|
||||
top_p=0.9,
|
||||
frequency_penalty=0.0,
|
||||
presence_penalty=0.0,
|
||||
stop=["END"],
|
||||
stream=True,
|
||||
timeout=60.0,
|
||||
max_retries=3
|
||||
)
|
||||
```
|
||||
|
||||
**Supported Environment Variables:**
|
||||
- `AZURE_API_KEY`: Your Azure API key (required)
|
||||
- `AZURE_ENDPOINT`: Your Azure endpoint URL (required, also checks `AZURE_OPENAI_ENDPOINT` and `AZURE_API_BASE`)
|
||||
- `AZURE_API_VERSION`: API version (optional, defaults to `2024-06-01`)
|
||||
|
||||
**Features:**
|
||||
- Native function calling support for Azure OpenAI models (gpt-4, gpt-4o, gpt-3.5-turbo, etc.)
|
||||
- Streaming support for real-time responses
|
||||
- Automatic endpoint URL validation and correction
|
||||
- Comprehensive error handling with retry logic
|
||||
- Token usage tracking
|
||||
|
||||
**Note:** To use Azure AI Inference, install the required dependencies:
|
||||
```bash
|
||||
uv add "crewai[azure-ai-inference]"
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="AWS Bedrock">
|
||||
CrewAI provides native integration with AWS Bedrock through the boto3 SDK using the Converse API.
|
||||
|
||||
```toml Code
|
||||
# Required
|
||||
AWS_ACCESS_KEY_ID=<your-access-key>
|
||||
AWS_SECRET_ACCESS_KEY=<your-secret-key>
|
||||
AWS_DEFAULT_REGION=<your-region>
|
||||
|
||||
# Optional
|
||||
AWS_SESSION_TOKEN=<your-session-token> # For temporary credentials
|
||||
AWS_DEFAULT_REGION=<your-region> # Defaults to us-east-1
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
**Basic Usage:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
|
||||
model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
|
||||
region_name="us-east-1"
|
||||
)
|
||||
```
|
||||
|
||||
Before using Amazon Bedrock, make sure you have boto3 installed in your environment
|
||||
**Advanced Configuration:**
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API, enabling secure and responsible AI application development.
|
||||
llm = LLM(
|
||||
model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
|
||||
aws_access_key_id="your-access-key", # Or set AWS_ACCESS_KEY_ID
|
||||
aws_secret_access_key="your-secret-key", # Or set AWS_SECRET_ACCESS_KEY
|
||||
aws_session_token="your-session-token", # For temporary credentials
|
||||
region_name="us-east-1",
|
||||
temperature=0.7,
|
||||
max_tokens=4096,
|
||||
top_p=0.9,
|
||||
top_k=250, # For Claude models
|
||||
stop_sequences=["END", "STOP"],
|
||||
stream=True, # Enable streaming
|
||||
guardrail_config={ # Optional content filtering
|
||||
"guardrailIdentifier": "your-guardrail-id",
|
||||
"guardrailVersion": "1"
|
||||
},
|
||||
additional_model_request_fields={ # Model-specific parameters
|
||||
"top_k": 250
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
**Supported Environment Variables:**
|
||||
- `AWS_ACCESS_KEY_ID`: AWS access key (required)
|
||||
- `AWS_SECRET_ACCESS_KEY`: AWS secret key (required)
|
||||
- `AWS_SESSION_TOKEN`: AWS session token for temporary credentials (optional)
|
||||
- `AWS_DEFAULT_REGION`: AWS region (defaults to `us-east-1`)
|
||||
|
||||
**Features:**
|
||||
- Native tool calling support via Converse API
|
||||
- Streaming and non-streaming responses
|
||||
- Comprehensive error handling with retry logic
|
||||
- Guardrail configuration for content filtering
|
||||
- Model-specific parameters via `additional_model_request_fields`
|
||||
- Token usage tracking and stop reason logging
|
||||
- Support for all Bedrock foundation models
|
||||
- Automatic conversation format handling
|
||||
|
||||
**Important Notes:**
|
||||
- Uses the modern Converse API for unified model access
|
||||
- Automatic handling of model-specific conversation requirements
|
||||
- System messages are handled separately from conversation
|
||||
- First message must be from user (automatically handled)
|
||||
- Some models (like Cohere) require conversation to end with user message
|
||||
|
||||
[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API.
|
||||
|
||||
| Model | Context Window | Best For |
|
||||
|-------------------------|----------------------|-------------------------------------------------------------------|
|
||||
@@ -357,7 +681,12 @@ In this section, you'll find detailed examples that help you select, configure,
|
||||
| Jamba-Instruct | Up to 256k tokens | Model with extended context window optimized for cost-effective text generation, summarization, and Q&A. |
|
||||
| Mistral 7B Instruct | Up to 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
|
||||
| Mistral 8x7B Instruct | Up to 32k tokens | An MOE LLM that follows instructions, completes requests, and generates creative text. |
|
||||
| DeepSeek R1 | 32,768 tokens | Advanced reasoning model |
|
||||
|
||||
**Note:** To use AWS Bedrock, install the required dependencies:
|
||||
```bash
|
||||
uv add "crewai[bedrock]"
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Amazon SageMaker">
|
||||
@@ -750,7 +1079,7 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
|
||||
```
|
||||
|
||||
<Tip>
|
||||
[Click here](https://docs.crewai.com/concepts/event-listener#event-listeners) for more details
|
||||
[Click here](/en/concepts/event-listener#event-listeners) for more details
|
||||
</Tip>
|
||||
</Tab>
|
||||
|
||||
@@ -804,6 +1133,50 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## Async LLM Calls
|
||||
|
||||
CrewAI supports asynchronous LLM calls for improved performance and concurrency in your AI workflows. Async calls allow you to run multiple LLM requests concurrently without blocking, making them ideal for high-throughput applications and parallel agent operations.
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Basic Usage">
|
||||
Use the `acall` method for asynchronous LLM requests:
|
||||
|
||||
```python
|
||||
import asyncio
|
||||
from crewai import LLM
|
||||
|
||||
async def main():
|
||||
llm = LLM(model="openai/gpt-4o")
|
||||
|
||||
# Single async call
|
||||
response = await llm.acall("What is the capital of France?")
|
||||
print(response)
|
||||
|
||||
asyncio.run(main())
|
||||
```
|
||||
|
||||
The `acall` method supports all the same parameters as the synchronous `call` method, including messages, tools, and callbacks.
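
Because `acall` is a regular coroutine, you can also fan out several requests concurrently with `asyncio.gather`; a minimal sketch:

```python
import asyncio
from crewai import LLM

async def main():
    llm = LLM(model="openai/gpt-4o")

    questions = [
        "What is the capital of France?",
        "What is the capital of Japan?",
    ]
    # Run the requests concurrently instead of awaiting them one by one
    answers = await asyncio.gather(*(llm.acall(q) for q in questions))
    for question, answer in zip(questions, answers):
        print(question, "->", answer)

asyncio.run(main())
```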
|
||||
</Tab>
|
||||
|
||||
<Tab title="With Streaming">
|
||||
Combine async calls with streaming for real-time concurrent responses:
|
||||
|
||||
```python
|
||||
import asyncio
|
||||
from crewai import LLM
|
||||
|
||||
async def stream_async():
|
||||
llm = LLM(model="openai/gpt-4o", stream=True)
|
||||
|
||||
response = await llm.acall("Write a short story about AI")
|
||||
|
||||
print(response)
|
||||
|
||||
asyncio.run(stream_async())
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## Structured LLM Calls
|
||||
|
||||
CrewAI supports structured responses from LLM calls by allowing you to define a `response_format` using a Pydantic model. This enables the framework to automatically parse and validate the output, making it easier to integrate the response into your application without manual post-processing.
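
A minimal sketch of the pattern (the `CityInfo` schema is made up for illustration):

```python
from pydantic import BaseModel
from crewai import LLM

class CityInfo(BaseModel):  # hypothetical schema
    name: str
    country: str
    population: int

llm = LLM(
    model="openai/gpt-4o",
    response_format=CityInfo,
)

result = llm.call("Give me basic facts about Tokyo.")
print(result)  # parsed and validated against CityInfo, per the support described above
```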
|
||||
@@ -899,7 +1272,7 @@ Learn how to get the most out of your LLM configuration:
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Drop Additional Parameters">
|
||||
CrewAI internally uses Litellm for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
|
||||
CrewAI internally uses native SDKs for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
|
||||
For example, if you don't need to send the <code>stop</code> parameter, you can simply omit it from your LLM call:
|
||||
|
||||
```python
|
||||
@@ -915,6 +1288,52 @@ Learn how to get the most out of your LLM configuration:
|
||||
)
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Transport Interceptors">
|
||||
CrewAI provides message interceptors for several providers, allowing you to hook into request/response cycles at the transport layer.
|
||||
|
||||
**Supported Providers:**
|
||||
- ✅ OpenAI
|
||||
- ✅ Anthropic
|
||||
|
||||
**Basic Usage:**
|
||||
```python
|
||||
import httpx
|
||||
from crewai import LLM
|
||||
from crewai.llms.hooks import BaseInterceptor
|
||||
|
||||
class CustomInterceptor(BaseInterceptor[httpx.Request, httpx.Response]):
|
||||
"""Custom interceptor to modify requests and responses."""
|
||||
|
||||
def on_outbound(self, request: httpx.Request) -> httpx.Request:
|
||||
"""Print request before sending to the LLM provider."""
|
||||
print(request)
|
||||
return request
|
||||
|
||||
def on_inbound(self, response: httpx.Response) -> httpx.Response:
|
||||
"""Process response after receiving from the LLM provider."""
|
||||
print(f"Status: {response.status_code}")
|
||||
print(f"Response time: {response.elapsed}")
|
||||
return response
|
||||
|
||||
# Use the interceptor with an LLM
|
||||
llm = LLM(
|
||||
model="openai/gpt-4o",
|
||||
interceptor=CustomInterceptor()
|
||||
)
|
||||
```
|
||||
|
||||
**Important Notes:**
|
||||
- Both methods must return the received object (or an object of the same type).
- Modifying received objects may result in unexpected behavior or application crashes.
- Not all providers support interceptors; check the supported providers list above.
|
||||
|
||||
<Info>
|
||||
Interceptors operate at the transport layer. This is particularly useful for:
|
||||
- Message transformation and filtering
|
||||
- Debugging API interactions
|
||||
</Info>
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
## Common Issues and Solutions
|
||||
|
||||
@@ -341,7 +341,7 @@ crew = Crew(
|
||||
embedder={
|
||||
"provider": "openai",
|
||||
"config": {
|
||||
"model": "text-embedding-3-small" # or "text-embedding-3-large"
|
||||
"model_name": "text-embedding-3-small" # or "text-embedding-3-large"
|
||||
}
|
||||
}
|
||||
)
|
||||
@@ -353,7 +353,7 @@ crew = Crew(
|
||||
"provider": "openai",
|
||||
"config": {
|
||||
"api_key": "your-openai-api-key", # Optional: override env var
|
||||
"model": "text-embedding-3-large",
|
||||
"model_name": "text-embedding-3-large",
|
||||
"dimensions": 1536, # Optional: reduce dimensions for smaller storage
|
||||
"organization_id": "your-org-id" # Optional: for organization accounts
|
||||
}
|
||||
@@ -375,7 +375,7 @@ crew = Crew(
|
||||
"api_base": "https://your-resource.openai.azure.com/",
|
||||
"api_type": "azure",
|
||||
"api_version": "2023-05-15",
|
||||
"model": "text-embedding-3-small",
|
||||
"model_name": "text-embedding-3-small",
|
||||
"deployment_id": "your-deployment-name" # Azure deployment name
|
||||
}
|
||||
}
|
||||
@@ -390,10 +390,10 @@ Use Google's text embedding models for integration with Google Cloud services.
|
||||
crew = Crew(
|
||||
memory=True,
|
||||
embedder={
|
||||
"provider": "google",
|
||||
"provider": "google-generativeai",
|
||||
"config": {
|
||||
"api_key": "your-google-api-key",
|
||||
"model": "text-embedding-004" # or "text-embedding-preview-0409"
|
||||
"model_name": "gemini-embedding-001" # or "text-embedding-005", "text-multilingual-embedding-002"
|
||||
}
|
||||
}
|
||||
)
|
||||
@@ -461,7 +461,7 @@ crew = Crew(
|
||||
"provider": "cohere",
|
||||
"config": {
|
||||
"api_key": "your-cohere-api-key",
|
||||
"model": "embed-english-v3.0" # or "embed-multilingual-v3.0"
|
||||
"model_name": "embed-english-v3.0" # or "embed-multilingual-v3.0"
|
||||
}
|
||||
}
|
||||
)
|
||||
@@ -478,7 +478,7 @@ crew = Crew(
|
||||
"provider": "voyageai",
|
||||
"config": {
|
||||
"api_key": "your-voyage-api-key",
|
||||
"model": "voyage-large-2", # or "voyage-code-2" for code
|
||||
"model": "voyage-3", # or "voyage-3-lite", "voyage-code-3"
|
||||
"input_type": "document" # or "query"
|
||||
}
|
||||
}
|
||||
@@ -515,8 +515,7 @@ crew = Crew(
|
||||
"provider": "huggingface",
|
||||
"config": {
|
||||
"api_key": "your-hf-token", # Optional for public models
|
||||
"model": "sentence-transformers/all-MiniLM-L6-v2",
|
||||
"api_url": "https://api-inference.huggingface.co" # or your custom endpoint
|
||||
"model": "sentence-transformers/all-MiniLM-L6-v2"
|
||||
}
|
||||
}
|
||||
)
|
||||
@@ -912,10 +911,10 @@ crew = Crew(
|
||||
crew = Crew(
|
||||
memory=True,
|
||||
embedder={
|
||||
"provider": "google",
|
||||
"provider": "google-generativeai",
|
||||
"config": {
|
||||
"api_key": "your-api-key",
|
||||
"model": "text-embedding-004"
|
||||
"model_name": "gemini-embedding-001"
|
||||
}
|
||||
}
|
||||
)
|
||||
|
||||
154
docs/en/concepts/production-architecture.mdx
Normal file
@@ -0,0 +1,154 @@
|
||||
---
|
||||
title: Production Architecture
|
||||
description: Best practices for building production-ready AI applications with CrewAI
|
||||
icon: server
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
# The Flow-First Mindset
|
||||
|
||||
When building production AI applications with CrewAI, **we recommend starting with a Flow**.
|
||||
|
||||
While it's possible to run individual Crews or Agents, wrapping them in a Flow provides the necessary structure for a robust, scalable application.
|
||||
|
||||
## Why Flows?
|
||||
|
||||
1. **State Management**: Flows provide a built-in way to manage state across different steps of your application. This is crucial for passing data between Crews, maintaining context, and handling user inputs.
|
||||
2. **Control**: Flows allow you to define precise execution paths, including loops, conditionals, and branching logic. This is essential for handling edge cases and ensuring your application behaves predictably.
|
||||
3. **Observability**: Flows provide a clear structure that makes it easier to trace execution, debug issues, and monitor performance. We recommend using [CrewAI Tracing](/en/observability/tracing) for detailed insights. Simply run `crewai login` to enable free observability features.
|
||||
|
||||
## The Architecture
|
||||
|
||||
A typical production CrewAI application looks like this:
|
||||
|
||||
```mermaid
|
||||
graph TD
|
||||
Start((Start)) --> Flow[Flow Orchestrator]
|
||||
Flow --> State{State Management}
|
||||
State --> Step1[Step 1: Data Gathering]
|
||||
Step1 --> Crew1[Research Crew]
|
||||
Crew1 --> State
|
||||
State --> Step2{Condition Check}
|
||||
Step2 -- "Valid" --> Step3[Step 3: Execution]
|
||||
Step3 --> Crew2[Action Crew]
|
||||
Step2 -- "Invalid" --> End((End))
|
||||
Crew2 --> End
|
||||
```
|
||||
|
||||
### 1. The Flow Class
|
||||
Your `Flow` class is the entry point. It defines the state schema and the methods that execute your logic.
|
||||
|
||||
```python
|
||||
from crewai.flow.flow import Flow, listen, start
|
||||
from pydantic import BaseModel
|
||||
|
||||
class AppState(BaseModel):
|
||||
user_input: str = ""
|
||||
research_results: str = ""
|
||||
final_report: str = ""
|
||||
|
||||
class ProductionFlow(Flow[AppState]):
|
||||
@start()
|
||||
def gather_input(self):
|
||||
# ... logic to get input ...
|
||||
pass
|
||||
|
||||
@listen(gather_input)
|
||||
def run_research_crew(self):
|
||||
# ... trigger a Crew ...
|
||||
pass
|
||||
```
|
||||
|
||||
### 2. State Management
|
||||
Use Pydantic models to define your state. This ensures type safety and makes it clear what data is available at each step.
|
||||
|
||||
- **Keep it minimal**: Store only what you need to persist between steps.
|
||||
- **Use structured data**: Avoid unstructured dictionaries when possible; see the sketch below.
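For instance, a small, typed state object (a sketch reusing the pattern of `AppState` above) keeps each step's inputs and outputs explicit:

```python
from pydantic import BaseModel

class ReportState(BaseModel):
    topic: str = ""          # set from user input
    draft: str = ""          # filled by the research step
    approved: bool = False   # flipped by a later review step
```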
|
||||
|
||||
### 3. Crews as Units of Work
|
||||
Delegate complex tasks to Crews. A Crew should be focused on a specific goal (e.g., "Research a topic", "Write a blog post").
|
||||
|
||||
- **Don't over-engineer Crews**: Keep them focused.
|
||||
- **Pass state explicitly**: Pass the necessary data from the Flow state to the Crew inputs.
|
||||
|
||||
```python
|
||||
@listen(gather_input)
|
||||
def run_research_crew(self):
|
||||
crew = ResearchCrew()
|
||||
result = crew.kickoff(inputs={"topic": self.state.user_input})
|
||||
self.state.research_results = result.raw
|
||||
```
|
||||
|
||||
## Control Primitives
|
||||
|
||||
Leverage CrewAI's control primitives to add robustness and control to your Crews.
|
||||
|
||||
### 1. Task Guardrails
|
||||
Use [Task Guardrails](/en/concepts/tasks#task-guardrails) to validate task outputs before they are accepted. This ensures that your agents produce high-quality results.
|
||||
|
||||
```python
|
||||
from typing import Any, Tuple

from crewai import Task, TaskOutput


def validate_content(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
if len(result.raw) < 100:
|
||||
return (False, "Content is too short. Please expand.")
|
||||
return (True, result.raw)
|
||||
|
||||
task = Task(
|
||||
...,
|
||||
guardrail=validate_content
|
||||
)
|
||||
```
|
||||
|
||||
### 2. Structured Outputs
|
||||
Always use structured outputs (`output_pydantic` or `output_json`) when passing data between tasks or to your application. This prevents parsing errors and ensures type safety.
|
||||
|
||||
```python
|
||||
from typing import List

from pydantic import BaseModel


class ResearchResult(BaseModel):
|
||||
summary: str
|
||||
sources: List[str]
|
||||
|
||||
task = Task(
|
||||
...,
|
||||
output_pydantic=ResearchResult
|
||||
)
|
||||
```
|
||||
|
||||
### 3. LLM Hooks
|
||||
Use [LLM Hooks](/en/learn/llm-hooks) to inspect or modify messages before they are sent to the LLM, or to sanitize responses.
|
||||
|
||||
```python
|
||||
@before_llm_call
|
||||
def log_request(context):
|
||||
print(f"Agent {context.agent.role} is calling the LLM...")
|
||||
```
|
||||
|
||||
## Deployment Patterns
|
||||
|
||||
When deploying your Flow, consider the following:
|
||||
|
||||
### CrewAI Enterprise
|
||||
The easiest way to deploy your Flow is using CrewAI Enterprise. It handles the infrastructure, authentication, and monitoring for you.
|
||||
|
||||
Check out the [Deployment Guide](/en/enterprise/guides/deploy-crew) to get started.
|
||||
|
||||
```bash
|
||||
crewai deploy create
|
||||
```
|
||||
|
||||
### Async Execution
|
||||
For long-running tasks, use `kickoff_async` to avoid blocking your API.
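A minimal sketch, assuming the `ProductionFlow` defined earlier and an async entry point such as a web handler:

```python
import asyncio

async def main():
    flow = ProductionFlow()
    # kickoff_async runs the flow without blocking the event loop,
    # so an API handler can await it alongside other requests.
    result = await flow.kickoff_async(inputs={"user_input": "AI agents in production"})
    print(result)

asyncio.run(main())
```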
|
||||
|
||||
### Persistence
|
||||
Use the `@persist` decorator to save the state of your Flow to a database. This allows you to resume execution if the process crashes or if you need to wait for human input.
|
||||
|
||||
```python
|
||||
@persist
|
||||
class ProductionFlow(Flow[AppState]):
|
||||
# ...
|
||||
```
|
||||
|
||||
## Summary
|
||||
|
||||
- **Start with a Flow.**
|
||||
- **Define a clear State.**
|
||||
- **Use Crews for complex tasks.**
|
||||
- **Deploy with an API and persistence.**
|
||||
@@ -19,6 +19,7 @@ CrewAI AMP includes a Visual Task Builder in Crew Studio that simplifies complex
|
||||

|
||||
|
||||
The Visual Task Builder enables:
|
||||
|
||||
- Drag-and-drop task creation
|
||||
- Visual task dependencies and flow
|
||||
- Real-time testing and validation
|
||||
@@ -28,10 +29,12 @@ The Visual Task Builder enables:
|
||||
### Task Execution Flow
|
||||
|
||||
Tasks can be executed in two ways:
|
||||
|
||||
- **Sequential**: Tasks are executed in the order they are defined
|
||||
- **Hierarchical**: Tasks are assigned to agents based on their roles and expertise
|
||||
|
||||
The execution flow is defined when creating the crew:
|
||||
|
||||
```python Code
|
||||
crew = Crew(
|
||||
agents=[agent1, agent2],
|
||||
@@ -43,7 +46,7 @@ crew = Crew(
|
||||
## Task Attributes
|
||||
|
||||
| Attribute | Parameters | Type | Description |
|
||||
| :------------------------------- | :---------------- | :---------------------------- | :------------------------------------------------------------------------------------------------------------------- |
|
||||
| :------------------------------------- | :---------------------- | :-------------------------- | :-------------------------------------------------------------------------- |
|
||||
| **Description** | `description` | `str` | A clear, concise statement of what the task entails. |
|
||||
| **Expected Output** | `expected_output` | `str` | A detailed description of what the task's completion looks like. |
|
||||
| **Name** _(optional)_ | `name` | `Optional[str]` | A name identifier for the task. |
|
||||
@@ -60,11 +63,13 @@ crew = Crew(
|
||||
| **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | A Pydantic model for task output. |
|
||||
| **Callback** _(optional)_ | `callback` | `Optional[Any]` | Function/object to be executed after task completion. |
|
||||
| **Guardrail** _(optional)_ | `guardrail` | `Optional[Callable]` | Function to validate task output before proceeding to next task. |
|
||||
| **Guardrails** _(optional)_ | `guardrails` | `Optional[List[Callable] \| List[str]]` | List of guardrails to validate task output before proceeding to next task. |
|
||||
| **Guardrail Max Retries** _(optional)_ | `guardrail_max_retries` | `Optional[int]` | Maximum number of retries when guardrail validation fails. Defaults to 3. |
|
||||
|
||||
<Note type="warning" title="Deprecated: max_retries">
|
||||
The task attribute `max_retries` is deprecated and will be removed in v1.0.0.
|
||||
Use `guardrail_max_retries` instead to control retry attempts when a guardrail fails.
|
||||
Use `guardrail_max_retries` instead to control retry attempts when a guardrail
|
||||
fails.
|
||||
</Note>
|
||||
|
||||
## Creating Tasks
|
||||
@@ -86,7 +91,7 @@ crew.kickoff(inputs={'topic': 'AI Agents'})
|
||||
|
||||
Here's an example of how to configure tasks using YAML:
|
||||
|
||||
```yaml tasks.yaml
|
||||
````yaml tasks.yaml
|
||||
research_task:
|
||||
description: >
|
||||
Conduct a thorough research about {topic}
|
||||
@@ -106,7 +111,7 @@ reporting_task:
|
||||
agent: reporting_analyst
|
||||
markdown: true
|
||||
output_file: report.md
|
||||
```
|
||||
````
|
||||
|
||||
To use this YAML configuration in your code, create a crew class that inherits from `CrewBase`:
|
||||
|
||||
@@ -164,7 +169,8 @@ class LatestAiDevelopmentCrew():
|
||||
```
|
||||
|
||||
<Note>
|
||||
The names you use in your YAML files (`agents.yaml` and `tasks.yaml`) should match the method names in your Python code.
|
||||
The names you use in your YAML files (`agents.yaml` and `tasks.yaml`) should
|
||||
match the method names in your Python code.
|
||||
</Note>
|
||||
|
||||
### Direct Code Definition (Alternative)
|
||||
@@ -201,7 +207,8 @@ reporting_task = Task(
|
||||
```
|
||||
|
||||
<Tip>
|
||||
Directly specify an `agent` for assignment or let the `hierarchical` CrewAI's process decide based on roles, availability, etc.
|
||||
Directly specify an `agent` for assignment or let the `hierarchical` CrewAI's
|
||||
process decide based on roles, availability, etc.
|
||||
</Tip>
|
||||
|
||||
## Task Output
|
||||
@@ -223,6 +230,7 @@ By default, the `TaskOutput` will only include the `raw` output. A `TaskOutput`
|
||||
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the task. |
|
||||
| **Agent** | `agent` | `str` | The agent that executed the task. |
|
||||
| **Output Format** | `output_format` | `OutputFormat` | The format of the task output, with options including RAW, JSON, and Pydantic. The default is RAW. |
|
||||
| **Messages** | `messages` | `list[LLMMessage]` | The messages from the last task execution. |
|
||||
|
||||
### Task Methods and Properties
|
||||
|
||||
@@ -285,12 +293,13 @@ formatted_task = Task(
|
||||
```
|
||||
|
||||
When `markdown=True`, the agent will receive additional instructions to format the output using:
|
||||
|
||||
- `#` for headers
|
||||
- `**text**` for bold text
|
||||
- `*text*` for italic text
|
||||
- `-` or `*` for bullet points
|
||||
- `` `code` `` for inline code
|
||||
- ``` ```language ``` for code blocks
|
||||
- ` `language ``` for code blocks
|
||||
|
||||
### YAML Configuration with Markdown
|
||||
|
||||
@@ -313,7 +322,9 @@ analysis_task:
|
||||
- **Cross-Platform Compatibility**: Markdown is universally supported
|
||||
|
||||
<Note>
|
||||
The markdown formatting instructions are automatically added to the task prompt when `markdown=True`, so you don't need to specify formatting requirements in your task description.
|
||||
The markdown formatting instructions are automatically added to the task
|
||||
prompt when `markdown=True`, so you don't need to specify formatting
|
||||
requirements in your task description.
|
||||
</Note>
|
||||
|
||||
## Task Dependencies and Context
|
||||
@@ -341,7 +352,11 @@ Task guardrails provide a way to validate and transform task outputs before they
|
||||
are passed to the next task. This feature helps ensure data quality and provides
|
||||
feedback to agents when their output doesn't meet specific criteria.
|
||||
|
||||
Guardrails are implemented as Python functions that contain custom validation logic, giving you complete control over the validation process and ensuring reliable, deterministic results.
|
||||
CrewAI supports two types of guardrails:
|
||||
|
||||
1. **Function-based guardrails**: Python functions with custom validation logic, giving you complete control over the validation process and ensuring reliable, deterministic results.
|
||||
|
||||
2. **LLM-based guardrails**: String descriptions that use the agent's LLM to validate outputs based on natural language criteria. These are ideal for complex or subjective validation requirements.
|
||||
|
||||
### Function-Based Guardrails
|
||||
|
||||
@@ -355,12 +370,12 @@ def validate_blog_content(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
"""Validate blog content meets requirements."""
|
||||
try:
|
||||
# Check word count
|
||||
word_count = len(result.split())
|
||||
word_count = len(result.raw.split())
|
||||
if word_count > 200:
|
||||
return (False, "Blog content exceeds 200 words")
|
||||
|
||||
# Additional validation logic here
|
||||
return (True, result.strip())
|
||||
return (True, result.raw.strip())
|
||||
except Exception as e:
|
||||
return (False, "Unexpected error during validation")
|
||||
|
||||
@@ -372,9 +387,156 @@ blog_task = Task(
|
||||
)
|
||||
```
|
||||
|
||||
### LLM-Based Guardrails (String Descriptions)
|
||||
|
||||
Instead of writing custom validation functions, you can use string descriptions that leverage LLM-based validation. When you provide a string to the `guardrail` or `guardrails` parameter, CrewAI automatically creates an `LLMGuardrail` that uses the agent's LLM to validate the output based on your description.
|
||||
|
||||
**Requirements**:
|
||||
|
||||
- The task must have an `agent` assigned (the guardrail uses the agent's LLM)
|
||||
- Provide a clear, descriptive string explaining the validation criteria
|
||||
|
||||
```python Code
|
||||
from crewai import Task
|
||||
|
||||
# Single LLM-based guardrail
|
||||
blog_task = Task(
|
||||
description="Write a blog post about AI",
|
||||
expected_output="A blog post under 200 words",
|
||||
agent=blog_agent,
|
||||
guardrail="The blog post must be under 200 words and contain no technical jargon"
|
||||
)
|
||||
```
|
||||
|
||||
LLM-based guardrails are particularly useful for:
|
||||
|
||||
- **Complex validation logic** that's difficult to express programmatically
|
||||
- **Subjective criteria** like tone, style, or quality assessments
|
||||
- **Natural language requirements** that are easier to describe than code
|
||||
|
||||
The LLM guardrail will:
|
||||
|
||||
1. Analyze the task output against your description
|
||||
2. Return `(True, output)` if the output complies with the criteria
|
||||
3. Return `(False, feedback)` with specific feedback if validation fails
|
||||
|
||||
**Example with detailed validation criteria**:
|
||||
|
||||
```python Code
|
||||
research_task = Task(
|
||||
description="Research the latest developments in quantum computing",
|
||||
expected_output="A comprehensive research report",
|
||||
agent=researcher_agent,
|
||||
guardrail="""
|
||||
The research report must:
|
||||
- Be at least 1000 words long
|
||||
- Include at least 5 credible sources
|
||||
- Cover both technical and practical applications
|
||||
- Be written in a professional, academic tone
|
||||
- Avoid speculation or unverified claims
|
||||
"""
|
||||
)
|
||||
```
|
||||
|
||||
### Multiple Guardrails
|
||||
|
||||
You can apply multiple guardrails to a task using the `guardrails` parameter. Multiple guardrails are executed sequentially, with each guardrail receiving the output from the previous one. This allows you to chain validation and transformation steps.
|
||||
|
||||
The `guardrails` parameter accepts:
|
||||
|
||||
- A list of guardrail functions or string descriptions
|
||||
- A single guardrail function or string (same as `guardrail`)
|
||||
|
||||
**Note**: If `guardrails` is provided, it takes precedence over `guardrail`. The `guardrail` parameter will be ignored when `guardrails` is set.
|
||||
|
||||
```python Code
|
||||
from typing import Tuple, Any
|
||||
from crewai import TaskOutput, Task
|
||||
|
||||
def validate_word_count(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
"""Validate word count is within limits."""
|
||||
word_count = len(result.raw.split())
|
||||
if word_count < 100:
|
||||
return (False, f"Content too short: {word_count} words. Need at least 100 words.")
|
||||
if word_count > 500:
|
||||
return (False, f"Content too long: {word_count} words. Maximum is 500 words.")
|
||||
return (True, result.raw)
|
||||
|
||||
def validate_no_profanity(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
"""Check for inappropriate language."""
|
||||
profanity_words = ["badword1", "badword2"] # Example list
|
||||
content_lower = result.raw.lower()
|
||||
for word in profanity_words:
|
||||
if word in content_lower:
|
||||
return (False, f"Inappropriate language detected: {word}")
|
||||
return (True, result.raw)
|
||||
|
||||
def format_output(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
"""Format and clean the output."""
|
||||
formatted = result.raw.strip()
|
||||
# Capitalize first letter
|
||||
formatted = formatted[0].upper() + formatted[1:] if formatted else formatted
|
||||
return (True, formatted)
|
||||
|
||||
# Apply multiple guardrails sequentially
|
||||
blog_task = Task(
|
||||
description="Write a blog post about AI",
|
||||
expected_output="A well-formatted blog post between 100-500 words",
|
||||
agent=blog_agent,
|
||||
guardrails=[
|
||||
validate_word_count, # First: validate length
|
||||
validate_no_profanity, # Second: check content
|
||||
format_output # Third: format the result
|
||||
],
|
||||
guardrail_max_retries=3
|
||||
)
|
||||
```
|
||||
|
||||
In this example, the guardrails execute in order:
|
||||
|
||||
1. `validate_word_count` checks the word count
|
||||
2. `validate_no_profanity` checks for inappropriate language (using the output from step 1)
|
||||
3. `format_output` formats the final result (using the output from step 2)
|
||||
|
||||
If any guardrail fails, the error is sent back to the agent, and the task is retried up to `guardrail_max_retries` times.
|
||||
|
||||
**Mixing function-based and LLM-based guardrails**:
|
||||
|
||||
You can combine both function-based and string-based guardrails in the same list:
|
||||
|
||||
```python Code
|
||||
from typing import Tuple, Any
|
||||
from crewai import TaskOutput, Task
|
||||
|
||||
def validate_word_count(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
"""Validate word count is within limits."""
|
||||
word_count = len(result.raw.split())
|
||||
if word_count < 100:
|
||||
return (False, f"Content too short: {word_count} words. Need at least 100 words.")
|
||||
if word_count > 500:
|
||||
return (False, f"Content too long: {word_count} words. Maximum is 500 words.")
|
||||
return (True, result.raw)
|
||||
|
||||
# Mix function-based and LLM-based guardrails
|
||||
blog_task = Task(
|
||||
description="Write a blog post about AI",
|
||||
expected_output="A well-formatted blog post between 100-500 words",
|
||||
agent=blog_agent,
|
||||
guardrails=[
|
||||
validate_word_count, # Function-based: precise word count check
|
||||
"The content must be engaging and suitable for a general audience", # LLM-based: subjective quality check
|
||||
"The writing style should be clear, concise, and free of technical jargon" # LLM-based: style validation
|
||||
],
|
||||
guardrail_max_retries=3
|
||||
)
|
||||
```
|
||||
|
||||
This approach combines the precision of programmatic validation with the flexibility of LLM-based assessment for subjective criteria.
|
||||
|
||||
### Guardrail Function Requirements
|
||||
|
||||
1. **Function Signature**:
|
||||
|
||||
- Must accept exactly one parameter (the task output)
|
||||
- Should return a tuple of `(bool, Any)`
|
||||
- Type hints are recommended but optional
|
||||
@@ -383,11 +545,10 @@ blog_task = Task(
|
||||
- On success: it returns a tuple of `(bool, Any)`. For example: `(True, validated_result)`
|
||||
- On failure: it returns a tuple of `(bool, str)`. For example: `(False, "Error message explaining the failure")`
|
||||
|
||||
|
||||
|
||||
### Error Handling Best Practices
|
||||
|
||||
1. **Structured Error Responses**:
|
||||
|
||||
```python Code
|
||||
from crewai import TaskOutput, LLMGuardrail
|
||||
|
||||
@@ -403,11 +564,13 @@ def validate_with_context(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
```
|
||||
|
||||
2. **Error Categories**:
|
||||
|
||||
- Use specific error codes
|
||||
- Include relevant context
|
||||
- Provide actionable feedback
|
||||
|
||||
3. **Validation Chain**:
|
||||
|
||||
```python Code
|
||||
from typing import Any, Dict, List, Tuple, Union
|
||||
from crewai import TaskOutput
|
||||
@@ -434,6 +597,7 @@ def complex_validation(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
### Handling Guardrail Results
|
||||
|
||||
When a guardrail returns `(False, error)`:
|
||||
|
||||
1. The error is sent back to the agent
|
||||
2. The agent attempts to fix the issue
|
||||
3. The process repeats until:
|
||||
@@ -441,6 +605,7 @@ When a guardrail returns `(False, error)`:
|
||||
- Maximum retries are reached (`guardrail_max_retries`)
|
||||
|
||||
Example with retry handling:
|
||||
|
||||
```python Code
|
||||
from typing import Optional, Tuple, Union
|
||||
from crewai import TaskOutput, Task
|
||||
@@ -466,10 +631,12 @@ task = Task(
|
||||
## Getting Structured Consistent Outputs from Tasks
|
||||
|
||||
<Note>
|
||||
It's also important to note that the output of the final task of a crew becomes the final output of the actual crew itself.
|
||||
It's also important to note that the output of the final task of a crew
|
||||
becomes the final output of the actual crew itself.
|
||||
</Note>
|
||||
|
||||
### Using `output_pydantic`
|
||||
|
||||
The `output_pydantic` property allows you to define a Pydantic model that the task output should conform to. This ensures that the output is not only structured but also validated according to the Pydantic model.
|
||||
|
||||
Here's an example demonstrating how to use output_pydantic:
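A minimal sketch of the pattern, assuming a single-agent crew and a simple `Blog` model:

```python Code
from crewai import Agent, Crew, Task
from pydantic import BaseModel


class Blog(BaseModel):
    title: str
    content: str


blog_agent = Agent(
    role="Blog Writer",
    goal="Write short, engaging blog posts",
    backstory="An experienced technical writer.",
)

task1 = Task(
    description="Write a short blog post about AI agents",
    expected_output="A blog post with a title and content",
    agent=blog_agent,
    output_pydantic=Blog,  # output is validated against the Blog model
)

crew = Crew(agents=[blog_agent], tasks=[task1])
result = crew.kickoff()

print(result.pydantic.title)   # access via the validated Pydantic object
print(result["content"])       # dictionary-style access also works
```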
|
||||
@@ -539,18 +706,22 @@ print("Accessing Properties - Option 5")
|
||||
print("Blog:", result)
|
||||
|
||||
```
|
||||
|
||||
In this example:
|
||||
* A Pydantic model Blog is defined with title and content fields.
|
||||
* The task task1 uses the output_pydantic property to specify that its output should conform to the Blog model.
|
||||
* After executing the crew, you can access the structured output in multiple ways as shown.
|
||||
|
||||
- A Pydantic model Blog is defined with title and content fields.
|
||||
- The task task1 uses the output_pydantic property to specify that its output should conform to the Blog model.
|
||||
- After executing the crew, you can access the structured output in multiple ways as shown.
|
||||
|
||||
#### Explanation of Accessing the Output
|
||||
1. Dictionary-Style Indexing: You can directly access the fields using result["field_name"]. This works because the CrewOutput class implements the __getitem__ method.
|
||||
|
||||
1. Dictionary-Style Indexing: You can directly access the fields using `result["field_name"]`. This works because the `CrewOutput` class implements the `__getitem__` method.
|
||||
2. Directly from Pydantic Model: Access the attributes directly from the result.pydantic object.
|
||||
3. Using to_dict() Method: Convert the output to a dictionary and access the fields.
|
||||
4. Printing the Entire Object: Simply print the result object to see the structured output.
|
||||
|
||||
### Using `output_json`
|
||||
|
||||
The `output_json` property allows you to define the expected output in JSON format. This ensures that the task's output is a valid JSON structure that can be easily parsed and used in your application.
|
||||
|
||||
Here's an example demonstrating how to use `output_json`:
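A minimal sketch, assuming the same `Blog` model and agent as in the previous example:

```python Code
task1 = Task(
    description="Write a short blog post about AI agents",
    expected_output="A JSON object with title and content fields",
    agent=blog_agent,
    output_json=Blog,  # output is returned as JSON matching the Blog schema
)

crew = Crew(agents=[blog_agent], tasks=[task1])
result = crew.kickoff()

print(result["title"])  # dictionary-style access
print(result)           # prints the entire JSON output
```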
|
||||
@@ -610,14 +781,15 @@ print("Blog:", result)
|
||||
```
|
||||
|
||||
In this example:
|
||||
* A Pydantic model Blog is defined with title and content fields, which is used to specify the structure of the JSON output.
|
||||
* The task task1 uses the output_json property to indicate that it expects a JSON output conforming to the Blog model.
|
||||
* After executing the crew, you can access the structured JSON output in two ways as shown.
|
||||
|
||||
- A Pydantic model Blog is defined with title and content fields, which is used to specify the structure of the JSON output.
|
||||
- The task task1 uses the output_json property to indicate that it expects a JSON output conforming to the Blog model.
|
||||
- After executing the crew, you can access the structured JSON output in two ways as shown.
|
||||
|
||||
#### Explanation of Accessing the Output
|
||||
|
||||
1. Accessing Properties Using Dictionary-Style Indexing: You can access the fields directly using result["field_name"]. This is possible because the CrewOutput class implements the __getitem__ method, allowing you to treat the output like a dictionary. In this option, we're retrieving the title and content from the result.
|
||||
2. Printing the Entire Blog Object: By printing result, you get the string representation of the CrewOutput object. Since the __str__ method is implemented to return the JSON output, this will display the entire output as a formatted string representing the Blog object.
|
||||
1. Accessing Properties Using Dictionary-Style Indexing: You can access the fields directly using `result["field_name"]`. This is possible because the `CrewOutput` class implements the `__getitem__` method, allowing you to treat the output like a dictionary. In this option, we're retrieving the title and content from the result.
|
||||
2. Printing the Entire Blog Object: By printing `result`, you get the string representation of the `CrewOutput` object. Since the `__str__` method is implemented to return the JSON output, this will display the entire output as a formatted string representing the Blog object.
|
||||
|
||||
---
|
||||
|
||||
@@ -807,8 +979,6 @@ While creating and executing tasks, certain validation mechanisms are in place t
|
||||
|
||||
These validations help in maintaining the consistency and reliability of task executions within the crewAI framework.
|
||||
|
||||
|
||||
|
||||
## Creating Directories when Saving Files
|
||||
|
||||
The `create_directory` parameter controls whether CrewAI should automatically create directories when saving task outputs to files. This feature is particularly useful for organizing outputs and ensuring that file paths are correctly structured, especially when working with complex project hierarchies.
|
||||
@@ -870,12 +1040,14 @@ audit_task:
|
||||
### Use Cases
|
||||
|
||||
**Automatic Directory Creation (`create_directory=True`):**
|
||||
|
||||
- Development and prototyping environments
|
||||
- Dynamic report generation with date-based folders
|
||||
- Automated workflows where directory structure may vary
|
||||
- Multi-tenant applications with user-specific folders
|
||||
|
||||
**Manual Directory Management (`create_directory=False`):**
|
||||
|
||||
- Production environments with strict file system controls
|
||||
- Security-sensitive applications where directories must be pre-configured
|
||||
- Systems with specific permission requirements
|
||||
|
||||
@@ -20,6 +20,7 @@ enabling everything from simple searches to complex interactions and effective t
|
||||
CrewAI AMP provides a comprehensive Tools Repository with pre-built integrations for common business systems and APIs. Deploy agents with enterprise tools in minutes instead of days.
|
||||
|
||||
The Enterprise Tools Repository includes:
|
||||
|
||||
- Pre-built connectors for popular enterprise systems
|
||||
- Custom tool creation interface
|
||||
- Version control and sharing capabilities
|
||||
|
||||
@@ -37,7 +37,7 @@ you can use them locally or refine them to your needs.
|
||||
<Card title="Tools & Integrations" href="/en/enterprise/features/tools-and-integrations" icon="wrench">
|
||||
Connect external apps and manage internal tools your agents can use.
|
||||
</Card>
|
||||
<Card title="Tool Repository" href="/en/enterprise/features/tool-repository" icon="toolbox">
|
||||
<Card title="Tool Repository" href="/en/enterprise/guides/tool-repository#tool-repository" icon="toolbox">
|
||||
Publish and install tools to enhance your crews' capabilities.
|
||||
</Card>
|
||||
<Card title="Agents Repository" href="/en/enterprise/features/agent-repositories" icon="people-group">
|
||||
|
||||
@@ -31,7 +31,8 @@ You can configure users and roles in Settings → Roles.
|
||||
Go to <b>Settings → Roles</b> in CrewAI AMP.
|
||||
</Step>
|
||||
<Step title="Choose a role type">
|
||||
Use a predefined role (<b>Owner</b>, <b>Member</b>) or click <b>Create role</b> to define a custom one.
|
||||
Use a predefined role (<b>Owner</b>, <b>Member</b>) or click{" "}
|
||||
<b>Create role</b> to define a custom one.
|
||||
</Step>
|
||||
<Step title="Assign to members">
|
||||
Select users and assign the role. You can change this anytime.
|
||||
@@ -41,7 +42,7 @@ You can configure users and roles in Settings → Roles.
|
||||
### Configuration summary
|
||||
|
||||
| Area | Where to configure | Options |
|
||||
|:---|:---|:---|
|
||||
| :-------------------- | :--------------------------------- | :-------------------------------------- |
|
||||
| Users & Roles | Settings → Roles | Predefined: Owner, Member; Custom roles |
|
||||
| Automation visibility | Automation → Settings → Visibility | Private; Whitelist users/roles |
|
||||
|
||||
@@ -70,26 +71,30 @@ You can configure automation‑level access control in Automation → Settings
|
||||
Navigate to <b>Automation → Settings → Visibility</b>.
|
||||
</Step>
|
||||
<Step title="Set visibility">
|
||||
Choose <b>Private</b> to restrict access. The organization owner always retains access.
|
||||
Choose <b>Private</b> to restrict access. The organization owner always
|
||||
retains access.
|
||||
</Step>
|
||||
<Step title="Whitelist access">
|
||||
Add specific users and roles allowed to view, run, and access logs/metrics/settings.
|
||||
Add specific users and roles allowed to view, run, and access
|
||||
logs/metrics/settings.
|
||||
</Step>
|
||||
<Step title="Save and verify">
|
||||
Save changes, then confirm that non‑whitelisted users cannot view or run the automation.
|
||||
Save changes, then confirm that non‑whitelisted users cannot view or run the
|
||||
automation.
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
### Private visibility: access outcomes
|
||||
|
||||
| Action | Owner | Whitelisted user/role | Not whitelisted |
|
||||
|:---|:---|:---|:---|
|
||||
| :--------------------------- | :---- | :-------------------- | :-------------- |
|
||||
| View automation | ✓ | ✓ | ✗ |
|
||||
| Run automation/API | ✓ | ✓ | ✗ |
|
||||
| Access logs/metrics/settings | ✓ | ✓ | ✗ |
|
||||
|
||||
<Tip>
|
||||
The organization owner always has access. In private mode, only whitelisted users and roles can view, run, and access logs/metrics/settings.
|
||||
The organization owner always has access. In private mode, only whitelisted
|
||||
users and roles can view, run, and access logs/metrics/settings.
|
||||
</Tip>
|
||||
|
||||
<Frame>
|
||||
|
||||
@@ -22,6 +22,7 @@ Tools & Integrations is the central hub for connecting third‑party apps and ma
|
||||
|
||||
Connect enterprise‑grade applications (e.g., Gmail, Google Drive, HubSpot, Slack) via OAuth to enable agent actions.
|
||||
|
||||
{" "}
|
||||
<Steps>
|
||||
<Step title="Connect">
|
||||
Click <b>Connect</b> on an app and complete OAuth.
|
||||
@@ -34,9 +35,8 @@ Tools & Integrations is the central hub for connecting third‑party apps and ma
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
{" "}
|
||||
<Frame></Frame>
|
||||
|
||||
### Connect your Account
|
||||
|
||||
@@ -45,6 +45,7 @@ Tools & Integrations is the central hub for connecting third‑party apps and ma
|
||||
3. Complete the OAuth flow and grant scopes
|
||||
4. Copy your Enterprise Token from <Link href="https://app.crewai.com/crewai_plus/settings/integrations">Integration Settings</Link>
|
||||
|
||||
{" "}
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
@@ -59,8 +60,11 @@ Tools & Integrations is the central hub for connecting third‑party apps and ma
|
||||
|
||||
### Environment Variable Setup
|
||||
|
||||
{" "}
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
@@ -75,8 +79,10 @@ Tools & Integrations is the central hub for connecting third‑party apps and ma
|
||||
|
||||
### Usage Example
|
||||
|
||||
{" "}
|
||||
<Tip>
|
||||
Use the new streamlined approach to integrate enterprise apps. Simply specify the app and its actions directly in the Agent configuration.
|
||||
Use the new streamlined approach to integrate enterprise apps. Simply specify
|
||||
the app and its actions directly in the Agent configuration.
|
||||
</Tip>
|
||||
|
||||
```python
|
||||
@@ -134,6 +140,7 @@ Tools & Integrations is the central hub for connecting third‑party apps and ma
|
||||
|
||||
On a deployed crew, you can specify which actions are available for each integration from the service settings page.
|
||||
|
||||
{" "}
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
@@ -142,25 +149,26 @@ Tools & Integrations is the central hub for connecting third‑party apps and ma
|
||||
|
||||
You can scope each integration to a specific user. For example, a crew that connects to Google can use a specific user’s Gmail account.
|
||||
|
||||
<Tip>
|
||||
Useful when different teams/users must keep data access separated.
|
||||
</Tip>
|
||||
{" "}
|
||||
<Tip>Useful when different teams/users must keep data access separated.</Tip>
|
||||
|
||||
Use the `user_bearer_token` to scope authentication to the requesting user. If the user isn’t logged in, the crew won’t use connected integrations. Otherwise it falls back to the default bearer token configured for the deployment.
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
{" "}
|
||||
<Frame></Frame>
|
||||
|
||||
{" "}
|
||||
<div id="catalog"></div>
|
||||
### Catalog
|
||||
|
||||
#### Communication & Collaboration
|
||||
|
||||
- Gmail — Manage emails and drafts
|
||||
- Slack — Workspace notifications and alerts
|
||||
- Microsoft — Office 365 and Teams integration
|
||||
|
||||
#### Project Management
|
||||
|
||||
- Jira — Issue tracking and project management
|
||||
- ClickUp — Task and productivity management
|
||||
- Asana — Team task and project coordination
|
||||
@@ -169,15 +177,18 @@ Tools & Integrations is the central hub for connecting third‑party apps and ma
|
||||
- GitHub — Repository and issue management
|
||||
|
||||
#### Customer Relationship Management
|
||||
|
||||
- Salesforce — CRM account and opportunity management
|
||||
- HubSpot — Sales pipeline and contact management
|
||||
- Zendesk — Customer support ticket management
|
||||
|
||||
#### Business & Finance
|
||||
|
||||
- Stripe — Payment processing and customer management
|
||||
- Shopify — E‑commerce store and product management
|
||||
|
||||
#### Productivity & Storage
|
||||
|
||||
- Google Sheets — Spreadsheet data synchronization
|
||||
- Google Calendar — Event and schedule management
|
||||
- Box — File storage and document management
|
||||
@@ -191,35 +202,29 @@ Tools & Integrations is the central hub for connecting third‑party apps and ma
|
||||
|
||||
Create custom tools locally, publish them on CrewAI AMP Tool Repository and use them in your agents.
|
||||
|
||||
{" "}
|
||||
<Tip>
|
||||
Before running the commands below, make sure you log in to your CrewAI AMP account by running this command:
|
||||
```bash
|
||||
crewai login
|
||||
```
|
||||
Before running the commands below, make sure you log in to your CrewAI AMP
|
||||
account by running this command: ```bash crewai login ```
|
||||
</Tip>
|
||||
|
||||
{" "}
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
{" "}
|
||||
<Steps>
|
||||
<Step title="Create">
|
||||
Create a new tool locally.
|
||||
```bash
|
||||
crewai tool create your-tool
|
||||
```
|
||||
Create a new tool locally. ```bash crewai tool create your-tool ```
|
||||
</Step>
|
||||
<Step title="Publish">
|
||||
Publish the tool to the CrewAI AMP Tool Repository.
|
||||
```bash
|
||||
crewai tool publish
|
||||
```
|
||||
Publish the tool to the CrewAI AMP Tool Repository. ```bash crewai tool
|
||||
publish ```
|
||||
</Step>
|
||||
<Step title="Install">
|
||||
Install the tool from the CrewAI AMP Tool Repository.
|
||||
```bash
|
||||
crewai tool install your-tool
|
||||
```
|
||||
Install the tool from the CrewAI AMP Tool Repository. ```bash crewai tool
|
||||
install your-tool ```
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
@@ -231,9 +236,8 @@ Tools & Integrations is the central hub for connecting third‑party apps and ma
|
||||
- Version history and downloads
|
||||
- Team and role access
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
{" "}
|
||||
<Frame></Frame>
|
||||
|
||||
</Tab>
|
||||
</Tabs>
|
||||
@@ -241,10 +245,18 @@ Tools & Integrations is the central hub for connecting third‑party apps and ma
|
||||
## Related
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card title="Tool Repository" href="/en/enterprise/features/tool-repository" icon="toolbox">
|
||||
<Card
|
||||
title="Tool Repository"
|
||||
href="/en/enterprise/guides/tool-repository#tool-repository"
|
||||
icon="toolbox"
|
||||
>
|
||||
Create, publish, and version custom tools for your organization.
|
||||
</Card>
|
||||
<Card title="Webhook Automation" href="/en/enterprise/guides/webhook-automation" icon="bolt">
|
||||
<Card
|
||||
title="Webhook Automation"
|
||||
href="/en/enterprise/guides/webhook-automation"
|
||||
icon="bolt"
|
||||
>
|
||||
Automate workflows and integrate with external platforms and services.
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
@@ -20,9 +20,7 @@ Traces in CrewAI AMP are detailed execution records that capture every aspect of
|
||||
- Execution times
|
||||
- Cost estimates
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
<Frame></Frame>
|
||||
|
||||
## Accessing Traces
|
||||
|
||||
@@ -51,9 +49,7 @@ The top section displays high-level metrics about the execution:
|
||||
- **Execution Time**: Total duration of the crew run
|
||||
- **Estimated Cost**: Approximate cost based on token usage
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
<Frame></Frame>
|
||||
|
||||
### 2. Tasks & Agents
|
||||
|
||||
@@ -64,33 +60,25 @@ This section shows all tasks and agents that were part of the crew execution:
|
||||
- Status (completed/failed)
|
||||
- Individual execution time of the task
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
<Frame></Frame>
|
||||
|
||||
### 3. Final Output
|
||||
|
||||
Displays the final result produced by the crew after all tasks are completed.
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
<Frame></Frame>
|
||||
|
||||
### 4. Execution Timeline
|
||||
|
||||
A visual representation of when each task started and ended, helping you identify bottlenecks or parallel execution patterns.
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
<Frame></Frame>
|
||||
|
||||
### 5. Detailed Task View
|
||||
|
||||
When you click on a specific task in the timeline or task list, you'll see:
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
<Frame></Frame>
|
||||
|
||||
- **Task Key**: Unique identifier for the task
|
||||
- **Task ID**: Technical identifier in the system
|
||||
@@ -104,7 +92,6 @@ When you click on a specific task in the timeline or task list, you'll see:
|
||||
- **Input**: Any input provided to this task from previous tasks
|
||||
- **Output**: The actual result produced by the agent
|
||||
|
||||
|
||||
## Using Traces for Debugging
|
||||
|
||||
Traces are invaluable for troubleshooting issues with your crews:
|
||||
@@ -121,6 +108,7 @@ Traces are invaluable for troubleshooting issues with your crews:
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Optimize Performance">
|
||||
@@ -130,6 +118,7 @@ Traces are invaluable for troubleshooting issues with your crews:
|
||||
- Excessive token usage
|
||||
- Redundant tool operations
|
||||
- Unnecessary API calls
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Improve Cost Efficiency">
|
||||
@@ -139,6 +128,7 @@ Traces are invaluable for troubleshooting issues with your crews:
|
||||
- Refine prompts to be more concise
|
||||
- Cache frequently accessed information
|
||||
- Structure tasks to minimize redundant operations
|
||||
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
@@ -153,5 +143,6 @@ CrewAI batches trace uploads to reduce overhead on high-volume runs:
|
||||
This yields more stable tracing under load while preserving detailed task/agent telemetry.
|
||||
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with trace analysis or any other CrewAI AMP features.
|
||||
Contact our support team for assistance with trace analysis or any other
|
||||
CrewAI AMP features.
|
||||
</Card>
|
||||
|
||||
@@ -55,7 +55,7 @@ Each webhook sends a list of events:
|
||||
}
|
||||
```
|
||||
|
||||
The `data` object structure varies by event type. Refer to the [event list](https://github.com/crewAIInc/crewAI/tree/main/src/crewai/utilities/events) on GitHub.
|
||||
The `data` object structure varies by event type. Refer to the [event list](https://github.com/crewAIInc/crewAI/tree/main/lib/crewai/src/crewai/events/types) on GitHub.
|
||||
|
||||
As requests are sent over HTTP, the order of events can't be guaranteed. If you need ordering, use the `timestamp` field.
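A small sketch of restoring order on the consumer side, assuming each event in the delivered list carries the `timestamp` field described above:

```python
def handle_webhook(events: list[dict]) -> None:
    """Order webhook events by their timestamp before processing."""
    for event in sorted(events, key=lambda e: e["timestamp"]):
        ...  # process each event in order
```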
|
||||
|
||||
@@ -159,10 +159,15 @@ Event names match the internal event bus. See GitHub for the full list of events
|
||||
You can emit your own custom events, and they will be delivered through the webhook stream alongside system events.
|
||||
|
||||
<CardGroup>
|
||||
<Card title="GitHub" icon="github" href="https://github.com/crewAIInc/crewAI/tree/main/src/crewai/utilities/events">
|
||||
<Card
|
||||
title="GitHub"
|
||||
icon="github"
|
||||
href="https://github.com/crewAIInc/crewAI/tree/main/src/crewai/utilities/events"
|
||||
>
|
||||
Full list of events
|
||||
</Card>
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with webhook integration or troubleshooting.
|
||||
Contact our support team for assistance with webhook integration or
|
||||
troubleshooting.
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
@@ -20,36 +20,60 @@ Deep-dive guides walk through setup and sample workflows for each integration:
|
||||
<a href="/en/enterprise/guides/gmail-trigger">Enable crews when emails arrive or threads update.</a>
|
||||
</Card>
|
||||
|
||||
{" "}
|
||||
<Card title="Google Calendar Trigger" icon="calendar-days">
|
||||
<a href="/en/enterprise/guides/google-calendar-trigger">React to calendar events as they are created, updated, or cancelled.</a>
|
||||
<a href="/en/enterprise/guides/google-calendar-trigger">
|
||||
React to calendar events as they are created, updated, or cancelled.
|
||||
</a>
|
||||
</Card>
|
||||
|
||||
{" "}
|
||||
<Card title="Google Drive Trigger" icon="folder-open">
|
||||
<a href="/en/enterprise/guides/google-drive-trigger">Handle Drive file uploads, edits, and deletions.</a>
|
||||
<a href="/en/enterprise/guides/google-drive-trigger">
|
||||
Handle Drive file uploads, edits, and deletions.
|
||||
</a>
|
||||
</Card>
|
||||
|
||||
{" "}
|
||||
<Card title="Outlook Trigger" icon="envelope-open">
|
||||
<a href="/en/enterprise/guides/outlook-trigger">Automate responses to new Outlook messages and calendar updates.</a>
|
||||
<a href="/en/enterprise/guides/outlook-trigger">
|
||||
Automate responses to new Outlook messages and calendar updates.
|
||||
</a>
|
||||
</Card>
|
||||
|
||||
{" "}
|
||||
<Card title="OneDrive Trigger" icon="cloud">
|
||||
<a href="/en/enterprise/guides/onedrive-trigger">Audit file activity and sharing changes in OneDrive.</a>
|
||||
<a href="/en/enterprise/guides/onedrive-trigger">
|
||||
Audit file activity and sharing changes in OneDrive.
|
||||
</a>
|
||||
</Card>
|
||||
|
||||
{" "}
|
||||
<Card title="Microsoft Teams Trigger" icon="comments">
|
||||
<a href="/en/enterprise/guides/microsoft-teams-trigger">Kick off workflows when new Teams chats start.</a>
|
||||
<a href="/en/enterprise/guides/microsoft-teams-trigger">
|
||||
Kick off workflows when new Teams chats start.
|
||||
</a>
|
||||
</Card>
|
||||
|
||||
{" "}
|
||||
<Card title="HubSpot Trigger" icon="hubspot">
|
||||
<a href="/en/enterprise/guides/hubspot-trigger">Launch automations from HubSpot workflows and lifecycle events.</a>
|
||||
<a href="/en/enterprise/guides/hubspot-trigger">
|
||||
Launch automations from HubSpot workflows and lifecycle events.
|
||||
</a>
|
||||
</Card>
|
||||
|
||||
{" "}
|
||||
<Card title="Salesforce Trigger" icon="salesforce">
|
||||
<a href="/en/enterprise/guides/salesforce-trigger">Connect Salesforce processes to CrewAI for CRM automation.</a>
|
||||
<a href="/en/enterprise/guides/salesforce-trigger">
|
||||
Connect Salesforce processes to CrewAI for CRM automation.
|
||||
</a>
|
||||
</Card>
|
||||
|
||||
{" "}
|
||||
<Card title="Slack Trigger" icon="slack">
|
||||
<a href="/en/enterprise/guides/slack-trigger">Start crews directly from Slack slash commands.</a>
|
||||
<a href="/en/enterprise/guides/slack-trigger">
|
||||
Start crews directly from Slack slash commands.
|
||||
</a>
|
||||
</Card>
|
||||
|
||||
<Card title="Zapier Trigger" icon="bolt">
|
||||
@@ -76,7 +100,10 @@ To access and manage your automation triggers:
|
||||
2. Click on the **Triggers** tab to view all available trigger integrations
|
||||
|
||||
<Frame caption="Example of available automation triggers for a Gmail deployment">
|
||||
<img src="/images/enterprise/list-available-triggers.png" alt="List of available automation triggers" />
|
||||
<img
|
||||
src="/images/enterprise/list-available-triggers.png"
|
||||
alt="List of available automation triggers"
|
||||
/>
|
||||
</Frame>
|
||||
|
||||
This view shows all the trigger integrations available for your deployment, along with their current connection status.
|
||||
@@ -86,7 +113,10 @@ This view shows all the trigger integrations available for your deployment, alon
|
||||
Each trigger can be easily enabled or disabled using the toggle switch:
|
||||
|
||||
<Frame caption="Enable or disable triggers with toggle">
|
||||
<img src="/images/enterprise/trigger-selected.png" alt="Enable or disable triggers with toggle" />
|
||||
<img
|
||||
src="/images/enterprise/trigger-selected.png"
|
||||
alt="Enable or disable triggers with toggle"
|
||||
/>
|
||||
</Frame>
|
||||
|
||||
- **Enabled (blue toggle)**: The trigger is active and will automatically execute your deployment when the specified events occur
|
||||
@@ -99,7 +129,10 @@ Simply click the toggle to change the trigger state. Changes take effect immedia
|
||||
Track the performance and history of your triggered executions:
|
||||
|
||||
<Frame caption="List of executions triggered by automation">
|
||||
<img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
|
||||
<img
|
||||
src="/images/enterprise/list-executions.png"
|
||||
alt="List of executions triggered by automation"
|
||||
/>
|
||||
</Frame>
|
||||
|
||||
## Building Trigger-Driven Automations
|
||||
@@ -130,6 +163,7 @@ crewai triggers list
|
||||
```
|
||||
|
||||
This command displays all triggers available based on your connected integrations, showing:
|
||||
|
||||
- Integration name and connection status
|
||||
- Available trigger types
|
||||
- Trigger names and descriptions
|
||||
@@ -149,6 +183,7 @@ crewai triggers run microsoft_onedrive/file_changed
|
||||
```
|
||||
|
||||
This command:
|
||||
|
||||
- Executes your crew locally
|
||||
- Passes a complete, realistic trigger payload
|
||||
- Simulates exactly how your crew will be called in production
|
||||
@@ -161,7 +196,6 @@ This command:
|
||||
- If your crew expects parameters that aren't in the trigger payload, execution may fail
|
||||
</Warning>
|
||||
|
||||
|
||||
### Triggers with Crew
|
||||
|
||||
Your existing crew definitions work seamlessly with triggers; you just need a task that parses the received payload, as sketched below:
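A hedged sketch of such a task (the agent and wording are illustrative, not a required shape):

```python
from crewai import Agent, Task

payload_analyst = Agent(
    role="Payload Analyst",
    goal="Turn raw trigger payloads into structured context for the rest of the crew",
    backstory="Experienced at reading webhook and event payloads.",
)

parse_trigger_task = Task(
    description=(
        "Parse the incoming trigger payload and extract the fields the rest "
        "of the crew needs (sender, subject, identifiers, timestamps)."
    ),
    expected_output="A structured summary of the trigger payload",
    agent=payload_analyst,
)
```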
|
||||
@@ -193,10 +227,12 @@ class MyAutomatedCrew:
|
||||
The crew will automatically receive and can access the trigger payload through the standard CrewAI context mechanisms.
|
||||
|
||||
<Note>
|
||||
Crew and Flow inputs can include `crewai_trigger_payload`. CrewAI automatically injects this payload:
|
||||
- Tasks: appended to the first task's description by default ("Trigger Payload: {crewai_trigger_payload}")
|
||||
- Control via `allow_crewai_trigger_context`: set `True` to always inject, `False` to never inject
|
||||
- Flows: any `@start()` method that accepts a `crewai_trigger_payload` parameter will receive it
|
||||
Crew and Flow inputs can include `crewai_trigger_payload`. CrewAI
|
||||
automatically injects this payload: - Tasks: appended to the first task's
|
||||
description by default ("Trigger Payload: {crewai_trigger_payload}") - Control
|
||||
via `allow_crewai_trigger_context`: set `True` to always inject, `False` to
|
||||
never inject - Flows: any `@start()` method that accepts a
|
||||
`crewai_trigger_payload` parameter will receive it
|
||||
</Note>
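A hedged sketch of the task-level control mentioned in the note, assuming `allow_crewai_trigger_context` is passed directly on the `Task`:

```python
from crewai import Agent, Task

triage_agent = Agent(
    role="Event Triage",
    goal="Route incoming events to the right workflow",
    backstory="Knows the team's escalation rules.",
)

triage_task = Task(
    description="Triage the incoming event and decide which workflow should handle it",
    expected_output="A routing decision with a short justification",
    agent=triage_agent,
    allow_crewai_trigger_context=True,  # always inject the trigger payload
)
```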
|
||||
|
||||
### Integration with Flows
|
||||
@@ -264,17 +300,20 @@ def delegate_to_crew(self, crewai_trigger_payload: dict = None):
|
||||
## Troubleshooting
|
||||
|
||||
**Trigger not firing:**
|
||||
|
||||
- Verify the trigger is enabled in your deployment's Triggers tab
|
||||
- Check integration connection status under Tools & Integrations
|
||||
- Ensure all required environment variables are properly configured
|
||||
|
||||
**Execution failures:**
|
||||
|
||||
- Check the execution logs for error details
|
||||
- Use `crewai triggers run <trigger_name>` to test locally and see the exact payload structure
|
||||
- Verify your crew can handle the `crewai_trigger_payload` parameter
|
||||
- Ensure your crew doesn't expect parameters that aren't included in the trigger payload
|
||||
|
||||
**Development issues:**
|
||||
|
||||
- Always test with `crewai triggers run <trigger>` before deploying to see the complete payload
|
||||
- Remember that `crewai run` does NOT simulate trigger calls—use `crewai triggers run` instead
|
||||
- Use `crewai triggers list` to verify which triggers are available for your connected integrations
|
||||
|
||||
@@ -37,6 +37,7 @@ This guide walks you through connecting Azure OpenAI with Crew Studio for seamle
|
||||
- Navigate to `Resource Management > Networking`.
|
||||
- Ensure that `Allow access from all networks` is enabled. If this setting is restricted, CrewAI may be blocked from accessing your Azure OpenAI endpoint.
|
||||
</Step>
|
||||
|
||||
</Steps>
|
||||
|
||||
## Verification
|
||||
@@ -46,6 +47,7 @@ You're all set! Crew Studio will now use your Azure OpenAI connection. Test the
|
||||
## Troubleshooting
|
||||
|
||||
If you encounter issues:
|
||||
|
||||
- Verify the Target URI format matches the expected pattern
|
||||
- Check that the API key is correct and has proper permissions
|
||||
- Ensure network access is configured to allow CrewAI connections
|
||||
|
||||
@@ -22,21 +22,27 @@ mode: "wide"
|
||||
|
||||
### Installation and Setup
|
||||
|
||||
<Card title="Follow Standard Installation" icon="wrench" href="/en/installation">
|
||||
Follow our standard installation guide to set up CrewAI CLI and create your first project.
|
||||
<Card
|
||||
title="Follow Standard Installation"
|
||||
icon="wrench"
|
||||
href="/en/installation"
|
||||
>
|
||||
Follow our standard installation guide to set up CrewAI CLI and create your
|
||||
first project.
|
||||
</Card>
|
||||
|
||||
### Building Your Crew
|
||||
|
||||
<Card title="Quickstart Tutorial" icon="rocket" href="/en/quickstart">
|
||||
Follow our quickstart guide to create your first agent crew using YAML configuration.
|
||||
Follow our quickstart guide to create your first agent crew using YAML
|
||||
configuration.
|
||||
</Card>
|
||||
|
||||
## Support and Resources
|
||||
|
||||
For Enterprise-specific support or questions, contact our dedicated support team at [support@crewai.com](mailto:support@crewai.com).
|
||||
|
||||
|
||||
<Card title="Schedule a Demo" icon="calendar" href="mailto:support@crewai.com">
|
||||
Book time with our team to learn more about Enterprise features and how they can benefit your organization.
|
||||
Book time with our team to learn more about Enterprise features and how they
|
||||
can benefit your organization.
|
||||
</Card>
|
||||
|
||||
@@ -14,22 +14,17 @@ CrewAI AMP provides a powerful way to capture telemetry logs from your deploymen
|
||||
Your organization should have ENTERPRISE OTEL SETUP enabled
|
||||
</Card>
|
||||
<Card title="OTEL collector setup" icon="server">
|
||||
Your organization should have an OTEL collector setup or a provider like Datadog log intake setup
|
||||
Your organization should have an OTEL collector setup or a provider like
|
||||
Datadog log intake setup
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
|
||||
## How to capture telemetry logs
|
||||
|
||||
1. Go to the Settings → Organization tab
|
||||
2. Configure your OTEL collector setup
|
||||
3. Save
|
||||
|
||||
|
||||
|
||||
Example: setting up OTEL log collection to Datadog.
|
||||
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
<Frame></Frame>
|
||||
|
||||
@@ -6,17 +6,21 @@ mode: "wide"
|
||||
---
|
||||
|
||||
<Note>
|
||||
After creating a crew locally or through Crew Studio, the next step is deploying it to the CrewAI AMP platform. This guide covers multiple deployment methods to help you choose the best approach for your workflow.
|
||||
After creating a crew locally or through Crew Studio, the next step is
|
||||
deploying it to the CrewAI AMP platform. This guide covers multiple deployment
|
||||
methods to help you choose the best approach for your workflow.
|
||||
</Note>
|
||||
|
||||
## Prerequisites
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card title="Crew Ready for Deployment" icon="users">
|
||||
You should have a working crew either built locally or created through Crew Studio
|
||||
You should have a working crew either built locally or created through Crew
|
||||
Studio
|
||||
</Card>
|
||||
<Card title="GitHub Repository" icon="github">
|
||||
Your crew code should be in a GitHub repository (for GitHub integration method)
|
||||
Your crew code should be in a GitHub repository (for GitHub integration
|
||||
method)
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
@@ -187,10 +191,102 @@ You can also deploy your crews directly through the CrewAI AMP web interface by
|
||||
|
||||
</Steps>
|
||||
|
||||
## Option 3: Redeploy Using API (CI/CD Integration)
|
||||
|
||||
For automated deployments in CI/CD pipelines, you can use the CrewAI API to trigger redeployments of existing crews. This is particularly useful for GitHub Actions, Jenkins, or other automation workflows.
|
||||
|
||||
<Steps>
|
||||
<Step title="Get Your Personal Access Token">
|
||||
|
||||
Navigate to your CrewAI AMP account settings to generate an API token:
|
||||
|
||||
1. Go to [app.crewai.com](https://app.crewai.com)
|
||||
2. Click on **Settings** → **Account** → **Personal Access Token**
|
||||
3. Generate a new token and copy it securely
|
||||
4. Store this token as a secret in your CI/CD system
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Find Your Automation UUID">
|
||||
|
||||
Locate the unique identifier for your deployed crew:
|
||||
|
||||
1. Go to **Automations** in your CrewAI AMP dashboard
|
||||
2. Select your existing automation/crew
|
||||
3. Click on **Additional Details**
|
||||
4. Copy the **UUID** - this identifies your specific crew deployment
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Trigger Redeployment via API">
|
||||
|
||||
Use the Deploy API endpoint to trigger a redeployment:
|
||||
|
||||
```bash
|
||||
curl -i -X POST \
|
||||
-H "Authorization: Bearer YOUR_PERSONAL_ACCESS_TOKEN" \
|
||||
https://app.crewai.com/crewai_plus/api/v1/crews/YOUR-AUTOMATION-UUID/deploy
|
||||
|
||||
# HTTP/2 200
|
||||
# content-type: application/json
|
||||
#
|
||||
# {
|
||||
# "uuid": "your-automation-uuid",
|
||||
# "status": "Deploy Enqueued",
|
||||
# "public_url": "https://your-crew-deployment.crewai.com",
|
||||
# "token": "your-bearer-token"
|
||||
# }
|
||||
```
|
||||
|
||||
<Info>
|
||||
If your automation was first created connected to Git, the API will automatically pull the latest changes from your repository before redeploying.
|
||||
</Info>
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="GitHub Actions Integration Example">
|
||||
|
||||
Here's a GitHub Actions workflow with more complex deployment triggers:
|
||||
|
||||
```yaml
|
||||
name: Deploy CrewAI Automation
|
||||
|
||||
on:
|
||||
push:
|
||||
branches: [ main ]
|
||||
pull_request:
|
||||
types: [ labeled ]
|
||||
release:
|
||||
types: [ published ]
|
||||
|
||||
jobs:
|
||||
deploy:
|
||||
runs-on: ubuntu-latest
|
||||
if: |
|
||||
(github.event_name == 'push' && github.ref == 'refs/heads/main') ||
|
||||
(github.event_name == 'pull_request' && contains(github.event.pull_request.labels.*.name, 'deploy')) ||
|
||||
(github.event_name == 'release')
|
||||
steps:
|
||||
- name: Trigger CrewAI Redeployment
|
||||
run: |
|
||||
curl -X POST \
|
||||
-H "Authorization: Bearer ${{ secrets.CREWAI_PAT }}" \
|
||||
https://app.crewai.com/crewai_plus/api/v1/crews/${{ secrets.CREWAI_AUTOMATION_UUID }}/deploy
|
||||
```
|
||||
|
||||
<Tip>
|
||||
Add `CREWAI_PAT` and `CREWAI_AUTOMATION_UUID` as repository secrets. For PR deployments, add a "deploy" label to trigger the workflow.
|
||||
</Tip>
|
||||
|
||||
</Step>
|
||||
|
||||
</Steps>
|
||||
|
||||
## ⚠️ Environment Variable Security Requirements
|
||||
|
||||
<Warning>
|
||||
**Important**: CrewAI AMP has security restrictions on environment variable names that can cause deployment failures if not followed.
|
||||
**Important**: CrewAI AMP has security restrictions on environment variable
|
||||
names that can cause deployment failures if not followed.
|
||||
</Warning>
|
||||
|
||||
### Blocked Environment Variable Patterns
|
||||
@@ -198,12 +294,14 @@ You can also deploy your crews directly through the CrewAI AMP web interface by
|
||||
For security reasons, the following environment variable naming patterns are **automatically filtered** and will cause deployment issues:
|
||||
|
||||
**Blocked Patterns:**
|
||||
|
||||
- Variables ending with `_TOKEN` (e.g., `MY_API_TOKEN`)
|
||||
- Variables ending with `_PASSWORD` (e.g., `DB_PASSWORD`)
|
||||
- Variables ending with `_SECRET` (e.g., `API_SECRET`)
|
||||
- Variables ending with `_KEY` in certain contexts
|
||||
|
||||
**Specific Blocked Variables:**
|
||||
|
||||
- `GITHUB_USER`, `GITHUB_TOKEN`
|
||||
- `AWS_REGION`, `AWS_DEFAULT_REGION`
|
||||
- Various internal CrewAI system variables
|
||||
@@ -211,6 +309,7 @@ For security reasons, the following environment variable naming patterns are **a
|
||||
### Allowed Exceptions
|
||||
|
||||
Some variables are explicitly allowed despite matching blocked patterns:
|
||||
|
||||
- `AZURE_AD_TOKEN`
|
||||
- `AZURE_OPENAI_AD_TOKEN`
|
||||
- `ENTERPRISE_ACTION_TOKEN`
|
||||
@@ -240,7 +339,8 @@ API_CONFIG=secret123
|
||||
4. **Document changes**: Keep track of renamed variables for your team
|
||||
|
||||
<Tip>
|
||||
If you encounter deployment failures with cryptic environment variable errors, check your variable names against these patterns first.
|
||||
If you encounter deployment failures with cryptic environment variable errors,
|
||||
check your variable names against these patterns first.
|
||||
</Tip>
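
Since these name checks only surface at deploy time, it can save a failed deployment to screen your variable names locally first. The following is a minimal sketch of such a pre-flight check in Python; the suffix, block, and allow lists simply mirror the patterns documented above and may not match the platform's actual filter exactly.

```python
import os

# These lists mirror the documented rules; the platform's real filter may differ.
BLOCKED_SUFFIXES = ("_TOKEN", "_PASSWORD", "_SECRET")
BLOCKED_NAMES = {"GITHUB_USER", "GITHUB_TOKEN", "AWS_REGION", "AWS_DEFAULT_REGION"}
ALLOWED_EXCEPTIONS = {"AZURE_AD_TOKEN", "AZURE_OPENAI_AD_TOKEN", "ENTERPRISE_ACTION_TOKEN"}


def flag_blocked_variables(names):
    """Return the variable names that are likely to be filtered by CrewAI AMP."""
    flagged = []
    for name in names:
        if name in ALLOWED_EXCEPTIONS:
            continue
        if name in BLOCKED_NAMES or name.endswith(BLOCKED_SUFFIXES):
            flagged.append(name)
    return flagged


if __name__ == "__main__":
    for name in flag_blocked_variables(os.environ):
        print(f"Rename before deploying: {name}")
```
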
### Interact with Your Deployed Crew
|
||||
@@ -248,6 +348,7 @@ If you encounter deployment failures with cryptic environment variable errors, c

Once deployment is complete, you can access your crew through:

1. **REST API**: The platform generates a unique HTTPS endpoint with these key routes (see the sketch after this list):

   - `/inputs`: Lists the required input parameters
   - `/kickoff`: Initiates an execution with the provided inputs
   - `/status/{kickoff_id}`: Checks the execution status

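To make these routes concrete, here is a minimal Python sketch of calling a deployed crew. The base URL and bearer token are placeholders, and the assumption that `/kickoff` accepts a JSON body with an `inputs` object and returns a `kickoff_id` should be checked against your own deployment's responses.

```python
import requests

BASE_URL = "https://your-crew-deployment.crewai.com"  # placeholder deployment URL
HEADERS = {"Authorization": "Bearer YOUR_CREW_BEARER_TOKEN"}  # placeholder token

# 1. Discover which inputs the crew expects.
required_inputs = requests.get(f"{BASE_URL}/inputs", headers=HEADERS, timeout=30).json()
print("Required inputs:", required_inputs)

# 2. Kick off an execution. The {"inputs": {...}} body shape is an assumption;
#    confirm it against the Test Endpoint view for your deployment.
response = requests.post(
    f"{BASE_URL}/kickoff",
    headers=HEADERS,
    json={"inputs": {"topic": "AI agent frameworks"}},
    timeout=30,
)
kickoff_id = response.json().get("kickoff_id")
print("Kickoff ID:", kickoff_id)
```
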
@@ -287,5 +388,6 @@ The Enterprise platform also offers:
|
||||
- **Crew Studio**: Build crews through a chat interface without writing code
|
||||
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with deployment issues or questions about the Enterprise platform.
|
||||
Contact our support team for assistance with deployment issues or questions
|
||||
about the Enterprise platform.
|
||||
</Card>
|
||||
|
||||
@@ -6,7 +6,8 @@ mode: "wide"
|
||||
---
|
||||
|
||||
<Tip>
|
||||
Crew Studio is a powerful **no-code/low-code** tool that allows you to quickly scaffold or build Crews through a conversational interface.
|
||||
Crew Studio is a powerful **no-code/low-code** tool that allows you to quickly
|
||||
scaffold or build Crews through a conversational interface.
|
||||
</Tip>
|
||||
|
||||
## What is Crew Studio?
|
||||
@@ -52,6 +53,7 @@ Before you can start using Crew Studio, you need to configure your LLM connectio
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Verify Connection Added">
|
||||
@@ -60,6 +62,7 @@ Before you can start using Crew Studio, you need to configure your LLM connectio
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Configure LLM Defaults">
|
||||
@@ -73,6 +76,7 @@ Before you can start using Crew Studio, you need to configure your LLM connectio
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
@@ -93,6 +97,7 @@ Now that you've configured your LLM connection and default settings, you're read
|
||||
```
|
||||
|
||||
The Crew Assistant will ask clarifying questions to better understand your requirements.
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Review Generated Crew">
|
||||
@@ -104,6 +109,7 @@ Now that you've configured your LLM connection and default settings, you're read
|
||||
- Tools to be used
|
||||
|
||||
This is your opportunity to refine the configuration before proceeding.
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Deploy or Download">
|
||||
@@ -112,6 +118,7 @@ Now that you've configured your LLM connection and default settings, you're read
|
||||
- Download the generated code for local customization
|
||||
- Deploy the crew directly to the CrewAI AMP platform
|
||||
- Modify the configuration and regenerate the crew
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Test Your Crew">
|
||||
@@ -120,7 +127,9 @@ Now that you've configured your LLM connection and default settings, you're read
|
||||
</Steps>
|
||||
|
||||
<Tip>
|
||||
For best results, provide clear, detailed descriptions of what you want your crew to accomplish. Include specific inputs and expected outputs in your description.
|
||||
For best results, provide clear, detailed descriptions of what you want your
|
||||
crew to accomplish. Include specific inputs and expected outputs in your
|
||||
description.
|
||||
</Tip>
|
||||
|
||||
## Example Workflow
|
||||
@@ -134,10 +143,13 @@ Here's a typical workflow for creating a crew with Crew Studio:
|
||||
```md
|
||||
I need a crew that can analyze financial news and provide investment recommendations
|
||||
```
|
||||
|
||||
</Step>
|
||||
|
||||
{" "}
|
||||
<Step title="Answer Questions">
|
||||
Respond to clarifying questions from the Crew Assistant to refine your requirements.
|
||||
Respond to clarifying questions from the Crew Assistant to refine your
|
||||
requirements.
|
||||
</Step>
|
||||
|
||||
<Step title="Review the Plan">
|
||||
@@ -146,12 +158,15 @@ Here's a typical workflow for creating a crew with Crew Studio:
|
||||
- A Research Agent to gather financial news
|
||||
- An Analysis Agent to interpret the data
|
||||
- A Recommendations Agent to provide investment advice
|
||||
|
||||
</Step>
|
||||
|
||||
{" "}
|
||||
<Step title="Approve or Modify">
|
||||
Approve the plan or request changes if necessary.
|
||||
</Step>
|
||||
|
||||
{" "}
|
||||
<Step title="Download or Deploy">
|
||||
Download the code for customization or deploy directly to the platform.
|
||||
</Step>
|
||||
@@ -162,5 +177,6 @@ Here's a typical workflow for creating a crew with Crew Studio:
|
||||
</Steps>
|
||||
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with Crew Studio or any other CrewAI AMP features.
|
||||
Contact our support team for assistance with Crew Studio or any other CrewAI
|
||||
AMP features.
|
||||
</Card>
|
||||
|
||||
@@ -10,7 +10,8 @@ mode: "wide"
|
||||
Use the Gmail Trigger to kick off your deployed crews when Gmail events happen in connected accounts, such as receiving a new email or messages matching a label/filter.
|
||||
|
||||
<Tip>
|
||||
Make sure Gmail is connected in Tools & Integrations and the trigger is enabled for your deployment.
|
||||
Make sure Gmail is connected in Tools & Integrations and the trigger is
|
||||
enabled for your deployment.
|
||||
</Tip>
|
||||
|
||||
## Enabling the Gmail Trigger
|
||||
@@ -20,7 +21,10 @@ Use the Gmail Trigger to kick off your deployed crews when Gmail events happen i
|
||||
3. Locate **Gmail** and switch the toggle to enable
|
||||
|
||||
<Frame>
|
||||
<img src="/images/enterprise/trigger-selected.png" alt="Enable or disable triggers with toggle" />
|
||||
<img
|
||||
src="/images/enterprise/trigger-selected.png"
|
||||
alt="Enable or disable triggers with toggle"
|
||||
/>
|
||||
</Frame>
|
||||
|
||||
## Example: Process new emails
|
||||
@@ -62,13 +66,15 @@ Test your Gmail trigger integration locally using the CrewAI CLI:
|
||||
crewai triggers list
|
||||
|
||||
# Simulate a Gmail trigger with realistic payload
|
||||
crewai triggers run gmail/new_email
|
||||
crewai triggers run gmail/new_email_received
|
||||
```
|
||||
|
||||
The `crewai triggers run` command will execute your crew with a complete Gmail payload, allowing you to test your parsing logic before deployment.
|
||||
|
||||
<Warning>
|
||||
Use `crewai triggers run gmail/new_email` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
|
||||
Use `crewai triggers run gmail/new_email_received` (not `crewai run`) to
|
||||
simulate trigger execution during development. After deployment, your crew
|
||||
will automatically receive the trigger payload.
|
||||
</Warning>
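
For reference, here is a minimal sketch of how a deployed crew might consume the trigger payload. It assumes the payload arrives in the kickoff inputs under the documented `crewai_trigger_payload` key and that the task description interpolates it as a `{placeholder}`; the fields inside the sample payload are purely illustrative, so inspect the output of `crewai triggers run gmail/new_email_received` for the real structure.

```python
from crewai import Agent, Crew, Task

triage_agent = Agent(
    role="Email triage specialist",
    goal="Summarize incoming email and flag anything urgent",
    backstory="You monitor a shared inbox and route requests to the right team.",
)

# The raw Gmail payload is interpolated into the task description as a single input.
triage_task = Task(
    description=(
        "Here is the raw Gmail trigger payload:\n{crewai_trigger_payload}\n"
        "Summarize the message and state whether it needs an urgent reply."
    ),
    expected_output="A short summary plus an urgency flag.",
    agent=triage_agent,
)

crew = Crew(agents=[triage_agent], tasks=[triage_task])

if __name__ == "__main__":
    # Locally, paste a payload captured from `crewai triggers run`;
    # in production the platform supplies it automatically.
    sample_payload = {"subject": "Quarterly report", "from": "alice@example.com"}  # illustrative only
    crew.kickoff(inputs={"crewai_trigger_payload": sample_payload})
```
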
## Monitoring Executions
|
||||
@@ -76,13 +82,16 @@ The `crewai triggers run` command will execute your crew with a complete Gmail p
|
||||
Track history and performance of triggered runs:
|
||||
|
||||
<Frame>
|
||||
<img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
|
||||
<img
|
||||
src="/images/enterprise/list-executions.png"
|
||||
alt="List of executions triggered by automation"
|
||||
/>
|
||||
</Frame>
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
- Ensure Gmail is connected in Tools & Integrations
|
||||
- Verify the Gmail Trigger is enabled on the Triggers tab
|
||||
- Test locally with `crewai triggers run gmail/new_email` to see the exact payload structure
|
||||
- Test locally with `crewai triggers run gmail/new_email_received` to see the exact payload structure
|
||||
- Check the execution logs and confirm the payload is passed as `crewai_trigger_payload`
|
||||
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution
|
||||
|
||||
@@ -10,7 +10,8 @@ mode: "wide"
|
||||
Use the Google Calendar trigger to launch automations whenever calendar events change. Common use cases include briefing a team before a meeting, notifying stakeholders when a critical event is cancelled, or summarizing daily schedules.
|
||||
|
||||
<Tip>
|
||||
Make sure Google Calendar is connected in **Tools & Integrations** and enabled for the deployment you want to automate.
|
||||
Make sure Google Calendar is connected in **Tools & Integrations** and enabled
|
||||
for the deployment you want to automate.
|
||||
</Tip>
|
||||
|
||||
## Enabling the Google Calendar Trigger
|
||||
@@ -20,7 +21,10 @@ Use the Google Calendar trigger to launch automations whenever calendar events c
|
||||
3. Locate **Google Calendar** and switch the toggle to enable
|
||||
|
||||
<Frame>
|
||||
<img src="/images/enterprise/calendar-trigger.png" alt="Enable or disable triggers with toggle" />
|
||||
<img
|
||||
src="/images/enterprise/calendar-trigger.png"
|
||||
alt="Enable or disable triggers with toggle"
|
||||
/>
|
||||
</Frame>
|
||||
|
||||
## Example: Summarize meeting details
|
||||
@@ -54,7 +58,9 @@ crewai triggers run google_calendar/event_changed
|
||||
The `crewai triggers run` command will execute your crew with a complete Calendar payload, allowing you to test your parsing logic before deployment.
|
||||
|
||||
<Warning>
|
||||
Use `crewai triggers run google_calendar/event_changed` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
|
||||
Use `crewai triggers run google_calendar/event_changed` (not `crewai run`) to
|
||||
simulate trigger execution during development. After deployment, your crew
|
||||
will automatically receive the trigger payload.
|
||||
</Warning>
|
||||
|
||||
## Monitoring Executions
|
||||
@@ -62,7 +68,10 @@ The `crewai triggers run` command will execute your crew with a complete Calenda
|
||||
The **Executions** list in the deployment dashboard tracks every triggered run and surfaces payload metadata, output summaries, and errors.
|
||||
|
||||
<Frame>
|
||||
<img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
|
||||
<img
|
||||
src="/images/enterprise/list-executions.png"
|
||||
alt="List of executions triggered by automation"
|
||||
/>
|
||||
</Frame>
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
@@ -10,7 +10,8 @@ mode: "wide"
|
||||
Trigger your automations when files are created, updated, or removed in Google Drive. Typical workflows include summarizing newly uploaded content, enforcing sharing policies, or notifying owners when critical files change.
|
||||
|
||||
<Tip>
|
||||
Connect Google Drive in **Tools & Integrations** and confirm the trigger is enabled for the automation you want to monitor.
|
||||
Connect Google Drive in **Tools & Integrations** and confirm the trigger is
|
||||
enabled for the automation you want to monitor.
|
||||
</Tip>
|
||||
|
||||
## Enabling the Google Drive Trigger
|
||||
@@ -20,7 +21,10 @@ Trigger your automations when files are created, updated, or removed in Google D
|
||||
3. Locate **Google Drive** and switch the toggle to enable
|
||||
|
||||
<Frame>
|
||||
<img src="/images/enterprise/gdrive-trigger.png" alt="Enable or disable triggers with toggle" />
|
||||
<img
|
||||
src="/images/enterprise/gdrive-trigger.png"
|
||||
alt="Enable or disable triggers with toggle"
|
||||
/>
|
||||
</Frame>
|
||||
|
||||
## Example: Summarize file activity
|
||||
@@ -51,7 +55,9 @@ crewai triggers run google_drive/file_changed
|
||||
The `crewai triggers run` command will execute your crew with a complete Drive payload, allowing you to test your parsing logic before deployment.
|
||||
|
||||
<Warning>
|
||||
Use `crewai triggers run google_drive/file_changed` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
|
||||
Use `crewai triggers run google_drive/file_changed` (not `crewai run`) to
|
||||
simulate trigger execution during development. After deployment, your crew
|
||||
will automatically receive the trigger payload.
|
||||
</Warning>
|
||||
|
||||
## Monitoring Executions
|
||||
@@ -59,7 +65,10 @@ The `crewai triggers run` command will execute your crew with a complete Drive p
|
||||
Track history and performance of triggered runs with the **Executions** list in the deployment dashboard.
|
||||
|
||||
<Frame>
|
||||
<img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
|
||||
<img
|
||||
src="/images/enterprise/list-executions.png"
|
||||
alt="List of executions triggered by automation"
|
||||
/>
|
||||
</Frame>
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
@@ -16,35 +16,44 @@ This guide provides a step-by-step process to set up HubSpot triggers for CrewAI
|
||||
|
||||
<Steps>
|
||||
<Step title="Connect your HubSpot account with CrewAI AMP">
|
||||
- Log in to your `CrewAI AMP account > Triggers`
|
||||
- Select `HubSpot` from the list of available triggers
|
||||
- Choose the HubSpot account you want to connect with CrewAI AMP
|
||||
- Follow the on-screen prompts to authorize CrewAI AMP access to your HubSpot account
|
||||
- A confirmation message will appear once HubSpot is successfully connected with CrewAI AMP
|
||||
- Log in to your `CrewAI AMP account > Triggers` - Select `HubSpot` from the
|
||||
list of available triggers - Choose the HubSpot account you want to connect
|
||||
with CrewAI AMP - Follow the on-screen prompts to authorize CrewAI AMP
|
||||
access to your HubSpot account - A confirmation message will appear once
|
||||
HubSpot is successfully connected with CrewAI AMP
|
||||
</Step>
|
||||
<Step title="Create a HubSpot Workflow">
|
||||
- Log in to your `HubSpot account > Automations > Workflows > New workflow`
|
||||
- Select the workflow type that fits your needs (e.g., Start from scratch)
|
||||
- In the workflow builder, click the Plus (+) icon to add a new action.
|
||||
- Choose `Integrated apps > CrewAI > Kickoff a Crew`.
|
||||
- Select the Crew you want to initiate.
|
||||
- Click `Save` to add the action to your workflow
|
||||
- Select the workflow type that fits your needs (e.g., Start from scratch) -
|
||||
In the workflow builder, click the Plus (+) icon to add a new action. -
|
||||
Choose `Integrated apps > CrewAI > Kickoff a Crew`. - Select the Crew you
|
||||
want to initiate. - Click `Save` to add the action to your workflow
|
||||
<Frame>
|
||||
<img src="/images/enterprise/hubspot-workflow-1.png" alt="HubSpot Workflow 1" />
|
||||
<img
|
||||
src="/images/enterprise/hubspot-workflow-1.png"
|
||||
alt="HubSpot Workflow 1"
|
||||
/>
|
||||
</Frame>
|
||||
</Step>
|
||||
<Step title="Use Crew results with other actions">
|
||||
- After the Kickoff a Crew step, click the Plus (+) icon to add a new action.
|
||||
- For example, to send an internal email notification, choose `Communications > Send internal email notification`
|
||||
- In the Body field, click `Insert data`, select `View properties or action outputs from > Action outputs > Crew Result` to include Crew data in the email
|
||||
- After the Kickoff a Crew step, click the Plus (+) icon to add a new
|
||||
action. - For example, to send an internal email notification, choose
|
||||
`Communications > Send internal email notification` - In the Body field,
|
||||
click `Insert data`, select `View properties or action outputs from > Action
|
||||
outputs > Crew Result` to include Crew data in the email
|
||||
<Frame>
|
||||
<img src="/images/enterprise/hubspot-workflow-2.png" alt="HubSpot Workflow 2" />
|
||||
<img
|
||||
src="/images/enterprise/hubspot-workflow-2.png"
|
||||
alt="HubSpot Workflow 2"
|
||||
/>
|
||||
</Frame>
|
||||
- Configure any additional actions as needed
|
||||
- Review your workflow steps to ensure everything is set up correctly
|
||||
- Activate the workflow
|
||||
- Configure any additional actions as needed - Review your workflow
|
||||
steps to ensure everything is set up correctly - Activate the workflow
|
||||
<Frame>
|
||||
<img src="/images/enterprise/hubspot-workflow-3.png" alt="HubSpot Workflow 3" />
|
||||
<img
|
||||
src="/images/enterprise/hubspot-workflow-3.png"
|
||||
alt="HubSpot Workflow 3"
|
||||
/>
|
||||
</Frame>
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
@@ -17,9 +17,7 @@ Once you've deployed your crew to the CrewAI AMP platform, you can kickoff execu
|
||||
2. Click on the crew name from your projects list
|
||||
3. You'll be taken to the crew's detail page
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
<Frame></Frame>
|
||||
|
||||
### Step 2: Initiate Execution
|
||||
|
||||
@@ -31,9 +29,7 @@ From your crew's detail page, you have two options to kickoff an execution:
|
||||
2. Enter the required input parameters for your crew in the JSON editor
|
||||
3. Click the `Send Request` button
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
<Frame></Frame>
|
||||
|
||||
#### Option B: Using the Visual Interface
|
||||
|
||||
@@ -41,9 +37,7 @@ From your crew's detail page, you have two options to kickoff an execution:
|
||||
2. Enter the required inputs in the form fields
|
||||
3. Click the `Run Crew` button
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
<Frame></Frame>
|
||||
|
||||
### Step 3: Monitor Execution Progress
|
||||
|
||||
@@ -52,9 +46,7 @@ After initiating the execution:
|
||||
1. You'll receive a response containing a `kickoff_id` - **copy this ID**
|
||||
2. This ID is essential for tracking your execution
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
<Frame></Frame>
|
||||
|
||||
### Step 4: Check Execution Status
|
||||
|
||||
@@ -64,11 +56,10 @@ To monitor the progress of your execution:
|
||||
2. Paste the `kickoff_id` into the designated field
|
||||
3. Click the "Get Status" button
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
<Frame></Frame>
|
||||
|
||||
The status response will show the following (a brief polling sketch follows this list):
|
||||
|
||||
- Current execution state (`running`, `completed`, etc.)
|
||||
- Details about which tasks are in progress
|
||||
- Any outputs produced so far
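
If you prefer to track progress from a script instead of the dashboard, here is a rough Python sketch that polls the `/status/{kickoff_id}` route. The deployment URL, bearer token, and the `state` field name are assumptions; adapt them to the fields your deployment actually returns.

```python
import time

import requests

BASE_URL = "https://your-crew-deployment.crewai.com"  # placeholder deployment URL
HEADERS = {"Authorization": "Bearer YOUR_CREW_BEARER_TOKEN"}  # placeholder token
KICKOFF_ID = "paste-the-kickoff-id-here"

while True:
    status = requests.get(
        f"{BASE_URL}/status/{KICKOFF_ID}", headers=HEADERS, timeout=30
    ).json()
    state = status.get("state", "unknown")  # field name is an assumption
    print("Current state:", state)
    if state in ("completed", "failed"):
        print(status)  # full response includes task details and any outputs so far
        break
    time.sleep(10)
```
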
@@ -182,5 +173,6 @@ If an execution fails:
|
||||
3. Look for LLM responses and tool usage in the trace details
|
||||
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with execution issues or questions about the Enterprise platform.
|
||||
Contact our support team for assistance with execution issues or questions
|
||||
about the Enterprise platform.
|
||||
</Card>
|
||||
|
||||
@@ -10,7 +10,8 @@ mode: "wide"
|
||||
Use the Microsoft Teams trigger to start automations whenever a new chat is created. Common patterns include summarizing inbound requests, routing urgent messages to support teams, or creating follow-up tasks in other systems.
|
||||
|
||||
<Tip>
|
||||
Confirm Microsoft Teams is connected under **Tools & Integrations** and enabled in the **Triggers** tab for your deployment.
|
||||
Confirm Microsoft Teams is connected under **Tools & Integrations** and
|
||||
enabled in the **Triggers** tab for your deployment.
|
||||
</Tip>
|
||||
|
||||
## Enabling the Microsoft Teams Trigger
|
||||
@@ -20,7 +21,10 @@ Use the Microsoft Teams trigger to start automations whenever a new chat is crea
|
||||
3. Locate **Microsoft Teams** and switch the toggle to enable
|
||||
|
||||
<Frame caption="Microsoft Teams trigger connection">
|
||||
<img src="/images/enterprise/msteams-trigger.png" alt="Enable or disable triggers with toggle" />
|
||||
<img
|
||||
src="/images/enterprise/msteams-trigger.png"
|
||||
alt="Enable or disable triggers with toggle"
|
||||
/>
|
||||
</Frame>
|
||||
|
||||
## Example: Summarize a new chat thread
|
||||
@@ -52,7 +56,9 @@ crewai triggers run microsoft_teams/teams_message_created
|
||||
The `crewai triggers run` command will execute your crew with a complete Teams payload, allowing you to test your parsing logic before deployment.
|
||||
|
||||
<Warning>
|
||||
Use `crewai triggers run microsoft_teams/teams_message_created` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
|
||||
Use `crewai triggers run microsoft_teams/teams_message_created` (not `crewai
|
||||
run`) to simulate trigger execution during development. After deployment, your
|
||||
crew will automatically receive the trigger payload.
|
||||
</Warning>
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
@@ -10,7 +10,8 @@ mode: "wide"
|
||||
Start automations when files change inside OneDrive. You can generate audit summaries, notify security teams about external sharing, or update downstream line-of-business systems with new document metadata.
|
||||
|
||||
<Tip>
|
||||
Connect OneDrive in **Tools & Integrations** and toggle the trigger on for your deployment.
|
||||
Connect OneDrive in **Tools & Integrations** and toggle the trigger on for
|
||||
your deployment.
|
||||
</Tip>
|
||||
|
||||
## Enabling the OneDrive Trigger
|
||||
@@ -20,7 +21,10 @@ Start automations when files change inside OneDrive. You can generate audit summ
|
||||
3. Locate **OneDrive** and switch the toggle to enable
|
||||
|
||||
<Frame caption="Microsoft OneDrive trigger connection">
|
||||
<img src="/images/enterprise/onedrive-trigger.png" alt="Enable or disable triggers with toggle" />
|
||||
<img
|
||||
src="/images/enterprise/onedrive-trigger.png"
|
||||
alt="Enable or disable triggers with toggle"
|
||||
/>
|
||||
</Frame>
|
||||
|
||||
## Example: Audit file permissions
|
||||
@@ -51,7 +55,9 @@ crewai triggers run microsoft_onedrive/file_changed
|
||||
The `crewai triggers run` command will execute your crew with a complete OneDrive payload, allowing you to test your parsing logic before deployment.
|
||||
|
||||
<Warning>
|
||||
Use `crewai triggers run microsoft_onedrive/file_changed` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
|
||||
Use `crewai triggers run microsoft_onedrive/file_changed` (not `crewai run`)
|
||||
to simulate trigger execution during development. After deployment, your crew
|
||||
will automatically receive the trigger payload.
|
||||
</Warning>
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
@@ -10,7 +10,8 @@ mode: "wide"
|
||||
Automate responses when Outlook delivers a new message or when an event is removed from the calendar. Teams commonly route escalations, file tickets, or alert attendees of cancellations.
|
||||
|
||||
<Tip>
|
||||
Connect Outlook in **Tools & Integrations** and ensure the trigger is enabled for your deployment.
|
||||
Connect Outlook in **Tools & Integrations** and ensure the trigger is enabled
|
||||
for your deployment.
|
||||
</Tip>
|
||||
|
||||
## Enabling the Outlook Trigger
|
||||
@@ -20,7 +21,10 @@ Automate responses when Outlook delivers a new message or when an event is remov
|
||||
3. Locate **Outlook** and switch the toggle to enable
|
||||
|
||||
<Frame caption="Microsoft Outlook trigger connection">
|
||||
<img src="/images/enterprise/outlook-trigger.png" alt="Enable or disable triggers with toggle" />
|
||||
<img
|
||||
src="/images/enterprise/outlook-trigger.png"
|
||||
alt="Enable or disable triggers with toggle"
|
||||
/>
|
||||
</Frame>
|
||||
|
||||
## Example: Summarize a new email
|
||||
@@ -51,7 +55,9 @@ crewai triggers run microsoft_outlook/email_received
|
||||
The `crewai triggers run` command will execute your crew with a complete Outlook payload, allowing you to test your parsing logic before deployment.
|
||||
|
||||
<Warning>
|
||||
Use `crewai triggers run microsoft_outlook/email_received` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
|
||||
Use `crewai triggers run microsoft_outlook/email_received` (not `crewai run`)
|
||||
to simulate trigger execution during development. After deployment, your crew
|
||||
will automatically receive the trigger payload.
|
||||
</Warning>
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
@@ -17,6 +17,7 @@ This guide explains how to export CrewAI AMP crews as React components and integ
|
||||
<img src="/images/enterprise/export-react-component.png" alt="Export React Component" />
|
||||
</Frame>
|
||||
</Step>
|
||||
|
||||
</Steps>
|
||||
|
||||
## Setting Up Your React Environment
|
||||
@@ -83,6 +84,7 @@ To run this React component locally, you'll need to set up a React development e
|
||||
```
|
||||
- This will start the development server, and your default web browser should open automatically to http://localhost:3000, where you'll see your React app running.
|
||||
</Step>
|
||||
|
||||
</Steps>
|
||||
|
||||
## Customization
|
||||
@@ -90,10 +92,16 @@ To run this React component locally, you'll need to set up a React development e
|
||||
You can then customise `CrewLead.jsx` to add colors, a title, and other styling
|
||||
|
||||
<Frame>
|
||||
<img src="/images/enterprise/customise-react-component.png" alt="Customise React Component" />
|
||||
<img
|
||||
src="/images/enterprise/customise-react-component.png"
|
||||
alt="Customise React Component"
|
||||
/>
|
||||
</Frame>
|
||||
<Frame>
|
||||
<img src="/images/enterprise/customise-react-component-2.png" alt="Customise React Component" />
|
||||
<img
|
||||
src="/images/enterprise/customise-react-component-2.png"
|
||||
alt="Customise React Component"
|
||||
/>
|
||||
</Frame>
|
||||
|
||||
## Next Steps
|
||||
|
||||
@@ -11,29 +11,28 @@ As an administrator of a CrewAI AMP account, you can easily invite new team memb
|
||||
|
||||
<Steps>
|
||||
<Step title="Access the Settings Page">
|
||||
- Log in to your CrewAI AMP account
|
||||
- Look for the gear icon (⚙️) in the top right corner of the dashboard
|
||||
- Click on the gear icon to access the **Settings** page:
|
||||
- Log in to your CrewAI AMP account - Look for the gear icon (⚙️) in the top
|
||||
right corner of the dashboard - Click on the gear icon to access the
|
||||
**Settings** page:
|
||||
<Frame caption="Settings page">
|
||||
<img src="/images/enterprise/settings-page.png" alt="Settings Page" />
|
||||
</Frame>
|
||||
</Step>
|
||||
<Step title="Navigate to the Members Section">
|
||||
- On the Settings page, you'll see a `Members` tab
|
||||
- Click on the `Members` tab to access the **Members** page:
|
||||
- On the Settings page, you'll see a `Members` tab - Click on the `Members`
|
||||
tab to access the **Members** page:
|
||||
<Frame caption="Members tab">
|
||||
<img src="/images/enterprise/members-tab.png" alt="Members Tab" />
|
||||
</Frame>
|
||||
</Step>
|
||||
<Step title="Invite New Members">
|
||||
- In the Members section, you'll see a list of current members (including yourself)
|
||||
- Locate the `Email` input field
|
||||
- Enter the email address of the person you want to invite
|
||||
- Click the `Invite` button to send the invitation
|
||||
- In the Members section, you'll see a list of current members (including
|
||||
yourself) - Locate the `Email` input field - Enter the email address of the
|
||||
person you want to invite - Click the `Invite` button to send the invitation
|
||||
</Step>
|
||||
<Step title="Repeat as Needed">
|
||||
- You can repeat this process to invite multiple team members
|
||||
- Each invited member will receive an email invitation to join your organization
|
||||
- You can repeat this process to invite multiple team members - Each invited
|
||||
member will receive an email invitation to join your organization
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
@@ -43,35 +42,39 @@ You can add roles to your team members to control their access to different part
|
||||
|
||||
<Steps>
|
||||
<Step title="Access the Settings Page">
|
||||
- Log in to your CrewAI AMP account
|
||||
- Look for the gear icon (⚙️) in the top right corner of the dashboard
|
||||
- Click on the gear icon to access the **Settings** page:
|
||||
- Log in to your CrewAI AMP account - Look for the gear icon (⚙️) in the top
|
||||
right corner of the dashboard - Click on the gear icon to access the
|
||||
**Settings** page:
|
||||
<Frame>
|
||||
<img src="/images/enterprise/settings-page.png" alt="Settings Page" />
|
||||
</Frame>
|
||||
</Step>
|
||||
<Step title="Navigate to the Members Section">
|
||||
- On the Settings page, you'll see a `Roles` tab
|
||||
- Click on the `Roles` tab to access the **Roles** page.
|
||||
- On the Settings page, you'll see a `Roles` tab - Click on the `Roles` tab
|
||||
to access the **Roles** page.
|
||||
<Frame>
|
||||
<img src="/images/enterprise/roles-tab.png" alt="Roles Tab" />
|
||||
</Frame>
|
||||
- Click on the `Add Role` button to add a new role.
|
||||
- Enter the details and permissions of the role and click the `Create Role` button to create the role.
|
||||
- Click on the `Add Role` button to add a new role. - Enter the
|
||||
details and permissions of the role and click the `Create Role` button to
|
||||
create the role.
|
||||
<Frame>
|
||||
<img src="/images/enterprise/add-role-modal.png" alt="Add Role Modal" />
|
||||
</Frame>
|
||||
</Step>
|
||||
<Step title="Add Roles to Members">
|
||||
- In the Members section, you'll see a list of current members (including yourself)
|
||||
- In the Members section, you'll see a list of current members (including
|
||||
yourself)
|
||||
<Frame>
|
||||
<img src="/images/enterprise/member-accepted-invitation.png" alt="Member Accepted Invitation" />
|
||||
<img
|
||||
src="/images/enterprise/member-accepted-invitation.png"
|
||||
alt="Member Accepted Invitation"
|
||||
/>
|
||||
</Frame>
|
||||
- Once the member has accepted the invitation, you can add a role to them.
|
||||
- Navigate back to `Roles` tab
|
||||
- Go to the member you want to add a role to and under the `Role` column, click on the dropdown
|
||||
- Select the role you want to add to the member
|
||||
- Click the `Update` button to save the role
|
||||
- Once the member has accepted the invitation, you can add a role to
|
||||
them. - Navigate back to `Roles` tab - Go to the member you want to add a
|
||||
role to and under the `Role` column, click on the dropdown - Select the role
|
||||
you want to add to the member - Click the `Update` button to save the role
|
||||
<Frame>
|
||||
<img src="/images/enterprise/assign-role.png" alt="Add Role to Member" />
|
||||
</Frame>
|
||||
|
||||
@@ -21,7 +21,7 @@ The repository is not a version control system. Use Git to track code changes an
|
||||
Before using the Tool Repository, ensure you have:
|
||||
|
||||
- A [CrewAI AMP](https://app.crewai.com) account
|
||||
- [CrewAI CLI](https://docs.crewai.com/concepts/cli#cli) installed
|
||||
- [CrewAI CLI](/en/concepts/cli#cli) installed
|
||||
- uv>=0.5.0 installed. Check out [how to upgrade](https://docs.astral.sh/uv/getting-started/installation/#upgrading-uv)
|
||||
- [Git](https://git-scm.com) installed and configured
|
||||
- Access permissions to publish or install tools in your CrewAI AMP organization
|
||||
@@ -112,7 +112,7 @@ By default, tools are published as private. To make a tool public:
|
||||
crewai tool publish --public
|
||||
```
|
||||
|
||||
For more details on how to build tools, see [Creating your own tools](https://docs.crewai.com/concepts/tools#creating-your-own-tools).
|
||||
For more details on how to build tools, see [Creating your own tools](/en/concepts/tools#creating-your-own-tools).
|
||||
|
||||
## Updating Tools
|
||||
|
||||
@@ -149,7 +149,6 @@ You can check the security check status of a tool at:
|
||||
`CrewAI AMP > Tools > Your Tool > Versions`
|
||||
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with API integration or troubleshooting.
|
||||
Contact our support team for assistance with API integration or
|
||||
troubleshooting.
|
||||
</Card>
|
||||
|
||||
|
||||
|
||||
@@ -6,8 +6,9 @@ mode: "wide"
|
||||
---
|
||||
|
||||
<Note>
|
||||
After deploying your crew to CrewAI AMP, you may need to make updates to the code, security settings, or configuration.
|
||||
This guide explains how to perform these common update operations.
|
||||
After deploying your crew to CrewAI AMP, you may need to make updates to the
|
||||
code, security settings, or configuration. This guide explains how to perform
|
||||
these common update operations.
|
||||
</Note>
|
||||
|
||||
## Why Update Your Crew?
|
||||
@@ -15,6 +16,7 @@ This guide explains how to perform these common update operations.
|
||||
Unless you enabled the `Auto-update` option when deploying your crew, CrewAI won't automatically pick up GitHub updates, so you'll need to trigger updates manually.
|
||||
|
||||
There are several reasons you might want to update your crew deployment:
|
||||
|
||||
- You want to update the code with a latest commit you pushed to GitHub
|
||||
- You want to reset the bearer token for security reasons
|
||||
- You want to update environment variables
|
||||
@@ -26,9 +28,7 @@ When you've pushed new commits to your GitHub repository and want to update your
|
||||
1. Navigate to your crew in the CrewAI AMP platform
|
||||
2. Click on the `Re-deploy` button on your crew details page
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
<Frame></Frame>
|
||||
|
||||
This will trigger an update that you can track using the progress bar. The system will pull the latest code from your repository and rebuild your deployment.
|
||||
|
||||
@@ -40,12 +40,11 @@ If you need to generate a new bearer token (for example, if you suspect the curr
|
||||
2. Find the `Bearer Token` section
|
||||
3. Click the `Reset` button next to your current token
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
<Frame></Frame>
|
||||
|
||||
<Warning>
|
||||
Resetting your bearer token will invalidate the previous token immediately. Make sure to update any applications or scripts that are using the old token.
|
||||
Resetting your bearer token will invalidate the previous token immediately.
|
||||
Make sure to update any applications or scripts that are using the old token.
|
||||
</Warning>
|
||||
|
||||
## 3. Updating Environment Variables
|
||||
@@ -69,7 +68,8 @@ To update the environment variables for your crew:
|
||||
5. Finally, click the `Update Deployment` button at the bottom of the page to apply the changes
|
||||
|
||||
<Note>
|
||||
Updating environment variables will trigger a new deployment, but this will only update the environment configuration and not the code itself.
|
||||
Updating environment variables will trigger a new deployment, but this will
|
||||
only update the environment configuration and not the code itself.
|
||||
</Note>
|
||||
|
||||
## After Updating
|
||||
@@ -81,9 +81,11 @@ After performing any update:
|
||||
3. Once complete, test your crew to ensure the changes are working as expected
|
||||
|
||||
<Tip>
|
||||
If you encounter any issues after updating, you can view deployment logs in the platform or contact support for assistance.
|
||||
If you encounter any issues after updating, you can view deployment logs in
|
||||
the platform or contact support for assistance.
|
||||
</Tip>
|
||||
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with updating your crew or troubleshooting deployment issues.
|
||||
Contact our support team for assistance with updating your crew or
|
||||
troubleshooting deployment issues.
|
||||
</Card>
|
||||
|
||||
@@ -76,6 +76,7 @@ CrewAI AMP allows you to automate your workflow using webhooks. This article wil
|
||||
<img src="/images/enterprise/activepieces-email.png" alt="ActivePieces Email" />
|
||||
</Frame>
|
||||
</Step>
|
||||
|
||||
</Steps>
|
||||
|
||||
## Webhook Output Examples
|
||||
@@ -152,4 +153,5 @@ CrewAI AMP allows you to automate your workflow using webhooks. This article wil
|
||||
}
|
||||
```
|
||||
</Tab>
|
||||
|
||||
</Tabs>
|
||||
|
||||
@@ -93,6 +93,7 @@ This guide will walk you through the process of setting up Zapier triggers for C
|
||||
<img src="/images/enterprise/zapier-9.png" alt="Zapier 12" />
|
||||
</Frame>
|
||||
</Step>
|
||||
|
||||
</Steps>
|
||||
|
||||
## Tips for Success
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the Asana integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
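
To show how the Enterprise token and the `apps` parameter fit together, here is a minimal, hypothetical sketch. Whether the Asana integration is selected with the string `"asana"` (matching the `asana/...` action names below) is an assumption, so check this page's usage examples for the exact identifier.

```python
import os

from crewai import Agent, Crew, Task

# The platform token must be set before the agent is created.
assert os.getenv("CREWAI_PLATFORM_INTEGRATION_TOKEN"), "Set your Enterprise token first"

asana_agent = Agent(
    role="Project coordinator",
    goal="Keep the Asana workspace in sync with incoming requests",
    backstory="You manage tasks for the engineering team's Asana workspace.",
    apps=["asana"],  # assumed identifier; see the asana/* actions listed below
)

create_task = Task(
    description="Create an Asana task named 'Review Q3 roadmap' in the default workspace.",
    expected_output="Confirmation that the task was created, including its ID.",
    agent=asana_agent,
)

Crew(agents=[asana_agent], tasks=[create_task]).kickoff()
```
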
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -42,6 +60,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `task` (string, required): Task ID - The ID of the Task the comment will be added to. The comment will be authored by the currently authenticated user.
|
||||
- `text` (string, required): Text (example: "This is a comment.").
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="asana/create_project">
|
||||
@@ -52,6 +71,7 @@ uv add crewai-tools
|
||||
- `workspace` (string, required): Workspace - Use Connect Portal Workflow Settings to allow users to select which Workspace to create Projects in. Defaults to the user's first Workspace if left blank.
|
||||
- `team` (string, optional): Team - Use Connect Portal Workflow Settings to allow users to select which Team to share this Project with. Defaults to the user's first Team if left blank.
|
||||
- `notes` (string, optional): Notes (example: "These are things we need to purchase.").
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="asana/get_projects">
|
||||
@@ -60,6 +80,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `archived` (string, optional): Archived - Choose "true" to show archived projects, "false" to display only active projects, or "default" to show both archived and active projects.
|
||||
- Options: `default`, `true`, `false`
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="asana/get_project_by_id">
|
||||
@@ -67,6 +88,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `projectFilterId` (string, required): Project ID.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="asana/create_task">
|
||||
@@ -81,6 +103,7 @@ uv add crewai-tools
|
||||
- `dueAtDate` (string, optional): Due At - The date and time (ISO timestamp) at which this task is due. Cannot be used together with Due On. (example: "2019-09-15T02:06:58.147Z").
|
||||
- `assignee` (string, optional): Assignee - The ID of the Asana user this task will be assigned to. Use Connect Portal Workflow Settings to allow users to select an Assignee.
|
||||
- `gid` (string, optional): External ID - An ID from your application to associate this task with. You can use this ID to sync updates to this task later.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="asana/update_task">
|
||||
@@ -96,6 +119,7 @@ uv add crewai-tools
|
||||
- `dueAtDate` (string, optional): Due At - The date and time (ISO timestamp) at which this task is due. Cannot be used together with Due On. (example: "2019-09-15T02:06:58.147Z").
|
||||
- `assignee` (string, optional): Assignee - The ID of the Asana user this task will be assigned to. Use Connect Portal Workflow Settings to allow users to select an Assignee.
|
||||
- `gid` (string, optional): External ID - An ID from your application to associate this task with. You can use this ID to sync updates to this task later.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="asana/get_tasks">
|
||||
@@ -106,6 +130,7 @@ uv add crewai-tools
|
||||
- `project` (string, optional): Project - The ID of the Project to filter tasks on. Use Connect Portal Workflow Settings to allow users to select a Project.
|
||||
- `assignee` (string, optional): Assignee - The ID of the assignee to filter tasks on. Use Connect Portal Workflow Settings to allow users to select an Assignee.
|
||||
- `completedSince` (string, optional): Completed since - Only return tasks that are either incomplete or that have been completed since this time (ISO or Unix timestamp). (example: "2014-04-25T16:15:47-04:00").
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="asana/get_tasks_by_id">
|
||||
@@ -113,6 +138,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `taskId` (string, required): Task ID.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="asana/get_task_by_external_id">
|
||||
@@ -120,6 +146,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `gid` (string, required): External ID - The ID that this task is associated or synced with, from your application.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="asana/add_task_to_section">
|
||||
@@ -130,6 +157,7 @@ uv add crewai-tools
|
||||
- `taskId` (string, required): Task ID - The ID of the task. (example: "1204619611402340").
|
||||
- `beforeTaskId` (string, optional): Before Task ID - The ID of a task in this section that this task will be inserted before. Cannot be used with After Task ID. (example: "1204619611402340").
|
||||
- `afterTaskId` (string, optional): After Task ID - The ID of a task in this section that this task will be inserted after. Cannot be used with Before Task ID. (example: "1204619611402340").
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="asana/get_teams">
|
||||
@@ -137,12 +165,14 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `workspace` (string, required): Workspace - Returns the teams in this workspace visible to the authorized user.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="asana/get_workspaces">
|
||||
**Description:** Get a list of workspaces in Asana.
|
||||
|
||||
**Parameters:** None required.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the Box integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -50,6 +68,7 @@ uv add crewai-tools
|
||||
}
|
||||
```
|
||||
- `file` (string, required): File URL - Files must be smaller than 50MB in size. (example: "https://picsum.photos/200/300").
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="box/save_file_from_object">
|
||||
@@ -59,6 +78,7 @@ uv add crewai-tools
|
||||
- `file` (string, required): File - Accepts a File Object containing file data. Files must be smaller than 50MB in size.
|
||||
- `fileName` (string, required): File Name (example: "qwerty.png").
|
||||
- `folder` (string, optional): Folder - Use Connect Portal Workflow Settings to allow users to select the File's Folder destination. Defaults to the user's root folder if left blank.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="box/get_file_by_id">
|
||||
@@ -66,6 +86,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `fileId` (string, required): File ID - The unique identifier that represents a file. (example: "12345").
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="box/list_files">
|
||||
@@ -91,6 +112,7 @@ uv add crewai-tools
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="box/create_folder">
|
||||
@@ -104,6 +126,7 @@ uv add crewai-tools
|
||||
"id": "123456"
|
||||
}
|
||||
```
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="box/move_folder">
|
||||
@@ -118,6 +141,7 @@ uv add crewai-tools
|
||||
"id": "123456"
|
||||
}
|
||||
```
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="box/get_folder_by_id">
|
||||
@@ -125,6 +149,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0").
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="box/search_folders">
|
||||
@@ -150,6 +175,7 @@ uv add crewai-tools
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="box/delete_folder">
|
||||
@@ -158,6 +184,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0").
|
||||
- `recursive` (boolean, optional): Recursive - Delete a folder that is not empty by recursively deleting the folder and all of its content.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the ClickUp integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -59,6 +77,7 @@ uv add crewai-tools
|
||||
}
|
||||
```
|
||||
Available fields: `space_ids%5B%5D`, `project_ids%5B%5D`, `list_ids%5B%5D`, `statuses%5B%5D`, `include_closed`, `assignees%5B%5D`, `tags%5B%5D`, `due_date_gt`, `due_date_lt`, `date_created_gt`, `date_created_lt`, `date_updated_gt`, `date_updated_lt`
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="clickup/get_task_in_list">
|
||||
@@ -67,6 +86,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `listId` (string, required): List - Select a List to get tasks from. Use Connect Portal User Settings to allow users to select a ClickUp List.
|
||||
- `taskFilterFormula` (string, optional): Search for tasks that match specified filters. For example: name=task1.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="clickup/create_task">
|
||||
@@ -80,6 +100,7 @@ uv add crewai-tools
|
||||
- `assignees` (string, optional): Assignees - Select a Member (or an array of member IDs) to be assigned to this task. Use Connect Portal User Settings to allow users to select a ClickUp Member.
|
||||
- `dueDate` (string, optional): Due Date - Specify a date for this task to be due on.
|
||||
- `additionalFields` (string, optional): Additional Fields - Specify additional fields to include on this task as JSON.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="clickup/update_task">
|
||||
@@ -94,6 +115,7 @@ uv add crewai-tools
|
||||
- `assignees` (string, optional): Assignees - Select a Member (or an array of member IDs) to be assigned to this task. Use Connect Portal User Settings to allow users to select a ClickUp Member.
|
||||
- `dueDate` (string, optional): Due Date - Specify a date for this task to be due on.
|
||||
- `additionalFields` (string, optional): Additional Fields - Specify additional fields to include on this task as JSON.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="clickup/delete_task">
|
||||
@@ -101,6 +123,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `taskId` (string, required): Task ID - The ID of the task to delete.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="clickup/get_list">
|
||||
@@ -108,6 +131,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `spaceId` (string, required): Space ID - The ID of the space containing the lists.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="clickup/get_custom_fields_in_list">
|
||||
@@ -115,6 +139,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `listId` (string, required): List ID - The ID of the list to get custom fields from.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="clickup/get_all_fields_in_list">
|
||||
@@ -122,6 +147,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `listId` (string, required): List ID - The ID of the list to get all fields from.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="clickup/get_space">
|
||||
@@ -129,6 +155,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `spaceId` (string, optional): Space ID - The ID of the space to retrieve.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="clickup/get_folders">
|
||||
@@ -136,12 +163,14 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `spaceId` (string, required): Space ID - The ID of the space containing the folders.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="clickup/get_member">
|
||||
**Description:** Get Member information in ClickUp.
|
||||
|
||||
**Parameters:** None required.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -268,5 +297,6 @@ crew.kickoff()
|
||||
### Getting Help
|
||||
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with ClickUp integration setup or troubleshooting.
|
||||
Contact our support team for assistance with ClickUp integration setup or
|
||||
troubleshooting.
|
||||
</Card>
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the GitHub integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -45,6 +63,7 @@ uv add crewai-tools
|
||||
- `title` (string, required): Issue Title - Specify the title of the issue to create.
|
||||
- `body` (string, optional): Issue Body - Specify the body contents of the issue to create.
|
||||
- `assignees` (string, optional): Assignees - Specify the assignee(s)' GitHub login as an array of strings for this issue. (example: `["octocat"]`).
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="github/update_issue">
|
||||
@@ -59,6 +78,7 @@ uv add crewai-tools
|
||||
- `assignees` (string, optional): Assignees - Specify the assignee(s)' GitHub login as an array of strings for this issue. (example: `["octocat"]`).
|
||||
- `state` (string, optional): State - Specify the updated state of the issue.
|
||||
- Options: `open`, `closed`
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="github/get_issue_by_number">
|
||||
@@ -68,6 +88,7 @@ uv add crewai-tools
|
||||
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Issue. (example: "abc").
|
||||
- `repo` (string, required): Repository - Specify the name of the associated repository for this Issue.
|
||||
- `issue_number` (string, required): Issue Number - Specify the number of the issue to fetch.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="github/lock_issue">
|
||||
@@ -79,6 +100,7 @@ uv add crewai-tools
|
||||
- `issue_number` (string, required): Issue Number - Specify the number of the issue to lock.
|
||||
- `lock_reason` (string, required): Lock Reason - Specify a reason for locking the issue or pull request conversation.
|
||||
- Options: `off-topic`, `too heated`, `resolved`, `spam`
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="github/search_issue">
|
||||
@@ -106,6 +128,7 @@ uv add crewai-tools
|
||||
}
|
||||
```
|
||||
Available fields: `assignee`, `creator`, `mentioned`, `labels`
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="github/create_release">
|
||||
@@ -124,6 +147,7 @@ uv add crewai-tools
|
||||
- `discussion_category_name` (string, optional): Discussion Category Name - If specified, a discussion of the specified category is created and linked to the release. The value must be a category that already exists in the repository.
|
||||
- `generate_release_notes` (string, optional): Release Notes - Specify whether the created release should automatically generate release notes using the provided name and body.
|
||||
- Options: `true`, `false`
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="github/update_release">
|
||||
@@ -143,6 +167,7 @@ uv add crewai-tools
|
||||
- `discussion_category_name` (string, optional): Discussion Category Name - If specified, a discussion of the specified category is created and linked to the release. The value must be a category that already exists in the repository.
|
||||
- `generate_release_notes` (string, optional): Release Notes - Specify whether the created release should automatically generate release notes using the provided name and body.
|
||||
- Options: `true`, `false`
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="github/get_release_by_id">
|
||||
@@ -152,6 +177,7 @@ uv add crewai-tools
|
||||
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Release. (example: "abc").
|
||||
- `repo` (string, required): Repository - Specify the name of the associated repository for this Release.
|
||||
- `id` (string, required): Release ID - Specify the release ID of the release to fetch.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="github/get_release_by_tag_name">
|
||||
@@ -161,6 +187,7 @@ uv add crewai-tools
|
||||
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Release. (example: "abc").
|
||||
- `repo` (string, required): Repository - Specify the name of the associated repository for this Release.
|
||||
- `tag_name` (string, required): Name - Specify the tag of the release to fetch. (example: "v1.0.0").
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="github/delete_release">
|
||||
@@ -170,6 +197,7 @@ uv add crewai-tools
|
||||
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Release. (example: "abc").
|
||||
- `repo` (string, required): Repository - Specify the name of the associated repository for this Release.
|
||||
- `id` (string, required): Release ID - Specify the ID of the release to delete.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -298,5 +326,6 @@ crew.kickoff()
|
||||
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
  Contact our support team for assistance with GitHub integration setup or troubleshooting.
</Card>
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the Gmail integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -46,6 +64,7 @@ uv add crewai-tools
|
||||
- `pageToken` (string, optional): Page token to retrieve a specific page of results.
|
||||
- `labelIds` (array, optional): Only return messages with labels that match all of the specified label IDs.
|
||||
- `includeSpamTrash` (boolean, optional): Include messages from SPAM and TRASH in the results. (default: false)
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="gmail/send_email">
|
||||
@@ -61,6 +80,7 @@ uv add crewai-tools
|
||||
- `from` (string, optional): Sender email address (if different from authenticated user).
|
||||
- `replyTo` (string, optional): Reply-to email address.
|
||||
- `threadId` (string, optional): Thread ID if replying to an existing conversation.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="gmail/delete_email">
|
||||
@@ -69,6 +89,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `userId` (string, required): The user's email address or 'me' for the authenticated user.
|
||||
- `id` (string, required): The ID of the message to delete.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="gmail/create_draft">
|
||||
@@ -78,6 +99,7 @@ uv add crewai-tools
|
||||
- `userId` (string, required): The user's email address or 'me' for the authenticated user.
|
||||
- `message` (object, required): Message object containing the draft content.
|
||||
- `raw` (string, required): Base64url encoded email message (see the encoding sketch after this accordion).

</Accordion>
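The `raw` field above is a base64url-encoded RFC 2822 message. A minimal sketch of producing that value with Python's standard library is shown below; the addresses, subject, and body are placeholders.

```python
import base64
from email.message import EmailMessage

# Build a simple RFC 2822 message with the standard library.
msg = EmailMessage()
msg["To"] = "recipient@example.com"
msg["From"] = "me@example.com"
msg["Subject"] = "Draft created via the Gmail integration"
msg.set_content("Hello from a crewAI draft example.")

# Base64url-encode the serialized message; the resulting string is the `raw` value.
raw = base64.urlsafe_b64encode(msg.as_bytes()).decode("ascii")
```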
<Accordion title="gmail/get_message">
|
||||
@@ -88,6 +110,7 @@ uv add crewai-tools
|
||||
- `id` (string, required): The ID of the message to retrieve.
|
||||
- `format` (string, optional): The format to return the message in. Options: "full", "metadata", "minimal", "raw". (default: "full")
|
||||
- `metadataHeaders` (array, optional): When given and format is METADATA, only include headers specified.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="gmail/get_attachment">
|
||||
@@ -97,6 +120,7 @@ uv add crewai-tools
|
||||
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
|
||||
- `messageId` (string, required): The ID of the message containing the attachment.
|
||||
- `id` (string, required): The ID of the attachment to retrieve.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="gmail/fetch_thread">
|
||||
@@ -107,6 +131,7 @@ uv add crewai-tools
|
||||
- `id` (string, required): The ID of the thread to retrieve.
|
||||
- `format` (string, optional): The format to return the messages in. Options: "full", "metadata", "minimal". (default: "full")
|
||||
- `metadataHeaders` (array, optional): When given and format is METADATA, only include headers specified.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="gmail/modify_thread">
|
||||
@@ -117,6 +142,7 @@ uv add crewai-tools
|
||||
- `id` (string, required): The ID of the thread to modify.
|
||||
- `addLabelIds` (array, optional): A list of IDs of labels to add to this thread.
|
||||
- `removeLabelIds` (array, optional): A list of IDs of labels to remove from this thread.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="gmail/trash_thread">
|
||||
@@ -125,6 +151,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
|
||||
- `id` (string, required): The ID of the thread to trash.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="gmail/untrash_thread">
|
||||
@@ -133,6 +160,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
|
||||
- `id` (string, required): The ID of the thread to untrash.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -270,5 +298,6 @@ crew.kickoff()
|
||||
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
  Contact our support team for assistance with Gmail integration setup or troubleshooting.
</Card>
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the Google Calendar integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -53,6 +71,7 @@ uv add crewai-tools
|
||||
- `timeZone` (string, optional): Time zone used in the response. The default is UTC.
|
||||
- `groupExpansionMax` (integer, optional): Maximal number of calendar identifiers to be provided for a single group. Maximum: 100
|
||||
- `calendarExpansionMax` (integer, optional): Maximal number of calendars for which FreeBusy information is to be provided. Maximum: 50
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_calendar/create_event">
|
||||
@@ -101,6 +120,7 @@ uv add crewai-tools
|
||||
```
|
||||
- `visibility` (string, optional): Visibility of the event. Options: default, public, private, confidential. Default: default
|
||||
- `transparency` (string, optional): Whether the event blocks time on the calendar. Options: opaque, transparent. Default: opaque
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_calendar/view_events">
|
||||
@@ -120,6 +140,7 @@ uv add crewai-tools
|
||||
- `timeZone` (string, optional): Time zone used in the response.
|
||||
- `updatedMin` (string, optional): Lower bound for an event's last modification time (RFC3339) to filter by.
|
||||
- `iCalUID` (string, optional): Specifies an event ID in the iCalendar format to be provided in the response.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_calendar/update_event">
|
||||
@@ -132,6 +153,7 @@ uv add crewai-tools
|
||||
- `description` (string, optional): Updated event description
|
||||
- `start_dateTime` (string, optional): Updated start time
|
||||
- `end_dateTime` (string, optional): Updated end time
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_calendar/delete_event">
|
||||
@@ -140,6 +162,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `calendarId` (string, required): Calendar ID
|
||||
- `eventId` (string, required): Event ID to delete
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_calendar/view_calendar_list">
|
||||
@@ -151,6 +174,7 @@ uv add crewai-tools
|
||||
- `showDeleted` (boolean, optional): Whether to include deleted calendar list entries in the result. Default: false
|
||||
- `showHidden` (boolean, optional): Whether to show hidden entries. Default: false
|
||||
- `minAccessRole` (string, optional): The minimum access role for the user in the returned entries. Options: freeBusyReader, owner, reader, writer
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -311,22 +335,26 @@ crew.kickoff()
|
||||
### Common Issues
|
||||
|
||||
**Authentication Errors**
|
||||
|
||||
- Ensure your Google account has the necessary permissions for calendar access
|
||||
- Verify that the OAuth connection includes all required scopes for Google Calendar API
|
||||
- Check if calendar sharing settings allow the required access level
|
||||
|
||||
**Event Creation Issues**
|
||||
|
||||
- Verify that time formats are correct (RFC3339 format); a sketch follows this list
|
||||
- Ensure attendee email addresses are properly formatted
|
||||
- Check that the target calendar exists and is accessible
|
||||
- Verify time zones are correctly specified
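RFC3339 timestamps are straightforward to produce with Python's standard library; the sketch below uses placeholder times and a placeholder time zone, and the resulting strings are suitable for event start and end fields.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# Placeholder meeting time in an explicit time zone.
tz = ZoneInfo("America/New_York")
start = datetime(2025, 6, 2, 14, 30, tzinfo=tz)
end = start + timedelta(hours=1)

# isoformat() on a timezone-aware datetime yields RFC3339-compatible strings,
# e.g. "2025-06-02T14:30:00-04:00".
start_rfc3339 = start.isoformat()
end_rfc3339 = end.isoformat()
```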
|
||||
|
||||
**Availability and Time Conflicts**
|
||||
|
||||
- Use proper RFC3339 format for time ranges when checking availability
|
||||
- Ensure time zones are consistent across all operations
|
||||
- Verify that calendar IDs are correct when checking multiple calendars
|
||||
|
||||
**Event Updates and Deletions**
|
||||
|
||||
- Verify that event IDs are correct and events exist
|
||||
- Ensure you have edit permissions for the events
|
||||
- Check that calendar ownership allows modifications
|
||||
@@ -334,5 +362,6 @@ crew.kickoff()
|
||||
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
  Contact our support team for assistance with Google Calendar integration setup or troubleshooting.
</Card>
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the Google Contacts integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -45,6 +63,7 @@ uv add crewai-tools
|
||||
- `personFields` (string, optional): Fields to include (e.g., 'names,emailAddresses,phoneNumbers'). Default: names,emailAddresses,phoneNumbers
|
||||
- `requestSyncToken` (boolean, optional): Whether the response should include a sync token. Default: false
|
||||
- `sortOrder` (string, optional): The order in which the connections should be sorted. Options: LAST_MODIFIED_ASCENDING, LAST_MODIFIED_DESCENDING, FIRST_NAME_ASCENDING, LAST_NAME_ASCENDING
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_contacts/search_contacts">
|
||||
@@ -56,6 +75,7 @@ uv add crewai-tools
|
||||
- `pageSize` (integer, optional): Number of results to return. Minimum: 1, Maximum: 30
|
||||
- `pageToken` (string, optional): Token specifying which result page to return.
|
||||
- `sources` (array, optional): The sources to search in. Options: READ_SOURCE_TYPE_CONTACT, READ_SOURCE_TYPE_PROFILE. Default: READ_SOURCE_TYPE_CONTACT
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_contacts/list_directory_people">
|
||||
@@ -68,6 +88,7 @@ uv add crewai-tools
|
||||
- `readMask` (string, optional): Fields to read (e.g., 'names,emailAddresses')
|
||||
- `requestSyncToken` (boolean, optional): Whether the response should include a sync token. Default: false
|
||||
- `mergeSources` (array, optional): Additional data to merge into the directory people responses. Options: CONTACT
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_contacts/search_directory_people">
|
||||
@@ -78,6 +99,7 @@ uv add crewai-tools
|
||||
- `sources` (string, required): Directory sources (use 'DIRECTORY_SOURCE_TYPE_DOMAIN_PROFILE')
|
||||
- `pageSize` (integer, optional): Number of results to return
|
||||
- `readMask` (string, optional): Fields to read
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_contacts/list_other_contacts">
|
||||
@@ -88,6 +110,7 @@ uv add crewai-tools
|
||||
- `pageToken` (string, optional): Token specifying which result page to return.
|
||||
- `readMask` (string, optional): Fields to read
|
||||
- `requestSyncToken` (boolean, optional): Whether the response should include a sync token. Default: false
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_contacts/search_other_contacts">
|
||||
@@ -97,6 +120,7 @@ uv add crewai-tools
|
||||
- `query` (string, required): Search query
|
||||
- `readMask` (string, required): Fields to read (e.g., 'names,emailAddresses')
|
||||
- `pageSize` (integer, optional): Number of results
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_contacts/get_person">
|
||||
@@ -105,6 +129,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `resourceName` (string, required): The resource name of the person to get (e.g., 'people/c123456789')
|
||||
- `personFields` (string, optional): Fields to include (e.g., 'names,emailAddresses,phoneNumbers'). Default: names,emailAddresses,phoneNumbers
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_contacts/create_contact">
|
||||
@@ -158,6 +183,7 @@ uv add crewai-tools
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_contacts/update_contact">
|
||||
@@ -169,6 +195,7 @@ uv add crewai-tools
|
||||
- `names` (array, optional): Person's names
|
||||
- `emailAddresses` (array, optional): Email addresses
|
||||
- `phoneNumbers` (array, optional): Phone numbers
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_contacts/delete_contact">
|
||||
@@ -176,6 +203,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `resourceName` (string, required): The resource name of the person to delete (e.g., 'people/c123456789')
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_contacts/batch_get_people">
|
||||
@@ -184,6 +212,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `resourceNames` (array, required): Resource names of people to get. Maximum: 200 items
|
||||
- `personFields` (string, optional): Fields to include (e.g., 'names,emailAddresses,phoneNumbers'). Default: names,emailAddresses,phoneNumbers
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_contacts/list_contact_groups">
|
||||
@@ -193,6 +222,7 @@ uv add crewai-tools
|
||||
- `pageSize` (integer, optional): Number of contact groups to return. Minimum: 1, Maximum: 1000
|
||||
- `pageToken` (string, optional): Token specifying which result page to return.
|
||||
- `groupFields` (string, optional): Fields to include (e.g., 'name,memberCount,clientData'). Default: name,memberCount
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -361,36 +391,43 @@ crew.kickoff()
|
||||
### Common Issues
|
||||
|
||||
**Permission Errors**
|
||||
|
||||
- Ensure your Google account has appropriate permissions for contacts access
|
||||
- Verify that the OAuth connection includes required scopes for Google Contacts API
|
||||
- Check that directory access permissions are granted for organization contacts
|
||||
|
||||
**Resource Name Format Issues**
|
||||
|
||||
- Ensure resource names follow the correct format (e.g., 'people/c123456789' for contacts)
|
||||
- Verify that contact group resource names use the format 'contactGroups/groupId'
|
||||
- Check that resource names exist and are accessible
|
||||
|
||||
**Search and Query Issues**
|
||||
|
||||
- Ensure search queries are properly formatted and not empty
|
||||
- Use appropriate readMask fields for the data you need
|
||||
- Verify that search sources are correctly specified (contacts vs profiles)
|
||||
|
||||
**Contact Creation and Updates**
|
||||
|
||||
- Ensure required fields are provided when creating contacts
|
||||
- Verify that email addresses and phone numbers are properly formatted
|
||||
- Check that updatePersonFields parameter includes all fields being updated
|
||||
|
||||
**Directory Access Issues**
|
||||
|
||||
- Ensure you have appropriate permissions to access organization directory
|
||||
- Verify that directory sources are correctly specified
|
||||
- Check that your organization allows API access to directory information
|
||||
|
||||
**Pagination and Limits**
|
||||
|
||||
- Be mindful of page size limits (varies by endpoint)
|
||||
- Use pageToken for pagination through large result sets
|
||||
- Respect API rate limits and implement appropriate delays
|
||||
|
||||
**Contact Groups and Organization**
|
||||
|
||||
- Ensure contact group names are unique when creating new groups
|
||||
- Verify that contacts exist before adding them to groups
|
||||
- Check that you have permissions to modify contact groups
|
||||
@@ -398,5 +435,6 @@ crew.kickoff()
|
||||
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
  Contact our support team for assistance with Google Contacts integration setup or troubleshooting.
</Card>
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the Google Docs integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -41,6 +59,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `title` (string, optional): The title for the new document.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_docs/get_document">
|
||||
@@ -50,6 +69,7 @@ uv add crewai-tools
|
||||
- `documentId` (string, required): The ID of the document to retrieve.
|
||||
- `includeTabsContent` (boolean, optional): Whether to include tab content. Default is `false`.
|
||||
- `suggestionsViewMode` (string, optional): The suggestions view mode to apply to the document. Enum: `DEFAULT_FOR_CURRENT_ACCESS`, `PREVIEW_SUGGESTIONS_ACCEPTED`, `PREVIEW_WITHOUT_SUGGESTIONS`. Default is `DEFAULT_FOR_CURRENT_ACCESS`.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_docs/batch_update">
|
||||
@@ -59,6 +79,7 @@ uv add crewai-tools
|
||||
- `documentId` (string, required): The ID of the document to update.
|
||||
- `requests` (array, required): A list of updates to apply to the document. Each item is an object representing a request. A sketch of a typical payload follows this accordion.
|
||||
- `writeControl` (object, optional): Provides control over how write requests are executed. Contains `requiredRevisionId` (string) and `targetRevisionId` (string).
|
||||
|
||||
</Accordion>
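The `requests` array follows the request shapes of the Google Docs API `batchUpdate` method. Below is a minimal, illustrative payload that inserts a line of text and replaces a placeholder; the document ID, index, and text are assumptions, and only two of the many request types are shown.

```python
# Illustrative google_docs/batch_update parameters; adjust IDs, indexes, and text.
batch_update_params = {
    "documentId": "your-document-id",  # placeholder
    "requests": [
        {
            "insertText": {
                "location": {"index": 1},  # start of the document body
                "text": "Status report\n",
            }
        },
        {
            "replaceAllText": {
                "containsText": {"text": "{{owner}}", "matchCase": True},
                "replaceText": "Platform team",
            }
        },
    ],
}
```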
|
||||
|
||||
<Accordion title="google_docs/insert_text">
|
||||
@@ -68,6 +89,7 @@ uv add crewai-tools
|
||||
- `documentId` (string, required): The ID of the document to update.
|
||||
- `text` (string, required): The text to insert.
|
||||
- `index` (integer, optional): The zero-based index where to insert the text. Default is `1`.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_docs/replace_text">
|
||||
@@ -78,6 +100,7 @@ uv add crewai-tools
|
||||
- `containsText` (string, required): The text to find and replace.
|
||||
- `replaceText` (string, required): The text to replace it with.
|
||||
- `matchCase` (boolean, optional): Whether the search should respect case. Default is `false`.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_docs/delete_content_range">
|
||||
@@ -87,6 +110,7 @@ uv add crewai-tools
|
||||
- `documentId` (string, required): The ID of the document to update.
|
||||
- `startIndex` (integer, required): The start index of the range to delete.
|
||||
- `endIndex` (integer, required): The end index of the range to delete.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_docs/insert_page_break">
|
||||
@@ -95,6 +119,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `documentId` (string, required): The ID of the document to update.
|
||||
- `index` (integer, optional): The zero-based index where to insert the page break. Default is `1`.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_docs/create_named_range">
|
||||
@@ -105,6 +130,7 @@ uv add crewai-tools
|
||||
- `name` (string, required): The name for the named range.
|
||||
- `startIndex` (integer, required): The start index of the range.
|
||||
- `endIndex` (integer, required): The end index of the range.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -200,29 +226,35 @@ crew.kickoff()
|
||||
### Common Issues
|
||||
|
||||
**Authentication Errors**
|
||||
|
||||
- Ensure your Google account has the necessary permissions for Google Docs access.
|
||||
- Verify that the OAuth connection includes all required scopes (`https://www.googleapis.com/auth/documents`).
|
||||
|
||||
**Document ID Issues**
|
||||
|
||||
- Double-check document IDs for correctness.
|
||||
- Ensure the document exists and is accessible to your account.
|
||||
- Document IDs can be found in the Google Docs URL.
|
||||
|
||||
**Text Insertion and Range Operations**
|
||||
|
||||
- When using `insert_text` or `delete_content_range`, ensure index positions are valid.
|
||||
- Remember that Google Docs uses zero-based indexing.
|
||||
- The document must have content at the specified index positions.
|
||||
|
||||
**Batch Update Request Formatting**
|
||||
|
||||
- When using `batch_update`, ensure the `requests` array is correctly formatted according to the Google Docs API documentation.
|
||||
- Complex updates require specific JSON structures for each request type.
|
||||
|
||||
**Replace Text Operations**
|
||||
|
||||
- For `replace_text`, ensure the `containsText` parameter exactly matches the text you want to replace.
|
||||
- Use `matchCase` parameter to control case sensitivity.
|
||||
|
||||
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
  Contact our support team for assistance with Google Docs integration setup or troubleshooting.
</Card>
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the Google Drive integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -41,6 +59,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `file_id` (string, required): The ID of the file to retrieve.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_drive/list_files">
|
||||
@@ -52,6 +71,7 @@ uv add crewai-tools
|
||||
- `page_token` (string, optional): Token for retrieving the next page of results.
|
||||
- `order_by` (string, optional): Sort order (example: "name", "createdTime desc", "modifiedTime").
|
||||
- `spaces` (string, optional): Comma-separated list of spaces to query (drive, appDataFolder, photos).
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_drive/upload_file">
|
||||
@@ -63,6 +83,7 @@ uv add crewai-tools
|
||||
- `mime_type` (string, optional): MIME type of the file (example: "text/plain", "application/pdf").
|
||||
- `parent_folder_id` (string, optional): ID of the parent folder where the file should be created.
|
||||
- `description` (string, optional): Description of the file.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_drive/download_file">
|
||||
@@ -71,6 +92,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `file_id` (string, required): The ID of the file to download.
|
||||
- `mime_type` (string, optional): MIME type for export (required for Google Workspace documents).
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_drive/create_folder">
|
||||
@@ -80,6 +102,7 @@ uv add crewai-tools
|
||||
- `name` (string, required): Name of the folder to create.
|
||||
- `parent_folder_id` (string, optional): ID of the parent folder where the new folder should be created.
|
||||
- `description` (string, optional): Description of the folder.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_drive/delete_file">
|
||||
@@ -87,6 +110,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `file_id` (string, required): The ID of the file to delete.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_drive/share_file">
|
||||
@@ -100,6 +124,7 @@ uv add crewai-tools
|
||||
- `domain` (string, optional): The domain to share with (required for domain type).
|
||||
- `send_notification_email` (boolean, optional): Whether to send a notification email (default: true).
|
||||
- `email_message` (string, optional): A plain text custom message to include in the notification email.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_drive/update_file">
|
||||
@@ -113,6 +138,7 @@ uv add crewai-tools
|
||||
- `description` (string, optional): New description for the file.
|
||||
- `add_parents` (string, optional): Comma-separated list of parent folder IDs to add.
|
||||
- `remove_parents` (string, optional): Comma-separated list of parent folder IDs to remove.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
|
||||
@@ -34,6 +34,24 @@ Before using the Google Sheets integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -45,6 +63,7 @@ uv add crewai-tools
|
||||
- `ranges` (array, optional): The ranges to retrieve from the spreadsheet.
|
||||
- `includeGridData` (boolean, optional): True if grid data should be returned. Default: false
|
||||
- `fields` (string, optional): The fields to include in the response. Use this to improve performance by only returning needed data.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_sheets/get_values">
|
||||
@@ -56,6 +75,7 @@ uv add crewai-tools
|
||||
- `valueRenderOption` (string, optional): How values should be represented in the output. Options: FORMATTED_VALUE, UNFORMATTED_VALUE, FORMULA. Default: FORMATTED_VALUE
|
||||
- `dateTimeRenderOption` (string, optional): How dates, times, and durations should be represented in the output. Options: SERIAL_NUMBER, FORMATTED_STRING. Default: SERIAL_NUMBER
|
||||
- `majorDimension` (string, optional): The major dimension that results should use. Options: ROWS, COLUMNS. Default: ROWS
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_sheets/update_values">
|
||||
@@ -72,6 +92,7 @@ uv add crewai-tools
|
||||
]
|
||||
```
|
||||
- `valueInputOption` (string, optional): How the input data should be interpreted. Options: RAW, USER_ENTERED. Default: USER_ENTERED
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_sheets/append_values">
|
||||
@@ -89,6 +110,7 @@ uv add crewai-tools
|
||||
```
|
||||
- `valueInputOption` (string, optional): How the input data should be interpreted. Options: RAW, USER_ENTERED. Default: USER_ENTERED
|
||||
- `insertDataOption` (string, optional): How the input data should be inserted. Options: OVERWRITE, INSERT_ROWS. Default: INSERT_ROWS
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_sheets/create_spreadsheet">
|
||||
@@ -106,6 +128,7 @@ uv add crewai-tools
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -303,31 +326,37 @@ crew.kickoff()
|
||||
### Common Issues
|
||||
|
||||
**Permission Errors**
|
||||
|
||||
- Ensure your Google account has edit access to the target spreadsheets
|
||||
- Verify that the OAuth connection includes required scopes for Google Sheets API
|
||||
- Check that spreadsheets are shared with the authenticated account
|
||||
|
||||
**Spreadsheet Structure Issues**
|
||||
|
||||
- Ensure worksheets have proper column headers before creating or updating rows
|
||||
- Verify that range notation (A1 format) is correct for the target cells
|
||||
- Check that the specified spreadsheet ID exists and is accessible
|
||||
|
||||
**Data Type and Format Issues**
|
||||
|
||||
- Ensure data values match the expected format for each column
|
||||
- Use proper date formats for date columns (ISO format recommended)
|
||||
- Verify that numeric values are properly formatted for number columns
|
||||
|
||||
**Range and Cell Reference Issues**
|
||||
|
||||
- Use proper A1 notation for ranges (e.g., "A1:C10", "Sheet1!A1:B5"); a sketch follows this list
|
||||
- Ensure range references don't exceed the actual spreadsheet dimensions
|
||||
- Verify that sheet names in range references match actual sheet names
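One way to avoid range mismatches is to build the A1 range and the `values` payload together; the sketch below uses placeholder data sized to match the range.

```python
# Placeholder range and data for an update_values-style call.
spreadsheet_range = "Sheet1!A1:C3"  # sheet name plus A1 notation

# Rows first (majorDimension ROWS): 3 rows x 3 columns to match A1:C3.
values = [
    ["Name", "Email", "Signed up"],
    ["Ada", "ada@example.com", "2025-01-15"],
    ["Grace", "grace@example.com", "2025-02-03"],
]
```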
|
||||
|
||||
**Value Input and Rendering Options**
|
||||
|
||||
- Choose appropriate `valueInputOption` (RAW vs USER_ENTERED) for your data
|
||||
- Select proper `valueRenderOption` based on how you want data formatted
|
||||
- Consider `dateTimeRenderOption` for consistent date/time handling
|
||||
|
||||
**Spreadsheet Creation Issues**
|
||||
|
||||
- Ensure spreadsheet titles are unique and follow naming conventions
|
||||
- Verify that sheet properties are properly structured when creating sheets
|
||||
- Check that you have permissions to create new spreadsheets in your account
|
||||
@@ -335,5 +364,6 @@ crew.kickoff()
|
||||
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
  Contact our support team for assistance with Google Sheets integration setup or troubleshooting.
</Card>
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the Google Slides integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -41,6 +59,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `title` (string, required): The title of the presentation.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_slides/get_presentation">
|
||||
@@ -49,6 +68,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `presentationId` (string, required): The ID of the presentation to retrieve.
|
||||
- `fields` (string, optional): The fields to include in the response. Use this to improve performance by only returning needed data.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_slides/batch_update_presentation">
|
||||
@@ -73,6 +93,7 @@ uv add crewai-tools
|
||||
"requiredRevisionId": "revision_id_string"
|
||||
}
|
||||
```
|
||||
|
||||
</Accordion>
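The `batch_update_presentation` action above mirrors the Google Slides API `batchUpdate` shapes, and `writeControl.requiredRevisionId` can guard against concurrent edits. The payload below is a minimal sketch: the presentation ID, placeholder text, and revision ID are assumptions, and only one request type is shown.

```python
# Illustrative google_slides/batch_update_presentation parameters.
batch_update_params = {
    "presentationId": "your-presentation-id",  # placeholder
    "requests": [
        {
            "replaceAllText": {
                "containsText": {"text": "{{quarter}}", "matchCase": True},
                "replaceText": "Q3 2025",
            }
        }
    ],
    "writeControl": {
        # Typically taken from a prior get_presentation response.
        "requiredRevisionId": "revision_id_string"
    },
}
```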
|
||||
|
||||
<Accordion title="google_slides/get_page">
|
||||
@@ -81,6 +102,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `presentationId` (string, required): The ID of the presentation.
|
||||
- `pageObjectId` (string, required): The ID of the page to retrieve.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_slides/get_thumbnail">
|
||||
@@ -89,6 +111,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `presentationId` (string, required): The ID of the presentation.
|
||||
- `pageObjectId` (string, required): The ID of the page for thumbnail generation.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_slides/import_data_from_sheet">
|
||||
@@ -98,6 +121,7 @@ uv add crewai-tools
|
||||
- `presentationId` (string, required): The ID of the presentation.
|
||||
- `sheetId` (string, required): The ID of the Google Sheet to import from.
|
||||
- `dataRange` (string, required): The range of data to import from the sheet.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_slides/upload_file_to_drive">
|
||||
@@ -106,6 +130,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `file` (string, required): The file data to upload.
|
||||
- `presentationId` (string, required): The ID of the presentation to link the uploaded file.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_slides/link_file_to_presentation">
|
||||
@@ -114,6 +139,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `presentationId` (string, required): The ID of the presentation.
|
||||
- `fileId` (string, required): The ID of the file to link.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_slides/get_all_presentations">
|
||||
@@ -122,6 +148,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `pageSize` (integer, optional): The number of presentations to return per page.
|
||||
- `pageToken` (string, optional): A token for pagination.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="google_slides/delete_presentation">
|
||||
@@ -129,6 +156,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `presentationId` (string, required): The ID of the presentation to delete.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -330,36 +358,43 @@ crew.kickoff()
|
||||
### Common Issues
|
||||
|
||||
**Permission Errors**
|
||||
|
||||
- Ensure your Google account has appropriate permissions for Google Slides
|
||||
- Verify that the OAuth connection includes required scopes for presentations, spreadsheets, and drive access
|
||||
- Check that presentations are shared with the authenticated account
|
||||
|
||||
**Presentation ID Issues**
|
||||
|
||||
- Verify that presentation IDs are correct and presentations exist
|
||||
- Ensure you have access permissions to the presentations you're trying to modify
|
||||
- Check that presentation IDs are properly formatted
|
||||
|
||||
**Content Update Issues**
|
||||
|
||||
- Ensure batch update requests are properly formatted according to Google Slides API specifications
|
||||
- Verify that object IDs for slides and elements exist in the presentation
|
||||
- Check that write control revision IDs are current if using optimistic concurrency
|
||||
|
||||
**Data Import Issues**
|
||||
|
||||
- Verify that Google Sheet IDs are correct and accessible
|
||||
- Ensure data ranges are properly specified using A1 notation
|
||||
- Check that you have read permissions for the source spreadsheets
|
||||
|
||||
**File Upload and Linking Issues**
|
||||
|
||||
- Ensure file data is properly encoded for upload
|
||||
- Verify that Drive file IDs are correct when linking files
|
||||
- Check that you have appropriate Drive permissions for file operations
|
||||
|
||||
**Page and Thumbnail Operations**
|
||||
|
||||
- Verify that page object IDs exist in the specified presentation
|
||||
- Ensure presentations have content before attempting to generate thumbnails
|
||||
- Check that page structure is valid for thumbnail generation
|
||||
|
||||
**Pagination and Listing Issues**
|
||||
|
||||
- Use appropriate page sizes for listing presentations
|
||||
- Implement proper pagination using page tokens for large result sets
|
||||
- Handle empty result sets gracefully
|
||||
@@ -367,5 +402,6 @@ crew.kickoff()
|
||||
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
  Contact our support team for assistance with Google Slides integration setup or troubleshooting.
</Card>
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the HubSpot integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -99,6 +117,7 @@ uv add crewai-tools
|
||||
- `web_technologies` (string, optional): Web Technologies used. Must be one of the predefined values.
|
||||
- `website` (string, optional): Website URL.
|
||||
- `founded_year` (string, optional): Year Founded.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/create_contact">
|
||||
@@ -198,6 +217,7 @@ uv add crewai-tools
|
||||
- `hs_whatsapp_phone_number` (string, optional): WhatsApp Phone Number.
|
||||
- `work_email` (string, optional): Work email.
|
||||
- `hs_googleplusid` (string, optional): googleplus ID.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/create_deal">
|
||||
@@ -213,6 +233,7 @@ uv add crewai-tools
|
||||
- `dealtype` (string, optional): The type of deal. Available values: `newbusiness`, `existingbusiness`.
|
||||
- `description` (string, optional): A description of the deal.
|
||||
- `hs_priority` (string, optional): The priority of the deal. Available values: `low`, `medium`, `high`.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/create_record_engagements">
|
||||
@@ -230,6 +251,7 @@ uv add crewai-tools
|
||||
- `hs_meeting_body` (string, optional): The description for the meeting. (Used for `MEETING`)
|
||||
- `hs_meeting_start_time` (string, optional): The start time of the meeting. (Used for `MEETING`)
|
||||
- `hs_meeting_end_time` (string, optional): The end time of the meeting. (Used for `MEETING`)
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/update_company">
|
||||
@@ -247,6 +269,7 @@ uv add crewai-tools
|
||||
- `numberofemployees` (number, optional): Number of Employees.
|
||||
- `annualrevenue` (number, optional): Annual Revenue.
|
||||
- `description` (string, optional): Description.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/create_record_any">
|
||||
@@ -255,6 +278,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `recordType` (string, required): The object type ID of the custom object.
|
||||
- Additional parameters depend on the custom object's schema.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/update_contact">
|
||||
@@ -269,6 +293,7 @@ uv add crewai-tools
|
||||
- `company` (string, optional): Company Name.
|
||||
- `jobtitle` (string, optional): Job Title.
|
||||
- `lifecyclestage` (string, optional): Lifecycle Stage.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/update_deal">
|
||||
@@ -282,6 +307,7 @@ uv add crewai-tools
|
||||
- `pipeline` (string, optional): The pipeline the deal belongs to.
|
||||
- `closedate` (string, optional): The date the deal is expected to close.
|
||||
- `dealtype` (string, optional): The type of deal.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/update_record_engagements">
|
||||
@@ -293,6 +319,7 @@ uv add crewai-tools
|
||||
- `hs_task_subject` (string, optional): The title of the task.
|
||||
- `hs_task_body` (string, optional): The notes for the task.
|
||||
- `hs_task_status` (string, optional): The status of the task.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/update_record_any">
|
||||
@@ -302,6 +329,7 @@ uv add crewai-tools
|
||||
- `recordId` (string, required): The ID of the record to update.
|
||||
- `recordType` (string, required): The object type ID of the custom object.
|
||||
- Additional parameters depend on the custom object's schema.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/list_companies">
|
||||
@@ -309,6 +337,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/list_contacts">
|
||||
@@ -316,6 +345,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/list_deals">
|
||||
@@ -323,6 +353,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/get_records_engagements">
|
||||
@@ -331,6 +362,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `objectName` (string, required): The type of engagement to fetch (e.g., "notes").
|
||||
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/get_records_any">
|
||||
@@ -339,6 +371,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `recordType` (string, required): The object type ID of the custom object.
|
||||
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/get_company">
|
||||
@@ -346,6 +379,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `recordId` (string, required): The ID of the company to retrieve.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/get_contact">
|
||||
@@ -353,6 +387,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `recordId` (string, required): The ID of the contact to retrieve.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/get_deal">
|
||||
@@ -360,6 +395,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `recordId` (string, required): The ID of the deal to retrieve.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/get_record_by_id_engagements">
|
||||
@@ -367,6 +403,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `recordId` (string, required): The ID of the engagement to retrieve.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/get_record_by_id_any">
|
||||
@@ -375,6 +412,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `recordType` (string, required): The object type ID of the custom object.
|
||||
- `recordId` (string, required): The ID of the record to retrieve.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/search_companies">
|
||||
@@ -383,6 +421,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `filterFormula` (object, optional): A filter in disjunctive normal form (OR of ANDs).
|
||||
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/search_contacts">
|
||||
@@ -391,6 +430,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `filterFormula` (object, optional): A filter in disjunctive normal form (OR of ANDs).
|
||||
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/search_deals">
|
||||
@@ -399,6 +439,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `filterFormula` (object, optional): A filter in disjunctive normal form (OR of ANDs).
|
||||
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/search_records_engagements">
|
||||
@@ -407,6 +448,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `engagementFilterFormula` (object, optional): A filter for engagements.
|
||||
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/search_records_any">
|
||||
@@ -416,6 +458,7 @@ uv add crewai-tools
|
||||
- `recordType` (string, required): The object type ID to search.
|
||||
- `filterFormula` (string, optional): The filter formula to apply.
|
||||
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/delete_record_companies">
|
||||
@@ -423,6 +466,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `recordId` (string, required): The ID of the company to delete.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/delete_record_contacts">
|
||||
@@ -430,6 +474,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `recordId` (string, required): The ID of the contact to delete.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/delete_record_deals">
|
||||
@@ -437,6 +482,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `recordId` (string, required): The ID of the deal to delete.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/delete_record_engagements">
|
||||
@@ -444,6 +490,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `recordId` (string, required): The ID of the engagement to delete.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/delete_record_any">
|
||||
@@ -452,6 +499,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `recordType` (string, required): The object type ID of the custom object.
|
||||
- `recordId` (string, required): The ID of the record to delete.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="hubspot/get_contacts_by_list_id">
|
||||
@@ -460,6 +508,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `listId` (string, required): The ID of the list to get contacts from.
|
||||
- `paginationParameters` (object, optional): Use `pageCursor` for subsequent pages (see the pagination sketch after this accordion).

</Accordion>
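Cursor-based pagination with `paginationParameters` usually means looping until no further cursor is returned. The sketch below is purely illustrative: `run_hubspot_action` stands in for whatever mechanism executes the action in your setup, and the `results` and `nextPageCursor` response keys are assumptions.

```python
def fetch_all_contacts(list_id: str, run_hubspot_action) -> list[dict]:
    """Illustrative cursor loop; `run_hubspot_action` is a hypothetical callable."""
    contacts: list[dict] = []
    cursor = None
    while True:
        params = {"listId": list_id}
        if cursor:
            params["paginationParameters"] = {"pageCursor": cursor}
        page = run_hubspot_action("hubspot/get_contacts_by_list_id", params)
        contacts.extend(page.get("results", []))  # assumed response key
        cursor = page.get("nextPageCursor")       # assumed response key
        if not cursor:
            return contacts
```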
|
||||
|
||||
<Accordion title="hubspot/describe_action_schema">
|
||||
@@ -468,6 +517,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `recordType` (string, required): The object type ID (e.g., 'companies').
|
||||
- `operation` (string, required): The operation type (e.g., 'CREATE_RECORD').
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -561,5 +611,6 @@ crew.kickoff()
|
||||
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
  Contact our support team for assistance with HubSpot integration setup or troubleshooting.
</Card>
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the Jira integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -54,6 +72,7 @@ uv add crewai-tools
|
||||
"customfield_10001": "value"
|
||||
}
|
||||
```
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="jira/update_issue">
|
||||
@@ -69,6 +88,7 @@ uv add crewai-tools
|
||||
- Options: `description`, `descriptionJSON`
|
||||
- `description` (string, optional): Description - A detailed description of the issue. This field appears only when 'descriptionType' = 'description'.
|
||||
- `additionalFields` (string, optional): Additional Fields - Specify any other fields that should be included in JSON format.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="jira/get_issue_by_key">
|
||||
@@ -76,6 +96,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `issueKey` (string, required): Issue Key (example: "TEST-1234").
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="jira/filter_issues">
|
||||
@@ -102,6 +123,7 @@ uv add crewai-tools
|
||||
```
|
||||
Available operators: `$stringExactlyMatches`, `$stringDoesNotExactlyMatch`, `$stringIsIn`, `$stringIsNotIn`, `$stringContains`, `$stringDoesNotContain`, `$stringGreaterThan`, `$stringLessThan`
|
||||
- `limit` (string, optional): Limit results - Limit the maximum number of issues to return. Defaults to 10 if left blank.
|
||||
|
||||
</Accordion>

<Accordion title="jira/search_by_jql">
@@ -115,12 +137,14 @@ uv add crewai-tools
"pageCursor": "cursor_string"
}
```

</Accordion>

<Accordion title="jira/update_issue_any">
**Description:** Update any issue in Jira. Use DESCRIBE_ACTION_SCHEMA to get properties schema for this function.

**Parameters:** No specific parameters - use JIRA_DESCRIBE_ACTION_SCHEMA first to get the expected schema.

</Accordion>

<Accordion title="jira/describe_action_schema">
@@ -130,6 +154,7 @@ uv add crewai-tools
- `issueTypeId` (string, required): Issue Type ID.
- `projectKey` (string, required): Project key.
- `operation` (string, required): Operation Type value, for example CREATE_ISSUE or UPDATE_ISSUE.

</Accordion>

<Accordion title="jira/get_projects">
@@ -142,6 +167,7 @@ uv add crewai-tools
"pageCursor": "cursor_string"
}
```

</Accordion>

<Accordion title="jira/get_issue_types_by_project">
@@ -149,12 +175,14 @@ uv add crewai-tools

**Parameters:**
- `project` (string, required): Project key.

</Accordion>

<Accordion title="jira/get_issue_types">
**Description:** Get all Issue Types in Jira.

**Parameters:** None required.

</Accordion>

<Accordion title="jira/get_issue_status_by_project">
@@ -162,6 +190,7 @@ uv add crewai-tools

**Parameters:**
- `project` (string, required): Project key.

</Accordion>

<Accordion title="jira/get_all_assignees_by_project">
@@ -169,6 +198,7 @@ uv add crewai-tools

**Parameters:**
- `project` (string, required): Project key.

</Accordion>
</AccordionGroup>

@@ -332,31 +362,37 @@ crew.kickoff()
### Common Issues

**Permission Errors**

- Ensure your Jira account has necessary permissions for the target projects
- Verify that the OAuth connection includes required scopes for Jira API
- Check if you have create/edit permissions for issues in the specified projects

**Invalid Project or Issue Keys**

- Double-check project keys and issue keys for correct format (e.g., "PROJ-123")
- Ensure projects exist and are accessible to your account
- Verify that issue keys reference existing issues

**Issue Type and Status Issues**

- Use JIRA_GET_ISSUE_TYPES_BY_PROJECT to get valid issue types for a project
- Use JIRA_GET_ISSUE_STATUS_BY_PROJECT to get valid statuses
- Ensure issue types and statuses are available in the target project

**JQL Query Problems**

- Test JQL queries in Jira's issue search before using in API calls
- Ensure field names in JQL are spelled correctly and exist in your Jira instance
- Use proper JQL syntax for complex queries

**Custom Fields and Schema Issues**

- Use JIRA_DESCRIBE_ACTION_SCHEMA to get the correct schema for complex issue types
- Ensure custom field IDs are correct (e.g., "customfield_10001")
- Verify that custom fields are available in the target project and issue type

**Filter Formula Issues**

- Ensure filter formulas follow the correct JSON structure for disjunctive normal form
- Use valid field names that exist in your Jira configuration
- Test simple filters before building complex multi-condition queries
@@ -364,5 +400,6 @@ crew.kickoff()
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Jira integration setup or troubleshooting.
</Card>

@@ -33,6 +33,24 @@ Before using the Linear integration, ensure you have:
uv add crewai-tools
```

### 3. Environment Variable Setup

<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>

```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```

Or add it to your `.env` file:

```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
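As a minimal sketch of how the token is consumed, the snippet below checks that it is present and passes the integration to an agent through `apps`. The `"linear"` app identifier and the agent configuration are illustrative assumptions, not a definitive setup.

```python
import os

from crewai import Agent

# Assumption: the token was exported in the shell or loaded from .env by
# your own tooling before this code runs.
if not os.environ.get("CREWAI_PLATFORM_INTEGRATION_TOKEN"):
    raise RuntimeError(
        "Set CREWAI_PLATFORM_INTEGRATION_TOKEN before using platform integrations."
    )

# Hypothetical agent wired to the Linear integration via `apps`.
triage_agent = Agent(
    role="Issue Triage Specialist",
    goal="Keep the Linear backlog organized",
    backstory="Experienced at triaging engineering issues.",
    apps=["linear"],  # assumption: app identifier for this integration
)
```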

## Available Actions

<AccordionGroup>
@@ -54,6 +72,7 @@ uv add crewai-tools
"labelIds": ["a70bdf0f-530a-4887-857d-46151b52b47c"]
}
```

</Accordion>

<Accordion title="linear/update_issue">
@@ -74,6 +93,7 @@ uv add crewai-tools
"labelIds": ["a70bdf0f-530a-4887-857d-46151b52b47c"]
}
```

</Accordion>

<Accordion title="linear/get_issue_by_id">
@@ -81,6 +101,7 @@ uv add crewai-tools

**Parameters:**
- `issueId` (string, required): Issue ID - Specify the record ID of the issue to fetch. (example: "90fbc706-18cd-42c9-ae66-6bd344cc8977").

</Accordion>

<Accordion title="linear/get_issue_by_issue_identifier">
@@ -88,6 +109,7 @@ uv add crewai-tools

**Parameters:**
- `externalId` (string, required): External ID - Specify the human-readable Issue identifier of the issue to fetch. (example: "ABC-1").

</Accordion>

<Accordion title="linear/search_issue">
@@ -115,6 +137,7 @@ uv add crewai-tools
```
Available fields: `title`, `number`, `project`, `createdAt`
Available operators: `$stringExactlyMatches`, `$stringDoesNotExactlyMatch`, `$stringIsIn`, `$stringIsNotIn`, `$stringStartsWith`, `$stringDoesNotStartWith`, `$stringEndsWith`, `$stringDoesNotEndWith`, `$stringContains`, `$stringDoesNotContain`, `$stringGreaterThan`, `$stringLessThan`, `$numberGreaterThanOrEqualTo`, `$numberLessThanOrEqualTo`, `$numberGreaterThan`, `$numberLessThan`, `$dateTimeAfter`, `$dateTimeBefore`

</Accordion>
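For example, a short hedged sketch that combines a string operator with a date operator from the list above; as with the Jira example earlier, the `$or`/`$and` grouping keys are an assumption about the disjunctive-normal-form payload shape.

```python
# Hypothetical filter for linear/search_issue: title contains "bug" AND
# the issue was created after a given timestamp.
search_filter = {
    "$or": [
        {
            "$and": [
                {"title": {"$stringContains": "bug"}},
                {"createdAt": {"$dateTimeAfter": "2024-01-01T00:00:00Z"}},
            ]
        }
    ]
}
```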

<Accordion title="linear/delete_issue">
@@ -122,6 +145,7 @@ uv add crewai-tools

**Parameters:**
- `issueId` (string, required): Issue ID - Specify the record ID of the issue to delete. (example: "90fbc706-18cd-42c9-ae66-6bd344cc8977").

</Accordion>

<Accordion title="linear/archive_issue">
@@ -129,6 +153,7 @@ uv add crewai-tools

**Parameters:**
- `issueId` (string, required): Issue ID - Specify the record ID of the issue to archive. (example: "90fbc706-18cd-42c9-ae66-6bd344cc8977").

</Accordion>

<Accordion title="linear/create_sub_issue">
@@ -145,6 +170,7 @@ uv add crewai-tools
"lead": "linear_user_id"
}
```

</Accordion>

<Accordion title="linear/create_project">
@@ -167,6 +193,7 @@ uv add crewai-tools
"description": ""
}
```

</Accordion>

<Accordion title="linear/update_project">
@@ -183,6 +210,7 @@ uv add crewai-tools
"description": ""
}
```

</Accordion>

<Accordion title="linear/get_project_by_id">
@@ -190,6 +218,7 @@ uv add crewai-tools

**Parameters:**
- `projectId` (string, required): Project ID - Specify the Project ID of the project to fetch. (example: "a6634484-6061-4ac7-9739-7dc5e52c796b").

</Accordion>

<Accordion title="linear/delete_project">
@@ -197,6 +226,7 @@ uv add crewai-tools

**Parameters:**
- `projectId` (string, required): Project ID - Specify the Project ID of the project to delete. (example: "a6634484-6061-4ac7-9739-7dc5e52c796b").

</Accordion>

<Accordion title="linear/search_teams">
@@ -222,6 +252,7 @@ uv add crewai-tools
}
```
Available fields: `id`, `name`

</Accordion>
</AccordionGroup>

@@ -385,37 +416,44 @@ crew.kickoff()
### Common Issues

**Permission Errors**

- Ensure your Linear account has necessary permissions for the target workspace
- Verify that the OAuth connection includes required scopes for Linear API
- Check if you have create/edit permissions for issues and projects in the workspace

**Invalid IDs and References**

- Double-check team IDs, issue IDs, and project IDs for correct UUID format
- Ensure referenced entities (teams, projects, cycles) exist and are accessible
- Verify that issue identifiers follow the correct format (e.g., "ABC-1")

**Team and Project Association Issues**

- Use LINEAR_SEARCH_TEAMS to get valid team IDs before creating issues or projects
- Ensure teams exist and are active in your workspace
- Verify that team IDs are properly formatted as UUIDs

**Issue Status and Priority Problems**

- Check that status IDs reference valid workflow states for the team
- Ensure priority values are within the valid range for your Linear configuration
- Verify that custom fields and labels exist before referencing them

**Date and Time Format Issues**

- Use ISO 8601 format for due dates and timestamps
- Ensure time zones are handled correctly for due date calculations
- Verify that date values are valid and in the future for due dates

**Search and Filter Issues**

- Ensure search queries are properly formatted and not empty
- Use valid field names in filter formulas: `title`, `number`, `project`, `createdAt`
- Test simple filters before building complex multi-condition queries
- Verify that operator types match the data types of the fields being filtered

**Sub-issue Creation Problems**

- Ensure parent issue IDs are valid and accessible
- Verify that the team ID for sub-issues matches or is compatible with the parent issue's team
- Check that parent issues are not already archived or deleted
@@ -423,5 +461,6 @@ crew.kickoff()
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Linear integration setup or troubleshooting.
</Card>

@@ -33,6 +33,24 @@ Before using the Microsoft Excel integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -52,6 +70,7 @@ uv add crewai-tools
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_excel/get_workbooks">
|
||||
@@ -63,6 +82,7 @@ uv add crewai-tools
|
||||
- `expand` (string, optional): Expand related resources inline
|
||||
- `top` (integer, optional): Number of items to return. Minimum: 1, Maximum: 999
|
||||
- `orderby` (string, optional): Order results by specified properties
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_excel/get_worksheets">
|
||||
@@ -75,6 +95,7 @@ uv add crewai-tools
|
||||
- `expand` (string, optional): Expand related resources inline
|
||||
- `top` (integer, optional): Number of items to return. Minimum: 1, Maximum: 999
|
||||
- `orderby` (string, optional): Order results by specified properties
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_excel/create_worksheet">
|
||||
@@ -83,6 +104,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `file_id` (string, required): The ID of the Excel file
|
||||
- `name` (string, required): Name of the new worksheet
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_excel/get_range_data">
|
||||
@@ -92,6 +114,7 @@ uv add crewai-tools
|
||||
- `file_id` (string, required): The ID of the Excel file
|
||||
- `worksheet_name` (string, required): Name of the worksheet
|
||||
- `range` (string, required): Range address (e.g., 'A1:C10')
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_excel/update_range_data">
|
||||
@@ -109,6 +132,7 @@ uv add crewai-tools
|
||||
["Jane", 25, "Los Angeles"]
|
||||
]
|
||||
```
|
||||
|
||||
</Accordion>
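A small sketch of building the 2D `values` payload in Python before handing it to the action; the sample rows and the 'A1:C2' range address are placeholders.

```python
import json

# Rows must form a rectangular 2D array: every row has the same number of
# columns, and cells are plain strings or numbers.
people = [
    {"name": "John", "age": 30, "city": "New York"},
    {"name": "Jane", "age": 25, "city": "Los Angeles"},
]
values = [[p["name"], p["age"], p["city"]] for p in people]

# The range address should match the shape of `values` (placeholder here).
payload = {"range": "A1:C2", "values": values}
print(json.dumps(payload, indent=2))
```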
|
||||
|
||||
<Accordion title="microsoft_excel/add_table">
|
||||
@@ -119,6 +143,7 @@ uv add crewai-tools
|
||||
- `worksheet_name` (string, required): Name of the worksheet
|
||||
- `range` (string, required): Range for the table (e.g., 'A1:D10')
|
||||
- `has_headers` (boolean, optional): Whether the first row contains headers. Default: true
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_excel/get_tables">
|
||||
@@ -127,6 +152,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `file_id` (string, required): The ID of the Excel file
|
||||
- `worksheet_name` (string, required): Name of the worksheet
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_excel/add_table_row">
|
||||
@@ -140,6 +166,7 @@ uv add crewai-tools
|
||||
```json
|
||||
["John Doe", 35, "Manager", "Sales"]
|
||||
```
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_excel/create_chart">
|
||||
@@ -151,6 +178,7 @@ uv add crewai-tools
|
||||
- `chart_type` (string, required): Type of chart (e.g., 'ColumnClustered', 'Line', 'Pie')
|
||||
- `source_data` (string, required): Range of data for the chart (e.g., 'A1:B10')
|
||||
- `series_by` (string, optional): How to interpret the data ('Auto', 'Columns', or 'Rows'). Default: Auto
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_excel/get_cell">
|
||||
@@ -161,6 +189,7 @@ uv add crewai-tools
|
||||
- `worksheet_name` (string, required): Name of the worksheet
|
||||
- `row` (integer, required): Row number (0-based)
|
||||
- `column` (integer, required): Column number (0-based)
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_excel/get_used_range">
|
||||
@@ -169,6 +198,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `file_id` (string, required): The ID of the Excel file
|
||||
- `worksheet_name` (string, required): Name of the worksheet
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_excel/list_charts">
|
||||
@@ -177,6 +207,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `file_id` (string, required): The ID of the Excel file
|
||||
- `worksheet_name` (string, required): Name of the worksheet
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_excel/delete_worksheet">
|
||||
@@ -185,6 +216,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `file_id` (string, required): The ID of the Excel file
|
||||
- `worksheet_name` (string, required): Name of the worksheet to delete
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_excel/delete_table">
|
||||
@@ -194,6 +226,7 @@ uv add crewai-tools
|
||||
- `file_id` (string, required): The ID of the Excel file
|
||||
- `worksheet_name` (string, required): Name of the worksheet
|
||||
- `table_name` (string, required): Name of the table to delete
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_excel/list_names">
|
||||
@@ -201,6 +234,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `file_id` (string, required): The ID of the Excel file
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -405,36 +439,43 @@ crew.kickoff()
|
||||
### Common Issues
|
||||
|
||||
**Permission Errors**
|
||||
|
||||
- Ensure your Microsoft account has appropriate permissions for Excel and OneDrive/SharePoint
|
||||
- Verify that the OAuth connection includes required scopes (Files.Read.All, Files.ReadWrite.All)
|
||||
- Check that you have access to the specific workbooks you're trying to modify
|
||||
|
||||
**File ID and Path Issues**
|
||||
|
||||
- Verify that file IDs are correct and files exist in your OneDrive or SharePoint
|
||||
- Ensure file paths are properly formatted when creating new workbooks
|
||||
- Check that workbook files have the correct .xlsx extension
|
||||
|
||||
**Worksheet and Range Issues**
|
||||
|
||||
- Verify that worksheet names exist in the specified workbook
|
||||
- Ensure range addresses are properly formatted (e.g., 'A1:C10')
|
||||
- Check that ranges don't exceed worksheet boundaries
|
||||
|
||||
**Data Format Issues**
|
||||
|
||||
- Ensure data values are properly formatted for Excel (strings, numbers, integers)
|
||||
- Verify that 2D arrays for ranges have consistent row and column counts
|
||||
- Check that table data includes proper headers when has_headers is true
|
||||
|
||||
**Chart Creation Issues**
|
||||
|
||||
- Verify that chart types are supported (ColumnClustered, Line, Pie, etc.)
|
||||
- Ensure source data ranges contain appropriate data for the chart type
|
||||
- Check that the source data range exists and contains data
|
||||
|
||||
**Table Management Issues**
|
||||
|
||||
- Ensure table names are unique within worksheets
|
||||
- Verify that table ranges don't overlap with existing tables
|
||||
- Check that new row data matches the table's column structure
|
||||
|
||||
**Cell and Range Operations**
|
||||
|
||||
- Verify that row and column indices are 0-based for cell operations
|
||||
- Ensure ranges contain data when using get_used_range
|
||||
- Check that named ranges exist before referencing them
|
||||
@@ -442,5 +483,6 @@ crew.kickoff()
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Microsoft Excel integration setup or troubleshooting.
</Card>
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the Microsoft OneDrive integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -43,6 +61,7 @@ uv add crewai-tools
|
||||
- `top` (integer, optional): Number of items to retrieve (max 1000). Default is `50`.
|
||||
- `orderby` (string, optional): Order by field (e.g., "name asc", "lastModifiedDateTime desc"). Default is "name asc".
|
||||
- `filter` (string, optional): OData filter expression.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_onedrive/get_file_info">
|
||||
@@ -50,6 +69,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `item_id` (string, required): The ID of the file or folder.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_onedrive/download_file">
|
||||
@@ -57,6 +77,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `item_id` (string, required): The ID of the file to download.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_onedrive/upload_file">
|
||||
@@ -65,6 +86,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `file_name` (string, required): Name of the file to upload.
|
||||
- `content` (string, required): Base64 encoded file content.
|
||||
|
||||
</Accordion>
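Because `content` must be Base64 encoded, a minimal sketch of preparing a local file for upload looks like this; the file path is a placeholder.

```python
import base64
from pathlib import Path

# Read the raw bytes and Base64-encode them; the action expects a string,
# so decode the Base64 bytes back to ASCII text.
path = Path("report.pdf")  # placeholder path
content_b64 = base64.b64encode(path.read_bytes()).decode("ascii")

upload_args = {
    "file_name": path.name,
    "content": content_b64,
}
```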
|
||||
|
||||
<Accordion title="microsoft_onedrive/create_folder">
|
||||
@@ -72,6 +94,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `folder_name` (string, required): Name of the folder to create.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_onedrive/delete_item">
|
||||
@@ -79,6 +102,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `item_id` (string, required): The ID of the file or folder to delete.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_onedrive/copy_item">
|
||||
@@ -88,6 +112,7 @@ uv add crewai-tools
|
||||
- `item_id` (string, required): The ID of the file or folder to copy.
|
||||
- `parent_id` (string, optional): The ID of the destination folder (optional, defaults to root).
|
||||
- `new_name` (string, optional): New name for the copied item (optional).
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_onedrive/move_item">
|
||||
@@ -97,6 +122,7 @@ uv add crewai-tools
|
||||
- `item_id` (string, required): The ID of the file or folder to move.
|
||||
- `parent_id` (string, required): The ID of the destination folder.
|
||||
- `new_name` (string, optional): New name for the item (optional).
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_onedrive/search_files">
|
||||
@@ -105,6 +131,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `query` (string, required): Search query string.
|
||||
- `top` (integer, optional): Number of results to return (max 1000). Default is `50`.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_onedrive/share_item">
|
||||
@@ -114,6 +141,7 @@ uv add crewai-tools
|
||||
- `item_id` (string, required): The ID of the file or folder to share.
|
||||
- `type` (string, optional): Type of sharing link. Enum: `view`, `edit`, `embed`. Default is `view`.
|
||||
- `scope` (string, optional): Scope of the sharing link. Enum: `anonymous`, `organization`. Default is `anonymous`.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_onedrive/get_thumbnails">
|
||||
@@ -121,6 +149,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `item_id` (string, required): The ID of the file.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -216,29 +245,35 @@ crew.kickoff()
|
||||
### Common Issues
|
||||
|
||||
**Authentication Errors**
|
||||
|
||||
- Ensure your Microsoft account has the necessary permissions for file access (e.g., `Files.Read`, `Files.ReadWrite`).
|
||||
- Verify that the OAuth connection includes all required scopes.
|
||||
|
||||
**File Upload Issues**
|
||||
|
||||
- Ensure `file_name` and `content` are provided for file uploads.
|
||||
- Content must be Base64 encoded for binary files.
|
||||
- Check that you have write permissions to OneDrive.
|
||||
|
||||
**File/Folder ID Issues**
|
||||
|
||||
- Double-check item IDs for correctness when accessing specific files or folders.
|
||||
- Item IDs are returned by other operations like `list_files` or `search_files`.
|
||||
- Ensure the referenced items exist and are accessible.
|
||||
|
||||
**Search and Filter Operations**
|
||||
|
||||
- Use appropriate search terms for `search_files` operations.
|
||||
- For `filter` parameters, use proper OData syntax.
|
||||
|
||||
**File Operations (Copy/Move)**
|
||||
|
||||
- For `move_item`, ensure both `item_id` and `parent_id` are provided.
|
||||
- For `copy_item`, only `item_id` is required; `parent_id` defaults to root if not specified.
|
||||
- Verify that destination folders exist and are accessible.
|
||||
|
||||
**Sharing Link Creation**
|
||||
|
||||
- Ensure the item exists before creating sharing links.
|
||||
- Choose appropriate `type` and `scope` based on your sharing requirements.
|
||||
- `anonymous` scope allows access without sign-in; `organization` requires organizational account.
|
||||
@@ -246,5 +281,6 @@ crew.kickoff()
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Microsoft OneDrive integration setup or troubleshooting.
</Card>
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the Microsoft Outlook integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -46,6 +64,7 @@ uv add crewai-tools
|
||||
- `orderby` (string, optional): Order by field (e.g., "receivedDateTime desc"). Default is "receivedDateTime desc".
|
||||
- `select` (string, optional): Select specific properties to return.
|
||||
- `expand` (string, optional): Expand related resources inline.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_outlook/send_email">
|
||||
@@ -61,6 +80,7 @@ uv add crewai-tools
|
||||
- `importance` (string, optional): Message importance level. Enum: `low`, `normal`, `high`. Default is `normal`.
|
||||
- `reply_to` (array, optional): Array of reply-to email addresses.
|
||||
- `save_to_sent_items` (boolean, optional): Whether to save the message to Sent Items folder. Default is `true`.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_outlook/get_calendar_events">
|
||||
@@ -71,6 +91,7 @@ uv add crewai-tools
|
||||
- `skip` (integer, optional): Number of events to skip. Default is `0`.
|
||||
- `filter` (string, optional): OData filter expression (e.g., "start/dateTime ge '2024-01-01T00:00:00Z'").
|
||||
- `orderby` (string, optional): Order by field (e.g., "start/dateTime asc"). Default is "start/dateTime asc".
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_outlook/create_calendar_event">
|
||||
@@ -84,6 +105,7 @@ uv add crewai-tools
|
||||
- `timezone` (string, optional): Time zone (e.g., 'Pacific Standard Time'). Default is `UTC`.
|
||||
- `location` (string, optional): Event location.
|
||||
- `attendees` (array, optional): Array of attendee email addresses.
|
||||
|
||||
</Accordion>
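To sidestep the timezone pitfalls called out in the troubleshooting notes, here is a minimal sketch of producing the ISO 8601 `start_datetime` and `end_datetime` values with the standard library; the meeting details are placeholders.

```python
from datetime import datetime, timedelta

# Build the event times as naive datetimes and format them as ISO 8601
# strings (e.g. '2024-01-20T10:00:00'); pair them with an explicit
# `timezone` value so Outlook interprets them consistently.
start = datetime(2024, 1, 20, 10, 0, 0)
end = start + timedelta(hours=1)

event_args = {
    "subject": "Quarterly planning",        # placeholder
    "start_datetime": start.isoformat(),    # '2024-01-20T10:00:00'
    "end_datetime": end.isoformat(),
    "timezone": "Pacific Standard Time",
    "attendees": ["teammate@example.com"],  # placeholder
}
```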
|
||||
|
||||
<Accordion title="microsoft_outlook/get_contacts">
|
||||
@@ -94,6 +116,7 @@ uv add crewai-tools
|
||||
- `skip` (integer, optional): Number of contacts to skip. Default is `0`.
|
||||
- `filter` (string, optional): OData filter expression.
|
||||
- `orderby` (string, optional): Order by field (e.g., "displayName asc"). Default is "displayName asc".
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_outlook/create_contact">
|
||||
@@ -108,6 +131,7 @@ uv add crewai-tools
|
||||
- `homePhones` (array, optional): Array of home phone numbers.
|
||||
- `jobTitle` (string, optional): Contact's job title.
|
||||
- `companyName` (string, optional): Contact's company name.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -203,30 +227,36 @@ crew.kickoff()
|
||||
### Common Issues
|
||||
|
||||
**Authentication Errors**
|
||||
|
||||
- Ensure your Microsoft account has the necessary permissions for mail, calendar, and contact access.
|
||||
- Required scopes include: `Mail.Read`, `Mail.Send`, `Calendars.Read`, `Calendars.ReadWrite`, `Contacts.Read`, `Contacts.ReadWrite`.
|
||||
- Verify that the OAuth connection includes all required scopes.
|
||||
|
||||
**Email Sending Issues**
|
||||
|
||||
- Ensure `to_recipients`, `subject`, and `body` are provided for `send_email`.
|
||||
- Check that email addresses are properly formatted.
|
||||
- Verify that the account has `Mail.Send` permissions.
|
||||
|
||||
**Calendar Event Creation**
|
||||
|
||||
- Ensure `subject`, `start_datetime`, and `end_datetime` are provided.
|
||||
- Use proper ISO 8601 format for datetime fields (e.g., '2024-01-20T10:00:00').
|
||||
- Verify timezone settings if events appear at incorrect times.
|
||||
|
||||
**Contact Management**
|
||||
|
||||
- For `create_contact`, ensure `displayName` is provided as it's required.
|
||||
- When providing `emailAddresses`, use the proper object format with `address` and `name` properties.
|
||||
|
||||
**Search and Filter Issues**
|
||||
|
||||
- Use proper OData syntax for `filter` parameters.
|
||||
- For date filters, use ISO 8601 format (e.g., "receivedDateTime ge '2024-01-01T00:00:00Z'").
|
||||
|
||||
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Microsoft Outlook integration setup or troubleshooting.
</Card>
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the Microsoft SharePoint integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
@@ -47,6 +65,7 @@ uv add crewai-tools
- `top` (integer, optional): Number of items to return. Minimum: 1, Maximum: 999
- `skip` (integer, optional): Number of items to skip. Minimum: 0
- `orderby` (string, optional): Order results by specified properties (e.g., 'displayName desc')

</Accordion>
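As an illustration of the OData-style parameters, a hedged sketch of an argument set for listing sites; the `startswith` filter and the property names it references are assumptions and must be valid for the site resource in your tenant.

```python
# Hypothetical parameter set for microsoft_sharepoint/get_sites: a property
# selection, an OData filter expression, paging, and an ordering clause.
get_sites_args = {
    "select": "displayName,id,webUrl",
    "filter": "startswith(displayName,'Engineering')",  # assumed filterable property
    "top": 50,
    "skip": 0,
    "orderby": "displayName asc",
}
```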
|
||||
|
||||
<Accordion title="microsoft_sharepoint/get_site">
|
||||
@@ -56,6 +75,7 @@ uv add crewai-tools
|
||||
- `site_id` (string, required): The ID of the SharePoint site
|
||||
- `select` (string, optional): Select specific properties to return (e.g., 'displayName,id,webUrl,drives')
|
||||
- `expand` (string, optional): Expand related resources inline (e.g., 'drives,lists')
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/get_site_lists">
|
||||
@@ -63,6 +83,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `site_id` (string, required): The ID of the SharePoint site
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/get_list">
|
||||
@@ -71,6 +92,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `site_id` (string, required): The ID of the SharePoint site
|
||||
- `list_id` (string, required): The ID of the list
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/get_list_items">
|
||||
@@ -80,6 +102,7 @@ uv add crewai-tools
|
||||
- `site_id` (string, required): The ID of the SharePoint site
|
||||
- `list_id` (string, required): The ID of the list
|
||||
- `expand` (string, optional): Expand related data (e.g., 'fields')
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/create_list_item">
|
||||
@@ -96,6 +119,7 @@ uv add crewai-tools
|
||||
"Status": "Active"
|
||||
}
|
||||
```
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/update_list_item">
|
||||
@@ -112,6 +136,7 @@ uv add crewai-tools
|
||||
"Status": "Completed"
|
||||
}
|
||||
```
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/delete_list_item">
|
||||
@@ -121,6 +146,7 @@ uv add crewai-tools
|
||||
- `site_id` (string, required): The ID of the SharePoint site
|
||||
- `list_id` (string, required): The ID of the list
|
||||
- `item_id` (string, required): The ID of the item to delete
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/upload_file_to_library">
|
||||
@@ -130,6 +156,7 @@ uv add crewai-tools
|
||||
- `site_id` (string, required): The ID of the SharePoint site
|
||||
- `file_path` (string, required): The path where to upload the file (e.g., 'folder/filename.txt')
|
||||
- `content` (string, required): The file content to upload
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/get_drive_items">
|
||||
@@ -137,6 +164,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `site_id` (string, required): The ID of the SharePoint site
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/delete_drive_item">
|
||||
@@ -145,6 +173,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `site_id` (string, required): The ID of the SharePoint site
|
||||
- `item_id` (string, required): The ID of the file or folder to delete
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -347,36 +376,43 @@ crew.kickoff()
|
||||
### Common Issues
|
||||
|
||||
**Permission Errors**
|
||||
|
||||
- Ensure your Microsoft account has appropriate permissions for SharePoint sites
|
||||
- Verify that the OAuth connection includes required scopes (Sites.Read.All, Sites.ReadWrite.All)
|
||||
- Check that you have access to the specific sites and lists you're trying to access
|
||||
|
||||
**Site and List ID Issues**
|
||||
|
||||
- Verify that site IDs and list IDs are correct and properly formatted
|
||||
- Ensure that sites and lists exist and are accessible to your account
|
||||
- Use the get_sites and get_site_lists actions to discover valid IDs
|
||||
|
||||
**Field and Schema Issues**
|
||||
|
||||
- Ensure field names match exactly with the SharePoint list schema
|
||||
- Verify that required fields are included when creating or updating list items
|
||||
- Check that field types and values are compatible with the list column definitions
|
||||
|
||||
**File Upload Issues**
|
||||
|
||||
- Ensure file paths are properly formatted and don't contain invalid characters
|
||||
- Verify that you have write permissions to the target document library
|
||||
- Check that file content is properly encoded for upload
|
||||
|
||||
**OData Query Issues**
|
||||
|
||||
- Use proper OData syntax for filter, select, expand, and orderby parameters
|
||||
- Verify that property names used in queries exist in the target resources
|
||||
- Test simple queries before building complex filter expressions
|
||||
|
||||
**Pagination and Performance**
|
||||
|
||||
- Use top and skip parameters appropriately for large result sets
|
||||
- Implement proper pagination for lists with many items
|
||||
- Consider using select parameters to return only needed properties
|
||||
|
||||
**Document Library Operations**
|
||||
|
||||
- Ensure you have proper permissions for document library operations
|
||||
- Verify that drive item IDs are correct when deleting files or folders
|
||||
- Check that file paths don't conflict with existing content
|
||||
@@ -384,5 +420,6 @@ crew.kickoff()
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Microsoft SharePoint integration setup or troubleshooting.
</Card>
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the Microsoft Teams integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -41,6 +59,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- No parameters required.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_teams/get_channels">
|
||||
@@ -48,6 +67,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `team_id` (string, required): The ID of the team.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_teams/send_message">
|
||||
@@ -58,6 +78,7 @@ uv add crewai-tools
|
||||
- `channel_id` (string, required): The ID of the channel.
|
||||
- `message` (string, required): The message content.
|
||||
- `content_type` (string, optional): Content type (html or text). Enum: `html`, `text`. Default is `text`.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_teams/get_messages">
|
||||
@@ -67,6 +88,7 @@ uv add crewai-tools
|
||||
- `team_id` (string, required): The ID of the team.
|
||||
- `channel_id` (string, required): The ID of the channel.
|
||||
- `top` (integer, optional): Number of messages to retrieve (max 50). Default is `20`.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_teams/create_meeting">
|
||||
@@ -76,6 +98,7 @@ uv add crewai-tools
|
||||
- `subject` (string, required): Meeting subject/title.
|
||||
- `startDateTime` (string, required): Meeting start time (ISO 8601 format with timezone).
|
||||
- `endDateTime` (string, required): Meeting end time (ISO 8601 format with timezone).
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_teams/search_online_meetings_by_join_url">
|
||||
@@ -83,6 +106,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `join_web_url` (string, required): The join web URL of the meeting to search for.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -178,35 +202,42 @@ crew.kickoff()
|
||||
### Common Issues
|
||||
|
||||
**Authentication Errors**
|
||||
|
||||
- Ensure your Microsoft account has the necessary permissions for Teams access.
|
||||
- Required scopes include: `Team.ReadBasic.All`, `Channel.ReadBasic.All`, `ChannelMessage.Send`, `ChannelMessage.Read.All`, `OnlineMeetings.ReadWrite`, `OnlineMeetings.Read`.
|
||||
- Verify that the OAuth connection includes all required scopes.
|
||||
|
||||
**Team and Channel Access**
|
||||
|
||||
- Ensure you are a member of the teams you're trying to access.
|
||||
- Double-check team IDs and channel IDs for correctness.
|
||||
- Team and channel IDs can be obtained using the `get_teams` and `get_channels` actions.
|
||||
|
||||
**Message Sending Issues**
|
||||
|
||||
- Ensure `team_id`, `channel_id`, and `message` are provided for `send_message`.
|
||||
- Verify that you have permissions to send messages to the specified channel.
|
||||
- Choose appropriate `content_type` (text or html) based on your message format.
|
||||
|
||||
**Meeting Creation**
|
||||
|
||||
- Ensure `subject`, `startDateTime`, and `endDateTime` are provided.
|
||||
- Use proper ISO 8601 format with timezone for datetime fields (e.g., '2024-01-20T10:00:00-08:00').
|
||||
- Verify that the meeting times are in the future.
|
||||
|
||||
**Message Retrieval Limitations**
|
||||
|
||||
- The `get_messages` action can retrieve a maximum of 50 messages per request.
|
||||
- Messages are returned in reverse chronological order (newest first).
|
||||
|
||||
**Meeting Search**
|
||||
|
||||
- For `search_online_meetings_by_join_url`, ensure the join URL is exact and properly formatted.
|
||||
- The URL should be the complete Teams meeting join URL.
|
||||
|
||||
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Microsoft Teams integration setup or troubleshooting.
</Card>
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the Microsoft Word integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -45,6 +63,7 @@ uv add crewai-tools
|
||||
- `expand` (string, optional): Expand related resources inline.
|
||||
- `top` (integer, optional): Number of items to return (min 1, max 999).
|
||||
- `orderby` (string, optional): Order results by specified properties.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_word/create_text_document">
|
||||
@@ -53,6 +72,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `file_name` (string, required): Name of the text document (should end with .txt).
|
||||
- `content` (string, optional): Text content for the document. Default is "This is a new text document created via API."
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_word/get_document_content">
|
||||
@@ -60,6 +80,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `file_id` (string, required): The ID of the document.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_word/get_document_properties">
|
||||
@@ -67,6 +88,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `file_id` (string, required): The ID of the document.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_word/delete_document">
|
||||
@@ -74,6 +96,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `file_id` (string, required): The ID of the document to delete.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -169,24 +192,29 @@ crew.kickoff()
|
||||
### Common Issues
|
||||
|
||||
**Authentication Errors**
|
||||
|
||||
- Ensure your Microsoft account has the necessary permissions for file access (e.g., `Files.Read.All`, `Files.ReadWrite.All`).
|
||||
- Verify that the OAuth connection includes all required scopes.
|
||||
|
||||
**File Creation Issues**
|
||||
|
||||
- When creating text documents, ensure the `file_name` ends with `.txt` extension.
|
||||
- Verify that you have write permissions to the target location (OneDrive/SharePoint).
|
||||
|
||||
**Document Access Issues**
|
||||
|
||||
- Double-check document IDs for correctness when accessing specific documents.
|
||||
- Ensure the referenced documents exist and are accessible.
|
||||
- Note that this integration works best with text files (.txt) for content operations.
|
||||
|
||||
**Content Retrieval Limitations**
|
||||
|
||||
- The `get_document_content` action works best with text files (.txt).
|
||||
- For complex Word documents (.docx), consider using the document properties action to get metadata.
|
||||
|
||||
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Microsoft Word integration setup or troubleshooting.
</Card>
|
||||
|
||||
@@ -33,6 +33,24 @@ Before using the Notion integration, ensure you have:
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Actions
|
||||
|
||||
<AccordionGroup>
|
||||
@@ -42,6 +60,7 @@ uv add crewai-tools
|
||||
**Parameters:**
|
||||
- `page_size` (integer, optional): Number of items returned in the response. Minimum: 1, Maximum: 100, Default: 100
|
||||
- `start_cursor` (string, optional): Cursor for pagination. Return results after this cursor.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="notion/get_user">
|
||||
@@ -49,6 +68,7 @@ uv add crewai-tools
|
||||
|
||||
**Parameters:**
|
||||
- `user_id` (string, required): The ID of the user to retrieve.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="notion/create_comment">
|
||||
@@ -80,6 +100,7 @@ uv add crewai-tools
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
</Accordion>
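For reference, a minimal sketch of the rich text payload a comment typically carries, written as a Python literal; the parent page ID is a placeholder, and the exact argument names should be confirmed against the parameters above.

```python
# Hypothetical arguments for notion/create_comment: a parent page plus a
# rich_text array following Notion's text-object shape.
create_comment_args = {
    "parent": {"page_id": "00000000-0000-0000-0000-000000000000"},  # placeholder
    "rich_text": [
        {
            "type": "text",
            "text": {"content": "Thanks for the update, shipping this next sprint."},
        }
    ],
}
```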
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -238,26 +259,31 @@ crew.kickoff()
|
||||
### Common Issues
|
||||
|
||||
**Permission Errors**
|
||||
|
||||
- Ensure your Notion account has appropriate permissions to read user information
|
||||
- Verify that the OAuth connection includes required scopes for user access and comment creation
|
||||
- Check that you have permissions to comment on the target pages or discussions
|
||||
|
||||
**User Access Issues**
|
||||
|
||||
- Ensure you have workspace admin permissions to list all users
|
||||
- Verify that user IDs are correct and users exist in the workspace
|
||||
- Check that the workspace allows API access to user information
|
||||
|
||||
**Comment Creation Issues**
|
||||
|
||||
- Verify that page IDs or discussion IDs are correct and accessible
|
||||
- Ensure that rich text content follows Notion's API format specifications
|
||||
- Check that you have comment permissions on the target pages or discussions
|
||||
|
||||
**API Rate Limits**
|
||||
|
||||
- Be mindful of Notion's API rate limits when making multiple requests
|
||||
- Implement appropriate delays between requests if needed
|
||||
- Consider pagination for large user lists
|
||||
|
||||
**Parent Object Specification**
|
||||
|
||||
- Ensure parent object type is correctly specified (page_id or discussion_id)
|
||||
- Verify that the parent page or discussion exists and is accessible
|
||||
- Check that the parent object ID format is correct
|
||||
@@ -265,5 +291,6 @@ crew.kickoff()
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Notion integration setup or troubleshooting.
</Card>
|
||||
|
||||
@@ -17,6 +17,40 @@ Before using the Salesforce integration, ensure you have:
|
||||
- A Salesforce account with appropriate permissions
|
||||
- Connected your Salesforce account through the [Integrations page](https://app.crewai.com/integrations)
|
||||
|
||||
## Setting Up Salesforce Integration
|
||||
|
||||
### 1. Connect Your Salesforce Account
|
||||
|
||||
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
|
||||
2. Find **Salesforce** in the Authentication Integrations section
|
||||
3. Click **Connect** and complete the OAuth flow
|
||||
4. Grant the necessary permissions for CRM and sales management
|
||||
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
|
||||
|
||||
### 2. Install Required Package
|
||||
|
||||
```bash
|
||||
uv add crewai-tools
|
||||
```
|
||||
|
||||
### 3. Environment Variable Setup
|
||||
|
||||
<Note>
|
||||
To use integrations with `Agent(apps=[])`, you must set the
|
||||
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
|
||||
Token.
|
||||
</Note>
|
||||
|
||||
```bash
|
||||
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
|
||||
```
|
||||
|
||||
Or add it to your `.env` file:
|
||||
|
||||
```
|
||||
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
```
|
||||
|
||||
## Available Tools
|
||||
|
||||
### **Record Management**
|
||||
@@ -33,6 +67,7 @@ Before using the Salesforce integration, ensure you have:
- `Title` (string, optional): Title of the contact, such as CEO or Vice President
- `Description` (string, optional): A description of the Contact
- `additionalFields` (object, optional): Additional fields in JSON format for custom Contact fields

</Accordion>
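To show how `additionalFields` is meant to carry org-specific data, here is a hedged sketch; fields other than the ones documented above, and the custom field API name in particular, are assumptions that must exist on the Contact object in your org.

```python
# Hypothetical arguments for salesforce/create_record_contact. Standard
# fields sit at the top level; custom fields travel inside additionalFields,
# keyed by their API names.
create_contact_args = {
    "LastName": "Rivera",                 # assumed standard field
    "Email": "a.rivera@example.com",      # assumed standard field
    "Title": "VP of Operations",
    "Description": "Met at the annual operations summit.",
    "additionalFields": {
        "Preferred_Channel__c": "Email",  # hypothetical custom field
    },
}
```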
|
||||
|
||||
<Accordion title="salesforce/create_record_lead">
|
||||
@@ -49,6 +84,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `Status` (string, optional): Lead Status - Use Connect Portal Workflow Settings to select Lead Status
|
||||
- `Description` (string, optional): A description of the Lead
|
||||
- `additionalFields` (object, optional): Additional fields in JSON format for custom Lead fields
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/create_record_opportunity">
|
||||
@@ -64,6 +100,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `OwnerId` (string, optional): The Salesforce user assigned to work on this Opportunity
|
||||
- `NextStep` (string, optional): Description of next task in closing Opportunity
|
||||
- `additionalFields` (object, optional): Additional fields in JSON format for custom Opportunity fields
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/create_record_task">
|
||||
@@ -82,6 +119,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `isReminderSet` (boolean, optional): Whether reminder is set
|
||||
- `reminderDateTime` (string, optional): Reminder Date/Time in ISO format
|
||||
- `additionalFields` (object, optional): Additional fields in JSON format for custom Task fields
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/create_record_account">
|
||||
@@ -94,12 +132,14 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `Phone` (string, optional): Phone number
|
||||
- `Description` (string, optional): Account description
|
||||
- `additionalFields` (object, optional): Additional fields in JSON format for custom Account fields
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/create_record_any">
|
||||
**Description:** Create a record of any object type in Salesforce.
|
||||
|
||||
**Note:** This is a flexible tool for creating records of custom or unknown object types.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -118,6 +158,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `Title` (string, optional): Title of the contact
|
||||
- `Description` (string, optional): A description of the Contact
|
||||
- `additionalFields` (object, optional): Additional fields in JSON format for custom Contact fields
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/update_record_lead">
|
||||
@@ -135,6 +176,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `Status` (string, optional): Lead Status
|
||||
- `Description` (string, optional): A description of the Lead
|
||||
- `additionalFields` (object, optional): Additional fields in JSON format for custom Lead fields
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/update_record_opportunity">
|
||||
@@ -151,6 +193,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `OwnerId` (string, optional): The Salesforce user assigned to work on this Opportunity
|
||||
- `NextStep` (string, optional): Description of next task in closing Opportunity
|
||||
- `additionalFields` (object, optional): Additional fields in JSON format for custom Opportunity fields
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/update_record_task">
|
||||
@@ -169,6 +212,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `isReminderSet` (boolean, optional): Whether reminder is set
|
||||
- `reminderDateTime` (string, optional): Reminder Date/Time in ISO format
|
||||
- `additionalFields` (object, optional): Additional fields in JSON format for custom Task fields
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/update_record_account">
|
||||
@@ -182,12 +226,14 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `Phone` (string, optional): Phone number
|
||||
- `Description` (string, optional): Account description
|
||||
- `additionalFields` (object, optional): Additional fields in JSON format for custom Account fields
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/update_record_any">
|
||||
**Description:** Update a record of any object type in Salesforce.
|
||||
|
||||
**Note:** This is a flexible tool for updating records of custom or unknown object types.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -199,6 +245,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
|
||||
**Parameters:**
|
||||
- `recordId` (string, required): Record ID of the Contact
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/get_record_by_id_lead">
|
||||
@@ -206,6 +253,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
|
||||
**Parameters:**
|
||||
- `recordId` (string, required): Record ID of the Lead
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/get_record_by_id_opportunity">
|
||||
@@ -213,6 +261,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
|
||||
**Parameters:**
|
||||
- `recordId` (string, required): Record ID of the Opportunity
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/get_record_by_id_task">
|
||||
@@ -220,6 +269,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
|
||||
**Parameters:**
|
||||
- `recordId` (string, required): Record ID of the Task
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/get_record_by_id_account">
|
||||
@@ -227,6 +277,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
|
||||
**Parameters:**
|
||||
- `recordId` (string, required): Record ID of the Account
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/get_record_by_id_any">
|
||||
@@ -235,6 +286,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
**Parameters:**
|
||||
- `recordType` (string, required): Record Type (e.g., "CustomObject__c")
|
||||
- `recordId` (string, required): Record ID
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -250,6 +302,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `sortDirection` (string, optional): Sort direction - Options: ASC, DESC
|
||||
- `includeAllFields` (boolean, optional): Include all fields in results
|
||||
- `paginationParameters` (object, optional): Pagination settings with pageCursor
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/search_records_lead">
|
||||
@@ -261,6 +314,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `sortDirection` (string, optional): Sort direction - Options: ASC, DESC
|
||||
- `includeAllFields` (boolean, optional): Include all fields in results
|
||||
- `paginationParameters` (object, optional): Pagination settings with pageCursor
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/search_records_opportunity">
|
||||
@@ -272,6 +326,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `sortDirection` (string, optional): Sort direction - Options: ASC, DESC
|
||||
- `includeAllFields` (boolean, optional): Include all fields in results
|
||||
- `paginationParameters` (object, optional): Pagination settings with pageCursor
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/search_records_task">
|
||||
@@ -283,6 +338,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `sortDirection` (string, optional): Sort direction - Options: ASC, DESC
|
||||
- `includeAllFields` (boolean, optional): Include all fields in results
|
||||
- `paginationParameters` (object, optional): Pagination settings with pageCursor
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/search_records_account">
|
||||
@@ -294,6 +350,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `sortDirection` (string, optional): Sort direction - Options: ASC, DESC
|
||||
- `includeAllFields` (boolean, optional): Include all fields in results
|
||||
- `paginationParameters` (object, optional): Pagination settings with pageCursor
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/search_records_any">
|
||||
@@ -304,6 +361,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `filterFormula` (string, optional): Filter search criteria
|
||||
- `includeAllFields` (boolean, optional): Include all fields in results
|
||||
- `paginationParameters` (object, optional): Pagination settings with pageCursor
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -316,6 +374,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
**Parameters:**
|
||||
- `listViewId` (string, required): List View ID
|
||||
- `paginationParameters` (object, optional): Pagination settings with pageCursor
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/get_record_by_view_id_lead">
|
||||
@@ -324,6 +383,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
**Parameters:**
|
||||
- `listViewId` (string, required): List View ID
|
||||
- `paginationParameters` (object, optional): Pagination settings with pageCursor
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/get_record_by_view_id_opportunity">
|
||||
@@ -332,6 +392,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
**Parameters:**
|
||||
- `listViewId` (string, required): List View ID
|
||||
- `paginationParameters` (object, optional): Pagination settings with pageCursor
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/get_record_by_view_id_task">
|
||||
@@ -340,6 +401,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
**Parameters:**
|
||||
- `listViewId` (string, required): List View ID
|
||||
- `paginationParameters` (object, optional): Pagination settings with pageCursor
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/get_record_by_view_id_account">
|
||||
@@ -348,6 +410,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
**Parameters:**
|
||||
- `listViewId` (string, required): List View ID
|
||||
- `paginationParameters` (object, optional): Pagination settings with pageCursor
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/get_record_by_view_id_any">
|
||||
@@ -357,6 +420,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `recordType` (string, required): Record Type
|
||||
- `listViewId` (string, required): List View ID
|
||||
- `paginationParameters` (object, optional): Pagination settings with pageCursor
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -377,6 +441,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `description` (string, optional): Field description
|
||||
- `helperText` (string, optional): Helper text shown on hover
|
||||
- `defaultFieldValue` (string, optional): Default field value
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/create_custom_field_lead">
|
||||
@@ -393,6 +458,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `description` (string, optional): Field description
|
||||
- `helperText` (string, optional): Helper text shown on hover
|
||||
- `defaultFieldValue` (string, optional): Default field value
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/create_custom_field_opportunity">
|
||||
@@ -409,6 +475,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `description` (string, optional): Field description
|
||||
- `helperText` (string, optional): Helper text shown on hover
|
||||
- `defaultFieldValue` (string, optional): Default field value
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/create_custom_field_task">
|
||||
@@ -425,6 +492,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `description` (string, optional): Field description
|
||||
- `helperText` (string, optional): Helper text shown on hover
|
||||
- `defaultFieldValue` (string, optional): Default field value
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/create_custom_field_account">
|
||||
@@ -441,12 +509,14 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `description` (string, optional): Field description
|
||||
- `helperText` (string, optional): Helper text shown on hover
|
||||
- `defaultFieldValue` (string, optional): Default field value
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/create_custom_field_any">
|
||||
**Description:** Deploy custom fields for any object type.
|
||||
|
||||
**Note:** This is a flexible tool for creating custom fields on custom or unknown object types.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -458,6 +528,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
|
||||
**Parameters:**
|
||||
- `query` (string, required): SOQL Query (e.g., "SELECT Id, Name FROM Account WHERE Name = 'Example'")
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/create_custom_object">
|
||||
@@ -468,6 +539,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `pluralLabel` (string, required): Plural Label (e.g., "Accounts")
|
||||
- `description` (string, optional): A description of the Custom Object
|
||||
- `recordName` (string, required): Record Name that appears in layouts and searches (e.g., "Account Name")
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="salesforce/describe_action_schema">
|
||||
@@ -478,6 +550,7 @@ Before using the Salesforce integration, ensure you have:
|
||||
- `operation` (string, required): Operation Type (e.g., "CREATE_RECORD" or "UPDATE_RECORD")
|
||||
|
||||
**Note:** Use this function first when working with custom objects to understand their schema before performing operations.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -607,5 +680,6 @@ This comprehensive documentation covers all the Salesforce tools organized by fu
|
||||
### Getting Help
|
||||
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with Salesforce integration setup or troubleshooting.
|
||||
Contact our support team for assistance with Salesforce integration setup or
|
||||
troubleshooting.
|
||||
</Card>
|
||||
|
||||
@@ -17,6 +17,40 @@ Before using the Shopify integration, ensure you have:
- A Shopify store with appropriate admin permissions
- Connected your Shopify store through the [Integrations page](https://app.crewai.com/integrations)

## Setting Up Shopify Integration

### 1. Connect Your Shopify Store

1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Shopify** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for store and product management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)

### 2. Install Required Package

```bash
uv add crewai-tools
```

### 3. Environment Variable Setup

<Note>
  To use integrations with `Agent(apps=[])`, you must set the
  `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
  Token.
</Note>

```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```

Or add it to your `.env` file:

```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
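
With the token exported, an agent can use the Shopify tools simply by enabling the app. The snippet below is a minimal sketch, assuming `"shopify"` is the app identifier used by the platform:

```python
from crewai import Agent, Task, Crew

# Hedged sketch: the "shopify" app identifier is an assumption; the token is
# read from CREWAI_PLATFORM_INTEGRATION_TOKEN set above.
store_assistant = Agent(
    role="Store Assistant",
    goal="Answer questions about the product catalog",
    backstory="Keeps track of products and inventory in the Shopify store",
    apps=["shopify"],
)

catalog_task = Task(
    description="List the five most recently updated products in the store",
    expected_output="A short summary of each product and its status",
    agent=store_assistant,
)

crew = Crew(agents=[store_assistant], tasks=[catalog_task])
result = crew.kickoff()
print(result.raw)
```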

## Available Tools

### **Customer Management**
@@ -32,6 +66,7 @@ Before using the Shopify integration, ensure you have:
|
||||
- `updatedAtMin` (string, optional): Only return customers updated after this date (ISO or Unix timestamp)
|
||||
- `updatedAtMax` (string, optional): Only return customers updated before this date (ISO or Unix timestamp)
|
||||
- `limit` (string, optional): Maximum number of customers to return (defaults to 250)
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="shopify/search_customers">
|
||||
@@ -40,6 +75,7 @@ Before using the Shopify integration, ensure you have:
|
||||
**Parameters:**
|
||||
- `filterFormula` (object, optional): Advanced filter in disjunctive normal form with field-specific operators
|
||||
- `limit` (string, optional): Maximum number of customers to return (defaults to 250)
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="shopify/create_customer">
|
||||
@@ -61,6 +97,7 @@ Before using the Shopify integration, ensure you have:
|
||||
- `note` (string, optional): Customer note
|
||||
- `sendEmailInvite` (boolean, optional): Whether to send email invitation
|
||||
- `metafields` (object, optional): Additional metafields in JSON format
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="shopify/update_customer">
|
||||
@@ -83,6 +120,7 @@ Before using the Shopify integration, ensure you have:
|
||||
- `note` (string, optional): Customer note
|
||||
- `sendEmailInvite` (boolean, optional): Whether to send email invitation
|
||||
- `metafields` (object, optional): Additional metafields in JSON format
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -99,6 +137,7 @@ Before using the Shopify integration, ensure you have:
|
||||
- `updatedAtMin` (string, optional): Only return orders updated after this date (ISO or Unix timestamp)
|
||||
- `updatedAtMax` (string, optional): Only return orders updated before this date (ISO or Unix timestamp)
|
||||
- `limit` (string, optional): Maximum number of orders to return (defaults to 250)
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="shopify/create_order">
|
||||
@@ -112,6 +151,7 @@ Before using the Shopify integration, ensure you have:
|
||||
- `financialStatus` (string, optional): Financial status - Options: pending, authorized, partially_paid, paid, partially_refunded, refunded, voided
|
||||
- `inventoryBehaviour` (string, optional): Inventory behavior - Options: bypass, decrement_ignoring_policy, decrement_obeying_policy
|
||||
- `note` (string, optional): Order note
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="shopify/update_order">
|
||||
@@ -126,6 +166,7 @@ Before using the Shopify integration, ensure you have:
|
||||
- `financialStatus` (string, optional): Financial status - Options: pending, authorized, partially_paid, paid, partially_refunded, refunded, voided
|
||||
- `inventoryBehaviour` (string, optional): Inventory behavior - Options: bypass, decrement_ignoring_policy, decrement_obeying_policy
|
||||
- `note` (string, optional): Order note
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="shopify/get_abandoned_carts">
|
||||
@@ -138,6 +179,7 @@ Before using the Shopify integration, ensure you have:
|
||||
- `createdAtMin` (string, optional): Only return carts created after this date (ISO or Unix timestamp)
|
||||
- `createdAtMax` (string, optional): Only return carts created before this date (ISO or Unix timestamp)
|
||||
- `limit` (string, optional): Maximum number of carts to return (defaults to 250)
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -158,6 +200,7 @@ Before using the Shopify integration, ensure you have:
|
||||
- `updatedAtMin` (string, optional): Only return products updated after this date (ISO or Unix timestamp)
|
||||
- `updatedAtMax` (string, optional): Only return products updated before this date (ISO or Unix timestamp)
|
||||
- `limit` (string, optional): Maximum number of products to return (defaults to 250)
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="shopify/create_product">
|
||||
@@ -174,6 +217,7 @@ Before using the Shopify integration, ensure you have:
|
||||
- `imageUrl` (string, optional): Product image URL
|
||||
- `isPublished` (boolean, optional): Whether product is published
|
||||
- `publishToPointToSale` (boolean, optional): Whether to publish to point of sale
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="shopify/update_product">
|
||||
@@ -191,6 +235,7 @@ Before using the Shopify integration, ensure you have:
|
||||
- `imageUrl` (string, optional): Product image URL
|
||||
- `isPublished` (boolean, optional): Whether product is published
|
||||
- `publishToPointToSale` (boolean, optional): Whether to publish to point of sale
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -202,6 +247,7 @@ Before using the Shopify integration, ensure you have:
|
||||
|
||||
**Parameters:**
|
||||
- `productFilterFormula` (object, optional): Advanced filter in disjunctive normal form with support for fields like id, title, vendor, status, handle, tag, created_at, updated_at, published_at
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="shopify/create_product_graphql">
|
||||
@@ -215,6 +261,7 @@ Before using the Shopify integration, ensure you have:
|
||||
- `tags` (string, optional): Product tags as array or comma-separated list
|
||||
- `media` (object, optional): Media objects with alt text, content type, and source URL
|
||||
- `additionalFields` (object, optional): Additional product fields like status, requiresSellingPlan, giftCard
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="shopify/update_product_graphql">
|
||||
@@ -229,6 +276,7 @@ Before using the Shopify integration, ensure you have:
|
||||
- `tags` (string, optional): Product tags as array or comma-separated list
|
||||
- `media` (object, optional): Updated media objects with alt text, content type, and source URL
|
||||
- `additionalFields` (object, optional): Additional product fields like status, requiresSellingPlan, giftCard
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -357,5 +405,6 @@ crew.kickoff()
|
||||
### Getting Help
|
||||
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with Shopify integration setup or troubleshooting.
|
||||
Contact our support team for assistance with Shopify integration setup or
|
||||
troubleshooting.
|
||||
</Card>
|
||||
|
||||
@@ -17,6 +17,40 @@ Before using the Slack integration, ensure you have:
- A Slack workspace with appropriate permissions
- Connected your Slack workspace through the [Integrations page](https://app.crewai.com/integrations)

## Setting Up Slack Integration

### 1. Connect Your Slack Workspace

1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Slack** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for team communication
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)

### 2. Install Required Package

```bash
uv add crewai-tools
```

### 3. Environment Variable Setup

<Note>
  To use integrations with `Agent(apps=[])`, you must set the
  `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
  Token.
</Note>

```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```

Or add it to your `.env` file:

```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
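
The same pattern applies here; a minimal, hedged sketch (the `"slack"` app identifier and the channel name are illustrative assumptions):

```python
from crewai import Agent, Task, Crew

notifier = Agent(
    role="Release Notifier",
    goal="Post concise release notes to the team channel",
    backstory="Keeps the team informed about deployments",
    apps=["slack"],  # assumed app identifier; requires CREWAI_PLATFORM_INTEGRATION_TOKEN
)

announce = Task(
    description="Send a short message to the #releases channel announcing version 1.2.0",
    expected_output="Confirmation that the message was posted",
    agent=notifier,
)

Crew(agents=[notifier], tasks=[announce]).kickoff()
```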

## Available Tools

### **User Management**
@@ -27,6 +61,7 @@ Before using the Slack integration, ensure you have:
|
||||
|
||||
**Parameters:**
|
||||
- No parameters required - retrieves all channel members
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="slack/get_user_by_email">
|
||||
@@ -34,6 +69,7 @@ Before using the Slack integration, ensure you have:
|
||||
|
||||
**Parameters:**
|
||||
- `email` (string, required): The email address of a user in the workspace
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="slack/get_users_by_name">
|
||||
@@ -44,6 +80,7 @@ Before using the Slack integration, ensure you have:
|
||||
- `displayName` (string, required): User's display name to search for
|
||||
- `paginationParameters` (object, optional): Pagination settings
|
||||
- `pageCursor` (string, optional): Page cursor for pagination
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -55,6 +92,7 @@ Before using the Slack integration, ensure you have:
|
||||
|
||||
**Parameters:**
|
||||
- No parameters required - retrieves all accessible channels
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -71,6 +109,7 @@ Before using the Slack integration, ensure you have:
|
||||
- `botIcon` (string, required): Bot icon - Can be either an image URL or an emoji (e.g., ":dog:")
|
||||
- `blocks` (object, optional): Slack Block Kit JSON for rich message formatting with attachments and interactive elements
|
||||
- `authenticatedUser` (boolean, optional): If true, message appears to come from your authenticated Slack user instead of the application (defaults to false)
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="slack/send_direct_message">
|
||||
@@ -83,6 +122,7 @@ Before using the Slack integration, ensure you have:
|
||||
- `botIcon` (string, required): Bot icon - Can be either an image URL or an emoji (e.g., ":dog:")
|
||||
- `blocks` (object, optional): Slack Block Kit JSON for rich message formatting with attachments and interactive elements
|
||||
- `authenticatedUser` (boolean, optional): If true, message appears to come from your authenticated Slack user instead of the application (defaults to false)
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -100,6 +140,7 @@ Before using the Slack integration, ensure you have:
|
||||
- `from:@john in:#general` - Search for messages from John in the #general channel
|
||||
- `has:link after:2023-01-01` - Search for messages with links after January 1, 2023
|
||||
- `in:@channel before:yesterday` - Search for messages in a specific channel before yesterday
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -108,6 +149,7 @@ Before using the Slack integration, ensure you have:
|
||||
Slack's Block Kit allows you to create rich, interactive messages. Here are some examples of how to use the `blocks` parameter:
|
||||
|
||||
### Simple Text with Attachment
|
||||
|
||||
```json
|
||||
[
|
||||
{
|
||||
@@ -122,6 +164,7 @@ Slack's Block Kit allows you to create rich, interactive messages. Here are some
|
||||
```
|
||||
|
||||
### Rich Formatting with Sections
|
||||
|
||||
```json
|
||||
[
|
||||
{
|
||||
@@ -279,5 +322,6 @@ crew.kickoff()
|
||||
## Contact Support
|
||||
|
||||
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
|
||||
Contact our support team for assistance with Slack integration setup or troubleshooting.
|
||||
Contact our support team for assistance with Slack integration setup or
|
||||
troubleshooting.
|
||||
</Card>
|
||||
|
||||
@@ -17,6 +17,40 @@ Before using the Stripe integration, ensure you have:
- A Stripe account with appropriate API permissions
- Connected your Stripe account through the [Integrations page](https://app.crewai.com/integrations)

## Setting Up Stripe Integration

### 1. Connect Your Stripe Account

1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Stripe** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for payment processing
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)

### 2. Install Required Package

```bash
uv add crewai-tools
```

### 3. Environment Variable Setup

<Note>
  To use integrations with `Agent(apps=[])`, you must set the
  `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
  Token.
</Note>

```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```

Or add it to your `.env` file:

```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```

## Available Tools

### **Customer Management**
@@ -30,6 +64,7 @@ Before using the Stripe integration, ensure you have:
|
||||
- `name` (string, optional): Customer's full name
|
||||
- `description` (string, optional): Customer description for internal reference
|
||||
- `metadataCreateCustomer` (object, optional): Additional metadata as key-value pairs (e.g., `{"field1": 1, "field2": 2}`)
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="stripe/get_customer_by_id">
|
||||
@@ -37,6 +72,7 @@ Before using the Stripe integration, ensure you have:
|
||||
|
||||
**Parameters:**
|
||||
- `idGetCustomer` (string, required): The Stripe customer ID to retrieve
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="stripe/get_customers">
|
||||
@@ -47,6 +83,7 @@ Before using the Stripe integration, ensure you have:
|
||||
- `createdAfter` (string, optional): Filter customers created after this date (Unix timestamp)
|
||||
- `createdBefore` (string, optional): Filter customers created before this date (Unix timestamp)
|
||||
- `limitGetCustomers` (string, optional): Maximum number of customers to return (defaults to 10)
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="stripe/update_customer">
|
||||
@@ -58,6 +95,7 @@ Before using the Stripe integration, ensure you have:
|
||||
- `name` (string, optional): Updated customer name
|
||||
- `description` (string, optional): Updated customer description
|
||||
- `metadataUpdateCustomer` (object, optional): Updated metadata as key-value pairs
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -71,6 +109,7 @@ Before using the Stripe integration, ensure you have:
|
||||
- `customerIdCreateSubscription` (string, required): The customer ID for whom the subscription will be created
|
||||
- `plan` (string, required): The plan ID for the subscription - Use Connect Portal Workflow Settings to allow users to select a plan
|
||||
- `metadataCreateSubscription` (object, optional): Additional metadata for the subscription
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="stripe/get_subscriptions">
|
||||
@@ -80,6 +119,7 @@ Before using the Stripe integration, ensure you have:
|
||||
- `customerIdGetSubscriptions` (string, optional): Filter subscriptions by customer ID
|
||||
- `subscriptionStatus` (string, optional): Filter by subscription status - Options: incomplete, incomplete_expired, trialing, active, past_due, canceled, unpaid
|
||||
- `limitGetSubscriptions` (string, optional): Maximum number of subscriptions to return (defaults to 10)
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -93,6 +133,7 @@ Before using the Stripe integration, ensure you have:
|
||||
- `productName` (string, required): The product name
|
||||
- `description` (string, optional): Product description
|
||||
- `metadataProduct` (object, optional): Additional product metadata as key-value pairs
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="stripe/get_product_by_id">
|
||||
@@ -100,6 +141,7 @@ Before using the Stripe integration, ensure you have:
|
||||
|
||||
**Parameters:**
|
||||
- `productId` (string, required): The Stripe product ID to retrieve
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="stripe/get_products">
|
||||
@@ -109,6 +151,7 @@ Before using the Stripe integration, ensure you have:
|
||||
- `createdAfter` (string, optional): Filter products created after this date (Unix timestamp)
|
||||
- `createdBefore` (string, optional): Filter products created before this date (Unix timestamp)
|
||||
- `limitGetProducts` (string, optional): Maximum number of products to return (defaults to 10)
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -122,6 +165,7 @@ Before using the Stripe integration, ensure you have:
|
||||
- `balanceTransactionType` (string, optional): Filter by transaction type - Options: charge, refund, payment, payment_refund
|
||||
- `paginationParameters` (object, optional): Pagination settings
|
||||
- `pageCursor` (string, optional): Page cursor for pagination
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="stripe/get_plans">
|
||||
@@ -131,6 +175,7 @@ Before using the Stripe integration, ensure you have:
|
||||
- `isPlanActive` (boolean, optional): Filter by plan status - true for active plans, false for inactive plans
|
||||
- `paginationParameters` (object, optional): Pagination settings
|
||||
- `pageCursor` (string, optional): Page cursor for pagination
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
|
||||
@@ -17,6 +17,40 @@ Before using the Zendesk integration, ensure you have:
- A Zendesk account with appropriate API permissions
- Connected your Zendesk account through the [Integrations page](https://app.crewai.com/integrations)

## Setting Up Zendesk Integration

### 1. Connect Your Zendesk Account

1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Zendesk** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for ticket and user management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)

### 2. Install Required Package

```bash
uv add crewai-tools
```

### 3. Environment Variable Setup

<Note>
  To use integrations with `Agent(apps=[])`, you must set the
  `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
  Token.
</Note>

```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```

Or add it to your `.env` file:

```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```

## Available Tools

### **Ticket Management**
@@ -38,6 +72,7 @@ Before using the Zendesk integration, ensure you have:
|
||||
- `ticketTags` (string, optional): Array of tags to apply (e.g., `["enterprise", "other_tag"]`)
|
||||
- `ticketExternalId` (string, optional): External ID to link tickets to local records
|
||||
- `ticketCustomFields` (object, optional): Custom field values in JSON format
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="zendesk/update_ticket">
|
||||
@@ -56,6 +91,7 @@ Before using the Zendesk integration, ensure you have:
|
||||
- `ticketTags` (string, optional): Updated tags array
|
||||
- `ticketExternalId` (string, optional): Updated external ID
|
||||
- `ticketCustomFields` (object, optional): Updated custom field values
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="zendesk/get_ticket_by_id">
|
||||
@@ -63,6 +99,7 @@ Before using the Zendesk integration, ensure you have:
|
||||
|
||||
**Parameters:**
|
||||
- `ticketId` (string, required): The ticket ID to retrieve (e.g., "35436")
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="zendesk/add_comment_to_ticket">
|
||||
@@ -73,6 +110,7 @@ Before using the Zendesk integration, ensure you have:
|
||||
- `commentBody` (string, required): Comment message (accepts plain text or HTML, e.g., "Thanks for your help!")
|
||||
- `isInternalNote` (boolean, optional): Set to true for internal notes instead of public replies (defaults to false)
|
||||
- `isPublic` (boolean, optional): True for public comments, false for internal notes
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="zendesk/search_tickets">
|
||||
@@ -94,6 +132,7 @@ Before using the Zendesk integration, ensure you have:
|
||||
- `dueDate` (object, optional): Filter by due date with operator and value
|
||||
- `sort_by` (string, optional): Sort field - Options: created_at, updated_at, priority, status, ticket_type
|
||||
- `sort_order` (string, optional): Sort direction - Options: asc, desc
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -111,6 +150,7 @@ Before using the Zendesk integration, ensure you have:
|
||||
- `externalId` (string, optional): Unique identifier from another system
|
||||
- `details` (string, optional): Additional user details
|
||||
- `notes` (string, optional): Internal notes about the user
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="zendesk/update_user">
|
||||
@@ -125,6 +165,7 @@ Before using the Zendesk integration, ensure you have:
|
||||
- `externalId` (string, optional): Updated external ID
|
||||
- `details` (string, optional): Updated user details
|
||||
- `notes` (string, optional): Updated internal notes
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="zendesk/get_user_by_id">
|
||||
@@ -132,6 +173,7 @@ Before using the Zendesk integration, ensure you have:
|
||||
|
||||
**Parameters:**
|
||||
- `userId` (string, required): The user ID to retrieve
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="zendesk/search_users">
|
||||
@@ -144,6 +186,7 @@ Before using the Zendesk integration, ensure you have:
|
||||
- `externalId` (string, optional): Filter by external ID
|
||||
- `sort_by` (string, optional): Sort field - Options: created_at, updated_at
|
||||
- `sort_order` (string, optional): Sort direction - Options: asc, desc
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -156,6 +199,7 @@ Before using the Zendesk integration, ensure you have:
|
||||
**Parameters:**
|
||||
- `paginationParameters` (object, optional): Pagination settings
|
||||
- `pageCursor` (string, optional): Page cursor for pagination
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="zendesk/get_ticket_audits">
|
||||
@@ -165,6 +209,7 @@ Before using the Zendesk integration, ensure you have:
|
||||
- `ticketId` (string, optional): Get audits for specific ticket (if empty, retrieves audits for all non-archived tickets, e.g., "1234")
|
||||
- `paginationParameters` (object, optional): Pagination settings
|
||||
- `pageCursor` (string, optional): Page cursor for pagination
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
|
||||
@@ -10,7 +10,10 @@ mode: "wide"
|
||||
CrewAI AMP (Agent Management Platform) provides a platform for deploying, monitoring, and scaling your crews and agents in a production environment.
|
||||
|
||||
<Frame>
|
||||
<img src="/images/enterprise/crewai-enterprise-dashboard.png" alt="CrewAI AMP Dashboard" />
|
||||
<img
|
||||
src="/images/enterprise/crewai-enterprise-dashboard.png"
|
||||
alt="CrewAI AMP Dashboard"
|
||||
/>
|
||||
</Frame>
|
||||
|
||||
CrewAI AMP extends the power of the open-source framework with features designed for production deployments, collaboration, and scalability. Deploy your crews to a managed infrastructure and monitor their execution in real-time.
|
||||
@@ -22,7 +25,8 @@ CrewAI AMP extends the power of the open-source framework with features designed
|
||||
Deploy your crews to a managed infrastructure with a few clicks
|
||||
</Card>
|
||||
<Card title="API Access" icon="code">
|
||||
Access your deployed crews via REST API for integration with existing systems
|
||||
Access your deployed crews via REST API for integration with existing
|
||||
systems
|
||||
</Card>
|
||||
<Card title="Observability" icon="chart-line">
|
||||
Monitor your crews with detailed execution traces and logs
|
||||
@@ -57,11 +61,7 @@ CrewAI AMP extends the power of the open-source framework with features designed
|
||||
<Steps>
|
||||
<Step title="Sign up for an account">
|
||||
Create your account at [app.crewai.com](https://app.crewai.com)
|
||||
<Card
|
||||
title="Sign Up"
|
||||
icon="user"
|
||||
href="https://app.crewai.com/signup"
|
||||
>
|
||||
<Card title="Sign Up" icon="user" href="https://app.crewai.com/signup">
|
||||
Sign Up
|
||||
</Card>
|
||||
</Step>
|
||||
|
||||
@@ -49,7 +49,7 @@ mode: "wide"
|
||||
|
||||
To integrate human input into agent execution, set the `human_input` flag in the task definition. When enabled, the agent prompts the user for input before delivering its final answer. This input can provide extra context, clarify ambiguities, or validate the agent's output.
|
||||
|
||||
For detailed implementation guidance, see our [Human-in-the-Loop guide](/en/how-to/human-in-the-loop).
|
||||
For detailed implementation guidance, see our [Human-in-the-Loop guide](/en/enterprise/guides/human-in-the-loop).
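
As a quick illustration, a minimal sketch of the flag on a task definition:

```python
from crewai import Agent, Task

analyst = Agent(
    role="Analyst",
    goal="Summarize reports accurately",
    backstory="A detail-oriented analyst",
)

summary_task = Task(
    description="Draft a summary of the quarterly report",
    expected_output="A one-page summary approved by the reviewer",
    agent=analyst,
    human_input=True,  # the agent asks for human feedback before finalizing its answer
)
```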
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="What advanced customization options are available for tailoring and enhancing agent behavior and capabilities in CrewAI?">
|
||||
@@ -142,10 +142,11 @@ mode: "wide"
|
||||
<Accordion title="How can I create custom tools for my CrewAI agents?">
|
||||
You can create custom tools by subclassing the `BaseTool` class provided by CrewAI or by using the tool decorator. Subclassing involves defining a new class that inherits from `BaseTool`, specifying the name, description, and the `_run` method for operational logic. The tool decorator allows you to create a `Tool` object directly with the required attributes and a functional logic.
|
||||
|
||||
<Card href="https://docs.crewai.com/how-to/create-custom-tools" icon="code">CrewAI Tools Guide</Card>
|
||||
<Card href="/en/learn/create-custom-tools" icon="code">CrewAI Tools Guide</Card>
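
A minimal sketch of both approaches (import paths reflect recent releases and may vary slightly by version):

```python
from crewai.tools import BaseTool, tool


class WordCountTool(BaseTool):
    name: str = "Word Counter"
    description: str = "Counts the words in a piece of text."

    def _run(self, text: str) -> str:
        return f"{len(text.split())} words"


@tool("Character Counter")
def character_counter(text: str) -> str:
    """Counts the characters in a piece of text."""
    return f"{len(text)} characters"
```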
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="How can you control the maximum number of requests per minute that the entire crew can perform?">
|
||||
The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
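
For example, a crew-level cap that takes precedence over any per-agent setting:

```python
from crewai import Agent, Crew, Task

researcher = Agent(
    role="Researcher",
    goal="Research topics thoroughly",
    backstory="A methodical researcher",
    max_rpm=30,  # agent-level setting
)

research_task = Task(
    description="Research a topic",
    expected_output="Concise notes",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    max_rpm=10,  # crew-wide ceiling; overrides the agent's max_rpm
)
```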
|
||||
</Accordion>
|
||||
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -6,6 +6,7 @@ mode: "wide"
|
||||
---
|
||||
|
||||
## Video Tutorial
|
||||
|
||||
Watch this video tutorial for a step-by-step demonstration of the installation process:
|
||||
|
||||
<iframe
|
||||
@@ -18,21 +19,25 @@ Watch this video tutorial for a step-by-step demonstration of the installation p
|
||||
></iframe>
|
||||
|
||||
## Text Tutorial
|
||||
|
||||
<Note>
|
||||
**Python Version Requirements**
|
||||
|
||||
CrewAI requires `Python >=3.10 and <3.14`. Here's how to check your version:
|
||||
|
||||
```bash
|
||||
python3 --version
|
||||
```
|
||||
|
||||
If you need to update Python, visit [python.org/downloads](https://python.org/downloads)
|
||||
|
||||
</Note>
|
||||
|
||||
<Note>
|
||||
**OpenAI SDK Requirement**
|
||||
|
||||
CrewAI 0.175.0 requires `openai >= 1.13.3`. If you manage dependencies yourself, ensure your environment satisfies this constraint to avoid import/runtime issues.
|
||||
|
||||
</Note>
|
||||
|
||||
CrewAI uses `uv` as its dependency management and package handling tool. It simplifies project setup and execution, offering a seamless experience.
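
For orientation, the end-to-end flow looks roughly like this (a condensed sketch of the steps covered in this guide):

```bash
# Install uv, then the CrewAI CLI
curl -LsSf https://astral.sh/uv/install.sh | sh
uv tool install crewai

# Scaffold a project, install its dependencies, and run it
crewai create crew my_project
cd my_project
crewai install
crewai run
```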
|
||||
@@ -95,6 +100,7 @@ If you haven't installed `uv` yet, follow **step 1** to quickly get it set up on
|
||||
```
|
||||
<Check>Installation successful! You're ready to create your first crew! 🎉</Check>
|
||||
</Step>
|
||||
|
||||
</Steps>
|
||||
|
||||
# Creating a CrewAI Project
|
||||
@@ -128,6 +134,7 @@ We recommend using the `YAML` template scaffolding for a structured approach to
|
||||
├── agents.yaml
|
||||
└── tasks.yaml
|
||||
```
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Customize Your Project">
|
||||
@@ -144,6 +151,7 @@ We recommend using the `YAML` template scaffolding for a structured approach to
|
||||
|
||||
- Start by editing `agents.yaml` and `tasks.yaml` to define your crew's behavior.
|
||||
- Keep sensitive information like API keys in `.env`.
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Run your Crew">
|
||||
@@ -168,12 +176,14 @@ We recommend using the `YAML` template scaffolding for a structured approach to
|
||||
For teams and organizations, CrewAI offers enterprise deployment options that eliminate setup complexity:
|
||||
|
||||
### CrewAI AMP (SaaS)
|
||||
|
||||
- Zero installation required - just sign up for free at [app.crewai.com](https://app.crewai.com)
|
||||
- Automatic updates and maintenance
|
||||
- Managed infrastructure and scaling
|
||||
- Build Crews with no Code
|
||||
|
||||
### CrewAI Factory (Self-hosted)
|
||||
|
||||
- Containerized deployment for your infrastructure
|
||||
- Supports any hyperscaler, including on-prem deployments
|
||||
- Integration with your existing security systems
|
||||
@@ -186,12 +196,9 @@ For teams and organizations, CrewAI offers enterprise deployment options that el
|
||||
## Next Steps
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card
|
||||
title="Build Your First Agent"
|
||||
icon="code"
|
||||
href="/en/quickstart"
|
||||
>
|
||||
Follow our quickstart guide to create your first CrewAI agent and get hands-on experience.
|
||||
<Card title="Build Your First Agent" icon="code" href="/en/quickstart">
|
||||
Follow our quickstart guide to create your first CrewAI agent and get
|
||||
hands-on experience.
|
||||
</Card>
|
||||
<Card
|
||||
title="Join the Community"
|
||||
|
||||
@@ -7,110 +7,89 @@ mode: "wide"
|
||||
|
||||
# What is CrewAI?
|
||||
|
||||
**CrewAI is a lean, lightning-fast Python framework built entirely from scratch—completely independent of LangChain or other agent frameworks.**
|
||||
**CrewAI is the leading open-source framework for orchestrating autonomous AI agents and building complex workflows.**
|
||||
|
||||
CrewAI empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario:
|
||||
It empowers developers to build production-ready multi-agent systems by combining the collaborative intelligence of **Crews** with the precise control of **Flows**.
|
||||
|
||||
- **[CrewAI Crews](/en/guides/crews/first-crew)**: Optimize for autonomy and collaborative intelligence, enabling you to create AI teams where each agent has specific roles, tools, and goals.
|
||||
- **[CrewAI Flows](/en/guides/flows/first-flow)**: Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively.
|
||||
- **[CrewAI Flows](/en/guides/flows/first-flow)**: The backbone of your AI application. Flows allow you to create structured, event-driven workflows that manage state and control execution. They provide the scaffolding for your AI agents to work within.
|
||||
- **[CrewAI Crews](/en/guides/crews/first-crew)**: The units of work within your Flow. Crews are teams of autonomous agents that collaborate to solve specific tasks delegated to them by the Flow.
|
||||
|
||||
With over 100,000 developers certified through our community courses, CrewAI is rapidly becoming the standard for enterprise-ready AI automation.
|
||||
With over 100,000 developers certified through our community courses, CrewAI is the standard for enterprise-ready AI automation.
|
||||
|
||||
## The CrewAI Architecture
|
||||
|
||||
## How Crews Work
|
||||
CrewAI's architecture is designed to balance autonomy with control.
|
||||
|
||||
### 1. Flows: The Backbone
|
||||
|
||||
<Note>
|
||||
Just like a company has departments (Sales, Engineering, Marketing) working together under leadership to achieve business goals, CrewAI helps you create an organization of AI agents with specialized roles collaborating to accomplish complex tasks.
|
||||
</Note>
|
||||
|
||||
<Frame caption="CrewAI Framework Overview">
|
||||
<img src="/images/crews.png" alt="CrewAI Framework Overview" />
|
||||
</Frame>
|
||||
|
||||
| Component | Description | Key Features |
|
||||
|:----------|:-----------:|:------------|
|
||||
| **Crew** | The top-level organization | • Manages AI agent teams<br/>• Oversees workflows<br/>• Ensures collaboration<br/>• Delivers outcomes |
|
||||
| **AI Agents** | Specialized team members | • Have specific roles (researcher, writer)<br/>• Use designated tools<br/>• Can delegate tasks<br/>• Make autonomous decisions |
|
||||
| **Process** | Workflow management system | • Defines collaboration patterns<br/>• Controls task assignments<br/>• Manages interactions<br/>• Ensures efficient execution |
|
||||
| **Tasks** | Individual assignments | • Have clear objectives<br/>• Use specific tools<br/>• Feed into larger process<br/>• Produce actionable results |
|
||||
|
||||
### How It All Works Together
|
||||
|
||||
1. The **Crew** organizes the overall operation
|
||||
2. **AI Agents** work on their specialized tasks
|
||||
3. The **Process** ensures smooth collaboration
|
||||
4. **Tasks** get completed to achieve the goal
|
||||
|
||||
## Key Features
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card title="Role-Based Agents" icon="users">
|
||||
Create specialized agents with defined roles, expertise, and goals - from researchers to analysts to writers
|
||||
</Card>
|
||||
<Card title="Flexible Tools" icon="screwdriver-wrench">
|
||||
Equip agents with custom tools and APIs to interact with external services and data sources
|
||||
</Card>
|
||||
<Card title="Intelligent Collaboration" icon="people-arrows">
|
||||
Agents work together, sharing insights and coordinating tasks to achieve complex objectives
|
||||
</Card>
|
||||
<Card title="Task Management" icon="list-check">
|
||||
Define sequential or parallel workflows, with agents automatically handling task dependencies
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
## How Flows Work
|
||||
|
||||
<Note>
|
||||
While Crews excel at autonomous collaboration, Flows provide structured automations, offering granular control over workflow execution. Flows ensure tasks are executed reliably, securely, and efficiently, handling conditional logic, loops, and dynamic state management with precision. Flows integrate seamlessly with Crews, enabling you to balance high autonomy with exacting control.
|
||||
Think of a Flow as the "manager" or the "process definition" of your application. It defines the steps, the logic, and how data moves through your system.
|
||||
</Note>
|
||||
|
||||
<Frame caption="CrewAI Framework Overview">
|
||||
<img src="/images/flows.png" alt="CrewAI Framework Overview" />
|
||||
</Frame>
|
||||
|
||||
| Component | Description | Key Features |
|
||||
|:----------|:-----------:|:------------|
|
||||
| **Flow** | Structured workflow orchestration | • Manages execution paths<br/>• Handles state transitions<br/>• Controls task sequencing<br/>• Ensures reliable execution |
|
||||
| **Events** | Triggers for workflow actions | • Initiate specific processes<br/>• Enable dynamic responses<br/>• Support conditional branching<br/>• Allow for real-time adaptation |
|
||||
| **States** | Workflow execution contexts | • Maintain execution data<br/>• Enable persistence<br/>• Support resumability<br/>• Ensure execution integrity |
|
||||
| **Crew Support** | Enhances workflow automation | • Injects pockets of agency when needed<br/>• Complements structured workflows<br/>• Balances automation with intelligence<br/>• Enables adaptive decision-making |
|
||||
Flows provide:
|
||||
- **State Management**: Persist data across steps and executions.
|
||||
- **Event-Driven Execution**: Trigger actions based on events or external inputs.
|
||||
- **Control Flow**: Use conditional logic, loops, and branching.
|
||||
|
||||
### Key Capabilities
|
||||
### 2. Crews: The Intelligence
|
||||
|
||||
<Note>
|
||||
Crews are the "teams" that do the heavy lifting. Within a Flow, you can trigger a Crew to tackle a complex problem requiring creativity and collaboration.
|
||||
</Note>
|
||||
|
||||
<Frame caption="CrewAI Framework Overview">
|
||||
<img src="/images/crews.png" alt="CrewAI Framework Overview" />
|
||||
</Frame>
|
||||
|
||||
Crews provide:
|
||||
- **Role-Playing Agents**: Specialized agents with specific goals and tools.
|
||||
- **Autonomous Collaboration**: Agents work together to solve tasks.
|
||||
- **Task Delegation**: Tasks are assigned and executed based on agent capabilities.
|
||||
|
||||
## How It All Works Together
|
||||
|
||||
1. **The Flow** triggers an event or starts a process.
|
||||
2. **The Flow** manages the state and decides what to do next.
|
||||
3. **The Flow** delegates a complex task to a **Crew**.
|
||||
4. **The Crew**'s agents collaborate to complete the task.
|
||||
5. **The Crew** returns the result to the **Flow**.
|
||||
6. **The Flow** continues execution based on the result.
|
||||
|
||||
## Key Features
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card title="Event-Driven Orchestration" icon="bolt">
|
||||
Define precise execution paths responding dynamically to events
|
||||
<Card title="Production-Grade Flows" icon="arrow-progress">
|
||||
Build reliable, stateful workflows that can handle long-running processes and complex logic.
|
||||
</Card>
|
||||
<Card title="Fine-Grained Control" icon="sliders">
|
||||
Manage workflow states and conditional execution securely and efficiently
|
||||
<Card title="Autonomous Crews" icon="users">
|
||||
Deploy teams of agents that can plan, execute, and collaborate to achieve high-level goals.
|
||||
</Card>
|
||||
<Card title="Native Crew Integration" icon="puzzle-piece">
|
||||
Effortlessly combine with Crews for enhanced autonomy and intelligence
|
||||
<Card title="Flexible Tools" icon="screwdriver-wrench">
|
||||
Connect your agents to any API, database, or local tool.
|
||||
</Card>
|
||||
<Card title="Deterministic Execution" icon="route">
|
||||
Ensure predictable outcomes with explicit control flow and error handling
|
||||
<Card title="Enterprise Security" icon="lock">
|
||||
Designed with security and compliance in mind for enterprise deployments.
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
## When to Use Crews vs. Flows
|
||||
|
||||
<Note>
|
||||
Understanding when to use [Crews](/en/guides/crews/first-crew) versus [Flows](/en/guides/flows/first-flow) is key to maximizing the potential of CrewAI in your applications.
|
||||
</Note>
|
||||
**The short answer: Use both.**
|
||||
|
||||
| Use Case | Recommended Approach | Why? |
|
||||
|:---------|:---------------------|:-----|
|
||||
| **Open-ended research** | [Crews](/en/guides/crews/first-crew) | When tasks require creative thinking, exploration, and adaptation |
|
||||
| **Content generation** | [Crews](/en/guides/crews/first-crew) | For collaborative creation of articles, reports, or marketing materials |
|
||||
| **Decision workflows** | [Flows](/en/guides/flows/first-flow) | When you need predictable, auditable decision paths with precise control |
|
||||
| **API orchestration** | [Flows](/en/guides/flows/first-flow) | For reliable integration with multiple external services in a specific sequence |
|
||||
| **Hybrid applications** | Combined approach | Use [Flows](/en/guides/flows/first-flow) to orchestrate overall process with [Crews](/en/guides/crews/first-crew) handling complex subtasks |
|
||||
For any production-ready application, **start with a Flow**.
|
||||
|
||||
### Decision Framework
|
||||
- **Use a Flow** to define the overall structure, state, and logic of your application.
|
||||
- **Use a Crew** within a Flow step when you need a team of agents to perform a specific, complex task that requires autonomy.
|
||||
|
||||
- **Choose [Crews](/en/guides/crews/first-crew) when:** You need autonomous problem-solving, creative collaboration, or exploratory tasks
|
||||
- **Choose [Flows](/en/guides/flows/first-flow) when:** You require deterministic outcomes, auditability, or precise control over execution
|
||||
- **Combine both when:** Your application needs both structured processes and pockets of autonomous intelligence
|
||||
| Use Case | Architecture |
|
||||
| :--- | :--- |
|
||||
| **Simple Automation** | Single Flow with Python tasks |
|
||||
| **Complex Research** | Flow managing state -> Crew performing research |
|
||||
| **Application Backend** | Flow handling API requests -> Crew generating content -> Flow saving to DB |
|
||||
|
||||
## Why Choose CrewAI?
|
||||
|
||||
@@ -124,13 +103,6 @@ With over 100,000 developers certified through our community courses, CrewAI is
|
||||
## Ready to Start Building?
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card
|
||||
title="Build Your First Crew"
|
||||
icon="users-gear"
|
||||
href="/en/guides/crews/first-crew"
|
||||
>
|
||||
Step-by-step tutorial to create a collaborative AI team that works together to solve complex problems.
|
||||
</Card>
|
||||
<Card
|
||||
title="Build Your First Flow"
|
||||
icon="diagram-project"
|
||||
@@ -138,6 +110,13 @@ With over 100,000 developers certified through our community courses, CrewAI is
|
||||
>
|
||||
Learn how to create structured, event-driven workflows with precise control over execution.
|
||||
</Card>
|
||||
<Card
|
||||
title="Build Your First Crew"
|
||||
icon="users-gear"
|
||||
href="/en/guides/crews/first-crew"
|
||||
>
|
||||
Step-by-step tutorial to create a collaborative AI team that works together to solve complex problems.
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
<CardGroup cols={3}>
|
||||
|
||||
docs/en/learn/a2a-agent-delegation.mdx (new file, 367 lines)
@@ -0,0 +1,367 @@
---
title: Agent-to-Agent (A2A) Protocol
description: Enable CrewAI agents to delegate tasks to remote A2A-compliant agents for specialized handling
icon: network-wired
mode: "wide"
---

## A2A Agent Delegation

CrewAI supports the Agent-to-Agent (A2A) protocol, allowing agents to delegate tasks to remote specialized agents. The agent's LLM automatically decides whether to handle a task directly or delegate to an A2A agent based on the task requirements.

<Note>
  A2A delegation requires the `a2a-sdk` package. Install with: `uv add 'crewai[a2a]'` or `pip install 'crewai[a2a]'`
</Note>

## How It Works

When an agent is configured with A2A capabilities:

1. The LLM analyzes each task
2. It decides to either:
   - Handle the task directly using its own capabilities
   - Delegate to a remote A2A agent for specialized handling
3. If delegating, the agent communicates with the remote A2A agent through the protocol
4. Results are returned to the CrewAI workflow

## Basic Configuration

Configure an agent for A2A delegation by setting the `a2a` parameter:

```python Code
from crewai import Agent, Crew, Task
from crewai.a2a import A2AConfig

agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks efficiently",
    backstory="Expert at delegating to specialized research agents",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://example.com/.well-known/agent-card.json",
        timeout=120,
        max_turns=10
    )
)

task = Task(
    description="Research the latest developments in quantum computing",
    expected_output="A comprehensive research report",
    agent=agent
)

crew = Crew(agents=[agent], tasks=[task], verbose=True)
result = crew.kickoff()
```
|
||||
|
||||
## Configuration Options
|
||||
|
||||
The `A2AConfig` class accepts the following parameters:
|
||||
|
||||
<ParamField path="endpoint" type="str" required>
|
||||
The A2A agent endpoint URL (typically points to `.well-known/agent-card.json`)
|
||||
</ParamField>
|
||||
|
||||
<ParamField path="auth" type="AuthScheme" default="None">
|
||||
Authentication scheme for the A2A agent. Supports Bearer tokens, OAuth2, API keys, and HTTP authentication.
|
||||
</ParamField>
|
||||
|
||||
<ParamField path="timeout" type="int" default="120">
|
||||
Request timeout in seconds
|
||||
</ParamField>
|
||||
|
||||
<ParamField path="max_turns" type="int" default="10">
|
||||
Maximum number of conversation turns with the A2A agent
|
||||
</ParamField>
|
||||
|
||||
<ParamField path="response_model" type="type[BaseModel]" default="None">
|
||||
Optional Pydantic model for requesting structured output from the A2A agent. The A2A protocol does not enforce this, so the remote agent may not honor the request (see the sketch after this parameter list).
|
||||
</ParamField>
|
||||
|
||||
<ParamField path="fail_fast" type="bool" default="True">
|
||||
Whether to raise an error immediately if agent connection fails. When `False`, the agent continues with available agents and informs the LLM about unavailable ones.
|
||||
</ParamField>
|
||||
|
||||
<ParamField path="trust_remote_completion_status" type="bool" default="False">
|
||||
When `True`, returns the A2A agent's result directly when it signals completion. When `False`, allows the server agent to review the result and potentially continue the conversation.
|
||||
</ParamField>
|
||||
|
||||
<ParamField path="updates" type="UpdateConfig" default="StreamingConfig()">
|
||||
Update mechanism for receiving task status. Options: `StreamingConfig`, `PollingConfig`, or `PushNotificationConfig`.
|
||||
</ParamField>
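
For instance, requesting structured output from the remote agent (keeping in mind the remote side may ignore the request):

```python Code
from pydantic import BaseModel

from crewai import Agent
from crewai.a2a import A2AConfig


class ResearchSummary(BaseModel):
    title: str
    key_findings: list[str]


agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks",
    backstory="Expert at delegation",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://research.example.com/.well-known/agent-card.json",
        response_model=ResearchSummary,  # a request, not a guarantee
    ),
)
```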
|
||||
|
||||
## Authentication
|
||||
|
||||
For A2A agents that require authentication, use one of the provided auth schemes:
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Bearer Token">
|
||||
```python Code
|
||||
from crewai.a2a import A2AConfig
|
||||
from crewai.a2a.auth import BearerTokenAuth
|
||||
|
||||
agent = Agent(
|
||||
role="Secure Coordinator",
|
||||
goal="Coordinate tasks with secured agents",
|
||||
backstory="Manages secure agent communications",
|
||||
llm="gpt-4o",
|
||||
a2a=A2AConfig(
|
||||
endpoint="https://secure-agent.example.com/.well-known/agent-card.json",
|
||||
auth=BearerTokenAuth(token="your-bearer-token"),
|
||||
timeout=120
|
||||
)
|
||||
)
|
||||
```
|
||||
</Tab>
|
||||
|
||||
<Tab title="API Key">
|
||||
```python Code
|
||||
from crewai.a2a import A2AConfig
|
||||
from crewai.a2a.auth import APIKeyAuth
|
||||
|
||||
agent = Agent(
|
||||
role="API Coordinator",
|
||||
goal="Coordinate with API-based agents",
|
||||
backstory="Manages API-authenticated communications",
|
||||
llm="gpt-4o",
|
||||
a2a=A2AConfig(
|
||||
endpoint="https://api-agent.example.com/.well-known/agent-card.json",
|
||||
auth=APIKeyAuth(
|
||||
api_key="your-api-key",
|
||||
location="header", # or "query" or "cookie"
|
||||
name="X-API-Key"
|
||||
),
|
||||
timeout=120
|
||||
)
|
||||
)
|
||||
```
|
||||
</Tab>
|
||||
|
||||
<Tab title="OAuth2">
|
||||
```python Code
|
||||
from crewai.a2a import A2AConfig
|
||||
from crewai.a2a.auth import OAuth2ClientCredentials
|
||||
|
||||
agent = Agent(
|
||||
role="OAuth Coordinator",
|
||||
goal="Coordinate with OAuth-secured agents",
|
||||
backstory="Manages OAuth-authenticated communications",
|
||||
llm="gpt-4o",
|
||||
a2a=A2AConfig(
|
||||
endpoint="https://oauth-agent.example.com/.well-known/agent-card.json",
|
||||
auth=OAuth2ClientCredentials(
|
||||
token_url="https://auth.example.com/oauth/token",
|
||||
client_id="your-client-id",
|
||||
client_secret="your-client-secret",
|
||||
scopes=["read", "write"]
|
||||
),
|
||||
timeout=120
|
||||
)
|
||||
)
|
||||
```
|
||||
</Tab>
|
||||
|
||||
<Tab title="HTTP Basic">
|
||||
```python Code
|
||||
from crewai.a2a import A2AConfig
|
||||
from crewai.a2a.auth import HTTPBasicAuth
|
||||
|
||||
agent = Agent(
|
||||
role="Basic Auth Coordinator",
|
||||
goal="Coordinate with basic auth agents",
|
||||
backstory="Manages basic authentication communications",
|
||||
llm="gpt-4o",
|
||||
a2a=A2AConfig(
|
||||
endpoint="https://basic-agent.example.com/.well-known/agent-card.json",
|
||||
auth=HTTPBasicAuth(
|
||||
username="your-username",
|
||||
password="your-password"
|
||||
),
|
||||
timeout=120
|
||||
)
|
||||
)
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## Multiple A2A Agents
|
||||
|
||||
Configure multiple A2A agents for delegation by passing a list:
|
||||
|
||||
```python Code
|
||||
from crewai.a2a import A2AConfig
|
||||
from crewai.a2a.auth import BearerTokenAuth
|
||||
|
||||
agent = Agent(
|
||||
role="Multi-Agent Coordinator",
|
||||
goal="Coordinate with multiple specialized agents",
|
||||
backstory="Expert at delegating to the right specialist",
|
||||
llm="gpt-4o",
|
||||
a2a=[
|
||||
A2AConfig(
|
||||
endpoint="https://research.example.com/.well-known/agent-card.json",
|
||||
timeout=120
|
||||
),
|
||||
A2AConfig(
|
||||
endpoint="https://data.example.com/.well-known/agent-card.json",
|
||||
auth=BearerTokenAuth(token="data-token"),
|
||||
timeout=90
|
||||
)
|
||||
]
|
||||
)
|
||||
```
|
||||
|
||||
The LLM will automatically choose which A2A agent to delegate to based on the task requirements.
|
||||
|
||||
## Error Handling
|
||||
|
||||
Control how agent connection failures are handled using the `fail_fast` parameter:
|
||||
|
||||
```python Code
|
||||
from crewai.a2a import A2AConfig
|
||||
|
||||
# Fail immediately on connection errors (default)
|
||||
agent = Agent(
|
||||
role="Research Coordinator",
|
||||
goal="Coordinate research tasks",
|
||||
backstory="Expert at delegation",
|
||||
llm="gpt-4o",
|
||||
a2a=A2AConfig(
|
||||
endpoint="https://research.example.com/.well-known/agent-card.json",
|
||||
fail_fast=True
|
||||
)
|
||||
)
|
||||
|
||||
# Continue with available agents
|
||||
agent = Agent(
|
||||
role="Multi-Agent Coordinator",
|
||||
goal="Coordinate with multiple agents",
|
||||
backstory="Expert at working with available resources",
|
||||
llm="gpt-4o",
|
||||
a2a=[
|
||||
A2AConfig(
|
||||
endpoint="https://primary.example.com/.well-known/agent-card.json",
|
||||
fail_fast=False
|
||||
),
|
||||
A2AConfig(
|
||||
endpoint="https://backup.example.com/.well-known/agent-card.json",
|
||||
fail_fast=False
|
||||
)
|
||||
]
|
||||
)
|
||||
```
|
||||
|
||||
When `fail_fast=False`:
|
||||
- If some agents fail, the LLM is informed which agents are unavailable and can delegate to working agents
|
||||
- If all agents fail, the LLM receives a notice about unavailable agents and handles the task directly
|
||||
- Connection errors are captured and included in the context for better decision-making
|
||||
|
||||
## Update Mechanisms
|
||||
|
||||
Control how your agent receives task status updates from remote A2A agents:
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Streaming (Default)">
|
||||
```python Code
|
||||
from crewai.a2a import A2AConfig
|
||||
from crewai.a2a.updates import StreamingConfig
|
||||
|
||||
agent = Agent(
|
||||
role="Research Coordinator",
|
||||
goal="Coordinate research tasks",
|
||||
backstory="Expert at delegation",
|
||||
llm="gpt-4o",
|
||||
a2a=A2AConfig(
|
||||
endpoint="https://research.example.com/.well-known/agent-card.json",
|
||||
updates=StreamingConfig()
|
||||
)
|
||||
)
|
||||
```
|
||||
</Tab>
|
||||
|
||||
<Tab title="Polling">
|
||||
```python Code
|
||||
from crewai.a2a import A2AConfig
|
||||
from crewai.a2a.updates import PollingConfig
|
||||
|
||||
agent = Agent(
|
||||
role="Research Coordinator",
|
||||
goal="Coordinate research tasks",
|
||||
backstory="Expert at delegation",
|
||||
llm="gpt-4o",
|
||||
a2a=A2AConfig(
|
||||
endpoint="https://research.example.com/.well-known/agent-card.json",
|
||||
updates=PollingConfig(
|
||||
interval=2.0,
|
||||
timeout=300.0,
|
||||
max_polls=100
|
||||
)
|
||||
)
|
||||
)
|
||||
```
|
||||
</Tab>
|
||||
|
||||
<Tab title="Push Notifications">
|
||||
```python Code
|
||||
from crewai.a2a import A2AConfig
|
||||
from crewai.a2a.updates import PushNotificationConfig
|
||||
|
||||
agent = Agent(
|
||||
role="Research Coordinator",
|
||||
goal="Coordinate research tasks",
|
||||
backstory="Expert at delegation",
|
||||
llm="gpt-4o",
|
||||
a2a=A2AConfig(
|
||||
endpoint="https://research.example.com/.well-known/agent-card.json",
|
||||
updates=PushNotificationConfig(
|
||||
url=f"{base_url}/a2a/callback",  # base_url: the publicly reachable URL of your server
|
||||
token="your-validation-token",
|
||||
timeout=300.0
|
||||
)
|
||||
)
|
||||
)
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## Best Practices
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card title="Set Appropriate Timeouts" icon="clock">
|
||||
Configure timeouts based on expected A2A agent response times. Longer-running tasks may need higher timeout values.
|
||||
</Card>
|
||||
|
||||
<Card title="Limit Conversation Turns" icon="comments">
|
||||
Use `max_turns` to prevent excessive back-and-forth. The agent will automatically conclude conversations before hitting the limit.
|
||||
</Card>
|
||||
|
||||
<Card title="Use Resilient Error Handling" icon="shield-check">
|
||||
Set `fail_fast=False` for production environments with multiple agents to gracefully handle connection failures and maintain workflow continuity.
|
||||
</Card>
|
||||
|
||||
<Card title="Secure Your Credentials" icon="lock">
|
||||
Store authentication tokens and credentials as environment variables, not in code (see the sketch after these cards).
|
||||
</Card>
|
||||
|
||||
<Card title="Monitor Delegation Decisions" icon="eye">
|
||||
Use verbose mode to observe when the LLM chooses to delegate versus handle tasks directly.
|
||||
</Card>
|
||||
</CardGroup>
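To apply the "Secure Your Credentials" and "Monitor Delegation Decisions" practices together, here is a minimal sketch that reads the token from an environment variable and enables verbose logging (the `A2A_AGENT_TOKEN` variable name is illustrative):

```python Code
import os

from crewai import Agent
from crewai.a2a import A2AConfig
from crewai.a2a.auth import BearerTokenAuth

agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks",
    backstory="Expert at delegation",
    llm="gpt-4o",
    verbose=True,  # observe when the LLM delegates vs. handles tasks directly
    a2a=A2AConfig(
        endpoint="https://research.example.com/.well-known/agent-card.json",
        # A2A_AGENT_TOKEN is an illustrative variable name; use whatever your deployment defines
        auth=BearerTokenAuth(token=os.environ["A2A_AGENT_TOKEN"]),
        timeout=120,
    ),
)
```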
|
||||
|
||||
## Supported Authentication Methods
|
||||
|
||||
- **Bearer Token** - Simple token-based authentication
|
||||
- **OAuth2 Client Credentials** - OAuth2 flow for machine-to-machine communication
|
||||
- **OAuth2 Authorization Code** - OAuth2 flow requiring user authorization
|
||||
- **API Key** - Key-based authentication (header, query param, or cookie)
|
||||
- **HTTP Basic** - Username/password authentication
|
||||
- **HTTP Digest** - Digest authentication (requires `httpx-auth` package)
|
||||
|
||||
## Learn More
|
||||
|
||||
For more information about the A2A protocol and reference implementations:
|
||||
|
||||
- [A2A Protocol Documentation](https://a2a-protocol.org)
|
||||
- [A2A Sample Implementations](https://github.com/a2aproject/a2a-samples)
|
||||
- [A2A Python SDK](https://github.com/a2aproject/a2a-python)
|
||||
@@ -66,5 +66,55 @@ def my_cache_strategy(arguments: dict, result: str) -> bool:
|
||||
cached_tool.cache_function = my_cache_strategy
|
||||
```
|
||||
|
||||
### Creating Async Tools
|
||||
|
||||
CrewAI supports async tools for non-blocking I/O. This is useful when your tool needs to perform HTTP requests, database queries, or other I/O-bound operations.
|
||||
|
||||
#### Using the `@tool` Decorator with Async Functions
|
||||
|
||||
The simplest way to create an async tool is using the `@tool` decorator with an async function:
|
||||
|
||||
```python Code
import aiohttp

from crewai.tools import tool


@tool("Async Web Fetcher")
async def fetch_webpage(url: str) -> str:
    """Fetch content from a webpage asynchronously."""
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.text()
```
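Once decorated, the async tool can be attached to an agent like any other tool. A minimal sketch (the agent fields are illustrative):

```python Code
from crewai import Agent

research_agent = Agent(
    role="Web Researcher",
    goal="Fetch and summarize web content",
    backstory="Skilled at gathering online information",
    tools=[fetch_webpage],  # the @tool-decorated async function from above
)
```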
|
||||
|
||||
#### Subclassing `BaseTool` with Async Support
|
||||
|
||||
For more control, subclass `BaseTool` and implement both `_run` (sync) and `_arun` (async) methods:
|
||||
|
||||
```python Code
import requests
import aiohttp
from crewai.tools import BaseTool
from pydantic import BaseModel, Field

class WebFetcherInput(BaseModel):
    """Input schema for WebFetcher."""
    url: str = Field(..., description="The URL to fetch")

class WebFetcherTool(BaseTool):
    name: str = "Web Fetcher"
    description: str = "Fetches content from a URL"
    args_schema: type[BaseModel] = WebFetcherInput

    def _run(self, url: str) -> str:
        """Synchronous implementation."""
        return requests.get(url).text

    async def _arun(self, url: str) -> str:
        """Asynchronous implementation for non-blocking I/O."""
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                return await response.text()
```
|
||||
|
||||
By following these guidelines, and adding async support where your tools perform I/O, you can leverage the full capabilities of the CrewAI framework, enhancing both the development experience and the efficiency of your AI agents.
|
||||
|
||||
522
docs/en/learn/execution-hooks.mdx
Normal file
@@ -0,0 +1,522 @@
|
||||
---
|
||||
title: Execution Hooks Overview
|
||||
description: Understanding and using execution hooks in CrewAI for fine-grained control over agent operations
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
Execution Hooks provide fine-grained control over the runtime behavior of your CrewAI agents. Unlike kickoff hooks that run before and after crew execution, execution hooks intercept specific operations during agent execution, allowing you to modify behavior, implement safety checks, and add comprehensive monitoring.
|
||||
|
||||
## Types of Execution Hooks
|
||||
|
||||
CrewAI provides two main categories of execution hooks:
|
||||
|
||||
### 1. [LLM Call Hooks](/learn/llm-hooks)
|
||||
|
||||
Control and monitor language model interactions:
|
||||
- **Before LLM Call**: Modify prompts, validate inputs, implement approval gates
|
||||
- **After LLM Call**: Transform responses, sanitize outputs, update conversation history
|
||||
|
||||
**Use Cases:**
|
||||
- Iteration limiting
|
||||
- Cost tracking and token usage monitoring
|
||||
- Response sanitization and content filtering
|
||||
- Human-in-the-loop approval for LLM calls
|
||||
- Adding safety guidelines or context
|
||||
- Debug logging and request/response inspection
|
||||
|
||||
[View LLM Hooks Documentation →](/learn/llm-hooks)
|
||||
|
||||
### 2. [Tool Call Hooks](/learn/tool-hooks)
|
||||
|
||||
Control and monitor tool execution:
|
||||
- **Before Tool Call**: Modify inputs, validate parameters, block dangerous operations
|
||||
- **After Tool Call**: Transform results, sanitize outputs, log execution details
|
||||
|
||||
**Use Cases:**
|
||||
- Safety guardrails for destructive operations
|
||||
- Human approval for sensitive actions
|
||||
- Input validation and sanitization
|
||||
- Result caching and rate limiting
|
||||
- Tool usage analytics
|
||||
- Debug logging and monitoring
|
||||
|
||||
[View Tool Hooks Documentation →](/learn/tool-hooks)
|
||||
|
||||
## Hook Registration Methods
|
||||
|
||||
### 1. Decorator-Based Hooks (Recommended)
|
||||
|
||||
The cleanest and most Pythonic way to register hooks:
|
||||
|
||||
```python
|
||||
from crewai.hooks import before_llm_call, after_llm_call, before_tool_call, after_tool_call
|
||||
|
||||
@before_llm_call
|
||||
def limit_iterations(context):
|
||||
"""Prevent infinite loops by limiting iterations."""
|
||||
if context.iterations > 10:
|
||||
return False # Block execution
|
||||
return None
|
||||
|
||||
@after_llm_call
|
||||
def sanitize_response(context):
|
||||
"""Remove sensitive data from LLM responses."""
|
||||
if "API_KEY" in context.response:
|
||||
return context.response.replace("API_KEY", "[REDACTED]")
|
||||
return None
|
||||
|
||||
@before_tool_call
|
||||
def block_dangerous_tools(context):
|
||||
"""Block destructive operations."""
|
||||
if context.tool_name == "delete_database":
|
||||
return False # Block execution
|
||||
return None
|
||||
|
||||
@after_tool_call
|
||||
def log_tool_result(context):
|
||||
"""Log tool execution."""
|
||||
print(f"Tool {context.tool_name} completed")
|
||||
return None
|
||||
```
|
||||
|
||||
### 2. Crew-Scoped Hooks
|
||||
|
||||
Apply hooks only to specific crew instances:
|
||||
|
||||
```python
|
||||
from crewai import Crew, Process
from crewai.project import CrewBase, crew
|
||||
from crewai.hooks import before_llm_call_crew, after_tool_call_crew
|
||||
|
||||
@CrewBase
|
||||
class MyProjCrew:
|
||||
@before_llm_call_crew
|
||||
def validate_inputs(self, context):
|
||||
# Only applies to this crew
|
||||
print(f"LLM call in {self.__class__.__name__}")
|
||||
return None
|
||||
|
||||
@after_tool_call_crew
|
||||
def log_results(self, context):
|
||||
# Crew-specific logging
|
||||
print(f"Tool result: {context.tool_result[:50]}...")
|
||||
return None
|
||||
|
||||
@crew
|
||||
def crew(self) -> Crew:
|
||||
return Crew(
|
||||
agents=self.agents,
|
||||
tasks=self.tasks,
|
||||
process=Process.sequential
|
||||
)
|
||||
```
|
||||
|
||||
## Hook Execution Flow
|
||||
|
||||
### LLM Call Flow
|
||||
|
||||
```
|
||||
Agent needs to call LLM
|
||||
↓
|
||||
[Before LLM Call Hooks Execute]
|
||||
├→ Hook 1: Validate iteration count
|
||||
├→ Hook 2: Add safety context
|
||||
└→ Hook 3: Log request
|
||||
↓
|
||||
If any hook returns False:
|
||||
├→ Block LLM call
|
||||
└→ Raise ValueError
|
||||
↓
|
||||
If all hooks return True/None:
|
||||
├→ LLM call proceeds
|
||||
└→ Response generated
|
||||
↓
|
||||
[After LLM Call Hooks Execute]
|
||||
├→ Hook 1: Sanitize response
|
||||
├→ Hook 2: Log response
|
||||
└→ Hook 3: Update metrics
|
||||
↓
|
||||
Final response returned
|
||||
```
|
||||
|
||||
### Tool Call Flow
|
||||
|
||||
```
|
||||
Agent needs to execute tool
|
||||
↓
|
||||
[Before Tool Call Hooks Execute]
|
||||
├→ Hook 1: Check if tool is allowed
|
||||
├→ Hook 2: Validate inputs
|
||||
└→ Hook 3: Request approval if needed
|
||||
↓
|
||||
If any hook returns False:
|
||||
├→ Block tool execution
|
||||
└→ Return error message
|
||||
↓
|
||||
If all hooks return True/None:
|
||||
├→ Tool execution proceeds
|
||||
└→ Result generated
|
||||
↓
|
||||
[After Tool Call Hooks Execute]
|
||||
├→ Hook 1: Sanitize result
|
||||
├→ Hook 2: Cache result
|
||||
└→ Hook 3: Log metrics
|
||||
↓
|
||||
Final result returned
|
||||
```
|
||||
|
||||
## Hook Context Objects
|
||||
|
||||
### LLMCallHookContext
|
||||
|
||||
Provides access to LLM execution state:
|
||||
|
||||
```python
|
||||
class LLMCallHookContext:
|
||||
executor: CrewAgentExecutor # Full executor access
|
||||
messages: list # Mutable message list
|
||||
agent: Agent # Current agent
|
||||
task: Task # Current task
|
||||
crew: Crew # Crew instance
|
||||
llm: BaseLLM # LLM instance
|
||||
iterations: int # Current iteration
|
||||
response: str | None # LLM response (after hooks)
|
||||
```
|
||||
|
||||
### ToolCallHookContext
|
||||
|
||||
Provides access to tool execution state:
|
||||
|
||||
```python
|
||||
class ToolCallHookContext:
|
||||
tool_name: str # Tool being called
|
||||
tool_input: dict # Mutable input parameters
|
||||
tool: CrewStructuredTool # Tool instance
|
||||
agent: Agent | None # Agent executing
|
||||
task: Task | None # Current task
|
||||
crew: Crew | None # Crew instance
|
||||
tool_result: str | None # Tool result (after hooks)
|
||||
```
|
||||
|
||||
## Common Patterns
|
||||
|
||||
### Safety and Validation
|
||||
|
||||
```python
|
||||
@before_tool_call
|
||||
def safety_check(context):
|
||||
"""Block destructive operations."""
|
||||
dangerous = ['delete_file', 'drop_table', 'system_shutdown']
|
||||
if context.tool_name in dangerous:
|
||||
print(f"🛑 Blocked: {context.tool_name}")
|
||||
return False
|
||||
return None
|
||||
|
||||
@before_llm_call
|
||||
def iteration_limit(context):
|
||||
"""Prevent infinite loops."""
|
||||
if context.iterations > 15:
|
||||
print("⛔ Maximum iterations exceeded")
|
||||
return False
|
||||
return None
|
||||
```
|
||||
|
||||
### Human-in-the-Loop
|
||||
|
||||
```python
|
||||
@before_tool_call
|
||||
def require_approval(context):
|
||||
"""Require approval for sensitive operations."""
|
||||
sensitive = ['send_email', 'make_payment', 'post_message']
|
||||
|
||||
if context.tool_name in sensitive:
|
||||
response = context.request_human_input(
|
||||
prompt=f"Approve {context.tool_name}?",
|
||||
default_message="Type 'yes' to approve:"
|
||||
)
|
||||
|
||||
if response.lower() != 'yes':
|
||||
return False
|
||||
|
||||
return None
|
||||
```
|
||||
|
||||
### Monitoring and Analytics
|
||||
|
||||
```python
|
||||
from collections import defaultdict
|
||||
import time
|
||||
|
||||
metrics = defaultdict(lambda: {'count': 0, 'total_time': 0})
|
||||
|
||||
@before_tool_call
|
||||
def start_timer(context):
|
||||
context.tool_input['_start'] = time.time()
|
||||
return None
|
||||
|
||||
@after_tool_call
|
||||
def track_metrics(context):
|
||||
start = context.tool_input.get('_start', time.time())
|
||||
duration = time.time() - start
|
||||
|
||||
metrics[context.tool_name]['count'] += 1
|
||||
metrics[context.tool_name]['total_time'] += duration
|
||||
|
||||
return None
|
||||
|
||||
# View metrics
|
||||
def print_metrics():
|
||||
for tool, data in metrics.items():
|
||||
avg = data['total_time'] / data['count']
|
||||
print(f"{tool}: {data['count']} calls, {avg:.2f}s avg")
|
||||
```
|
||||
|
||||
### Response Sanitization
|
||||
|
||||
```python
|
||||
import re
|
||||
|
||||
@after_llm_call
|
||||
def sanitize_llm_response(context):
|
||||
"""Remove sensitive data from LLM responses."""
|
||||
if not context.response:
|
||||
return None
|
||||
|
||||
result = context.response
|
||||
result = re.sub(r'(api[_-]?key)["\']?\s*[:=]\s*["\']?[\w-]+',
|
||||
r'\1: [REDACTED]', result, flags=re.IGNORECASE)
|
||||
return result
|
||||
|
||||
@after_tool_call
|
||||
def sanitize_tool_result(context):
|
||||
"""Remove sensitive data from tool results."""
|
||||
if not context.tool_result:
|
||||
return None
|
||||
|
||||
result = context.tool_result
|
||||
result = re.sub(r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b',
|
||||
'[EMAIL-REDACTED]', result)
|
||||
return result
|
||||
```
|
||||
|
||||
## Hook Management
|
||||
|
||||
### Clearing All Hooks
|
||||
|
||||
```python
|
||||
from crewai.hooks import clear_all_global_hooks
|
||||
|
||||
# Clear all hooks at once
|
||||
result = clear_all_global_hooks()
|
||||
print(f"Cleared {result['total']} hooks")
|
||||
# Output: {'llm_hooks': (2, 1), 'tool_hooks': (1, 2), 'total': (3, 3)}
|
||||
```
|
||||
|
||||
### Clearing Specific Hook Types
|
||||
|
||||
```python
|
||||
from crewai.hooks import (
|
||||
clear_before_llm_call_hooks,
|
||||
clear_after_llm_call_hooks,
|
||||
clear_before_tool_call_hooks,
|
||||
clear_after_tool_call_hooks
|
||||
)
|
||||
|
||||
# Clear specific types
|
||||
llm_before_count = clear_before_llm_call_hooks()
|
||||
tool_after_count = clear_after_tool_call_hooks()
|
||||
```
|
||||
|
||||
### Unregistering Individual Hooks
|
||||
|
||||
```python
|
||||
from crewai.hooks import (
|
||||
register_before_llm_call_hook,
unregister_before_llm_call_hook,
|
||||
unregister_after_tool_call_hook
|
||||
)
|
||||
|
||||
def my_hook(context):
|
||||
...
|
||||
|
||||
# Register
|
||||
register_before_llm_call_hook(my_hook)
|
||||
|
||||
# Later, unregister
|
||||
success = unregister_before_llm_call_hook(my_hook)
|
||||
print(f"Unregistered: {success}")
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 1. Keep Hooks Focused
|
||||
Each hook should have a single, clear responsibility:
|
||||
|
||||
```python
|
||||
# ✅ Good - focused responsibility
|
||||
@before_tool_call
|
||||
def validate_file_path(context):
|
||||
if context.tool_name == 'read_file':
|
||||
if '..' in context.tool_input.get('path', ''):
|
||||
return False
|
||||
return None
|
||||
|
||||
# ❌ Bad - too many responsibilities
|
||||
@before_tool_call
|
||||
def do_everything(context):
|
||||
# Validation + logging + metrics + approval...
|
||||
...
|
||||
```
|
||||
|
||||
### 2. Handle Errors Gracefully
|
||||
|
||||
```python
|
||||
@before_llm_call
|
||||
def safe_hook(context):
|
||||
try:
|
||||
# Your logic
|
||||
if some_condition:
|
||||
return False
|
||||
except Exception as e:
|
||||
print(f"Hook error: {e}")
|
||||
return None # Allow execution despite error
|
||||
```
|
||||
|
||||
### 3. Modify Context In-Place
|
||||
|
||||
```python
|
||||
# ✅ Correct - modify in-place
|
||||
@before_llm_call
|
||||
def add_context(context):
|
||||
context.messages.append({"role": "system", "content": "Be concise"})
|
||||
|
||||
# ❌ Wrong - replaces reference
|
||||
@before_llm_call
|
||||
def wrong_approach(context):
|
||||
context.messages = [{"role": "system", "content": "Be concise"}]
|
||||
```
|
||||
|
||||
### 4. Use Type Hints
|
||||
|
||||
```python
|
||||
from crewai.hooks import LLMCallHookContext, ToolCallHookContext
|
||||
|
||||
def my_llm_hook(context: LLMCallHookContext) -> bool | None:
|
||||
# IDE autocomplete and type checking
|
||||
return None
|
||||
|
||||
def my_tool_hook(context: ToolCallHookContext) -> str | None:
|
||||
return None
|
||||
```
|
||||
|
||||
### 5. Clean Up in Tests
|
||||
|
||||
```python
|
||||
import pytest
|
||||
from crewai.hooks import clear_all_global_hooks
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def clean_hooks():
|
||||
"""Reset hooks before each test."""
|
||||
yield
|
||||
clear_all_global_hooks()
|
||||
```
|
||||
|
||||
## When to Use Which Hook
|
||||
|
||||
### Use LLM Hooks When:
|
||||
- Implementing iteration limits
|
||||
- Adding context or safety guidelines to prompts
|
||||
- Tracking token usage and costs
|
||||
- Sanitizing or transforming responses
|
||||
- Implementing approval gates for LLM calls
|
||||
- Debugging prompt/response interactions
|
||||
|
||||
### Use Tool Hooks When:
|
||||
- Blocking dangerous or destructive operations
|
||||
- Validating tool inputs before execution
|
||||
- Implementing approval gates for sensitive actions
|
||||
- Caching tool results
|
||||
- Tracking tool usage and performance
|
||||
- Sanitizing tool outputs
|
||||
- Rate limiting tool calls
|
||||
|
||||
### Use Both When:
|
||||
Building comprehensive observability, safety, or approval systems that need to monitor all agent operations.
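For example, a minimal sketch of a shared audit trail that registers one hook of each type (the `audit_log` list stands in for whatever sink you actually use):

```python
from crewai.hooks import after_llm_call, after_tool_call

audit_log: list[dict] = []

@after_llm_call
def record_llm_call(context):
    """Append a lightweight record of every LLM call."""
    audit_log.append({
        "kind": "llm",
        "agent": context.agent.role,
        "iteration": context.iterations,
    })
    return None

@after_tool_call
def record_tool_call(context):
    """Append a lightweight record of every tool call."""
    audit_log.append({"kind": "tool", "tool": context.tool_name})
    return None
```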
|
||||
|
||||
## Alternative Registration Methods
|
||||
|
||||
### Programmatic Registration (Advanced)
|
||||
|
||||
For dynamic hook registration or when you need to register hooks programmatically:
|
||||
|
||||
```python
|
||||
from crewai.hooks import (
|
||||
register_before_llm_call_hook,
|
||||
register_after_tool_call_hook
|
||||
)
|
||||
|
||||
def my_hook(context):
|
||||
return None
|
||||
|
||||
# Register programmatically
|
||||
register_before_llm_call_hook(my_hook)
|
||||
|
||||
# Useful for:
|
||||
# - Loading hooks from configuration
|
||||
# - Conditional hook registration
|
||||
# - Plugin systems
|
||||
```
|
||||
|
||||
**Note:** For most use cases, decorators are cleaner and more maintainable.
|
||||
|
||||
## Performance Considerations
|
||||
|
||||
1. **Keep Hooks Fast**: Hooks execute on every call - avoid heavy computation
|
||||
2. **Cache When Possible**: Store expensive validations or lookups (see the sketch after this list)
|
||||
3. **Be Selective**: Use crew-scoped hooks when global hooks aren't needed
|
||||
4. **Monitor Hook Overhead**: Profile hook execution time in production
|
||||
5. **Lazy Import**: Import heavy dependencies only when needed
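As an illustration of points 1 and 2, a minimal sketch that memoizes an expensive allow-list lookup so the hook itself stays cheap (the blocked tool names are illustrative):

```python
from functools import lru_cache

from crewai.hooks import before_tool_call

@lru_cache(maxsize=256)
def is_tool_allowed(tool_name: str) -> bool:
    """Stand-in for an expensive lookup (config file, policy service, etc.)."""
    return tool_name not in {"delete_database", "drop_table"}

@before_tool_call
def enforce_allow_list(context):
    if not is_tool_allowed(context.tool_name):
        return False  # Block execution
    return None
```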
|
||||
|
||||
## Debugging Hooks
|
||||
|
||||
### Enable Debug Logging
|
||||
|
||||
```python
|
||||
import logging
|
||||
|
||||
logging.basicConfig(level=logging.DEBUG)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@before_llm_call
|
||||
def debug_hook(context):
|
||||
logger.debug(f"LLM call: {context.agent.role}, iteration {context.iterations}")
|
||||
return None
|
||||
```
|
||||
|
||||
### Hook Execution Order
|
||||
|
||||
Hooks execute in registration order. If a before hook returns `False`, subsequent hooks don't execute:
|
||||
|
||||
```python
|
||||
# Register order matters!
|
||||
register_before_tool_call_hook(hook1) # Executes first
|
||||
register_before_tool_call_hook(hook2) # Executes second
|
||||
register_before_tool_call_hook(hook3) # Executes third
|
||||
|
||||
# If hook2 returns False:
|
||||
# - hook1 executed
|
||||
# - hook2 executed and returned False
|
||||
# - hook3 NOT executed
|
||||
# - Tool call blocked
|
||||
```
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [LLM Call Hooks →](/learn/llm-hooks) - Detailed LLM hook documentation
|
||||
- [Tool Call Hooks →](/learn/tool-hooks) - Detailed tool hook documentation
|
||||
- [Before and After Kickoff Hooks →](/learn/before-and-after-kickoff-hooks) - Crew lifecycle hooks
|
||||
- [Human-in-the-Loop →](/learn/human-in-the-loop) - Human input patterns
|
||||
|
||||
## Conclusion
|
||||
|
||||
Execution hooks provide powerful control over agent runtime behavior. Use them to implement safety guardrails, approval workflows, comprehensive monitoring, and custom business logic. Combined with proper error handling, type safety, and performance considerations, hooks enable production-ready, secure, and observable agent systems.
|
||||
@@ -97,7 +97,7 @@ project_crew = Crew(
|
||||
```
|
||||
|
||||
<Tip>
|
||||
-For more details on creating and customizing a manager agent, check out the [Custom Manager Agent documentation](https://docs.crewai.com/how-to/custom-manager-agent#custom-manager-agent).
+For more details on creating and customizing a manager agent, check out the [Custom Manager Agent documentation](/en/learn/custom-manager-agent).
|
||||
</Tip>
|
||||
|
||||
|
||||
|
||||
581
docs/en/learn/human-feedback-in-flows.mdx
Normal file
@@ -0,0 +1,581 @@
|
||||
---
|
||||
title: Human Feedback in Flows
|
||||
description: Learn how to integrate human feedback directly into your CrewAI Flows using the @human_feedback decorator
|
||||
icon: user-check
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
The `@human_feedback` decorator enables human-in-the-loop (HITL) workflows directly within CrewAI Flows. It allows you to pause flow execution, present output to a human for review, collect their feedback, and optionally route to different listeners based on the feedback outcome.
|
||||
|
||||
This is particularly valuable for:
|
||||
|
||||
- **Quality assurance**: Review AI-generated content before it's used downstream
|
||||
- **Decision gates**: Let humans make critical decisions in automated workflows
|
||||
- **Approval workflows**: Implement approve/reject/revise patterns
|
||||
- **Interactive refinement**: Collect feedback to improve outputs iteratively
|
||||
|
||||
```mermaid
|
||||
flowchart LR
|
||||
A[Flow Method] --> B[Output Generated]
|
||||
B --> C[Human Reviews]
|
||||
C --> D{Feedback}
|
||||
D -->|emit specified| E[LLM Collapses to Outcome]
|
||||
D -->|no emit| F[HumanFeedbackResult]
|
||||
E --> G["@listen('approved')"]
|
||||
E --> H["@listen('rejected')"]
|
||||
F --> I[Next Listener]
|
||||
```
|
||||
|
||||
## Quick Start
|
||||
|
||||
Here's the simplest way to add human feedback to a flow:
|
||||
|
||||
```python Code
|
||||
from crewai.flow.flow import Flow, start, listen
|
||||
from crewai.flow.human_feedback import human_feedback
|
||||
|
||||
class SimpleReviewFlow(Flow):
|
||||
@start()
|
||||
@human_feedback(message="Please review this content:")
|
||||
def generate_content(self):
|
||||
return "This is AI-generated content that needs review."
|
||||
|
||||
@listen(generate_content)
|
||||
def process_feedback(self, result):
|
||||
print(f"Content: {result.output}")
|
||||
print(f"Human said: {result.feedback}")
|
||||
|
||||
flow = SimpleReviewFlow()
|
||||
flow.kickoff()
|
||||
```
|
||||
|
||||
When this flow runs, it will:
|
||||
1. Execute `generate_content` and return the string
|
||||
2. Display the output to the user with the request message
|
||||
3. Wait for the user to type feedback (or press Enter to skip)
|
||||
4. Pass a `HumanFeedbackResult` object to `process_feedback`
|
||||
|
||||
## The @human_feedback Decorator
|
||||
|
||||
### Parameters
|
||||
|
||||
| Parameter | Type | Required | Description |
|
||||
|-----------|------|----------|-------------|
|
||||
| `message` | `str` | Yes | The message shown to the human alongside the method output |
|
||||
| `emit` | `Sequence[str]` | No | List of possible outcomes. Feedback is collapsed to one of these, which triggers `@listen` decorators |
|
||||
| `llm` | `str \| BaseLLM` | When `emit` specified | LLM used to interpret feedback and map to an outcome |
|
||||
| `default_outcome` | `str` | No | Outcome to use if no feedback provided. Must be in `emit` |
|
||||
| `metadata` | `dict` | No | Additional data for enterprise integrations |
|
||||
| `provider` | `HumanFeedbackProvider` | No | Custom provider for async/non-blocking feedback. See [Async Human Feedback](#async-human-feedback-non-blocking) |
|
||||
|
||||
### Basic Usage (No Routing)
|
||||
|
||||
When you don't specify `emit`, the decorator simply collects feedback and passes a `HumanFeedbackResult` to the next listener:
|
||||
|
||||
```python Code
|
||||
@start()
|
||||
@human_feedback(message="What do you think of this analysis?")
|
||||
def analyze_data(self):
|
||||
return "Analysis results: Revenue up 15%, costs down 8%"
|
||||
|
||||
@listen(analyze_data)
|
||||
def handle_feedback(self, result):
|
||||
# result is a HumanFeedbackResult
|
||||
print(f"Analysis: {result.output}")
|
||||
print(f"Feedback: {result.feedback}")
|
||||
```
|
||||
|
||||
### Routing with emit
|
||||
|
||||
When you specify `emit`, the decorator becomes a router. The human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes:
|
||||
|
||||
```python Code
|
||||
@start()
|
||||
@human_feedback(
|
||||
message="Do you approve this content for publication?",
|
||||
emit=["approved", "rejected", "needs_revision"],
|
||||
llm="gpt-4o-mini",
|
||||
default_outcome="needs_revision",
|
||||
)
|
||||
def review_content(self):
|
||||
return "Draft blog post content here..."
|
||||
|
||||
@listen("approved")
|
||||
def publish(self, result):
|
||||
print(f"Publishing! User said: {result.feedback}")
|
||||
|
||||
@listen("rejected")
|
||||
def discard(self, result):
|
||||
print(f"Discarding. Reason: {result.feedback}")
|
||||
|
||||
@listen("needs_revision")
|
||||
def revise(self, result):
|
||||
print(f"Revising based on: {result.feedback}")
|
||||
```
|
||||
|
||||
<Tip>
|
||||
The LLM uses structured outputs (function calling) when available to guarantee the response is one of your specified outcomes. This makes routing reliable and predictable.
|
||||
</Tip>
|
||||
|
||||
## HumanFeedbackResult
|
||||
|
||||
The `HumanFeedbackResult` dataclass contains all information about a human feedback interaction:
|
||||
|
||||
```python Code
|
||||
from crewai.flow.human_feedback import HumanFeedbackResult
|
||||
|
||||
@dataclass
|
||||
class HumanFeedbackResult:
|
||||
output: Any # The original method output shown to the human
|
||||
feedback: str # The raw feedback text from the human
|
||||
outcome: str | None # The collapsed outcome (if emit was specified)
|
||||
timestamp: datetime # When the feedback was received
|
||||
method_name: str # Name of the decorated method
|
||||
metadata: dict # Any metadata passed to the decorator
|
||||
```
|
||||
|
||||
### Accessing in Listeners
|
||||
|
||||
When a listener is triggered by a `@human_feedback` method with `emit`, it receives the `HumanFeedbackResult`:
|
||||
|
||||
```python Code
|
||||
@listen("approved")
|
||||
def on_approval(self, result: HumanFeedbackResult):
|
||||
print(f"Original output: {result.output}")
|
||||
print(f"User feedback: {result.feedback}")
|
||||
print(f"Outcome: {result.outcome}") # "approved"
|
||||
print(f"Received at: {result.timestamp}")
|
||||
```
|
||||
|
||||
## Accessing Feedback History
|
||||
|
||||
The `Flow` class provides two attributes for accessing human feedback:
|
||||
|
||||
### last_human_feedback
|
||||
|
||||
Returns the most recent `HumanFeedbackResult`:
|
||||
|
||||
```python Code
|
||||
@listen(some_method)
|
||||
def check_feedback(self):
|
||||
if self.last_human_feedback:
|
||||
print(f"Last feedback: {self.last_human_feedback.feedback}")
|
||||
```
|
||||
|
||||
### human_feedback_history
|
||||
|
||||
A list of all `HumanFeedbackResult` objects collected during the flow:
|
||||
|
||||
```python Code
|
||||
@listen(final_step)
|
||||
def summarize(self):
|
||||
print(f"Total feedback collected: {len(self.human_feedback_history)}")
|
||||
for i, fb in enumerate(self.human_feedback_history):
|
||||
print(f"{i+1}. {fb.method_name}: {fb.outcome or 'no routing'}")
|
||||
```
|
||||
|
||||
<Warning>
|
||||
Each `HumanFeedbackResult` is appended to `human_feedback_history`, so multiple feedback steps won't overwrite each other. Use this list to access all feedback collected during the flow.
|
||||
</Warning>
|
||||
|
||||
## Complete Example: Content Approval Workflow
|
||||
|
||||
Here's a full example implementing a content review and approval workflow:
|
||||
|
||||
<CodeGroup>
|
||||
|
||||
```python Code
|
||||
from crewai.flow.flow import Flow, start, listen
|
||||
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult
|
||||
from pydantic import BaseModel
|
||||
|
||||
|
||||
class ContentState(BaseModel):
|
||||
topic: str = ""
|
||||
draft: str = ""
|
||||
final_content: str = ""
|
||||
revision_count: int = 0
|
||||
|
||||
|
||||
class ContentApprovalFlow(Flow[ContentState]):
|
||||
"""A flow that generates content and gets human approval."""
|
||||
|
||||
@start()
|
||||
def get_topic(self):
|
||||
self.state.topic = input("What topic should I write about? ")
|
||||
return self.state.topic
|
||||
|
||||
@listen(get_topic)
|
||||
def generate_draft(self, topic):
|
||||
# In real use, this would call an LLM
|
||||
self.state.draft = f"# {topic}\n\nThis is a draft about {topic}..."
|
||||
return self.state.draft
|
||||
|
||||
@listen(generate_draft)
|
||||
@human_feedback(
|
||||
message="Please review this draft. Reply 'approved', 'rejected', or provide revision feedback:",
|
||||
emit=["approved", "rejected", "needs_revision"],
|
||||
llm="gpt-4o-mini",
|
||||
default_outcome="needs_revision",
|
||||
)
|
||||
def review_draft(self, draft):
|
||||
return draft
|
||||
|
||||
@listen("approved")
|
||||
def publish_content(self, result: HumanFeedbackResult):
|
||||
self.state.final_content = result.output
|
||||
print("\n✅ Content approved and published!")
|
||||
print(f"Reviewer comment: {result.feedback}")
|
||||
return "published"
|
||||
|
||||
@listen("rejected")
|
||||
def handle_rejection(self, result: HumanFeedbackResult):
|
||||
print("\n❌ Content rejected")
|
||||
print(f"Reason: {result.feedback}")
|
||||
return "rejected"
|
||||
|
||||
@listen("needs_revision")
|
||||
def revise_content(self, result: HumanFeedbackResult):
|
||||
self.state.revision_count += 1
|
||||
print(f"\n📝 Revision #{self.state.revision_count} requested")
|
||||
print(f"Feedback: {result.feedback}")
|
||||
|
||||
# In a real flow, you might loop back to generate_draft
|
||||
# For this example, we just acknowledge
|
||||
return "revision_requested"
|
||||
|
||||
|
||||
# Run the flow
|
||||
flow = ContentApprovalFlow()
|
||||
result = flow.kickoff()
|
||||
print(f"\nFlow completed. Revisions requested: {flow.state.revision_count}")
|
||||
```
|
||||
|
||||
```text Output
|
||||
What topic should I write about? AI Safety
|
||||
|
||||
==================================================
|
||||
OUTPUT FOR REVIEW:
|
||||
==================================================
|
||||
# AI Safety
|
||||
|
||||
This is a draft about AI Safety...
|
||||
==================================================
|
||||
|
||||
Please review this draft. Reply 'approved', 'rejected', or provide revision feedback:
|
||||
(Press Enter to skip, or type your feedback)
|
||||
|
||||
Your feedback: Looks good, approved!
|
||||
|
||||
✅ Content approved and published!
|
||||
Reviewer comment: Looks good, approved!
|
||||
|
||||
Flow completed. Revisions requested: 0
|
||||
```
|
||||
|
||||
</CodeGroup>
|
||||
|
||||
## Combining with Other Decorators
|
||||
|
||||
The `@human_feedback` decorator works with other flow decorators. Place it as the innermost decorator (closest to the function):
|
||||
|
||||
```python Code
|
||||
# Correct: @human_feedback is innermost (closest to the function)
|
||||
@start()
|
||||
@human_feedback(message="Review this:")
|
||||
def my_start_method(self):
|
||||
return "content"
|
||||
|
||||
@listen(other_method)
|
||||
@human_feedback(message="Review this too:")
|
||||
def my_listener(self, data):
|
||||
return f"processed: {data}"
|
||||
```
|
||||
|
||||
<Tip>
|
||||
Place `@human_feedback` as the innermost decorator (last/closest to the function) so it wraps the method directly and can capture the return value before passing to the flow system.
|
||||
</Tip>
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 1. Write Clear Request Messages
|
||||
|
||||
The `message` parameter is what the human sees. Make it actionable:
|
||||
|
||||
```python Code
|
||||
# ✅ Good - clear and actionable
|
||||
@human_feedback(message="Does this summary accurately capture the key points? Reply 'yes' or explain what's missing:")
|
||||
|
||||
# ❌ Bad - vague
|
||||
@human_feedback(message="Review this:")
|
||||
```
|
||||
|
||||
### 2. Choose Meaningful Outcomes
|
||||
|
||||
When using `emit`, pick outcomes that map naturally to human responses:
|
||||
|
||||
```python Code
|
||||
# ✅ Good - natural language outcomes
|
||||
emit=["approved", "rejected", "needs_more_detail"]
|
||||
|
||||
# ❌ Bad - technical or unclear
|
||||
emit=["state_1", "state_2", "state_3"]
|
||||
```
|
||||
|
||||
### 3. Always Provide a Default Outcome
|
||||
|
||||
Use `default_outcome` to handle cases where users press Enter without typing:
|
||||
|
||||
```python Code
|
||||
@human_feedback(
|
||||
message="Approve? (press Enter to request revision)",
|
||||
emit=["approved", "needs_revision"],
|
||||
llm="gpt-4o-mini",
|
||||
default_outcome="needs_revision", # Safe default
|
||||
)
|
||||
```
|
||||
|
||||
### 4. Use Feedback History for Audit Trails
|
||||
|
||||
Access `human_feedback_history` to create audit logs:
|
||||
|
||||
```python Code
|
||||
@listen(final_step)
|
||||
def create_audit_log(self):
|
||||
log = []
|
||||
for fb in self.human_feedback_history:
|
||||
log.append({
|
||||
"step": fb.method_name,
|
||||
"outcome": fb.outcome,
|
||||
"feedback": fb.feedback,
|
||||
"timestamp": fb.timestamp.isoformat(),
|
||||
})
|
||||
return log
|
||||
```
|
||||
|
||||
### 5. Handle Both Routed and Non-Routed Feedback
|
||||
|
||||
When designing flows, consider whether you need routing:
|
||||
|
||||
| Scenario | Use |
|
||||
|----------|-----|
|
||||
| Simple review, just need the feedback text | No `emit` |
|
||||
| Need to branch to different paths based on response | Use `emit` |
|
||||
| Approval gates with approve/reject/revise | Use `emit` |
|
||||
| Collecting comments for logging only | No `emit` |
|
||||
|
||||
## Async Human Feedback (Non-Blocking)
|
||||
|
||||
By default, `@human_feedback` blocks execution waiting for console input. For production applications, you may need **async/non-blocking** feedback that integrates with external systems like Slack, email, webhooks, or APIs.
|
||||
|
||||
### The Provider Abstraction
|
||||
|
||||
Use the `provider` parameter to specify a custom feedback collection strategy:
|
||||
|
||||
```python Code
|
||||
from crewai.flow import Flow, start, human_feedback, HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext
|
||||
|
||||
class WebhookProvider(HumanFeedbackProvider):
|
||||
"""Provider that pauses flow and waits for webhook callback."""
|
||||
|
||||
def __init__(self, webhook_url: str):
|
||||
self.webhook_url = webhook_url
|
||||
|
||||
def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
|
||||
# Notify external system (e.g., send Slack message, create ticket)
|
||||
self.send_notification(context)
|
||||
|
||||
# Pause execution - framework handles persistence automatically
|
||||
raise HumanFeedbackPending(
|
||||
context=context,
|
||||
callback_info={"webhook_url": f"{self.webhook_url}/{context.flow_id}"}
|
||||
)
|
||||
|
||||
class ReviewFlow(Flow):
|
||||
@start()
|
||||
@human_feedback(
|
||||
message="Review this content:",
|
||||
emit=["approved", "rejected"],
|
||||
llm="gpt-4o-mini",
|
||||
provider=WebhookProvider("https://myapp.com/api"),
|
||||
)
|
||||
def generate_content(self):
|
||||
return "AI-generated content..."
|
||||
|
||||
@listen("approved")
|
||||
def publish(self, result):
|
||||
return "Published!"
|
||||
```
|
||||
|
||||
<Tip>
|
||||
The flow framework **automatically persists state** when `HumanFeedbackPending` is raised. Your provider only needs to notify the external system and raise the exception—no manual persistence calls required.
|
||||
</Tip>
|
||||
|
||||
### Handling Paused Flows
|
||||
|
||||
When using an async provider, `kickoff()` returns a `HumanFeedbackPending` object instead of raising an exception:
|
||||
|
||||
```python Code
|
||||
flow = ReviewFlow()
|
||||
result = flow.kickoff()
|
||||
|
||||
if isinstance(result, HumanFeedbackPending):
|
||||
# Flow is paused, state is automatically persisted
|
||||
print(f"Waiting for feedback at: {result.callback_info['webhook_url']}")
|
||||
print(f"Flow ID: {result.context.flow_id}")
|
||||
else:
|
||||
# Normal completion
|
||||
print(f"Flow completed: {result}")
|
||||
```
|
||||
|
||||
### Resuming a Paused Flow
|
||||
|
||||
When feedback arrives (e.g., via webhook), resume the flow:
|
||||
|
||||
```python Code
|
||||
# Sync handler:
|
||||
def handle_feedback_webhook(flow_id: str, feedback: str):
|
||||
flow = ReviewFlow.from_pending(flow_id)
|
||||
result = flow.resume(feedback)
|
||||
return result
|
||||
|
||||
# Async handler (FastAPI, aiohttp, etc.):
|
||||
async def handle_feedback_webhook(flow_id: str, feedback: str):
|
||||
flow = ReviewFlow.from_pending(flow_id)
|
||||
result = await flow.resume_async(feedback)
|
||||
return result
|
||||
```
|
||||
|
||||
### Key Types
|
||||
|
||||
| Type | Description |
|
||||
|------|-------------|
|
||||
| `HumanFeedbackProvider` | Protocol for custom feedback providers |
|
||||
| `PendingFeedbackContext` | Contains all info needed to resume a paused flow |
|
||||
| `HumanFeedbackPending` | Returned by `kickoff()` when flow is paused for feedback |
|
||||
| `ConsoleProvider` | Default blocking console input provider |
|
||||
|
||||
### PendingFeedbackContext
|
||||
|
||||
The context contains everything needed to resume:
|
||||
|
||||
```python Code
|
||||
@dataclass
|
||||
class PendingFeedbackContext:
|
||||
flow_id: str # Unique identifier for this flow execution
|
||||
flow_class: str # Fully qualified class name
|
||||
method_name: str # Method that triggered feedback
|
||||
method_output: Any # Output shown to the human
|
||||
message: str # The request message
|
||||
emit: list[str] | None # Possible outcomes for routing
|
||||
default_outcome: str | None
|
||||
metadata: dict # Custom metadata
|
||||
llm: str | None # LLM for outcome collapsing
|
||||
requested_at: datetime
|
||||
```
|
||||
|
||||
### Complete Async Flow Example
|
||||
|
||||
```python Code
|
||||
from crewai.flow import (
|
||||
Flow, start, listen, human_feedback,
|
||||
HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext
|
||||
)
|
||||
|
||||
class SlackNotificationProvider(HumanFeedbackProvider):
|
||||
"""Provider that sends Slack notifications and pauses for async feedback."""
|
||||
|
||||
def __init__(self, channel: str):
|
||||
self.channel = channel
|
||||
|
||||
def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
|
||||
# Send Slack notification (implement your own)
|
||||
slack_thread_id = self.post_to_slack(
|
||||
channel=self.channel,
|
||||
message=f"Review needed:\n\n{context.method_output}\n\n{context.message}",
|
||||
)
|
||||
|
||||
# Pause execution - framework handles persistence automatically
|
||||
raise HumanFeedbackPending(
|
||||
context=context,
|
||||
callback_info={
|
||||
"slack_channel": self.channel,
|
||||
"thread_id": slack_thread_id,
|
||||
}
|
||||
)
|
||||
|
||||
class ContentPipeline(Flow):
|
||||
@start()
|
||||
@human_feedback(
|
||||
message="Approve this content for publication?",
|
||||
emit=["approved", "rejected", "needs_revision"],
|
||||
llm="gpt-4o-mini",
|
||||
default_outcome="needs_revision",
|
||||
provider=SlackNotificationProvider("#content-reviews"),
|
||||
)
|
||||
def generate_content(self):
|
||||
return "AI-generated blog post content..."
|
||||
|
||||
@listen("approved")
|
||||
def publish(self, result):
|
||||
print(f"Publishing! Reviewer said: {result.feedback}")
|
||||
return {"status": "published"}
|
||||
|
||||
@listen("rejected")
|
||||
def archive(self, result):
|
||||
print(f"Archived. Reason: {result.feedback}")
|
||||
return {"status": "archived"}
|
||||
|
||||
@listen("needs_revision")
|
||||
def queue_revision(self, result):
|
||||
print(f"Queued for revision: {result.feedback}")
|
||||
return {"status": "revision_needed"}
|
||||
|
||||
|
||||
# Starting the flow (will pause and wait for Slack response)
|
||||
def start_content_pipeline():
|
||||
flow = ContentPipeline()
|
||||
result = flow.kickoff()
|
||||
|
||||
if isinstance(result, HumanFeedbackPending):
|
||||
return {"status": "pending", "flow_id": result.context.flow_id}
|
||||
|
||||
return result
|
||||
|
||||
|
||||
# Resuming when Slack webhook fires (sync handler)
|
||||
def on_slack_feedback(flow_id: str, slack_message: str):
|
||||
flow = ContentPipeline.from_pending(flow_id)
|
||||
result = flow.resume(slack_message)
|
||||
return result
|
||||
|
||||
|
||||
# If your handler is async (FastAPI, aiohttp, Slack Bolt async, etc.)
|
||||
async def on_slack_feedback_async(flow_id: str, slack_message: str):
|
||||
flow = ContentPipeline.from_pending(flow_id)
|
||||
result = await flow.resume_async(slack_message)
|
||||
return result
|
||||
```
|
||||
|
||||
<Warning>
|
||||
If you're using an async web framework (FastAPI, aiohttp, Slack Bolt async mode), use `await flow.resume_async()` instead of `flow.resume()`. Calling `resume()` from within a running event loop will raise a `RuntimeError`.
|
||||
</Warning>
|
||||
|
||||
### Best Practices for Async Feedback
|
||||
|
||||
1. **Check the return type**: `kickoff()` returns `HumanFeedbackPending` when paused—no try/except needed
|
||||
2. **Use the right resume method**: Use `resume()` in sync code, `await resume_async()` in async code
|
||||
3. **Store callback info**: Use `callback_info` to store webhook URLs, ticket IDs, etc.
|
||||
4. **Implement idempotency**: Your resume handler should be idempotent for safety (see the sketch after this list)
|
||||
5. **Automatic persistence**: State is automatically saved when `HumanFeedbackPending` is raised and uses `SQLiteFlowPersistence` by default
|
||||
6. **Custom persistence**: Pass a custom persistence instance to `from_pending()` if needed
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Flows Overview](/en/concepts/flows) - Learn about CrewAI Flows
|
||||
- [Flow State Management](/en/guides/flows/mastering-flow-state) - Managing state in flows
|
||||
- [Flow Persistence](/en/concepts/flows#persistence) - Persisting flow state
|
||||
- [Routing with @router](/en/concepts/flows#router) - More about conditional routing
|
||||
- [Human Input on Execution](/en/learn/human-input-on-execution) - Task-level human input
|
||||
@@ -5,9 +5,22 @@ icon: "user-check"
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
-Human-in-the-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. This guide shows you how to implement HITL within CrewAI.
+Human-in-the-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. CrewAI provides multiple ways to implement HITL depending on your needs.
|
||||
|
||||
-## Setting Up HITL Workflows
+## Choosing Your HITL Approach
|
||||
|
||||
CrewAI offers two main approaches for implementing human-in-the-loop workflows:
|
||||
|
||||
| Approach | Best For | Integration |
|
||||
|----------|----------|-------------|
|
||||
| **Flow-based** (`@human_feedback` decorator) | Local development, console-based review, synchronous workflows | [Human Feedback in Flows](/en/learn/human-feedback-in-flows) |
|
||||
| **Webhook-based** (Enterprise) | Production deployments, async workflows, external integrations (Slack, Teams, etc.) | This guide |
|
||||
|
||||
<Tip>
|
||||
If you're building flows and want to add human review steps with routing based on feedback, check out the [Human Feedback in Flows](/en/learn/human-feedback-in-flows) guide for the `@human_feedback` decorator.
|
||||
</Tip>
|
||||
|
||||
## Setting Up Webhook-Based HITL Workflows
|
||||
|
||||
<Steps>
|
||||
<Step title="Configure Your Task">
|
||||
|
||||
@@ -10,14 +10,25 @@ mode: "wide"
|
||||
CrewAI provides the ability to kickoff a crew asynchronously, allowing you to start the crew execution in a non-blocking manner.
|
||||
This feature is particularly useful when you want to run multiple crews concurrently or when you need to perform other tasks while the crew is executing.
|
||||
|
||||
## Asynchronous Crew Execution
|
||||
CrewAI offers two approaches for async execution:
|
||||
|
||||
To kickoff a crew asynchronously, use the `kickoff_async()` method. This method initiates the crew execution in a separate thread, allowing the main thread to continue executing other tasks.
|
||||
| Method | Type | Description |
|
||||
|--------|------|-------------|
|
||||
| `akickoff()` | Native async | True async/await throughout the entire execution chain |
|
||||
| `kickoff_async()` | Thread-based | Wraps synchronous execution in `asyncio.to_thread` |
|
||||
|
||||
<Note>
|
||||
For high-concurrency workloads, `akickoff()` is recommended as it uses native async for task execution, memory operations, and knowledge retrieval.
|
||||
</Note>
|
||||
|
||||
## Native Async Execution with `akickoff()`
|
||||
|
||||
The `akickoff()` method provides true native async execution, using async/await throughout the entire execution chain including task execution, memory operations, and knowledge queries.
|
||||
|
||||
### Method Signature
|
||||
|
||||
```python Code
|
||||
def kickoff_async(self, inputs: dict) -> CrewOutput:
|
||||
async def akickoff(self, inputs: dict) -> CrewOutput:
|
||||
```
|
||||
|
||||
### Parameters
|
||||
@@ -28,23 +39,13 @@ def kickoff_async(self, inputs: dict) -> CrewOutput:
|
||||
|
||||
- `CrewOutput`: An object representing the result of the crew execution.
|
||||
|
||||
## Potential Use Cases
|
||||
|
||||
- **Parallel Content Generation**: Kickoff multiple independent crews asynchronously, each responsible for generating content on different topics. For example, one crew might research and draft an article on AI trends, while another crew generates social media posts about a new product launch. Each crew operates independently, allowing content production to scale efficiently.
|
||||
|
||||
- **Concurrent Market Research Tasks**: Launch multiple crews asynchronously to conduct market research in parallel. One crew might analyze industry trends, while another examines competitor strategies, and yet another evaluates consumer sentiment. Each crew independently completes its task, enabling faster and more comprehensive insights.
|
||||
|
||||
- **Independent Travel Planning Modules**: Execute separate crews to independently plan different aspects of a trip. One crew might handle flight options, another handles accommodation, and a third plans activities. Each crew works asynchronously, allowing various components of the trip to be planned simultaneously and independently for faster results.
|
||||
|
||||
## Example: Single Asynchronous Crew Execution
|
||||
|
||||
Here's an example of how to kickoff a crew asynchronously using asyncio and awaiting the result:
|
||||
### Example: Native Async Crew Execution
|
||||
|
||||
```python Code
|
||||
import asyncio
|
||||
from crewai import Crew, Agent, Task
|
||||
|
||||
# Create an agent with code execution enabled
|
||||
# Create an agent
|
||||
coding_agent = Agent(
|
||||
role="Python Data Analyst",
|
||||
goal="Analyze data and provide insights using Python",
|
||||
@@ -52,37 +53,165 @@ coding_agent = Agent(
|
||||
allow_code_execution=True
|
||||
)
|
||||
|
||||
# Create a task that requires code execution
|
||||
# Create a task
|
||||
data_analysis_task = Task(
|
||||
description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
|
||||
agent=coding_agent,
|
||||
expected_output="The average age of the participants."
|
||||
)
|
||||
|
||||
# Create a crew and add the task
|
||||
# Create a crew
|
||||
analysis_crew = Crew(
|
||||
agents=[coding_agent],
|
||||
tasks=[data_analysis_task]
|
||||
)
|
||||
|
||||
# Async function to kickoff the crew asynchronously
|
||||
async def async_crew_execution():
|
||||
result = await analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
|
||||
# Native async execution
|
||||
async def main():
|
||||
result = await analysis_crew.akickoff(inputs={"ages": [25, 30, 35, 40, 45]})
|
||||
print("Crew Result:", result)
|
||||
|
||||
# Run the async function
|
||||
asyncio.run(async_crew_execution())
|
||||
asyncio.run(main())
|
||||
```
|
||||
|
||||
## Example: Multiple Asynchronous Crew Executions
|
||||
### Example: Multiple Native Async Crews
|
||||
|
||||
In this example, we'll show how to kickoff multiple crews asynchronously and wait for all of them to complete using `asyncio.gather()`:
|
||||
Run multiple crews concurrently using `asyncio.gather()` with native async:
|
||||
|
||||
```python Code
|
||||
import asyncio
|
||||
from crewai import Crew, Agent, Task
|
||||
|
||||
coding_agent = Agent(
|
||||
role="Python Data Analyst",
|
||||
goal="Analyze data and provide insights using Python",
|
||||
backstory="You are an experienced data analyst with strong Python skills.",
|
||||
allow_code_execution=True
|
||||
)
|
||||
|
||||
task_1 = Task(
|
||||
description="Analyze the first dataset and calculate the average age. Ages: {ages}",
|
||||
agent=coding_agent,
|
||||
expected_output="The average age of the participants."
|
||||
)
|
||||
|
||||
task_2 = Task(
|
||||
description="Analyze the second dataset and calculate the average age. Ages: {ages}",
|
||||
agent=coding_agent,
|
||||
expected_output="The average age of the participants."
|
||||
)
|
||||
|
||||
crew_1 = Crew(agents=[coding_agent], tasks=[task_1])
|
||||
crew_2 = Crew(agents=[coding_agent], tasks=[task_2])
|
||||
|
||||
async def main():
|
||||
results = await asyncio.gather(
|
||||
crew_1.akickoff(inputs={"ages": [25, 30, 35, 40, 45]}),
|
||||
crew_2.akickoff(inputs={"ages": [20, 22, 24, 28, 30]})
|
||||
)
|
||||
|
||||
for i, result in enumerate(results, 1):
|
||||
print(f"Crew {i} Result:", result)
|
||||
|
||||
asyncio.run(main())
|
||||
```
|
||||
|
||||
### Example: Native Async for Multiple Inputs
|
||||
|
||||
Use `akickoff_for_each()` to execute your crew against multiple inputs concurrently with native async:
|
||||
|
||||
```python Code
|
||||
import asyncio
|
||||
from crewai import Crew, Agent, Task
|
||||
|
||||
coding_agent = Agent(
|
||||
role="Python Data Analyst",
|
||||
goal="Analyze data and provide insights using Python",
|
||||
backstory="You are an experienced data analyst with strong Python skills.",
|
||||
allow_code_execution=True
|
||||
)
|
||||
|
||||
data_analysis_task = Task(
|
||||
description="Analyze the dataset and calculate the average age. Ages: {ages}",
|
||||
agent=coding_agent,
|
||||
expected_output="The average age of the participants."
|
||||
)
|
||||
|
||||
analysis_crew = Crew(
|
||||
agents=[coding_agent],
|
||||
tasks=[data_analysis_task]
|
||||
)
|
||||
|
||||
async def main():
|
||||
datasets = [
|
||||
{"ages": [25, 30, 35, 40, 45]},
|
||||
{"ages": [20, 22, 24, 28, 30]},
|
||||
{"ages": [30, 35, 40, 45, 50]}
|
||||
]
|
||||
|
||||
results = await analysis_crew.akickoff_for_each(datasets)
|
||||
|
||||
for i, result in enumerate(results, 1):
|
||||
print(f"Dataset {i} Result:", result)
|
||||
|
||||
asyncio.run(main())
|
||||
```
|
||||
|
||||
## Thread-Based Async with `kickoff_async()`
|
||||
|
||||
The `kickoff_async()` method provides async execution by wrapping the synchronous `kickoff()` in a thread. This is useful for simpler async integration or backward compatibility.
|
||||
|
||||
### Method Signature
|
||||
|
||||
```python Code
|
||||
async def kickoff_async(self, inputs: dict) -> CrewOutput:
|
||||
```
|
||||
|
||||
### Parameters
|
||||
|
||||
- `inputs` (dict): A dictionary containing the input data required for the tasks.
|
||||
|
||||
### Returns
|
||||
|
||||
- `CrewOutput`: An object representing the result of the crew execution.
|
||||
|
||||
### Example: Thread-Based Async Execution
|
||||
|
||||
```python Code
|
||||
import asyncio
|
||||
from crewai import Crew, Agent, Task
|
||||
|
||||
coding_agent = Agent(
|
||||
role="Python Data Analyst",
|
||||
goal="Analyze data and provide insights using Python",
|
||||
backstory="You are an experienced data analyst with strong Python skills.",
|
||||
allow_code_execution=True
|
||||
)
|
||||
|
||||
data_analysis_task = Task(
|
||||
description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
|
||||
agent=coding_agent,
|
||||
expected_output="The average age of the participants."
|
||||
)
|
||||
|
||||
analysis_crew = Crew(
|
||||
agents=[coding_agent],
|
||||
tasks=[data_analysis_task]
|
||||
)
|
||||
|
||||
async def async_crew_execution():
|
||||
result = await analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
|
||||
print("Crew Result:", result)
|
||||
|
||||
asyncio.run(async_crew_execution())
|
||||
```
|
||||
|
||||
### Example: Multiple Thread-Based Async Crews

```python Code
import asyncio
from crewai import Crew, Agent, Task

# Create an agent with code execution enabled
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
@@ -90,7 +219,6 @@ coding_agent = Agent(
    allow_code_execution=True
)

# Create tasks that require code execution
task_1 = Task(
    description="Analyze the first dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
@@ -103,22 +231,76 @@ task_2 = Task(
    expected_output="The average age of the participants."
)

# Create two crews and add tasks
crew_1 = Crew(agents=[coding_agent], tasks=[task_1])
crew_2 = Crew(agents=[coding_agent], tasks=[task_2])

# Async function to kickoff multiple crews asynchronously and wait for all to finish
async def async_multiple_crews():
    # Create coroutines for concurrent execution
    result_1 = crew_1.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    result_2 = crew_2.kickoff_async(inputs={"ages": [20, 22, 24, 28, 30]})

    # Wait for both crews to finish
    results = await asyncio.gather(result_1, result_2)

    for i, result in enumerate(results, 1):
        print(f"Crew {i} Result:", result)

# Run the async function
asyncio.run(async_multiple_crews())
```

## Async Streaming

Both async methods support streaming when `stream=True` is set on the crew:

```python Code
import asyncio
from crewai import Crew, Agent, Task

agent = Agent(
    role="Researcher",
    goal="Research and summarize topics",
    backstory="You are an expert researcher."
)

task = Task(
    description="Research the topic: {topic}",
    agent=agent,
    expected_output="A comprehensive summary of the topic."
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    stream=True  # Enable streaming
)

async def main():
    streaming_output = await crew.akickoff(inputs={"topic": "AI trends in 2024"})

    # Async iteration over streaming chunks
    async for chunk in streaming_output:
        print(f"Chunk: {chunk.content}")

    # Access final result after streaming completes
    result = streaming_output.result
    print(f"Final result: {result.raw}")

asyncio.run(main())
```

## Potential Use Cases

- **Parallel Content Generation**: Kick off multiple independent crews asynchronously, each responsible for generating content on a different topic. For example, one crew might research and draft an article on AI trends while another generates social media posts about a new product launch (a sketch of this pattern follows the list).

- **Concurrent Market Research Tasks**: Launch multiple crews asynchronously to conduct market research in parallel. One crew might analyze industry trends, while another examines competitor strategies and a third evaluates consumer sentiment.

- **Independent Travel Planning Modules**: Execute separate crews to independently plan different aspects of a trip. One crew might handle flight options, another accommodation, and a third activities.

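For instance, the parallel content generation pattern maps directly onto the native async API. The sketch below assumes `crew_1` and `crew_2` are defined as in the earlier multiple-crews example; it simply swaps `kickoff_async()` for `akickoff()` so both crews run concurrently on the event loop instead of in worker threads.

```python Code
import asyncio

async def run_crews_in_parallel():
    # Schedule both crews concurrently on the event loop (no worker threads needed).
    results = await asyncio.gather(
        crew_1.akickoff(inputs={"ages": [25, 30, 35, 40, 45]}),
        crew_2.akickoff(inputs={"ages": [20, 22, 24, 28, 30]}),
    )
    for i, result in enumerate(results, 1):
        print(f"Crew {i} Result:", result)

asyncio.run(run_crews_in_parallel())
```
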
## Choosing Between `akickoff()` and `kickoff_async()`

| Feature | `akickoff()` | `kickoff_async()` |
|---------|--------------|-------------------|
| Execution model | Native async/await | Thread-based wrapper |
| Task execution | Async with `aexecute_sync()` | Sync in thread pool |
| Memory operations | Async | Sync in thread pool |
| Knowledge retrieval | Async | Sync in thread pool |
| Best for | High-concurrency, I/O-bound workloads | Simple async integration |
| Streaming support | Yes | Yes |

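As a rule of thumb, reach for `akickoff()` when the surrounding code is already async. A minimal sketch, assuming a FastAPI app (FastAPI is not part of CrewAI and is used here purely as an illustrative host) and the `analysis_crew` defined earlier:

```python Code
# Hypothetical web endpoint; FastAPI is an external dependency used only for illustration.
from fastapi import FastAPI

app = FastAPI()

@app.post("/analyze")
async def analyze(ages: list[int]):
    # Inside an async framework, the native coroutine avoids blocking the event loop
    # and does not need an extra worker thread.
    result = await analysis_crew.akickoff(inputs={"ages": ages})
    return {"result": result.raw}
```

When you only need a drop-in awaitable around the synchronous path, `kickoff_async()` is the lighter-touch choice.
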
427 docs/en/learn/llm-hooks.mdx Normal file
@@ -0,0 +1,427 @@
---
title: LLM Call Hooks
description: Learn how to use LLM call hooks to intercept, modify, and control language model interactions in CrewAI
mode: "wide"
---

LLM Call Hooks provide fine-grained control over language model interactions during agent execution. These hooks allow you to intercept LLM calls, modify prompts, transform responses, implement approval gates, and add custom logging or monitoring.

## Overview

LLM hooks are executed at two critical points:

- **Before LLM Call**: Modify messages, validate inputs, or block execution
- **After LLM Call**: Transform responses, sanitize outputs, or modify conversation history

## Hook Types

### Before LLM Call Hooks

Executed before every LLM call, these hooks can:

- Inspect and modify messages sent to the LLM
- Block LLM execution based on conditions
- Implement rate limiting or approval gates
- Add context or system messages
- Log request details

**Signature:**

```python
def before_hook(context: LLMCallHookContext) -> bool | None:
    # Return False to block execution
    # Return True or None to allow execution
    ...
```

### After LLM Call Hooks

Executed after every LLM call, these hooks can:

- Modify or sanitize LLM responses
- Add metadata or formatting
- Log response details
- Update conversation history
- Implement content filtering

**Signature:**

```python
def after_hook(context: LLMCallHookContext) -> str | None:
    # Return modified response string
    # Return None to keep original response
    ...
```

## LLM Hook Context

The `LLMCallHookContext` object provides comprehensive access to execution state:

```python
class LLMCallHookContext:
    executor: CrewAgentExecutor   # Full executor reference
    messages: list                # Mutable message list
    agent: Agent                  # Current agent
    task: Task                    # Current task
    crew: Crew                    # Crew instance
    llm: BaseLLM                  # LLM instance
    iterations: int               # Current iteration count
    response: str | None          # LLM response (after hooks only)
```

### Modifying Messages

**Important:** Always modify messages in-place:

```python
# ✅ Correct - modify in-place
def add_context(context: LLMCallHookContext) -> None:
    context.messages.append({"role": "system", "content": "Be concise"})

# ❌ Wrong - replaces list reference
def wrong_approach(context: LLMCallHookContext) -> None:
    context.messages = [{"role": "system", "content": "Be concise"}]
```

## Registration Methods

### 1. Global Hook Registration

Register hooks that apply to all LLM calls across all crews:

```python
from crewai.hooks import register_before_llm_call_hook, register_after_llm_call_hook

def log_llm_call(context):
    print(f"LLM call by {context.agent.role} at iteration {context.iterations}")
    return None  # Allow execution

register_before_llm_call_hook(log_llm_call)
```

### 2. Decorator-Based Registration

Use decorators for cleaner syntax:

```python
from crewai.hooks import before_llm_call, after_llm_call

@before_llm_call
def validate_iteration_count(context):
    if context.iterations > 10:
        print("⚠️ Exceeded maximum iterations")
        return False  # Block execution
    return None

@after_llm_call
def sanitize_response(context):
    if context.response and "API_KEY" in context.response:
        return context.response.replace("API_KEY", "[REDACTED]")
    return None
```

### 3. Crew-Scoped Hooks

Register hooks for a specific crew instance:

```python
@CrewBase
class MyProjCrew:
    @before_llm_call_crew
    def validate_inputs(self, context):
        # Only applies to this crew
        if context.iterations == 0:
            print(f"Starting task: {context.task.description}")
        return None

    @after_llm_call_crew
    def log_responses(self, context):
        # Crew-specific response logging
        print(f"Response length: {len(context.response)}")
        return None

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True
        )
```

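Putting registration together with an actual run, a minimal end-to-end sketch might look like the following. The agent and task definitions are illustrative placeholders rather than part of the hooks API; the hook functions use only the context fields documented above.

```python
from crewai import Agent, Crew, Task
from crewai.hooks import register_before_llm_call_hook, register_after_llm_call_hook

def log_call(context):
    # Runs before every LLM call made while the crew executes.
    print(f"[hook] {context.agent.role} calling LLM (iteration {context.iterations})")
    return None  # allow the call

def tag_response(context):
    # Runs after every LLM call; returning a string replaces the response.
    if context.response:
        return f"{context.response}\n\n[reviewed by after-hook]"
    return None

register_before_llm_call_hook(log_call)
register_after_llm_call_hook(tag_response)

writer = Agent(
    role="Writer",
    goal="Summarize topics clearly",
    backstory="An experienced technical writer.",
)
task = Task(
    description="Summarize the topic: {topic}",
    expected_output="A short summary.",
    agent=writer,
)
crew = Crew(agents=[writer], tasks=[task])
result = crew.kickoff(inputs={"topic": "LLM call hooks"})
print(result.raw)
```
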
## Common Use Cases

### 1. Iteration Limiting

```python
@before_llm_call
def limit_iterations(context: LLMCallHookContext) -> bool | None:
    max_iterations = 15
    if context.iterations > max_iterations:
        print(f"⛔ Blocked: Exceeded {max_iterations} iterations")
        return False  # Block execution
    return None
```

### 2. Human Approval Gate

```python
@before_llm_call
def require_approval(context: LLMCallHookContext) -> bool | None:
    if context.iterations > 5:
        response = context.request_human_input(
            prompt=f"Iteration {context.iterations}: Approve LLM call?",
            default_message="Press Enter to approve, or type 'no' to block:"
        )
        if response.lower() == "no":
            print("🚫 LLM call blocked by user")
            return False
    return None
```

### 3. Adding System Context

```python
@before_llm_call
def add_guardrails(context: LLMCallHookContext) -> None:
    # Add safety guidelines to every LLM call
    context.messages.append({
        "role": "system",
        "content": "Ensure responses are factual and cite sources when possible."
    })
    return None
```

### 4. Response Sanitization

```python
import re

@after_llm_call
def sanitize_sensitive_data(context: LLMCallHookContext) -> str | None:
    if not context.response:
        return None

    # Remove sensitive patterns
    sanitized = context.response
    sanitized = re.sub(r'\b\d{3}-\d{2}-\d{4}\b', '[SSN-REDACTED]', sanitized)
    sanitized = re.sub(r'\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b', '[CARD-REDACTED]', sanitized)

    return sanitized
```

### 5. Cost Tracking

```python
import tiktoken

@before_llm_call
def track_token_usage(context: LLMCallHookContext) -> None:
    encoding = tiktoken.get_encoding("cl100k_base")
    total_tokens = sum(
        len(encoding.encode(msg.get("content", "")))
        for msg in context.messages
    )
    print(f"📊 Input tokens: ~{total_tokens}")
    return None

@after_llm_call
def track_response_tokens(context: LLMCallHookContext) -> None:
    if context.response:
        encoding = tiktoken.get_encoding("cl100k_base")
        tokens = len(encoding.encode(context.response))
        print(f"📊 Response tokens: ~{tokens}")
    return None
```

### 6. Debug Logging

```python
@before_llm_call
def debug_request(context: LLMCallHookContext) -> None:
    print(f"""
🔍 LLM Call Debug:
- Agent: {context.agent.role}
- Task: {context.task.description[:50]}...
- Iteration: {context.iterations}
- Message Count: {len(context.messages)}
- Last Message: {context.messages[-1] if context.messages else 'None'}
""")
    return None

@after_llm_call
def debug_response(context: LLMCallHookContext) -> None:
    if context.response:
        print(f"✅ Response Preview: {context.response[:100]}...")
    return None
```

## Hook Management

### Unregistering Hooks

```python
from crewai.hooks import (
    unregister_before_llm_call_hook,
    unregister_after_llm_call_hook
)

# Unregister specific hook
def my_hook(context):
    ...

register_before_llm_call_hook(my_hook)
# Later...
unregister_before_llm_call_hook(my_hook)  # Returns True if found
```

### Clearing Hooks

```python
from crewai.hooks import (
    clear_before_llm_call_hooks,
    clear_after_llm_call_hooks,
    clear_all_llm_call_hooks
)

# Clear specific hook type
count = clear_before_llm_call_hooks()
print(f"Cleared {count} before hooks")

# Clear all LLM hooks
before_count, after_count = clear_all_llm_call_hooks()
print(f"Cleared {before_count} before and {after_count} after hooks")
```

### Listing Registered Hooks

```python
from crewai.hooks import (
    get_before_llm_call_hooks,
    get_after_llm_call_hooks
)

# Get current hooks
before_hooks = get_before_llm_call_hooks()
after_hooks = get_after_llm_call_hooks()

print(f"Registered: {len(before_hooks)} before, {len(after_hooks)} after")
```

## Advanced Patterns

### Conditional Hook Execution

```python
@before_llm_call
def conditional_blocking(context: LLMCallHookContext) -> bool | None:
    # Only block for specific agents
    if context.agent.role == "researcher" and context.iterations > 10:
        return False

    # Only block for specific tasks
    if "sensitive" in context.task.description.lower() and context.iterations > 5:
        return False

    return None
```

### Context-Aware Modifications

```python
@before_llm_call
def adaptive_prompting(context: LLMCallHookContext) -> None:
    # Add different context based on iteration
    if context.iterations == 0:
        context.messages.append({
            "role": "system",
            "content": "Start with a high-level overview."
        })
    elif context.iterations > 3:
        context.messages.append({
            "role": "system",
            "content": "Focus on specific details and provide examples."
        })
    return None
```

### Chaining Hooks

```python
# Multiple hooks execute in registration order

@before_llm_call
def first_hook(context):
    print("1. First hook executed")
    return None

@before_llm_call
def second_hook(context):
    print("2. Second hook executed")
    return None

@before_llm_call
def blocking_hook(context):
    if context.iterations > 10:
        print("3. Blocking hook - execution stopped")
        return False  # Subsequent hooks won't execute
    print("3. Blocking hook - execution allowed")
    return None
```

## Best Practices

1. **Keep Hooks Focused**: Each hook should have a single responsibility
2. **Avoid Heavy Computation**: Hooks execute on every LLM call
3. **Handle Errors Gracefully**: Use try-except to prevent hook failures from breaking execution
4. **Use Type Hints**: Leverage `LLMCallHookContext` for better IDE support
5. **Document Hook Behavior**: Especially for blocking conditions
6. **Test Hooks Independently**: Unit test hooks before using them in production
7. **Clear Hooks in Tests**: Use `clear_all_llm_call_hooks()` between test runs (see the fixture sketch below)
8. **Modify In-Place**: Always modify `context.messages` in-place, never replace it

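For practice 7, a pytest-style fixture (pytest itself is an assumption here, not a CrewAI requirement) can reset the global hook registry around every test:

```python
import pytest

from crewai.hooks import (
    clear_all_llm_call_hooks,
    get_before_llm_call_hooks,
    register_before_llm_call_hook,
)

@pytest.fixture(autouse=True)
def reset_llm_hooks():
    # Start each test from a clean global registry, then clean up whatever it registered.
    clear_all_llm_call_hooks()
    yield
    clear_all_llm_call_hooks()

def test_registers_logging_hook():
    def log_hook(context):
        return None

    register_before_llm_call_hook(log_hook)
    assert log_hook in get_before_llm_call_hooks()
```
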
## Error Handling

```python
@before_llm_call
def safe_hook(context: LLMCallHookContext) -> bool | None:
    try:
        # Your hook logic
        if some_condition:  # placeholder for your own check
            return False
    except Exception as e:
        print(f"⚠️ Hook error: {e}")
        # Decide: allow or block on error
    return None  # Allow execution despite error
```

## Type Safety

```python
from crewai.hooks import LLMCallHookContext, BeforeLLMCallHookType, AfterLLMCallHookType

# Explicit type annotations
def my_before_hook(context: LLMCallHookContext) -> bool | None:
    return None

def my_after_hook(context: LLMCallHookContext) -> str | None:
    return None

# Type-safe registration
register_before_llm_call_hook(my_before_hook)
register_after_llm_call_hook(my_after_hook)
```

## Troubleshooting

### Hook Not Executing

- Verify the hook is registered before crew execution
- Check whether a previous hook returned `False` (this blocks subsequent hooks)
- Ensure the hook signature matches the expected type

### Message Modifications Not Persisting

- Use in-place modifications: `context.messages.append()`
- Don't replace the list: `context.messages = []`

### Response Modifications Not Working

- Return the modified string from after hooks
- Returning `None` keeps the original response

## Conclusion

LLM Call Hooks provide powerful capabilities for controlling and monitoring language model interactions in CrewAI. Use them to implement safety guardrails, approval gates, logging, cost tracking, and response sanitization. Combined with proper error handling and type safety, hooks enable robust, production-ready agent systems.

@@ -1,7 +1,7 @@
---
title: "Strategic LLM Selection Guide"
description: "Strategic framework for choosing the right LLM for your CrewAI AI agents and writing effective task and agent definitions"
icon: "brain-circuit"
mode: "wide"
---

Rather than prescriptive model recommendations, we advocate for a **thinking framework** that helps you make informed decisions based on your specific use case, constraints, and requirements. The LLM landscape evolves rapidly, with new models emerging regularly and existing ones being updated frequently. What matters most is developing a systematic approach to evaluation that remains relevant regardless of which specific models are available.

<Note>
  This guide focuses on strategic thinking rather than specific model recommendations, as the LLM landscape evolves rapidly.
</Note>

## Quick Decision Framework

<Steps>
  <Step title="Analyze Your Tasks">
    Begin by deeply understanding what your tasks actually require. Consider the cognitive complexity involved, the depth of reasoning needed, the format of expected outputs, and the amount of context the model will need to process. This foundational analysis will guide every subsequent decision.
  </Step>
  <Step title="Map Model Capabilities">
    Once you understand your requirements, map them to model strengths. Different model families excel at different types of work; some are optimized for reasoning and analysis, others for creativity and content generation, and others for speed and efficiency.
  </Step>
  <Step title="Consider Constraints">
    Factor in your real-world operational constraints including budget limitations, latency requirements, data privacy needs, and infrastructure capabilities. The theoretically best model may not be the practically best choice for your situation.
  </Step>
  <Step title="Test and Iterate">
    Start with reliable, well-understood models and optimize based on actual performance in your specific use case. Real-world results often differ from theoretical benchmarks, so empirical testing is crucial.
  </Step>
</Steps>

|
||||
@@ -43,6 +55,7 @@ The most critical step in LLM selection is understanding what your task actually
|
||||
- **Complex Tasks** require multi-step reasoning, strategic thinking, and the ability to handle ambiguous or incomplete information. These might involve analyzing multiple data sources, developing comprehensive strategies, or solving problems that require breaking down into smaller components. The model needs to maintain context across multiple reasoning steps and often must make inferences that aren't explicitly stated.
|
||||
|
||||
- **Creative Tasks** demand a different type of cognitive capability focused on generating novel, engaging, and contextually appropriate content. This includes storytelling, marketing copy creation, and creative problem-solving. The model needs to understand nuance, tone, and audience while producing content that feels authentic and engaging rather than formulaic.
|
||||
|
||||
</Tab>
|
||||
|
||||
<Tab title="Output Requirements">
|
||||
@@ -51,6 +64,7 @@ The most critical step in LLM selection is understanding what your task actually
|
||||
- **Creative Content** outputs demand a balance of technical competence and creative flair. The model needs to understand audience, tone, and brand voice while producing content that engages readers and achieves specific communication goals. Quality here is often subjective and requires models that can adapt their writing style to different contexts and purposes.
|
||||
|
||||
- **Technical Content** sits between structured data and creative content, requiring both precision and clarity. Documentation, code generation, and technical analysis need to be accurate and comprehensive while remaining accessible to the intended audience. The model must understand complex technical concepts and communicate them effectively.
|
||||
|
||||
</Tab>
|
||||
|
||||
<Tab title="Context Needs">
|
||||
@@ -59,6 +73,7 @@ The most critical step in LLM selection is understanding what your task actually
|
||||
- **Long Context** requirements emerge when working with substantial documents, extended conversations, or complex multi-part tasks. The model needs to maintain coherence across thousands of tokens while referencing earlier information accurately. This capability becomes crucial for document analysis, comprehensive research, and sophisticated dialogue systems.
|
||||
|
||||
- **Very Long Context** scenarios push the boundaries of what's currently possible, involving massive document processing, extensive research synthesis, or complex multi-session interactions. These use cases require models specifically designed for extended context handling and often involve trade-offs between context length and processing speed.
|
||||
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
@@ -73,6 +88,7 @@ Understanding model capabilities requires looking beyond marketing claims and be
|
||||
The strength of reasoning models lies in their ability to maintain logical consistency across extended reasoning chains and to break down complex problems into manageable components. They're particularly valuable for strategic planning, complex analysis, and situations where the quality of reasoning matters more than speed of response.
|
||||
|
||||
However, reasoning models often come with trade-offs in terms of speed and cost. They may also be less suitable for creative tasks or simple operations where their sophisticated reasoning capabilities aren't needed. Consider these models when your tasks involve genuine complexity that benefits from systematic, step-by-step analysis.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="General Purpose Models" icon="microchip">
|
||||
@@ -81,6 +97,7 @@ Understanding model capabilities requires looking beyond marketing claims and be
|
||||
The primary advantage of general purpose models is their reliability and predictability across different types of work. They handle most standard business tasks competently, from research and analysis to content creation and data processing. This makes them excellent choices for teams that need consistent performance across varied workflows.
|
||||
|
||||
While general purpose models may not achieve the peak performance of specialized alternatives in specific domains, they offer operational simplicity and reduced complexity in model management. They're often the best starting point for new projects, allowing teams to understand their specific needs before potentially optimizing with more specialized models.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Fast & Efficient Models" icon="bolt">
|
||||
@@ -89,6 +106,7 @@ Understanding model capabilities requires looking beyond marketing claims and be
|
||||
These models excel in scenarios involving routine operations, simple data processing, function calling, and high-volume tasks where the cognitive requirements are relatively straightforward. They're particularly valuable for applications that need to process many requests quickly or operate within tight budget constraints.
|
||||
|
||||
The key consideration with efficient models is ensuring that their capabilities align with your task requirements. While they can handle many routine operations effectively, they may struggle with tasks requiring nuanced understanding, complex reasoning, or sophisticated content generation. They're best used for well-defined, routine operations where speed and cost matter more than sophistication.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Creative Models" icon="pen">
|
||||
@@ -97,6 +115,7 @@ Understanding model capabilities requires looking beyond marketing claims and be
|
||||
The strength of creative models lies in their ability to adapt writing style to different audiences, maintain consistent voice and tone, and generate content that engages readers effectively. They often perform better on tasks involving storytelling, marketing copy, brand communications, and other content where creativity and engagement are primary goals.
|
||||
|
||||
When selecting creative models, consider not just their ability to generate text, but their understanding of audience, context, and purpose. The best creative models can adapt their output to match specific brand voices, target different audience segments, and maintain consistency across extended content pieces.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Open Source Models" icon="code">
|
||||
@@ -105,6 +124,7 @@ Understanding model capabilities requires looking beyond marketing claims and be
|
||||
The primary benefits of open source models include elimination of per-token costs, ability to fine-tune for specific use cases, complete data privacy, and independence from external API providers. They're particularly valuable for organizations with strict data privacy requirements, budget constraints, or specific customization needs.
|
||||
|
||||
However, open source models require more technical expertise to deploy and maintain effectively. Teams need to consider infrastructure costs, model management complexity, and the ongoing effort required to keep models updated and optimized. The total cost of ownership may be higher than cloud-based alternatives when factoring in technical overhead.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -113,7 +133,8 @@ Understanding model capabilities requires looking beyond marketing claims and be
|
||||
### a. Multi-Model Approach
|
||||
|
||||
<Tip>
  Use different models for different purposes within the same crew to optimize both performance and cost.
</Tip>

|
||||
The most sophisticated CrewAI implementations often employ multiple models strategically, assigning different models to different agents based on their specific roles and requirements. This approach allows teams to optimize for both performance and cost by using the most appropriate model for each type of work.
|
||||
@@ -177,6 +198,7 @@ The key to successful multi-model implementation is understanding how different
|
||||
Effective manager LLMs require strong reasoning capabilities to make good delegation decisions, consistent performance to ensure predictable coordination, and excellent context management to track the state of multiple agents simultaneously. The model needs to understand the capabilities and limitations of different agents while optimizing task allocation for efficiency and quality.
|
||||
|
||||
Cost considerations are particularly important for manager LLMs since they're involved in every operation. The model needs to provide sufficient capability for effective coordination while remaining cost-effective for frequent use. This often means finding models that offer good reasoning capabilities without the premium pricing of the most sophisticated options.
|
||||
|
||||
</Tab>
|
||||
|
||||
<Tab title="Function Calling LLM">
|
||||
@@ -185,6 +207,7 @@ The key to successful multi-model implementation is understanding how different
|
||||
The most important characteristics for function calling LLMs are precision and reliability rather than creativity or sophisticated reasoning. The model needs to consistently extract the correct parameters from natural language requests and handle tool responses appropriately. Speed is also important since tool usage often involves multiple round trips that can impact overall performance.
|
||||
|
||||
Many teams find that specialized function calling models or general purpose models with strong tool support work better than creative or reasoning-focused models for this role. The key is ensuring that the model can reliably bridge the gap between natural language instructions and structured tool calls.
|
||||
|
||||
</Tab>
|
||||
|
||||
<Tab title="Agent-Specific Overrides">
|
||||
@@ -193,6 +216,7 @@ The key to successful multi-model implementation is understanding how different
|
||||
Consider agent-specific overrides when an agent's role requires capabilities that differ substantially from other crew members. For example, a creative writing agent might benefit from a model optimized for content generation, while a data analysis agent might perform better with a reasoning-focused model.
|
||||
|
||||
The challenge with agent-specific overrides is balancing optimization with operational complexity. Each additional model adds complexity to deployment, monitoring, and cost management. Teams should focus overrides on agents where the performance improvement justifies the additional complexity.
|
||||
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
@@ -209,6 +233,7 @@ Effective task definition is often more important than model selection in determ
|
||||
Effective task descriptions include relevant context and constraints that help the agent understand the broader purpose and any limitations they need to work within. They break complex work into focused steps that can be executed systematically, rather than presenting overwhelming, multi-faceted objectives that are difficult to approach systematically.
|
||||
|
||||
Common mistakes include being too vague about objectives, failing to provide necessary context, setting unclear success criteria, or combining multiple unrelated tasks into a single description. The goal is to provide enough information for the agent to succeed while maintaining focus on a single, clear objective.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Expected Output Guidelines" icon="bullseye">
|
||||
@@ -217,6 +242,7 @@ Effective task definition is often more important than model selection in determ
|
||||
The best output guidelines provide concrete examples of quality indicators and define completion criteria clearly enough that both the agent and human reviewers can assess whether the task has been completed successfully. This reduces ambiguity and helps ensure consistent results across multiple task executions.
|
||||
|
||||
Avoid generic output descriptions that could apply to any task, missing format specifications that leave agents guessing about structure, unclear quality standards that make evaluation difficult, or failing to provide examples or templates that help agents understand expectations.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -229,6 +255,7 @@ Effective task definition is often more important than model selection in determ
|
||||
Implementing sequential dependencies effectively requires using the context parameter to chain related tasks, building complexity gradually through task progression, and ensuring that each task produces outputs that serve as meaningful inputs for subsequent tasks. The goal is to maintain logical flow between dependent tasks while avoiding unnecessary bottlenecks.
|
||||
|
||||
Sequential dependencies work best when there's a clear logical progression from one task to another and when the output of one task genuinely improves the quality or feasibility of subsequent tasks. However, they can create bottlenecks if not managed carefully, so it's important to identify which dependencies are truly necessary versus those that are merely convenient.
|
||||
|
||||
</Tab>
|
||||
|
||||
<Tab title="Parallel Execution">
|
||||
@@ -237,6 +264,7 @@ Effective task definition is often more important than model selection in determ
|
||||
Successful parallel execution requires identifying tasks that can truly run independently, grouping related but separate work streams effectively, and planning for result integration when parallel tasks need to be combined into a final deliverable. The key is ensuring that parallel tasks don't create conflicts or redundancies that reduce overall quality.
|
||||
|
||||
Consider parallel execution when you have multiple independent research streams, different types of analysis that don't depend on each other, or content creation tasks that can be developed simultaneously. However, be mindful of resource allocation and ensure that parallel execution doesn't overwhelm your available model capacity or budget.
|
||||
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
@@ -245,7 +273,8 @@ Effective task definition is often more important than model selection in determ
|
||||
### a. Role-Driven LLM Selection
|
||||
|
||||
<Warning>
  Generic agent roles make it impossible to select the right LLM. Specific roles enable targeted model optimization.
</Warning>

|
||||
The specificity of your agent roles directly determines which LLM capabilities matter most for optimal performance. This creates a strategic opportunity to match precise model strengths with agent responsibilities.
|
||||
@@ -253,6 +282,7 @@ The specificity of your agent roles directly determines which LLM capabilities m
|
||||
**Generic vs. Specific Role Impact on LLM Choice:**
|
||||
|
||||
When defining roles, think about the specific domain knowledge, working style, and decision-making frameworks that would be most valuable for the tasks the agent will handle. The more specific and contextual the role definition, the better the model can embody that role effectively.
|
||||
|
||||
```python
|
||||
# ✅ Specific role - clear LLM requirements
|
||||
specific_agent = Agent(
|
||||
@@ -273,7 +303,8 @@ specific_agent = Agent(
|
||||
### b. Backstory as Model Context Amplifier
|
||||
|
||||
<Info>
  Strategic backstories multiply your chosen LLM's effectiveness by providing domain-specific context that generic prompting cannot achieve.
</Info>

|
||||
A well-crafted backstory transforms your LLM choice from generic capability to specialized expertise. This is especially crucial for cost optimization - a well-contextualized efficient model can outperform a premium model without proper context.
|
||||
@@ -300,6 +331,7 @@ domain_expert = Agent(
|
||||
```
|
||||
|
||||
**Backstory Elements That Enhance LLM Performance:**
|
||||
|
||||
- **Domain Experience**: "10+ years in enterprise SaaS sales"
|
||||
- **Specific Expertise**: "Specializes in technical due diligence for Series B+ rounds"
|
||||
- **Working Style**: "Prefers data-driven decisions with clear documentation"
|
||||
@@ -332,6 +364,7 @@ tech_writer = Agent(
|
||||
```
|
||||
|
||||
**Alignment Checklist:**
|
||||
|
||||
- ✅ **Role Specificity**: Clear domain and responsibilities
|
||||
- ✅ **LLM Match**: Model strengths align with role requirements
|
||||
- ✅ **Backstory Depth**: Provides domain context the LLM can leverage
|
||||
@@ -353,6 +386,7 @@ Rather than repeating the strategic framework, here's a tactical checklist for i
|
||||
- Are any agents heavily tool-dependent?
|
||||
|
||||
**Action**: Document current agent roles and identify optimization opportunities.
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Implement Crew-Level Strategy" icon="users-gear">
|
||||
@@ -369,6 +403,7 @@ Rather than repeating the strategic framework, here's a tactical checklist for i
|
||||
```
|
||||
|
||||
**Action**: Establish your crew's default LLM before optimizing individual agents.
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Optimize High-Impact Agents" icon="star">
|
||||
@@ -390,6 +425,7 @@ Rather than repeating the strategic framework, here's a tactical checklist for i
|
||||
```
|
||||
|
||||
**Action**: Upgrade 20% of your agents that handle 80% of the complexity.
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Validate with Enterprise Testing" icon="test-tube">
|
||||
@@ -400,6 +436,7 @@ Rather than repeating the strategic framework, here's a tactical checklist for i
|
||||
- Share results with your team for collaborative decision-making
|
||||
|
||||
**Action**: Replace guesswork with data-driven validation using the testing platform.
|
||||
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
@@ -412,6 +449,7 @@ Rather than repeating the strategic framework, here's a tactical checklist for i
|
||||
Consider reasoning models for business strategy development, complex data analysis that requires drawing insights from multiple sources, multi-step problem solving where each step depends on previous analysis, and strategic planning tasks that require considering multiple variables and their interactions.
|
||||
|
||||
However, reasoning models often come with higher costs and slower response times, so they're best reserved for tasks where their sophisticated capabilities provide genuine value rather than being used for simple operations that don't require complex reasoning.
|
||||
|
||||
</Tab>
|
||||
|
||||
<Tab title="Creative Models">
|
||||
@@ -420,6 +458,7 @@ Rather than repeating the strategic framework, here's a tactical checklist for i
|
||||
Use creative models for blog post writing and article creation, marketing copy that needs to engage and persuade, creative storytelling and narrative development, and brand communications where voice and tone are crucial. These models often understand nuance and context better than general purpose alternatives.
|
||||
|
||||
Creative models may be less suitable for technical or analytical tasks where precision and factual accuracy are more important than engagement and style. They're best used when the creative and communicative aspects of the output are primary success factors.
|
||||
|
||||
</Tab>
|
||||
|
||||
<Tab title="Efficient Models">
|
||||
@@ -428,6 +467,7 @@ Rather than repeating the strategic framework, here's a tactical checklist for i
|
||||
Consider efficient models for data processing and transformation tasks, simple formatting and organization operations, function calling and tool usage where precision matters more than sophistication, and high-volume operations where cost per operation is a significant factor.
|
||||
|
||||
The key with efficient models is ensuring that their capabilities align with task requirements. They can handle many routine operations effectively but may struggle with tasks requiring nuanced understanding, complex reasoning, or sophisticated content generation.
|
||||
|
||||
</Tab>
|
||||
|
||||
<Tab title="Open Source Models">
|
||||
@@ -436,6 +476,7 @@ Rather than repeating the strategic framework, here's a tactical checklist for i
|
||||
Consider open source models for internal company tools where data privacy is paramount, privacy-sensitive applications that can't use external APIs, cost-optimized deployments where per-token pricing is prohibitive, and situations requiring custom model modifications or fine-tuning.
|
||||
|
||||
However, open source models require more technical expertise to deploy and maintain effectively. Consider the total cost of ownership including infrastructure, technical overhead, and ongoing maintenance when evaluating open source options.
|
||||
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
@@ -455,6 +496,7 @@ Rather than repeating the strategic framework, here's a tactical checklist for i
|
||||
# Processing agent gets efficient model
|
||||
processor = Agent(role="Data Processor", llm=LLM(model="gpt-4o-mini"))
|
||||
```
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Ignoring Crew-Level vs Agent-Level LLM Hierarchy" icon="shuffle">
|
||||
@@ -474,6 +516,7 @@ Rather than repeating the strategic framework, here's a tactical checklist for i
|
||||
# Agents inherit crew LLM unless specifically overridden
|
||||
agent1 = Agent(llm=LLM(model="claude-3-5-sonnet")) # Override for specific needs
|
||||
```
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Function Calling Model Mismatch" icon="screwdriver-wrench">
|
||||
@@ -492,6 +535,7 @@ Rather than repeating the strategic framework, here's a tactical checklist for i
|
||||
llm=LLM(model="claude-3-5-sonnet") # Also strong with tools
|
||||
)
|
||||
```
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Premature Optimization Without Testing" icon="gear">
|
||||
@@ -507,6 +551,7 @@ Rather than repeating the strategic framework, here's a tactical checklist for i
|
||||
# Test performance, then optimize specific agents as needed
|
||||
# Use Enterprise platform testing to validate improvements
|
||||
```
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Overlooking Context and Memory Limitations" icon="brain">
|
||||
@@ -515,6 +560,7 @@ Rather than repeating the strategic framework, here's a tactical checklist for i
|
||||
**Real Example**: Using a short-context model for agents that need to maintain conversation history across multiple task iterations, or in crews with extensive agent-to-agent communication.
|
||||
|
||||
**CrewAI Solution**: Match context capabilities to crew communication patterns.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -522,21 +568,35 @@ Rather than repeating the strategic framework, here's a tactical checklist for i
|
||||
|
||||
<Steps>
  <Step title="Start Simple" icon="play">
    Begin with reliable, general-purpose models that are well-understood and widely supported. This provides a stable foundation for understanding your specific requirements and performance expectations before optimizing for specialized needs.
  </Step>
  <Step title="Measure What Matters" icon="chart-line">
    Develop metrics that align with your specific use case and business requirements rather than relying solely on general benchmarks. Focus on measuring outcomes that directly impact your success rather than theoretical performance indicators.
  </Step>
  <Step title="Iterate Based on Results" icon="arrows-rotate">
    Make model changes based on observed performance in your specific context rather than theoretical considerations or general recommendations. Real-world performance often differs significantly from benchmark results or general reputation.
  </Step>
  <Step title="Consider Total Cost" icon="calculator">
    Evaluate the complete cost of ownership including model costs, development time, maintenance overhead, and operational complexity. The cheapest model per token may not be the most cost-effective choice when considering all factors.
  </Step>
</Steps>

|
||||
<Tip>
  Focus on understanding your requirements first, then select models that best match those needs. The best LLM choice is the one that consistently delivers the results you need within your operational constraints.
</Tip>

|
||||
### Enterprise-Grade Model Validation
|
||||
@@ -562,7 +622,9 @@ For teams serious about optimizing their LLM selection, the **CrewAI AMP platfor
|
||||
Go to [app.crewai.com](https://app.crewai.com) to get started!
|
||||
|
||||
<Info>
  The Enterprise platform transforms model selection from guesswork into a data-driven process, enabling you to validate the principles in this guide with your actual use cases and requirements.
</Info>

|
||||
## Key Principles Summary
|
||||
@@ -572,18 +634,24 @@ The Enterprise platform transforms model selection from guesswork into a data-dr
|
||||
Choose models based on what the task actually requires, not theoretical capabilities or general reputation.
|
||||
</Card>
|
||||
|
||||
{" "}
|
||||
<Card title="Capability Matching" icon="puzzle-piece">
|
||||
Align model strengths with agent roles and responsibilities for optimal performance.
|
||||
Align model strengths with agent roles and responsibilities for optimal
|
||||
performance.
|
||||
</Card>
|
||||
|
||||
{" "}
|
||||
<Card title="Strategic Consistency" icon="link">
|
||||
Maintain coherent model selection strategy across related components and workflows.
|
||||
Maintain coherent model selection strategy across related components and
|
||||
workflows.
|
||||
</Card>
|
||||
|
||||
{" "}
|
||||
<Card title="Practical Testing" icon="flask">
|
||||
Validate choices through real-world usage rather than benchmarks alone.
|
||||
</Card>
|
||||
|
||||
{" "}
|
||||
<Card title="Iterative Improvement" icon="arrow-up">
|
||||
Start simple and optimize based on actual performance and needs.
|
||||
</Card>
|
||||
@@ -594,13 +662,20 @@ The Enterprise platform transforms model selection from guesswork into a data-dr
|
||||
</CardGroup>
|
||||
|
||||
<Check>
  Remember: The best LLM choice is the one that consistently delivers the results you need within your operational constraints. Focus on understanding your requirements first, then select models that best match those needs.
</Check>

|
||||
## Current Model Landscape (June 2025)
|
||||
|
||||
<Warning>
  **Snapshot in Time**: The following model rankings represent current leaderboard standings as of June 2025, compiled from [LMSys Arena](https://arena.lmsys.org/), [Artificial Analysis](https://artificialanalysis.ai/), and other leading benchmarks. LLM performance, availability, and pricing change rapidly. Always conduct your own evaluations with your specific use cases and data.
</Warning>

|
||||
### Leading Models by Category
|
||||
@@ -608,7 +683,10 @@ Remember: The best LLM choice is the one that consistently delivers the results
|
||||
The tables below show a representative sample of current top-performing models across different categories, with guidance on their suitability for CrewAI agents:
|
||||
|
||||
<Note>
  These tables/metrics showcase selected leading models in each category and are not exhaustive. Many excellent models exist beyond those listed here. The goal is to illustrate the types of capabilities to look for rather than provide a complete catalog.
</Note>

|
||||
<Tabs>
|
||||
@@ -624,6 +702,7 @@ These tables/metrics showcase selected leading models in each category and are n
|
||||
| **Qwen3 235B (Reasoning)** | 62 | $2.63 | Moderate | Open-source alternative for reasoning tasks |
|
||||
|
||||
These models excel at multi-step reasoning and are ideal for agents that need to develop strategies, coordinate other agents, or analyze complex information.
|
||||
|
||||
</Tab>
|
||||
|
||||
<Tab title="Coding & Technical">
|
||||
@@ -638,6 +717,7 @@ These tables/metrics showcase selected leading models in each category and are n
|
||||
| **Llama 3.1 405B** | Good | 81.1% | $3.50 | Function calling LLM for tool-heavy workflows |
|
||||
|
||||
These models are optimized for code generation, debugging, and technical problem-solving, making them ideal for development-focused crews.
|
||||
|
||||
</Tab>
|
||||
|
||||
<Tab title="Speed & Efficiency">
|
||||
@@ -652,6 +732,7 @@ These tables/metrics showcase selected leading models in each category and are n
|
||||
| **Nova Micro** | High | 0.30s | $0.04 | Simple, fast task execution |
|
||||
|
||||
These models prioritize speed and efficiency, perfect for agents handling routine operations or requiring quick responses. **Pro tip**: Pairing these models with fast inference providers like Groq can achieve even better performance, especially for open-source models like Llama.
|
||||
|
||||
</Tab>
|
||||
|
||||
<Tab title="Balanced Performance">
|
||||
@@ -666,6 +747,7 @@ These tables/metrics showcase selected leading models in each category and are n
|
||||
| **Qwen3 32B** | 44 | Good | $1.23 | Budget-friendly versatility |
|
||||
|
||||
These models offer good performance across multiple dimensions, suitable for crews with diverse task requirements.
|
||||
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
@@ -676,24 +758,28 @@ These tables/metrics showcase selected leading models in each category and are n
|
||||
**When performance is the priority**: Use top-tier models like **o3**, **Gemini 2.5 Pro**, or **Claude 4 Sonnet** for manager LLMs and critical agents. These models excel at complex reasoning and coordination but come with higher costs.
|
||||
|
||||
**Strategy**: Implement a multi-model approach where premium models handle strategic thinking while efficient models handle routine operations.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Cost-Conscious Crews" icon="dollar-sign">
|
||||
**When budget is a primary constraint**: Focus on models like **DeepSeek R1**, **Llama 4 Scout**, or **Gemini 2.0 Flash**. These provide strong performance at significantly lower costs.
|
||||
|
||||
**Strategy**: Use cost-effective models for most agents, reserving premium models only for the most critical decision-making roles.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Specialized Workflows" icon="screwdriver-wrench">
|
||||
**For specific domain expertise**: Choose models optimized for your primary use case. **Claude 4** series for coding, **Gemini 2.5 Pro** for research, **Llama 405B** for function calling.
|
||||
|
||||
**Strategy**: Select models based on your crew's primary function, ensuring the core capability aligns with model strengths.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Enterprise & Privacy" icon="shield">
|
||||
**For data-sensitive operations**: Consider open-source models like **Llama 4** series, **DeepSeek V3**, or **Qwen3** that can be deployed locally while maintaining competitive performance.
|
||||
|
||||
**Strategy**: Deploy open-source models on private infrastructure, accepting potential performance trade-offs for data control.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
@@ -706,7 +792,10 @@ These tables/metrics showcase selected leading models in each category and are n
|
||||
- **Open Source Viability**: The gap between open-source and proprietary models continues to narrow, with models like Llama 4 Maverick and DeepSeek V3 offering competitive performance at attractive price points. Fast inference providers particularly shine with open-source models, often delivering better speed-to-cost ratios than proprietary alternatives.
|
||||
|
||||
<Info>
  **Testing is Essential**: Leaderboard rankings provide general guidance, but your specific use case, prompting style, and evaluation criteria may produce different results. Always test candidate models with your actual tasks and data before making final decisions.
</Info>

|
||||
### Practical Implementation Strategy
|
||||
@@ -716,12 +805,19 @@ These tables/metrics showcase selected leading models in each category and are n
|
||||
Begin with well-established models like **GPT-4.1**, **Claude 3.7 Sonnet**, or **Gemini 2.0 Flash** that offer good performance across multiple dimensions and have extensive real-world validation.
|
||||
</Step>
|
||||
|
||||
{" "}
|
||||
<Step title="Identify Specialized Needs">
|
||||
Determine if your crew has specific requirements (coding, reasoning, speed) that would benefit from specialized models like **Claude 4 Sonnet** for development or **o3** for complex analysis. For speed-critical applications, consider fast inference providers like **Groq** alongside model selection.
|
||||
Determine if your crew has specific requirements (coding, reasoning, speed)
|
||||
that would benefit from specialized models like **Claude 4 Sonnet** for
|
||||
development or **o3** for complex analysis. For speed-critical applications,
|
||||
consider fast inference providers like **Groq** alongside model selection.
|
||||
</Step>
|
||||
|
||||
{" "}
|
||||
<Step title="Implement Multi-Model Strategy">

Use different models for different agents based on their roles. High-capability models for managers and complex tasks, efficient models for routine operations; a short sketch of this split follows this step.

</Step>
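As a rough sketch of what this split can look like in code, the snippet below assigns a premium model to a manager-style agent and a cheaper model to a routine agent; both model identifiers are placeholders for whichever models you standardize on:

```python Code
from crewai import Agent, LLM

# Placeholder model identifiers; swap in your preferred premium/budget models.
premium_llm = LLM(model="gpt-4.1")
efficient_llm = LLM(model="gpt-4.1-mini")

manager = Agent(
    role="Project Manager",
    goal="Coordinate the crew and make the final decisions",
    backstory="Senior manager responsible for critical judgment calls.",
    llm=premium_llm,
)

summarizer = Agent(
    role="Summarizer",
    goal="Condense findings into short briefs",
    backstory="Handles routine, high-volume summarization work.",
    llm=efficient_llm,
)
```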
<Step title="Monitor and Optimize">

docs/en/learn/streaming-crew-execution.mdx (new file, 356 lines)

|
||||
---
|
||||
title: Streaming Crew Execution
|
||||
description: Stream real-time output from your CrewAI crew execution
|
||||
icon: wave-pulse
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
CrewAI provides the ability to stream real-time output during crew execution, allowing you to display results as they're generated rather than waiting for the entire process to complete. This feature is particularly useful for building interactive applications, providing user feedback, and monitoring long-running processes.
|
||||
|
||||
## How Streaming Works
|
||||
|
||||
When streaming is enabled, CrewAI captures LLM responses and tool calls as they happen, packaging them into structured chunks that include context about which task and agent is executing. You can iterate over these chunks in real-time and access the final result once execution completes.
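Conceptually, each streamed chunk carries a small bundle of fields. The sketch below is illustrative only, using the field names that appear in the examples later on this page rather than the actual class definition:

```python Code
from dataclasses import dataclass
from typing import Any


@dataclass
class StreamedChunkSketch:
    """Illustrative shape only; mirrors the fields used in the examples below."""

    content: str            # incremental text produced by the LLM
    task_name: str          # task currently executing
    task_index: int         # position of the task within the crew
    agent_role: str         # agent producing this chunk
    chunk_type: str         # TEXT or TOOL_CALL
    tool_call: Any | None   # populated only for TOOL_CALL chunks
```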
|
||||
|
||||
## Enabling Streaming
|
||||
|
||||
To enable streaming, set the `stream` parameter to `True` when creating your crew:
|
||||
|
||||
```python Code
|
||||
from crewai import Agent, Crew, Task
|
||||
|
||||
# Create your agents and tasks
|
||||
researcher = Agent(
|
||||
role="Research Analyst",
|
||||
goal="Gather comprehensive information on topics",
|
||||
backstory="You are an experienced researcher with excellent analytical skills.",
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Research the latest developments in AI",
|
||||
expected_output="A detailed report on recent AI advancements",
|
||||
agent=researcher,
|
||||
)
|
||||
|
||||
# Enable streaming
|
||||
crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[task],
|
||||
stream=True # Enable streaming output
|
||||
)
|
||||
```
|
||||
|
||||
## Synchronous Streaming
|
||||
|
||||
When you call `kickoff()` on a crew with streaming enabled, it returns a `CrewStreamingOutput` object that you can iterate over to receive chunks as they arrive:
|
||||
|
||||
```python Code
|
||||
# Start streaming execution
|
||||
streaming = crew.kickoff(inputs={"topic": "artificial intelligence"})
|
||||
|
||||
# Iterate over chunks as they arrive
|
||||
for chunk in streaming:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
# Access the final result after streaming completes
|
||||
result = streaming.result
|
||||
print(f"\n\nFinal output: {result.raw}")
|
||||
```
|
||||
|
||||
### Stream Chunk Information
|
||||
|
||||
Each chunk provides rich context about the execution:
|
||||
|
||||
```python Code
|
||||
streaming = crew.kickoff(inputs={"topic": "AI"})
|
||||
|
||||
for chunk in streaming:
|
||||
print(f"Task: {chunk.task_name} (index {chunk.task_index})")
|
||||
print(f"Agent: {chunk.agent_role}")
|
||||
print(f"Content: {chunk.content}")
|
||||
print(f"Type: {chunk.chunk_type}") # TEXT or TOOL_CALL
|
||||
if chunk.tool_call:
|
||||
print(f"Tool: {chunk.tool_call.tool_name}")
|
||||
print(f"Arguments: {chunk.tool_call.arguments}")
|
||||
```
|
||||
|
||||
### Accessing Streaming Results
|
||||
|
||||
The `CrewStreamingOutput` object provides several useful properties:
|
||||
|
||||
```python Code
|
||||
streaming = crew.kickoff(inputs={"topic": "AI"})
|
||||
|
||||
# Iterate and collect chunks
|
||||
for chunk in streaming:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
# After iteration completes
|
||||
print(f"\nCompleted: {streaming.is_completed}")
|
||||
print(f"Full text: {streaming.get_full_text()}")
|
||||
print(f"All chunks: {len(streaming.chunks)}")
|
||||
print(f"Final result: {streaming.result.raw}")
|
||||
```
|
||||
|
||||
## Asynchronous Streaming
|
||||
|
||||
For async applications, you can use either `akickoff()` (native async) or `kickoff_async()` (thread-based) with async iteration:
|
||||
|
||||
### Native Async with `akickoff()`
|
||||
|
||||
The `akickoff()` method provides true native async execution throughout the entire chain:
|
||||
|
||||
```python Code
|
||||
import asyncio
|
||||
|
||||
async def stream_crew():
|
||||
crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[task],
|
||||
stream=True
|
||||
)
|
||||
|
||||
# Start native async streaming
|
||||
streaming = await crew.akickoff(inputs={"topic": "AI"})
|
||||
|
||||
# Async iteration over chunks
|
||||
async for chunk in streaming:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
# Access final result
|
||||
result = streaming.result
|
||||
print(f"\n\nFinal output: {result.raw}")
|
||||
|
||||
asyncio.run(stream_crew())
|
||||
```
|
||||
|
||||
### Thread-Based Async with `kickoff_async()`
|
||||
|
||||
For simpler async integration or backward compatibility:
|
||||
|
||||
```python Code
|
||||
import asyncio
|
||||
|
||||
async def stream_crew():
|
||||
crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[task],
|
||||
stream=True
|
||||
)
|
||||
|
||||
# Start thread-based async streaming
|
||||
streaming = await crew.kickoff_async(inputs={"topic": "AI"})
|
||||
|
||||
# Async iteration over chunks
|
||||
async for chunk in streaming:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
# Access final result
|
||||
result = streaming.result
|
||||
print(f"\n\nFinal output: {result.raw}")
|
||||
|
||||
asyncio.run(stream_crew())
|
||||
```
|
||||
|
||||
<Note>
|
||||
For high-concurrency workloads, `akickoff()` is recommended as it uses native async for task execution, memory operations, and knowledge retrieval. See the [Kickoff Crew Asynchronously](/en/learn/kickoff-async) guide for more details.
|
||||
</Note>
|
||||
|
||||
## Streaming with kickoff_for_each
|
||||
|
||||
When executing a crew for multiple inputs with `kickoff_for_each()`, streaming works differently depending on whether you use sync or async:
|
||||
|
||||
### Synchronous kickoff_for_each
|
||||
|
||||
With synchronous `kickoff_for_each()`, you get a list of `CrewStreamingOutput` objects, one for each input:
|
||||
|
||||
```python Code
|
||||
crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[task],
|
||||
stream=True
|
||||
)
|
||||
|
||||
inputs_list = [
|
||||
{"topic": "AI in healthcare"},
|
||||
{"topic": "AI in finance"}
|
||||
]
|
||||
|
||||
# Returns list of streaming outputs
|
||||
streaming_outputs = crew.kickoff_for_each(inputs=inputs_list)
|
||||
|
||||
# Iterate over each streaming output
|
||||
for i, streaming in enumerate(streaming_outputs):
|
||||
print(f"\n=== Input {i + 1} ===")
|
||||
for chunk in streaming:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
result = streaming.result
|
||||
print(f"\n\nResult {i + 1}: {result.raw}")
|
||||
```
|
||||
|
||||
### Asynchronous kickoff_for_each_async
|
||||
|
||||
With async `kickoff_for_each_async()`, you get a single `CrewStreamingOutput` that yields chunks from all crews as they arrive concurrently:
|
||||
|
||||
```python Code
|
||||
import asyncio
|
||||
|
||||
async def stream_multiple_crews():
|
||||
crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[task],
|
||||
stream=True
|
||||
)
|
||||
|
||||
inputs_list = [
|
||||
{"topic": "AI in healthcare"},
|
||||
{"topic": "AI in finance"}
|
||||
]
|
||||
|
||||
# Returns single streaming output for all crews
|
||||
streaming = await crew.kickoff_for_each_async(inputs=inputs_list)
|
||||
|
||||
# Chunks from all crews arrive as they're generated
|
||||
async for chunk in streaming:
|
||||
print(f"[{chunk.task_name}] {chunk.content}", end="", flush=True)
|
||||
|
||||
# Access all results
|
||||
results = streaming.results # List of CrewOutput objects
|
||||
for i, result in enumerate(results):
|
||||
print(f"\n\nResult {i + 1}: {result.raw}")
|
||||
|
||||
asyncio.run(stream_multiple_crews())
|
||||
```
|
||||
|
||||
## Stream Chunk Types
|
||||
|
||||
Chunks can be of different types, indicated by the `chunk_type` field:
|
||||
|
||||
### TEXT Chunks
|
||||
|
||||
Standard text content from LLM responses:
|
||||
|
||||
```python Code
|
||||
for chunk in streaming:
|
||||
if chunk.chunk_type == StreamChunkType.TEXT:
|
||||
print(chunk.content, end="", flush=True)
|
||||
```
|
||||
|
||||
### TOOL_CALL Chunks
|
||||
|
||||
Information about tool calls being made:
|
||||
|
||||
```python Code
|
||||
for chunk in streaming:
|
||||
if chunk.chunk_type == StreamChunkType.TOOL_CALL:
|
||||
print(f"\nCalling tool: {chunk.tool_call.tool_name}")
|
||||
print(f"Arguments: {chunk.tool_call.arguments}")
|
||||
```
|
||||
|
||||
## Practical Example: Building a UI with Streaming
|
||||
|
||||
Here's a complete example showing how to build an interactive application with streaming:
|
||||
|
||||
```python Code
|
||||
import asyncio
|
||||
from crewai import Agent, Crew, Task
|
||||
from crewai.types.streaming import StreamChunkType
|
||||
|
||||
async def interactive_research():
|
||||
# Create crew with streaming enabled
|
||||
researcher = Agent(
|
||||
role="Research Analyst",
|
||||
goal="Provide detailed analysis on any topic",
|
||||
backstory="You are an expert researcher with broad knowledge.",
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Research and analyze: {topic}",
|
||||
expected_output="A comprehensive analysis with key insights",
|
||||
agent=researcher,
|
||||
)
|
||||
|
||||
crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[task],
|
||||
stream=True,
|
||||
verbose=False
|
||||
)
|
||||
|
||||
# Get user input
|
||||
topic = input("Enter a topic to research: ")
|
||||
|
||||
print(f"\n{'='*60}")
|
||||
print(f"Researching: {topic}")
|
||||
print(f"{'='*60}\n")
|
||||
|
||||
# Start streaming execution
|
||||
streaming = await crew.kickoff_async(inputs={"topic": topic})
|
||||
|
||||
current_task = ""
|
||||
async for chunk in streaming:
|
||||
# Show task transitions
|
||||
if chunk.task_name != current_task:
|
||||
current_task = chunk.task_name
|
||||
print(f"\n[{chunk.agent_role}] Working on: {chunk.task_name}")
|
||||
print("-" * 60)
|
||||
|
||||
# Display text chunks
|
||||
if chunk.chunk_type == StreamChunkType.TEXT:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
# Display tool calls
|
||||
elif chunk.chunk_type == StreamChunkType.TOOL_CALL and chunk.tool_call:
|
||||
print(f"\n🔧 Using tool: {chunk.tool_call.tool_name}")
|
||||
|
||||
# Show final result
|
||||
result = streaming.result
|
||||
print(f"\n\n{'='*60}")
|
||||
print("Analysis Complete!")
|
||||
print(f"{'='*60}")
|
||||
print(f"\nToken Usage: {result.token_usage}")
|
||||
|
||||
asyncio.run(interactive_research())
|
||||
```
|
||||
|
||||
## Use Cases
|
||||
|
||||
Streaming is particularly valuable for:
|
||||
|
||||
- **Interactive Applications**: Provide real-time feedback to users as agents work
|
||||
- **Long-Running Tasks**: Show progress for research, analysis, or content generation
|
||||
- **Debugging and Monitoring**: Observe agent behavior and decision-making in real-time
|
||||
- **User Experience**: Reduce perceived latency by showing incremental results
|
||||
- **Live Dashboards**: Build monitoring interfaces that display crew execution status
|
||||
|
||||
## Important Notes
|
||||
|
||||
- Streaming automatically enables LLM streaming for all agents in the crew
|
||||
- You must iterate through all chunks before accessing the `.result` property (see the sketch below)
|
||||
- For `kickoff_for_each_async()` with streaming, use `.results` (plural) to get all outputs
|
||||
- Streaming adds minimal overhead and can actually improve perceived performance
|
||||
- Each chunk includes full context (task, agent, chunk type) for rich UIs
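For the second note above, a minimal sketch of the iterate-before-`.result` pattern, assuming a crew created with `stream=True` as in the earlier examples, is to drain the iterator when you only need the final output:

```python Code
streaming = crew.kickoff(inputs={"topic": "AI"})

# Discard the chunks; iteration still has to finish before .result is available.
for _ in streaming:
    pass

result = streaming.result
print(result.raw)
```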
|
||||
|
||||
## Error Handling
|
||||
|
||||
Handle errors during streaming execution:
|
||||
|
||||
```python Code
|
||||
streaming = crew.kickoff(inputs={"topic": "AI"})
|
||||
|
||||
try:
|
||||
for chunk in streaming:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
result = streaming.result
|
||||
print(f"\nSuccess: {result.raw}")
|
||||
|
||||
except Exception as e:
|
||||
print(f"\nError during streaming: {e}")
|
||||
if streaming.is_completed:
|
||||
print("Streaming completed but an error occurred")
|
||||
```
|
||||
|
||||
By leveraging streaming, you can build more responsive and interactive applications with CrewAI, providing users with real-time visibility into agent execution and results.
|

docs/en/learn/streaming-flow-execution.mdx (new file, 450 lines)

---
|
||||
title: Streaming Flow Execution
|
||||
description: Stream real-time output from your CrewAI flow execution
|
||||
icon: wave-pulse
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
CrewAI Flows support streaming output, allowing you to receive real-time updates as your flow executes. This feature enables you to build responsive applications that display results incrementally, provide live progress updates, and create better user experiences for long-running workflows.
|
||||
|
||||
## How Flow Streaming Works
|
||||
|
||||
When streaming is enabled on a Flow, CrewAI captures and streams output from any crews or LLM calls within the flow. The stream delivers structured chunks containing the content, task context, and agent information as execution progresses.
|
||||
|
||||
## Enabling Streaming
|
||||
|
||||
To enable streaming, set the `stream` attribute to `True` on your Flow class:
|
||||
|
||||
```python Code
|
||||
from crewai.flow.flow import Flow, listen, start
|
||||
from crewai import Agent, Crew, Task
|
||||
|
||||
class ResearchFlow(Flow):
|
||||
stream = True # Enable streaming for the entire flow
|
||||
|
||||
@start()
|
||||
def initialize(self):
|
||||
return {"topic": "AI trends"}
|
||||
|
||||
@listen(initialize)
|
||||
def research_topic(self, data):
|
||||
researcher = Agent(
|
||||
role="Research Analyst",
|
||||
goal="Research topics thoroughly",
|
||||
backstory="Expert researcher with analytical skills",
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Research {topic} and provide insights",
|
||||
expected_output="Detailed research findings",
|
||||
agent=researcher,
|
||||
)
|
||||
|
||||
crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[task],
|
||||
)
|
||||
|
||||
return crew.kickoff(inputs=data)
|
||||
```
|
||||
|
||||
## Synchronous Streaming
|
||||
|
||||
When you call `kickoff()` on a flow with streaming enabled, it returns a `FlowStreamingOutput` object that you can iterate over:
|
||||
|
||||
```python Code
|
||||
flow = ResearchFlow()
|
||||
|
||||
# Start streaming execution
|
||||
streaming = flow.kickoff()
|
||||
|
||||
# Iterate over chunks as they arrive
|
||||
for chunk in streaming:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
# Access the final result after streaming completes
|
||||
result = streaming.result
|
||||
print(f"\n\nFinal output: {result}")
|
||||
```
|
||||
|
||||
### Stream Chunk Information
|
||||
|
||||
Each chunk provides context about where it originated in the flow:
|
||||
|
||||
```python Code
|
||||
streaming = flow.kickoff()
|
||||
|
||||
for chunk in streaming:
|
||||
print(f"Agent: {chunk.agent_role}")
|
||||
print(f"Task: {chunk.task_name}")
|
||||
print(f"Content: {chunk.content}")
|
||||
print(f"Type: {chunk.chunk_type}") # TEXT or TOOL_CALL
|
||||
```
|
||||
|
||||
### Accessing Streaming Properties
|
||||
|
||||
The `FlowStreamingOutput` object provides useful properties and methods:
|
||||
|
||||
```python Code
|
||||
streaming = flow.kickoff()
|
||||
|
||||
# Iterate and collect chunks
|
||||
for chunk in streaming:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
# After iteration completes
|
||||
print(f"\nCompleted: {streaming.is_completed}")
|
||||
print(f"Full text: {streaming.get_full_text()}")
|
||||
print(f"Total chunks: {len(streaming.chunks)}")
|
||||
print(f"Final result: {streaming.result}")
|
||||
```
|
||||
|
||||
## Asynchronous Streaming
|
||||
|
||||
For async applications, use `kickoff_async()` with async iteration:
|
||||
|
||||
```python Code
|
||||
import asyncio
|
||||
|
||||
async def stream_flow():
|
||||
flow = ResearchFlow()
|
||||
|
||||
# Start async streaming
|
||||
streaming = await flow.kickoff_async()
|
||||
|
||||
# Async iteration over chunks
|
||||
async for chunk in streaming:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
# Access final result
|
||||
result = streaming.result
|
||||
print(f"\n\nFinal output: {result}")
|
||||
|
||||
asyncio.run(stream_flow())
|
||||
```
|
||||
|
||||
## Streaming with Multi-Step Flows
|
||||
|
||||
Streaming works seamlessly across multiple flow steps, including flows that execute multiple crews:
|
||||
|
||||
```python Code
|
||||
from crewai.flow.flow import Flow, listen, start
|
||||
from crewai import Agent, Crew, Task
|
||||
|
||||
class MultiStepFlow(Flow):
|
||||
stream = True
|
||||
|
||||
@start()
|
||||
def research_phase(self):
|
||||
"""First crew: Research the topic."""
|
||||
researcher = Agent(
|
||||
role="Research Analyst",
|
||||
goal="Gather comprehensive information",
|
||||
backstory="Expert at finding relevant information",
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Research AI developments in healthcare",
|
||||
expected_output="Research findings on AI in healthcare",
|
||||
agent=researcher,
|
||||
)
|
||||
|
||||
crew = Crew(agents=[researcher], tasks=[task])
|
||||
result = crew.kickoff()
|
||||
|
||||
self.state["research"] = result.raw
|
||||
return result.raw
|
||||
|
||||
@listen(research_phase)
|
||||
def analysis_phase(self, research_data):
|
||||
"""Second crew: Analyze the research."""
|
||||
analyst = Agent(
|
||||
role="Data Analyst",
|
||||
goal="Analyze information and extract insights",
|
||||
backstory="Expert at identifying patterns and trends",
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Analyze this research: {research}",
|
||||
expected_output="Key insights and trends",
|
||||
agent=analyst,
|
||||
)
|
||||
|
||||
crew = Crew(agents=[analyst], tasks=[task])
|
||||
return crew.kickoff(inputs={"research": research_data})
|
||||
|
||||
|
||||
# Stream across both phases
|
||||
flow = MultiStepFlow()
|
||||
streaming = flow.kickoff()
|
||||
|
||||
current_step = ""
|
||||
for chunk in streaming:
|
||||
# Track which flow step is executing
|
||||
if chunk.task_name != current_step:
|
||||
current_step = chunk.task_name
|
||||
print(f"\n\n=== {chunk.task_name} ===\n")
|
||||
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
result = streaming.result
|
||||
print(f"\n\nFinal analysis: {result}")
|
||||
```
|
||||
|
||||
## Practical Example: Progress Dashboard
|
||||
|
||||
Here's a complete example showing how to build a progress dashboard with streaming:
|
||||
|
||||
```python Code
|
||||
import asyncio
|
||||
from crewai.flow.flow import Flow, listen, start
|
||||
from crewai import Agent, Crew, Task
|
||||
from crewai.types.streaming import StreamChunkType
|
||||
|
||||
class ResearchPipeline(Flow):
|
||||
stream = True
|
||||
|
||||
@start()
|
||||
def gather_data(self):
|
||||
researcher = Agent(
|
||||
role="Data Gatherer",
|
||||
goal="Collect relevant information",
|
||||
backstory="Skilled at finding quality sources",
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Gather data on renewable energy trends",
|
||||
expected_output="Collection of relevant data points",
|
||||
agent=researcher,
|
||||
)
|
||||
|
||||
crew = Crew(agents=[researcher], tasks=[task])
|
||||
result = crew.kickoff()
|
||||
self.state["data"] = result.raw
|
||||
return result.raw
|
||||
|
||||
@listen(gather_data)
|
||||
def analyze_data(self, data):
|
||||
analyst = Agent(
|
||||
role="Data Analyst",
|
||||
goal="Extract meaningful insights",
|
||||
backstory="Expert at data analysis",
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Analyze: {data}",
|
||||
expected_output="Key insights and trends",
|
||||
agent=analyst,
|
||||
)
|
||||
|
||||
crew = Crew(agents=[analyst], tasks=[task])
|
||||
return crew.kickoff(inputs={"data": data})
|
||||
|
||||
|
||||
async def run_with_dashboard():
|
||||
flow = ResearchPipeline()
|
||||
|
||||
print("="*60)
|
||||
print("RESEARCH PIPELINE DASHBOARD")
|
||||
print("="*60)
|
||||
|
||||
streaming = await flow.kickoff_async()
|
||||
|
||||
current_agent = ""
|
||||
current_task = ""
|
||||
chunk_count = 0
|
||||
|
||||
async for chunk in streaming:
|
||||
chunk_count += 1
|
||||
|
||||
# Display phase transitions
|
||||
if chunk.task_name != current_task:
|
||||
current_task = chunk.task_name
|
||||
current_agent = chunk.agent_role
|
||||
print(f"\n\n📋 Phase: {current_task}")
|
||||
print(f"👤 Agent: {current_agent}")
|
||||
print("-" * 60)
|
||||
|
||||
# Display text output
|
||||
if chunk.chunk_type == StreamChunkType.TEXT:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
# Display tool usage
|
||||
elif chunk.chunk_type == StreamChunkType.TOOL_CALL and chunk.tool_call:
|
||||
print(f"\n🔧 Tool: {chunk.tool_call.tool_name}")
|
||||
|
||||
# Show completion summary
|
||||
result = streaming.result
|
||||
print(f"\n\n{'='*60}")
|
||||
print("PIPELINE COMPLETE")
|
||||
print(f"{'='*60}")
|
||||
print(f"Total chunks: {chunk_count}")
|
||||
print(f"Final output length: {len(str(result))} characters")
|
||||
|
||||
asyncio.run(run_with_dashboard())
|
||||
```
|
||||
|
||||
## Streaming with State Management
|
||||
|
||||
Streaming works naturally with Flow state management:
|
||||
|
||||
```python Code
|
||||
from pydantic import BaseModel
|
||||
|
||||
class AnalysisState(BaseModel):
|
||||
topic: str = ""
|
||||
research: str = ""
|
||||
insights: str = ""
|
||||
|
||||
class StatefulStreamingFlow(Flow[AnalysisState]):
|
||||
stream = True
|
||||
|
||||
@start()
|
||||
def research(self):
|
||||
# State is available during streaming
|
||||
topic = self.state.topic
|
||||
print(f"Researching: {topic}")
|
||||
|
||||
researcher = Agent(
|
||||
role="Researcher",
|
||||
goal="Research topics thoroughly",
|
||||
backstory="Expert researcher",
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description=f"Research {topic}",
|
||||
expected_output="Research findings",
|
||||
agent=researcher,
|
||||
)
|
||||
|
||||
crew = Crew(agents=[researcher], tasks=[task])
|
||||
result = crew.kickoff()
|
||||
|
||||
self.state.research = result.raw
|
||||
return result.raw
|
||||
|
||||
@listen(research)
|
||||
def analyze(self, research):
|
||||
# Access updated state
|
||||
print(f"Analyzing {len(self.state.research)} chars of research")
|
||||
|
||||
analyst = Agent(
|
||||
role="Analyst",
|
||||
goal="Extract insights",
|
||||
backstory="Expert analyst",
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Analyze: {research}",
|
||||
expected_output="Key insights",
|
||||
agent=analyst,
|
||||
)
|
||||
|
||||
crew = Crew(agents=[analyst], tasks=[task])
|
||||
result = crew.kickoff(inputs={"research": research})
|
||||
|
||||
self.state.insights = result.raw
|
||||
return result.raw
|
||||
|
||||
|
||||
# Run with streaming
|
||||
flow = StatefulStreamingFlow()
|
||||
streaming = flow.kickoff(inputs={"topic": "quantum computing"})
|
||||
|
||||
for chunk in streaming:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
result = streaming.result
|
||||
print(f"\n\nFinal state:")
|
||||
print(f"Topic: {flow.state.topic}")
|
||||
print(f"Research length: {len(flow.state.research)}")
|
||||
print(f"Insights length: {len(flow.state.insights)}")
|
||||
```
|
||||
|
||||
## Use Cases
|
||||
|
||||
Flow streaming is particularly valuable for:
|
||||
|
||||
- **Multi-Stage Workflows**: Show progress across research, analysis, and synthesis phases
|
||||
- **Complex Pipelines**: Provide visibility into long-running data processing flows
|
||||
- **Interactive Applications**: Build responsive UIs that display intermediate results
|
||||
- **Monitoring and Debugging**: Observe flow execution and crew interactions in real-time
|
||||
- **Progress Tracking**: Show users which stage of the workflow is currently executing
|
||||
- **Live Dashboards**: Create monitoring interfaces for production flows
|
||||
|
||||
## Stream Chunk Types
|
||||
|
||||
Like crew streaming, flow chunks can be of different types:
|
||||
|
||||
### TEXT Chunks
|
||||
|
||||
Standard text content from LLM responses:
|
||||
|
||||
```python Code
|
||||
for chunk in streaming:
|
||||
if chunk.chunk_type == StreamChunkType.TEXT:
|
||||
print(chunk.content, end="", flush=True)
|
||||
```
|
||||
|
||||
### TOOL_CALL Chunks
|
||||
|
||||
Information about tool calls within the flow:
|
||||
|
||||
```python Code
|
||||
for chunk in streaming:
|
||||
if chunk.chunk_type == StreamChunkType.TOOL_CALL and chunk.tool_call:
|
||||
print(f"\nTool: {chunk.tool_call.tool_name}")
|
||||
print(f"Args: {chunk.tool_call.arguments}")
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
Handle errors gracefully during streaming:
|
||||
|
||||
```python Code
|
||||
flow = ResearchFlow()
|
||||
streaming = flow.kickoff()
|
||||
|
||||
try:
|
||||
for chunk in streaming:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
result = streaming.result
|
||||
print(f"\nSuccess! Result: {result}")
|
||||
|
||||
except Exception as e:
|
||||
print(f"\nError during flow execution: {e}")
|
||||
if streaming.is_completed:
|
||||
print("Streaming completed but flow encountered an error")
|
||||
```
|
||||
|
||||
## Important Notes
|
||||
|
||||
- Streaming automatically enables LLM streaming for any crews used within the flow
|
||||
- You must iterate through all chunks before accessing the `.result` property
|
||||
- Streaming works with both structured and unstructured flow state
|
||||
- Flow streaming captures output from all crews and LLM calls in the flow
|
||||
- Each chunk includes context about which agent and task generated it
|
||||
- Streaming adds minimal overhead to flow execution
|
||||
|
||||
## Combining with Flow Visualization
|
||||
|
||||
You can combine streaming with flow visualization to provide a complete picture:
|
||||
|
||||
```python Code
|
||||
# Generate flow visualization
|
||||
flow = ResearchFlow()
|
||||
flow.plot("research_flow") # Creates HTML visualization
|
||||
|
||||
# Run with streaming
|
||||
streaming = flow.kickoff()
|
||||
for chunk in streaming:
|
||||
print(chunk.content, end="", flush=True)
|
||||
|
||||
result = streaming.result
|
||||
print(f"\nFlow complete! View structure at: research_flow.html")
|
||||
```
|
||||
|
||||
By leveraging flow streaming, you can build sophisticated, responsive applications that provide users with real-time visibility into complex multi-stage workflows, making your AI automations more transparent and engaging.

docs/en/learn/tool-hooks.mdx (new file, 600 lines)

---
|
||||
title: Tool Call Hooks
|
||||
description: Learn how to use tool call hooks to intercept, modify, and control tool execution in CrewAI
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
Tool Call Hooks provide fine-grained control over tool execution during agent operations. These hooks allow you to intercept tool calls, modify inputs, transform outputs, implement safety checks, and add comprehensive logging or monitoring.
|
||||
|
||||
## Overview
|
||||
|
||||
Tool hooks are executed at two critical points:
|
||||
- **Before Tool Call**: Modify inputs, validate parameters, or block execution
|
||||
- **After Tool Call**: Transform results, sanitize outputs, or log execution details
|
||||
|
||||
## Hook Types
|
||||
|
||||
### Before Tool Call Hooks
|
||||
|
||||
Executed before every tool execution, these hooks can:
|
||||
- Inspect and modify tool inputs
|
||||
- Block tool execution based on conditions
|
||||
- Implement approval gates for dangerous operations
|
||||
- Validate parameters
|
||||
- Log tool invocations
|
||||
|
||||
**Signature:**
|
||||
```python
|
||||
def before_hook(context: ToolCallHookContext) -> bool | None:
|
||||
# Return False to block execution
|
||||
# Return True or None to allow execution
|
||||
...
|
||||
```
|
||||
|
||||
### After Tool Call Hooks
|
||||
|
||||
Executed after every tool execution, these hooks can:
|
||||
- Modify or sanitize tool results
|
||||
- Add metadata or formatting
|
||||
- Log execution results
|
||||
- Implement result validation
|
||||
- Transform output formats
|
||||
|
||||
**Signature:**
|
||||
```python
|
||||
def after_hook(context: ToolCallHookContext) -> str | None:
|
||||
# Return modified result string
|
||||
# Return None to keep original result
|
||||
...
|
||||
```
|
||||
|
||||
## Tool Hook Context
|
||||
|
||||
The `ToolCallHookContext` object provides comprehensive access to tool execution state:
|
||||
|
||||
```python
|
||||
class ToolCallHookContext:
|
||||
tool_name: str # Name of the tool being called
|
||||
tool_input: dict[str, Any] # Mutable tool input parameters
|
||||
tool: CrewStructuredTool # Tool instance reference
|
||||
agent: Agent | BaseAgent | None # Agent executing the tool
|
||||
task: Task | None # Current task
|
||||
crew: Crew | None # Crew instance
|
||||
tool_result: str | None # Tool result (after hooks only)
|
||||
```
|
||||
|
||||
### Modifying Tool Inputs
|
||||
|
||||
**Important:** Always modify tool inputs in-place:
|
||||
|
||||
```python
|
||||
# ✅ Correct - modify in-place
|
||||
def sanitize_input(context: ToolCallHookContext) -> None:
|
||||
context.tool_input['query'] = context.tool_input['query'].lower()
|
||||
|
||||
# ❌ Wrong - replaces dict reference
|
||||
def wrong_approach(context: ToolCallHookContext) -> None:
|
||||
context.tool_input = {'query': 'new query'}
|
||||
```
|
||||
|
||||
## Registration Methods
|
||||
|
||||
### 1. Global Hook Registration
|
||||
|
||||
Register hooks that apply to all tool calls across all crews:
|
||||
|
||||
```python
|
||||
from crewai.hooks import register_before_tool_call_hook, register_after_tool_call_hook
|
||||
|
||||
def log_tool_call(context):
|
||||
print(f"Tool: {context.tool_name}")
|
||||
print(f"Input: {context.tool_input}")
|
||||
return None # Allow execution
|
||||
|
||||
register_before_tool_call_hook(log_tool_call)
|
||||
```
|
||||
|
||||
### 2. Decorator-Based Registration
|
||||
|
||||
Use decorators for cleaner syntax:
|
||||
|
||||
```python
|
||||
from crewai.hooks import before_tool_call, after_tool_call
|
||||
|
||||
@before_tool_call
|
||||
def block_dangerous_tools(context):
|
||||
dangerous_tools = ['delete_database', 'drop_table', 'rm_rf']
|
||||
if context.tool_name in dangerous_tools:
|
||||
print(f"⛔ Blocked dangerous tool: {context.tool_name}")
|
||||
return False # Block execution
|
||||
return None
|
||||
|
||||
@after_tool_call
|
||||
def sanitize_results(context):
|
||||
if context.tool_result and "password" in context.tool_result.lower():
|
||||
return context.tool_result.replace("password", "[REDACTED]")
|
||||
return None
|
||||
```
|
||||
|
||||
### 3. Crew-Scoped Hooks
|
||||
|
||||
Register hooks for a specific crew instance:
|
||||
|
||||
```python
from crewai import Crew, Process
from crewai.project import CrewBase, crew
# Import path for the crew-scoped hook decorators is assumed here to mirror
# the global decorators shown above; adjust if your crewai version exposes them elsewhere.
from crewai.hooks import after_tool_call_crew, before_tool_call_crew


@CrewBase
class MyProjCrew:
|
||||
@before_tool_call_crew
|
||||
def validate_tool_inputs(self, context):
|
||||
# Only applies to this crew
|
||||
if context.tool_name == "web_search":
|
||||
if not context.tool_input.get('query'):
|
||||
print("❌ Invalid search query")
|
||||
return False
|
||||
return None
|
||||
|
||||
@after_tool_call_crew
|
||||
def log_tool_results(self, context):
|
||||
# Crew-specific tool logging
|
||||
print(f"✅ {context.tool_name} completed")
|
||||
return None
|
||||
|
||||
@crew
|
||||
def crew(self) -> Crew:
|
||||
return Crew(
|
||||
agents=self.agents,
|
||||
tasks=self.tasks,
|
||||
process=Process.sequential,
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Common Use Cases
|
||||
|
||||
### 1. Safety Guardrails
|
||||
|
||||
```python
|
||||
@before_tool_call
|
||||
def safety_check(context: ToolCallHookContext) -> bool | None:
|
||||
# Block tools that could cause harm
|
||||
destructive_tools = [
|
||||
'delete_file',
|
||||
'drop_table',
|
||||
'remove_user',
|
||||
'system_shutdown'
|
||||
]
|
||||
|
||||
if context.tool_name in destructive_tools:
|
||||
print(f"🛑 Blocked destructive tool: {context.tool_name}")
|
||||
return False
|
||||
|
||||
# Warn on sensitive operations
|
||||
sensitive_tools = ['send_email', 'post_to_social_media', 'charge_payment']
|
||||
if context.tool_name in sensitive_tools:
|
||||
print(f"⚠️ Executing sensitive tool: {context.tool_name}")
|
||||
|
||||
return None
|
||||
```
|
||||
|
||||
### 2. Human Approval Gate
|
||||
|
||||
```python
|
||||
@before_tool_call
|
||||
def require_approval_for_actions(context: ToolCallHookContext) -> bool | None:
|
||||
approval_required = [
|
||||
'send_email',
|
||||
'make_purchase',
|
||||
'delete_file',
|
||||
'post_message'
|
||||
]
|
||||
|
||||
if context.tool_name in approval_required:
|
||||
response = context.request_human_input(
|
||||
prompt=f"Approve {context.tool_name}?",
|
||||
default_message=f"Input: {context.tool_input}\nType 'yes' to approve:"
|
||||
)
|
||||
|
||||
if response.lower() != 'yes':
|
||||
print(f"❌ Tool execution denied: {context.tool_name}")
|
||||
return False
|
||||
|
||||
return None
|
||||
```
|
||||
|
||||
### 3. Input Validation and Sanitization
|
||||
|
||||
```python
|
||||
@before_tool_call
|
||||
def validate_and_sanitize_inputs(context: ToolCallHookContext) -> bool | None:
|
||||
# Validate search queries
|
||||
if context.tool_name == 'web_search':
|
||||
query = context.tool_input.get('query', '')
|
||||
if len(query) < 3:
|
||||
print("❌ Search query too short")
|
||||
return False
|
||||
|
||||
# Sanitize query
|
||||
context.tool_input['query'] = query.strip().lower()
|
||||
|
||||
# Validate file paths
|
||||
if context.tool_name == 'read_file':
|
||||
path = context.tool_input.get('path', '')
|
||||
if '..' in path or path.startswith('/'):
|
||||
print("❌ Invalid file path")
|
||||
return False
|
||||
|
||||
return None
|
||||
```
|
||||
|
||||
### 4. Result Sanitization
|
||||
|
||||
```python
|
||||
@after_tool_call
|
||||
def sanitize_sensitive_data(context: ToolCallHookContext) -> str | None:
|
||||
if not context.tool_result:
|
||||
return None
|
||||
|
||||
import re
|
||||
result = context.tool_result
|
||||
|
||||
# Remove API keys
|
||||
result = re.sub(
|
||||
r'(api[_-]?key|token)["\']?\s*[:=]\s*["\']?[\w-]+',
|
||||
r'\1: [REDACTED]',
|
||||
result,
|
||||
flags=re.IGNORECASE
|
||||
)
|
||||
|
||||
# Remove email addresses
|
||||
result = re.sub(
|
||||
r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b',
|
||||
'[EMAIL-REDACTED]',
|
||||
result
|
||||
)
|
||||
|
||||
# Remove credit card numbers
|
||||
result = re.sub(
|
||||
r'\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b',
|
||||
'[CARD-REDACTED]',
|
||||
result
|
||||
)
|
||||
|
||||
return result
|
||||
```
|
||||
|
||||
### 5. Tool Usage Analytics
|
||||
|
||||
```python
|
||||
import time
|
||||
from collections import defaultdict
|
||||
|
||||
tool_stats = defaultdict(lambda: {'count': 0, 'total_time': 0, 'failures': 0})
|
||||
|
||||
@before_tool_call
|
||||
def start_timer(context: ToolCallHookContext) -> None:
|
||||
context.tool_input['_start_time'] = time.time()
|
||||
return None
|
||||
|
||||
@after_tool_call
|
||||
def track_tool_usage(context: ToolCallHookContext) -> None:
|
||||
start_time = context.tool_input.get('_start_time', time.time())
|
||||
duration = time.time() - start_time
|
||||
|
||||
tool_stats[context.tool_name]['count'] += 1
|
||||
tool_stats[context.tool_name]['total_time'] += duration
|
||||
|
||||
if not context.tool_result or 'error' in context.tool_result.lower():
|
||||
tool_stats[context.tool_name]['failures'] += 1
|
||||
|
||||
print(f"""
|
||||
📊 Tool Stats for {context.tool_name}:
|
||||
- Executions: {tool_stats[context.tool_name]['count']}
|
||||
- Avg Time: {tool_stats[context.tool_name]['total_time'] / tool_stats[context.tool_name]['count']:.2f}s
|
||||
- Failures: {tool_stats[context.tool_name]['failures']}
|
||||
""")
|
||||
|
||||
return None
|
||||
```
|
||||
|
||||
### 6. Rate Limiting
|
||||
|
||||
```python
|
||||
from collections import defaultdict
|
||||
from datetime import datetime, timedelta
|
||||
|
||||
tool_call_history = defaultdict(list)
|
||||
|
||||
@before_tool_call
|
||||
def rate_limit_tools(context: ToolCallHookContext) -> bool | None:
|
||||
tool_name = context.tool_name
|
||||
now = datetime.now()
|
||||
|
||||
# Clean old entries (older than 1 minute)
|
||||
tool_call_history[tool_name] = [
|
||||
call_time for call_time in tool_call_history[tool_name]
|
||||
if now - call_time < timedelta(minutes=1)
|
||||
]
|
||||
|
||||
# Check rate limit (max 10 calls per minute)
|
||||
if len(tool_call_history[tool_name]) >= 10:
|
||||
print(f"🚫 Rate limit exceeded for {tool_name}")
|
||||
return False
|
||||
|
||||
# Record this call
|
||||
tool_call_history[tool_name].append(now)
|
||||
return None
|
||||
```
|
||||
|
||||
### 7. Caching Tool Results
|
||||
|
||||
```python
|
||||
import hashlib
|
||||
import json
|
||||
|
||||
tool_cache = {}
|
||||
|
||||
def cache_key(tool_name: str, tool_input: dict) -> str:
|
||||
"""Generate cache key from tool name and input."""
|
||||
input_str = json.dumps(tool_input, sort_keys=True)
|
||||
return hashlib.md5(f"{tool_name}:{input_str}".encode()).hexdigest()
|
||||
|
||||
@before_tool_call
|
||||
def check_cache(context: ToolCallHookContext) -> bool | None:
|
||||
key = cache_key(context.tool_name, context.tool_input)
|
||||
if key in tool_cache:
|
||||
print(f"💾 Cache hit for {context.tool_name}")
|
||||
# Note: Can't return cached result from before hook
|
||||
# Would need to implement this differently
|
||||
return None
|
||||
|
||||
@after_tool_call
|
||||
def cache_result(context: ToolCallHookContext) -> None:
|
||||
if context.tool_result:
|
||||
key = cache_key(context.tool_name, context.tool_input)
|
||||
tool_cache[key] = context.tool_result
|
||||
print(f"💾 Cached result for {context.tool_name}")
|
||||
return None
|
||||
```
|
||||
|
||||
### 8. Debug Logging
|
||||
|
||||
```python
|
||||
@before_tool_call
|
||||
def debug_tool_call(context: ToolCallHookContext) -> None:
|
||||
print(f"""
|
||||
🔍 Tool Call Debug:
|
||||
- Tool: {context.tool_name}
|
||||
- Agent: {context.agent.role if context.agent else 'Unknown'}
|
||||
- Task: {context.task.description[:50] if context.task else 'Unknown'}...
|
||||
- Input: {context.tool_input}
|
||||
""")
|
||||
return None
|
||||
|
||||
@after_tool_call
|
||||
def debug_tool_result(context: ToolCallHookContext) -> None:
|
||||
if context.tool_result:
|
||||
result_preview = context.tool_result[:200]
|
||||
print(f"✅ Result Preview: {result_preview}...")
|
||||
else:
|
||||
print("⚠️ No result returned")
|
||||
return None
|
||||
```
|
||||
|
||||
## Hook Management
|
||||
|
||||
### Unregistering Hooks
|
||||
|
||||
```python
|
||||
from crewai.hooks import (
|
||||
unregister_before_tool_call_hook,
|
||||
unregister_after_tool_call_hook
|
||||
)
|
||||
|
||||
# Unregister specific hook
|
||||
def my_hook(context):
|
||||
...
|
||||
|
||||
register_before_tool_call_hook(my_hook)
|
||||
# Later...
|
||||
success = unregister_before_tool_call_hook(my_hook)
|
||||
print(f"Unregistered: {success}")
|
||||
```
|
||||
|
||||
### Clearing Hooks
|
||||
|
||||
```python
|
||||
from crewai.hooks import (
|
||||
clear_before_tool_call_hooks,
|
||||
clear_after_tool_call_hooks,
|
||||
clear_all_tool_call_hooks
|
||||
)
|
||||
|
||||
# Clear specific hook type
|
||||
count = clear_before_tool_call_hooks()
|
||||
print(f"Cleared {count} before hooks")
|
||||
|
||||
# Clear all tool hooks
|
||||
before_count, after_count = clear_all_tool_call_hooks()
|
||||
print(f"Cleared {before_count} before and {after_count} after hooks")
|
||||
```
|
||||
|
||||
### Listing Registered Hooks
|
||||
|
||||
```python
|
||||
from crewai.hooks import (
|
||||
get_before_tool_call_hooks,
|
||||
get_after_tool_call_hooks
|
||||
)
|
||||
|
||||
# Get current hooks
|
||||
before_hooks = get_before_tool_call_hooks()
|
||||
after_hooks = get_after_tool_call_hooks()
|
||||
|
||||
print(f"Registered: {len(before_hooks)} before, {len(after_hooks)} after")
|
||||
```
|
||||
|
||||
## Advanced Patterns
|
||||
|
||||
### Conditional Hook Execution
|
||||
|
||||
```python
|
||||
@before_tool_call
|
||||
def conditional_blocking(context: ToolCallHookContext) -> bool | None:
|
||||
# Only block for specific agents
|
||||
if context.agent and context.agent.role == "junior_agent":
|
||||
if context.tool_name in ['delete_file', 'send_email']:
|
||||
print(f"❌ Junior agents cannot use {context.tool_name}")
|
||||
return False
|
||||
|
||||
# Only block during specific tasks
|
||||
if context.task and "sensitive" in context.task.description.lower():
|
||||
if context.tool_name == 'web_search':
|
||||
print("❌ Web search blocked for sensitive tasks")
|
||||
return False
|
||||
|
||||
return None
|
||||
```
|
||||
|
||||
### Context-Aware Input Modification
|
||||
|
||||
```python
|
||||
@before_tool_call
|
||||
def enhance_tool_inputs(context: ToolCallHookContext) -> None:
|
||||
# Add context based on agent role
|
||||
if context.agent and context.agent.role == "researcher":
|
||||
if context.tool_name == 'web_search':
|
||||
# Add domain restrictions for researchers
|
||||
context.tool_input['domains'] = ['edu', 'gov', 'org']
|
||||
|
||||
# Add context based on task
|
||||
if context.task and "urgent" in context.task.description.lower():
|
||||
if context.tool_name == 'send_email':
|
||||
context.tool_input['priority'] = 'high'
|
||||
|
||||
return None
|
||||
```
|
||||
|
||||
### Tool Chain Monitoring
|
||||
|
||||
```python
|
||||
tool_call_chain = []
|
||||
|
||||
@before_tool_call
|
||||
def track_tool_chain(context: ToolCallHookContext) -> None:
|
||||
tool_call_chain.append({
|
||||
'tool': context.tool_name,
|
||||
'timestamp': time.time(),
|
||||
'agent': context.agent.role if context.agent else 'Unknown'
|
||||
})
|
||||
|
||||
# Detect potential infinite loops
|
||||
recent_calls = tool_call_chain[-5:]
|
||||
if len(recent_calls) == 5 and all(c['tool'] == context.tool_name for c in recent_calls):
|
||||
print(f"⚠️ Warning: {context.tool_name} called 5 times in a row")
|
||||
|
||||
return None
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Keep Hooks Focused**: Each hook should have a single responsibility
|
||||
2. **Avoid Heavy Computation**: Hooks execute on every tool call
|
||||
3. **Handle Errors Gracefully**: Use try-except to prevent hook failures
|
||||
4. **Use Type Hints**: Leverage `ToolCallHookContext` for better IDE support
|
||||
5. **Document Blocking Conditions**: Make it clear when/why tools are blocked
|
||||
6. **Test Hooks Independently**: Unit test hooks before using in production
|
||||
7. **Clear Hooks in Tests**: Use `clear_all_tool_call_hooks()` between test runs (see the sketch after this list)
|
||||
8. **Modify In-Place**: Always modify `context.tool_input` in-place, never replace
|
||||
9. **Log Important Decisions**: Especially when blocking tool execution
|
||||
10. **Consider Performance**: Cache expensive validations when possible
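For item 7, one way to wire this into a test suite is an autouse fixture that resets the global hook registry around every test; a minimal sketch, assuming pytest, might look like this:

```python
import pytest

from crewai.hooks import clear_all_tool_call_hooks


@pytest.fixture(autouse=True)
def reset_tool_hooks():
    """Ensure each test starts and ends with a clean tool hook registry."""
    clear_all_tool_call_hooks()
    yield
    clear_all_tool_call_hooks()
```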
|
||||
|
||||
## Error Handling
|
||||
|
||||
```python
|
||||
@before_tool_call
|
||||
def safe_validation(context: ToolCallHookContext) -> bool | None:
|
||||
try:
|
||||
# Your validation logic
|
||||
if not validate_input(context.tool_input):
|
||||
return False
|
||||
except Exception as e:
|
||||
print(f"⚠️ Hook error: {e}")
|
||||
# Decide: allow or block on error
|
||||
return None # Allow execution despite error
|
||||
```
|
||||
|
||||
## Type Safety
|
||||
|
||||
```python
|
||||
from crewai.hooks import ToolCallHookContext, BeforeToolCallHookType, AfterToolCallHookType
|
||||
|
||||
# Explicit type annotations
|
||||
def my_before_hook(context: ToolCallHookContext) -> bool | None:
|
||||
return None
|
||||
|
||||
def my_after_hook(context: ToolCallHookContext) -> str | None:
|
||||
return None
|
||||
|
||||
# Type-safe registration
|
||||
register_before_tool_call_hook(my_before_hook)
|
||||
register_after_tool_call_hook(my_after_hook)
|
||||
```
|
||||
|
||||
## Integration with Existing Tools
|
||||
|
||||
### Wrapping Existing Validation
|
||||
|
||||
```python
|
||||
def existing_validator(tool_name: str, inputs: dict) -> bool:
|
||||
"""Your existing validation function."""
|
||||
# Your validation logic
|
||||
return True
|
||||
|
||||
@before_tool_call
|
||||
def integrate_validator(context: ToolCallHookContext) -> bool | None:
|
||||
if not existing_validator(context.tool_name, context.tool_input):
|
||||
print(f"❌ Validation failed for {context.tool_name}")
|
||||
return False
|
||||
return None
|
||||
```
|
||||
|
||||
### Logging to External Systems
|
||||
|
||||
```python
|
||||
import logging
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@before_tool_call
|
||||
def log_to_external_system(context: ToolCallHookContext) -> None:
|
||||
logger.info(f"Tool call: {context.tool_name}", extra={
|
||||
'tool_name': context.tool_name,
|
||||
'tool_input': context.tool_input,
|
||||
'agent': context.agent.role if context.agent else None
|
||||
})
|
||||
return None
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Hook Not Executing
|
||||
- Verify hook is registered before crew execution
|
||||
- Check if previous hook returned `False` (blocks execution and subsequent hooks)
|
||||
- Ensure hook signature matches expected type
|
||||
|
||||
### Input Modifications Not Working
|
||||
- Use in-place modifications: `context.tool_input['key'] = value`
|
||||
- Don't replace the dict: `context.tool_input = {}`
|
||||
|
||||
### Result Modifications Not Working
|
||||
- Return the modified string from after hooks
|
||||
- Returning `None` keeps the original result
|
||||
- Ensure the tool actually returned a result
|
||||
|
||||
### Tool Blocked Unexpectedly
|
||||
- Check all before hooks for blocking conditions
|
||||
- Verify hook execution order
|
||||
- Add debug logging to identify which hook is blocking (see the sketch below)
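A quick way to see which before-hooks are registered, and in what order they run, is to print them with the inspection helpers shown under "Listing Registered Hooks"; this is a debugging sketch and assumes your hooks are ordinary named functions:

```python
from crewai.hooks import get_before_tool_call_hooks

for position, hook in enumerate(get_before_tool_call_hooks()):
    # getattr guards against hooks without a __name__ (e.g. partials or lambdas)
    print(f"{position}: {getattr(hook, '__name__', repr(hook))}")
```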
|
||||
|
||||
## Conclusion
|
||||
|
||||
Tool Call Hooks provide powerful capabilities for controlling and monitoring tool execution in CrewAI. Use them to implement safety guardrails, approval gates, input validation, result sanitization, logging, and analytics. Combined with proper error handling and type safety, hooks enable secure and production-ready agent systems with comprehensive observability.

docs/en/mcp/dsl-integration.mdx (new file, 349 lines)

---
|
||||
title: MCP DSL Integration
|
||||
description: Learn how to use CrewAI's simple DSL syntax to integrate MCP servers directly with your agents using the mcps field.
|
||||
icon: code
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
CrewAI's MCP DSL (Domain Specific Language) integration provides the **simplest way** to connect your agents to MCP (Model Context Protocol) servers. Just add an `mcps` field to your agent and CrewAI handles all the complexity automatically.
|
||||
|
||||
<Info>
|
||||
This is the **recommended approach** for most MCP use cases. For advanced
|
||||
scenarios requiring manual connection management, see
|
||||
[MCPServerAdapter](/en/mcp/overview#advanced-mcpserveradapter).
|
||||
</Info>
|
||||
|
||||
## Basic Usage
|
||||
|
||||
Add MCP servers to your agent using the `mcps` field:
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
|
||||
agent = Agent(
|
||||
role="Research Assistant",
|
||||
goal="Help with research and analysis tasks",
|
||||
backstory="Expert assistant with access to advanced research tools",
|
||||
mcps=[
|
||||
"https://mcp.exa.ai/mcp?api_key=your_key&profile=research"
|
||||
]
|
||||
)
|
||||
|
||||
# MCP tools are now automatically available!
|
||||
# No need for manual connection management or tool configuration
|
||||
```
|
||||
|
||||
## Supported Reference Formats
|
||||
|
||||
### External MCP Remote Servers
|
||||
|
||||
```python
|
||||
# Basic HTTPS server
|
||||
"https://api.example.com/mcp"
|
||||
|
||||
# Server with authentication
|
||||
"https://mcp.exa.ai/mcp?api_key=your_key&profile=your_profile"
|
||||
|
||||
# Server with custom path
|
||||
"https://services.company.com/api/v1/mcp"
|
||||
```
|
||||
|
||||
### Specific Tool Selection
|
||||
|
||||
Use the `#` syntax to select specific tools from a server:
|
||||
|
||||
```python
|
||||
# Get only the forecast tool from weather server
|
||||
"https://weather.api.com/mcp#get_forecast"
|
||||
|
||||
# Get only the search tool from Exa
|
||||
"https://mcp.exa.ai/mcp?api_key=your_key#web_search_exa"
|
||||
```
|
||||
|
||||
### CrewAI AMP Marketplace
|
||||
|
||||
Access tools from the CrewAI AMP marketplace:
|
||||
|
||||
```python
|
||||
# Full service with all tools
|
||||
"crewai-amp:financial-data"
|
||||
|
||||
# Specific tool from AMP service
|
||||
"crewai-amp:research-tools#pubmed_search"
|
||||
|
||||
# Multiple AMP services
|
||||
mcps=[
|
||||
"crewai-amp:weather-insights",
|
||||
"crewai-amp:market-analysis",
|
||||
"crewai-amp:social-media-monitoring"
|
||||
]
|
||||
```
|
||||
|
||||
## Complete Example
|
||||
|
||||
Here's a complete example using multiple MCP servers:
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew, Process
|
||||
|
||||
# Create agent with multiple MCP sources
|
||||
multi_source_agent = Agent(
|
||||
role="Multi-Source Research Analyst",
|
||||
goal="Conduct comprehensive research using multiple data sources",
|
||||
backstory="""Expert researcher with access to web search, weather data,
|
||||
financial information, and academic research tools""",
|
||||
mcps=[
|
||||
# External MCP servers
|
||||
"https://mcp.exa.ai/mcp?api_key=your_exa_key&profile=research",
|
||||
"https://weather.api.com/mcp#get_current_conditions",
|
||||
|
||||
# CrewAI AMP marketplace
|
||||
"crewai-amp:financial-insights",
|
||||
"crewai-amp:academic-research#pubmed_search",
|
||||
"crewai-amp:market-intelligence#competitor_analysis"
|
||||
]
|
||||
)
|
||||
|
||||
# Create comprehensive research task
|
||||
research_task = Task(
|
||||
description="""Research the impact of AI agents on business productivity.
|
||||
Include current weather impacts on remote work, financial market trends,
|
||||
and recent academic publications on AI agent frameworks.""",
|
||||
expected_output="""Comprehensive report covering:
|
||||
1. AI agent business impact analysis
|
||||
2. Weather considerations for remote work
|
||||
3. Financial market trends related to AI
|
||||
4. Academic research citations and insights
|
||||
5. Competitive landscape analysis""",
|
||||
agent=multi_source_agent
|
||||
)
|
||||
|
||||
# Create and execute crew
|
||||
research_crew = Crew(
|
||||
agents=[multi_source_agent],
|
||||
tasks=[research_task],
|
||||
process=Process.sequential,
|
||||
verbose=True
|
||||
)
|
||||
|
||||
result = research_crew.kickoff()
|
||||
print(f"Research completed with {len(multi_source_agent.mcps)} MCP data sources")
|
||||
```
|
||||
|
||||
## Tool Naming and Organization
|
||||
|
||||
CrewAI automatically handles tool naming to prevent conflicts:
|
||||
|
||||
```python
|
||||
# Original MCP server has tools: "search", "analyze"
|
||||
# CrewAI creates tools: "mcp_exa_ai_search", "mcp_exa_ai_analyze"
|
||||
|
||||
agent = Agent(
|
||||
role="Tool Organization Demo",
|
||||
goal="Show how tool naming works",
|
||||
backstory="Demonstrates automatic tool organization",
|
||||
mcps=[
|
||||
"https://mcp.exa.ai/mcp?api_key=key", # Tools: mcp_exa_ai_*
|
||||
"https://weather.service.com/mcp", # Tools: weather_service_com_*
|
||||
"crewai-amp:financial-data" # Tools: financial_data_*
|
||||
]
|
||||
)
|
||||
|
||||
# Each server's tools get unique prefixes based on the server name
|
||||
# This prevents naming conflicts between different MCP servers
|
||||
```
|
||||
|
||||
## Error Handling and Resilience
|
||||
|
||||
The MCP DSL is designed to be robust and user-friendly:
|
||||
|
||||
### Graceful Server Failures
|
||||
|
||||
```python
|
||||
agent = Agent(
|
||||
role="Resilient Researcher",
|
||||
goal="Research despite server issues",
|
||||
backstory="Experienced researcher who adapts to available tools",
|
||||
mcps=[
|
||||
"https://primary-server.com/mcp", # Primary data source
|
||||
"https://backup-server.com/mcp", # Backup if primary fails
|
||||
"https://unreachable-server.com/mcp", # Will be skipped with warning
|
||||
"crewai-amp:reliable-service" # Reliable AMP service
|
||||
]
|
||||
)
|
||||
|
||||
# Agent will:
|
||||
# 1. Successfully connect to working servers
|
||||
# 2. Log warnings for failing servers
|
||||
# 3. Continue with available tools
|
||||
# 4. Not crash or hang on server failures
|
||||
```
|
||||
|
||||
### Timeout Protection
|
||||
|
||||
All MCP operations have built-in timeouts:
|
||||
|
||||
- **Connection timeout**: 10 seconds
|
||||
- **Tool execution timeout**: 30 seconds
|
||||
- **Discovery timeout**: 15 seconds
|
||||
|
||||
```python
|
||||
# These servers will timeout gracefully if unresponsive
|
||||
mcps=[
|
||||
"https://slow-server.com/mcp", # Will timeout after 10s if unresponsive
|
||||
"https://overloaded-api.com/mcp" # Will timeout if discovery takes > 15s
|
||||
]
|
||||
```
|
||||
|
||||
## Performance Features
|
||||
|
||||
### Automatic Caching
|
||||
|
||||
Tool schemas are cached for 5 minutes to improve performance:
|
||||
|
||||
```python
|
||||
# First agent creation - discovers tools from server
|
||||
agent1 = Agent(role="First", goal="Test", backstory="Test",
|
||||
mcps=["https://api.example.com/mcp"])
|
||||
|
||||
# Second agent creation (within 5 minutes) - uses cached tool schemas
|
||||
agent2 = Agent(role="Second", goal="Test", backstory="Test",
|
||||
mcps=["https://api.example.com/mcp"]) # Much faster!
|
||||
```
|
||||
|
||||
### On-Demand Connections
|
||||
|
||||
Tool connections are established only when tools are actually used:
|
||||
|
||||
```python
|
||||
# Agent creation is fast - no MCP connections made yet
|
||||
agent = Agent(
|
||||
role="On-Demand Agent",
|
||||
goal="Use tools efficiently",
|
||||
backstory="Efficient agent that connects only when needed",
|
||||
mcps=["https://api.example.com/mcp"]
|
||||
)
|
||||
|
||||
# MCP connection is made only when a tool is actually executed
|
||||
# This minimizes connection overhead and improves startup performance
|
||||
```
|
||||
|
||||
## Integration with Existing Features
|
||||
|
||||
MCP tools work seamlessly with other CrewAI features:
|
||||
|
||||
```python
|
||||
from crewai.tools import BaseTool
|
||||
|
||||
class CustomTool(BaseTool):
|
||||
name: str = "custom_analysis"
|
||||
description: str = "Custom analysis tool"
|
||||
|
||||
def _run(self, **kwargs):
|
||||
return "Custom analysis result"
|
||||
|
||||
agent = Agent(
|
||||
role="Full-Featured Agent",
|
||||
goal="Use all available tool types",
|
||||
backstory="Agent with comprehensive tool access",
|
||||
|
||||
# All tool types work together
|
||||
tools=[CustomTool()], # Custom tools
|
||||
apps=["gmail", "slack"], # Platform integrations
|
||||
mcps=[ # MCP servers
|
||||
"https://mcp.exa.ai/mcp?api_key=key",
|
||||
"crewai-amp:research-tools"
|
||||
],
|
||||
|
||||
verbose=True,
|
||||
max_iter=15
|
||||
)
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 1. Use Specific Tools When Possible
|
||||
|
||||
```python
|
||||
# Good - only get the tools you need
|
||||
mcps=["https://weather.api.com/mcp#get_forecast"]
|
||||
|
||||
# Less efficient - gets all tools from server
|
||||
mcps=["https://weather.api.com/mcp"]
|
||||
```
|
||||
|
||||
### 2. Handle Authentication Securely
|
||||
|
||||
```python
|
||||
import os
|
||||
|
||||
# Store API keys in environment variables
|
||||
exa_key = os.getenv("EXA_API_KEY")
|
||||
exa_profile = os.getenv("EXA_PROFILE")
|
||||
|
||||
agent = Agent(
|
||||
role="Secure Agent",
|
||||
goal="Use MCP tools securely",
|
||||
backstory="Security-conscious agent",
|
||||
mcps=[f"https://mcp.exa.ai/mcp?api_key={exa_key}&profile={exa_profile}"]
|
||||
)
|
||||
```
|
||||
|
||||
### 3. Plan for Server Failures
|
||||
|
||||
```python
|
||||
# Always include backup options
|
||||
mcps=[
|
||||
"https://primary-api.com/mcp", # Primary choice
|
||||
"https://backup-api.com/mcp", # Backup option
|
||||
"crewai-amp:reliable-service" # AMP fallback
|
||||
]
|
||||
```
|
||||
|
||||
### 4. Use Descriptive Agent Roles

```python
agent = Agent(
    role="Weather-Enhanced Market Analyst",
    goal="Analyze markets considering weather impacts",
    backstory="Financial analyst with access to weather data for agricultural market insights",
    mcps=[
        "https://weather.service.com/mcp#get_forecast",
        "crewai-amp:financial-data#stock_analysis"
    ]
)
```

## Troubleshooting

### Common Issues

**No tools discovered:**

```python
# Check your MCP server URL and authentication
# Verify the server is running and accessible
mcps=["https://mcp.example.com/mcp?api_key=valid_key"]
```

**Connection timeouts:**

```python
# Server may be slow or overloaded
# CrewAI will log warnings and continue with other servers
# Check server status or try backup servers
```

**Authentication failures:**

```python
# Verify API keys and credentials
# Check server documentation for required parameters
# Ensure query parameters are properly URL encoded (see the sketch below)
```
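
Improperly encoded query parameters are a common cause of authentication failures. Below is a minimal sketch using only the Python standard library; the server URL and parameter names are placeholders, not a real endpoint:

```python
from urllib.parse import urlencode

api_key = "key/with+special&chars"   # example credential (placeholder)
query = urlencode({"api_key": api_key, "profile": "research"})

# Safely encoded reference string for the `mcps` field:
# https://mcp.example.com/mcp?api_key=key%2Fwith%2Bspecial%26chars&profile=research
mcp_reference = f"https://mcp.example.com/mcp?{query}"
```
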
## Advanced: MCPServerAdapter

For complex scenarios requiring manual connection management, use the `MCPServerAdapter` class from `crewai-tools`. Using a Python context manager (`with` statement) is the recommended approach, as it automatically handles starting and stopping the connection to the MCP server. A minimal sketch follows.
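
The sketch below assumes a local stdio server started with a placeholder command (`python local_server.py`); the agent fields are illustrative:

```python
from crewai import Agent
from crewai_tools import MCPServerAdapter
from mcp import StdioServerParameters

# Placeholder local server; replace with your own MCP server command
server_params = StdioServerParameters(
    command="python",
    args=["local_server.py"],
)

with MCPServerAdapter(server_params) as mcp_tools:
    # Tools are only valid while the adapter (and its connection) is open
    agent = Agent(
        role="Adapter-Based Agent",
        goal="Use MCP tools through an explicitly managed connection",
        backstory="Agent for scenarios that need manual connection control",
        tools=mcp_tools,
    )
    # ... build tasks and a crew here while the connection is active ...
```
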
@@ -1,6 +1,6 @@
|
||||
---
|
||||
title: 'MCP Servers as Tools in CrewAI'
|
||||
description: 'Learn how to integrate MCP servers as tools in your CrewAI agents using the `crewai-tools` library.'
|
||||
title: "MCP Servers as Tools in CrewAI"
|
||||
description: "Learn how to integrate MCP servers as tools in your CrewAI agents using the `crewai-tools` library."
|
||||
icon: plug
|
||||
mode: "wide"
|
||||
---
|
||||
@@ -8,16 +8,86 @@ mode: "wide"
|
||||
## Overview
|
||||
|
||||
The [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) provides a standardized way for AI agents to provide context to LLMs by communicating with external services, known as MCP Servers.
|
||||
The `crewai-tools` library extends CrewAI's capabilities by allowing you to seamlessly integrate tools from these MCP servers into your agents.
|
||||
This gives your crews access to a vast ecosystem of functionalities.
|
||||
|
||||
CrewAI offers **two approaches** for MCP integration:
|
||||
|
||||
### 🚀 **Simple DSL Integration** (Recommended)
|
||||
|
||||
Use the `mcps` field directly on agents for seamless MCP tool integration. The DSL supports both **string references** (for quick setup) and **structured configurations** (for full control).
|
||||
|
||||
#### String-Based References (Quick Setup)
|
||||
|
||||
Perfect for remote HTTPS servers and CrewAI AMP marketplace:
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
|
||||
agent = Agent(
|
||||
role="Research Analyst",
|
||||
goal="Research and analyze information",
|
||||
backstory="Expert researcher with access to external tools",
|
||||
mcps=[
|
||||
"https://mcp.exa.ai/mcp?api_key=your_key", # External MCP server
|
||||
"https://api.weather.com/mcp#get_forecast", # Specific tool from server
|
||||
"crewai-amp:financial-data", # CrewAI AMP marketplace
|
||||
"crewai-amp:research-tools#pubmed_search" # Specific AMP tool
|
||||
]
|
||||
)
|
||||
# MCP tools are now automatically available to your agent!
|
||||
```
|
||||
|
||||
#### Structured Configurations (Full Control)
|
||||
|
||||
For complete control over connection settings, tool filtering, and all transport types:
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai.mcp import MCPServerStdio, MCPServerHTTP, MCPServerSSE
|
||||
from crewai.mcp.filters import create_static_tool_filter
|
||||
|
||||
agent = Agent(
|
||||
role="Advanced Research Analyst",
|
||||
goal="Research with full control over MCP connections",
|
||||
backstory="Expert researcher with advanced tool access",
|
||||
mcps=[
|
||||
# Stdio transport for local servers
|
||||
MCPServerStdio(
|
||||
command="npx",
|
||||
args=["-y", "@modelcontextprotocol/server-filesystem"],
|
||||
env={"API_KEY": "your_key"},
|
||||
tool_filter=create_static_tool_filter(
|
||||
allowed_tool_names=["read_file", "list_directory"]
|
||||
),
|
||||
cache_tools_list=True,
|
||||
),
|
||||
# HTTP/Streamable HTTP transport for remote servers
|
||||
MCPServerHTTP(
|
||||
url="https://api.example.com/mcp",
|
||||
headers={"Authorization": "Bearer your_token"},
|
||||
streamable=True,
|
||||
cache_tools_list=True,
|
||||
),
|
||||
# SSE transport for real-time streaming
|
||||
MCPServerSSE(
|
||||
url="https://stream.example.com/mcp/sse",
|
||||
headers={"Authorization": "Bearer your_token"},
|
||||
),
|
||||
]
|
||||
)
|
||||
```
|
||||
|
||||
### 🔧 **Advanced: MCPServerAdapter** (For Complex Scenarios)
|
||||
|
||||
For advanced use cases requiring manual connection management, the `crewai-tools` library provides the `MCPServerAdapter` class.
|
||||
|
||||
We currently support the following transport mechanisms:
|
||||
|
||||
- **Stdio**: for local servers (communication via standard input/output between processes on the same machine)
|
||||
- **Server-Sent Events (SSE)**: for remote servers (unidirectional, real-time data streaming from server to client over HTTP)
|
||||
- **Streamable HTTP**: for remote servers (flexible, potentially bi-directional communication over HTTP, often utilizing SSE for server-to-client streams)
|
||||
- **Streamable HTTPS**: for remote servers (flexible, potentially bi-directional communication over HTTPS, often utilizing SSE for server-to-client streams)
|
||||
|
||||
## Video Tutorial
|
||||
|
||||
Watch this video tutorial for a comprehensive guide on MCP integration with CrewAI:
|
||||
|
||||
<iframe
|
||||
@@ -31,17 +101,339 @@ Watch this video tutorial for a comprehensive guide on MCP integration with Crew
|
||||
|
||||
## Installation
|
||||
|
||||
Before you start using MCP with `crewai-tools`, you need to install the `mcp` extra `crewai-tools` dependency with the following command:
|
||||
CrewAI MCP integration requires the `mcp` library:
|
||||
|
||||
```shell
|
||||
# For Simple DSL Integration (Recommended)
|
||||
uv add mcp
|
||||
|
||||
# For Advanced MCPServerAdapter usage
|
||||
uv pip install 'crewai-tools[mcp]'
|
||||
```
|
||||
|
||||
## Key Concepts & Getting Started
|
||||
## Quick Start: Simple DSL Integration
|
||||
|
||||
The `MCPServerAdapter` class from `crewai-tools` is the primary way to connect to an MCP server and make its tools available to your CrewAI agents. It supports different transport mechanisms and simplifies connection management.
|
||||
The easiest way to integrate MCP servers is using the `mcps` field on your agents. You can use either string references or structured configurations.
|
||||
|
||||
Using a Python context manager (`with` statement) is the **recommended approach** for `MCPServerAdapter`. It automatically handles starting and stopping the connection to the MCP server.
|
||||
### Quick Start with String References
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
|
||||
# Create agent with MCP tools using string references
|
||||
research_agent = Agent(
|
||||
role="Research Analyst",
|
||||
goal="Find and analyze information using advanced search tools",
|
||||
backstory="Expert researcher with access to multiple data sources",
|
||||
mcps=[
|
||||
"https://mcp.exa.ai/mcp?api_key=your_key&profile=your_profile",
|
||||
"crewai-amp:weather-service#current_conditions"
|
||||
]
|
||||
)
|
||||
|
||||
# Create task
|
||||
research_task = Task(
|
||||
description="Research the latest developments in AI agent frameworks",
|
||||
expected_output="Comprehensive research report with citations",
|
||||
agent=research_agent
|
||||
)
|
||||
|
||||
# Create and run crew
|
||||
crew = Crew(agents=[research_agent], tasks=[research_task])
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
### Quick Start with Structured Configurations
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai.mcp import MCPServerStdio, MCPServerHTTP, MCPServerSSE
|
||||
|
||||
# Create agent with structured MCP configurations
|
||||
research_agent = Agent(
|
||||
role="Research Analyst",
|
||||
goal="Find and analyze information using advanced search tools",
|
||||
backstory="Expert researcher with access to multiple data sources",
|
||||
mcps=[
|
||||
# Local stdio server
|
||||
MCPServerStdio(
|
||||
command="python",
|
||||
args=["local_server.py"],
|
||||
env={"API_KEY": "your_key"},
|
||||
),
|
||||
# Remote HTTP server
|
||||
MCPServerHTTP(
|
||||
url="https://api.research.com/mcp",
|
||||
headers={"Authorization": "Bearer your_token"},
|
||||
),
|
||||
]
|
||||
)
|
||||
|
||||
# Create task
|
||||
research_task = Task(
|
||||
description="Research the latest developments in AI agent frameworks",
|
||||
expected_output="Comprehensive research report with citations",
|
||||
agent=research_agent
|
||||
)
|
||||
|
||||
# Create and run crew
|
||||
crew = Crew(agents=[research_agent], tasks=[research_task])
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
That's it! The MCP tools are automatically discovered and available to your agent.
|
||||
|
||||
## MCP Reference Formats
|
||||
|
||||
The `mcps` field supports both **string references** (for quick setup) and **structured configurations** (for full control). You can mix both formats in the same list.
|
||||
|
||||
### String-Based References
|
||||
|
||||
#### External MCP Servers
|
||||
|
||||
```python
|
||||
mcps=[
|
||||
# Full server - get all available tools
|
||||
"https://mcp.example.com/api",
|
||||
|
||||
# Specific tool from server using # syntax
|
||||
"https://api.weather.com/mcp#get_current_weather",
|
||||
|
||||
# Server with authentication parameters
|
||||
"https://mcp.exa.ai/mcp?api_key=your_key&profile=your_profile"
|
||||
]
|
||||
```
|
||||
|
||||
#### CrewAI AMP Marketplace
|
||||
|
||||
```python
|
||||
mcps=[
|
||||
# Full AMP MCP service - get all available tools
|
||||
"crewai-amp:financial-data",
|
||||
|
||||
# Specific tool from AMP service using # syntax
|
||||
"crewai-amp:research-tools#pubmed_search",
|
||||
|
||||
# Multiple AMP services
|
||||
"crewai-amp:weather-service",
|
||||
"crewai-amp:market-analysis"
|
||||
]
|
||||
```
|
||||
|
||||
### Structured Configurations
|
||||
|
||||
#### Stdio Transport (Local Servers)
|
||||
|
||||
Perfect for local MCP servers that run as processes:
|
||||
|
||||
```python
|
||||
from crewai.mcp import MCPServerStdio
|
||||
from crewai.mcp.filters import create_static_tool_filter
|
||||
|
||||
mcps=[
|
||||
MCPServerStdio(
|
||||
command="npx",
|
||||
args=["-y", "@modelcontextprotocol/server-filesystem"],
|
||||
env={"API_KEY": "your_key"},
|
||||
tool_filter=create_static_tool_filter(
|
||||
allowed_tool_names=["read_file", "write_file"]
|
||||
),
|
||||
cache_tools_list=True,
|
||||
),
|
||||
# Python-based server
|
||||
MCPServerStdio(
|
||||
command="python",
|
||||
args=["path/to/server.py"],
|
||||
env={"UV_PYTHON": "3.12", "API_KEY": "your_key"},
|
||||
),
|
||||
]
|
||||
```
|
||||
|
||||
#### HTTP/Streamable HTTP Transport (Remote Servers)
|
||||
|
||||
For remote MCP servers over HTTP/HTTPS:
|
||||
|
||||
```python
|
||||
from crewai.mcp import MCPServerHTTP
|
||||
|
||||
mcps=[
|
||||
# Streamable HTTP (default)
|
||||
MCPServerHTTP(
|
||||
url="https://api.example.com/mcp",
|
||||
headers={"Authorization": "Bearer your_token"},
|
||||
streamable=True,
|
||||
cache_tools_list=True,
|
||||
),
|
||||
# Standard HTTP
|
||||
MCPServerHTTP(
|
||||
url="https://api.example.com/mcp",
|
||||
headers={"Authorization": "Bearer your_token"},
|
||||
streamable=False,
|
||||
),
|
||||
]
|
||||
```
|
||||
|
||||
#### SSE Transport (Real-Time Streaming)
|
||||
|
||||
For remote servers using Server-Sent Events:
|
||||
|
||||
```python
|
||||
from crewai.mcp import MCPServerSSE
|
||||
|
||||
mcps=[
|
||||
MCPServerSSE(
|
||||
url="https://stream.example.com/mcp/sse",
|
||||
headers={"Authorization": "Bearer your_token"},
|
||||
cache_tools_list=True,
|
||||
),
|
||||
]
|
||||
```
|
||||
|
||||
### Mixed References
|
||||
|
||||
You can combine string references and structured configurations:
|
||||
|
||||
```python
|
||||
from crewai.mcp import MCPServerStdio, MCPServerHTTP
|
||||
|
||||
mcps=[
|
||||
# String references
|
||||
"https://external-api.com/mcp", # External server
|
||||
"crewai-amp:financial-insights", # AMP service
|
||||
|
||||
# Structured configurations
|
||||
MCPServerStdio(
|
||||
command="npx",
|
||||
args=["-y", "@modelcontextprotocol/server-filesystem"],
|
||||
),
|
||||
MCPServerHTTP(
|
||||
url="https://api.example.com/mcp",
|
||||
headers={"Authorization": "Bearer token"},
|
||||
),
|
||||
]
|
||||
```
|
||||
|
||||
### Tool Filtering
|
||||
|
||||
Structured configurations support advanced tool filtering:
|
||||
|
||||
```python
|
||||
from crewai.mcp import MCPServerStdio
|
||||
from crewai.mcp.filters import create_static_tool_filter, create_dynamic_tool_filter, ToolFilterContext
|
||||
|
||||
# Static filtering (allow/block lists)
|
||||
static_filter = create_static_tool_filter(
|
||||
allowed_tool_names=["read_file", "write_file"],
|
||||
blocked_tool_names=["delete_file"],
|
||||
)
|
||||
|
||||
# Dynamic filtering (context-aware)
|
||||
def dynamic_filter(context: ToolFilterContext, tool: dict) -> bool:
|
||||
# Block dangerous tools for certain agent roles
|
||||
if context.agent.role == "Code Reviewer":
|
||||
if "delete" in tool.get("name", "").lower():
|
||||
return False
|
||||
return True
|
||||
|
||||
mcps=[
|
||||
MCPServerStdio(
|
||||
command="npx",
|
||||
args=["-y", "@modelcontextprotocol/server-filesystem"],
|
||||
tool_filter=static_filter, # or dynamic_filter
|
||||
),
|
||||
]
|
||||
```
|
||||
|
||||
## Configuration Parameters
|
||||
|
||||
Each transport type supports specific configuration options:
|
||||
|
||||
### MCPServerStdio Parameters
|
||||
|
||||
- **`command`** (required): Command to execute (e.g., `"python"`, `"node"`, `"npx"`, `"uvx"`)
|
||||
- **`args`** (optional): List of command arguments (e.g., `["server.py"]` or `["-y", "@mcp/server"]`)
|
||||
- **`env`** (optional): Dictionary of environment variables to pass to the process
|
||||
- **`tool_filter`** (optional): Tool filter function for filtering available tools
|
||||
- **`cache_tools_list`** (optional): Whether to cache the tool list for faster subsequent access (default: `False`)
|
||||
|
||||
### MCPServerHTTP Parameters
|
||||
|
||||
- **`url`** (required): Server URL (e.g., `"https://api.example.com/mcp"`)
|
||||
- **`headers`** (optional): Dictionary of HTTP headers for authentication or other purposes
|
||||
- **`streamable`** (optional): Whether to use streamable HTTP transport (default: `True`)
|
||||
- **`tool_filter`** (optional): Tool filter function for filtering available tools
|
||||
- **`cache_tools_list`** (optional): Whether to cache the tool list for faster subsequent access (default: `False`)
|
||||
|
||||
### MCPServerSSE Parameters
|
||||
|
||||
- **`url`** (required): Server URL (e.g., `"https://api.example.com/mcp/sse"`)
|
||||
- **`headers`** (optional): Dictionary of HTTP headers for authentication or other purposes
|
||||
- **`tool_filter`** (optional): Tool filter function for filtering available tools
|
||||
- **`cache_tools_list`** (optional): Whether to cache the tool list for faster subsequent access (default: `False`)
|
||||
|
||||
### Common Parameters
|
||||
|
||||
All transport types support:
|
||||
|
||||
- **`tool_filter`**: Filter function to control which tools are available. Can be:
|
||||
- `None` (default): All tools are available
|
||||
- Static filter: Created with `create_static_tool_filter()` for allow/block lists
|
||||
- Dynamic filter: Created with `create_dynamic_tool_filter()` for context-aware filtering
|
||||
- **`cache_tools_list`**: When `True`, caches the tool list after first discovery to improve performance on subsequent connections (both common parameters are combined in the sketch below)
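
As a combined illustration, the sketch below sets both common parameters on an HTTP configuration; the URL, header value, and tool names are placeholders:

```python
from crewai.mcp import MCPServerHTTP
from crewai.mcp.filters import create_static_tool_filter

server = MCPServerHTTP(
    url="https://api.example.com/mcp",               # placeholder URL
    headers={"Authorization": "Bearer your_token"},  # placeholder credentials
    tool_filter=create_static_tool_filter(
        allowed_tool_names=["search", "fetch_page"], # placeholder tool names
    ),
    cache_tools_list=True,  # cache the discovered tool list after first use
)

# The configured server is then passed to an agent as usual:
# agent = Agent(..., mcps=[server])
```
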
|
||||
|
||||
## Key Features
|
||||
|
||||
- 🔄 **Automatic Tool Discovery**: Tools are automatically discovered and integrated
|
||||
- 🏷️ **Name Collision Prevention**: Server names are prefixed to tool names
|
||||
- ⚡ **Performance Optimized**: On-demand connections with schema caching
|
||||
- 🛡️ **Error Resilience**: Graceful handling of unavailable servers
|
||||
- ⏱️ **Timeout Protection**: Built-in timeouts prevent hanging connections
|
||||
- 📊 **Transparent Integration**: Works seamlessly with existing CrewAI features
|
||||
- 🔧 **Full Transport Support**: Stdio, HTTP/Streamable HTTP, and SSE transports
|
||||
- 🎯 **Advanced Filtering**: Static and dynamic tool filtering capabilities
|
||||
- 🔐 **Flexible Authentication**: Support for headers, environment variables, and query parameters
|
||||
|
||||
## Error Handling
|
||||
|
||||
The MCP DSL integration is designed to be resilient and handles failures gracefully:
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai.mcp import MCPServerStdio, MCPServerHTTP
|
||||
|
||||
agent = Agent(
|
||||
role="Resilient Agent",
|
||||
goal="Continue working despite server issues",
|
||||
backstory="Agent that handles failures gracefully",
|
||||
mcps=[
|
||||
# String references
|
||||
"https://reliable-server.com/mcp", # Will work
|
||||
"https://unreachable-server.com/mcp", # Will be skipped gracefully
|
||||
"crewai-amp:working-service", # Will work
|
||||
|
||||
# Structured configs
|
||||
MCPServerStdio(
|
||||
command="python",
|
||||
args=["reliable_server.py"], # Will work
|
||||
),
|
||||
MCPServerHTTP(
|
||||
url="https://slow-server.com/mcp", # Will timeout gracefully
|
||||
),
|
||||
]
|
||||
)
|
||||
# Agent will use tools from working servers and log warnings for failing ones
|
||||
```
|
||||
|
||||
All connection errors are handled gracefully:
|
||||
|
||||
- **Connection failures**: Logged as warnings, agent continues with available tools
|
||||
- **Timeout errors**: Connections timeout after 30 seconds (configurable)
|
||||
- **Authentication errors**: Logged clearly for debugging
|
||||
- **Invalid configurations**: Validation errors are raised at agent creation time
|
||||
|
||||
## Advanced: MCPServerAdapter
|
||||
|
||||
For complex scenarios requiring manual connection management, use the `MCPServerAdapter` class from `crewai-tools`. Using a Python context manager (`with` statement) is the recommended approach as it automatically handles starting and stopping the connection to the MCP server.
|
||||
|
||||
## Connection Configuration
|
||||
|
||||
@@ -95,6 +487,7 @@ with MCPServerAdapter(server_params, connect_timeout=60) as mcp_tools:
|
||||
)
|
||||
# ... rest of your crew setup ...
|
||||
```
|
||||
|
||||
This general pattern shows how to integrate tools. For specific examples tailored to each transport, refer to the detailed guides below.
|
||||
|
||||
## Filtering Tools
|
||||
@@ -181,6 +574,7 @@ When a crew class is decorated with `@CrewBase`, the adapter lifecycle is manage
|
||||
- If `mcp_server_params` is not defined, `get_mcp_tools()` simply returns an empty list, allowing the same code paths to run with or without MCP configured.
|
||||
|
||||
This makes it safe to call `get_mcp_tools()` from multiple agent methods or to selectively enable MCP per environment. A minimal sketch follows this tip.
|
||||
|
||||
</Tip>
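
A minimal sketch of the pattern described in the tip above, assuming a local stdio server started with a placeholder command; the class body is abbreviated and the server command is illustrative:

```python
from crewai import Agent
from crewai.project import CrewBase, agent
from mcp import StdioServerParameters


@CrewBase
class ResearchCrew:
    """Crew whose agents pull tools from an MCP server managed by @CrewBase."""

    # Adapter lifecycle is handled automatically when this attribute is present
    mcp_server_params = StdioServerParameters(
        command="python", args=["local_server.py"]  # placeholder local server
    )

    @agent
    def researcher(self) -> Agent:
        return Agent(
            role="Researcher",
            goal="Research with MCP tools",
            backstory="Curious analyst",
            tools=self.get_mcp_tools(),  # returns [] if mcp_server_params is unset
        )

    # ... @task and @crew methods as usual ...
```
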
|
||||
|
||||
### Connection Timeout Configuration
|
||||
@@ -241,45 +635,54 @@ class CrewWithCustomTimeout:
|
||||
## Explore MCP Integrations
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card
|
||||
title="Simple DSL Integration"
|
||||
icon="code"
|
||||
href="/en/mcp/dsl-integration"
|
||||
color="#3B82F6"
|
||||
>
|
||||
**Recommended**: Use the simple `mcps=[]` field syntax for effortless MCP
|
||||
integration.
|
||||
</Card>
|
||||
<Card
|
||||
title="Stdio Transport"
|
||||
icon="server"
|
||||
href="/en/mcp/stdio"
|
||||
color="#3B82F6"
|
||||
>
|
||||
Connect to local MCP servers via standard input/output. Ideal for scripts and local executables.
|
||||
</Card>
|
||||
<Card
|
||||
title="SSE Transport"
|
||||
icon="wifi"
|
||||
href="/en/mcp/sse"
|
||||
color="#10B981"
|
||||
>
|
||||
Integrate with remote MCP servers using Server-Sent Events for real-time data streaming.
|
||||
Connect to local MCP servers via standard input/output. Ideal for scripts
|
||||
and local executables.
|
||||
</Card>
|
||||
<Card title="SSE Transport" icon="wifi" href="/en/mcp/sse" color="#F59E0B">
|
||||
Integrate with remote MCP servers using Server-Sent Events for real-time
|
||||
data streaming.
|
||||
</Card>
|
||||
<Card
|
||||
title="Streamable HTTP Transport"
|
||||
icon="globe"
|
||||
href="/en/mcp/streamable-http"
|
||||
color="#F59E0B"
|
||||
color="#8B5CF6"
|
||||
>
|
||||
Utilize flexible Streamable HTTP for robust communication with remote MCP servers.
|
||||
Utilize flexible Streamable HTTP for robust communication with remote MCP
|
||||
servers.
|
||||
</Card>
|
||||
<Card
|
||||
title="Connecting to Multiple Servers"
|
||||
icon="layer-group"
|
||||
href="/en/mcp/multiple-servers"
|
||||
color="#8B5CF6"
|
||||
color="#EF4444"
|
||||
>
|
||||
Aggregate tools from several MCP servers simultaneously using a single adapter.
|
||||
Aggregate tools from several MCP servers simultaneously using a single
|
||||
adapter.
|
||||
</Card>
|
||||
<Card
|
||||
title="Security Considerations"
|
||||
icon="lock"
|
||||
href="/en/mcp/security"
|
||||
color="#EF4444"
|
||||
color="#DC2626"
|
||||
>
|
||||
Review important security best practices for MCP integration to keep your agents safe.
|
||||
Review important security best practices for MCP integration to keep your
|
||||
agents safe.
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
@@ -295,11 +698,11 @@ CrewAI MCP Demo
|
||||
</Card>
|
||||
|
||||
## Staying Safe with MCP
|
||||
<Warning>
|
||||
Always ensure that you trust an MCP Server before using it.
|
||||
</Warning>
|
||||
|
||||
<Warning>Always ensure that you trust an MCP Server before using it.</Warning>
|
||||
|
||||
#### Security Warning: DNS Rebinding Attacks
|
||||
|
||||
SSE transports can be vulnerable to DNS rebinding attacks if not properly secured.
|
||||
To prevent this:
|
||||
|
||||
@@ -312,6 +715,7 @@ Without these protections, attackers could use DNS rebinding to interact with lo
|
||||
For more details, see the [Anthropic's MCP Transport Security docs](https://modelcontextprotocol.io/docs/concepts/transports#security-considerations).
|
||||
|
||||
### Limitations
|
||||
* **Supported Primitives**: Currently, `MCPServerAdapter` primarily supports adapting MCP `tools`.
|
||||
|
||||
- **Supported Primitives**: Currently, `MCPServerAdapter` primarily supports adapting MCP `tools`.
|
||||
Other MCP primitives like `prompts` or `resources` are not directly integrated as CrewAI components through this adapter at this time.
|
||||
* **Output Handling**: The adapter typically processes the primary text output from an MCP tool (e.g., `.content[0].text`). Complex or multi-modal outputs might require custom handling if not fitting this pattern.
|
||||
- **Output Handling**: The adapter typically processes the primary text output from an MCP tool (e.g., `.content[0].text`). Complex or multi-modal outputs might require custom handling if not fitting this pattern.
|
||||
|
||||
docs/en/observability/datadog.mdx (Normal file, 109 lines)
@@ -0,0 +1,109 @@
---
title: Datadog Integration
description: Learn how to integrate Datadog with CrewAI to submit LLM Observability traces to Datadog.
icon: dog
mode: "wide"
---

# Integrate Datadog with CrewAI

This guide will demonstrate how to integrate **[Datadog LLM Observability](https://docs.datadoghq.com/llm_observability/)** with **CrewAI** using [Datadog auto-instrumentation](https://docs.datadoghq.com/llm_observability/instrumentation/auto_instrumentation?tab=python). By the end of this guide, you will be able to submit LLM Observability traces to Datadog and view your CrewAI agent runs in Datadog LLM Observability's [Agentic Execution View](https://docs.datadoghq.com/llm_observability/monitoring/agent_monitoring).

## What is Datadog LLM Observability?

[Datadog LLM Observability](https://www.datadoghq.com/product/llm-observability/) helps AI engineers, data scientists, and application developers quickly develop, evaluate, and monitor LLM applications. Confidently improve output quality, performance, costs, and overall risk with structured experiments, end-to-end tracing across AI agents, and evaluations.

## Getting Started

### Install Dependencies

```shell
pip install ddtrace crewai crewai-tools
```

### Set Environment Variables

If you do not have a Datadog API key, you can [create an account](https://www.datadoghq.com/) and [get your API key](https://docs.datadoghq.com/account_management/api-app-keys/#api-keys).

You will also need to specify an ML Application name in the following environment variables. An ML Application is a grouping of LLM Observability traces associated with a specific LLM-based application. See [ML Application Naming Guidelines](https://docs.datadoghq.com/llm_observability/instrumentation/sdk?tab=python#application-naming-guidelines) for more information on limitations with ML Application names.

```shell
export DD_API_KEY=<YOUR_DD_API_KEY>
export DD_SITE=<YOUR_DD_SITE>
export DD_LLMOBS_ENABLED=true
export DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME>
export DD_LLMOBS_AGENTLESS_ENABLED=true
export DD_APM_TRACING_ENABLED=false
```

Additionally, configure any LLM provider API keys:

```shell
export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
export ANTHROPIC_API_KEY=<YOUR_ANTHROPIC_API_KEY>
export GEMINI_API_KEY=<YOUR_GEMINI_API_KEY>
...
```

### Create a CrewAI Agent Application

```python
# crewai_agent.py
from crewai import Agent, Task, Crew
from crewai_tools import WebsiteSearchTool

web_rag_tool = WebsiteSearchTool()

writer = Agent(
    role="Writer",
    goal="You make math engaging and understandable for young children through poetry",
    backstory="You're an expert in writing haikus but you know nothing of math.",
    tools=[web_rag_tool],
)

task = Task(
    description=("What is {multiplication}?"),
    expected_output=("Compose a haiku that includes the answer."),
    agent=writer
)

crew = Crew(
    agents=[writer],
    tasks=[task],
    share_crew=False
)

output = crew.kickoff(dict(multiplication="2 * 2"))
```

### Run the Application with Datadog Auto-Instrumentation

With the [environment variables](#set-environment-variables) set, you can now run the application with Datadog auto-instrumentation.

```shell
ddtrace-run python crewai_agent.py
```

### View the Traces in Datadog

After running the application, you can view the traces in [Datadog LLM Observability's Traces View](https://app.datadoghq.com/llm/traces), selecting the ML Application name you chose from the top-left dropdown.

Clicking on a trace shows its details, including total tokens used, number of LLM calls, models used, and estimated cost. Clicking into a specific span narrows these details down and shows the related input, output, and metadata.

<Frame>
  <img src="/images/datadog-llm-observability-1.png" alt="Datadog LLM Observability Trace View" />
</Frame>

Additionally, you can view the execution graph of the trace, which shows its control and data flow and scales with larger agents to show handoffs and relationships between LLM calls, tool calls, and agent interactions.

<Frame>
  <img src="/images/datadog-llm-observability-2.png" alt="Datadog LLM Observability Agent Execution Flow View" />
</Frame>

## References

- [Datadog LLM Observability](https://www.datadoghq.com/product/llm-observability/)
- [Datadog LLM Observability CrewAI Auto-Instrumentation](https://docs.datadoghq.com/llm_observability/instrumentation/auto_instrumentation?tab=python#crew-ai)

@@ -733,9 +733,7 @@ Here's a basic configuration to route requests to OpenAI, specifically using GPT
|
||||
- Collect relevant metadata to filter logs
|
||||
- Enforce access permissions
|
||||
|
||||
Create API keys through:
|
||||
- [Portkey App](https://app.portkey.ai/)
|
||||
- [API Key Management API](/en/api-reference/admin-api/control-plane/api-keys/create-api-key)
|
||||
Create API keys through the [Portkey App](https://app.portkey.ai/)
|
||||
|
||||
Example using Python SDK:
|
||||
```python
|
||||
@@ -758,7 +756,7 @@ Here's a basic configuration to route requests to OpenAI, specifically using GPT
|
||||
)
|
||||
```
|
||||
|
||||
For detailed key management instructions, see our [API Keys documentation](/en/api-reference/admin-api/control-plane/api-keys/create-api-key).
|
||||
For detailed key management instructions, see the [Portkey documentation](https://portkey.ai/docs).
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Step 4: Deploy & Monitor">
|
||||
|
||||
@@ -45,6 +45,7 @@ crewai login
|
||||
```
|
||||
|
||||
This command will:
|
||||
|
||||
1. Open your browser to the authentication page
|
||||
2. Prompt you to enter a device code
|
||||
3. Authenticate your local environment with your CrewAI AMP account
|
||||
@@ -153,7 +154,6 @@ After running the crew or flow, you can view the traces generated by your CrewAI
|
||||
Just click on the link below to view the traces or head over to the traces tab in the dashboard [here](https://app.crewai.com/crewai_plus/trace_batches)
|
||||

|
||||
|
||||
|
||||
### Alternative: Environment Variable Configuration
|
||||
|
||||
You can also enable tracing globally by setting an environment variable:
|
||||
@@ -190,6 +190,7 @@ CrewAI tracing provides comprehensive visibility into:
|
||||
- **Error Tracking**: Detailed error information and stack traces
|
||||
|
||||
### Trace Features
|
||||
|
||||
- **Execution Timeline**: Click through different stages of execution
|
||||
- **Detailed Logs**: Access comprehensive logs for debugging
|
||||
- **Performance Analytics**: Analyze execution patterns and optimize performance
|
||||
|
||||
@@ -58,6 +58,7 @@ Follow the steps below to get Crewing! 🚣♂️
|
||||
your ability to turn complex data into clear and concise reports, making
|
||||
it easy for others to understand and act on the information you provide.
|
||||
```
|
||||
|
||||
</Step>
|
||||
<Step title="Modify your `tasks.yaml` file">
|
||||
```yaml tasks.yaml
|
||||
@@ -81,6 +82,7 @@ Follow the steps below to get Crewing! 🚣♂️
|
||||
agent: reporting_analyst
|
||||
output_file: report.md
|
||||
```
|
||||
|
||||
</Step>
|
||||
<Step title="Modify your `crew.py` file">
|
||||
```python crew.py
|
||||
@@ -136,6 +138,7 @@ Follow the steps below to get Crewing! 🚣♂️
|
||||
verbose=True,
|
||||
)
|
||||
```
|
||||
|
||||
</Step>
|
||||
<Step title="[Optional] Add before and after crew functions">
|
||||
```python crew.py
|
||||
@@ -160,6 +163,7 @@ Follow the steps below to get Crewing! 🚣♂️
|
||||
|
||||
# ... remaining code
|
||||
```
|
||||
|
||||
</Step>
|
||||
<Step title="Feel free to pass custom inputs to your crew">
|
||||
For example, you can pass the `topic` input to your crew to customize the research and reporting.
|
||||
@@ -178,6 +182,7 @@ Follow the steps below to get Crewing! 🚣♂️
|
||||
}
|
||||
LatestAiDevelopmentCrew().crew().kickoff(inputs=inputs)
|
||||
```
|
||||
|
||||
</Step>
|
||||
<Step title="Set your environment variables">
|
||||
Before running your crew, make sure you have the following keys set as environment variables in your `.env` file:
|
||||
@@ -289,6 +294,7 @@ Follow the steps below to get Crewing! 🚣♂️
|
||||
## 8. Conclusion
|
||||
The emergence of AI agents is undeniably reshaping the workplace landscape in 5. With their ability to automate tasks, enhance efficiency, and improve decision-making, AI agents are critical in driving operational success. Organizations must embrace and adapt to AI developments to thrive in an increasingly digital business environment.
|
||||
```
|
||||
|
||||
</CodeGroup>
|
||||
</Step>
|
||||
</Steps>
|
||||
@@ -297,6 +303,7 @@ Follow the steps below to get Crewing! 🚣♂️
|
||||
Congratulations!
|
||||
|
||||
You have successfully set up your crew project and are ready to start building your own agentic workflows!
|
||||
|
||||
</Check>
|
||||
|
||||
### Note on Consistency in Naming
|
||||
@@ -308,7 +315,9 @@ This naming consistency allows CrewAI to automatically link your configurations
|
||||
#### Example References
|
||||
|
||||
<Tip>
|
||||
Note how we use the same name for the agent in the `agents.yaml` (`email_summarizer`) file as the method name in the `crew.py` (`email_summarizer`) file.
|
||||
Note how we use the same name for the agent in the `agents.yaml`
|
||||
(`email_summarizer`) file as the method name in the `crew.py`
|
||||
(`email_summarizer`) file.
|
||||
</Tip>
|
||||
|
||||
```yaml agents.yaml
|
||||
@@ -323,7 +332,9 @@ email_summarizer:
|
||||
```
|
||||
|
||||
<Tip>
|
||||
Note how we use the same name for the task in the `tasks.yaml` (`email_summarizer_task`) file as the method name in the `crew.py` (`email_summarizer_task`) file.
|
||||
Note how we use the same name for the task in the `tasks.yaml`
|
||||
(`email_summarizer_task`) file as the method name in the `crew.py`
|
||||
(`email_summarizer_task`) file.
|
||||
</Tip>
|
||||
|
||||
```yaml tasks.yaml
|
||||
@@ -354,18 +365,16 @@ Watch this video tutorial for a step-by-step demonstration of deploying your cre
|
||||
></iframe>
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card
|
||||
title="Deploy on Enterprise"
|
||||
icon="rocket"
|
||||
href="http://app.crewai.com"
|
||||
>
|
||||
Get started with CrewAI AMP and deploy your crew in a production environment with just a few clicks.
|
||||
<Card title="Deploy on Enterprise" icon="rocket" href="http://app.crewai.com">
|
||||
Get started with CrewAI AMP and deploy your crew in a production environment
|
||||
with just a few clicks.
|
||||
</Card>
|
||||
<Card
|
||||
title="Join the Community"
|
||||
icon="comments"
|
||||
href="https://community.crewai.com"
|
||||
>
|
||||
Join our open source community to discuss ideas, share your projects, and connect with other CrewAI developers.
|
||||
Join our open source community to discuss ideas, share your projects, and
|
||||
connect with other CrewAI developers.
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
@@ -77,7 +77,7 @@ The `RagTool` accepts the following parameters:
|
||||
|
||||
- **summarize**: Optional. Whether to summarize the retrieved content. Default is `False`.
|
||||
- **adapter**: Optional. A custom adapter for the knowledge base. If not provided, a CrewAIRagAdapter will be used.
|
||||
- **config**: Optional. Configuration for the underlying CrewAI RAG system.
|
||||
- **config**: Optional. Configuration for the underlying CrewAI RAG system. Accepts a `RagToolConfig` TypedDict with optional `embedding_model` (ProviderSpec) and `vectordb` (VectorDbConfig) keys. All configuration values provided programmatically take precedence over environment variables.
|
||||
|
||||
## Adding Content
|
||||
|
||||
@@ -127,26 +127,528 @@ You can customize the behavior of the `RagTool` by providing a configuration dic
|
||||
|
||||
```python Code
|
||||
from crewai_tools import RagTool
|
||||
from crewai_tools.tools.rag import RagToolConfig, VectorDbConfig, ProviderSpec
|
||||
|
||||
# Create a RAG tool with custom configuration
|
||||
config = {
|
||||
"vectordb": {
|
||||
|
||||
vectordb: VectorDbConfig = {
|
||||
"provider": "qdrant",
|
||||
"config": {
|
||||
"collection_name": "my-collection"
|
||||
}
|
||||
},
|
||||
"embedding_model": {
|
||||
}
|
||||
|
||||
embedding_model: ProviderSpec = {
|
||||
"provider": "openai",
|
||||
"config": {
|
||||
"model": "text-embedding-3-small"
|
||||
"model_name": "text-embedding-3-small"
|
||||
}
|
||||
}
|
||||
|
||||
config: RagToolConfig = {
|
||||
"vectordb": vectordb,
|
||||
"embedding_model": embedding_model
|
||||
}
|
||||
|
||||
rag_tool = RagTool(config=config, summarize=True)
|
||||
```
|
||||
|
||||
## Embedding Model Configuration
|
||||
|
||||
The `embedding_model` parameter accepts a `crewai.rag.embeddings.types.ProviderSpec` dictionary with the structure:
|
||||
|
||||
```python
|
||||
{
|
||||
"provider": "provider-name", # Required
|
||||
"config": { # Optional
|
||||
# Provider-specific configuration
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Supported Providers
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="OpenAI">
|
||||
```python main.py
|
||||
from crewai.rag.embeddings.providers.openai.types import OpenAIProviderSpec
|
||||
|
||||
embedding_model: OpenAIProviderSpec = {
|
||||
"provider": "openai",
|
||||
"config": {
|
||||
"api_key": "your-api-key",
|
||||
"model_name": "text-embedding-ada-002",
|
||||
"dimensions": 1536,
|
||||
"organization_id": "your-org-id",
|
||||
"api_base": "https://api.openai.com/v1",
|
||||
"api_version": "v1",
|
||||
"default_headers": {"Custom-Header": "value"}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Config Options:**
|
||||
- `api_key` (str): OpenAI API key
|
||||
- `model_name` (str): Model to use. Default: `text-embedding-ada-002`. Options: `text-embedding-3-small`, `text-embedding-3-large`, `text-embedding-ada-002`
|
||||
- `dimensions` (int): Number of dimensions for the embedding
|
||||
- `organization_id` (str): OpenAI organization ID
|
||||
- `api_base` (str): Custom API base URL
|
||||
- `api_version` (str): API version
|
||||
- `default_headers` (dict): Custom headers for API requests
|
||||
|
||||
**Environment Variables:**
|
||||
- `OPENAI_API_KEY` or `EMBEDDINGS_OPENAI_API_KEY`: `api_key`
|
||||
- `OPENAI_ORGANIZATION_ID` or `EMBEDDINGS_OPENAI_ORGANIZATION_ID`: `organization_id`
|
||||
- `OPENAI_MODEL_NAME` or `EMBEDDINGS_OPENAI_MODEL_NAME`: `model_name`
|
||||
- `OPENAI_API_BASE` or `EMBEDDINGS_OPENAI_API_BASE`: `api_base`
|
||||
- `OPENAI_API_VERSION` or `EMBEDDINGS_OPENAI_API_VERSION`: `api_version`
|
||||
- `OPENAI_DIMENSIONS` or `EMBEDDINGS_OPENAI_DIMENSIONS`: `dimensions`
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Cohere">
|
||||
```python main.py
|
||||
from crewai.rag.embeddings.providers.cohere.types import CohereProviderSpec
|
||||
|
||||
embedding_model: CohereProviderSpec = {
|
||||
"provider": "cohere",
|
||||
"config": {
|
||||
"api_key": "your-api-key",
|
||||
"model_name": "embed-english-v3.0"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Config Options:**
|
||||
- `api_key` (str): Cohere API key
|
||||
- `model_name` (str): Model to use. Default: `large`. Options: `embed-english-v3.0`, `embed-multilingual-v3.0`, `large`, `small`
|
||||
|
||||
**Environment Variables:**
|
||||
- `COHERE_API_KEY` or `EMBEDDINGS_COHERE_API_KEY`: `api_key`
|
||||
- `EMBEDDINGS_COHERE_MODEL_NAME`: `model_name`
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="VoyageAI">
|
||||
```python main.py
|
||||
from crewai.rag.embeddings.providers.voyageai.types import VoyageAIProviderSpec
|
||||
|
||||
embedding_model: VoyageAIProviderSpec = {
|
||||
"provider": "voyageai",
|
||||
"config": {
|
||||
"api_key": "your-api-key",
|
||||
"model": "voyage-3",
|
||||
"input_type": "document",
|
||||
"truncation": True,
|
||||
"output_dtype": "float32",
|
||||
"output_dimension": 1024,
|
||||
"max_retries": 3,
|
||||
"timeout": 60.0
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Config Options:**
|
||||
- `api_key` (str): VoyageAI API key
|
||||
- `model` (str): Model to use. Default: `voyage-2`. Options: `voyage-3`, `voyage-3-lite`, `voyage-code-3`, `voyage-large-2`
|
||||
- `input_type` (str): Type of input. Options: `document` (for storage), `query` (for search)
|
||||
- `truncation` (bool): Whether to truncate inputs that exceed max length. Default: `True`
|
||||
- `output_dtype` (str): Output data type
|
||||
- `output_dimension` (int): Dimension of output embeddings
|
||||
- `max_retries` (int): Maximum number of retry attempts. Default: `0`
|
||||
- `timeout` (float): Request timeout in seconds
|
||||
|
||||
**Environment Variables:**
|
||||
- `VOYAGEAI_API_KEY` or `EMBEDDINGS_VOYAGEAI_API_KEY`: `api_key`
|
||||
- `VOYAGEAI_MODEL` or `EMBEDDINGS_VOYAGEAI_MODEL`: `model`
|
||||
- `VOYAGEAI_INPUT_TYPE` or `EMBEDDINGS_VOYAGEAI_INPUT_TYPE`: `input_type`
|
||||
- `VOYAGEAI_TRUNCATION` or `EMBEDDINGS_VOYAGEAI_TRUNCATION`: `truncation`
|
||||
- `VOYAGEAI_OUTPUT_DTYPE` or `EMBEDDINGS_VOYAGEAI_OUTPUT_DTYPE`: `output_dtype`
|
||||
- `VOYAGEAI_OUTPUT_DIMENSION` or `EMBEDDINGS_VOYAGEAI_OUTPUT_DIMENSION`: `output_dimension`
|
||||
- `VOYAGEAI_MAX_RETRIES` or `EMBEDDINGS_VOYAGEAI_MAX_RETRIES`: `max_retries`
|
||||
- `VOYAGEAI_TIMEOUT` or `EMBEDDINGS_VOYAGEAI_TIMEOUT`: `timeout`
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Ollama">
|
||||
```python main.py
|
||||
from crewai.rag.embeddings.providers.ollama.types import OllamaProviderSpec
|
||||
|
||||
embedding_model: OllamaProviderSpec = {
|
||||
"provider": "ollama",
|
||||
"config": {
|
||||
"model_name": "llama2",
|
||||
"url": "http://localhost:11434/api/embeddings"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Config Options:**
|
||||
- `model_name` (str): Ollama model name (e.g., `llama2`, `mistral`, `nomic-embed-text`)
|
||||
- `url` (str): Ollama API endpoint URL. Default: `http://localhost:11434/api/embeddings`
|
||||
|
||||
**Environment Variables:**
|
||||
- `OLLAMA_MODEL` or `EMBEDDINGS_OLLAMA_MODEL`: `model_name`
|
||||
- `OLLAMA_URL` or `EMBEDDINGS_OLLAMA_URL`: `url`
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Amazon Bedrock">
|
||||
```python main.py
|
||||
from crewai.rag.embeddings.providers.aws.types import BedrockProviderSpec
|
||||
|
||||
embedding_model: BedrockProviderSpec = {
|
||||
"provider": "amazon-bedrock",
|
||||
"config": {
|
||||
"model_name": "amazon.titan-embed-text-v2:0",
|
||||
"session": boto3_session
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Config Options:**
|
||||
- `model_name` (str): Bedrock model ID. Default: `amazon.titan-embed-text-v1`. Options: `amazon.titan-embed-text-v1`, `amazon.titan-embed-text-v2:0`, `cohere.embed-english-v3`, `cohere.embed-multilingual-v3`
|
||||
- `session` (Any): Boto3 session object for AWS authentication
|
||||
|
||||
**Environment Variables:**
|
||||
- `AWS_ACCESS_KEY_ID`: AWS access key
|
||||
- `AWS_SECRET_ACCESS_KEY`: AWS secret key
|
||||
- `AWS_REGION`: AWS region (e.g., `us-east-1`)
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Azure OpenAI">
|
||||
```python main.py
|
||||
from crewai.rag.embeddings.providers.microsoft.types import AzureProviderSpec
|
||||
|
||||
embedding_model: AzureProviderSpec = {
|
||||
"provider": "azure",
|
||||
"config": {
|
||||
"deployment_id": "your-deployment-id",
|
||||
"api_key": "your-api-key",
|
||||
"api_base": "https://your-resource.openai.azure.com",
|
||||
"api_version": "2024-02-01",
|
||||
"model_name": "text-embedding-ada-002",
|
||||
"api_type": "azure"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Config Options:**
|
||||
- `deployment_id` (str): **Required** - Azure OpenAI deployment ID
|
||||
- `api_key` (str): Azure OpenAI API key
|
||||
- `api_base` (str): Azure OpenAI resource endpoint
|
||||
- `api_version` (str): API version. Example: `2024-02-01`
|
||||
- `model_name` (str): Model name. Default: `text-embedding-ada-002`
|
||||
- `api_type` (str): API type. Default: `azure`
|
||||
- `dimensions` (int): Output dimensions
|
||||
- `default_headers` (dict): Custom headers
|
||||
|
||||
**Environment Variables:**
|
||||
- `AZURE_OPENAI_API_KEY` or `EMBEDDINGS_AZURE_API_KEY`: `api_key`
|
||||
- `AZURE_OPENAI_ENDPOINT` or `EMBEDDINGS_AZURE_API_BASE`: `api_base`
|
||||
- `EMBEDDINGS_AZURE_DEPLOYMENT_ID`: `deployment_id`
|
||||
- `EMBEDDINGS_AZURE_API_VERSION`: `api_version`
|
||||
- `EMBEDDINGS_AZURE_MODEL_NAME`: `model_name`
|
||||
- `EMBEDDINGS_AZURE_API_TYPE`: `api_type`
|
||||
- `EMBEDDINGS_AZURE_DIMENSIONS`: `dimensions`
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Google Generative AI">
|
||||
```python main.py
|
||||
from crewai.rag.embeddings.providers.google.types import GenerativeAiProviderSpec
|
||||
|
||||
embedding_model: GenerativeAiProviderSpec = {
|
||||
"provider": "google-generativeai",
|
||||
"config": {
|
||||
"api_key": "your-api-key",
|
||||
"model_name": "gemini-embedding-001",
|
||||
"task_type": "RETRIEVAL_DOCUMENT"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Config Options:**
|
||||
- `api_key` (str): Google AI API key
|
||||
- `model_name` (str): Model name. Default: `gemini-embedding-001`. Options: `gemini-embedding-001`, `text-embedding-005`, `text-multilingual-embedding-002`
|
||||
- `task_type` (str): Task type for embeddings. Default: `RETRIEVAL_DOCUMENT`. Options: `RETRIEVAL_DOCUMENT`, `RETRIEVAL_QUERY`
|
||||
|
||||
**Environment Variables:**
|
||||
- `GOOGLE_API_KEY`, `GEMINI_API_KEY`, or `EMBEDDINGS_GOOGLE_API_KEY`: `api_key`
|
||||
- `EMBEDDINGS_GOOGLE_GENERATIVE_AI_MODEL_NAME`: `model_name`
|
||||
- `EMBEDDINGS_GOOGLE_GENERATIVE_AI_TASK_TYPE`: `task_type`
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Google Vertex AI">
|
||||
```python main.py
|
||||
from crewai.rag.embeddings.providers.google.types import VertexAIProviderSpec
|
||||
|
||||
embedding_model: VertexAIProviderSpec = {
|
||||
"provider": "google-vertex",
|
||||
"config": {
|
||||
"model_name": "text-embedding-004",
|
||||
"project_id": "your-project-id",
|
||||
"region": "us-central1",
|
||||
"api_key": "your-api-key"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Config Options:**
|
||||
- `model_name` (str): Model name. Default: `textembedding-gecko`. Options: `text-embedding-004`, `textembedding-gecko`, `textembedding-gecko-multilingual`
|
||||
- `project_id` (str): Google Cloud project ID. Default: `cloud-large-language-models`
|
||||
- `region` (str): Google Cloud region. Default: `us-central1`
|
||||
- `api_key` (str): API key for authentication
|
||||
|
||||
**Environment Variables:**
|
||||
- `GOOGLE_APPLICATION_CREDENTIALS`: Path to service account JSON file
|
||||
- `GOOGLE_CLOUD_PROJECT` or `EMBEDDINGS_GOOGLE_VERTEX_PROJECT_ID`: `project_id`
|
||||
- `EMBEDDINGS_GOOGLE_VERTEX_MODEL_NAME`: `model_name`
|
||||
- `EMBEDDINGS_GOOGLE_VERTEX_REGION`: `region`
|
||||
- `EMBEDDINGS_GOOGLE_VERTEX_API_KEY`: `api_key`
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Jina AI">
|
||||
```python main.py
|
||||
from crewai.rag.embeddings.providers.jina.types import JinaProviderSpec
|
||||
|
||||
embedding_model: JinaProviderSpec = {
|
||||
"provider": "jina",
|
||||
"config": {
|
||||
"api_key": "your-api-key",
|
||||
"model_name": "jina-embeddings-v3"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Config Options:**
|
||||
- `api_key` (str): Jina AI API key
|
||||
- `model_name` (str): Model name. Default: `jina-embeddings-v2-base-en`. Options: `jina-embeddings-v3`, `jina-embeddings-v2-base-en`, `jina-embeddings-v2-small-en`
|
||||
|
||||
**Environment Variables:**
|
||||
- `JINA_API_KEY` or `EMBEDDINGS_JINA_API_KEY`: `api_key`
|
||||
- `EMBEDDINGS_JINA_MODEL_NAME`: `model_name`
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="HuggingFace">
|
||||
```python main.py
|
||||
from crewai.rag.embeddings.providers.huggingface.types import HuggingFaceProviderSpec
|
||||
|
||||
embedding_model: HuggingFaceProviderSpec = {
|
||||
"provider": "huggingface",
|
||||
"config": {
|
||||
"url": "https://api-inference.huggingface.co/models/sentence-transformers/all-MiniLM-L6-v2"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Config Options:**
|
||||
- `url` (str): Full URL to HuggingFace inference API endpoint
|
||||
|
||||
**Environment Variables:**
|
||||
- `HUGGINGFACE_URL` or `EMBEDDINGS_HUGGINGFACE_URL`: `url`
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Instructor">
|
||||
```python main.py
|
||||
from crewai.rag.embeddings.providers.instructor.types import InstructorProviderSpec
|
||||
|
||||
embedding_model: InstructorProviderSpec = {
|
||||
"provider": "instructor",
|
||||
"config": {
|
||||
"model_name": "hkunlp/instructor-xl",
|
||||
"device": "cuda",
|
||||
"instruction": "Represent the document"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Config Options:**
|
||||
- `model_name` (str): HuggingFace model ID. Default: `hkunlp/instructor-base`. Options: `hkunlp/instructor-xl`, `hkunlp/instructor-large`, `hkunlp/instructor-base`
|
||||
- `device` (str): Device to run on. Default: `cpu`. Options: `cpu`, `cuda`, `mps`
|
||||
- `instruction` (str): Instruction prefix for embeddings
|
||||
|
||||
**Environment Variables:**
|
||||
- `EMBEDDINGS_INSTRUCTOR_MODEL_NAME`: `model_name`
|
||||
- `EMBEDDINGS_INSTRUCTOR_DEVICE`: `device`
|
||||
- `EMBEDDINGS_INSTRUCTOR_INSTRUCTION`: `instruction`
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Sentence Transformer">
|
||||
```python main.py
|
||||
from crewai.rag.embeddings.providers.sentence_transformer.types import SentenceTransformerProviderSpec
|
||||
|
||||
embedding_model: SentenceTransformerProviderSpec = {
|
||||
"provider": "sentence-transformer",
|
||||
"config": {
|
||||
"model_name": "all-mpnet-base-v2",
|
||||
"device": "cuda",
|
||||
"normalize_embeddings": True
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Config Options:**
|
||||
- `model_name` (str): Sentence Transformers model name. Default: `all-MiniLM-L6-v2`. Options: `all-mpnet-base-v2`, `all-MiniLM-L6-v2`, `paraphrase-multilingual-MiniLM-L12-v2`
|
||||
- `device` (str): Device to run on. Default: `cpu`. Options: `cpu`, `cuda`, `mps`
|
||||
- `normalize_embeddings` (bool): Whether to normalize embeddings. Default: `False`
|
||||
|
||||
**Environment Variables:**
|
||||
- `EMBEDDINGS_SENTENCE_TRANSFORMER_MODEL_NAME`: `model_name`
|
||||
- `EMBEDDINGS_SENTENCE_TRANSFORMER_DEVICE`: `device`
|
||||
- `EMBEDDINGS_SENTENCE_TRANSFORMER_NORMALIZE_EMBEDDINGS`: `normalize_embeddings`
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="ONNX">
|
||||
```python main.py
|
||||
from crewai.rag.embeddings.providers.onnx.types import ONNXProviderSpec
|
||||
|
||||
embedding_model: ONNXProviderSpec = {
|
||||
"provider": "onnx",
|
||||
"config": {
|
||||
"preferred_providers": ["CUDAExecutionProvider", "CPUExecutionProvider"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Config Options:**
|
||||
- `preferred_providers` (list[str]): List of ONNX execution providers in order of preference
|
||||
|
||||
**Environment Variables:**
|
||||
- `EMBEDDINGS_ONNX_PREFERRED_PROVIDERS`: `preferred_providers` (comma-separated list)
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="OpenCLIP">
|
||||
```python main.py
|
||||
from crewai.rag.embeddings.providers.openclip.types import OpenCLIPProviderSpec
|
||||
|
||||
embedding_model: OpenCLIPProviderSpec = {
|
||||
"provider": "openclip",
|
||||
"config": {
|
||||
"model_name": "ViT-B-32",
|
||||
"checkpoint": "laion2b_s34b_b79k",
|
||||
"device": "cuda"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Config Options:**
|
||||
- `model_name` (str): OpenCLIP model architecture. Default: `ViT-B-32`. Options: `ViT-B-32`, `ViT-B-16`, `ViT-L-14`
|
||||
- `checkpoint` (str): Pretrained checkpoint name. Default: `laion2b_s34b_b79k`. Options: `laion2b_s34b_b79k`, `laion400m_e32`, `openai`
|
||||
- `device` (str): Device to run on. Default: `cpu`. Options: `cpu`, `cuda`
|
||||
|
||||
**Environment Variables:**
|
||||
- `EMBEDDINGS_OPENCLIP_MODEL_NAME`: `model_name`
|
||||
- `EMBEDDINGS_OPENCLIP_CHECKPOINT`: `checkpoint`
|
||||
- `EMBEDDINGS_OPENCLIP_DEVICE`: `device`
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Text2Vec">
|
||||
```python main.py
|
||||
from crewai.rag.embeddings.providers.text2vec.types import Text2VecProviderSpec
|
||||
|
||||
embedding_model: Text2VecProviderSpec = {
|
||||
"provider": "text2vec",
|
||||
"config": {
|
||||
"model_name": "shibing624/text2vec-base-multilingual"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Config Options:**
|
||||
- `model_name` (str): Text2Vec model name from HuggingFace. Default: `shibing624/text2vec-base-chinese`. Options: `shibing624/text2vec-base-multilingual`, `shibing624/text2vec-base-chinese`
|
||||
|
||||
**Environment Variables:**
|
||||
- `EMBEDDINGS_TEXT2VEC_MODEL_NAME`: `model_name`
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Roboflow">
|
||||
```python main.py
|
||||
from crewai.rag.embeddings.providers.roboflow.types import RoboflowProviderSpec
|
||||
|
||||
embedding_model: RoboflowProviderSpec = {
|
||||
"provider": "roboflow",
|
||||
"config": {
|
||||
"api_key": "your-api-key",
|
||||
"api_url": "https://infer.roboflow.com"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Config Options:**
|
||||
- `api_key` (str): Roboflow API key. Default: `""` (empty string)
|
||||
- `api_url` (str): Roboflow inference API URL. Default: `https://infer.roboflow.com`
|
||||
|
||||
**Environment Variables:**
|
||||
- `ROBOFLOW_API_KEY` or `EMBEDDINGS_ROBOFLOW_API_KEY`: `api_key`
|
||||
- `ROBOFLOW_API_URL` or `EMBEDDINGS_ROBOFLOW_API_URL`: `api_url`
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="WatsonX (IBM)">
|
||||
```python main.py
|
||||
from crewai.rag.embeddings.providers.ibm.types import WatsonXProviderSpec
|
||||
|
||||
embedding_model: WatsonXProviderSpec = {
|
||||
"provider": "watsonx",
|
||||
"config": {
|
||||
"model_id": "ibm/slate-125m-english-rtrvr",
|
||||
"url": "https://us-south.ml.cloud.ibm.com",
|
||||
"api_key": "your-api-key",
|
||||
"project_id": "your-project-id",
|
||||
"batch_size": 100,
|
||||
"concurrency_limit": 10,
|
||||
"persistent_connection": True
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Config Options:**
|
||||
- `model_id` (str): WatsonX model identifier
|
||||
- `url` (str): WatsonX API endpoint
|
||||
- `api_key` (str): IBM Cloud API key
|
||||
- `project_id` (str): WatsonX project ID
|
||||
- `space_id` (str): WatsonX space ID (alternative to project_id)
|
||||
- `batch_size` (int): Batch size for embeddings. Default: `100`
|
||||
- `concurrency_limit` (int): Maximum concurrent requests. Default: `10`
|
||||
- `persistent_connection` (bool): Use persistent connections. Default: `True`
|
||||
- Plus 20+ additional authentication and configuration options
|
||||
|
||||
**Environment Variables:**
|
||||
- `WATSONX_API_KEY` or `EMBEDDINGS_WATSONX_API_KEY`: `api_key`
|
||||
- `WATSONX_URL` or `EMBEDDINGS_WATSONX_URL`: `url`
|
||||
- `WATSONX_PROJECT_ID` or `EMBEDDINGS_WATSONX_PROJECT_ID`: `project_id`
|
||||
- `EMBEDDINGS_WATSONX_MODEL_ID`: `model_id`
|
||||
- `EMBEDDINGS_WATSONX_SPACE_ID`: `space_id`
|
||||
- `EMBEDDINGS_WATSONX_BATCH_SIZE`: `batch_size`
|
||||
- `EMBEDDINGS_WATSONX_CONCURRENCY_LIMIT`: `concurrency_limit`
|
||||
- `EMBEDDINGS_WATSONX_PERSISTENT_CONNECTION`: `persistent_connection`
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Custom">
|
||||
```python main.py
import numpy as np

from crewai.rag.core.base_embeddings_callable import EmbeddingFunction
from crewai.rag.embeddings.providers.custom.types import CustomProviderSpec


class MyEmbeddingFunction(EmbeddingFunction):
    def __call__(self, input):
        # Your custom embedding logic: return one vector per input item,
        # e.g. a list of numpy arrays (normalized and validated downstream).
        embeddings = [np.random.rand(384).astype(np.float32) for _ in input]  # placeholder vectors
        return embeddings


embedding_model: CustomProviderSpec = {
    "provider": "custom",
    "config": {
        "embedding_callable": MyEmbeddingFunction
    }
}
```
|
||||
|
||||
**Config Options:**
|
||||
- `embedding_callable` (type[EmbeddingFunction]): Custom embedding function class
|
||||
|
||||
**Note:** Custom embedding functions must implement the `EmbeddingFunction` protocol defined in `crewai.rag.core.base_embeddings_callable`. The `__call__` method should accept input data and return embeddings as a list of numpy arrays (or compatible format that will be normalized). The returned embeddings are automatically normalized and validated.
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
### Notes

- All config fields are optional unless marked as **Required**
- API keys can typically be provided via environment variables instead of config (see the sketch below)
- Default values are shown where applicable
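
For example, a minimal sketch that relies on environment variables only; the key value is a placeholder and the variable name follows the OpenAI table above:

```python
import os

from crewai_tools import RagTool

# Placeholder credential; in practice set this in your shell or .env file
os.environ["EMBEDDINGS_OPENAI_API_KEY"] = "sk-your-key"

# With no explicit "config" block, provider defaults and environment variables apply
embedding_model = {"provider": "openai"}

rag_tool = RagTool(config={"embedding_model": embedding_model})
```
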
|
||||
|
||||
|
||||
## Conclusion
|
||||
The `RagTool` provides a powerful way to create and query knowledge bases from various data sources. By leveraging Retrieval-Augmented Generation, it enables agents to access and retrieve relevant information efficiently, enhancing their ability to provide accurate and contextually appropriate responses.
|
||||
|
||||
@@ -18,7 +18,7 @@ These tools enable your agents to interact with cloud services, access cloud sto
|
||||
Write and upload files to Amazon S3 storage.
|
||||
</Card>
|
||||
|
||||
<Card title="Bedrock Invoke Agent" icon="aws" href="/en/tools/cloud-storage/bedrockinvokeagenttool">
|
||||
<Card title="Bedrock Invoke Agent" icon="aws" href="/en/tools/integration/bedrockinvokeagenttool">
|
||||
Invoke Amazon Bedrock agents for AI-powered tasks.
|
||||
</Card>
|
||||
|
||||
|
||||
@@ -58,10 +58,10 @@ tool = MySQLSearchTool(
|
||||
),
|
||||
),
|
||||
embedder=dict(
|
||||
provider="google",
|
||||
provider="google-generativeai",
|
||||
config=dict(
|
||||
model="models/embedding-001",
|
||||
task_type="retrieval_document",
|
||||
model_name="gemini-embedding-001",
|
||||
task_type="RETRIEVAL_DOCUMENT",
|
||||
# title="Embeddings",
|
||||
),
|
||||
),
|
||||
|
||||
Some files were not shown because too many files have changed in this diff.