Compare commits

..

2 Commits

Author SHA1 Message Date
Guilherme Vieira 30aa52096e Fix langchain at 0.1.0 2024-01-29 19:23:28 -03:00
Guilherme Vieira 6c711db409 Test 2024-01-29 19:16:43 -03:00
2097 changed files with 19907 additions and 433633 deletions

File diff suppressed because it is too large


@@ -1,14 +0,0 @@
# .editorconfig
root = true
# All files
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
# Python files
[*.py]
indent_style = space
indent_size = 2


@@ -1,115 +0,0 @@
name: Bug report
description: Create a report to help us improve CrewAI
title: "[BUG]"
labels: ["bug"]
assignees: []
body:
- type: textarea
id: description
attributes:
label: Description
description: Provide a clear and concise description of what the bug is.
validations:
required: true
- type: textarea
id: steps-to-reproduce
attributes:
label: Steps to Reproduce
description: Provide a step-by-step process to reproduce the behavior.
placeholder: |
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
validations:
required: true
- type: textarea
id: expected-behavior
attributes:
label: Expected behavior
description: A clear and concise description of what you expected to happen.
validations:
required: true
- type: textarea
id: screenshots-code
attributes:
label: Screenshots/Code snippets
description: If applicable, add screenshots or code snippets to help explain your problem.
validations:
required: true
- type: dropdown
id: os
attributes:
label: Operating System
description: Select the operating system you're using
options:
- Ubuntu 20.04
- Ubuntu 22.04
- Ubuntu 24.04
- macOS Catalina
- macOS Big Sur
- macOS Monterey
- macOS Ventura
- macOS Sonoma
- Windows 10
- Windows 11
- Other (specify in additional context)
validations:
required: true
- type: dropdown
id: python-version
attributes:
label: Python Version
description: Version of Python your Crew is running on
options:
- '3.10'
- '3.11'
- '3.12'
validations:
required: true
- type: input
id: crewai-version
attributes:
label: crewAI Version
description: What version of CrewAI are you using?
validations:
required: true
- type: input
id: crewai-tools-version
attributes:
label: crewAI Tools Version
description: What version of CrewAI Tools are you using?
validations:
required: true
- type: dropdown
id: virtual-environment
attributes:
label: Virtual Environment
description: What Virtual Environment are you running your crew in?
options:
- Venv
- Conda
- Poetry
validations:
required: true
- type: textarea
id: evidence
attributes:
label: Evidence
description: Include relevant information, logs or error messages. These can be screenshots.
validations:
required: true
- type: textarea
id: possible-solution
attributes:
label: Possible Solution
description: Have a solution in mind? Please suggest it here, or write "None".
validations:
required: true
- type: textarea
id: additional-context
attributes:
label: Additional context
description: Add any other context about the problem here.
validations:
required: true


@@ -1 +0,0 @@
blank_issues_enabled: false


@@ -1,65 +0,0 @@
name: Feature request
description: Suggest a new feature for CrewAI
title: "[FEATURE]"
labels: ["feature-request"]
assignees: []
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this feature request!
- type: dropdown
id: feature-area
attributes:
label: Feature Area
description: Which area of CrewAI does this feature primarily relate to?
options:
- Core functionality
- Agent capabilities
- Task management
- Integration with external tools
- Performance optimization
- Documentation
- Other (please specify in additional context)
validations:
required: true
- type: textarea
id: problem
attributes:
label: Is your feature request related to an existing bug? Please link it here.
description: A link to the bug or NA if not related to an existing bug.
validations:
required: true
- type: textarea
id: solution
attributes:
label: Describe the solution you'd like
description: A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: Describe alternatives you've considered
description: A clear and concise description of any alternative solutions or features you've considered.
validations:
required: false
- type: textarea
id: context
attributes:
label: Additional context
description: Add any other context, screenshots, or examples about the feature request here.
validations:
required: false
- type: dropdown
id: willingness-to-contribute
attributes:
label: Willingness to Contribute
description: Would you be willing to contribute to the implementation of this feature?
options:
- Yes, I'd be happy to submit a pull request
- I could provide more detailed specifications
- I can test the feature once it's implemented
- No, I'm just suggesting the idea
validations:
required: true


@@ -1,28 +0,0 @@
name: "CodeQL Config"
paths-ignore:
# Ignore template files - these are boilerplate code that shouldn't be analyzed
- "lib/crewai/src/crewai/cli/templates/**"
# Ignore test cassettes - these are test fixtures/recordings
- "lib/crewai/tests/cassettes/**"
- "lib/crewai-tools/tests/cassettes/**"
# Ignore cache and build artifacts
- ".cache/**"
# Ignore documentation build artifacts
- "docs/.cache/**"
# Ignore experimental code
- "lib/crewai/src/crewai/experimental/a2a/**"
paths:
# Include all Python source code from workspace packages
- "lib/crewai/src/**"
- "lib/crewai-tools/src/**"
- "lib/devtools/src/**"
# Include tests (but exclude cassettes via paths-ignore)
- "lib/crewai/tests/**"
- "lib/crewai-tools/tests/**"
- "lib/devtools/tests/**"
# Configure specific queries or packs if needed
# queries:
# - uses: security-and-quality

.github/security.md

@@ -1,50 +0,0 @@
## CrewAI Security Policy
We are committed to protecting the confidentiality, integrity, and availability of the CrewAI ecosystem. This policy explains how to report potential vulnerabilities and what you can expect from us when you do.
### Scope
We welcome reports for vulnerabilities that could impact:
- CrewAI-maintained source code and repositories
- CrewAI-operated infrastructure and services
- Official CrewAI releases, packages, and distributions
Issues affecting clearly unaffiliated third-party services or user-generated content are out of scope, unless you can demonstrate a direct impact on CrewAI systems or customers.
### How to Report
- **Please do not** disclose vulnerabilities via public GitHub issues, pull requests, or social media.
- Email detailed reports to **security@crewai.com** with the subject line `Security Report`.
- If you need to share large files or sensitive artifacts, mention it in your email and we will coordinate a secure transfer method.
### What to Include
Providing comprehensive information enables us to validate the issue quickly:
- **Vulnerability overview** — a concise description and classification (e.g., RCE, privilege escalation)
- **Affected components** — repository, branch, tag, or deployed service along with relevant file paths or endpoints
- **Reproduction steps** — detailed, step-by-step instructions; include logs, screenshots, or screen recordings when helpful
- **Proof-of-concept** — exploit details or code that demonstrates the impact (if available)
- **Impact analysis** — severity assessment, potential exploitation scenarios, and any prerequisites or special configurations
### Our Commitment
- **Acknowledgement:** We aim to acknowledge your report within two business days.
- **Communication:** We will keep you informed about triage results, remediation progress, and planned release timelines.
- **Resolution:** Confirmed vulnerabilities will be prioritized based on severity and fixed as quickly as possible.
- **Recognition:** We currently do not run a bug bounty program; any rewards or recognition are issued at CrewAI's discretion.
### Coordinated Disclosure
We ask that you allow us a reasonable window to investigate and remediate confirmed issues before any public disclosure. We will coordinate publication timelines with you whenever possible.
### Safe Harbor
We will not pursue or support legal action against individuals who, in good faith:
- Follow this policy and refrain from violating any applicable laws
- Avoid privacy violations, data destruction, or service disruption
- Limit testing to systems in scope and respect rate limits and terms of service
If you are unsure whether your testing is covered, please contact us at **security@crewai.com** before proceeding.

.github/workflows/black.yml

@@ -0,0 +1,10 @@
name: Lint
on: [push, pull_request]
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: psf/black@stable


@@ -1,48 +0,0 @@
name: Build uv cache
on:
push:
branches:
- main
paths:
- "uv.lock"
- "pyproject.toml"
schedule:
- cron: "0 0 */5 * *" # Run every 5 days at midnight UTC to prevent cache expiration
workflow_dispatch:
permissions:
contents: read
jobs:
build-cache:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.10", "3.11", "3.12", "3.13"]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
- name: Install dependencies and populate cache
run: |
echo "Building global UV cache for Python ${{ matrix.python-version }}..."
uv sync --all-groups --all-extras --no-install-project
echo "Cache populated successfully"
- name: Save uv caches
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}


@@ -1,103 +0,0 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL Advanced"
on:
push:
branches: [ "main" ]
paths-ignore:
- "lib/crewai/src/crewai/cli/templates/**"
pull_request:
branches: [ "main" ]
paths-ignore:
- "lib/crewai/src/crewai/cli/templates/**"
jobs:
analyze:
name: Analyze (${{ matrix.language }})
# Runner size impacts CodeQL analysis time. To learn more, please see:
# - https://gh.io/recommended-hardware-resources-for-running-codeql
# - https://gh.io/supported-runners-and-hardware-resources
# - https://gh.io/using-larger-runners (GitHub.com only)
# Consider using larger runners or machines with greater resources for possible analysis time improvements.
runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
permissions:
# required for all workflows
security-events: write
# required to fetch internal or private CodeQL packs
packages: read
# only required for workflows in private repositories
actions: read
contents: read
strategy:
fail-fast: false
matrix:
include:
- language: actions
build-mode: none
- language: python
build-mode: none
# CodeQL supports the following values keywords for 'language': 'actions', 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'rust', 'swift'
# Use `c-cpp` to analyze code written in C, C++ or both
# Use 'java-kotlin' to analyze code written in Java, Kotlin or both
# Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
# To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
# see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
# If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
# your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
- name: Checkout repository
uses: actions/checkout@v4
# Add any setup steps before running the `github/codeql-action/init` action.
# This includes steps like installing compilers or runtimes (`actions/setup-node`
# or others). This is typically only required for manual builds.
# - name: Setup runtime (example)
# uses: actions/setup-example@v1
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
config-file: ./.github/codeql/codeql-config.yml
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# If the analyze step fails for one of the languages you are analyzing with
# "We were unable to automatically build your code", modify the matrix above
# to set the build mode to "manual" for that language. Then modify this step
# to build your code.
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
- if: matrix.build-mode == 'manual'
shell: bash
run: |
echo 'If you are using a "manual" build mode for one or more of the' \
'languages you are analyzing, replace this with the commands to build' \
'your code, for example:'
echo ' make bootstrap'
echo ' make release'
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:${{matrix.language}}"


@@ -1,69 +0,0 @@
name: Lint
on: [pull_request]
permissions:
contents: read
jobs:
lint:
runs-on: ubuntu-latest
env:
TARGET_BRANCH: ${{ github.event.pull_request.base.ref }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Fetch Target Branch
run: git fetch origin $TARGET_BRANCH --depth=1
- name: Restore global uv cache
id: cache-restore
uses: actions/cache/restore@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py3.11-${{ hashFiles('uv.lock') }}
restore-keys: |
uv-main-py3.11-
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: "3.11"
enable-cache: false
- name: Install dependencies
run: uv sync --all-groups --all-extras --no-install-project
- name: Get Changed Python Files
id: changed-files
run: |
merge_base=$(git merge-base origin/"$TARGET_BRANCH" HEAD)
changed_files=$(git diff --name-only --diff-filter=ACMRTUB "$merge_base" | grep '\.py$' || true)
echo "files<<EOF" >> $GITHUB_OUTPUT
echo "$changed_files" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
- name: Run Ruff on Changed Files
if: ${{ steps.changed-files.outputs.files != '' }}
run: |
echo "${{ steps.changed-files.outputs.files }}" \
| tr ' ' '\n' \
| grep -v 'src/crewai/cli/templates/' \
| grep -v '/tests/' \
| xargs -I{} uv run ruff check "{}"
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py3.11-${{ hashFiles('uv.lock') }}

.github/workflows/mkdocs.yml

@@ -0,0 +1,35 @@
name: Deploy MkDocs
on:
workflow_dispatch:
push:
branches:
- main
permissions:
contents: write
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: Install Requirements
run: |
sudo apt-get update &&
sudo apt-get install pngquant &&
pip install mkdocs-material
env:
GH_TOKEN: ${{ secrets.GH_TOKEN }}
- name: Build and deploy MkDocs
run: mkdocs gh-deploy --force


@@ -1,33 +0,0 @@
name: Notify Downstream
on:
push:
branches:
- main
permissions:
contents: read
jobs:
notify-downstream:
runs-on: ubuntu-latest
steps:
- name: Generate GitHub App token
id: app-token
uses: tibdex/github-app-token@v2
with:
app_id: ${{ secrets.OSS_SYNC_APP_ID }}
private_key: ${{ secrets.OSS_SYNC_APP_PRIVATE_KEY }}
- name: Notify Repo B
uses: peter-evans/repository-dispatch@v3
with:
token: ${{ steps.app-token.outputs.token }}
repository: ${{ secrets.OSS_SYNC_DOWNSTREAM_REPO }}
event-type: upstream-commit
client-payload: |
{
"commit_sha": "${{ github.sha }}"
}


@@ -1,81 +0,0 @@
name: Publish to PyPI
on:
release:
types: [ published ]
workflow_dispatch:
jobs:
build:
name: Build packages
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install uv
uses: astral-sh/setup-uv@v4
- name: Build packages
run: |
uv build --all-packages
rm dist/.gitignore
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: dist
path: dist/
publish:
name: Publish to PyPI
needs: build
runs-on: ubuntu-latest
environment:
name: pypi
url: https://pypi.org/p/crewai
permissions:
id-token: write
contents: read
steps:
- uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: "3.12"
enable-cache: false
- name: Download artifacts
uses: actions/download-artifact@v4
with:
name: dist
path: dist
- name: Publish to PyPI
env:
UV_PUBLISH_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
run: |
failed=0
for package in dist/*; do
if [[ "$package" == *"crewai_devtools"* ]]; then
echo "Skipping private package: $package"
continue
fi
echo "Publishing $package"
if ! uv publish "$package"; then
echo "Failed to publish $package"
failed=1
fi
done
if [ $failed -eq 1 ]; then
echo "Some packages failed to publish"
exit 1
fi


@@ -1,29 +0,0 @@
name: Mark stale issues and pull requests
permissions:
contents: write
issues: write
pull-requests: write
on:
schedule:
- cron: '10 12 * * *'
workflow_dispatch:
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v9
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-label: 'no-issue-activity'
stale-issue-message: 'This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.'
close-issue-message: 'This issue was closed because it has been stalled for 5 days with no activity.'
days-before-issue-stale: 30
days-before-issue-close: 5
stale-pr-label: 'no-pr-activity'
stale-pr-message: 'This PR is stale because it has been open for 45 days with no activity.'
days-before-pr-stale: 45
days-before-pr-close: -1
operations-per-run: 1200


@@ -1,118 +1,32 @@
name: Run Tests
on: [pull_request]
on: [push, pull_request]
permissions:
contents: read
contents: write
env:
OPENAI_API_KEY: fake-api-key
PYTHONUNBUFFERED: 1
BRAVE_API_KEY: fake-brave-key
SNOWFLAKE_USER: fake-snowflake-user
SNOWFLAKE_PASSWORD: fake-snowflake-password
SNOWFLAKE_ACCOUNT: fake-snowflake-account
SNOWFLAKE_WAREHOUSE: fake-snowflake-warehouse
SNOWFLAKE_DATABASE: fake-snowflake-database
SNOWFLAKE_SCHEMA: fake-snowflake-schema
EMBEDCHAIN_DB_URI: sqlite:///test.db
jobs:
tests:
name: tests (${{ matrix.python-version }})
deploy:
runs-on: ubuntu-latest
timeout-minutes: 15
strategy:
fail-fast: true
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
group: [1, 2, 3, 4, 5, 6, 7, 8]
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v4
with:
fetch-depth: 0 # Fetch all history for proper diff
python-version: '3.10'
- name: Restore global uv cache
id: cache-restore
uses: actions/cache/restore@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
restore-keys: |
uv-main-py${{ matrix.python-version }}-
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
- name: Install the project
run: uv sync --all-groups --all-extras
- name: Restore test durations
uses: actions/cache/restore@v4
with:
path: .test_durations_py*
key: test-durations-py${{ matrix.python-version }}
- name: Run tests (group ${{ matrix.group }} of 8)
- name: Install Requirements
run: |
PYTHON_VERSION_SAFE=$(echo "${{ matrix.python-version }}" | tr '.' '_')
DURATION_FILE="../../.test_durations_py${PYTHON_VERSION_SAFE}"
sudo apt-get update &&
pip install poetry &&
poetry lock &&
poetry install
# Temporarily always skip cached durations to fix test splitting
# When durations don't match, pytest-split runs duplicate tests instead of splitting
echo "Using even test splitting (duration cache disabled until fix merged)"
DURATIONS_ARG=""
# Original logic (disabled temporarily):
# if [ ! -f "$DURATION_FILE" ]; then
# echo "No cached durations found, tests will be split evenly"
# DURATIONS_ARG=""
# elif git diff origin/${{ github.base_ref }}...HEAD --name-only 2>/dev/null | grep -q "^tests/.*\.py$"; then
# echo "Test files have changed, skipping cached durations to avoid mismatches"
# DURATIONS_ARG=""
# else
# echo "No test changes detected, using cached test durations for optimal splitting"
# DURATIONS_ARG="--durations-path=${DURATION_FILE}"
# fi
cd lib/crewai && uv run pytest \
--block-network \
--timeout=30 \
-vv \
--splits 8 \
--group ${{ matrix.group }} \
$DURATIONS_ARG \
--durations=10 \
-n auto \
--maxfail=3
- name: Run tool tests (group ${{ matrix.group }} of 8)
run: |
cd lib/crewai-tools && uv run pytest \
--block-network \
--timeout=30 \
-vv \
--splits 8 \
--group ${{ matrix.group }} \
--durations=10 \
-n auto \
--maxfail=3
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
- name: Run tests
run: poetry run pytest


@@ -1,101 +0,0 @@
name: Run Type Checks
on: [pull_request]
permissions:
contents: read
jobs:
type-checker-matrix:
name: type-checker (${{ matrix.python-version }})
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: ["3.10", "3.11", "3.12", "3.13"]
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Fetch all history for proper diff
- name: Restore global uv cache
id: cache-restore
uses: actions/cache/restore@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
restore-keys: |
uv-main-py${{ matrix.python-version }}-
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
- name: Install dependencies
run: uv sync --all-groups --all-extras
- name: Get changed Python files
id: changed-files
run: |
# Get the list of changed Python files compared to the base branch
echo "Fetching changed files..."
git diff --name-only --diff-filter=ACMRT origin/${{ github.base_ref }}...HEAD -- '*.py' > changed_files.txt
# Filter for files in src/ directory only (excluding tests/)
grep -E "^src/" changed_files.txt > filtered_changed_files.txt || true
# Check if there are any changed files
if [ -s filtered_changed_files.txt ]; then
echo "Changed Python files in src/:"
cat filtered_changed_files.txt
echo "has_changes=true" >> $GITHUB_OUTPUT
# Convert newlines to spaces for mypy command
echo "files=$(cat filtered_changed_files.txt | tr '\n' ' ')" >> $GITHUB_OUTPUT
else
echo "No Python files changed in src/"
echo "has_changes=false" >> $GITHUB_OUTPUT
fi
- name: Run type checks on changed files
if: steps.changed-files.outputs.has_changes == 'true'
run: |
echo "Running mypy on changed files with Python ${{ matrix.python-version }}..."
uv run mypy ${{ steps.changed-files.outputs.files }}
- name: No files to check
if: steps.changed-files.outputs.has_changes == 'false'
run: echo "No Python files in src/ were modified - skipping type checks"
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
# Summary job to provide single status for branch protection
type-checker:
name: type-checker
runs-on: ubuntu-latest
needs: type-checker-matrix
if: always()
steps:
- name: Check matrix results
run: |
if [ "${{ needs.type-checker-matrix.result }}" == "success" ] || [ "${{ needs.type-checker-matrix.result }}" == "skipped" ]; then
echo "✅ All type checks passed"
else
echo "❌ Type checks failed"
exit 1
fi


@@ -1,71 +0,0 @@
name: Update Test Durations
on:
push:
branches:
- main
paths:
- 'tests/**/*.py'
workflow_dispatch:
permissions:
contents: read
jobs:
update-durations:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
env:
OPENAI_API_KEY: fake-api-key
PYTHONUNBUFFERED: 1
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Restore global uv cache
id: cache-restore
uses: actions/cache/restore@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
restore-keys: |
uv-main-py${{ matrix.python-version }}-
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
- name: Install the project
run: uv sync --all-groups --all-extras
- name: Run all tests and store durations
run: |
PYTHON_VERSION_SAFE=$(echo "${{ matrix.python-version }}" | tr '.' '_')
uv run pytest --store-durations --durations-path=.test_durations_py${PYTHON_VERSION_SAFE} -n auto
continue-on-error: true
- name: Save durations to cache
if: always()
uses: actions/cache/save@v4
with:
path: .test_durations_py*
key: test-durations-py${{ matrix.python-version }}
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}

.gitignore

@@ -5,24 +5,4 @@ dist/
.env
assets/*
.idea
test/
docs_crew/
chroma.sqlite3
old_en.json
db/
test.py
rc-tests/*
*.pkl
temp/*
.vscode/*
crew_tasks_output.json
.codesight
.mypy_cache
.ruff_cache
.venv
test_flow.html
crewairules.mdc
plan.md
conceptual_plan.md
build_image
chromadb-*.lock


@@ -1,26 +1,21 @@
repos:
- repo: local
hooks:
- id: ruff
name: ruff
entry: bash -c 'source .venv/bin/activate && uv run ruff check --config pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
- id: ruff-format
name: ruff-format
entry: bash -c 'source .venv/bin/activate && uv run ruff format --config pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
- id: mypy
name: mypy
entry: bash -c 'source .venv/bin/activate && uv run mypy --config-file pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
- repo: https://github.com/astral-sh/uv-pre-commit
rev: 0.9.3
hooks:
- id: uv-lock
- repo: https://github.com/psf/black-pre-commit-mirror
rev: 23.12.1
hooks:
- id: black
language_version: python3.11
files: \.(py)$
- repo: https://github.com/pycqa/isort
rev: 5.13.2
hooks:
- id: isort
name: isort (python)
args: ["--profile", "black", "--filter-files"]
- repo: https://github.com/PyCQA/autoflake
rev: v2.2.1
hooks:
- id: autoflake
args: ['--in-place', '--remove-all-unused-imports', '--remove-unused-variables', '--ignore-init-module-imports']


@@ -1,4 +1,4 @@
Copyright (c) 2025 crewAI, Inc.
Copyright (c) 2018 The Python Packaging Authority
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

README.md

@@ -1,566 +1,183 @@
<p align="center">
<a href="https://github.com/crewAIInc/crewAI">
<img src="docs/images/crewai_logo.png" width="600px" alt="Open source Multi-AI Agent orchestration framework">
</a>
</p>
<p align="center" style="display: flex; justify-content: center; gap: 20px; align-items: center;">
<a href="https://trendshift.io/repositories/11239" target="_blank">
<img src="https://trendshift.io/api/badge/repositories/11239" alt="crewAIInc%2FcrewAI | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/>
</a>
</p>
THIS IS A TEST
<p align="center">
<a href="https://crewai.com">Homepage</a>
·
<a href="https://docs.crewai.com">Docs</a>
·
<a href="https://app.crewai.com">Start Cloud Trial</a>
·
<a href="https://blog.crewai.com">Blog</a>
·
<a href="https://community.crewai.com">Forum</a>
</p>
# crewAI
<p align="center">
<a href="https://github.com/crewAIInc/crewAI">
<img src="https://img.shields.io/github/stars/crewAIInc/crewAI" alt="GitHub Repo stars">
</a>
<a href="https://github.com/crewAIInc/crewAI/network/members">
<img src="https://img.shields.io/github/forks/crewAIInc/crewAI" alt="GitHub forks">
</a>
<a href="https://github.com/crewAIInc/crewAI/issues">
<img src="https://img.shields.io/github/issues/crewAIInc/crewAI" alt="GitHub issues">
</a>
<a href="https://github.com/crewAIInc/crewAI/pulls">
<img src="https://img.shields.io/github/issues-pr/crewAIInc/crewAI" alt="GitHub pull requests">
</a>
<a href="https://opensource.org/licenses/MIT">
<img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License: MIT">
</a>
</p>
![Logo of crewAI, two people rowing on a boat](./docs/crewai_logo.png)
<p align="center">
<a href="https://pypi.org/project/crewai/">
<img src="https://img.shields.io/pypi/v/crewai" alt="PyPI version">
</a>
<a href="https://pypi.org/project/crewai/">
<img src="https://img.shields.io/pypi/dm/crewai" alt="PyPI downloads">
</a>
<a href="https://twitter.com/crewAIInc">
<img src="https://img.shields.io/twitter/follow/crewAIInc?style=social" alt="Twitter Follow">
</a>
</p>
🤖 Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
### Fast and Flexible Multi-Agent Automation Framework
> CrewAI is a lean, lightning-fast Python framework built entirely from scratch—completely **independent of LangChain or other agent frameworks**.
> It empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario.
- **CrewAI Crews**: Optimize for autonomy and collaborative intelligence.
- **CrewAI Flows**: Enable granular, event-driven control and single LLM calls for precise task orchestration, with native support for Crews
With over 100,000 developers certified through our community courses at [learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
standard for enterprise-ready AI automation.
# CrewAI AMP Suite
CrewAI AMP Suite is a comprehensive bundle tailored for organizations that require secure, scalable, and easy-to-manage agent-driven automation.
You can try one part of the suite, the [Crew Control Plane](https://app.crewai.com), for free.
## Crew Control Plane Key Features:
- **Tracing & Observability**: Monitor and track your AI agents and workflows in real-time, including metrics, logs, and traces.
- **Unified Control Plane**: A centralized platform for managing, monitoring, and scaling your AI agents and workflows.
- **Seamless Integrations**: Easily connect with existing enterprise systems, data sources, and cloud infrastructure.
- **Advanced Security**: Built-in robust security and compliance measures ensuring safe deployment and management.
- **Actionable Insights**: Real-time analytics and reporting to optimize performance and decision-making.
- **24/7 Support**: Dedicated enterprise support to ensure uninterrupted operation and quick resolution of issues.
- **On-premise and Cloud Deployment Options**: Deploy CrewAI AMP on-premise or in the cloud, depending on your security and compliance requirements.
CrewAI AMP is designed for enterprises seeking a powerful, reliable solution to transform complex business processes into efficient,
intelligent automations.
## Table of contents
- [Why CrewAI?](#why-crewai)
- [Getting Started](#getting-started)
- [Key Features](#key-features)
- [Understanding Flows and Crews](#understanding-flows-and-crews)
- [CrewAI vs LangGraph](#how-crewai-compares)
- [Examples](#examples)
- [crewAI](#crewai)
- [Why CrewAI?](#why-crewai)
- [Getting Started](#getting-started)
- [Key Features](#key-features)
- [Examples](#examples)
- [Code](#code)
- [Video](#video)
- [Quick Tutorial](#quick-tutorial)
- [Write Job Descriptions](#write-job-descriptions)
- [Trip Planner](#trip-planner)
- [Stock Analysis](#stock-analysis)
- [Using Crews and Flows Together](#using-crews-and-flows-together)
- [Connecting Your Crew to a Model](#connecting-your-crew-to-a-model)
- [How CrewAI Compares](#how-crewai-compares)
- [Frequently Asked Questions (FAQ)](#frequently-asked-questions-faq)
- [Contribution](#contribution)
- [Telemetry](#telemetry)
- [License](#license)
- [Connecting Your Crew to a Model](#connecting-your-crew-to-a-model)
- [How CrewAI Compares](#how-crewai-compares)
- [Contribution](#contribution)
- [Installing Dependencies](#installing-dependencies)
- [Virtual Env](#virtual-env)
- [Pre-commit hooks](#pre-commit-hooks)
- [Running Tests](#running-tests)
- [Packaging](#packaging)
- [Installing Locally](#installing-locally)
- [Hire CrewAI](#hire-crewai)
- [License](#license)
## Why CrewAI?
<div align="center" style="margin-bottom: 30px;">
<img src="docs/images/asset.png" alt="CrewAI Logo" width="100%">
</div>
The power of AI collaboration has too much to offer.
CrewAI is designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
CrewAI unlocks the true potential of multi-agent automation, delivering the best-in-class combination of speed, flexibility, and control with either Crews of AI Agents or Flows of Events:
- **Standalone Framework**: Built from scratch, independent of LangChain or any other agent framework.
- **High Performance**: Optimized for speed and minimal resource usage, enabling faster execution.
- **Flexible Low Level Customization**: Complete freedom to customize at both high and low levels - from overall workflows and system architecture to granular agent behaviors, internal prompts, and execution logic.
- **Ideal for Every Use Case**: Proven effective for both simple tasks and highly complex, real-world, enterprise-grade scenarios.
- **Robust Community**: Backed by a rapidly growing community of over **100,000 certified** developers offering comprehensive support and resources.
CrewAI empowers developers and enterprises to confidently build intelligent automations, bridging the gap between simplicity, flexibility, and performance.
- 🤖 [Talk with the Docs](https://chatg.pt/DWjSBZn)
- 📄 [Documentation Wiki](https://joaomdmoura.github.io/crewAI/)
## Getting Started
Setup and run your first CrewAI agents by following this tutorial.
[![CrewAI Getting Started Tutorial](https://img.youtube.com/vi/-kSOTtYzgEw/hqdefault.jpg)](https://www.youtube.com/watch?v=-kSOTtYzgEw "CrewAI Getting Started Tutorial")
### Learning Resources
Learn CrewAI through our comprehensive courses:
- [Multi AI Agent Systems with CrewAI](https://www.deeplearning.ai/short-courses/multi-ai-agent-systems-with-crewai/) - Master the fundamentals of multi-agent systems
- [Practical Multi AI Agents and Advanced Use Cases](https://www.deeplearning.ai/short-courses/practical-multi-ai-agents-and-advanced-use-cases-with-crewai/) - Deep dive into advanced implementations
### Understanding Flows and Crews
CrewAI offers two powerful, complementary approaches that work seamlessly together to build sophisticated AI applications:
1. **Crews**: Teams of AI agents with true autonomy and agency, working together to accomplish complex tasks through role-based collaboration. Crews enable:
- Natural, autonomous decision-making between agents
- Dynamic task delegation and collaboration
- Specialized roles with defined goals and expertise
- Flexible problem-solving approaches
2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:
- Fine-grained control over execution paths for real-world scenarios
- Secure, consistent state management between tasks
- Clean integration of AI agents with production Python code
- Conditional branching for complex business logic
The true power of CrewAI emerges when combining Crews and Flows. This synergy allows you to:
- Build complex, production-grade applications
- Balance autonomy with precise control
- Handle sophisticated real-world scenarios
- Maintain clean, maintainable code structure
### Getting Started with Installation
To get started with CrewAI, follow these simple steps:
### 1. Installation
Ensure you have Python >=3.10 <3.14 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, install CrewAI:
1. **Installation**:
```shell
pip install crewai
```
If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command:
The example below also uses duckduckgo-search, so install that as well:
```shell
pip install 'crewai[tools]'
pip install duckduckgo-search
```
The command above installs the basic package and also adds extra components which require more dependencies to function.
### Troubleshooting Dependencies
If you encounter issues during installation or usage, here are some common solutions:
#### Common Issues
1. **ModuleNotFoundError: No module named 'tiktoken'**
- Install tiktoken explicitly: `pip install 'crewai[embeddings]'`
- If using embedchain or other tools: `pip install 'crewai[tools]'`
2. **Failed building wheel for tiktoken**
- Ensure Rust compiler is installed (see installation steps above)
- For Windows: Verify Visual C++ Build Tools are installed
- Try upgrading pip: `pip install --upgrade pip`
- If issues persist, use a pre-built wheel: `pip install tiktoken --prefer-binary`
### 2. Setting Up Your Crew with the YAML Configuration
To create a new CrewAI project, run the following CLI (Command Line Interface) command:
```shell
crewai create crew <project_name>
```
This command creates a new project folder with the following structure:
```
my_project/
├── .gitignore
├── pyproject.toml
├── README.md
├── .env
└── src/
└── my_project/
├── __init__.py
├── main.py
├── crew.py
├── tools/
│ ├── custom_tool.py
│ └── __init__.py
└── config/
├── agents.yaml
└── tasks.yaml
```
You can now start developing your crew by editing the files in the `src/my_project` folder. The `main.py` file is the entry point of the project, the `crew.py` file is where you define your crew, the `agents.yaml` file is where you define your agents, and the `tasks.yaml` file is where you define your tasks.
#### To customize your project, you can:
- Modify `src/my_project/config/agents.yaml` to define your agents.
- Modify `src/my_project/config/tasks.yaml` to define your tasks.
- Modify `src/my_project/crew.py` to add your own logic, tools, and specific arguments.
- Modify `src/my_project/main.py` to add custom inputs for your agents and tasks.
- Add your environment variables into the `.env` file.
#### Example of a simple crew with a sequential process:
Instantiate your crew:
```shell
crewai create crew latest-ai-development
```
Modify the files as needed to fit your use case:
**agents.yaml**
```yaml
# src/my_project/config/agents.yaml
researcher:
role: >
{topic} Senior Data Researcher
goal: >
Uncover cutting-edge developments in {topic}
backstory: >
You're a seasoned researcher with a knack for uncovering the latest
developments in {topic}. Known for your ability to find the most relevant
information and present it in a clear and concise manner.
reporting_analyst:
role: >
{topic} Reporting Analyst
goal: >
Create detailed reports based on {topic} data analysis and research findings
backstory: >
You're a meticulous analyst with a keen eye for detail. You're known for
your ability to turn complex data into clear and concise reports, making
it easy for others to understand and act on the information you provide.
```
**tasks.yaml**
```yaml
# src/my_project/config/tasks.yaml
research_task:
description: >
Conduct a thorough research about {topic}
Make sure you find any interesting and relevant information given
the current year is 2025.
expected_output: >
A list with 10 bullet points of the most relevant information about {topic}
agent: researcher
reporting_task:
description: >
Review the context you got and expand each topic into a full section for a report.
Make sure the report is detailed and contains any and all relevant information.
expected_output: >
A fully fledged report with the main topics, each with a full section of information.
Formatted as markdown without '```'
agent: reporting_analyst
output_file: report.md
```
**crew.py**
2. **Setting Up Your Crew**:
```python
# src/my_project/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List
import os
from crewai import Agent, Task, Crew, Process
@CrewBase
class LatestAiDevelopmentCrew():
"""LatestAiDevelopment crew"""
agents: List[BaseAgent]
tasks: List[Task]
os.environ["OPENAI_API_KEY"] = "YOUR KEY"
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'],
# You can choose to use a local model through Ollama for example. See ./docs/llm-connections.md for more information.
# from langchain.llms import Ollama
# ollama_llm = Ollama(model="openhermes")
# Install duckduckgo-search for this example:
# !pip install -U duckduckgo-search
from langchain.tools import DuckDuckGoSearchRun
search_tool = DuckDuckGoSearchRun()
# Define your agents with roles and goals
researcher = Agent(
role='Senior Research Analyst',
goal='Uncover cutting-edge developments in AI and data science',
backstory="""You work at a leading tech think tank.
Your expertise lies in identifying emerging trends.
You have a knack for dissecting complex data and presenting
actionable insights.""",
verbose=True,
tools=[SerperDevTool()]
)
@agent
def reporting_analyst(self) -> Agent:
return Agent(
config=self.agents_config['reporting_analyst'],
verbose=True
)
@task
def research_task(self) -> Task:
return Task(
config=self.tasks_config['research_task'],
)
@task
def reporting_task(self) -> Task:
return Task(
config=self.tasks_config['reporting_task'],
output_file='report.md'
)
@crew
def crew(self) -> Crew:
"""Creates the LatestAiDevelopment crew"""
return Crew(
agents=self.agents, # Automatically created by the @agent decorator
tasks=self.tasks, # Automatically created by the @task decorator
process=Process.sequential,
allow_delegation=False,
tools=[search_tool]
# You can pass an optional llm attribute specifying what model you want to use.
# It can be a local model through Ollama / LM Studio or a remote
# model like OpenAI, Mistral, Anthropic or others (https://python.langchain.com/docs/integrations/llms/)
#
# Examples:
# llm=ollama_llm # was defined above in the file
# llm=OpenAI(model_name="gpt-3.5", temperature=0.7)
# For the OpenAI model you would need to import
# from langchain_openai import OpenAI
)
writer = Agent(
role='Tech Content Strategist',
goal='Craft compelling content on tech advancements',
backstory="""You are a renowned Content Strategist, known for
your insightful and engaging articles.
You transform complex concepts into compelling narratives.""",
verbose=True,
)
allow_delegation=True,
# (optional) llm=ollama_llm
)
# Create tasks for your agents
task1 = Task(
description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
Identify key trends, breakthrough technologies, and potential industry impacts.
Your final answer MUST be a full analysis report""",
agent=researcher
)
task2 = Task(
description="""Using the insights provided, develop an engaging blog
post that highlights the most significant AI advancements.
Your post should be informative yet accessible, catering to a tech-savvy audience.
Make it sound cool, avoid complex words so it doesn't sound like AI.
Your final answer MUST be the full blog post of at least 4 paragraphs.""",
agent=writer
)
# Instantiate your crew with a sequential process
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
verbose=2, # You can set it to 1 or 2 for different logging levels
)
# Get your crew to work!
result = crew.kickoff()
print("######################")
print(result)
```
**main.py**
Currently the only supported process is `Process.sequential`, where one task is executed after the other and the outcome of one is passed as extra content into the next.
```python
#!/usr/bin/env python
# src/my_project/main.py
import sys
from latest_ai_development.crew import LatestAiDevelopmentCrew
def run():
"""
Run the crew.
"""
inputs = {
'topic': 'AI Agents'
}
LatestAiDevelopmentCrew().crew().kickoff(inputs=inputs)
```
### 3. Running Your Crew
Before running your crew, make sure you have the following keys set as environment variables in your `.env` file:
- An [OpenAI API key](https://platform.openai.com/account/api-keys) (or other LLM API key): `OPENAI_API_KEY=sk-...`
- A [Serper.dev](https://serper.dev/) API key: `SERPER_API_KEY=YOUR_KEY_HERE`
Lock the dependencies and install them by using the CLI command but first, navigate to your project directory:
```shell
cd my_project
crewai install (Optional)
```
To run your crew, execute the following command in the root of your project:
```bash
crewai run
```
or
```bash
python src/my_project/main.py
```
If an error happens due to the usage of poetry, please run the following command to update your crewai package:
```bash
crewai update
```
You should see the output in the console and the `report.md` file should be created in the root of your project with the full final report.
In addition to the sequential process, you can use the hierarchical process, which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results. [See more about the processes here](https://docs.crewai.com/core-concepts/Processes/).
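As a rough illustration of the hierarchical process described above, here is a minimal sketch that reuses the `researcher`/`writer` agents and tasks from the earlier example; the `manager_llm` argument is an assumption based on the linked processes docs, so check them for the exact parameters.

```python
# Sketch only: hierarchical process with an auto-assigned manager that
# delegates tasks and validates results. Reuses objects from the example above.
from crewai import Crew, Process

hierarchical_crew = Crew(
    agents=[researcher, writer],   # defined in the earlier example
    tasks=[task1, task2],          # defined in the earlier example
    process=Process.hierarchical,
    manager_llm="gpt-4o",          # assumed parameter; a manager model is required for this process
)

result = hierarchical_crew.kickoff()
print(result)
```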
## Key Features
CrewAI stands apart as a lean, standalone, high-performance multi-AI Agent framework delivering simplicity, flexibility, and precise control—free from the complexity and limitations found in other agent frameworks.
- **Role-Based Agent Design**: Customize agents with specific roles, goals, and tools.
- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enhancing problem-solving efficiency.
- **Flexible Task Management**: Define tasks with customizable tools and assign them to agents dynamically.
- **Processes Driven**: Currently only supports `sequential` task execution, but more complex processes like consensual and hierarchical are being worked on.
- **Works with Open Source Models**: Run your crew using OpenAI or open-source models; refer to the [Connect crewAI to LLMs](./docs/llm-connections.md) page for details on configuring your agents' connections to models, even ones running locally!
- **Standalone & Lean**: Completely independent from other frameworks like LangChain, offering faster execution and lighter resource demands.
- **Flexible & Precise**: Easily orchestrate autonomous agents through intuitive [Crews](https://docs.crewai.com/concepts/crews) or precise [Flows](https://docs.crewai.com/concepts/flows), achieving perfect balance for your needs.
- **Seamless Integration**: Effortlessly combine Crews (autonomy) and Flows (precision) to create complex, real-world automations.
- **Deep Customization**: Tailor every aspect—from high-level workflows down to low-level internal prompts and agent behaviors.
- **Reliable Performance**: Consistent results across simple tasks and complex, enterprise-level automations.
- **Thriving Community**: Backed by robust documentation and over 100,000 certified developers, providing exceptional support and guidance.
Choose CrewAI to easily build powerful, adaptable, and production-ready AI automations.
![CrewAI Mind Map](./docs/crewAI-mindmap.png "CrewAI Mind Map")
## Examples
You can test different real life examples of AI crews [in the examples repo](https://github.com/joaomdmoura/crewAI-examples?tab=readme-ov-file)
You can test different real life examples of AI crews in the [CrewAI-examples repo](https://github.com/crewAIInc/crewAI-examples?tab=readme-ov-file):
### Code
- [Trip Planner](https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner)
- [Stock Analysis](https://github.com/joaomdmoura/crewAI-examples/tree/main/stock_analysis)
- [Landing Page Generator](https://github.com/joaomdmoura/crewAI-examples/tree/main/landing_page_generator)
- [Having Human input on the execution](./docs/how-to/Human-Input-on-Execution.md)
- [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/landing_page_generator)
- [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution)
- [Trip Planner](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/trip_planner)
- [Stock Analysis](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/stock_analysis)
### Video
#### Quick Tutorial
[![CrewAI Tutorial](https://img.youtube.com/vi/tnejrr-0a94/0.jpg)](https://www.youtube.com/watch?v=tnejrr-0a94 "CrewAI Tutorial")
### Quick Tutorial
#### Trip Planner
[![Trip Planner](https://img.youtube.com/vi/xis7rWp-hjs/0.jpg)](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner")
[![CrewAI Tutorial](https://img.youtube.com/vi/tnejrr-0a94/maxresdefault.jpg)](https://www.youtube.com/watch?v=tnejrr-0a94 "CrewAI Tutorial")
### Write Job Descriptions
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/job-posting) or watch a video below:
[![Jobs postings](https://img.youtube.com/vi/u98wEMz-9to/maxresdefault.jpg)](https://www.youtube.com/watch?v=u98wEMz-9to "Jobs postings")
### Trip Planner
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/trip_planner) or watch a video below:
[![Trip Planner](https://img.youtube.com/vi/xis7rWp-hjs/maxresdefault.jpg)](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner")
### Stock Analysis
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/stock_analysis) or watch a video below:
[![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/maxresdefault.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")
### Using Crews and Flows Together
CrewAI's power truly shines when combining Crews with Flows to create sophisticated automation pipelines.
CrewAI flows support logical operators like `or_` and `and_` to combine multiple conditions. This can be used with `@start`, `@listen`, or `@router` decorators to create complex triggering conditions.
- `or_`: Triggers when any of the specified conditions are met.
- `and_`: Triggers when all of the specified conditions are met (a short sketch of `and_` follows the example below).
Here's how you can orchestrate multiple Crews within a Flow:
```python
from crewai.flow.flow import Flow, listen, start, router, or_
from crewai import Crew, Agent, Task, Process
from pydantic import BaseModel
# Define structured state for precise control
class MarketState(BaseModel):
sentiment: str = "neutral"
confidence: float = 0.0
recommendations: list = []
class AdvancedAnalysisFlow(Flow[MarketState]):
@start()
def fetch_market_data(self):
# Demonstrate low-level control with structured state
self.state.sentiment = "analyzing"
return {"sector": "tech", "timeframe": "1W"} # These parameters match the task description template
@listen(fetch_market_data)
def analyze_with_crew(self, market_data):
# Show crew agency through specialized roles
analyst = Agent(
role="Senior Market Analyst",
goal="Conduct deep market analysis with expert insight",
backstory="You're a veteran analyst known for identifying subtle market patterns"
)
researcher = Agent(
role="Data Researcher",
goal="Gather and validate supporting market data",
backstory="You excel at finding and correlating multiple data sources"
)
analysis_task = Task(
description="Analyze {sector} sector data for the past {timeframe}",
expected_output="Detailed market analysis with confidence score",
agent=analyst
)
research_task = Task(
description="Find supporting data to validate the analysis",
expected_output="Corroborating evidence and potential contradictions",
agent=researcher
)
# Demonstrate crew autonomy
analysis_crew = Crew(
agents=[analyst, researcher],
tasks=[analysis_task, research_task],
process=Process.sequential,
verbose=True
)
return analysis_crew.kickoff(inputs=market_data) # Pass market_data as named inputs
@router(analyze_with_crew)
def determine_next_steps(self):
# Show flow control with conditional routing
if self.state.confidence > 0.8:
return "high_confidence"
elif self.state.confidence > 0.5:
return "medium_confidence"
return "low_confidence"
@listen("high_confidence")
def execute_strategy(self):
# Demonstrate complex decision making
strategy_crew = Crew(
agents=[
Agent(role="Strategy Expert",
goal="Develop optimal market strategy")
],
tasks=[
Task(description="Create detailed strategy based on analysis",
expected_output="Step-by-step action plan")
]
)
return strategy_crew.kickoff()
@listen(or_("medium_confidence", "low_confidence"))
def request_additional_analysis(self):
self.state.recommendations.append("Gather more data")
return "Additional analysis required"
```
This example demonstrates how to:
1. Use Python code for basic data operations
2. Create and execute Crews as steps in your workflow
3. Use Flow decorators to manage the sequence of operations
4. Implement conditional branching based on Crew results
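As noted in the operator list above, `and_` waits for every listed condition. Here is a minimal, self-contained sketch with illustrative method names (not part of the original example):

```python
# Sketch: the combine() listener fires only after BOTH start methods finish.
from crewai.flow.flow import Flow, and_, listen, start


class JoinFlow(Flow):
    @start()
    def fetch_prices(self):
        return {"prices": [101.2, 99.8]}

    @start()
    def fetch_news(self):
        return {"headlines": ["Chips rally", "New model released"]}

    @listen(and_(fetch_prices, fetch_news))
    def combine(self):
        # Runs once, after both branches above have completed.
        return "combined market brief"


# Usage: JoinFlow().kickoff()
```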
#### Stock Analysis
[![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/0.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")
## Connecting Your Crew to a Model
CrewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
crewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models.
Please refer to the [Connect crewAI to LLMs](./docs/how-to/llm-connections.md) page for details on configuring your agents' connections to models.
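As a concrete sketch of the Ollama path mentioned above, reusing the langchain-based pattern from the older snippet earlier in this README (the import path and model name are assumptions tied to langchain 0.1.x; adapt them to your setup):

```python
# Sketch only: route a single agent to a locally running Ollama model
# instead of the default OpenAI API.
from langchain.llms import Ollama   # langchain 0.1.x-style import, as in the snippet above
from crewai import Agent

ollama_llm = Ollama(model="openhermes")  # any model you have pulled locally

local_researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments in AI and data science",
    backstory="You work at a leading tech think tank.",
    llm=ollama_llm,  # this agent now queries the local model
    verbose=True,
)
```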
## How CrewAI Compares
**CrewAI's Advantage**: CrewAI combines autonomous agent intelligence with precise workflow control through its unique Crews and Flows architecture. The framework excels at both high-level orchestration and low-level customization, enabling complex, production-grade systems with granular control.
- **Autogen**: While Autogen excels in creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **LangGraph**: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.
*P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb)).*
- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
## Contribution
CrewAI is open-source and we welcome contributions. If you're looking to contribute, please:
@@ -572,16 +189,14 @@ CrewAI is open-source and we welcome contributions. If you're looking to contrib
- We appreciate your input!
### Installing Dependencies
```bash
uv lock
uv sync
```
### Virtual Env
```bash
uv venv
```
### Pre-commit hooks
@@ -591,187 +206,23 @@ pre-commit install
```
### Running Tests
```bash
uv run pytest .
```
### Running static type checks
```bash
uvx mypy src
```
### Packaging
```bash
uv build
```
### Installing Locally
```bash
pip install dist/*.tar.gz
```
## Telemetry
CrewAI uses anonymous telemetry to collect usage data with the main purpose of helping us improve the library by focusing our efforts on the most used features, integrations and tools.
It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, except in the case described below. When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights while respecting user privacy. Users can disable telemetry by setting the environment variable `OTEL_SDK_DISABLED` to `true`.
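A minimal sketch of opting out programmatically (you can equally export the variable in your shell before launching your script):
```python
import os

# Disable CrewAI's anonymous telemetry; set this before importing crewai.
os.environ["OTEL_SDK_DISABLED"] = "true"

from crewai import Agent, Crew, Task  # noqa: E402
```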
Data collected includes:
- Version of CrewAI
  - So we can understand how many users are using the latest version
- Version of Python
  - So we can decide on what versions to better support
- General OS (e.g. number of CPUs, macOS/Windows/Linux)
  - So we know what OS we should focus on and whether we could build specific OS-related features
- Number of agents and tasks in a crew
  - So we make sure we are testing internally with similar use cases and educate people on the best practices
- Crew Process being used
  - Understand where we should focus our efforts
- If Agents are using memory or allowing delegation
  - Understand if we improved the features or maybe even drop them
- If Tasks are being executed in parallel or sequentially
  - Understand if we should focus more on parallel execution
- Language model being used
  - Improved support on the most used models
- Roles of agents in a crew
  - Understand high-level use cases so we can build better tools, integrations and examples about them
- Tools' names available
  - Understand which of the publicly available tools are used the most so we can improve them
Users can opt-in to Further Telemetry, sharing the complete telemetry data by setting the `share_crew` attribute to `True` on their Crews. Enabling `share_crew` results in the collection of detailed crew and task execution data, including `goal`, `backstory`, `context`, and `output` of tasks. This enables a deeper insight into usage patterns while respecting the user's choice to share.
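As a sketch, opting in is just a matter of setting the attribute when building the crew; the agent and task definitions below are placeholders:
```python
from crewai import Agent, Crew, Task

analyst = Agent(
    role="Data Analyst",
    goal="Extract actionable insights",
    backstory="You analyze data and report findings to the business.",
)

report = Task(
    description="Summarize this week's findings",
    expected_output="A short bullet-point summary",
    agent=analyst,
)

# share_crew=True opts this crew into the more detailed telemetry described above.
crew = Crew(agents=[analyst], tasks=[report], share_crew=True)
```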
## Hire CrewAI
We're the company developing CrewAI and CrewAI Enterprise. For a limited time, we're offering consulting to selected customers to give them early access to our enterprise solution.
If you are interested in getting access to it and hiring weekly hours with our team, feel free to email us at [sales@crewai.io](mailto:sales@crewai.io).
## License
CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/blob/main/LICENSE).
## Frequently Asked Questions (FAQ)
### General
- [What exactly is CrewAI?](#q-what-exactly-is-crewai)
- [How do I install CrewAI?](#q-how-do-i-install-crewai)
- [Does CrewAI depend on LangChain?](#q-does-crewai-depend-on-langchain)
- [Is CrewAI open-source?](#q-is-crewai-open-source)
- [Does CrewAI collect data from users?](#q-does-crewai-collect-data-from-users)
### Features and Capabilities
- [Can CrewAI handle complex use cases?](#q-can-crewai-handle-complex-use-cases)
- [Can I use CrewAI with local AI models?](#q-can-i-use-crewai-with-local-ai-models)
- [What makes Crews different from Flows?](#q-what-makes-crews-different-from-flows)
- [How is CrewAI better than LangChain?](#q-how-is-crewai-better-than-langchain)
- [Does CrewAI support fine-tuning or training custom models?](#q-does-crewai-support-fine-tuning-or-training-custom-models)
### Resources and Community
- [Where can I find real-world CrewAI examples?](#q-where-can-i-find-real-world-crewai-examples)
- [How can I contribute to CrewAI?](#q-how-can-i-contribute-to-crewai)
### Enterprise Features
- [What additional features does CrewAI AMP offer?](#q-what-additional-features-does-crewai-amp-offer)
- [Is CrewAI AMP available for cloud and on-premise deployments?](#q-is-crewai-amp-available-for-cloud-and-on-premise-deployments)
- [Can I try CrewAI AMP for free?](#q-can-i-try-crewai-amp-for-free)
### Q: What exactly is CrewAI?
A: CrewAI is a standalone, lean, and fast Python framework built specifically for orchestrating autonomous AI agents. Unlike frameworks like LangChain, CrewAI does not rely on external dependencies, making it leaner, faster, and simpler.
### Q: How do I install CrewAI?
A: Install CrewAI using pip:
```shell
pip install crewai
```
For additional tools, use:
```shell
pip install 'crewai[tools]'
```
### Q: Does CrewAI depend on LangChain?
A: No. CrewAI is built entirely from the ground up, with no dependencies on LangChain or other agent frameworks. This ensures a lean, fast, and flexible experience.
### Q: Can CrewAI handle complex use cases?
A: Yes. CrewAI excels at both simple and highly complex real-world scenarios, offering deep customization options at both high and low levels, from internal prompts to sophisticated workflow orchestration.
### Q: Can I use CrewAI with local AI models?
A: Absolutely! CrewAI supports various language models, including local ones. Tools like Ollama and LM Studio allow seamless integration. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.
### Q: What makes Crews different from Flows?
A: Crews provide autonomous agent collaboration, ideal for tasks requiring flexible decision-making and dynamic interaction. Flows offer precise, event-driven control, ideal for managing detailed execution paths and secure state management. You can seamlessly combine both for maximum effectiveness.
### Q: How is CrewAI better than LangChain?
A: CrewAI provides simpler, more intuitive APIs, faster execution speeds, more reliable and consistent results, robust documentation, and an active community—addressing common criticisms and limitations associated with LangChain.
### Q: Is CrewAI open-source?
A: Yes, CrewAI is open-source and actively encourages community contributions and collaboration.
### Q: Does CrewAI collect data from users?
A: CrewAI collects anonymous telemetry data strictly for improvement purposes. Sensitive data such as prompts, tasks, or API responses are never collected unless explicitly enabled by the user.
### Q: Where can I find real-world CrewAI examples?
A: Check out practical examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), covering use cases like trip planners, stock analysis, and job postings.
### Q: How can I contribute to CrewAI?
A: Contributions are warmly welcomed! Fork the repository, create your branch, implement your changes, and submit a pull request. See the Contribution section of the README for detailed guidelines.
### Q: What additional features does CrewAI AMP offer?
A: CrewAI AMP provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.
### Q: Is CrewAI AMP available for cloud and on-premise deployments?
A: Yes, CrewAI AMP supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.
### Q: Can I try CrewAI AMP for free?
A: Yes, you can explore part of the CrewAI AMP Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.
### Q: Does CrewAI support fine-tuning or training custom models?
A: Yes, CrewAI can integrate with custom-trained or fine-tuned models, allowing you to enhance your agents with domain-specific knowledge and accuracy.
### Q: Can CrewAI agents interact with external tools and APIs?
A: Absolutely! CrewAI agents can easily integrate with external tools, APIs, and databases, empowering them to leverage real-world data and resources.
### Q: Is CrewAI suitable for production environments?
A: Yes, CrewAI is explicitly designed with production-grade standards, ensuring reliability, stability, and scalability for enterprise deployments.
### Q: How scalable is CrewAI?
A: CrewAI is highly scalable, supporting simple automations and large-scale enterprise workflows involving numerous agents and complex tasks simultaneously.
### Q: Does CrewAI offer debugging and monitoring tools?
A: Yes, CrewAI AMP includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.
### Q: What programming languages does CrewAI support?
A: CrewAI is primarily Python-based but easily integrates with services and APIs written in any programming language through its flexible API integration capabilities.
### Q: Does CrewAI offer educational resources for beginners?
A: Yes, CrewAI provides extensive beginner-friendly tutorials, courses, and documentation through learn.crewai.com, supporting developers at all skill levels.
### Q: Can CrewAI automate human-in-the-loop workflows?
A: Yes, CrewAI fully supports human-in-the-loop workflows, allowing seamless collaboration between human experts and AI agents for enhanced decision-making.

1737
crewAI.excalidraw Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -1,18 +0,0 @@
(function() {
if (typeof window === 'undefined') return;
if (typeof window.signals !== 'undefined') return;
var script = document.createElement('script');
script.src = 'https://cdn.cr-relay.com/v1/site/883520f4-c431-44be-80e7-e123a1ee7a2b/signals.js';
script.async = true;
window.signals = Object.assign(
[],
['page', 'identify', 'form'].reduce(function (acc, method){
acc[method] = function () {
signals.push([method, arguments]);
return signals;
};
return acc;
}, {})
);
document.head.appendChild(script);
})();

View File

@@ -0,0 +1,90 @@
# What is a Tool?
A tool in CrewAI is a function or capability that an agent can use to perform actions, gather information, or interact with external systems. Behind the scenes, tools are [LangChain Tools](https://python.langchain.com/docs/modules/agents/tools/).
These tools can be as straightforward as a search function or as sophisticated as integrations with other chains or APIs.
## Key Characteristics of Tools
- **Utility**: Tools are designed to serve specific purposes, such as searching the web, analyzing data, or generating content.
- **Integration**: Tools can be integrated into agents to extend their capabilities beyond their basic functions.
- **Customizability**: Developers can create custom tools tailored to the specific needs of their agents or use pre-built LangChain ones available in the ecosystem.
# Creating your own Tools
You can easily create your own tool by following LangChain's [Custom Tool Creation guide](https://python.langchain.com/docs/modules/agents/tools/custom_tools).
Example:
```python
import json

import requests
from crewai import Agent
from decouple import config  # assumed: python-decouple, used here to read the API key
from langchain.tools import tool
from unstructured.partition.html import partition_html


class BrowserTools():

    @tool("Scrape website content")
    def scrape_website(website):
        """Useful to scrape a website's content."""
        # Browserless requires an API key, read here from the environment/.env via config().
        url = f"https://chrome.browserless.io/content?token={config('BROWSERLESS_API_KEY')}"
        payload = json.dumps({"url": website})
        headers = {
            'cache-control': 'no-cache',
            'content-type': 'application/json'
        }
        response = requests.request("POST", url, headers=headers, data=payload)
        elements = partition_html(text=response.text)
        content = "\n\n".join([str(el) for el in elements])
        # Return only the first 5k characters
        return content[:5000]


# Create an agent and assign the scraping tool
agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    tools=[BrowserTools().scrape_website]
)
```
# Using Existing Tools
Check [LangChain Integration](https://python.langchain.com/docs/integrations/tools/) for a set of useful existing tools.
To assign a tool to an agent, you'd provide it as part of the agent's properties during initialization.
```python
import os

from crewai import Agent
from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper

# Set the API keys used by the LLM and the Serper search wrapper
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key"

search = GoogleSerperAPIWrapper()

# Create tool to be used by agent
serper_tool = Tool(
    name="Intermediate Answer",
    func=search.run,
    description="useful for when you need to ask with search",
)

# Create an agent and assign the search tool
agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    tools=[serper_tool]
)
```
# Tool Interaction
Tools enhance an agent's ability to perform tasks autonomously or in collaboration with other agents. For instance, an agent might use a search tool to gather information, then pass that data to another agent specialized in analysis.
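As an illustrative sketch of that flow (reusing the `serper_tool` defined above; the roles and task descriptions are placeholders), a researcher agent can gather data with a search tool and hand its output to an analyst through a sequential crew:
```python
from crewai import Agent, Crew, Process, Task

researcher = Agent(
    role='Research Analyst',
    goal='Gather the latest market information',
    backstory='An expert at finding relevant, current data.',
    tools=[serper_tool]  # the search tool defined in the example above
)

analyst = Agent(
    role='Market Analyst',
    goal='Turn raw findings into concise insights',
    backstory='Skilled at interpreting research data.'
)

research_task = Task(description='Collect recent news on the AI market', agent=researcher)
analysis_task = Task(description='Analyze the collected news and summarize key trends', agent=analyst)

# With a sequential process, the research output is passed to the analysis task as context.
crew = Crew(
    agents=[researcher, analyst],
    tasks=[research_task, analysis_task],
    process=Process.sequential
)
result = crew.kickoff()
```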
# Conclusion
Tools are vital components that expand the functionality of agents within the CrewAI framework. They enable agents to perform a wide range of actions and collaborate effectively with one another. As you build with CrewAI, consider the array of tools you can leverage to empower your agents and how they can be interwoven to create a robust AI ecosystem.

View File

@@ -0,0 +1,50 @@
# What is a Task?
A Task in CrewAI is essentially a job or an assignment that an AI agent needs to complete. It's defined by what needs to be done and can include additional information like which agent should do it and what tools they might need.
# Task Properties
- **Description**: A clear, concise statement of what the task entails.
- **Agent**: Optionally, you can specify which agent is responsible for the task. If not, the crew's process will determine who takes it on.
- **Tools**: These are the functions or capabilities the agent can utilize to perform the task. They can be anything from simple actions like 'search' to more complex interactions with other agents or APIs.
# Integrating Tools with Tasks
In CrewAI, tools are functions from the `langchain` toolkit that agents can use to interact with the world. These can be generic utilities or specialized functions designed for specific actions. When you assign tools to a task, they empower the agent to perform its duties more effectively.
## Example of Creating a Task with Tools
```python
import os

from crewai import Task
from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper

# Set the API keys used by the LLM and the Serper search wrapper
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key"

search = GoogleSerperAPIWrapper()

# Create tool to be used by agent
serper_tool = Tool(
    name="Intermediate Answer",
    func=search.run,
    description="useful for when you need to ask with search",
)

# Create a task with a description and the search tool
task = Task(
    description='Find and summarize the latest and most relevant news on AI',
    tools=[serper_tool]
)
```
When the task is executed by an agent, the tools specified in the task will override the agent's default tools. This means that for the duration of this task, the agent will use the search tool provided, even if it has other tools assigned to it.
# Tool Override Mechanism
The ability to override an agent's tools with those specified in a task allows for greater flexibility. An agent might generally use a set of standard tools, but for certain tasks, you may want it to use a particular tool that is more suited to the task at hand.
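A brief sketch of the override (reusing `serper_tool` from the example above; `reporting_tool` is a hypothetical specialized tool standing in for whatever the task needs):
```python
from crewai import Agent, Task

# The agent's everyday tool set.
writer = Agent(
    role='Report Writer',
    goal='Write clear, well-sourced reports',
    backstory='An experienced writer who relies on research tools.',
    tools=[serper_tool]
)

# For this task only, the task's tools replace the agent's defaults.
report_task = Task(
    description='Turn the attached research notes into a one-page report',
    agent=writer,
    tools=[reporting_tool]  # hypothetical tool; overrides serper_tool for this task
)
```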
# Conclusion
Creating tasks with the right tools is crucial in CrewAI. It ensures that your agents are not only aware of what they need to do but are also equipped with the right functions to do it effectively. This feature underlines the flexibility and power of the CrewAI system, where tasks can be tailored with specific tools to achieve the best outcome.

View File

@@ -0,0 +1,42 @@
# Overview of a Task
In the CrewAI framework, tasks are the individual assignments that agents are responsible for completing. They are the fundamental units of work that your AI crew will undertake. Understanding how to define and manage tasks is key to leveraging the full potential of CrewAI.
A task in CrewAI encapsulates all the information needed for an agent to execute it, including a description, the agent assigned to it, and any specific tools required. Tasks are designed to be flexible, allowing for both simple and complex actions depending on your needs.
# Properties of a Task
Every task in CrewAI has several properties:
- **Description**: A clear and concise statement of what needs to be done.
- **Agent**: The agent assigned to the task (optional). If no agent is specified, the task can be picked up by any agent based on the process defined.
- **Tools**: A list of tools (optional) that the agent can use to complete the task. These can override the agent's default tools if necessary.
# Creating a Task
Creating a task is straightforward. You define what needs to be done and, optionally, who should do it and what tools they should use. Here's a conceptual guide:
```python
from crewai import Task
# Define a task with a designated agent and specific tools
task = Task(description='Generate monthly sales report', agent=sales_agent, tools=[reporting_tool])
```
# Task Assignment
Tasks can be assigned to agents in several ways:
- Directly, by specifying the agent when creating the task.
- [WIP] Through the Crew's process, which can assign tasks based on agent roles, availability, or other criteria.
# Task Execution
Once a task has been defined and assigned, it's ready to be executed. Execution is typically handled by the Crew object, which manages the workflow and ensures that tasks are completed according to the defined process.
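A minimal sketch of what that looks like, assuming the `sales_agent` and `task` from the example above:
```python
from crewai import Crew, Process

# The Crew object manages the workflow and runs the task(s) in order.
crew = Crew(agents=[sales_agent], tasks=[task], process=Process.sequential)
result = crew.kickoff()
print(result)
```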
# Task Collaboration
Tasks in CrewAI can be designed to require collaboration between agents. For example, one agent might gather data while another analyzes it. This collaborative approach can be defined within the task properties and managed by the Crew's process.
# Conclusion
Tasks are the driving force behind the actions of agents in CrewAI. By properly defining tasks, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. In the following sections, we will explore how tasks fit into the larger picture of processes and crew management.

View File

@@ -0,0 +1,26 @@
# How Agents Collaborate:
In CrewAI, collaboration is the cornerstone of agent interaction. Agents are designed to work together by sharing information, requesting assistance, and combining their skills to complete tasks more efficiently.
- **Information Sharing**: Agents can share findings and data amongst themselves to ensure all members are informed and can contribute effectively.
- **Task Assistance**: If an agent encounters a task that requires additional expertise, it can seek the help of another agent with the necessary skill set.
- **Resource Allocation**: Agents can share or allocate resources such as tools or processing power to optimize task execution.
Collaboration is embedded in the DNA of CrewAI, enabling a dynamic and adaptive approach to problem-solving.
# Delegation: Dividing to Conquer
Delegation is the process by which an agent assigns a task to another agent, or simply asks another agent a question. It's an intelligent decision-making process that enhances the crew's functionality.
By default, all agents can delegate work and ask questions, so if you want an agent to work alone, make sure to set that option when initializing the Agent. This is useful to prevent deviations if the task is supposed to be straightforward.
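For example, a minimal sketch of an agent configured to work alone:
```python
from crewai import Agent

# This agent cannot delegate work or ask other agents questions.
solo_writer = Agent(
    role='Writer',
    goal='Compile a concise report from the provided research',
    backstory='A focused technical writer.',
    allow_delegation=False
)
```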
## Implementing Collaboration and Delegation
When setting up your crew, you'll define the roles and capabilities of each agent. CrewAI's infrastructure takes care of the rest, managing the complex interplay of agents as they work together.
## Example Scenario:
Imagine a scenario where you have a researcher agent that gathers data and a writer agent that compiles reports. The writer can autonomously ask questions or delegate more in-depth research work, depending on its needs, as it tries to complete its task.
# Conclusion
Collaboration and delegation are what transform a collection of AI agents into a unified, intelligent crew. With CrewAI, you have a framework that not only simplifies these interactions but also makes them more effective, paving the way for sophisticated AI systems that can tackle complex, multi-dimensional tasks.

View File

@@ -0,0 +1,49 @@
# Managing Processes in CrewAI
Processes are the heart of CrewAI's workflow management, akin to the way a human team organizes its work. In CrewAI, processes define the sequence and manner in which tasks are executed by agents, mirroring the coordination you'd expect in a well-functioning team of people.
## Understanding Processes
A process in CrewAI can be thought of as the game plan for how your AI agents will handle their workload. Just as a project manager assigns tasks to team members based on their skills and the project timeline, CrewAI processes assign tasks to agents to ensure efficient workflow.
## Process Implementations
- **Sequential (Supported)**: This is the only process currently implemented in CrewAI. It ensures tasks are handled one at a time, in a given order, much like a relay race where one runner passes the baton to the next.
- **Consensual (WIP)**: Envisioned for a future update, the consensual process will enable agents to make joint decisions on task execution, similar to a team consensus in a meeting before proceeding.
- **Hierarchical (WIP)**: Also in the pipeline, this process will introduce a chain of command to task execution, where some agents may have the authority to prioritize tasks or delegate them, akin to a traditional corporate hierarchy.
These additional processes, once implemented, will offer more nuanced and sophisticated ways for agents to interact and complete tasks, much like teams in complex organizational structures.
## Defining a Sequential Process
Creating a sequential process in CrewAI is straightforward and reflects the simplicity of coordinating a team's efforts step by step. In this process, the outcome of the previous task is passed into the next one as context the agent should use to accomplish its task.
```python
from crewai import Process
# Define a sequential process
sequential_process = Process.sequential
```
# The Magic of Sequential Processes
The sequential process is where much of CrewAI's magic happens. It ensures that tasks are approached with the same thoughtful progression a human team would use, fostering a natural and logical flow of work while passing each task's outcome on to the next.
## Assigning Processes to a Crew
To assign a process to a crew, simply set it during the crew's creation. The process will dictate the crew's approach to task execution.
```python
from crewai import Crew
# Create a crew with a sequential process
crew = Crew(agents=my_agents, tasks=my_tasks, process=sequential_process)
```
## The Role of Processes in Teamwork
The process you choose for your crew is critical. It's what transforms a group of individual agents into a cohesive unit that can tackle complex projects with the precision and harmony you'd find in a team of skilled humans.
## Conclusion
Processes bring structure and order to the CrewAI ecosystem, allowing agents to collaborate effectively and accomplish goals systematically. As CrewAI evolves, additional process types will be introduced to enhance the framework's versatility, much like a team that grows and adapts over time.

View File

@@ -0,0 +1,41 @@
# What is an Agent?
In CrewAI, an agent is an autonomous unit programmed to perform tasks, make decisions, and communicate with other agents. Think of an agent as a member of a team, with specific skills and a particular job to do. Agents can have different roles like 'Researcher', 'Writer', or 'Customer Support', each contributing to the overall goal of the crew.
# Key Properties of an Agent
- **Role**: Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for.
- **Goal**: The individual objective that the agent aims to achieve. It guides the agent's decision-making process.
- **Backstory**: Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics.
- **Tools**: A set of capabilities or functions that the agent can use to perform tasks. Tools can be shared or exclusive to specific agents.
- **Verbose**: This allows you to see what is going on during crew execution.
- **Allow Delegation**: Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent.
# Agent Lifecycle
1. **Initialization**: An agent is created with a defined role, goal, backstory, and set of tools.
2. **Task Assignment**: The agent is assigned tasks either directly or through the crew's process management.
3. **Execution**: The agent performs the task using its available tools and in accordance with its role and goal.
4. **Collaboration**: Throughout the execution, the agent can communicate with other agents to delegate, inquire, or assist.
# Creating an Agent
To create an agent, you would typically initialize an instance of the `Agent` class with the desired properties. Here's a conceptual example:
```python
from crewai import Agent
# Create an agent with a role and a goal
agent = Agent(
role='Data Analyst',
goal='Extract actionable insights',
verbose=True,
backstory="You'er a data analyst at a large company. I am responsible for analyzing data and providing insights to the business. I am currently working on a project to analyze the performance of our marketing campaigns. I have been asked to provide insights on how to improve the performance of our marketing campaigns."
)
```
# Agent Interaction
Agents can interact with each other using the CrewAI's built-in delegation and communication mechanisms. This allows for dynamic task management and problem-solving within the crew.
# Conclusion
Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents, you can create sophisticated AI systems that leverage the power of collaborative intelligence.

View File

[Binary image diff not shown: existing image (427 KiB before and after) and new file docs/crewai_logo.png (97 KiB)]

File diff suppressed because it is too large Load Diff

View File

@@ -1,8 +0,0 @@
---
title: "GET /inputs"
description: "Get required inputs for your crew"
openapi: "/enterprise-api.en.yaml GET /inputs"
mode: "wide"
---

View File

@@ -1,120 +0,0 @@
---
title: "Introduction"
description: "Complete reference for the CrewAI AMP REST API"
icon: "code"
mode: "wide"
---
# CrewAI AMP API
Welcome to the CrewAI AMP API reference. This API allows you to programmatically interact with your deployed crews, enabling integration with your applications, workflows, and services.
## Quick Start
<Steps>
<Step title="Get Your API Credentials">
Navigate to your crew's detail page in the CrewAI AMP dashboard and copy your Bearer Token from the Status tab.
</Step>
<Step title="Discover Required Inputs">
Use the `GET /inputs` endpoint to see what parameters your crew expects.
</Step>
<Step title="Start a Crew Execution">
Call `POST /kickoff` with your inputs to start the crew execution and receive a `kickoff_id`.
</Step>
<Step title="Monitor Progress">
Use `GET /status/{kickoff_id}` to check execution status and retrieve results.
</Step>
</Steps>
## Authentication
All API requests require authentication using a Bearer token. Include your token in the `Authorization` header:
```bash
curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
https://your-crew-url.crewai.com/inputs
```
### Token Types
| Token Type | Scope | Use Case |
|:-----------|:--------|:----------|
| **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integration |
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |
<Tip>
You can find both token types in the Status tab of your crew's detail page in the CrewAI AMP dashboard.
</Tip>
## Base URL
Each deployed crew has its own unique API endpoint:
```
https://your-crew-name.crewai.com
```
Replace `your-crew-name` with your actual crew's URL from the dashboard.
## Typical Workflow
1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /status/{kickoff_id}` until completion
4. **Results**: Extract the final output from the completed response
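Here is a minimal Python sketch of that loop using `requests`. The URL, token, input names, and the exact shape of the status response (the `state` field and its values) are placeholders/assumptions — check the endpoint pages and your crew's `/inputs` response for the real values:
```python
import time
import requests

BASE_URL = "https://your-crew-name.crewai.com"         # your crew's URL from the dashboard
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}   # your Bearer token

# 1. Discovery: see which inputs the crew expects.
inputs_spec = requests.get(f"{BASE_URL}/inputs", headers=HEADERS).json()
print(inputs_spec)

# 2. Execution: start the crew (input names/body shape assumed; adapt to your crew).
kickoff = requests.post(
    f"{BASE_URL}/kickoff",
    headers=HEADERS,
    json={"inputs": {"topic": "AI Agents"}},
).json()
kickoff_id = kickoff["kickoff_id"]

# 3. Monitoring: poll the status endpoint until the run finishes.
while True:
    status = requests.get(f"{BASE_URL}/status/{kickoff_id}", headers=HEADERS).json()
    if status.get("state") in ("SUCCESS", "FAILED"):  # field name/values assumed
        break
    time.sleep(5)

# 4. Results: the final output is part of the completed status response.
print(status)
```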
## Error Handling
The API uses standard HTTP status codes:
| Code | Meaning |
|------|:--------|
| `200` | Success |
| `400` | Bad Request - Invalid input format |
| `401` | Unauthorized - Invalid bearer token |
| `404` | Not Found - Resource doesn't exist |
| `422` | Validation Error - Missing required inputs |
| `500` | Server Error - Contact support |
## Interactive Testing
<Info>
**Why no "Send" button?** Since each CrewAI AMP user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.
</Info>
Each endpoint page shows you:
- ✅ **Exact request format** with all parameters
- ✅ **Response examples** for success and error cases
- ✅ **Code samples** in multiple languages (cURL, Python, JavaScript, etc.)
- ✅ **Authentication examples** with proper Bearer token format
### **To Test Your Actual API:**
<CardGroup cols={2}>
<Card title="Copy cURL Examples" icon="terminal">
Copy the cURL examples and replace the URL + token with your real values
</Card>
<Card title="Use Postman/Insomnia" icon="play">
Import the examples into your preferred API testing tool
</Card>
</CardGroup>
**Example workflow:**
1. **Copy this cURL example** from any endpoint page
2. **Replace `your-actual-crew-name.crewai.com`** with your real crew URL
3. **Replace the Bearer token** with your real token from the dashboard
4. **Run the request** in your terminal or API client
## Need Help?
<CardGroup cols={2}>
<Card title="Enterprise Support" icon="headset" href="mailto:support@crewai.com">
Get help with API integration and troubleshooting
</Card>
<Card title="Enterprise Dashboard" icon="chart-line" href="https://app.crewai.com">
Manage your crews and view execution logs
</Card>
</CardGroup>

View File

@@ -1,8 +0,0 @@
---
title: "POST /kickoff"
description: "Start a crew execution"
openapi: "/enterprise-api.en.yaml POST /kickoff"
mode: "wide"
---

View File

@@ -1,6 +0,0 @@
---
title: "POST /resume"
description: "Resume crew execution with human feedback"
openapi: "/enterprise-api.en.yaml POST /resume"
mode: "wide"
---

View File

@@ -1,8 +0,0 @@
---
title: "GET /status/{kickoff_id}"
description: "Get execution status"
openapi: "/enterprise-api.en.yaml GET /status/{kickoff_id}"
mode: "wide"
---

File diff suppressed because it is too large Load Diff

View File

@@ -1,691 +0,0 @@
---
title: Agents
description: Detailed guide on creating and managing agents within the CrewAI framework.
icon: robot
mode: "wide"
---
## Overview of an Agent
In the CrewAI framework, an `Agent` is an autonomous unit that can:
- Perform specific tasks
- Make decisions based on its role and goal
- Use tools to accomplish objectives
- Communicate and collaborate with other agents
- Maintain memory of interactions
- Delegate tasks when allowed
<Tip>
Think of an agent as a specialized team member with specific skills, expertise, and responsibilities. For example, a `Researcher` agent might excel at gathering and analyzing information, while a `Writer` agent might be better at creating content.
</Tip>
<Note type="info" title="Enterprise Enhancement: Visual Agent Builder">
CrewAI AMP includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time.
![Visual Agent Builder Screenshot](/images/enterprise/crew-studio-interface.png)
The Visual Agent Builder enables:
- Intuitive agent configuration with form-based interfaces
- Real-time testing and validation
- Template library with pre-configured agent types
- Easy customization of agent attributes and behaviors
</Note>
## Agent Attributes
| Attribute | Parameter | Type | Description |
| :-------------------------------------- | :----------------------- | :---------------------------- | :------------------------------------------------------------------------------------------------------------------- |
| **Role** | `role` | `str` | Defines the agent's function and expertise within the crew. |
| **Goal** | `goal` | `str` | The individual objective that guides the agent's decision-making. |
| **Backstory** | `backstory` | `str` | Provides context and personality to the agent, enriching interactions. |
| **LLM** _(optional)_ | `llm` | `Union[str, LLM, Any]` | Language model that powers the agent. Defaults to the model specified in `OPENAI_MODEL_NAME` or "gpt-4". |
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | Capabilities or functions available to the agent. Defaults to an empty list. |
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | `Optional[Any]` | Language model for tool calling, overrides crew's LLM if specified. |
| **Max Iterations** _(optional)_ | `max_iter` | `int` | Maximum iterations before the agent must provide its best answer. Default is 20. |
| **Max RPM** _(optional)_ | `max_rpm` | `Optional[int]` | Maximum requests per minute to avoid rate limits. |
| **Max Execution Time** _(optional)_ | `max_execution_time` | `Optional[int]` | Maximum time (in seconds) for task execution. |
| **Verbose** _(optional)_ | `verbose` | `bool` | Enable detailed execution logs for debugging. Default is False. |
| **Allow Delegation** _(optional)_ | `allow_delegation` | `bool` | Allow the agent to delegate tasks to other agents. Default is False. |
| **Step Callback** _(optional)_ | `step_callback` | `Optional[Any]` | Function called after each agent step, overrides crew callback. |
| **Cache** _(optional)_ | `cache` | `bool` | Enable caching for tool usage. Default is True. |
| **System Template** _(optional)_ | `system_template` | `Optional[str]` | Custom system prompt template for the agent. |
| **Prompt Template** _(optional)_ | `prompt_template` | `Optional[str]` | Custom prompt template for the agent. |
| **Response Template** _(optional)_ | `response_template` | `Optional[str]` | Custom response template for the agent. |
| **Allow Code Execution** _(optional)_ | `allow_code_execution` | `Optional[bool]` | Enable code execution for the agent. Default is False. |
| **Max Retry Limit** _(optional)_ | `max_retry_limit` | `int` | Maximum number of retries when an error occurs. Default is 2. |
| **Respect Context Window** _(optional)_ | `respect_context_window` | `bool` | Keep messages under context window size by summarizing. Default is True. |
| **Code Execution Mode** _(optional)_ | `code_execution_mode` | `Literal["safe", "unsafe"]` | Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct). Default is 'safe'. |
| **Multimodal** _(optional)_ | `multimodal` | `bool` | Whether the agent supports multimodal capabilities. Default is False. |
| **Inject Date** _(optional)_ | `inject_date` | `bool` | Whether to automatically inject the current date into tasks. Default is False. |
| **Date Format** _(optional)_ | `date_format` | `str` | Format string for date when inject_date is enabled. Default is "%Y-%m-%d" (ISO format). |
| **Reasoning** _(optional)_ | `reasoning` | `bool` | Whether the agent should reflect and create a plan before executing a task. Default is False. |
| **Max Reasoning Attempts** _(optional)_ | `max_reasoning_attempts` | `Optional[int]` | Maximum number of reasoning attempts before executing the task. If None, will try until ready. |
| **Embedder** _(optional)_ | `embedder` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | `Optional[List[BaseKnowledgeSource]]` | Knowledge sources available to the agent. |
| **Use System Prompt** _(optional)_ | `use_system_prompt` | `Optional[bool]` | Whether to use system prompt (for o1 model support). Default is True. |
## Creating Agents
There are two ways to create agents in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**.
### YAML Configuration (Recommended)
Using YAML configuration provides a cleaner, more maintainable way to define agents. We strongly recommend using this approach in your CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/en/installation) section, navigate to the `src/latest_ai_development/config/agents.yaml` file and modify the template to match your requirements.
<Note>
Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew:
```python Code
crew.kickoff(inputs={'topic': 'AI Agents'})
```
</Note>
Here's an example of how to configure agents using YAML:
```yaml agents.yaml
# src/latest_ai_development/config/agents.yaml
researcher:
role: >
{topic} Senior Data Researcher
goal: >
Uncover cutting-edge developments in {topic}
backstory: >
You're a seasoned researcher with a knack for uncovering the latest
developments in {topic}. Known for your ability to find the most relevant
information and present it in a clear and concise manner.
reporting_analyst:
role: >
{topic} Reporting Analyst
goal: >
Create detailed reports based on {topic} data analysis and research findings
backstory: >
You're a meticulous analyst with a keen eye for detail. You're known for
your ability to turn complex data into clear and concise reports, making
it easy for others to understand and act on the information you provide.
```
To use this YAML configuration in your code, create a crew class that inherits from `CrewBase`:
```python Code
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process
from crewai.project import CrewBase, agent, crew
from crewai_tools import SerperDevTool
@CrewBase
class LatestAiDevelopmentCrew():
"""LatestAiDevelopment crew"""
agents_config = "config/agents.yaml"
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'], # type: ignore[index]
verbose=True,
tools=[SerperDevTool()]
)
@agent
def reporting_analyst(self) -> Agent:
return Agent(
config=self.agents_config['reporting_analyst'], # type: ignore[index]
verbose=True
)
```
<Note>
The names you use in your YAML files (`agents.yaml`) should match the method names in your Python code.
</Note>
### Direct Code Definition
You can create agents directly in code by instantiating the `Agent` class. Here's a comprehensive example showing all available parameters:
```python Code
from crewai import Agent
from crewai_tools import SerperDevTool
# Create an agent with all available parameters
agent = Agent(
role="Senior Data Scientist",
goal="Analyze and interpret complex datasets to provide actionable insights",
backstory="With over 10 years of experience in data science and machine learning, "
"you excel at finding patterns in complex datasets.",
llm="gpt-4", # Default: OPENAI_MODEL_NAME or "gpt-4"
function_calling_llm=None, # Optional: Separate LLM for tool calling
verbose=False, # Default: False
allow_delegation=False, # Default: False
max_iter=20, # Default: 20 iterations
max_rpm=None, # Optional: Rate limit for API calls
max_execution_time=None, # Optional: Maximum execution time in seconds
max_retry_limit=2, # Default: 2 retries on error
allow_code_execution=False, # Default: False
code_execution_mode="safe", # Default: "safe" (options: "safe", "unsafe")
respect_context_window=True, # Default: True
use_system_prompt=True, # Default: True
multimodal=False, # Default: False
inject_date=False, # Default: False
date_format="%Y-%m-%d", # Default: ISO format
reasoning=False, # Default: False
max_reasoning_attempts=None, # Default: None
tools=[SerperDevTool()], # Optional: List of tools
knowledge_sources=None, # Optional: List of knowledge sources
embedder=None, # Optional: Custom embedder configuration
system_template=None, # Optional: Custom system prompt template
prompt_template=None, # Optional: Custom prompt template
response_template=None, # Optional: Custom response template
step_callback=None, # Optional: Callback function for monitoring
)
```
Let's break down some key parameter combinations for common use cases:
#### Basic Research Agent
```python Code
research_agent = Agent(
role="Research Analyst",
goal="Find and summarize information about specific topics",
backstory="You are an experienced researcher with attention to detail",
tools=[SerperDevTool()],
verbose=True # Enable logging for debugging
)
```
#### Code Development Agent
```python Code
dev_agent = Agent(
role="Senior Python Developer",
goal="Write and debug Python code",
backstory="Expert Python developer with 10 years of experience",
allow_code_execution=True,
code_execution_mode="safe", # Uses Docker for safety
max_execution_time=300, # 5-minute timeout
max_retry_limit=3 # More retries for complex code tasks
)
```
#### Long-Running Analysis Agent
```python Code
analysis_agent = Agent(
role="Data Analyst",
goal="Perform deep analysis of large datasets",
backstory="Specialized in big data analysis and pattern recognition",
memory=True,
respect_context_window=True,
max_rpm=10, # Limit API calls
function_calling_llm="gpt-4o-mini" # Cheaper model for tool calls
)
```
#### Custom Template Agent
```python Code
custom_agent = Agent(
role="Customer Service Representative",
goal="Assist customers with their inquiries",
backstory="Experienced in customer support with a focus on satisfaction",
system_template="""<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>""",
prompt_template="""<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>""",
response_template="""<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>""",
)
```
#### Date-Aware Agent with Reasoning
```python Code
strategic_agent = Agent(
role="Market Analyst",
goal="Track market movements with precise date references and strategic planning",
backstory="Expert in time-sensitive financial analysis and strategic reporting",
inject_date=True, # Automatically inject current date into tasks
date_format="%B %d, %Y", # Format as "May 21, 2025"
reasoning=True, # Enable strategic planning
max_reasoning_attempts=2, # Limit planning iterations
verbose=True
)
```
#### Reasoning Agent
```python Code
reasoning_agent = Agent(
role="Strategic Planner",
goal="Analyze complex problems and create detailed execution plans",
backstory="Expert strategic planner who methodically breaks down complex challenges",
reasoning=True, # Enable reasoning and planning
max_reasoning_attempts=3, # Limit reasoning attempts
max_iter=30, # Allow more iterations for complex planning
verbose=True
)
```
#### Multimodal Agent
```python Code
multimodal_agent = Agent(
role="Visual Content Analyst",
goal="Analyze and process both text and visual content",
backstory="Specialized in multimodal analysis combining text and image understanding",
multimodal=True, # Enable multimodal capabilities
verbose=True
)
```
### Parameter Details
#### Critical Parameters
- `role`, `goal`, and `backstory` are required and shape the agent's behavior
- `llm` determines the language model used (default: OpenAI's GPT-4)
#### Memory and Context
- `memory`: Enable to maintain conversation history
- `respect_context_window`: Prevents token limit issues
- `knowledge_sources`: Add domain-specific knowledge bases
#### Execution Control
- `max_iter`: Maximum attempts before giving best answer
- `max_execution_time`: Timeout in seconds
- `max_rpm`: Rate limiting for API calls
- `max_retry_limit`: Retries on error
#### Code Execution
- `allow_code_execution`: Must be True to run code
- `code_execution_mode`:
- `"safe"`: Uses Docker (recommended for production)
- `"unsafe"`: Direct execution (use only in trusted environments)
<Note>
This runs a default Docker image. If you want to configure the Docker image, check out the Code Interpreter Tool in the Tools section and add it to the agent's `tools` parameter.
</Note>
#### Advanced Features
- `multimodal`: Enable multimodal capabilities for processing text and visual content
- `reasoning`: Enable agent to reflect and create plans before executing tasks
- `inject_date`: Automatically inject current date into task descriptions
#### Templates
- `system_template`: Defines agent's core behavior
- `prompt_template`: Structures input format
- `response_template`: Formats agent responses
<Note>
When using custom templates, ensure that both `system_template` and `prompt_template` are defined. The `response_template` is optional but recommended for consistent output formatting.
</Note>
<Note>
When using custom templates, you can use variables like `{role}`, `{goal}`, and `{backstory}` in your templates. These will be automatically populated during execution.
</Note>
## Agent Tools
Agents can be equipped with various tools to enhance their capabilities. CrewAI supports tools from:
- [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools)
- [LangChain Tools](https://python.langchain.com/docs/integrations/tools)
Here's how to add tools to an agent:
```python Code
from crewai import Agent
from crewai_tools import SerperDevTool, WikipediaTools
# Create tools
search_tool = SerperDevTool()
wiki_tool = WikipediaTools()
# Add tools to agent
researcher = Agent(
role="AI Technology Researcher",
goal="Research the latest AI developments",
tools=[search_tool, wiki_tool],
verbose=True
)
```
## Agent Memory and Context
Agents can maintain memory of their interactions and use context from previous tasks. This is particularly useful for complex workflows where information needs to be retained across multiple tasks.
```python Code
from crewai import Agent
analyst = Agent(
role="Data Analyst",
goal="Analyze and remember complex data patterns",
memory=True, # Enable memory
verbose=True
)
```
<Note>
When `memory` is enabled, the agent will maintain context across multiple interactions, improving its ability to handle complex, multi-step tasks.
</Note>
## Context Window Management
CrewAI includes sophisticated automatic context window management to handle situations where conversations exceed the language model's token limits. This powerful feature is controlled by the `respect_context_window` parameter.
### How Context Window Management Works
When an agent's conversation history grows too large for the LLM's context window, CrewAI automatically detects this situation and can either:
1. **Automatically summarize content** (when `respect_context_window=True`)
2. **Stop execution with an error** (when `respect_context_window=False`)
### Automatic Context Handling (`respect_context_window=True`)
This is the **default and recommended setting** for most use cases. When enabled, CrewAI will:
```python Code
# Agent with automatic context management (default)
smart_agent = Agent(
role="Research Analyst",
goal="Analyze large documents and datasets",
backstory="Expert at processing extensive information",
respect_context_window=True, # 🔑 Default: auto-handle context limits
verbose=True
)
```
**What happens when context limits are exceeded:**
- ⚠️ **Warning message**: `"Context length exceeded. Summarizing content to fit the model context window."`
- 🔄 **Automatic summarization**: CrewAI intelligently summarizes the conversation history
- ✅ **Continued execution**: Task execution continues seamlessly with the summarized context
- 📝 **Preserved information**: Key information is retained while reducing token count
### Strict Context Limits (`respect_context_window=False`)
When you need precise control and prefer execution to stop rather than lose any information:
```python Code
# Agent with strict context limits
strict_agent = Agent(
role="Legal Document Reviewer",
goal="Provide precise legal analysis without information loss",
backstory="Legal expert requiring complete context for accurate analysis",
respect_context_window=False, # ❌ Stop execution on context limit
verbose=True
)
```
**What happens when context limits are exceeded:**
- ❌ **Error message**: `"Context length exceeded. Consider using smaller text or RAG tools from crewai_tools."`
- 🛑 **Execution stops**: Task execution halts immediately
- 🔧 **Manual intervention required**: You need to modify your approach
### Choosing the Right Setting
#### Use `respect_context_window=True` (Default) when:
- **Processing large documents** that might exceed context limits
- **Long-running conversations** where some summarization is acceptable
- **Research tasks** where general context is more important than exact details
- **Prototyping and development** where you want robust execution
```python Code
# Perfect for document processing
document_processor = Agent(
role="Document Analyst",
goal="Extract insights from large research papers",
backstory="Expert at analyzing extensive documentation",
respect_context_window=True, # Handle large documents gracefully
max_iter=50, # Allow more iterations for complex analysis
verbose=True
)
```
#### Use `respect_context_window=False` when:
- **Precision is critical** and information loss is unacceptable
- **Legal or medical tasks** requiring complete context
- **Code review** where missing details could introduce bugs
- **Financial analysis** where accuracy is paramount
```python Code
# Perfect for precision tasks
precision_agent = Agent(
role="Code Security Auditor",
goal="Identify security vulnerabilities in code",
backstory="Security expert requiring complete code context",
respect_context_window=False, # Prefer failure over incomplete analysis
max_retry_limit=1, # Fail fast on context issues
verbose=True
)
```
### Alternative Approaches for Large Data
When dealing with very large datasets, consider these strategies:
#### 1. Use RAG Tools
```python Code
from crewai_tools import RagTool
# Create RAG tool for large document processing
rag_tool = RagTool()
rag_agent = Agent(
role="Research Assistant",
goal="Query large knowledge bases efficiently",
backstory="Expert at using RAG tools for information retrieval",
tools=[rag_tool], # Use RAG instead of large context windows
respect_context_window=True,
verbose=True
)
```
#### 2. Use Knowledge Sources
```python Code
# Use knowledge sources instead of large prompts
knowledge_agent = Agent(
role="Knowledge Expert",
goal="Answer questions using curated knowledge",
backstory="Expert at leveraging structured knowledge sources",
knowledge_sources=[your_knowledge_sources], # Pre-processed knowledge
respect_context_window=True,
verbose=True
)
```
### Context Window Best Practices
1. **Monitor Context Usage**: Enable `verbose=True` to see context management in action
2. **Design for Efficiency**: Structure tasks to minimize context accumulation
3. **Use Appropriate Models**: Choose LLMs with context windows suitable for your tasks
4. **Test Both Settings**: Try both `True` and `False` to see which works better for your use case
5. **Combine with RAG**: Use RAG tools for very large datasets instead of relying solely on context windows
### Troubleshooting Context Issues
**If you're getting context limit errors:**
```python Code
# Quick fix: Enable automatic handling
agent.respect_context_window = True
# Better solution: Use RAG tools for large data
from crewai_tools import RagTool
agent.tools = [RagTool()]
# Alternative: Break tasks into smaller pieces
# Or use knowledge sources instead of large prompts
```
**If automatic summarization loses important information:**
```python Code
# Disable auto-summarization and use RAG instead
agent = Agent(
role="Detailed Analyst",
goal="Maintain complete information accuracy",
backstory="Expert requiring full context",
respect_context_window=False, # No summarization
tools=[RagTool()], # Use RAG for large data
verbose=True
)
```
<Note>
The context window management feature works automatically in the background. You don't need to call any special functions - just set `respect_context_window` to your preferred behavior and CrewAI handles the rest!
</Note>
## Direct Agent Interaction with `kickoff()`
Agents can be used directly without going through a task or crew workflow using the `kickoff()` method. This provides a simpler way to interact with an agent when you don't need the full crew orchestration capabilities.
### How `kickoff()` Works
The `kickoff()` method allows you to send messages directly to an agent and get a response, similar to how you would interact with an LLM but with all the agent's capabilities (tools, reasoning, etc.).
```python Code
from crewai import Agent
from crewai_tools import SerperDevTool
# Create an agent
researcher = Agent(
role="AI Technology Researcher",
goal="Research the latest AI developments",
tools=[SerperDevTool()],
verbose=True
)
# Use kickoff() to interact directly with the agent
result = researcher.kickoff("What are the latest developments in language models?")
# Access the raw response
print(result.raw)
```
### Parameters and Return Values
| Parameter | Type | Description |
| :---------------- | :---------------------------------- | :------------------------------------------------------------------------ |
| `messages` | `Union[str, List[Dict[str, str]]]` | Either a string query or a list of message dictionaries with role/content |
| `response_format` | `Optional[Type[Any]]` | Optional Pydantic model for structured output |
The method returns a `LiteAgentOutput` object with the following properties:
- `raw`: String containing the raw output text
- `pydantic`: Parsed Pydantic model (if a `response_format` was provided)
- `agent_role`: Role of the agent that produced the output
- `usage_metrics`: Token usage metrics for the execution
### Structured Output
You can get structured output by providing a Pydantic model as the `response_format`:
```python Code
from pydantic import BaseModel
from typing import List
class ResearchFindings(BaseModel):
main_points: List[str]
key_technologies: List[str]
future_predictions: str
# Get structured output
result = researcher.kickoff(
"Summarize the latest developments in AI for 2025",
response_format=ResearchFindings
)
# Access structured data
print(result.pydantic.main_points)
print(result.pydantic.future_predictions)
```
### Multiple Messages
You can also provide a conversation history as a list of message dictionaries:
```python Code
messages = [
{"role": "user", "content": "I need information about large language models"},
{"role": "assistant", "content": "I'd be happy to help with that! What specifically would you like to know?"},
{"role": "user", "content": "What are the latest developments in 2025?"}
]
result = researcher.kickoff(messages)
```
### Async Support
An asynchronous version is available via `kickoff_async()` with the same parameters:
```python Code
import asyncio
async def main():
result = await researcher.kickoff_async("What are the latest developments in AI?")
print(result.raw)
asyncio.run(main())
```
<Note>
The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler execution flow while preserving all of the agent's configuration (role, goal, backstory, tools, etc.).
</Note>
## Important Considerations and Best Practices
### Security and Code Execution
- When using `allow_code_execution`, be cautious with user input and always validate it
- Use `code_execution_mode: "safe"` (Docker) in production environments
- Consider setting appropriate `max_execution_time` limits to prevent infinite loops
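As a minimal sketch (role, goal, and values are illustrative), these recommendations map onto the agent attributes like so:
```python Code
from crewai import Agent

# Sketch: sandboxed code execution with an execution time limit
coding_agent = Agent(
    role="Data Analyst",
    goal="Analyze datasets by writing and running Python code",
    backstory="Careful analyst who validates inputs before executing code",
    allow_code_execution=True,
    code_execution_mode="safe",   # Run code inside Docker in production
    max_execution_time=300,       # Cap execution at 5 minutes
    verbose=True
)
```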
### Performance Optimization
- Use `respect_context_window: true` to prevent token limit issues
- Set appropriate `max_rpm` to avoid rate limiting
- Enable `cache: true` to improve performance for repetitive tasks
- Adjust `max_iter` and `max_retry_limit` based on task complexity
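A similar sketch for the performance settings; the numbers are illustrative, not recommendations:
```python Code
from crewai import Agent

# Sketch: performance-oriented agent configuration
report_agent = Agent(
    role="Report Generator",
    goal="Produce summaries quickly and reliably",
    backstory="Efficient analyst focused on repeatable reporting tasks",
    respect_context_window=True,  # Summarize instead of failing on long context
    max_rpm=10,                   # Stay under provider rate limits
    cache=True,                   # Reuse cached tool results
    max_iter=15,                  # Iteration budget per task
    max_retry_limit=2,            # Retries on execution errors
    verbose=True
)
```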
### Memory and Context Management
- Leverage `knowledge_sources` for domain-specific information
- Configure `embedder` when using custom embedding models
- Use custom templates (`system_template`, `prompt_template`, `response_template`) for fine-grained control over agent behavior
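A minimal sketch of knowledge and embedder configuration, assuming the `StringKnowledgeSource` helper and the dictionary-style embedder config described in the Knowledge docs:
```python Code
from crewai import Agent
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# Sketch: domain-specific knowledge plus a custom embedding model
policy_knowledge = StringKnowledgeSource(
    content="Refunds are processed within 30 days of purchase."
)

support_agent = Agent(
    role="Support Specialist",
    goal="Answer customer questions using company policy",
    backstory="Support expert grounded in internal documentation",
    knowledge_sources=[policy_knowledge],
    embedder={
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"}
    },
    verbose=True
)
```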
### Advanced Features
- Enable `reasoning: true` for agents that need to plan and reflect before executing complex tasks
- Set appropriate `max_reasoning_attempts` to control planning iterations (None for unlimited attempts)
- Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks
- Customize the date format with `date_format` using standard Python datetime format codes
- Enable `multimodal: true` for agents that need to process both text and visual content
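A sketch combining these advanced options on a single illustrative agent:
```python Code
from crewai import Agent

# Sketch: reasoning, date awareness, and multimodal input on one agent
campaign_agent = Agent(
    role="Campaign Planner",
    goal="Plan product launches around real calendar dates",
    backstory="Organized planner who reviews visuals and plans before acting",
    reasoning=True,              # Plan and reflect before executing
    max_reasoning_attempts=3,    # Cap planning iterations
    inject_date=True,            # Add the current date to the task
    date_format="%B %d, %Y",     # e.g. "January 29, 2026"
    multimodal=True,             # Process text and visual content
    verbose=True
)
```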
### Agent Collaboration
- Enable `allow_delegation: true` when agents need to work together
- Use `step_callback` to monitor and log agent interactions
- Consider using different LLMs for different purposes:
- Main `llm` for complex reasoning
- `function_calling_llm` for efficient tool usage
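For example, a sketch that splits models between reasoning and tool calling; the model names are illustrative, and both parameters also accept LLM instances:
```python Code
from crewai import Agent

# Sketch: separate models for reasoning and for tool invocation
analyst = Agent(
    role="Research Analyst",
    goal="Produce well-reasoned analyses that rely on external tools",
    backstory="Analyst who combines deep reasoning with frequent tool use",
    llm="gpt-4o",                        # Main model for complex reasoning
    function_calling_llm="gpt-4o-mini",  # Lighter model for tool calls
    allow_delegation=True,
    verbose=True
)
```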
### Date Awareness and Reasoning
- Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks
- Customize the date format with `date_format` using standard Python datetime format codes
- Valid format codes include: %Y (year), %m (month), %d (day), %B (full month name), etc.
- Invalid date formats will be logged as warnings and will not modify the task description
- Enable `reasoning: true` for complex tasks that benefit from upfront planning and reflection
### Model Compatibility
- Set `use_system_prompt: false` for older models that don't support system messages
- Ensure your chosen `llm` supports the features you need (like function calling)
## Troubleshooting Common Issues
1. **Rate Limiting**: If you're hitting API rate limits:
- Implement appropriate `max_rpm`
- Use caching for repetitive operations
- Consider batching requests
2. **Context Window Errors**: If you're exceeding context limits:
- Enable `respect_context_window`
- Use more efficient prompts
- Clear agent memory periodically
3. **Code Execution Issues**: If code execution fails:
- Verify Docker is installed for safe mode
- Check execution permissions
- Review code sandbox settings
4. **Memory Issues**: If agent responses seem inconsistent:
- Check knowledge source configuration
- Review conversation history management
Remember that agents are most effective when configured according to their specific use case. Take time to understand your requirements and adjust these parameters accordingly.

View File

@@ -1,411 +0,0 @@
---
title: CLI
description: Learn how to use the CrewAI CLI to interact with CrewAI.
icon: terminal
mode: "wide"
---
<Warning>Since release 0.140.0, CrewAI AMP has been migrating its login provider, and the CLI authentication flow was updated accordingly. Users who log in with Google, or who created their account after July 3rd, 2025, will be unable to log in with older versions of the `crewai` library.</Warning>
## Overview
The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews & flows.
## Installation
To use the CrewAI CLI, make sure you have CrewAI installed:
```shell Terminal
pip install crewai
```
## Basic Usage
The basic structure of a CrewAI CLI command is:
```shell Terminal
crewai [COMMAND] [OPTIONS] [ARGUMENTS]
```
## Available Commands
### 1. Create
Create a new crew or flow.
```shell Terminal
crewai create [OPTIONS] TYPE NAME
```
- `TYPE`: Choose between "crew" or "flow"
- `NAME`: Name of the crew or flow
Example:
```shell Terminal
crewai create crew my_new_crew
crewai create flow my_new_flow
```
### 2. Version
Show the installed version of CrewAI.
```shell Terminal
crewai version [OPTIONS]
```
- `--tools`: (Optional) Show the installed version of CrewAI tools
Example:
```shell Terminal
crewai version
crewai version --tools
```
### 3. Train
Train the crew for a specified number of iterations.
```shell Terminal
crewai train [OPTIONS]
```
- `-n, --n_iterations INTEGER`: Number of iterations to train the crew (default: 5)
- `-f, --filename TEXT`: Path to a custom file for training (default: "trained_agents_data.pkl")
Example:
```shell Terminal
crewai train -n 10 -f my_training_data.pkl
```
### 4. Replay
Replay the crew execution from a specific task.
```shell Terminal
crewai replay [OPTIONS]
```
- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks
Example:
```shell Terminal
crewai replay -t task_123456
```
### 5. Log-tasks-outputs
Retrieve your latest crew.kickoff() task outputs.
```shell Terminal
crewai log-tasks-outputs
```
### 6. Reset-memories
Reset the crew memories (long, short, entity, latest_crew_kickoff_outputs).
```shell Terminal
crewai reset-memories [OPTIONS]
```
- `-l, --long`: Reset LONG TERM memory
- `-s, --short`: Reset SHORT TERM memory
- `-e, --entities`: Reset ENTITIES memory
- `-k, --kickoff-outputs`: Reset LATEST KICKOFF TASK OUTPUTS
- `-kn, --knowledge`: Reset KNOWLEDGE storage
- `-akn, --agent-knowledge`: Reset AGENT KNOWLEDGE storage
- `-a, --all`: Reset ALL memories
Example:
```shell Terminal
crewai reset-memories --long --short
crewai reset-memories --all
```
### 7. Test
Test the crew and evaluate the results.
```shell Terminal
crewai test [OPTIONS]
```
- `-n, --n_iterations INTEGER`: Number of iterations to test the crew (default: 3)
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")
Example:
```shell Terminal
crewai test -n 5 -m gpt-3.5-turbo
```
### 8. Run
Run the crew or flow.
```shell Terminal
crewai run
```
<Note>
Starting from version 0.103.0, the `crewai run` command can be used to run both standard crews and flows. For flows, it automatically detects the type from pyproject.toml and runs the appropriate command. This is now the recommended way to run both crews and flows.
</Note>
<Note>
Make sure to run these commands from the directory where your CrewAI project is set up.
Some commands may require additional configuration or setup within your project structure.
</Note>
### 9. Chat
Starting in version `0.98.0`, when you run the `crewai chat` command, you start an interactive session with your crew. The AI assistant will guide you by asking for necessary inputs to execute the crew. Once all inputs are provided, the crew will execute its tasks.
After receiving the results, you can continue interacting with the assistant for further instructions or questions.
```shell Terminal
crewai chat
```
<Note>
Ensure you execute these commands from your CrewAI project's root directory.
</Note>
<Note>
IMPORTANT: Set the `chat_llm` property in your `crew.py` file to enable this command.
```python
@crew
def crew(self) -> Crew:
return Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
verbose=True,
chat_llm="gpt-4o", # LLM for chat orchestration
)
```
</Note>
### 10. Deploy
Deploy the crew or flow to [CrewAI AMP](https://app.crewai.com).
- **Authentication**: You need to be authenticated to deploy to CrewAI AMP.
You can log in or create an account with:
```shell Terminal
crewai login
```
- **Create a deployment**: Once you are authenticated, you can create a deployment for your crew or flow from the root of your local project.
```shell Terminal
crewai deploy create
```
- Reads your local project configuration.
- Prompts you to confirm the environment variables (like `OPENAI_API_KEY`, `SERPER_API_KEY`) found locally. These will be securely stored with the deployment on the Enterprise platform. Ensure your sensitive keys are correctly configured locally (e.g., in a `.env` file) before running this.
### 11. Organization Management
Manage your CrewAI AMP organizations.
```shell Terminal
crewai org [COMMAND] [OPTIONS]
```
#### Commands:
- `list`: List all organizations you belong to
```shell Terminal
crewai org list
```
- `current`: Display your currently active organization
```shell Terminal
crewai org current
```
- `switch`: Switch to a specific organization
```shell Terminal
crewai org switch <organization_id>
```
<Note>
You must be authenticated to CrewAI AMP to use these organization management commands.
</Note>
- **Create a deployment** (continued):
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI AMP.
```shell Terminal
crewai deploy push
```
- Initiates the deployment process on the CrewAI AMP platform.
- Upon successful initiation, it will output a `Deployment created successfully!` message along with the Deployment Name and a unique Deployment ID (UUID).
- **Deployment Status**: You can check the status of your deployment with:
```shell Terminal
crewai deploy status
```
This fetches the latest status of your most recent deployment attempt (e.g., `Building Images for Crew`, `Deploy Enqueued`, `Online`).
- **Deployment Logs**: You can check the logs of your deployment with:
```shell Terminal
crewai deploy logs
```
This streams the deployment logs to your terminal.
- **List deployments**: You can list all your deployments with:
```shell Terminal
crewai deploy list
```
This lists all your deployments.
- **Delete a deployment**: You can delete a deployment with:
```shell Terminal
crewai deploy remove
```
This deletes the deployment from the CrewAI AMP platform.
- **Help Command**: You can get help with the CLI with:
```shell Terminal
crewai deploy --help
```
This shows the help message for the CrewAI Deploy CLI.
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI AMP](https://app.crewai.com) using the CLI.
<iframe
className="w-full aspect-video rounded-xl"
src="https://www.youtube.com/embed/3EqSV-CYDZA"
title="CrewAI Deployment Guide"
frameBorder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen
></iframe>
### 12. Login
Authenticate with CrewAI AMP using a secure device code flow (no email entry required).
```shell Terminal
crewai login
```
What happens:
- A verification URL and short code are displayed in your terminal
- Your browser opens to the verification URL
- Enter/confirm the code to complete authentication
Notes:
- The OAuth2 provider and domain are configured via `crewai config` (defaults use `login.crewai.com`)
- After successful login, the CLI also attempts to authenticate to the Tool Repository automatically
- If you reset your configuration, run `crewai login` again to re-authenticate
### 13. API Keys
When you run the `crewai create crew` command, the CLI will show you a list of available LLM providers to choose from, followed by model selection for your chosen provider.
Once you've selected an LLM provider and model, you will be prompted for API keys.
#### Available LLM Providers
Here's a list of the most popular LLM providers suggested by the CLI:
* OpenAI
* Groq
* Anthropic
* Google Gemini
* SambaNova
When you select a provider, the CLI will then show you available models for that provider and prompt you to enter your API key.
#### Other Options
If you select "other", you will be able to choose from a list of LiteLLM-supported providers.
When you select a provider, the CLI will prompt you to enter the Key name and the API key.
See the following link for each provider's key name:
* [LiteLLM Providers](https://docs.litellm.ai/docs/providers)
### 14. Configuration Management
Manage CLI configuration settings for CrewAI.
```shell Terminal
crewai config [COMMAND] [OPTIONS]
```
#### Commands:
- `list`: Display all CLI configuration parameters
```shell Terminal
crewai config list
```
- `set`: Set a CLI configuration parameter
```shell Terminal
crewai config set <key> <value>
```
- `reset`: Reset all CLI configuration parameters to default values
```shell Terminal
crewai config reset
```
#### Available Configuration Parameters
- `enterprise_base_url`: Base URL of the CrewAI AMP instance
- `oauth2_provider`: OAuth2 provider used for authentication (e.g., workos, okta, auth0)
- `oauth2_audience`: OAuth2 audience value, typically used to identify the target API or resource
- `oauth2_client_id`: OAuth2 client ID issued by the provider, used during authentication requests
- `oauth2_domain`: OAuth2 provider's domain (e.g., your-org.auth0.com) used for issuing tokens
#### Examples
Display current configuration:
```shell Terminal
crewai config list
```
Example output:
| Setting | Value | Description |
| :------------------ | :----------------------- | :---------------------------------------------------------- |
| enterprise_base_url | https://app.crewai.com | Base URL of the CrewAI AMP instance |
| org_name | Not set | Name of the currently active organization |
| org_uuid | Not set | UUID of the currently active organization |
| oauth2_provider | workos | OAuth2 provider (e.g., workos, okta, auth0) |
| oauth2_audience | client_01YYY | Audience identifying the target API/resource |
| oauth2_client_id | client_01XXX | OAuth2 client ID issued by the provider |
| oauth2_domain | login.crewai.com | Provider domain (e.g., your-org.auth0.com) |
Set the enterprise base URL:
```shell Terminal
crewai config set enterprise_base_url https://my-enterprise.crewai.com
```
Set OAuth2 provider:
```shell Terminal
crewai config set oauth2_provider auth0
```
Set OAuth2 domain:
```shell Terminal
crewai config set oauth2_domain my-company.auth0.com
```
Reset all configuration to defaults:
```shell Terminal
crewai config reset
```
<Tip>
After resetting configuration, re-run `crewai login` to authenticate again.
</Tip>
<Tip>
CrewAI CLI handles authentication to the Tool Repository automatically when adding packages to your project. Just append `crewai` before any `uv` command to use it. E.g. `crewai uv add requests`. For more information, see [Tool Repository](https://docs.crewai.com/enterprise/features/tool-repository) docs.
</Tip>
<Note>
Configuration settings are stored in `~/.config/crewai/settings.json`. Some settings like organization name and UUID are read-only and managed through authentication and organization commands. Tool repository related settings are hidden and cannot be set directly by users.
</Note>

View File

@@ -1,363 +0,0 @@
---
title: Collaboration
description: How to enable agents to work together, delegate tasks, and communicate effectively within CrewAI teams.
icon: screen-users
mode: "wide"
---
## Overview
Collaboration in CrewAI enables agents to work together as a team by delegating tasks and asking questions to leverage each other's expertise. When `allow_delegation=True`, agents automatically gain access to powerful collaboration tools.
## Quick Start: Enable Collaboration
```python
from crewai import Agent, Crew, Task
# Enable collaboration for agents
researcher = Agent(
role="Research Specialist",
goal="Conduct thorough research on any topic",
backstory="Expert researcher with access to various sources",
allow_delegation=True, # 🔑 Key setting for collaboration
verbose=True
)
writer = Agent(
role="Content Writer",
goal="Create engaging content based on research",
backstory="Skilled writer who transforms research into compelling content",
allow_delegation=True, # 🔑 Enables asking questions to other agents
verbose=True
)
# Agents can now collaborate automatically
crew = Crew(
agents=[researcher, writer],
tasks=[...],
verbose=True
)
```
## How Agent Collaboration Works
When `allow_delegation=True`, CrewAI automatically provides agents with two powerful tools:
### 1. **Delegate Work Tool**
Allows agents to assign tasks to teammates with specific expertise.
```python
# Agent automatically gets this tool:
# Delegate work to coworker(task: str, context: str, coworker: str)
```
### 2. **Ask Question Tool**
Enables agents to ask specific questions to gather information from colleagues.
```python
# Agent automatically gets this tool:
# Ask question to coworker(question: str, context: str, coworker: str)
```
## Collaboration in Action
Here's a complete example showing agents collaborating on a content creation task:
```python
from crewai import Agent, Crew, Task, Process
# Create collaborative agents
researcher = Agent(
role="Research Specialist",
goal="Find accurate, up-to-date information on any topic",
backstory="""You're a meticulous researcher with expertise in finding
reliable sources and fact-checking information across various domains.""",
allow_delegation=True,
verbose=True
)
writer = Agent(
role="Content Writer",
goal="Create engaging, well-structured content",
backstory="""You're a skilled content writer who excels at transforming
research into compelling, readable content for different audiences.""",
allow_delegation=True,
verbose=True
)
editor = Agent(
role="Content Editor",
goal="Ensure content quality and consistency",
backstory="""You're an experienced editor with an eye for detail,
ensuring content meets high standards for clarity and accuracy.""",
allow_delegation=True,
verbose=True
)
# Create a task that encourages collaboration
article_task = Task(
description="""Write a comprehensive 1000-word article about 'The Future of AI in Healthcare'.
The article should include:
- Current AI applications in healthcare
- Emerging trends and technologies
- Potential challenges and ethical considerations
- Expert predictions for the next 5 years
Collaborate with your teammates to ensure accuracy and quality.""",
expected_output="A well-researched, engaging 1000-word article with proper structure and citations",
agent=writer # Writer leads, but can delegate research to researcher
)
# Create collaborative crew
crew = Crew(
agents=[researcher, writer, editor],
tasks=[article_task],
process=Process.sequential,
verbose=True
)
result = crew.kickoff()
```
## Collaboration Patterns
### Pattern 1: Research → Write → Edit
```python
research_task = Task(
description="Research the latest developments in quantum computing",
expected_output="Comprehensive research summary with key findings and sources",
agent=researcher
)
writing_task = Task(
description="Write an article based on the research findings",
expected_output="Engaging 800-word article about quantum computing",
agent=writer,
context=[research_task] # Gets research output as context
)
editing_task = Task(
description="Edit and polish the article for publication",
expected_output="Publication-ready article with improved clarity and flow",
agent=editor,
context=[writing_task] # Gets article draft as context
)
```
### Pattern 2: Collaborative Single Task
```python
collaborative_task = Task(
description="""Create a marketing strategy for a new AI product.
Writer: Focus on messaging and content strategy
Researcher: Provide market analysis and competitor insights
Work together to create a comprehensive strategy.""",
expected_output="Complete marketing strategy with research backing",
agent=writer # Lead agent, but can delegate to researcher
)
```
## Hierarchical Collaboration
For complex projects, use a hierarchical process with a manager agent:
```python
from crewai import Agent, Crew, Task, Process
# Manager agent coordinates the team
manager = Agent(
role="Project Manager",
goal="Coordinate team efforts and ensure project success",
backstory="Experienced project manager skilled at delegation and quality control",
allow_delegation=True,
verbose=True
)
# Specialist agents
researcher = Agent(
role="Researcher",
goal="Provide accurate research and analysis",
backstory="Expert researcher with deep analytical skills",
allow_delegation=False, # Specialists focus on their expertise
verbose=True
)
writer = Agent(
role="Writer",
goal="Create compelling content",
backstory="Skilled writer who creates engaging content",
allow_delegation=False,
verbose=True
)
# Manager-led task
project_task = Task(
description="Create a comprehensive market analysis report with recommendations",
expected_output="Executive summary, detailed analysis, and strategic recommendations",
agent=manager # Manager will delegate to specialists
)
# Hierarchical crew
crew = Crew(
agents=[manager, researcher, writer],
tasks=[project_task],
process=Process.hierarchical, # Manager coordinates everything
manager_llm="gpt-4o", # Specify LLM for manager
verbose=True
)
```
## Best Practices for Collaboration
### 1. **Clear Role Definition**
```python
# ✅ Good: Specific, complementary roles
researcher = Agent(role="Market Research Analyst", ...)
writer = Agent(role="Technical Content Writer", ...)
# ❌ Avoid: Overlapping or vague roles
agent1 = Agent(role="General Assistant", ...)
agent2 = Agent(role="Helper", ...)
```
### 2. **Strategic Delegation Enabling**
```python
# ✅ Enable delegation for coordinators and generalists
lead_agent = Agent(
role="Content Lead",
allow_delegation=True, # Can delegate to specialists
...
)
# ✅ Disable for focused specialists (optional)
specialist_agent = Agent(
role="Data Analyst",
allow_delegation=False, # Focuses on core expertise
...
)
```
### 3. **Context Sharing**
```python
# ✅ Use context parameter for task dependencies
writing_task = Task(
description="Write article based on research",
agent=writer,
context=[research_task], # Shares research results
...
)
```
### 4. **Clear Task Descriptions**
```python
# ✅ Specific, actionable descriptions
Task(
description="""Research competitors in the AI chatbot space.
Focus on: pricing models, key features, target markets.
Provide data in a structured format.""",
...
)
# ❌ Vague descriptions that don't guide collaboration
Task(description="Do some research about chatbots", ...)
```
## Troubleshooting Collaboration
### Issue: Agents Not Collaborating
**Symptoms:** Agents work in isolation, no delegation occurs
```python
# ✅ Solution: Ensure delegation is enabled
agent = Agent(
role="...",
allow_delegation=True, # This is required!
...
)
```
### Issue: Too Much Back-and-Forth
**Symptoms:** Agents ask excessive questions, slow progress
```python
# ✅ Solution: Provide better context and specific roles
Task(
description="""Write a technical blog post about machine learning.
Context: Target audience is software developers with basic ML knowledge.
Length: 1200 words
Include: code examples, practical applications, best practices
If you need specific technical details, delegate research to the researcher.""",
...
)
```
### Issue: Delegation Loops
**Symptoms:** Agents delegate back and forth indefinitely
```python
# ✅ Solution: Clear hierarchy and responsibilities
manager = Agent(role="Manager", allow_delegation=True)
specialist1 = Agent(role="Specialist A", allow_delegation=False) # No re-delegation
specialist2 = Agent(role="Specialist B", allow_delegation=False)
```
## Advanced Collaboration Features
### Custom Collaboration Rules
```python
# Set specific collaboration guidelines in agent backstory
agent = Agent(
role="Senior Developer",
backstory="""You lead development projects and coordinate with team members.
Collaboration guidelines:
- Delegate research tasks to the Research Analyst
- Ask the Designer for UI/UX guidance
- Consult the QA Engineer for testing strategies
- Only escalate blocking issues to the Project Manager""",
allow_delegation=True
)
```
### Monitoring Collaboration
```python
def track_collaboration(output):
"""Track collaboration patterns"""
if "Delegate work to coworker" in output.raw:
print("🤝 Delegation occurred")
if "Ask question to coworker" in output.raw:
print("❓ Question asked")
crew = Crew(
agents=[...],
tasks=[...],
step_callback=track_collaboration, # Monitor collaboration
verbose=True
)
```
## Memory and Learning
Enable agents to remember past collaborations:
```python
agent = Agent(
role="Content Lead",
memory=True, # Remembers past interactions
allow_delegation=True,
verbose=True
)
```
With memory enabled, agents learn from previous collaborations and improve their delegation decisions over time.
## Next Steps
- **Try the examples**: Start with the basic collaboration example
- **Experiment with roles**: Test different agent role combinations
- **Monitor interactions**: Use `verbose=True` to see collaboration in action
- **Optimize task descriptions**: Clear tasks lead to better collaboration
- **Scale up**: Try hierarchical processes for complex projects
Collaboration transforms individual AI agents into powerful teams that can tackle complex, multi-faceted challenges together.

View File

@@ -1,369 +0,0 @@
---
title: Crews
description: Understanding and utilizing crews in the crewAI framework with comprehensive attributes and functionalities.
icon: people-group
mode: "wide"
---
## Overview
A crew in crewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow.
## Crew Attributes
| Attribute | Parameters | Description |
| :------------------------------------ | :--------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Tasks** | `tasks` | A list of tasks assigned to the crew. |
| **Agents** | `agents` | A list of agents that are part of the crew. |
| **Process** _(optional)_ | `process` | The process flow (e.g., sequential, hierarchical) the crew follows. Default is `sequential`. |
| **Verbose** _(optional)_ | `verbose` | The verbosity level for logging during execution. Defaults to `False`. |
| **Manager LLM** _(optional)_ | `manager_llm` | The language model used by the manager agent in a hierarchical process. **Required when using a hierarchical process.** |
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** _(optional)_ | `config` | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** _(optional)_ | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| **Output Log File** _(optional)_ | `output_log_file` | Set to True to save logs as logs.txt in the current directory or provide a file path. Logs will be in JSON format if the filename ends in .json, otherwise .txt. Defaults to `None`. |
| **Manager Agent** _(optional)_ | `manager_agent` | Sets a custom agent that will be used as the crew manager. |
| **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated, before each Crew iteration all Crew data is sent to an AgentPlanner that plans the tasks, and the resulting plan is added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | Knowledge sources available at the crew level, accessible to all the agents. |
<Tip>
**Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
</Tip>
## Creating Crews
There are two ways to create crews in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**.
### YAML Configuration (Recommended)
Using YAML configuration provides a cleaner, more maintainable way to define crews and is consistent with how agents and tasks are defined in CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/en/installation) section, you can define your crew in a class that inherits from `CrewBase` and uses decorators to define agents, tasks, and the crew itself.
#### Example Crew Class with Decorators
```python code
from crewai import Agent, Crew, Task, Process
from crewai.project import CrewBase, agent, task, crew, before_kickoff, after_kickoff
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List
@CrewBase
class YourCrewName:
"""Description of your crew"""
agents: List[BaseAgent]
tasks: List[Task]
# Paths to your YAML configuration files
# To see an example agent and task defined in YAML, checkout the following:
# - Task: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
# - Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
@before_kickoff
def prepare_inputs(self, inputs):
# Modify inputs before the crew starts
inputs['additional_data'] = "Some extra information"
return inputs
@after_kickoff
def process_output(self, output):
# Modify output after the crew finishes
output.raw += "\nProcessed after kickoff."
return output
@agent
def agent_one(self) -> Agent:
return Agent(
config=self.agents_config['agent_one'], # type: ignore[index]
verbose=True
)
@agent
def agent_two(self) -> Agent:
return Agent(
config=self.agents_config['agent_two'], # type: ignore[index]
verbose=True
)
@task
def task_one(self) -> Task:
return Task(
config=self.tasks_config['task_one'] # type: ignore[index]
)
@task
def task_two(self) -> Task:
return Task(
config=self.tasks_config['task_two'] # type: ignore[index]
)
@crew
def crew(self) -> Crew:
return Crew(
agents=self.agents, # Automatically collected by the @agent decorator
tasks=self.tasks, # Automatically collected by the @task decorator.
process=Process.sequential,
verbose=True,
)
```
How to run the above code:
```python code
YourCrewName().crew().kickoff(inputs={"any": "input here"})
```
<Note>
Tasks will be executed in the order they are defined.
</Note>
The `CrewBase` class, along with these decorators, automates the collection of agents and tasks, reducing the need for manual management.
#### Decorators overview from `annotations.py`
CrewAI provides several decorators in the `annotations.py` file that are used to mark methods within your crew class for special handling:
- `@CrewBase`: Marks the class as a crew base class.
- `@agent`: Denotes a method that returns an `Agent` object.
- `@task`: Denotes a method that returns a `Task` object.
- `@crew`: Denotes the method that returns the `Crew` object.
- `@before_kickoff`: (Optional) Marks a method to be executed before the crew starts.
- `@after_kickoff`: (Optional) Marks a method to be executed after the crew finishes.
These decorators help in organizing your crew's structure and automatically collecting agents and tasks without manually listing them.
### Direct Code Definition (Alternative)
Alternatively, you can define the crew directly in code without using YAML configuration files.
```python code
from crewai import Agent, Crew, Task, Process
from crewai_tools import YourCustomTool
class YourCrewName:
def agent_one(self) -> Agent:
return Agent(
role="Data Analyst",
goal="Analyze data trends in the market",
backstory="An experienced data analyst with a background in economics",
verbose=True,
tools=[YourCustomTool()]
)
def agent_two(self) -> Agent:
return Agent(
role="Market Researcher",
goal="Gather information on market dynamics",
backstory="A diligent researcher with a keen eye for detail",
verbose=True
)
def task_one(self) -> Task:
return Task(
description="Collect recent market data and identify trends.",
expected_output="A report summarizing key trends in the market.",
agent=self.agent_one()
)
def task_two(self) -> Task:
return Task(
description="Research factors affecting market dynamics.",
expected_output="An analysis of factors influencing the market.",
agent=self.agent_two()
)
def crew(self) -> Crew:
return Crew(
agents=[self.agent_one(), self.agent_two()],
tasks=[self.task_one(), self.task_two()],
process=Process.sequential,
verbose=True
)
```
How to run the above code:
```python code
YourCrewName().crew().kickoff(inputs={})
```
In this example:
- Agents and tasks are defined directly within the class without decorators.
- We manually create and manage the list of agents and tasks.
- This approach provides more control but can be less maintainable for larger projects.
## Crew Output
The output of a crew in the CrewAI framework is encapsulated within the `CrewOutput` class.
This class provides a structured way to access results of the crew's execution, including various formats such as raw strings, JSON, and Pydantic models.
The `CrewOutput` includes the results from the final task output, token usage, and individual task outputs.
### Crew Output Attributes
| Attribute | Parameters | Type | Description |
| :--------------- | :------------- | :------------------------- | :--------------------------------------------------------------------------------------------------- |
| **Raw** | `raw` | `str` | The raw output of the crew. This is the default format for the output. |
| **Pydantic** | `pydantic` | `Optional[BaseModel]` | A Pydantic model object representing the structured output of the crew. |
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the crew. |
| **Tasks Output** | `tasks_output` | `List[TaskOutput]` | A list of `TaskOutput` objects, each representing the output of a task in the crew. |
| **Token Usage** | `token_usage` | `Dict[str, Any]` | A summary of token usage, providing insights into the language model's performance during execution. |
### Crew Output Methods and Properties
| Method/Property | Description |
| :-------------- | :------------------------------------------------------------------------------------------------ |
| **json** | Returns the JSON string representation of the crew output if the output format is JSON. |
| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
| **\_\_str\_\_** | Returns the string representation of the crew output, prioritizing Pydantic, then JSON, then raw. |
### Accessing Crew Outputs
Once a crew has been executed, its output can be accessed through the `output` attribute of the `Crew` object. The `CrewOutput` class provides various ways to interact with and present this output.
#### Example
```python Code
# Example crew execution
crew = Crew(
agents=[research_agent, writer_agent],
tasks=[research_task, write_article_task],
verbose=True
)
crew_output = crew.kickoff()
# Accessing the crew output
print(f"Raw Output: {crew_output.raw}")
if crew_output.json_dict:
print(f"JSON Output: {json.dumps(crew_output.json_dict, indent=2)}")
if crew_output.pydantic:
print(f"Pydantic Output: {crew_output.pydantic}")
print(f"Tasks Output: {crew_output.tasks_output}")
print(f"Token Usage: {crew_output.token_usage}")
```
## Accessing Crew Logs
You can see real-time logs of the crew execution by setting `output_log_file` to `True` (boolean) or a `file_name` (string). Events can be logged as either `file_name.txt` or `file_name.json`.
If set to `True`, logs are saved as `logs.txt`.
If `output_log_file` is set to `False` or `None`, no logs are saved.
```python Code
# Save crew logs
crew = Crew(output_log_file=True)               # Logs will be saved as logs.txt
crew = Crew(output_log_file="file_name")        # Logs will be saved as file_name.txt
crew = Crew(output_log_file="file_name.txt")    # Logs will be saved as file_name.txt
crew = Crew(output_log_file="file_name.json")   # Logs will be saved as file_name.json
```
## Memory Utilization
Crews can utilize memory (short-term, long-term, and entity memory) to enhance their execution and learning over time. This feature allows crews to store and recall execution memories, aiding in decision-making and task execution strategies.
## Cache Utilization
Caches can be employed to store the results of tools' execution, making the process more efficient by reducing the need to re-execute identical tasks.
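A minimal sketch enabling both memory and caching on a crew, reusing the agents and tasks from the earlier example:
```python Code
# Sketch: crew with memory and tool-result caching enabled
crew = Crew(
    agents=[research_agent, writer_agent],
    tasks=[research_task, write_article_task],
    memory=True,   # Short-term, long-term, and entity memory
    cache=True,    # Reuse cached tool results for identical calls
    verbose=True
)
```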
## Crew Usage Metrics
After the crew execution, you can access the `usage_metrics` attribute to view the language model (LLM) usage metrics for all tasks executed by the crew. This provides insights into operational efficiency and areas for improvement.
```python Code
# Access the crew's usage metrics
crew = Crew(agents=[agent1, agent2], tasks=[task1, task2])
crew.kickoff()
print(crew.usage_metrics)
```
## Crew Execution Process
- **Sequential Process**: Tasks are executed one after another, allowing for a linear flow of work.
- **Hierarchical Process**: A manager agent coordinates the crew, delegating tasks and validating outcomes before proceeding. **Note**: A `manager_llm` or `manager_agent` is required for this process and it's essential for validating the process flow.
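For example, a hierarchical crew only needs a `manager_llm` (or a custom `manager_agent`) on top of the usual setup. A sketch reusing the agents and tasks from the earlier example:
```python Code
from crewai import Crew, Process

# Sketch: hierarchical crew coordinated by a manager LLM
crew = Crew(
    agents=[research_agent, writer_agent],
    tasks=[research_task, write_article_task],
    process=Process.hierarchical,
    manager_llm="gpt-4o",   # Required when no manager_agent is provided
    verbose=True
)
```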
### Kicking Off a Crew
Once your crew is assembled, initiate the workflow with the `kickoff()` method. This starts the execution process according to the defined process flow.
```python Code
# Start the crew's task execution
result = my_crew.kickoff()
print(result)
```
### Different Ways to Kick Off a Crew
Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.
- `kickoff()`: Starts the execution process according to the defined process flow.
- `kickoff_for_each()`: Runs the crew sequentially, once for each input in the provided list.
- `kickoff_async()`: Initiates the workflow asynchronously.
- `kickoff_for_each_async()`: Runs the crew for each provided input concurrently, leveraging asynchronous processing.
```python Code
# Start the crew's task execution
result = my_crew.kickoff()
print(result)
# Example of using kickoff_for_each
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
results = my_crew.kickoff_for_each(inputs=inputs_array)
for result in results:
print(result)
# Example of using kickoff_async
inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.kickoff_async(inputs=inputs)
print(async_result)
# Example of using kickoff_for_each_async
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
for async_result in async_results:
print(async_result)
```
These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs.
### Replaying from a Specific Task
You can replay the crew from a specific task using the `crewai replay` CLI command. By running `crewai replay -t <task_id>`, you specify the `task_id` to replay from.
Kickoffs now save the task outputs of the latest kickoff locally, so you can replay from them.
### Replaying from a Specific Task Using the CLI
To use the replay feature, follow these steps:
1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following command:
To view the latest kickoff task IDs, use:
```shell
crewai log-tasks-outputs
```
Then, to replay from a specific task, use:
```shell
crewai replay -t <task_id>
```
These commands let you replay from your latest kickoff tasks, still retaining context from previously executed tasks.

View File

@@ -1,313 +0,0 @@
---
title: 'Event Listeners'
description: 'Tap into CrewAI events to build custom integrations and monitoring'
icon: spinner
mode: "wide"
---
## Overview
CrewAI provides a powerful event system that allows you to listen for and react to various events that occur during the execution of your Crew. This feature enables you to build custom integrations, monitoring solutions, logging systems, or any other functionality that needs to be triggered based on CrewAI's internal events.
## How It Works
CrewAI uses an event bus architecture to emit events throughout the execution lifecycle. The event system is built on the following components:
1. **CrewAIEventsBus**: A singleton event bus that manages event registration and emission
2. **BaseEvent**: Base class for all events in the system
3. **BaseEventListener**: Abstract base class for creating custom event listeners
When specific actions occur in CrewAI (like a Crew starting execution, an Agent completing a task, or a tool being used), the system emits corresponding events. You can register handlers for these events to execute custom code when they occur.
<Note type="info" title="Enterprise Enhancement: Prompt Tracing">
CrewAI AMP provides a built-in Prompt Tracing feature that leverages the event system to track, store, and visualize all prompts, completions, and associated metadata. This provides powerful debugging capabilities and transparency into your agent operations.
![Prompt Tracing Dashboard](/images/enterprise/traces-overview.png)
With Prompt Tracing you can:
- View the complete history of all prompts sent to your LLM
- Track token usage and costs
- Debug agent reasoning failures
- Share prompt sequences with your team
- Compare different prompt strategies
- Export traces for compliance and auditing
</Note>
## Creating a Custom Event Listener
To create a custom event listener, you need to:
1. Create a class that inherits from `BaseEventListener`
2. Implement the `setup_listeners` method
3. Register handlers for the events you're interested in
4. Create an instance of your listener in the appropriate file
Here's a simple example of a custom event listener class:
```python
from crewai.events import (
CrewKickoffStartedEvent,
CrewKickoffCompletedEvent,
AgentExecutionCompletedEvent,
)
from crewai.events import BaseEventListener
class MyCustomListener(BaseEventListener):
def __init__(self):
super().__init__()
def setup_listeners(self, crewai_event_bus):
@crewai_event_bus.on(CrewKickoffStartedEvent)
def on_crew_started(source, event):
print(f"Crew '{event.crew_name}' has started execution!")
@crewai_event_bus.on(CrewKickoffCompletedEvent)
def on_crew_completed(source, event):
print(f"Crew '{event.crew_name}' has completed execution!")
print(f"Output: {event.output}")
@crewai_event_bus.on(AgentExecutionCompletedEvent)
def on_agent_execution_completed(source, event):
print(f"Agent '{event.agent.role}' completed task")
print(f"Output: {event.output}")
```
## Properly Registering Your Listener
Simply defining your listener class isn't enough. You need to create an instance of it and ensure it's imported in your application. This ensures that:
1. The event handlers are registered with the event bus
2. The listener instance remains in memory (not garbage collected)
3. The listener is active when events are emitted
### Option 1: Import and Instantiate in Your Crew or Flow Implementation
The most important thing is to create an instance of your listener in the file where your Crew or Flow is defined and executed:
#### For Crew-based Applications
Create and import your listener at the top of your Crew implementation file:
```python
# In your crew.py file
from crewai import Agent, Crew, Task
from my_listeners import MyCustomListener
# Create an instance of your listener
my_listener = MyCustomListener()
class MyCustomCrew:
# Your crew implementation...
def crew(self):
return Crew(
agents=[...],
tasks=[...],
# ...
)
```
#### For Flow-based Applications
Create and import your listener at the top of your Flow implementation file:
```python
# In your main.py or flow.py file
from crewai.flow import Flow, listen, start
from my_listeners import MyCustomListener
# Create an instance of your listener
my_listener = MyCustomListener()
class MyCustomFlow(Flow):
# Your flow implementation...
@start()
def first_step(self):
# ...
```
This ensures that your listener is loaded and active when your Crew or Flow is executed.
### Option 2: Create a Package for Your Listeners
For a more structured approach, especially if you have multiple listeners:
1. Create a package for your listeners:
```
my_project/
├── listeners/
│ ├── __init__.py
│ ├── my_custom_listener.py
│ └── another_listener.py
```
2. In `my_custom_listener.py`, define your listener class and create an instance:
```python
# my_custom_listener.py
from crewai.events import BaseEventListener
# ... import events ...
class MyCustomListener(BaseEventListener):
# ... implementation ...
# Create an instance of your listener
my_custom_listener = MyCustomListener()
```
3. In `__init__.py`, import the listener instances to ensure they're loaded:
```python
# __init__.py
from .my_custom_listener import my_custom_listener
from .another_listener import another_listener
# Optionally export them if you need to access them elsewhere
__all__ = ['my_custom_listener', 'another_listener']
```
4. Import your listeners package in your Crew or Flow file:
```python
# In your crew.py or flow.py file
import my_project.listeners # This loads all your listeners
class MyCustomCrew:
# Your crew implementation...
```
This is how third-party event listeners are registered in the CrewAI codebase.
## Available Event Types
CrewAI provides a wide range of events that you can listen for:
### Crew Events
- **CrewKickoffStartedEvent**: Emitted when a Crew starts execution
- **CrewKickoffCompletedEvent**: Emitted when a Crew completes execution
- **CrewKickoffFailedEvent**: Emitted when a Crew fails to complete execution
- **CrewTestStartedEvent**: Emitted when a Crew starts testing
- **CrewTestCompletedEvent**: Emitted when a Crew completes testing
- **CrewTestFailedEvent**: Emitted when a Crew fails to complete testing
- **CrewTrainStartedEvent**: Emitted when a Crew starts training
- **CrewTrainCompletedEvent**: Emitted when a Crew completes training
- **CrewTrainFailedEvent**: Emitted when a Crew fails to complete training
### Agent Events
- **AgentExecutionStartedEvent**: Emitted when an Agent starts executing a task
- **AgentExecutionCompletedEvent**: Emitted when an Agent completes executing a task
- **AgentExecutionErrorEvent**: Emitted when an Agent encounters an error during execution
### Task Events
- **TaskStartedEvent**: Emitted when a Task starts execution
- **TaskCompletedEvent**: Emitted when a Task completes execution
- **TaskFailedEvent**: Emitted when a Task fails to complete execution
- **TaskEvaluationEvent**: Emitted when a Task is evaluated
### Tool Usage Events
- **ToolUsageStartedEvent**: Emitted when a tool execution is started
- **ToolUsageFinishedEvent**: Emitted when a tool execution is completed
- **ToolUsageErrorEvent**: Emitted when a tool execution encounters an error
- **ToolValidateInputErrorEvent**: Emitted when a tool input validation encounters an error
- **ToolExecutionErrorEvent**: Emitted when a tool execution encounters an error
- **ToolSelectionErrorEvent**: Emitted when there's an error selecting a tool
### Knowledge Events
- **KnowledgeRetrievalStartedEvent**: Emitted when a knowledge retrieval is started
- **KnowledgeRetrievalCompletedEvent**: Emitted when a knowledge retrieval is completed
- **KnowledgeQueryStartedEvent**: Emitted when a knowledge query is started
- **KnowledgeQueryCompletedEvent**: Emitted when a knowledge query is completed
- **KnowledgeQueryFailedEvent**: Emitted when a knowledge query fails
- **KnowledgeSearchQueryFailedEvent**: Emitted when a knowledge search query fails
### LLM Guardrail Events
- **LLMGuardrailStartedEvent**: Emitted when a guardrail validation starts. Contains details about the guardrail being applied and retry count.
- **LLMGuardrailCompletedEvent**: Emitted when a guardrail validation completes. Contains details about validation success/failure, results, and error messages if any.
### Flow Events
- **FlowCreatedEvent**: Emitted when a Flow is created
- **FlowStartedEvent**: Emitted when a Flow starts execution
- **FlowFinishedEvent**: Emitted when a Flow completes execution
- **FlowPlotEvent**: Emitted when a Flow is plotted
- **MethodExecutionStartedEvent**: Emitted when a Flow method starts execution
- **MethodExecutionFinishedEvent**: Emitted when a Flow method completes execution
- **MethodExecutionFailedEvent**: Emitted when a Flow method fails to complete execution
### LLM Events
- **LLMCallStartedEvent**: Emitted when an LLM call starts
- **LLMCallCompletedEvent**: Emitted when an LLM call completes
- **LLMCallFailedEvent**: Emitted when an LLM call fails
- **LLMStreamChunkEvent**: Emitted for each chunk received during streaming LLM responses
### Memory Events
- **MemoryQueryStartedEvent**: Emitted when a memory query is started. Contains the query, limit, and optional score threshold.
- **MemoryQueryCompletedEvent**: Emitted when a memory query is completed successfully. Contains the query, results, limit, score threshold, and query execution time.
- **MemoryQueryFailedEvent**: Emitted when a memory query fails. Contains the query, limit, score threshold, and error message.
- **MemorySaveStartedEvent**: Emitted when a memory save operation is started. Contains the value to be saved, metadata, and optional agent role.
- **MemorySaveCompletedEvent**: Emitted when a memory save operation is completed successfully. Contains the saved value, metadata, agent role, and save execution time.
- **MemorySaveFailedEvent**: Emitted when a memory save operation fails. Contains the value, metadata, agent role, and error message.
- **MemoryRetrievalStartedEvent**: Emitted when memory retrieval for a task prompt starts. Contains the optional task ID.
- **MemoryRetrievalCompletedEvent**: Emitted when memory retrieval for a task prompt completes successfully. Contains the task ID, memory content, and retrieval execution time.
## Event Handler Structure
Each event handler receives two parameters:
1. **source**: The object that emitted the event
2. **event**: The event instance, containing event-specific data
The structure of the event object depends on the event type, but all events inherit from `BaseEvent` and include:
- **timestamp**: The time when the event was emitted
- **type**: A string identifier for the event type
Additional fields vary by event type. For example, `CrewKickoffCompletedEvent` includes `crew_name` and `output` fields.
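For instance, a handler for `CrewKickoffCompletedEvent` can read both the shared `BaseEvent` fields and the event-specific ones (a minimal sketch):
```python
from crewai.events import BaseEventListener, CrewKickoffCompletedEvent

class TimingListener(BaseEventListener):
    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(CrewKickoffCompletedEvent)
        def on_completed(source, event):
            # Shared BaseEvent fields
            print(f"[{event.timestamp}] {event.type}")
            # Event-specific fields
            print(f"Crew: {event.crew_name}")
            print(f"Output: {event.output}")
```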
## Advanced Usage: Scoped Handlers
For temporary event handling (useful for testing or specific operations), you can use the `scoped_handlers` context manager:
```python
from crewai.events import crewai_event_bus, CrewKickoffStartedEvent
with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(CrewKickoffStartedEvent)
def temp_handler(source, event):
print("This handler only exists within this context")
# Do something that emits events
# Outside the context, the temporary handler is removed
```
## Use Cases
Event listeners can be used for a variety of purposes:
1. **Logging and Monitoring**: Track the execution of your Crew and log important events
2. **Analytics**: Collect data about your Crew's performance and behavior
3. **Debugging**: Set up temporary listeners to debug specific issues
4. **Integration**: Connect CrewAI with external systems like monitoring platforms, databases, or notification services
5. **Custom Behavior**: Trigger custom actions based on specific events
## Best Practices
1. **Keep Handlers Light**: Event handlers should be lightweight and avoid blocking operations
2. **Error Handling**: Include proper error handling in your event handlers to prevent exceptions from affecting the main execution
3. **Cleanup**: If your listener allocates resources, ensure they're properly cleaned up
4. **Selective Listening**: Only listen for events you actually need to handle
5. **Testing**: Test your event listeners in isolation to ensure they behave as expected
By leveraging CrewAI's event system, you can extend its functionality and integrate it seamlessly with your existing infrastructure.

View File

@@ -1,916 +0,0 @@
---
title: Flows
description: Learn how to create and manage AI workflows using CrewAI Flows.
icon: arrow-progress
mode: "wide"
---
## Overview
CrewAI Flows is a powerful feature designed to streamline the creation and management of AI workflows. Flows allow developers to combine and coordinate coding tasks and Crews efficiently, providing a robust framework for building sophisticated AI automations.
Flows allow you to create structured, event-driven workflows. They provide a seamless way to connect multiple tasks, manage state, and control the flow of execution in your AI applications. With Flows, you can easily design and implement multi-step processes that leverage the full potential of CrewAI's capabilities.
1. **Simplified Workflow Creation**: Easily chain together multiple Crews and tasks to create complex AI workflows.
2. **State Management**: Flows make it super easy to manage and share state between different tasks in your workflow.
3. **Event-Driven Architecture**: Built on an event-driven model, allowing for dynamic and responsive workflows.
4. **Flexible Control Flow**: Implement conditional logic, loops, and branching within your workflows.
## Getting Started
Let's create a simple Flow where you will use OpenAI to generate a random city in one task and then use that city to generate a fun fact in another task.
```python Code
from crewai.flow.flow import Flow, listen, start
from dotenv import load_dotenv
from litellm import completion
load_dotenv()  # Load environment variables (e.g. OPENAI_API_KEY) from .env
class ExampleFlow(Flow):
model = "gpt-4o-mini"
@start()
def generate_city(self):
print("Starting flow")
# Each flow state automatically gets a unique ID
print(f"Flow State ID: {self.state['id']}")
response = completion(
model=self.model,
messages=[
{
"role": "user",
"content": "Return the name of a random city in the world.",
},
],
)
random_city = response["choices"][0]["message"]["content"]
# Store the city in our state
self.state["city"] = random_city
print(f"Random City: {random_city}")
return random_city
@listen(generate_city)
def generate_fun_fact(self, random_city):
response = completion(
model=self.model,
messages=[
{
"role": "user",
"content": f"Tell me a fun fact about {random_city}",
},
],
)
fun_fact = response["choices"][0]["message"]["content"]
# Store the fun fact in our state
self.state["fun_fact"] = fun_fact
return fun_fact
flow = ExampleFlow()
flow.plot()
result = flow.kickoff()
print(f"Generated fun fact: {result}")
```
![Flow Visual image](/images/crewai-flow-1.png)
In the above example, we have created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for the output of the `generate_city` task.
Each Flow instance automatically receives a unique identifier (UUID) in its state, which helps track and manage flow executions. The state can also store additional data (like the generated city and fun fact) that persists throughout the flow's execution.
When you run the Flow, it will:
1. Generate a unique ID for the flow state
2. Generate a random city and store it in the state
3. Generate a fun fact about that city and store it in the state
4. Print the results to the console
The state's unique ID and stored data can be useful for tracking flow executions and maintaining context between tasks.
**Note:** Ensure you have set up your `.env` file to store your `OPENAI_API_KEY`. This key is necessary for authenticating requests to the OpenAI API.
### @start()
The `@start()` decorator marks entry points for a Flow. You can:
- Declare multiple unconditional starts: `@start()`
- Gate a start on a prior method or router label: `@start("method_or_label")`
- Provide a callable condition to control when a start should fire
All satisfied `@start()` methods will execute (often in parallel) when the Flow begins or resumes.
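As a minimal sketch of the first two forms (the method names are illustrative and the gating semantics depend on how your flow is wired):
```python Code
from crewai.flow.flow import Flow, start

class MultiStartFlow(Flow):
    @start()
    def load_config(self):
        # Unconditional start: always runs when the flow kicks off
        return "config loaded"

    @start()
    def load_data(self):
        # A second unconditional start; both starts may run in parallel
        return "data loaded"

    @start("load_config")
    def refresh_cache(self):
        # Gated start: fires once load_config has completed
        return "cache refreshed"
```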
### @listen()
The `@listen()` decorator is used to mark a method as a listener for the output of another task in the Flow. The method decorated with `@listen()` will be executed when the specified task emits an output. The method can access the output of the task it is listening to as an argument.
#### Usage
The `@listen()` decorator can be used in several ways:
1. **Listening to a Method by Name**: You can pass the name of the method you want to listen to as a string. When that method completes, the listener method will be triggered.
```python Code
@listen("generate_city")
def generate_fun_fact(self, random_city):
# Implementation
```
2. **Listening to a Method Directly**: You can pass the method itself. When that method completes, the listener method will be triggered.
```python Code
@listen(generate_city)
def generate_fun_fact(self, random_city):
# Implementation
```
### Flow Output
Accessing and handling the output of a Flow is essential for integrating your AI workflows into larger applications or systems. CrewAI Flows provide straightforward mechanisms to retrieve the final output, access intermediate results, and manage the overall state of your Flow.
#### Retrieving the Final Output
When you run a Flow, the final output is determined by the last method that completes. The `kickoff()` method returns the output of this final method.
Here's how you can access the final output:
<CodeGroup>
```python Code
from crewai.flow.flow import Flow, listen, start
class OutputExampleFlow(Flow):
@start()
def first_method(self):
return "Output from first_method"
@listen(first_method)
def second_method(self, first_output):
return f"Second method received: {first_output}"
flow = OutputExampleFlow()
flow.plot("my_flow_plot")
final_output = flow.kickoff()
print("---- Final Output ----")
print(final_output)
```
```text Output
---- Final Output ----
Second method received: Output from first_method
```
</CodeGroup>
![Flow Visual image](/images/crewai-flow-2.png)
In this example, the `second_method` is the last method to complete, so its output will be the final output of the Flow.
The `kickoff()` method will return the final output, which is then printed to the console. The `plot()` method will generate the HTML file, which will help you understand the flow.
#### Accessing and Updating State
In addition to retrieving the final output, you can also access and update the state within your Flow. The state can be used to store and share data between different methods in the Flow. After the Flow has run, you can access the state to retrieve any information that was added or updated during the execution.
Here's an example of how to update and access the state:
<CodeGroup>
```python Code
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
class ExampleState(BaseModel):
counter: int = 0
message: str = ""
class StateExampleFlow(Flow[ExampleState]):
@start()
def first_method(self):
self.state.message = "Hello from first_method"
self.state.counter += 1
@listen(first_method)
def second_method(self):
self.state.message += " - updated by second_method"
self.state.counter += 1
return self.state.message
flow = StateExampleFlow()
flow.plot("my_flow_plot")
final_output = flow.kickoff()
print(f"Final Output: {final_output}")
print("Final State:")
print(flow.state)
```
```text Output
Final Output: Hello from first_method - updated by second_method
Final State:
counter=2 message='Hello from first_method - updated by second_method'
```
</CodeGroup>
![Flow Visual image](/images/crewai-flow-2.png)
In this example, the state is updated by both `first_method` and `second_method`.
After the Flow has run, you can access the final state to see the updates made by these methods.
By ensuring that the final method's output is returned and providing access to the state, CrewAI Flows make it easy to integrate the results of your AI workflows into larger applications or systems,
while also maintaining and accessing the state throughout the Flow's execution.
## Flow State Management
Managing state effectively is crucial for building reliable and maintainable AI workflows. CrewAI Flows provides robust mechanisms for both unstructured and structured state management,
allowing developers to choose the approach that best fits their application's needs.
### Unstructured State Management
In unstructured state management, all state is stored in the `state` attribute of the `Flow` class.
This approach offers flexibility, enabling developers to add or modify state attributes on the fly without defining a strict schema.
Even with unstructured states, CrewAI Flows automatically generates and maintains a unique identifier (UUID) for each state instance.
```python Code
from crewai.flow.flow import Flow, listen, start
class UnstructuredExampleFlow(Flow):
@start()
def first_method(self):
# The state automatically includes an 'id' field
print(f"State ID: {self.state['id']}")
self.state['counter'] = 0
self.state['message'] = "Hello from structured flow"
@listen(first_method)
def second_method(self):
self.state['counter'] += 1
self.state['message'] += " - updated"
@listen(second_method)
def third_method(self):
self.state['counter'] += 1
self.state['message'] += " - updated again"
print(f"State after third_method: {self.state}")
flow = UnstructuredExampleFlow()
flow.plot("my_flow_plot")
flow.kickoff()
```
![Flow Visual image](/images/crewai-flow-3.png)
**Note:** The `id` field is automatically generated and preserved throughout the flow's execution. You don't need to manage or set it manually, and it will be maintained even when updating the state with new data.
**Key Points:**
- **Flexibility:** You can dynamically add attributes to `self.state` without predefined constraints.
- **Simplicity:** Ideal for straightforward workflows where state structure is minimal or varies significantly.
### Structured State Management
Structured state management leverages predefined schemas to ensure consistency and type safety across the workflow.
By using models like Pydantic's `BaseModel`, developers can define the exact shape of the state, enabling better validation and auto-completion in development environments.
Each state in CrewAI Flows automatically receives a unique identifier (UUID) to help track and manage state instances. This ID is automatically generated and managed by the Flow system.
```python Code
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
class ExampleState(BaseModel):
# Note: 'id' field is automatically added to all states
counter: int = 0
message: str = ""
class StructuredExampleFlow(Flow[ExampleState]):
@start()
def first_method(self):
# Access the auto-generated ID if needed
print(f"State ID: {self.state.id}")
self.state.message = "Hello from structured flow"
@listen(first_method)
def second_method(self):
self.state.counter += 1
self.state.message += " - updated"
@listen(second_method)
def third_method(self):
self.state.counter += 1
self.state.message += " - updated again"
print(f"State after third_method: {self.state}")
flow = StructuredExampleFlow()
flow.kickoff()
```
![Flow Visual image](/images/crewai-flow-3.png)
**Key Points:**
- **Defined Schema:** `ExampleState` clearly outlines the state structure, enhancing code readability and maintainability.
- **Type Safety:** Leveraging Pydantic ensures that state attributes adhere to the specified types, reducing runtime errors.
- **Auto-Completion:** IDEs can provide better auto-completion and error checking based on the defined state model.
### Choosing Between Unstructured and Structured State Management
- **Use Unstructured State Management when:**
- The workflow's state is simple or highly dynamic.
- Flexibility is prioritized over strict state definitions.
- Rapid prototyping is required without the overhead of defining schemas.
- **Use Structured State Management when:**
- The workflow requires a well-defined and consistent state structure.
- Type safety and validation are important for your application's reliability.
- You want to leverage IDE features like auto-completion and type checking for better developer experience.
By providing both unstructured and structured state management options, CrewAI Flows empowers developers to build AI workflows that are both flexible and robust, catering to a wide range of application requirements.
## Flow Persistence
The `@persist` decorator enables automatic state persistence in CrewAI Flows, allowing you to maintain flow state across restarts or different workflow executions. This decorator can be applied at either the class level or method level, providing flexibility in how you manage state persistence.
### Class-Level Persistence
When applied at the class level, the `@persist` decorator automatically persists all flow method states:
```python
@persist # Using SQLiteFlowPersistence by default
class MyFlow(Flow[MyState]):
@start()
def initialize_flow(self):
# This method will automatically have its state persisted
self.state.counter = 1
print("Initialized flow. State ID:", self.state.id)
@listen(initialize_flow)
def next_step(self):
# The state (including self.state.id) is automatically reloaded
self.state.counter += 1
print("Flow state is persisted. Counter:", self.state.counter)
```
### Method-Level Persistence
For more granular control, you can apply `@persist` to specific methods:
```python
class AnotherFlow(Flow[dict]):
@persist # Persists only this method's state
@start()
def begin(self):
if "runs" not in self.state:
self.state["runs"] = 0
self.state["runs"] += 1
print("Method-level persisted runs:", self.state["runs"])
```
### How It Works
1. **Unique State Identification**
- Each flow state automatically receives a unique UUID
- The ID is preserved across state updates and method calls
- Supports both structured (Pydantic BaseModel) and unstructured (dictionary) states
2. **Default SQLite Backend**
- SQLiteFlowPersistence is the default storage backend
- States are automatically saved to a local SQLite database
- Robust error handling ensures clear messages if database operations fail
3. **Error Handling**
- Comprehensive error messages for database operations
- Automatic state validation during save and load
- Clear feedback when persistence operations encounter issues
### Important Considerations
- **State Types**: Both structured (Pydantic BaseModel) and unstructured (dictionary) states are supported
- **Automatic ID**: The `id` field is automatically added if not present
- **State Recovery**: Failed or restarted flows can automatically reload their previous state
- **Custom Implementation**: You can provide your own FlowPersistence implementation for specialized storage needs (see the sketch below)
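The sketch below is a rough illustration of that extension point only: the `FlowPersistence` import path and the `init_db` / `save_state` / `load_state` method names are assumptions based on the description above, so verify them against the interface in your installed version before relying on them.
```python
from typing import Any, Dict, Optional, Union

from pydantic import BaseModel

# Import path and method names are assumptions; check your crewai version.
from crewai.flow.persistence import FlowPersistence

class InMemoryFlowPersistence(FlowPersistence):
    """Keeps flow states in a plain dict instead of SQLite (for tests or demos)."""

    def __init__(self) -> None:
        self._states: Dict[str, Dict[str, Any]] = {}

    def init_db(self) -> None:
        # Nothing to set up for an in-memory store
        self._states = {}

    def save_state(self, flow_uuid: str, method_name: str,
                   state_data: Union[Dict[str, Any], BaseModel]) -> None:
        # Normalize structured and unstructured states to a plain dict
        data = state_data.model_dump() if isinstance(state_data, BaseModel) else dict(state_data)
        self._states[flow_uuid] = data

    def load_state(self, flow_uuid: str) -> Optional[Dict[str, Any]]:
        return self._states.get(flow_uuid)
```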
### Technical Advantages
1. **Precise Control Through Low-Level Access**
- Direct access to persistence operations for advanced use cases
- Fine-grained control via method-level persistence decorators
- Built-in state inspection and debugging capabilities
- Full visibility into state changes and persistence operations
2. **Enhanced Reliability**
- Automatic state recovery after system failures or restarts
- Transaction-based state updates for data integrity
- Comprehensive error handling with clear error messages
- Robust validation during state save and load operations
3. **Extensible Architecture**
- Customizable persistence backend through FlowPersistence interface
- Support for specialized storage solutions beyond SQLite
- Compatible with both structured (Pydantic) and unstructured (dict) states
- Seamless integration with existing CrewAI flow patterns
The persistence system's architecture emphasizes technical precision and customization options, allowing developers to maintain full control over state management while benefiting from built-in reliability features.
## Flow Control
### Conditional Logic: `or`
The `or_` function in Flows allows you to listen to multiple methods and trigger the listener method when any of the specified methods emit an output.
<CodeGroup>
```python Code
from crewai.flow.flow import Flow, listen, or_, start
class OrExampleFlow(Flow):
@start()
def start_method(self):
return "Hello from the start method"
@listen(start_method)
def second_method(self):
return "Hello from the second method"
@listen(or_(start_method, second_method))
def logger(self, result):
print(f"Logger: {result}")
flow = OrExampleFlow()
flow.plot("my_flow_plot")
flow.kickoff()
```
```text Output
Logger: Hello from the start method
Logger: Hello from the second method
```
</CodeGroup>
![Flow Visual image](/images/crewai-flow-4.png)
When you run this Flow, the `logger` method is triggered each time either the `start_method` or the `second_method` emits an output, which is why both messages appear in the log.
### Conditional Logic: `and`
The `and_` function in Flows allows you to listen to multiple methods and trigger the listener method only when all the specified methods emit an output.
<CodeGroup>
```python Code
from crewai.flow.flow import Flow, and_, listen, start
class AndExampleFlow(Flow):
@start()
def start_method(self):
self.state["greeting"] = "Hello from the start method"
@listen(start_method)
def second_method(self):
self.state["joke"] = "What do computers eat? Microchips."
@listen(and_(start_method, second_method))
def logger(self):
print("---- Logger ----")
print(self.state)
flow = AndExampleFlow()
flow.plot()
flow.kickoff()
```
```text Output
---- Logger ----
{'greeting': 'Hello from the start method', 'joke': 'What do computers eat? Microchips.'}
```
</CodeGroup>
![Flow Visual image](/images/crewai-flow-5.png)
When you run this Flow, the `logger` method is triggered only once both the `start_method` and the `second_method` have emitted an output.
### Router
The `@router()` decorator in Flows allows you to define conditional routing logic based on the output of a method.
You can specify different routes based on the output of the method, allowing you to control the flow of execution dynamically.
<CodeGroup>
```python Code
import random
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel
class ExampleState(BaseModel):
success_flag: bool = False
class RouterFlow(Flow[ExampleState]):
@start()
def start_method(self):
print("Starting the structured flow")
random_boolean = random.choice([True, False])
self.state.success_flag = random_boolean
@router(start_method)
def second_method(self):
if self.state.success_flag:
return "success"
else:
return "failed"
@listen("success")
def third_method(self):
print("Third method running")
@listen("failed")
def fourth_method(self):
print("Fourth method running")
flow = RouterFlow()
flow.plot("my_flow_plot")
flow.kickoff()
```
```text Output
Starting the structured flow
Third method running
```
</CodeGroup>
![Flow Visual image](/images/crewai-flow-6.png)
In the above example, the `start_method` generates a random boolean value and sets it in the state.
The `second_method` uses the `@router()` decorator to define conditional routing logic based on the value of the boolean.
If the boolean is `True`, the method returns `"success"`, and if it is `False`, the method returns `"failed"`.
The `third_method` and `fourth_method` listen to the output of the `second_method` and execute based on the returned value.
When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.
## Adding Agents to Flows
Agents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use an Agent within a flow to perform market research:
```python
import asyncio
from typing import Any, Dict, List
from crewai_tools import SerperDevTool
from pydantic import BaseModel, Field
from crewai.agent import Agent
from crewai.flow.flow import Flow, listen, start
# Define a structured output format
class MarketAnalysis(BaseModel):
key_trends: List[str] = Field(description="List of identified market trends")
market_size: str = Field(description="Estimated market size")
competitors: List[str] = Field(description="Major competitors in the space")
# Define flow state
class MarketResearchState(BaseModel):
product: str = ""
analysis: MarketAnalysis | None = None
# Create a flow class
class MarketResearchFlow(Flow[MarketResearchState]):
@start()
def initialize_research(self) -> Dict[str, Any]:
print(f"Starting market research for {self.state.product}")
return {"product": self.state.product}
@listen(initialize_research)
async def analyze_market(self) -> Dict[str, Any]:
# Create an Agent for market research
analyst = Agent(
role="Market Research Analyst",
goal=f"Analyze the market for {self.state.product}",
backstory="You are an experienced market analyst with expertise in "
"identifying market trends and opportunities.",
tools=[SerperDevTool()],
verbose=True,
)
# Define the research query
query = f"""
Research the market for {self.state.product}. Include:
1. Key market trends
2. Market size
3. Major competitors
Format your response according to the specified structure.
"""
# Execute the analysis with structured output format
result = await analyst.kickoff_async(query, response_format=MarketAnalysis)
if result.pydantic:
print("result", result.pydantic)
else:
print("result", result)
# Return the analysis to update the state
return {"analysis": result.pydantic}
@listen(analyze_market)
def present_results(self, analysis) -> None:
print("\nMarket Analysis Results")
print("=====================")
if isinstance(analysis, dict):
# If we got a dict with 'analysis' key, extract the actual analysis object
market_analysis = analysis.get("analysis")
else:
market_analysis = analysis
if market_analysis and isinstance(market_analysis, MarketAnalysis):
print("\nKey Market Trends:")
for trend in market_analysis.key_trends:
print(f"- {trend}")
print(f"\nMarket Size: {market_analysis.market_size}")
print("\nMajor Competitors:")
for competitor in market_analysis.competitors:
print(f"- {competitor}")
else:
print("No structured analysis data available.")
print("Raw analysis:", analysis)
# Usage example
async def run_flow():
flow = MarketResearchFlow()
flow.plot("MarketResearchFlowPlot")
result = await flow.kickoff_async(inputs={"product": "AI-powered chatbots"})
return result
# Run the flow
if __name__ == "__main__":
asyncio.run(run_flow())
```
![Flow Visual image](/images/crewai-flow-7.png)
This example demonstrates several key features of using Agents in flows:
1. **Structured Output**: Using Pydantic models to define the expected output format (`MarketAnalysis`) ensures type safety and structured data throughout the flow.
2. **State Management**: The flow state (`MarketResearchState`) maintains context between steps and stores both inputs and outputs.
3. **Tool Integration**: Agents can use tools (like `SerperDevTool`) to enhance their capabilities.
## Adding Crews to Flows
Creating a flow with multiple crews in CrewAI is straightforward.
You can generate a new CrewAI project that includes all the scaffolding needed to create a flow with multiple crews by running the following command:
```bash
crewai create flow name_of_flow
```
This command will generate a new CrewAI project with the necessary folder structure. The generated project includes a prebuilt crew called `poem_crew` that is already working. You can use this crew as a template by copying, pasting, and editing it to create other crews.
### Folder Structure
After running the `crewai create flow name_of_flow` command, you will see a folder structure similar to the following:
| Directory/File | Description |
| :--------------------- | :----------------------------------------------------------------- |
| `name_of_flow/` | Root directory for the flow. |
| ├── `crews/` | Contains directories for specific crews. |
| │ └── `poem_crew/` | Directory for the "poem_crew" with its configurations and scripts. |
| │ ├── `config/` | Configuration files directory for the "poem_crew". |
| │ │ ├── `agents.yaml` | YAML file defining the agents for "poem_crew". |
| │ │ └── `tasks.yaml` | YAML file defining the tasks for "poem_crew". |
| │ ├── `poem_crew.py` | Script for "poem_crew" functionality. |
| ├── `tools/` | Directory for additional tools used in the flow. |
| │ └── `custom_tool.py` | Custom tool implementation. |
| ├── `main.py` | Main script for running the flow. |
| ├── `README.md` | Project description and instructions. |
| ├── `pyproject.toml` | Configuration file for project dependencies and settings. |
| └── `.gitignore` | Specifies files and directories to ignore in version control. |
### Building Your Crews
In the `crews` folder, you can define multiple crews. Each crew will have its own folder containing configuration files and the crew definition file. For example, the `poem_crew` folder contains:
- `config/agents.yaml`: Defines the agents for the crew.
- `config/tasks.yaml`: Defines the tasks for the crew.
- `poem_crew.py`: Contains the crew definition, including agents, tasks, and the crew itself.
You can copy, paste, and edit the `poem_crew` to create other crews.
### Connecting Crews in `main.py`
The `main.py` file is where you create your flow and connect the crews together. You can define your flow by using the `Flow` class and the decorators `@start` and `@listen` to specify the flow of execution.
Here's an example of how you can connect the `poem_crew` in the `main.py` file:
```python Code
#!/usr/bin/env python
from random import randint
from pydantic import BaseModel
from crewai.flow.flow import Flow, listen, start
from .crews.poem_crew.poem_crew import PoemCrew
class PoemState(BaseModel):
sentence_count: int = 1
poem: str = ""
class PoemFlow(Flow[PoemState]):
@start()
def generate_sentence_count(self):
print("Generating sentence count")
self.state.sentence_count = randint(1, 5)
@listen(generate_sentence_count)
def generate_poem(self):
print("Generating poem")
result = PoemCrew().crew().kickoff(inputs={"sentence_count": self.state.sentence_count})
print("Poem generated", result.raw)
self.state.poem = result.raw
@listen(generate_poem)
def save_poem(self):
print("Saving poem")
with open("poem.txt", "w") as f:
f.write(self.state.poem)
def kickoff():
poem_flow = PoemFlow()
poem_flow.kickoff()
def plot():
poem_flow = PoemFlow()
poem_flow.plot("PoemFlowPlot")
if __name__ == "__main__":
kickoff()
plot()
```
In this example, the `PoemFlow` class defines a flow that generates a sentence count, uses the `PoemCrew` to generate a poem, and then saves the poem to a file. The flow is kicked off by calling the `kickoff()` method, and the PoemFlowPlot is generated by the `plot()` method.
![Flow Visual image](/images/crewai-flow-8.png)
### Running the Flow
(Optional) Before running the flow, you can install the dependencies by running:
```bash
crewai install
```
Once all of the dependencies are installed, you need to activate the virtual environment by running:
```bash
source .venv/bin/activate
```
After activating the virtual environment, you can run the flow by executing one of the following commands:
```bash
crewai flow kickoff
```
or
```bash
uv run kickoff
```
The flow will execute, and you should see the output in the console.
## Plot Flows
Visualizing your AI workflows can provide valuable insights into the structure and execution paths of your flows. CrewAI offers a powerful visualization tool that allows you to generate interactive plots of your flows, making it easier to understand and optimize your AI workflows.
### What are Plots?
Plots in CrewAI are graphical representations of your AI workflows. They display the various tasks, their connections, and the flow of data between them. This visualization helps in understanding the sequence of operations, identifying bottlenecks, and ensuring that the workflow logic aligns with your expectations.
### How to Generate a Plot
CrewAI provides two convenient methods to generate plots of your flows:
#### Option 1: Using the `plot()` Method
If you are working directly with a flow instance, you can generate a plot by calling the `plot()` method on your flow object. This method will create an HTML file containing the interactive plot of your flow.
```python Code
# Assuming you have a flow instance
flow.plot("my_flow_plot")
```
This will generate a file named `my_flow_plot.html` in your current directory. You can open this file in a web browser to view the interactive plot.
#### Option 2: Using the Command Line
If you are working within a structured CrewAI project, you can generate a plot using the command line. This is particularly useful for larger projects where you want to visualize the entire flow setup.
```bash
crewai flow plot
```
This command will generate an HTML file with the plot of your flow, similar to the `plot()` method. The file will be saved in your project directory, and you can open it in a web browser to explore the flow.
### Understanding the Plot
The generated plot will display nodes representing the tasks in your flow, with directed edges indicating the flow of execution. The plot is interactive, allowing you to zoom in and out, and hover over nodes to see additional details.
By visualizing your flows, you can gain a clearer understanding of the workflow's structure, making it easier to debug, optimize, and communicate your AI processes to others.
### Conclusion
Plotting your flows is a powerful feature of CrewAI that enhances your ability to design and manage complex AI workflows. Whether you choose to use the `plot()` method or the command line, generating plots will provide you with a visual representation of your workflows, aiding in both development and presentation.
## Next Steps
If you're interested in exploring additional examples of flows, we have a variety of recommendations in our examples repository. Here are four specific flow examples, each showcasing unique use cases to help you match your current problem type to a specific example:
1. **Email Auto Responder Flow**: This example demonstrates an infinite loop where a background job continually runs to automate email responses. It's a great use case for tasks that need to be performed repeatedly without manual intervention. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/email_auto_responder_flow)
2. **Lead Score Flow**: This flow showcases adding human-in-the-loop feedback and handling different conditional branches using the router. It's an excellent example of how to incorporate dynamic decision-making and human oversight into your workflows. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/lead-score-flow)
3. **Write a Book Flow**: This example excels at chaining multiple crews together, where the output of one crew is used by another. Specifically, one crew outlines an entire book, and another crew generates chapters based on the outline. Eventually, everything is connected to produce a complete book. This flow is perfect for complex, multi-step processes that require coordination between different tasks. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/write_a_book_with_flows)
4. **Meeting Assistant Flow**: This flow demonstrates how to broadcast one event to trigger multiple follow-up actions. For instance, after a meeting is completed, the flow can update a Trello board, send a Slack message, and save the results. It's a great example of handling multiple outcomes from a single event, making it ideal for comprehensive task management and notification systems. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow)
By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.
Also, check out our YouTube video on how to use flows in CrewAI below!
<iframe
className="w-full aspect-video rounded-xl"
src="https://www.youtube.com/embed/MTb5my6VOT8"
title="CrewAI Flows overview"
frameBorder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerPolicy="strict-origin-when-cross-origin"
allowFullScreen
></iframe>
## Running Flows
There are two ways to run a flow:
### Using the Flow API
You can run a flow programmatically by creating an instance of your flow class and calling the `kickoff()` method:
```python
flow = ExampleFlow()
result = flow.kickoff()
```
### Using the CLI
Starting from version 0.103.0, you can run flows using the `crewai run` command:
```shell
crewai run
```
This command automatically detects if your project is a flow (based on the `type = "flow"` setting in your pyproject.toml) and runs it accordingly. This is the recommended way to run flows from the command line.
For backward compatibility, you can also use:
```shell
crewai flow kickoff
```
However, the `crewai run` command is now the preferred method as it works for both crews and flows.

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@@ -1,155 +0,0 @@
---
title: Planning
description: Learn how to add planning to your CrewAI Crew and improve their performance.
icon: ruler-combined
mode: "wide"
---
## Overview
The planning feature in CrewAI allows you to add planning capability to your crew. When enabled, before each Crew iteration,
all Crew information is sent to an AgentPlanner that will plan the tasks step by step, and this plan will be added to each task description.
### Using the Planning Feature
Getting started with the planning feature is very easy: the only step required is to add `planning=True` to your Crew:
<CodeGroup>
```python Code
from crewai import Crew, Agent, Task, Process
# Assemble your crew with planning capabilities
my_crew = Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
planning=True,
)
```
</CodeGroup>
From this point on, your crew will have planning enabled, and the tasks will be planned before each iteration.
<Warning>
When planning is enabled, crewAI will use `gpt-4o-mini` as the default LLM for planning, which requires a valid OpenAI API key. Since your agents might be using different LLMs, this could cause confusion if you don't have an OpenAI API key configured or if you're experiencing unexpected behavior related to LLM API calls.
</Warning>
#### Planning LLM
Now you can define the LLM that will be used to plan the tasks.
When running the base case example, you will see something like the output below, which represents the output of the `AgentPlanner`
responsible for creating the step-by-step logic to add to the Agents' tasks.
<CodeGroup>
```python Code
from crewai import Crew, Agent, Task, Process
# Assemble your crew with planning capabilities and custom LLM
my_crew = Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
planning=True,
planning_llm="gpt-4o"
)
# Run the crew
my_crew.kickoff()
```
```markdown Result
[2024-07-15 16:49:11][INFO]: Planning the crew execution
**Step-by-Step Plan for Task Execution**
**Task Number 1: Conduct a thorough research about AI LLMs**
**Agent:** AI LLMs Senior Data Researcher
**Agent Goal:** Uncover cutting-edge developments in AI LLMs
**Task Expected Output:** A list with 10 bullet points of the most relevant information about AI LLMs
**Task Tools:** None specified
**Agent Tools:** None specified
**Step-by-Step Plan:**
1. **Define Research Scope:**
- Determine the specific areas of AI LLMs to focus on, such as advancements in architecture, use cases, ethical considerations, and performance metrics.
2. **Identify Reliable Sources:**
- List reputable sources for AI research, including academic journals, industry reports, conferences (e.g., NeurIPS, ACL), AI research labs (e.g., OpenAI, Google AI), and online databases (e.g., IEEE Xplore, arXiv).
3. **Collect Data:**
- Search for the latest papers, articles, and reports published in 2024 and early 2025.
- Use keywords like "Large Language Models 2025", "AI LLM advancements", "AI ethics 2025", etc.
4. **Analyze Findings:**
- Read and summarize the key points from each source.
- Highlight new techniques, models, and applications introduced in the past year.
5. **Organize Information:**
- Categorize the information into relevant topics (e.g., new architectures, ethical implications, real-world applications).
- Ensure each bullet point is concise but informative.
6. **Create the List:**
- Compile the 10 most relevant pieces of information into a bullet point list.
- Review the list to ensure clarity and relevance.
**Expected Output:**
A list with 10 bullet points of the most relevant information about AI LLMs.
---
**Task Number 2: Review the context you got and expand each topic into a full section for a report**
**Agent:** AI LLMs Reporting Analyst
**Agent Goal:** Create detailed reports based on AI LLMs data analysis and research findings
**Task Expected Output:** A fully fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'
**Task Tools:** None specified
**Agent Tools:** None specified
**Step-by-Step Plan:**
1. **Review the Bullet Points:**
- Carefully read through the list of 10 bullet points provided by the AI LLMs Senior Data Researcher.
2. **Outline the Report:**
- Create an outline with each bullet point as a main section heading.
- Plan sub-sections under each main heading to cover different aspects of the topic.
3. **Research Further Details:**
- For each bullet point, conduct additional research if necessary to gather more detailed information.
- Look for case studies, examples, and statistical data to support each section.
4. **Write Detailed Sections:**
- Expand each bullet point into a comprehensive section.
- Ensure each section includes an introduction, detailed explanation, examples, and a conclusion.
- Use markdown formatting for headings, subheadings, lists, and emphasis.
5. **Review and Edit:**
- Proofread the report for clarity, coherence, and correctness.
- Make sure the report flows logically from one section to the next.
- Format the report according to markdown standards.
6. **Finalize the Report:**
- Ensure the report is complete with all sections expanded and detailed.
- Double-check formatting and make any necessary adjustments.
**Expected Output:**
A fully fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'.
```
</CodeGroup>

View File

@@ -1,67 +0,0 @@
---
title: Processes
description: Detailed guide on workflow management through processes in CrewAI, with updated implementation details.
icon: bars-staggered
mode: "wide"
---
## Overview
<Tip>
Processes orchestrate the execution of tasks by agents, akin to project management in human teams.
These processes ensure tasks are distributed and executed efficiently, in alignment with a predefined strategy.
</Tip>
## Process Implementations
- **Sequential**: Executes tasks sequentially, ensuring tasks are completed in an orderly progression.
- **Hierarchical**: Organizes tasks in a managerial hierarchy, where tasks are delegated and executed based on a structured chain of command. A manager language model (`manager_llm`) or a custom manager agent (`manager_agent`) must be specified in the crew to enable the hierarchical process, facilitating the creation and management of tasks by the manager.
- **Consensual Process (Planned)**: Aiming for collaborative decision-making among agents on task execution, this process type introduces a democratic approach to task management within CrewAI. It is planned for future development and is not currently implemented in the codebase.
## The Role of Processes in Teamwork
Processes enable individual agents to operate as a cohesive unit, streamlining their efforts to achieve common objectives with efficiency and coherence.
## Assigning Processes to a Crew
To assign a process to a crew, specify the process type upon crew creation to set the execution strategy. For a hierarchical process, ensure to define `manager_llm` or `manager_agent` for the manager agent.
```python
from crewai import Crew, Process
# Example: Creating a crew with a sequential process
crew = Crew(
agents=my_agents,
tasks=my_tasks,
process=Process.sequential
)
# Example: Creating a crew with a hierarchical process
# Ensure to provide a manager_llm or manager_agent
crew = Crew(
agents=my_agents,
tasks=my_tasks,
process=Process.hierarchical,
manager_llm="gpt-4o"
# or
# manager_agent=my_manager_agent
)
```
**Note:** Ensure `my_agents` and `my_tasks` are defined prior to creating a `Crew` object, and for the hierarchical process, either `manager_llm` or `manager_agent` is also required.
## Sequential Process
This method mirrors dynamic team workflows, progressing through tasks in a thoughtful and systematic manner. Task execution follows the predefined order in the task list, with the output of one task serving as context for the next.
To customize task context, utilize the `context` parameter in the `Task` class to specify outputs that should be used as context for subsequent tasks.
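For example (a brief sketch; the `researcher` and `writer` agents are placeholders assumed to be defined elsewhere):
```python
from crewai import Task

research = Task(
    description="Research recent AI developments",
    expected_output="A list of key findings",
    agent=researcher,  # defined elsewhere
)

summary = Task(
    description="Summarize the research findings",
    expected_output="A short summary",
    agent=writer,  # defined elsewhere
    context=[research],  # only research's output is passed in as context
)
```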
## Hierarchical Process
Emulating a corporate hierarchy, CrewAI lets you specify a custom manager agent or automatically creates one when you provide a manager language model (`manager_llm`). This agent oversees task execution, including planning, delegation, and validation. Tasks are not pre-assigned; the manager allocates tasks to agents based on their capabilities, reviews outputs, and assesses task completion.
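For example, you can supply a custom manager instead of `manager_llm` (the role, goal, and backstory below are illustrative, and `my_agents`/`my_tasks` are assumed to be defined as in the earlier snippet):
```python
from crewai import Agent, Crew, Process

manager = Agent(
    role="Project Manager",
    goal="Coordinate the team and ensure tasks meet the requirements",
    backstory="An experienced manager who delegates work and reviews results.",
    allow_delegation=True,
)

crew = Crew(
    agents=my_agents,   # worker agents, defined elsewhere
    tasks=my_tasks,     # tasks, defined elsewhere
    process=Process.hierarchical,
    manager_agent=manager,  # used in place of manager_llm
)
```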
## Process Class: Detailed Overview
The `Process` class is implemented as an enumeration (`Enum`), ensuring type safety and restricting process values to the defined types (`sequential`, `hierarchical`). The consensual process is planned for future inclusion, emphasizing our commitment to continuous development and innovation.
## Conclusion
The structured collaboration facilitated by processes within CrewAI is crucial for enabling systematic teamwork among agents.
This documentation has been updated to reflect the latest features, enhancements, and the planned integration of the Consensual Process, ensuring users have access to the most current and comprehensive information.

View File

@@ -1,148 +0,0 @@
---
title: Reasoning
description: "Learn how to enable and use agent reasoning to improve task execution."
icon: brain
mode: "wide"
---
## Overview
Agent reasoning is a feature that allows agents to reflect on a task and create a plan before execution. This helps agents approach tasks more methodically and ensures they're ready to perform the assigned work.
## Usage
To enable reasoning for an agent, simply set `reasoning=True` when creating the agent:
```python
from crewai import Agent
agent = Agent(
role="Data Analyst",
goal="Analyze complex datasets and provide insights",
backstory="You are an experienced data analyst with expertise in finding patterns in complex data.",
reasoning=True, # Enable reasoning
max_reasoning_attempts=3 # Optional: Set a maximum number of reasoning attempts
)
```
## How It Works
When reasoning is enabled, before executing a task, the agent will:
1. Reflect on the task and create a detailed plan
2. Evaluate whether it's ready to execute the task
3. Refine the plan as necessary until it's ready or max_reasoning_attempts is reached
4. Inject the reasoning plan into the task description before execution
This process helps the agent break down complex tasks into manageable steps and identify potential challenges before starting.
## Configuration Options
<ParamField body="reasoning" type="bool" default="False">
Enable or disable reasoning
</ParamField>
<ParamField body="max_reasoning_attempts" type="int" default="None">
Maximum number of attempts to refine the plan before proceeding with execution. If None (default), the agent will continue refining until it's ready.
</ParamField>
## Example
Here's a complete example:
```python
from crewai import Agent, Task, Crew
# Create an agent with reasoning enabled
analyst = Agent(
role="Data Analyst",
goal="Analyze data and provide insights",
backstory="You are an expert data analyst.",
reasoning=True,
max_reasoning_attempts=3 # Optional: Set a limit on reasoning attempts
)
# Create a task
analysis_task = Task(
description="Analyze the provided sales data and identify key trends.",
expected_output="A report highlighting the top 3 sales trends.",
agent=analyst
)
# Create a crew and run the task
crew = Crew(agents=[analyst], tasks=[analysis_task])
result = crew.kickoff()
print(result)
```
## Error Handling
The reasoning process is designed to be robust, with error handling built in. If an error occurs during reasoning, the agent will proceed with executing the task without the reasoning plan. This ensures that tasks can still be executed even if the reasoning process fails.
Here's how to handle potential errors in your code:
```python
from crewai import Agent, Task
import logging
# Set up logging to capture any reasoning errors
logging.basicConfig(level=logging.INFO)
# Create an agent with reasoning enabled
agent = Agent(
role="Data Analyst",
goal="Analyze data and provide insights",
reasoning=True,
max_reasoning_attempts=3
)
# Create a task
task = Task(
description="Analyze the provided sales data and identify key trends.",
expected_output="A report highlighting the top 3 sales trends.",
agent=agent
)
# Execute the task
# If an error occurs during reasoning, it will be logged and execution will continue
result = agent.execute_task(task)
```
## Example Reasoning Output
Here's an example of what a reasoning plan might look like for a data analysis task:
```
Task: Analyze the provided sales data and identify key trends.
Reasoning Plan:
I'll analyze the sales data to identify the top 3 trends.
1. Understanding of the task:
I need to analyze sales data to identify key trends that would be valuable for business decision-making.
2. Key steps I'll take:
- First, I'll examine the data structure to understand what fields are available
- Then I'll perform exploratory data analysis to identify patterns
- Next, I'll analyze sales by time periods to identify temporal trends
- I'll also analyze sales by product categories and customer segments
- Finally, I'll identify the top 3 most significant trends
3. Approach to challenges:
- If the data has missing values, I'll decide whether to fill or filter them
- If the data has outliers, I'll investigate whether they're valid data points or errors
- If trends aren't immediately obvious, I'll apply statistical methods to uncover patterns
4. Use of available tools:
- I'll use data analysis tools to explore and visualize the data
- I'll use statistical tools to identify significant patterns
- I'll use knowledge retrieval to access relevant information about sales analysis
5. Expected outcome:
A concise report highlighting the top 3 sales trends with supporting evidence from the data.
READY: I am ready to execute the task.
```
This reasoning plan helps the agent organize its approach to the task, consider potential challenges, and ensure it delivers the expected output.

View File

@@ -1,914 +0,0 @@
---
title: Tasks
description: Detailed guide on managing and creating tasks within the CrewAI framework.
icon: list-check
mode: "wide"
---
## Overview
In the CrewAI framework, a `Task` is a specific assignment completed by an `Agent`.
Tasks provide all necessary details for execution, such as a description, the agent responsible, required tools, and more, facilitating a wide range of action complexities.
Tasks within CrewAI can be collaborative, requiring multiple agents to work together. This is managed through the task properties and orchestrated by the Crew's process, enhancing teamwork and efficiency.
<Note type="info" title="Enterprise Enhancement: Visual Task Builder">
CrewAI AMP includes a Visual Task Builder in Crew Studio that simplifies complex task creation and chaining. Design your task flows visually and test them in real-time without writing code.
![Task Builder Screenshot](/images/enterprise/crew-studio-interface.png)
The Visual Task Builder enables:
- Drag-and-drop task creation
- Visual task dependencies and flow
- Real-time testing and validation
- Easy sharing and collaboration
</Note>
### Task Execution Flow
Tasks can be executed in two ways:
- **Sequential**: Tasks are executed in the order they are defined
- **Hierarchical**: Tasks are assigned to agents based on their roles and expertise
The execution flow is defined when creating the crew:
```python Code
crew = Crew(
agents=[agent1, agent2],
tasks=[task1, task2],
process=Process.sequential # or Process.hierarchical
)
```
## Task Attributes
| Attribute | Parameters | Type | Description |
| :------------------------------- | :---------------- | :---------------------------- | :------------------------------------------------------------------------------------------------------------------- |
| **Description** | `description` | `str` | A clear, concise statement of what the task entails. |
| **Expected Output** | `expected_output` | `str` | A detailed description of what the task's completion looks like. |
| **Name** _(optional)_ | `name` | `Optional[str]` | A name identifier for the task. |
| **Agent** _(optional)_ | `agent` | `Optional[BaseAgent]` | The agent responsible for executing the task. |
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | The tools/resources the agent is limited to use for this task. |
| **Context** _(optional)_ | `context` | `Optional[List["Task"]]` | Other tasks whose outputs will be used as context for this task. |
| **Async Execution** _(optional)_ | `async_execution` | `Optional[bool]` | Whether the task should be executed asynchronously. Defaults to False. |
| **Human Input** _(optional)_ | `human_input` | `Optional[bool]` | Whether the task should have a human review the final answer of the agent. Defaults to False. |
| **Markdown** _(optional)_ | `markdown` | `Optional[bool]` | Whether the task should instruct the agent to return the final answer formatted in Markdown. Defaults to False. |
| **Config** _(optional)_ | `config` | `Optional[Dict[str, Any]]` | Task-specific configuration parameters. |
| **Output File** _(optional)_ | `output_file` | `Optional[str]` | File path for storing the task output. |
| **Create Directory** _(optional)_ | `create_directory` | `Optional[bool]` | Whether to create the directory for output_file if it doesn't exist. Defaults to True. |
| **Output JSON** _(optional)_ | `output_json` | `Optional[Type[BaseModel]]` | A Pydantic model to structure the JSON output. |
| **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | A Pydantic model for task output. |
| **Callback** _(optional)_ | `callback` | `Optional[Any]` | Function/object to be executed after task completion. |
| **Guardrail** _(optional)_ | `guardrail` | `Optional[Callable]` | Function to validate task output before proceeding to next task. |
| **Guardrail Max Retries** _(optional)_ | `guardrail_max_retries` | `Optional[int]` | Maximum number of retries when guardrail validation fails. Defaults to 3. |
<Note type="warning" title="Deprecated: max_retries">
The task attribute `max_retries` is deprecated and will be removed in v1.0.0.
Use `guardrail_max_retries` instead to control retry attempts when a guardrail fails.
</Note>
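A minimal sketch of the new parameter (`blog_agent` and `validate_blog_content` are placeholders defined elsewhere, as in the guardrail examples later on this page):
```python Code
from crewai import Task

blog_task = Task(
    description="Write a blog post about AI",
    expected_output="A blog post under 200 words",
    agent=blog_agent,                 # assumed to be defined elsewhere
    guardrail=validate_blog_content,  # your validation function
    guardrail_max_retries=5,          # replaces the deprecated max_retries
)
```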
## Creating Tasks
There are two ways to create tasks in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**.
### YAML Configuration (Recommended)
Using YAML configuration provides a cleaner, more maintainable way to define tasks. We strongly recommend using this approach to define tasks in your CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/en/installation) section, navigate to the `src/latest_ai_development/config/tasks.yaml` file and modify the template to match your specific task requirements.
<Note>
Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew:
```python Code
crew.kickoff(inputs={'topic': 'AI Agents'})
```
</Note>
Here's an example of how to configure tasks using YAML:
```yaml tasks.yaml
research_task:
description: >
    Conduct thorough research about {topic}
Make sure you find any interesting and relevant information given
the current year is 2025.
expected_output: >
A list with 10 bullet points of the most relevant information about {topic}
agent: researcher
reporting_task:
description: >
Review the context you got and expand each topic into a full section for a report.
Make sure the report is detailed and contains any and all relevant information.
expected_output: >
    A fully fledged report with the main topics, each with a full section of information.
Formatted as markdown without '```'
agent: reporting_analyst
markdown: true
output_file: report.md
```
To use this YAML configuration in your code, create a crew class that inherits from `CrewBase`:
```python crew.py
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
@CrewBase
class LatestAiDevelopmentCrew():
"""LatestAiDevelopment crew"""
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'], # type: ignore[index]
verbose=True,
tools=[SerperDevTool()]
)
@agent
def reporting_analyst(self) -> Agent:
return Agent(
config=self.agents_config['reporting_analyst'], # type: ignore[index]
verbose=True
)
@task
def research_task(self) -> Task:
return Task(
config=self.tasks_config['research_task'] # type: ignore[index]
)
@task
def reporting_task(self) -> Task:
return Task(
config=self.tasks_config['reporting_task'] # type: ignore[index]
)
@crew
def crew(self) -> Crew:
return Crew(
agents=[
self.researcher(),
self.reporting_analyst()
],
tasks=[
self.research_task(),
self.reporting_task()
],
process=Process.sequential
)
```
<Note>
The names you use in your YAML files (`agents.yaml` and `tasks.yaml`) should match the method names in your Python code.
</Note>
### Direct Code Definition (Alternative)
Alternatively, you can define tasks directly in your code without using YAML configuration:
```python task.py
from crewai import Task
research_task = Task(
description="""
        Conduct thorough research about AI Agents.
Make sure you find any interesting and relevant information given
the current year is 2025.
""",
expected_output="""
A list with 10 bullet points of the most relevant information about AI Agents
""",
agent=researcher
)
reporting_task = Task(
description="""
Review the context you got and expand each topic into a full section for a report.
Make sure the report is detailed and contains any and all relevant information.
""",
expected_output="""
        A fully fledged report with the main topics, each with a full section of information.
""",
agent=reporting_analyst,
markdown=True, # Enable markdown formatting for the final output
output_file="report.md"
)
```
<Tip>
Directly specify an `agent` for assignment, or let CrewAI's `hierarchical` process decide based on roles, availability, etc.
</Tip>
## Task Output
Understanding task outputs is crucial for building effective AI workflows. CrewAI provides a structured way to handle task results through the `TaskOutput` class, which supports multiple output formats and can be easily passed between tasks.
The output of a task in CrewAI framework is encapsulated within the `TaskOutput` class. This class provides a structured way to access results of a task, including various formats such as raw output, JSON, and Pydantic models.
By default, the `TaskOutput` will only include the `raw` output. A `TaskOutput` will only include the `pydantic` or `json_dict` output if the original `Task` object was configured with `output_pydantic` or `output_json`, respectively.
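For instance, configuring a task with `output_pydantic` is what makes the `pydantic` field available on its output (a brief sketch; `research_agent` is assumed to be defined elsewhere):
```python Code
from typing import List

from pydantic import BaseModel

from crewai import Task

class NewsSummary(BaseModel):
    headline: str
    bullet_points: List[str]

summary_task = Task(
    description="Summarize the latest AI news",
    expected_output="A headline plus supporting bullet points",
    agent=research_agent,         # assumed to be defined elsewhere
    output_pydantic=NewsSummary,  # task_output.pydantic will hold a NewsSummary
)
```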
### Task Output Attributes
| Attribute | Parameters | Type | Description |
| :---------------- | :-------------- | :------------------------- | :------------------------------------------------------------------------------------------------- |
| **Description** | `description` | `str` | Description of the task. |
| **Summary** | `summary` | `Optional[str]` | Summary of the task, auto-generated from the first 10 words of the description. |
| **Raw** | `raw` | `str` | The raw output of the task. This is the default format for the output. |
| **Pydantic** | `pydantic` | `Optional[BaseModel]` | A Pydantic model object representing the structured output of the task. |
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the task. |
| **Agent** | `agent` | `str` | The agent that executed the task. |
| **Output Format** | `output_format` | `OutputFormat` | The format of the task output, with options including RAW, JSON, and Pydantic. The default is RAW. |
### Task Methods and Properties
| Method/Property | Description |
| :-------------- | :------------------------------------------------------------------------------------------------ |
| **json** | Returns the JSON string representation of the task output if the output format is JSON. |
| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
| **str** | Returns the string representation of the task output, prioritizing Pydantic, then JSON, then raw. |
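For instance, once a task has run you can convert its output like this (a quick sketch; `task_output` stands for the `TaskOutput` of a completed task):
```python Code
# task_output is the TaskOutput of a completed task
as_dict = task_output.to_dict()  # dict built from the JSON or Pydantic output
as_text = str(task_output)       # Pydantic first, then JSON, then raw text
print(as_dict, as_text)
```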
### Accessing Task Outputs
Once a task has been executed, its output can be accessed through the `output` attribute of the `Task` object. The `TaskOutput` class provides various ways to interact with and present this output.
#### Example
```python Code
# Example task
task = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool]
)
# Execute the crew
crew = Crew(
agents=[research_agent],
tasks=[task],
verbose=True
)
result = crew.kickoff()
# Accessing the task output
task_output = task.output
print(f"Task Description: {task_output.description}")
print(f"Task Summary: {task_output.summary}")
print(f"Raw Output: {task_output.raw}")
if task_output.json_dict:
print(f"JSON Output: {json.dumps(task_output.json_dict, indent=2)}")
if task_output.pydantic:
print(f"Pydantic Output: {task_output.pydantic}")
```
## Markdown Output Formatting
The `markdown` parameter enables automatic markdown formatting for task outputs. When set to `True`, the task will instruct the agent to format the final answer using proper Markdown syntax.
### Using Markdown Formatting
```python Code
# Example task with markdown formatting enabled
formatted_task = Task(
description="Create a comprehensive report on AI trends",
expected_output="A well-structured report with headers, sections, and bullet points",
agent=reporter_agent,
markdown=True # Enable automatic markdown formatting
)
```
When `markdown=True`, the agent will receive additional instructions to format the output using:
- `#` for headers
- `**text**` for bold text
- `*text*` for italic text
- `-` or `*` for bullet points
- `` `code` `` for inline code
- ``` ```language ``` for code blocks
### YAML Configuration with Markdown
```yaml tasks.yaml
analysis_task:
description: >
Analyze the market data and create a detailed report
expected_output: >
A comprehensive analysis with charts and key findings
agent: analyst
markdown: true # Enable markdown formatting
output_file: analysis.md
```
### Benefits of Markdown Output
- **Consistent Formatting**: Ensures all outputs follow proper markdown conventions
- **Better Readability**: Structured content with headers, lists, and emphasis
- **Documentation Ready**: Output can be directly used in documentation systems
- **Cross-Platform Compatibility**: Markdown is universally supported
<Note>
The markdown formatting instructions are automatically added to the task prompt when `markdown=True`, so you don't need to specify formatting requirements in your task description.
</Note>
## Task Dependencies and Context
Tasks can depend on the output of other tasks using the `context` attribute. For example:
```python Code
research_task = Task(
description="Research the latest developments in AI",
expected_output="A list of recent AI developments",
agent=researcher
)
analysis_task = Task(
description="Analyze the research findings and identify key trends",
expected_output="Analysis report of AI trends",
agent=analyst,
context=[research_task] # This task will wait for research_task to complete
)
```
## Task Guardrails
Task guardrails provide a way to validate and transform task outputs before they
are passed to the next task. This feature helps ensure data quality and provides
feedback to agents when their output doesn't meet specific criteria.
Guardrails are implemented as Python functions that contain custom validation logic, giving you complete control over the validation process and ensuring reliable, deterministic results.
### Function-Based Guardrails
To add a function-based guardrail to a task, provide a validation function through the `guardrail` parameter:
```python Code
from typing import Any, Tuple

from crewai import TaskOutput

def validate_blog_content(result: TaskOutput) -> Tuple[bool, Any]:
    """Validate blog content meets requirements."""
    try:
        # Check word count on the raw text output
        word_count = len(result.raw.split())
        if word_count > 200:
            return (False, "Blog content exceeds 200 words")

        # Additional validation logic here
        return (True, result.raw.strip())
    except Exception as e:
        return (False, f"Unexpected error during validation: {e}")
blog_task = Task(
description="Write a blog post about AI",
expected_output="A blog post under 200 words",
agent=blog_agent,
guardrail=validate_blog_content # Add the guardrail function
)
```
### Guardrail Function Requirements
1. **Function Signature**:
- Must accept exactly one parameter (the task output)
- Should return a tuple of `(bool, Any)`
- Type hints are recommended but optional
2. **Return Values**:
   - On success: return a tuple of `(True, validated_result)`
   - On failure: return a tuple of `(False, "Error message explaining the failure")`
### Error Handling Best Practices
1. **Structured Error Responses**:
```python Code
from typing import Any, Tuple

from crewai import TaskOutput

def validate_with_context(result: TaskOutput) -> Tuple[bool, Any]:
    try:
        # Main validation logic (perform_validation and ValidationError are
        # placeholders for your own validation helpers)
        validated_data = perform_validation(result)
return (True, validated_data)
except ValidationError as e:
return (False, f"VALIDATION_ERROR: {str(e)}")
except Exception as e:
return (False, str(e))
```
2. **Error Categories**:
- Use specific error codes
- Include relevant context
- Provide actionable feedback
3. **Validation Chain**:
```python Code
from typing import Any, Tuple
from crewai import TaskOutput
def complex_validation(result: TaskOutput) -> Tuple[bool, Any]:
"""Chain multiple validation steps."""
# Step 1: Basic validation
if not result:
return (False, "Empty result")
# Step 2: Content validation
try:
validated = validate_content(result)
if not validated:
return (False, "Invalid content")
# Step 3: Format validation
formatted = format_output(validated)
return (True, formatted)
except Exception as e:
return (False, str(e))
```
### Handling Guardrail Results
When a guardrail returns `(False, error)`:
1. The error is sent back to the agent
2. The agent attempts to fix the issue
3. The process repeats until:
- The guardrail returns `(True, result)`
- Maximum retries are reached (`guardrail_max_retries`)
Example with retry handling:
```python Code
import json
from typing import Any, Tuple

from crewai import Task, TaskOutput

def validate_json_output(result: TaskOutput) -> Tuple[bool, Any]:
    """Validate and parse JSON output."""
    try:
        # Try to parse the raw output as JSON
        data = json.loads(result.raw)
        return (True, data)
    except json.JSONDecodeError:
        return (False, "Invalid JSON format")
task = Task(
description="Generate a JSON report",
expected_output="A valid JSON object",
agent=analyst,
guardrail=validate_json_output,
guardrail_max_retries=3 # Limit retry attempts
)
```
## Getting Structured Consistent Outputs from Tasks
<Note>
It's also important to note that the output of the final task of a crew becomes the final output of the actual crew itself.
</Note>
### Using `output_pydantic`
The `output_pydantic` property allows you to define a Pydantic model that the task output should conform to. This ensures that the output is not only structured but also validated according to the Pydantic model.
Here's an example demonstrating how to use output_pydantic:
```python Code
import json
from crewai import Agent, Crew, Process, Task
from pydantic import BaseModel
class Blog(BaseModel):
title: str
content: str
blog_agent = Agent(
role="Blog Content Generator Agent",
goal="Generate a blog title and content",
backstory="""You are an expert content creator, skilled in crafting engaging and informative blog posts.""",
verbose=False,
allow_delegation=False,
llm="gpt-4o",
)
task1 = Task(
description="""Create a blog title and content on a given topic. Make sure the content is under 200 words.""",
expected_output="A compelling blog title and well-written content.",
agent=blog_agent,
output_pydantic=Blog,
)
# Instantiate your crew with a sequential process
crew = Crew(
agents=[blog_agent],
tasks=[task1],
verbose=True,
process=Process.sequential,
)
result = crew.kickoff()
# Option 1: Accessing Properties Using Dictionary-Style Indexing
print("Accessing Properties - Option 1")
title = result["title"]
content = result["content"]
print("Title:", title)
print("Content:", content)
# Option 2: Accessing Properties Directly from the Pydantic Model
print("Accessing Properties - Option 2")
title = result.pydantic.title
content = result.pydantic.content
print("Title:", title)
print("Content:", content)
# Option 3: Accessing Properties Using the to_dict() Method
print("Accessing Properties - Option 3")
output_dict = result.to_dict()
title = output_dict["title"]
content = output_dict["content"]
print("Title:", title)
print("Content:", content)
# Option 4: Printing the Entire Blog Object
print("Accessing Properties - Option 5")
print("Blog:", result)
```
In this example:
* A Pydantic model `Blog` is defined with `title` and `content` fields.
* The task `task1` uses the `output_pydantic` property to specify that its output should conform to the `Blog` model.
* After executing the crew, you can access the structured output in multiple ways, as shown.
#### Explanation of Accessing the Output
1. Dictionary-Style Indexing: You can directly access the fields using `result["field_name"]`. This works because the `CrewOutput` class implements the `__getitem__` method.
2. Directly from the Pydantic Model: Access the attributes directly from the `result.pydantic` object.
3. Using the `to_dict()` Method: Convert the output to a dictionary and access the fields.
4. Printing the Entire Object: Simply print the `result` object to see the structured output.
### Using `output_json`
The `output_json` property allows you to define the expected output in JSON format. This ensures that the task's output is a valid JSON structure that can be easily parsed and used in your application.
Here's an example demonstrating how to use `output_json`:
```python Code
import json
from crewai import Agent, Crew, Process, Task
from pydantic import BaseModel
# Define the Pydantic model for the blog
class Blog(BaseModel):
title: str
content: str
# Define the agent
blog_agent = Agent(
role="Blog Content Generator Agent",
goal="Generate a blog title and content",
backstory="""You are an expert content creator, skilled in crafting engaging and informative blog posts.""",
verbose=False,
allow_delegation=False,
llm="gpt-4o",
)
# Define the task with output_json set to the Blog model
task1 = Task(
description="""Create a blog title and content on a given topic. Make sure the content is under 200 words.""",
expected_output="A JSON object with 'title' and 'content' fields.",
agent=blog_agent,
output_json=Blog,
)
# Instantiate the crew with a sequential process
crew = Crew(
agents=[blog_agent],
tasks=[task1],
verbose=True,
process=Process.sequential,
)
# Kickoff the crew to execute the task
result = crew.kickoff()
# Option 1: Accessing Properties Using Dictionary-Style Indexing
print("Accessing Properties - Option 1")
title = result["title"]
content = result["content"]
print("Title:", title)
print("Content:", content)
# Option 2: Printing the Entire Blog Object
print("Accessing Properties - Option 2")
print("Blog:", result)
```
In this example:
* A Pydantic model `Blog` is defined with `title` and `content` fields, which is used to specify the structure of the JSON output.
* The task `task1` uses the `output_json` property to indicate that it expects a JSON output conforming to the `Blog` model.
* After executing the crew, you can access the structured JSON output in two ways, as shown.
#### Explanation of Accessing the Output
1. Accessing Properties Using Dictionary-Style Indexing: You can access the fields directly using `result["field_name"]`. This is possible because the `CrewOutput` class implements the `__getitem__` method, allowing you to treat the output like a dictionary. In this option, we're retrieving the title and content from the result.
2. Printing the Entire Blog Object: By printing `result`, you get the string representation of the `CrewOutput` object. Since the `__str__` method is implemented to return the JSON output, this will display the entire output as a formatted string representing the `Blog` object.
---
By using `output_pydantic` or `output_json`, you ensure that your tasks produce outputs in a consistent and structured format, making it easier to process and utilize the data within your application or across multiple tasks.
## Integrating Tools with Tasks
Leverage tools from the [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools) for enhanced task performance and agent interaction.
## Creating a Task with Tools
```python Code
import os
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool
research_agent = Agent(
role='Researcher',
goal='Find and summarize the latest AI news',
backstory="""You're a researcher at a large company.
You're responsible for analyzing data and providing insights
to the business.""",
verbose=True
)
# Tool to perform a web search for a specified query across the internet
search_tool = SerperDevTool()
task = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool]
)
crew = Crew(
agents=[research_agent],
tasks=[task],
verbose=True
)
result = crew.kickoff()
print(result)
```
This demonstrates how tasks with specific tools can override an agent's default set for tailored task execution.
## Referring to Other Tasks
In CrewAI, the output of one task is automatically relayed into the next one, but you can specifically define what tasks' output, including multiple, should be used as context for another task.
This is useful when you have a task that depends on the output of another task that is not performed immediately after it. This is done through the `context` attribute of the task:
```python Code
# ...
research_ai_task = Task(
description="Research the latest developments in AI",
expected_output="A list of recent AI developments",
async_execution=True,
agent=research_agent,
tools=[search_tool]
)
research_ops_task = Task(
description="Research the latest developments in AI Ops",
expected_output="A list of recent AI Ops developments",
async_execution=True,
agent=research_agent,
tools=[search_tool]
)
write_blog_task = Task(
description="Write a full blog post about the importance of AI and its latest news",
expected_output="Full blog post that is 4 paragraphs long",
agent=writer_agent,
context=[research_ai_task, research_ops_task]
)
#...
```
## Asynchronous Execution
You can define a task to be executed asynchronously. This means that the crew will not wait for it to be completed to continue with the next task. This is useful for tasks that take a long time to be completed, or that are not crucial for the next tasks to be performed.
You can then use the `context` attribute to define in a future task that it should wait for the output of the asynchronous task to be completed.
```python Code
#...
list_ideas = Task(
description="List of 5 interesting ideas to explore for an article about AI.",
expected_output="Bullet point list of 5 ideas for an article.",
agent=researcher,
async_execution=True # Will be executed asynchronously
)
list_important_history = Task(
description="Research the history of AI and give me the 5 most important events.",
expected_output="Bullet point list of 5 important events.",
agent=researcher,
async_execution=True # Will be executed asynchronously
)
write_article = Task(
description="Write an article about AI, its history, and interesting ideas.",
expected_output="A 4 paragraph article about AI.",
agent=writer,
context=[list_ideas, list_important_history] # Will wait for the output of the two tasks to be completed
)
#...
```
## Callback Mechanism
The callback function is executed after the task is completed, allowing for actions or notifications to be triggered based on the task's outcome.
```python Code
# ...
def callback_function(output: TaskOutput):
# Do something after the task is completed
# Example: Send an email to the manager
print(f"""
Task completed!
Task: {output.description}
Output: {output.raw}
""")
research_task = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool],
callback=callback_function
)
#...
```
## Accessing a Specific Task Output
Once a crew finishes running, you can access the output of a specific task by using the `output` attribute of the task object:
```python Code
# ...
task1 = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool]
)
#...
crew = Crew(
agents=[research_agent],
tasks=[task1, task2, task3],
verbose=True
)
result = crew.kickoff()
# Returns a TaskOutput object with the description and results of the task
print(f"""
Task completed!
Task: {task1.output.description}
Output: {task1.output.raw}
""")
```
## Tool Override Mechanism
Specifying tools in a task allows for dynamic adaptation of agent capabilities, emphasizing CrewAI's flexibility.
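As a minimal sketch of this mechanism, the agent below carries a default search tool, while the task supplies its own tool list that takes precedence during that task's execution (both tools come from `crewai_tools`; the agent and task details are illustrative):
```python Code
from crewai import Agent, Task
from crewai_tools import ScrapeWebsiteTool, SerperDevTool

search_tool = SerperDevTool()
scrape_tool = ScrapeWebsiteTool()

# The agent's default toolset
researcher = Agent(
    role="Web Researcher",
    goal="Gather information from the web",
    backstory="A researcher comfortable with both searching and scraping.",
    tools=[search_tool]  # default tools used when a task doesn't specify any
)

# This task's tools override the agent's defaults during its execution
scrape_task = Task(
    description="Extract the main content from https://example.com",
    expected_output="The page's main text content",
    agent=researcher,
    tools=[scrape_tool]
)
```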
## Error Handling and Validation Mechanisms
While creating and executing tasks, certain validation mechanisms are in place to ensure the robustness and reliability of task attributes. These include but are not limited to:
- Ensuring only one output type is set per task to maintain clear output expectations.
- Preventing the manual assignment of the `id` attribute to uphold the integrity of the unique identifier system.
These validations help in maintaining the consistency and reliability of task executions within the crewAI framework.
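A rough illustration of how these checks surface in practice (the exact error types and messages are assumptions; CrewAI reports them as validation errors when the task is constructed):
```python Code
from crewai import Task
from pydantic import BaseModel

class Report(BaseModel):
    title: str
    body: str

# Setting more than one output type violates the single-output rule
try:
    Task(
        description="Write a report",
        expected_output="A structured report",
        output_json=Report,
        output_pydantic=Report,
    )
except Exception as e:
    print(f"Rejected: {e}")

# Manually assigning the id attribute is also rejected
try:
    Task(
        description="Write a report",
        expected_output="A structured report",
        id="my-custom-id",
    )
except Exception as e:
    print(f"Rejected: {e}")
```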
## Creating Directories when Saving Files
The `create_directory` parameter controls whether CrewAI should automatically create directories when saving task outputs to files. This feature is particularly useful for organizing outputs and ensuring that file paths are correctly structured, especially when working with complex project hierarchies.
### Default Behavior
By default, `create_directory=True`, which means CrewAI will automatically create any missing directories in the output file path:
```python Code
# Default behavior - directories are created automatically
report_task = Task(
description='Generate a comprehensive market analysis report',
expected_output='A detailed market analysis with charts and insights',
agent=analyst_agent,
output_file='reports/2025/market_analysis.md', # Creates 'reports/2025/' if it doesn't exist
markdown=True
)
```
### Disabling Directory Creation
If you want to prevent automatic directory creation and ensure that the directory already exists, set `create_directory=False`:
```python Code
# Strict mode - directory must already exist
strict_output_task = Task(
description='Save critical data that requires existing infrastructure',
expected_output='Data saved to pre-configured location',
agent=data_agent,
output_file='secure/vault/critical_data.json',
create_directory=False # Will raise RuntimeError if 'secure/vault/' doesn't exist
)
```
### YAML Configuration
You can also configure this behavior in your YAML task definitions:
```yaml tasks.yaml
analysis_task:
description: >
Generate quarterly financial analysis
expected_output: >
A comprehensive financial report with quarterly insights
agent: financial_analyst
output_file: reports/quarterly/q4_2024_analysis.pdf
create_directory: true # Automatically create 'reports/quarterly/' directory
audit_task:
description: >
Perform compliance audit and save to existing audit directory
expected_output: >
A compliance audit report
agent: auditor
output_file: audit/compliance_report.md
create_directory: false # Directory must already exist
```
### Use Cases
**Automatic Directory Creation (`create_directory=True`):**
- Development and prototyping environments
- Dynamic report generation with date-based folders (see the sketch after this list)
- Automated workflows where directory structure may vary
- Multi-tenant applications with user-specific folders
**Manual Directory Management (`create_directory=False`):**
- Production environments with strict file system controls
- Security-sensitive applications where directories must be pre-configured
- Systems with specific permission requirements
- Compliance environments where directory creation is audited
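A small sketch of the date-based folder use case from the list above (the agent is a placeholder defined elsewhere):
```python Code
from datetime import datetime

from crewai import Task

# Missing folders in the dated path are created automatically because
# create_directory defaults to True.
dated_report_task = Task(
    description="Generate the daily market analysis report",
    expected_output="A markdown report with key findings",
    agent=analyst_agent,  # placeholder agent defined elsewhere
    output_file=f"reports/{datetime.now():%Y/%m/%d}/market_analysis.md",
    markdown=True
)
```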
### Error Handling
When `create_directory=False` and the directory doesn't exist, CrewAI will raise a `RuntimeError`:
```python Code
try:
result = crew.kickoff()
except RuntimeError as e:
# Handle missing directory error
print(f"Directory creation failed: {e}")
# Create directory manually or use fallback location
```
Check out the video below to see how to use structured outputs in CrewAI:
<iframe
className="w-full aspect-video rounded-xl"
src="https://www.youtube.com/embed/dNpKQk5uxHw"
title="Structured outputs in CrewAI"
frameBorder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerPolicy="strict-origin-when-cross-origin"
allowFullScreen
></iframe>
## Conclusion
Tasks are the driving force behind the actions of agents in CrewAI.
By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit.
Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential,
ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.

View File

@@ -1,49 +0,0 @@
---
title: Testing
description: Learn how to test your CrewAI Crew and evaluate their performance.
icon: vial
mode: "wide"
---
## Overview
Testing is a crucial part of the development process, and it is essential to ensure that your crew is performing as expected. With crewAI, you can easily test your crew and evaluate its performance using the built-in testing capabilities.
### Using the Testing Feature
We added the CLI command `crewai test` to make it easy to test your crew. This command will run your crew for a specified number of iterations and provide detailed performance metrics. The parameters are `n_iterations` and `model`, which are optional and default to 2 and `gpt-4o-mini` respectively. For now, the only provider available is OpenAI.
```bash
crewai test
```
If you want to run more iterations or use a different model, you can specify the parameters like this:
```bash
crewai test --n_iterations 5 --model gpt-4o
```
or using the short forms:
```bash
crewai test -n 5 -m gpt-4o
```
When you run the `crewai test` command, the crew will be executed for the specified number of iterations, and the performance metrics will be displayed at the end of the run.
A table of scores at the end will show the performance of the crew in terms of the following metrics:
<center>**Task Scores (1-10, higher is better)**</center>
| Tasks/Crew/Agents | Run 1 | Run 2 | Avg. Total | Agents | Additional Info |
|:------------------|:-----:|:-----:|:----------:|:------------------------------:|:---------------------------------|
| Task 1 | 9.0 | 9.5 | **9.2** | Professional Insights | |
| | | | | Researcher | |
| Task 2 | 9.0 | 10.0 | **9.5** | Company Profile Investigator | |
| Task 3 | 9.0 | 9.0 | **9.0** | Automation Insights | |
| | | | | Specialist | |
| Task 4 | 9.0 | 9.0 | **9.0** | Final Report Compiler | Automation Insights Specialist |
| Crew | 9.00 | 9.38 | **9.2** | | |
| Execution Time (s) | 126 | 145 | **135** | | |
The example above shows the test results for two runs of the crew with four tasks, with the average total score for each task and the crew as a whole.
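You can also trigger testing from Python. The sketch below assumes a `Crew.test(...)` method mirroring the CLI options; the evaluation-model parameter has changed names across releases, so treat it as an assumption and check the signature of your installed version:
```python Code
n_iterations = 2
inputs = {"topic": "CrewAI Testing"}

try:
    YourCrewName_Crew().crew().test(
        n_iterations=n_iterations,
        eval_llm="gpt-4o-mini",  # assumed parameter name; older releases used a different one
        inputs=inputs,
    )
except Exception as e:
    raise Exception(f"An error occurred while testing the crew: {e}")
```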

View File

@@ -1,286 +0,0 @@
---
title: Tools
description: Understanding and leveraging tools within the CrewAI framework for agent collaboration and task execution.
icon: screwdriver-wrench
mode: "wide"
---
## Overview
CrewAI tools empower agents with capabilities ranging from web searching and data analysis to collaboration and delegating tasks among coworkers.
This documentation outlines how to create, integrate, and leverage these tools within the CrewAI framework, including a new focus on collaboration tools.
## What is a Tool?
A tool in CrewAI is a skill or function that agents can utilize to perform various actions.
This includes tools from the [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools),
enabling everything from simple searches to complex interactions and effective teamwork among agents.
<Note type="info" title="Enterprise Enhancement: Tools Repository">
CrewAI AMP provides a comprehensive Tools Repository with pre-built integrations for common business systems and APIs. Deploy agents with enterprise tools in minutes instead of days.
The Enterprise Tools Repository includes:
- Pre-built connectors for popular enterprise systems
- Custom tool creation interface
- Version control and sharing capabilities
- Security and compliance features
</Note>
## Key Characteristics of Tools
- **Utility**: Crafted for tasks such as web searching, data analysis, content generation, and agent collaboration.
- **Integration**: Boosts agent capabilities by seamlessly integrating tools into their workflow.
- **Customizability**: Provides the flexibility to develop custom tools or utilize existing ones, catering to the specific needs of agents.
- **Error Handling**: Incorporates robust error handling mechanisms to ensure smooth operation.
- **Caching Mechanism**: Features intelligent caching to optimize performance and reduce redundant operations.
- **Asynchronous Support**: Handles both synchronous and asynchronous tools, enabling non-blocking operations.
## Using CrewAI Tools
To enhance your agents' capabilities with crewAI tools, begin by installing our extra tools package:
```bash
pip install 'crewai[tools]'
```
Here's an example demonstrating their use:
```python Code
import os
from crewai import Agent, Task, Crew
# Importing crewAI tools
from crewai_tools import (
DirectoryReadTool,
FileReadTool,
SerperDevTool,
WebsiteSearchTool
)
# Set up API keys
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
os.environ["OPENAI_API_KEY"] = "Your Key"
# Instantiate tools
docs_tool = DirectoryReadTool(directory='./blog-posts')
file_tool = FileReadTool()
search_tool = SerperDevTool()
web_rag_tool = WebsiteSearchTool()
# Create agents
researcher = Agent(
role='Market Research Analyst',
goal='Provide up-to-date market analysis of the AI industry',
backstory='An expert analyst with a keen eye for market trends.',
tools=[search_tool, web_rag_tool],
verbose=True
)
writer = Agent(
role='Content Writer',
goal='Craft engaging blog posts about the AI industry',
backstory='A skilled writer with a passion for technology.',
tools=[docs_tool, file_tool],
verbose=True
)
# Define tasks
research = Task(
description='Research the latest trends in the AI industry and provide a summary.',
expected_output='A summary of the top 3 trending developments in the AI industry with a unique perspective on their significance.',
agent=researcher
)
write = Task(
    description="Write an engaging blog post about the AI industry, based on the research analyst's summary. Draw inspiration from the latest blog posts in the directory.",
expected_output='A 4-paragraph blog post formatted in markdown with engaging, informative, and accessible content, avoiding complex jargon.',
agent=writer,
output_file='blog-posts/new_post.md' # The final blog post will be saved here
)
# Assemble a crew with planning enabled
crew = Crew(
agents=[researcher, writer],
tasks=[research, write],
verbose=True,
planning=True, # Enable planning feature
)
# Execute tasks
crew.kickoff()
```
## Available CrewAI Tools
- **Error Handling**: All tools are built with error handling capabilities, allowing agents to gracefully manage exceptions and continue their tasks.
- **Caching Mechanism**: All tools support caching, enabling agents to efficiently reuse previously obtained results, reducing the load on external resources and speeding up the execution time. You can also define finer control over the caching mechanism using the `cache_function` attribute on the tool.
Here is a list of the available tools and their descriptions:
| Tool | Description |
| :------------------------------- | :--------------------------------------------------------------------------------------------- |
| **ApifyActorsTool** | A tool that integrates Apify Actors with your workflows for web scraping and automation tasks. |
| **BrowserbaseLoadTool** | A tool for interacting with and extracting data from web browsers. |
| **CodeDocsSearchTool** | A RAG tool optimized for searching through code documentation and related technical documents. |
| **CodeInterpreterTool** | A tool for interpreting python code. |
| **ComposioTool** | Enables use of Composio tools. |
| **CSVSearchTool** | A RAG tool designed for searching within CSV files, tailored to handle structured data. |
| **DALL-E Tool** | A tool for generating images using the DALL-E API. |
| **DirectorySearchTool** | A RAG tool for searching within directories, useful for navigating through file systems. |
| **DOCXSearchTool** | A RAG tool aimed at searching within DOCX documents, ideal for processing Word files. |
| **DirectoryReadTool** | Facilitates reading and processing of directory structures and their contents. |
| **EXASearchTool** | A tool designed for performing exhaustive searches across various data sources. |
| **FileReadTool** | Enables reading and extracting data from files, supporting various file formats. |
| **FirecrawlSearchTool** | A tool to search webpages using Firecrawl and return the results. |
| **FirecrawlCrawlWebsiteTool** | A tool for crawling webpages using Firecrawl. |
| **FirecrawlScrapeWebsiteTool**   | A tool for scraping a webpage URL using Firecrawl and returning its contents.                    |
| **GithubSearchTool** | A RAG tool for searching within GitHub repositories, useful for code and documentation search. |
| **SerperDevTool**                | A tool for performing web searches for a specified query via the serper.dev API.                |
| **TXTSearchTool** | A RAG tool focused on searching within text (.txt) files, suitable for unstructured data. |
| **JSONSearchTool** | A RAG tool designed for searching within JSON files, catering to structured data handling. |
| **LlamaIndexTool** | Enables the use of LlamaIndex tools. |
| **MDXSearchTool** | A RAG tool tailored for searching within Markdown (MDX) files, useful for documentation. |
| **PDFSearchTool** | A RAG tool aimed at searching within PDF documents, ideal for processing scanned documents. |
| **PGSearchTool** | A RAG tool optimized for searching within PostgreSQL databases, suitable for database queries. |
| **Vision Tool**                  | A tool for analyzing images and extracting text or descriptions using vision-capable models.    |
| **RagTool** | A general-purpose RAG tool capable of handling various data sources and types. |
| **ScrapeElementFromWebsiteTool** | Enables scraping specific elements from websites, useful for targeted data extraction. |
| **ScrapeWebsiteTool** | Facilitates scraping entire websites, ideal for comprehensive data collection. |
| **WebsiteSearchTool** | A RAG tool for searching website content, optimized for web data extraction. |
| **XMLSearchTool** | A RAG tool designed for searching within XML files, suitable for structured data formats. |
| **YoutubeChannelSearchTool** | A RAG tool for searching within YouTube channels, useful for video content analysis. |
| **YoutubeVideoSearchTool** | A RAG tool aimed at searching within YouTube videos, ideal for video data extraction. |
## Creating your own Tools
<Tip>
Developers can craft `custom tools` tailored for their agent's needs or
utilize pre-built options.
</Tip>
There are two main ways for one to create a CrewAI tool:
### Subclassing `BaseTool`
```python Code
from typing import Type

from crewai.tools import BaseTool
from pydantic import BaseModel, Field
class MyToolInput(BaseModel):
"""Input schema for MyCustomTool."""
argument: str = Field(..., description="Description of the argument.")
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = "What this tool does. It's vital for effective utilization."
args_schema: Type[BaseModel] = MyToolInput
def _run(self, argument: str) -> str:
# Your tool's logic here
return "Tool's result"
```
## Asynchronous Tool Support
CrewAI supports asynchronous tools, allowing you to implement tools that perform non-blocking operations like network requests, file I/O, or other async operations without blocking the main execution thread.
### Creating Async Tools
You can create async tools in two ways:
#### 1. Using the `tool` Decorator with Async Functions
```python Code
import asyncio

from crewai.tools import tool

@tool("fetch_data_async")
async def fetch_data_async(query: str) -> str:
"""Asynchronously fetch data based on the query."""
# Simulate async operation
await asyncio.sleep(1)
return f"Data retrieved for {query}"
```
#### 2. Implementing Async Methods in Custom Tool Classes
```python Code
import asyncio

from crewai.tools import BaseTool

class AsyncCustomTool(BaseTool):
name: str = "async_custom_tool"
description: str = "An asynchronous custom tool"
async def _run(self, query: str = "") -> str:
"""Asynchronously run the tool"""
# Your async implementation here
await asyncio.sleep(1)
return f"Processed {query} asynchronously"
```
### Using Async Tools
Async tools work seamlessly in both standard Crew workflows and Flow-based workflows:
```python Code
from crewai import Agent, Crew
from crewai.flow.flow import Flow, start

# In standard Crew
agent = Agent(
    role="researcher",
    goal="Fetch and summarize data using async tools",
    backstory="A researcher who relies on asynchronous tools for fast data access.",
    tools=[async_custom_tool]
)

# In Flow
class MyFlow(Flow):
@start()
async def begin(self):
crew = Crew(agents=[agent])
result = await crew.kickoff_async()
return result
```
The CrewAI framework automatically handles the execution of both synchronous and asynchronous tools, so you don't need to worry about how to call them differently.
### Utilizing the `tool` Decorator
```python Code
from crewai.tools import tool
@tool("Name of my tool")
def my_tool(question: str) -> str:
"""Clear description for what this tool is useful for, your agent will need this information to use it."""
# Function logic here
return "Result from your custom tool"
```
### Custom Caching Mechanism
<Tip>
Tools can optionally implement a `cache_function` to fine-tune caching
behavior. This function determines when to cache results based on specific
conditions, offering granular control over caching logic.
</Tip>
```python Code
from crewai import Agent
from crewai.tools import tool

@tool
def multiplication_tool(first_number: int, second_number: int) -> int:
"""Useful for when you need to multiply two numbers together."""
return first_number * second_number
def cache_func(args, result):
# In this case, we only cache the result if it's a multiple of 2
cache = result % 2 == 0
return cache
multiplication_tool.cache_function = cache_func
writer1 = Agent(
role="Writer",
goal="You write lessons of math for kids.",
backstory="You're an expert in writing and you love to teach kids but you know nothing of math.",
tools=[multiplication_tool],
allow_delegation=False,
)
#...
```
## Conclusion
Tools are pivotal in extending the capabilities of CrewAI agents, enabling them to undertake a broad spectrum of tasks and collaborate effectively.
When building solutions with CrewAI, leverage both custom and existing tools to empower your agents and enhance the AI ecosystem. Consider utilizing error handling,
caching mechanisms, and the flexibility of tool arguments to optimize your agents' performance and capabilities.

View File

@@ -1,197 +0,0 @@
---
title: Training
description: Learn how to train your CrewAI agents by giving them feedback early on and get consistent results.
icon: dumbbell
mode: "wide"
---
## Overview
The training feature in CrewAI allows you to train your AI agents using the command-line interface (CLI).
By running the command `crewai train -n <n_iterations>`, you can specify the number of iterations for the training process.
During training, CrewAI utilizes techniques to optimize the performance of your agents along with human feedback.
This helps the agents improve their understanding, decision-making, and problem-solving abilities.
### Training Your Crew Using the CLI
To use the training feature, follow these steps:
1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following command:
```shell
crewai train -n <n_iterations> -f <filename.pkl>
```
<Tip>
Replace `<n_iterations>` with the desired number of training iterations and `<filename.pkl>` with the appropriate filename ending with `.pkl`.
</Tip>
<Note>
If you omit `-f`, the output defaults to `trained_agents_data.pkl` in the current working directory. You can pass an absolute path to control where the file is written.
</Note>
### Training your Crew programmatically
To train your crew programmatically, use the following steps:
1. Define the number of iterations for training.
2. Specify the input parameters for the training process.
3. Execute the training command within a try-except block to handle potential errors.
```python Code
n_iterations = 2
inputs = {"topic": "CrewAI Training"}
filename = "your_model.pkl"
try:
YourCrewName_Crew().crew().train(
n_iterations=n_iterations,
inputs=inputs,
filename=filename
)
except Exception as e:
raise Exception(f"An error occurred while training the crew: {e}")
```
## How trained data is used by agents
CrewAI uses the training artifacts in two ways: during training to incorporate your human feedback, and after training to guide agents with consolidated suggestions.
### Training data flow
```mermaid
flowchart TD
A["Start training<br/>CLI: crewai train -n -f<br/>or Python: crew.train(...)"] --> B["Setup training mode<br/>- task.human_input = true<br/>- disable delegation<br/>- init training_data.pkl + trained file"]
subgraph "Iterations"
direction LR
C["Iteration i<br/>initial_output"] --> D["User human_feedback"]
D --> E["improved_output"]
E --> F["Append to training_data.pkl<br/>by agent_id and iteration"]
end
B --> C
F --> G{"More iterations?"}
G -- "Yes" --> C
G -- "No" --> H["Evaluate per agent<br/>aggregate iterations"]
H --> I["Consolidate<br/>suggestions[] + quality + final_summary"]
I --> J["Save by agent role to trained file<br/>(default: trained_agents_data.pkl)"]
J --> K["Normal (non-training) runs"]
K --> L["Auto-load suggestions<br/>from trained_agents_data.pkl"]
L --> M["Append to prompt<br/>for consistent improvements"]
```
### During training runs
- On each iteration, the system records for every agent:
  - `initial_output`: the agent's first answer
  - `human_feedback`: your inline feedback when prompted
  - `improved_output`: the agent's follow-up answer after feedback
- This data is stored in a working file named `training_data.pkl`, keyed by the agent's internal ID and iteration.
- While training is active, the agent automatically appends your prior human feedback to its prompt to enforce those instructions on subsequent attempts within the training session.
Training is interactive: tasks set `human_input = true`, so running in a non-interactive environment will block on user input.
### After training completes
- When `train(...)` finishes, CrewAI evaluates the collected training data per agent and produces a consolidated result containing:
- `suggestions`: clear, actionable instructions distilled from your feedback and the difference between initial/improved outputs
  - `quality`: a 0-10 score capturing improvement
  - `final_summary`: a step-by-step set of action items for future tasks
- These consolidated results are saved to the filename you pass to `train(...)` (the CLI default is `trained_agents_data.pkl`). Entries are keyed by the agent's `role` so they can be applied across sessions.
- During normal (non-training) execution, each agent automatically loads its consolidated `suggestions` and appends them to the task prompt as mandatory instructions. This gives you consistent improvements without changing your agent definitions.
### File summary
- `training_data.pkl` (ephemeral, per-session):
- Structure: `agent_id -> { iteration_number: { initial_output, human_feedback, improved_output } }`
- Purpose: capture raw data and human feedback during training
- Location: saved in the current working directory (CWD)
- `trained_agents_data.pkl` (or your custom filename):
- Structure: `agent_role -> { suggestions: string[], quality: number, final_summary: string }`
- Purpose: persist consolidated guidance for future runs
- Location: written to the CWD by default; use `-f` to set a custom (including absolute) path
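Because both files are standard pickles with the structures listed above, you can inspect the consolidated guidance directly. A minimal sketch, assuming the entries are stored as plain dictionaries matching that structure:
```python Code
import pickle

# Load the consolidated training results (default filename in the CWD)
with open("trained_agents_data.pkl", "rb") as f:
    trained = pickle.load(f)

# Entries are keyed by agent role
for role, data in trained.items():
    print(f"Agent role: {role}")
    print(f"  Quality score: {data['quality']}")
    for suggestion in data["suggestions"]:
        print(f"  - {suggestion}")
```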
## Small Language Model Considerations
<Warning>
When using smaller language models (≤7B parameters) for training data evaluation, be aware that they may face challenges with generating structured outputs and following complex instructions.
</Warning>
### Limitations of Small Models in Training Evaluation
<CardGroup cols={2}>
<Card title="JSON Output Accuracy" icon="triangle-exclamation">
Smaller models often struggle with producing valid JSON responses needed for structured training evaluations, leading to parsing errors and incomplete data.
</Card>
<Card title="Evaluation Quality" icon="chart-line">
Models under 7B parameters may provide less nuanced evaluations with limited reasoning depth compared to larger models.
</Card>
<Card title="Instruction Following" icon="list-check">
Complex training evaluation criteria may not be fully followed or considered by smaller models.
</Card>
<Card title="Consistency" icon="rotate">
Evaluations across multiple training iterations may lack consistency with smaller models.
</Card>
</CardGroup>
### Recommendations for Training
<Tabs>
<Tab title="Best Practice">
For optimal training quality and reliable evaluations, we strongly recommend using models with at least 7B parameters or larger:
```python
from crewai import Agent, Crew, Task, LLM
# Recommended minimum for training evaluation
llm = LLM(model="mistral/open-mistral-7b")
# Better options for reliable training evaluation
llm = LLM(model="anthropic/claude-3-sonnet-20240229-v1:0")
llm = LLM(model="gpt-4o")
# Use this LLM with your agents
agent = Agent(
role="Training Evaluator",
goal="Provide accurate training feedback",
llm=llm
)
```
<Tip>
More powerful models provide higher quality feedback with better reasoning, leading to more effective training iterations.
</Tip>
</Tab>
<Tab title="Small Model Usage">
If you must use smaller models for training evaluation, be aware of these constraints:
```python
# Using a smaller model (expect some limitations)
llm = LLM(model="huggingface/microsoft/Phi-3-mini-4k-instruct")
```
<Warning>
While CrewAI includes optimizations for small models, expect less reliable and less nuanced evaluation results that may require more human intervention during training.
</Warning>
</Tab>
</Tabs>
### Key Points to Note
- **Positive Integer Requirement:** Ensure that the number of iterations (`n_iterations`) is a positive integer. The code will raise a `ValueError` if this condition is not met.
- **Filename Requirement:** Ensure that the filename ends with `.pkl`. The code will raise a `ValueError` if this condition is not met.
- **Error Handling:** The code handles subprocess errors and unexpected exceptions, providing error messages to the user.
- Trained guidance is applied at prompt time; it does not modify your Python/YAML agent configuration.
- Agents automatically load trained suggestions from a file named `trained_agents_data.pkl` located in the current working directory. If you trained to a different filename, either rename it to `trained_agents_data.pkl` before running, or adjust the loader in code.
- You can change the output filename when calling `crewai train` with `-f/--filename`. Absolute paths are supported if you want to save outside the CWD.
It is important to note that the training process may take some time, depending on the complexity of your agents and will also require your feedback on each iteration.
Once the training is complete, your agents will be equipped with enhanced capabilities and knowledge, ready to tackle complex tasks and provide more consistent and valuable insights.
Remember to regularly update and retrain your agents to ensure they stay up-to-date with the latest information and advancements in the field.

View File

@@ -1,155 +0,0 @@
---
title: 'Agent Repositories'
description: 'Learn how to use Agent Repositories to share and reuse your agents across teams and projects'
icon: 'people-group'
mode: "wide"
---
Agent Repositories allow enterprise users to store, share, and reuse agent definitions across teams and projects. This feature enables organizations to maintain a centralized library of standardized agents, promoting consistency and reducing duplication of effort.
<Frame>
![Agent Repositories](/images/enterprise/agent-repositories.png)
</Frame>
## Benefits of Agent Repositories
- **Standardization**: Maintain consistent agent definitions across your organization
- **Reusability**: Create an agent once and use it in multiple crews and projects
- **Governance**: Implement organization-wide policies for agent configurations
- **Collaboration**: Enable teams to share and build upon each other's work
## Creating and Using Agent Repositories
1. You must have an account at CrewAI, try the [free plan](https://app.crewai.com).
2. Create agents with specific roles and goals for your workflows.
3. Configure tools and capabilities for each specialized assistant.
4. Deploy agents across projects via visual interface or API integration.
<Frame>
![Agent Repositories](/images/enterprise/create-agent-repository.png)
</Frame>
### Loading Agents from Repositories
You can load agents from repositories in your code using the `from_repository` parameter to run locally:
```python
from crewai import Agent
# Create an agent by loading it from a repository
# The agent is loaded with all its predefined configurations
researcher = Agent(
from_repository="market-research-agent"
)
```
### Overriding Repository Settings
You can override specific settings from the repository by providing them in the configuration:
```python
researcher = Agent(
from_repository="market-research-agent",
goal="Research the latest trends in AI development", # Override the repository goal
verbose=True # Add a setting not in the repository
)
```
### Example: Creating a Crew with Repository Agents
```python
from crewai import Crew, Agent, Task
# Load agents from repositories
researcher = Agent(
from_repository="market-research-agent"
)
writer = Agent(
from_repository="content-writer-agent"
)
# Create tasks
research_task = Task(
    description="Research the latest trends in AI",
    expected_output="A summary of the latest AI trends",
    agent=researcher
)
writing_task = Task(
    description="Write a comprehensive report based on the research",
    expected_output="A comprehensive report on AI trends",
    agent=writer
)
# Create the crew
crew = Crew(
agents=[researcher, writer],
tasks=[research_task, writing_task],
verbose=True
)
# Run the crew
result = crew.kickoff()
```
### Example: Using `kickoff()` with Repository Agents
You can also use repository agents directly with the `kickoff()` method for simpler interactions:
```python
from crewai import Agent
from pydantic import BaseModel
from typing import List
# Define a structured output format
class MarketAnalysis(BaseModel):
key_trends: List[str]
opportunities: List[str]
recommendation: str
# Load an agent from repository
analyst = Agent(
from_repository="market-analyst-agent",
verbose=True
)
# Get a free-form response
result = analyst.kickoff("Analyze the AI market in 2025")
print(result.raw) # Access the raw response
# Get structured output
structured_result = analyst.kickoff(
"Provide a structured analysis of the AI market in 2025",
response_format=MarketAnalysis
)
# Access structured data
print(f"Key Trends: {structured_result.pydantic.key_trends}")
print(f"Recommendation: {structured_result.pydantic.recommendation}")
```
## Best Practices
1. **Naming Convention**: Use clear, descriptive names for your repository agents
2. **Documentation**: Include comprehensive descriptions for each agent
3. **Tool Management**: Ensure that tools referenced by repository agents are available in your environment
4. **Access Control**: Manage permissions to ensure only authorized team members can modify repository agents
## Organization Management
To switch between organizations or see your current organization, use the CrewAI CLI:
```bash
# View current organization
crewai org current
# Switch to a different organization
crewai org switch <org_id>
# List all available organizations
crewai org list
```
<Note>
When loading agents from repositories, you must be authenticated and switched to the correct organization. If you receive errors, check your authentication status and organization settings using the CLI commands above.
</Note>

View File

@@ -1,104 +0,0 @@
---
title: Automations
description: "Manage, deploy, and monitor your live crews (automations) in one place."
icon: "rocket"
mode: "wide"
---
## Overview
Automations is the live operations hub for your deployed crews. Use it to deploy from GitHub or a ZIP file, manage environment variables, redeploy when needed, and monitor the status of each automation.
<Frame>
![Automations Overview](/images/enterprise/automations-overview.png)
</Frame>
## Deployment Methods
### Deploy from GitHub
Use this for version-controlled projects and continuous deployment.
<Steps>
<Step title="Connect GitHub">
Click <b>Configure GitHub</b> and authorize access.
</Step>
<Step title="Select Repository & Branch">
Choose the <b>Repository</b> and <b>Branch</b> you want to deploy from.
</Step>
<Step title="Enable Autodeploy (optional)">
Turn on <b>Automatically deploy new commits</b> to ship updates on every push.
</Step>
<Step title="Add Environment Variables">
Add secrets individually or use <b>Bulk View</b> for multiple variables.
</Step>
<Step title="Deploy">
Click <b>Deploy</b> to create your live automation.
</Step>
</Steps>
<Frame>
![GitHub Deployment](/images/enterprise/deploy-from-github.png)
</Frame>
### Deploy from ZIP
Ship quickly without Git—upload a compressed package of your project.
<Steps>
<Step title="Choose File">
Select the ZIP archive from your computer.
</Step>
<Step title="Add Environment Variables">
Provide any required variables or keys.
</Step>
<Step title="Deploy">
Click <b>Deploy</b> to create your live automation.
</Step>
</Steps>
<Frame>
![ZIP Deployment](/images/enterprise/deploy-from-zip.png)
</Frame>
## Automations Dashboard
The table lists all live automations with key details:
- **CREW**: Automation name
- **STATUS**: Online / Failed / In Progress
- **URL**: Endpoint for kickoff/status
- **TOKEN**: Automation token
- **ACTIONS**: Redeploy, delete, and more
Use the top-right controls to filter and search:
- Search by name
- Filter by <b>Status</b>
- Filter by <b>Source</b> (GitHub / Studio / ZIP)
Once deployed, you can view the automation details and use the **Options** dropdown menu to `chat with this crew`, `Export React Component`, and `Export as MCP`.
<Frame>
![Automations Table](/images/enterprise/automations-table.png)
</Frame>
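The **URL** and **TOKEN** columns are what you need to call an automation programmatically. A minimal sketch using the `requests` library (the endpoint paths, payload shape, and placeholder values are assumptions; check your automation's detail page for the exact contract):
```python Code
import requests

AUTOMATION_URL = "https://your-automation.crewai.com"  # placeholder: copy from the URL column
TOKEN = "your-automation-token"                        # placeholder: copy from the TOKEN column

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
}

# Kick off the automation with its required inputs
kickoff = requests.post(
    f"{AUTOMATION_URL}/kickoff",
    headers=headers,
    json={"inputs": {"topic": "AI trends"}},
)
kickoff_id = kickoff.json().get("kickoff_id")

# Poll the status endpoint for results
status = requests.get(f"{AUTOMATION_URL}/status/{kickoff_id}", headers=headers)
print(status.json())
```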
## Best Practices
- Prefer GitHub deployments for version control and CI/CD
- Use redeploy to roll forward after code or config updates, or enable auto-deploy to ship on every push
## Related
<CardGroup cols={3}>
<Card title="Deploy a Crew" href="/en/enterprise/guides/deploy-crew" icon="rocket">
Deploy a Crew from GitHub or ZIP file.
</Card>
<Card title="Automation Triggers" href="/en/enterprise/guides/automation-triggers" icon="trigger">
Trigger automations via webhooks or API.
</Card>
<Card title="Webhook Automation" href="/en/enterprise/guides/webhook-automation" icon="webhook">
Stream real-time events and updates to your systems.
</Card>
</CardGroup>

View File

@@ -1,88 +0,0 @@
---
title: Crew Studio
description: "Build new automations with AI assistance, a visual editor, and integrated testing."
icon: "pencil"
mode: "wide"
---
## Overview
Crew Studio is an interactive, AI-assisted workspace for creating new automations from scratch using natural language and a visual workflow editor.
<Frame>
![Crew Studio Overview](/images/enterprise/crew-studio-overview.png)
</Frame>
## Prompt-based Creation
- Describe the automation you want; the AI generates agents, tasks, and tools.
- Use voice input via the microphone icon if preferred.
- Start from built-in prompts for common use cases.
<Frame>
![Prompt Builder](/images/enterprise/crew-studio-prompt.png)
</Frame>
## Visual Editor
The canvas reflects the workflow as nodes and edges with three supporting panels that allow you to configure the workflow easily without writing code; a.k.a. "**vibe coding AI Agents**".
You can use the drag-and-drop functionality to add agents, tasks, and tools to the canvas or you can use the chat section to build the agents. Both approaches share state and can be used interchangeably.
- **AI Thoughts (left)**: streaming reasoning as the workflow is designed
- **Canvas (center)**: agents and tasks as connected nodes
- **Resources (right)**: drag-and-drop components (agents, tasks, tools)
<Frame>
![Visual Canvas](/images/enterprise/crew-studio-canvas.png)
</Frame>
## Execution & Debugging
Switch to the <b>Execution</b> view to run and observe the workflow:
- Event timeline
- Detailed logs (Details, Messages, Raw Data)
- Local test runs before publishing
<Frame>
![Execution View](/images/enterprise/crew-studio-execution.png)
</Frame>
## Publish & Export
- <b>Publish</b> to deploy a live automation
- <b>Download</b> source as a ZIP for local development or customization
<Frame>
![Publish & Download](/images/enterprise/crew-studio-publish.png)
</Frame>
Once published, you can view the automation details and use the **Options** dropdown menu to `chat with this crew`, `Export React Component`, and `Export as MCP`.
<Frame>
![Published Automation](/images/enterprise/crew-studio-published.png)
</Frame>
## Best Practices
- Iterate quickly in Studio; publish only when stable
- Keep tools constrained to minimum permissions needed
- Use Traces to validate behavior and performance
## Related
<CardGroup cols={4}>
<Card title="Enable Crew Studio" href="/en/enterprise/guides/enable-crew-studio" icon="palette">
Enable Crew Studio.
</Card>
<Card title="Build a Crew" href="/en/enterprise/guides/build-crew" icon="paintbrush">
Build a Crew.
</Card>
<Card title="Deploy a Crew" href="/en/enterprise/guides/deploy-crew" icon="rocket">
Deploy a Crew from GitHub or ZIP file.
</Card>
<Card title="Export a React Component" href="/en/enterprise/guides/react-component-export" icon="download">
Export a React Component.
</Card>
</CardGroup>

View File

@@ -1,251 +0,0 @@
---
title: Hallucination Guardrail
description: "Prevent and detect AI hallucinations in your CrewAI tasks"
icon: "shield-check"
mode: "wide"
---
## Overview
The Hallucination Guardrail is an enterprise feature that validates AI-generated content to ensure it's grounded in facts and doesn't contain hallucinations. It analyzes task outputs against reference context and provides detailed feedback when potentially hallucinated content is detected.
## What are Hallucinations?
AI hallucinations occur when language models generate content that appears plausible but is factually incorrect or not supported by the provided context. The Hallucination Guardrail helps prevent these issues by:
- Comparing outputs against reference context
- Evaluating faithfulness to source material
- Providing detailed feedback on problematic content
- Supporting custom thresholds for validation strictness
## Basic Usage
### Setting Up the Guardrail
```python
from crewai.tasks.hallucination_guardrail import HallucinationGuardrail
from crewai import LLM
# Basic usage - will use task's expected_output as context
guardrail = HallucinationGuardrail(
llm=LLM(model="gpt-4o-mini")
)
# With explicit reference context
context_guardrail = HallucinationGuardrail(
context="AI helps with various tasks including analysis and generation.",
llm=LLM(model="gpt-4o-mini")
)
```
### Adding to Tasks
```python
from crewai import Task
# Create your task with the guardrail
task = Task(
description="Write a summary about AI capabilities",
expected_output="A factual summary based on the provided context",
agent=my_agent,
guardrail=guardrail # Add the guardrail to validate output
)
```
## Advanced Configuration
### Custom Threshold Validation
For stricter validation, you can set a custom faithfulness threshold (0-10 scale):
```python
# Strict guardrail requiring high faithfulness score
strict_guardrail = HallucinationGuardrail(
context="Quantum computing uses qubits that exist in superposition states.",
llm=LLM(model="gpt-4o-mini"),
threshold=8.0 # Requires score >= 8 to pass validation
)
```
### Including Tool Response Context
When your task uses tools, you can include tool responses for more accurate validation:
```python
# Guardrail with tool response context
weather_guardrail = HallucinationGuardrail(
context="Current weather information for the requested location",
llm=LLM(model="gpt-4o-mini"),
tool_response="Weather API returned: Temperature 22°C, Humidity 65%, Clear skies"
)
```
## How It Works
### Validation Process
1. **Context Analysis**: The guardrail compares task output against the provided reference context
2. **Faithfulness Scoring**: Uses an internal evaluator to assign a faithfulness score (0-10)
3. **Verdict Determination**: Determines if content is faithful or contains hallucinations
4. **Threshold Checking**: If a custom threshold is set, validates against that score
5. **Feedback Generation**: Provides detailed reasons when validation fails
### Validation Logic
- **Default Mode**: Uses verdict-based validation (FAITHFUL vs HALLUCINATED)
- **Threshold Mode**: Requires faithfulness score to meet or exceed the specified threshold
- **Error Handling**: Gracefully handles evaluation errors and provides informative feedback
## Guardrail Results
The guardrail returns structured results indicating validation status:
```python
# Example of guardrail result structure
{
"valid": False,
"feedback": "Content appears to be hallucinated (score: 4.2/10, verdict: HALLUCINATED). The output contains information not supported by the provided context."
}
```
### Result Properties
- **valid**: Boolean indicating whether the output passed validation
- **feedback**: Detailed explanation when validation fails, including:
- Faithfulness score
- Verdict classification
- Specific reasons for failure
## Integration with Task System
### Automatic Validation
When a guardrail is added to a task, it automatically validates the output before the task is marked as complete:
```python
# Task output validation flow
task_output = agent.execute_task(task)
validation_result = guardrail(task_output)
if validation_result.valid:
# Task completes successfully
return task_output
else:
# Task fails with validation feedback
raise ValidationError(validation_result.feedback)
```
### Event Tracking
The guardrail integrates with CrewAI's event system to provide observability:
- **Validation Started**: When guardrail evaluation begins
- **Validation Completed**: When evaluation finishes with results
- **Validation Failed**: When technical errors occur during evaluation
## Best Practices
### Context Guidelines
<Steps>
<Step title="Provide Comprehensive Context">
Include all relevant factual information that the AI should base its output on:
```python
context = """
Company XYZ was founded in 2020 and specializes in renewable energy solutions.
They have 150 employees and generated $50M revenue in 2023.
Their main products include solar panels and wind turbines.
"""
```
</Step>
<Step title="Keep Context Relevant">
Only include information directly related to the task to avoid confusion:
```python
# Good: Focused context
context = "The current weather in New York is 18°C with light rain."
# Avoid: Unrelated information
context = "The weather is 18°C. The city has 8 million people. Traffic is heavy."
```
</Step>
<Step title="Update Context Regularly">
Ensure your reference context reflects current, accurate information.
</Step>
</Steps>
### Threshold Selection
<Steps>
<Step title="Start with Default Validation">
Begin without custom thresholds to understand baseline performance.
</Step>
<Step title="Adjust Based on Requirements">
- **High-stakes content**: Use threshold 8-10 for maximum accuracy
- **General content**: Use threshold 6-7 for balanced validation
- **Creative content**: Use threshold 4-5 or default verdict-based validation
</Step>
<Step title="Monitor and Iterate">
Track validation results and adjust thresholds based on false positives/negatives.
</Step>
</Steps>
## Performance Considerations
### Impact on Execution Time
- **Validation Overhead**: Each guardrail adds ~1-3 seconds per task
- **LLM Efficiency**: Choose efficient models for evaluation (e.g., gpt-4o-mini)
### Cost Optimization
- **Model Selection**: Use smaller, efficient models for guardrail evaluation
- **Context Size**: Keep reference context concise but comprehensive
- **Caching**: Consider caching validation results for repeated content (see the sketch below)
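As a rough sketch of the caching idea above, the wrapper below memoizes verdicts for identical output text so repeated content is only evaluated once. It assumes the guardrail can be invoked as a callable on the task output, as in the validation-flow snippet earlier on this page; whether a plain wrapper object is accepted everywhere a `HallucinationGuardrail` instance is depends on your CrewAI version, so treat it as a starting point rather than a drop-in.
```python
import hashlib

class CachedGuardrail:
    """Illustrative wrapper that caches guardrail results by output text."""

    def __init__(self, guardrail):
        self.guardrail = guardrail
        self._cache = {}  # sha256 of the output text -> previously returned result

    def __call__(self, task_output):
        # Identical output text skips a second LLM evaluation
        key = hashlib.sha256(str(task_output).encode("utf-8")).hexdigest()
        if key not in self._cache:
            self._cache[key] = self.guardrail(task_output)
        return self._cache[key]

# Usage sketch: cached = CachedGuardrail(guardrail); Task(..., guardrail=cached)
```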
## Troubleshooting
<Accordion title="Validation Always Fails">
**Possible Causes:**
- Context is too restrictive or unrelated to task output
- Threshold is set too high for the content type
- Reference context contains outdated information
**Solutions:**
- Review and update context to match task requirements
- Lower threshold or use default verdict-based validation
- Ensure context is current and accurate
</Accordion>
<Accordion title="False Positives (Valid Content Marked Invalid)">
**Possible Causes:**
- Threshold too high for creative or interpretive tasks
- Context doesn't cover all valid aspects of the output
- Evaluation model being overly conservative
**Solutions:**
- Lower threshold or use default validation
- Expand context to include broader acceptable content
- Test with different evaluation models
</Accordion>
<Accordion title="Evaluation Errors">
**Possible Causes:**
- Network connectivity issues
- LLM model unavailable or rate limited
- Malformed task output or context
**Solutions:**
- Check network connectivity and LLM service status
- Implement retry logic for transient failures (see the retry sketch below)
- Validate task output format before guardrail evaluation
</Accordion>
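Following the retry suggestion above, here is a minimal, generic sketch of retrying guardrail evaluation on transient failures. The exception handling and backoff values are assumptions for illustration; narrow them to the errors you actually observe in your environment.
```python
import time

def evaluate_with_retry(guardrail, task_output, attempts=3, base_delay=2.0):
    """Retry guardrail evaluation with a simple linear backoff (illustrative)."""
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return guardrail(task_output)
        except Exception as exc:  # replace with the specific transient errors you see
            last_error = exc
            if attempt < attempts:
                time.sleep(base_delay * attempt)
    raise last_error
```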
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with hallucination guardrail configuration or troubleshooting.
</Card>


@@ -1,46 +0,0 @@
---
title: Marketplace
description: "Discover, install, and govern reusable assets for your enterprise crews."
icon: "store"
mode: "wide"
---
## Overview
The Marketplace provides a curated surface for discovering integrations, internal tools, and reusable assets that accelerate crew development.
<Frame>
![Marketplace Overview](/images/enterprise/marketplace-overview.png)
</Frame>
## Discoverability
- Browse by category and capability
- Search for assets by name or keyword
## Install & Enable
- One-click install for approved assets
- Enable or disable per crew as needed
- Configure required environment variables and scopes
<Frame>
![Install & Configure](/images/enterprise/marketplace-install.png)
</Frame>
You can also download templates directly from the marketplace by clicking the `Download` button, so you can use them locally or refine them to your needs.
## Related
<CardGroup cols={3}>
<Card title="Tools & Integrations" href="/en/enterprise/features/tools-and-integrations" icon="wrench">
Connect external apps and manage internal tools your agents can use.
</Card>
<Card title="Tool Repository" href="/en/enterprise/features/tool-repository" icon="toolbox">
Publish and install tools to enhance your crews' capabilities.
</Card>
<Card title="Agents Repository" href="/en/enterprise/features/agent-repositories" icon="people-group">
Store, share, and reuse agent definitions across teams and projects.
</Card>
</CardGroup>


@@ -1,102 +0,0 @@
---
title: "Role-Based Access Control (RBAC)"
description: "Control access to crews, tools, and data with roles, scopes, and granular permissions."
icon: "shield"
mode: "wide"
---
## Overview
RBAC in CrewAI AMP enables secure, scalable access management through a combination of organization-level roles and automation-level visibility controls.
<Frame>
<img src="/images/enterprise/users_and_roles.png" alt="RBAC overview in CrewAI AMP" />
</Frame>
## Users and Roles
Each member in your CrewAI workspace is assigned a role, which determines their access across various features.
You can:
- Use predefined roles (Owner, Member)
- Create custom roles tailored to specific permissions
- Assign roles at any time through the settings panel
You can configure users and roles in Settings → Roles.
<Steps>
<Step title="Open Roles settings">
Go to <b>Settings → Roles</b> in CrewAI AMP.
</Step>
<Step title="Choose a role type">
Use a predefined role (<b>Owner</b>, <b>Member</b>) or click <b>Create role</b> to define a custom one.
</Step>
<Step title="Assign to members">
Select users and assign the role. You can change this anytime.
</Step>
</Steps>
### Configuration summary
| Area | Where to configure | Options |
|:---|:---|:---|
| Users & Roles | Settings → Roles | Predefined: Owner, Member; Custom roles |
| Automation visibility | Automation → Settings → Visibility | Private; Whitelist users/roles |
## Automation-level Access Control
In addition to organization-wide roles, CrewAI Automations support fine-grained visibility settings that let you restrict access to specific automations by user or role.
This is useful for:
- Keeping sensitive or experimental automations private
- Managing visibility across large teams or external collaborators
- Testing automations in isolated contexts
Deployments can be configured as private, meaning only whitelisted users and roles will be able to:
- View the deployment
- Run it or interact with its API
- Access its logs, metrics, and settings
The organization owner always has access, regardless of visibility settings.
You can configure automation-level access control in the Automation → Settings → Visibility tab.
<Steps>
<Step title="Open Visibility tab">
Navigate to <b>Automation → Settings → Visibility</b>.
</Step>
<Step title="Set visibility">
Choose <b>Private</b> to restrict access. The organization owner always retains access.
</Step>
<Step title="Whitelist access">
Add specific users and roles allowed to view, run, and access logs/metrics/settings.
</Step>
<Step title="Save and verify">
Save changes, then confirm that non-whitelisted users cannot view or run the automation.
</Step>
</Steps>
### Private visibility: access outcomes
| Action | Owner | Whitelisted user/role | Not whitelisted |
|:---|:---|:---|:---|
| View automation | ✓ | ✓ | ✗ |
| Run automation/API | ✓ | ✓ | ✗ |
| Access logs/metrics/settings | ✓ | ✓ | ✗ |
<Tip>
The organization owner always has access. In private mode, only whitelisted users and roles can view, run, and access logs/metrics/settings.
</Tip>
<Frame>
<img src="/images/enterprise/visibility.png" alt="Automation Visibility settings in CrewAI AMP" />
</Frame>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with RBAC questions.
</Card>


@@ -1,250 +0,0 @@
---
title: Tools & Integrations
description: "Connect external apps and manage internal tools your agents can use."
icon: "wrench"
mode: "wide"
---
## Overview
Tools & Integrations is the central hub for connecting third-party apps and managing internal tools that your agents can use at runtime.
<Frame>
![Tools & Integrations Overview](/images/enterprise/crew_connectors.png)
</Frame>
## Explore
<Tabs>
<Tab title="Integrations" icon="plug">
## Agent Apps (Integrations)
Connect enterprise-grade applications (e.g., Gmail, Google Drive, HubSpot, Slack) via OAuth to enable agent actions.
<Steps>
<Step title="Connect">
Click <b>Connect</b> on an app and complete OAuth.
</Step>
<Step title="Configure">
Optionally adjust scopes, triggers, and action availability.
</Step>
<Step title="Use in Agents">
Connected services become available as tools for your agents.
</Step>
</Steps>
<Frame>
![Integrations Grid](/images/enterprise/agent-apps.png)
</Frame>
### Connect your Account
1. Go to <Link href="https://app.crewai.com/crewai_plus/connectors">Integrations</Link>
2. Click <b>Connect</b> on the desired service
3. Complete the OAuth flow and grant scopes
4. Copy your Enterprise Token from <Link href="https://app.crewai.com/crewai_plus/settings/integrations">Integration Settings</Link>
<Frame>
![Enterprise Token](/images/enterprise/enterprise_action_auth_token.png)
</Frame>
### Install Integration Tools
To use the integrations locally, you need to install the latest `crewai-tools` package.
```bash
uv add crewai-tools
```
### Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
### Usage Example
<Tip>
Use the new streamlined approach to integrate enterprise apps. Simply specify the app and its actions directly in the Agent configuration.
</Tip>
```python
from crewai import Agent, Task, Crew
# Create an agent with Gmail capabilities
email_agent = Agent(
role="Email Manager",
goal="Manage and organize email communications",
backstory="An AI assistant specialized in email management and communication.",
apps=['gmail', 'gmail/send_email'] # Using canonical name 'gmail'
)
# Task to send an email
email_task = Task(
description="Draft and send a follow-up email to john@example.com about the project update",
agent=email_agent,
expected_output="Confirmation that email was sent successfully"
)
# Run the task
crew = Crew(
agents=[email_agent],
tasks=[email_task]
)
# Run the crew
crew.kickoff()
```
### Filtering Tools
```python
from crewai import Agent, Task, Crew
# Create agent with specific Gmail actions only
gmail_agent = Agent(
role="Gmail Manager",
goal="Manage gmail communications and notifications",
backstory="An AI assistant that helps coordinate gmail communications.",
apps=['gmail/fetch_emails'] # Using canonical name with specific action
)
notification_task = Task(
description="Find the email from john@example.com",
agent=gmail_agent,
expected_output="Email found from john@example.com"
)
crew = Crew(
agents=[gmail_agent],
tasks=[notification_task]
)
```
On a deployed crew, you can specify which actions are available for each integration from the service settings page.
<Frame>
![Filter Actions](/images/enterprise/filtering_enterprise_action_tools.png)
</Frame>
### Scoped Deployments (multi-user orgs)
You can scope each integration to a specific user. For example, a crew that connects to Google can use a specific user's Gmail account.
<Tip>
Useful when different teams/users must keep data access separated.
</Tip>
Use the `user_bearer_token` to scope authentication to the requesting user. If the user isn't logged in, the crew won't use connected integrations. Otherwise it falls back to the default bearer token configured for the deployment.
<Frame>
![User Bearer Token](/images/enterprise/user_bearer_token.png)
</Frame>
<div id="catalog"></div>
### Catalog
#### Communication & Collaboration
- Gmail — Manage emails and drafts
- Slack — Workspace notifications and alerts
- Microsoft — Office 365 and Teams integration
#### Project Management
- Jira — Issue tracking and project management
- ClickUp — Task and productivity management
- Asana — Team task and project coordination
- Notion — Page and database management
- Linear — Software project and bug tracking
- GitHub — Repository and issue management
#### Customer Relationship Management
- Salesforce — CRM account and opportunity management
- HubSpot — Sales pipeline and contact management
- Zendesk — Customer support ticket management
#### Business & Finance
- Stripe — Payment processing and customer management
- Shopify — E-commerce store and product management
#### Productivity & Storage
- Google Sheets — Spreadsheet data synchronization
- Google Calendar — Event and schedule management
- Box — File storage and document management
…and more to come!
</Tab>
<Tab title="Internal Tools" icon="toolbox">
## Internal Tools
Create custom tools locally, publish them to the CrewAI AMP Tool Repository, and use them in your agents.
<Tip>
Before running the commands below, make sure you log in to your CrewAI AMP account by running this command:
```bash
crewai login
```
</Tip>
<Frame>
![Internal Tool Detail](/images/enterprise/tools-integrations-internal.png)
</Frame>
<Steps>
<Step title="Create">
Create a new tool locally.
```bash
crewai tool create your-tool
```
</Step>
<Step title="Publish">
Publish the tool to the CrewAI AMP Tool Repository.
```bash
crewai tool publish
```
</Step>
<Step title="Install">
Install the tool from the CrewAI AMP Tool Repository.
```bash
crewai tool install your-tool
```
</Step>
</Steps>
Manage:
- Name and description
- Visibility (Private / Public)
- Required environment variables
- Version history and downloads
- Team and role access
<Frame>
![Internal Tool Detail](/images/enterprise/tool-configs.png)
</Frame>
</Tab>
</Tabs>
## Related
<CardGroup cols={2}>
<Card title="Tool Repository" href="/en/enterprise/features/tool-repository" icon="toolbox">
Create, publish, and version custom tools for your organization.
</Card>
<Card title="Webhook Automation" href="/en/enterprise/guides/webhook-automation" icon="bolt">
Automate workflows and integrate with external platforms and services.
</Card>
</CardGroup>


@@ -1,157 +0,0 @@
---
title: Traces
description: "Using Traces to monitor your Crews"
icon: "timeline"
mode: "wide"
---
## Overview
Traces provide comprehensive visibility into your crew executions, helping you monitor performance, debug issues, and optimize your AI agent workflows.
## What are Traces?
Traces in CrewAI AMP are detailed execution records that capture every aspect of your crew's operation, from initial inputs to final outputs. They record:
- Agent thoughts and reasoning
- Task execution details
- Tool usage and outputs
- Token consumption metrics
- Execution times
- Cost estimates
<Frame>
![Traces Overview](/images/enterprise/traces-overview.png)
</Frame>
## Accessing Traces
<Steps>
<Step title="Navigate to the Traces Tab">
Once in your CrewAI AMP dashboard, click on the **Traces** tab to view all execution records.
</Step>
<Step title="Select an Execution">
You'll see a list of all crew executions, sorted by date. Click on any execution to view its detailed trace.
</Step>
</Steps>
## Understanding the Trace Interface
The trace interface is divided into several sections, each providing different insights into your crew's execution:
### 1. Execution Summary
The top section displays high-level metrics about the execution:
- **Total Tokens**: Number of tokens consumed across all tasks
- **Prompt Tokens**: Tokens used in prompts to the LLM
- **Completion Tokens**: Tokens generated in LLM responses
- **Requests**: Number of API calls made
- **Execution Time**: Total duration of the crew run
- **Estimated Cost**: Approximate cost based on token usage
<Frame>
![Execution Summary](/images/enterprise/trace-summary.png)
</Frame>
### 2. Tasks & Agents
This section shows all tasks and agents that were part of the crew execution:
- Task name and agent assignment
- Agents and LLMs used for each task
- Status (completed/failed)
- Individual execution time of the task
<Frame>
![Task List](/images/enterprise/trace-tasks.png)
</Frame>
### 3. Final Output
Displays the final result produced by the crew after all tasks are completed.
<Frame>
![Final Output](/images/enterprise/final-output.png)
</Frame>
### 4. Execution Timeline
A visual representation of when each task started and ended, helping you identify bottlenecks or parallel execution patterns.
<Frame>
![Execution Timeline](/images/enterprise/trace-timeline.png)
</Frame>
### 5. Detailed Task View
When you click on a specific task in the timeline or task list, you'll see:
<Frame>
![Detailed Task View](/images/enterprise/trace-detailed-task.png)
</Frame>
- **Task Key**: Unique identifier for the task
- **Task ID**: Technical identifier in the system
- **Status**: Current state (completed/running/failed)
- **Agent**: Which agent performed the task
- **LLM**: Language model used for this task
- **Start/End Time**: When the task began and completed
- **Execution Time**: Duration of this specific task
- **Task Description**: What the agent was instructed to do
- **Expected Output**: What output format was requested
- **Input**: Any input provided to this task from previous tasks
- **Output**: The actual result produced by the agent
## Using Traces for Debugging
Traces are invaluable for troubleshooting issues with your crews:
<Steps>
<Step title="Identify Failure Points">
When a crew execution doesn't produce the expected results, examine the trace to find where things went wrong. Look for:
- Failed tasks
- Unexpected agent decisions
- Tool usage errors
- Misinterpreted instructions
<Frame>
![Failure Points](/images/enterprise/failure.png)
</Frame>
</Step>
<Step title="Optimize Performance">
Use execution metrics to identify performance bottlenecks:
- Tasks that took longer than expected
- Excessive token usage
- Redundant tool operations
- Unnecessary API calls
</Step>
<Step title="Improve Cost Efficiency">
Analyze token usage and cost estimates to optimize your crew's efficiency:
- Consider using smaller models for simpler tasks
- Refine prompts to be more concise
- Cache frequently accessed information
- Structure tasks to minimize redundant operations
</Step>
</Steps>
## Performance and batching
CrewAI batches trace uploads to reduce overhead on high-volume runs:
- A TraceBatchManager buffers events and sends them in batches via the Plus API client
- Reduces network chatter and improves reliability on flaky connections
- Automatically enabled in the default trace listener; no configuration needed
This yields more stable tracing under load while preserving detailed task/agent telemetry.
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with trace analysis or any other CrewAI AMP features.
</Card>


@@ -1,168 +0,0 @@
---
title: Webhook Streaming
description: "Using Webhook Streaming to stream events to your webhook"
icon: "webhook"
mode: "wide"
---
## Overview
Enterprise Event Streaming lets you receive real-time webhook updates about your crews and flows deployed to
CrewAI AMP, such as model calls, tool usage, and flow steps.
## Usage
When using the Kickoff API, include a `webhooks` object in your request, for example:
```json
{
"inputs": {"foo": "bar"},
"webhooks": {
"events": ["crew_kickoff_started", "llm_call_started"],
"url": "https://your.endpoint/webhook",
"realtime": false,
"authentication": {
"strategy": "bearer",
"token": "my-secret-token"
}
}
}
```
If `realtime` is set to `true`, each event is delivered individually and immediately, at the cost of crew/flow performance.
## Webhook Format
Each webhook sends a list of events:
```json
{
"events": [
{
"id": "event-id",
"execution_id": "crew-run-id",
"timestamp": "2025-02-16T10:58:44.965Z",
"type": "llm_call_started",
"data": {
"model": "gpt-4",
"messages": [
{"role": "system", "content": "You are an assistant."},
{"role": "user", "content": "Summarize this article."}
]
}
}
]
}
```
The `data` object structure varies by event type. Refer to the [event list](https://github.com/crewAIInc/crewAI/tree/main/src/crewai/utilities/events) on GitHub.
As requests are sent over HTTP, the order of events can't be guaranteed. If you need ordering, use the `timestamp` field.
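For illustration only, a minimal receiver that checks the configured bearer token and re-orders events by `timestamp` could look like the sketch below. The endpoint path, token handling, and print-based processing are assumptions for the example; only the payload shape shown above comes from this page.
```python
from flask import Flask, jsonify, request

app = Flask(__name__)
EXPECTED_TOKEN = "my-secret-token"  # must match the token sent in the kickoff request

@app.route("/webhook", methods=["POST"])
def handle_events():
    # Reject requests that do not carry the expected bearer token
    if request.headers.get("Authorization", "") != f"Bearer {EXPECTED_TOKEN}":
        return jsonify({"error": "unauthorized"}), 401

    events = request.get_json(force=True).get("events", [])
    # Delivery order is not guaranteed, so sort by the timestamp field before processing
    for event in sorted(events, key=lambda e: e["timestamp"]):
        print(event["type"], event["execution_id"])  # replace with your own handling

    return jsonify({"status": "ok"}), 200
```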
## Supported Events
CrewAI supports both system events and custom events in Enterprise Event Streaming. These events are sent to your configured webhook endpoint during crew and flow execution.
### Flow Events:
- `flow_created`
- `flow_started`
- `flow_finished`
- `flow_plot`
- `method_execution_started`
- `method_execution_finished`
- `method_execution_failed`
### Agent Events:
- `agent_execution_started`
- `agent_execution_completed`
- `agent_execution_error`
- `lite_agent_execution_started`
- `lite_agent_execution_completed`
- `lite_agent_execution_error`
- `agent_logs_started`
- `agent_logs_execution`
- `agent_evaluation_started`
- `agent_evaluation_completed`
- `agent_evaluation_failed`
### Crew Events:
- `crew_kickoff_started`
- `crew_kickoff_completed`
- `crew_kickoff_failed`
- `crew_train_started`
- `crew_train_completed`
- `crew_train_failed`
- `crew_test_started`
- `crew_test_completed`
- `crew_test_failed`
- `crew_test_result`
### Task Events:
- `task_started`
- `task_completed`
- `task_failed`
- `task_evaluation`
### Tool Usage Events:
- `tool_usage_started`
- `tool_usage_finished`
- `tool_usage_error`
- `tool_validate_input_error`
- `tool_selection_error`
- `tool_execution_error`
### LLM Events:
- `llm_call_started`
- `llm_call_completed`
- `llm_call_failed`
- `llm_stream_chunk`
### LLM Guardrail Events:
- `llm_guardrail_started`
- `llm_guardrail_completed`
### Memory Events:
- `memory_query_started`
- `memory_query_completed`
- `memory_query_failed`
- `memory_save_started`
- `memory_save_completed`
- `memory_save_failed`
- `memory_retrieval_started`
- `memory_retrieval_completed`
### Knowledge Events:
- `knowledge_search_query_started`
- `knowledge_search_query_completed`
- `knowledge_search_query_failed`
- `knowledge_query_started`
- `knowledge_query_completed`
- `knowledge_query_failed`
### Reasoning Events:
- `agent_reasoning_started`
- `agent_reasoning_completed`
- `agent_reasoning_failed`
Event names match the internal event bus. See GitHub for the full list of events.
You can emit your own custom events, and they will be delivered through the webhook stream alongside system events.
<CardGroup>
<Card title="GitHub" icon="github" href="https://github.com/crewAIInc/crewAI/tree/main/src/crewai/utilities/events">
Full list of events
</Card>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with webhook integration or troubleshooting.
</Card>
</CardGroup>


@@ -1,283 +0,0 @@
---
title: "Triggers Overview"
description: "Understand how CrewAI AMP triggers work, how to manage them, and where to find integration-specific playbooks"
icon: "face-smile"
mode: "wide"
---
CrewAI AMP triggers connect your automations to real-time events across the tools your teams already use. Instead of polling systems or relying on manual kickoffs, triggers listen for changes—new emails, calendar updates, CRM status changes—and immediately launch the crew or flow you specify.
<Frame>
![Automation Triggers Overview](/images/enterprise/crew_connectors.png)
</Frame>
### Integration Playbooks
Deep-dive guides walk through setup and sample workflows for each integration:
<CardGroup cols={2}>
<Card title="Gmail Trigger" icon="envelope">
<a href="/en/enterprise/guides/gmail-trigger">Enable crews when emails arrive or threads update.</a>
</Card>
<Card title="Google Calendar Trigger" icon="calendar-days">
<a href="/en/enterprise/guides/google-calendar-trigger">React to calendar events as they are created, updated, or cancelled.</a>
</Card>
<Card title="Google Drive Trigger" icon="folder-open">
<a href="/en/enterprise/guides/google-drive-trigger">Handle Drive file uploads, edits, and deletions.</a>
</Card>
<Card title="Outlook Trigger" icon="envelope-open">
<a href="/en/enterprise/guides/outlook-trigger">Automate responses to new Outlook messages and calendar updates.</a>
</Card>
<Card title="OneDrive Trigger" icon="cloud">
<a href="/en/enterprise/guides/onedrive-trigger">Audit file activity and sharing changes in OneDrive.</a>
</Card>
<Card title="Microsoft Teams Trigger" icon="comments">
<a href="/en/enterprise/guides/microsoft-teams-trigger">Kick off workflows when new Teams chats start.</a>
</Card>
<Card title="HubSpot Trigger" icon="hubspot">
<a href="/en/enterprise/guides/hubspot-trigger">Launch automations from HubSpot workflows and lifecycle events.</a>
</Card>
<Card title="Salesforce Trigger" icon="salesforce">
<a href="/en/enterprise/guides/salesforce-trigger">Connect Salesforce processes to CrewAI for CRM automation.</a>
</Card>
<Card title="Slack Trigger" icon="slack">
<a href="/en/enterprise/guides/slack-trigger">Start crews directly from Slack slash commands.</a>
</Card>
<Card title="Zapier Trigger" icon="bolt">
<a href="/en/enterprise/guides/zapier-trigger">Bridge CrewAI with thousands of Zapier-supported apps.</a>
</Card>
</CardGroup>
## Trigger Capabilities
With triggers, you can:
- **Respond to real-time events** - Automatically execute workflows when specific conditions are met
- **Integrate with external systems** - Connect with platforms like Gmail, Outlook, OneDrive, JIRA, Slack, Stripe and more
- **Scale your automation** - Handle high-volume events without manual intervention
- **Maintain context** - Access trigger data within your crews and flows
## Managing Triggers
### Viewing Available Triggers
To access and manage your automation triggers:
1. Navigate to your deployment in the CrewAI dashboard
2. Click on the **Triggers** tab to view all available trigger integrations
<Frame caption="Example of available automation triggers for a Gmail deployment">
<img src="/images/enterprise/list-available-triggers.png" alt="List of available automation triggers" />
</Frame>
This view shows all the trigger integrations available for your deployment, along with their current connection status.
### Enabling and Disabling Triggers
Each trigger can be easily enabled or disabled using the toggle switch:
<Frame caption="Enable or disable triggers with toggle">
<img src="/images/enterprise/trigger-selected.png" alt="Enable or disable triggers with toggle" />
</Frame>
- **Enabled (blue toggle)**: The trigger is active and will automatically execute your deployment when the specified events occur
- **Disabled (gray toggle)**: The trigger is inactive and will not respond to events
Simply click the toggle to change the trigger state. Changes take effect immediately.
### Monitoring Trigger Executions
Track the performance and history of your triggered executions:
<Frame caption="List of executions triggered by automation">
<img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
</Frame>
## Building Trigger-Driven Automations
Before building your automation, it's helpful to understand the structure of trigger payloads that your crews and flows will receive.
### Trigger Setup Checklist
Before wiring a trigger into production, make sure you:
- Connect the integration under **Tools & Integrations** and complete any OAuth or API key steps
- Enable the trigger toggle on the deployment that should respond to events
- Provide any required environment variables (API tokens, tenant IDs, shared secrets)
- Create or update tasks that can parse the incoming payload within the first crew task or flow step
- Decide whether to pass trigger context automatically using `allow_crewai_trigger_context`
- Set up monitoring—webhook logs, CrewAI execution history, and optional external alerting
### Testing Triggers Locally with CLI
The CrewAI CLI provides powerful commands to help you develop and test trigger-driven automations without deploying to production.
#### List Available Triggers
View all available triggers for your connected integrations:
```bash
crewai triggers list
```
This command displays all triggers available based on your connected integrations, showing:
- Integration name and connection status
- Available trigger types
- Trigger names and descriptions
#### Simulate Trigger Execution
Test your crew with realistic trigger payloads before deployment:
```bash
crewai triggers run <trigger_name>
```
For example:
```bash
crewai triggers run microsoft_onedrive/file_changed
```
This command:
- Executes your crew locally
- Passes a complete, realistic trigger payload
- Simulates exactly how your crew will be called in production
<Warning>
**Important Development Notes:**
- Use `crewai triggers run <trigger>` to simulate trigger execution during development
- Using `crewai run` will NOT simulate trigger calls and won't pass the trigger payload
- After deployment, your crew will be executed with the actual trigger payload
- If your crew expects parameters that aren't in the trigger payload, execution may fail
</Warning>
### Triggers with Crew
Your existing crew definitions work seamlessly with triggers, you just need to have a task to parse the received payload:
```python
@CrewBase
class MyAutomatedCrew:
    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
        )

    @task
    def parse_trigger_payload(self) -> Task:
        return Task(
            config=self.tasks_config['parse_trigger_payload'],
            agent=self.researcher(),
        )

    @task
    def analyze_trigger_content(self) -> Task:
        return Task(
            config=self.tasks_config['analyze_trigger_data'],
            agent=self.researcher(),
        )
```
The crew will automatically receive and can access the trigger payload through the standard CrewAI context mechanisms.
<Note>
Crew and Flow inputs can include `crewai_trigger_payload`. CrewAI automatically injects this payload:
- Tasks: appended to the first task's description by default ("Trigger Payload: {crewai_trigger_payload}")
- Control via `allow_crewai_trigger_context`: set `True` to always inject, `False` to never inject
- Flows: any `@start()` method that accepts a `crewai_trigger_payload` parameter will receive it
</Note>
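Building on the note above, the sketch below shows one way the flag might be set per task. It assumes `allow_crewai_trigger_context` is accepted as a `Task` field in your installed CrewAI version, and the agents referenced are hypothetical, so verify against your own project before relying on it.
```python
from crewai import Task

# Always inject the trigger payload into this task's description,
# even though it is not the first task (assumption: Task-level flag).
triage_task = Task(
    description="Triage the incoming event and decide on a follow-up",
    expected_output="A short triage decision referencing the trigger payload",
    agent=triage_agent,  # hypothetical agent defined elsewhere
    allow_crewai_trigger_context=True,
)

# Never inject the payload here, even if this task happens to run first.
report_task = Task(
    description="Write the weekly status report",
    expected_output="A formatted status report",
    agent=reporting_agent,  # hypothetical agent defined elsewhere
    allow_crewai_trigger_context=False,
)
```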
### Integration with Flows
For flows, you have more control over how trigger data is handled:
#### Accessing Trigger Payload
All `@start()` methods in your flows will accept an additional parameter called `crewai_trigger_payload`:
```python
from crewai.flow import Flow, start, listen
class MyAutomatedFlow(Flow):
    @start()
    def handle_trigger(self, crewai_trigger_payload: dict = None):
        """
        This start method can receive trigger data
        """
        if crewai_trigger_payload:
            # Process the trigger data
            trigger_id = crewai_trigger_payload.get('id')
            event_data = crewai_trigger_payload.get('payload', {})
            # Store in flow state for use by other methods
            self.state.trigger_id = trigger_id
            self.state.trigger_data = event_data
            return event_data
        # Handle manual execution
        return None

    @listen(handle_trigger)
    def process_data(self, trigger_data):
        """
        Process the data from the trigger
        """
        # ... process the trigger
```
#### Triggering Crews from Flows
When kicking off a crew within a flow that was triggered, pass the trigger payload as is:
```python
@start()
def delegate_to_crew(self, crewai_trigger_payload: dict = None):
    """
    Delegate processing to a specialized crew
    """
    crew = MySpecializedCrew()
    # Pass the trigger payload to the crew
    result = crew.crew().kickoff(
        inputs={
            'a_custom_parameter': "custom_value",
            'crewai_trigger_payload': crewai_trigger_payload
        },
    )
    return result
```
## Troubleshooting
**Trigger not firing:**
- Verify the trigger is enabled in your deployment's Triggers tab
- Check integration connection status under Tools & Integrations
- Ensure all required environment variables are properly configured
**Execution failures:**
- Check the execution logs for error details
- Use `crewai triggers run <trigger_name>` to test locally and see the exact payload structure
- Verify your crew can handle the `crewai_trigger_payload` parameter
- Ensure your crew doesn't expect parameters that aren't included in the trigger payload
**Development issues:**
- Always test with `crewai triggers run <trigger>` before deploying to see the complete payload
- Remember that `crewai run` does NOT simulate trigger calls—use `crewai triggers run` instead
- Use `crewai triggers list` to verify which triggers are available for your connected integrations
- After deployment, your crew will receive the actual trigger payload, so test thoroughly locally first
Automation triggers transform your CrewAI deployments into responsive, event-driven systems that can seamlessly integrate with your existing business processes and tools.


@@ -1,52 +0,0 @@
---
title: "Azure OpenAI Setup"
description: "Configure Azure OpenAI with Crew Studio for enterprise LLM connections"
icon: "microsoft"
mode: "wide"
---
This guide walks you through connecting Azure OpenAI with Crew Studio for seamless enterprise AI operations.
## Setup Process
<Steps>
<Step title="Access Azure AI Foundry">
1. In Azure, go to [Azure AI Foundry](https://ai.azure.com/) > select your Azure OpenAI deployment.
2. On the left menu, click `Deployments`. If you don't have one, create a deployment with your desired model.
3. Once created, select your deployment and locate the `Target URI` and `Key` on the right side of the page. Keep this page open, as you'll need this information.
<Frame>
<img src="/images/enterprise/azure-openai-studio.png" alt="Azure AI Foundry" />
</Frame>
</Step>
<Step title="Configure CrewAI AMP Connection">
4. In another tab, open `CrewAI AMP > LLM Connections`. Name your LLM Connection, select Azure as the provider, and choose the same model you selected in Azure.
5. On the same page, add environment variables from step 3:
- One named `AZURE_DEPLOYMENT_TARGET_URL` (using the Target URI). The URL should look like this: https://your-deployment.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-08-01-preview
- Another named `AZURE_API_KEY` (using the Key).
6. Click `Add Connection` to save your LLM Connection.
</Step>
<Step title="Set Default Configuration">
7. In `CrewAI AMP > Settings > Defaults > Crew Studio LLM Settings`, set the new LLM Connection and model as defaults.
</Step>
<Step title="Configure Network Access">
8. Ensure network access settings:
- In Azure, go to `Azure OpenAI > select your deployment`.
- Navigate to `Resource Management > Networking`.
- Ensure that `Allow access from all networks` is enabled. If this setting is restricted, CrewAI may be blocked from accessing your Azure OpenAI endpoint.
</Step>
</Steps>
## Verification
You're all set! Crew Studio will now use your Azure OpenAI connection. Test the connection by creating a simple crew or task to ensure everything is working properly.
## Troubleshooting
If you encounter issues:
- Verify the Target URI format matches the expected pattern
- Check that the API key is correct and has proper permissions
- Ensure network access is configured to allow CrewAI connections
- Confirm the deployment model matches what you've configured in CrewAI


@@ -1,42 +0,0 @@
---
title: "Build Crew"
description: "A Crew is a group of agents that work together to complete a task."
icon: "people-arrows"
mode: "wide"
---
## Overview
[CrewAI AMP](https://app.crewai.com) streamlines the process of **creating**, **deploying**, and **managing** your AI agents in production environments.
## Getting Started
<iframe
className="w-full aspect-video rounded-xl"
src="https://www.youtube.com/embed/-kSOTtYzgEw"
title="Building crews with the CrewAI CLI"
frameBorder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen
></iframe>
### Installation and Setup
<Card title="Follow Standard Installation" icon="wrench" href="/en/installation">
Follow our standard installation guide to set up CrewAI CLI and create your first project.
</Card>
### Building Your Crew
<Card title="Quickstart Tutorial" icon="rocket" href="/en/quickstart">
Follow our quickstart guide to create your first agent crew using YAML configuration.
</Card>
## Support and Resources
For Enterprise-specific support or questions, contact our dedicated support team at [support@crewai.com](mailto:support@crewai.com).
<Card title="Schedule a Demo" icon="calendar" href="mailto:support@crewai.com">
Book time with our team to learn more about Enterprise features and how they can benefit your organization.
</Card>


@@ -1,35 +0,0 @@
---
title: "Open Telemetry Logs"
description: "Understand how to capture telemetry logs from your CrewAI AMP deployments"
icon: "magnifying-glass-chart"
mode: "wide"
---
CrewAI AMP provides a powerful way to capture telemetry logs from your deployments. This allows you to monitor the performance of your agents and workflows, and to debug issues that may arise.
## Prerequisites
<CardGroup cols={2}>
<Card title="ENTERPRISE OTEL SETUP enabled" icon="users">
Your organization should have ENTERPRISE OTEL SETUP enabled
</Card>
<Card title="OTEL collector setup" icon="server">
Your organization should have an OTEL collector setup or a provider like Datadog log intake setup
</Card>
</CardGroup>
## How to capture telemetry logs
1. Go to the Settings → Organization tab
2. Configure your OTEL collector setup
3. Save
The example below shows how to set up OTEL log collection to Datadog.
<Frame>
![Capture Telemetry Logs](/images/crewai-otel-export.png)
</Frame>


@@ -1,291 +0,0 @@
---
title: "Deploy Crew"
description: "Deploying a Crew on CrewAI AMP"
icon: "rocket"
mode: "wide"
---
<Note>
After creating a crew locally or through Crew Studio, the next step is deploying it to the CrewAI AMP platform. This guide covers multiple deployment methods to help you choose the best approach for your workflow.
</Note>
## Prerequisites
<CardGroup cols={2}>
<Card title="Crew Ready for Deployment" icon="users">
You should have a working crew either built locally or created through Crew Studio
</Card>
<Card title="GitHub Repository" icon="github">
Your crew code should be in a GitHub repository (for GitHub integration method)
</Card>
</CardGroup>
## Option 1: Deploy Using CrewAI CLI
The CLI provides the fastest way to deploy locally developed crews to the Enterprise platform.
<Steps>
<Step title="Install CrewAI CLI">
If you haven't already, install the CrewAI CLI:
```bash
pip install crewai[tools]
```
<Tip>
The CLI comes with the main CrewAI package, but the `[tools]` extra ensures you have all deployment dependencies.
</Tip>
</Step>
<Step title="Authenticate with the Enterprise Platform">
First, you need to authenticate your CLI with the CrewAI AMP platform:
```bash
# If you already have a CrewAI AMP account, or want to create one:
crewai login
```
When you run this command, the CLI will:
1. Display a URL and a unique device code
2. Open your browser to the authentication page
3. Prompt you to confirm the device
4. Complete the authentication process
Upon successful authentication, you'll see a confirmation message in your terminal!
</Step>
<Step title="Create a Deployment">
From your project directory, run:
```bash
crewai deploy create
```
This command will:
1. Detect your GitHub repository information
2. Identify environment variables in your local `.env` file
3. Securely transfer these variables to the Enterprise platform
4. Create a new deployment with a unique identifier
On successful creation, you'll see a message like:
```shell
Deployment created successfully!
Name: your_project_name
Deployment ID: 01234567-89ab-cdef-0123-456789abcdef
Current Status: Deploy Enqueued
```
</Step>
<Step title="Monitor Deployment Progress">
Track the deployment status with:
```bash
crewai deploy status
```
For detailed logs of the build process:
```bash
crewai deploy logs
```
<Tip>
The first deployment typically takes 10-15 minutes as it builds the container images. Subsequent deployments are much faster.
</Tip>
</Step>
</Steps>
## Additional CLI Commands
The CrewAI CLI offers several commands to manage your deployments:
```bash
# List all your deployments
crewai deploy list
# Get the status of your deployment
crewai deploy status
# View the logs of your deployment
crewai deploy logs
# Push updates after code changes
crewai deploy push
# Remove a deployment
crewai deploy remove <deployment_id>
```
## Option 2: Deploy Directly via Web Interface
You can also deploy your crews directly through the CrewAI AMP web interface by connecting your GitHub account. This approach doesn't require using the CLI on your local machine.
<Steps>
<Step title="Pushing to GitHub">
You need to push your crew to a GitHub repository. If you haven't created a crew yet, you can [follow this tutorial](/en/quickstart).
</Step>
<Step title="Connecting GitHub to CrewAI AMP">
1. Log in to [CrewAI AMP](https://app.crewai.com)
2. Click on the button "Connect GitHub"
<Frame>
![Connect GitHub Button](/images/enterprise/connect-github.png)
</Frame>
</Step>
<Step title="Select the Repository">
After connecting your GitHub account, you'll be able to select which repository to deploy:
<Frame>
![Select Repository](/images/enterprise/select-repo.png)
</Frame>
</Step>
<Step title="Set Environment Variables">
Before deploying, you'll need to set up your environment variables to connect to your LLM provider or other services:
1. You can add variables individually or in bulk
2. Enter your environment variables in `KEY=VALUE` format (one per line)
<Frame>
![Set Environment Variables](/images/enterprise/set-env-variables.png)
</Frame>
</Step>
<Step title="Deploy Your Crew">
1. Click the "Deploy" button to start the deployment process
2. You can monitor the progress through the progress bar
3. The first deployment typically takes around 10-15 minutes; subsequent deployments will be faster
<Frame>
![Deploy Progress](/images/enterprise/deploy-progress.png)
</Frame>
Once deployment is complete, you'll see:
- Your crew's unique URL
- A Bearer token to protect your crew API
- A "Delete" button if you need to remove the deployment
</Step>
</Steps>
## ⚠️ Environment Variable Security Requirements
<Warning>
**Important**: CrewAI AMP has security restrictions on environment variable names that can cause deployment failures if not followed.
</Warning>
### Blocked Environment Variable Patterns
For security reasons, the following environment variable naming patterns are **automatically filtered** and will cause deployment issues:
**Blocked Patterns:**
- Variables ending with `_TOKEN` (e.g., `MY_API_TOKEN`)
- Variables ending with `_PASSWORD` (e.g., `DB_PASSWORD`)
- Variables ending with `_SECRET` (e.g., `API_SECRET`)
- Variables ending with `_KEY` in certain contexts
**Specific Blocked Variables:**
- `GITHUB_USER`, `GITHUB_TOKEN`
- `AWS_REGION`, `AWS_DEFAULT_REGION`
- Various internal CrewAI system variables
### Allowed Exceptions
Some variables are explicitly allowed despite matching blocked patterns:
- `AZURE_AD_TOKEN`
- `AZURE_OPENAI_AD_TOKEN`
- `ENTERPRISE_ACTION_TOKEN`
- `CREWAI_ENTEPRISE_TOOLS_TOKEN`
### How to Fix Naming Issues
If your deployment fails due to environment variable restrictions:
```bash
# ❌ These will cause deployment failures
OPENAI_TOKEN=sk-...
DATABASE_PASSWORD=mypassword
API_SECRET=secret123
# ✅ Use these naming patterns instead
OPENAI_API_KEY=sk-...
DATABASE_CREDENTIALS=mypassword
API_CONFIG=secret123
```
### Best Practices
1. **Use standard naming conventions**: `PROVIDER_API_KEY` instead of `PROVIDER_TOKEN`
2. **Test locally first**: Ensure your crew works with the renamed variables
3. **Update your code**: Change any references to the old variable names
4. **Document changes**: Keep track of renamed variables for your team
<Tip>
If you encounter deployment failures with cryptic environment variable errors, check your variable names against these patterns first.
</Tip>
### Interact with Your Deployed Crew
Once deployment is complete, you can access your crew through:
1. **REST API**: The platform generates a unique HTTPS endpoint with these key routes (see the example request after this list):
- `/inputs`: Lists the required input parameters
- `/kickoff`: Initiates an execution with provided inputs
- `/status/{kickoff_id}`: Checks the execution status
2. **Web Interface**: Visit [app.crewai.com](https://app.crewai.com) to access:
- **Status tab**: View deployment information, API endpoint details, and authentication token
- **Run tab**: Visual representation of your crew's structure
- **Executions tab**: History of all executions
- **Metrics tab**: Performance analytics
- **Traces tab**: Detailed execution insights
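As a rough sketch of calling these routes from Python (the URL, token, inputs, and the `kickoff_id` response field below are placeholders and assumptions, so check your deployment's Status tab and actual responses):
```python
import requests

BASE_URL = "https://your-crew-url.crewai.com"            # placeholder: your crew's unique URL
HEADERS = {"Authorization": "Bearer YOUR_BEARER_TOKEN"}   # token shown on the Status tab

# 1. Discover which inputs the crew expects
print(requests.get(f"{BASE_URL}/inputs", headers=HEADERS).json())

# 2. Start an execution with those inputs
kickoff = requests.post(
    f"{BASE_URL}/kickoff",
    headers=HEADERS,
    json={"inputs": {"topic": "AI trends"}},  # replace with your crew's real inputs
).json()

# 3. Poll the status endpoint until the run completes
kickoff_id = kickoff.get("kickoff_id")  # assumption: inspect the actual response for the field name
print(requests.get(f"{BASE_URL}/status/{kickoff_id}", headers=HEADERS).json())
```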
### Trigger an Execution
From the Enterprise dashboard, you can:
1. Click on your crew's name to open its details
2. Select "Trigger Crew" from the management interface
3. Enter the required inputs in the modal that appears
4. Monitor progress as the execution moves through the pipeline
### Monitoring and Analytics
The Enterprise platform provides comprehensive observability features:
- **Execution Management**: Track active and completed runs
- **Traces**: Detailed breakdowns of each execution
- **Metrics**: Token usage, execution times, and costs
- **Timeline View**: Visual representation of task sequences
### Advanced Features
The Enterprise platform also offers:
- **Environment Variables Management**: Securely store and manage API keys
- **LLM Connections**: Configure integrations with various LLM providers
- **Custom Tools Repository**: Create, share, and install tools
- **Crew Studio**: Build crews through a chat interface without writing code
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with deployment issues or questions about the Enterprise platform.
</Card>


@@ -1,166 +0,0 @@
---
title: "Enable Crew Studio"
description: "Enabling Crew Studio on CrewAI AMP"
icon: "comments"
mode: "wide"
---
<Tip>
Crew Studio is a powerful **no-code/low-code** tool that allows you to quickly scaffold or build Crews through a conversational interface.
</Tip>
## What is Crew Studio?
Crew Studio is an innovative way to create AI agent crews without writing code.
<Frame>
![Crew Studio Interface](/images/enterprise/crew-studio-interface.png)
</Frame>
With Crew Studio, you can:
- Chat with the Crew Assistant to describe your problem
- Automatically generate agents and tasks
- Select appropriate tools
- Configure necessary inputs
- Generate downloadable code for customization
- Deploy directly to the CrewAI AMP platform
## Configuration Steps
Before you can start using Crew Studio, you need to configure your LLM connections:
<Steps>
<Step title="Set Up LLM Connection">
Go to the **LLM Connections** tab in your CrewAI AMP dashboard and create a new LLM connection.
<Note>
Feel free to use any LLM provider you want that is supported by CrewAI.
</Note>
Configure your LLM connection:
- Enter a `Connection Name` (e.g., `OpenAI`)
- Select your model provider: `openai` or `azure`
- Select models you'd like to use in your Studio-generated Crews
- We recommend at least `gpt-4o`, `o1-mini`, and `gpt-4o-mini`
- Add your API key as an environment variable:
- For OpenAI: Add `OPENAI_API_KEY` with your API key
- For Azure OpenAI: Refer to [this article](https://blog.crewai.com/configuring-azure-openai-with-crewai-a-comprehensive-guide/) for configuration details
- Click `Add Connection` to save your configuration
<Frame>
![LLM Connection Configuration](/images/enterprise/llm-connection-config.png)
</Frame>
</Step>
<Step title="Verify Connection Added">
Once you complete the setup, you'll see your new connection added to the list of available connections.
<Frame>
![Connection Added](/images/enterprise/connection-added.png)
</Frame>
</Step>
<Step title="Configure LLM Defaults">
In the main menu, go to **Settings → Defaults** and configure the LLM Defaults settings:
- Select default models for agents and other components
- Set default configurations for Crew Studio
Click `Save Settings` to apply your changes.
<Frame>
![LLM Defaults Configuration](/images/enterprise/llm-defaults.png)
</Frame>
</Step>
</Steps>
## Using Crew Studio
Now that you've configured your LLM connection and default settings, you're ready to start using Crew Studio!
<Steps>
<Step title="Access Studio">
Navigate to the **Studio** section in your CrewAI AMP dashboard.
</Step>
<Step title="Start a Conversation">
Start a conversation with the Crew Assistant by describing the problem you want to solve:
```md
I need a crew that can research the latest AI developments and create a summary report.
```
The Crew Assistant will ask clarifying questions to better understand your requirements.
</Step>
<Step title="Review Generated Crew">
Review the generated crew configuration, including:
- Agents and their roles
- Tasks to be performed
- Required inputs
- Tools to be used
This is your opportunity to refine the configuration before proceeding.
</Step>
<Step title="Deploy or Download">
Once you're satisfied with the configuration, you can:
- Download the generated code for local customization
- Deploy the crew directly to the CrewAI AMP platform
- Modify the configuration and regenerate the crew
</Step>
<Step title="Test Your Crew">
After deployment, test your crew with sample inputs to ensure it performs as expected.
</Step>
</Steps>
<Tip>
For best results, provide clear, detailed descriptions of what you want your crew to accomplish. Include specific inputs and expected outputs in your description.
</Tip>
## Example Workflow
Here's a typical workflow for creating a crew with Crew Studio:
<Steps>
<Step title="Describe Your Problem">
Start by describing your problem:
```md
I need a crew that can analyze financial news and provide investment recommendations
```
</Step>
<Step title="Answer Questions">
Respond to clarifying questions from the Crew Assistant to refine your requirements.
</Step>
<Step title="Review the Plan">
Review the generated crew plan, which might include:
- A Research Agent to gather financial news
- An Analysis Agent to interpret the data
- A Recommendations Agent to provide investment advice
</Step>
<Step title="Approve or Modify">
Approve the plan or request changes if necessary.
</Step>
<Step title="Download or Deploy">
Download the code for customization or deploy directly to the platform.
</Step>
<Step title="Test and Refine">
Test your crew with sample inputs and refine as needed.
</Step>
</Steps>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Crew Studio or any other CrewAI AMP features.
</Card>


@@ -1,88 +0,0 @@
---
title: "Gmail Trigger"
description: "Trigger automations when Gmail events occur (e.g., new emails, labels)."
icon: "envelope"
mode: "wide"
---
## Overview
Use the Gmail Trigger to kick off your deployed crews when Gmail events happen in connected accounts, such as receiving a new email or messages matching a label/filter.
<Tip>
Make sure Gmail is connected in Tools & Integrations and the trigger is enabled for your deployment.
</Tip>
## Enabling the Gmail Trigger
1. Open your deployment in CrewAI AMP
2. Go to the **Triggers** tab
3. Locate **Gmail** and switch the toggle to enable
<Frame>
<img src="/images/enterprise/trigger-selected.png" alt="Enable or disable triggers with toggle" />
</Frame>
## Example: Process new emails
When a new email arrives, the Gmail Trigger will send the payload to your Crew or Flow. Below is a Crew example that parses and processes the trigger payload.
```python
@CrewBase
class GmailProcessingCrew:
    @agent
    def parser(self) -> Agent:
        return Agent(
            config=self.agents_config['parser'],
        )

    @task
    def parse_gmail_payload(self) -> Task:
        return Task(
            config=self.tasks_config['parse_gmail_payload'],
            agent=self.parser(),
        )

    @task
    def act_on_email(self) -> Task:
        return Task(
            config=self.tasks_config['act_on_email'],
            agent=self.parser(),
        )
```
The Gmail payload will be available via the standard context mechanisms.
### Testing Locally
Test your Gmail trigger integration locally using the CrewAI CLI:
```bash
# View all available triggers
crewai triggers list
# Simulate a Gmail trigger with realistic payload
crewai triggers run gmail/new_email
```
The `crewai triggers run` command will execute your crew with a complete Gmail payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run gmail/new_email` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
## Monitoring Executions
Track history and performance of triggered runs:
<Frame>
<img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
</Frame>
## Troubleshooting
- Ensure Gmail is connected in Tools & Integrations
- Verify the Gmail Trigger is enabled on the Triggers tab
- Test locally with `crewai triggers run gmail/new_email` to see the exact payload structure
- Check the execution logs and confirm the payload is passed as `crewai_trigger_payload`
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution


@@ -1,74 +0,0 @@
---
title: "Google Calendar Trigger"
description: "Kick off crews when Google Calendar events are created, updated, or cancelled"
icon: "calendar"
mode: "wide"
---
## Overview
Use the Google Calendar trigger to launch automations whenever calendar events change. Common use cases include briefing a team before a meeting, notifying stakeholders when a critical event is cancelled, or summarizing daily schedules.
<Tip>
Make sure Google Calendar is connected in **Tools & Integrations** and enabled for the deployment you want to automate.
</Tip>
## Enabling the Google Calendar Trigger
1. Open your deployment in CrewAI AMP
2. Go to the **Triggers** tab
3. Locate **Google Calendar** and switch the toggle to enable
<Frame>
<img src="/images/enterprise/calendar-trigger.png" alt="Enable or disable triggers with toggle" />
</Frame>
## Example: Summarize meeting details
The snippet below mirrors the `calendar-event-crew.py` example in the trigger repository. It parses the payload, analyzes the attendees and timing, and produces a meeting brief for downstream tools.
```python
from calendar_event_crew import GoogleCalendarEventTrigger
crew = GoogleCalendarEventTrigger().crew()
result = crew.kickoff({
"crewai_trigger_payload": calendar_payload,
})
print(result.raw)
```
Use `crewai_trigger_payload` exactly as it is delivered by the trigger so the crew can extract the proper fields.
## Testing Locally
Test your Google Calendar trigger integration locally using the CrewAI CLI:
```bash
# View all available triggers
crewai triggers list
# Simulate a Google Calendar trigger with realistic payload
crewai triggers run google_calendar/event_changed
```
The `crewai triggers run` command will execute your crew with a complete Calendar payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run google_calendar/event_changed` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
## Monitoring Executions
The **Executions** list in the deployment dashboard tracks every triggered run and surfaces payload metadata, output summaries, and errors.
<Frame>
<img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
</Frame>
## Troubleshooting
- Ensure the correct Google account is connected and the trigger is enabled
- Test locally with `crewai triggers run google_calendar/event_changed` to see the exact payload structure
- Confirm your workflow handles all-day events (payloads use `start.date` and `end.date` instead of timestamps; see the parsing sketch below)
- Check execution logs if reminders or attendee arrays are missing—calendar permissions can limit fields in the payload
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution
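A small helper for the all-day case might look like the sketch below. It assumes the trigger delivers the event with Google Calendar's `start`/`end` structure, where timed events carry `dateTime` and all-day events carry only `date`; confirm the exact field names with `crewai triggers run google_calendar/event_changed` before relying on them.
```python
def event_window(event: dict) -> tuple:
    """Return (start, end) for both timed and all-day calendar events (illustrative)."""
    start = event.get("start", {})
    end = event.get("end", {})
    # Timed events: ISO timestamps under "dateTime"; all-day events: "date" only
    return (
        start.get("dateTime") or start.get("date"),
        end.get("dateTime") or end.get("date"),
    )

# All-day example: both values come from the "date" fields
print(event_window({"start": {"date": "2025-03-01"}, "end": {"date": "2025-03-02"}}))
```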


@@ -1,71 +0,0 @@
---
title: "Google Drive Trigger"
description: "Respond to Google Drive file events with automated crews"
icon: "folder"
mode: "wide"
---
## Overview
Trigger your automations when files are created, updated, or removed in Google Drive. Typical workflows include summarizing newly uploaded content, enforcing sharing policies, or notifying owners when critical files change.
<Tip>
Connect Google Drive in **Tools & Integrations** and confirm the trigger is enabled for the automation you want to monitor.
</Tip>
## Enabling the Google Drive Trigger
1. Open your deployment in CrewAI AMP
2. Go to the **Triggers** tab
3. Locate **Google Drive** and switch the toggle to enable
<Frame>
<img src="/images/enterprise/gdrive-trigger.png" alt="Enable or disable triggers with toggle" />
</Frame>
## Example: Summarize file activity
The drive example crews parse the payload to extract file metadata, evaluate permissions, and publish a summary.
```python
from drive_file_crew import GoogleDriveFileTrigger
crew = GoogleDriveFileTrigger().crew()
crew.kickoff({
"crewai_trigger_payload": drive_payload,
})
```
## Testing Locally
Test your Google Drive trigger integration locally using the CrewAI CLI:
```bash
# View all available triggers
crewai triggers list
# Simulate a Google Drive trigger with realistic payload
crewai triggers run google_drive/file_changed
```
The `crewai triggers run` command will execute your crew with a complete Drive payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run google_drive/file_changed` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
## Monitoring Executions
Track history and performance of triggered runs with the **Executions** list in the deployment dashboard.
<Frame>
<img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
</Frame>
## Troubleshooting
- Verify Google Drive is connected and the trigger toggle is enabled
- Test locally with `crewai triggers run google_drive/file_changed` to see the exact payload structure
- If a payload is missing permission data, ensure the connected account has access to the file or folder
- The trigger sends file IDs only; if you need to fetch binary content during the crew run, use the Drive API (see the sketch after this list)
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution
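For that last point, here is a hedged sketch of fetching a file's bytes with the Google Drive v3 API via `google-api-python-client`; the `creds` object and `file_id` are assumptions you would supply from your own Google credentials and the trigger payload:
```python
import io

from googleapiclient.discovery import build          # pip install google-api-python-client
from googleapiclient.http import MediaIoBaseDownload

def download_drive_file(creds, file_id: str) -> bytes:
    """Download a file's binary content given a file ID from the trigger payload."""
    service = build("drive", "v3", credentials=creds)
    request = service.files().get_media(fileId=file_id)
    buffer = io.BytesIO()
    downloader = MediaIoBaseDownload(buffer, request)
    done = False
    while not done:
        _, done = downloader.next_chunk()  # streams the file in chunks
    return buffer.getvalue()
```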

View File

@@ -1,52 +0,0 @@
---
title: "HubSpot Trigger"
description: "Trigger CrewAI crews directly from HubSpot Workflows"
icon: "hubspot"
mode: "wide"
---
This guide provides a step-by-step process to set up HubSpot triggers for CrewAI AMP, enabling you to initiate crews directly from HubSpot Workflows.
## Prerequisites
- A CrewAI AMP account
- A HubSpot account with the [HubSpot Workflows](https://knowledge.hubspot.com/workflows/create-workflows) feature
## Setup Steps
<Steps>
<Step title="Connect your HubSpot account with CrewAI AMP">
- Log in to your `CrewAI AMP account > Triggers`
- Select `HubSpot` from the list of available triggers
- Choose the HubSpot account you want to connect with CrewAI AMP
- Follow the on-screen prompts to authorize CrewAI AMP access to your HubSpot account
- A confirmation message will appear once HubSpot is successfully connected with CrewAI AMP
</Step>
<Step title="Create a HubSpot Workflow">
- Log in to your `HubSpot account > Automations > Workflows > New workflow`
- Select the workflow type that fits your needs (e.g., Start from scratch)
- In the workflow builder, click the Plus (+) icon to add a new action.
- Choose `Integrated apps > CrewAI > Kickoff a Crew`.
- Select the Crew you want to initiate.
- Click `Save` to add the action to your workflow
<Frame>
<img src="/images/enterprise/hubspot-workflow-1.png" alt="HubSpot Workflow 1" />
</Frame>
</Step>
<Step title="Use Crew results with other actions">
- After the Kickoff a Crew step, click the Plus (+) icon to add a new action.
- For example, to send an internal email notification, choose `Communications > Send internal email notification`
- In the Body field, click `Insert data`, select `View properties or action outputs from > Action outputs > Crew Result` to include Crew data in the email
<Frame>
<img src="/images/enterprise/hubspot-workflow-2.png" alt="HubSpot Workflow 2" />
</Frame>
- Configure any additional actions as needed
- Review your workflow steps to ensure everything is set up correctly
- Activate the workflow
<Frame>
<img src="/images/enterprise/hubspot-workflow-3.png" alt="HubSpot Workflow 3" />
</Frame>
</Step>
</Steps>
For more detailed information on available actions and customization options, refer to the [HubSpot Workflows Documentation](https://knowledge.hubspot.com/workflows/create-workflows).

View File

@@ -1,101 +0,0 @@
---
title: "HITL Workflows"
description: "Learn how to implement Human-In-The-Loop workflows in CrewAI for enhanced decision-making"
icon: "user-check"
mode: "wide"
---
Human-In-The-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. This guide shows you how to implement HITL within CrewAI.
## Setting Up HITL Workflows
<Steps>
<Step title="Configure Your Task">
Set up your task with human input enabled:
<Frame>
<img src="/images/enterprise/crew-human-input.png" alt="Crew Human Input" />
</Frame>
</Step>
<Step title="Provide Webhook URL">
When kicking off your crew, include a webhook URL for human input:
<Frame>
<img src="/images/enterprise/crew-webhook-url.png" alt="Crew Webhook URL" />
</Frame>
</Step>
<Step title="Receive Webhook Notification">
Once the crew completes the task requiring human input, you'll receive a webhook notification containing:
- **Execution ID**
- **Task ID**
- **Task output**
</Step>
<Step title="Review Task Output">
The system will pause in the `Pending Human Input` state. Review the task output carefully.
</Step>
<Step title="Submit Human Feedback">
Call the resume endpoint of your crew with the following information:
<Frame>
<img src="/images/enterprise/crew-resume-endpoint.png" alt="Crew Resume Endpoint" />
</Frame>
<Warning>
**Critical: Webhook URLs Must Be Provided Again**:
You **must** provide the same webhook URLs (`taskWebhookUrl`, `stepWebhookUrl`, `crewWebhookUrl`) in the resume call that you used in the kickoff call. Webhook configurations are **NOT** automatically carried over from kickoff - they must be explicitly included in the resume request to continue receiving notifications for task completion, agent steps, and crew completion.
</Warning>
Example resume call with webhooks:
```bash
curl -X POST {BASE_URL}/resume \
-H "Authorization: Bearer YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"execution_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
"task_id": "research_task",
"human_feedback": "Great work! Please add more details.",
"is_approve": true,
"taskWebhookUrl": "https://your-server.com/webhooks/task",
"stepWebhookUrl": "https://your-server.com/webhooks/step",
"crewWebhookUrl": "https://your-server.com/webhooks/crew"
}'
```
<Warning>
**Feedback Impact on Task Execution**:
It's crucial to exercise care when providing feedback, as the entire feedback content will be incorporated as additional context for further task executions.
</Warning>
This means:
- All information in your feedback becomes part of the task's context.
- Irrelevant details may negatively influence the task's output.
- Concise, relevant feedback helps maintain task focus and efficiency.
- Always review your feedback carefully before submission to ensure it contains only pertinent information that will positively guide the task's execution.
</Step>
<Step title="Handle Negative Feedback">
If you provide negative feedback:
- The crew will retry the task with added context from your feedback.
- You'll receive another webhook notification for further review.
- Repeat steps 4-6 until satisfied.
</Step>
<Step title="Execution Continuation">
When you submit positive feedback, the execution will proceed to the next steps.
</Step>
</Steps>
## Best Practices
- **Be Specific**: Provide clear, actionable feedback that directly addresses the task at hand
- **Stay Relevant**: Only include information that will help improve the task execution
- **Be Timely**: Respond to HITL prompts promptly to avoid workflow delays
- **Review Carefully**: Double-check your feedback before submitting to ensure accuracy
## Common Use Cases
HITL workflows are particularly valuable for:
- Quality assurance and validation
- Complex decision-making scenarios
- Sensitive or high-stakes operations
- Creative tasks requiring human judgment
- Compliance and regulatory reviews

View File

@@ -1,186 +0,0 @@
---
title: "Kickoff Crew"
description: "Kickoff a Crew on CrewAI AMP"
icon: "flag-checkered"
mode: "wide"
---
## Overview
Once you've deployed your crew to the CrewAI AMP platform, you can kickoff executions through the web interface or the API. This guide covers both approaches.
## Method 1: Using the Web Interface
### Step 1: Navigate to Your Deployed Crew
1. Log in to [CrewAI AMP](https://app.crewai.com)
2. Click on the crew name from your projects list
3. You'll be taken to the crew's detail page
<Frame>
![Crew Dashboard](/images/enterprise/crew-dashboard.png)
</Frame>
### Step 2: Initiate Execution
From your crew's detail page, you have two options to kickoff an execution:
#### Option A: Quick Kickoff
1. Click the `Kickoff` link in the Test Endpoints section
2. Enter the required input parameters for your crew in the JSON editor
3. Click the `Send Request` button
<Frame>
![Kickoff Endpoint](/images/enterprise/kickoff-endpoint.png)
</Frame>
#### Option B: Using the Visual Interface
1. Click the `Run` tab in the crew detail page
2. Enter the required inputs in the form fields
3. Click the `Run Crew` button
<Frame>
![Run Crew](/images/enterprise/run-crew.png)
</Frame>
### Step 3: Monitor Execution Progress
After initiating the execution:
1. You'll receive a response containing a `kickoff_id` - **copy this ID**
2. This ID is essential for tracking your execution
<Frame>
![Copy Task ID](/images/enterprise/copy-task-id.png)
</Frame>
### Step 4: Check Execution Status
To monitor the progress of your execution:
1. Click the "Status" endpoint in the Test Endpoints section
2. Paste the `kickoff_id` into the designated field
3. Click the "Get Status" button
<Frame>
![Get Status](/images/enterprise/get-status.png)
</Frame>
The status response will show:
- Current execution state (`running`, `completed`, etc.)
- Details about which tasks are in progress
- Any outputs produced so far
### Step 5: View Final Results
Once execution is complete:
1. The status will change to `completed`
2. You can view the full execution results and outputs
3. For a more detailed view, check the `Executions` tab in the crew detail page
## Method 2: Using the API
You can also kickoff crews programmatically using the CrewAI AMP REST API.
### Authentication
All API requests require a bearer token for authentication:
```bash
curl -H "Authorization: Bearer YOUR_CREW_TOKEN" https://your-crew-url.crewai.com
```
Your bearer token is available on the Status tab of your crew's detail page.
### Checking Crew Health
Before executing operations, you can verify that your crew is running properly:
```bash
curl -H "Authorization: Bearer YOUR_CREW_TOKEN" https://your-crew-url.crewai.com
```
A successful response returns a plain-text body indicating the crew is operational:
```
Healthy
```
### Step 1: Retrieve Required Inputs
First, determine what inputs your crew requires:
```bash
curl -X GET \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
https://your-crew-url.crewai.com/inputs
```
The response will be a JSON object containing an array of required input parameters, for example:
```json
{"inputs":["topic","current_year"]}
```
This example shows that this particular crew requires two inputs: `topic` and `current_year`.
### Step 2: Kickoff Execution
Initiate execution by providing the required inputs:
```bash
curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
-d '{"inputs": {"topic": "AI Agent Frameworks", "current_year": "2025"}}' \
https://your-crew-url.crewai.com/kickoff
```
The response will include a `kickoff_id` that you'll need for tracking:
```json
{"kickoff_id":"abcd1234-5678-90ef-ghij-klmnopqrstuv"}
```
### Step 3: Check Execution Status
Monitor the execution progress using the `kickoff_id`:
```bash
curl -X GET \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
https://your-crew-url.crewai.com/status/abcd1234-5678-90ef-ghij-klmnopqrstuv
```
## Handling Executions
### Long-Running Executions
For executions that may take a long time:
1. Consider implementing a polling mechanism to check status periodically (a sketch follows this list)
2. Use webhooks (if available) for notification when execution completes
3. Implement error handling for potential timeouts
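A minimal polling sketch, assuming the `/kickoff` and `/status/{kickoff_id}` endpoints shown above; the base URL, token, and the `state` field name in the status response are placeholders and assumptions to adapt to your deployment:
```python
import time

import requests

BASE_URL = "https://your-crew-url.crewai.com"          # placeholder
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}  # placeholder

def kickoff_and_wait(inputs: dict, poll_seconds: int = 10, timeout_seconds: int = 900) -> dict:
    """Kick off an execution and poll the status endpoint until it completes."""
    kickoff = requests.post(f"{BASE_URL}/kickoff", headers=HEADERS, json={"inputs": inputs}, timeout=30)
    kickoff.raise_for_status()
    kickoff_id = kickoff.json()["kickoff_id"]

    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        status = requests.get(f"{BASE_URL}/status/{kickoff_id}", headers=HEADERS, timeout=30)
        status.raise_for_status()
        body = status.json()
        if body.get("state") == "completed":  # exact field name assumed; inspect a real response first
            return body
        time.sleep(poll_seconds)
    raise TimeoutError(f"Execution {kickoff_id} did not finish within {timeout_seconds} seconds")
```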
### Execution Context
The execution context includes:
- Inputs provided at kickoff
- Environment variables configured during deployment
- Any state maintained between tasks
### Debugging Failed Executions
If an execution fails:
1. Check the "Executions" tab for detailed logs
2. Review the "Traces" tab for step-by-step execution details
3. Look for LLM responses and tool usage in the trace details
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with execution issues or questions about the Enterprise platform.
</Card>

View File

@@ -1,64 +0,0 @@
---
title: "Microsoft Teams Trigger"
description: "Kick off crews from Microsoft Teams chat activity"
icon: "microsoft"
mode: "wide"
---
## Overview
Use the Microsoft Teams trigger to start automations whenever a new chat is created. Common patterns include summarizing inbound requests, routing urgent messages to support teams, or creating follow-up tasks in other systems.
<Tip>
Confirm Microsoft Teams is connected under **Tools & Integrations** and enabled in the **Triggers** tab for your deployment.
</Tip>
## Enabling the Microsoft Teams Trigger
1. Open your deployment in CrewAI AMP
2. Go to the **Triggers** tab
3. Locate **Microsoft Teams** and switch the toggle to enable
<Frame caption="Microsoft Teams trigger connection">
<img src="/images/enterprise/msteams-trigger.png" alt="Enable or disable triggers with toggle" />
</Frame>
## Example: Summarize a new chat thread
```python
from teams_chat_created_crew import MicrosoftTeamsChatTrigger
crew = MicrosoftTeamsChatTrigger().crew()
result = crew.kickoff({
"crewai_trigger_payload": teams_payload,
})
print(result.raw)
```
The crew parses thread metadata (subject, created time, roster) and generates an action plan for the receiving team.
## Testing Locally
Test your Microsoft Teams trigger integration locally using the CrewAI CLI:
```bash
# View all available triggers
crewai triggers list
# Simulate a Microsoft Teams trigger with realistic payload
crewai triggers run microsoft_teams/teams_message_created
```
The `crewai triggers run` command will execute your crew with a complete Teams payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run microsoft_teams/teams_message_created` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
## Troubleshooting
- Ensure the Teams connection is active; it must be refreshed if the tenant revokes permissions
- Test locally with `crewai triggers run microsoft_teams/teams_message_created` to see the exact payload structure
- Confirm the webhook subscription in Microsoft 365 is still valid if payloads stop arriving
- Review execution logs for payload shape mismatches—Graph notifications may omit fields when a chat is private or restricted
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -1,63 +0,0 @@
---
title: "OneDrive Trigger"
description: "Automate responses to OneDrive file activity"
icon: "cloud"
mode: "wide"
---
## Overview
Start automations when files change inside OneDrive. You can generate audit summaries, notify security teams about external sharing, or update downstream line-of-business systems with new document metadata.
<Tip>
Connect OneDrive in **Tools & Integrations** and toggle the trigger on for your deployment.
</Tip>
## Enabling the OneDrive Trigger
1. Open your deployment in CrewAI AMP
2. Go to the **Triggers** tab
3. Locate **OneDrive** and switch the toggle to enable
<Frame caption="Microsoft OneDrive trigger connection">
<img src="/images/enterprise/onedrive-trigger.png" alt="Enable or disable triggers with toggle" />
</Frame>
## Example: Audit file permissions
```python
from onedrive_file_crew import OneDriveFileTrigger
crew = OneDriveFileTrigger().crew()
crew.kickoff({
"crewai_trigger_payload": onedrive_payload,
})
```
The crew inspects file metadata, user activity, and permission changes to produce a compliance-friendly summary.
## Testing Locally
Test your OneDrive trigger integration locally using the CrewAI CLI:
```bash
# View all available triggers
crewai triggers list
# Simulate a OneDrive trigger with realistic payload
crewai triggers run microsoft_onedrive/file_changed
```
The `crewai triggers run` command will execute your crew with a complete OneDrive payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run microsoft_onedrive/file_changed` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
## Troubleshooting
- Ensure the connected account has permission to read the file metadata included in the webhook
- Test locally with `crewai triggers run microsoft_onedrive/file_changed` to see the exact payload structure
- If the trigger fires but the payload is missing `permissions`, confirm the site-level sharing settings allow Graph to return this field
- For large tenants, filter notifications upstream so the crew only runs on relevant directories
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -1,63 +0,0 @@
---
title: "Outlook Trigger"
description: "Launch automations from Outlook emails and calendar updates"
icon: "microsoft"
mode: "wide"
---
## Overview
Automate responses when Outlook delivers a new message or when an event is removed from the calendar. Teams commonly route escalations, file tickets, or alert attendees of cancellations.
<Tip>
Connect Outlook in **Tools & Integrations** and ensure the trigger is enabled for your deployment.
</Tip>
## Enabling the Outlook Trigger
1. Open your deployment in CrewAI AMP
2. Go to the **Triggers** tab
3. Locate **Outlook** and switch the toggle to enable
<Frame caption="Microsoft Outlook trigger connection">
<img src="/images/enterprise/outlook-trigger.png" alt="Enable or disable triggers with toggle" />
</Frame>
## Example: Summarize a new email
```python
from outlook_message_crew import OutlookMessageTrigger
crew = OutlookMessageTrigger().crew()
crew.kickoff({
"crewai_trigger_payload": outlook_payload,
})
```
The crew extracts sender details, subject, body preview, and attachments before generating a structured response.
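As an illustration only, a hedged helper that pulls those fields out of the payload, assuming it mirrors the Microsoft Graph message resource (`from`, `subject`, `bodyPreview`, `hasAttachments`); inspect a payload from `crewai triggers run` to confirm the exact shape:
```python
# Hypothetical helper; field names assume a Microsoft Graph-style message payload.
def summarize_message(outlook_payload: dict) -> dict:
    sender = outlook_payload.get("from", {}).get("emailAddress", {})
    return {
        "sender_name": sender.get("name"),
        "sender_address": sender.get("address"),
        "subject": outlook_payload.get("subject"),
        "preview": outlook_payload.get("bodyPreview"),
        "has_attachments": outlook_payload.get("hasAttachments", False),
    }
```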
## Testing Locally
Test your Outlook trigger integration locally using the CrewAI CLI:
```bash
# View all available triggers
crewai triggers list
# Simulate an Outlook trigger with realistic payload
crewai triggers run microsoft_outlook/email_received
```
The `crewai triggers run` command will execute your crew with a complete Outlook payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run microsoft_outlook/email_received` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
## Troubleshooting
- Verify the Outlook connector is still authorized; the subscription must be renewed periodically
- Test locally with `crewai triggers run microsoft_outlook/email_received` to see the exact payload structure
- If attachments are missing, confirm the webhook subscription includes the `includeResourceData` flag
- Review execution logs when events fail to match—cancellation payloads lack attendee lists by design and the crew should account for that
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -1,104 +0,0 @@
---
title: "React Component Export"
description: "Learn how to export and integrate CrewAI AMP React components into your applications"
icon: "react"
mode: "wide"
---
This guide explains how to export CrewAI AMP crews as React components and integrate them into your own applications.
## Exporting a React Component
<Steps>
<Step title="Export the Component">
Click the ellipsis (the three dots to the right of your deployed crew), select the export option, and save the file locally. We will use `CrewLead.jsx` in this example.
<Frame>
<img src="/images/enterprise/export-react-component.png" alt="Export React Component" />
</Frame>
</Step>
</Steps>
## Setting Up Your React Environment
To run this React component locally, you'll need to set up a React development environment and integrate this component into a React project.
<Steps>
<Step title="Install Node.js">
- Download and install Node.js from the official website: https://nodejs.org/
- Choose the LTS (Long Term Support) version for stability.
</Step>
<Step title="Create a new React project">
- Open Command Prompt or PowerShell
- Navigate to the directory where you want to create your project
- Run the following command to create a new React project:
```bash
npx create-react-app my-crew-app
```
- Change into the project directory:
```bash
cd my-crew-app
```
</Step>
<Step title="Install necessary dependencies">
```bash
npm install react-dom
```
</Step>
<Step title="Create the CrewLead component">
- Move the downloaded `CrewLead.jsx` file into the `src` folder of your project.
</Step>
<Step title="Modify your App.js to use the CrewLead component">
- Open `src/App.js`
- Replace its contents with something like this:
```jsx
import React from 'react';
import CrewLead from './CrewLead';
function App() {
return (
<div className="App">
<CrewLead baseUrl="YOUR_API_BASE_URL" bearerToken="YOUR_BEARER_TOKEN" />
</div>
);
}
export default App;
```
- Replace `YOUR_API_BASE_URL` and `YOUR_BEARER_TOKEN` with the actual values for your API.
</Step>
<Step title="Start the development server">
- In your project directory, run:
```bash
npm start
```
- This will start the development server, and your default web browser should open automatically to http://localhost:3000, where you'll see your React app running.
</Step>
</Steps>
## Customization
You can then customize `CrewLead.jsx` to adjust the color, title, and other details:
<Frame>
<img src="/images/enterprise/customise-react-component.png" alt="Customise React Component" />
</Frame>
<Frame>
<img src="/images/enterprise/customise-react-component-2.png" alt="Customise React Component" />
</Frame>
## Next Steps
- Customize the component styling to match your application's design
- Add additional props for configuration
- Integrate with your application's state management
- Add error handling and loading states

View File

@@ -1,50 +0,0 @@
---
title: "Salesforce Trigger"
description: "Trigger CrewAI crews from Salesforce workflows for CRM automation"
icon: "salesforce"
mode: "wide"
---
CrewAI AMP can be triggered from Salesforce to automate customer relationship management workflows and enhance your sales operations.
## Overview
Salesforce is a leading customer relationship management (CRM) platform that helps businesses streamline their sales, service, and marketing operations. By setting up CrewAI triggers from Salesforce, you can:
- Automate lead scoring and qualification
- Generate personalized sales materials
- Enhance customer service with AI-powered responses
- Streamline data analysis and reporting
## Demo
<iframe
className="w-full aspect-video rounded-xl"
src="https://www.youtube.com/embed/oJunVqjjfu4"
title="CrewAI + Salesforce trigger demo"
frameBorder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen
></iframe>
## Getting Started
To set up Salesforce triggers:
1. **Contact Support**: Reach out to CrewAI AMP support for assistance with Salesforce trigger setup
2. **Review Requirements**: Ensure you have the necessary Salesforce permissions and API access
3. **Configure Connection**: Work with the support team to establish the connection between CrewAI and your Salesforce instance
4. **Test Triggers**: Verify the triggers work correctly with your specific use cases
## Use Cases
Common Salesforce + CrewAI trigger scenarios include:
- **Lead Processing**: Automatically analyze and score incoming leads
- **Proposal Generation**: Create customized proposals based on opportunity data
- **Customer Insights**: Generate analysis reports from customer interaction history
- **Follow-up Automation**: Create personalized follow-up messages and recommendations
## Next Steps
For detailed setup instructions and advanced configuration options, please contact CrewAI AMP support who can provide tailored guidance for your specific Salesforce environment and business needs.

View File

@@ -1,62 +0,0 @@
---
title: "Slack Trigger"
description: "Trigger CrewAI crews directly from Slack using slash commands"
icon: "slack"
mode: "wide"
---
This guide explains how to start a crew directly from Slack using CrewAI triggers.
## Prerequisites
- CrewAI Slack trigger installed and connected to your Slack workspace
- At least one crew configured in CrewAI
## Setup Steps
<Steps>
<Step title="Ensure the CrewAI Slack trigger is set up">
In the CrewAI dashboard, navigate to the **Triggers** section.
<Frame>
<img src="/images/enterprise/slack-integration.png" alt="CrewAI Slack Integration" />
</Frame>
Verify that Slack is listed and is connected.
</Step>
<Step title="Open your Slack channel">
- Navigate to the channel where you want to kickoff the crew.
- Type the slash command "**/kickoff**" to initiate the crew kickoff process.
- You should see a "**Kickoff crew**" option appear as you type:
<Frame>
<img src="/images/enterprise/kickoff-slack-crew.png" alt="Kickoff crew" />
</Frame>
- Press Enter or select the "**Kickoff crew**" option. A dialog box titled "**Kickoff an AI Crew**" will appear.
</Step>
<Step title="Select the crew you want to start">
- In the dropdown menu labeled "**Select one of the crews online:**", choose the crew you want to start.
- In the example below, "**prep-for-meeting**" is selected:
<Frame>
<img src="/images/enterprise/kickoff-slack-crew-dropdown.png" alt="Kickoff crew dropdown" />
</Frame>
- If your crew requires any inputs, click the "**Add Inputs**" button to provide them.
<Note>
The "**Add Inputs**" button is shown in the example above but is not yet clicked.
</Note>
</Step>
<Step title="Click Kickoff and wait for the crew to complete">
- Once you've selected the crew and added any necessary inputs, click "**Kickoff**" to start the crew.
<Frame>
<img src="/images/enterprise/kickoff-slack-crew-kickoff.png" alt="Kickoff crew" />
</Frame>
- The crew will start executing and you will see the results in the Slack channel.
<Frame>
<img src="/images/enterprise/kickoff-slack-crew-results.png" alt="Kickoff crew results" />
</Frame>
</Step>
</Steps>
## Tips
- Make sure you have the necessary permissions to use the `/kickoff` command in your Slack workspace.
- If you don't see your desired crew in the dropdown, ensure it's properly configured and online in CrewAI.

View File

@@ -1,88 +0,0 @@
---
title: "Team Management"
description: "Learn how to invite and manage team members in your CrewAI AMP organization"
icon: "users"
mode: "wide"
---
As an administrator of a CrewAI AMP account, you can easily invite new team members to join your organization. This guide will walk you through the process step-by-step.
## Inviting Team Members
<Steps>
<Step title="Access the Settings Page">
- Log in to your CrewAI AMP account
- Look for the gear icon (⚙️) in the top right corner of the dashboard
- Click on the gear icon to access the **Settings** page:
<Frame caption="Settings page">
<img src="/images/enterprise/settings-page.png" alt="Settings Page" />
</Frame>
</Step>
<Step title="Navigate to the Members Section">
- On the Settings page, you'll see a `Members` tab
- Click on the `Members` tab to access the **Members** page:
<Frame caption="Members tab">
<img src="/images/enterprise/members-tab.png" alt="Members Tab" />
</Frame>
</Step>
<Step title="Invite New Members">
- In the Members section, you'll see a list of current members (including yourself)
- Locate the `Email` input field
- Enter the email address of the person you want to invite
- Click the `Invite` button to send the invitation
</Step>
<Step title="Repeat as Needed">
- You can repeat this process to invite multiple team members
- Each invited member will receive an email invitation to join your organization
</Step>
</Steps>
## Adding Roles
You can add roles to your team members to control their access to different parts of the platform.
<Steps>
<Step title="Access the Settings Page">
- Log in to your CrewAI AMP account
- Look for the gear icon (⚙️) in the top right corner of the dashboard
- Click on the gear icon to access the **Settings** page:
<Frame>
<img src="/images/enterprise/settings-page.png" alt="Settings Page" />
</Frame>
</Step>
<Step title="Navigate to the Members Section">
- On the Settings page, you'll see a `Roles` tab
- Click on the `Roles` tab to access the **Roles** page.
<Frame>
<img src="/images/enterprise/roles-tab.png" alt="Roles Tab" />
</Frame>
- Click on the `Add Role` button to add a new role.
- Enter the details and permissions of the role and click the `Create Role` button to create the role.
<Frame>
<img src="/images/enterprise/add-role-modal.png" alt="Add Role Modal" />
</Frame>
</Step>
<Step title="Add Roles to Members">
- In the Members section, you'll see a list of current members (including yourself)
<Frame>
<img src="/images/enterprise/member-accepted-invitation.png" alt="Member Accepted Invitation" />
</Frame>
- Once the member has accepted the invitation, you can add a role to them.
- Navigate back to the `Roles` tab
- Go to the member you want to assign a role to and, under the `Role` column, click the dropdown
- Select the role you want to add to the member
- Click the `Update` button to save the role
<Frame>
<img src="/images/enterprise/assign-role.png" alt="Add Role to Member" />
</Frame>
</Step>
</Steps>
## Important Notes
- **Admin Privileges**: Only users with administrative privileges can invite new members
- **Email Accuracy**: Ensure you have the correct email addresses for your team members
- **Invitation Acceptance**: Invited members will need to accept the invitation to join your organization
- **Email Notifications**: You may want to inform your team members to check their email (including spam folders) for the invitation
By following these steps, you can easily expand your team and collaborate more effectively within your CrewAI AMP organization.

View File

@@ -1,155 +0,0 @@
---
title: Tool Repository
description: "Using the Tool Repository to manage your tools"
icon: "toolbox"
mode: "wide"
---
## Overview
The Tool Repository is a package manager for CrewAI tools. It allows users to publish, install, and manage tools that integrate with CrewAI crews and flows.
Tools can be:
- **Private**: accessible only within your organization (default)
- **Public**: accessible to all CrewAI users if published with the `--public` flag
The repository is not a version control system. Use Git to track code changes and enable collaboration.
## Prerequisites
Before using the Tool Repository, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account
- [CrewAI CLI](https://docs.crewai.com/concepts/cli#cli) installed
- `uv` >= 0.5.0 installed. Check out [how to upgrade](https://docs.astral.sh/uv/getting-started/installation/#upgrading-uv)
- [Git](https://git-scm.com) installed and configured
- Access permissions to publish or install tools in your CrewAI AMP organization
## Installing Tools
To install a tool:
```bash
crewai tool install <tool-name>
```
This installs the tool and adds it to `pyproject.toml`.
You can use the tool by importing it and adding it to your agents:
```python
from your_tool.tool import YourTool
custom_tool = YourTool()
researcher = Agent(
role='Market Research Analyst',
goal='Provide up-to-date market analysis of the AI industry',
backstory='An expert analyst with a keen eye for market trends.',
tools=[custom_tool],
verbose=True
)
```
## Adding other packages after installing a tool
After installing a tool from the CrewAI AMP Tool Repository, use the `crewai uv` command to manage any other packages in your project.
Plain `uv` commands will fail because authentication to the tool repository is handled by the CLI; wrapping them as `crewai uv` lets you manage dependencies without juggling credentials through environment variables or other workarounds.
Any `uv` command can be passed through `crewai uv`.
Say that you have installed a custom tool from the CrewAI AMP Tool Repository called "my-tool":
```bash
crewai tool install my-tool
```
And now you want to add another package to your project, you can use the following command:
```bash
crewai uv add requests
```
Other commands like `uv sync` or `uv remove` can also be used with the `crewai uv` command:
```bash
crewai uv sync
```
```bash
crewai uv remove requests
```
This will add the package to your project and update `pyproject.toml` accordingly.
## Creating and Publishing Tools
To create a new tool project:
```bash
crewai tool create <tool-name>
```
This generates a scaffolded tool project locally.
After making changes, initialize a Git repository and commit the code:
```bash
git init
git add .
git commit -m "Initial version"
```
To publish the tool:
```bash
crewai tool publish
```
By default, tools are published as private. To make a tool public:
```bash
crewai tool publish --public
```
For more details on how to build tools, see [Creating your own tools](https://docs.crewai.com/concepts/tools#creating-your-own-tools).
## Updating Tools
To update a published tool:
1. Modify the tool locally
2. Update the version in `pyproject.toml` (e.g., from `0.1.0` to `0.1.1`)
3. Commit the changes and publish
```bash
git commit -m "Update version to 0.1.1"
crewai tool publish
```
## Deleting Tools
To delete a tool:
1. Go to [CrewAI AMP](https://app.crewai.com)
2. Navigate to **Tools**
3. Select the tool
4. Click **Delete**
<Warning>
Deletion is permanent. Deleted tools cannot be restored or re-installed.
</Warning>
## Security Checks
Every published version undergoes automated security checks and is only available to install after it passes them.
You can check the security check status of a tool at:
`CrewAI AMP > Tools > Your Tool > Versions`
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with API integration or troubleshooting.
</Card>

View File

@@ -1,89 +0,0 @@
---
title: "Update Crew"
description: "Updating a Crew on CrewAI AMP"
icon: "pencil"
mode: "wide"
---
<Note>
After deploying your crew to CrewAI AMP, you may need to make updates to the code, security settings, or configuration.
This guide explains how to perform these common update operations.
</Note>
## Why Update Your Crew?
Unless you selected the `Auto-update` option when deploying your crew, CrewAI won't automatically pick up new GitHub commits, so you'll need to trigger updates manually.
There are several reasons you might want to update your crew deployment:
- You want to update the code with a latest commit you pushed to GitHub
- You want to reset the bearer token for security reasons
- You want to update environment variables
## 1. Updating Your Crew Code to the Latest Commit
When you've pushed new commits to your GitHub repository and want to update your deployment:
1. Navigate to your crew in the CrewAI AMP platform
2. Click on the `Re-deploy` button on your crew details page
<Frame>
![Re-deploy Button](/images/enterprise/redeploy-button.png)
</Frame>
This will trigger an update that you can track using the progress bar. The system will pull the latest code from your repository and rebuild your deployment.
## 2. Resetting Bearer Token
If you need to generate a new bearer token (for example, if you suspect the current token might have been compromised):
1. Navigate to your crew in the CrewAI AMP platform
2. Find the `Bearer Token` section
3. Click the `Reset` button next to your current token
<Frame>
![Reset Token](/images/enterprise/reset-token.png)
</Frame>
<Warning>
Resetting your bearer token will invalidate the previous token immediately. Make sure to update any applications or scripts that are using the old token.
</Warning>
## 3. Updating Environment Variables
To update the environment variables for your crew:
1. First access the deployment page by clicking on your crew's name
<Frame>
![Environment Variables Button](/images/enterprise/env-vars-button.png)
</Frame>
2. Locate the `Environment Variables` section (you will need to click the `Settings` icon to access it)
3. Edit the existing variables or add new ones in the fields provided
4. Click the `Update` button next to each variable you modify
<Frame>
![Update Environment Variables](/images/enterprise/update-env-vars.png)
</Frame>
5. Finally, click the `Update Deployment` button at the bottom of the page to apply the changes
<Note>
Updating environment variables will trigger a new deployment, but this will only update the environment configuration and not the code itself.
</Note>
## After Updating
After performing any update:
1. The system will rebuild and redeploy your crew
2. You can monitor the deployment progress in real-time
3. Once complete, test your crew to ensure the changes are working as expected
<Tip>
If you encounter any issues after updating, you can view deployment logs in the platform or contact support for assistance.
</Tip>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with updating your crew or troubleshooting deployment issues.
</Card>

View File

@@ -1,155 +0,0 @@
---
title: "Webhook Automation"
description: "Automate CrewAI AMP workflows using webhooks with platforms like ActivePieces, Zapier, and Make.com"
icon: "webhook"
mode: "wide"
---
CrewAI AMP allows you to automate your workflow using webhooks. This article will guide you through the process of setting up and using webhooks to kickoff your crew execution, with a focus on integration with ActivePieces, a workflow automation platform similar to Zapier and Make.com.
## Setting Up Webhooks
<Steps>
<Step title="Accessing the Kickoff Interface">
- Navigate to the CrewAI AMP dashboard
- Look for the `/kickoff` section, which is used to start the crew execution
<Frame>
<img src="/images/enterprise/kickoff-interface.png" alt="Kickoff Interface" />
</Frame>
</Step>
<Step title="Configuring the JSON Content">
In the JSON Content section, you'll need to provide the following information (a sample request follows this list):
- **inputs**: A JSON object containing:
- `company`: The name of the company (e.g., "tesla")
- `product_name`: The name of the product (e.g., "crewai")
- `form_response`: The type of response (e.g., "financial")
- `icp_description`: A brief description of the Ideal Customer Profile
- `product_description`: A short description of the product
- `taskWebhookUrl`, `stepWebhookUrl`, `crewWebhookUrl`: URLs for various webhook endpoints (ActivePieces, Zapier, Make.com or another compatible platform)
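For reference, a minimal sketch of such a kickoff request using Python's `requests` library; the URL, token, webhook endpoints, and input values are placeholders you would replace with your own:
```python
import requests

response = requests.post(
    "https://your-crew-url.crewai.com/kickoff",           # placeholder crew URL
    headers={"Authorization": "Bearer YOUR_CREW_TOKEN"},  # placeholder token
    json={
        "inputs": {
            "company": "tesla",
            "product_name": "crewai",
            "form_response": "financial",
            "icp_description": "Mid-market financial institutions exploring AI",
            "product_description": "AI agents that automate multi-step workflows",
        },
        # Webhook endpoints hosted by ActivePieces, Zapier, Make.com, or your own server.
        "taskWebhookUrl": "https://your-automation-platform.example/task",
        "stepWebhookUrl": "https://your-automation-platform.example/step",
        "crewWebhookUrl": "https://your-automation-platform.example/crew",
        # Optional: echoed back in every webhook payload (see the examples below).
        "meta": {"requestId": "demo-123", "source": "docs-example"},
    },
    timeout=30,
)
print(response.json())  # e.g. {"kickoff_id": "..."}
```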
</Step>
<Step title="Integrating with ActivePieces">
In this example we will use ActivePieces; other platforms such as Zapier and Make.com work in a similar way.
To integrate with ActivePieces:
1. Set up a new flow in ActivePieces
2. Add a trigger (e.g., `Every Day` schedule)
<Frame>
<img src="/images/enterprise/activepieces-trigger.png" alt="ActivePieces Trigger" />
</Frame>
3. Add an HTTP action step
- Set the action to `Send HTTP request`
- Use `POST` as the method
- Set the URL to your CrewAI AMP kickoff endpoint
- Add necessary headers (e.g., `Bearer Token`)
<Frame>
<img src="/images/enterprise/activepieces-headers.png" alt="ActivePieces Headers" />
</Frame>
- In the body, include the JSON content as configured in step 2
<Frame>
<img src="/images/enterprise/activepieces-body.png" alt="ActivePieces Body" />
</Frame>
- The crew will then kick off at the pre-defined time.
</Step>
<Step title="Setting Up the Webhook">
1. Create a new flow in ActivePieces and name it
<Frame>
<img src="/images/enterprise/activepieces-flow.png" alt="ActivePieces Flow" />
</Frame>
2. Add a webhook step as the trigger:
- Select `Catch Webhook` as the trigger type
- This will generate a unique URL that will receive HTTP requests and trigger your flow
<Frame>
<img src="/images/enterprise/activepieces-webhook.png" alt="ActivePieces Webhook" />
</Frame>
- Configure the email action to use the body text from the crew webhook
<Frame>
<img src="/images/enterprise/activepieces-email.png" alt="ActivePieces Email" />
</Frame>
</Step>
</Steps>
## Webhook Output Examples
**Note:** Any `meta` object provided in your kickoff request will be included in all webhook payloads, allowing you to track requests and maintain context across the entire crew execution lifecycle.
<Tabs>
<Tab title="Step Webhook">
`stepWebhookUrl` - Callback that will be executed upon each agent inner thought
```json
{
"prompt": "Research the financial industry for potential AI solutions",
"thought": "I need to conduct preliminary research on the financial industry",
"tool": "research_tool",
"tool_input": "financial industry AI solutions",
"result": "**Preliminary Research Report on the Financial Industry for crewai Enterprise Solution**\n1. Industry Overview and Trends\nThe financial industry in ....\nConclusion:\nThe financial industry presents a fertile ground for implementing AI solutions like crewai, particularly in areas such as digital customer engagement, risk management, and regulatory compliance. Further engagement with the lead is recommended to better tailor the crewai solution to their specific needs and scale.",
"kickoff_id": "97eba64f-958c-40a0-b61c-625fe635a3c0",
"meta": {
"requestId": "travel-req-123",
"source": "web-app"
}
}
```
</Tab>
<Tab title="Task Webhook">
`taskWebhookUrl` - Callback that will be executed upon the end of each task
```json
{
"description": "Using the information gathered from the lead's data, conduct preliminary research on the lead's industry, company background, and potential use cases for crewai. Focus on finding relevant data that can aid in scoring the lead and planning a strategy to pitch them crewai.",
"name": "Industry Research Task",
"expected_output": "Detailed research report on the financial industry",
"summary": "The financial industry presents a fertile ground for implementing AI solutions like crewai, particularly in areas such as digital customer engagement, risk management, and regulatory compliance. Further engagement with the lead is recommended to better tailor the crewai solution to their specific needs and scale.",
"agent": "Research Agent",
"output": "**Preliminary Research Report on the Financial Industry for crewai Enterprise Solution**\n1. Industry Overview and Trends\nThe financial industry in ....\nConclusion:\nThe financial industry presents a fertile ground for implementing AI solutions like crewai, particularly in areas such as digital customer engagement, risk management, and regulatory compliance.",
"output_json": {
"industry": "financial",
"key_opportunities": ["digital customer engagement", "risk management", "regulatory compliance"]
},
"kickoff_id": "97eba64f-958c-40a0-b61c-625fe635a3c0",
"meta": {
"requestId": "travel-req-123",
"source": "web-app"
}
}
```
</Tab>
<Tab title="Crew Webhook">
`crewWebhookUrl` - Callback that will be executed upon the end of the crew execution
```json
{
"kickoff_id": "97eba64f-958c-40a0-b61c-625fe635a3c0",
"result": "**Final Analysis Report**\n\nLead Score: Customer service enhancement and compliance are particularly relevant.\n\nTalking Points:\n- Highlight how crewai's AI solutions can transform customer service\n- Discuss crewai's potential for sustainability goals\n- Emphasize compliance capabilities\n- Stress adaptability for various operation scales",
"result_json": {
"lead_score": "Customer service enhancement, and compliance are particularly relevant.",
"talking_points": [
"Highlight how crewai's AI solutions can transform customer service with automated, personalized experiences and 24/7 support, improving both customer satisfaction and operational efficiency.",
"Discuss crewai's potential to help the institution achieve its sustainability goals through better data analysis and decision-making, contributing to responsible investing and green initiatives.",
"Emphasize crewai's ability to enhance compliance with evolving regulations through efficient data processing and reporting, reducing the risk of non-compliance penalties.",
"Stress the adaptability of crewai to support both extensive multinational operations and smaller, targeted projects, ensuring the solution grows with the institution's needs."
]
},
"token_usage": {
"total_tokens": 1250,
"prompt_tokens": 800,
"completion_tokens": 450
},
"meta": {
"requestId": "travel-req-123",
"source": "web-app"
}
}
```
</Tab>
</Tabs>

View File

@@ -1,104 +0,0 @@
---
title: "Zapier Trigger"
description: "Trigger CrewAI crews from Zapier workflows to automate cross-app workflows"
icon: "bolt"
mode: "wide"
---
This guide will walk you through the process of setting up Zapier triggers for CrewAI AMP, allowing you to automate workflows between CrewAI AMP and other applications.
## Prerequisites
- A CrewAI AMP account
- A Zapier account
- A Slack account (for this specific example)
## Step-by-Step Setup
<Steps>
<Step title="Set Up the Slack Trigger">
- In Zapier, create a new Zap.
<Frame>
<img src="/images/enterprise/zapier-1.png" alt="Zapier 1" />
</Frame>
</Step>
<Step title="Choose Slack as your trigger app">
<Frame>
<img src="/images/enterprise/zapier-2.png" alt="Zapier 2" />
</Frame>
- Select `New Pushed Message` as the Trigger Event.
- Connect your Slack account if you haven't already.
</Step>
<Step title="Configure the CrewAI AMP Action">
- Add a new action step to your Zap.
- Choose CrewAI+ as your action app and Kickoff as the Action Event
<Frame>
<img src="/images/enterprise/zapier-3.png" alt="Zapier 5" />
</Frame>
</Step>
<Step title="Connect your CrewAI AMP account">
- Connect your CrewAI AMP account.
- Select the appropriate Crew for your workflow.
<Frame>
<img src="/images/enterprise/zapier-4.png" alt="Zapier 6" />
</Frame>
- Configure the inputs for the Crew using the data from the Slack message.
</Step>
<Step title="Format the CrewAI AMP Output">
- Add another action step to format the text output from CrewAI AMP.
- Use Zapier's formatting tools to convert the Markdown output to HTML.
<Frame>
<img src="/images/enterprise/zapier-5.png" alt="Zapier 8" />
</Frame>
<Frame>
<img src="/images/enterprise/zapier-6.png" alt="Zapier 9" />
</Frame>
</Step>
<Step title="Send the Output via Email">
- Add a final action step to send the formatted output via email.
- Choose your preferred email service (e.g., Gmail, Outlook).
- Configure the email details, including recipient, subject, and body.
- Insert the formatted CrewAI AMP output into the email body.
<Frame>
<img src="/images/enterprise/zapier-7.png" alt="Zapier 7" />
</Frame>
</Step>
<Step title="Kick Off the crew from Slack">
- Enter the text in your Slack channel
<Frame>
<img src="/images/enterprise/zapier-7b.png" alt="Zapier 10" />
</Frame>
- Select the ellipsis (three dots) button and then choose Push to Zapier
<Frame>
<img src="/images/enterprise/zapier-8.png" alt="Zapier 11" />
</Frame>
</Step>
<Step title="Select the crew and then Push to Kick Off">
<Frame>
<img src="/images/enterprise/zapier-9.png" alt="Zapier 12" />
</Frame>
</Step>
</Steps>
## Tips for Success
- Ensure that your CrewAI AMP inputs are correctly mapped from the Slack message.
- Test your Zap thoroughly before turning it on to catch any potential issues.
- Consider adding error handling steps to manage potential failures in the workflow.
By following these steps, you'll have successfully set up Zapier triggers for CrewAI AMP, allowing for automated workflows triggered by Slack messages and resulting in email notifications with CrewAI AMP output.

View File

@@ -1,242 +0,0 @@
---
title: Asana Integration
description: "Team task and project coordination with Asana integration for CrewAI."
icon: "circle"
mode: "wide"
---
## Overview
Enable your agents to manage tasks, projects, and team coordination through Asana. Create tasks, update project status, manage assignments, and streamline your team's workflow with AI-powered automation.
## Prerequisites
Before using the Asana integration, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account with an active subscription
- An Asana account with appropriate permissions
- Connected your Asana account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Asana Integration
### 1. Connect Your Asana Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Asana** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for task and project management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="asana/create_comment">
**Description:** Create a comment in Asana.
**Parameters:**
- `task` (string, required): Task ID - The ID of the Task the comment will be added to. The comment will be authored by the currently authenticated user.
- `text` (string, required): Text (example: "This is a comment.").
</Accordion>
<Accordion title="asana/create_project">
**Description:** Create a project in Asana.
**Parameters:**
- `name` (string, required): Name (example: "Stuff to buy").
- `workspace` (string, required): Workspace - Use Connect Portal Workflow Settings to allow users to select which Workspace to create Projects in. Defaults to the user's first Workspace if left blank.
- `team` (string, optional): Team - Use Connect Portal Workflow Settings to allow users to select which Team to share this Project with. Defaults to the user's first Team if left blank.
- `notes` (string, optional): Notes (example: "These are things we need to purchase.").
</Accordion>
<Accordion title="asana/get_projects">
**Description:** Get a list of projects in Asana.
**Parameters:**
- `archived` (string, optional): Archived - Choose "true" to show archived projects, "false" to display only active projects, or "default" to show both archived and active projects.
- Options: `default`, `true`, `false`
</Accordion>
<Accordion title="asana/get_project_by_id">
**Description:** Get a project by ID in Asana.
**Parameters:**
- `projectFilterId` (string, required): Project ID.
</Accordion>
<Accordion title="asana/create_task">
**Description:** Create a task in Asana.
**Parameters:**
- `name` (string, required): Name (example: "Task Name").
- `workspace` (string, optional): Workspace - Use Connect Portal Workflow Settings to allow users to select which Workspace to create Tasks in. Defaults to the user's first Workspace if left blank.
- `project` (string, optional): Project - Use Connect Portal Workflow Settings to allow users to select which Project to create this Task in.
- `notes` (string, optional): Notes.
- `dueOnDate` (string, optional): Due On - The date on which this task is due. Cannot be used together with Due At. (example: "YYYY-MM-DD").
- `dueAtDate` (string, optional): Due At - The date and time (ISO timestamp) at which this task is due. Cannot be used together with Due On. (example: "2019-09-15T02:06:58.147Z").
- `assignee` (string, optional): Assignee - The ID of the Asana user this task will be assigned to. Use Connect Portal Workflow Settings to allow users to select an Assignee.
- `gid` (string, optional): External ID - An ID from your application to associate this task with. You can use this ID to sync updates to this task later.
</Accordion>
<Accordion title="asana/update_task">
**Description:** Update a task in Asana.
**Parameters:**
- `taskId` (string, required): Task ID - The ID of the Task that will be updated.
- `completeStatus` (string, optional): Completed Status.
- Options: `true`, `false`
- `name` (string, optional): Name (example: "Task Name").
- `notes` (string, optional): Notes.
- `dueOnDate` (string, optional): Due On - The date on which this task is due. Cannot be used together with Due At. (example: "YYYY-MM-DD").
- `dueAtDate` (string, optional): Due At - The date and time (ISO timestamp) at which this task is due. Cannot be used together with Due On. (example: "2019-09-15T02:06:58.147Z").
- `assignee` (string, optional): Assignee - The ID of the Asana user this task will be assigned to. Use Connect Portal Workflow Settings to allow users to select an Assignee.
- `gid` (string, optional): External ID - An ID from your application to associate this task with. You can use this ID to sync updates to this task later.
</Accordion>
<Accordion title="asana/get_tasks">
**Description:** Get a list of tasks in Asana.
**Parameters:**
- `workspace` (string, optional): Workspace - The ID of the Workspace to filter tasks on. Use Connect Portal Workflow Settings to allow users to select a Workspace.
- `project` (string, optional): Project - The ID of the Project to filter tasks on. Use Connect Portal Workflow Settings to allow users to select a Project.
- `assignee` (string, optional): Assignee - The ID of the assignee to filter tasks on. Use Connect Portal Workflow Settings to allow users to select an Assignee.
- `completedSince` (string, optional): Completed since - Only return tasks that are either incomplete or that have been completed since this time (ISO or Unix timestamp). (example: "2014-04-25T16:15:47-04:00").
</Accordion>
<Accordion title="asana/get_tasks_by_id">
**Description:** Get a list of tasks by ID in Asana.
**Parameters:**
- `taskId` (string, required): Task ID.
</Accordion>
<Accordion title="asana/get_task_by_external_id">
**Description:** Get a task by external ID in Asana.
**Parameters:**
- `gid` (string, required): External ID - The ID that this task is associated or synced with, from your application.
</Accordion>
<Accordion title="asana/add_task_to_section">
**Description:** Add a task to a section in Asana.
**Parameters:**
- `sectionId` (string, required): Section ID - The ID of the section to add this task to.
- `taskId` (string, required): Task ID - The ID of the task. (example: "1204619611402340").
- `beforeTaskId` (string, optional): Before Task ID - The ID of a task in this section that this task will be inserted before. Cannot be used with After Task ID. (example: "1204619611402340").
- `afterTaskId` (string, optional): After Task ID - The ID of a task in this section that this task will be inserted after. Cannot be used with Before Task ID. (example: "1204619611402340").
</Accordion>
<Accordion title="asana/get_teams">
**Description:** Get a list of teams in Asana.
**Parameters:**
- `workspace` (string, required): Workspace - Returns the teams in this workspace visible to the authorized user.
</Accordion>
<Accordion title="asana/get_workspaces">
**Description:** Get a list of workspaces in Asana.
**Parameters:** None required.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Asana Agent Setup
```python
from crewai import Agent, Task, Crew
# Create an agent with Asana capabilities
asana_agent = Agent(
role="Project Manager",
goal="Manage tasks and projects in Asana efficiently",
backstory="An AI assistant specialized in project management and task coordination.",
apps=['asana'] # All Asana actions will be available
)
# Task to create a new project
create_project_task = Task(
description="Create a new project called 'Q1 Marketing Campaign' in the Marketing workspace",
agent=asana_agent,
expected_output="Confirmation that the project was created successfully with project ID"
)
# Run the task
crew = Crew(
agents=[asana_agent],
tasks=[create_project_task]
)
crew.kickoff()
```
### Filtering Specific Asana Tools
```python
from crewai import Agent, Task, Crew
# Create agent with specific Asana actions only
task_manager_agent = Agent(
role="Task Manager",
goal="Create and manage tasks efficiently",
backstory="An AI assistant that focuses on task creation and management.",
apps=[
'asana/create_task',
'asana/update_task',
'asana/get_tasks'
] # Specific Asana actions
)
# Task to create and assign a task
task_management = Task(
description="Create a task called 'Review quarterly reports' and assign it to the appropriate team member",
agent=task_manager_agent,
expected_output="Task created and assigned successfully"
)
crew = Crew(
agents=[task_manager_agent],
tasks=[task_management]
)
crew.kickoff()
```
### Advanced Project Management
```python
from crewai import Agent, Task, Crew
project_coordinator = Agent(
role="Project Coordinator",
goal="Coordinate project activities and track progress",
backstory="An experienced project coordinator who ensures projects run smoothly.",
apps=['asana']
)
# Complex task involving multiple Asana operations
coordination_task = Task(
description="""
1. Get all active projects in the workspace
2. For each project, get the list of incomplete tasks
3. Create a summary report task in the 'Management Reports' project
4. Add comments to overdue tasks to request status updates
""",
agent=project_coordinator,
expected_output="Summary report created and status update requests sent for overdue tasks"
)
crew = Crew(
agents=[project_coordinator],
tasks=[coordination_task]
)
crew.kickoff()
```
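### Syncing Tasks by External ID
The `gid` external ID documented above (`asana/create_task` and `asana/get_task_by_external_id`) can be used to keep Asana in sync with records in another system. The sketch below follows the same agent pattern as the examples above; the record ID `CRM-1042` is a hypothetical placeholder, not a value from your workspace.
```python
from crewai import Agent, Task, Crew

# Agent limited to the actions involved in external-ID syncing
sync_agent = Agent(
    role="Task Sync Specialist",
    goal="Keep Asana tasks in sync with records from an external system",
    backstory="An AI assistant that mirrors external records as Asana tasks.",
    apps=[
        'asana/create_task',
        'asana/get_task_by_external_id',
        'asana/update_task'
    ]  # Only the actions needed for syncing
)

# The external record ID below is a hypothetical example
sync_task = Task(
    description="Create a task for CRM record 'CRM-1042' using that ID as the external ID, "
                "or update the existing task if one is already associated with it",
    agent=sync_agent,
    expected_output="Asana task created or updated and linked to external ID CRM-1042"
)

crew = Crew(
    agents=[sync_agent],
    tasks=[sync_task]
)
crew.kickoff()
```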

View File

@@ -1,254 +0,0 @@
---
title: Box Integration
description: "File storage and document management with Box integration for CrewAI."
icon: "box"
mode: "wide"
---
## Overview
Enable your agents to manage files, folders, and documents through Box. Upload files, organize folder structures, search content, and streamline your team's document management with AI-powered automation.
## Prerequisites
Before using the Box integration, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account with an active subscription
- A Box account with appropriate permissions
- Connected your Box account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Box Integration
### 1. Connect Your Box Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Box** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for file and folder management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="box/save_file">
**Description:** Save a file from a URL in Box.
**Parameters:**
- `fileAttributes` (object, required): Attributes - File metadata including name, parent folder, and timestamps.
```json
{
"content_created_at": "2012-12-12T10:53:43-08:00",
"content_modified_at": "2012-12-12T10:53:43-08:00",
"name": "qwerty.png",
"parent": { "id": "1234567" }
}
```
- `file` (string, required): File URL - Files must be smaller than 50MB in size. (example: "https://picsum.photos/200/300").
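If you assemble the `fileAttributes` payload yourself, the content timestamps use an ISO 8601 / RFC 3339 format with a UTC offset, as in the example above. A minimal standard-library sketch; the file name and parent folder ID simply mirror the example values:
```python
from datetime import datetime

# Current local time with UTC offset, e.g. "2024-01-29T19:23:28-03:00"
now = datetime.now().astimezone().isoformat(timespec="seconds")

file_attributes = {
    "content_created_at": now,
    "content_modified_at": now,
    "name": "qwerty.png",
    "parent": {"id": "1234567"},  # parent folder ID from the example above
}
```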
</Accordion>
<Accordion title="box/save_file_from_object">
**Description:** Save a file in Box.
**Parameters:**
- `file` (string, required): File - Accepts a File Object containing file data. Files must be smaller than 50MB in size.
- `fileName` (string, required): File Name (example: "qwerty.png").
- `folder` (string, optional): Folder - Use Connect Portal Workflow Settings to allow users to select the File's Folder destination. Defaults to the user's root folder if left blank.
</Accordion>
<Accordion title="box/get_file_by_id">
**Description:** Get a file by ID in Box.
**Parameters:**
- `fileId` (string, required): File ID - The unique identifier that represents a file. (example: "12345").
</Accordion>
<Accordion title="box/list_files">
**Description:** List files in Box.
**Parameters:**
- `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0").
- `filterFormula` (object, optional): A filter in disjunctive normal form - OR of AND groups of single conditions.
```json
{
"operator": "OR",
"conditions": [
{
"operator": "AND",
"conditions": [
{
"field": "direction",
"operator": "$stringExactlyMatches",
"value": "ASC"
}
]
}
]
}
```
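As a rough sketch, the same OR-of-AND structure can be built with a small Python helper before it is serialized; `dnf_filter` is illustrative only and not part of crewai-tools:
```python
def dnf_filter(*and_groups):
    """Build an OR-of-AND filter matching the structure shown above.

    Each argument is a list of (field, operator, value) tuples that are
    AND-ed together; the groups themselves are OR-ed.
    """
    return {
        "operator": "OR",
        "conditions": [
            {
                "operator": "AND",
                "conditions": [
                    {"field": f, "operator": op, "value": v}
                    for f, op, v in group
                ],
            }
            for group in and_groups
        ],
    }

# Reproduces the example filter above
filter_formula = dnf_filter([("direction", "$stringExactlyMatches", "ASC")])
```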
</Accordion>
<Accordion title="box/create_folder">
**Description:** Create a folder in Box.
**Parameters:**
- `folderName` (string, required): Name - The name for the new folder. (example: "New Folder").
- `folderParent` (object, required): Parent Folder - The parent folder where the new folder will be created.
```json
{
"id": "123456"
}
```
</Accordion>
<Accordion title="box/move_folder">
**Description:** Move a folder in Box.
**Parameters:**
- `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0").
- `folderName` (string, required): Name - The name for the folder. (example: "New Folder").
- `folderParent` (object, required): Parent Folder - The new parent folder destination.
```json
{
"id": "123456"
}
```
</Accordion>
<Accordion title="box/get_folder_by_id">
**Description:** Get a folder by ID in Box.
**Parameters:**
- `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0").
</Accordion>
<Accordion title="box/search_folders">
**Description:** Search folders in Box.
**Parameters:**
- `folderId` (string, required): Folder ID - The folder to search within.
- `filterFormula` (object, optional): A filter in disjunctive normal form - OR of AND groups of single conditions.
```json
{
"operator": "OR",
"conditions": [
{
"operator": "AND",
"conditions": [
{
"field": "sort",
"operator": "$stringExactlyMatches",
"value": "name"
}
]
}
]
}
```
</Accordion>
<Accordion title="box/delete_folder">
**Description:** Delete a folder in Box.
**Parameters:**
- `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0").
- `recursive` (boolean, optional): Recursive - Delete a folder that is not empty by recursively deleting the folder and all of its content.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Box Agent Setup
```python
from crewai import Agent, Task, Crew
# Create an agent with Box capabilities
box_agent = Agent(
role="Document Manager",
goal="Manage files and folders in Box efficiently",
backstory="An AI assistant specialized in document management and file organization.",
apps=['box'] # All Box actions will be available
)
# Task to create a folder structure
create_structure_task = Task(
description="Create a folder called 'Project Files' in the root directory and upload a document from URL",
agent=box_agent,
expected_output="Folder created and file uploaded successfully"
)
# Run the task
crew = Crew(
agents=[box_agent],
tasks=[create_structure_task]
)
crew.kickoff()
```
### Filtering Specific Box Tools
```python
from crewai import Agent, Task, Crew
# Create agent with specific Box actions only
file_organizer_agent = Agent(
role="File Organizer",
goal="Organize and manage file storage efficiently",
backstory="An AI assistant that focuses on file organization and storage management.",
apps=['box/create_folder', 'box/save_file', 'box/list_files'] # Specific Box actions
)
# Task to organize files
organization_task = Task(
description="Create a folder structure for the marketing team and organize existing files",
agent=file_organizer_agent,
expected_output="Folder structure created and files organized"
)
crew = Crew(
agents=[file_organizer_agent],
tasks=[organization_task]
)
crew.kickoff()
```
### Advanced File Management
```python
from crewai import Agent, Task, Crew
file_manager = Agent(
role="File Manager",
goal="Maintain organized file structure and manage document lifecycle",
backstory="An experienced file manager who ensures documents are properly organized and accessible.",
apps=['box']
)
# Complex task involving multiple Box operations
management_task = Task(
description="""
1. List all files in the root folder
2. Create monthly archive folders for the current year
3. Move old files to appropriate archive folders
4. Generate a summary report of the file organization
""",
agent=file_manager,
expected_output="Files organized into archive structure with summary report"
)
crew = Crew(
agents=[file_manager],
tasks=[management_task]
)
crew.kickoff()
```

View File

@@ -1,272 +0,0 @@
---
title: ClickUp Integration
description: "Task and productivity management with ClickUp integration for CrewAI."
icon: "list-check"
mode: "wide"
---
## Overview
Enable your agents to manage tasks, projects, and productivity workflows through ClickUp. Create and update tasks, organize projects, manage team assignments, and streamline your productivity management with AI-powered automation.
## Prerequisites
Before using the ClickUp integration, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account with an active subscription
- A ClickUp account with appropriate permissions
- Connected your ClickUp account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up ClickUp Integration
### 1. Connect Your ClickUp Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **ClickUp** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for task and project management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="clickup/search_tasks">
**Description:** Search for tasks in ClickUp using advanced filters.
**Parameters:**
- `taskFilterFormula` (object, optional): A filter in disjunctive normal form - OR of AND groups of single conditions.
```json
{
"operator": "OR",
"conditions": [
{
"operator": "AND",
"conditions": [
{
"field": "statuses%5B%5D",
"operator": "$stringExactlyMatches",
"value": "open"
}
]
}
]
}
```
Available fields: `space_ids%5B%5D`, `project_ids%5B%5D`, `list_ids%5B%5D`, `statuses%5B%5D`, `include_closed`, `assignees%5B%5D`, `tags%5B%5D`, `due_date_gt`, `due_date_lt`, `date_created_gt`, `date_created_lt`, `date_updated_gt`, `date_updated_lt`
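The `%5B%5D` suffix in several field names is just the URL-encoded form of `[]`. A short standard-library sketch of how those encoded names can be produced:
```python
from urllib.parse import quote

# The bracketed field names above are URL-encoded forms of "name[]"
print(quote("statuses[]", safe=""))   # -> statuses%5B%5D
print(quote("assignees[]", safe=""))  # -> assignees%5B%5D
```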
</Accordion>
<Accordion title="clickup/get_task_in_list">
**Description:** Get tasks in a specific list in ClickUp.
**Parameters:**
- `listId` (string, required): List - Select a List to get tasks from. Use Connect Portal User Settings to allow users to select a ClickUp List.
- `taskFilterFormula` (string, optional): Search for tasks that match specified filters. For example: name=task1.
</Accordion>
<Accordion title="clickup/create_task">
**Description:** Create a task in ClickUp.
**Parameters:**
- `listId` (string, required): List - Select a List to create this task in. Use Connect Portal User Settings to allow users to select a ClickUp List.
- `name` (string, required): Name - The task name.
- `description` (string, optional): Description - Task description.
- `status` (string, optional): Status - Select a Status for this task. Use Connect Portal User Settings to allow users to select a ClickUp Status.
- `assignees` (string, optional): Assignees - Select a Member (or an array of member IDs) to be assigned to this task. Use Connect Portal User Settings to allow users to select a ClickUp Member.
- `dueDate` (string, optional): Due Date - Specify a date for this task to be due on.
- `additionalFields` (string, optional): Additional Fields - Specify additional fields to include on this task as JSON.
</Accordion>
<Accordion title="clickup/update_task">
**Description:** Update a task in ClickUp.
**Parameters:**
- `taskId` (string, required): Task ID - The ID of the task to update.
- `listId` (string, required): List - Select a List to create this task in. Use Connect Portal User Settings to allow users to select a ClickUp List.
- `name` (string, optional): Name - The task name.
- `description` (string, optional): Description - Task description.
- `status` (string, optional): Status - Select a Status for this task. Use Connect Portal User Settings to allow users to select a ClickUp Status.
- `assignees` (string, optional): Assignees - Select a Member (or an array of member IDs) to be assigned to this task. Use Connect Portal User Settings to allow users to select a ClickUp Member.
- `dueDate` (string, optional): Due Date - Specify a date for this task to be due on.
- `additionalFields` (string, optional): Additional Fields - Specify additional fields to include on this task as JSON.
</Accordion>
<Accordion title="clickup/delete_task">
**Description:** Delete a task in ClickUp.
**Parameters:**
- `taskId` (string, required): Task ID - The ID of the task to delete.
</Accordion>
<Accordion title="clickup/get_list">
**Description:** Get List information in ClickUp.
**Parameters:**
- `spaceId` (string, required): Space ID - The ID of the space containing the lists.
</Accordion>
<Accordion title="clickup/get_custom_fields_in_list">
**Description:** Get Custom Fields in a List in ClickUp.
**Parameters:**
- `listId` (string, required): List ID - The ID of the list to get custom fields from.
</Accordion>
<Accordion title="clickup/get_all_fields_in_list">
**Description:** Get All Fields in a List in ClickUp.
**Parameters:**
- `listId` (string, required): List ID - The ID of the list to get all fields from.
</Accordion>
<Accordion title="clickup/get_space">
**Description:** Get Space information in ClickUp.
**Parameters:**
- `spaceId` (string, optional): Space ID - The ID of the space to retrieve.
</Accordion>
<Accordion title="clickup/get_folders">
**Description:** Get Folders in ClickUp.
**Parameters:**
- `spaceId` (string, required): Space ID - The ID of the space containing the folders.
</Accordion>
<Accordion title="clickup/get_member">
**Description:** Get Member information in ClickUp.
**Parameters:** None required.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic ClickUp Agent Setup
```python
from crewai import Agent, Task, Crew
# Create an agent with ClickUp capabilities
clickup_agent = Agent(
role="Task Manager",
goal="Manage tasks and projects in ClickUp efficiently",
backstory="An AI assistant specialized in task management and productivity coordination.",
apps=['clickup'] # All ClickUp actions will be available
)
# Task to create a new task
create_task = Task(
description="Create a task called 'Review Q1 Reports' in the Marketing list with high priority",
agent=clickup_agent,
expected_output="Task created successfully with task ID"
)
# Run the task
crew = Crew(
agents=[clickup_agent],
tasks=[create_task]
)
crew.kickoff()
```
### Filtering Specific ClickUp Tools
```python
from crewai import Agent, Task, Crew
task_coordinator = Agent(
role="Task Coordinator",
goal="Create and manage tasks efficiently",
backstory="An AI assistant that focuses on task creation and status management.",
apps=['clickup/create_task']
)
# Task to manage task workflow
task_workflow = Task(
description="Create a task for project planning and assign it to the development team",
agent=task_coordinator,
expected_output="Task created and assigned successfully"
)
crew = Crew(
agents=[task_coordinator],
tasks=[task_workflow]
)
crew.kickoff()
```
### Advanced Project Management
```python
from crewai import Agent, Task, Crew
project_manager = Agent(
role="Project Manager",
goal="Coordinate project activities and track team productivity",
backstory="An experienced project manager who ensures projects are delivered on time.",
apps=['clickup']
)
# Complex task involving multiple ClickUp operations
project_coordination = Task(
description="""
1. Get all open tasks in the current space
2. Identify overdue tasks and update their status
3. Create a weekly report task summarizing project progress
4. Assign the report task to the team lead
""",
agent=project_manager,
expected_output="Project status updated and weekly report task created and assigned"
)
crew = Crew(
agents=[project_manager],
tasks=[project_coordination]
)
crew.kickoff()
```
### Task Search and Management
```python
from crewai import Agent, Task, Crew
task_analyst = Agent(
role="Task Analyst",
goal="Analyze task patterns and optimize team productivity",
backstory="An AI assistant that analyzes task data to improve team efficiency.",
apps=['clickup']
)
# Task to analyze and optimize task distribution
task_analysis = Task(
description="""
Search for all tasks assigned to team members in the last 30 days,
analyze completion patterns, and create optimization recommendations
""",
agent=task_analyst,
expected_output="Task analysis report with optimization recommendations"
)
crew = Crew(
agents=[task_analyst],
tasks=[task_analysis]
)
crew.kickoff()
```
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with ClickUp integration setup or troubleshooting.
</Card>

View File

@@ -1,302 +0,0 @@
---
title: GitHub Integration
description: "Repository and issue management with GitHub integration for CrewAI."
icon: "github"
mode: "wide"
---
## Overview
Enable your agents to manage repositories, issues, and releases through GitHub. Create and update issues, manage releases, track project development, and streamline your software development workflow with AI-powered automation.
## Prerequisites
Before using the GitHub integration, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account with an active subscription
- A GitHub account with appropriate repository permissions
- Connected your GitHub account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up GitHub Integration
### 1. Connect Your GitHub Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **GitHub** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for repository and issue management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="github/create_issue">
**Description:** Create an issue in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Issue. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Issue.
- `title` (string, required): Issue Title - Specify the title of the issue to create.
- `body` (string, optional): Issue Body - Specify the body contents of the issue to create.
- `assignees` (string, optional): Assignees - Specify the assignee(s)' GitHub login as an array of strings for this issue. (example: `["octocat"]`).
</Accordion>
<Accordion title="github/update_issue">
**Description:** Update an issue in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Issue. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Issue.
- `issue_number` (string, required): Issue Number - Specify the number of the issue to update.
- `title` (string, required): Issue Title - Specify the title of the issue to update.
- `body` (string, optional): Issue Body - Specify the body contents of the issue to update.
- `assignees` (string, optional): Assignees - Specify the assignee(s)' GitHub login as an array of strings for this issue. (example: `["octocat"]`).
- `state` (string, optional): State - Specify the updated state of the issue.
- Options: `open`, `closed`
</Accordion>
<Accordion title="github/get_issue_by_number">
**Description:** Get an issue by number in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Issue. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Issue.
- `issue_number` (string, required): Issue Number - Specify the number of the issue to fetch.
</Accordion>
<Accordion title="github/lock_issue">
**Description:** Lock an issue in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Issue. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Issue.
- `issue_number` (string, required): Issue Number - Specify the number of the issue to lock.
- `lock_reason` (string, required): Lock Reason - Specify a reason for locking the issue or pull request conversation.
- Options: `off-topic`, `too heated`, `resolved`, `spam`
</Accordion>
<Accordion title="github/search_issue">
**Description:** Search for issues in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Issue. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Issue.
- `filter` (object, required): A filter in disjunctive normal form - OR of AND groups of single conditions.
```json
{
"operator": "OR",
"conditions": [
{
"operator": "AND",
"conditions": [
{
"field": "assignee",
"operator": "$stringExactlyMatches",
"value": "octocat"
}
]
}
]
}
```
Available fields: `assignee`, `creator`, `mentioned`, `labels`
</Accordion>
<Accordion title="github/create_release">
**Description:** Create a release in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Release. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Release.
- `tag_name` (string, required): Name - Specify the name of the release tag to be created. (example: "v1.0.0").
- `target_commitish` (string, optional): Target - Specify the target of the release. This can either be a branch name or a commit SHA. Defaults to the main branch. (example: "master").
- `body` (string, optional): Body - Specify a description for this release.
- `draft` (string, optional): Draft - Specify whether the created release should be a draft (unpublished) release.
- Options: `true`, `false`
- `prerelease` (string, optional): Prerelease - Specify whether the created release should be a prerelease.
- Options: `true`, `false`
- `discussion_category_name` (string, optional): Discussion Category Name - If specified, a discussion of the specified category is created and linked to the release. The value must be a category that already exists in the repository.
- `generate_release_notes` (string, optional): Release Notes - Specify whether the created release should automatically generate release notes using the provided name and body.
- Options: `true`, `false`
</Accordion>
<Accordion title="github/update_release">
**Description:** Update a release in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Release. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Release.
- `id` (string, required): Release ID - Specify the ID of the release to update.
- `tag_name` (string, optional): Name - Specify the name of the release tag to be updated. (example: "v1.0.0").
- `target_commitish` (string, optional): Target - Specify the target of the release. This can either be a branch name or a commit SHA. Defaults to the main branch. (example: "master").
- `body` (string, optional): Body - Specify a description for this release.
- `draft` (string, optional): Draft - Specify whether the created release should be a draft (unpublished) release.
- Options: `true`, `false`
- `prerelease` (string, optional): Prerelease - Specify whether the created release should be a prerelease.
- Options: `true`, `false`
- `discussion_category_name` (string, optional): Discussion Category Name - If specified, a discussion of the specified category is created and linked to the release. The value must be a category that already exists in the repository.
- `generate_release_notes` (string, optional): Release Notes - Specify whether the release should automatically generate release notes using the provided name and body.
- Options: `true`, `false`
</Accordion>
<Accordion title="github/get_release_by_id">
**Description:** Get a release by ID in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Release. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Release.
- `id` (string, required): Release ID - Specify the release ID of the release to fetch.
</Accordion>
<Accordion title="github/get_release_by_tag_name">
**Description:** Get a release by tag name in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Release. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Release.
- `tag_name` (string, required): Name - Specify the tag of the release to fetch. (example: "v1.0.0").
</Accordion>
<Accordion title="github/delete_release">
**Description:** Delete a release in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Release. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Release.
- `id` (string, required): Release ID - Specify the ID of the release to delete.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic GitHub Agent Setup
```python
from crewai import Agent, Task, Crew
# Create an agent with GitHub capabilities
github_agent = Agent(
role="Repository Manager",
goal="Manage GitHub repositories, issues, and releases efficiently",
backstory="An AI assistant specialized in repository management and issue tracking.",
apps=['github'] # All GitHub actions will be available
)
# Task to create a new issue
create_issue_task = Task(
description="Create a bug report issue for the login functionality in the main repository",
agent=github_agent,
expected_output="Issue created successfully with issue number"
)
# Run the task
crew = Crew(
agents=[github_agent],
tasks=[create_issue_task]
)
crew.kickoff()
```
### Filtering Specific GitHub Tools
```python
from crewai import Agent, Task, Crew
issue_manager = Agent(
role="Issue Manager",
goal="Create and manage GitHub issues efficiently",
backstory="An AI assistant that focuses on issue tracking and management.",
apps=['github/create_issue']
)
# Task to manage issue workflow
issue_workflow = Task(
description="Create a feature request issue and assign it to the development team",
agent=issue_manager,
expected_output="Feature request issue created and assigned successfully"
)
crew = Crew(
agents=[issue_manager],
tasks=[issue_workflow]
)
crew.kickoff()
```
### Release Management
```python
from crewai import Agent, Task, Crew
release_manager = Agent(
role="Release Manager",
goal="Manage software releases and versioning",
backstory="An experienced release manager who handles version control and release processes.",
apps=['github']
)
# Task to create a new release
release_task = Task(
description="""
Create a new release v2.1.0 for the project with:
- Auto-generated release notes
- Target the main branch
- Include a description of new features and bug fixes
""",
agent=release_manager,
expected_output="Release v2.1.0 created successfully with release notes"
)
crew = Crew(
agents=[release_manager],
tasks=[release_task]
)
crew.kickoff()
```
### Issue Tracking and Management
```python
from crewai import Agent, Task, Crew
project_coordinator = Agent(
role="Project Coordinator",
goal="Track and coordinate project issues and development progress",
backstory="An AI assistant that helps coordinate development work and track project progress.",
apps=['github']
)
# Complex task involving multiple GitHub operations
coordination_task = Task(
description="""
1. Search for all open issues assigned to the current milestone
2. Identify overdue issues and update their priority labels
3. Create a weekly progress report issue
4. Lock resolved issues that have been inactive for 30 days
""",
agent=project_coordinator,
expected_output="Project coordination completed with progress report and issue management"
)
crew = Crew(
agents=[project_coordinator],
tasks=[coordination_task]
)
crew.kickoff()
```
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with GitHub integration setup or troubleshooting.
</Card>

View File

@@ -1,274 +0,0 @@
---
title: Gmail Integration
description: "Email and contact management with Gmail integration for CrewAI."
icon: "envelope"
mode: "wide"
---
## Overview
Enable your agents to manage emails, contacts, and drafts through Gmail. Send emails, search messages, manage contacts, create drafts, and streamline your email communications with AI-powered automation.
## Prerequisites
Before using the Gmail integration, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account with an active subscription
- A Gmail account with appropriate permissions
- Connected your Gmail account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Gmail Integration
### 1. Connect Your Gmail Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Gmail** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for email and contact management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="gmail/fetch_emails">
**Description:** Retrieve a list of messages.
**Parameters:**
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
- `q` (string, optional): Search query to filter messages (e.g., 'from:someone@example.com is:unread').
- `maxResults` (integer, optional): Maximum number of messages to return (1-500). (default: 100)
- `pageToken` (string, optional): Page token to retrieve a specific page of results.
- `labelIds` (array, optional): Only return messages with labels that match all of the specified label IDs.
- `includeSpamTrash` (boolean, optional): Include messages from SPAM and TRASH in the results. (default: false)
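The `q` parameter accepts the same search operators as the Gmail search box. A few illustrative query strings; the addresses and label are placeholders:
```python
# Unread messages received in the last 7 days
q_unread_recent = "is:unread newer_than:7d"

# Messages from a specific sender that carry attachments
q_from_with_attachment = "from:reports@example.com has:attachment"

# Messages matching a subject within a label
q_labeled_subject = "label:finance subject:invoice"
```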
</Accordion>
<Accordion title="gmail/send_email">
**Description:** Send an email.
**Parameters:**
- `to` (string, required): Recipient email address.
- `subject` (string, required): Email subject line.
- `body` (string, required): Email message content.
- `userId` (string, optional): The user's email address or 'me' for the authenticated user. (default: "me")
- `cc` (string, optional): CC email addresses (comma-separated).
- `bcc` (string, optional): BCC email addresses (comma-separated).
- `from` (string, optional): Sender email address (if different from authenticated user).
- `replyTo` (string, optional): Reply-to email address.
- `threadId` (string, optional): Thread ID if replying to an existing conversation.
</Accordion>
<Accordion title="gmail/delete_email">
**Description:** Delete an email by ID.
**Parameters:**
- `userId` (string, required): The user's email address or 'me' for the authenticated user.
- `id` (string, required): The ID of the message to delete.
</Accordion>
<Accordion title="gmail/create_draft">
**Description:** Create a new draft email.
**Parameters:**
- `userId` (string, required): The user's email address or 'me' for the authenticated user.
- `message` (object, required): Message object containing the draft content.
- `raw` (string, required): Base64url encoded email message.
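The `raw` field is the full RFC 822 message, base64url-encoded. A minimal sketch using only the Python standard library; the recipient and text are placeholders:
```python
import base64
from email.message import EmailMessage

msg = EmailMessage()
msg["To"] = "recipient@example.com"
msg["Subject"] = "Draft created by an agent"
msg.set_content("Hello,\n\nThis draft was prepared automatically.\n")

# Base64url-encode the RFC 822 message for the `raw` field
raw = base64.urlsafe_b64encode(msg.as_bytes()).decode()

message = {"raw": raw}
```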
</Accordion>
<Accordion title="gmail/get_message">
**Description:** Retrieve a specific message by ID.
**Parameters:**
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
- `id` (string, required): The ID of the message to retrieve.
- `format` (string, optional): The format to return the message in. Options: "full", "metadata", "minimal", "raw". (default: "full")
- `metadataHeaders` (array, optional): When given and format is METADATA, only include headers specified.
</Accordion>
<Accordion title="gmail/get_attachment">
**Description:** Retrieve a message attachment.
**Parameters:**
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
- `messageId` (string, required): The ID of the message containing the attachment.
- `id` (string, required): The ID of the attachment to retrieve.
</Accordion>
<Accordion title="gmail/fetch_thread">
**Description:** Retrieve a specific email thread by ID.
**Parameters:**
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
- `id` (string, required): The ID of the thread to retrieve.
- `format` (string, optional): The format to return the messages in. Options: "full", "metadata", "minimal". (default: "full")
- `metadataHeaders` (array, optional): When given and format is METADATA, only include headers specified.
</Accordion>
<Accordion title="gmail/modify_thread">
**Description:** Modify the labels applied to a thread.
**Parameters:**
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
- `id` (string, required): The ID of the thread to modify.
- `addLabelIds` (array, optional): A list of IDs of labels to add to this thread.
- `removeLabelIds` (array, optional): A list of IDs of labels to remove from this thread.
</Accordion>
<Accordion title="gmail/trash_thread">
**Description:** Move a thread to the trash.
**Parameters:**
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
- `id` (string, required): The ID of the thread to trash.
</Accordion>
<Accordion title="gmail/untrash_thread">
**Description:** Remove a thread from the trash.
**Parameters:**
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
- `id` (string, required): The ID of the thread to untrash.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Gmail Agent Setup
```python
from crewai import Agent, Task, Crew
# Create an agent with Gmail capabilities
gmail_agent = Agent(
role="Email Manager",
goal="Manage email communications and messages efficiently",
backstory="An AI assistant specialized in email management and communication.",
apps=['gmail'] # All Gmail actions will be available
)
# Task to send a follow-up email
send_email_task = Task(
description="Send a follow-up email to john@example.com about the project update meeting",
agent=gmail_agent,
expected_output="Email sent successfully with confirmation"
)
# Run the task
crew = Crew(
agents=[gmail_agent],
tasks=[send_email_task]
)
crew.kickoff()
```
### Filtering Specific Gmail Tools
```python
from crewai import Agent, Task, Crew
# Create agent with specific Gmail actions only
email_coordinator = Agent(
role="Email Coordinator",
goal="Coordinate email communications and manage drafts",
backstory="An AI assistant that focuses on email coordination and draft management.",
apps=[
'gmail/send_email',
'gmail/fetch_emails',
'gmail/create_draft'
]
)
# Task to prepare and send emails
email_coordination = Task(
description="Search for emails from the marketing team, create a summary draft, and send it to stakeholders",
agent=email_coordinator,
expected_output="Summary email sent to stakeholders"
)
crew = Crew(
agents=[email_coordinator],
tasks=[email_coordination]
)
crew.kickoff()
```
### Email Search and Analysis
```python
from crewai import Agent, Task, Crew
# Create agent with Gmail search and analysis capabilities
email_analyst = Agent(
role="Email Analyst",
goal="Analyze email patterns and provide insights",
backstory="An AI assistant that analyzes email data to provide actionable insights.",
apps=['gmail/fetch_emails', 'gmail/get_message'] # Specific actions for email analysis
)
# Task to analyze email patterns
analysis_task = Task(
description="""
Search for all unread emails from the last 7 days,
categorize them by sender domain,
and create a summary report of communication patterns
""",
agent=email_analyst,
expected_output="Email analysis report with communication patterns and recommendations"
)
crew = Crew(
agents=[email_analyst],
tasks=[analysis_task]
)
crew.kickoff()
```
### Thread Management
```python
from crewai import Agent, Task, Crew
# Create agent with Gmail thread management capabilities
thread_manager = Agent(
role="Thread Manager",
goal="Organize and manage email threads efficiently",
backstory="An AI assistant that specializes in email thread organization and management.",
apps=[
'gmail/fetch_thread',
'gmail/modify_thread',
'gmail/trash_thread'
]
)
# Task to organize email threads
thread_task = Task(
description="""
1. Fetch all threads from the last month
2. Apply appropriate labels to organize threads by project
3. Archive or trash threads that are no longer relevant
""",
agent=thread_manager,
expected_output="Email threads organized with appropriate labels and cleanup completed"
)
crew = Crew(
agents=[thread_manager],
tasks=[thread_task]
)
crew.kickoff()
```
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Gmail integration setup or troubleshooting.
</Card>

View File

@@ -1,338 +0,0 @@
---
title: Google Calendar Integration
description: "Event and schedule management with Google Calendar integration for CrewAI."
icon: "calendar"
mode: "wide"
---
## Overview
Enable your agents to manage calendar events, schedules, and availability through Google Calendar. Create and update events, manage attendees, check availability, and streamline your scheduling workflows with AI-powered automation.
## Prerequisites
Before using the Google Calendar integration, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account with an active subscription
- A Google account with Google Calendar access
- Connected your Google account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Google Calendar Integration
### 1. Connect Your Google Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Google Calendar** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for calendar access
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="google_calendar/get_availability">
**Description:** Get calendar availability (free/busy information).
**Parameters:**
- `timeMin` (string, required): Start time (RFC3339 format)
- `timeMax` (string, required): End time (RFC3339 format)
- `items` (array, required): Calendar IDs to check
```json
[
{
"id": "calendar_id"
}
]
```
- `timeZone` (string, optional): Time zone used in the response. The default is UTC.
- `groupExpansionMax` (integer, optional): Maximal number of calendar identifiers to be provided for a single group. Maximum: 100
- `calendarExpansionMax` (integer, optional): Maximal number of calendars for which FreeBusy information is to be provided. Maximum: 50
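Both time bounds are RFC 3339 timestamps. A short standard-library sketch of a one-week availability window; the calendar ID is a placeholder:
```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

availability_request = {
    "timeMin": now.isoformat(timespec="seconds"),                       # e.g. 2024-01-29T22:23:28+00:00
    "timeMax": (now + timedelta(days=7)).isoformat(timespec="seconds"),
    "items": [{"id": "primary"}],  # placeholder; use the calendar IDs you want to check
    "timeZone": "UTC",
}
```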
</Accordion>
<Accordion title="google_calendar/create_event">
**Description:** Create a new event in the specified calendar.
**Parameters:**
- `calendarId` (string, required): Calendar ID (use 'primary' for main calendar)
- `summary` (string, required): Event title/summary
- `start_dateTime` (string, required): Start time in RFC3339 format (e.g., 2024-01-20T10:00:00-07:00)
- `end_dateTime` (string, required): End time in RFC3339 format
- `description` (string, optional): Event description
- `timeZone` (string, optional): Time zone (e.g., America/Los_Angeles)
- `location` (string, optional): Geographic location of the event as free-form text.
- `attendees` (array, optional): List of attendees for the event.
```json
[
{
"email": "attendee@example.com",
"displayName": "Attendee Name",
"optional": false
}
]
```
- `reminders` (object, optional): Information about the event's reminders.
```json
{
"useDefault": true,
"overrides": [
{
"method": "email",
"minutes": 15
}
]
}
```
- `conferenceData` (object, optional): The conference-related information, such as details of a Google Meet conference.
```json
{
"createRequest": {
"requestId": "unique-request-id",
"conferenceSolutionKey": {
"type": "hangoutsMeet"
}
}
}
```
- `visibility` (string, optional): Visibility of the event. Options: default, public, private, confidential. Default: default
- `transparency` (string, optional): Whether the event blocks time on the calendar. Options: opaque, transparent. Default: opaque
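The `conferenceData.createRequest.requestId` only needs to be unique per request, so a generated UUID works well. A hedged sketch of an event payload built from the parameters above; the times, attendee, and calendar are placeholders:
```python
import uuid
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

start = datetime(2024, 1, 30, 10, 0, tzinfo=ZoneInfo("America/Los_Angeles"))
end = start + timedelta(hours=1)

event = {
    "calendarId": "primary",
    "summary": "Project kickoff",
    "start_dateTime": start.isoformat(),  # RFC 3339 with offset, e.g. 2024-01-30T10:00:00-08:00
    "end_dateTime": end.isoformat(),
    "timeZone": "America/Los_Angeles",
    "attendees": [
        {"email": "attendee@example.com", "displayName": "Attendee Name", "optional": False}
    ],
    "conferenceData": {
        "createRequest": {
            "requestId": str(uuid.uuid4()),  # unique per request
            "conferenceSolutionKey": {"type": "hangoutsMeet"},
        }
    },
}
```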
</Accordion>
<Accordion title="google_calendar/view_events">
**Description:** Retrieve events for the specified calendar.
**Parameters:**
- `calendarId` (string, required): Calendar ID (use 'primary' for main calendar)
- `timeMin` (string, optional): Lower bound for events (RFC3339)
- `timeMax` (string, optional): Upper bound for events (RFC3339)
- `maxResults` (integer, optional): Maximum number of events (default 10). Minimum: 1, Maximum: 2500
- `orderBy` (string, optional): The order of the events returned in the result. Options: startTime, updated. Default: startTime
- `singleEvents` (boolean, optional): Whether to expand recurring events into instances and only return single one-off events and instances of recurring events. Default: true
- `showDeleted` (boolean, optional): Whether to include deleted events (with status equals cancelled) in the result. Default: false
- `showHiddenInvitations` (boolean, optional): Whether to include hidden invitations in the result. Default: false
- `q` (string, optional): Free text search terms to find events that match these terms in any field.
- `pageToken` (string, optional): Token specifying which result page to return.
- `timeZone` (string, optional): Time zone used in the response.
- `updatedMin` (string, optional): Lower bound for an event's last modification time (RFC3339) to filter by.
- `iCalUID` (string, optional): Specifies an event ID in the iCalendar format to be provided in the response.
</Accordion>
<Accordion title="google_calendar/update_event">
**Description:** Update an existing event.
**Parameters:**
- `calendarId` (string, required): Calendar ID
- `eventId` (string, required): Event ID to update
- `summary` (string, optional): Updated event title
- `description` (string, optional): Updated event description
- `start_dateTime` (string, optional): Updated start time
- `end_dateTime` (string, optional): Updated end time
</Accordion>
<Accordion title="google_calendar/delete_event">
**Description:** Delete a specified event.
**Parameters:**
- `calendarId` (string, required): Calendar ID
- `eventId` (string, required): Event ID to delete
</Accordion>
<Accordion title="google_calendar/view_calendar_list">
**Description:** Retrieve user's calendar list.
**Parameters:**
- `maxResults` (integer, optional): Maximum number of entries returned on one result page. Minimum: 1
- `pageToken` (string, optional): Token specifying which result page to return.
- `showDeleted` (boolean, optional): Whether to include deleted calendar list entries in the result. Default: false
- `showHidden` (boolean, optional): Whether to show hidden entries. Default: false
- `minAccessRole` (string, optional): The minimum access role for the user in the returned entries. Options: freeBusyReader, owner, reader, writer
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Calendar Agent Setup
```python
from crewai import Agent, Task, Crew
# Create an agent with Google Calendar capabilities
calendar_agent = Agent(
role="Schedule Manager",
goal="Manage calendar events and scheduling efficiently",
backstory="An AI assistant specialized in calendar management and scheduling coordination.",
apps=['google_calendar'] # All Google Calendar actions will be available
)
# Task to create a meeting
create_meeting_task = Task(
description="Create a team standup meeting for tomorrow at 9 AM with the development team",
agent=calendar_agent,
expected_output="Meeting created successfully with Google Meet link"
)
# Run the task
crew = Crew(
agents=[calendar_agent],
tasks=[create_meeting_task]
)
crew.kickoff()
```
### Filtering Specific Calendar Tools
```python
from crewai import Agent, Task, Crew
meeting_coordinator = Agent(
role="Meeting Coordinator",
goal="Coordinate meetings and check availability",
backstory="An AI assistant that focuses on meeting scheduling and availability management.",
apps=['google_calendar/create_event', 'google_calendar/get_availability']
)
# Task to schedule a meeting with availability check
schedule_meeting = Task(
description="Check availability for next week and schedule a project review meeting with stakeholders",
agent=meeting_coordinator,
expected_output="Meeting scheduled after checking availability of all participants"
)
crew = Crew(
agents=[meeting_coordinator],
tasks=[schedule_meeting]
)
crew.kickoff()
```
### Event Management and Updates
```python
from crewai import Agent, Task, Crew
event_manager = Agent(
role="Event Manager",
goal="Manage and update calendar events efficiently",
backstory="An experienced event manager who handles event logistics and updates.",
apps=['google_calendar']
)
# Task to manage event updates
event_management = Task(
description="""
1. List all events for this week
2. Update any events that need location changes to include video conference links
3. Check availability for upcoming meetings
""",
agent=event_manager,
expected_output="Weekly events updated with proper locations and availability checked"
)
crew = Crew(
agents=[event_manager],
tasks=[event_management]
)
crew.kickoff()
```
### Availability and Calendar Management
```python
from crewai import Agent, Task, Crew
availability_coordinator = Agent(
role="Availability Coordinator",
goal="Coordinate availability and manage calendars for scheduling",
backstory="An AI assistant that specializes in availability management and calendar coordination.",
apps=['google_calendar']
)
# Task to coordinate availability
availability_task = Task(
description="""
1. Get the list of available calendars
2. Check availability for all calendars next Friday afternoon
3. Create a team meeting for the first available 2-hour slot
4. Include Google Meet link and send invitations
""",
agent=availability_coordinator,
expected_output="Team meeting scheduled based on availability with all team members invited"
)
crew = Crew(
agents=[availability_coordinator],
tasks=[availability_task]
)
crew.kickoff()
```
### Automated Scheduling Workflows
```python
from crewai import Agent, Task, Crew
scheduling_automator = Agent(
role="Scheduling Automator",
goal="Automate scheduling workflows and calendar management",
backstory="An AI assistant that automates complex scheduling scenarios and calendar workflows.",
apps=['google_calendar']
)
# Complex scheduling automation task
automation_task = Task(
description="""
1. List all upcoming events for the next two weeks
2. Identify any scheduling conflicts or back-to-back meetings
3. Suggest optimal meeting times by checking availability
4. Create buffer time between meetings where needed
5. Update event descriptions with agenda items and meeting links
""",
agent=scheduling_automator,
expected_output="Calendar optimized with resolved conflicts, buffer times, and updated meeting details"
)
crew = Crew(
agents=[scheduling_automator],
tasks=[automation_task]
)
crew.kickoff()
```
## Troubleshooting
### Common Issues
**Authentication Errors**
- Ensure your Google account has the necessary permissions for calendar access
- Verify that the OAuth connection includes all required scopes for Google Calendar API
- Check if calendar sharing settings allow the required access level
**Event Creation Issues**
- Verify that time formats are correct (RFC3339 format)
- Ensure attendee email addresses are properly formatted
- Check that the target calendar exists and is accessible
- Verify time zones are correctly specified
**Availability and Time Conflicts**
- Use proper RFC3339 format for time ranges when checking availability
- Ensure time zones are consistent across all operations
- Verify that calendar IDs are correct when checking multiple calendars
**Event Updates and Deletions**
- Verify that event IDs are correct and events exist
- Ensure you have edit permissions for the events
- Check that calendar ownership allows modifications
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Google Calendar integration setup or troubleshooting.
</Card>

View File

@@ -1,402 +0,0 @@
---
title: Google Contacts Integration
description: "Contact and directory management with Google Contacts integration for CrewAI."
icon: "address-book"
mode: "wide"
---
## Overview
Enable your agents to manage contacts and directory information through Google Contacts. Access personal contacts, search directory people, create and update contact information, and manage contact groups with AI-powered automation.
## Prerequisites
Before using the Google Contacts integration, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account with an active subscription
- A Google account with Google Contacts access
- Connected your Google account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Google Contacts Integration
### 1. Connect Your Google Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Google Contacts** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for contacts and directory access
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="google_contacts/get_contacts">
**Description:** Retrieve user's contacts from Google Contacts.
**Parameters:**
- `pageSize` (integer, optional): Number of contacts to return (max 1000). Minimum: 1, Maximum: 1000
- `pageToken` (string, optional): The token of the page to retrieve.
- `personFields` (string, optional): Fields to include (e.g., 'names,emailAddresses,phoneNumbers'). Default: names,emailAddresses,phoneNumbers
- `requestSyncToken` (boolean, optional): Whether the response should include a sync token. Default: false
- `sortOrder` (string, optional): The order in which the connections should be sorted. Options: LAST_MODIFIED_ASCENDING, LAST_MODIFIED_DESCENDING, FIRST_NAME_ASCENDING, LAST_NAME_ASCENDING
</Accordion>
<Accordion title="google_contacts/search_contacts">
**Description:** Search for contacts using a query string.
**Parameters:**
- `query` (string, required): Search query string
- `readMask` (string, required): Fields to read (e.g., 'names,emailAddresses,phoneNumbers')
- `pageSize` (integer, optional): Number of results to return. Minimum: 1, Maximum: 30
- `pageToken` (string, optional): Token specifying which result page to return.
- `sources` (array, optional): The sources to search in. Options: READ_SOURCE_TYPE_CONTACT, READ_SOURCE_TYPE_PROFILE. Default: READ_SOURCE_TYPE_CONTACT
</Accordion>
<Accordion title="google_contacts/list_directory_people">
**Description:** List people in the authenticated user's directory.
**Parameters:**
- `sources` (array, required): Directory sources to search within. Options: DIRECTORY_SOURCE_TYPE_DOMAIN_PROFILE, DIRECTORY_SOURCE_TYPE_DOMAIN_CONTACT. Default: DIRECTORY_SOURCE_TYPE_DOMAIN_PROFILE
- `pageSize` (integer, optional): Number of people to return. Minimum: 1, Maximum: 1000
- `pageToken` (string, optional): Token specifying which result page to return.
- `readMask` (string, optional): Fields to read (e.g., 'names,emailAddresses')
- `requestSyncToken` (boolean, optional): Whether the response should include a sync token. Default: false
- `mergeSources` (array, optional): Additional data to merge into the directory people responses. Options: CONTACT
</Accordion>
<Accordion title="google_contacts/search_directory_people">
**Description:** Search for people in the directory.
**Parameters:**
- `query` (string, required): Search query
- `sources` (string, required): Directory sources (use 'DIRECTORY_SOURCE_TYPE_DOMAIN_PROFILE')
- `pageSize` (integer, optional): Number of results to return
- `readMask` (string, optional): Fields to read
</Accordion>
<Accordion title="google_contacts/list_other_contacts">
**Description:** List other contacts (not in user's personal contacts).
**Parameters:**
- `pageSize` (integer, optional): Number of contacts to return. Minimum: 1, Maximum: 1000
- `pageToken` (string, optional): Token specifying which result page to return.
- `readMask` (string, optional): Fields to read
- `requestSyncToken` (boolean, optional): Whether the response should include a sync token. Default: false
</Accordion>
<Accordion title="google_contacts/search_other_contacts">
**Description:** Search other contacts.
**Parameters:**
- `query` (string, required): Search query
- `readMask` (string, required): Fields to read (e.g., 'names,emailAddresses')
- `pageSize` (integer, optional): Number of results
</Accordion>
<Accordion title="google_contacts/get_person">
**Description:** Get a single person's contact information by resource name.
**Parameters:**
- `resourceName` (string, required): The resource name of the person to get (e.g., 'people/c123456789')
- `personFields` (string, optional): Fields to include (e.g., 'names,emailAddresses,phoneNumbers'). Default: names,emailAddresses,phoneNumbers
</Accordion>
<Accordion title="google_contacts/create_contact">
**Description:** Create a new contact in the user's address book.
**Parameters:**
- `names` (array, optional): Person's names
```json
[
{
"givenName": "John",
"familyName": "Doe",
"displayName": "John Doe"
}
]
```
- `emailAddresses` (array, optional): Email addresses
```json
[
{
"value": "john.doe@example.com",
"type": "work"
}
]
```
- `phoneNumbers` (array, optional): Phone numbers
```json
[
{
"value": "+1234567890",
"type": "mobile"
}
]
```
- `addresses` (array, optional): Postal addresses
```json
[
{
"formattedValue": "123 Main St, City, State 12345",
"type": "home"
}
]
```
- `organizations` (array, optional): Organizations/companies
```json
[
{
"name": "Company Name",
"title": "Job Title",
"type": "work"
}
]
```
</Accordion>
<Accordion title="google_contacts/update_contact">
**Description:** Update an existing contact's information.
**Parameters:**
- `resourceName` (string, required): The resource name of the person to update (e.g., 'people/c123456789')
- `updatePersonFields` (string, required): Fields to update (e.g., 'names,emailAddresses,phoneNumbers')
- `names` (array, optional): Person's names
- `emailAddresses` (array, optional): Email addresses
- `phoneNumbers` (array, optional): Phone numbers
</Accordion>
<Accordion title="google_contacts/delete_contact">
**Description:** Delete a contact from the user's address book.
**Parameters:**
- `resourceName` (string, required): The resource name of the person to delete (e.g., 'people/c123456789')
</Accordion>
<Accordion title="google_contacts/batch_get_people">
**Description:** Get information about multiple people in a single request.
**Parameters:**
- `resourceNames` (array, required): Resource names of people to get. Maximum: 200 items
- `personFields` (string, optional): Fields to include (e.g., 'names,emailAddresses,phoneNumbers'). Default: names,emailAddresses,phoneNumbers
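Since `resourceNames` is capped at 200 entries per request, longer lists need to be split into batches. A small, illustrative chunking sketch; the resource names are placeholders:
```python
resource_names = [f"people/c{i}" for i in range(1, 501)]  # placeholder IDs

# Split into batches of at most 200 resource names per request
batches = [resource_names[i:i + 200] for i in range(0, len(resource_names), 200)]

for batch in batches:
    request = {
        "resourceNames": batch,
        "personFields": "names,emailAddresses,phoneNumbers",
    }
    # each `request` would back one batch_get_people call
```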
</Accordion>
<Accordion title="google_contacts/list_contact_groups">
**Description:** List the user's contact groups (labels).
**Parameters:**
- `pageSize` (integer, optional): Number of contact groups to return. Minimum: 1, Maximum: 1000
- `pageToken` (string, optional): Token specifying which result page to return.
- `groupFields` (string, optional): Fields to include (e.g., 'name,memberCount,clientData'). Default: name,memberCount
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Google Contacts Agent Setup
```python
from crewai import Agent, Task, Crew
# Create an agent with Google Contacts capabilities
contacts_agent = Agent(
role="Contact Manager",
goal="Manage contacts and directory information efficiently",
backstory="An AI assistant specialized in contact management and directory operations.",
apps=['google_contacts'] # All Google Contacts actions will be available
)
# Task to retrieve and organize contacts
contact_management_task = Task(
description="Retrieve all contacts and organize them by company affiliation",
agent=contacts_agent,
expected_output="Contacts retrieved and organized by company with summary report"
)
# Run the task
crew = Crew(
agents=[contacts_agent],
tasks=[contact_management_task]
)
crew.kickoff()
```
### Directory Search and Management
```python
from crewai import Agent, Task, Crew
directory_manager = Agent(
role="Directory Manager",
goal="Search and manage directory people and contacts",
backstory="An AI assistant that specializes in directory management and people search.",
apps=[
'google_contacts/search_directory_people',
'google_contacts/list_directory_people',
'google_contacts/search_contacts'
]
)
# Task to search and manage directory
directory_task = Task(
description="Search for team members in the company directory and create a team contact list",
agent=directory_manager,
expected_output="Team directory compiled with contact information"
)
crew = Crew(
agents=[directory_manager],
tasks=[directory_task]
)
crew.kickoff()
```
### Contact Creation and Updates
```python
from crewai import Agent, Task, Crew
contact_curator = Agent(
role="Contact Curator",
goal="Create and update contact information systematically",
backstory="An AI assistant that maintains accurate and up-to-date contact information.",
apps=['google_contacts']
)
# Task to create and update contacts
curation_task = Task(
description="""
1. Search for existing contacts related to new business partners
2. Create new contacts for partners not in the system
3. Update existing contact information with latest details
4. Organize contacts into appropriate groups
""",
agent=contact_curator,
expected_output="Contact database updated with new partners and organized groups"
)
crew = Crew(
agents=[contact_curator],
tasks=[curation_task]
)
crew.kickoff()
```
### Contact Group Management
```python
from crewai import Agent, Task, Crew
group_organizer = Agent(
role="Contact Group Organizer",
goal="Organize contacts into meaningful groups and categories",
backstory="An AI assistant that specializes in contact organization and group management.",
apps=['google_contacts']
)
# Task to organize contact groups
organization_task = Task(
description="""
1. List all existing contact groups
2. Analyze contact distribution across groups
3. Create new groups for better organization
4. Move contacts to appropriate groups based on their information
""",
agent=group_organizer,
expected_output="Contacts organized into logical groups with improved structure"
)
crew = Crew(
agents=[group_organizer],
tasks=[organization_task]
)
crew.kickoff()
```
### Comprehensive Contact Management
```python
from crewai import Agent, Task, Crew
contact_specialist = Agent(
role="Contact Management Specialist",
goal="Provide comprehensive contact management across all sources",
backstory="An AI assistant that handles all aspects of contact management including personal, directory, and other contacts.",
apps=['google_contacts']
)
# Complex contact management task
comprehensive_task = Task(
description="""
1. Retrieve contacts from all sources (personal, directory, other)
2. Search for duplicate contacts and merge information
3. Update outdated contact information
4. Create missing contacts for important stakeholders
5. Organize contacts into meaningful groups
6. Generate a comprehensive contact report
""",
agent=contact_specialist,
expected_output="Complete contact management performed with unified contact database and detailed report"
)
crew = Crew(
agents=[contact_specialist],
tasks=[comprehensive_task]
)
crew.kickoff()
```
## Troubleshooting
### Common Issues
**Permission Errors**
- Ensure your Google account has appropriate permissions for contacts access
- Verify that the OAuth connection includes required scopes for Google Contacts API
- Check that directory access permissions are granted for organization contacts
**Resource Name Format Issues**
- Ensure resource names follow the correct format (e.g., 'people/c123456789' for contacts)
- Verify that contact group resource names use the format 'contactGroups/groupId'
- Check that resource names exist and are accessible
**Search and Query Issues**
- Ensure search queries are properly formatted and not empty
- Use appropriate readMask fields for the data you need
- Verify that search sources are correctly specified (contacts vs profiles)
**Contact Creation and Updates**
- Ensure required fields are provided when creating contacts
- Verify that email addresses and phone numbers are properly formatted
- Check that updatePersonFields parameter includes all fields being updated
**Directory Access Issues**
- Ensure you have appropriate permissions to access organization directory
- Verify that directory sources are correctly specified
- Check that your organization allows API access to directory information
**Pagination and Limits**
- Be mindful of page size limits (varies by endpoint)
- Use pageToken for pagination through large result sets
- Respect API rate limits and implement appropriate delays
**Contact Groups and Organization**
- Ensure contact group names are unique when creating new groups
- Verify that contacts exist before adding them to groups
- Check that you have permissions to modify contact groups
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Google Contacts integration setup or troubleshooting.
</Card>


@@ -1,228 +0,0 @@
---
title: Google Docs Integration
description: "Document creation and editing with Google Docs integration for CrewAI."
icon: "file-lines"
mode: "wide"
---
## Overview
Enable your agents to create, edit, and manage Google Docs documents with text manipulation and formatting. Automate document creation, insert and replace text, manage content ranges, and streamline your document workflows with AI-powered automation.
## Prerequisites
Before using the Google Docs integration, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account with an active subscription
- A Google account with Google Docs access
- Connected your Google account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Google Docs Integration
### 1. Connect Your Google Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Google Docs** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for document access
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="google_docs/create_document">
**Description:** Create a new Google Document.
**Parameters:**
- `title` (string, optional): The title for the new document.
</Accordion>
<Accordion title="google_docs/get_document">
**Description:** Get the contents and metadata of a Google Document.
**Parameters:**
- `documentId` (string, required): The ID of the document to retrieve.
- `includeTabsContent` (boolean, optional): Whether to include tab content. Default is `false`.
- `suggestionsViewMode` (string, optional): The suggestions view mode to apply to the document. Enum: `DEFAULT_FOR_CURRENT_ACCESS`, `PREVIEW_SUGGESTIONS_ACCEPTED`, `PREVIEW_WITHOUT_SUGGESTIONS`. Default is `DEFAULT_FOR_CURRENT_ACCESS`.
</Accordion>
<Accordion title="google_docs/batch_update">
**Description:** Apply one or more updates to a Google Document.
**Parameters:**
- `documentId` (string, required): The ID of the document to update.
- `requests` (array, required): A list of updates to apply to the document. Each item is an object representing a request (see the sketch below).
- `writeControl` (object, optional): Provides control over how write requests are executed. Contains `requiredRevisionId` (string) and `targetRevisionId` (string).
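Each request object follows the Google Docs API `batchUpdate` request format. The sketch below shows two common request types with placeholder values; see the API reference for the full set of request types:
```python
# Hypothetical batch_update arguments; each entry in "requests" is one
# Google Docs API request object (placeholder document ID and values).
batch_update_args = {
    "documentId": "your_document_id",
    "requests": [
        {"insertText": {"location": {"index": 1}, "text": "Executive Summary\n"}},
        {
            "replaceAllText": {
                "containsText": {"text": "TODO", "matchCase": True},
                "replaceText": "COMPLETED",
            }
        },
    ],
}
```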
</Accordion>
<Accordion title="google_docs/insert_text">
**Description:** Insert text into a Google Document at a specific location.
**Parameters:**
- `documentId` (string, required): The ID of the document to update.
- `text` (string, required): The text to insert.
- `index` (integer, optional): The zero-based index at which to insert the text. Default is `1`.
</Accordion>
<Accordion title="google_docs/replace_text">
**Description:** Replace all instances of text in a Google Document.
**Parameters:**
- `documentId` (string, required): The ID of the document to update.
- `containsText` (string, required): The text to find and replace.
- `replaceText` (string, required): The text to replace it with.
- `matchCase` (boolean, optional): Whether the search should respect case. Default is `false`.
</Accordion>
<Accordion title="google_docs/delete_content_range">
**Description:** Delete content from a specific range in a Google Document.
**Parameters:**
- `documentId` (string, required): The ID of the document to update.
- `startIndex` (integer, required): The start index of the range to delete.
- `endIndex` (integer, required): The end index of the range to delete.
</Accordion>
<Accordion title="google_docs/insert_page_break">
**Description:** Insert a page break at a specific location in a Google Document.
**Parameters:**
- `documentId` (string, required): The ID of the document to update.
- `index` (integer, optional): The zero-based index at which to insert the page break. Default is `1`.
</Accordion>
<Accordion title="google_docs/create_named_range">
**Description:** Create a named range in a Google Document.
**Parameters:**
- `documentId` (string, required): The ID of the document to update.
- `name` (string, required): The name for the named range.
- `startIndex` (integer, required): The start index of the range.
- `endIndex` (integer, required): The end index of the range.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Google Docs Agent Setup
```python
from crewai import Agent, Task, Crew
# Create an agent with Google Docs capabilities
docs_agent = Agent(
role="Document Creator",
goal="Create and manage Google Docs documents efficiently",
backstory="An AI assistant specialized in Google Docs document creation and editing.",
apps=['google_docs'] # All Google Docs actions will be available
)
# Task to create a new document
create_doc_task = Task(
description="Create a new Google Document titled 'Project Status Report'",
agent=docs_agent,
expected_output="New Google Document 'Project Status Report' created successfully"
)
# Run the task
crew = Crew(
agents=[docs_agent],
tasks=[create_doc_task]
)
crew.kickoff()
```
### Text Editing and Content Management
```python
from crewai import Agent, Task, Crew
# Create an agent focused on text editing
text_editor = Agent(
role="Document Editor",
goal="Edit and update content in Google Docs documents",
backstory="An AI assistant skilled in precise text editing and content management.",
apps=['google_docs/insert_text', 'google_docs/replace_text', 'google_docs/delete_content_range']
)
# Task to edit document content
edit_content_task = Task(
description="In document 'your_document_id', insert the text 'Executive Summary: ' at the beginning, then replace all instances of 'TODO' with 'COMPLETED'.",
agent=text_editor,
expected_output="Document updated with new text inserted and TODO items replaced."
)
crew = Crew(
agents=[text_editor],
tasks=[edit_content_task]
)
crew.kickoff()
```
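The task above exercises the `insert_text` and `replace_text` actions described earlier. As a rough sketch, the parameter values the agent needs to resolve look like this (the document ID is a placeholder):
```python
# Hypothetical parameters behind the editing task above.
insert_text_args = {
    "documentId": "your_document_id",
    "text": "Executive Summary: ",
    "index": 1,  # insert at the start of the document body
}
replace_text_args = {
    "documentId": "your_document_id",
    "containsText": "TODO",
    "replaceText": "COMPLETED",
    "matchCase": False,  # optional; defaults to False
}
```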
### Advanced Document Operations
```python
from crewai import Agent, Task, Crew
# Create an agent for advanced document operations
document_formatter = Agent(
role="Document Formatter",
goal="Apply advanced formatting and structure to Google Docs",
backstory="An AI assistant that handles complex document formatting and organization.",
apps=['google_docs/batch_update', 'google_docs/insert_page_break', 'google_docs/create_named_range']
)
# Task to format document
format_doc_task = Task(
description="In document 'your_document_id', insert a page break at position 100, create a named range called 'Introduction' for characters 1-50, and apply batch formatting updates.",
agent=document_formatter,
expected_output="Document formatted with page break, named range, and styling applied."
)
crew = Crew(
agents=[document_formatter],
tasks=[format_doc_task]
)
crew.kickoff()
```
## Troubleshooting
### Common Issues
**Authentication Errors**
- Ensure your Google account has the necessary permissions for Google Docs access.
- Verify that the OAuth connection includes all required scopes (`https://www.googleapis.com/auth/documents`).
**Document ID Issues**
- Double-check document IDs for correctness.
- Ensure the document exists and is accessible to your account.
- Document IDs can be found in the Google Docs URL.
**Text Insertion and Range Operations**
- When using `insert_text` or `delete_content_range`, ensure index positions are valid.
- Remember that Google Docs uses zero-based indexing.
- The document must have content at the specified index positions.
**Batch Update Request Formatting**
- When using `batch_update`, ensure the `requests` array is correctly formatted according to the Google Docs API documentation.
- Complex updates require specific JSON structures for each request type.
**Replace Text Operations**
- For `replace_text`, ensure the `containsText` parameter exactly matches the text you want to replace.
- Use `matchCase` parameter to control case sensitivity.
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Google Docs integration setup or troubleshooting.
</Card>


@@ -1,213 +0,0 @@
---
title: Google Drive Integration
description: "File storage and management with Google Drive integration for CrewAI."
icon: "google"
mode: "wide"
---
## Overview
Enable your agents to manage files and folders through Google Drive. Upload, download, organize, and share files, create folders, and streamline your document management workflows with AI-powered automation.
## Prerequisites
Before using the Google Drive integration, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account with an active subscription
- A Google account with Google Drive access
- Connected your Google account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Google Drive Integration
### 1. Connect Your Google Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Google Drive** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for file and folder management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="google_drive/get_file">
**Description:** Get a file by ID from Google Drive.
**Parameters:**
- `file_id` (string, required): The ID of the file to retrieve.
</Accordion>
<Accordion title="google_drive/list_files">
**Description:** List files in Google Drive.
**Parameters:**
- `q` (string, optional): Query string to filter files (example: "name contains 'report'"; see the query sketches below).
- `page_size` (integer, optional): Maximum number of files to return (default: 100, max: 1000).
- `page_token` (string, optional): Token for retrieving the next page of results.
- `order_by` (string, optional): Sort order (example: "name", "createdTime desc", "modifiedTime").
- `spaces` (string, optional): Comma-separated list of spaces to query (drive, appDataFolder, photos).
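The `q` parameter follows the Google Drive API query syntax. A few illustrative query strings (the folder ID is a placeholder):
```python
# Hypothetical Drive query strings for the q parameter.
example_queries = [
    "name contains 'report'",                           # match by file name
    "mimeType = 'application/vnd.google-apps.folder'",  # folders only
    "'FOLDER_ID' in parents and trashed = false",       # children of a given folder
    "modifiedTime > '2024-01-01T00:00:00'",             # modified after a date
]
```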
</Accordion>
<Accordion title="google_drive/upload_file">
**Description:** Upload a file to Google Drive.
**Parameters:**
- `name` (string, required): Name of the file to create.
- `content` (string, required): Content of the file to upload.
- `mime_type` (string, optional): MIME type of the file (example: "text/plain", "application/pdf").
- `parent_folder_id` (string, optional): ID of the parent folder where the file should be created.
- `description` (string, optional): Description of the file.
</Accordion>
<Accordion title="google_drive/download_file">
**Description:** Download a file from Google Drive.
**Parameters:**
- `file_id` (string, required): The ID of the file to download.
- `mime_type` (string, optional): MIME type for export (required for Google Workspace documents).
</Accordion>
<Accordion title="google_drive/create_folder">
**Description:** Create a new folder in Google Drive.
**Parameters:**
- `name` (string, required): Name of the folder to create.
- `parent_folder_id` (string, optional): ID of the parent folder where the new folder should be created.
- `description` (string, optional): Description of the folder.
</Accordion>
<Accordion title="google_drive/delete_file">
**Description:** Delete a file from Google Drive.
**Parameters:**
- `file_id` (string, required): The ID of the file to delete.
</Accordion>
<Accordion title="google_drive/share_file">
**Description:** Share a file in Google Drive with specific users or make it public.
**Parameters:**
- `file_id` (string, required): The ID of the file to share.
- `role` (string, required): The role granted by this permission (reader, writer, commenter, owner).
- `type` (string, required): The type of the grantee (user, group, domain, anyone).
- `email_address` (string, optional): The email address of the user or group to share with (required for user/group types).
- `domain` (string, optional): The domain to share with (required for domain type).
- `send_notification_email` (boolean, optional): Whether to send a notification email (default: true).
- `email_message` (string, optional): A plain text custom message to include in the notification email.
</Accordion>
<Accordion title="google_drive/update_file">
**Description:** Update an existing file in Google Drive.
**Parameters:**
- `file_id` (string, required): The ID of the file to update.
- `name` (string, optional): New name for the file.
- `content` (string, optional): New content for the file.
- `mime_type` (string, optional): New MIME type for the file.
- `description` (string, optional): New description for the file.
- `add_parents` (string, optional): Comma-separated list of parent folder IDs to add.
- `remove_parents` (string, optional): Comma-separated list of parent folder IDs to remove.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Google Drive Agent Setup
```python
from crewai import Agent, Task, Crew
# Create an agent with Google Drive capabilities
drive_agent = Agent(
role="File Manager",
goal="Manage files and folders in Google Drive efficiently",
backstory="An AI assistant specialized in document and file management.",
apps=['google_drive'] # All Google Drive actions will be available
)
# Task to organize files
organize_files_task = Task(
description="List all files in the root directory and organize them into appropriate folders",
agent=drive_agent,
expected_output="Summary of files organized with folder structure"
)
# Run the task
crew = Crew(
agents=[drive_agent],
tasks=[organize_files_task]
)
crew.kickoff()
```
### Filtering Specific Google Drive Tools
```python
from crewai import Agent, Task, Crew
# Create agent with specific Google Drive actions only
file_manager_agent = Agent(
role="Document Manager",
goal="Upload and manage documents efficiently",
backstory="An AI assistant that focuses on document upload and organization.",
apps=[
'google_drive/upload_file',
'google_drive/create_folder',
'google_drive/share_file'
] # Specific Google Drive actions
)
# Task to upload and share documents
document_task = Task(
description="Upload the quarterly report and share it with the finance team",
agent=file_manager_agent,
expected_output="Document uploaded and sharing permissions configured"
)
crew = Crew(
agents=[file_manager_agent],
tasks=[document_task]
)
crew.kickoff()
```
### Advanced File Management
```python
from crewai import Agent, Task, Crew
file_organizer = Agent(
role="File Organizer",
goal="Maintain organized file structure and manage permissions",
backstory="An experienced file manager who ensures proper organization and access control.",
apps=['google_drive']
)
# Complex task involving multiple Google Drive operations
organization_task = Task(
description="""
1. List all files in the shared folder
2. Create folders for different document types (Reports, Presentations, Spreadsheets)
3. Move files to appropriate folders based on their type
4. Set appropriate sharing permissions for each folder
5. Create a summary document of the organization changes
""",
agent=file_organizer,
expected_output="Files organized into categorized folders with proper permissions and summary report"
)
crew = Crew(
agents=[file_organizer],
tasks=[organization_task]
)
crew.kickoff()
```


@@ -1,339 +0,0 @@
---
title: Google Sheets Integration
description: "Spreadsheet data synchronization with Google Sheets integration for CrewAI."
icon: "google"
mode: "wide"
---
## Overview
Enable your agents to manage spreadsheet data through Google Sheets. Read rows, create new entries, update existing data, and streamline your data management workflows with AI-powered automation. Perfect for data tracking, reporting, and collaborative data management.
## Prerequisites
Before using the Google Sheets integration, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account with an active subscription
- A Google account with Google Sheets access
- Connected your Google account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
- Spreadsheets with proper column headers for data operations
## Setting Up Google Sheets Integration
### 1. Connect Your Google Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Google Sheets** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for spreadsheet access
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="google_sheets/get_spreadsheet">
**Description:** Retrieve properties and data of a spreadsheet.
**Parameters:**
- `spreadsheetId` (string, required): The ID of the spreadsheet to retrieve.
- `ranges` (array, optional): The ranges to retrieve from the spreadsheet.
- `includeGridData` (boolean, optional): True if grid data should be returned. Default: false
- `fields` (string, optional): The fields to include in the response. Use this to improve performance by only returning needed data.
</Accordion>
<Accordion title="google_sheets/get_values">
**Description:** Returns a range of values from a spreadsheet.
**Parameters:**
- `spreadsheetId` (string, required): The ID of the spreadsheet to retrieve data from.
- `range` (string, required): The A1 notation or R1C1 notation of the range to retrieve values from.
- `valueRenderOption` (string, optional): How values should be represented in the output. Options: FORMATTED_VALUE, UNFORMATTED_VALUE, FORMULA. Default: FORMATTED_VALUE
- `dateTimeRenderOption` (string, optional): How dates, times, and durations should be represented in the output. Options: SERIAL_NUMBER, FORMATTED_STRING. Default: SERIAL_NUMBER
- `majorDimension` (string, optional): The major dimension that results should use. Options: ROWS, COLUMNS. Default: ROWS
</Accordion>
<Accordion title="google_sheets/update_values">
**Description:** Sets values in a range of a spreadsheet.
**Parameters:**
- `spreadsheetId` (string, required): The ID of the spreadsheet to update.
- `range` (string, required): The A1 notation of the range to update.
- `values` (array, required): The data to be written. Each array represents a row.
```json
[
["Value1", "Value2", "Value3"],
["Value4", "Value5", "Value6"]
]
```
- `valueInputOption` (string, optional): How the input data should be interpreted. Options: RAW, USER_ENTERED. Default: USER_ENTERED
</Accordion>
<Accordion title="google_sheets/append_values">
**Description:** Appends values to a spreadsheet.
**Parameters:**
- `spreadsheetId` (string, required): The ID of the spreadsheet to update.
- `range` (string, required): The A1 notation of a range to search for a logical table of data.
- `values` (array, required): The data to append. Each array represents a row.
```json
[
["Value1", "Value2", "Value3"],
["Value4", "Value5", "Value6"]
]
```
- `valueInputOption` (string, optional): How the input data should be interpreted. Options: RAW, USER_ENTERED. Default: USER_ENTERED
- `insertDataOption` (string, optional): How the input data should be inserted. Options: OVERWRITE, INSERT_ROWS. Default: INSERT_ROWS
</Accordion>
<Accordion title="google_sheets/create_spreadsheet">
**Description:** Creates a new spreadsheet.
**Parameters:**
- `title` (string, required): The title of the new spreadsheet.
- `sheets` (array, optional): The sheets that are part of the spreadsheet.
```json
[
{
"properties": {
"title": "Sheet1"
}
}
]
```
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Google Sheets Agent Setup
```python
from crewai import Agent, Task, Crew
# Create an agent with Google Sheets capabilities
sheets_agent = Agent(
role="Data Manager",
goal="Manage spreadsheet data and track information efficiently",
backstory="An AI assistant specialized in data management and spreadsheet operations.",
apps=['google_sheets']
)
# Task to add new data to a spreadsheet
data_entry_task = Task(
description="Add a new customer record to the customer database spreadsheet with name, email, and signup date",
agent=sheets_agent,
expected_output="New customer record added successfully to the spreadsheet"
)
# Run the task
crew = Crew(
agents=[sheets_agent],
tasks=[data_entry_task]
)
crew.kickoff()
```
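Behind a task like the one above, the agent would typically call the `append_values` action with arguments along these lines (the spreadsheet ID, range, and row values are illustrative placeholders):
```python
# Hypothetical append_values arguments for adding one customer row.
append_args = {
    "spreadsheetId": "your_spreadsheet_id",
    "range": "Customers!A:C",            # table with name, email, signup date columns
    "values": [["Jane Doe", "jane.doe@example.com", "2024-01-29"]],
    "valueInputOption": "USER_ENTERED",  # let Sheets parse dates and numbers
}
```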
### Filtering Specific Google Sheets Tools
```python
from crewai import Agent, Task, Crew
# Create agent with specific Google Sheets actions only
data_collector = Agent(
role="Data Collector",
goal="Collect and organize data in spreadsheets",
backstory="An AI assistant that focuses on data collection and organization.",
apps=[
'google_sheets/get_values',
'google_sheets/update_values'
]
)
# Task to collect and organize data
data_collection = Task(
description="Retrieve current inventory data and add new product entries to the inventory spreadsheet",
agent=data_collector,
expected_output="Inventory data retrieved and new products added successfully"
)
crew = Crew(
agents=[data_collector],
tasks=[data_collection]
)
crew.kickoff()
```
### Data Analysis and Reporting
```python
from crewai import Agent, Task, Crew
data_analyst = Agent(
role="Data Analyst",
goal="Analyze spreadsheet data and generate insights",
backstory="An experienced data analyst who extracts insights from spreadsheet data.",
apps=['google_sheets']
)
# Task to analyze data and create reports
analysis_task = Task(
description="""
1. Retrieve all sales data from the current month's spreadsheet
2. Analyze the data for trends and patterns
3. Create a summary report in a new row with key metrics
""",
agent=data_analyst,
expected_output="Sales data analyzed and summary report created with key insights"
)
crew = Crew(
agents=[data_analyst],
tasks=[analysis_task]
)
crew.kickoff()
```
### Spreadsheet Creation and Management
```python
from crewai import Agent, Task, Crew
spreadsheet_manager = Agent(
role="Spreadsheet Manager",
goal="Create and manage spreadsheets efficiently",
backstory="An AI assistant that specializes in creating and organizing spreadsheets.",
apps=['google_sheets']
)
# Task to create and set up new spreadsheets
setup_task = Task(
description="""
1. Create a new spreadsheet for quarterly reports
2. Set up proper headers and structure
3. Add initial data and formatting
""",
agent=spreadsheet_manager,
expected_output="New quarterly report spreadsheet created and properly structured"
)
crew = Crew(
agents=[spreadsheet_manager],
tasks=[setup_task]
)
crew.kickoff()
```
### Automated Data Updates
```python
from crewai import Agent, Task, Crew
data_updater = Agent(
role="Data Updater",
goal="Automatically update and maintain spreadsheet data",
backstory="An AI assistant that maintains data accuracy and updates records automatically.",
apps=['google_sheets']
)
# Task to update data based on conditions
update_task = Task(
description="""
1. Get spreadsheet properties and structure
2. Read current data from specific ranges
3. Update values in target ranges with new data
4. Append new records to the bottom of the sheet
""",
agent=data_updater,
expected_output="Spreadsheet data updated successfully with new values and records"
)
crew = Crew(
agents=[data_updater],
tasks=[update_task]
)
crew.kickoff()
```
### Complex Data Management Workflow
```python
from crewai import Agent, Task, Crew
workflow_manager = Agent(
role="Data Workflow Manager",
goal="Manage complex data workflows across multiple spreadsheets",
backstory="An AI assistant that orchestrates complex data operations across multiple spreadsheets.",
apps=['google_sheets']
)
# Complex workflow task
workflow_task = Task(
description="""
1. Get all customer data from the main customer spreadsheet
2. Create a new monthly summary spreadsheet
3. Append summary data to the new spreadsheet
4. Update customer status based on activity metrics
5. Generate reports with proper formatting
""",
agent=workflow_manager,
expected_output="Monthly customer workflow completed with new spreadsheet and updated data"
)
crew = Crew(
agents=[workflow_manager],
tasks=[workflow_task]
)
crew.kickoff()
```
## Troubleshooting
### Common Issues
**Permission Errors**
- Ensure your Google account has edit access to the target spreadsheets
- Verify that the OAuth connection includes required scopes for Google Sheets API
- Check that spreadsheets are shared with the authenticated account
**Spreadsheet Structure Issues**
- Ensure worksheets have proper column headers before creating or updating rows
- Verify that range notation (A1 format) is correct for the target cells
- Check that the specified spreadsheet ID exists and is accessible
**Data Type and Format Issues**
- Ensure data values match the expected format for each column
- Use proper date formats for date columns (ISO format recommended)
- Verify that numeric values are properly formatted for number columns
**Range and Cell Reference Issues**
- Use proper A1 notation for ranges (e.g., "A1:C10", "Sheet1!A1:B5")
- Ensure range references don't exceed the actual spreadsheet dimensions
- Verify that sheet names in range references match actual sheet names
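A few examples of valid A1-notation ranges, assuming the spreadsheet contains a sheet named `Sheet1`:
```python
# Illustrative A1-notation ranges.
example_ranges = [
    "A1:C10",        # rows 1-10, columns A-C on the first sheet
    "Sheet1!A1:B5",  # explicit sheet name
    "Sheet1!A:A",    # entire column A
    "Sheet1!1:2",    # entire rows 1 and 2
]
```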
**Value Input and Rendering Options**
- Choose appropriate `valueInputOption` (RAW vs USER_ENTERED) for your data
- Select proper `valueRenderOption` based on how you want data formatted
- Consider `dateTimeRenderOption` for consistent date/time handling
**Spreadsheet Creation Issues**
- Ensure spreadsheet titles are unique and follow naming conventions
- Verify that sheet properties are properly structured when creating sheets
- Check that you have permissions to create new spreadsheets in your account
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Google Sheets integration setup or troubleshooting.
</Card>


@@ -1,371 +0,0 @@
---
title: Google Slides Integration
description: "Presentation creation and management with Google Slides integration for CrewAI."
icon: "chart-bar"
mode: "wide"
---
## Overview
Enable your agents to create, edit, and manage Google Slides presentations. Create presentations, update content, import data from Google Sheets, manage pages and thumbnails, and streamline your presentation workflows with AI-powered automation.
## Prerequisites
Before using the Google Slides integration, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account with an active subscription
- A Google account with Google Slides access
- Connected your Google account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Google Slides Integration
### 1. Connect Your Google Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Google Slides** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for presentations, spreadsheets, and drive access
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="google_slides/create_blank_presentation">
**Description:** Creates a blank presentation with no content.
**Parameters:**
- `title` (string, required): The title of the presentation.
</Accordion>
<Accordion title="google_slides/get_presentation">
**Description:** Retrieves a presentation by ID.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation to retrieve.
- `fields` (string, optional): The fields to include in the response. Use this to improve performance by only returning needed data.
</Accordion>
<Accordion title="google_slides/batch_update_presentation">
**Description:** Applies one or more updates to a presentation, such as adding or removing content.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation to update.
- `requests` (array, required): A list of updates to apply to the presentation.
```json
[
{
"insertText": {
"objectId": "slide_id",
"text": "Your text content here"
}
}
]
```
- `writeControl` (object, optional): Provides control over how write requests are executed.
```json
{
"requiredRevisionId": "revision_id_string"
}
```
</Accordion>
<Accordion title="google_slides/get_page">
**Description:** Retrieves a specific page by its ID.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `pageObjectId` (string, required): The ID of the page to retrieve.
</Accordion>
<Accordion title="google_slides/get_thumbnail">
**Description:** Generates a page thumbnail.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `pageObjectId` (string, required): The ID of the page for thumbnail generation.
</Accordion>
<Accordion title="google_slides/import_data_from_sheet">
**Description:** Imports data from a Google Sheet into a presentation.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `sheetId` (string, required): The ID of the Google Sheet to import from.
- `dataRange` (string, required): The range of data to import from the sheet.
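A minimal sketch of the arguments, assuming the source sheet has data in the given range (the IDs are placeholders):
```python
# Hypothetical arguments for google_slides/import_data_from_sheet.
import_args = {
    "presentationId": "your_presentation_id",
    "sheetId": "your_sheet_id",
    "dataRange": "Sheet1!A1:D10",  # A1-notation range in the source sheet
}
```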
</Accordion>
<Accordion title="google_slides/upload_file_to_drive">
**Description:** Uploads a file to Google Drive and links it to the presentation.
**Parameters:**
- `file` (string, required): The file data to upload.
- `presentationId` (string, required): The ID of the presentation to link the uploaded file to.
</Accordion>
<Accordion title="google_slides/link_file_to_presentation">
**Description:** Links a file in Google Drive to a presentation.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `fileId` (string, required): The ID of the file to link.
</Accordion>
<Accordion title="google_slides/get_all_presentations">
**Description:** Lists all presentations accessible to the user.
**Parameters:**
- `pageSize` (integer, optional): The number of presentations to return per page.
- `pageToken` (string, optional): A token for pagination.
</Accordion>
<Accordion title="google_slides/delete_presentation">
**Description:** Deletes a presentation by ID.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation to delete.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Google Slides Agent Setup
```python
from crewai import Agent, Task, Crew
# Create an agent with Google Slides capabilities
slides_agent = Agent(
role="Presentation Manager",
goal="Create and manage presentations efficiently",
backstory="An AI assistant specialized in presentation creation and content management.",
apps=['google_slides'] # All Google Slides actions will be available
)
# Task to create a presentation
create_presentation_task = Task(
description="Create a new presentation for the quarterly business review with key slides",
agent=slides_agent,
expected_output="Quarterly business review presentation created with structured content"
)
# Run the task
crew = Crew(
agents=[slides_agent],
tasks=[create_presentation_task]
)
crew.kickoff()
```
### Presentation Content Management
```python
from crewai import Agent, Task, Crew
content_manager = Agent(
role="Content Manager",
goal="Manage presentation content and updates",
backstory="An AI assistant that focuses on content creation and presentation updates.",
apps=[
'google_slides/create_blank_presentation',
'google_slides/batch_update_presentation',
'google_slides/get_presentation'
]
)
# Task to create and update presentations
content_task = Task(
description="Create a new presentation and add content slides with charts and text",
agent=content_manager,
expected_output="Presentation created with updated content and visual elements"
)
crew = Crew(
agents=[content_manager],
tasks=[content_task]
)
crew.kickoff()
```
### Data Integration and Visualization
```python
from crewai import Agent, Task, Crew
data_visualizer = Agent(
role="Data Visualizer",
goal="Create presentations with data imported from spreadsheets",
backstory="An AI assistant that specializes in data visualization and presentation integration.",
apps=['google_slides']
)
# Task to create data-driven presentations
visualization_task = Task(
description="""
1. Create a new presentation for monthly sales report
2. Import data from the sales spreadsheet
3. Create charts and visualizations from the imported data
4. Generate thumbnails for slide previews
""",
agent=data_visualizer,
expected_output="Data-driven presentation created with imported spreadsheet data and visualizations"
)
crew = Crew(
agents=[data_visualizer],
tasks=[visualization_task]
)
crew.kickoff()
```
### Presentation Library Management
```python
from crewai import Agent, Task, Crew
library_manager = Agent(
role="Presentation Library Manager",
goal="Manage and organize presentation libraries",
backstory="An AI assistant that manages presentation collections and file organization.",
apps=['google_slides']
)
# Task to manage presentation library
library_task = Task(
description="""
1. List all existing presentations
2. Generate thumbnails for presentation previews
3. Upload supporting files to Drive and link to presentations
4. Organize presentations by topic and date
""",
agent=library_manager,
expected_output="Presentation library organized with thumbnails and linked supporting files"
)
crew = Crew(
agents=[library_manager],
tasks=[library_task]
)
crew.kickoff()
```
### Automated Presentation Workflows
```python
from crewai import Agent, Task, Crew
presentation_automator = Agent(
role="Presentation Automator",
goal="Automate presentation creation and management workflows",
backstory="An AI assistant that automates complex presentation workflows and content generation.",
apps=['google_slides']
)
# Complex presentation automation task
automation_task = Task(
description="""
1. Create multiple presentations for different departments
2. Import relevant data from various spreadsheets
3. Update existing presentations with new content
4. Generate thumbnails for all presentations
5. Link supporting documents from Drive
6. Create a master index presentation with links to all others
""",
agent=presentation_automator,
expected_output="Automated presentation workflow completed with multiple presentations and organized structure"
)
crew = Crew(
agents=[presentation_automator],
tasks=[automation_task]
)
crew.kickoff()
```
### Template and Content Creation
```python
from crewai import Agent, Task, Crew
template_creator = Agent(
role="Template Creator",
goal="Create presentation templates and standardized content",
backstory="An AI assistant that creates consistent presentation templates and content standards.",
apps=['google_slides']
)
# Task to create templates
template_task = Task(
description="""
1. Create blank presentation templates for different use cases
2. Add standard layouts and content placeholders
3. Create sample presentations with best practices
4. Generate thumbnails for template previews
5. Upload template assets to Drive and link appropriately
""",
agent=template_creator,
expected_output="Presentation templates created with standardized layouts and linked assets"
)
crew = Crew(
agents=[template_creator],
tasks=[template_task]
)
crew.kickoff()
```
## Troubleshooting
### Common Issues
**Permission Errors**
- Ensure your Google account has appropriate permissions for Google Slides
- Verify that the OAuth connection includes required scopes for presentations, spreadsheets, and drive access
- Check that presentations are shared with the authenticated account
**Presentation ID Issues**
- Verify that presentation IDs are correct and presentations exist
- Ensure you have access permissions to the presentations you're trying to modify
- Check that presentation IDs are properly formatted
**Content Update Issues**
- Ensure batch update requests are properly formatted according to Google Slides API specifications
- Verify that object IDs for slides and elements exist in the presentation
- Check that write control revision IDs are current if using optimistic concurrency
**Data Import Issues**
- Verify that Google Sheet IDs are correct and accessible
- Ensure data ranges are properly specified using A1 notation
- Check that you have read permissions for the source spreadsheets
**File Upload and Linking Issues**
- Ensure file data is properly encoded for upload
- Verify that Drive file IDs are correct when linking files
- Check that you have appropriate Drive permissions for file operations
**Page and Thumbnail Operations**
- Verify that page object IDs exist in the specified presentation
- Ensure presentations have content before attempting to generate thumbnails
- Check that page structure is valid for thumbnail generation
**Pagination and Listing Issues**
- Use appropriate page sizes for listing presentations
- Implement proper pagination using page tokens for large result sets
- Handle empty result sets gracefully
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Google Slides integration setup or troubleshooting.
</Card>


@@ -1,565 +0,0 @@
---
title: "HubSpot Integration"
description: "Manage companies and contacts in HubSpot with CrewAI."
icon: "briefcase"
mode: "wide"
---
## Overview
Enable your agents to manage companies and contacts within HubSpot. Create new records and streamline your CRM processes with AI-powered automation.
## Prerequisites
Before using the HubSpot integration, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account with an active subscription.
- A HubSpot account with appropriate permissions.
- Connected your HubSpot account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors).
## Setting Up HubSpot Integration
### 1. Connect Your HubSpot Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors).
2. Find **HubSpot** in the Authentication Integrations section.
3. Click **Connect** and complete the OAuth flow.
4. Grant the necessary permissions for company and contact management.
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="hubspot/create_company">
**Description:** Create a new company record in HubSpot.
**Parameters:**
- `name` (string, required): Name of the company.
- `domain` (string, optional): Company Domain Name.
- `industry` (string, optional): Industry. Must be one of the predefined values from HubSpot.
- `phone` (string, optional): Phone Number.
- `hubspot_owner_id` (string, optional): Company owner ID.
- `type` (string, optional): Type of the company. Available values: `PROSPECT`, `PARTNER`, `RESELLER`, `VENDOR`, `OTHER`.
- `city` (string, optional): City.
- `state` (string, optional): State/Region.
- `zip` (string, optional): Postal Code.
- `numberofemployees` (number, optional): Number of Employees.
- `annualrevenue` (number, optional): Annual Revenue.
- `timezone` (string, optional): Time Zone.
- `description` (string, optional): Description.
- `linkedin_company_page` (string, optional): LinkedIn Company Page URL.
- `company_email` (string, optional): Company Email.
- `first_name` (string, optional): First Name of a contact at the company.
- `last_name` (string, optional): Last Name of a contact at the company.
- `about_us` (string, optional): About Us.
- `hs_csm_sentiment` (string, optional): CSM Sentiment. Available values: `at_risk`, `neutral`, `healthy`.
- `closedate` (string, optional): Close Date.
- `hs_keywords` (string, optional): Company Keywords. Must be one of the predefined values.
- `country` (string, optional): Country/Region.
- `hs_country_code` (string, optional): Country/Region Code.
- `hs_employee_range` (string, optional): Employee range.
- `facebook_company_page` (string, optional): Facebook Company Page URL.
- `facebookfans` (number, optional): Number of Facebook Fans.
- `hs_gps_coordinates` (string, optional): GPS Coordinates.
- `hs_gps_error` (string, optional): GPS Error.
- `googleplus_page` (string, optional): Google Plus Page URL.
- `owneremail` (string, optional): HubSpot Owner Email.
- `ownername` (string, optional): HubSpot Owner Name.
- `hs_ideal_customer_profile` (string, optional): Ideal Customer Profile Tier. Available values: `tier_1`, `tier_2`, `tier_3`.
- `hs_industry_group` (string, optional): Industry group.
- `is_public` (boolean, optional): Is Public.
- `hs_last_metered_enrichment_timestamp` (string, optional): Last Metered Enrichment Timestamp.
- `hs_lead_status` (string, optional): Lead Status. Available values: `NEW`, `OPEN`, `IN_PROGRESS`, `OPEN_DEAL`, `UNQUALIFIED`, `ATTEMPTED_TO_CONTACT`, `CONNECTED`, `BAD_TIMING`.
- `lifecyclestage` (string, optional): Lifecycle Stage. Available values: `subscriber`, `lead`, `marketingqualifiedlead`, `salesqualifiedlead`, `opportunity`, `customer`, `evangelist`, `other`.
- `linkedinbio` (string, optional): LinkedIn Bio.
- `hs_linkedin_handle` (string, optional): LinkedIn handle.
- `hs_live_enrichment_deadline` (string, optional): Live enrichment deadline.
- `hs_logo_url` (string, optional): Logo URL.
- `hs_analytics_source` (string, optional): Original Traffic Source.
- `hs_pinned_engagement_id` (number, optional): Pinned Engagement ID.
- `hs_quick_context` (string, optional): Quick context.
- `hs_revenue_range` (string, optional): Revenue range.
- `hs_state_code` (string, optional): State/Region Code.
- `address` (string, optional): Street Address.
- `address2` (string, optional): Street Address 2.
- `hs_is_target_account` (boolean, optional): Target Account.
- `hs_target_account` (string, optional): Target Account Tier. Available values: `tier_1`, `tier_2`, `tier_3`.
- `hs_target_account_recommendation_snooze_time` (string, optional): Target Account Recommendation Snooze Time.
- `hs_target_account_recommendation_state` (string, optional): Target Account Recommendation State. Available values: `DISMISSED`, `NONE`, `SNOOZED`.
- `total_money_raised` (string, optional): Total Money Raised.
- `twitterbio` (string, optional): Twitter Bio.
- `twitterfollowers` (number, optional): Twitter Followers.
- `twitterhandle` (string, optional): Twitter Handle.
- `web_technologies` (string, optional): Web Technologies used. Must be one of the predefined values.
- `website` (string, optional): Website URL.
- `founded_year` (string, optional): Year Founded.
</Accordion>
<Accordion title="hubspot/create_contact">
**Description:** Create a new contact record in HubSpot.
**Parameters:**
- `email` (string, required): Email address of the contact.
- `firstname` (string, optional): First Name.
- `lastname` (string, optional): Last Name.
- `phone` (string, optional): Phone Number.
- `hubspot_owner_id` (string, optional): Contact owner.
- `lifecyclestage` (string, optional): Lifecycle Stage. Available values: `subscriber`, `lead`, `marketingqualifiedlead`, `salesqualifiedlead`, `opportunity`, `customer`, `evangelist`, `other`.
- `hs_lead_status` (string, optional): Lead Status. Available values: `NEW`, `OPEN`, `IN_PROGRESS`, `OPEN_DEAL`, `UNQUALIFIED`, `ATTEMPTED_TO_CONTACT`, `CONNECTED`, `BAD_TIMING`.
- `annualrevenue` (string, optional): Annual Revenue.
- `hs_buying_role` (string, optional): Buying Role.
- `cc_emails` (string, optional): CC Emails.
- `ch_customer_id` (string, optional): Chargify Customer ID.
- `ch_customer_reference` (string, optional): Chargify Customer Reference.
- `chargify_sites` (string, optional): Chargify Site(s).
- `city` (string, optional): City.
- `hs_facebook_ad_clicked` (boolean, optional): Clicked Facebook ad.
- `hs_linkedin_ad_clicked` (string, optional): Clicked LinkedIn Ad.
- `hs_clicked_linkedin_ad` (string, optional): Clicked on a LinkedIn Ad.
- `closedate` (string, optional): Close Date.
- `company` (string, optional): Company Name.
- `company_size` (string, optional): Company size.
- `country` (string, optional): Country/Region.
- `hs_country_region_code` (string, optional): Country/Region Code.
- `date_of_birth` (string, optional): Date of birth.
- `degree` (string, optional): Degree.
- `hs_email_customer_quarantined_reason` (string, optional): Email address quarantine reason.
- `hs_role` (string, optional): Employment Role. Must be one of the predefined values.
- `hs_seniority` (string, optional): Employment Seniority. Must be one of the predefined values.
- `hs_sub_role` (string, optional): Employment Sub Role. Must be one of the predefined values.
- `hs_employment_change_detected_date` (string, optional): Employment change detected date.
- `hs_enriched_email_bounce_detected` (boolean, optional): Enriched Email Bounce Detected.
- `hs_facebookid` (string, optional): Facebook ID.
- `hs_facebook_click_id` (string, optional): Facebook click id.
- `fax` (string, optional): Fax Number.
- `field_of_study` (string, optional): Field of study.
- `followercount` (number, optional): Follower Count.
- `gender` (string, optional): Gender.
- `hs_google_click_id` (string, optional): Google ad click id.
- `graduation_date` (string, optional): Graduation date.
- `owneremail` (string, optional): HubSpot Owner Email (legacy).
- `ownername` (string, optional): HubSpot Owner Name (legacy).
- `industry` (string, optional): Industry.
- `hs_inferred_language_codes` (string, optional): Inferred Language Codes. Must be one of the predefined values.
- `jobtitle` (string, optional): Job Title.
- `hs_job_change_detected_date` (string, optional): Job change detected date.
- `job_function` (string, optional): Job function.
- `hs_journey_stage` (string, optional): Journey Stage. Must be one of the predefined values.
- `kloutscoregeneral` (number, optional): Klout Score.
- `hs_last_metered_enrichment_timestamp` (string, optional): Last Metered Enrichment Timestamp.
- `hs_latest_source` (string, optional): Latest Traffic Source.
- `hs_latest_source_timestamp` (string, optional): Latest Traffic Source Date.
- `hs_legal_basis` (string, optional): Legal basis for processing contact's data.
- `linkedinbio` (string, optional): LinkedIn Bio.
- `linkedinconnections` (number, optional): LinkedIn Connections.
- `hs_linkedin_url` (string, optional): LinkedIn URL.
- `hs_linkedinid` (string, optional): Linkedin ID.
- `hs_live_enrichment_deadline` (string, optional): Live enrichment deadline.
- `marital_status` (string, optional): Marital Status.
- `hs_content_membership_email` (string, optional): Member email.
- `hs_content_membership_notes` (string, optional): Membership Notes.
- `message` (string, optional): Message.
- `military_status` (string, optional): Military status.
- `mobilephone` (string, optional): Mobile Phone Number.
- `numemployees` (string, optional): Number of Employees.
- `hs_analytics_source` (string, optional): Original Traffic Source.
- `photo` (string, optional): Photo.
- `hs_pinned_engagement_id` (number, optional): Pinned engagement ID.
- `zip` (string, optional): Postal Code.
- `hs_language` (string, optional): Preferred language. Must be one of the predefined values.
- `associatedcompanyid` (number, optional): Primary Associated Company ID.
- `hs_email_optout_survey_reason` (string, optional): Reason for opting out of email.
- `relationship_status` (string, optional): Relationship Status.
- `hs_returning_to_office_detected_date` (string, optional): Returning to office detected date.
- `salutation` (string, optional): Salutation.
- `school` (string, optional): School.
- `seniority` (string, optional): Seniority.
- `hs_feedback_show_nps_web_survey` (boolean, optional): Should be shown an NPS web survey.
- `start_date` (string, optional): Start date.
- `state` (string, optional): State/Region.
- `hs_state_code` (string, optional): State/Region Code.
- `hs_content_membership_status` (string, optional): Status.
- `address` (string, optional): Street Address.
- `tax_exempt` (string, optional): Tax Exempt.
- `hs_timezone` (string, optional): Time Zone. Must be one of the predefined values.
- `twitterbio` (string, optional): Twitter Bio.
- `hs_twitterid` (string, optional): Twitter ID.
- `twitterprofilephoto` (string, optional): Twitter Profile Photo.
- `twitterhandle` (string, optional): Twitter Username.
- `vat_number` (string, optional): VAT Number.
- `ch_verified` (string, optional): Verified for ACH/eCheck Payments.
- `website` (string, optional): Website URL.
- `hs_whatsapp_phone_number` (string, optional): WhatsApp Phone Number.
- `work_email` (string, optional): Work email.
- `hs_googleplusid` (string, optional): googleplus ID.
</Accordion>
<Accordion title="hubspot/create_deal">
**Description:** Create a new deal record in HubSpot.
**Parameters:**
- `dealname` (string, required): Name of the deal.
- `amount` (number, optional): The value of the deal.
- `dealstage` (string, optional): The pipeline stage of the deal.
- `pipeline` (string, optional): The pipeline the deal belongs to.
- `closedate` (string, optional): The date the deal is expected to close.
- `hubspot_owner_id` (string, optional): The owner of the deal.
- `dealtype` (string, optional): The type of deal. Available values: `newbusiness`, `existingbusiness`.
- `description` (string, optional): A description of the deal.
- `hs_priority` (string, optional): The priority of the deal. Available values: `low`, `medium`, `high`.
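A minimal sketch of a deal payload; `pipeline` and `dealstage` identifiers vary by HubSpot portal, so the values below are placeholders:
```python
# Hypothetical create_deal arguments; pipeline and dealstage IDs depend on
# the pipelines configured in your HubSpot portal.
deal_args = {
    "dealname": "Acme Corp - Annual License",
    "amount": 25000,
    "pipeline": "default",                # placeholder pipeline ID
    "dealstage": "appointmentscheduled",  # placeholder stage ID
    "closedate": "2024-03-31",
    "dealtype": "newbusiness",
    "hs_priority": "high",
}
```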
</Accordion>
<Accordion title="hubspot/create_record_engagements">
**Description:** Create a new engagement (e.g., note, email, call, meeting, task) in HubSpot.
**Parameters:**
- `engagementType` (string, required): The type of engagement. Available values: `NOTE`, `EMAIL`, `CALL`, `MEETING`, `TASK`.
- `hubspot_owner_id` (string, optional): The user the activity is assigned to.
- `hs_timestamp` (string, optional): The date and time of the activity.
- `hs_note_body` (string, optional): The body of the note. (Used for `NOTE`)
- `hs_task_subject` (string, optional): The title of the task. (Used for `TASK`)
- `hs_task_body` (string, optional): The notes for the task. (Used for `TASK`)
- `hs_task_status` (string, optional): The status of the task. (Used for `TASK`)
- `hs_meeting_title` (string, optional): The title of the meeting. (Used for `MEETING`)
- `hs_meeting_body` (string, optional): The description for the meeting. (Used for `MEETING`)
- `hs_meeting_start_time` (string, optional): The start time of the meeting. (Used for `MEETING`)
- `hs_meeting_end_time` (string, optional): The end time of the meeting. (Used for `MEETING`)
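Because the relevant fields depend on `engagementType`, here is a minimal sketch of a TASK engagement (the owner ID, timestamp, and status value are placeholders):
```python
# Hypothetical create_record_engagements arguments for a TASK engagement.
task_engagement = {
    "engagementType": "TASK",
    "hubspot_owner_id": "123456",            # placeholder owner ID
    "hs_timestamp": "2024-02-01T09:00:00Z",  # due date/time
    "hs_task_subject": "Follow up with Acme Corp",
    "hs_task_body": "Send the revised proposal and confirm next steps.",
    "hs_task_status": "NOT_STARTED",         # status values depend on your portal's settings
}
```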
</Accordion>
<Accordion title="hubspot/update_company">
**Description:** Update an existing company record in HubSpot.
**Parameters:**
- `recordId` (string, required): The ID of the company to update.
- `name` (string, optional): Name of the company.
- `domain` (string, optional): Company Domain Name.
- `industry` (string, optional): Industry.
- `phone` (string, optional): Phone Number.
- `city` (string, optional): City.
- `state` (string, optional): State/Region.
- `zip` (string, optional): Postal Code.
- `numberofemployees` (number, optional): Number of Employees.
- `annualrevenue` (number, optional): Annual Revenue.
- `description` (string, optional): Description.
</Accordion>
<Accordion title="hubspot/create_record_any">
**Description:** Create a record for a specified object type in HubSpot.
**Parameters:**
- `recordType` (string, required): The object type ID of the custom object.
- Additional parameters depend on the custom object's schema.
</Accordion>
<Accordion title="hubspot/update_contact">
**Description:** Update an existing contact record in HubSpot.
**Parameters:**
- `recordId` (string, required): The ID of the contact to update.
- `firstname` (string, optional): First Name.
- `lastname` (string, optional): Last Name.
- `email` (string, optional): Email address.
- `phone` (string, optional): Phone Number.
- `company` (string, optional): Company Name.
- `jobtitle` (string, optional): Job Title.
- `lifecyclestage` (string, optional): Lifecycle Stage.
</Accordion>
<Accordion title="hubspot/update_deal">
**Description:** Update an existing deal record in HubSpot.
**Parameters:**
- `recordId` (string, required): The ID of the deal to update.
- `dealname` (string, optional): Name of the deal.
- `amount` (number, optional): The value of the deal.
- `dealstage` (string, optional): The pipeline stage of the deal.
- `pipeline` (string, optional): The pipeline the deal belongs to.
- `closedate` (string, optional): The date the deal is expected to close.
- `dealtype` (string, optional): The type of deal.
</Accordion>
<Accordion title="hubspot/update_record_engagements">
**Description:** Update an existing engagement in HubSpot.
**Parameters:**
- `recordId` (string, required): The ID of the engagement to update.
- `hs_note_body` (string, optional): The body of the note.
- `hs_task_subject` (string, optional): The title of the task.
- `hs_task_body` (string, optional): The notes for the task.
- `hs_task_status` (string, optional): The status of the task.
</Accordion>
<Accordion title="hubspot/update_record_any">
**Description:** Update a record for a specified object type in HubSpot.
**Parameters:**
- `recordId` (string, required): The ID of the record to update.
- `recordType` (string, required): The object type ID of the custom object.
- Additional parameters depend on the custom object's schema.
</Accordion>
<Accordion title="hubspot/list_companies">
**Description:** Get a list of company records from HubSpot.
**Parameters:**
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/list_contacts">
**Description:** Get a list of contact records from HubSpot.
**Parameters:**
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/list_deals">
**Description:** Get a list of deal records from HubSpot.
**Parameters:**
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/get_records_engagements">
**Description:** Get a list of engagement records from HubSpot.
**Parameters:**
- `objectName` (string, required): The type of engagement to fetch (e.g., "notes").
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/get_records_any">
**Description:** Get a list of records for any specified object type in HubSpot.
**Parameters:**
- `recordType` (string, required): The object type ID of the custom object.
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/get_company">
**Description:** Get a single company record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the company to retrieve.
</Accordion>
<Accordion title="hubspot/get_contact">
**Description:** Get a single contact record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the contact to retrieve.
</Accordion>
<Accordion title="hubspot/get_deal">
**Description:** Get a single deal record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the deal to retrieve.
</Accordion>
<Accordion title="hubspot/get_record_by_id_engagements">
**Description:** Get a single engagement record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the engagement to retrieve.
</Accordion>
<Accordion title="hubspot/get_record_by_id_any">
**Description:** Get a single record of any specified object type by its ID.
**Parameters:**
- `recordType` (string, required): The object type ID of the custom object.
- `recordId` (string, required): The ID of the record to retrieve.
</Accordion>
<Accordion title="hubspot/search_companies">
**Description:** Search for company records in HubSpot using a filter formula.
**Parameters:**
- `filterFormula` (object, optional): A filter in disjunctive normal form (OR of ANDs).
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/search_contacts">
**Description:** Search for contact records in HubSpot using a filter formula.
**Parameters:**
- `filterFormula` (object, optional): A filter in disjunctive normal form (OR of ANDs).
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/search_deals">
**Description:** Search for deal records in HubSpot using a filter formula.
**Parameters:**
- `filterFormula` (object, optional): A filter in disjunctive normal form (OR of ANDs).
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/search_records_engagements">
**Description:** Search for engagement records in HubSpot using a filter formula.
**Parameters:**
- `engagementFilterFormula` (object, optional): A filter for engagements.
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/search_records_any">
**Description:** Search for records of any specified object type in HubSpot.
**Parameters:**
- `recordType` (string, required): The object type ID to search.
- `filterFormula` (string, optional): The filter formula to apply.
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/delete_record_companies">
**Description:** Delete a company record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the company to delete.
</Accordion>
<Accordion title="hubspot/delete_record_contacts">
**Description:** Delete a contact record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the contact to delete.
</Accordion>
<Accordion title="hubspot/delete_record_deals">
**Description:** Delete a deal record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the deal to delete.
</Accordion>
<Accordion title="hubspot/delete_record_engagements">
**Description:** Delete an engagement record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the engagement to delete.
</Accordion>
<Accordion title="hubspot/delete_record_any">
**Description:** Delete a record of any specified object type by its ID.
**Parameters:**
- `recordType` (string, required): The object type ID of the custom object.
- `recordId` (string, required): The ID of the record to delete.
</Accordion>
<Accordion title="hubspot/get_contacts_by_list_id">
**Description:** Get contacts from a specific list by its ID.
**Parameters:**
- `listId` (string, required): The ID of the list to get contacts from.
- `paginationParameters` (object, optional): Use `pageCursor` for subsequent pages.
</Accordion>
<Accordion title="hubspot/describe_action_schema">
**Description:** Get the expected schema for a given object type and operation.
**Parameters:**
- `recordType` (string, required): The object type ID (e.g., 'companies').
- `operation` (string, required): The operation type (e.g., 'CREATE_RECORD').
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic HubSpot Agent Setup
```python
from crewai import Agent, Task, Crew
# Create an agent with HubSpot capabilities
hubspot_agent = Agent(
role="CRM Manager",
goal="Manage company and contact records in HubSpot",
backstory="An AI assistant specialized in CRM management.",
apps=['hubspot'] # All HubSpot actions will be available
)
# Task to create a new company
create_company_task = Task(
description="Create a new company in HubSpot with name 'Innovate Corp' and domain 'innovatecorp.com'.",
agent=hubspot_agent,
expected_output="Company created successfully with confirmation"
)
# Run the task
crew = Crew(
agents=[hubspot_agent],
tasks=[create_company_task]
)
crew.kickoff()
```
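### Logging Engagements
Engagements (notes, emails, calls, meetings, and tasks) share a single create action, with `engagementType` selecting which type-specific fields apply. The sketch below follows the same agent/task pattern as the setup above; the meeting title and times in the task description are illustrative placeholders.
```python
from crewai import Agent, Task, Crew

# Create an agent that logs CRM activity as engagements
activity_agent = Agent(
    role="Activity Logger",
    goal="Keep HubSpot engagement records up to date",
    backstory="An AI assistant that logs notes, tasks, and meetings in the CRM.",
    apps=['hubspot']  # Includes the engagement actions
)

# Task to log a meeting engagement
log_meeting_task = Task(
    description=(
        "Create a MEETING engagement titled 'Quarterly review with Innovate Corp' "
        "with a short agenda in the meeting body, starting tomorrow at 10:00 and ending at 11:00."
    ),
    agent=activity_agent,
    expected_output="Meeting engagement created successfully in HubSpot."
)

crew = Crew(
    agents=[activity_agent],
    tasks=[log_meeting_task]
)
crew.kickoff()
```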
### Filtering Specific HubSpot Tools
```python
from crewai import Agent, Task, Crew
# Create agent with specific HubSpot actions only
contact_creator = Agent(
role="Contact Creator",
goal="Create new contacts in HubSpot",
backstory="An AI assistant that focuses on creating new contact entries in the CRM.",
apps=['hubspot/create_contact'] # Only contact creation action
)
# Task to create a contact
create_contact = Task(
description="Create a new contact for 'John Doe' with email 'john.doe@example.com'.",
agent=contact_creator,
expected_output="Contact created successfully in HubSpot."
)
crew = Crew(
agents=[contact_creator],
tasks=[create_contact]
)
crew.kickoff()
```
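### Searching and Paginating Records
The search actions accept an optional `filterFormula` in disjunctive normal form plus `paginationParameters` with a `pageCursor` for subsequent pages. A minimal sketch, restricted to the read-only search actions listed above; the company domain in the task description is a placeholder.
```python
from crewai import Agent, Task, Crew

# Agent restricted to read-only search actions
search_agent = Agent(
    role="CRM Researcher",
    goal="Find and summarize records in HubSpot",
    backstory="An AI assistant that searches the CRM and reports on what it finds.",
    apps=['hubspot/search_companies', 'hubspot/search_deals']  # Search actions only
)

# Task to search with a filter and page through results
search_task = Task(
    description=(
        "Search for companies whose domain is 'innovatecorp.com', then search for their open deals. "
        "If the results span multiple pages, follow the pagination cursor to fetch the rest."
    ),
    agent=search_agent,
    expected_output="A summary of the matching companies and their open deals."
)

crew = Crew(
    agents=[search_agent],
    tasks=[search_task]
)
crew.kickoff()
```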
### Contact Management
```python
from crewai import Agent, Task, Crew
# Create agent with HubSpot contact management capabilities
crm_manager = Agent(
role="CRM Manager",
goal="Manage and organize HubSpot contacts efficiently.",
backstory="An experienced CRM manager who maintains an organized contact database.",
apps=['hubspot'] # All HubSpot actions including contact management
)
# Task to manage contacts
contact_task = Task(
description="Create a new contact for 'Jane Smith' at 'Global Tech Inc.' with email 'jane.smith@globaltech.com'.",
agent=crm_manager,
expected_output="Contact database updated with the new contact."
)
crew = Crew(
agents=[crm_manager],
tasks=[contact_task]
)
crew.kickoff()
```
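### Working with Custom Objects
For custom object types, `hubspot/describe_action_schema` can be paired with `hubspot/create_record_any` to discover the expected fields before writing a record. A minimal sketch following the same pattern as the examples above; the object type ID `'2-12345678'` is a hypothetical placeholder, not a real schema ID.
```python
from crewai import Agent, Task, Crew

# Agent limited to schema discovery and generic record creation
custom_object_agent = Agent(
    role="Custom Object Manager",
    goal="Create records for custom HubSpot object types",
    backstory="An AI assistant that works with custom object schemas in HubSpot.",
    apps=['hubspot/describe_action_schema', 'hubspot/create_record_any']
)

# Task to inspect the schema first, then create a record that matches it
create_custom_record = Task(
    description=(
        "Describe the CREATE_RECORD schema for the custom object type '2-12345678' "  # placeholder object type ID
        "and then create a record for that object type using the fields the schema returns."
    ),
    agent=custom_object_agent,
    expected_output="Custom object record created using the discovered schema."
)

crew = Crew(
    agents=[custom_object_agent],
    tasks=[create_custom_record]
)
crew.kickoff()
```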
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with HubSpot integration setup or troubleshooting.
</Card>

View File

@@ -1,368 +0,0 @@
---
title: Jira Integration
description: "Issue tracking and project management with Jira integration for CrewAI."
icon: "bug"
mode: "wide"
---
## Overview
Enable your agents to manage issues, projects, and workflows through Jira. Create and update issues, track project progress, manage assignments, and streamline your project management with AI-powered automation.
## Prerequisites
Before using the Jira integration, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account with an active subscription
- A Jira account with appropriate project permissions
- Connected your Jira account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Jira Integration
### 1. Connect Your Jira Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Jira** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for issue and project management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="jira/create_issue">
**Description:** Create an issue in Jira.
**Parameters:**
- `summary` (string, required): Summary - A brief one-line summary of the issue. (example: "The printer stopped working").
- `project` (string, optional): Project - The project which the issue belongs to. Defaults to the user's first project if not provided. Use Connect Portal Workflow Settings to allow users to select a Project.
- `issueType` (string, optional): Issue type - Defaults to Task if not provided.
- `jiraIssueStatus` (string, optional): Status - Defaults to the project's first status if not provided.
- `assignee` (string, optional): Assignee - Defaults to the authenticated user if not provided.
- `descriptionType` (string, optional): Description Type - Select the Description Type.
- Options: `description`, `descriptionJSON`
- `description` (string, optional): Description - A detailed description of the issue. This field appears only when 'descriptionType' = 'description'.
- `additionalFields` (string, optional): Additional Fields - Specify any other fields that should be included in JSON format. Use Connect Portal Workflow Settings to allow users to select which Issue Fields to update.
```json
{
"customfield_10001": "value"
}
```
</Accordion>
<Accordion title="jira/update_issue">
**Description:** Update an issue in Jira.
**Parameters:**
- `issueKey` (string, required): Issue Key (example: "TEST-1234").
- `summary` (string, optional): Summary - A brief one-line summary of the issue. (example: "The printer stopped working").
- `issueType` (string, optional): Issue type - Use Connect Portal Workflow Settings to allow users to select an Issue Type.
- `jiraIssueStatus` (string, optional): Status - Use Connect Portal Workflow Settings to allow users to select a Status.
- `assignee` (string, optional): Assignee - Use Connect Portal Workflow Settings to allow users to select an Assignee.
- `descriptionType` (string, optional): Description Type - Select the Description Type.
- Options: `description`, `descriptionJSON`
- `description` (string, optional): Description - A detailed description of the issue. This field appears only when 'descriptionType' = 'description'.
- `additionalFields` (string, optional): Additional Fields - Specify any other fields that should be included in JSON format.
</Accordion>
<Accordion title="jira/get_issue_by_key">
**Description:** Get an issue by key in Jira.
**Parameters:**
- `issueKey` (string, required): Issue Key (example: "TEST-1234").
</Accordion>
<Accordion title="jira/filter_issues">
**Description:** Search issues in Jira using filters.
**Parameters:**
- `jqlQuery` (object, optional): A filter in disjunctive normal form - OR of AND groups of single conditions.
```json
{
"operator": "OR",
"conditions": [
{
"operator": "AND",
"conditions": [
{
"field": "status",
"operator": "$stringExactlyMatches",
"value": "Open"
}
]
}
]
}
```
Available operators: `$stringExactlyMatches`, `$stringDoesNotExactlyMatch`, `$stringIsIn`, `$stringIsNotIn`, `$stringContains`, `$stringDoesNotContain`, `$stringGreaterThan`, `$stringLessThan`
- `limit` (string, optional): Limit results - Limit the maximum number of issues to return. Defaults to 10 if left blank.
</Accordion>
<Accordion title="jira/search_by_jql">
**Description:** Search issues by JQL in Jira.
**Parameters:**
- `jqlQuery` (string, required): JQL Query (example: "project = PROJECT").
- `paginationParameters` (object, optional): Pagination parameters for paginated results.
```json
{
"pageCursor": "cursor_string"
}
```
</Accordion>
<Accordion title="jira/update_issue_any">
**Description:** Update any issue in Jira. Use `jira/describe_action_schema` to get the properties schema for this action.
**Parameters:** No fixed parameters - call `jira/describe_action_schema` first to get the expected schema.
</Accordion>
<Accordion title="jira/describe_action_schema">
**Description:** Get the expected schema for an issue type. Use this function first if no other function matches the issue type you want to operate on.
**Parameters:**
- `issueTypeId` (string, required): Issue Type ID.
- `projectKey` (string, required): Project key.
- `operation` (string, required): Operation Type value, for example CREATE_ISSUE or UPDATE_ISSUE.
</Accordion>
<Accordion title="jira/get_projects">
**Description:** Get Projects in Jira.
**Parameters:**
- `paginationParameters` (object, optional): Pagination Parameters.
```json
{
"pageCursor": "cursor_string"
}
```
</Accordion>
<Accordion title="jira/get_issue_types_by_project">
**Description:** Get Issue Types by project in Jira.
**Parameters:**
- `project` (string, required): Project key.
</Accordion>
<Accordion title="jira/get_issue_types">
**Description:** Get all Issue Types in Jira.
**Parameters:** None required.
</Accordion>
<Accordion title="jira/get_issue_status_by_project">
**Description:** Get issue statuses for a given project.
**Parameters:**
- `project` (string, required): Project key.
</Accordion>
<Accordion title="jira/get_all_assignees_by_project">
**Description:** Get assignees for a given project.
**Parameters:**
- `project` (string, required): Project key.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Jira Agent Setup
```python
from crewai import Agent, Task, Crew
# Create an agent with Jira capabilities
jira_agent = Agent(
role="Issue Manager",
goal="Manage Jira issues and track project progress efficiently",
backstory="An AI assistant specialized in issue tracking and project management.",
apps=['jira'] # All Jira actions will be available
)
# Task to create a bug report
create_bug_task = Task(
description="Create a bug report for the login functionality with high priority and assign it to the development team",
agent=jira_agent,
expected_output="Bug report created successfully with issue key"
)
# Run the task
crew = Crew(
agents=[jira_agent],
tasks=[create_bug_task]
)
crew.kickoff()
```
### Filtering Specific Jira Tools
```python
from crewai import Agent, Task, Crew
# Create agent with specific Jira actions only
issue_coordinator = Agent(
role="Issue Coordinator",
goal="Create and manage Jira issues efficiently",
backstory="An AI assistant that focuses on issue creation and management.",
apps=['jira/create_issue', 'jira/update_issue']  # Only issue creation and update actions
)
# Task to manage issue workflow
issue_workflow = Task(
description="Create a feature request issue and update the status of related issues",
agent=issue_coordinator,
expected_output="Feature request created and related issues updated"
)
crew = Crew(
agents=[issue_coordinator],
tasks=[issue_workflow]
)
crew.kickoff()
```
### Project Analysis and Reporting
```python
from crewai import Agent, Task, Crew
project_analyst = Agent(
role="Project Analyst",
goal="Analyze project data and generate insights from Jira",
backstory="An experienced project analyst who extracts insights from project management data.",
apps=['jira']
)
# Task to analyze project status
analysis_task = Task(
description="""
1. Get all projects and their issue types
2. Search for all open issues across projects
3. Analyze issue distribution by status and assignee
4. Create a summary report issue with findings
""",
agent=project_analyst,
expected_output="Project analysis completed with summary report created"
)
crew = Crew(
agents=[project_analyst],
tasks=[analysis_task]
)
crew.kickoff()
```
### Automated Issue Management
```python
from crewai import Agent, Task, Crew
automation_manager = Agent(
role="Automation Manager",
goal="Automate issue management and workflow processes",
backstory="An AI assistant that automates repetitive issue management tasks.",
apps=['jira']
)
# Task to automate issue management
automation_task = Task(
description="""
1. Search for all unassigned issues using JQL
2. Get available assignees for each project
3. Automatically assign issues based on workload and expertise
4. Update issue priorities based on age and type
5. Create weekly sprint planning issues
""",
agent=automation_manager,
expected_output="Issues automatically assigned and sprint planning issues created"
)
crew = Crew(
agents=[automation_manager],
tasks=[automation_task]
)
crew.kickoff()
```
### Advanced Schema-Based Operations
```python
from crewai import Agent, Task, Crew
schema_specialist = Agent(
role="Schema Specialist",
goal="Handle complex Jira operations using dynamic schemas",
backstory="An AI assistant that can work with dynamic Jira schemas and custom issue types.",
apps=['jira']
)
# Task using schema-based operations
schema_task = Task(
description="""
1. Get all projects and their custom issue types
2. For each custom issue type, describe the action schema
3. Create issues using the dynamic schema for complex custom fields
4. Update issues with custom field values based on business rules
""",
agent=schema_specialist,
expected_output="Custom issues created and updated using dynamic schemas"
)
crew = Crew(
agents=[schema_specialist],
tasks=[schema_task]
)
crew.kickoff()
```
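### Searching Issues with JQL
The `jira/search_by_jql` action takes a raw JQL string and optional pagination parameters. A minimal sketch following the same pattern as the examples above; the project key `PROJ` is a placeholder for one of your own projects.
```python
from crewai import Agent, Task, Crew

# Agent limited to project lookup and JQL search
jql_searcher = Agent(
    role="Issue Researcher",
    goal="Find issues in Jira using JQL queries",
    backstory="An AI assistant that answers questions about issues by querying Jira.",
    apps=['jira/get_projects', 'jira/search_by_jql']
)

# Task to run a JQL query and summarize the results
jql_task = Task(
    description=(
        'Search Jira with the JQL query "project = PROJ AND status = Open ORDER BY created DESC" '
        "and summarize the ten most recent open issues. Use the pagination cursor if more pages are available."
    ),
    agent=jql_searcher,
    expected_output="A summary of the most recent open issues in the project."
)

crew = Crew(
    agents=[jql_searcher],
    tasks=[jql_task]
)
crew.kickoff()
```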
## Troubleshooting
### Common Issues
**Permission Errors**
- Ensure your Jira account has necessary permissions for the target projects
- Verify that the OAuth connection includes required scopes for Jira API
- Check if you have create/edit permissions for issues in the specified projects
**Invalid Project or Issue Keys**
- Double-check project keys and issue keys for correct format (e.g., "PROJ-123")
- Ensure projects exist and are accessible to your account
- Verify that issue keys reference existing issues
**Issue Type and Status Issues**
- Use `jira/get_issue_types_by_project` to get valid issue types for a project
- Use `jira/get_issue_status_by_project` to get valid statuses
- Ensure issue types and statuses are available in the target project
**JQL Query Problems**
- Test JQL queries in Jira's issue search before using in API calls
- Ensure field names in JQL are spelled correctly and exist in your Jira instance
- Use proper JQL syntax for complex queries
**Custom Fields and Schema Issues**
- Use `jira/describe_action_schema` to get the correct schema for complex issue types
- Ensure custom field IDs are correct (e.g., "customfield_10001")
- Verify that custom fields are available in the target project and issue type
**Filter Formula Issues**
- Ensure filter formulas follow the correct JSON structure for disjunctive normal form
- Use valid field names that exist in your Jira configuration
- Test simple filters before building complex multi-condition queries
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Jira integration setup or troubleshooting.
</Card>

Some files were not shown because too many files have changed in this diff.