Compare commits


2 Commits

Author           SHA1        Message                                              Date
Brandon Hancock  cbc85f97bf  remove print statements                              2024-12-02 13:15:35 -05:00
Brandon Hancock  fa397d47e3  v1 of fix implemented. Need to confirm with tokens.  2024-12-02 12:22:50 -05:00
2287 changed files with 101356 additions and 390843 deletions

[Dozens of binary image files changed; the diff viewer shows only size placeholders (17 to 55 KiB), not the images themselves.]

File diff suppressed because it is too large.


@@ -65,6 +65,7 @@ body:
- '3.10'
- '3.11'
- '3.12'
- '3.13'
validations:
required: true
- type: input
@@ -112,4 +113,4 @@ body:
label: Additional context
description: Add any other context about the problem here.
validations:
required: true
required: true


@@ -1,21 +0,0 @@
name: "CodeQL Config"
paths-ignore:
# Ignore template files - these are boilerplate code that shouldn't be analyzed
- "src/crewai/cli/templates/**"
# Ignore test cassettes - these are test fixtures/recordings
- "tests/cassettes/**"
# Ignore cache and build artifacts
- ".cache/**"
# Ignore documentation build artifacts
- "docs/.cache/**"
paths:
# Include all Python source code
- "src/**"
# Include tests (but exclude cassettes)
- "tests/**"
# Configure specific queries or packs if needed
# queries:
# - uses: security-and-quality

.github/security.md (61 lines changed)

@@ -1,50 +1,19 @@
## CrewAI Security Policy
CrewAI takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organization.
If you believe you have found a security vulnerability in any CrewAI product or service, please report it to us as described below.
We are committed to protecting the confidentiality, integrity, and availability of the CrewAI ecosystem. This policy explains how to report potential vulnerabilities and what you can expect from us when you do.
## Reporting a Vulnerability
Please do not report security vulnerabilities through public GitHub issues.
To report a vulnerability, please email us at security@crewai.com.
Please include the requested information listed below so that we can triage your report more quickly
### Scope
- Type of issue (e.g. SQL injection, cross-site scripting, etc.)
- Full paths of source file(s) related to the manifestation of the issue
- The location of the affected source code (tag/branch/commit or direct URL)
- Any special configuration required to reproduce the issue
- Step-by-step instructions to reproduce the issue (please include screenshots if needed)
- Proof-of-concept or exploit code (if possible)
- Impact of the issue, including how an attacker might exploit the issue
We welcome reports for vulnerabilities that could impact:
Once we have received your report, we will respond to you at the email address you provide. If the issue is confirmed, we will release a patch as soon as possible depending on the complexity of the issue.
- CrewAI-maintained source code and repositories
- CrewAI-operated infrastructure and services
- Official CrewAI releases, packages, and distributions
Issues affecting clearly unaffiliated third-party services or user-generated content are out of scope, unless you can demonstrate a direct impact on CrewAI systems or customers.
### How to Report
- **Please do not** disclose vulnerabilities via public GitHub issues, pull requests, or social media.
- Email detailed reports to **security@crewai.com** with the subject line `Security Report`.
- If you need to share large files or sensitive artifacts, mention it in your email and we will coordinate a secure transfer method.
### What to Include
Providing comprehensive information enables us to validate the issue quickly:
- **Vulnerability overview** — a concise description and classification (e.g., RCE, privilege escalation)
- **Affected components** — repository, branch, tag, or deployed service along with relevant file paths or endpoints
- **Reproduction steps** — detailed, step-by-step instructions; include logs, screenshots, or screen recordings when helpful
- **Proof-of-concept** — exploit details or code that demonstrates the impact (if available)
- **Impact analysis** — severity assessment, potential exploitation scenarios, and any prerequisites or special configurations
### Our Commitment
- **Acknowledgement:** We aim to acknowledge your report within two business days.
- **Communication:** We will keep you informed about triage results, remediation progress, and planned release timelines.
- **Resolution:** Confirmed vulnerabilities will be prioritized based on severity and fixed as quickly as possible.
- **Recognition:** We currently do not run a bug bounty program; any rewards or recognition are issued at CrewAI's discretion.
### Coordinated Disclosure
We ask that you allow us a reasonable window to investigate and remediate confirmed issues before any public disclosure. We will coordinate publication timelines with you whenever possible.
### Safe Harbor
We will not pursue or support legal action against individuals who, in good faith:
- Follow this policy and refrain from violating any applicable laws
- Avoid privacy violations, data destruction, or service disruption
- Limit testing to systems in scope and respect rate limits and terms of service
If you are unsure whether your testing is covered, please contact us at **security@crewai.com** before proceeding.
At this time, we are not offering a bug bounty program. Any rewards will be at our discretion.


@@ -1,48 +0,0 @@
name: Build uv cache
on:
push:
branches:
- main
paths:
- "uv.lock"
- "pyproject.toml"
schedule:
- cron: "0 0 */5 * *" # Run every 5 days at midnight UTC to prevent cache expiration
workflow_dispatch:
permissions:
contents: read
jobs:
build-cache:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.10", "3.11", "3.12", "3.13"]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
- name: Install dependencies and populate cache
run: |
echo "Building global UV cache for Python ${{ matrix.python-version }}..."
uv sync --all-groups --all-extras --no-install-project
echo "Cache populated successfully"
- name: Save uv caches
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}


@@ -1,103 +0,0 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL Advanced"
on:
push:
branches: [ "main" ]
paths-ignore:
- "lib/crewai/src/crewai/cli/templates/**"
pull_request:
branches: [ "main" ]
paths-ignore:
- "lib/crewai/src/crewai/cli/templates/**"
jobs:
analyze:
name: Analyze (${{ matrix.language }})
# Runner size impacts CodeQL analysis time. To learn more, please see:
# - https://gh.io/recommended-hardware-resources-for-running-codeql
# - https://gh.io/supported-runners-and-hardware-resources
# - https://gh.io/using-larger-runners (GitHub.com only)
# Consider using larger runners or machines with greater resources for possible analysis time improvements.
runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
permissions:
# required for all workflows
security-events: write
# required to fetch internal or private CodeQL packs
packages: read
# only required for workflows in private repositories
actions: read
contents: read
strategy:
fail-fast: false
matrix:
include:
- language: actions
build-mode: none
- language: python
build-mode: none
# CodeQL supports the following values keywords for 'language': 'actions', 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'rust', 'swift'
# Use `c-cpp` to analyze code written in C, C++ or both
# Use 'java-kotlin' to analyze code written in Java, Kotlin or both
# Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
# To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
# see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
# If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
# your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
- name: Checkout repository
uses: actions/checkout@v4
# Add any setup steps before running the `github/codeql-action/init` action.
# This includes steps like installing compilers or runtimes (`actions/setup-node`
# or others). This is typically only required for manual builds.
# - name: Setup runtime (example)
# uses: actions/setup-example@v1
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
config-file: ./.github/codeql/codeql-config.yml
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# If the analyze step fails for one of the languages you are analyzing with
# "We were unable to automatically build your code", modify the matrix above
# to set the build mode to "manual" for that language. Then modify this step
# to build your code.
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
- if: matrix.build-mode == 'manual'
shell: bash
run: |
echo 'If you are using a "manual" build mode for one or more of the' \
'languages you are analyzing, replace this with the commands to build' \
'your code, for example:'
echo ' make bootstrap'
echo ' make release'
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:${{matrix.language}}"


@@ -2,68 +2,15 @@ name: Lint
on: [pull_request]
permissions:
contents: read
jobs:
lint:
runs-on: ubuntu-latest
env:
TARGET_BRANCH: ${{ github.event.pull_request.base.ref }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Fetch Target Branch
run: git fetch origin $TARGET_BRANCH --depth=1
- name: Restore global uv cache
id: cache-restore
uses: actions/cache/restore@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py3.11-${{ hashFiles('uv.lock') }}
restore-keys: |
uv-main-py3.11-
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: "3.11"
enable-cache: false
- name: Install dependencies
run: uv sync --all-groups --all-extras --no-install-project
- name: Get Changed Python Files
id: changed-files
- name: Install Requirements
run: |
merge_base=$(git merge-base origin/"$TARGET_BRANCH" HEAD)
changed_files=$(git diff --name-only --diff-filter=ACMRTUB "$merge_base" | grep '\.py$' || true)
echo "files<<EOF" >> $GITHUB_OUTPUT
echo "$changed_files" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
pip install ruff
- name: Run Ruff on Changed Files
if: ${{ steps.changed-files.outputs.files != '' }}
run: |
echo "${{ steps.changed-files.outputs.files }}" \
| tr ' ' '\n' \
| grep -v 'src/crewai/cli/templates/' \
| grep -v '/tests/' \
| xargs -I{} uv run ruff check "{}"
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py3.11-${{ hashFiles('uv.lock') }}
- name: Run Ruff Linter
run: ruff check --exclude "templates","__init__.py"
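
For local debugging, a rough shell equivalent of the changed-files lint step above (a sketch, assuming `origin/main` as the target branch and `uv` already installed):

```shell
# Diff against the merge base, keep only Python files outside templates/tests,
# and lint whatever remains. xargs -r skips the run when nothing changed.
merge_base=$(git merge-base origin/main HEAD)
git diff --name-only --diff-filter=ACMRTUB "$merge_base" \
  | grep '\.py$' \
  | grep -v 'src/crewai/cli/templates/' \
  | grep -v '/tests/' \
  | xargs -r uv run ruff check
```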

.github/workflows/mkdocs.yml (45 lines changed)

@@ -0,0 +1,45 @@
name: Deploy MkDocs
on:
release:
types: [published]
permissions:
contents: write
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v5
with:
python-version: '3.10'
- name: Calculate requirements hash
id: req-hash
run: echo "::set-output name=hash::$(sha256sum requirements-doc.txt | awk '{print $1}')"
- name: Setup cache
uses: actions/cache@v4
with:
key: mkdocs-material-${{ steps.req-hash.outputs.hash }}
path: .cache
restore-keys: |
mkdocs-material-
- name: Install Requirements
run: |
sudo apt-get update &&
sudo apt-get install pngquant &&
pip install mkdocs-material mkdocs-material-extensions pillow cairosvg
env:
GH_TOKEN: ${{ secrets.GH_TOKEN }}
- name: Build and deploy MkDocs
run: mkdocs gh-deploy --force
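
One note on the hash step above: the `::set-output` workflow command is deprecated in GitHub Actions. A minimal sketch of the same step body using the `$GITHUB_OUTPUT` environment file instead (assuming the step keeps the id `req-hash`):

```shell
# Write the requirements hash to the step's outputs via the GITHUB_OUTPUT file.
echo "hash=$(sha256sum requirements-doc.txt | awk '{print $1}')" >> "$GITHUB_OUTPUT"
```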


@@ -1,33 +0,0 @@
name: Notify Downstream
on:
push:
branches:
- main
permissions:
contents: read
jobs:
notify-downstream:
runs-on: ubuntu-latest
steps:
- name: Generate GitHub App token
id: app-token
uses: tibdex/github-app-token@v2
with:
app_id: ${{ secrets.OSS_SYNC_APP_ID }}
private_key: ${{ secrets.OSS_SYNC_APP_PRIVATE_KEY }}
- name: Notify Repo B
uses: peter-evans/repository-dispatch@v3
with:
token: ${{ steps.app-token.outputs.token }}
repository: ${{ secrets.OSS_SYNC_DOWNSTREAM_REPO }}
event-type: upstream-commit
client-payload: |
{
"commit_sha": "${{ github.sha }}"
}


@@ -1,81 +0,0 @@
name: Publish to PyPI
on:
release:
types: [ published ]
workflow_dispatch:
jobs:
build:
name: Build packages
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install uv
uses: astral-sh/setup-uv@v4
- name: Build packages
run: |
uv build --all-packages
rm dist/.gitignore
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: dist
path: dist/
publish:
name: Publish to PyPI
needs: build
runs-on: ubuntu-latest
environment:
name: pypi
url: https://pypi.org/p/crewai
permissions:
id-token: write
contents: read
steps:
- uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: "3.12"
enable-cache: false
- name: Download artifacts
uses: actions/download-artifact@v4
with:
name: dist
path: dist
- name: Publish to PyPI
env:
UV_PUBLISH_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
run: |
failed=0
for package in dist/*; do
if [[ "$package" == *"crewai_devtools"* ]]; then
echo "Skipping private package: $package"
continue
fi
echo "Publishing $package"
if ! uv publish "$package"; then
echo "Failed to publish $package"
failed=1
fi
done
if [ $failed -eq 1 ]; then
echo "Some packages failed to publish"
exit 1
fi

.github/workflows/security-checker.yml (23 lines changed)

@@ -0,0 +1,23 @@
name: Security Checker
on: [pull_request]
jobs:
security-check:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11.9"
- name: Install dependencies
run: pip install bandit
- name: Run Bandit
run: bandit -c pyproject.toml -r src/ -ll
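
To reproduce this check locally before opening a pull request, the same two commands can be run directly (a sketch, assuming `pyproject.toml` at the repo root carries a `[tool.bandit]` section):

```shell
pip install bandit
# -ll limits the report to medium-severity findings and above.
bandit -c pyproject.toml -r src/ -ll
```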


@@ -1,10 +1,5 @@
name: Mark stale issues and pull requests
permissions:
contents: write
issues: write
pull-requests: write
on:
schedule:
- cron: '10 12 * * *'
@@ -13,6 +8,9 @@ on:
jobs:
stale:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v9
with:


@@ -3,116 +3,30 @@ name: Run Tests
on: [pull_request]
permissions:
contents: read
contents: write
env:
OPENAI_API_KEY: fake-api-key
PYTHONUNBUFFERED: 1
BRAVE_API_KEY: fake-brave-key
SNOWFLAKE_USER: fake-snowflake-user
SNOWFLAKE_PASSWORD: fake-snowflake-password
SNOWFLAKE_ACCOUNT: fake-snowflake-account
SNOWFLAKE_WAREHOUSE: fake-snowflake-warehouse
SNOWFLAKE_DATABASE: fake-snowflake-database
SNOWFLAKE_SCHEMA: fake-snowflake-schema
EMBEDCHAIN_DB_URI: sqlite:///test.db
jobs:
tests:
name: tests (${{ matrix.python-version }})
runs-on: ubuntu-latest
timeout-minutes: 15
strategy:
fail-fast: true
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
group: [1, 2, 3, 4, 5, 6, 7, 8]
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Fetch all history for proper diff
- name: Restore global uv cache
id: cache-restore
uses: actions/cache/restore@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
restore-keys: |
uv-main-py${{ matrix.python-version }}-
- name: Install uv
uses: astral-sh/setup-uv@v6
uses: astral-sh/setup-uv@v3
with:
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
enable-cache: true
- name: Set up Python
run: uv python install 3.11.9
- name: Install the project
run: uv sync --all-groups --all-extras
run: uv sync --dev --all-extras
- name: Restore test durations
uses: actions/cache/restore@v4
with:
path: .test_durations_py*
key: test-durations-py${{ matrix.python-version }}
- name: Run tests (group ${{ matrix.group }} of 8)
run: |
PYTHON_VERSION_SAFE=$(echo "${{ matrix.python-version }}" | tr '.' '_')
DURATION_FILE="../../.test_durations_py${PYTHON_VERSION_SAFE}"
# Temporarily always skip cached durations to fix test splitting
# When durations don't match, pytest-split runs duplicate tests instead of splitting
echo "Using even test splitting (duration cache disabled until fix merged)"
DURATIONS_ARG=""
# Original logic (disabled temporarily):
# if [ ! -f "$DURATION_FILE" ]; then
# echo "No cached durations found, tests will be split evenly"
# DURATIONS_ARG=""
# elif git diff origin/${{ github.base_ref }}...HEAD --name-only 2>/dev/null | grep -q "^tests/.*\.py$"; then
# echo "Test files have changed, skipping cached durations to avoid mismatches"
# DURATIONS_ARG=""
# else
# echo "No test changes detected, using cached test durations for optimal splitting"
# DURATIONS_ARG="--durations-path=${DURATION_FILE}"
# fi
cd lib/crewai && uv run pytest \
--block-network \
--timeout=30 \
-vv \
--splits 8 \
--group ${{ matrix.group }} \
$DURATIONS_ARG \
--durations=10 \
-n auto \
--maxfail=3
- name: Run tool tests (group ${{ matrix.group }} of 8)
run: |
cd lib/crewai-tools && uv run pytest \
--block-network \
--timeout=30 \
-vv \
--splits 8 \
--group ${{ matrix.group }} \
--durations=10 \
-n auto \
--maxfail=3
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
- name: Run tests
run: uv run pytest tests -vv
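
The duration-based splitting that the comments above describe comes from pytest-split. A minimal local sketch of the two-phase flow (assuming the plugin is installed and using the workflow's durations file name for Python 3.11):

```shell
# Phase 1: run the full suite once and record per-test durations.
uv run pytest --store-durations --durations-path=.test_durations_py3_11

# Phase 2: split the suite into 8 groups balanced by those durations
# and run only group 3, as a single CI shard would.
uv run pytest --splits 8 --group 3 --durations-path=.test_durations_py3_11
```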


@@ -3,99 +3,24 @@ name: Run Type Checks
on: [pull_request]
permissions:
contents: read
contents: write
jobs:
type-checker-matrix:
name: type-checker (${{ matrix.python-version }})
type-checker:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: ["3.10", "3.11", "3.12", "3.13"]
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v5
with:
fetch-depth: 0 # Fetch all history for proper diff
python-version: "3.11.9"
- name: Restore global uv cache
id: cache-restore
uses: actions/cache/restore@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
restore-keys: |
uv-main-py${{ matrix.python-version }}-
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
- name: Install dependencies
run: uv sync --all-groups --all-extras
- name: Get changed Python files
id: changed-files
- name: Install Requirements
run: |
# Get the list of changed Python files compared to the base branch
echo "Fetching changed files..."
git diff --name-only --diff-filter=ACMRT origin/${{ github.base_ref }}...HEAD -- '*.py' > changed_files.txt
pip install mypy
# Filter for files in src/ directory only (excluding tests/)
grep -E "^src/" changed_files.txt > filtered_changed_files.txt || true
# Check if there are any changed files
if [ -s filtered_changed_files.txt ]; then
echo "Changed Python files in src/:"
cat filtered_changed_files.txt
echo "has_changes=true" >> $GITHUB_OUTPUT
# Convert newlines to spaces for mypy command
echo "files=$(cat filtered_changed_files.txt | tr '\n' ' ')" >> $GITHUB_OUTPUT
else
echo "No Python files changed in src/"
echo "has_changes=false" >> $GITHUB_OUTPUT
fi
- name: Run type checks on changed files
if: steps.changed-files.outputs.has_changes == 'true'
run: |
echo "Running mypy on changed files with Python ${{ matrix.python-version }}..."
uv run mypy ${{ steps.changed-files.outputs.files }}
- name: No files to check
if: steps.changed-files.outputs.has_changes == 'false'
run: echo "No Python files in src/ were modified - skipping type checks"
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
# Summary job to provide single status for branch protection
type-checker:
name: type-checker
runs-on: ubuntu-latest
needs: type-checker-matrix
if: always()
steps:
- name: Check matrix results
run: |
if [ "${{ needs.type-checker-matrix.result }}" == "success" ] || [ "${{ needs.type-checker-matrix.result }}" == "skipped" ]; then
echo "✅ All type checks passed"
else
echo "❌ Type checks failed"
exit 1
fi
- name: Run type checks
run: mypy src


@@ -1,71 +0,0 @@
name: Update Test Durations
on:
push:
branches:
- main
paths:
- 'tests/**/*.py'
workflow_dispatch:
permissions:
contents: read
jobs:
update-durations:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
env:
OPENAI_API_KEY: fake-api-key
PYTHONUNBUFFERED: 1
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Restore global uv cache
id: cache-restore
uses: actions/cache/restore@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
restore-keys: |
uv-main-py${{ matrix.python-version }}-
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
- name: Install the project
run: uv sync --all-groups --all-extras
- name: Run all tests and store durations
run: |
PYTHON_VERSION_SAFE=$(echo "${{ matrix.python-version }}" | tr '.' '_')
uv run pytest --store-durations --durations-path=.test_durations_py${PYTHON_VERSION_SAFE} -n auto
continue-on-error: true
- name: Save durations to cache
if: always()
uses: actions/cache/save@v4
with:
path: .test_durations_py*
key: test-durations-py${{ matrix.python-version }}
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}

.gitignore (7 lines changed)

@@ -2,6 +2,7 @@
.pytest_cache
__pycache__
dist/
lib/
.env
assets/*
.idea
@@ -20,9 +21,3 @@ crew_tasks_output.json
.mypy_cache
.ruff_cache
.venv
test_flow.html
crewairules.mdc
plan.md
conceptual_plan.md
build_image
chromadb-*.lock


@@ -1,26 +1,9 @@
repos:
- repo: local
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.4.4
hooks:
- id: ruff
name: ruff
entry: bash -c 'source .venv/bin/activate && uv run ruff check --config pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
args: ["--fix"]
exclude: "templates"
- id: ruff-format
name: ruff-format
entry: bash -c 'source .venv/bin/activate && uv run ruff format --config pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
- id: mypy
name: mypy
entry: bash -c 'source .venv/bin/activate && uv run mypy --config-file pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
- repo: https://github.com/astral-sh/uv-pre-commit
rev: 0.9.3
hooks:
- id: uv-lock
exclude: "templates"


@@ -1,4 +1,4 @@
Copyright (c) 2025 crewAI, Inc.
Copyright (c) 2018 The Python Packaging Authority
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

README.md (427 lines changed)

@@ -1,167 +1,50 @@
<p align="center">
<a href="https://github.com/crewAIInc/crewAI">
<img src="docs/images/crewai_logo.png" width="600px" alt="Open source Multi-AI Agent orchestration framework">
</a>
</p>
<p align="center" style="display: flex; justify-content: center; gap: 20px; align-items: center;">
<a href="https://trendshift.io/repositories/11239" target="_blank">
<img src="https://trendshift.io/api/badge/repositories/11239" alt="crewAIInc%2FcrewAI | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/>
</a>
</p>
<div align="center">
<p align="center">
<a href="https://crewai.com">Homepage</a>
·
<a href="https://docs.crewai.com">Docs</a>
·
<a href="https://app.crewai.com">Start Cloud Trial</a>
·
<a href="https://blog.crewai.com">Blog</a>
·
<a href="https://community.crewai.com">Forum</a>
</p>
![Logo of CrewAI, two people rowing on a boat](./docs/crewai_logo.png)
<p align="center">
<a href="https://github.com/crewAIInc/crewAI">
<img src="https://img.shields.io/github/stars/crewAIInc/crewAI" alt="GitHub Repo stars">
</a>
<a href="https://github.com/crewAIInc/crewAI/network/members">
<img src="https://img.shields.io/github/forks/crewAIInc/crewAI" alt="GitHub forks">
</a>
<a href="https://github.com/crewAIInc/crewAI/issues">
<img src="https://img.shields.io/github/issues/crewAIInc/crewAI" alt="GitHub issues">
</a>
<a href="https://github.com/crewAIInc/crewAI/pulls">
<img src="https://img.shields.io/github/issues-pr/crewAIInc/crewAI" alt="GitHub pull requests">
</a>
<a href="https://opensource.org/licenses/MIT">
<img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License: MIT">
</a>
</p>
# **CrewAI**
<p align="center">
<a href="https://pypi.org/project/crewai/">
<img src="https://img.shields.io/pypi/v/crewai" alt="PyPI version">
</a>
<a href="https://pypi.org/project/crewai/">
<img src="https://img.shields.io/pypi/dm/crewai" alt="PyPI downloads">
</a>
<a href="https://twitter.com/crewAIInc">
<img src="https://img.shields.io/twitter/follow/crewAIInc?style=social" alt="Twitter Follow">
</a>
</p>
🤖 **CrewAI**: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
### Fast and Flexible Multi-Agent Automation Framework
<h3>
> CrewAI is a lean, lightning-fast Python framework built entirely from scratch—completely **independent of LangChain or other agent frameworks**.
> It empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario.
[Homepage](https://www.crewai.com/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Examples](https://github.com/crewAIInc/crewAI-examples) | [Discourse](https://community.crewai.com)
- **CrewAI Crews**: Optimize for autonomy and collaborative intelligence.
- **CrewAI Flows**: Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively
</h3>
With over 100,000 developers certified through our community courses at [learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
standard for enterprise-ready AI automation.
[![GitHub Repo stars](https://img.shields.io/github/stars/joaomdmoura/crewAI)](https://github.com/crewAIInc/crewAI)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
# CrewAI AMP Suite
CrewAI AMP Suite is a comprehensive bundle tailored for organizations that require secure, scalable, and easy-to-manage agent-driven automation.
You can try one part of the suite the [Crew Control Plane for free](https://app.crewai.com)
## Crew Control Plane Key Features:
- **Tracing & Observability**: Monitor and track your AI agents and workflows in real-time, including metrics, logs, and traces.
- **Unified Control Plane**: A centralized platform for managing, monitoring, and scaling your AI agents and workflows.
- **Seamless Integrations**: Easily connect with existing enterprise systems, data sources, and cloud infrastructure.
- **Advanced Security**: Built-in robust security and compliance measures ensuring safe deployment and management.
- **Actionable Insights**: Real-time analytics and reporting to optimize performance and decision-making.
- **24/7 Support**: Dedicated enterprise support to ensure uninterrupted operation and quick resolution of issues.
- **On-premise and Cloud Deployment Options**: Deploy CrewAI AMP on-premise or in the cloud, depending on your security and compliance requirements.
CrewAI AMP is designed for enterprises seeking a powerful, reliable solution to transform complex business processes into efficient,
intelligent automations.
</div>
## Table of contents
- [Why CrewAI?](#why-crewai)
- [Getting Started](#getting-started)
- [Key Features](#key-features)
- [Understanding Flows and Crews](#understanding-flows-and-crews)
- [CrewAI vs LangGraph](#how-crewai-compares)
- [Examples](#examples)
- [Quick Tutorial](#quick-tutorial)
- [Write Job Descriptions](#write-job-descriptions)
- [Trip Planner](#trip-planner)
- [Stock Analysis](#stock-analysis)
- [Using Crews and Flows Together](#using-crews-and-flows-together)
- [Connecting Your Crew to a Model](#connecting-your-crew-to-a-model)
- [How CrewAI Compares](#how-crewai-compares)
- [Frequently Asked Questions (FAQ)](#frequently-asked-questions-faq)
- [Contribution](#contribution)
- [Telemetry](#telemetry)
- [License](#license)
## Why CrewAI?
<div align="center" style="margin-bottom: 30px;">
<img src="docs/images/asset.png" alt="CrewAI Logo" width="100%">
</div>
CrewAI unlocks the true potential of multi-agent automation, delivering the best-in-class combination of speed, flexibility, and control with either Crews of AI Agents or Flows of Events:
- **Standalone Framework**: Built from scratch, independent of LangChain or any other agent framework.
- **High Performance**: Optimized for speed and minimal resource usage, enabling faster execution.
- **Flexible Low Level Customization**: Complete freedom to customize at both high and low levels - from overall workflows and system architecture to granular agent behaviors, internal prompts, and execution logic.
- **Ideal for Every Use Case**: Proven effective for both simple tasks and highly complex, real-world, enterprise-grade scenarios.
- **Robust Community**: Backed by a rapidly growing community of over **100,000 certified** developers offering comprehensive support and resources.
CrewAI empowers developers and enterprises to confidently build intelligent automations, bridging the gap between simplicity, flexibility, and performance.
The power of AI collaboration has too much to offer.
CrewAI is designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
## Getting Started
Setup and run your first CrewAI agents by following this tutorial.
[![CrewAI Getting Started Tutorial](https://img.youtube.com/vi/-kSOTtYzgEw/hqdefault.jpg)](https://www.youtube.com/watch?v=-kSOTtYzgEw "CrewAI Getting Started Tutorial")
### Learning Resources
Learn CrewAI through our comprehensive courses:
- [Multi AI Agent Systems with CrewAI](https://www.deeplearning.ai/short-courses/multi-ai-agent-systems-with-crewai/) - Master the fundamentals of multi-agent systems
- [Practical Multi AI Agents and Advanced Use Cases](https://www.deeplearning.ai/short-courses/practical-multi-ai-agents-and-advanced-use-cases-with-crewai/) - Deep dive into advanced implementations
### Understanding Flows and Crews
CrewAI offers two powerful, complementary approaches that work seamlessly together to build sophisticated AI applications:
1. **Crews**: Teams of AI agents with true autonomy and agency, working together to accomplish complex tasks through role-based collaboration. Crews enable:
- Natural, autonomous decision-making between agents
- Dynamic task delegation and collaboration
- Specialized roles with defined goals and expertise
- Flexible problem-solving approaches
2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:
- Fine-grained control over execution paths for real-world scenarios
- Secure, consistent state management between tasks
- Clean integration of AI agents with production Python code
- Conditional branching for complex business logic
The true power of CrewAI emerges when combining Crews and Flows. This synergy allows you to:
- Build complex, production-grade applications
- Balance autonomy with precise control
- Handle sophisticated real-world scenarios
- Maintain clean, maintainable code structure
### Getting Started with Installation
To get started with CrewAI, follow these simple steps:
### 1. Installation
Ensure you have Python >=3.10 <3.14 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <=3.13 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, install CrewAI:
@@ -174,26 +57,8 @@ If you want to install the 'crewai' package along with its optional features tha
```shell
pip install 'crewai[tools]'
```
The command above installs the basic package and also adds extra components which require more dependencies to function.
### Troubleshooting Dependencies
If you encounter issues during installation or usage, here are some common solutions:
#### Common Issues
1. **ModuleNotFoundError: No module named 'tiktoken'**
- Install tiktoken explicitly: `pip install 'crewai[embeddings]'`
- If using embedchain or other tools: `pip install 'crewai[tools]'`
2. **Failed building wheel for tiktoken**
- Ensure Rust compiler is installed (see installation steps above)
- For Windows: Verify Visual C++ Build Tools are installed
- Try upgrading pip: `pip install --upgrade pip`
- If issues persist, use a pre-built wheel: `pip install tiktoken --prefer-binary`
### 2. Setting Up Your Crew with the YAML Configuration
To create a new CrewAI project, run the following CLI (Command Line Interface) command:
@@ -276,7 +141,7 @@ research_task:
description: >
Conduct a thorough research about {topic}
Make sure you find any interesting and relevant information given
the current year is 2025.
the current year is 2024.
expected_output: >
A list with 10 bullet points of the most relevant information about {topic}
agent: researcher
@@ -299,14 +164,10 @@ reporting_task:
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List
@CrewBase
class LatestAiDevelopmentCrew():
"""LatestAiDevelopment crew"""
agents: List[BaseAgent]
tasks: List[Task]
@agent
def researcher(self) -> Agent:
@@ -403,25 +264,24 @@ In addition to the sequential process, you can use the hierarchical process, whi
## Key Features
CrewAI stands apart as a lean, standalone, high-performance multi-AI Agent framework delivering simplicity, flexibility, and precise control—free from the complexity and limitations found in other agent frameworks.
- **Role-Based Agent Design**: Customize agents with specific roles, goals, and tools.
- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enhancing problem-solving efficiency.
- **Flexible Task Management**: Define tasks with customizable tools and assign them to agents dynamically.
- **Processes Driven**: Currently only supports `sequential` task execution and `hierarchical` processes, but more complex processes like consensual and autonomous are being worked on.
- **Save output as file**: Save the output of individual tasks as a file, so you can use it later.
- **Parse output as Pydantic or Json**: Parse the output of individual tasks as a Pydantic model or as a Json if you want to.
- **Works with Open Source Models**: Run your crew using Open AI or open source models refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models, even ones running locally!
- **Standalone & Lean**: Completely independent from other frameworks like LangChain, offering faster execution and lighter resource demands.
- **Flexible & Precise**: Easily orchestrate autonomous agents through intuitive [Crews](https://docs.crewai.com/concepts/crews) or precise [Flows](https://docs.crewai.com/concepts/flows), achieving perfect balance for your needs.
- **Seamless Integration**: Effortlessly combine Crews (autonomy) and Flows (precision) to create complex, real-world automations.
- **Deep Customization**: Tailor every aspect—from high-level workflows down to low-level internal prompts and agent behaviors.
- **Reliable Performance**: Consistent results across simple tasks and complex, enterprise-level automations.
- **Thriving Community**: Backed by robust documentation and over 100,000 certified developers, providing exceptional support and guidance.
Choose CrewAI to easily build powerful, adaptable, and production-ready AI automations.
![CrewAI Mind Map](./docs/crewAI-mindmap.png "CrewAI Mind Map")
## Examples
You can test different real life examples of AI crews in the [CrewAI-examples repo](https://github.com/crewAIInc/crewAI-examples?tab=readme-ov-file):
- [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/landing_page_generator)
- [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/landing_page_generator)
- [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution)
- [Trip Planner](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/trip_planner)
- [Stock Analysis](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/stock_analysis)
- [Trip Planner](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner)
- [Stock Analysis](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis)
### Quick Tutorial
@@ -429,136 +289,34 @@ You can test different real life examples of AI crews in the [CrewAI-examples re
### Write Job Descriptions
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/job-posting) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/job-posting) or watch a video below:
[![Jobs postings](https://img.youtube.com/vi/u98wEMz-9to/maxresdefault.jpg)](https://www.youtube.com/watch?v=u98wEMz-9to "Jobs postings")
### Trip Planner
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/trip_planner) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner) or watch a video below:
[![Trip Planner](https://img.youtube.com/vi/xis7rWp-hjs/maxresdefault.jpg)](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner")
### Stock Analysis
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/stock_analysis) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis) or watch a video below:
[![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/maxresdefault.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")
### Using Crews and Flows Together
CrewAI's power truly shines when combining Crews with Flows to create sophisticated automation pipelines.
CrewAI flows support logical operators like `or_` and `and_` to combine multiple conditions. This can be used with `@start`, `@listen`, or `@router` decorators to create complex triggering conditions.
- `or_`: Triggers when any of the specified conditions are met.
- `and_`: Triggers when all of the specified conditions are met.
Here's how you can orchestrate multiple Crews within a Flow:
```python
from crewai.flow.flow import Flow, listen, start, router, or_
from crewai import Crew, Agent, Task, Process
from pydantic import BaseModel
# Define structured state for precise control
class MarketState(BaseModel):
sentiment: str = "neutral"
confidence: float = 0.0
recommendations: list = []
class AdvancedAnalysisFlow(Flow[MarketState]):
@start()
def fetch_market_data(self):
# Demonstrate low-level control with structured state
self.state.sentiment = "analyzing"
return {"sector": "tech", "timeframe": "1W"} # These parameters match the task description template
@listen(fetch_market_data)
def analyze_with_crew(self, market_data):
# Show crew agency through specialized roles
analyst = Agent(
role="Senior Market Analyst",
goal="Conduct deep market analysis with expert insight",
backstory="You're a veteran analyst known for identifying subtle market patterns"
)
researcher = Agent(
role="Data Researcher",
goal="Gather and validate supporting market data",
backstory="You excel at finding and correlating multiple data sources"
)
analysis_task = Task(
description="Analyze {sector} sector data for the past {timeframe}",
expected_output="Detailed market analysis with confidence score",
agent=analyst
)
research_task = Task(
description="Find supporting data to validate the analysis",
expected_output="Corroborating evidence and potential contradictions",
agent=researcher
)
# Demonstrate crew autonomy
analysis_crew = Crew(
agents=[analyst, researcher],
tasks=[analysis_task, research_task],
process=Process.sequential,
verbose=True
)
return analysis_crew.kickoff(inputs=market_data) # Pass market_data as named inputs
@router(analyze_with_crew)
def determine_next_steps(self):
# Show flow control with conditional routing
if self.state.confidence > 0.8:
return "high_confidence"
elif self.state.confidence > 0.5:
return "medium_confidence"
return "low_confidence"
@listen("high_confidence")
def execute_strategy(self):
# Demonstrate complex decision making
strategy_crew = Crew(
agents=[
Agent(role="Strategy Expert",
goal="Develop optimal market strategy")
],
tasks=[
Task(description="Create detailed strategy based on analysis",
expected_output="Step-by-step action plan")
]
)
return strategy_crew.kickoff()
@listen(or_("medium_confidence", "low_confidence"))
def request_additional_analysis(self):
self.state.recommendations.append("Gather more data")
return "Additional analysis required"
```
This example demonstrates how to:
1. Use Python code for basic data operations
2. Create and execute Crews as steps in your workflow
3. Use Flow decorators to manage the sequence of operations
4. Implement conditional branching based on Crew results
## Connecting Your Crew to a Model
CrewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models.
Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring you agents' connections to models.
## How CrewAI Compares
**CrewAI's Advantage**: CrewAI combines autonomous agent intelligence with precise workflow control through its unique Crews and Flows architecture. The framework excels at both high-level orchestration and low-level customization, enabling complex, production-grade systems with granular control.
**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
- **LangGraph**: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.
- **Autogen**: While Autogen does good in creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
*P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb)).*
- **Autogen**: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
## Contribution
@@ -618,7 +376,7 @@ pip install dist/*.tar.gz
CrewAI uses anonymous telemetry to collect usage data with the main purpose of helping us improve the library by focusing our efforts on the most used features, integrations and tools.
It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, with the exception of the conditions mentioned. When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights while respecting user privacy. Users can disable telemetry by setting the environment variable OTEL_SDK_DISABLED to true.
It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, with the exception of the conditions mentioned. When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights while respecting user privacy. We don't offer a way to disable it now, but we will in the future.
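
Per the updated paragraph above, opting out is a single environment variable (set before running your crew):

```shell
export OTEL_SDK_DISABLED=true
```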
Data collected includes:
@@ -651,127 +409,36 @@ CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/b
## Frequently Asked Questions (FAQ)
### General
- [What exactly is CrewAI?](#q-what-exactly-is-crewai)
- [How do I install CrewAI?](#q-how-do-i-install-crewai)
- [Does CrewAI depend on LangChain?](#q-does-crewai-depend-on-langchain)
- [Is CrewAI open-source?](#q-is-crewai-open-source)
- [Does CrewAI collect data from users?](#q-does-crewai-collect-data-from-users)
### Features and Capabilities
- [Can CrewAI handle complex use cases?](#q-can-crewai-handle-complex-use-cases)
- [Can I use CrewAI with local AI models?](#q-can-i-use-crewai-with-local-ai-models)
- [What makes Crews different from Flows?](#q-what-makes-crews-different-from-flows)
- [How is CrewAI better than LangChain?](#q-how-is-crewai-better-than-langchain)
- [Does CrewAI support fine-tuning or training custom models?](#q-does-crewai-support-fine-tuning-or-training-custom-models)
### Resources and Community
- [Where can I find real-world CrewAI examples?](#q-where-can-i-find-real-world-crewai-examples)
- [How can I contribute to CrewAI?](#q-how-can-i-contribute-to-crewai)
### Enterprise Features
- [What additional features does CrewAI AMP offer?](#q-what-additional-features-does-crewai-amp-offer)
- [Is CrewAI AMP available for cloud and on-premise deployments?](#q-is-crewai-amp-available-for-cloud-and-on-premise-deployments)
- [Can I try CrewAI AMP for free?](#q-can-i-try-crewai-amp-for-free)
### Q: What exactly is CrewAI?
A: CrewAI is a standalone, lean, and fast Python framework built specifically for orchestrating autonomous AI agents. Unlike frameworks like LangChain, CrewAI does not rely on external dependencies, making it leaner, faster, and simpler.
### Q: What is CrewAI?
A: CrewAI is a cutting-edge framework for orchestrating role-playing, autonomous AI agents. It enables agents to work together seamlessly, tackling complex tasks through collaborative intelligence.
### Q: How do I install CrewAI?
A: Install CrewAI using pip:
A: You can install CrewAI using pip:
```shell
pip install crewai
```
For additional tools, use:
```shell
pip install 'crewai[tools]'
```
### Q: Does CrewAI depend on LangChain?
### Q: Can I use CrewAI with local models?
A: Yes, CrewAI supports various LLMs, including local models. You can configure your agents to use local models via tools like Ollama & LM Studio. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.
A: No. CrewAI is built entirely from the ground up, with no dependencies on LangChain or other agent frameworks. This ensures a lean, fast, and flexible experience.
### Q: What are the key features of CrewAI?
A: Key features include role-based agent design, autonomous inter-agent delegation, flexible task management, process-driven execution, output saving as files, and compatibility with both open-source and proprietary models.
### Q: Can CrewAI handle complex use cases?
A: Yes. CrewAI excels at both simple and highly complex real-world scenarios, offering deep customization options at both high and low levels, from internal prompts to sophisticated workflow orchestration.
### Q: Can I use CrewAI with local AI models?
A: Absolutely! CrewAI supports various language models, including local ones. Tools like Ollama and LM Studio allow seamless integration. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.
### Q: What makes Crews different from Flows?
A: Crews provide autonomous agent collaboration, ideal for tasks requiring flexible decision-making and dynamic interaction. Flows offer precise, event-driven control, ideal for managing detailed execution paths and secure state management. You can seamlessly combine both for maximum effectiveness.
### Q: How is CrewAI better than LangChain?
A: CrewAI provides simpler, more intuitive APIs, faster execution speeds, more reliable and consistent results, robust documentation, and an active community—addressing common criticisms and limitations associated with LangChain.
### Q: How does CrewAI compare to other AI orchestration tools?
A: CrewAI is designed with production in mind, offering flexibility similar to Autogen's conversational agents and structured processes like ChatDev, but with more adaptability for real-world applications.
### Q: Is CrewAI open-source?
A: Yes, CrewAI is open-source and welcomes contributions from the community.
A: Yes, CrewAI is open-source and actively encourages community contributions and collaboration.
### Q: Does CrewAI collect any data?
A: CrewAI uses anonymous telemetry to collect usage data for improvement purposes. No sensitive data (like prompts, task descriptions, or API calls) is collected. Users can opt-in to share more detailed data by setting `share_crew=True` on their Crews.
### Q: Where can I find real-world CrewAI examples?
A: Check out practical examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), covering use cases like trip planners, stock analysis, and job postings.
### Q: How can I contribute to CrewAI?
A: Contributions are warmly welcomed! Fork the repository, create your branch, implement your changes, and submit a pull request. See the Contribution section of the README for detailed guidelines.
### Q: What additional features does CrewAI AMP offer?
A: CrewAI AMP provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.
### Q: Is CrewAI AMP available for cloud and on-premise deployments?
A: Yes, CrewAI AMP supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.
### Q: Can I try CrewAI AMP for free?
A: Yes, you can explore part of the CrewAI AMP Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.
### Q: Does CrewAI support fine-tuning or training custom models?
A: Yes, CrewAI can integrate with custom-trained or fine-tuned models, allowing you to enhance your agents with domain-specific knowledge and accuracy.
### Q: Can CrewAI agents interact with external tools and APIs?
A: Absolutely! CrewAI agents can easily integrate with external tools, APIs, and databases, empowering them to leverage real-world data and resources.
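A minimal sketch, assuming a `SERPER_API_KEY` is set in the environment:
```python Code
from crewai import Agent
from crewai_tools import SerperDevTool

researcher = Agent(
    role="Web Researcher",
    goal="Find up-to-date information online",
    backstory="A diligent fact-finder",
    tools=[SerperDevTool()],  # any built-in or custom tool can be passed here
)
```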
### Q: Is CrewAI suitable for production environments?
A: Yes, CrewAI is explicitly designed with production-grade standards, ensuring reliability, stability, and scalability for enterprise deployments.
### Q: How scalable is CrewAI?
A: CrewAI is highly scalable, supporting simple automations and large-scale enterprise workflows involving numerous agents and complex tasks simultaneously.
### Q: Does CrewAI offer debugging and monitoring tools?
A: Yes, CrewAI AMP includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.
### Q: What programming languages does CrewAI support?
A: CrewAI is primarily Python-based but easily integrates with services and APIs written in any programming language through its flexible API integration capabilities.
### Q: Does CrewAI offer educational resources for beginners?
A: Yes, CrewAI provides extensive beginner-friendly tutorials, courses, and documentation through learn.crewai.com, supporting developers at all skill levels.
### Q: Can CrewAI automate human-in-the-loop workflows?
A: Yes, CrewAI fully supports human-in-the-loop workflows, allowing seamless collaboration between human experts and AI agents for enhanced decision-making.

crewAI.excalidraw (new file; diff too large to display)

@@ -1,18 +0,0 @@
(function() {
if (typeof window === 'undefined') return;
if (typeof window.signals !== 'undefined') return;
var script = document.createElement('script');
script.src = 'https://cdn.cr-relay.com/v1/site/883520f4-c431-44be-80e7-e123a1ee7a2b/signals.js';
script.async = true;
window.signals = Object.assign(
[],
['page', 'identify', 'form'].reduce(function (acc, method){
acc[method] = function () {
signals.push([method, arguments]);
return signals;
};
return acc;
}, {})
);
document.head.appendChild(script);
})();


@@ -2,7 +2,6 @@
title: Agents
description: Detailed guide on creating and managing agents within the CrewAI framework.
icon: robot
mode: "wide"
---
## Overview of an Agent
@@ -19,18 +18,6 @@ In the CrewAI framework, an `Agent` is an autonomous unit that can:
Think of an agent as a specialized team member with specific skills, expertise, and responsibilities. For example, a `Researcher` agent might excel at gathering and analyzing information, while a `Writer` agent might be better at creating content.
</Tip>
<Note type="info" title="Enterprise Enhancement: Visual Agent Builder">
CrewAI AMP includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time.
![Visual Agent Builder Screenshot](/images/enterprise/crew-studio-interface.png)
The Visual Agent Builder enables:
- Intuitive agent configuration with form-based interfaces
- Real-time testing and validation
- Template library with pre-configured agent types
- Easy customization of agent attributes and behaviors
</Note>
## Agent Attributes
| Attribute | Parameter | Type | Description |
@@ -44,6 +31,7 @@ The Visual Agent Builder enables:
| **Max Iterations** _(optional)_ | `max_iter` | `int` | Maximum iterations before the agent must provide its best answer. Default is 20. |
| **Max RPM** _(optional)_ | `max_rpm` | `Optional[int]` | Maximum requests per minute to avoid rate limits. |
| **Max Execution Time** _(optional)_ | `max_execution_time` | `Optional[int]` | Maximum time (in seconds) for task execution. |
| **Memory** _(optional)_ | `memory` | `bool` | Whether the agent should maintain memory of interactions. Default is True. |
| **Verbose** _(optional)_ | `verbose` | `bool` | Enable detailed execution logs for debugging. Default is False. |
| **Allow Delegation** _(optional)_ | `allow_delegation` | `bool` | Allow the agent to delegate tasks to other agents. Default is False. |
| **Step Callback** _(optional)_ | `step_callback` | `Optional[Any]` | Function called after each agent step, overrides crew callback. |
@@ -55,12 +43,7 @@ The Visual Agent Builder enables:
| **Max Retry Limit** _(optional)_ | `max_retry_limit` | `int` | Maximum number of retries when an error occurs. Default is 2. |
| **Respect Context Window** _(optional)_ | `respect_context_window` | `bool` | Keep messages under context window size by summarizing. Default is True. |
| **Code Execution Mode** _(optional)_ | `code_execution_mode` | `Literal["safe", "unsafe"]` | Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct). Default is 'safe'. |
| **Multimodal** _(optional)_ | `multimodal` | `bool` | Whether the agent supports multimodal capabilities. Default is False. |
| **Inject Date** _(optional)_ | `inject_date` | `bool` | Whether to automatically inject the current date into tasks. Default is False. |
| **Date Format** _(optional)_ | `date_format` | `str` | Format string for date when inject_date is enabled. Default is "%Y-%m-%d" (ISO format). |
| **Reasoning** _(optional)_ | `reasoning` | `bool` | Whether the agent should reflect and create a plan before executing a task. Default is False. |
| **Max Reasoning Attempts** _(optional)_ | `max_reasoning_attempts` | `Optional[int]` | Maximum number of reasoning attempts before executing the task. If None, will try until ready. |
| **Embedder** _(optional)_ | `embedder` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | `Optional[List[BaseKnowledgeSource]]` | Knowledge sources available to the agent. |
| **Use System Prompt** _(optional)_ | `use_system_prompt` | `Optional[bool]` | Whether to use system prompt (for o1 model support). Default is True. |
@@ -72,7 +55,7 @@ There are two ways to create agents in CrewAI: using **YAML configuration (recom
Using YAML configuration provides a cleaner, more maintainable way to define agents. We strongly recommend using this approach in your CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/en/installation) section, navigate to the `src/latest_ai_development/config/agents.yaml` file and modify the template to match your requirements.
<Note>
Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew:
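For example, using the `LatestAiDevelopmentCrew` class shown below (input values are hypothetical):
```python Code
# If agents.yaml contains e.g. `goal: Research the latest trends in {topic}`,
# the placeholder is filled from the inputs passed at kickoff:
LatestAiDevelopmentCrew().crew().kickoff(inputs={"topic": "AI Agents"})
```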
@@ -118,12 +101,10 @@ from crewai_tools import SerperDevTool
class LatestAiDevelopmentCrew():
"""LatestAiDevelopment crew"""
agents_config = "config/agents.yaml"
@agent
def researcher(self) -> Agent:
return Agent(
            config=self.agents_config['researcher'], # type: ignore[index]
verbose=True,
tools=[SerperDevTool()]
)
@@ -131,7 +112,7 @@ class LatestAiDevelopmentCrew():
@agent
def reporting_analyst(self) -> Agent:
return Agent(
            config=self.agents_config['reporting_analyst'], # type: ignore[index]
verbose=True
)
```
@@ -156,6 +137,7 @@ agent = Agent(
"you excel at finding patterns in complex datasets.",
llm="gpt-4", # Default: OPENAI_MODEL_NAME or "gpt-4"
function_calling_llm=None, # Optional: Separate LLM for tool calling
memory=True, # Default: True
verbose=False, # Default: False
allow_delegation=False, # Default: False
max_iter=20, # Default: 20 iterations
@@ -166,14 +148,9 @@ agent = Agent(
code_execution_mode="safe", # Default: "safe" (options: "safe", "unsafe")
respect_context_window=True, # Default: True
use_system_prompt=True, # Default: True
multimodal=False, # Default: False
inject_date=False, # Default: False
date_format="%Y-%m-%d", # Default: ISO format
reasoning=False, # Default: False
max_reasoning_attempts=None, # Default: None
tools=[SerperDevTool()], # Optional: List of tools
knowledge_sources=None, # Optional: List of knowledge sources
    embedder=None, # Optional: Custom embedder configuration
system_template=None, # Optional: Custom system prompt template
prompt_template=None, # Optional: Custom prompt template
response_template=None, # Optional: Custom response template
@@ -235,44 +212,6 @@ custom_agent = Agent(
)
```
#### Date-Aware Agent with Reasoning
```python Code
strategic_agent = Agent(
role="Market Analyst",
goal="Track market movements with precise date references and strategic planning",
backstory="Expert in time-sensitive financial analysis and strategic reporting",
inject_date=True, # Automatically inject current date into tasks
date_format="%B %d, %Y", # Format as "May 21, 2025"
reasoning=True, # Enable strategic planning
max_reasoning_attempts=2, # Limit planning iterations
verbose=True
)
```
#### Reasoning Agent
```python Code
reasoning_agent = Agent(
role="Strategic Planner",
goal="Analyze complex problems and create detailed execution plans",
backstory="Expert strategic planner who methodically breaks down complex challenges",
reasoning=True, # Enable reasoning and planning
max_reasoning_attempts=3, # Limit reasoning attempts
max_iter=30, # Allow more iterations for complex planning
verbose=True
)
```
#### Multimodal Agent
```python Code
multimodal_agent = Agent(
role="Visual Content Analyst",
goal="Analyze and process both text and visual content",
backstory="Specialized in multimodal analysis combining text and image understanding",
multimodal=True, # Enable multimodal capabilities
verbose=True
)
```
### Parameter Details
#### Critical Parameters
@@ -292,31 +231,17 @@ multimodal_agent = Agent(
#### Code Execution
- `allow_code_execution`: Must be True to run code
- `code_execution_mode`:
- `"safe"`: Uses Docker (recommended for production)
- `"unsafe"`: Direct execution (use only in trusted environments)
<Note>
This runs a default Docker image. If you want to configure the Docker image, check out the Code Interpreter Tool in the tools section.
To use a custom image, add the Code Interpreter Tool to the agent via its `tools` parameter.
</Note>
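For instance, a minimal sketch of an agent allowed to execute code (assumes Docker is available for the default "safe" mode):
```python Code
from crewai import Agent

coder = Agent(
    role="Data Cruncher",
    goal="Verify calculations by running small Python snippets",
    backstory="A careful analyst who double-checks results with code",
    allow_code_execution=True,   # must be True for any code execution
    code_execution_mode="safe",  # "safe" runs inside Docker; "unsafe" executes directly
)
```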
#### Advanced Features
- `multimodal`: Enable multimodal capabilities for processing text and visual content
- `reasoning`: Enable agent to reflect and create plans before executing tasks
- `inject_date`: Automatically inject current date into task descriptions
#### Templates
- `system_template`: Defines agent's core behavior
- `prompt_template`: Structures input format
- `response_template`: Formats agent responses
<Note>
When using custom templates, ensure that both `system_template` and `prompt_template` are defined. The `response_template` is optional but recommended for consistent output formatting.
</Note>
<Note>
When using custom templates, you can use variables like `{role}`, `{goal}`, and `{backstory}` in your templates. These will be automatically populated during execution.
</Note>
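A minimal sketch of custom templates; the prompt wording is illustrative, and the `{input}`/`{response}` placeholders are hypothetical, while `{role}`, `{goal}`, and `{backstory}` are the documented variables:
```python Code
from crewai import Agent

templated_agent = Agent(
    role="Technical Writer",
    goal="Produce concise, accurate documentation",
    backstory="Values clarity above all",
    # {role}, {goal}, and {backstory} are populated automatically at execution time
    system_template="You are {role}. {backstory} Your goal: {goal}.",
    prompt_template="{input}",        # hypothetical placeholder for the task input
    response_template="{response}",   # hypothetical placeholder for the model's reply
)
```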
## Agent Tools
@@ -363,267 +288,6 @@ analyst = Agent(
When `memory` is enabled, the agent will maintain context across multiple interactions, improving its ability to handle complex, multi-step tasks.
</Note>
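A minimal sketch:
```python Code
from crewai import Agent

support_agent = Agent(
    role="Support Specialist",
    goal="Resolve customer issues over a multi-step conversation",
    backstory="Patient and detail-oriented",
    memory=True,  # default is True; keeps interaction history available across steps
)
```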
## Context Window Management
CrewAI includes sophisticated automatic context window management to handle situations where conversations exceed the language model's token limits. This powerful feature is controlled by the `respect_context_window` parameter.
### How Context Window Management Works
When an agent's conversation history grows too large for the LLM's context window, CrewAI automatically detects this situation and can either:
1. **Automatically summarize content** (when `respect_context_window=True`)
2. **Stop execution with an error** (when `respect_context_window=False`)
### Automatic Context Handling (`respect_context_window=True`)
This is the **default and recommended setting** for most use cases. When enabled, CrewAI will:
```python Code
# Agent with automatic context management (default)
smart_agent = Agent(
role="Research Analyst",
goal="Analyze large documents and datasets",
backstory="Expert at processing extensive information",
respect_context_window=True, # 🔑 Default: auto-handle context limits
verbose=True
)
```
**What happens when context limits are exceeded:**
- ⚠️ **Warning message**: `"Context length exceeded. Summarizing content to fit the model context window."`
- 🔄 **Automatic summarization**: CrewAI intelligently summarizes the conversation history
- ✅ **Continued execution**: Task execution continues seamlessly with the summarized context
- 📝 **Preserved information**: Key information is retained while reducing token count
### Strict Context Limits (`respect_context_window=False`)
When you need precise control and prefer execution to stop rather than lose any information:
```python Code
# Agent with strict context limits
strict_agent = Agent(
role="Legal Document Reviewer",
goal="Provide precise legal analysis without information loss",
backstory="Legal expert requiring complete context for accurate analysis",
respect_context_window=False, # ❌ Stop execution on context limit
verbose=True
)
```
**What happens when context limits are exceeded:**
- ❌ **Error message**: `"Context length exceeded. Consider using smaller text or RAG tools from crewai_tools."`
- 🛑 **Execution stops**: Task execution halts immediately
- 🔧 **Manual intervention required**: You need to modify your approach
### Choosing the Right Setting
#### Use `respect_context_window=True` (Default) when:
- **Processing large documents** that might exceed context limits
- **Long-running conversations** where some summarization is acceptable
- **Research tasks** where general context is more important than exact details
- **Prototyping and development** where you want robust execution
```python Code
# Perfect for document processing
document_processor = Agent(
role="Document Analyst",
goal="Extract insights from large research papers",
backstory="Expert at analyzing extensive documentation",
respect_context_window=True, # Handle large documents gracefully
max_iter=50, # Allow more iterations for complex analysis
verbose=True
)
```
#### Use `respect_context_window=False` when:
- **Precision is critical** and information loss is unacceptable
- **Legal or medical tasks** requiring complete context
- **Code review** where missing details could introduce bugs
- **Financial analysis** where accuracy is paramount
```python Code
# Perfect for precision tasks
precision_agent = Agent(
role="Code Security Auditor",
goal="Identify security vulnerabilities in code",
backstory="Security expert requiring complete code context",
respect_context_window=False, # Prefer failure over incomplete analysis
max_retry_limit=1, # Fail fast on context issues
verbose=True
)
```
### Alternative Approaches for Large Data
When dealing with very large datasets, consider these strategies:
#### 1. Use RAG Tools
```python Code
from crewai_tools import RagTool
# Create RAG tool for large document processing
rag_tool = RagTool()
rag_agent = Agent(
role="Research Assistant",
goal="Query large knowledge bases efficiently",
backstory="Expert at using RAG tools for information retrieval",
tools=[rag_tool], # Use RAG instead of large context windows
respect_context_window=True,
verbose=True
)
```
#### 2. Use Knowledge Sources
```python Code
# Use knowledge sources instead of large prompts
knowledge_agent = Agent(
role="Knowledge Expert",
goal="Answer questions using curated knowledge",
backstory="Expert at leveraging structured knowledge sources",
knowledge_sources=[your_knowledge_sources], # Pre-processed knowledge
respect_context_window=True,
verbose=True
)
```
### Context Window Best Practices
1. **Monitor Context Usage**: Enable `verbose=True` to see context management in action
2. **Design for Efficiency**: Structure tasks to minimize context accumulation
3. **Use Appropriate Models**: Choose LLMs with context windows suitable for your tasks
4. **Test Both Settings**: Try both `True` and `False` to see which works better for your use case
5. **Combine with RAG**: Use RAG tools for very large datasets instead of relying solely on context windows
### Troubleshooting Context Issues
**If you're getting context limit errors:**
```python Code
# Quick fix: Enable automatic handling
agent.respect_context_window = True
# Better solution: Use RAG tools for large data
from crewai_tools import RagTool
agent.tools = [RagTool()]
# Alternative: Break tasks into smaller pieces
# Or use knowledge sources instead of large prompts
```
**If automatic summarization loses important information:**
```python Code
# Disable auto-summarization and use RAG instead
agent = Agent(
role="Detailed Analyst",
goal="Maintain complete information accuracy",
backstory="Expert requiring full context",
respect_context_window=False, # No summarization
tools=[RagTool()], # Use RAG for large data
verbose=True
)
```
<Note>
The context window management feature works automatically in the background. You don't need to call any special functions - just set `respect_context_window` to your preferred behavior and CrewAI handles the rest!
</Note>
## Direct Agent Interaction with `kickoff()`
Agents can be used directly without going through a task or crew workflow using the `kickoff()` method. This provides a simpler way to interact with an agent when you don't need the full crew orchestration capabilities.
### How `kickoff()` Works
The `kickoff()` method allows you to send messages directly to an agent and get a response, similar to how you would interact with an LLM but with all the agent's capabilities (tools, reasoning, etc.).
```python Code
from crewai import Agent
from crewai_tools import SerperDevTool
# Create an agent
researcher = Agent(
role="AI Technology Researcher",
goal="Research the latest AI developments",
tools=[SerperDevTool()],
verbose=True
)
# Use kickoff() to interact directly with the agent
result = researcher.kickoff("What are the latest developments in language models?")
# Access the raw response
print(result.raw)
```
### Parameters and Return Values
| Parameter | Type | Description |
| :---------------- | :---------------------------------- | :------------------------------------------------------------------------ |
| `messages` | `Union[str, List[Dict[str, str]]]` | Either a string query or a list of message dictionaries with role/content |
| `response_format` | `Optional[Type[Any]]` | Optional Pydantic model for structured output |
The method returns a `LiteAgentOutput` object with the following properties:
- `raw`: String containing the raw output text
- `pydantic`: Parsed Pydantic model (if a `response_format` was provided)
- `agent_role`: Role of the agent that produced the output
- `usage_metrics`: Token usage metrics for the execution
### Structured Output
You can get structured output by providing a Pydantic model as the `response_format`:
```python Code
from pydantic import BaseModel
from typing import List
class ResearchFindings(BaseModel):
main_points: List[str]
key_technologies: List[str]
future_predictions: str
# Get structured output
result = researcher.kickoff(
"Summarize the latest developments in AI for 2025",
response_format=ResearchFindings
)
# Access structured data
print(result.pydantic.main_points)
print(result.pydantic.future_predictions)
```
### Multiple Messages
You can also provide a conversation history as a list of message dictionaries:
```python Code
messages = [
{"role": "user", "content": "I need information about large language models"},
{"role": "assistant", "content": "I'd be happy to help with that! What specifically would you like to know?"},
{"role": "user", "content": "What are the latest developments in 2025?"}
]
result = researcher.kickoff(messages)
```
### Async Support
An asynchronous version is available via `kickoff_async()` with the same parameters:
```python Code
import asyncio
async def main():
result = await researcher.kickoff_async("What are the latest developments in AI?")
print(result.raw)
asyncio.run(main())
```
<Note>
The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler execution flow while preserving all of the agent's configuration (role, goal, backstory, tools, etc.).
</Note>
## Important Considerations and Best Practices
### Security and Code Execution
@@ -638,17 +302,11 @@ The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler e
- Adjust `max_iter` and `max_retry_limit` based on task complexity
### Memory and Context Management
- Use `memory: true` for tasks requiring historical context
- Leverage `knowledge_sources` for domain-specific information
- Configure `embedder` when using custom embedding models
- Use custom templates (`system_template`, `prompt_template`, `response_template`) for fine-grained control over agent behavior
### Advanced Features
- Enable `reasoning: true` for agents that need to plan and reflect before executing complex tasks
- Set appropriate `max_reasoning_attempts` to control planning iterations (None for unlimited attempts)
- Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks
- Customize the date format with `date_format` using standard Python datetime format codes
- Enable `multimodal: true` for agents that need to process both text and visual content
### Agent Collaboration
- Enable `allow_delegation: true` when agents need to work together
- Use `step_callback` to monitor and log agent interactions
@@ -656,13 +314,6 @@ The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler e
- Main `llm` for complex reasoning
- `function_calling_llm` for efficient tool usage
### Date Awareness and Reasoning
- Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks
- Customize the date format with `date_format` using standard Python datetime format codes
- Valid format codes include: %Y (year), %m (month), %d (day), %B (full month name), etc.
- Invalid date formats will be logged as warnings and will not modify the task description
- Enable `reasoning: true` for complex tasks that benefit from upfront planning and reflection
### Model Compatibility
- Set `use_system_prompt: false` for older models that don't support system messages
- Ensure your chosen `llm` supports the features you need (like function calling)
@@ -685,6 +336,7 @@ The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler e
- Review code sandbox settings
4. **Memory Issues**: If agent responses seem inconsistent:
- Verify memory is enabled
- Check knowledge source configuration
- Review conversation history management

docs/concepts/cli.mdx (new file)

@@ -0,0 +1,179 @@
---
title: CLI
description: Learn how to use the CrewAI CLI to interact with CrewAI.
icon: terminal
---
# CrewAI CLI Documentation
The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews & flows.
## Installation
To use the CrewAI CLI, make sure you have CrewAI installed:
```shell
pip install crewai
```
## Basic Usage
The basic structure of a CrewAI CLI command is:
```shell
crewai [COMMAND] [OPTIONS] [ARGUMENTS]
```
## Available Commands
### 1. Create
Create a new crew or pipeline.
```shell
crewai create [OPTIONS] TYPE NAME
```
- `TYPE`: Choose between "crew" or "pipeline"
- `NAME`: Name of the crew or pipeline
- `--router`: (Optional) Create a pipeline with router functionality
Example:
```shell
crewai create crew my_new_crew
crewai create pipeline my_new_pipeline --router
```
### 2. Version
Show the installed version of CrewAI.
```shell
crewai version [OPTIONS]
```
- `--tools`: (Optional) Show the installed version of CrewAI tools
Example:
```shell
crewai version
crewai version --tools
```
### 3. Train
Train the crew for a specified number of iterations.
```shell
crewai train [OPTIONS]
```
- `-n, --n_iterations INTEGER`: Number of iterations to train the crew (default: 5)
- `-f, --filename TEXT`: Path to a custom file for training (default: "trained_agents_data.pkl")
Example:
```shell
crewai train -n 10 -f my_training_data.pkl
```
### 4. Replay
Replay the crew execution from a specific task.
```shell
crewai replay [OPTIONS]
```
- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks
Example:
```shell
crewai replay -t task_123456
```
### 5. Log-tasks-outputs
Retrieve your latest crew.kickoff() task outputs.
```shell
crewai log-tasks-outputs
```
### 6. Reset-memories
Reset the crew memories (long, short, entity, latest_crew_kickoff_outputs).
```shell
crewai reset-memories [OPTIONS]
```
- `-l, --long`: Reset LONG TERM memory
- `-s, --short`: Reset SHORT TERM memory
- `-e, --entities`: Reset ENTITIES memory
- `-k, --kickoff-outputs`: Reset LATEST KICKOFF TASK OUTPUTS
- `-a, --all`: Reset ALL memories
Example:
```shell
crewai reset-memories --long --short
crewai reset-memories --all
```
### 7. Test
Test the crew and evaluate the results.
```shell
crewai test [OPTIONS]
```
- `-n, --n_iterations INTEGER`: Number of iterations to test the crew (default: 3)
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")
Example:
```shell
crewai test -n 5 -m gpt-3.5-turbo
```
### 8. Run
Run the crew.
```shell
crewai run
```
<Note>
Make sure to run these commands from the directory where your CrewAI project is set up.
Some commands may require additional configuration or setup within your project structure.
</Note>
### 9. API Keys
When you run the `crewai create crew` command, the CLI first shows you the top 5 most common LLM providers and asks you to select one.
Once you've selected an LLM provider, you will be prompted for API keys.
#### Initial API key providers
The CLI will initially prompt for API keys for the following services:
* OpenAI
* Groq
* Anthropic
* Google Gemini
When you select a provider, the CLI will prompt you to enter your API key.
#### Other Options
If you select option 6, you can choose from a list of LiteLLM-supported providers.
When you select a provider, the CLI will prompt you to enter the key name and the API key.
See the following link for each provider's key name:
* [LiteLLM Providers](https://docs.litellm.ai/docs/providers)


@@ -0,0 +1,52 @@
---
title: Collaboration
description: Exploring the dynamics of agent collaboration within the CrewAI framework, focusing on the newly integrated features for enhanced functionality.
icon: screen-users
---
## Collaboration Fundamentals
Collaboration in CrewAI is fundamental, enabling agents to combine their skills, share information, and assist each other in task execution, embodying a truly cooperative ecosystem.
- **Information Sharing**: Ensures all agents are well-informed and can contribute effectively by sharing data and findings.
- **Task Assistance**: Allows agents to seek help from peers with the required expertise for specific tasks.
- **Resource Allocation**: Optimizes task execution through the efficient distribution and sharing of resources among agents.
## Enhanced Attributes for Improved Collaboration
The `Crew` class has been enriched with several attributes to support advanced functionalities:
| Feature | Description |
|:-------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **Language Model Management** (`manager_llm`, `function_calling_llm`) | Manages language models for executing tasks and tools. `manager_llm` is required for hierarchical processes, while `function_calling_llm` is optional with a default value for streamlined interactions. |
| **Custom Manager Agent** (`manager_agent`) | Specifies a custom agent as the manager, replacing the default CrewAI manager. |
| **Process Flow** (`process`) | Defines execution logic (e.g., sequential, hierarchical) for task distribution. |
| **Verbose Logging** (`verbose`) | Provides detailed logging for monitoring and debugging. Accepts integer and boolean values to control verbosity level. |
| **Rate Limiting** (`max_rpm`) | Limits requests per minute to optimize resource usage. Setting guidelines depend on task complexity and load. |
| **Internationalization / Customization** (`language`, `prompt_file`) | Supports prompt customization for global usability. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json) |
| **Execution and Output Handling** (`full_output`) | Controls output granularity, distinguishing between full and final outputs. |
| **Callback and Telemetry** (`step_callback`, `task_callback`) | Enables step-wise and task-level execution monitoring and telemetry for performance analytics. |
| **Crew Sharing** (`share_crew`) | Allows sharing crew data with CrewAI for model improvement. Privacy implications and benefits should be considered. |
| **Usage Metrics** (`usage_metrics`) | Logs all LLM usage metrics during task execution for performance insights. |
| **Memory Usage** (`memory`) | Enables memory for storing execution history, aiding in agent learning and task efficiency. |
| **Embedder Configuration** (`embedder`) | Configures the embedder for language understanding and generation, with support for provider customization. |
| **Cache Management** (`cache`) | Specifies whether to cache tool execution results, enhancing performance. |
| **Output Logging** (`output_log_file`) | Defines the file path for logging crew execution output. |
| **Planning Mode** (`planning`) | Enables action planning before task execution. Set `planning=True` to activate. |
| **Replay Feature** (`replay`) | Provides CLI for listing tasks from the last run and replaying from specific tasks, aiding in task management and troubleshooting. |
## Delegation (Dividing to Conquer)
Delegation enhances functionality by allowing agents to intelligently assign tasks or seek help, thereby amplifying the crew's overall capability.
## Implementing Collaboration and Delegation
Setting up a crew involves defining the roles and capabilities of each agent. CrewAI seamlessly manages their interactions, ensuring efficient collaboration and delegation, with enhanced customization and monitoring features to adapt to various operational needs.
## Example Scenario
Consider a crew with a researcher agent tasked with data gathering and a writer agent responsible for compiling reports. The integration of advanced language model management and process flow attributes allows for more sophisticated interactions, such as the writer delegating complex research tasks to the researcher or querying specific information, thereby facilitating a seamless workflow.
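A minimal sketch of this scenario, assuming the standard Agent/Task/Crew APIs (role names and task text are illustrative):
```python Code
from crewai import Agent, Crew, Process, Task

researcher = Agent(
    role="Researcher",
    goal="Gather accurate, current market data",
    backstory="A meticulous analyst",
)
writer = Agent(
    role="Writer",
    goal="Compile findings into a clear report",
    backstory="A concise technical writer",
    allow_delegation=True,  # lets the writer hand research questions to the researcher
)

report = Task(
    description="Write a short report on current market trends.",
    expected_output="A one-page trends report.",
    agent=writer,
)

crew = Crew(agents=[researcher, writer], tasks=[report], process=Process.sequential)
result = crew.kickoff()
```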
## Conclusion
The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.


@@ -2,10 +2,9 @@
title: Crews
description: Understanding and utilizing crews in the crewAI framework with comprehensive attributes and functionalities.
icon: people-group
mode: "wide"
---
## Overview
A crew in crewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow.
@@ -21,187 +20,27 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** _(optional)_ | `config` | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Memory Config** _(optional)_ | `memory_config` | Configuration for the memory provider to be used by the crew. |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
| **Language** _(optional)_ | `language` | Language used for the crew, defaults to English. |
| **Language File** _(optional)_ | `language_file` | Path to the language file to be used for the crew. |
| **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. Defaults to `False`. |
| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** _(optional)_ | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| **Output Log File** _(optional)_ | `output_log_file` | Set to True to save logs as logs.txt in the current directory or provide a file path. Logs will be in JSON format if the filename ends in .json, otherwise .txt. Defaults to `None`. |
| **Manager Agent** _(optional)_ | `manager_agent` | `manager` sets a custom agent that will be used as a manager. |
| **Manager Callbacks** _(optional)_ | `manager_callbacks` | `manager_callbacks` takes a list of callback handlers to be executed by the manager agent when a hierarchical process is used. |
| **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | Knowledge sources available at the crew level, accessible to all the agents. |
<Tip>
**Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
</Tip>
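A minimal sketch of that override behavior (values are illustrative):
```python Code
from crewai import Agent, Crew, Task

fast_agent = Agent(
    role="Scraper",
    goal="Collect pages quickly",
    backstory="Optimized for throughput",
    max_rpm=60,  # agent-level setting
)
task = Task(
    description="Collect the ten most recent articles.",
    expected_output="A list of article titles.",
    agent=fast_agent,
)

# The crew-level limit takes precedence: requests are capped at 10 per minute
crew = Crew(agents=[fast_agent], tasks=[task], max_rpm=10)
```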
## Creating Crews
There are two ways to create crews in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**.
### YAML Configuration (Recommended)
Using YAML configuration provides a cleaner, more maintainable way to define crews and is consistent with how agents and tasks are defined in CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/en/installation) section, you can define your crew in a class that inherits from `CrewBase` and uses decorators to define agents, tasks, and the crew itself.
#### Example Crew Class with Decorators
```python code
from crewai import Agent, Crew, Task, Process
from crewai.project import CrewBase, agent, task, crew, before_kickoff, after_kickoff
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List
@CrewBase
class YourCrewName:
"""Description of your crew"""
agents: List[BaseAgent]
tasks: List[Task]
# Paths to your YAML configuration files
# To see an example agent and task defined in YAML, checkout the following:
# - Task: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
# - Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
@before_kickoff
def prepare_inputs(self, inputs):
# Modify inputs before the crew starts
inputs['additional_data'] = "Some extra information"
return inputs
@after_kickoff
def process_output(self, output):
# Modify output after the crew finishes
output.raw += "\nProcessed after kickoff."
return output
@agent
def agent_one(self) -> Agent:
return Agent(
config=self.agents_config['agent_one'], # type: ignore[index]
verbose=True
)
@agent
def agent_two(self) -> Agent:
return Agent(
config=self.agents_config['agent_two'], # type: ignore[index]
verbose=True
)
@task
def task_one(self) -> Task:
return Task(
config=self.tasks_config['task_one'] # type: ignore[index]
)
@task
def task_two(self) -> Task:
return Task(
config=self.tasks_config['task_two'] # type: ignore[index]
)
@crew
def crew(self) -> Crew:
return Crew(
agents=self.agents, # Automatically collected by the @agent decorator
tasks=self.tasks, # Automatically collected by the @task decorator.
process=Process.sequential,
verbose=True,
)
```
How to run the above code:
```python code
YourCrewName().crew().kickoff(inputs={"any": "input here"})
```
<Note>
Tasks will be executed in the order they are defined.
</Note>
The `CrewBase` class, along with these decorators, automates the collection of agents and tasks, reducing the need for manual management.
#### Decorators overview from `annotations.py`
CrewAI provides several decorators in the `annotations.py` file that are used to mark methods within your crew class for special handling:
- `@CrewBase`: Marks the class as a crew base class.
- `@agent`: Denotes a method that returns an `Agent` object.
- `@task`: Denotes a method that returns a `Task` object.
- `@crew`: Denotes the method that returns the `Crew` object.
- `@before_kickoff`: (Optional) Marks a method to be executed before the crew starts.
- `@after_kickoff`: (Optional) Marks a method to be executed after the crew finishes.
These decorators help in organizing your crew's structure and automatically collecting agents and tasks without manually listing them.
### Direct Code Definition (Alternative)
Alternatively, you can define the crew directly in code without using YAML configuration files.
```python code
from crewai import Agent, Crew, Task, Process
from crewai_tools import YourCustomTool
class YourCrewName:
def agent_one(self) -> Agent:
return Agent(
role="Data Analyst",
goal="Analyze data trends in the market",
backstory="An experienced data analyst with a background in economics",
verbose=True,
tools=[YourCustomTool()]
)
def agent_two(self) -> Agent:
return Agent(
role="Market Researcher",
goal="Gather information on market dynamics",
backstory="A diligent researcher with a keen eye for detail",
verbose=True
)
def task_one(self) -> Task:
return Task(
description="Collect recent market data and identify trends.",
expected_output="A report summarizing key trends in the market.",
agent=self.agent_one()
)
def task_two(self) -> Task:
return Task(
description="Research factors affecting market dynamics.",
expected_output="An analysis of factors influencing the market.",
agent=self.agent_two()
)
def crew(self) -> Crew:
return Crew(
agents=[self.agent_one(), self.agent_two()],
tasks=[self.task_one(), self.task_two()],
process=Process.sequential,
verbose=True
)
```
How to run the above code:
```python code
YourCrewName().crew().kickoff(inputs={})
```
In this example:
- Agents and tasks are defined directly within the class without decorators.
- We manually create and manage the list of agents and tasks.
- This approach provides more control but can be less maintainable for larger projects.
## Crew Output
@@ -253,23 +92,6 @@ print(f"Tasks Output: {crew_output.tasks_output}")
print(f"Token Usage: {crew_output.token_usage}")
```
## Accessing Crew Logs
You can see a real-time log of the crew execution by setting `output_log_file` to `True` (bool) or to a file name (str). Events can be logged as either `.txt` or `.json`: if the file name ends in `.json`, logs are saved in JSON format, otherwise as plain text.
When set to `True`, logs are saved as `logs.txt` in the current directory.
When `output_log_file` is set to `False` or `None`, no logs are saved.
```python Code
# Save crew logs
crew = Crew(output_log_file=True)              # Logs will be saved as logs.txt
crew = Crew(output_log_file="file_name")       # Logs will be saved as file_name.txt
crew = Crew(output_log_file="file_name.txt")   # Logs will be saved as file_name.txt
crew = Crew(output_log_file="file_name.json")  # Logs will be saved as file_name.json
```
## Memory Utilization
Crews can utilize memory (short-term, long-term, and entity memory) to enhance their execution and learning over time. This feature allows crews to store and recall execution memories, aiding in decision-making and task execution strategies.
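A minimal sketch, assuming the default OpenAI embedder (agent and task details are illustrative):
```python Code
from crewai import Agent, Crew, Task

analyst = Agent(role="Analyst", goal="Track findings over time", backstory="Methodical")
review = Task(description="Summarize today's findings.", expected_output="A summary.", agent=analyst)

crew = Crew(
    agents=[analyst],
    tasks=[review],
    memory=True,                      # enables short-term, long-term, and entity memory
    embedder={"provider": "openai"},  # default embedder; shown here for explicitness
)
```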
@@ -309,9 +131,9 @@ print(result)
Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.
- `kickoff()`: Starts the execution process according to the defined process flow.
- `kickoff_for_each()`: Executes tasks sequentially for each provided input event or item in the collection.
- `kickoff_async()`: Initiates the workflow asynchronously.
- `kickoff_for_each_async()`: Executes tasks concurrently for each provided input event or item, leveraging asynchronous processing.
```python Code
# Start the crew's task execution
@@ -326,12 +148,12 @@ for result in results:
# Example of using kickoff_async
inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.kickoff_async(inputs=inputs)
print(async_result)
# Example of using kickoff_for_each_async
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
for async_result in async_results:
print(async_result)
```
@@ -366,4 +188,4 @@ Then, to replay from a specific task, use:
crewai replay -t <task_id>
```
These commands let you replay from your latest kickoff tasks, still retaining context from previously executed tasks.


@@ -2,10 +2,9 @@
title: Flows
description: Learn how to create and manage AI workflows using CrewAI Flows.
icon: arrow-progress
mode: "wide"
---
## Overview
CrewAI Flows is a powerful feature designed to streamline the creation and management of AI workflows. Flows allow developers to combine and coordinate coding tasks and Crews efficiently, providing a robust framework for building sophisticated AI automations.
@@ -19,92 +18,74 @@ Flows allow you to create structured, event-driven workflows. They provide a sea
4. **Flexible Control Flow**: Implement conditional logic, loops, and branching within your workflows.
5. **Input Flexibility**: Flows can accept inputs to initialize or update their state, with different handling for structured and unstructured state management.
## Getting Started
Let's create a simple Flow where you will use OpenAI to generate a random city in one task and then use that city to generate a fun fact in another task.
```python Code
from crewai.flow.flow import Flow, listen, start
from dotenv import load_dotenv
from litellm import completion

load_dotenv()  # Load your OPENAI_API_KEY from a .env file

class ExampleFlow(Flow):
    model = "gpt-4o-mini"

    @start()
    def generate_city(self):
        print("Starting flow")
        # Each flow state automatically gets a unique ID
        print(f"Flow State ID: {self.state['id']}")

        response = completion(
            model=self.model,
            messages=[
                {
                    "role": "user",
                    "content": "Return the name of a random city in the world.",
                },
            ],
        )
        random_city = response["choices"][0]["message"]["content"]
        # Store the city in our state
        self.state["city"] = random_city
        print(f"Random City: {random_city}")
        return random_city

    @listen(generate_city)
    def generate_fun_fact(self, random_city):
        response = completion(
            model=self.model,
            messages=[
                {
                    "role": "user",
                    "content": f"Tell me a fun fact about {random_city}",
                },
            ],
        )
        fun_fact = response["choices"][0]["message"]["content"]
        # Store the fun fact in our state
        self.state["fun_fact"] = fun_fact
        return fun_fact

flow = ExampleFlow()
flow.plot()
result = flow.kickoff()

print(f"Generated fun fact: {result}")
```
![Flow Visual image](/images/crewai-flow-1.png)
In the above example, we have created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for the output of the `generate_city` task.
Each Flow instance automatically receives a unique identifier (UUID) in its state, which helps track and manage flow executions. The state can also store additional data (like the generated city and fun fact) that persists throughout the flow's execution.
When you run the Flow, it will:
1. Generate a unique ID for the flow state
2. Generate a random city and store it in the state
3. Generate a fun fact about that city and store it in the state
4. Print the results to the console
The state's unique ID and stored data can be useful for tracking flow executions and maintaining context between tasks.
**Note:** Ensure you have set up your `.env` file to store your `OPENAI_API_KEY`. This key is necessary for authenticating requests to the OpenAI API.
### Passing Inputs to Flows
Flows can accept inputs to initialize or update their state before execution. The way inputs are handled depends on whether the flow uses structured or unstructured state management.
#### Structured State Management
In structured state management, the flow's state is defined using a Pydantic `BaseModel`. Inputs must match the model's schema, and any updates will overwrite the default values.
```python Code
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel

class ExampleState(BaseModel):
    counter: int = 0
    message: str = ""

class StructuredExampleFlow(Flow[ExampleState]):
    @start()
    def first_method(self):
        # Implementation
        pass

flow = StructuredExampleFlow()
flow.kickoff(inputs={"counter": 10})
```
In this example, the `counter` is initialized to `10`, while `message` retains its default value.
#### Unstructured State Management
In unstructured state management, the flow's state is a dictionary. You can pass any dictionary to update the state.
```python Code
from crewai.flow.flow import Flow, listen, start

class UnstructuredExampleFlow(Flow):
    @start()
    def first_method(self):
        # Implementation
        pass

flow = UnstructuredExampleFlow()
flow.kickoff(inputs={"counter": 5, "message": "Initial message"})
```
Here, both `counter` and `message` are updated based on the provided inputs.
**Note:** Ensure that inputs for structured state management adhere to the defined schema to avoid validation errors.
### @start()
The `@start()` decorator marks entry points for a Flow. You can:
- Declare multiple unconditional starts: `@start()`
- Gate a start on a prior method or router label: `@start("method_or_label")`
- Provide a callable condition to control when a start should fire
All satisfied `@start()` methods will execute (often in parallel) when the Flow begins or resumes.
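A minimal sketch of the first case, multiple unconditional starts (class and method names are illustrative):
```python Code
from crewai.flow.flow import Flow, start

class MultiStartFlow(Flow):
    @start()
    def fetch_data(self):
        # Fires when the flow begins
        print("fetching data")

    @start()
    def warm_cache(self):
        # Also fires when the flow begins; both @start() methods run
        print("warming cache")

MultiStartFlow().kickoff()
```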
### @listen()
@@ -116,14 +97,14 @@ The `@listen()` decorator can be used in several ways:
1. **Listening to a Method by Name**: You can pass the name of the method you want to listen to as a string. When that method completes, the listener method will be triggered.
```python Code
@listen("generate_city")
def generate_fun_fact(self, random_city):
    # Implementation
    pass
```
2. **Listening to a Method Directly**: You can pass the method itself. When that method completes, the listener method will be triggered.
```python Code
@listen(generate_city)
def generate_fun_fact(self, random_city):
    # Implementation
    pass
```
@@ -140,7 +121,7 @@ When you run a Flow, the final output is determined by the last method that comp
Here's how you can access the final output:
<CodeGroup>
```python Code
from crewai.flow.flow import Flow, listen, start
class OutputExampleFlow(Flow):
@@ -152,25 +133,22 @@ class OutputExampleFlow(Flow):
    def second_method(self, first_output):
        return f"Second method received: {first_output}"

flow = OutputExampleFlow()
flow.plot("my_flow_plot")
final_output = flow.kickoff()
print("---- Final Output ----")
print(final_output)
```
```text Output
---- Final Output ----
Second method received: Output from first_method
```
</CodeGroup>
![Flow Visual image](/images/crewai-flow-2.png)
In this example, the `second_method` is the last method to complete, so its output will be the final output of the Flow.
The `kickoff()` method will return the final output, which is then printed to the console. The `plot()` method will generate the HTML file, which will help you understand the flow.
#### Accessing and Updating State
@@ -180,7 +158,7 @@ Here's an example of how to update and access the state:
<CodeGroup>
```python Code
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
@@ -202,14 +180,13 @@ class StateExampleFlow(Flow[ExampleState]):
        return self.state.message
flow = StateExampleFlow()
flow.plot("my_flow_plot")
final_output = flow.kickoff()
print(f"Final Output: {final_output}")
print("Final State:")
print(flow.state)
```
```text Output
Final Output: Hello from first_method - updated by second_method
Final State:
counter=2 message='Hello from first_method - updated by second_method'
@@ -217,8 +194,6 @@ counter=2 message='Hello from first_method - updated by second_method'
</CodeGroup>
![Flow Visual image](/images/crewai-flow-2.png)
In this example, the state is updated by both `first_method` and `second_method`.
After the Flow has run, you can access the final state to see the updates made by these methods.
@@ -234,42 +209,33 @@ allowing developers to choose the approach that best fits their application's ne
In unstructured state management, all state is stored in the `state` attribute of the `Flow` class.
This approach offers flexibility, enabling developers to add or modify state attributes on the fly without defining a strict schema.
Even with unstructured states, CrewAI Flows automatically generates and maintains a unique identifier (UUID) for each state instance.
```python Code
from crewai.flow.flow import Flow, listen, start

class UnstructuredExampleFlow(Flow):
    @start()
    def first_method(self):
        # The state automatically includes an 'id' field
        print(f"State ID: {self.state['id']}")
        self.state['counter'] = 0
        self.state['message'] = "Hello from unstructured flow"

    @listen(first_method)
    def second_method(self):
        self.state['counter'] += 1
        self.state['message'] += " - updated"

    @listen(second_method)
    def third_method(self):
        self.state['counter'] += 1
        self.state['message'] += " - updated again"
        print(f"State after third_method: {self.state}")

flow = UnstructuredExampleFlow()
flow.plot("my_flow_plot")
flow.kickoff()
```
![Flow Visual image](/images/crewai-flow-3.png)
**Note:** The `id` field is automatically generated and preserved throughout the flow's execution. You don't need to manage or set it manually, and it will be maintained even when updating the state with new data.
**Key Points:**
- **Flexibility:** You can dynamically add attributes to `self.state` without predefined constraints.
@@ -280,25 +246,18 @@ flow.kickoff()
Structured state management leverages predefined schemas to ensure consistency and type safety across the workflow.
By using models like Pydantic's `BaseModel`, developers can define the exact shape of the state, enabling better validation and auto-completion in development environments.
Each state in CrewAI Flows automatically receives a unique identifier (UUID) to help track and manage state instances. This ID is automatically generated and managed by the Flow system.
```python Code
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel

class ExampleState(BaseModel):
    # Note: 'id' field is automatically added to all states
    counter: int = 0
    message: str = ""

class StructuredExampleFlow(Flow[ExampleState]):
    @start()
    def first_method(self):
        # Access the auto-generated ID if needed
        print(f"State ID: {self.state.id}")
        self.state.message = "Hello from structured flow"

    @listen(first_method)
@@ -313,13 +272,10 @@ class StructuredExampleFlow(Flow[ExampleState]):
print(f"State after third_method: {self.state}")
flow = StructuredExampleFlow()
flow.kickoff()
```
![Flow Visual image](/images/crewai-flow-3.png)
**Key Points:**
- **Defined Schema:** `ExampleState` clearly outlines the state structure, enhancing code readability and maintainability.
- **Type Safety:** Pydantic validation ensures each state attribute adheres to its declared type, reducing runtime errors.
- **Auto-Completion:** IDEs can provide better auto-completion and error checking based on the defined state model.
By providing both unstructured and structured state management options, CrewAI Flows empowers developers to build AI workflows that are both flexible and robust, catering to a wide range of application requirements.
## Flow Persistence
The `@persist` decorator enables automatic state persistence in CrewAI Flows, allowing you to maintain flow state across restarts or different workflow executions. This decorator can be applied at either the class level or the method level, providing flexibility in how you manage state persistence.
### Class-Level Persistence
When applied at the class level, the @persist decorator automatically persists all flow method states:
```python
from pydantic import BaseModel

from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist

class MyState(BaseModel):
    counter: int = 0

@persist  # Using SQLiteFlowPersistence by default
class MyFlow(Flow[MyState]):

    @start()
    def initialize_flow(self):
        # This method will automatically have its state persisted
        self.state.counter = 1
        print("Initialized flow. State ID:", self.state.id)

    @listen(initialize_flow)
    def next_step(self):
        # The state (including self.state.id) is automatically reloaded
        self.state.counter += 1
        print("Flow state is persisted. Counter:", self.state.counter)
```
### Method-Level Persistence
For more granular control, you can apply @persist to specific methods:
```python
from crewai.flow.flow import Flow, start
from crewai.flow.persistence import persist

class AnotherFlow(Flow[dict]):

    @persist  # Persists only this method's state
    @start()
    def begin(self):
        if "runs" not in self.state:
            self.state["runs"] = 0
        self.state["runs"] += 1
        print("Method-level persisted runs:", self.state["runs"])
```
### How It Works
1. **Unique State Identification**
- Each flow state automatically receives a unique UUID
- The ID is preserved across state updates and method calls
- Supports both structured (Pydantic BaseModel) and unstructured (dictionary) states
2. **Default SQLite Backend**
- SQLiteFlowPersistence is the default storage backend
- States are automatically saved to a local SQLite database
- Robust error handling ensures clear messages if database operations fail
3. **Error Handling**
- Comprehensive error messages for database operations
- Automatic state validation during save and load
- Clear feedback when persistence operations encounter issues
### Important Considerations
- **State Types**: Both structured (Pydantic BaseModel) and unstructured (dictionary) states are supported
- **Automatic ID**: The `id` field is automatically added if not present
- **State Recovery**: Failed or restarted flows can automatically reload their previous state
- **Custom Implementation**: You can provide your own FlowPersistence implementation for specialized storage needs, as shown in the sketch below
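To illustrate that last point, here is a minimal sketch of a custom backend that keeps state in a process-local dictionary instead of SQLite. The import path and the `init_db`/`save_state`/`load_state` method names are assumptions modeled on the default SQLite backend; verify them against your installed crewai version before relying on this.

```python
# Hypothetical custom backend; method names and import paths are assumptions.
from typing import Any, Dict, Optional, Union

from pydantic import BaseModel

from crewai.flow.persistence import FlowPersistence

class InMemoryFlowPersistence(FlowPersistence):
    """Keeps flow states in a process-local dict instead of SQLite."""

    def __init__(self) -> None:
        self._states: Dict[str, dict] = {}

    def init_db(self) -> None:
        # Nothing to provision for an in-memory store
        self._states = {}

    def save_state(
        self,
        flow_uuid: str,
        method_name: str,
        state_data: Union[Dict[str, Any], BaseModel],
    ) -> None:
        # Normalize Pydantic models and plain dicts into one storable shape
        data = state_data.model_dump() if isinstance(state_data, BaseModel) else dict(state_data)
        self._states[flow_uuid] = data

    def load_state(self, flow_uuid: str) -> Optional[dict]:
        return self._states.get(flow_uuid)
```

You would then hand an instance to the decorator, e.g. `@persist(InMemoryFlowPersistence())`, assuming `@persist` accepts a persistence instance as its argument.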
### Technical Advantages
1. **Precise Control Through Low-Level Access**
- Direct access to persistence operations for advanced use cases
- Fine-grained control via method-level persistence decorators
- Built-in state inspection and debugging capabilities
- Full visibility into state changes and persistence operations
2. **Enhanced Reliability**
- Automatic state recovery after system failures or restarts
- Transaction-based state updates for data integrity
- Comprehensive error handling with clear error messages
- Robust validation during state save and load operations
3. **Extensible Architecture**
- Customizable persistence backend through FlowPersistence interface
- Support for specialized storage solutions beyond SQLite
- Compatible with both structured (Pydantic) and unstructured (dict) states
- Seamless integration with existing CrewAI flow patterns
The persistence system's architecture emphasizes technical precision and customization options, allowing developers to maintain full control over state management while benefiting from built-in reliability features.
## Flow Control
### Conditional Logic: `or`
The `or_` function in Flows allows you to listen to multiple methods and trigger the listener method when any of the specified methods emit an output.
<CodeGroup>
```python Code
from crewai.flow.flow import Flow, listen, or_, start

class OrExampleFlow(Flow):

    @start()
    def start_method(self):
        return "Hello from the start method"

    @listen(start_method)
    def second_method(self):
        return "Hello from the second method"

    @listen(or_(start_method, second_method))
    def logger(self, result):
        print(f"Logger: {result}")

flow = OrExampleFlow()
flow.plot("my_flow_plot")
flow.kickoff()
```
```text Output
Logger: Hello from the start method
Logger: Hello from the second method
```
</CodeGroup>
![Flow Visual image](/images/crewai-flow-4.png)
When you run this Flow, the `logger` method will be triggered by the output of either the `start_method` or the `second_method`.
The `or_` function is used to listen to multiple methods and trigger the listener method when any of the specified methods emit an output.
### Conditional Logic: `and`
The `and_` function in Flows allows you to listen to multiple methods and trigger the listener method only when all the specified methods emit an output.
<CodeGroup>
```python Code
from crewai.flow.flow import Flow, and_, listen, start

class AndExampleFlow(Flow):

    @start()
    def start_method(self):
        self.state["greeting"] = "Hello from the start method"

    @listen(start_method)
    def second_method(self):
        self.state["joke"] = "What do computers eat? Microchips."

    @listen(and_(start_method, second_method))
    def logger(self):
        print("---- Logger ----")
        print(self.state)

flow = AndExampleFlow()
flow.plot()
flow.kickoff()
```
```text Output
---- Logger ----
{'greeting': 'Hello from the start method', 'joke': 'What do computers eat? Microchips.'}
```
</CodeGroup>
![Flow Visual image](/images/crewai-flow-5.png)
When you run this Flow, the `logger` method will be triggered only when both the `start_method` and the `second_method` emit an output.
The `and_` function is used to listen to multiple methods and trigger the listener method only when all the specified methods emit an output.
### Router
The `@router()` decorator in Flows allows you to define conditional routing logic based on the output of a method.
You can specify different routes based on the output of the method, allowing you to dynamically control the flow of execution.
<CodeGroup>
```python Code
import random
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel

class ExampleState(BaseModel):
    success_flag: bool = False

class RouterFlow(Flow[ExampleState]):

    @start()
    def start_method(self):
        print("Starting the structured flow")
        random_boolean = random.choice([True, False])
        self.state.success_flag = random_boolean

    @router(start_method)
    def second_method(self):
        if self.state.success_flag:
            return "success"
        else:
            return "failed"

    @listen("success")
    def third_method(self):
        print("Third method running")

    @listen("failed")
    def fourth_method(self):
        print("Fourth method running")

flow = RouterFlow()
flow.plot("my_flow_plot")
flow.kickoff()
```
```text Output
Starting the structured flow
Third method running
Fourth method running
</CodeGroup>
![Flow Visual image](/images/crewai-flow-6.png)
In the above example, the `start_method` generates a random boolean value and sets it in the state.
The `second_method` uses the `@router()` decorator to define conditional routing logic based on the value of the boolean.
If the boolean is `True`, the method returns `"success"`, and if it is `False`, the method returns `"failed"`.
The `third_method` and `fourth_method` listen to the output of the `second_method` and execute based on the route it returns.
When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.
## Adding Agents to Flows
Agents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use an Agent within a flow to perform market research:
```python
import asyncio
from typing import Any, Dict, List
from crewai_tools import SerperDevTool
from pydantic import BaseModel, Field
from crewai.agent import Agent
from crewai.flow.flow import Flow, listen, start
# Define a structured output format
class MarketAnalysis(BaseModel):
key_trends: List[str] = Field(description="List of identified market trends")
market_size: str = Field(description="Estimated market size")
competitors: List[str] = Field(description="Major competitors in the space")
# Define flow state
class MarketResearchState(BaseModel):
product: str = ""
analysis: MarketAnalysis | None = None
# Create a flow class
class MarketResearchFlow(Flow[MarketResearchState]):
@start()
def initialize_research(self) -> Dict[str, Any]:
print(f"Starting market research for {self.state.product}")
return {"product": self.state.product}
@listen(initialize_research)
async def analyze_market(self) -> Dict[str, Any]:
# Create an Agent for market research
analyst = Agent(
role="Market Research Analyst",
goal=f"Analyze the market for {self.state.product}",
backstory="You are an experienced market analyst with expertise in "
"identifying market trends and opportunities.",
tools=[SerperDevTool()],
verbose=True,
)
# Define the research query
query = f"""
Research the market for {self.state.product}. Include:
1. Key market trends
2. Market size
3. Major competitors
Format your response according to the specified structure.
"""
# Execute the analysis with structured output format
result = await analyst.kickoff_async(query, response_format=MarketAnalysis)
if result.pydantic:
print("result", result.pydantic)
else:
print("result", result)
# Return the analysis to update the state
return {"analysis": result.pydantic}
@listen(analyze_market)
def present_results(self, analysis) -> None:
print("\nMarket Analysis Results")
print("=====================")
if isinstance(analysis, dict):
# If we got a dict with 'analysis' key, extract the actual analysis object
market_analysis = analysis.get("analysis")
else:
market_analysis = analysis
if market_analysis and isinstance(market_analysis, MarketAnalysis):
print("\nKey Market Trends:")
for trend in market_analysis.key_trends:
print(f"- {trend}")
print(f"\nMarket Size: {market_analysis.market_size}")
print("\nMajor Competitors:")
for competitor in market_analysis.competitors:
print(f"- {competitor}")
else:
print("No structured analysis data available.")
print("Raw analysis:", analysis)
# Usage example
async def run_flow():
flow = MarketResearchFlow()
flow.plot("MarketResearchFlowPlot")
result = await flow.kickoff_async(inputs={"product": "AI-powered chatbots"})
return result
# Run the flow
if __name__ == "__main__":
asyncio.run(run_flow())
```
![Flow Visual image](/images/crewai-flow-7.png)
This example demonstrates several key features of using Agents in flows:
1. **Structured Output**: Using Pydantic models to define the expected output format (`MarketAnalysis`) ensures type safety and structured data throughout the flow.
2. **State Management**: The flow state (`MarketResearchState`) maintains context between steps and stores both inputs and outputs.
3. **Tool Integration**: Agents can use tools (like `SerperDevTool`) to enhance their capabilities.
## Adding Crews to Flows
Creating a flow with multiple crews in CrewAI is straightforward.
The `main.py` file is where you create your flow and connect the crews together.
Here's an example of how you can connect the `poem_crew` in the `main.py` file:
```python
#!/usr/bin/env python
from random import randint

from pydantic import BaseModel

from crewai.flow.flow import Flow, listen, start

from .crews.poem_crew.poem_crew import PoemCrew

class PoemState(BaseModel):
    sentence_count: int = 1
    poem: str = ""

class PoemFlow(Flow[PoemState]):

    @start()
    def generate_sentence_count(self):
        print("Generating sentence count")
        self.state.sentence_count = randint(1, 5)

    @listen(generate_sentence_count)
    def generate_poem(self):
        print("Generating poem")
        result = PoemCrew().crew().kickoff(inputs={"sentence_count": self.state.sentence_count})
        print("Poem generated", result.raw)
        self.state.poem = result.raw

    @listen(generate_poem)
    def save_poem(self):
        print("Saving poem")
        with open("poem.txt", "w") as f:
            f.write(self.state.poem)

def kickoff():
    poem_flow = PoemFlow()
    poem_flow.kickoff()

def plot():
    poem_flow = PoemFlow()
    poem_flow.plot("PoemFlowPlot")

if __name__ == "__main__":
    kickoff()
    plot()
```
In this example, the `PoemFlow` class defines a flow that generates a sentence count, uses the `PoemCrew` to generate a poem, and then saves the poem to a file. The flow is kicked off by calling the `kickoff()` method, and the PoemFlowPlot is generated by the `plot()` method.
![Flow Visual image](/images/crewai-flow-8.png)
### Running the Flow
To run the flow, use `uv run kickoff` from your flow project's directory.
The flow will execute, and you should see the output in the console.
### Adding Additional Crews Using the CLI
Once you have created your initial flow, you can easily add additional crews to your project using the CLI. This allows you to expand your flow's capabilities by integrating new crews without starting from scratch.
To add a new crew to your existing flow, use the following command:
```bash
crewai flow add-crew <crew_name>
```
This command will create a new directory for your crew within the `crews` folder of your flow project. It will include the necessary configuration files and a crew definition file, similar to the initial setup.
#### Folder Structure
After adding a new crew, your folder structure will look like this:
| Directory/File | Description |
| :--------------------- | :----------------------------------------------------------------- |
| `name_of_flow/` | Root directory for the flow. |
| ├── `crews/` | Contains directories for specific crews. |
| │ ├── `poem_crew/` | Directory for the "poem_crew" with its configurations and scripts. |
| │ │ ├── `config/` | Configuration files directory for the "poem_crew". |
| │ │ │ ├── `agents.yaml` | YAML file defining the agents for "poem_crew". |
| │ │ │ └── `tasks.yaml` | YAML file defining the tasks for "poem_crew". |
| │ │ └── `poem_crew.py` | Script for "poem_crew" functionality. |
| └── `name_of_crew/` | Directory for the new crew. |
| ├── `config/` | Configuration files directory for the new crew. |
| │ ├── `agents.yaml` | YAML file defining the agents for the new crew. |
| │ └── `tasks.yaml` | YAML file defining the tasks for the new crew. |
| └── `name_of_crew.py` | Script for the new crew functionality. |
You can then customize the `agents.yaml` and `tasks.yaml` files to define the agents and tasks for your new crew. The `name_of_crew.py` file will contain the crew's logic, which you can modify to suit your needs.
By using the CLI to add additional crews, you can efficiently build complex AI workflows that leverage multiple crews working together.
## Plot Flows
Visualizing your AI workflows can provide valuable insights into the structure and execution paths of your flows. CrewAI offers a powerful visualization tool that allows you to generate interactive plots of your flows, making it easier to understand and optimize your AI workflows.
CrewAI provides two convenient methods to generate plots of your flows:
### Option 1: Using the `plot()` Method
If you are working directly with a flow instance, you can generate a plot by calling the `plot()` method on your flow object. This method will create an HTML file containing the interactive plot of your flow.
```python
# Assuming you have a flow instance
flow.plot("my_flow_plot")
```
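### Option 2: Using the Command Line
If you are working within a structured CrewAI project, you can also generate the plot from the command line; this is the second of the two methods mentioned above:

```bash
crewai flow plot
```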
The generated plot will display nodes representing the tasks in your flow, with directed edges indicating the flow of execution.
By visualizing your flows, you can gain a clearer understanding of the workflow's structure, making it easier to debug, optimize, and communicate your AI processes to others.
### Conclusion
Plotting your flows is a powerful feature of CrewAI that enhances your ability to design and manage complex AI workflows. Whether you choose to use the `plot()` method or the command line, generating plots will provide you with a visual representation of your workflows, aiding in both development and presentation.
## Advanced
In this section, we explore more complex use cases of CrewAI Flows, starting with a self-evaluation loop. This pattern is crucial for developing AI systems that can iteratively improve their outputs through feedback.
### 1) Self-Evaluation Loop
The self-evaluation loop is a powerful pattern that allows AI workflows to automatically assess and refine their outputs. This example demonstrates how to set up a flow that generates content, evaluates it, and iterates based on feedback until the desired quality is achieved.
#### Overview
The self-evaluation loop involves two main Crews:
1. **ShakespeareanXPostCrew**: Generates a Shakespearean-style post on a given topic.
2. **XPostReviewCrew**: Evaluates the generated post, providing feedback on its validity and quality.
The process iterates until the post meets the criteria or a maximum retry limit is reached. This approach ensures high-quality outputs through iterative refinement.
#### Importance
This pattern is essential for building robust AI systems that can adapt and improve over time. By automating the evaluation and feedback loop, developers can ensure that their AI workflows produce reliable and high-quality results.
#### Main Code Highlights
Below is the `main.py` file for the self-evaluation loop flow:
```python
from typing import Optional
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel
from self_evaluation_loop_flow.crews.shakespeare_crew.shakespeare_crew import (
ShakespeareanXPostCrew,
)
from self_evaluation_loop_flow.crews.x_post_review_crew.x_post_review_crew import (
XPostReviewCrew,
)
class ShakespeareXPostFlowState(BaseModel):
x_post: str = ""
feedback: Optional[str] = None
valid: bool = False
retry_count: int = 0
class ShakespeareXPostFlow(Flow[ShakespeareXPostFlowState]):
@start("retry")
def generate_shakespeare_x_post(self):
print("Generating Shakespearean X post")
topic = "Flying cars"
result = (
ShakespeareanXPostCrew()
.crew()
.kickoff(inputs={"topic": topic, "feedback": self.state.feedback})
)
print("X post generated", result.raw)
self.state.x_post = result.raw
@router(generate_shakespeare_x_post)
def evaluate_x_post(self):
if self.state.retry_count > 3:
return "max_retry_exceeded"
result = XPostReviewCrew().crew().kickoff(inputs={"x_post": self.state.x_post})
self.state.valid = result["valid"]
self.state.feedback = result["feedback"]
print("valid", self.state.valid)
print("feedback", self.state.feedback)
self.state.retry_count += 1
if self.state.valid:
return "complete"
return "retry"
@listen("complete")
def save_result(self):
print("X post is valid")
print("X post:", self.state.x_post)
with open("x_post.txt", "w") as file:
file.write(self.state.x_post)
@listen("max_retry_exceeded")
def max_retry_exceeded_exit(self):
print("Max retry count exceeded")
print("X post:", self.state.x_post)
print("Feedback:", self.state.feedback)
def kickoff():
shakespeare_flow = ShakespeareXPostFlow()
shakespeare_flow.kickoff()
def plot():
shakespeare_flow = ShakespeareXPostFlow()
shakespeare_flow.plot()
if __name__ == "__main__":
kickoff()
```
#### Code Highlights
- **Retry Mechanism**: The flow uses a retry mechanism to regenerate the post if it doesn't meet the criteria, up to a maximum of three retries.
- **Feedback Loop**: Feedback from the `XPostReviewCrew` is used to refine the post iteratively.
- **State Management**: The flow maintains state using a Pydantic model, ensuring type safety and clarity.
For a complete example and further details, please refer to the [Self Evaluation Loop Flow repository](https://github.com/crewAIInc/crewAI-examples/tree/main/self_evaluation_loop_flow).
## Next Steps
If you're interested in exploring additional examples of flows, we have a variety of recommendations in our examples repository. Here are five specific flow examples, each showcasing unique use cases to help you match your current problem type to a specific example:
1. **Email Auto Responder Flow**: This example demonstrates an infinite loop where a background job continually runs to automate email responses. It's a great use case for tasks that need to be performed repeatedly without manual intervention. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/email_auto_responder_flow)
2. **Lead Score Flow**: This example showcases a flow with human-in-the-loop feedback, where the router handles different conditional branches based on review outcomes. It's an excellent example of incorporating dynamic decision-making and human oversight into your workflows. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/lead-score-flow)
3. **Write a Book Flow**: This example excels at chaining multiple crews together, where the output of one crew is used by another. Here, one crew outlines an entire book and another generates chapters based on the outline, eventually producing a complete book. It's ideal for complex, multi-step processes that require coordinated efforts. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/write_a_book_with_flows)
4. **Meeting Assistant Flow**: This flow demonstrates how to broadcast one event to trigger multiple follow-up actions. For instance, after a meeting is completed, the flow can update a Trello board, send a Slack message, and save the results. It's a great example of handling multiple outcomes from a single event, making it ideal for comprehensive task management and notification systems. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow)
5. **Self Evaluation Loop Flow**: This flow demonstrates a self-evaluation loop where AI workflows automatically assess and refine their outputs through feedback. It involves generating content, evaluating it, and iterating until the desired quality is achieved. This pattern is crucial for developing robust AI systems that can adapt and improve over time. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/self_evaluation_loop_flow)
By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.
Also, check out our YouTube video on how to use flows in CrewAI below!
<iframe
  className="w-full aspect-video rounded-xl"
  width="560"
  height="315"
  src="https://www.youtube.com/embed/MTb5my6VOT8"
  title="CrewAI Flows overview"
  frameBorder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
  referrerPolicy="strict-origin-when-cross-origin"
  allowFullScreen
></iframe>
## Running Flows
There are two ways to run a flow:
### Using the Flow API
You can run a flow programmatically by creating an instance of your flow class and calling the `kickoff()` method:
```python
flow = ExampleFlow()
result = flow.kickoff()
```
### Using the CLI
Starting from version 0.103.0, you can run flows using the `crewai run` command:
```shell
crewai run
```
This command automatically detects if your project is a flow (based on the `type = "flow"` setting in your pyproject.toml) and runs it accordingly. This is the recommended way to run flows from the command line.
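For reference, here is a minimal sketch of that setting, assuming it lives under a `[tool.crewai]` table as in generated flow projects (check your own `pyproject.toml` for the exact location):

```toml
# Assumed placement of the project-type flag; generated projects may differ.
[tool.crewai]
type = "flow"
```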
For backward compatibility, you can also use:
```shell
crewai flow kickoff
```
However, the `crewai run` command is now the preferred method as it works for both crews and flows.
docs/concepts/knowledge.mdx
---
title: Knowledge
description: Understand what knowledge is in CrewAI and how to effectively use it.
icon: book
---
# Using Knowledge in CrewAI
## What is Knowledge?
Knowledge in CrewAI is a powerful system that allows AI agents to access and utilize external information sources during their tasks. Think of it as giving your agents a reference library they can consult while working.
<Info>
Key benefits of using Knowledge:
- Enhance agents with domain-specific information
- Support decisions with real-world data
- Maintain context across conversations
- Ground responses in factual information
</Info>
## Supported Knowledge Sources
CrewAI supports various types of knowledge sources out of the box:
<CardGroup cols={2}>
<Card title="Text Sources" icon="text">
- Raw strings
- Text files (.txt)
- PDF documents
</Card>
<Card title="Structured Data" icon="table">
- CSV files
- Excel spreadsheets
- JSON documents
</Card>
</CardGroup>
## Quick Start
Here's a simple example using string-based knowledge:
```python
from crewai import Agent, Task, Crew
from crewai.knowledge import StringKnowledgeSource
# 1. Create a knowledge source
product_info = StringKnowledgeSource(
content="""Our product X1000 has the following features:
- 10-hour battery life
- Water-resistant
- Available in black and silver
Price: $299.99""",
metadata={"category": "product"}
)
# 2. Create an agent with knowledge
sales_agent = Agent(
role="Sales Representative",
goal="Accurately answer customer questions about products",
backstory="Expert in product features and customer service",
knowledge_sources=[product_info] # Attach knowledge to agent
)
# 3. Create a task
answer_task = Task(
    description="Answer: What colors is the X1000 available in and how much does it cost?",
    expected_output="A concise answer covering the available colors and the price.",
    agent=sales_agent
)
# 4. Create and run the crew
crew = Crew(
agents=[sales_agent],
tasks=[answer_task]
)
result = crew.kickoff()
```
## Knowledge Configuration
### Collection Names
Knowledge sources are organized into collections for better management:
```python
# Create knowledge sources with specific collections
tech_specs = StringKnowledgeSource(
content="Technical specifications...",
collection_name="product_tech_specs"
)
pricing_info = StringKnowledgeSource(
content="Pricing information...",
collection_name="product_pricing"
)
```
### Metadata and Filtering
Add metadata to organize and filter knowledge:
```python
knowledge_source = StringKnowledgeSource(
content="Product details...",
metadata={
"category": "electronics",
"product_line": "premium",
"last_updated": "2024-03"
}
)
```
### Chunking Configuration
Control how your content is split for processing:
```python
from crewai.knowledge import PDFKnowledgeSource

knowledge_source = PDFKnowledgeSource(
file_path="product_manual.pdf",
chunk_size=2000, # Characters per chunk
chunk_overlap=200 # Overlap between chunks
)
```
## Advanced Usage
### Custom Knowledge Sources
Create your own knowledge source by extending the base class:
```python
import requests

from crewai.knowledge.source import BaseKnowledgeSource
class APIKnowledgeSource(BaseKnowledgeSource):
def __init__(self, api_endpoint: str, **kwargs):
super().__init__(**kwargs)
self.api_endpoint = api_endpoint
def load_content(self):
# Implement API data fetching
response = requests.get(self.api_endpoint)
return response.json()
def add(self):
content = self.load_content()
# Process and store content
self.save_documents({"source": "api"})
```
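And a quick, hypothetical usage sketch of the source defined above (the endpoint URL and agent fields are placeholders):

```python
from crewai import Agent

# Illustrative wiring only; replace the endpoint with a real API
api_source = APIKnowledgeSource(api_endpoint="https://example.com/api/products")

agent = Agent(
    role="Product Expert",
    goal="Answer questions using live product data",
    backstory="Knows the catalog inside and out",
    knowledge_sources=[api_source]
)
```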
### Embedder Configuration
Customize the embedding process:
```python
crew = Crew(
agents=[agent],
tasks=[task],
knowledge_sources=[source],
embedder_config={
"model": "BAAI/bge-small-en-v1.5",
"normalize": True,
"max_length": 512
}
)
```
## Best Practices
<AccordionGroup>
<Accordion title="Content Organization">
- Use meaningful collection names
- Add detailed metadata for filtering
- Keep chunk sizes appropriate for your content
- Consider content overlap for context preservation
</Accordion>
<Accordion title="Performance Tips">
- Use smaller chunk sizes for precise retrieval
- Implement metadata filtering for faster searches
- Choose appropriate embedding models for your use case
- Cache frequently accessed knowledge
</Accordion>
<Accordion title="Error Handling">
- Validate knowledge source content
- Handle missing or corrupted files
- Monitor embedding generation
- Implement fallback options
</Accordion>
</AccordionGroup>
## Common Issues and Solutions
<AccordionGroup>
<Accordion title="Content Not Found">
If agents can't find relevant information:
- Check chunk sizes
- Verify knowledge source loading
- Review metadata filters
- Test with simpler queries first
</Accordion>
<Accordion title="Performance Issues">
If knowledge retrieval is slow:
- Reduce chunk sizes
- Optimize metadata filtering
- Consider using a lighter embedding model
- Cache frequently accessed content
</Accordion>
</AccordionGroup>
---
title: Using LangChain Tools
description: Learn how to integrate LangChain tools with CrewAI agents to enhance search-based queries and more.
icon: link
---
## Using LangChain Tools
<Info>
CrewAI seamlessly integrates with LangChain's comprehensive [list of tools](https://python.langchain.com/docs/integrations/tools/), all of which can be used with CrewAI.
</Info>
```python Code
import os
from crewai import Agent
from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper
# Setup API keys
os.environ["SERPER_API_KEY"] = "Your Key"
search = GoogleSerperAPIWrapper()
# Create and assign the search tool to an agent
serper_tool = Tool(
name="Intermediate Answer",
func=search.run,
description="Useful for search-based queries",
)
agent = Agent(
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[serper_tool]
)
# rest of the code ...
```
## Conclusion
Tools are pivotal in extending the capabilities of CrewAI agents, enabling them to undertake a broad spectrum of tasks and collaborate effectively.
When building solutions with CrewAI, leverage both custom and existing tools to empower your agents and enhance the AI ecosystem. Consider utilizing error handling, caching mechanisms,
and the flexibility of tool arguments to optimize your agents' performance and capabilities.

View File

@@ -0,0 +1,71 @@
---
title: Using LlamaIndex Tools
description: Learn how to integrate LlamaIndex tools with CrewAI agents to enhance search-based queries and more.
icon: toolbox
---
## Using LlamaIndex Tools
<Info>
CrewAI seamlessly integrates with LlamaIndex's comprehensive toolkit for RAG (Retrieval-Augmented Generation) and agentic pipelines, enabling advanced search-based queries and more.
</Info>
Here are a few ways to initialize LlamaIndex tools for use with CrewAI:
```python Code
from crewai import Agent
from crewai_tools import LlamaIndexTool
# Example 1: Initialize from FunctionTool
from llama_index.core.tools import FunctionTool
def your_python_function(query: str) -> str:
    # Placeholder; replace with your own function logic
    return query
og_tool = FunctionTool.from_defaults(
your_python_function,
name="<name>",
description='<description>'
)
tool = LlamaIndexTool.from_tool(og_tool)
# Example 2: Initialize from LlamaHub Tools
from llama_index.tools.wolfram_alpha import WolframAlphaToolSpec
wolfram_spec = WolframAlphaToolSpec(app_id="<app_id>")
wolfram_tools = wolfram_spec.to_tool_list()
tools = [LlamaIndexTool.from_tool(t) for t in wolfram_tools]
# Example 3: Initialize Tool from a LlamaIndex Query Engine
query_engine = index.as_query_engine()  # assumes an existing LlamaIndex `index`
query_tool = LlamaIndexTool.from_query_engine(
query_engine,
name="Uber 2019 10K Query Tool",
description="Use this tool to lookup the 2019 Uber 10K Annual Report"
)
# Create and assign the tools to an agent
agent = Agent(
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[tool, *tools, query_tool]
)
# rest of the code ...
```
## Steps to Get Started
To effectively use the LlamaIndexTool, follow these steps:
<Steps>
<Step title="Package Installation">
Make sure that the `crewai[tools]` package is installed in your Python environment:
<CodeGroup>
```shell Terminal
pip install 'crewai[tools]'
```
</CodeGroup>
</Step>
<Step title="Install and Use LlamaIndex">
Follow the LlamaIndex documentation [LlamaIndex Documentation](https://docs.llamaindex.ai/) to set up a RAG/agent pipeline.
</Step>
</Steps>
docs/concepts/llms.mdx
---
title: 'LLMs'
description: 'A comprehensive guide to configuring and using Large Language Models (LLMs) in your CrewAI projects'
icon: 'microchip-ai'
---
<Note>
CrewAI integrates with multiple LLM providers through LiteLLM, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
</Note>
## What are LLMs?
Large Language Models (LLMs) are the core intelligence behind CrewAI agents. They enable agents to understand context, make decisions, and generate human-like responses. Here's what you need to know:
<CardGroup cols={2}>
<Card title="LLM Basics" icon="brain">
Large Language Models are AI systems trained on vast amounts of text data. They power the intelligence of your CrewAI agents, enabling them to understand and generate human-like text.
</Card>
<Card title="Context Window" icon="window">
The context window determines how much text an LLM can process at once. Larger windows (e.g., 128K tokens) allow for more context but may be more expensive and slower.
</Card>
<Card title="Temperature" icon="temperature-three-quarters">
Temperature (0.0 to 1.0) controls response randomness. Lower values (e.g., 0.2) produce more focused, deterministic outputs, while higher values (e.g., 0.8) increase creativity and variability.
</Card>
<Card title="Provider Selection" icon="server">
Each LLM provider (e.g., OpenAI, Anthropic, Google) offers different models with varying capabilities, pricing, and features. Choose based on your needs for accuracy, speed, and cost.
</Card>
</CardGroup>
## Available Models and Their Capabilities
Here's a detailed breakdown of supported models and their capabilities:
<Tabs>
<Tab title="OpenAI">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| GPT-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| GPT-4 Turbo | 128,000 tokens | Long-form content, document analysis |
| GPT-4o & GPT-4o-mini | 128,000 tokens | Cost-effective large context processing |
<Note>
1 token ≈ 4 characters in English. For example, 8,192 tokens ≈ 32,768 characters or about 6,000 words.
</Note>
</Tab>
<Tab title="Groq">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| Llama 3.1 70B/8B | 131,072 tokens | High-performance, large context tasks |
| Llama 3.2 Series | 8,192 tokens | General-purpose tasks |
| Mixtral 8x7B | 32,768 tokens | Balanced performance and context |
| Gemma Series | 8,192 tokens | Efficient, smaller-scale tasks |
<Tip>
Groq is known for its fast inference speeds, making it suitable for real-time applications.
</Tip>
</Tab>
<Tab title="Others">
| Provider | Context Window | Key Features |
|----------|---------------|--------------|
| Deepseek Chat | 128,000 tokens | Specialized in technical discussions |
| Claude 3 | Up to 200K tokens | Strong reasoning, code understanding |
| Gemini | Varies by model | Multimodal capabilities |
<Info>
Provider selection should consider factors like:
- API availability in your region
- Pricing structure
- Required features (e.g., streaming, function calling)
- Performance requirements
</Info>
</Tab>
</Tabs>
## Setting Up Your LLM
There are three ways to configure LLMs in CrewAI. Choose the method that best fits your workflow:
<Tabs>
<Tab title="1. Environment Variables">
The simplest way to get started. Set these variables in your environment:
```bash
# Required: Your API key for authentication
OPENAI_API_KEY=<your-api-key>
# Optional: Default model selection
OPENAI_MODEL_NAME=gpt-4o-mini # Default if not set
# Optional: Organization ID (if applicable)
OPENAI_ORGANIZATION_ID=<your-org-id>
```
<Warning>
Never commit API keys to version control. Use environment files (.env) or your system's secret management.
</Warning>
</Tab>
<Tab title="2. YAML Configuration">
Create a YAML file to define your agent configurations. This method is great for version control and team collaboration:
```yaml
researcher:
# Agent Definition
role: Research Specialist
goal: Conduct comprehensive research and analysis
backstory: A dedicated research professional with years of experience
verbose: true
# Model Selection (uncomment your choice)
# OpenAI Models - Known for reliability and performance
llm: openai/gpt-4o-mini
# llm: openai/gpt-4 # More accurate but expensive
# llm: openai/gpt-4-turbo # Fast with large context
# llm: openai/gpt-4o # Optimized for longer texts
# llm: openai/o1-preview # Latest features
# llm: openai/o1-mini # Cost-effective
# Azure Models - For enterprise deployments
# llm: azure/gpt-4o-mini
# llm: azure/gpt-4
# llm: azure/gpt-35-turbo
# Anthropic Models - Strong reasoning capabilities
# llm: anthropic/claude-3-opus-20240229-v1:0
# llm: anthropic/claude-3-sonnet-20240229-v1:0
# llm: anthropic/claude-3-haiku-20240307-v1:0
# llm: anthropic/claude-2.1
# llm: anthropic/claude-2.0
# Google Models - Good for general tasks
# llm: gemini/gemini-pro
# llm: gemini/gemini-1.5-pro-latest
# llm: gemini/gemini-1.0-pro-latest
# AWS Bedrock Models - Enterprise-grade
# llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
# llm: bedrock/anthropic.claude-v2:1
# llm: bedrock/amazon.titan-text-express-v1
# llm: bedrock/meta.llama2-70b-chat-v1
# Mistral Models - Open source alternative
# llm: mistral/mistral-large-latest
# llm: mistral/mistral-medium-latest
# llm: mistral/mistral-small-latest
# Groq Models - Fast inference
# llm: groq/mixtral-8x7b-32768
# llm: groq/llama-3.1-70b-versatile
# llm: groq/llama-3.2-90b-text-preview
# llm: groq/gemma2-9b-it
# llm: groq/gemma-7b-it
# IBM watsonx.ai Models - Enterprise features
# llm: watsonx/ibm/granite-13b-chat-v2
# llm: watsonx/meta-llama/llama-3-1-70b-instruct
# llm: watsonx/bigcode/starcoder2-15b
# Ollama Models - Local deployment
# llm: ollama/llama3:70b
# llm: ollama/codellama
# llm: ollama/mistral
# llm: ollama/mixtral
# llm: ollama/phi
# Fireworks AI Models - Specialized tasks
# llm: fireworks_ai/accounts/fireworks/models/llama-v3-70b-instruct
# llm: fireworks_ai/accounts/fireworks/models/mixtral-8x7b
# llm: fireworks_ai/accounts/fireworks/models/zephyr-7b-beta
# Perplexity AI Models - Research focused
# llm: pplx/llama-3.1-sonar-large-128k-online
# llm: pplx/mistral-7b-instruct
# llm: pplx/codellama-34b-instruct
# llm: pplx/mixtral-8x7b-instruct
# Hugging Face Models - Community models
# llm: huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct
# llm: huggingface/mistralai/Mixtral-8x7B-Instruct-v0.1
# llm: huggingface/tiiuae/falcon-180B-chat
# llm: huggingface/google/gemma-7b-it
# Nvidia NIM Models - GPU-optimized
# llm: nvidia_nim/meta/llama3-70b-instruct
# llm: nvidia_nim/mistral/mixtral-8x7b
# llm: nvidia_nim/google/gemma-7b
# SambaNova Models - Enterprise AI
# llm: sambanova/Meta-Llama-3.1-8B-Instruct
# llm: sambanova/BioMistral-7B
# llm: sambanova/Falcon-180B
```
<Info>
The YAML configuration allows you to:
- Version control your agent settings
- Easily switch between different models
- Share configurations across team members
- Document model choices and their purposes
</Info>
</Tab>
<Tab title="3. Direct Code">
For maximum flexibility, configure LLMs directly in your Python code:
```python
from crewai import LLM
# Basic configuration
llm = LLM(model="gpt-4")
# Advanced configuration with detailed parameters
llm = LLM(
model="gpt-4o-mini",
temperature=0.7, # Higher for more creative outputs
timeout=120, # Seconds to wait for response
max_tokens=4000, # Maximum length of response
top_p=0.9, # Nucleus sampling parameter
frequency_penalty=0.1, # Reduce repetition
presence_penalty=0.1, # Encourage topic diversity
response_format={"type": "json"}, # For structured outputs
seed=42 # For reproducible results
)
```
<Info>
Parameter explanations:
- `temperature`: Controls randomness (0.0-1.0)
- `timeout`: Maximum wait time for response
- `max_tokens`: Limits response length
- `top_p`: Alternative to temperature for sampling
- `frequency_penalty`: Reduces word repetition
- `presence_penalty`: Encourages new topics
- `response_format`: Specifies output structure
- `seed`: Ensures consistent outputs
</Info>
</Tab>
</Tabs>
## Advanced Features and Optimization
Learn how to get the most out of your LLM configuration:
<AccordionGroup>
<Accordion title="Context Window Management">
CrewAI includes smart context management features:
```python
from crewai import LLM
# CrewAI automatically handles:
# 1. Token counting and tracking
# 2. Content summarization when needed
# 3. Task splitting for large contexts
llm = LLM(
model="gpt-4",
max_tokens=4000, # Limit response length
)
```
<Info>
Best practices for context management:
1. Choose models with appropriate context windows
2. Pre-process long inputs when possible
3. Use chunking for large documents
4. Monitor token usage to optimize costs
</Info>
</Accordion>
<Accordion title="Performance Optimization">
<Steps>
<Step title="Token Usage Optimization">
Choose the right context window for your task:
- Small tasks (up to 4K tokens): Standard models
- Medium tasks (between 4K-32K): Enhanced models
- Large tasks (over 32K): Large context models
```python
# Configure model with appropriate settings
llm = LLM(
model="openai/gpt-4-turbo-preview",
temperature=0.7, # Adjust based on task
max_tokens=4096, # Set based on output needs
timeout=300 # Longer timeout for complex tasks
)
```
<Tip>
- Lower temperature (0.1 to 0.3) for factual responses
- Higher temperature (0.7 to 0.9) for creative tasks
</Tip>
</Step>
<Step title="Best Practices">
1. Monitor token usage
2. Implement rate limiting
3. Use caching when possible
4. Set appropriate max_tokens limits
</Step>
</Steps>
<Info>
Remember to regularly monitor your token usage and adjust your configuration as needed to optimize costs and performance.
</Info>
</Accordion>
</AccordionGroup>
## Provider Configuration Examples
<AccordionGroup>
<Accordion title="OpenAI">
```bash
# Required
OPENAI_API_KEY=sk-...
# Optional
OPENAI_API_BASE=<custom-base-url>
OPENAI_ORGANIZATION=<your-org-id>
```
Example usage:
```python Code
from crewai import LLM
llm = LLM(
model="gpt-4",
temperature=0.8,
max_tokens=150,
top_p=0.9,
frequency_penalty=0.1,
presence_penalty=0.1,
stop=["END"],
seed=42
)
```
</Accordion>
<Accordion title="Anthropic">
```bash
ANTHROPIC_API_KEY=sk-ant-...
```
Example usage:
```python Code
llm = LLM(
model="anthropic/claude-3-sonnet-20240229-v1:0",
temperature=0.7
)
```
</Accordion>
<Accordion title="Google">
```bash
GEMINI_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="gemini/gemini-pro",
temperature=0.7
)
```
</Accordion>
<Accordion title="Azure">
```bash
# Required
AZURE_API_KEY=<your-api-key>
AZURE_API_BASE=<your-resource-url>
AZURE_API_VERSION=<api-version>
# Optional
AZURE_AD_TOKEN=<your-azure-ad-token>
AZURE_API_TYPE=<your-azure-api-type>
```
Example usage:
```python Code
llm = LLM(
model="azure/gpt-4",
api_version="2023-05-15"
)
```
</Accordion>
<Accordion title="AWS Bedrock">
```bash
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>
```
Example usage:
```python Code
llm = LLM(
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
)
```
</Accordion>
<Accordion title="Mistral">
```bash
MISTRAL_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="mistral/mistral-large-latest",
temperature=0.7
)
```
</Accordion>
<Accordion title="Groq">
```bash
GROQ_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="groq/llama-3.2-90b-text-preview",
temperature=0.7
)
```
</Accordion>
<Accordion title="IBM watsonx.ai">
```bash
# Required
WATSONX_URL=<your-url>
WATSONX_APIKEY=<your-apikey>
WATSONX_PROJECT_ID=<your-project-id>
# Optional
WATSONX_TOKEN=<your-token>
WATSONX_DEPLOYMENT_SPACE_ID=<your-space-id>
```
Example usage:
```python Code
llm = LLM(
model="watsonx/meta-llama/llama-3-1-70b-instruct",
base_url="https://api.watsonx.ai/v1"
)
```
</Accordion>
<Accordion title="Ollama (Local LLMs)">
1. Install Ollama: [ollama.ai](https://ollama.ai/)
2. Run a model: `ollama run llama2`
3. Configure:
```python Code
llm = LLM(
model="ollama/llama3:70b",
base_url="http://localhost:11434"
)
```
</Accordion>
<Accordion title="Fireworks AI">
```bash
FIREWORKS_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="fireworks_ai/accounts/fireworks/models/llama-v3-70b-instruct",
temperature=0.7
)
```
</Accordion>
<Accordion title="Perplexity AI">
```bash
PERPLEXITY_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="llama-3.1-sonar-large-128k-online",
base_url="https://api.perplexity.ai/"
)
```
</Accordion>
<Accordion title="Hugging Face">
```bash
HUGGINGFACE_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct",
base_url="your_api_endpoint"
)
```
</Accordion>
<Accordion title="Nvidia NIM">
```bash
NVIDIA_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="nvidia_nim/meta/llama3-70b-instruct",
temperature=0.7
)
```
</Accordion>
<Accordion title="SambaNova">
```bash
SAMBANOVA_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="sambanova/Meta-Llama-3.1-8B-Instruct",
temperature=0.7
)
```
</Accordion>
<Accordion title="Cerebras">
```bash
# Required
CEREBRAS_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="cerebras/llama3.1-70b",
temperature=0.7,
max_tokens=8192
)
```
<Info>
Cerebras features:
- Fast inference speeds
- Competitive pricing
- Good balance of speed and quality
- Support for long context windows
</Info>
</Accordion>
</AccordionGroup>
## Common Issues and Solutions
<Tabs>
<Tab title="Authentication">
<Warning>
Most authentication issues can be resolved by checking API key format and environment variable names.
</Warning>
```bash
# OpenAI
OPENAI_API_KEY=sk-...
# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
```
</Tab>
<Tab title="Model Names">
<Check>
Always include the provider prefix in model names
</Check>
```python
# Correct
llm = LLM(model="openai/gpt-4")
# Incorrect
llm = LLM(model="gpt-4")
```
</Tab>
<Tab title="Context Length">
<Tip>
Use larger context models for extensive tasks
</Tip>
```python
# Large context model
llm = LLM(model="openai/gpt-4o") # 128K tokens
```
</Tab>
</Tabs>
## Getting Help
If you need assistance, these resources are available:
<CardGroup cols={3}>
<Card
title="LiteLLM Documentation"
href="https://docs.litellm.ai/docs/"
icon="book"
>
Comprehensive documentation for LiteLLM integration and troubleshooting common issues.
</Card>
<Card
title="GitHub Issues"
href="https://github.com/joaomdmoura/crewAI/issues"
icon="bug"
>
Report bugs, request features, or browse existing issues for solutions.
</Card>
<Card
title="Community Forum"
href="https://community.crewai.com"
icon="comment-question"
>
Connect with other CrewAI users, share experiences, and get help from the community.
</Card>
</CardGroup>
<Note>
Best Practices for API Key Security:
- Use environment variables or secure vaults
- Never commit keys to version control
- Rotate keys regularly
- Use separate keys for development and production
- Monitor key usage for unusual patterns
</Note>
docs/concepts/memory.mdx
---
title: Memory
description: Leveraging memory systems in the CrewAI framework to enhance agent capabilities.
icon: database
---
## Introduction to Memory Systems in CrewAI
The crewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents.
This system comprises `short-term memory`, `long-term memory`, `entity memory`, and `contextual memory`, each serving a unique purpose in aiding agents to remember,
reason, and learn from past interactions.
## Memory System Components
| Component | Description |
| :------------------- | :---------------------------------------------------------------------------------------------------------------------- |
| **Short-Term Memory**| Temporarily stores recent interactions and outcomes using `RAG`, enabling agents to recall and utilize information relevant to their current context during the current executions.|
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
| **User Memory** | Stores user-specific information and preferences, enhancing personalization and user experience. |
## How Memory Systems Empower Agents
1. **Contextual Awareness**: With short-term and contextual memory, agents gain the ability to maintain context over a conversation or task sequence, leading to more coherent and relevant responses.
2. **Experience Accumulation**: Long-term memory allows agents to accumulate experiences, learning from past actions to improve future decision-making and problem-solving.
3. **Entity Understanding**: By maintaining entity memory, agents can recognize and remember key entities, enhancing their ability to process and interact with complex information.
## Implementing Memory in Your Crew
When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform.
By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration.
The memory system uses OpenAI embeddings by default, but you can change this by setting `embedder` to a different provider or model.
It's also possible to initialize the memory with your own instances.
The `embedder` setting only applies to **Short-Term Memory**, which uses Chroma for RAG.
The **Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations.
The data storage files are saved into a platform-specific location found using the appdirs package,
and the name of the project can be overridden using the **CREWAI_STORAGE_DIR** environment variable.
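For example, an illustrative shell snippet using the environment variable named above:

```shell
# Override where CrewAI saves its memory/storage files
export CREWAI_STORAGE_DIR="my_crew_storage"
```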
### Example: Configuring Memory for a Crew
```python Code
from crewai import Crew, Agent, Task, Process
# Assemble your crew with memory capabilities
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True
)
```
### Example: Using Custom Memory Instances (e.g., FAISS as the VectorDB)
```python Code
from crewai import Crew, Agent, Task, Process

# Note: EnhanceLongTermMemory, EnhanceShortTermMemory, EnhanceEntityMemory,
# LTMSQLiteStorage, and CustomRAGStorage below are assumed to be your own
# custom implementations; they are not shipped with crewai.
# Assemble your crew with memory capabilities
my_crew = Crew(
agents=[...],
tasks=[...],
process="Process.sequential",
memory=True,
long_term_memory=EnhanceLongTermMemory(
storage=LTMSQLiteStorage(
db_path="/my_data_dir/my_crew1/long_term_memory_storage.db"
)
),
short_term_memory=EnhanceShortTermMemory(
storage=CustomRAGStorage(
crew_name="my_crew",
storage_type="short_term",
data_dir="//my_data_dir",
model=embedder["model"],
dimension=embedder["dimension"],
),
),
entity_memory=EnhanceEntityMemory(
storage=CustomRAGStorage(
crew_name="my_crew",
storage_type="entities",
data_dir="//my_data_dir",
model=embedder["model"],
dimension=embedder["dimension"],
),
),
verbose=True,
)
```
## Integrating Mem0 for Enhanced User Memory
[Mem0](https://mem0.ai/) is a self-improving memory layer for LLM applications, enabling personalized AI experiences.
To include user-specific memory, you can get your API key [here](https://app.mem0.ai/dashboard/api-keys) and refer to the [docs](https://docs.mem0.ai/platform/quickstart#4-1-create-memories) for adding user preferences.
```python Code
import os
from crewai import Crew, Process
from mem0 import MemoryClient
# Set environment variables for Mem0
os.environ["MEM0_API_KEY"] = "m0-xx"
# Step 1: Record preferences based on past conversation or user input
client = MemoryClient()
messages = [
{"role": "user", "content": "Hi there! I'm planning a vacation and could use some advice."},
{"role": "assistant", "content": "Hello! I'd be happy to help with your vacation planning. What kind of destination do you prefer?"},
{"role": "user", "content": "I am more of a beach person than a mountain person."},
{"role": "assistant", "content": "That's interesting. Do you like hotels or Airbnb?"},
{"role": "user", "content": "I like Airbnb more."},
]
client.add(messages, user_id="john")
# Step 2: Create a Crew with User Memory
crew = Crew(
agents=[...],
tasks=[...],
verbose=True,
process=Process.sequential,
memory=True,
memory_config={
"provider": "mem0",
"config": {"user_id": "john"},
},
)
```
## Additional Embedding Providers
### Using OpenAI embeddings (already default)
```python Code
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "openai",
"config": {
"model": 'text-embedding-3-small'
}
}
)
```
Alternatively, you can directly pass the OpenAIEmbeddingFunction to the embedder parameter.
Example:
```python Code
import os

from crewai import Crew, Agent, Task, Process
from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder=OpenAIEmbeddingFunction(api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"),
)
```
### Using Ollama embeddings
```python Code
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "ollama",
"config": {
"model": "mxbai-embed-large"
}
}
)
```
### Using Google AI embeddings
```python Code
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "google",
"config": {
"api_key": "<YOUR_API_KEY>",
"model_name": "<model_name>"
}
}
)
```
### Using Azure OpenAI embeddings
```python Code
from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder=OpenAIEmbeddingFunction(
api_key="YOUR_API_KEY",
api_base="YOUR_API_BASE_PATH",
api_type="azure",
api_version="YOUR_API_VERSION",
model_name="text-embedding-3-small"
)
)
```
### Using Vertex AI embeddings
```python Code
from chromadb.utils.embedding_functions import GoogleVertexEmbeddingFunction
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder=GoogleVertexEmbeddingFunction(
project_id="YOUR_PROJECT_ID",
region="YOUR_REGION",
api_key="YOUR_API_KEY",
model_name="textembedding-gecko"
)
)
```
### Using Cohere embeddings
```python Code
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "cohere",
"config": {
"api_key": "YOUR_API_KEY",
"model_name": "<model_name>"
}
}
)
```
### Using HuggingFace embeddings
```python Code
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "huggingface",
"config": {
"api_url": "<api_url>",
}
}
)
```
### Using Watson embeddings
```python Code
from crewai import Crew, Agent, Task, Process
# Note: Ensure you have installed and imported `ibm_watsonx_ai` for Watson embeddings to work.
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "watson",
"config": {
"model": "<model_name>",
"api_url": "<api_url>",
"api_key": "<YOUR_API_KEY>",
"project_id": "<YOUR_PROJECT_ID>",
}
}
)
```
### Resetting Memory
```shell
crewai reset-memories [OPTIONS]
```
#### Resetting Memory Options
| Option | Description | Type | Default |
| :----------------- | :------------------------------- | :------------- | :------ |
| `-l`, `--long` | Reset LONG TERM memory. | Flag (boolean) | False |
| `-s`, `--short` | Reset SHORT TERM memory. | Flag (boolean) | False |
| `-e`, `--entities` | Reset ENTITIES memory. | Flag (boolean) | False |
| `-k`, `--kickoff-outputs` | Reset LATEST KICKOFF TASK OUTPUTS. | Flag (boolean) | False |
| `-a`, `--all` | Reset ALL memories. | Flag (boolean) | False |
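For instance, to clear everything at once using the flags from the table above:

```shell
crewai reset-memories --all
```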
## Benefits of Using CrewAI's Memory System
- 🦾 **Adaptive Learning:** Crews become more efficient over time, adapting to new information and refining their approach to tasks.
- 🫡 **Enhanced Personalization:** Memory enables agents to remember user preferences and historical interactions, leading to personalized experiences.
- 🧠 **Improved Problem Solving:** Access to a rich memory store aids agents in making more informed decisions, drawing on past learnings and contextual insights.
## Conclusion
Integrating CrewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations,
you can quickly empower your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability.
---
title: Planning
description: Learn how to add planning to your CrewAI Crew and improve their performance.
icon: brain
---
## Introduction
The planning feature in CrewAI allows you to add planning capability to your crew. When enabled, before each Crew iteration,
all Crew information is sent to an AgentPlanner that will plan the tasks step by step, and this plan will be added to each task description.
From this point on, your crew will have planning enabled, and the tasks will be planned before each iteration.
From this point on, your crew will have planning enabled, and the tasks will be planned before each iteration.
<Warning>
When planning is enabled, crewAI will use `gpt-4o-mini` as the default LLM for planning, which requires a valid OpenAI API key. Since your agents might be using different LLMs, this could cause confusion if you don't have an OpenAI API key configured or if you're experiencing unexpected behavior related to LLM API calls.
</Warning>
#### Planning LLM
Now you can define the LLM that will be used to plan the tasks.
Now you can define the LLM that will be used to plan the tasks. You can use any ChatOpenAI LLM model available.
When running the base case example, you will see something like the output below, which represents the output of the `AgentPlanner`
responsible for creating the step-by-step logic to add to the Agents' tasks.
@@ -44,6 +39,7 @@ responsible for creating the step-by-step logic to add to the Agents' tasks.
<CodeGroup>
```python Code
from crewai import Crew, Agent, Task, Process
from langchain_openai import ChatOpenAI
# Assemble your crew with planning capabilities and custom LLM
my_crew = Crew(
@@ -51,7 +47,7 @@ my_crew = Crew(
tasks=self.tasks,
process=Process.sequential,
planning=True,
planning_llm="gpt-4o"
planning_llm=ChatOpenAI(model="gpt-4o")
)
# Run the crew
@@ -86,8 +82,8 @@ my_crew.kickoff()
3. **Collect Data:**
- Search for the latest papers, articles, and reports published in 2024 and early 2025.
- Use keywords like "Large Language Models 2025", "AI LLM advancements", "AI ethics 2025", etc.
- Search for the latest papers, articles, and reports published in 2023 and early 2024.
- Use keywords like "Large Language Models 2024", "AI LLM advancements", "AI ethics 2024", etc.
4. **Analyze Findings:**


@@ -2,11 +2,9 @@
title: Processes
description: Detailed guide on workflow management through processes in CrewAI, with updated implementation details.
icon: bars-staggered
mode: "wide"
---
## Overview
## Understanding Processes
<Tip>
Processes orchestrate the execution of tasks by agents, akin to project management in human teams.
These processes ensure tasks are distributed and executed efficiently, in alignment with a predefined strategy.
@@ -25,7 +23,9 @@ Processes enable individual agents to operate as a cohesive unit, streamlining t
To assign a process to a crew, specify the process type upon crew creation to set the execution strategy. For a hierarchical process, ensure to define `manager_llm` or `manager_agent` for the manager agent.
```python
from crewai import Crew, Process
from crewai import Crew
from crewai.process import Process
from langchain_openai import ChatOpenAI
# Example: Creating a crew with a sequential process
crew = Crew(
@@ -40,7 +40,7 @@ crew = Crew(
agents=my_agents,
tasks=my_tasks,
process=Process.hierarchical,
manager_llm="gpt-4o"
manager_llm=ChatOpenAI(model="gpt-4")
# or
# manager_agent=my_manager_agent
)

docs/concepts/tasks.mdx Normal file

@@ -0,0 +1,474 @@
---
title: Tasks
description: Detailed guide on managing and creating tasks within the CrewAI framework.
icon: list-check
---
## Overview of a Task
In the CrewAI framework, a `Task` is a specific assignment completed by an `Agent`.
Tasks provide all necessary details for execution, such as a description, the agent responsible, required tools, and more, facilitating a wide range of action complexities.
Tasks within CrewAI can be collaborative, requiring multiple agents to work together. This is managed through the task properties and orchestrated by the Crew's process, enhancing teamwork and efficiency.
### Task Execution Flow
Tasks can be executed in two ways:
- **Sequential**: Tasks are executed in the order they are defined
- **Hierarchical**: Tasks are assigned to agents based on their roles and expertise
The execution flow is defined when creating the crew:
```python Code
crew = Crew(
    agents=[agent1, agent2],
    tasks=[task1, task2],
    process=Process.sequential  # or Process.hierarchical
)
```
## Task Attributes
| Attribute | Parameters | Type | Description |
| :------------------------------- | :---------------- | :---------------------------- | :------------------------------------------------------------------------------------------------------------------- |
| **Description** | `description` | `str` | A clear, concise statement of what the task entails. |
| **Expected Output** | `expected_output` | `str` | A detailed description of what the task's completion looks like. |
| **Name** _(optional)_ | `name` | `Optional[str]` | A name identifier for the task. |
| **Agent** _(optional)_ | `agent` | `Optional[BaseAgent]` | The agent responsible for executing the task. |
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | The tools/resources the agent is limited to use for this task. |
| **Context** _(optional)_ | `context` | `Optional[List["Task"]]` | Other tasks whose outputs will be used as context for this task. |
| **Async Execution** _(optional)_ | `async_execution` | `Optional[bool]` | Whether the task should be executed asynchronously. Defaults to False. |
| **Config** _(optional)_ | `config` | `Optional[Dict[str, Any]]` | Task-specific configuration parameters. |
| **Output File** _(optional)_ | `output_file` | `Optional[str]` | File path for storing the task output. |
| **Output JSON** _(optional)_ | `output_json` | `Optional[Type[BaseModel]]` | A Pydantic model to structure the JSON output. |
| **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | A Pydantic model for task output. |
| **Callback** _(optional)_ | `callback` | `Optional[Any]` | Function/object to be executed after task completion. |
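A minimal sketch pulling several of these attributes together; the `analyst` agent and `research_task` referenced here are hypothetical objects defined elsewhere:
```python Code
from crewai import Task

analysis_task = Task(
    name="trend_analysis",                # optional identifier
    description="Analyze the research findings and identify key trends",
    expected_output="A short report of the top trends",
    agent=analyst,                        # hypothetical agent
    context=[research_task],              # hypothetical upstream task
    output_file="outputs/trends.md",      # persist the result to disk
    async_execution=False,                # the default; run synchronously
)
```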
## Creating Tasks
There are two ways to create tasks in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**.
### YAML Configuration (Recommended)
Using YAML configuration provides a cleaner, more maintainable way to define tasks. We strongly recommend using this approach to define tasks in your CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/installation) section, navigate to the `src/latest_ai_development/config/tasks.yaml` file and modify the template to match your specific task requirements.
<Note>
Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew:
```python Code
crew.kickoff(inputs={'topic': 'AI Agents'})
```
</Note>
Here's an example of how to configure tasks using YAML:
```yaml tasks.yaml
research_task:
  description: >
    Conduct a thorough research about {topic}
    Make sure you find any interesting and relevant information given
    the current year is 2024.
  expected_output: >
    A list with 10 bullet points of the most relevant information about {topic}
  agent: researcher

reporting_task:
  description: >
    Review the context you got and expand each topic into a full section for a report.
    Make sure the report is detailed and contains any and all relevant information.
  expected_output: >
    A fully fledged report with the main topics, each with a full section of information.
    Formatted as markdown without '```'
  agent: reporting_analyst
  output_file: report.md
```
To use this YAML configuration in your code, create a crew class that inherits from `CrewBase`:
```python crew.py
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool

@CrewBase
class LatestAiDevelopmentCrew():
  """LatestAiDevelopment crew"""

  @agent
  def researcher(self) -> Agent:
    return Agent(
      config=self.agents_config['researcher'],
      verbose=True,
      tools=[SerperDevTool()]
    )

  @agent
  def reporting_analyst(self) -> Agent:
    return Agent(
      config=self.agents_config['reporting_analyst'],
      verbose=True
    )

  @task
  def research_task(self) -> Task:
    return Task(
      config=self.tasks_config['research_task']
    )

  @task
  def reporting_task(self) -> Task:
    return Task(
      config=self.tasks_config['reporting_task']
    )

  @crew
  def crew(self) -> Crew:
    return Crew(
      agents=[
        self.researcher(),
        self.reporting_analyst()
      ],
      tasks=[
        self.research_task(),
        self.reporting_task()
      ],
      process=Process.sequential
    )
```
<Note>
The names you use in your YAML files (`agents.yaml` and `tasks.yaml`) should match the method names in your Python code.
</Note>
### Direct Code Definition (Alternative)
Alternatively, you can define tasks directly in your code without using YAML configuration:
```python task.py
from crewai import Task

research_task = Task(
    description="""
    Conduct a thorough research about AI Agents.
    Make sure you find any interesting and relevant information given
    the current year is 2024.
    """,
    expected_output="""
    A list with 10 bullet points of the most relevant information about AI Agents
    """,
    agent=researcher
)

reporting_task = Task(
    description="""
    Review the context you got and expand each topic into a full section for a report.
    Make sure the report is detailed and contains any and all relevant information.
    """,
    expected_output="""
    A fully fledged report with the main topics, each with a full section of information.
    Formatted as markdown without '```'
    """,
    agent=reporting_analyst,
    output_file="report.md"
)
```
<Tip>
Directly specify an `agent` for assignment, or let CrewAI's `hierarchical` process decide based on roles, availability, and more.
</Tip>
## Task Output
Understanding task outputs is crucial for building effective AI workflows. CrewAI provides a structured way to handle task results through the `TaskOutput` class, which supports multiple output formats and can be easily passed between tasks.
The output of a task in the CrewAI framework is encapsulated within the `TaskOutput` class. This class provides a structured way to access the results of a task, including various formats such as raw output, JSON, and Pydantic models.
By default, the `TaskOutput` will only include the `raw` output. A `TaskOutput` will only include the `pydantic` or `json_dict` output if the original `Task` object was configured with `output_pydantic` or `output_json`, respectively.
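As a minimal sketch (the schema and agent are hypothetical), configuring `output_pydantic` so the resulting `TaskOutput` carries a structured object:
```python Code
from pydantic import BaseModel
from crewai import Task

class NewsSummary(BaseModel):  # hypothetical output schema
    headline: str
    bullets: list[str]

summary_task = Task(
    description="Summarize the latest AI news",
    expected_output="A headline plus five bullet points",
    agent=research_agent,         # hypothetical agent defined elsewhere
    output_pydantic=NewsSummary,  # task.output.pydantic will hold a NewsSummary
)
```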
### Task Output Attributes
| Attribute | Parameters | Type | Description |
| :---------------- | :-------------- | :------------------------- | :------------------------------------------------------------------------------------------------- |
| **Description** | `description` | `str` | Description of the task. |
| **Summary** | `summary` | `Optional[str]` | Summary of the task, auto-generated from the first 10 words of the description. |
| **Raw** | `raw` | `str` | The raw output of the task. This is the default format for the output. |
| **Pydantic** | `pydantic` | `Optional[BaseModel]` | A Pydantic model object representing the structured output of the task. |
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the task. |
| **Agent** | `agent` | `str` | The agent that executed the task. |
| **Output Format** | `output_format` | `OutputFormat` | The format of the task output, with options including RAW, JSON, and Pydantic. The default is RAW. |
### Task Methods and Properties
| Method/Property | Description |
| :-------------- | :------------------------------------------------------------------------------------------------ |
| **json** | Returns the JSON string representation of the task output if the output format is JSON. |
| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
| **str** | Returns the string representation of the task output, prioritizing Pydantic, then JSON, then raw. |
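For instance, assuming `task_output` came from a task configured with `output_json`, the accessors behave as described in the table above:
```python Code
# `task_output` is assumed to come from a task configured with `output_json`
print(task_output.json)       # JSON string representation of the output
print(task_output.to_dict())  # dictionary built from the JSON/Pydantic output
print(str(task_output))       # prioritizes Pydantic, then JSON, then raw
```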
### Accessing Task Outputs
Once a task has been executed, its output can be accessed through the `output` attribute of the `Task` object. The `TaskOutput` class provides various ways to interact with and present this output.
#### Example
```python Code
import json

# Example task
task = Task(
    description='Find and summarize the latest AI news',
    expected_output='A bullet list summary of the top 5 most important AI news',
    agent=research_agent,
    tools=[search_tool]
)

# Execute the crew
crew = Crew(
    agents=[research_agent],
    tasks=[task],
    verbose=True
)
result = crew.kickoff()

# Accessing the task output
task_output = task.output

print(f"Task Description: {task_output.description}")
print(f"Task Summary: {task_output.summary}")
print(f"Raw Output: {task_output.raw}")
if task_output.json_dict:
    print(f"JSON Output: {json.dumps(task_output.json_dict, indent=2)}")
if task_output.pydantic:
    print(f"Pydantic Output: {task_output.pydantic}")
```
## Task Dependencies and Context
Tasks can depend on the output of other tasks using the `context` attribute. For example:
```python Code
research_task = Task(
    description="Research the latest developments in AI",
    expected_output="A list of recent AI developments",
    agent=researcher
)

analysis_task = Task(
    description="Analyze the research findings and identify key trends",
    expected_output="Analysis report of AI trends",
    agent=analyst,
    context=[research_task]  # This task will wait for research_task to complete
)
```
## Integrating Tools with Tasks
Leverage tools from the [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools) for enhanced task performance and agent interaction.
## Creating a Task with Tools
```python Code
import os
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key"  # serper.dev API key

from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool

research_agent = Agent(
    role='Researcher',
    goal='Find and summarize the latest AI news',
    backstory="""You're a researcher at a large company.
    You're responsible for analyzing data and providing insights
    to the business.""",
    verbose=True
)

# Performs a semantic search for a specified query across the internet
search_tool = SerperDevTool()

task = Task(
    description='Find and summarize the latest AI news',
    expected_output='A bullet list summary of the top 5 most important AI news',
    agent=research_agent,
    tools=[search_tool]
)

crew = Crew(
    agents=[research_agent],
    tasks=[task],
    verbose=True
)

result = crew.kickoff()
print(result)
```
This demonstrates how tasks with specific tools can override an agent's default set for tailored task execution.
## Referring to Other Tasks
In CrewAI, the output of one task is automatically relayed into the next one, but you can explicitly define which tasks' outputs, including outputs from multiple tasks, should be used as context for another task.
This is useful when you have a task that depends on the output of another task that is not performed immediately before it. This is done through the `context` attribute of the task:
```python Code
# ...

research_ai_task = Task(
    description="Research the latest developments in AI",
    expected_output="A list of recent AI developments",
    async_execution=True,
    agent=research_agent,
    tools=[search_tool]
)

research_ops_task = Task(
    description="Research the latest developments in AI Ops",
    expected_output="A list of recent AI Ops developments",
    async_execution=True,
    agent=research_agent,
    tools=[search_tool]
)

write_blog_task = Task(
    description="Write a full blog post about the importance of AI and its latest news",
    expected_output="Full blog post that is 4 paragraphs long",
    agent=writer_agent,
    context=[research_ai_task, research_ops_task]
)

#...
```
## Asynchronous Execution
You can define a task to be executed asynchronously, meaning the crew will not wait for it to complete before continuing with the next task. This is useful for tasks that take a long time to complete, or whose results are not needed before the next tasks can run.
You can then use the `context` attribute to define in a future task that it should wait for the output of the asynchronous task to be completed.
```python Code
#...

list_ideas = Task(
    description="List of 5 interesting ideas to explore for an article about AI.",
    expected_output="Bullet point list of 5 ideas for an article.",
    agent=researcher,
    async_execution=True  # Will be executed asynchronously
)

list_important_history = Task(
    description="Research the history of AI and give me the 5 most important events.",
    expected_output="Bullet point list of 5 important events.",
    agent=researcher,
    async_execution=True  # Will be executed asynchronously
)

write_article = Task(
    description="Write an article about AI, its history, and interesting ideas.",
    expected_output="A 4 paragraph article about AI.",
    agent=writer,
    context=[list_ideas, list_important_history]  # Waits for both tasks to complete
)

#...
```
## Callback Mechanism
The callback function is executed after the task is completed, allowing for actions or notifications to be triggered based on the task's outcome.
```python Code
# ...
from crewai.tasks.task_output import TaskOutput

def callback_function(output: TaskOutput):
    # Do something after the task is completed
    # Example: Send an email to the manager
    print(f"""
        Task completed!
        Task: {output.description}
        Output: {output.raw}
    """)

research_task = Task(
    description='Find and summarize the latest AI news',
    expected_output='A bullet list summary of the top 5 most important AI news',
    agent=research_agent,
    tools=[search_tool],
    callback=callback_function
)

#...
```
## Accessing a Specific Task Output
Once a crew finishes running, you can access the output of a specific task by using the `output` attribute of the task object:
```python Code
# ...
task1 = Task(
    description='Find and summarize the latest AI news',
    expected_output='A bullet list summary of the top 5 most important AI news',
    agent=research_agent,
    tools=[search_tool]
)

#...

crew = Crew(
    agents=[research_agent],
    tasks=[task1, task2, task3],
    verbose=True
)

result = crew.kickoff()

# Returns a TaskOutput object with the description and results of the task
print(f"""
    Task completed!
    Task: {task1.output.description}
    Output: {task1.output.raw}
""")
```
## Tool Override Mechanism
Specifying tools in a task allows for dynamic adaptation of agent capabilities, emphasizing CrewAI's flexibility.
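A brief sketch (both tools are hypothetical placeholders): the task-level `tools` list takes precedence over the agent's defaults for that task only:
```python Code
from crewai import Agent, Task

# Both tools are hypothetical placeholders
writer = Agent(role='Writer', goal='...', backstory='...', tools=[default_search_tool])

focused_task = Task(
    description='Write using only the curated archive',
    expected_output='A short article',
    agent=writer,
    tools=[archive_lookup_tool]  # used instead of default_search_tool for this task
)
```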
## Error Handling and Validation Mechanisms
While creating and executing tasks, certain validation mechanisms are in place to ensure the robustness and reliability of task attributes. These include but are not limited to:
- Ensuring only one output type is set per task to maintain clear output expectations.
- Preventing the manual assignment of the `id` attribute to uphold the integrity of the unique identifier system.
These validations help maintain the consistency and reliability of task executions within the CrewAI framework.
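For example, the single-output-type rule means a task configured with both `output_json` and `output_pydantic` is expected to fail validation; a sketch with a hypothetical schema:
```python Code
from pydantic import BaseModel
from crewai import Task

class Summary(BaseModel):  # hypothetical schema
    text: str

# Expected to raise a validation error: only one output type may be set per task
bad_task = Task(
    description="Summarize the findings",
    expected_output="A short summary",
    output_json=Summary,
    output_pydantic=Summary,
)
```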
## Creating Directories when Saving Files
You can specify whether a task should create directories when saving its output to a file. This is particularly useful for organizing outputs and ensuring that file paths are correctly structured.
```python Code
# ...

save_output_task = Task(
    description='Save the summarized AI news to a file',
    expected_output='File saved successfully',
    agent=research_agent,
    tools=[file_save_tool],
    output_file='outputs/ai_news_summary.txt',
    create_directory=True
)

#...
```
## Conclusion
Tasks are the driving force behind the actions of agents in CrewAI.
By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit.
Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential.
They ensure agents are effectively prepared for their assignments and that tasks are executed as intended.


@@ -2,10 +2,9 @@
title: Testing
description: Learn how to test your CrewAI Crew and evaluate their performance.
icon: vial
mode: "wide"
---
## Overview
## Introduction
Testing is a crucial part of the development process, and it is essential to ensure that your crew is performing as expected. With CrewAI, you can easily test your crew and evaluate its performance using the built-in testing capabilities.


@@ -2,10 +2,9 @@
title: Tools
description: Understanding and leveraging tools within the CrewAI framework for agent collaboration and task execution.
icon: screwdriver-wrench
mode: "wide"
---
## Overview
## Introduction
CrewAI tools empower agents with capabilities ranging from web searching and data analysis to collaboration and delegating tasks among coworkers.
This documentation outlines how to create, integrate, and leverage these tools within the CrewAI framework, including a new focus on collaboration tools.
@@ -16,16 +15,6 @@ A tool in CrewAI is a skill or function that agents can utilize to perform vario
This includes tools from the [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools),
enabling everything from simple searches to complex interactions and effective teamwork among agents.
<Note type="info" title="Enterprise Enhancement: Tools Repository">
CrewAI AMP provides a comprehensive Tools Repository with pre-built integrations for common business systems and APIs. Deploy agents with enterprise tools in minutes instead of days.
The Enterprise Tools Repository includes:
- Pre-built connectors for popular enterprise systems
- Custom tool creation interface
- Version control and sharing capabilities
- Security and compliance features
</Note>
## Key Characteristics of Tools
- **Utility**: Crafted for tasks such as web searching, data analysis, content generation, and agent collaboration.
@@ -33,7 +22,6 @@ The Enterprise Tools Repository includes:
- **Customizability**: Provides the flexibility to develop custom tools or utilize existing ones, catering to the specific needs of agents.
- **Error Handling**: Incorporates robust error handling mechanisms to ensure smooth operation.
- **Caching Mechanism**: Features intelligent caching to optimize performance and reduce redundant operations.
- **Asynchronous Support**: Handles both synchronous and asynchronous tools, enabling non-blocking operations.
## Using CrewAI Tools
@@ -91,7 +79,7 @@ research = Task(
)
write = Task(
description='Write an engaging blog post about the AI industry, based on the research analyst's summary. Draw inspiration from the latest blog posts in the directory.',
description='Write an engaging blog post about the AI industry, based on the research analysts summary. Draw inspiration from the latest blog posts in the directory.',
expected_output='A 4-paragraph blog post formatted in markdown with engaging, informative, and accessible content, avoiding complex jargon.',
agent=writer,
output_file='blog-posts/new_post.md' # The final blog post will be saved here
@@ -118,7 +106,6 @@ Here is a list of the available tools and their descriptions:
| Tool | Description |
| :------------------------------- | :--------------------------------------------------------------------------------------------- |
| **ApifyActorsTool** | A tool that integrates Apify Actors with your workflows for web scraping and automation tasks. |
| **BrowserbaseLoadTool** | A tool for interacting with and extracting data from web browsers. |
| **CodeDocsSearchTool** | A RAG tool optimized for searching through code documentation and related technical documents. |
| **CodeInterpreterTool** | A tool for interpreting python code. |
@@ -153,7 +140,7 @@ Here is a list of the available tools and their descriptions:
## Creating your own Tools
<Tip>
Developers can craft `custom tools` tailored for their agent's needs or
Developers can craft `custom tools` tailored for their agents needs or
utilize pre-built options.
</Tip>
@@ -163,78 +150,17 @@ There are two main ways for one to create a CrewAI tool:
```python Code
from typing import Type

from crewai.tools import BaseTool
from pydantic import BaseModel, Field

class MyToolInput(BaseModel):
    """Input schema for MyCustomTool."""
    argument: str = Field(..., description="Description of the argument.")

class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = "What this tool does. It's vital for effective utilization."
    args_schema: Type[BaseModel] = MyToolInput

    def _run(self, argument: str) -> str:
        # Your tool's logic here
        return "Tool's result"
```
## Asynchronous Tool Support
CrewAI supports asynchronous tools, allowing you to implement tools that perform operations like network requests, file I/O, or other long-running work without blocking the main execution thread.
### Creating Async Tools
You can create async tools in two ways:
#### 1. Using the `tool` Decorator with Async Functions
```python Code
import asyncio

from crewai.tools import tool

@tool("fetch_data_async")
async def fetch_data_async(query: str) -> str:
    """Asynchronously fetch data based on the query."""
    # Simulate async operation
    await asyncio.sleep(1)
    return f"Data retrieved for {query}"
```
#### 2. Implementing Async Methods in Custom Tool Classes
```python Code
import asyncio

from crewai.tools import BaseTool

class AsyncCustomTool(BaseTool):
    name: str = "async_custom_tool"
    description: str = "An asynchronous custom tool"

    async def _run(self, query: str = "") -> str:
        """Asynchronously run the tool"""
        # Your async implementation here
        await asyncio.sleep(1)
        return f"Processed {query} asynchronously"
```
### Using Async Tools
Async tools work seamlessly in both standard Crew workflows and Flow-based workflows:
```python Code
from crewai import Agent, Crew
from crewai.flow.flow import Flow, start

# In standard Crew
agent = Agent(role="researcher", tools=[async_custom_tool])

# In Flow
class MyFlow(Flow):
    @start()
    async def begin(self):
        crew = Crew(agents=[agent])
        result = await crew.kickoff_async()
        return result
```
The CrewAI framework automatically handles the execution of both synchronous and asynchronous tools, so you don't need to worry about how to call them differently.
### Utilizing the `tool` Decorator
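A minimal sketch of the decorator approach (the tool name and body are illustrative):
```python Code
from crewai.tools import tool

@tool("Name of my tool")
def my_tool(question: str) -> str:
    """Clear description for what this tool is useful for, your agent will need this information to use it."""
    # Tool logic goes here
    return "Result from your custom tool"
```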


@@ -0,0 +1,67 @@
---
title: Training
description: Learn how to train your CrewAI agents by giving them feedback early on and get consistent results.
icon: dumbbell
---
## Introduction
The training feature in CrewAI allows you to train your AI agents using the command-line interface (CLI).
By running the command `crewai train -n <n_iterations>`, you can specify the number of iterations for the training process.
During training, CrewAI combines performance-optimization techniques with your human feedback.
This helps the agents improve their understanding, decision-making, and problem-solving abilities.
### Training Your Crew Using the CLI
To use the training feature, follow these steps:
1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following command:
```shell
crewai train -n <n_iterations> <filename> (optional)
```
<Tip>
Replace `<n_iterations>` with the desired number of training iterations and `<filename>` with the appropriate filename ending with `.pkl`.
</Tip>
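For example, to run five training iterations and save the results to a `.pkl` file (filename illustrative):
```shell
crewai train -n 5 trained_agents_data.pkl
```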
### Training Your Crew Programmatically
To train your crew programmatically, use the following steps:
1. Define the number of iterations for training.
2. Specify the input parameters for the training process.
3. Execute the training command within a try-except block to handle potential errors.
```python Code
n_iterations = 2
inputs = {"topic": "CrewAI Training"}
filename = "your_model.pkl"

try:
    YourCrewName_Crew().crew().train(
        n_iterations=n_iterations,
        inputs=inputs,
        filename=filename
    )
except Exception as e:
    raise Exception(f"An error occurred while training the crew: {e}")
```
### Key Points to Note
- **Positive Integer Requirement:** Ensure that the number of iterations (`n_iterations`) is a positive integer. The code will raise a `ValueError` if this condition is not met.
- **Filename Requirement:** Ensure that the filename ends with `.pkl`. The code will raise a `ValueError` if this condition is not met.
- **Error Handling:** The code handles subprocess errors and unexpected exceptions, providing error messages to the user.
It is important to note that the training process may take some time, depending on the complexity of your agents, and it will also require your feedback on each iteration.
Once the training is complete, your agents will be equipped with enhanced capabilities and knowledge, ready to tackle complex tasks and provide more consistent and valuable insights.
Remember to regularly update and retrain your agents to ensure they stay up-to-date with the latest information and advancements in the field.
Happy training with CrewAI! 🚀
