Compare commits


29 Commits

Author SHA1 Message Date
Cursor Agent
eeb8120784 chore: verify benchmark path bug false positive 2026-03-01 09:34:29 +00:00
Greyson LaLonde
1ac5801578 fix: inject tool errors as observations and resolve name collisions
2026-03-01 00:46:04 -05:00
Matt Aitchison
c00a348837 fix: upgrade pypdf 4.x → 6.7.4 to resolve 11 Dependabot alerts
pypdf <6.7.4 has multiple DoS vulnerabilities via crafted PDF streams
(FlateDecode, LZWDecode, RunLengthDecode, XFA, TreeObject, outlines).

Only basic PdfReader/PdfWriter APIs are used in crewai-files, none of
which changed in the 5.0 or 6.0 breaking releases.
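A minimal sketch of that surface (file names illustrative), which behaves the same across pypdf 4.x through 6.x:

```python
from pypdf import PdfReader, PdfWriter

# Read every page and write them back out -- the basic API noted above
reader = PdfReader("input.pdf")
writer = PdfWriter()
for page in reader.pages:
    writer.add_page(page)

with open("output.pdf", "wb") as out:
    writer.write(out)
```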
2026-02-28 17:16:45 -05:00
Matt Aitchison
6c8c6c8e12 fix: resolve critical/high Dependabot security alerts (#4652)
Upgrade pillow 10.4.0 → 12.1.1 (out-of-bounds write on PSD images),
langchain-core 0.3.76 → 0.3.83 (template injection), and
urllib3 2.6.1 → 2.6.3 (decompression-bomb bypass on redirects).

Bump docling ~=2.63.0 → ~=2.75.0 for pillow 12 compat, and add
uv overrides for pillow/langchain-core to unblock transitive pins
from fastembed and langchain-apify.
2026-02-28 13:04:35 -06:00
Musthaq Ahamad
3899910aa9 docs: sync Composio tool docs across locales (#4639)
* docs: update Composio tool docs across locales

Align the Composio automation docs with the new session-based example flow and keep localized pages in sync with the updated English content.

Made-with: Cursor

* docs: clarify manual user authentication wording

Refine the Composio auth section language to reflect session-based automatic auth during agent chat while keeping the manual `authorize` flow explicit.

Made-with: Cursor

* docs: sync updated Composio auth wording across locales

Propagate the latest English wording updates for CrewAI provider initialization and manual user authentication guidance to pt-BR and ko docs.

Made-with: Cursor
2026-02-27 13:38:45 -08:00
Greyson LaLonde
757a435ee3 chore: update changelog and version for v1.10.1a1
2026-02-27 09:58:48 -05:00
Greyson LaLonde
8bfdb188f7 feat: bump versions to 1.10.1a1 2026-02-27 09:44:47 -05:00
João Moura
1bdb9496a3 refactor: update step callback methods to support asynchronous invocation (#4633)
* refactor: update step callback methods to support asynchronous invocation

- Replaced synchronous step callback invocations with asynchronous counterparts in the CrewAgentExecutor class.
- Introduced a new async method _ainvoke_step_callback to handle step callbacks in an async context, improving responsiveness and performance in asynchronous workflows.
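A hedged sketch of how such a dispatcher can support both sync and async callbacks (the method name mirrors the commit; the body is assumed):

```python
import inspect

async def _ainvoke_step_callback(callback, step_output):
    # Await native coroutine callbacks; call sync ones and await
    # any awaitable they happen to return.
    if inspect.iscoroutinefunction(callback):
        await callback(step_output)
        return
    result = callback(step_output)
    if inspect.isawaitable(result):
        await result
```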

* chore: bump version to 1.10.1b1 across multiple files

- Updated version strings from 1.10.1b to 1.10.1b1 in various project files including pyproject.toml and __init__.py files.
- Adjusted dependency specifications to reflect the new version in relevant templates and modules.
2026-02-27 07:35:03 -03:00
Joao Moura
979aa26c3d bump new alpha version 2026-02-27 01:43:33 -08:00
João Moura
514c082882 refactor: implement lazy loading for heavy dependencies in Memory module (#4632)
- Introduced lazy imports for the Memory and EncodingFlow classes to optimize import time and reduce initial load, particularly beneficial for deployment scenarios like Celery pre-fork.
- Updated the Memory class to include new configuration options for aggregation queries, enhancing its functionality.
- Adjusted the __getattr__ method in both the crewai and memory modules to support lazy loading of specified attributes.
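A minimal sketch of the PEP 562 module-level `__getattr__` pattern this describes (the submodule path is an assumption):

```python
import importlib
from typing import Any

_LAZY_ATTRS = {"Memory": ".memory"}  # attribute -> submodule (path assumed)

def __getattr__(name: str) -> Any:
    # Import the heavy submodule only on first attribute access
    if name in _LAZY_ATTRS:
        module = importlib.import_module(_LAZY_ATTRS[name], __package__)
        return getattr(module, name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```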
2026-02-27 03:20:02 -03:00
Greyson LaLonde
c9e8068578 docs: update changelog and version for v1.10.0 2026-02-26 19:14:25 -05:00
Greyson LaLonde
df2778f08b fix: make branch for release notes
2026-02-26 18:49:13 -05:00
Greyson LaLonde
d8fea2518d feat: bump versions to 1.10.0
* feat: bump versions to 1.10.0

* chore: update tool specifications

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2026-02-26 18:31:14 -05:00
Lucas Gomide
d259150d8d Enhance MCP tool resolution and related events (#4580)
* feat: enhance MCP tool resolution

* feat: emit event when MCP configuration fails

* feat: emit event when MCP tool execution has failed

* style: resolve linter issues

* refactor: use clear and natural mcp tool name resolution

* test: fix broken tests

* fix: resolve MCP connection leaks, slug validation, duplicate connections, and httpx exception handling

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Greyson LaLonde <greyson@crewai.com>
2026-02-26 13:59:30 -08:00
Greyson LaLonde
c4a328c9d5 fix: validate tool kwargs even when empty to prevent cryptic TypeError (#4611) 2026-02-26 16:18:03 -05:00
Greyson LaLonde
373abbb6b7 fix: add dict overload to build_embedder and type default embedder 2026-02-26 16:04:28 -05:00
João Moura
86d3ee022d feat: update lancedb version and add lance-namespace packages
* chore(deps): update lancedb version and add lance-namespace packages

- Updated lancedb dependency version from 0.4.0 to 0.29.2 in multiple files.
- Added new packages: lance-namespace and lance-namespace-urllib3-client with version 0.5.2, including their dependencies and installation details.
- Enhanced MemoryTUI to display a limit on entries and improved the LanceDBStorage class with automatic background compaction and index creation for better performance.

* linter

* refactor: update memory recall limit and formatting in Agent class

- Reduced the memory recall limit from 10 to 5 in multiple locations within the Agent class.
- Updated the memory formatting to use a new `format` method in the MemoryMatch class for improved readability and metadata inclusion.

* refactor: enhance memory handling with read-only support

- Updated memory-related classes and methods to support read-only functionality, allowing for silent no-ops when attempting to remember data in read-only mode.
- Modified the LiteAgent and CrewAgentExecutorMixin classes to check for read-only status before saving memories.
- Adjusted MemorySlice and Memory classes to reflect changes in behavior when read-only is enabled.
- Updated tests to verify that memory operations behave correctly under read-only conditions.
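A hypothetical sketch of the silent no-op behavior described above:

```python
class MemorySlice:
    """Illustrative only: write path gated by a read-only flag."""

    def __init__(self, read_only: bool = False):
        self._read_only = read_only

    def remember(self, content: str) -> None:
        if self._read_only:
            return  # silent no-op in read-only mode
        self._store(content)

    def _store(self, content: str) -> None:
        ...  # persistence elided
```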

* test: set mock memory to read-write in unit tests

- Updated unit tests in test_unified_memory.py to set mock_memory._read_only to False, ensuring that memory operations can be tested in a writable state.

* fix test

* fix: preserve falsy metadata values and fix remember() return type

---------

Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
Co-authored-by: Greyson LaLonde <greyson@crewai.com>
2026-02-26 15:05:10 -05:00
Lucas Gomide
09e3b81ca3 fix: preserve null types in tool parameter schemas for LLM (#4579)
* fix: preserve null types in tool parameter schemas for LLM

Tool parameter schemas were stripping null from optional fields via
generate_model_description, forcing the LLM to provide non-null values
for those fields.
Adds a strip_null_types parameter to generate_model_description and
passes False when generating tool schemas, so optional fields keep
anyOf: [{type: T}, {type: null}].
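For illustration, the model below is hypothetical, but the schema shape is standard Pydantic v2 output:

```python
from pydantic import BaseModel

class SearchArgs(BaseModel):  # hypothetical tool-args model
    query: str
    max_results: int | None = None  # optional field

# Optional fields keep the null branch:
# {'anyOf': [{'type': 'integer'}, {'type': 'null'}], 'default': None, ...}
print(SearchArgs.model_json_schema()["properties"]["max_results"])
```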

* Update lib/crewai/src/crewai/utilities/pydantic_schema_utils.py

Co-authored-by: Gabe Milani <gabriel@crewai.com>

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Gabe Milani <gabriel@crewai.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2026-02-26 11:51:34 -05:00
Heitor Carvalho
b6d8ce5c55 docs: add litellm dependency note for non-native LLM providers (#4600)
2026-02-26 10:57:37 -03:00
Greyson LaLonde
b371f97a2f fix: map output_pydantic/output_json to native structured output
* fix: map output_pydantic/output_json to native structured output

* test: add crew+tools+structured output integration test for Gemini

* fix: re-record stale cassette for test_crew_testing_function

* fix: re-record remaining stale cassettes for native structured output

* fix: enable native structured output for lite agent and fix mypy errors
2026-02-25 17:13:34 -05:00
dependabot[bot]
017189db78 chore(deps): bump nltk in the security-updates group across 1 directory (#4598)
Bumps the security-updates group with 1 update in the / directory: [nltk](https://github.com/nltk/nltk).


Updates `nltk` from 3.9.2 to 3.9.3
- [Changelog](https://github.com/nltk/nltk/blob/develop/ChangeLog)
- [Commits](https://github.com/nltk/nltk/compare/3.9.2...3.9.3)

---
updated-dependencies:
- dependency-name: nltk
  dependency-version: 3.9.3
  dependency-type: indirect
  dependency-group: security-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-25 15:37:21 -06:00
dependabot[bot]
02d911494f chore(deps): bump cryptography (#4506)
Bumps the security-updates group with 1 update in the / directory: [cryptography](https://github.com/pyca/cryptography).


Updates `cryptography` from 46.0.4 to 46.0.5
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/46.0.4...46.0.5)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 46.0.5
  dependency-type: indirect
  dependency-group: security-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-25 15:04:07 -06:00
João Moura
8102d0a6ca feat: enhance JSON argument parsing and validation in CrewAgentExecutor and BaseTool
* feat: enhance JSON argument parsing and validation in CrewAgentExecutor and BaseTool

- Added error handling for malformed JSON tool arguments in CrewAgentExecutor, providing descriptive error messages.
- Implemented schema validation for tool arguments in BaseTool, ensuring that invalid arguments raise appropriate exceptions.
- Introduced tests to verify correct behavior for both valid and invalid JSON inputs, enhancing robustness of tool execution.

* refactor: improve argument validation in BaseTool

- Introduced a new private method to handle argument validation for tools, enhancing code clarity and reusability.
- Updated the calling method to utilize the new validation method, ensuring consistent error handling for invalid arguments.
- Enhanced exception handling to catch the validation exception specifically, providing clearer error messages for tool argument validation failures.

* feat: introduce parse_tool_call_args for improved argument parsing

- Added a new utility function, parse_tool_call_args, to handle parsing of tool call arguments from JSON strings or dictionaries, enhancing error handling for malformed JSON inputs.
- Updated CrewAgentExecutor and AgentExecutor to utilize the new parsing function, streamlining argument validation and improving clarity in error reporting.
- Introduced unit tests for parse_tool_call_args to ensure robust functionality and correct handling of various input scenarios.
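A hedged sketch of what such a utility can look like (the real signature may differ):

```python
import json

def parse_tool_call_args(raw):
    # Accept dicts as-is; parse JSON strings; fail with descriptive errors.
    if isinstance(raw, dict):
        return raw
    if isinstance(raw, str):
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError as exc:
            raise ValueError(f"Malformed JSON tool arguments: {exc}") from exc
        if not isinstance(parsed, dict):
            raise ValueError("Tool arguments must decode to a JSON object")
        return parsed
    raise TypeError(f"Unsupported tool argument type: {type(raw).__name__}")
```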

* feat: add keyword argument validation in BaseTool and Tool classes

- Introduced a new method `_validate_kwargs` in BaseTool to validate keyword arguments against the defined schema, ensuring proper argument handling.
- Updated the `run` and `arun` methods in both BaseTool and Tool classes to utilize the new validation method, improving error handling and robustness.
- Added comprehensive tests for asynchronous execution in `TestBaseToolArunValidation` to verify correct behavior for valid and invalid keyword arguments.
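A hypothetical sketch of schema-backed kwargs validation, including the empty-kwargs case the earlier fix calls out:

```python
from pydantic import BaseModel, ValidationError

def _validate_kwargs(args_schema: type[BaseModel] | None, kwargs: dict) -> dict:
    # Validate even an empty dict so missing required fields surface as a
    # clear error instead of a cryptic TypeError at call time.
    if args_schema is None:
        return kwargs
    try:
        return args_schema(**kwargs).model_dump()
    except ValidationError as exc:
        raise ValueError(f"Invalid tool arguments: {exc}") from exc
```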

* Potential fix for pull request finding 'Syntax error'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

---------

Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
2026-02-25 13:13:31 -05:00
Greyson LaLonde
ee374d01de chore: add versioning logic for devtools 2026-02-25 12:13:00 -05:00
Greyson LaLonde
9914e51199 feat: add versioned docs
starting with 1.10.0
2026-02-25 11:05:31 -05:00
nicoferdi96
2dbb83ae31 Private package registry (#4583)
adding reference and explanation for package registry

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2026-02-24 19:37:17 +01:00
Mike Plachta
7377e1aa26 fix: bedrock region was always set to "us-east-1" not respecting the env var. (#4582)
* fix: bedrock region was always set to "us-east-1" not respecting the env
var.

The code referenced AWS_REGION_NAME but never used it; unified on
AWS_DEFAULT_REGION as per the documentation.

* DRY code improvement and fix caught by tests.

* Supporting litellm configuration
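A one-line sketch of the unified lookup (fallback value taken from the old hardcoded default):

```python
import os

# Read the documented variable instead of hardcoding the region
region = os.getenv("AWS_DEFAULT_REGION", "us-east-1")
```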
2026-02-24 09:59:01 -08:00
Greyson LaLonde
51754899a2 feat: migrate CLI http client from requests to httpx
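A hedged sketch of the requests-to-httpx call pattern such a migration implies (function name and timeout are illustrative):

```python
import httpx

def fetch_json(url: str) -> dict:
    # Context-managed client replaces a requests.Session
    with httpx.Client(timeout=30.0) as client:
        response = client.get(url)
        response.raise_for_status()  # same failure semantics as requests
        return response.json()
```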
2026-02-20 18:21:05 -05:00
Greyson LaLonde
71b4f8402a fix: ensure callbacks are run/awaited if promise
2026-02-20 13:15:50 -05:00
167 changed files with 11393 additions and 6071 deletions

View File

@@ -21,7 +21,6 @@ OPENROUTER_API_KEY=fake-openrouter-key
AWS_ACCESS_KEY_ID=fake-aws-access-key
AWS_SECRET_ACCESS_KEY=fake-aws-secret-key
AWS_DEFAULT_REGION=us-east-1
AWS_REGION_NAME=us-east-1
# -----------------------------------------------------------------------------
# Azure OpenAI Configuration

View File

@@ -1,8 +1,6 @@
name: Publish to PyPI
on:
repository_dispatch:
types: [deployment-tests-passed]
workflow_dispatch:
inputs:
release_tag:
@@ -20,11 +18,8 @@ jobs:
- name: Determine release tag
id: release
run: |
# Priority: workflow_dispatch input > repository_dispatch payload > default branch
if [ -n "${{ inputs.release_tag }}" ]; then
echo "tag=${{ inputs.release_tag }}" >> $GITHUB_OUTPUT
elif [ -n "${{ github.event.client_payload.release_tag }}" ]; then
echo "tag=${{ github.event.client_payload.release_tag }}" >> $GITHUB_OUTPUT
else
echo "tag=" >> $GITHUB_OUTPUT
fi

View File

@@ -1,18 +0,0 @@
name: Trigger Deployment Tests
on:
release:
types: [published]
jobs:
trigger:
name: Trigger deployment tests
runs-on: ubuntu-latest
steps:
- name: Trigger deployment tests
uses: peter-evans/repository-dispatch@v3
with:
token: ${{ secrets.CREWAI_DEPLOYMENTS_PAT }}
repository: ${{ secrets.CREWAI_DEPLOYMENTS_REPOSITORY }}
event-type: crewai-release
client-payload: '{"release_tag": "${{ github.event.release.tag_name }}", "release_name": "${{ github.event.release.name }}"}'

File diff suppressed because it is too large

View File

@@ -1,12 +0,0 @@
---
title: "Agents: Examples"
description: "Runnable examples for robust agent configuration and execution."
icon: "rocket-launch"
mode: "wide"
---
## Example links
- [/en/guides/agents/crafting-effective-agents](/en/guides/agents/crafting-effective-agents)
- [/en/learn/customizing-agents](/en/learn/customizing-agents)
- [/en/learn/coding-agents](/en/learn/coding-agents)

View File

@@ -1,32 +0,0 @@
---
title: "Agents: Concepts"
description: "Agent role contracts, task boundaries, and decision criteria for robust agent behavior."
icon: "user"
mode: "wide"
---
## When to use
- You need specialized behavior with explicit role and goal.
- You need tool-enabled execution under constraints.
## When not to use
- Static transformations are enough without model reasoning.
- Task can be solved by deterministic code only.
## Core decisions
| Decision | Choose this when |
|---|---|
| Single agent | Narrow scope, low coordination needs |
| Multi-agent crew | Distinct expertise and review loops needed |
| Tool-enabled agent | Model needs external actions or data |
## Canonical links
- Reference: [/en/ai/agents/reference](/en/ai/agents/reference)
- Patterns: [/en/ai/agents/patterns](/en/ai/agents/patterns)
- Troubleshooting: [/en/ai/agents/troubleshooting](/en/ai/agents/troubleshooting)
- Examples: [/en/ai/agents/examples](/en/ai/agents/examples)
- Existing docs: [/en/concepts/agents](/en/concepts/agents)

View File

@@ -1,17 +0,0 @@
---
title: "Agents: Patterns"
description: "Practical agent patterns for role design, tool boundaries, and reliable outputs."
icon: "diagram-project"
mode: "wide"
---
## Patterns
1. Role + reviewer pair
- One agent drafts, one agent validates.
2. Tool-bounded agent
- Restrict tool list to minimal action set.
3. Structured output agent
- Force JSON or schema output for automation pipelines.

View File

@@ -1,22 +0,0 @@
---
title: "Agents: Reference"
description: "Reference for agent fields, prompt contracts, tool usage, and output constraints."
icon: "book"
mode: "wide"
---
## Agent contract
- `role`: stable operating identity
- `goal`: measurable completion objective
- `backstory`: bounded style and context
- `tools`: allowed action surface
## Output contract
- Prefer structured outputs for machine workflows.
- Define failure behavior for missing tool data.
## Canonical source
Primary API details live in [/en/concepts/agents](/en/concepts/agents).

View File

@@ -1,12 +0,0 @@
---
title: "Agents: Troubleshooting"
description: "Diagnose and fix common agent reliability and instruction-following failures."
icon: "circle-exclamation"
mode: "wide"
---
## Common issues
- Hallucinated tool results: require tool-call evidence in output.
- Prompt drift: tighten role and success criteria.
- Verbose but low-signal output: enforce concise schema output.

View File

@@ -1,12 +0,0 @@
---
title: "Crews: Examples"
description: "Runnable crew examples for sequential and hierarchical execution."
icon: "rocket-launch"
mode: "wide"
---
## Example links
- [/en/guides/crews/first-crew](/en/guides/crews/first-crew)
- [/en/learn/sequential-process](/en/learn/sequential-process)
- [/en/learn/hierarchical-process](/en/learn/hierarchical-process)

View File

@@ -1,26 +0,0 @@
---
title: "Crews: Concepts"
description: "When to use crews, process selection, delegation boundaries, and collaboration strategy."
icon: "users"
mode: "wide"
---
## When to use
- You need multiple agents with specialized roles.
- You need staged execution and reviewer loops.
## Process decision table
| Process | Best for |
|---|---|
| Sequential | Linear pipelines and deterministic ordering |
| Hierarchical | Manager-controlled planning and delegation |
## Canonical links
- Reference: [/en/ai/crews/reference](/en/ai/crews/reference)
- Patterns: [/en/ai/crews/patterns](/en/ai/crews/patterns)
- Troubleshooting: [/en/ai/crews/troubleshooting](/en/ai/crews/troubleshooting)
- Examples: [/en/ai/crews/examples](/en/ai/crews/examples)
- Existing docs: [/en/concepts/crews](/en/concepts/crews)

View File

@@ -1,12 +0,0 @@
---
title: "Crews: Patterns"
description: "Production crew patterns for decomposition, review loops, and hybrid orchestration with Flows."
icon: "diagram-project"
mode: "wide"
---
## Patterns
1. Researcher + writer + reviewer
2. Manager-directed hierarchical crew
3. Flow-orchestrated multi-crew pipeline

View File

@@ -1,21 +0,0 @@
---
title: "Crews: Reference"
description: "Reference for crew composition, process semantics, task context passing, and execution modes."
icon: "book"
mode: "wide"
---
## Crew contract
- `agents`: available executors
- `tasks`: work units with expected output
- `process`: ordering and delegation semantics
## Runtime
- `kickoff()` for synchronous runs
- `kickoff_async()` for async execution
## Canonical source
Primary API details live in [/en/concepts/crews](/en/concepts/crews).

View File

@@ -1,12 +0,0 @@
---
title: "Crews: Troubleshooting"
description: "Common multi-agent coordination failures and practical fixes."
icon: "circle-exclamation"
mode: "wide"
---
## Common issues
- Agents overlap on responsibilities: tighten role boundaries.
- Output inconsistency: standardize expected outputs per task.
- Slow runs: reduce unnecessary handoffs and model size.

View File

@@ -1,17 +0,0 @@
---
title: "Flows: Examples"
description: "Runnable end-to-end examples for production flow orchestration."
icon: "rocket-launch"
mode: "wide"
---
## Canonical examples
<CardGroup cols={2}>
<Card title="Flowstate Chat History" icon="comments" href="/en/learn/flowstate-chat-history">
Persistent chat history with summary compaction and memory scope.
</Card>
<Card title="Flows Concepts Example" icon="arrow-progress" href="/en/concepts/flows">
Full API and feature-oriented flow examples, including routers and persistence.
</Card>
</CardGroup>

View File

@@ -1,39 +0,0 @@
---
title: "Flows: Concepts"
description: "When to use Flows, when not to use them, and key design constraints for production orchestration."
icon: "arrow-progress"
mode: "wide"
---
## When to use
- You need deterministic orchestration, branching, and resumable execution.
- You need explicit state transitions across steps.
- You need persistence, routing, and event-driven control.
## When not to use
- A single prompt/response interaction is enough.
- You only need one agent call without orchestration logic.
## Core decisions
| Decision | Choose this when |
|---|---|
| Unstructured state | Fast prototyping, highly dynamic fields |
| Structured state | Stable contracts, team development, type safety |
| `@persist()` | Long-running workflows and recovery requirements |
| Router labels | Deterministic branch handling |
## Canonical links
- Reference: [/en/ai/flows/reference](/en/ai/flows/reference)
- Patterns: [/en/ai/flows/patterns](/en/ai/flows/patterns)
- Troubleshooting: [/en/ai/flows/troubleshooting](/en/ai/flows/troubleshooting)
- Examples: [/en/ai/flows/examples](/en/ai/flows/examples)
## Existing docs
- [/en/concepts/flows](/en/concepts/flows)
- [/en/guides/flows/mastering-flow-state](/en/guides/flows/mastering-flow-state)
- [/en/learn/flowstate-chat-history](/en/learn/flowstate-chat-history)

View File

@@ -1,29 +0,0 @@
---
title: "Flows: Patterns"
description: "Production flow patterns: triage routing, flowstate chat history, and human-in-the-loop checkpoints."
icon: "diagram-project"
mode: "wide"
---
## Recommended patterns
1. Triage router flow
- Inputs: normalized request payload
- Output: deterministic route label + action
- Reference: [/en/concepts/flows](/en/concepts/flows)
2. Flowstate chat history
- Inputs: `session_id`, `last_user_message`
- Output: assistant reply + compact context state
- Reference: [/en/learn/flowstate-chat-history](/en/learn/flowstate-chat-history)
3. Human feedback gates
- Inputs: generated artifact + reviewer feedback
- Output: approved/rejected/revision path
- Reference: [/en/learn/human-feedback-in-flows](/en/learn/human-feedback-in-flows)
## Pattern requirements
- declare explicit input schema
- define expected output shape
- list failure modes and retries

View File

@@ -1,34 +0,0 @@
---
title: "Flows: Reference"
description: "API-oriented reference for Flow decorators, lifecycle semantics, state, routing, and persistence."
icon: "book"
mode: "wide"
---
## Decorators
- `@start()` entrypoint, optional conditional trigger
- `@listen(...)` downstream method subscription
- `@router(...)` label-based deterministic routing
- `@persist()` automatic state persistence checkpoints
## Runtime contracts
- `kickoff(inputs=...)` initializes or updates run inputs.
- Final output is the value from the last completed method.
- `self.state` always has an auto-generated `id`.
## State contracts
- Use typed state for durable workflows.
- Keep control fields explicit (`route`, `status`, `retry_count`).
- Avoid storing unbounded raw transcripts in state.
## Resume and recovery
- Use persistence for recoverable runs.
- Keep idempotent step logic for safe retries.
## Canonical source
Primary API details live in [/en/concepts/flows](/en/concepts/flows).
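A minimal skeleton wiring these decorators together (flow, labels, and state fields are illustrative):
```python Code
from crewai.flow.flow import Flow, listen, router, start
from crewai.flow.persistence import persist

@persist()
class TriageFlow(Flow):
    @start()
    def ingest(self):
        self.state["priority"] = True  # unstructured state is dict-like
        return "ingested"

    @router(ingest)
    def route(self):
        # Returned label must match a @listen("label") exactly
        return "fast" if self.state["priority"] else "slow"

    @listen("fast")
    def handle_fast(self):
        return "handled on the fast path"

    @listen("slow")
    def handle_slow(self):
        return "handled on the slow path"
```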

View File

@@ -1,28 +0,0 @@
---
title: "Flows: Troubleshooting"
description: "Common flow failures, causes, and fixes for state, routing, persistence, and resumption."
icon: "circle-exclamation"
mode: "wide"
---
## Common issues
### Branch did not trigger
- Cause: router label mismatch.
- Fix: align returned label with `@listen("label")` exactly.
### State fields missing
- Cause: untyped dynamic writes or missing inputs.
- Fix: switch to typed state and validate required fields at `@start()`.
### Context window blow-up
- Cause: raw message accumulation.
- Fix: use sliding window + summary compaction pattern.
### Resume behavior inconsistent
- Cause: non-idempotent side effects in retried steps.
- Fix: make side-effecting calls idempotent and record execution markers in state.

View File

@@ -1,12 +0,0 @@
---
title: "LLMs: Examples"
description: "Concrete examples for model setup, routing, and output-control patterns."
icon: "rocket-launch"
mode: "wide"
---
## Example links
- [/en/concepts/llms](/en/concepts/llms)
- [/en/learn/llm-connections](/en/learn/llm-connections)
- [/en/learn/custom-llm](/en/learn/custom-llm)

View File

@@ -1,27 +0,0 @@
---
title: "LLMs: Concepts"
description: "Model selection strategy, cost-quality tradeoffs, and reliability posture for CrewAI systems."
icon: "microchip-ai"
mode: "wide"
---
## When to use advanced LLM configuration
- You need predictable quality, latency, and cost control.
- You need model routing by task type.
## Core decisions
| Decision | Choose this when |
|---|---|
| Single model | Small systems with uniform task profile |
| Routed models | Mixed workloads with different quality/cost needs |
| Structured output | Automation pipelines and strict parsing needs |
## Canonical links
- Reference: [/en/ai/llms/reference](/en/ai/llms/reference)
- Patterns: [/en/ai/llms/patterns](/en/ai/llms/patterns)
- Troubleshooting: [/en/ai/llms/troubleshooting](/en/ai/llms/troubleshooting)
- Examples: [/en/ai/llms/examples](/en/ai/llms/examples)
- Existing docs: [/en/concepts/llms](/en/concepts/llms)

View File

@@ -1,17 +0,0 @@
---
title: "LLMs: Patterns"
description: "Model routing, reliability defaults, and structured outputs for production AI workflows."
icon: "diagram-project"
mode: "wide"
---
## Patterns
1. Role-based model routing
2. Reliability defaults (`timeout`, `max_retries`, low temperature)
3. JSON-first outputs for machine consumption
4. Responses API for multi-turn reasoning flows
## Reference
- [/en/concepts/llms#production-llm-patterns](/en/concepts/llms#production-llm-patterns)

View File

@@ -1,25 +0,0 @@
---
title: "LLMs: Reference"
description: "Provider-agnostic LLM configuration reference for CrewAI projects."
icon: "book"
mode: "wide"
---
## Common parameters
- `model`
- `temperature`
- `max_tokens`
- `timeout`
- `max_retries`
- `response_format`
## Contract guidance
- Set low temperature for extraction/classification.
- Use structured outputs for downstream automation.
- Set explicit timeout and retry policy for production.
## Canonical source
Primary API details live in [/en/concepts/llms](/en/concepts/llms).

View File

@@ -1,12 +0,0 @@
---
title: "LLMs: Troubleshooting"
description: "Fix common model behavior failures: drift, latency spikes, malformed output, and cost overruns."
icon: "circle-exclamation"
mode: "wide"
---
## Common issues
- Malformed JSON: enforce `response_format` and validate at boundary.
- Latency spikes: route heavy tasks to smaller models when acceptable.
- Cost growth: add budget-aware model routing and truncation rules.

View File

@@ -1,11 +0,0 @@
---
title: "Memory: Examples"
description: "Runnable examples for scoped storage and semantic retrieval in CrewAI."
icon: "rocket-launch"
mode: "wide"
---
## Example links
- [/en/concepts/memory](/en/concepts/memory)
- [/en/learn/flowstate-chat-history](/en/learn/flowstate-chat-history)

View File

@@ -1,24 +0,0 @@
---
title: "Memory: Concepts"
description: "Designing recall systems with scope boundaries and state-vs-memory separation."
icon: "database"
mode: "wide"
---
## When to use memory
- You need semantic recall across runs.
- You need long-term context outside immediate flow state.
## When to use state instead
- Data is only needed for current control flow.
- Data must remain deterministic and explicit per step.
## Canonical links
- Reference: [/en/ai/memory/reference](/en/ai/memory/reference)
- Patterns: [/en/ai/memory/patterns](/en/ai/memory/patterns)
- Troubleshooting: [/en/ai/memory/troubleshooting](/en/ai/memory/troubleshooting)
- Examples: [/en/ai/memory/examples](/en/ai/memory/examples)
- Existing docs: [/en/concepts/memory](/en/concepts/memory)

View File

@@ -1,17 +0,0 @@
---
title: "Memory: Patterns"
description: "Practical memory patterns for session recall, scoped retrieval, and hybrid flow-state designs."
icon: "diagram-project"
mode: "wide"
---
## Patterns
1. Session-scoped recall (`/chat/{session_id}`)
2. Project-scoped knowledge (`/project/{project_id}`)
3. Hybrid pattern: flow state for control, memory for long-tail context
## Reference
- [/en/learn/flowstate-chat-history](/en/learn/flowstate-chat-history)
- [/en/guides/flows/mastering-flow-state](/en/guides/flows/mastering-flow-state)

View File

@@ -1,23 +0,0 @@
---
title: "Memory: Reference"
description: "Reference for remember/recall contracts, scopes, and retrieval tuning."
icon: "book"
mode: "wide"
---
## API surface
- `remember(content, scope=...)`
- `recall(query, limit=...)`
- `extract_memories(text)`
- `scope(path)` and `subscope(name)`
## Scope rules
- use `/{entity_type}/{identifier}` paths
- keep hierarchy shallow
- isolate sessions by stable identifiers
## Canonical source
Primary API details live in [/en/concepts/memory](/en/concepts/memory).
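A hypothetical usage sketch of these contracts, following the flow-level `remember`/`recall` calls shown elsewhere in this diff:
```python Code
# Inside a flow method; scope path follows /{entity_type}/{identifier}
self.remember(
    "Customer prefers email follow-ups",
    scope="/chat/session-8842",
)
matches = self.recall("follow-up preferences", limit=3)
```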

View File

@@ -1,12 +0,0 @@
---
title: "Memory: Troubleshooting"
description: "Diagnose poor recall quality, scope leakage, and stale memory retrieval."
icon: "circle-exclamation"
mode: "wide"
---
## Common issues
- Irrelevant recall: tighten scopes and query wording.
- Missing recall: check scope path and recency weighting.
- Scope leakage: avoid shared broad scopes for unrelated workflows.

View File

@@ -1,54 +0,0 @@
---
title: "AI-First Documentation"
description: "Canonical, agent-optimized documentation map for Flows, Agents, Crews, LLMs, Memory, and Tools."
icon: "sitemap"
mode: "wide"
---
## Purpose
This section is the canonical map for AI agents and developers.
Use it when you need:
- one source of truth per domain
- predictable page structure
- runnable patterns with explicit inputs and outputs
## Domain Packs
<CardGroup cols={3}>
<Card title="Flows" icon="arrow-progress" href="/en/ai/flows/index">
State, routing, persistence, resume, and orchestration lifecycle.
</Card>
<Card title="Agents" icon="user" href="/en/ai/agents/index">
Agent contracts, tool boundaries, prompt roles, and output discipline.
</Card>
<Card title="Crews" icon="users" href="/en/ai/crews/index">
Multi-agent execution, process choice, delegation, and coordination.
</Card>
<Card title="LLMs" icon="microchip-ai" href="/en/ai/llms/index">
Model configuration contracts, routing, reliability defaults, and providers.
</Card>
<Card title="Memory" icon="database" href="/en/ai/memory/index">
Retrieval semantics, scope design, and state-vs-memory architecture.
</Card>
<Card title="Tools" icon="wrench" href="/en/ai/tools/index">
Tool safety, schema contracts, retries, and integration patterns.
</Card>
</CardGroup>
## Writing Contract
Every domain follows the same structure:
1. Concepts (`index`)
2. Reference (`reference`)
3. Patterns (`patterns`)
4. Troubleshooting (`troubleshooting`)
5. Examples (`examples`)
## Deprecation Policy
When a page is replaced:
- keep a redirect for the old URL
- keep one canonical destination
- avoid duplicated conceptual prose

View File

@@ -1,12 +0,0 @@
---
title: "Tools: Examples"
description: "Practical examples for tool-driven agents and crews."
icon: "rocket-launch"
mode: "wide"
---
## Example links
- [/en/tools/overview](/en/tools/overview)
- [/en/learn/create-custom-tools](/en/learn/create-custom-tools)
- [/en/learn/tool-hooks](/en/learn/tool-hooks)

View File

@@ -1,25 +0,0 @@
---
title: "Tools: Concepts"
description: "Tool selection strategy, safety boundaries, and reliability rules for agentic execution."
icon: "wrench"
mode: "wide"
---
## When to use tools
- Agents need external data or side effects.
- Deterministic systems must be integrated into agent workflows.
## Tool safety rules
- define clear input schemas
- validate outputs before downstream use
- isolate privileged tools behind policy checks
## Canonical links
- Reference: [/en/ai/tools/reference](/en/ai/tools/reference)
- Patterns: [/en/ai/tools/patterns](/en/ai/tools/patterns)
- Troubleshooting: [/en/ai/tools/troubleshooting](/en/ai/tools/troubleshooting)
- Examples: [/en/ai/tools/examples](/en/ai/tools/examples)
- Existing docs: [/en/concepts/tools](/en/concepts/tools)

View File

@@ -1,12 +0,0 @@
---
title: "Tools: Patterns"
description: "Tool execution patterns for retrieval, action safety, and response grounding."
icon: "diagram-project"
mode: "wide"
---
## Patterns
1. Read-first then write pattern
2. Validation gate before side effects
3. Fallback tool chains for degraded mode

View File

@@ -1,22 +0,0 @@
---
title: "Tools: Reference"
description: "Reference for tool invocation contracts, argument schemas, and runtime safeguards."
icon: "book"
mode: "wide"
---
## Tool contract
- deterministic input schema
- stable output schema
- explicit error behavior
## Runtime safeguards
- timeout and retry policy
- idempotency for side effects
- validation before commit
## Canonical source
Primary API details live in [/en/concepts/tools](/en/concepts/tools).
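An illustrative tool satisfying this contract (tool and backend are hypothetical):
```python Code
from crewai.tools import BaseTool
from pydantic import BaseModel, Field

class LookupArgs(BaseModel):
    order_id: str = Field(description="Order identifier to look up")

class OrderLookupTool(BaseTool):
    name: str = "order_lookup"
    description: str = "Fetch the status of an order by its ID."
    args_schema: type[BaseModel] = LookupArgs

    def _run(self, order_id: str) -> str:
        # Explicit error behavior: raise instead of returning partial data
        raise NotImplementedError("backend lookup goes here")
```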

View File

@@ -1,12 +0,0 @@
---
title: "Tools: Troubleshooting"
description: "Common tool-call failures and fixes for schema mismatch, retries, and side effects."
icon: "circle-exclamation"
mode: "wide"
---
## Common issues
- Schema mismatch: align tool args with declared model output schema.
- Repeated side effects: add idempotency keys.
- Tool timeouts: define retries with bounded backoff.

View File

@@ -4,6 +4,106 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Feb 27, 2026">
## v1.10.1a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.1a1)
## What's Changed
### Features
- Implement asynchronous invocation support in step callback methods
- Implement lazy loading for heavy dependencies in Memory module
### Documentation
- Update changelog and version for v1.10.0
### Refactoring
- Refactor step callback methods to support asynchronous invocation
- Refactor to implement lazy loading for heavy dependencies in Memory module
### Bug Fixes
- Fix branch for release notes
## Contributors
@greysonlalonde, @joaomdmoura
</Update>
<Update label="Feb 27, 2026">
## v1.10.1a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.1a1)
## What's Changed
### Refactoring
- Refactor step callback methods to support asynchronous invocation
- Implement lazy loading for heavy dependencies in Memory module
### Documentation
- Update changelog and version for v1.10.0
### Bug Fixes
- Make branch for release notes
## Contributors
@greysonlalonde, @joaomdmoura
</Update>
<Update label="Feb 26, 2026">
## v1.10.0
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.0)
## What's Changed
### Features
- Enhance MCP tool resolution and related events
- Update lancedb version and add lance-namespace packages
- Enhance JSON argument parsing and validation in CrewAgentExecutor and BaseTool
- Migrate CLI HTTP client from requests to httpx
- Add versioned documentation
- Add yanked detection for version notes
- Implement user input handling in Flows
- Enhance HITL self-loop functionality in human feedback integration tests
- Add started_event_id and set in eventbus
- Auto update tools.specs
### Bug Fixes
- Validate tool kwargs even when empty to prevent cryptic TypeError
- Preserve null types in tool parameter schemas for LLM
- Map output_pydantic/output_json to native structured output
- Ensure callbacks are run/awaited if promise
- Capture method name in exception context
- Preserve enum type in router result; improve types
- Fix cyclic flows silently breaking when persistence ID is passed in inputs
- Correct CLI flag format from --skip-provider to --skip_provider
- Ensure OpenAI tool call stream is finalized
- Resolve complex schema $ref pointers in MCP tools
- Enforce additionalProperties=false in schemas
- Reject reserved script names for crew folders
- Resolve race condition in guardrail event emission test
### Documentation
- Add litellm dependency note for non-native LLM providers
- Clarify NL2SQL security model and hardening guidance
- Add 96 missing actions across 9 integrations
### Refactoring
- Refactor crew to provider
- Extract HITL to provider pattern
- Improve hook typing and registration
## Contributors
@dependabot[bot], @github-actions[bot], @github-code-quality[bot], @greysonlalonde, @heitorado, @hobostay, @joaomdmoura, @johnvan7, @jonathansampson, @lorenzejay, @lucasgomide, @mattatcha, @mplachta, @nicoferdi96, @theCyberTech, @thiagomoretto, @vinibrsl
</Update>
<Update label="Jan 26, 2026">
## v1.9.0

View File

@@ -23,17 +23,6 @@ In the CrewAI framework, an `Agent` is an autonomous unit that can:
at creating content.
</Tip>
## When to Use Agents
- You need role-specific reasoning and decision-making.
- You need tool-enabled execution with delegated responsibilities.
- You need reusable behavioral units across tasks and crews.
## When Not to Use Agents
- Deterministic business logic in plain code is sufficient.
- A static transformation without reasoning is sufficient.
<Note type="info" title="Enterprise Enhancement: Visual Agent Builder">
CrewAI AMP includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time.

View File

@@ -9,17 +9,6 @@ mode: "wide"
A crew in crewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow.
## When to Use Crews
- You need multiple specialized agents collaborating on a shared outcome.
- You need process-level orchestration (`sequential` or `hierarchical`).
- You need task-level handoffs and context propagation.
## When Not to Use Crews
- A single agent can complete the work end-to-end.
- You do not need multi-step task decomposition.
## Crew Attributes
| Attribute | Parameters | Description |
@@ -428,17 +417,3 @@ crewai replay -t <task_id>
```
These commands let you replay from your latest kickoff tasks, still retaining context from previously executed tasks.
## Common Failure Modes
### Agents overlap responsibilities
- Cause: role/goal definitions are too broad.
- Fix: tighten role boundaries and task ownership.
### Hierarchical runs stall or degrade
- Cause: weak manager configuration or unclear delegation criteria.
- Fix: define a stronger manager objective and explicit completion criteria.
### Crew outputs are inconsistent
- Cause: expected outputs are underspecified across tasks.
- Fix: enforce structured outputs and stronger task contracts.

View File

@@ -19,121 +19,82 @@ Flows allow you to create structured, event-driven workflows. They provide a sea
4. **Flexible Control Flow**: Implement conditional logic, loops, and branching within your workflows.
## When to Use Flows
- You need deterministic orchestration and branching logic.
- You need explicit state transitions across multiple steps.
- You need resumable workflows with persistence.
- You need to combine crews, direct model calls, and Python logic in one runtime.
## When Not to Use Flows
- A single prompt/response call is sufficient.
- A single crew kickoff with no orchestration logic is sufficient.
- You do not need stateful multi-step execution.
## Getting Started
The example below shows a realistic Flow for support-ticket triage. It demonstrates features teams use in production: typed state, routing, memory access, and persistence.
Let's create a simple Flow where you will use OpenAI to generate a random city in one task and then use that city to generate a fun fact in another task.
```python Code
from crewai.flow.flow import Flow, listen, router, start
from crewai.flow.persistence import persist
from pydantic import BaseModel, Field
from crewai.flow.flow import Flow, listen, start
from dotenv import load_dotenv
from litellm import completion
class SupportTriageState(BaseModel):
ticket_id: str = ""
customer_tier: str = "standard" # standard | enterprise
issue: str = ""
urgency: str = "normal"
route: str = ""
draft_reply: str = ""
internal_notes: list[str] = Field(default_factory=list)
class ExampleFlow(Flow):
model = "gpt-4o-mini"
@persist()
class SupportTriageFlow(Flow[SupportTriageState]):
@start()
def ingest_ticket(self):
# kickoff(inputs={...}) is merged into typed state fields
print(f"Flow State ID: {self.state.id}")
def generate_city(self):
print("Starting flow")
# Each flow state automatically gets a unique ID
print(f"Flow State ID: {self.state['id']}")
self.remember(
f"Ticket {self.state.ticket_id}: {self.state.issue}",
scope=f"/support/{self.state.ticket_id}",
response = completion(
model=self.model,
messages=[
{
"role": "user",
"content": "Return the name of a random city in the world.",
},
],
)
issue = self.state.issue.lower()
if "security" in issue or "breach" in issue:
self.state.urgency = "critical"
elif self.state.customer_tier == "enterprise":
self.state.urgency = "high"
else:
self.state.urgency = "normal"
random_city = response["choices"][0]["message"]["content"]
# Store the city in our state
self.state["city"] = random_city
print(f"Random City: {random_city}")
return self.state.issue
return random_city
@router(ingest_ticket)
def route_ticket(self):
issue = self.state.issue.lower()
if "security" in issue or "breach" in issue:
self.state.route = "security"
return "security_review"
if self.state.customer_tier == "enterprise" or self.state.urgency == "high":
self.state.route = "priority"
return "priority_queue"
self.state.route = "standard"
return "standard_queue"
@listen("security_review")
def handle_security(self):
self.state.internal_notes.append("Escalated to Security Incident Response")
self.state.draft_reply = (
"We have escalated your case to our security team and will update you shortly."
@listen(generate_city)
def generate_fun_fact(self, random_city):
response = completion(
model=self.model,
messages=[
{
"role": "user",
"content": f"Tell me a fun fact about {random_city}",
},
],
)
return self.state.draft_reply
@listen("priority_queue")
def handle_priority(self):
history = self.recall("SLA commitments for enterprise support", limit=2)
self.state.internal_notes.append(
f"Loaded {len(history)} memory hits for priority handling"
)
self.state.draft_reply = (
"Your ticket has been prioritized and assigned to a senior support engineer."
)
return self.state.draft_reply
@listen("standard_queue")
def handle_standard(self):
self.state.internal_notes.append("Routed to standard support queue")
self.state.draft_reply = "Thanks for reporting this. Our team will follow up soon."
return self.state.draft_reply
fun_fact = response["choices"][0]["message"]["content"]
# Store the fun fact in our state
self.state["fun_fact"] = fun_fact
return fun_fact
flow = SupportTriageFlow()
flow.plot("support_triage_flow")
result = flow.kickoff(
inputs={
"ticket_id": "TCK-1024",
"customer_tier": "enterprise",
"issue": "Cannot access SSO after enabling new policy",
}
)
print("Final reply:", result)
print("Route:", flow.state.route)
print("Notes:", flow.state.internal_notes)
flow = ExampleFlow()
flow.plot()
result = flow.kickoff()
print(f"Generated fun fact: {result}")
```
![Flow Visual image](/images/crewai-flow-1.png)
In this example, one flow demonstrates several core features together:
1. `@start()` initializes and normalizes state for downstream steps.
2. `@router()` performs deterministic branching into labeled routes.
3. Route listeners implement lane-specific behavior (`security`, `priority`, `standard`).
4. `@persist()` keeps the flow state recoverable between runs.
5. Built-in memory methods (`remember`, `recall`) add durable context beyond a single method call.
In the above example, we have created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for the output of the `generate_city` task.
This pattern mirrors typical production workflows where request classification, policy-aware routing, and auditable state all happen in one orchestrated flow.
Each Flow instance automatically receives a unique identifier (UUID) in its state, which helps track and manage flow executions. The state can also store additional data (like the generated city and fun fact) that persists throughout the flow's execution.
When you run the Flow, it will:
1. Generate a unique ID for the flow state
2. Generate a random city and store it in the state
3. Generate a fun fact about that city and store it in the state
4. Print the results to the console
The state's unique ID and stored data can be useful for tracking flow executions and maintaining context between tasks.
**Note:** Ensure you have set up your `.env` file to store your `OPENAI_API_KEY`. This key is necessary for authenticating requests to the OpenAI API.
### @start()
@@ -156,15 +117,15 @@ The `@listen()` decorator can be used in several ways:
1. **Listening to a Method by Name**: You can pass the name of the method you want to listen to as a string. When that method completes, the listener method will be triggered.
```python Code
@listen("upstream_method")
def downstream_method(self, upstream_result):
@listen("generate_city")
def generate_fun_fact(self, random_city):
# Implementation
```
2. **Listening to a Method Directly**: You can pass the method itself. When that method completes, the listener method will be triggered.
```python Code
@listen(upstream_method)
def downstream_method(self, upstream_result):
@listen(generate_city)
def generate_fun_fact(self, random_city):
# Implementation
```
@@ -780,17 +741,201 @@ This example demonstrates several key features of using Agents in flows:
3. **Tool Integration**: Agents can use tools (like `WebsiteSearchTool`) to enhance their capabilities.
## Multi-Crew Flows and Plotting
## Adding Crews to Flows
Detailed build walkthroughs and project scaffolding are documented in guide pages to keep this concepts page focused.
Creating a flow with multiple crews in CrewAI is straightforward.
- Build your first flow: [/en/guides/flows/first-flow](/en/guides/flows/first-flow)
- Master state and persistence: [/en/guides/flows/mastering-flow-state](/en/guides/flows/mastering-flow-state)
- Real-world chat-state pattern: [/en/learn/flowstate-chat-history](/en/learn/flowstate-chat-history)
You can generate a new CrewAI project that includes all the scaffolding needed to create a flow with multiple crews by running the following command:
For visualization:
- Use `flow.plot("my_flow_plot")` in code, or
- Use `crewai flow plot` in CLI projects.
```bash
crewai create flow name_of_flow
```
This command will generate a new CrewAI project with the necessary folder structure. The generated project includes a prebuilt crew called `poem_crew` that is already working. You can use this crew as a template by copying, pasting, and editing it to create other crews.
### Folder Structure
After running the `crewai create flow name_of_flow` command, you will see a folder structure similar to the following:
| Directory/File | Description |
| :--------------------- | :----------------------------------------------------------------- |
| `name_of_flow/` | Root directory for the flow. |
| ├── `crews/` | Contains directories for specific crews. |
| │ └── `poem_crew/` | Directory for the "poem_crew" with its configurations and scripts. |
| │ ├── `config/` | Configuration files directory for the "poem_crew". |
| │ │ ├── `agents.yaml` | YAML file defining the agents for "poem_crew". |
| │ │ └── `tasks.yaml` | YAML file defining the tasks for "poem_crew". |
| │ ├── `poem_crew.py` | Script for "poem_crew" functionality. |
| ├── `tools/` | Directory for additional tools used in the flow. |
| │ └── `custom_tool.py` | Custom tool implementation. |
| ├── `main.py` | Main script for running the flow. |
| ├── `README.md` | Project description and instructions. |
| ├── `pyproject.toml` | Configuration file for project dependencies and settings. |
| └── `.gitignore` | Specifies files and directories to ignore in version control. |
### Building Your Crews
In the `crews` folder, you can define multiple crews. Each crew will have its own folder containing configuration files and the crew definition file. For example, the `poem_crew` folder contains:
- `config/agents.yaml`: Defines the agents for the crew.
- `config/tasks.yaml`: Defines the tasks for the crew.
- `poem_crew.py`: Contains the crew definition, including agents, tasks, and the crew itself.
You can copy, paste, and edit the `poem_crew` to create other crews.
### Connecting Crews in `main.py`
The `main.py` file is where you create your flow and connect the crews together. You can define your flow by using the `Flow` class and the decorators `@start` and `@listen` to specify the flow of execution.
Here's an example of how you can connect the `poem_crew` in the `main.py` file:
```python Code
#!/usr/bin/env python
from random import randint
from pydantic import BaseModel
from crewai.flow.flow import Flow, listen, start
from .crews.poem_crew.poem_crew import PoemCrew
class PoemState(BaseModel):
sentence_count: int = 1
poem: str = ""
class PoemFlow(Flow[PoemState]):
@start()
def generate_sentence_count(self):
print("Generating sentence count")
self.state.sentence_count = randint(1, 5)
@listen(generate_sentence_count)
def generate_poem(self):
print("Generating poem")
result = PoemCrew().crew().kickoff(inputs={"sentence_count": self.state.sentence_count})
print("Poem generated", result.raw)
self.state.poem = result.raw
@listen(generate_poem)
def save_poem(self):
print("Saving poem")
with open("poem.txt", "w") as f:
f.write(self.state.poem)
def kickoff():
poem_flow = PoemFlow()
poem_flow.kickoff()
def plot():
poem_flow = PoemFlow()
poem_flow.plot("PoemFlowPlot")
if __name__ == "__main__":
kickoff()
plot()
```
In this example, the `PoemFlow` class defines a flow that generates a sentence count, uses the `PoemCrew` to generate a poem, and then saves the poem to a file. The flow is kicked off by calling the `kickoff()` method, and the PoemFlowPlot is generated by the `plot()` method.
![Flow Visual image](/images/crewai-flow-8.png)
### Running the Flow
(Optional) Before running the flow, you can install the dependencies by running:
```bash
crewai install
```
Once all of the dependencies are installed, you need to activate the virtual environment by running:
```bash
source .venv/bin/activate
```
After activating the virtual environment, you can run the flow by executing one of the following commands:
```bash
crewai flow kickoff
```
or
```bash
uv run kickoff
```
The flow will execute, and you should see the output in the console.
## Plot Flows
Visualizing your AI workflows can provide valuable insights into the structure and execution paths of your flows. CrewAI offers a powerful visualization tool that allows you to generate interactive plots of your flows, making it easier to understand and optimize your AI workflows.
### What are Plots?
Plots in CrewAI are graphical representations of your AI workflows. They display the various tasks, their connections, and the flow of data between them. This visualization helps in understanding the sequence of operations, identifying bottlenecks, and ensuring that the workflow logic aligns with your expectations.
### How to Generate a Plot
CrewAI provides two convenient methods to generate plots of your flows:
#### Option 1: Using the `plot()` Method
If you are working directly with a flow instance, you can generate a plot by calling the `plot()` method on your flow object. This method will create an HTML file containing the interactive plot of your flow.
```python Code
# Assuming you have a flow instance
flow.plot("my_flow_plot")
```
This will generate a file named `my_flow_plot.html` in your current directory. You can open this file in a web browser to view the interactive plot.
#### Option 2: Using the Command Line
If you are working within a structured CrewAI project, you can generate a plot using the command line. This is particularly useful for larger projects where you want to visualize the entire flow setup.
```bash
crewai flow plot
```
This command will generate an HTML file with the plot of your flow, similar to the `plot()` method. The file will be saved in your project directory, and you can open it in a web browser to explore the flow.
### Understanding the Plot
The generated plot will display nodes representing the tasks in your flow, with directed edges indicating the flow of execution. The plot is interactive, allowing you to zoom in and out, and hover over nodes to see additional details.
By visualizing your flows, you can gain a clearer understanding of the workflow's structure, making it easier to debug, optimize, and communicate your AI processes to others.
### Conclusion
Plotting your flows is a powerful feature of CrewAI that enhances your ability to design and manage complex AI workflows. Whether you choose to use the `plot()` method or the command line, generating plots will provide you with a visual representation of your workflows, aiding in both development and presentation.
## Next Steps
If you're interested in exploring additional examples of flows, we have a variety of recommendations in our examples repository. Here are four flow examples, each showcasing a unique use case, to help you match your current problem to a concrete pattern:
1. **Email Auto Responder Flow**: This example demonstrates an infinite loop where a background job continually runs to automate email responses. It's a great use case for tasks that need to be performed repeatedly without manual intervention. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/email_auto_responder_flow)
2. **Lead Score Flow**: This flow showcases adding human-in-the-loop feedback and handling different conditional branches using the router. It's an excellent example of how to incorporate dynamic decision-making and human oversight into your workflows. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/lead-score-flow)
3. **Write a Book Flow**: This example excels at chaining multiple crews together, where the output of one crew is used by another. Specifically, one crew outlines an entire book, and another crew generates chapters based on the outline. Eventually, everything is connected to produce a complete book. This flow is perfect for complex, multi-step processes that require coordination between different tasks. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/write_a_book_with_flows)
4. **Meeting Assistant Flow**: This flow demonstrates how to broadcast one event to trigger multiple follow-up actions. For instance, after a meeting is completed, the flow can update a Trello board, send a Slack message, and save the results. It's a great example of handling multiple outcomes from a single event, making it ideal for comprehensive task management and notification systems. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow)
By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.
Also, check out our YouTube video on how to use flows in CrewAI below!
<iframe
className="w-full aspect-video rounded-xl"
src="https://www.youtube.com/embed/MTb5my6VOT8"
title="CrewAI Flows overview"
frameBorder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerPolicy="strict-origin-when-cross-origin"
allowFullScreen
></iframe>
## Running Flows
@@ -801,7 +946,7 @@ There are two ways to run a flow:
You can run a flow programmatically by creating an instance of your flow class and calling the `kickoff()` method:
```python
flow = SupportTriageFlow()
flow = ExampleFlow()
result = flow.kickoff()
```
@@ -920,21 +1065,3 @@ crewai flow kickoff
```
However, the `crewai run` command is now the preferred method as it works for both crews and flows.
## Common Failure Modes
### Router branch not firing
- Cause: returned label does not match a `@listen("label")` value.
- Fix: align router return strings with listener labels exactly.
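For example, a minimal sketch (names are illustrative) of the label contract between a router and its listeners:
```python Code
from crewai.flow.flow import Flow, listen, router, start

class TriageFlow(Flow):
    @start()
    def classify(self):
        self.state["severity"] = "high"

    @router(classify)
    def route(self):
        # The returned string must match the @listen label character-for-character
        return "high_priority" if self.state["severity"] == "high" else "low_priority"

    @listen("high_priority")
    def escalate(self):
        print("Escalating ticket")

    @listen("low_priority")
    def queue(self):
        print("Queued for later")
```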
### State fields missing at runtime
- Cause: untyped dynamic fields or missing kickoff inputs.
- Fix: use typed state and validate required fields in `@start()`.
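A minimal sketch of the fix, using a hypothetical `TicketState`: declare every field the flow depends on and fail fast in `@start()` when kickoff inputs are missing:
```python Code
from pydantic import BaseModel
from crewai.flow.flow import Flow, start

class TicketState(BaseModel):
    ticket_id: str = ""
    customer_tier: str = "free"

class TicketFlow(Flow[TicketState]):
    @start()
    def validate_inputs(self):
        # Fail fast instead of discovering the missing field mid-run
        if not self.state.ticket_id:
            raise ValueError("ticket_id is required; pass it via kickoff(inputs={...})")
```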
### Prompt/token growth over time
- Cause: appending unbounded message history in state.
- Fix: apply sliding-window state and summary compaction patterns.
### Non-idempotent retries
- Cause: side effects executed on retried steps.
- Fix: add idempotency keys/markers to state and guard external writes.
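One way to apply the fix, sketched with a stand-in for the real external write: record a marker in state before performing side effects, so retried runs skip them:
```python Code
from crewai.flow.flow import Flow, listen, start

class ReportFlow(Flow):
    @start()
    def generate_report(self):
        return "quarterly numbers"

    @listen(generate_report)
    def notify(self, report):
        # Skip the external write if a previous (retried) run already performed it
        if self.state.get("notification_sent"):
            return "already_sent"
        print(f"posting to Slack: {report}")  # stand-in for the real external write
        self.state["notification_sent"] = True  # idempotency marker kept in state
        return "sent"
```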

File diff suppressed because it is too large Load Diff

View File

@@ -156,7 +156,6 @@ class ResearchFlow(Flow):
```
See the [Flows documentation](/concepts/flows) for more on memory in Flows.
For a production-style conversational pattern that combines Flow state and memory, see [Flowstate Chat History](/en/learn/flowstate-chat-history).
## Hierarchical Scopes

View File

@@ -10,17 +10,6 @@ mode: "wide"
The planning feature in CrewAI allows you to add planning capability to your crew. When enabled, before each Crew iteration,
all Crew information is sent to an AgentPlanner that will plan the tasks step by step, and this plan will be added to each task description.
## When to Use Planning
- Tasks require multi-step decomposition before execution.
- You need more consistent execution quality on complex tasks.
- You want transparent planning traces in crew runs.
## When Not to Use Planning
- Tasks are simple and deterministic.
- Latency and token budget are strict and planning overhead is not justified.
### Using the Planning Feature
Getting started with the planning feature is easy: the only step required is to add `planning=True` to your Crew:
@@ -42,7 +31,7 @@ my_crew = Crew(
From this point on, your crew will have planning enabled, and the tasks will be planned before each iteration.
<Warning>
Planning model defaults can vary by version and environment. To avoid implicit provider dependencies, set `planning_llm` explicitly in your crew configuration.
When planning is enabled, crewAI will use `gpt-4o-mini` as the default LLM for planning, which requires a valid OpenAI API key. Since your agents might be using different LLMs, this could cause confusion if you don't have an OpenAI API key configured or if you're experiencing unexpected behavior related to LLM API calls.
</Warning>
#### Planning LLM
@@ -163,14 +152,4 @@ A list with 10 bullet points of the most relevant information about AI LLMs.
**Expected Output:**
A fully fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'.
```
</CodeGroup>
## Common Failure Modes
### Planning adds cost/latency without quality gains
- Cause: planning enabled for simple tasks.
- Fix: disable `planning` for straightforward pipelines.
### Unexpected provider authentication errors
- Cause: implicit planner model/provider assumptions.
- Fix: set `planning_llm` explicitly and ensure matching credentials are configured.
</CodeGroup>

View File

@@ -12,20 +12,11 @@ mode: "wide"
These processes ensure tasks are distributed and executed efficiently, in alignment with a predefined strategy.
</Tip>
## When to Use Each Process
- Use `sequential` when task order is fixed and outputs feed directly into the next task.
- Use `hierarchical` when you need a manager to delegate and validate work dynamically.
## When Not to Use Hierarchical
- You do not need dynamic delegation.
- You cannot provide a reliable `manager_llm` or `manager_agent`.
## Process Implementations
- **Sequential**: Executes tasks sequentially, ensuring tasks are completed in an orderly progression.
- **Hierarchical**: Organizes tasks in a managerial hierarchy, where tasks are delegated and executed based on a structured chain of command. A manager language model (`manager_llm`) or a custom manager agent (`manager_agent`) must be specified in the crew to enable the hierarchical process, facilitating the creation and management of tasks by the manager.
- **Consensual Process (Planned)**: Aiming for collaborative decision-making among agents on task execution, this process type introduces a democratic approach to task management within CrewAI. It is planned for future development and is not currently implemented in the codebase.
## The Role of Processes in Teamwork
Processes enable individual agents to operate as a cohesive unit, streamlining their efforts to achieve common objectives with efficiency and coherence.
@@ -68,17 +59,9 @@ Emulates a corporate hierarchy, CrewAI allows specifying a custom manager agent
## Process Class: Detailed Overview
The `Process` class is implemented as an enumeration (`Enum`), ensuring type safety and restricting process values to the defined types (`sequential`, `hierarchical`).
The `Process` class is implemented as an enumeration (`Enum`), ensuring type safety and restricting process values to the defined types (`sequential`, `hierarchical`). The consensual process is planned for future inclusion, emphasizing our commitment to continuous development and innovation.
## Conclusion
The structured collaboration facilitated by processes within CrewAI is crucial for enabling systematic teamwork among agents.
## Common Failure Modes
### Hierarchical process fails at startup
- Cause: missing `manager_llm` or `manager_agent`.
- Fix: provide one of them explicitly in crew configuration.
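For example (a sketch — `researcher`, `writer`, and the tasks are assumed to be defined elsewhere):
```python Code
from crewai import Crew, Process

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.hierarchical,
    manager_llm="gpt-4o",  # or pass manager_agent=your_manager_agent instead
)
```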
### Sequential process produces weak outputs
- Cause: task boundaries/context are underspecified.
- Fix: improve task descriptions, expected outputs, and task context chaining.
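A sketch of tightened task boundaries, assuming `researcher` and `writer` agents exist: spell out `expected_output` and chain task context explicitly:
```python Code
from crewai import Task

research_task = Task(
    description="Collect the five most relevant findings about topic X, each with a source URL.",
    expected_output="A bulleted list of five findings, each followed by its source URL.",
    agent=researcher,
)

summary_task = Task(
    description="Write an executive summary of the research findings.",
    expected_output="A three-paragraph markdown summary citing the findings.",
    agent=writer,
    context=[research_task],  # pass the prior task's output in explicitly
)
```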
This documentation has been updated to reflect the latest features, enhancements, and the planned integration of the Consensual Process, ensuring users have access to the most current and comprehensive information.

View File

@@ -9,20 +9,9 @@ mode: "wide"
Testing is a crucial part of the development process, and it is essential to ensure that your crew is performing as expected. With crewAI, you can easily test your crew and evaluate its performance using the built-in testing capabilities.
## When to Use Testing
- Before promoting a crew to production.
- After changing prompts, tools, or model configurations.
- When benchmarking quality/cost/latency tradeoffs.
## When Not to Rely on Testing Alone
- For safety-critical deployments without human review gates.
- When test datasets are too small or unrepresentative.
### Using the Testing Feature
Use the CLI command `crewai test` to run repeated crew executions and compare outputs across iterations. The parameters are `n_iterations` and `model`, which are optional and default to `2` and `gpt-4o-mini`, respectively.
We added the CLI command `crewai test` to make it easy to test your crew. This command will run your crew for a specified number of iterations and provide detailed performance metrics. The parameters are `n_iterations` and `model`, which are optional and default to 2 and `gpt-4o-mini` respectively. For now, the only provider available is OpenAI.
```bash
crewai test
@@ -58,13 +47,3 @@ A table of scores at the end will show the performance of the crew in terms of t
| Execution Time (s) | 126 | 145 | **135** | | |
The example above shows the test results for two runs of the crew with two tasks, with the average total score for each task and the crew as a whole.
## Common Failure Modes
### Scores fluctuate too much between runs
- Cause: high sampling randomness or unstable prompts.
- Fix: lower temperature and tighten output constraints.
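For instance, a sketch pinning sampling randomness down at the agent level:
```python Code
from crewai import LLM, Agent

stable_llm = LLM(model="gpt-4o-mini", temperature=0)  # deterministic-leaning sampling

analyst = Agent(
    role="Research Analyst",
    goal="Produce consistent, well-structured findings",
    backstory="Meticulous analyst who follows output format instructions exactly.",
    llm=stable_llm,
)
```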
### Good test scores but poor production quality
- Cause: test prompts do not match real workload.
- Fix: build a representative test set from real production inputs.

View File

@@ -10,17 +10,6 @@ mode: "wide"
CrewAI tools empower agents with capabilities ranging from web searching and data analysis to collaboration and delegating tasks among coworkers.
This documentation outlines how to create, integrate, and leverage these tools within the CrewAI framework, including a new focus on collaboration tools.
## When to Use Tools
- Agents need external data or side effects.
- You need deterministic actions wrapped in reusable interfaces.
- You need to connect APIs, files, databases, or browser actions into agent workflows.
## When Not to Use Tools
- The task can be solved entirely from prompt context.
- The external side effect cannot be made safe or idempotent.
## What is a Tool?
A tool in CrewAI is a skill or function that agents can utilize to perform various actions.
@@ -296,17 +285,3 @@ writer1 = Agent(
Tools are pivotal in extending the capabilities of CrewAI agents, enabling them to undertake a broad spectrum of tasks and collaborate effectively.
When building solutions with CrewAI, leverage both custom and existing tools to empower your agents and enhance the AI ecosystem. Consider utilizing error handling,
caching mechanisms, and the flexibility of tool arguments to optimize your agents' performance and capabilities.
## Common Failure Modes
### Tool schema mismatch
- Cause: model-generated arguments do not match tool signature.
- Fix: tighten tool descriptions and validate input schemas.
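A sketch of the fix using a Pydantic `args_schema`, so malformed model-generated arguments are rejected before the tool body runs (names are illustrative):
```python Code
from crewai.tools import BaseTool
from pydantic import BaseModel, Field

class SearchInput(BaseModel):
    query: str = Field(..., description="Exact phrase to search for")
    max_results: int = Field(5, ge=1, le=20, description="How many results to return")

class SearchTool(BaseTool):
    name: str = "web_search"
    description: str = "Search the web and return the top results for a query."
    args_schema: type[BaseModel] = SearchInput

    def _run(self, query: str, max_results: int = 5) -> str:
        # Arguments have already been validated against SearchInput by this point
        return f"top {max_results} results for {query!r}"
```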
### Repeated side effects
- Cause: retries trigger duplicate writes/actions.
- Fix: add idempotency keys and deduplication checks in tool logic.
### Tool timeouts under load
- Cause: unbounded retries or slow external services.
- Fix: set explicit timeout/retry policy and graceful fallbacks.
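For example, a sketch of a bounded HTTP policy inside a tool body, shown here with `httpx` (an assumption — any client with explicit timeouts works):
```python Code
import httpx

def fetch_with_budget(url: str) -> str:
    # Explicit timeout plus a small, bounded retry count instead of unbounded retries
    transport = httpx.HTTPTransport(retries=2)
    with httpx.Client(timeout=10.0, transport=transport) as client:
        response = client.get(url)
        response.raise_for_status()
        return response.text
```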

View File

@@ -177,6 +177,11 @@ You need to push your crew to a GitHub repository. If you haven't created a crew
![Set Environment Variables](/images/enterprise/set-env-variables.png)
</Frame>
<Info>
Using private Python packages? You'll need to add your registry credentials here too.
See [Private Package Registries](/en/enterprise/guides/private-package-registry) for the required variables.
</Info>
</Step>
<Step title="Deploy Your Crew">

View File

@@ -256,6 +256,12 @@ Before deployment, ensure you have:
1. **LLM API keys** ready (OpenAI, Anthropic, Google, etc.)
2. **Tool API keys** if using external tools (Serper, etc.)
<Info>
If your project depends on packages from a **private PyPI registry**, you'll also need to configure
registry authentication credentials as environment variables. See the
[Private Package Registries](/en/enterprise/guides/private-package-registry) guide for details.
</Info>
<Tip>
Test your project locally with the same environment variables before deploying
to catch configuration issues early.

View File

@@ -0,0 +1,263 @@
---
title: "Private Package Registries"
description: "Install private Python packages from authenticated PyPI registries in CrewAI AMP"
icon: "lock"
mode: "wide"
---
<Note>
This guide covers how to configure your CrewAI project to install Python packages
from private PyPI registries (Azure DevOps Artifacts, GitHub Packages, GitLab, AWS CodeArtifact, etc.)
when deploying to CrewAI AMP.
</Note>
## When You Need This
If your project depends on internal or proprietary Python packages hosted on a private registry
rather than the public PyPI, you'll need to:
1. Tell UV **where** to find the package (an index URL)
2. Tell UV **which** packages come from that index (a source mapping)
3. Provide **credentials** so UV can authenticate during install
CrewAI AMP uses [UV](https://docs.astral.sh/uv/) for dependency resolution and installation.
UV supports authenticated private registries through `pyproject.toml` configuration combined
with environment variables for credentials.
## Step 1: Configure pyproject.toml
Three pieces work together in your `pyproject.toml`:
### 1a. Declare the dependency
Add the private package to your `[project.dependencies]` like any other dependency:
```toml
[project]
dependencies = [
"crewai[tools]>=0.100.1,<1.0.0",
"my-private-package>=1.2.0",
]
```
### 1b. Define the index
Register your private registry as a named index under `[[tool.uv.index]]`:
```toml
[[tool.uv.index]]
name = "my-private-registry"
url = "https://pkgs.dev.azure.com/my-org/_packaging/my-feed/pypi/simple/"
explicit = true
```
<Info>
The `name` field is important — UV uses it to construct the environment variable names
for authentication (see [Step 2](#step-2-set-authentication-credentials) below).
Setting `explicit = true` means UV won't search this index for every package — only the
ones you explicitly map to it in `[tool.uv.sources]`. This avoids unnecessary queries
against your private registry and protects against dependency confusion attacks.
</Info>
### 1c. Map the package to the index
Tell UV which packages should be resolved from your private index using `[tool.uv.sources]`:
```toml
[tool.uv.sources]
my-private-package = { index = "my-private-registry" }
```
### Complete example
```toml
[project]
name = "my-crew-project"
version = "0.1.0"
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.100.1,<1.0.0",
"my-private-package>=1.2.0",
]
[tool.crewai]
type = "crew"
[[tool.uv.index]]
name = "my-private-registry"
url = "https://pkgs.dev.azure.com/my-org/_packaging/my-feed/pypi/simple/"
explicit = true
[tool.uv.sources]
my-private-package = { index = "my-private-registry" }
```
After updating `pyproject.toml`, regenerate your lock file:
```bash
uv lock
```
<Warning>
Always commit the updated `uv.lock` along with your `pyproject.toml` changes.
The lock file is required for deployment — see [Prepare for Deployment](/en/enterprise/guides/prepare-for-deployment).
</Warning>
## Step 2: Set Authentication Credentials
UV authenticates against private indexes using environment variables that follow a naming convention
based on the index name you defined in `pyproject.toml`:
```
UV_INDEX_{UPPER_NAME}_USERNAME
UV_INDEX_{UPPER_NAME}_PASSWORD
```
Where `{UPPER_NAME}` is your index name converted to **uppercase** with **hyphens replaced by underscores**.
For example, an index named `my-private-registry` uses:
| Variable | Value |
|----------|-------|
| `UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME` | Your registry username or token name |
| `UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD` | Your registry password or token/PAT |
<Warning>
These environment variables **must** be added via the CrewAI AMP **Environment Variables** settings —
either globally or at the deployment level. They cannot be set in `.env` files or hardcoded in your project.
See [Setting Environment Variables in AMP](#setting-environment-variables-in-amp) below.
</Warning>
## Registry Provider Reference
The table below shows the index URL format and credential values for common registry providers.
Replace placeholder values with your actual organization and feed details.
| Provider | Index URL | Username | Password |
|----------|-----------|----------|----------|
| **Azure DevOps Artifacts** | `https://pkgs.dev.azure.com/{org}/_packaging/{feed}/pypi/simple/` | Any non-empty string (e.g. `token`) | Personal Access Token (PAT) with Packaging Read scope |
| **GitHub Packages** | `https://pypi.pkg.github.com/{owner}/simple/` | GitHub username | Personal Access Token (classic) with `read:packages` scope |
| **GitLab Package Registry** | `https://gitlab.com/api/v4/projects/{project_id}/packages/pypi/simple/` | `__token__` | Project or Personal Access Token with `read_api` scope |
| **AWS CodeArtifact** | Use the URL from `aws codeartifact get-repository-endpoint` | `aws` | Token from `aws codeartifact get-authorization-token` |
| **Google Artifact Registry** | `https://{region}-python.pkg.dev/{project}/{repo}/simple/` | `_json_key_base64` | Base64-encoded service account key |
| **JFrog Artifactory** | `https://{instance}.jfrog.io/artifactory/api/pypi/{repo}/simple/` | Username or email | API key or identity token |
| **Self-hosted (devpi, Nexus, etc.)** | Your registry's simple API URL | Registry username | Registry password |
<Tip>
For **AWS CodeArtifact**, the authorization token expires periodically.
You'll need to refresh the `UV_INDEX_*_PASSWORD` value when it expires.
Consider automating this in your CI/CD pipeline.
</Tip>
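A sketch of such a refresh step (the domain and account values are placeholders):
```bash
export UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD="$(aws codeartifact get-authorization-token \
  --domain my-domain --domain-owner 123456789012 \
  --query authorizationToken --output text)"
```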
## Setting Environment Variables in AMP
Private registry credentials must be configured as environment variables in CrewAI AMP.
You have two options:
<Tabs>
<Tab title="Web Interface">
1. Log in to [CrewAI AMP](https://app.crewai.com)
2. Navigate to your automation
3. Open the **Environment Variables** tab
4. Add each variable (`UV_INDEX_*_USERNAME` and `UV_INDEX_*_PASSWORD`) with its value
See the [Deploy to AMP — Set Environment Variables](/en/enterprise/guides/deploy-to-amp#set-environment-variables) step for details.
</Tab>
<Tab title="CLI Deployment">
Add the variables to your local `.env` file before running `crewai deploy create`.
The CLI will securely transfer them to the platform:
```bash
# .env
OPENAI_API_KEY=sk-...
UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME=token
UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD=your-pat-here
```
```bash
crewai deploy create
```
</Tab>
</Tabs>
<Warning>
**Never** commit credentials to your repository. Use AMP environment variables for all secrets.
The `.env` file should be listed in `.gitignore`.
</Warning>
To update credentials on an existing deployment, see [Update Your Crew — Environment Variables](/en/enterprise/guides/update-crew).
## How It All Fits Together
When CrewAI AMP builds your automation, the resolution flow works like this:
<Steps>
<Step title="Build starts">
AMP pulls your repository and reads `pyproject.toml` and `uv.lock`.
</Step>
<Step title="UV resolves dependencies">
UV reads `[tool.uv.sources]` to determine which index each package should come from.
</Step>
<Step title="UV authenticates">
For each private index, UV looks up `UV_INDEX_{NAME}_USERNAME` and `UV_INDEX_{NAME}_PASSWORD`
from the environment variables you configured in AMP.
</Step>
<Step title="Packages install">
UV downloads and installs all packages — both public (from PyPI) and private (from your registry).
</Step>
<Step title="Automation runs">
Your crew or flow starts with all dependencies available.
</Step>
</Steps>
## Troubleshooting
### Authentication Errors During Build
**Symptom**: Build fails with `401 Unauthorized` or `403 Forbidden` when resolving a private package.
**Check**:
- The `UV_INDEX_*` environment variable names match your index name exactly (uppercased, hyphens → underscores)
- Credentials are set in AMP environment variables, not just in a local `.env`
- Your token/PAT has the required read permissions for the package feed
- The token hasn't expired (especially relevant for AWS CodeArtifact)
### Package Not Found
**Symptom**: `No matching distribution found for my-private-package`.
**Check**:
- The index URL in `pyproject.toml` ends with `/simple/`
- The `[tool.uv.sources]` entry maps the correct package name to the correct index name
- The package is actually published to your private registry
- Run `uv lock` locally with the same credentials to verify resolution works
### Lock File Conflicts
**Symptom**: `uv lock` fails or produces unexpected results after adding a private index.
**Solution**: Set the credentials locally and regenerate:
```bash
export UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME=token
export UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD=your-pat
uv lock
```
Then commit the updated `uv.lock`.
## Related Guides
<CardGroup cols={3}>
<Card title="Prepare for Deployment" icon="clipboard-check" href="/en/enterprise/guides/prepare-for-deployment">
Verify project structure and dependencies before deploying.
</Card>
<Card title="Deploy to AMP" icon="rocket" href="/en/enterprise/guides/deploy-to-amp">
Deploy your crew or flow and configure environment variables.
</Card>
<Card title="Update Your Crew" icon="arrows-rotate" href="/en/enterprise/guides/update-crew">
Update environment variables and push changes to a running deployment.
</Card>
</CardGroup>

View File

@@ -8,10 +8,6 @@ mode: "wide"
## Quickstarts & Demos
<CardGroup cols={3}>
<Card title="Flowstate Chat History" icon="comments" href="/en/learn/flowstate-chat-history">
Manage chat sessions with sliding-window history, summary compaction, and persisted Flow state.
</Card>
<Card title="Collaboration" icon="people-arrows" href="https://github.com/crewAIInc/crewAI-quickstarts/blob/main/Collaboration/crewai_collaboration.ipynb">
Coordinate multiple agents on shared tasks. Includes notebook with end-to-end collaboration pattern.
</Card>

View File

@@ -34,10 +34,6 @@ mode: "wide"
## Flows
<CardGroup cols={3}>
<Card title="Flowstate Chat History" icon="comments" href="/en/learn/flowstate-chat-history">
Stateful chat pattern with compacted context and persisted session state.
</Card>
<Card title="Content Creator Flow" icon="pen" href="https://github.com/crewAIInc/crewAI-examples/tree/main/flows/content_creator_flow">
Multicrew content generation with routing.
</Card>

View File

@@ -47,23 +47,6 @@ CrewAI offers two ways to manage state in your flows:
Let's examine each approach in detail.
### Flow State vs Memory: When to use each
Both features keep context, but they solve different problems.
| Dimension | Flow State (`self.state`) | Memory (`self.remember` / `self.recall`) |
|---|---|---|
| Primary purpose | Track execution and deterministic workflow data | Store and retrieve semantic knowledge across interactions |
| Data shape | Explicit fields (dict/Pydantic model) | Text records with inferred scopes and ranked recall |
| Typical lifetime | Current flow run (or persisted checkpoints) | Long-term knowledge over many runs |
| Access pattern | Direct reads/writes (`self.state.field`) | Query-based retrieval (`self.recall("...")`) |
| Best for | Routing flags, counters, intermediate outputs, chat window | Durable facts, prior outcomes, reusable context |
| Chat use | Recent turns + running summary + control flags | Long-tail memory outside context window |
Practical rule:
- Use **state** for what your control flow depends on right now.
- Use **memory** for what you may want to retrieve later by meaning.
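Inside a flow method, the two access patterns look like this (a sketch using the memory helpers named above):
```python Code
# State: direct, deterministic reads/writes your control flow depends on
self.state.retry_count += 1

# Memory: semantic writes and ranked recall across runs
self.remember("Customer prefers annual billing", scope="/chat/customer-42")
notes = self.recall("billing preference")
```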
## Unstructured State Management
Unstructured state uses a dictionary-like approach, offering flexibility and simplicity for straightforward applications.
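A minimal sketch: without a typed state model, `self.state` behaves like a dictionary:
```python Code
from crewai.flow.flow import Flow, start

class UntypedFlow(Flow):  # no state model parameter
    @start()
    def begin(self):
        # Dict-style access; fields appear on first write
        self.state["counter"] = self.state.get("counter", 0) + 1
        self.state["note"] = "any shape of data can be stored here"
```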

View File

@@ -27,11 +27,8 @@ mode: "wide"
</div>
<div style={{ display: 'flex', flexWrap: 'wrap', gap: 12, justifyContent: 'center' }}>
<a className="button button-primary" href="/en/installation">Install</a>
<a className="button" href="/en/quickstart">Quickstart</a>
<a className="button" href="/en/guides/crews/first-crew">First Crew</a>
<a className="button" href="/en/guides/flows/first-flow">First Flow</a>
<a className="button" href="/en/concepts/llms">LLM Setup</a>
<a className="button button-primary" href="/en/quickstart">Get started</a>
<a className="button" href="/en/changelog">View changelog</a>
<a className="button" href="/en/api-reference/introduction">API Reference</a>
</div>
@@ -39,49 +36,17 @@ mode: "wide"
<div style={{ marginTop: 32 }} />
## Start in 3 steps
## Get started
<CardGroup cols={3}>
<Card title="1) Install" href="/en/installation" icon="wrench">
<Card title="Introduction" href="/en/introduction" icon="sparkles">
Overview of CrewAI concepts, architecture, and what you can build with agents, crews, and flows.
</Card>
<Card title="Installation" href="/en/installation" icon="wrench">
Install via `uv`, configure API keys, and set up the CLI for local development.
</Card>
<Card title="2) Run Quickstart" href="/en/quickstart" icon="rocket">
Launch your first working crew with a minimal project and iterate from there.
</Card>
<Card title="3) Pick a path" href="/en/ai/overview" icon="sitemap">
Continue with canonical domain packs for Flows, Agents, Crews, LLMs, Memory, and Tools.
</Card>
</CardGroup>
## Most-used pages
<CardGroup cols={3}>
<Card title="First Crew" href="/en/guides/crews/first-crew" icon="users">
Build a production-style crew with role/task configuration and execution flow.
</Card>
<Card title="First Flow" href="/en/guides/flows/first-flow" icon="arrow-progress">
Build event-driven orchestration with state, listeners, and routing.
</Card>
<Card title="Flowstate Chat History" href="/en/learn/flowstate-chat-history" icon="comments">
Stateful chat history pattern with persistence and summary compaction.
</Card>
<Card title="Agents" href="/en/concepts/agents" icon="user">
Agent role design, tool boundaries, and output contracts.
</Card>
<Card title="Crews" href="/en/concepts/crews" icon="users-gear">
Multi-agent collaboration patterns and process semantics.
</Card>
<Card title="Flows" href="/en/concepts/flows" icon="code-branch">
Deterministic orchestration, state lifecycle, persistence, and resume.
</Card>
<Card title="LLMs" href="/en/concepts/llms" icon="microchip-ai">
Model setup, provider config, routing patterns, and reliability defaults.
</Card>
<Card title="Memory" href="/en/concepts/memory" icon="database">
Semantic recall, scope strategy, and state-vs-memory architecture.
</Card>
<Card title="Tools" href="/en/tools/overview" icon="wrench">
Tool categories, integration surfaces, and practical usage patterns.
<Card title="Quickstart" href="/en/quickstart" icon="rocket">
Spin up your first crew in minutes. Learn the core runtime, project layout, and dev loop.
</Card>
</CardGroup>
@@ -125,11 +90,7 @@ mode: "wide"
</CardGroup>
<Callout title="Explore real-world patterns" icon="github">
Browse the <a href="/en/examples/cookbooks">examples and cookbooks</a> for end-to-end reference implementations across agents, flows, and enterprise automations. For a practical conversational pattern, start with <a href="/en/learn/flowstate-chat-history">Flowstate Chat History</a>.
</Callout>
<Callout title="AI-First Docs" icon="sitemap">
Use the <a href="/en/ai/overview">AI-First Documentation map</a> for canonical domain packs across Flows, Agents, Crews, LLMs, Memory, and Tools.
Browse the <a href="/en/examples/cookbooks">examples and cookbooks</a> for end-to-end reference implementations across agents, flows, and enterprise automations.
</Callout>
## Stay connected

View File

@@ -16,52 +16,6 @@ It empowers developers to build production-ready multi-agent systems by combinin
With over 100,000 developers certified through our community courses, CrewAI is the standard for enterprise-ready AI automation.
## Start Here
<CardGroup cols={3}>
<Card title="Install" href="/en/installation" icon="wrench">
Set up CrewAI, configure API keys, and prepare your local environment.
</Card>
<Card title="Quickstart" href="/en/quickstart" icon="rocket">
Run your first working crew with a minimal setup.
</Card>
<Card title="First Crew" href="/en/guides/crews/first-crew" icon="users-gear">
Build a production-style crew with roles, tasks, and execution flow.
</Card>
<Card title="First Flow" href="/en/guides/flows/first-flow" icon="arrow-progress">
Build event-driven orchestration with state, listeners, and routers.
</Card>
<Card title="LLM Setup" href="/en/concepts/llms" icon="microchip-ai">
Configure providers, models, and reliability defaults.
</Card>
<Card title="API Reference" href="/en/api-reference/introduction" icon="book">
Use kickoff, resume, and status endpoints for production integrations.
</Card>
</CardGroup>
## Most-used Docs
<CardGroup cols={3}>
<Card title="Agents" href="/en/concepts/agents" icon="user">
Role design, tool boundaries, and output contracts.
</Card>
<Card title="Crews" href="/en/concepts/crews" icon="users">
Multi-agent coordination and process choices.
</Card>
<Card title="Flows" href="/en/concepts/flows" icon="code-branch">
Deterministic orchestration, state, persistence, and resume.
</Card>
<Card title="Memory" href="/en/concepts/memory" icon="database">
Scope strategy and semantic recall across runs.
</Card>
<Card title="Flowstate Chat History" href="/en/learn/flowstate-chat-history" icon="comments">
Stateful chat context with summary compaction and persistence.
</Card>
<Card title="AI-First Docs Map" href="/en/ai/overview" icon="sitemap">
Canonical domain packs for Flows, Agents, Crews, LLMs, Memory, and Tools.
</Card>
</CardGroup>
## The CrewAI Architecture
CrewAI's architecture is designed to balance autonomy with control.
@@ -176,7 +130,7 @@ For any production-ready application, **start with a Flow**.
<Card
title="Quick Start"
icon="bolt"
href="/en/quickstart"
href="en/quickstart"
>
Follow our quickstart guide to create your first CrewAI agent and get hands-on experience.
</Card>

View File

@@ -1,167 +0,0 @@
---
title: "Flowstate Chat History"
description: "Build a stateful chat workflow that keeps context compact, persistent, and production-friendly."
icon: "comments"
mode: "wide"
---
## Overview
This guide shows a practical pattern for managing LLM chat history with Flow state:
- Keep recent turns in a sliding window
- Summarize older turns into a compact running summary
- Persist state automatically with `@persist()`
- Keep optional long-term recall using Flow memory
## Why this pattern works
Naively appending every message to prompts causes token bloat and unstable behavior over long sessions. A better approach is:
1. Keep only the most recent turns in `state.messages`
2. Move older turns into `state.running_summary`
3. Build prompts from `running_summary + recent messages`
## Prerequisites
1. CrewAI installed and configured
2. API key configured for your model provider
3. Basic familiarity with Flow decorators (`@start`, `@listen`)
## Step 1: Define typed chat state
```python Code
from typing import Dict, List
from pydantic import BaseModel, Field
class ChatSessionState(BaseModel):
    session_id: str = "demo-session"
    running_summary: str = ""
    messages: List[Dict[str, str]] = Field(default_factory=list)
    max_recent_messages: int = 8
    last_user_message: str = ""
    assistant_reply: str = ""
    turn_count: int = 0
```
## Step 2: Build the Flow
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.persistence import persist
from litellm import completion
@persist()
class ChatHistoryFlow(Flow[ChatSessionState]):
    model = "gpt-4o-mini"

    @start()
    def capture_user_message(self):
        self.state.last_user_message = self.state.last_user_message.strip()
        self.state.messages.append(
            {"role": "user", "content": self.state.last_user_message}
        )
        self.state.turn_count += 1
        return self.state.last_user_message

    @listen(capture_user_message)
    def compact_old_history(self, _):
        if len(self.state.messages) <= self.state.max_recent_messages:
            return "no_compaction"
        overflow = self.state.messages[:-self.state.max_recent_messages]
        self.state.messages = self.state.messages[-self.state.max_recent_messages:]
        overflow_text = "\n".join(
            f"{m['role']}: {m['content']}" for m in overflow
        )
        summary_prompt = [
            {
                "role": "system",
                "content": "Summarize old chat turns into short bullet points. Preserve facts, constraints, and decisions.",
            },
            {
                "role": "user",
                "content": (
                    f"Existing summary:\n{self.state.running_summary or '(empty)'}\n\n"
                    f"New old turns:\n{overflow_text}"
                ),
            },
        ]
        summary_response = completion(model=self.model, messages=summary_prompt)
        self.state.running_summary = summary_response["choices"][0]["message"]["content"]
        return "compacted"

    @listen(compact_old_history)
    def generate_reply(self, _):
        system_context = (
            "You are a helpful assistant.\n"
            f"Conversation summary so far:\n{self.state.running_summary or '(none)'}"
        )
        response = completion(
            model=self.model,
            messages=[{"role": "system", "content": system_context}, *self.state.messages],
        )
        answer = response["choices"][0]["message"]["content"]
        self.state.assistant_reply = answer
        self.state.messages.append({"role": "assistant", "content": answer})
        # Optional: store key turns in long-term memory for later recall
        self.remember(
            f"Session {self.state.session_id} turn {self.state.turn_count}: "
            f"user={self.state.last_user_message} assistant={answer}",
            scope=f"/chat/{self.state.session_id}",
        )
        return answer
```
## Step 3: Run it
```python Code
flow = ChatHistoryFlow()

first = flow.kickoff(
    inputs={
        "session_id": "customer-42",
        "last_user_message": "I need help choosing a pricing plan for a 10-person team.",
    }
)
print("Assistant:", first)

second = flow.kickoff(
    inputs={
        "last_user_message": "We also need SSO and audit logs. What do you recommend now?",
    }
)
print("Assistant:", second)
print("Turns:", flow.state.turn_count)
print("Recent messages:", len(flow.state.messages))
```
## Expected output (shape)
```text Output
Assistant: ...initial recommendation...
Assistant: ...updated recommendation with SSO and audit-log requirements...
Turns: 2
Recent messages: 4
```
## Troubleshooting
- If replies ignore earlier context:
increase `max_recent_messages` and ensure `running_summary` is included in the system context.
- If prompts become too large:
lower `max_recent_messages` and summarize more aggressively.
- If sessions collide:
provide a stable `session_id` and isolate memory scope with `/chat/{session_id}`.
## Next steps
- Add tool calls for account lookup or product catalog retrieval
- Route to human review for high-risk decisions
- Add structured output to capture recommendations in machine-readable JSON

View File

@@ -7,7 +7,7 @@ mode: "wide"
## Connect CrewAI to LLMs
CrewAI uses LiteLLM to connect to a wide variety of Language Models (LLMs). This integration provides extensive versatility, allowing you to use models from numerous providers with a simple, unified interface.
CrewAI connects to LLMs through native SDK integrations for the most popular providers (OpenAI, Anthropic, Google Gemini, Azure, and AWS Bedrock), and uses LiteLLM as a flexible fallback for all other providers.
<Note>
By default, CrewAI uses the `gpt-4o-mini` model. This is determined by the `OPENAI_MODEL_NAME` environment variable, which defaults to "gpt-4o-mini" if not set.
@@ -41,6 +41,14 @@ LiteLLM supports a wide range of providers, including but not limited to:
For a complete and up-to-date list of supported providers, please refer to the [LiteLLM Providers documentation](https://docs.litellm.ai/docs/providers).
<Info>
To use any provider not covered by a native integration, add LiteLLM as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
Native providers (OpenAI, Anthropic, Google Gemini, Azure, AWS Bedrock) use their own SDK extras — see the [Provider Configuration Examples](/en/concepts/llms#provider-configuration-examples).
</Info>
## Changing the LLM
To use a different LLM with your CrewAI agents, you have several options:

View File

@@ -35,7 +35,7 @@ Visit [app.crewai.com](https://app.crewai.com) and create your free account. Thi
If you haven't already, install CrewAI with the CLI tools:
```bash
uv add crewai[tools]
uv add 'crewai[tools]'
```
Then authenticate your CLI with your CrewAI AMP account:

View File

@@ -18,77 +18,46 @@ Composio is an integration platform that allows you to connect your AI agents to
To incorporate Composio tools into your project, follow the instructions below:
```shell
pip install composio-crewai
pip install composio composio-crewai
pip install crewai
```
After the installation is complete, either run `composio login` or export your composio API key as `COMPOSIO_API_KEY`. Get your Composio API key from [here](https://app.composio.dev)
After the installation is complete, set your Composio API key as `COMPOSIO_API_KEY`. Get your Composio API key from [here](https://platform.composio.dev)
## Example
The following example demonstrates how to initialize the tool and execute a GitHub action:
1. Initialize Composio toolset
1. Initialize Composio with CrewAI Provider
```python Code
from composio_crewai import ComposioToolSet, App, Action
from composio_crewai import ComposioProvider
from composio import Composio
from crewai import Agent, Task, Crew
toolset = ComposioToolSet()
composio = Composio(provider=ComposioProvider())
```
2. Connect your GitHub account
2. Create a new Composio Session and retrieve the tools
<CodeGroup>
```shell CLI
composio add github
```
```python Code
request = toolset.initiate_connection(app=App.GITHUB)
print(f"Open this URL to authenticate: {request.redirectUrl}")
```python
session = composio.create(
user_id="your-user-id",
toolkits=["gmail", "github"] # optional, default is all toolkits
)
tools = session.tools()
```
Read more about sessions and user management [here](https://docs.composio.dev/docs/configuring-sessions)
</CodeGroup>
3. Get Tools
3. Authenticating users manually
- Retrieving all the tools from an app (not recommended for production):
Composio automatically authenticates users during the agent chat session. However, you can also authenticate a user manually by calling the `authorize` method.
```python Code
tools = toolset.get_tools(apps=[App.GITHUB])
connection_request = session.authorize("github")
print(f"Open this URL to authenticate: {connection_request.redirect_url}")
```
- Filtering tools based on tags:
```python Code
tag = "users"
filtered_action_enums = toolset.find_actions_by_tags(
App.GITHUB,
tags=[tag],
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
- Filtering tools based on use case:
```python Code
use_case = "Star a repository on GitHub"
filtered_action_enums = toolset.find_actions_by_use_case(
App.GITHUB, use_case=use_case, advanced=False
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
<Tip>Set `advanced` to True to get actions for complex use cases</Tip>
- Using specific tools:
In this demo, we will use the `GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER` action from the GitHub app.
```python Code
tools = toolset.get_tools(
actions=[Action.GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER]
)
```
Learn more about filtering actions [here](https://docs.composio.dev/patterns/tools/use-tools/use-specific-actions)
4. Define agent
```python Code
@@ -116,4 +85,4 @@ crew = Crew(agents=[crewai_agent], tasks=[task])
crew.kickoff()
```
* More detailed list of tools can be found [here](https://app.composio.dev)
* More detailed list of tools can be found [here](https://docs.composio.dev/toolkits)

View File

@@ -4,6 +4,106 @@ description: "CrewAI의 제품 업데이트, 개선 사항 및 버그 수정"
icon: "clock"
mode: "wide"
---
<Update label="2026년 2월 27일">
## v1.10.1a1
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.10.1a1)
## 변경 사항
### 기능
- 단계 콜백 메서드에서 비동기 호출 지원 구현
- 메모리 모듈의 무거운 의존성에 대한 지연 로딩 구현
### 문서
- v1.10.0에 대한 변경 로그 및 버전 업데이트
### 리팩토링
- 비동기 호출을 지원하기 위해 단계 콜백 메서드 리팩토링
- 메모리 모듈의 무거운 의존성에 대한 지연 로딩을 구현하기 위해 리팩토링
### 버그 수정
- 릴리스 노트의 분기 수정
## 기여자
@greysonlalonde, @joaomdmoura
</Update>
<Update label="2026년 2월 27일">
## v1.10.1a1
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.10.1a1)
## 변경 사항
### 리팩토링
- 비동기 호출을 지원하기 위해 단계 콜백 메서드 리팩토링
- 메모리 모듈의 무거운 의존성에 대해 지연 로딩 구현
### 문서화
- v1.10.0에 대한 변경 로그 및 버전 업데이트
### 버그 수정
- 릴리스 노트를 위한 브랜치 생성
## 기여자
@greysonlalonde, @joaomdmoura
</Update>
<Update label="2026년 2월 26일">
## v1.10.0
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.10.0)
## 변경 사항
### 기능
- MCP 도구 해석(resolution) 및 관련 이벤트 개선
- lancedb 버전 업데이트 및 lance-namespace 패키지 추가
- CrewAgentExecutor 및 BaseTool에서 JSON 인수 파싱 및 검증 개선
- CLI HTTP 클라이언트를 requests에서 httpx로 마이그레이션
- 버전화된 문서 추가
- 버전 노트에 대한 yanked 감지 추가
- Flows에서 사용자 입력 처리 구현
- 인간 피드백 통합 테스트에서 HITL 자기 루프 기능 개선
- eventbus에 started_event_id 추가 및 설정
- tools.specs 자동 업데이트
### 버그 수정
- 빈 경우에도 도구 kwargs를 검증하여 모호한 TypeError 방지
- LLM을 위한 도구 매개변수 스키마에서 null 타입 유지
- output_pydantic/output_json을 네이티브 구조화된 출력으로 매핑
- 약속이 있는 경우 콜백이 실행/대기되도록 보장
- 예외 컨텍스트에서 메서드 이름 캡처
- 라우터 결과에서 enum 타입 유지; 타입 개선
- 입력으로 지속성 ID가 전달될 때 조용히 깨지는 순환 흐름 수정
- CLI 플래그 형식을 --skip-provider에서 --skip_provider로 수정
- OpenAI 도구 호출 스트림이 완료되도록 보장
- MCP 도구에서 복잡한 스키마 $ref 포인터 해결
- 스키마에서 additionalProperties=false 강제 적용
- 크루 폴더에 대해 예약된 스크립트 이름 거부
- 가드레일 이벤트 방출 테스트에서 경쟁 조건 해결
### 문서
- 비네이티브 LLM 공급자를 위한 litellm 종속성 노트 추가
- NL2SQL 보안 모델 및 강화 지침 명확화
- 9개 통합에서 96개의 누락된 작업 추가
### 리팩토링
- crew를 provider로 리팩토링
- HITL을 provider 패턴으로 추출
- 훅 타이핑 및 등록 개선
## 기여자
@dependabot[bot], @github-actions[bot], @github-code-quality[bot], @greysonlalonde, @heitorado, @hobostay, @joaomdmoura, @johnvan7, @jonathansampson, @lorenzejay, @lucasgomide, @mattatcha, @mplachta, @nicoferdi96, @theCyberTech, @thiagomoretto, @vinibrsl
</Update>
<Update label="2026년 1월 26일">
## v1.9.0

View File

@@ -105,6 +105,15 @@ CrewAI 코드 내에는 사용할 모델을 지정할 수 있는 여러 위치
</Tab>
</Tabs>
<Info>
CrewAI는 OpenAI, Anthropic, Google (Gemini API), Azure, AWS Bedrock에 대해 네이티브 SDK 통합을 제공합니다 — 제공자별 extras(예: `uv add "crewai[openai]"`) 외에 추가 설치가 필요하지 않습니다.
그 외 모든 제공자는 **LiteLLM**을 통해 지원됩니다. 이를 사용하려면 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Info>
## 공급자 구성 예시
CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양한 LLM 공급자를 지원합니다.
@@ -214,6 +223,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
| `meta_llama/Llama-4-Maverick-17B-128E-Instruct-FP8` | 128k | 4028 | 텍스트, 이미지 | 텍스트 |
| `meta_llama/Llama-3.3-70B-Instruct` | 128k | 4028 | 텍스트 | 텍스트 |
| `meta_llama/Llama-3.3-8B-Instruct` | 128k | 4028 | 텍스트 | 텍스트 |
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Anthropic">
@@ -354,6 +368,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
| gemini-1.5-flash | 1M 토큰 | 밸런스 잡힌 멀티모달 모델, 대부분의 작업에 적합 |
| gemini-1.5-flash-8B | 1M 토큰 | 가장 빠르고, 비용 효율적, 고빈도 작업에 적합 |
| gemini-1.5-pro | 2M 토큰 | 최고의 성능, 논리적 추론, 코딩, 창의적 협업 등 다양한 추론 작업에 적합 |
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Azure">
@@ -439,6 +458,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
model="sagemaker/<my-endpoint>"
)
```
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Mistral">
@@ -454,6 +478,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
temperature=0.7
)
```
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Nvidia NIM">
@@ -540,6 +569,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
| rakuten/rakutenai-7b-instruct | 1,024 토큰 | 언어 이해, 추론, 텍스트 생성이 탁월한 최첨단 LLM |
| rakuten/rakutenai-7b-chat | 1,024 토큰 | 언어 이해, 추론, 텍스트 생성이 탁월한 최첨단 LLM |
| baichuan-inc/baichuan2-13b-chat | 4,096 토큰 | 중국어 및 영어 대화, 코딩, 수학, 지시 따르기, 퀴즈 풀이 지원 |
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Local NVIDIA NIM Deployed using WSL2">
@@ -580,6 +614,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
# ...
```
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Groq">
@@ -601,6 +640,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
| Llama 3.1 70B/8B| 131,072 토큰 | 고성능, 대용량 문맥 작업 |
| Llama 3.2 Series| 8,192 토큰 | 범용 작업 |
| Mixtral 8x7B | 32,768 토큰 | 성능과 문맥의 균형 |
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="IBM watsonx.ai">
@@ -623,6 +667,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
base_url="https://api.watsonx.ai/v1"
)
```
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Ollama (Local LLMs)">
@@ -636,6 +685,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
base_url="http://localhost:11434"
)
```
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Fireworks AI">
@@ -651,6 +705,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
temperature=0.7
)
```
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Perplexity AI">
@@ -666,6 +725,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
base_url="https://api.perplexity.ai/"
)
```
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Hugging Face">
@@ -680,6 +744,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct"
)
```
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="SambaNova">
@@ -703,6 +772,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
| Llama 3.2 Series| 8,192 토큰 | 범용, 멀티모달 작업 |
| Llama 3.3 70B | 최대 131,072 토큰 | 고성능, 높은 출력 품질 |
| Qwen2 familly | 8,192 토큰 | 고성능, 높은 출력 품질 |
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Cerebras">
@@ -728,6 +802,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
- 속도와 품질의 우수한 밸런스
- 긴 컨텍스트 윈도우 지원
</Info>
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Open Router">
@@ -750,6 +829,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
- openrouter/deepseek/deepseek-r1
- openrouter/deepseek/deepseek-chat
</Info>
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Nebius AI Studio">
@@ -772,6 +856,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
- 경쟁력 있는 가격
- 속도와 품질의 우수한 밸런스
</Info>
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
</AccordionGroup>

View File

@@ -176,6 +176,11 @@ Crew를 GitHub 저장소에 푸시해야 합니다. 아직 Crew를 만들지 않
![Set Environment Variables](/images/enterprise/set-env-variables.png)
</Frame>
<Info>
프라이빗 Python 패키지를 사용하시나요? 여기에 레지스트리 자격 증명도 추가해야 합니다.
필요한 변수는 [프라이빗 패키지 레지스트리](/ko/enterprise/guides/private-package-registry)를 참조하세요.
</Info>
</Step>
<Step title="Crew 배포하기">

View File

@@ -256,6 +256,12 @@ Crews와 Flows 모두 `src/project_name/main.py`에 진입점이 있습니다:
1. **LLM API 키** (OpenAI, Anthropic, Google 등)
2. **도구 API 키** - 외부 도구를 사용하는 경우 (Serper 등)
<Info>
프로젝트가 **프라이빗 PyPI 레지스트리**의 패키지에 의존하는 경우, 레지스트리 인증 자격 증명도
환경 변수로 구성해야 합니다. 자세한 내용은
[프라이빗 패키지 레지스트리](/ko/enterprise/guides/private-package-registry) 가이드를 참조하세요.
</Info>
<Tip>
구성 문제를 조기에 발견하기 위해 배포 전에 동일한 환경 변수로
로컬에서 프로젝트를 테스트하세요.

View File

@@ -0,0 +1,261 @@
---
title: "프라이빗 패키지 레지스트리"
description: "CrewAI AMP에서 인증된 PyPI 레지스트리의 프라이빗 Python 패키지 설치하기"
icon: "lock"
mode: "wide"
---
<Note>
이 가이드는 CrewAI AMP에 배포할 때 프라이빗 PyPI 레지스트리(Azure DevOps Artifacts, GitHub Packages,
GitLab, AWS CodeArtifact 등)에서 Python 패키지를 설치하도록 CrewAI 프로젝트를 구성하는 방법을 다룹니다.
</Note>
## 이 가이드가 필요한 경우
프로젝트가 공개 PyPI가 아닌 프라이빗 레지스트리에 호스팅된 내부 또는 독점 Python 패키지에
의존하는 경우, 다음을 수행해야 합니다:
1. UV에 패키지를 **어디서** 찾을지 알려줍니다 (index URL)
2. UV에 **어떤** 패키지가 해당 index에서 오는지 알려줍니다 (source 매핑)
3. UV가 설치 중에 인증할 수 있도록 **자격 증명**을 제공합니다
CrewAI AMP는 의존성 해결 및 설치에 [UV](https://docs.astral.sh/uv/)를 사용합니다.
UV는 `pyproject.toml` 구성과 자격 증명용 환경 변수를 결합하여 인증된 프라이빗 레지스트리를 지원합니다.
## 1단계: pyproject.toml 구성
`pyproject.toml`에서 세 가지 요소가 함께 작동합니다:
### 1a. 의존성 선언
프라이빗 패키지를 다른 의존성과 마찬가지로 `[project.dependencies]`에 추가합니다:
```toml
[project]
dependencies = [
"crewai[tools]>=0.100.1,<1.0.0",
"my-private-package>=1.2.0",
]
```
### 1b. index 정의
프라이빗 레지스트리를 `[[tool.uv.index]]` 아래에 명명된 index로 등록합니다:
```toml
[[tool.uv.index]]
name = "my-private-registry"
url = "https://pkgs.dev.azure.com/my-org/_packaging/my-feed/pypi/simple/"
explicit = true
```
<Info>
`name` 필드는 중요합니다 — UV는 이를 사용하여 인증을 위한 환경 변수 이름을
구성합니다 (아래 [2단계](#2단계-인증-자격-증명-설정)를 참조하세요).
`explicit = true`를 설정하면 UV가 모든 패키지에 대해 이 index를 검색하지 않습니다 —
`[tool.uv.sources]`에서 명시적으로 매핑한 패키지만 검색합니다. 이렇게 하면 프라이빗
레지스트리에 대한 불필요한 쿼리를 방지하고 의존성 혼동 공격을 차단할 수 있습니다.
</Info>
### 1c. 패키지를 index에 매핑
`[tool.uv.sources]`를 사용하여 프라이빗 index에서 해결해야 할 패키지를 UV에 알려줍니다:
```toml
[tool.uv.sources]
my-private-package = { index = "my-private-registry" }
```
### 전체 예시
```toml
[project]
name = "my-crew-project"
version = "0.1.0"
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.100.1,<1.0.0",
"my-private-package>=1.2.0",
]
[tool.crewai]
type = "crew"
[[tool.uv.index]]
name = "my-private-registry"
url = "https://pkgs.dev.azure.com/my-org/_packaging/my-feed/pypi/simple/"
explicit = true
[tool.uv.sources]
my-private-package = { index = "my-private-registry" }
```
`pyproject.toml`을 업데이트한 후 lock 파일을 다시 생성합니다:
```bash
uv lock
```
<Warning>
업데이트된 `uv.lock`을 항상 `pyproject.toml` 변경 사항과 함께 커밋하세요.
lock 파일은 배포에 필수입니다 — [배포 준비하기](/ko/enterprise/guides/prepare-for-deployment)를 참조하세요.
</Warning>
## 2단계: 인증 자격 증명 설정
UV는 `pyproject.toml`에서 정의한 index 이름을 기반으로 한 명명 규칙을 따르는
환경 변수를 사용하여 프라이빗 index에 인증합니다:
```
UV_INDEX_{UPPER_NAME}_USERNAME
UV_INDEX_{UPPER_NAME}_PASSWORD
```
여기서 `{UPPER_NAME}`은 index 이름을 **대문자**로 변환하고 **하이픈을 언더스코어로 대체**한 것입니다.
예를 들어, `my-private-registry`라는 이름의 index는 다음을 사용합니다:
| 변수 | 값 |
|------|-----|
| `UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME` | 레지스트리 사용자 이름 또는 토큰 이름 |
| `UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD` | 레지스트리 비밀번호 또는 토큰/PAT |
<Warning>
이 환경 변수는 CrewAI AMP **환경 변수** 설정을 통해 **반드시** 추가해야 합니다 —
전역적으로 또는 배포 수준에서. `.env` 파일에 설정하거나 프로젝트에 하드코딩할 수 없습니다.
아래 [AMP에서 환경 변수 설정](#amp에서-환경-변수-설정)을 참조하세요.
</Warning>
## 레지스트리 제공업체 참조
아래 표는 일반적인 레지스트리 제공업체의 index URL 형식과 자격 증명 값을 보여줍니다.
자리 표시자 값을 실제 조직 및 피드 세부 정보로 대체하세요.
| 제공업체 | Index URL | 사용자 이름 | 비밀번호 |
|---------|-----------|-----------|---------|
| **Azure DevOps Artifacts** | `https://pkgs.dev.azure.com/{org}/_packaging/{feed}/pypi/simple/` | 비어 있지 않은 임의의 문자열 (예: `token`) | Packaging Read 범위의 Personal Access Token (PAT) |
| **GitHub Packages** | `https://pypi.pkg.github.com/{owner}/simple/` | GitHub 사용자 이름 | `read:packages` 범위의 Personal Access Token (classic) |
| **GitLab Package Registry** | `https://gitlab.com/api/v4/projects/{project_id}/packages/pypi/simple/` | `__token__` | `read_api` 범위의 Project 또는 Personal Access Token |
| **AWS CodeArtifact** | `aws codeartifact get-repository-endpoint`의 URL 사용 | `aws` | `aws codeartifact get-authorization-token`의 토큰 |
| **Google Artifact Registry** | `https://{region}-python.pkg.dev/{project}/{repo}/simple/` | `_json_key_base64` | Base64로 인코딩된 서비스 계정 키 |
| **JFrog Artifactory** | `https://{instance}.jfrog.io/artifactory/api/pypi/{repo}/simple/` | 사용자 이름 또는 이메일 | API 키 또는 ID 토큰 |
| **자체 호스팅 (devpi, Nexus 등)** | 레지스트리의 simple API URL | 레지스트리 사용자 이름 | 레지스트리 비밀번호 |
<Tip>
**AWS CodeArtifact**의 경우 인증 토큰이 주기적으로 만료됩니다.
만료되면 `UV_INDEX_*_PASSWORD` 값을 갱신해야 합니다.
CI/CD 파이프라인에서 이를 자동화하는 것을 고려하세요.
</Tip>
## AMP에서 환경 변수 설정
프라이빗 레지스트리 자격 증명은 CrewAI AMP에서 환경 변수로 구성해야 합니다.
두 가지 옵션이 있습니다:
<Tabs>
<Tab title="웹 인터페이스">
1. [CrewAI AMP](https://app.crewai.com)에 로그인합니다
2. 자동화로 이동합니다
3. **Environment Variables** 탭을 엽니다
4. 각 변수 (`UV_INDEX_*_USERNAME` 및 `UV_INDEX_*_PASSWORD`)에 값을 추가합니다
자세한 내용은 [AMP에 배포하기 — 환경 변수 설정하기](/ko/enterprise/guides/deploy-to-amp#환경-변수-설정하기) 단계를 참조하세요.
</Tab>
<Tab title="CLI 배포">
`crewai deploy create`를 실행하기 전에 로컬 `.env` 파일에 변수를 추가합니다.
CLI가 이를 안전하게 플랫폼으로 전송합니다:
```bash
# .env
OPENAI_API_KEY=sk-...
UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME=token
UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD=your-pat-here
```
```bash
crewai deploy create
```
</Tab>
</Tabs>
<Warning>
자격 증명을 저장소에 **절대** 커밋하지 마세요. 모든 비밀 정보에는 AMP 환경 변수를 사용하세요.
`.env` 파일은 `.gitignore`에 포함되어야 합니다.
</Warning>
기존 배포의 자격 증명을 업데이트하려면 [Crew 업데이트하기 — 환경 변수](/ko/enterprise/guides/update-crew)를 참조하세요.
## 전체 동작 흐름
CrewAI AMP가 자동화를 빌드할 때, 해결 흐름은 다음과 같이 작동합니다:
<Steps>
<Step title="빌드 시작">
AMP가 저장소를 가져오고 `pyproject.toml`과 `uv.lock`을 읽습니다.
</Step>
<Step title="UV가 의존성 해결">
UV가 `[tool.uv.sources]`를 읽어 각 패키지가 어떤 index에서 와야 하는지 결정합니다.
</Step>
<Step title="UV가 인증">
각 프라이빗 index에 대해 UV가 AMP에서 구성한 환경 변수에서
`UV_INDEX_{NAME}_USERNAME`과 `UV_INDEX_{NAME}_PASSWORD`를 조회합니다.
</Step>
<Step title="패키지 설치">
UV가 공개(PyPI) 및 프라이빗(레지스트리) 패키지를 모두 다운로드하고 설치합니다.
</Step>
<Step title="자동화 실행">
모든 의존성이 사용 가능한 상태에서 crew 또는 flow가 시작됩니다.
</Step>
</Steps>
## 문제 해결
### 빌드 중 인증 오류
**증상**: 프라이빗 패키지를 해결할 때 `401 Unauthorized` 또는 `403 Forbidden`으로 빌드가 실패합니다.
**확인사항**:
- `UV_INDEX_*` 환경 변수 이름이 index 이름과 정확히 일치하는지 확인합니다 (대문자, 하이픈 -> 언더스코어)
- 자격 증명이 로컬 `.env`뿐만 아니라 AMP 환경 변수에 설정되어 있는지 확인합니다
- 토큰/PAT에 패키지 피드에 필요한 읽기 권한이 있는지 확인합니다
- 토큰이 만료되지 않았는지 확인합니다 (특히 AWS CodeArtifact의 경우)
### 패키지를 찾을 수 없음
**증상**: `No matching distribution found for my-private-package`.
**확인사항**:
- `pyproject.toml`의 index URL이 `/simple/`로 끝나는지 확인합니다
- `[tool.uv.sources]` 항목이 올바른 패키지 이름을 올바른 index 이름에 매핑하는지 확인합니다
- 패키지가 실제로 프라이빗 레지스트리에 게시되어 있는지 확인합니다
- 동일한 자격 증명으로 로컬에서 `uv lock`을 실행하여 해결이 작동하는지 확인합니다
### Lock 파일 충돌
**증상**: 프라이빗 index를 추가한 후 `uv lock`이 실패하거나 예상치 못한 결과를 생성합니다.
**해결책**: 로컬에서 자격 증명을 설정하고 다시 생성합니다:
```bash
export UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME=token
export UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD=your-pat
uv lock
```
그런 다음 업데이트된 `uv.lock`을 커밋합니다.
## 관련 가이드
<CardGroup cols={3}>
<Card title="배포 준비하기" icon="clipboard-check" href="/ko/enterprise/guides/prepare-for-deployment">
배포 전에 프로젝트 구조와 의존성을 확인합니다.
</Card>
<Card title="AMP에 배포하기" icon="rocket" href="/ko/enterprise/guides/deploy-to-amp">
crew 또는 flow를 배포하고 환경 변수를 구성합니다.
</Card>
<Card title="Crew 업데이트하기" icon="arrows-rotate" href="/ko/enterprise/guides/update-crew">
환경 변수를 업데이트하고 실행 중인 배포에 변경 사항을 푸시합니다.
</Card>
</CardGroup>

View File

@@ -7,7 +7,7 @@ mode: "wide"
## CrewAI를 LLM에 연결하기
CrewAI는 LiteLLM을 사용하여 다양한 언어 모델(LLM)에 연결합니다. 이 통합은 높은 다양성을 제공하여, 여러 공급자의 모델을 간단하고 통합된 인터페이스로 사용할 수 있게 해줍니다.
CrewAI는 가장 인기 있는 제공자(OpenAI, Anthropic, Google Gemini, Azure, AWS Bedrock)에 대해 네이티브 SDK 통합을 통해 LLM에 연결하며, 그 외 모든 제공자에 대해서는 LiteLLM을 유연한 폴백으로 사용합니다.
<Note>
기본적으로 CrewAI는 `gpt-4o-mini` 모델을 사용합니다. 이는 `OPENAI_MODEL_NAME` 환경 변수에 의해 결정되며, 설정되지 않은 경우 기본값은 "gpt-4o-mini"입니다.
@@ -41,6 +41,14 @@ LiteLLM은 다음을 포함하되 이에 국한되지 않는 다양한 프로바
지원되는 프로바이더의 전체 및 최신 목록은 [LiteLLM 프로바이더 문서](https://docs.litellm.ai/docs/providers)를 참조하세요.
<Info>
네이티브 통합에서 지원하지 않는 제공자를 사용하려면 LiteLLM을 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
네이티브 제공자(OpenAI, Anthropic, Google Gemini, Azure, AWS Bedrock)는 자체 SDK extras를 사용합니다 — [공급자 구성 예시](/ko/concepts/llms#공급자-구성-예시)를 참조하세요.
</Info>
## Changing the LLM
To use a different LLM with your CrewAI agents, you have several options:

View File

@@ -35,7 +35,7 @@ crewai login
If you haven't already, install CrewAI together with the CLI tools:
```bash
uv add crewai[tools]
uv add 'crewai[tools]'
```
Then authenticate the CLI with your CrewAI AMP account:

View File

@@ -18,77 +18,46 @@ Composio는 AI 에이전트를 250개 이상의 도구와 연결할 수 있는
Follow the instructions below to integrate Composio tools into your project:
```shell
pip install composio-crewai
pip install composio composio-crewai
pip install crewai
```
Once the installation is complete, run `composio login` or export your Composio API key as `COMPOSIO_API_KEY`. You can get your Composio API key [here](https://app.composio.dev).
Once the installation is complete, set your Composio API key as `COMPOSIO_API_KEY`. You can get your Composio API key [here](https://platform.composio.dev).
## Example
The following example demonstrates how to initialize the tools and run a github action:
The following example demonstrates how to initialize the tools and run a GitHub action:
1. Initialize the Composio toolset
1. Initialize Composio with the CrewAI Provider
```python Code
from composio_crewai import ComposioToolSet, App, Action
from composio_crewai import ComposioProvider
from composio import Composio
from crewai import Agent, Task, Crew
toolset = ComposioToolSet()
composio = Composio(provider=ComposioProvider())
```
2. Connect your GitHub account
2. Create a new Composio session and fetch the tools
<CodeGroup>
```shell CLI
composio add github
```
```python Code
request = toolset.initiate_connection(app=App.GITHUB)
print(f"Open this URL to authenticate: {request.redirectUrl}")
```python
session = composio.create(
user_id="your-user-id",
toolkits=["gmail", "github"] # optional, default is all toolkits
)
tools = session.tools()
```
Read more about sessions and user management [here](https://docs.composio.dev/docs/configuring-sessions).
</CodeGroup>
3. Fetch tools
3. Manually authenticate users
- Fetching all tools from an app (not recommended in production):
Composio authenticates users automatically during the agent chat session. However, you can also authenticate a user manually by calling the `authorize` method.
```python Code
tools = toolset.get_tools(apps=[App.GITHUB])
connection_request = session.authorize("github")
print(f"Open this URL to authenticate: {connection_request.redirect_url}")
```
- Filtering tools based on tags:
```python Code
tag = "users"
filtered_action_enums = toolset.find_actions_by_tags(
App.GITHUB,
tags=[tag],
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
- Filtering tools based on a use case:
```python Code
use_case = "Star a repository on GitHub"
filtered_action_enums = toolset.find_actions_by_use_case(
App.GITHUB, use_case=use_case, advanced=False
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
<Tip>Set `advanced` to True to get actions for complex use cases</Tip>
- Using specific tools:
In this demo, we use the `GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER` action from the GitHub app.
```python Code
tools = toolset.get_tools(
actions=[Action.GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER]
)
```
Learn more about filtering actions [here](https://docs.composio.dev/patterns/tools/use-tools/use-specific-actions).
4. Define the agent
```python Code
@@ -116,4 +85,4 @@ crew = Crew(agents=[crewai_agent], tasks=[task])
crew.kickoff()
```
* A more detailed list of tools can be found [here](https://app.composio.dev)
* A more detailed list of tools can be found [here](https://docs.composio.dev/toolkits)

View File

@@ -4,6 +4,106 @@ description: "Atualizações de produto, melhorias e correções do CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="27 fev 2026">
## v1.10.1a1
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.1a1)
## O que Mudou
### Funcionalidades
- Implementar suporte a invocação assíncrona em métodos de callback de etapas
- Implementar carregamento sob demanda para dependências pesadas no módulo de Memória
### Documentação
- Atualizar changelog e versão para v1.10.0
### Refatoração
- Refatorar métodos de callback de etapas para suportar invocação assíncrona
- Refatorar para implementar carregamento sob demanda para dependências pesadas no módulo de Memória
### Correções de Bugs
- Corrigir branch para notas de lançamento
## Contribuidores
@greysonlalonde, @joaomdmoura
</Update>
<Update label="27 fev 2026">
## v1.10.1a1
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.1a1)
## O que Mudou
### Refatoração
- Refatorar métodos de callback de etapas para suportar invocação assíncrona
- Implementar carregamento sob demanda para dependências pesadas no módulo de Memória
### Documentação
- Atualizar changelog e versão para v1.10.0
### Correções de Bugs
- Criar branch para notas de lançamento
## Contribuidores
@greysonlalonde, @joaomdmoura
</Update>
<Update label="26 fev 2026">
## v1.10.0
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.0)
## O que Mudou
### Recursos
- Aprimorar a resolução da ferramenta MCP e eventos relacionados
- Atualizar a versão do lancedb e adicionar pacotes lance-namespace
- Aprimorar a análise e validação de argumentos JSON no CrewAgentExecutor e BaseTool
- Migrar o cliente HTTP da CLI de requests para httpx
- Adicionar documentação versionada
- Adicionar detecção de versões removidas para notas de versão
- Implementar tratamento de entrada do usuário em Flows
- Aprimorar a funcionalidade de auto-loop HITL nos testes de integração de feedback humano
- Adicionar started_event_id e definir no eventbus
- Atualizar automaticamente tools.specs
### Correções de Bugs
- Validar kwargs da ferramenta mesmo quando vazios para evitar TypeError crípticos
- Preservar tipos nulos nos esquemas de parâmetros da ferramenta para LLM
- Mapear output_pydantic/output_json para saída estruturada nativa
- Garantir que callbacks sejam executados/aguardados se forem promessas
- Capturar o nome do método no contexto da exceção
- Preservar tipo enum no resultado do roteador; melhorar tipos
- Corrigir fluxos cíclicos que quebram silenciosamente quando o ID de persistência é passado nas entradas
- Corrigir o formato da flag da CLI de --skip-provider para --skip_provider
- Garantir que o fluxo de chamada da ferramenta OpenAI seja finalizado
- Resolver ponteiros $ref de esquema complexos nas ferramentas MCP
- Impor additionalProperties=false nos esquemas
- Rejeitar nomes de scripts reservados para pastas de equipe
- Resolver condição de corrida no teste de emissão de eventos de guardrail
### Documentação
- Adicionar nota de dependência litellm para provedores de LLM não nativos
- Esclarecer o modelo de segurança NL2SQL e orientações de fortalecimento
- Adicionar 96 ações ausentes em 9 integrações
### Refatoração
- Refatorar crew para provider
- Extrair HITL para padrão de provider
- Melhorar tipagem e registro de hooks
## Contribuidores
@dependabot[bot], @github-actions[bot], @github-code-quality[bot], @greysonlalonde, @heitorado, @hobostay, @joaomdmoura, @johnvan7, @jonathansampson, @lorenzejay, @lucasgomide, @mattatcha, @mplachta, @nicoferdi96, @theCyberTech, @thiagomoretto, @vinibrsl
</Update>
<Update label="26 jan 2026">
## v1.9.0

View File

@@ -105,6 +105,15 @@ Existem diferentes locais no código do CrewAI onde você pode especificar o mod
</Tab>
</Tabs>
<Info>
CrewAI offers native SDK integrations for OpenAI, Anthropic, Google (Gemini API), Azure, and AWS Bedrock — no extra installation is needed beyond the provider-specific extras (e.g. `uv add "crewai[openai]"`).
All other providers are powered by **LiteLLM**. If you plan to use one of them, add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Info>
## Provider Configuration Examples
CrewAI supports a wide variety of LLM providers, each with unique features, authentication methods, and model capabilities.
@@ -214,6 +223,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
| `meta_llama/Llama-4-Maverick-17B-128E-Instruct-FP8` | 128k | 4028 | Text, Image | Text |
| `meta_llama/Llama-3.3-70B-Instruct` | 128k | 4028 | Text | Text |
| `meta_llama/Llama-3.3-8B-Instruct` | 128k | 4028 | Text | Text |
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Anthropic">
@@ -354,6 +368,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Faster and more cost-efficient, suited to high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performance across a wide range of reasoning tasks, including logic, coding, and creative collaboration |
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Azure">
@@ -438,6 +457,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
model="sagemaker/<my-endpoint>"
)
```
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Mistral">
@@ -453,6 +477,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
temperature=0.7
)
```
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Nvidia NIM">
@@ -539,6 +568,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
| rakuten/rakutenai-7b-instruct | 1,024 tokens | Top-tier LLM for text understanding, reasoning, and generation.|
| rakuten/rakutenai-7b-chat | 1,024 tokens | Top-tier LLM for text understanding, reasoning, and generation.|
| baichuan-inc/baichuan2-13b-chat | 4,096 tokens | Supports Chinese/English chat, coding, math, instruction following, and quiz solving.|
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Local NVIDIA NIM Deployed using WSL2">
@@ -579,6 +613,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
# ...
```
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Groq">
@@ -600,6 +639,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
| Llama 3.1 70B/8B | 131,072 tokens | High performance and large-context tasks |
| Llama 3.2 Series | 8,192 tokens | General-purpose tasks |
| Mixtral 8x7B | 32,768 tokens | Balance of performance and context |
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="IBM watsonx.ai">
@@ -622,6 +666,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
base_url="https://api.watsonx.ai/v1"
)
```
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Ollama (LLMs Locais)">
@@ -635,6 +684,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
base_url="http://localhost:11434"
)
```
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Fireworks AI">
@@ -650,6 +704,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
temperature=0.7
)
```
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Perplexity AI">
@@ -665,6 +724,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
base_url="https://api.perplexity.ai/"
)
```
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Hugging Face">
@@ -679,6 +743,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct"
)
```
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="SambaNova">
@@ -702,6 +771,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
| Llama 3.2 Series | 8,192 tokens | General-purpose and multimodal tasks |
| Llama 3.3 70B | Up to 131,072 tokens | High performance and output quality |
| Qwen2 Family | 8,192 tokens | High performance and output quality |
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Cerebras">
@@ -727,6 +801,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
- Balance of speed and quality
- Support for long context windows
</Info>
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Open Router">
@@ -749,6 +828,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
- openrouter/deepseek/deepseek-r1
- openrouter/deepseek/deepseek-chat
</Info>
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
</AccordionGroup>

View File

@@ -176,6 +176,11 @@ Você precisa enviar seu crew para um repositório do GitHub. Caso ainda não te
![Set Environment Variables](/images/enterprise/set-env-variables.png)
</Frame>
<Info>
Using private Python packages? You'll also need to add your registry credentials here.
See [Private Package Registries](/pt-BR/enterprise/guides/private-package-registry) for the required variables.
</Info>
</Step>
<Step title="Implante Seu Crew">

View File

@@ -256,6 +256,12 @@ Antes da implantação, certifique-se de ter:
1. **LLM API keys** ready (OpenAI, Anthropic, Google, etc.)
2. **Tool API keys** if you are using external tools (Serper, etc.)
<Info>
If your project depends on packages from a **private PyPI registry**, you'll also need to configure
the registry's authentication credentials as environment variables. See the
[Private Package Registries](/pt-BR/enterprise/guides/private-package-registry) guide for details.
</Info>
<Tip>
Test your project locally with the same environment variables before deploying
to catch configuration issues early.

View File

@@ -0,0 +1,263 @@
---
title: "Private Package Registries"
description: "Install private Python packages from authenticated PyPI registries in CrewAI AMP"
icon: "lock"
mode: "wide"
---
<Note>
This guide covers how to configure your CrewAI project to install Python packages
from private PyPI registries (Azure DevOps Artifacts, GitHub Packages, GitLab, AWS CodeArtifact, etc.)
when deploying to CrewAI AMP.
</Note>
## When You Need This
If your project depends on internal or proprietary Python packages hosted in a private registry
rather than on public PyPI, you need to:
1. Tell UV **where** to find the package (an index URL)
2. Tell UV **which** packages come from that index (a source mapping)
3. Provide **credentials** so UV can authenticate during installation
CrewAI AMP uses [UV](https://docs.astral.sh/uv/) for dependency resolution and installation.
UV supports authenticated private registries through `pyproject.toml` configuration combined
with environment variables for credentials.
## Step 1: Configure pyproject.toml
Three pieces work together in your `pyproject.toml`:
### 1a. Declare the dependency
Add the private package to `[project.dependencies]` like any other dependency:
```toml
[project]
dependencies = [
"crewai[tools]>=0.100.1,<1.0.0",
"my-private-package>=1.2.0",
]
```
### 1b. Define the index
Register your private registry as a named index under `[[tool.uv.index]]`:
```toml
[[tool.uv.index]]
name = "my-private-registry"
url = "https://pkgs.dev.azure.com/my-org/_packaging/my-feed/pypi/simple/"
explicit = true
```
<Info>
The `name` field matters — UV uses it to build the environment variable names
for authentication (see [Step 2](#passo-2-configurar-credenciais-de-autenticação) below).
Setting `explicit = true` means UV won't consult this index for every package — only
the ones you explicitly map in `[tool.uv.sources]`. This avoids unnecessary queries
to your private registry and protects against dependency confusion attacks.
</Info>
### 1c. Map the package to the index
Tell UV which packages should be resolved from your private index using `[tool.uv.sources]`:
```toml
[tool.uv.sources]
my-private-package = { index = "my-private-registry" }
```
### Complete example
```toml
[project]
name = "my-crew-project"
version = "0.1.0"
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.100.1,<1.0.0",
"my-private-package>=1.2.0",
]
[tool.crewai]
type = "crew"
[[tool.uv.index]]
name = "my-private-registry"
url = "https://pkgs.dev.azure.com/my-org/_packaging/my-feed/pypi/simple/"
explicit = true
[tool.uv.sources]
my-private-package = { index = "my-private-registry" }
```
After updating `pyproject.toml`, regenerate your lock file:
```bash
uv lock
```
<Warning>
Always commit the updated `uv.lock` together with your `pyproject.toml` changes.
The lock file is required for deployment — see [Prepare for Deployment](/pt-BR/enterprise/guides/prepare-for-deployment).
</Warning>
## Step 2: Configure Authentication Credentials
UV authenticates against private indexes using environment variables that follow a naming
convention based on the index name you defined in `pyproject.toml`:
```
UV_INDEX_{UPPER_NAME}_USERNAME
UV_INDEX_{UPPER_NAME}_PASSWORD
```
where `{UPPER_NAME}` is your index name converted to **uppercase** with **hyphens replaced by underscores**.
For example, an index named `my-private-registry` uses:
| Variable | Value |
|----------|-------|
| `UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME` | Your registry username or token name |
| `UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD` | Your registry password or token/PAT |
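As a quick illustration of the naming rule, the two variable names can be derived programmatically. The helper below is our own sketch, not part of UV:
```python
def uv_index_env_vars(index_name: str) -> tuple[str, str]:
    """Derive UV's credential variable names from a [[tool.uv.index]] name."""
    upper = index_name.upper().replace("-", "_")
    return f"UV_INDEX_{upper}_USERNAME", f"UV_INDEX_{upper}_PASSWORD"

# uv_index_env_vars("my-private-registry") returns:
# ("UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME", "UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD")
```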
<Warning>
These environment variables **must** be added through CrewAI AMP's **Environment Variables** settings —
either globally or at the deployment level. They cannot be set in `.env` files or hardcoded in your project.
See [Setting Environment Variables in AMP](#configurar-variáveis-de-ambiente-no-amp) below.
</Warning>
## Registry Provider Reference
The table below shows the index URL format and credential values for common registry providers.
Replace the example values with your organization's actual details and feed.
| Provider | Index URL | Username | Password |
|----------|-----------|----------|----------|
| **Azure DevOps Artifacts** | `https://pkgs.dev.azure.com/{org}/_packaging/{feed}/pypi/simple/` | Any non-empty string (e.g. `token`) | Personal Access Token (PAT) with Packaging Read scope |
| **GitHub Packages** | `https://pypi.pkg.github.com/{owner}/simple/` | GitHub username | Personal Access Token (classic) with `read:packages` scope |
| **GitLab Package Registry** | `https://gitlab.com/api/v4/projects/{project_id}/packages/pypi/simple/` | `__token__` | Project or Personal Access Token with `read_api` scope |
| **AWS CodeArtifact** | Use the URL from `aws codeartifact get-repository-endpoint` | `aws` | Token from `aws codeartifact get-authorization-token` |
| **Google Artifact Registry** | `https://{region}-python.pkg.dev/{project}/{repo}/simple/` | `_json_key_base64` | Base64-encoded service account key |
| **JFrog Artifactory** | `https://{instance}.jfrog.io/artifactory/api/pypi/{repo}/simple/` | Username or email | API key or identity token |
| **Self-hosted (devpi, Nexus, etc.)** | Your registry's simple API URL | Registry username | Registry password |
<Tip>
For **AWS CodeArtifact**, the authorization token expires periodically.
You'll need to refresh the `UV_INDEX_*_PASSWORD` value when it expires.
Consider automating this in your CI/CD pipeline (a sketch follows below).
</Tip>
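As a sketch of such automation (the domain, account ID, and region are placeholders), a scheduled CI job could refresh the token with the AWS CLI and re-export the password variable:
```bash
# Placeholders: replace my-domain / 111122223333 / us-east-1 with your values
TOKEN=$(aws codeartifact get-authorization-token \
  --domain my-domain \
  --domain-owner 111122223333 \
  --region us-east-1 \
  --query authorizationToken \
  --output text)
export UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD="$TOKEN"
```
In AMP, you would then update the corresponding environment variable with the new value.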
## Setting Environment Variables in AMP
Private registry credentials must be configured as environment variables in CrewAI AMP.
You have two options:
<Tabs>
<Tab title="Web Interface">
1. Log in to [CrewAI AMP](https://app.crewai.com)
2. Navigate to your automation
3. Open the **Environment Variables** tab
4. Add each variable (`UV_INDEX_*_USERNAME` and `UV_INDEX_*_PASSWORD`) with its value
See the [Deploy to AMP — Set Environment Variables](/pt-BR/enterprise/guides/deploy-to-amp#definir-as-variáveis-de-ambiente) step for details.
</Tab>
<Tab title="CLI Deployment">
Add the variables to your local `.env` file before running `crewai deploy create`.
The CLI will transfer them securely to the platform:
```bash
# .env
OPENAI_API_KEY=sk-...
UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME=token
UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD=your-pat-here
```
```bash
crewai deploy create
```
</Tab>
</Tabs>
<Warning>
**Never** commit credentials to your repository. Use AMP environment variables for all secrets.
Your `.env` file should be listed in `.gitignore`.
</Warning>
To update credentials on an existing deployment, see [Update Your Crew — Environment Variables](/pt-BR/enterprise/guides/update-crew).
## How It All Fits Together
When CrewAI AMP builds your automation, the resolution flow works like this:
<Steps>
<Step title="Build starts">
AMP fetches your repository and reads `pyproject.toml` and `uv.lock`.
</Step>
<Step title="UV resolves dependencies">
UV reads `[tool.uv.sources]` to determine which index each package should come from.
</Step>
<Step title="UV authenticates">
For each private index, UV looks up `UV_INDEX_{NAME}_USERNAME` and `UV_INDEX_{NAME}_PASSWORD`
from the environment variables you configured in AMP.
</Step>
<Step title="Packages are installed">
UV downloads and installs all packages — both public (from PyPI) and private (from your registry).
</Step>
<Step title="Automation runs">
Your crew or flow starts with all dependencies available.
</Step>
</Steps>
## Troubleshooting
### Authentication Errors During the Build
**Symptom**: The build fails with `401 Unauthorized` or `403 Forbidden` while resolving a private package.
**Check**:
- The `UV_INDEX_*` environment variable names match your index name exactly (uppercase, hyphens -> underscores)
- The credentials are set in AMP environment variables, not only in a local `.env`
- Your token/PAT has the read permissions required for the package feed
- The token has not expired (especially relevant for AWS CodeArtifact)
### Package Not Found
**Symptom**: `No matching distribution found for my-private-package`.
**Check**:
- The index URL in `pyproject.toml` ends with `/simple/`
- The `[tool.uv.sources]` entry maps the correct package name to the correct index name
- The package is actually published to your private registry
- Run `uv lock` locally with the same credentials to verify that resolution works
### Lock File Conflicts
**Symptom**: `uv lock` fails or produces unexpected results after adding a private index.
**Fix**: Set the credentials locally and regenerate:
```bash
export UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME=token
export UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD=your-pat
uv lock
```
Then commit the updated `uv.lock`.
## Related Guides
<CardGroup cols={3}>
<Card title="Prepare for Deployment" icon="clipboard-check" href="/pt-BR/enterprise/guides/prepare-for-deployment">
Verify your project structure and dependencies before deploying.
</Card>
<Card title="Deploy to AMP" icon="rocket" href="/pt-BR/enterprise/guides/deploy-to-amp">
Deploy your crew or flow and configure environment variables.
</Card>
<Card title="Update Your Crew" icon="arrows-rotate" href="/pt-BR/enterprise/guides/update-crew">
Update environment variables and push changes to a running deployment.
</Card>
</CardGroup>

View File

@@ -7,7 +7,7 @@ mode: "wide"
## Connect CrewAI to LLMs
CrewAI uses LiteLLM to connect to a wide variety of Language Models (LLMs). This integration provides great versatility, letting you use models from numerous providers through a simple, unified interface.
CrewAI connects to LLMs through native SDK integrations for the most popular providers (OpenAI, Anthropic, Google Gemini, Azure, and AWS Bedrock), and uses LiteLLM as a flexible fallback for all other providers.
<Note>
By default, CrewAI uses the `gpt-4o-mini` model. This is determined by the `OPENAI_MODEL_NAME` environment variable, which defaults to "gpt-4o-mini" if not set.
@@ -40,6 +40,14 @@ O LiteLLM oferece suporte a uma ampla gama de provedores, incluindo, mas não se
For a complete and always up-to-date list of supported providers, see the [LiteLLM providers documentation](https://docs.litellm.ai/docs/providers).
<Info>
To use any provider not covered by a native integration, add LiteLLM as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
Native providers (OpenAI, Anthropic, Google Gemini, Azure, AWS Bedrock) use their own SDK extras — see the [provider configuration examples](/pt-BR/concepts/llms#exemplos-de-configuração-de-provedores).
</Info>
## Changing the LLM
To use a different LLM with your CrewAI agents, you have several options:

View File

@@ -11,84 +11,53 @@ mode: "wide"
Composio is an integration platform that lets you connect your AI agents to 250+ tools. Key features include:
- **Enterprise-Grade Authentication**: Built-in support for OAuth, API Keys, and JWT with automatic token refresh
- **Full Observability**: Detailed tool-usage logs, execution records, and more
- **Full Observability**: Detailed tool-usage logs, execution timestamps, and more
## Installation
To incorporate Composio tools into your project, follow the instructions below:
```shell
pip install composio-crewai
pip install composio composio-crewai
pip install crewai
```
Once the installation is complete, run `composio login` or export your Composio API key as `COMPOSIO_API_KEY`. Get your Composio API key [here](https://app.composio.dev)
Once the installation is complete, set your Composio API key as `COMPOSIO_API_KEY`. Get your Composio API key [here](https://platform.composio.dev)
## Example
The following example demonstrates how to initialize the tool and run a github action:
The following example demonstrates how to initialize the tool and run a GitHub action:
1. Initialize the Composio toolset
1. Initialize Composio with the CrewAI Provider
```python Code
from composio_crewai import ComposioToolSet, App, Action
from composio_crewai import ComposioProvider
from composio import Composio
from crewai import Agent, Task, Crew
toolset = ComposioToolSet()
composio = Composio(provider=ComposioProvider())
```
2. Connect your GitHub account
2. Create a new Composio session and retrieve the tools
<CodeGroup>
```shell CLI
composio add github
```
```python Code
request = toolset.initiate_connection(app=App.GITHUB)
print(f"Open this URL to authenticate: {request.redirectUrl}")
```python
session = composio.create(
user_id="your-user-id",
toolkits=["gmail", "github"] # optional, default is all toolkits
)
tools = session.tools()
```
Read more about sessions and user management [here](https://docs.composio.dev/docs/configuring-sessions)
</CodeGroup>
3. Fetch tools
3. Manually authenticate users
- Retrieving all tools from an app (not recommended in production):
Composio authenticates users automatically during the agent chat session. However, you can also authenticate the user manually by calling the `authorize` method.
```python Code
tools = toolset.get_tools(apps=[App.GITHUB])
connection_request = session.authorize("github")
print(f"Open this URL to authenticate: {connection_request.redirect_url}")
```
- Filtering tools based on tags:
```python Code
tag = "users"
filtered_action_enums = toolset.find_actions_by_tags(
App.GITHUB,
tags=[tag],
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
- Filtering tools based on the use case:
```python Code
use_case = "Star a repository on GitHub"
filtered_action_enums = toolset.find_actions_by_use_case(
App.GITHUB, use_case=use_case, advanced=False
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
<Tip>Set `advanced` to True to get actions for complex use cases</Tip>
- Using specific tools:
In this example, we use the `GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER` action from the GitHub app.
```python Code
tools = toolset.get_tools(
actions=[Action.GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER]
)
```
Learn more about filtering actions [here](https://docs.composio.dev/patterns/tools/use-tools/use-specific-actions)
4. Define the agent
```python Code
@@ -116,4 +85,4 @@ crew = Crew(agents=[crewai_agent], tasks=[task])
crew.kickoff()
```
* A more detailed list of tools can be found [here](https://app.composio.dev)
* A more detailed list of tools can be found [here](https://docs.composio.dev/toolkits)

View File

@@ -8,8 +8,8 @@ authors = [
]
requires-python = ">=3.10, <3.14"
dependencies = [
"Pillow~=10.4.0",
"pypdf~=4.0.0",
"Pillow~=12.1.1",
"pypdf~=6.7.4",
"python-magic>=0.4.27",
"aiocache~=0.12.3",
"aiofiles~=24.1.0",

View File

@@ -152,4 +152,4 @@ __all__ = [
"wrap_file_source",
]
__version__ = "1.9.3"
__version__ = "1.10.1a1"

View File

@@ -8,12 +8,10 @@ authors = [
]
requires-python = ">=3.10, <3.14"
dependencies = [
"lancedb~=0.5.4",
"pytube~=15.0.0",
"requests~=2.32.5",
"docker~=7.1.0",
"crewai==1.9.3",
"lancedb~=0.5.4",
"crewai==1.10.1a1",
"tiktoken~=0.8.0",
"beautifulsoup4~=4.13.4",
"python-docx~=1.2.0",

View File

@@ -291,4 +291,4 @@ __all__ = [
"ZapierActionTools",
]
__version__ = "1.9.3"
__version__ = "1.10.1a1"

View File

@@ -20117,18 +20117,6 @@
"humanized_name": "Web Automation Tool",
"init_params_schema": {
"$defs": {
"AvailableModel": {
"enum": [
"gpt-4o",
"gpt-4o-mini",
"claude-3-5-sonnet-latest",
"claude-3-7-sonnet-latest",
"computer-use-preview",
"gemini-2.0-flash"
],
"title": "AvailableModel",
"type": "string"
},
"EnvVar": {
"properties": {
"default": {
@@ -20206,17 +20194,6 @@
"default": null,
"title": "Model Api Key"
},
"model_name": {
"anyOf": [
{
"$ref": "#/$defs/AvailableModel"
},
{
"type": "null"
}
],
"default": "claude-3-7-sonnet-latest"
},
"project_id": {
"anyOf": [
{

View File

@@ -38,10 +38,11 @@ dependencies = [
"json5~=0.10.0",
"portalocker~=2.7.0",
"pydantic-settings~=2.10.1",
"httpx~=0.28.1",
"mcp~=1.26.0",
"uv~=0.9.13",
"aiosqlite~=0.21.0",
"lancedb>=0.4.0",
"lancedb>=0.29.2",
]
[project.urls]
@@ -52,7 +53,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
"crewai-tools==1.9.3",
"crewai-tools==1.10.1a1",
]
embeddings = [
"tiktoken~=0.8.0"
@@ -65,7 +66,7 @@ openpyxl = [
]
mem0 = ["mem0ai~=0.1.94"]
docling = [
"docling~=2.63.0",
"docling~=2.75.0",
]
qdrant = [
"qdrant-client[fastembed]~=1.14.3",

View File

@@ -10,7 +10,6 @@ from crewai.flow.flow import Flow
from crewai.knowledge.knowledge import Knowledge
from crewai.llm import LLM
from crewai.llms.base_llm import BaseLLM
from crewai.memory.unified_memory import Memory
from crewai.process import Process
from crewai.task import Task
from crewai.tasks.llm_guardrail import LLMGuardrail
@@ -41,7 +40,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
__version__ = "1.9.3"
__version__ = "1.10.1a1"
_telemetry_submitted = False
@@ -72,6 +71,25 @@ def _track_install_async() -> None:
_track_install_async()
_LAZY_IMPORTS: dict[str, tuple[str, str]] = {
"Memory": ("crewai.memory.unified_memory", "Memory"),
}
def __getattr__(name: str) -> Any:
"""Lazily import heavy modules (e.g. Memory → lancedb) on first access."""
if name in _LAZY_IMPORTS:
module_path, attr = _LAZY_IMPORTS[name]
import importlib
mod = importlib.import_module(module_path)
val = getattr(mod, attr)
globals()[name] = val
return val
raise AttributeError(f"module 'crewai' has no attribute {name!r}")
__all__ = [
"LLM",
"Agent",

View File

@@ -8,11 +8,9 @@ import time
from typing import (
TYPE_CHECKING,
Any,
Final,
Literal,
cast,
)
from urllib.parse import urlparse
from pydantic import (
BaseModel,
@@ -61,16 +59,8 @@ from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.lite_agent_output import LiteAgentOutput
from crewai.llms.base_llm import BaseLLM
from crewai.mcp import (
MCPClient,
MCPServerConfig,
MCPServerHTTP,
MCPServerSSE,
MCPServerStdio,
)
from crewai.mcp.transports.http import HTTPTransport
from crewai.mcp.transports.sse import SSETransport
from crewai.mcp.transports.stdio import StdioTransport
from crewai.mcp import MCPServerConfig
from crewai.mcp.tool_resolver import MCPToolResolver
from crewai.rag.embeddings.types import EmbedderConfig
from crewai.security.fingerprint import Fingerprint
from crewai.tools.agent_tools.agent_tools import AgentTools
@@ -111,18 +101,8 @@ if TYPE_CHECKING:
from crewai.utilities.types import LLMMessage
# MCP Connection timeout constants (in seconds)
MCP_CONNECTION_TIMEOUT: Final[int] = 10
MCP_TOOL_EXECUTION_TIMEOUT: Final[int] = 30
MCP_DISCOVERY_TIMEOUT: Final[int] = 15
MCP_MAX_RETRIES: Final[int] = 3
_passthrough_exceptions: tuple[type[Exception], ...] = ()
# Simple in-memory cache for MCP tool schemas (duration: 5 minutes)
_mcp_schema_cache: dict[str, Any] = {}
_cache_ttl: Final[int] = 300 # 5 minutes
class Agent(BaseAgent):
"""Represents an agent in a system.
@@ -154,7 +134,7 @@ class Agent(BaseAgent):
model_config = ConfigDict()
_times_executed: int = PrivateAttr(default=0)
_mcp_clients: list[Any] = PrivateAttr(default_factory=list)
_mcp_resolver: MCPToolResolver | None = PrivateAttr(default=None)
_last_messages: list[LLMMessage] = PrivateAttr(default_factory=list)
max_execution_time: int | None = Field(
default=None,
@@ -384,10 +364,10 @@ class Agent(BaseAgent):
)
if unified_memory is not None:
query = task.description
matches = unified_memory.recall(query, limit=10)
matches = unified_memory.recall(query, limit=5)
if matches:
memory = "Relevant memories:\n" + "\n".join(
f"- {m.record.content}" for m in matches
m.format() for m in matches
)
if memory.strip() != "":
task_prompt += self.i18n.slice("memory").format(memory=memory)
@@ -622,10 +602,10 @@ class Agent(BaseAgent):
)
if unified_memory is not None:
query = task.description
matches = unified_memory.recall(query, limit=10)
matches = unified_memory.recall(query, limit=5)
if matches:
memory = "Relevant memories:\n" + "\n".join(
f"- {m.record.content}" for m in matches
m.format() for m in matches
)
if memory.strip() != "":
task_prompt += self.i18n.slice("memory").format(memory=memory)
@@ -864,7 +844,11 @@ class Agent(BaseAgent):
respect_context_window=self.respect_context_window,
request_within_rpm_limit=rpm_limit_fn,
callbacks=[TokenCalcHandler(self._token_process)],
response_model=task.response_model if task else None,
response_model=(
task.response_model or task.output_pydantic or task.output_json
)
if task
else None,
)
def _update_executor_parameters(
@@ -893,7 +877,11 @@ class Agent(BaseAgent):
self.agent_executor.stop = stop_words
self.agent_executor.tools_names = get_tool_names(tools)
self.agent_executor.tools_description = render_text_description_and_args(tools)
self.agent_executor.response_model = task.response_model if task else None
self.agent_executor.response_model = (
(task.response_model or task.output_pydantic or task.output_json)
if task
else None
)
self.agent_executor.tools_handler = self.tools_handler
self.agent_executor.request_within_rpm_limit = rpm_limit_fn
@@ -926,544 +914,17 @@ class Agent(BaseAgent):
def get_mcp_tools(self, mcps: list[str | MCPServerConfig]) -> list[BaseTool]:
"""Convert MCP server references/configs to CrewAI tools.
Supports both string references (backwards compatible) and structured
configuration objects (MCPServerStdio, MCPServerHTTP, MCPServerSSE).
Args:
mcps: List of MCP server references (strings) or configurations.
Returns:
List of BaseTool instances from MCP servers.
Delegates to :class:`~crewai.mcp.tool_resolver.MCPToolResolver`.
"""
all_tools = []
clients = []
for mcp_config in mcps:
if isinstance(mcp_config, str):
tools = self._get_mcp_tools_from_string(mcp_config)
else:
tools, client = self._get_native_mcp_tools(mcp_config)
if client:
clients.append(client)
all_tools.extend(tools)
# Store clients for cleanup
self._mcp_clients.extend(clients)
return all_tools
self._cleanup_mcp_clients()
self._mcp_resolver = MCPToolResolver(agent=self, logger=self._logger)
return self._mcp_resolver.resolve(mcps)
def _cleanup_mcp_clients(self) -> None:
"""Cleanup MCP client connections after task execution."""
if not self._mcp_clients:
return
async def _disconnect_all() -> None:
for client in self._mcp_clients:
if client and hasattr(client, "connected") and client.connected:
await client.disconnect()
try:
asyncio.run(_disconnect_all())
except Exception as e:
self._logger.log("error", f"Error during MCP client cleanup: {e}")
finally:
self._mcp_clients.clear()
def _get_mcp_tools_from_string(self, mcp_ref: str) -> list[BaseTool]:
"""Get tools from legacy string-based MCP references.
This method maintains backwards compatibility with string-based
MCP references (https://... and crewai-amp:...).
Args:
mcp_ref: String reference to MCP server.
Returns:
List of BaseTool instances.
"""
if mcp_ref.startswith("crewai-amp:"):
return self._get_amp_mcp_tools(mcp_ref)
if mcp_ref.startswith("https://"):
return self._get_external_mcp_tools(mcp_ref)
return []
def _get_external_mcp_tools(self, mcp_ref: str) -> list[BaseTool]:
"""Get tools from external HTTPS MCP server with graceful error handling."""
from crewai.tools.mcp_tool_wrapper import MCPToolWrapper
# Parse server URL and optional tool name
if "#" in mcp_ref:
server_url, specific_tool = mcp_ref.split("#", 1)
else:
server_url, specific_tool = mcp_ref, None
server_params = {"url": server_url}
server_name = self._extract_server_name(server_url)
try:
# Get tool schemas with timeout and error handling
tool_schemas = self._get_mcp_tool_schemas(server_params)
if not tool_schemas:
self._logger.log(
"warning", f"No tools discovered from MCP server: {server_url}"
)
return []
tools = []
for tool_name, schema in tool_schemas.items():
# Skip if specific tool requested and this isn't it
if specific_tool and tool_name != specific_tool:
continue
try:
wrapper = MCPToolWrapper(
mcp_server_params=server_params,
tool_name=tool_name,
tool_schema=schema,
server_name=server_name,
)
tools.append(wrapper)
except Exception as e:
self._logger.log(
"warning",
f"Failed to create MCP tool wrapper for {tool_name}: {e}",
)
continue
if specific_tool and not tools:
self._logger.log(
"warning",
f"Specific tool '{specific_tool}' not found on MCP server: {server_url}",
)
return cast(list[BaseTool], tools)
except Exception as e:
self._logger.log(
"warning", f"Failed to connect to MCP server {server_url}: {e}"
)
return []
def _get_native_mcp_tools(
self, mcp_config: MCPServerConfig
) -> tuple[list[BaseTool], Any | None]:
"""Get tools from MCP server using structured configuration.
This method creates an MCP client based on the configuration type,
connects to the server, discovers tools, applies filtering, and
returns wrapped tools along with the client instance for cleanup.
Args:
mcp_config: MCP server configuration (MCPServerStdio, MCPServerHTTP, or MCPServerSSE).
Returns:
Tuple of (list of BaseTool instances, MCPClient instance for cleanup).
"""
from crewai.tools.base_tool import BaseTool
from crewai.tools.mcp_native_tool import MCPNativeTool
transport: StdioTransport | HTTPTransport | SSETransport
if isinstance(mcp_config, MCPServerStdio):
transport = StdioTransport(
command=mcp_config.command,
args=mcp_config.args,
env=mcp_config.env,
)
server_name = f"{mcp_config.command}_{'_'.join(mcp_config.args)}"
elif isinstance(mcp_config, MCPServerHTTP):
transport = HTTPTransport(
url=mcp_config.url,
headers=mcp_config.headers,
streamable=mcp_config.streamable,
)
server_name = self._extract_server_name(mcp_config.url)
elif isinstance(mcp_config, MCPServerSSE):
transport = SSETransport(
url=mcp_config.url,
headers=mcp_config.headers,
)
server_name = self._extract_server_name(mcp_config.url)
else:
raise ValueError(f"Unsupported MCP server config type: {type(mcp_config)}")
client = MCPClient(
transport=transport,
cache_tools_list=mcp_config.cache_tools_list,
)
async def _setup_client_and_list_tools() -> list[dict[str, Any]]:
"""Async helper to connect and list tools in same event loop."""
try:
if not client.connected:
await client.connect()
tools_list = await client.list_tools()
try:
await client.disconnect()
# Small delay to allow background tasks to finish cleanup
# This helps prevent "cancel scope in different task" errors
# when asyncio.run() closes the event loop
await asyncio.sleep(0.1)
except Exception as e:
self._logger.log("error", f"Error during disconnect: {e}")
return tools_list
except Exception as e:
if client.connected:
await client.disconnect()
await asyncio.sleep(0.1)
raise RuntimeError(
f"Error during setup client and list tools: {e}"
) from e
try:
try:
asyncio.get_running_loop()
import concurrent.futures
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(
asyncio.run, _setup_client_and_list_tools()
)
tools_list = future.result()
except RuntimeError:
try:
tools_list = asyncio.run(_setup_client_and_list_tools())
except RuntimeError as e:
error_msg = str(e).lower()
if "cancel scope" in error_msg or "task" in error_msg:
raise ConnectionError(
"MCP connection failed due to event loop cleanup issues. "
"This may be due to authentication errors or server unavailability."
) from e
except asyncio.CancelledError as e:
raise ConnectionError(
"MCP connection was cancelled. This may indicate an authentication "
"error or server unavailability."
) from e
if mcp_config.tool_filter:
filtered_tools = []
for tool in tools_list:
if callable(mcp_config.tool_filter):
try:
from crewai.mcp.filters import ToolFilterContext
context = ToolFilterContext(
agent=self,
server_name=server_name,
run_context=None,
)
if mcp_config.tool_filter(context, tool): # type: ignore[call-arg, arg-type]
filtered_tools.append(tool)
except (TypeError, AttributeError):
if mcp_config.tool_filter(tool): # type: ignore[call-arg, arg-type]
filtered_tools.append(tool)
else:
# Not callable - include tool
filtered_tools.append(tool)
tools_list = filtered_tools
tools = []
for tool_def in tools_list:
tool_name = tool_def.get("name", "")
if not tool_name:
continue
# Convert inputSchema to Pydantic model if present
args_schema = None
if tool_def.get("inputSchema"):
args_schema = self._json_schema_to_pydantic(
tool_name, tool_def["inputSchema"]
)
tool_schema = {
"description": tool_def.get("description", ""),
"args_schema": args_schema,
}
try:
native_tool = MCPNativeTool(
mcp_client=client,
tool_name=tool_name,
tool_schema=tool_schema,
server_name=server_name,
)
tools.append(native_tool)
except Exception as e:
self._logger.log("error", f"Failed to create native MCP tool: {e}")
continue
return cast(list[BaseTool], tools), client
except Exception as e:
if client.connected:
asyncio.run(client.disconnect())
raise RuntimeError(f"Failed to get native MCP tools: {e}") from e
def _get_amp_mcp_tools(self, amp_ref: str) -> list[BaseTool]:
"""Get tools from CrewAI AMP MCP marketplace."""
# Parse: "crewai-amp:mcp-name" or "crewai-amp:mcp-name#tool_name"
amp_part = amp_ref.replace("crewai-amp:", "")
if "#" in amp_part:
mcp_name, specific_tool = amp_part.split("#", 1)
else:
mcp_name, specific_tool = amp_part, None
# Call AMP API to get MCP server URLs
mcp_servers = self._fetch_amp_mcp_servers(mcp_name)
tools = []
for server_config in mcp_servers:
server_ref = server_config["url"]
if specific_tool:
server_ref += f"#{specific_tool}"
server_tools = self._get_external_mcp_tools(server_ref)
tools.extend(server_tools)
return tools
@staticmethod
def _extract_server_name(server_url: str) -> str:
"""Extract clean server name from URL for tool prefixing."""
parsed = urlparse(server_url)
domain = parsed.netloc.replace(".", "_")
path = parsed.path.replace("/", "_").strip("_")
return f"{domain}_{path}" if path else domain
def _get_mcp_tool_schemas(
self, server_params: dict[str, Any]
) -> dict[str, dict[str, Any]]:
"""Get tool schemas from MCP server for wrapper creation with caching."""
server_url = server_params["url"]
# Check cache first
cache_key = server_url
current_time = time.time()
if cache_key in _mcp_schema_cache:
cached_data, cache_time = _mcp_schema_cache[cache_key]
if current_time - cache_time < _cache_ttl:
self._logger.log(
"debug", f"Using cached MCP tool schemas for {server_url}"
)
return cached_data # type: ignore[no-any-return]
try:
schemas = asyncio.run(self._get_mcp_tool_schemas_async(server_params))
# Cache successful results
_mcp_schema_cache[cache_key] = (schemas, current_time)
return schemas
except Exception as e:
# Log warning but don't raise - this allows graceful degradation
self._logger.log(
"warning", f"Failed to get MCP tool schemas from {server_url}: {e}"
)
return {}
async def _get_mcp_tool_schemas_async(
self, server_params: dict[str, Any]
) -> dict[str, dict[str, Any]]:
"""Async implementation of MCP tool schema retrieval with timeouts and retries."""
server_url = server_params["url"]
return await self._retry_mcp_discovery(
self._discover_mcp_tools_with_timeout, server_url
)
async def _retry_mcp_discovery(
self, operation_func: Any, server_url: str
) -> dict[str, dict[str, Any]]:
"""Retry MCP discovery operation with exponential backoff, avoiding try-except in loop."""
last_error = None
for attempt in range(MCP_MAX_RETRIES):
# Execute single attempt outside try-except loop structure
result, error, should_retry = await self._attempt_mcp_discovery(
operation_func, server_url
)
# Success case - return immediately
if result is not None:
return result
# Non-retryable error - raise immediately
if not should_retry:
raise RuntimeError(error)
# Retryable error - continue with backoff
last_error = error
if attempt < MCP_MAX_RETRIES - 1:
wait_time = 2**attempt # Exponential backoff
await asyncio.sleep(wait_time)
raise RuntimeError(
f"Failed to discover MCP tools after {MCP_MAX_RETRIES} attempts: {last_error}"
)
@staticmethod
async def _attempt_mcp_discovery(
operation_func: Any, server_url: str
) -> tuple[dict[str, dict[str, Any]] | None, str, bool]:
"""Attempt single MCP discovery operation and return (result, error_message, should_retry)."""
try:
result = await operation_func(server_url)
return result, "", False
except ImportError:
return (
None,
"MCP library not available. Please install with: pip install mcp",
False,
)
except asyncio.TimeoutError:
return (
None,
f"MCP discovery timed out after {MCP_DISCOVERY_TIMEOUT} seconds",
True,
)
except Exception as e:
error_str = str(e).lower()
# Classify errors as retryable or non-retryable
if "authentication" in error_str or "unauthorized" in error_str:
return None, f"Authentication failed for MCP server: {e!s}", False
if "connection" in error_str or "network" in error_str:
return None, f"Network connection failed: {e!s}", True
if "json" in error_str or "parsing" in error_str:
return None, f"Server response parsing error: {e!s}", True
return None, f"MCP discovery error: {e!s}", False
async def _discover_mcp_tools_with_timeout(
self, server_url: str
) -> dict[str, dict[str, Any]]:
"""Discover MCP tools with timeout wrapper."""
return await asyncio.wait_for(
self._discover_mcp_tools(server_url), timeout=MCP_DISCOVERY_TIMEOUT
)
async def _discover_mcp_tools(self, server_url: str) -> dict[str, dict[str, Any]]:
"""Discover tools from MCP server with proper timeout handling."""
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
async with streamablehttp_client(server_url) as (read, write, _):
async with ClientSession(read, write) as session:
# Initialize the connection with timeout
await asyncio.wait_for(
session.initialize(), timeout=MCP_CONNECTION_TIMEOUT
)
# List available tools with timeout
tools_result = await asyncio.wait_for(
session.list_tools(),
timeout=MCP_DISCOVERY_TIMEOUT - MCP_CONNECTION_TIMEOUT,
)
schemas = {}
for tool in tools_result.tools:
args_schema = None
if hasattr(tool, "inputSchema") and tool.inputSchema:
args_schema = self._json_schema_to_pydantic(
sanitize_tool_name(tool.name), tool.inputSchema
)
schemas[sanitize_tool_name(tool.name)] = {
"description": getattr(tool, "description", ""),
"args_schema": args_schema,
}
return schemas
def _json_schema_to_pydantic(
self, tool_name: str, json_schema: dict[str, Any]
) -> type:
"""Convert JSON Schema to Pydantic model for tool arguments.
Args:
tool_name: Name of the tool (used for model naming)
json_schema: JSON Schema dict with 'properties', 'required', etc.
Returns:
Pydantic BaseModel class
"""
from pydantic import Field, create_model
properties = json_schema.get("properties", {})
required_fields = json_schema.get("required", [])
field_definitions: dict[str, Any] = {}
for field_name, field_schema in properties.items():
field_type = self._json_type_to_python(field_schema)
field_description = field_schema.get("description", "")
is_required = field_name in required_fields
if is_required:
field_definitions[field_name] = (
field_type,
Field(..., description=field_description),
)
else:
field_definitions[field_name] = (
field_type | None,
Field(default=None, description=field_description),
)
model_name = f"{tool_name.replace('-', '_').replace(' ', '_')}Schema"
return create_model(model_name, **field_definitions) # type: ignore[no-any-return]
def _json_type_to_python(self, field_schema: dict[str, Any]) -> type:
"""Convert JSON Schema type to Python type.
Args:
field_schema: JSON Schema field definition
Returns:
Python type
"""
json_type = field_schema.get("type")
if "anyOf" in field_schema:
types: list[type] = []
for option in field_schema["anyOf"]:
if "const" in option:
types.append(str)
else:
types.append(self._json_type_to_python(option))
unique_types = list(set(types))
if len(unique_types) > 1:
result: Any = unique_types[0]
for t in unique_types[1:]:
result = result | t
return result # type: ignore[no-any-return]
return unique_types[0]
type_mapping: dict[str | None, type] = {
"string": str,
"number": float,
"integer": int,
"boolean": bool,
"array": list,
"object": dict,
}
return type_mapping.get(json_type, Any)
@staticmethod
def _fetch_amp_mcp_servers(mcp_name: str) -> list[dict[str, Any]]:
"""Fetch MCP server configurations from CrewAI AMP API."""
# TODO: Implement AMP API call to "integrations/mcps" endpoint
# Should return list of server configs with URLs
return []
if self._mcp_resolver is not None:
self._mcp_resolver.cleanup()
self._mcp_resolver = None
@staticmethod
def get_multimodal_tools() -> Sequence[BaseTool]:
@@ -1712,7 +1173,8 @@ class Agent(BaseAgent):
existing_names = {sanitize_tool_name(t.name) for t in raw_tools}
raw_tools.extend(
mt for mt in create_memory_tools(agent_memory)
mt
for mt in create_memory_tools(agent_memory)
if sanitize_tool_name(mt.name) not in existing_names
)
@@ -1802,11 +1264,11 @@ class Agent(BaseAgent):
),
)
start_time = time.time()
matches = agent_memory.recall(formatted_messages, limit=10)
matches = agent_memory.recall(formatted_messages, limit=5)
memory_block = ""
if matches:
memory_block = "Relevant memories:\n" + "\n".join(
f"- {m.record.content}" for m in matches
m.format() for m in matches
)
if memory_block:
formatted_messages += "\n\n" + self.i18n.slice("memory").format(
@@ -1937,14 +1399,15 @@ class Agent(BaseAgent):
if isinstance(messages, str):
input_str = messages
else:
input_str = "\n".join(
str(msg.get("content", "")) for msg in messages if msg.get("content")
) or "User request"
raw = (
f"Input: {input_str}\n"
f"Agent: {self.role}\n"
f"Result: {output_text}"
)
input_str = (
"\n".join(
str(msg.get("content", ""))
for msg in messages
if msg.get("content")
)
or "User request"
)
raw = f"Input: {input_str}\nAgent: {self.role}\nResult: {output_text}"
extracted = agent_memory.extract_memories(raw)
if extracted:
agent_memory.remember_many(extracted)

View File

@@ -4,7 +4,8 @@ from abc import ABC, abstractmethod
from collections.abc import Callable
from copy import copy as shallow_copy
from hashlib import md5
from typing import Any, Literal
import re
from typing import Any, Final, Literal
import uuid
from pydantic import (
@@ -36,6 +37,11 @@ from crewai.utilities.rpm_controller import RPMController
from crewai.utilities.string_utils import interpolate_only
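# Matches MCP string references: an optional "crewai-amp:" prefix, an alphanumeric
# slug that may contain "-" or "_", and an optional "#tool_name" suffix
# (e.g. "notion", "notion#search", "crewai-amp:notion#search").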
_SLUG_RE: Final[re.Pattern[str]] = re.compile(
r"^(?:crewai-amp:)?[a-zA-Z0-9][a-zA-Z0-9_-]*(?:#\w+)?$"
)
PlatformApp = Literal[
"asana",
"box",
@@ -197,7 +203,7 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
)
mcps: list[str | MCPServerConfig] | None = Field(
default=None,
description="List of MCP server references. Supports 'https://server.com/path' for external servers and 'crewai-amp:mcp-name' for AMP marketplace. Use '#tool_name' suffix for specific tools.",
description="List of MCP server references. Supports 'https://server.com/path' for external servers and bare slugs like 'notion' for connected MCP integrations. Use '#tool_name' suffix for specific tools.",
)
memory: Any = Field(
default=None,
@@ -276,14 +282,16 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
validated_mcps: list[str | MCPServerConfig] = []
for mcp in mcps:
if isinstance(mcp, str):
if mcp.startswith(("https://", "crewai-amp:")):
if mcp.startswith("https://"):
validated_mcps.append(mcp)
elif _SLUG_RE.match(mcp):
validated_mcps.append(mcp)
else:
raise ValueError(
f"Invalid MCP reference: {mcp}. "
"String references must start with 'https://' or 'crewai-amp:'"
f"Invalid MCP reference: {mcp!r}. "
"String references must be an 'https://' URL or a valid "
"slug (e.g. 'notion', 'notion#search', 'crewai-amp:notion')."
)
elif isinstance(mcp, (MCPServerConfig)):
validated_mcps.append(mcp)
else:

View File

@@ -30,7 +30,7 @@ class CrewAgentExecutorMixin:
memory = getattr(self.agent, "memory", None) or (
getattr(self.crew, "_memory", None) if self.crew else None
)
if memory is None or not self.task:
if memory is None or not self.task or getattr(memory, "_read_only", False):
return
if (
f"Action: {sanitize_tool_name('Delegate work to coworker')}"

View File

@@ -6,8 +6,10 @@ and memory management.
from __future__ import annotations
import asyncio
from collections.abc import Callable
from concurrent.futures import ThreadPoolExecutor, as_completed
import inspect
import logging
from typing import TYPE_CHECKING, Any, Literal, cast
@@ -48,6 +50,7 @@ from crewai.utilities.agent_utils import (
handle_unknown_error,
has_reached_max_iterations,
is_context_length_exceeded,
parse_tool_call_args,
process_llm_response,
track_delegation_if_needed,
)
@@ -484,8 +487,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
# No tools available, fall back to simple LLM call
return self._invoke_loop_native_no_tools()
openai_tools, available_functions = convert_tools_to_openai_schema(
self.original_tools
openai_tools, available_functions, self._tool_name_mapping = (
convert_tools_to_openai_schema(self.original_tools)
)
while True:
@@ -697,9 +700,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
if not parsed_calls:
return None
original_tools_by_name: dict[str, Any] = {}
for tool in self.original_tools or []:
original_tools_by_name[sanitize_tool_name(tool.name)] = tool
original_tools_by_name: dict[str, Any] = dict(self._tool_name_mapping)
if len(parsed_calls) > 1:
has_result_as_answer_in_batch = any(
@@ -736,7 +737,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
] = []
for call_id, func_name, func_args in parsed_calls:
original_tool = original_tools_by_name.get(func_name)
execution_plan.append((call_id, func_name, func_args, original_tool))
execution_plan.append(
(call_id, func_name, func_args, original_tool)
)
self._append_assistant_tool_calls_message(
[
@@ -746,7 +749,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
)
max_workers = min(8, len(execution_plan))
ordered_results: list[dict[str, Any] | None] = [None] * len(execution_plan)
ordered_results: list[dict[str, Any] | None] = [None] * len(
execution_plan
)
with ThreadPoolExecutor(max_workers=max_workers) as pool:
futures = {
pool.submit(
@@ -803,7 +808,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
return tool_finish
reasoning_prompt = self._i18n.slice("post_tool_reasoning")
reasoning_message: LLMMessage = {
reasoning_message = {
"role": "user",
"content": reasoning_prompt,
}
@@ -888,13 +893,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
ToolUsageStartedEvent,
)
if isinstance(func_args, str):
try:
args_dict = json.loads(func_args)
except json.JSONDecodeError:
args_dict = {}
else:
args_dict = func_args
args_dict, parse_error = parse_tool_call_args(func_args, func_name, call_id, original_tool)
if parse_error is not None:
return parse_error
if original_tool is None:
for tool in self.original_tools or []:
@@ -908,9 +909,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
elif (
should_execute
and original_tool
and getattr(original_tool, "max_usage_count", None) is not None
and getattr(original_tool, "current_usage_count", 0)
>= original_tool.max_usage_count
and (max_count := getattr(original_tool, "max_usage_count", None))
is not None
and getattr(original_tool, "current_usage_count", 0) >= max_count
):
max_usage_reached = True
@@ -946,10 +947,16 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
track_delegation_if_needed(func_name, args_dict, self.task)
structured_tool: CrewStructuredTool | None = None
for structured in self.tools or []:
if sanitize_tool_name(structured.name) == func_name:
structured_tool = structured
break
if original_tool is not None:
for structured in self.tools or []:
if getattr(structured, "_original_tool", None) is original_tool:
structured_tool = structured
break
if structured_tool is None:
for structured in self.tools or []:
if sanitize_tool_name(structured.name) == func_name:
structured_tool = structured
break
hook_blocked = False
before_hook_context = ToolCallHookContext(
@@ -989,13 +996,17 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
and hasattr(original_tool, "cache_function")
and callable(original_tool.cache_function)
):
should_cache = original_tool.cache_function(args_dict, raw_result)
should_cache = original_tool.cache_function(
args_dict, raw_result
)
if should_cache:
self.tools_handler.cache.add(
tool=func_name, input=input_str, output=raw_result
)
result = str(raw_result) if not isinstance(raw_result, str) else raw_result
result = (
str(raw_result) if not isinstance(raw_result, str) else raw_result
)
except Exception as e:
result = f"Error executing tool: {e}"
if self.task:
@@ -1252,7 +1263,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
formatted_answer, tool_result
)
self._invoke_step_callback(formatted_answer) # type: ignore[arg-type]
await self._ainvoke_step_callback(formatted_answer) # type: ignore[arg-type]
self._append_message(formatted_answer.text) # type: ignore[union-attr]
except OutputParserError as e:
@@ -1305,8 +1316,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
if not self.original_tools:
return await self._ainvoke_loop_native_no_tools()
openai_tools, available_functions = convert_tools_to_openai_schema(
self.original_tools
openai_tools, available_functions, self._tool_name_mapping = (
convert_tools_to_openai_schema(self.original_tools)
)
while True:
@@ -1367,7 +1378,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
output=answer,
text=answer,
)
self._invoke_step_callback(formatted_answer)
await self._ainvoke_step_callback(formatted_answer)
self._append_message(answer) # Save final answer to messages
self._show_logs(formatted_answer)
return formatted_answer
@@ -1379,7 +1390,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
output=answer,
text=output_json,
)
self._invoke_step_callback(formatted_answer)
await self._ainvoke_step_callback(formatted_answer)
self._append_message(output_json)
self._show_logs(formatted_answer)
return formatted_answer
@@ -1390,7 +1401,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
output=str(answer),
text=str(answer),
)
self._invoke_step_callback(formatted_answer)
await self._ainvoke_step_callback(formatted_answer)
self._append_message(str(answer)) # Save final answer to messages
self._show_logs(formatted_answer)
return formatted_answer
@@ -1484,13 +1495,28 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
def _invoke_step_callback(
self, formatted_answer: AgentAction | AgentFinish
) -> None:
"""Invoke step callback.
"""Invoke step callback (sync context).
Args:
formatted_answer: Current agent response.
"""
if self.step_callback:
self.step_callback(formatted_answer)
cb_result = self.step_callback(formatted_answer)
if inspect.iscoroutine(cb_result):
asyncio.run(cb_result)
async def _ainvoke_step_callback(
self, formatted_answer: AgentAction | AgentFinish
) -> None:
"""Invoke step callback (async context).
Args:
formatted_answer: Current agent response.
"""
if self.step_callback:
cb_result = self.step_callback(formatted_answer)
if inspect.iscoroutine(cb_result):
await cb_result
def _append_message(
self, text: str, role: Literal["user", "assistant", "system"] = "assistant"

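Taken together, the sync/async helper pair above lets step_callback be either a plain function or a coroutine function. A standalone sketch of the dispatch (standard library only); note that asyncio.run would fail if the sync variant were invoked from inside a running event loop, which is exactly why the async variant exists:

import asyncio
import inspect

def invoke_step_callback(step_callback, formatted_answer) -> None:
    # Sync context: drive a coroutine callback to completion ourselves.
    if step_callback:
        cb_result = step_callback(formatted_answer)
        if inspect.iscoroutine(cb_result):
            asyncio.run(cb_result)

async def ainvoke_step_callback(step_callback, formatted_answer) -> None:
    # Async context: await the coroutine on the already-running loop.
    if step_callback:
        cb_result = step_callback(formatted_answer)
        if inspect.iscoroutine(cb_result):
            await cb_result

async def log_step(answer) -> None:
    print(f"step: {answer}")

invoke_step_callback(log_step, "draft-1")               # runs the coroutine
asyncio.run(ainvoke_step_callback(log_step, "draft-2"))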

@@ -2,8 +2,8 @@ import time
from typing import TYPE_CHECKING, Any, TypeVar, cast
import webbrowser
import httpx
from pydantic import BaseModel, Field
import requests
from rich.console import Console
from crewai.cli.authentication.utils import validate_jwt_token
@@ -98,7 +98,7 @@ class AuthenticationCommand:
"scope": " ".join(self.oauth2_provider.get_oauth_scopes()),
"audience": self.oauth2_provider.get_audience(),
}
response = requests.post(
response = httpx.post(
url=self.oauth2_provider.get_authorize_url(),
data=device_code_payload,
timeout=20,
@@ -130,7 +130,7 @@ class AuthenticationCommand:
attempts = 0
while attempts < 10:
response = requests.post(
response = httpx.post(
self.oauth2_provider.get_token_url(), data=token_payload, timeout=30
)
token_data = response.json()
@@ -149,7 +149,7 @@ class AuthenticationCommand:
return
if token_data["error"] not in ("authorization_pending", "slow_down"):
raise requests.HTTPError(
raise httpx.HTTPError(
token_data.get("error_description") or token_data.get("error")
)

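The polling loop after the requests-to-httpx swap, reduced to its shape. The token URL and payload fields below are placeholders, not the real OAuth2 provider values:

import time
import httpx

TOKEN_URL = "https://auth.example.com/oauth/token"  # placeholder

def poll_for_token(token_payload: dict, interval: float = 5.0) -> dict | None:
    attempts = 0
    while attempts < 10:
        response = httpx.post(TOKEN_URL, data=token_payload, timeout=30)
        token_data = response.json()
        if response.status_code == 200:
            return token_data
        if token_data["error"] not in ("authorization_pending", "slow_down"):
            # httpx.HTTPError accepts a message, like requests.HTTPError did.
            raise httpx.HTTPError(
                token_data.get("error_description") or token_data.get("error")
            )
        attempts += 1
        time.sleep(interval)
    return None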

@@ -1,5 +1,6 @@
import requests
from requests.exceptions import JSONDecodeError
import json
import httpx
from rich.console import Console
from crewai.cli.authentication.token import get_auth_token
@@ -30,16 +31,16 @@ class PlusAPIMixin:
console.print("Run 'crewai login' to sign up/login.", style="bold green")
raise SystemExit from None
def _validate_response(self, response: requests.Response) -> None:
def _validate_response(self, response: httpx.Response) -> None:
"""
Handle and display error messages from API responses.
Args:
response (requests.Response): The response from the Plus API
response (httpx.Response): The response from the Plus API
"""
try:
json_response = response.json()
except (JSONDecodeError, ValueError):
except (json.JSONDecodeError, ValueError):
console.print(
"Failed to parse response from Enterprise API failed. Details:",
style="bold red",
@@ -62,7 +63,7 @@ class PlusAPIMixin:
)
raise SystemExit
if not response.ok:
if not response.is_success:
console.print(
"Request to Enterprise API failed. Details:", style="bold red"
)

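One subtlety in this migration: requests' response.ok is true for any status below 400, while httpx's response.is_success is true only for 2xx, so 3xx responses now take the failure branch. A sketch of the adjusted check:

import json
import httpx

def validate_response(response: httpx.Response) -> dict:
    try:
        payload = response.json()
    except (json.JSONDecodeError, ValueError):
        raise SystemExit("Failed to parse response from Enterprise API.")
    if not response.is_success:  # 2xx only, unlike requests' .ok (< 400)
        raise SystemExit(f"Enterprise API request failed: {response.status_code}")
    return payload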

@@ -69,7 +69,7 @@ ENV_VARS: dict[str, list[dict[str, Any]]] = {
},
{
"prompt": "Enter your AWS Region Name (press Enter to skip)",
"key_name": "AWS_REGION_NAME",
"key_name": "AWS_DEFAULT_REGION",
},
],
"azure": [


@@ -1,7 +1,7 @@
import json
from typing import Any, cast
import requests
from requests.exceptions import JSONDecodeError, RequestException
import httpx
from rich.console import Console
from crewai.cli.authentication.main import Oauth2Settings, ProviderFactory
@@ -47,12 +47,12 @@ class EnterpriseConfigureCommand(BaseCommand):
"User-Agent": f"CrewAI-CLI/{get_crewai_version()}",
"X-Crewai-Version": get_crewai_version(),
}
response = requests.get(oauth_endpoint, timeout=30, headers=headers)
response = httpx.get(oauth_endpoint, timeout=30, headers=headers)
response.raise_for_status()
try:
oauth_config = response.json()
except JSONDecodeError as e:
except json.JSONDecodeError as e:
raise ValueError(f"Invalid JSON response from {oauth_endpoint}") from e
self._validate_oauth_config(oauth_config)
@@ -62,7 +62,7 @@ class EnterpriseConfigureCommand(BaseCommand):
)
return cast(dict[str, Any], oauth_config)
except RequestException as e:
except httpx.HTTPError as e:
raise ValueError(f"Failed to connect to enterprise URL: {e!s}") from e
except Exception as e:
raise ValueError(f"Error fetching OAuth2 configuration: {e!s}") from e


@@ -290,13 +290,20 @@ class MemoryTUI(App[None]):
if self._memory is None:
panel.update(self._init_error or "No memory loaded.")
return
display_limit = 1000
info = self._memory.info(path)
self._last_scope_info = info
self._entries = self._memory.list_records(scope=path, limit=200)
self._entries = self._memory.list_records(scope=path, limit=display_limit)
panel.update(_format_scope_info(info))
panel.border_title = "Detail"
entry_list = self.query_one("#entry-list", OptionList)
entry_list.border_title = f"Entries ({len(self._entries)})"
capped = info.record_count > display_limit
count_label = (
f"Entries (showing {display_limit} of {info.record_count} — display limit)"
if capped
else f"Entries ({len(self._entries)})"
)
entry_list.border_title = count_label
self._populate_entry_list()
def on_option_list_option_highlighted(
@@ -376,6 +383,11 @@ class MemoryTUI(App[None]):
return
info_lines: list[str] = []
info_lines.append(
"[dim italic]Searched the full dataset"
+ (f" within [bold]{scope}[/]" if scope else "")
+ " using the recall flow (semantic + recency + importance).[/]\n"
)
if not self._custom_embedder:
info_lines.append(
"[dim italic]Note: Using default OpenAI embedder. "


@@ -1,4 +1,4 @@
from requests import HTTPError
from httpx import HTTPStatusError
from rich.console import Console
from rich.table import Table
@@ -10,11 +10,11 @@ console = Console()
class OrganizationCommand(BaseCommand, PlusAPIMixin):
def __init__(self):
def __init__(self) -> None:
BaseCommand.__init__(self)
PlusAPIMixin.__init__(self, telemetry=self._telemetry)
def list(self):
def list(self) -> None:
try:
response = self.plus_api_client.get_organizations()
response.raise_for_status()
@@ -33,7 +33,7 @@ class OrganizationCommand(BaseCommand, PlusAPIMixin):
table.add_row(org["name"], org["uuid"])
console.print(table)
except HTTPError as e:
except HTTPStatusError as e:
if e.response.status_code == 401:
console.print(
"You are not logged in to any organization. Use 'crewai login' to login.",
@@ -50,7 +50,7 @@ class OrganizationCommand(BaseCommand, PlusAPIMixin):
)
raise SystemExit(1) from e
def switch(self, org_id):
def switch(self, org_id: str) -> None:
try:
response = self.plus_api_client.get_organizations()
response.raise_for_status()
@@ -72,7 +72,7 @@ class OrganizationCommand(BaseCommand, PlusAPIMixin):
f"Successfully switched to {org['name']} ({org['uuid']})",
style="bold green",
)
except HTTPError as e:
except HTTPStatusError as e:
if e.response.status_code == 401:
console.print(
"You are not logged in to any organization. Use 'crewai login' to login.",
@@ -87,7 +87,7 @@ class OrganizationCommand(BaseCommand, PlusAPIMixin):
console.print(f"Failed to switch organization: {e!s}", style="bold red")
raise SystemExit(1) from e
def current(self):
def current(self) -> None:
settings = Settings()
if settings.org_uuid:
console.print(

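Worth noting for the except-clause swap: httpx's raise_for_status raises HTTPStatusError, which, like requests' HTTPError, carries the response, so the 401 branch still works. A compact sketch (the API path is a placeholder):

import httpx

def list_orgs(client: httpx.Client) -> None:
    try:
        response = client.get("/organizations")  # placeholder path
        response.raise_for_status()
        print(response.json())
    except httpx.HTTPStatusError as e:
        if e.response.status_code == 401:
            print("You are not logged in to any organization. Use 'crewai login' to login.")
        else:
            print(f"Failed to retrieve organization list: {e}")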

@@ -3,7 +3,6 @@ from typing import Any
from urllib.parse import urljoin
import httpx
import requests
from crewai.cli.config import Settings
from crewai.cli.constants import DEFAULT_CREWAI_ENTERPRISE_URL
@@ -43,16 +42,16 @@ class PlusAPI:
def _make_request(
self, method: str, endpoint: str, **kwargs: Any
) -> requests.Response:
) -> httpx.Response:
url = urljoin(self.base_url, endpoint)
session = requests.Session()
session.trust_env = False
return session.request(method, url, headers=self.headers, **kwargs)
verify = kwargs.pop("verify", True)
with httpx.Client(trust_env=False, verify=verify) as client:
return client.request(method, url, headers=self.headers, **kwargs)
def login_to_tool_repository(self) -> requests.Response:
def login_to_tool_repository(self) -> httpx.Response:
return self._make_request("POST", f"{self.TOOLS_RESOURCE}/login")
def get_tool(self, handle: str) -> requests.Response:
def get_tool(self, handle: str) -> httpx.Response:
return self._make_request("GET", f"{self.TOOLS_RESOURCE}/{handle}")
async def get_agent(self, handle: str) -> httpx.Response:
@@ -68,7 +67,7 @@ class PlusAPI:
description: str | None,
encoded_file: str,
available_exports: list[dict[str, Any]] | None = None,
) -> requests.Response:
) -> httpx.Response:
params = {
"handle": handle,
"public": is_public,
@@ -79,54 +78,52 @@ class PlusAPI:
}
return self._make_request("POST", f"{self.TOOLS_RESOURCE}", json=params)
def deploy_by_name(self, project_name: str) -> requests.Response:
def deploy_by_name(self, project_name: str) -> httpx.Response:
return self._make_request(
"POST", f"{self.CREWS_RESOURCE}/by-name/{project_name}/deploy"
)
def deploy_by_uuid(self, uuid: str) -> requests.Response:
def deploy_by_uuid(self, uuid: str) -> httpx.Response:
return self._make_request("POST", f"{self.CREWS_RESOURCE}/{uuid}/deploy")
def crew_status_by_name(self, project_name: str) -> requests.Response:
def crew_status_by_name(self, project_name: str) -> httpx.Response:
return self._make_request(
"GET", f"{self.CREWS_RESOURCE}/by-name/{project_name}/status"
)
def crew_status_by_uuid(self, uuid: str) -> requests.Response:
def crew_status_by_uuid(self, uuid: str) -> httpx.Response:
return self._make_request("GET", f"{self.CREWS_RESOURCE}/{uuid}/status")
def crew_by_name(
self, project_name: str, log_type: str = "deployment"
) -> requests.Response:
) -> httpx.Response:
return self._make_request(
"GET", f"{self.CREWS_RESOURCE}/by-name/{project_name}/logs/{log_type}"
)
def crew_by_uuid(
self, uuid: str, log_type: str = "deployment"
) -> requests.Response:
def crew_by_uuid(self, uuid: str, log_type: str = "deployment") -> httpx.Response:
return self._make_request(
"GET", f"{self.CREWS_RESOURCE}/{uuid}/logs/{log_type}"
)
def delete_crew_by_name(self, project_name: str) -> requests.Response:
def delete_crew_by_name(self, project_name: str) -> httpx.Response:
return self._make_request(
"DELETE", f"{self.CREWS_RESOURCE}/by-name/{project_name}"
)
def delete_crew_by_uuid(self, uuid: str) -> requests.Response:
def delete_crew_by_uuid(self, uuid: str) -> httpx.Response:
return self._make_request("DELETE", f"{self.CREWS_RESOURCE}/{uuid}")
def list_crews(self) -> requests.Response:
def list_crews(self) -> httpx.Response:
return self._make_request("GET", self.CREWS_RESOURCE)
def create_crew(self, payload: dict[str, Any]) -> requests.Response:
def create_crew(self, payload: dict[str, Any]) -> httpx.Response:
return self._make_request("POST", self.CREWS_RESOURCE, json=payload)
def get_organizations(self) -> requests.Response:
def get_organizations(self) -> httpx.Response:
return self._make_request("GET", self.ORGANIZATIONS_RESOURCE)
def initialize_trace_batch(self, payload: dict[str, Any]) -> requests.Response:
def initialize_trace_batch(self, payload: dict[str, Any]) -> httpx.Response:
return self._make_request(
"POST",
f"{self.TRACING_RESOURCE}/batches",
@@ -136,7 +133,7 @@ class PlusAPI:
def initialize_ephemeral_trace_batch(
self, payload: dict[str, Any]
) -> requests.Response:
) -> httpx.Response:
return self._make_request(
"POST",
f"{self.EPHEMERAL_TRACING_RESOURCE}/batches",
@@ -145,7 +142,7 @@ class PlusAPI:
def send_trace_events(
self, trace_batch_id: str, payload: dict[str, Any]
) -> requests.Response:
) -> httpx.Response:
return self._make_request(
"POST",
f"{self.TRACING_RESOURCE}/batches/{trace_batch_id}/events",
@@ -155,7 +152,7 @@ class PlusAPI:
def send_ephemeral_trace_events(
self, trace_batch_id: str, payload: dict[str, Any]
) -> requests.Response:
) -> httpx.Response:
return self._make_request(
"POST",
f"{self.EPHEMERAL_TRACING_RESOURCE}/batches/{trace_batch_id}/events",
@@ -165,7 +162,7 @@ class PlusAPI:
def finalize_trace_batch(
self, trace_batch_id: str, payload: dict[str, Any]
) -> requests.Response:
) -> httpx.Response:
return self._make_request(
"PATCH",
f"{self.TRACING_RESOURCE}/batches/{trace_batch_id}/finalize",
@@ -175,7 +172,7 @@ class PlusAPI:
def finalize_ephemeral_trace_batch(
self, trace_batch_id: str, payload: dict[str, Any]
) -> requests.Response:
) -> httpx.Response:
return self._make_request(
"PATCH",
f"{self.EPHEMERAL_TRACING_RESOURCE}/batches/{trace_batch_id}/finalize",
@@ -185,7 +182,7 @@ class PlusAPI:
def mark_trace_batch_as_failed(
self, trace_batch_id: str, error_message: str
) -> requests.Response:
) -> httpx.Response:
return self._make_request(
"PATCH",
f"{self.TRACING_RESOURCE}/batches/{trace_batch_id}",
@@ -193,13 +190,20 @@ class PlusAPI:
timeout=30,
)
def get_triggers(self) -> requests.Response:
def get_mcp_configs(self, slugs: list[str]) -> httpx.Response:
"""Get MCP server configurations for the given slugs."""
return self._make_request(
"GET",
f"{self.INTEGRATIONS_RESOURCE}/mcp_configs",
params={"slugs": ",".join(slugs)},
timeout=30,
)
def get_triggers(self) -> httpx.Response:
"""Get all available triggers from integrations."""
return self._make_request("GET", f"{self.INTEGRATIONS_RESOURCE}/apps")
def get_trigger_payload(
self, app_slug: str, trigger_slug: str
) -> requests.Response:
def get_trigger_payload(self, app_slug: str, trigger_slug: str) -> httpx.Response:
"""Get sample payload for a specific trigger."""
return self._make_request(
"GET", f"{self.INTEGRATIONS_RESOURCE}/{app_slug}/{trigger_slug}/payload"

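The new transport in isolation: a short-lived httpx.Client per request, with trust_env=False to keep ignoring proxy and certificate environment variables as the old requests.Session(trust_env=False) did, and verify popped from kwargs as the only per-call override. A minimal sketch, not the full class:

from typing import Any
from urllib.parse import urljoin

import httpx

class MiniPlusAPI:
    def __init__(self, base_url: str, headers: dict[str, str]) -> None:
        self.base_url = base_url
        self.headers = headers

    def _make_request(self, method: str, endpoint: str, **kwargs: Any) -> httpx.Response:
        url = urljoin(self.base_url, endpoint)
        verify = kwargs.pop("verify", True)
        with httpx.Client(trust_env=False, verify=verify) as client:
            return client.request(method, url, headers=self.headers, **kwargs)

# api = MiniPlusAPI("https://app.example.com/", {"Authorization": "Bearer <token>"})
# api._make_request("GET", "crewai_plus/api/v1/me", timeout=30)  # hypothetical endpoint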

@@ -8,7 +8,7 @@ from typing import Any
import certifi
import click
import requests
import httpx
from crewai.cli.constants import JSON_URL, MODELS, PROVIDERS
@@ -165,20 +165,20 @@ def fetch_provider_data(cache_file: Path) -> dict[str, Any] | None:
ssl_config = os.environ["SSL_CERT_FILE"] = certifi.where()
try:
response = requests.get(JSON_URL, stream=True, timeout=60, verify=ssl_config)
response.raise_for_status()
data = download_data(response)
with open(cache_file, "w") as f:
json.dump(data, f)
return data
except requests.RequestException as e:
with httpx.stream("GET", JSON_URL, timeout=60, verify=ssl_config) as response:
response.raise_for_status()
data = download_data(response)
with open(cache_file, "w") as f:
json.dump(data, f)
return data
except httpx.HTTPError as e:
click.secho(f"Error fetching provider data: {e}", fg="red")
except json.JSONDecodeError:
click.secho("Error parsing provider data. Invalid JSON format.", fg="red")
return None
def download_data(response: requests.Response) -> dict[str, Any]:
def download_data(response: httpx.Response) -> dict[str, Any]:
"""Downloads data from a given HTTP response and returns the JSON content.
Args:
@@ -194,7 +194,7 @@ def download_data(response: requests.Response) -> dict[str, Any]:
with click.progressbar(
length=total_size, label="Downloading", show_pos=True
) as bar:
for chunk in response.iter_content(block_size):
for chunk in response.iter_bytes(block_size):
if chunk:
data_chunks.append(chunk)
bar.update(len(chunk))

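The streaming download after the swap, as a self-contained function. httpx.stream replaces requests' stream=True, and iter_bytes replaces iter_content; the URL is whatever JSON_URL points at:

import json
import httpx

def fetch_json_streamed(url: str, chunk_size: int = 8192) -> dict | None:
    try:
        with httpx.stream("GET", url, timeout=60) as response:
            response.raise_for_status()
            body = b"".join(chunk for chunk in response.iter_bytes(chunk_size) if chunk)
        return json.loads(body)
    except httpx.HTTPError as e:
        print(f"Error fetching provider data: {e}")
    except json.JSONDecodeError:
        print("Error parsing provider data. Invalid JSON format.")
    return None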

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.9.3"
"crewai[tools]==1.10.1a1"
]
[project.scripts]


@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.9.3"
"crewai[tools]==1.10.1a1"
]
[project.scripts]


@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]>=0.203.1"
"crewai[tools]==1.10.1a1"
]
[tool.crewai]


@@ -63,6 +63,7 @@ from crewai.events.types.logging_events import (
AgentLogsStartedEvent,
)
from crewai.events.types.mcp_events import (
MCPConfigFetchFailedEvent,
MCPConnectionCompletedEvent,
MCPConnectionFailedEvent,
MCPConnectionStartedEvent,
@@ -165,6 +166,7 @@ __all__ = [
"LiteAgentExecutionCompletedEvent",
"LiteAgentExecutionErrorEvent",
"LiteAgentExecutionStartedEvent",
"MCPConfigFetchFailedEvent",
"MCPConnectionCompletedEvent",
"MCPConnectionFailedEvent",
"MCPConnectionStartedEvent",


@@ -68,6 +68,7 @@ from crewai.events.types.logging_events import (
AgentLogsStartedEvent,
)
from crewai.events.types.mcp_events import (
MCPConfigFetchFailedEvent,
MCPConnectionCompletedEvent,
MCPConnectionFailedEvent,
MCPConnectionStartedEvent,
@@ -665,6 +666,16 @@ class EventListener(BaseEventListener):
event.error_type,
)
@crewai_event_bus.on(MCPConfigFetchFailedEvent)
def on_mcp_config_fetch_failed(
_: Any, event: MCPConfigFetchFailedEvent
) -> None:
self.formatter.handle_mcp_config_fetch_failed(
event.slug,
event.error,
event.error_type,
)
@crewai_event_bus.on(MCPToolExecutionStartedEvent)
def on_mcp_tool_execution_started(
_: Any, event: MCPToolExecutionStartedEvent


@@ -67,6 +67,7 @@ from crewai.events.types.llm_guardrail_events import (
LLMGuardrailStartedEvent,
)
from crewai.events.types.mcp_events import (
MCPConfigFetchFailedEvent,
MCPConnectionCompletedEvent,
MCPConnectionFailedEvent,
MCPConnectionStartedEvent,
@@ -181,4 +182,5 @@ EventTypes = (
| MCPToolExecutionStartedEvent
| MCPToolExecutionCompletedEvent
| MCPToolExecutionFailedEvent
| MCPConfigFetchFailedEvent
)


@@ -83,3 +83,16 @@ class MCPToolExecutionFailedEvent(MCPEvent):
error_type: str | None = None # "timeout", "validation", "server_error", etc.
started_at: datetime | None = None
failed_at: datetime | None = None
class MCPConfigFetchFailedEvent(BaseEvent):
"""Event emitted when fetching an AMP MCP server config fails.
This covers cases where the slug is not connected, the API call
failed, or native MCP resolution failed after config was fetched.
"""
type: str = "mcp_config_fetch_failed"
slug: str
error: str
error_type: str | None = None # "not_connected", "api_error", "connection_failed"

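Constructing the new event is straightforward; the field values below are illustrative, the import path assumes the class is re-exported from crewai.events as the __all__ update above suggests, and the emit call is inferred from the listener registration rather than from this diff:

from crewai.events import MCPConfigFetchFailedEvent

event = MCPConfigFetchFailedEvent(
    slug="notion",
    error="Integration 'notion' is not connected for this account.",
    error_type="not_connected",
)
print(event.type)  # "mcp_config_fetch_failed"
# crewai_event_bus.emit(source, event)  # assumed emit signature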

@@ -1512,6 +1512,34 @@ To enable tracing, do any one of these:
self.print(panel)
self.print()
def handle_mcp_config_fetch_failed(
self,
slug: str,
error: str = "",
error_type: str | None = None,
) -> None:
"""Handle MCP config fetch failed event (AMP resolution failures)."""
if not self.verbose:
return
content = Text()
content.append("MCP Config Fetch Failed\n\n", style="red bold")
content.append("Server: ", style="white")
content.append(f"{slug}\n", style="red")
if error_type:
content.append("Error Type: ", style="white")
content.append(f"{error_type}\n", style="red")
if error:
content.append("\nError: ", style="white bold")
error_preview = error[:500] + "..." if len(error) > 500 else error
content.append(f"{error_preview}\n", style="red")
panel = self.create_panel(content, "❌ MCP Config Failed", "red")
self.print(panel)
self.print()
def handle_mcp_tool_execution_started(
self,
server_name: str,


@@ -1,8 +1,10 @@
from __future__ import annotations
import asyncio
from collections.abc import Callable, Coroutine
from concurrent.futures import ThreadPoolExecutor, as_completed
from datetime import datetime
import inspect
import json
import threading
from typing import TYPE_CHECKING, Any, Literal, cast
@@ -50,6 +52,8 @@ from crewai.hooks.types import (
BeforeLLMCallHookCallable,
BeforeLLMCallHookType,
)
from crewai.tools.base_tool import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.utilities.agent_utils import (
convert_tools_to_openai_schema,
enforce_rpm_limit,
@@ -64,6 +68,7 @@ from crewai.utilities.agent_utils import (
has_reached_max_iterations,
is_context_length_exceeded,
is_inside_event_loop,
parse_tool_call_args,
process_llm_response,
track_delegation_if_needed,
)
@@ -82,8 +87,6 @@ if TYPE_CHECKING:
from crewai.crew import Crew
from crewai.llms.base_llm import BaseLLM
from crewai.task import Task
from crewai.tools.base_tool import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.tools.tool_types import ToolResult
from crewai.utilities.prompts import StandardPromptResult, SystemPromptResult
@@ -318,7 +321,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
def _setup_native_tools(self) -> None:
"""Convert tools to OpenAI schema format for native function calling."""
if self.original_tools:
self._openai_tools, self._available_functions = (
self._openai_tools, self._available_functions, self._tool_name_mapping = (
convert_tools_to_openai_schema(self.original_tools)
)
@@ -591,21 +594,19 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
def execute_tool_action(self) -> Literal["tool_completed", "tool_result_is_final"]:
"""Execute the tool action and handle the result."""
action = cast(AgentAction, self.state.current_answer)
fingerprint_context = {}
if (
self.agent
and hasattr(self.agent, "security_config")
and hasattr(self.agent.security_config, "fingerprint")
):
fingerprint_context = {
"agent_fingerprint": str(self.agent.security_config.fingerprint)
}
try:
action = cast(AgentAction, self.state.current_answer)
# Extract fingerprint context for tool execution
fingerprint_context = {}
if (
self.agent
and hasattr(self.agent, "security_config")
and hasattr(self.agent.security_config, "fingerprint")
):
fingerprint_context = {
"agent_fingerprint": str(self.agent.security_config.fingerprint)
}
# Execute the tool
tool_result = execute_tool_and_check_finality(
agent_action=action,
fingerprint_context=fingerprint_context,
@@ -619,24 +620,19 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
function_calling_llm=self.function_calling_llm,
crew=self.crew,
)
except Exception as e:
if self.agent and self.agent.verbose:
self._printer.print(
content=f"Error in tool execution: {e}", color="red"
)
if self.task:
self.task.increment_tools_errors()
# Handle agent action and append observation to messages
result = self._handle_agent_action(action, tool_result)
self.state.current_answer = result
error_observation = f"\nObservation: Error executing tool: {e}"
action.text += error_observation
action.result = str(e)
self._append_message_to_state(action.text)
# Invoke step callback if configured
self._invoke_step_callback(result)
# Append result message to conversation state
if hasattr(result, "text"):
self._append_message_to_state(result.text)
# Check if tool result became a final answer (result_as_answer flag)
if isinstance(result, AgentFinish):
self.state.is_finished = True
return "tool_result_is_final"
# Inject post-tool reasoning prompt to enforce analysis
reasoning_prompt = self._i18n.slice("post_tool_reasoning")
reasoning_message: LLMMessage = {
"role": "user",
@@ -646,12 +642,26 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
return "tool_completed"
except Exception as e:
error_text = Text()
error_text.append("❌ Error in tool execution: ", style="red bold")
error_text.append(str(e), style="red")
self._console.print(error_text)
raise
result = self._handle_agent_action(action, tool_result)
self.state.current_answer = result
self._invoke_step_callback(result)
if hasattr(result, "text"):
self._append_message_to_state(result.text)
if isinstance(result, AgentFinish):
self.state.is_finished = True
return "tool_result_is_final"
reasoning_prompt = self._i18n.slice("post_tool_reasoning")
reasoning_message: LLMMessage = {
"role": "user",
"content": reasoning_prompt,
}
self.state.messages.append(reasoning_message)
return "tool_completed"
@listen("native_tool_calls")
def execute_native_tool(
@@ -725,7 +735,20 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
)
for future in as_completed(future_to_idx):
idx = future_to_idx[future]
ordered_results[idx] = future.result()
try:
ordered_results[idx] = future.result()
except Exception as e:
tool_call = runnable_tool_calls[idx]
info = extract_tool_call_info(tool_call)
call_id = info[0] if info else "unknown"
func_name = info[1] if info else "unknown"
ordered_results[idx] = {
"call_id": call_id,
"func_name": func_name,
"result": f"Error executing tool: {e}",
"from_cache": False,
"original_tool": None,
}
execution_results = [
result for result in ordered_results if result is not None
]
@@ -778,7 +801,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
from_cache = cast(bool, execution_result["from_cache"])
original_tool = execution_result["original_tool"]
tool_message: LLMMessage = {
tool_message = {
"role": "tool",
"tool_call_id": call_id,
"name": func_name,
@@ -821,11 +844,17 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
continue
_, func_name, _ = info
original_tool = None
for tool in self.original_tools or []:
if sanitize_tool_name(tool.name) == func_name:
original_tool = tool
break
mapping = getattr(self, "_tool_name_mapping", None)
original_tool: BaseTool | None = None
if mapping and func_name in mapping:
mapped = mapping[func_name]
if isinstance(mapped, BaseTool):
original_tool = mapped
if original_tool is None:
for tool in self.original_tools or []:
if sanitize_tool_name(tool.name) == func_name:
original_tool = tool
break
if not original_tool:
continue
@@ -841,28 +870,40 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
"""Execute a single native tool call and return metadata/result."""
info = extract_tool_call_info(tool_call)
if not info:
raise ValueError("Invalid native tool call format")
call_id = (
getattr(tool_call, "id", None)
or (tool_call.get("id") if isinstance(tool_call, dict) else None)
or "unknown"
)
return {
"call_id": call_id,
"func_name": "unknown",
"result": "Error: Invalid native tool call format",
"from_cache": False,
"original_tool": None,
}
call_id, func_name, func_args = info
# Parse arguments
if isinstance(func_args, str):
try:
args_dict = json.loads(func_args)
except json.JSONDecodeError:
args_dict = {}
else:
args_dict = func_args
args_dict, parse_error = parse_tool_call_args(func_args, func_name, call_id)
if parse_error is not None:
return parse_error
# Get agent_key for event tracking
agent_key = getattr(self.agent, "key", "unknown") if self.agent else "unknown"
# Find original tool by matching sanitized name (needed for cache_function and result_as_answer)
original_tool = None
for tool in self.original_tools or []:
if sanitize_tool_name(tool.name) == func_name:
original_tool = tool
break
original_tool: BaseTool | None = None
mapping = getattr(self, "_tool_name_mapping", None)
if mapping and func_name in mapping:
mapped = mapping[func_name]
if isinstance(mapped, BaseTool):
original_tool = mapped
if original_tool is None:
for tool in self.original_tools or []:
if sanitize_tool_name(tool.name) == func_name:
original_tool = tool
break
# Check if tool has reached max usage count
max_usage_reached = False
@@ -905,10 +946,16 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
track_delegation_if_needed(func_name, args_dict, self.task)
structured_tool: CrewStructuredTool | None = None
for structured in self.tools or []:
if sanitize_tool_name(structured.name) == func_name:
structured_tool = structured
break
if original_tool is not None:
for structured in self.tools or []:
if getattr(structured, "_original_tool", None) is original_tool:
structured_tool = structured
break
if structured_tool is None:
for structured in self.tools or []:
if sanitize_tool_name(structured.name) == func_name:
structured_tool = structured
break
hook_blocked = False
before_hook_context = ToolCallHookContext(
@@ -1358,7 +1405,9 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
formatted_answer: Current agent response.
"""
if self.step_callback:
self.step_callback(formatted_answer)
cb_result = self.step_callback(formatted_answer)
if inspect.iscoroutine(cb_result):
asyncio.run(cb_result)
def _append_message_to_state(
self, text: str, role: Literal["user", "assistant", "system"] = "assistant"

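The per-future exception capture above, isolated with toy tools: a worker that raises now yields an error result for its slot instead of aborting the whole batch (the dict shape mirrors the executor's, but the tools are stand-ins):

from collections.abc import Callable
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_tool_calls(calls: list[tuple[str, str, Callable[[], str]]]) -> list[dict]:
    max_workers = min(8, len(calls))
    ordered_results: list[dict | None] = [None] * len(calls)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        future_to_idx = {pool.submit(fn): idx for idx, (_, _, fn) in enumerate(calls)}
        for future in as_completed(future_to_idx):
            idx = future_to_idx[future]
            call_id, func_name, _ = calls[idx]
            try:
                result = future.result()
            except Exception as e:  # capture, do not propagate
                result = f"Error executing tool: {e}"
            ordered_results[idx] = {
                "call_id": call_id,
                "func_name": func_name,
                "result": result,
                "from_cache": False,
                "original_tool": None,
            }
    return [r for r in ordered_results if r is not None]

def ok() -> str: return "42"
def boom() -> str: raise RuntimeError("tool exploded")
print(run_tool_calls([("c1", "ok", ok), ("c2", "boom", boom)]))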
Some files were not shown because too many files have changed in this diff.