Compare commits

...

41 Commits

Author SHA1 Message Date
lorenzejay
130da34b48 docs: update changelog and version for v1.10.1 2026-03-04 11:05:18 -08:00
Lorenze Jay
53df41989a feat: bump versions to 1.10.1 (#4706)
2026-03-04 11:03:17 -08:00
Greyson LaLonde
ea70976a5d fix: adjust executor listener value to avoid recursion (#4705)
* fix: adjust executor listener value to avoid recursion

* fix: clear call count to ensure zero state

* feat: expose max method call kwarg
2026-03-04 10:47:22 -08:00
João Moura
3cc6516ae5 Memory overall improvements (#4688)
* feat: enhance memory recall limits and update documentation

- Increased the memory recall limit in the Agent class from 5 to 15.
- Updated the RecallMemoryTool to allow a recall limit of 20.
- Expanded the documentation for the recall_memory feature to emphasize the importance of multiple queries for comprehensive results.

* feat: increase memory recall limit and enhance memory context documentation

- Increased the memory recall limit in the Agent class from 15 to 20.
- Updated the memory context message to clarify the nature of the memories presented and the importance of using the Search memory tool for comprehensive results.

* refactor: remove inferred_categories from RecallState and update category merging logic

- Removed the inferred_categories field from RecallState to simplify state management.
- Updated the _merged_categories method to only merge caller-supplied categories, enhancing clarity in category handling.

* refactor: simplify category handling in RecallFlow

- Updated the _merged_categories method to return only caller-supplied categories, removing the previous merging logic for inferred categories. This change enhances clarity and maintains consistency in category management.
2026-03-04 09:19:07 -08:00
nicoferdi96
ad82e52d39 fix(gemini): group parallel function_response parts in a single Content object (#4693)
* fix(gemini): group parallel function_response parts in a single Content object

When Gemini makes N parallel tool calls, the API requires all N function_response parts in one Content object. Previously each tool result created a separate Content, causing 400 INVALID_ARGUMENT errors. Merge consecutive function_response parts into the existing Content instead of appending new ones.
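A minimal sketch of the merge behavior described here, assuming google-genai's `types.Content`/`types.Part` objects; the helper name and the `role` value are illustrative, not the actual CrewAI implementation:

```python
from google.genai import types


def append_function_response(
    contents: list[types.Content], name: str, result: dict
) -> None:
    """Append a tool result, grouping parallel responses into one Content."""
    part = types.Part.from_function_response(name=name, response=result)
    last = contents[-1] if contents else None
    # If the previous Content already carries function_response parts, extend it
    # instead of appending a new Content, as the Gemini API expects for
    # parallel tool calls.
    if last is not None and last.parts and last.parts[-1].function_response is not None:
        last.parts.append(part)
    else:
        contents.append(types.Content(role="user", parts=[part]))
```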

* Address change requested

- function_response is a declared field on the types.Part Pydantic model so hasattr can be replaced with p.function_response is not None
2026-03-04 12:04:23 +01:00
Matt Aitchison
9336702ebc fix(deps): bump pypdf, urllib3 override, and dev dependencies for security fixes
- pypdf ~6.7.4 → ~6.7.5 (CVE: inefficient ASCIIHexDecode stream decoding)
- Add urllib3>=2.6.3 override (CVE: decompression-bomb bypass on redirects)
- ruff 0.14.7 → 0.15.1, mypy 1.19.0 → 1.19.1, pre-commit 4.5.0 → 4.5.1
- types-regex 2024.11.6 → 2026.1.15, boto3-stubs 1.40.54 → 1.42.40
- Auto-fixed 13 lint issues from new ruff rules

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-03-04 01:13:38 -05:00
Greyson LaLonde
030f6d6c43 fix: use anon id for ephemeral traces 2026-03-04 00:45:09 -05:00
Mike Plachta
95d51db29f Langgraph migration guide (#4681)
2026-03-03 11:53:12 -08:00
Greyson LaLonde
a8f51419f6 fix(gemini): surface thought output from thinking models
* fix(gemini): surface thought output from thinking models

* chore(llm): remove unreachable hasattr guards on crewai_event_bus
2026-03-03 11:54:55 -05:00
Greyson LaLonde
e7f17d2284 fix: load MCP and platform tools when agent tools is None
Closes #4568
2026-03-03 10:25:25 -05:00
Greyson LaLonde
5d0811258f fix(a2a): support Jupyter environments with running event loops 2026-03-03 10:05:48 -05:00
Greyson LaLonde
7972192d55 fix(deps): bump tokenizers lower bound to >=0.21 to avoid broken 0.20.3
2026-03-02 18:04:28 -05:00
Mike Plachta
b3f8a42321 feat: upgrade gemini genai
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2026-03-02 14:27:56 -05:00
Greyson LaLonde
21224f2bc5 fix: conditionally pass plus header
Empty strings are considered illegal values for bearer auth in `httpx`.
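A minimal sketch of the conditional header logic, assuming an illustrative token source; not the exact CrewAI code:

```python
import os

import httpx


def build_headers(token: str | None) -> dict[str, str]:
    headers: dict[str, str] = {}
    # Only attach the Authorization header when a non-empty token is present,
    # since httpx treats empty bearer values as illegal.
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return headers


# "CREWAI_PLUS_TOKEN" is a hypothetical variable name used for illustration.
client = httpx.Client(headers=build_headers(os.environ.get("CREWAI_PLUS_TOKEN")))
```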
2026-03-02 09:27:54 -05:00
Giulio Leone
b76022c1e7 fix(telemetry): skip signal handler registration in non-main threads
* fix(telemetry): skip signal handler registration in non-main threads

When CrewAI is initialized from a non-main thread (e.g. Streamlit, Flask,
Django, Jupyter), the telemetry module attempted to register signal handlers
which only work in the main thread. This caused multiple noisy ValueError
tracebacks to be printed to stderr, confusing users even though the errors
were caught and non-fatal.

Check `threading.current_thread() is not threading.main_thread()` before
attempting signal registration, and skip silently with a debug-level log
message instead of printing full tracebacks.

Fixes crewAIInc/crewAI#4289
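A minimal sketch of the guard described above; the function and handler names are illustrative:

```python
import logging
import signal
import threading

logger = logging.getLogger(__name__)


def register_shutdown_handlers(handler) -> None:
    # Signal handlers can only be registered from the main thread; skip
    # quietly (debug log only) when running under Streamlit, Flask, Jupyter, etc.
    if threading.current_thread() is not threading.main_thread():
        logger.debug("Skipping signal handler registration: not in main thread")
        return
    signal.signal(signal.SIGINT, handler)
    signal.signal(signal.SIGTERM, handler)
```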

* fix(test): move Telemetry() inside signal.signal mock context

Refs: #4649

* fix(telemetry): move signal.signal mock inside thread to wrap Telemetry() construction

The patch context now activates inside init_in_thread so the mock
is guaranteed to be active before and during Telemetry.__init__,
addressing the Copilot review feedback.

Refs: #4289

* fix(test): mock logger.debug instead of capsys for deterministic assertion

Replace signal.signal-only mock with combined logger + signal mock.
Assert logger.debug was called with the skip message and signal.signal
was never invoked from the non-main thread.

Refs: #4289
2026-03-02 07:42:55 -05:00
Greyson LaLonde
1ac5801578 fix: inject tool errors as observations and resolve name collisions
2026-03-01 00:46:04 -05:00
Matt Aitchison
c00a348837 fix: upgrade pypdf 4.x → 6.7.4 to resolve 11 Dependabot alerts
pypdf <6.7.4 has multiple DoS vulnerabilities via crafted PDF streams
(FlateDecode, LZWDecode, RunLengthDecode, XFA, TreeObject, outlines).

Only basic PdfReader/PdfWriter APIs are used in crewai-files, none of
which changed in the 5.0 or 6.0 breaking releases.
2026-02-28 17:16:45 -05:00
Matt Aitchison
6c8c6c8e12 fix: resolve critical/high Dependabot security alerts (#4652)
Upgrade pillow 10.4.0 → 12.1.1 (out-of-bounds write on PSD images),
langchain-core 0.3.76 → 0.3.83 (template injection), and
urllib3 2.6.1 → 2.6.3 (decompression-bomb bypass on redirects).

Bump docling ~=2.63.0 → ~=2.75.0 for pillow 12 compat, and add
uv overrides for pillow/langchain-core to unblock transitive pins
from fastembed and langchain-apify.
2026-02-28 13:04:35 -06:00
Musthaq Ahamad
3899910aa9 docs: sync Composio tool docs across locales (#4639)
* docs: update Composio tool docs across locales

Align the Composio automation docs with the new session-based example flow and keep localized pages in sync with the updated English content.

Made-with: Cursor

* docs: clarify manual user authentication wording

Refine the Composio auth section language to reflect session-based automatic auth during agent chat while keeping the manual `authorize` flow explicit.

Made-with: Cursor

* docs: sync updated Composio auth wording across locales

Propagate the latest English wording updates for CrewAI provider initialization and manual user authentication guidance to pt-BR and ko docs.

Made-with: Cursor
2026-02-27 13:38:45 -08:00
Greyson LaLonde
757a435ee3 chore: update changelog and version for v1.10.1a1
2026-02-27 09:58:48 -05:00
Greyson LaLonde
8bfdb188f7 feat: bump versions to 1.10.1a1 2026-02-27 09:44:47 -05:00
João Moura
1bdb9496a3 refactor: update step callback methods to support asynchronous invocation (#4633)
* refactor: update step callback methods to support asynchronous invocation

- Replaced synchronous step callback invocations with asynchronous counterparts in the CrewAgentExecutor class.
- Introduced a new async method _ainvoke_step_callback to handle step callbacks in an async context, improving responsiveness and performance in asynchronous workflows.
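A minimal sketch of how such a helper can support both sync and async callbacks; the real `_ainvoke_step_callback` implementation may differ:

```python
import inspect
from typing import Any, Callable


async def _ainvoke_step_callback(callback: Callable[[Any], Any], step_output: Any) -> None:
    # Call the callback; if it returned an awaitable (async callback), await it.
    result = callback(step_output)
    if inspect.isawaitable(result):
        await result
```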

* chore: bump version to 1.10.1b1 across multiple files

- Updated version strings from 1.10.1b to 1.10.1b1 in various project files including pyproject.toml and __init__.py files.
- Adjusted dependency specifications to reflect the new version in relevant templates and modules.
2026-02-27 07:35:03 -03:00
Joao Moura
979aa26c3d bump new alpha version 2026-02-27 01:43:33 -08:00
João Moura
514c082882 refactor: implement lazy loading for heavy dependencies in Memory module (#4632)
- Introduced lazy imports for the Memory and EncodingFlow classes to optimize import time and reduce initial load, particularly beneficial for deployment scenarios like Celery pre-fork.
- Updated the Memory class to include new configuration options for aggregation queries, enhancing its functionality.
- Adjusted the __getattr__ method in both the crewai and memory modules to support lazy loading of specified attributes.
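A minimal sketch of module-level lazy loading via PEP 562 `__getattr__`, as described above; the module paths in the mapping are hypothetical:

```python
import importlib
from typing import Any

# Hypothetical mapping of public names to the modules that define them.
_LAZY_IMPORTS = {
    "Memory": ("crewai.memory", "Memory"),
    "EncodingFlow": ("crewai.memory", "EncodingFlow"),
}


def __getattr__(name: str) -> Any:
    # Import the heavy module only on first attribute access.
    if name in _LAZY_IMPORTS:
        module_path, attr = _LAZY_IMPORTS[name]
        return getattr(importlib.import_module(module_path), attr)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```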
2026-02-27 03:20:02 -03:00
Greyson LaLonde
c9e8068578 docs: update changelog and version for v1.10.0 2026-02-26 19:14:25 -05:00
Greyson LaLonde
df2778f08b fix: make branch for release notes
2026-02-26 18:49:13 -05:00
Greyson LaLonde
d8fea2518d feat: bump versions to 1.10.0
* feat: bump versions to 1.10.0

* chore: update tool specifications

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2026-02-26 18:31:14 -05:00
Lucas Gomide
d259150d8d Enhance MCP tool resolution and related events (#4580)
* feat: enhance MCP tool resolution

* feat: emit event when MCP configuration fails

* feat: emit event when MCP tool execution has failed

* style: resolve linter issues

* refactor: use clear and natural mcp tool name resolution

* test: fix broken tests

* fix: resolve MCP connection leaks, slug validation, duplicate connections, and httpx exception handling

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Greyson LaLonde <greyson@crewai.com>
2026-02-26 13:59:30 -08:00
Greyson LaLonde
c4a328c9d5 fix: validate tool kwargs even when empty to prevent cryptic TypeError (#4611) 2026-02-26 16:18:03 -05:00
Greyson LaLonde
373abbb6b7 fix: add dict overload to build_embedder and type default embedder 2026-02-26 16:04:28 -05:00
João Moura
86d3ee022d feat: update lancedb version and add lance-namespace packages
* chore(deps): update lancedb version and add lance-namespace packages

- Updated lancedb dependency version from 0.4.0 to 0.29.2 in multiple files.
- Added new packages: lance-namespace and lance-namespace-urllib3-client with version 0.5.2, including their dependencies and installation details.
- Enhanced MemoryTUI to display a limit on entries and improved the LanceDBStorage class with automatic background compaction and index creation for better performance.

* linter

* refactor: update memory recall limit and formatting in Agent class

- Reduced the memory recall limit from 10 to 5 in multiple locations within the Agent class.
- Updated the memory formatting to use a new `format` method in the MemoryMatch class for improved readability and metadata inclusion.

* refactor: enhance memory handling with read-only support

- Updated memory-related classes and methods to support read-only functionality, allowing for silent no-ops when attempting to remember data in read-only mode.
- Modified the LiteAgent and CrewAgentExecutorMixin classes to check for read-only status before saving memories.
- Adjusted MemorySlice and Memory classes to reflect changes in behavior when read-only is enabled.
- Updated tests to verify that memory operations behave correctly under read-only conditions.

* test: set mock memory to read-write in unit tests

- Updated unit tests in test_unified_memory.py to set mock_memory._read_only to False, ensuring that memory operations can be tested in a writable state.

* fix test

* fix: preserve falsy metadata values and fix remember() return type

---------

Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
Co-authored-by: Greyson LaLonde <greyson@crewai.com>
2026-02-26 15:05:10 -05:00
Lucas Gomide
09e3b81ca3 fix: preserve null types in tool parameter schemas for LLM (#4579)
* fix: preserve null types in tool parameter schemas for LLM

Tool parameter schemas were stripping null from optional fields via
generate_model_description, forcing the LLM to provide non-null values
for those fields.

Adds a strip_null_types parameter to generate_model_description and passes False when generating tool
schemas, so optional fields keep `anyOf: [{type: T}, {type: null}]`.
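For reference, a minimal Pydantic example (field names are illustrative) showing the schema shape that is preserved for optional fields:

```python
from pydantic import BaseModel


class SearchArgs(BaseModel):
    query: str
    country: str | None = None  # optional field


print(SearchArgs.model_json_schema()["properties"]["country"])
# {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Country'}
```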

* Update lib/crewai/src/crewai/utilities/pydantic_schema_utils.py

Co-authored-by: Gabe Milani <gabriel@crewai.com>

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Gabe Milani <gabriel@crewai.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2026-02-26 11:51:34 -05:00
Heitor Carvalho
b6d8ce5c55 docs: add litellm dependency note for non-native LLM providers (#4600)
2026-02-26 10:57:37 -03:00
Greyson LaLonde
b371f97a2f fix: map output_pydantic/output_json to native structured output
* fix: map output_pydantic/output_json to native structured output

* test: add crew+tools+structured output integration test for Gemini

* fix: re-record stale cassette for test_crew_testing_function

* fix: re-record remaining stale cassettes for native structured output

* fix: enable native structured output for lite agent and fix mypy errors
2026-02-25 17:13:34 -05:00
dependabot[bot]
017189db78 chore(deps): bump nltk in the security-updates group across 1 directory (#4598)
Bumps the security-updates group with 1 update in the / directory: [nltk](https://github.com/nltk/nltk).


Updates `nltk` from 3.9.2 to 3.9.3
- [Changelog](https://github.com/nltk/nltk/blob/develop/ChangeLog)
- [Commits](https://github.com/nltk/nltk/compare/3.9.2...3.9.3)

---
updated-dependencies:
- dependency-name: nltk
  dependency-version: 3.9.3
  dependency-type: indirect
  dependency-group: security-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-25 15:37:21 -06:00
dependabot[bot]
02d911494f chore(deps): bump cryptography (#4506)
Bumps the security-updates group with 1 update in the / directory: [cryptography](https://github.com/pyca/cryptography).


Updates `cryptography` from 46.0.4 to 46.0.5
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/46.0.4...46.0.5)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 46.0.5
  dependency-type: indirect
  dependency-group: security-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-25 15:04:07 -06:00
João Moura
8102d0a6ca feat: enhance JSON argument parsing and validation in CrewAgentExecutor and BaseTool
* feat: enhance JSON argument parsing and validation in CrewAgentExecutor and BaseTool

- Added error handling for malformed JSON tool arguments in CrewAgentExecutor, providing descriptive error messages.
- Implemented schema validation for tool arguments in BaseTool, ensuring that invalid arguments raise appropriate exceptions.
- Introduced tests to verify correct behavior for both valid and invalid JSON inputs, enhancing robustness of tool execution.

* refactor: improve argument validation in BaseTool

- Introduced a new private method  to handle argument validation for tools, enhancing code clarity and reusability.
- Updated the  method to utilize the new validation method, ensuring consistent error handling for invalid arguments.
- Enhanced exception handling to specifically catch , providing clearer error messages for tool argument validation failures.

* feat: introduce parse_tool_call_args for improved argument parsing

- Added a new utility function, parse_tool_call_args, to handle parsing of tool call arguments from JSON strings or dictionaries, enhancing error handling for malformed JSON inputs.
- Updated CrewAgentExecutor and AgentExecutor to utilize the new parsing function, streamlining argument validation and improving clarity in error reporting.
- Introduced unit tests for parse_tool_call_args to ensure robust functionality and correct handling of various input scenarios.
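A minimal sketch of the parsing behavior described above; the real `parse_tool_call_args` signature and error types may differ:

```python
import json
from typing import Any


def parse_tool_call_args(raw: str | dict[str, Any]) -> dict[str, Any]:
    """Accept tool-call arguments as a dict or JSON string, with clear errors."""
    if isinstance(raw, dict):
        return raw
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Malformed JSON in tool arguments: {exc}") from exc
    if not isinstance(parsed, dict):
        raise ValueError("Tool arguments must decode to a JSON object")
    return parsed
```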

* feat: add keyword argument validation in BaseTool and Tool classes

- Introduced a new method `_validate_kwargs` in BaseTool to validate keyword arguments against the defined schema, ensuring proper argument handling.
- Updated the `run` and `arun` methods in both BaseTool and Tool classes to utilize the new validation method, improving error handling and robustness.
- Added comprehensive tests for asynchronous execution in `TestBaseToolArunValidation` to verify correct behavior for valid and invalid keyword arguments.

* Potential fix for pull request finding 'Syntax error'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

---------

Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
2026-02-25 13:13:31 -05:00
Greyson LaLonde
ee374d01de chore: add versioning logic for devtools 2026-02-25 12:13:00 -05:00
Greyson LaLonde
9914e51199 feat: add versioned docs
starting with 1.10.0
2026-02-25 11:05:31 -05:00
nicoferdi96
2dbb83ae31 Private package registry (#4583)
Adding reference and explanation for the package registry

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2026-02-24 19:37:17 +01:00
Mike Plachta
7377e1aa26 fix: bedrock region was always set to "us-east-1" not respecting the env var. (#4582)
* fix: bedrock region was always set to "us-east-1", not respecting the env var.

The code referenced AWS_REGION_NAME but never used it; unified on
AWS_DEFAULT_REGION as per the documentation.
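A minimal sketch of the resulting env-var precedence (illustrative; the actual resolution in CrewAI may differ):

```python
import os

# Prefer the documented variable, fall back to the LiteLLM-compatible one,
# then to the default region.
region = (
    os.environ.get("AWS_DEFAULT_REGION")
    or os.environ.get("AWS_REGION_NAME")
    or "us-east-1"
)
```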

* DRY code improvement and fix caught by tests.

* Supporting litellm configuration
2026-02-24 09:59:01 -08:00
138 changed files with 13416 additions and 4847 deletions

View File

@@ -21,7 +21,6 @@ OPENROUTER_API_KEY=fake-openrouter-key
AWS_ACCESS_KEY_ID=fake-aws-access-key
AWS_SECRET_ACCESS_KEY=fake-aws-secret-key
AWS_DEFAULT_REGION=us-east-1
AWS_REGION_NAME=us-east-1
# -----------------------------------------------------------------------------
# Azure OpenAI Configuration

View File

@@ -1,8 +1,6 @@
name: Publish to PyPI
on:
  repository_dispatch:
    types: [deployment-tests-passed]
  workflow_dispatch:
    inputs:
      release_tag:
@@ -20,11 +18,8 @@ jobs:
      - name: Determine release tag
        id: release
        run: |
          # Priority: workflow_dispatch input > repository_dispatch payload > default branch
          if [ -n "${{ inputs.release_tag }}" ]; then
            echo "tag=${{ inputs.release_tag }}" >> $GITHUB_OUTPUT
          elif [ -n "${{ github.event.client_payload.release_tag }}" ]; then
            echo "tag=${{ github.event.client_payload.release_tag }}" >> $GITHUB_OUTPUT
          else
            echo "tag=" >> $GITHUB_OUTPUT
          fi

View File

@@ -1,18 +0,0 @@
name: Trigger Deployment Tests
on:
  release:
    types: [published]
jobs:
  trigger:
    name: Trigger deployment tests
    runs-on: ubuntu-latest
    steps:
      - name: Trigger deployment tests
        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.CREWAI_DEPLOYMENTS_PAT }}
          repository: ${{ secrets.CREWAI_DEPLOYMENTS_REPOSITORY }}
          event-type: crewai-release
          client-payload: '{"release_tag": "${{ github.event.release.tag_name }}", "release_name": "${{ github.event.release.name }}"}'

View File

@@ -12,6 +12,7 @@ from dotenv import load_dotenv
import pytest
from vcr.request import Request # type: ignore[import-untyped]
try:
    import vcr.stubs.httpx_stubs as httpx_stubs  # type: ignore[import-untyped]
except ModuleNotFoundError:

File diff suppressed because it is too large.

View File

@@ -4,6 +4,138 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Mar 04, 2026">
## v1.10.1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.1)
## What's Changed
### Features
- Upgrade Gemini GenAI
### Bug Fixes
- Adjust executor listener value to avoid recursion
- Group parallel function response parts in a single Content object in Gemini
- Surface thought output from thinking models in Gemini
- Load MCP and platform tools when agent tools are None
- Support Jupyter environments with running event loops in A2A
- Use anonymous ID for ephemeral traces
- Conditionally pass plus header
- Skip signal handler registration in non-main threads for telemetry
- Inject tool errors as observations and resolve name collisions
- Upgrade pypdf from 4.x to 6.7.4 to resolve Dependabot alerts
- Resolve critical and high Dependabot security alerts
### Documentation
- Sync Composio tool documentation across locales
## Contributors
@giulio-leone, @greysonlalonde, @haxzie, @joaomdmoura, @lorenzejay, @mattatcha, @mplachta, @nicoferdi96
</Update>
<Update label="Feb 27, 2026">
## v1.10.1a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.1a1)
## What's Changed
### Features
- Implement asynchronous invocation support in step callback methods
- Implement lazy loading for heavy dependencies in Memory module
### Documentation
- Update changelog and version for v1.10.0
### Refactoring
- Refactor step callback methods to support asynchronous invocation
- Refactor to implement lazy loading for heavy dependencies in Memory module
### Bug Fixes
- Fix branch for release notes
## Contributors
@greysonlalonde, @joaomdmoura
</Update>
<Update label="Feb 26, 2026">
## v1.10.0
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.0)
## What's Changed
### Features
- Enhance MCP tool resolution and related events
- Update lancedb version and add lance-namespace packages
- Enhance JSON argument parsing and validation in CrewAgentExecutor and BaseTool
- Migrate CLI HTTP client from requests to httpx
- Add versioned documentation
- Add yanked detection for version notes
- Implement user input handling in Flows
- Enhance HITL self-loop functionality in human feedback integration tests
- Add started_event_id and set in eventbus
- Auto update tools.specs
### Bug Fixes
- Validate tool kwargs even when empty to prevent cryptic TypeError
- Preserve null types in tool parameter schemas for LLM
- Map output_pydantic/output_json to native structured output
- Ensure callbacks are run/awaited if they are promises
- Capture method name in exception context
- Preserve enum type in router result; improve types
- Fix cyclic flows silently breaking when persistence ID is passed in inputs
- Correct CLI flag format from --skip-provider to --skip_provider
- Ensure OpenAI tool call stream is finalized
- Resolve complex schema $ref pointers in MCP tools
- Enforce additionalProperties=false in schemas
- Reject reserved script names for crew folders
- Resolve race condition in guardrail event emission test
### Documentation
- Add litellm dependency note for non-native LLM providers
- Clarify NL2SQL security model and hardening guidance
- Add 96 missing actions across 9 integrations
### Refactoring
- Refactor crew to provider
- Extract HITL to provider pattern
- Improve hook typing and registration
## Contributors
@dependabot[bot], @github-actions[bot], @github-code-quality[bot], @greysonlalonde, @heitorado, @hobostay, @joaomdmoura, @johnvan7, @jonathansampson, @lorenzejay, @lucasgomide, @mattatcha, @mplachta, @nicoferdi96, @theCyberTech, @thiagomoretto, @vinibrsl
</Update>
<Update label="Jan 26, 2026">
## v1.9.0

View File

@@ -106,6 +106,15 @@ There are different places in CrewAI code where you can specify the model to use
</Tab>
</Tabs>
<Info>
CrewAI provides native SDK integrations for OpenAI, Anthropic, Google (Gemini API), Azure, and AWS Bedrock — no extra install needed beyond the provider-specific extras (e.g. `uv add "crewai[openai]"`).
All other providers are powered by **LiteLLM**. If you plan to use any of them, add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Info>
## Provider Configuration Examples
CrewAI supports a multitude of LLM providers, each offering unique features, authentication methods, and model capabilities.
@@ -275,6 +284,11 @@ In this section, you'll find detailed examples that help you select, configure,
| `meta_llama/Llama-4-Maverick-17B-128E-Instruct-FP8` | 128k | 4028 | Text, Image | Text |
| `meta_llama/Llama-3.3-70B-Instruct` | 128k | 4028 | Text | Text |
| `meta_llama/Llama-3.3-8B-Instruct` | 128k | 4028 | Text | Text |
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Anthropic">
@@ -470,7 +484,7 @@ In this section, you'll find detailed examples that help you select, configure,
To get an Express mode API key:
- New Google Cloud users: Get an [express mode API key](https://cloud.google.com/vertex-ai/generative-ai/docs/start/quickstart?usertype=apikey)
- Existing Google Cloud users: Get a [Google Cloud API key bound to a service account](https://cloud.google.com/docs/authentication/api-keys)
For more details, see the [Vertex AI Express mode documentation](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/start/quickstart?usertype=apikey).
</Info>
@@ -571,6 +585,11 @@ In this section, you'll find detailed examples that help you select, configure,
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Azure">
@@ -652,6 +671,7 @@ In this section, you'll find detailed examples that help you select, configure,
# Optional
AWS_SESSION_TOKEN=<your-session-token> # For temporary credentials
AWS_DEFAULT_REGION=<your-region> # Defaults to us-east-1
AWS_REGION_NAME=<your-region> # Alternative configuration for backwards compatibility with LiteLLM. Defaults to us-east-1
```
**Basic Usage:**
@@ -695,6 +715,7 @@ In this section, you'll find detailed examples that help you select, configure,
- `AWS_SECRET_ACCESS_KEY`: AWS secret key (required)
- `AWS_SESSION_TOKEN`: AWS session token for temporary credentials (optional)
- `AWS_DEFAULT_REGION`: AWS region (defaults to `us-east-1`)
- `AWS_REGION_NAME`: AWS region (defaults to `us-east-1`). Alternative configuration for backwards compatibility with LiteLLM
**Features:**
- Native tool calling support via Converse API
@@ -764,6 +785,11 @@ In this section, you'll find detailed examples that help you select, configure,
model="sagemaker/<my-endpoint>"
)
```
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Mistral">
@@ -779,6 +805,11 @@ In this section, you'll find detailed examples that help you select, configure,
temperature=0.7
)
```
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Nvidia NIM">
@@ -865,6 +896,11 @@ In this section, you'll find detailed examples that help you select, configure,
| rakuten/rakutenai-7b-instruct | 1,024 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| rakuten/rakutenai-7b-chat | 1,024 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| baichuan-inc/baichuan2-13b-chat | 4,096 tokens | Support Chinese and English chat, coding, math, instruction following, solving quizzes |
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Local NVIDIA NIM Deployed using WSL2">
@@ -905,6 +941,11 @@ In this section, you'll find detailed examples that help you select, configure,
# ...
```
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Groq">
@@ -926,6 +967,11 @@ In this section, you'll find detailed examples that help you select, configure,
| Llama 3.1 70B/8B | 131,072 tokens | High-performance, large context tasks |
| Llama 3.2 Series | 8,192 tokens | General-purpose tasks |
| Mixtral 8x7B | 32,768 tokens | Balanced performance and context |
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="IBM watsonx.ai">
@@ -948,6 +994,11 @@ In this section, you'll find detailed examples that help you select, configure,
base_url="https://api.watsonx.ai/v1"
)
```
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Ollama (Local LLMs)">
@@ -961,6 +1012,11 @@ In this section, you'll find detailed examples that help you select, configure,
base_url="http://localhost:11434"
)
```
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Fireworks AI">
@@ -976,6 +1032,11 @@ In this section, you'll find detailed examples that help you select, configure,
temperature=0.7
)
```
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Perplexity AI">
@@ -991,6 +1052,11 @@ In this section, you'll find detailed examples that help you select, configure,
base_url="https://api.perplexity.ai/"
)
```
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Hugging Face">
@@ -1005,6 +1071,11 @@ In this section, you'll find detailed examples that help you select, configure,
model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct"
)
```
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="SambaNova">
@@ -1028,6 +1099,11 @@ In this section, you'll find detailed examples that help you select, configure,
| Llama 3.2 Series | 8,192 tokens | General-purpose, multimodal tasks |
| Llama 3.3 70B | Up to 131,072 tokens | High-performance and output quality |
| Qwen2 family | 8,192 tokens | High-performance and output quality |
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Cerebras">
@@ -1053,6 +1129,11 @@ In this section, you'll find detailed examples that help you select, configure,
- Good balance of speed and quality
- Support for long context windows
</Info>
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Open Router">
@@ -1075,6 +1156,11 @@ In this section, you'll find detailed examples that help you select, configure,
- openrouter/deepseek/deepseek-r1
- openrouter/deepseek/deepseek-chat
</Info>
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Nebius AI Studio">
@@ -1097,6 +1183,11 @@ In this section, you'll find detailed examples that help you select, configure,
- Competitive pricing
- Good balance of speed and quality
</Info>
**Note:** This provider uses LiteLLM. Add it as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
</AccordionGroup>

View File

@@ -177,6 +177,11 @@ You need to push your crew to a GitHub repository. If you haven't created a crew
![Set Environment Variables](/images/enterprise/set-env-variables.png)
</Frame>
<Info>
Using private Python packages? You'll need to add your registry credentials here too.
See [Private Package Registries](/en/enterprise/guides/private-package-registry) for the required variables.
</Info>
</Step>
<Step title="Deploy Your Crew">

View File

@@ -256,6 +256,12 @@ Before deployment, ensure you have:
1. **LLM API keys** ready (OpenAI, Anthropic, Google, etc.)
2. **Tool API keys** if using external tools (Serper, etc.)
<Info>
If your project depends on packages from a **private PyPI registry**, you'll also need to configure
registry authentication credentials as environment variables. See the
[Private Package Registries](/en/enterprise/guides/private-package-registry) guide for details.
</Info>
<Tip>
Test your project locally with the same environment variables before deploying
to catch configuration issues early.

View File

@@ -0,0 +1,263 @@
---
title: "Private Package Registries"
description: "Install private Python packages from authenticated PyPI registries in CrewAI AMP"
icon: "lock"
mode: "wide"
---
<Note>
This guide covers how to configure your CrewAI project to install Python packages
from private PyPI registries (Azure DevOps Artifacts, GitHub Packages, GitLab, AWS CodeArtifact, etc.)
when deploying to CrewAI AMP.
</Note>
## When You Need This
If your project depends on internal or proprietary Python packages hosted on a private registry
rather than the public PyPI, you'll need to:
1. Tell UV **where** to find the package (an index URL)
2. Tell UV **which** packages come from that index (a source mapping)
3. Provide **credentials** so UV can authenticate during install
CrewAI AMP uses [UV](https://docs.astral.sh/uv/) for dependency resolution and installation.
UV supports authenticated private registries through `pyproject.toml` configuration combined
with environment variables for credentials.
## Step 1: Configure pyproject.toml
Three pieces work together in your `pyproject.toml`:
### 1a. Declare the dependency
Add the private package to your `[project.dependencies]` like any other dependency:
```toml
[project]
dependencies = [
"crewai[tools]>=0.100.1,<1.0.0",
"my-private-package>=1.2.0",
]
```
### 1b. Define the index
Register your private registry as a named index under `[[tool.uv.index]]`:
```toml
[[tool.uv.index]]
name = "my-private-registry"
url = "https://pkgs.dev.azure.com/my-org/_packaging/my-feed/pypi/simple/"
explicit = true
```
<Info>
The `name` field is important — UV uses it to construct the environment variable names
for authentication (see [Step 2](#step-2-set-authentication-credentials) below).
Setting `explicit = true` means UV won't search this index for every package — only the
ones you explicitly map to it in `[tool.uv.sources]`. This avoids unnecessary queries
against your private registry and protects against dependency confusion attacks.
</Info>
### 1c. Map the package to the index
Tell UV which packages should be resolved from your private index using `[tool.uv.sources]`:
```toml
[tool.uv.sources]
my-private-package = { index = "my-private-registry" }
```
### Complete example
```toml
[project]
name = "my-crew-project"
version = "0.1.0"
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.100.1,<1.0.0",
"my-private-package>=1.2.0",
]
[tool.crewai]
type = "crew"
[[tool.uv.index]]
name = "my-private-registry"
url = "https://pkgs.dev.azure.com/my-org/_packaging/my-feed/pypi/simple/"
explicit = true
[tool.uv.sources]
my-private-package = { index = "my-private-registry" }
```
After updating `pyproject.toml`, regenerate your lock file:
```bash
uv lock
```
<Warning>
Always commit the updated `uv.lock` along with your `pyproject.toml` changes.
The lock file is required for deployment — see [Prepare for Deployment](/en/enterprise/guides/prepare-for-deployment).
</Warning>
## Step 2: Set Authentication Credentials
UV authenticates against private indexes using environment variables that follow a naming convention
based on the index name you defined in `pyproject.toml`:
```
UV_INDEX_{UPPER_NAME}_USERNAME
UV_INDEX_{UPPER_NAME}_PASSWORD
```
Where `{UPPER_NAME}` is your index name converted to **uppercase** with **hyphens replaced by underscores**.
For example, an index named `my-private-registry` uses:
| Variable | Value |
|----------|-------|
| `UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME` | Your registry username or token name |
| `UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD` | Your registry password or token/PAT |
<Warning>
These environment variables **must** be added via the CrewAI AMP **Environment Variables** settings —
either globally or at the deployment level. They cannot be set in `.env` files or hardcoded in your project.
See [Setting Environment Variables in AMP](#setting-environment-variables-in-amp) below.
</Warning>
## Registry Provider Reference
The table below shows the index URL format and credential values for common registry providers.
Replace placeholder values with your actual organization and feed details.
| Provider | Index URL | Username | Password |
|----------|-----------|----------|----------|
| **Azure DevOps Artifacts** | `https://pkgs.dev.azure.com/{org}/_packaging/{feed}/pypi/simple/` | Any non-empty string (e.g. `token`) | Personal Access Token (PAT) with Packaging Read scope |
| **GitHub Packages** | `https://pypi.pkg.github.com/{owner}/simple/` | GitHub username | Personal Access Token (classic) with `read:packages` scope |
| **GitLab Package Registry** | `https://gitlab.com/api/v4/projects/{project_id}/packages/pypi/simple/` | `__token__` | Project or Personal Access Token with `read_api` scope |
| **AWS CodeArtifact** | Use the URL from `aws codeartifact get-repository-endpoint` | `aws` | Token from `aws codeartifact get-authorization-token` |
| **Google Artifact Registry** | `https://{region}-python.pkg.dev/{project}/{repo}/simple/` | `_json_key_base64` | Base64-encoded service account key |
| **JFrog Artifactory** | `https://{instance}.jfrog.io/artifactory/api/pypi/{repo}/simple/` | Username or email | API key or identity token |
| **Self-hosted (devpi, Nexus, etc.)** | Your registry's simple API URL | Registry username | Registry password |
<Tip>
For **AWS CodeArtifact**, the authorization token expires periodically.
You'll need to refresh the `UV_INDEX_*_PASSWORD` value when it expires.
Consider automating this in your CI/CD pipeline.
</Tip>
## Setting Environment Variables in AMP
Private registry credentials must be configured as environment variables in CrewAI AMP.
You have two options:
<Tabs>
<Tab title="Web Interface">
1. Log in to [CrewAI AMP](https://app.crewai.com)
2. Navigate to your automation
3. Open the **Environment Variables** tab
4. Add each variable (`UV_INDEX_*_USERNAME` and `UV_INDEX_*_PASSWORD`) with its value
See the [Deploy to AMP — Set Environment Variables](/en/enterprise/guides/deploy-to-amp#set-environment-variables) step for details.
</Tab>
<Tab title="CLI Deployment">
Add the variables to your local `.env` file before running `crewai deploy create`.
The CLI will securely transfer them to the platform:
```bash
# .env
OPENAI_API_KEY=sk-...
UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME=token
UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD=your-pat-here
```
```bash
crewai deploy create
```
</Tab>
</Tabs>
<Warning>
**Never** commit credentials to your repository. Use AMP environment variables for all secrets.
The `.env` file should be listed in `.gitignore`.
</Warning>
To update credentials on an existing deployment, see [Update Your Crew — Environment Variables](/en/enterprise/guides/update-crew).
## How It All Fits Together
When CrewAI AMP builds your automation, the resolution flow works like this:
<Steps>
<Step title="Build starts">
AMP pulls your repository and reads `pyproject.toml` and `uv.lock`.
</Step>
<Step title="UV resolves dependencies">
UV reads `[tool.uv.sources]` to determine which index each package should come from.
</Step>
<Step title="UV authenticates">
For each private index, UV looks up `UV_INDEX_{NAME}_USERNAME` and `UV_INDEX_{NAME}_PASSWORD`
from the environment variables you configured in AMP.
</Step>
<Step title="Packages install">
UV downloads and installs all packages — both public (from PyPI) and private (from your registry).
</Step>
<Step title="Automation runs">
Your crew or flow starts with all dependencies available.
</Step>
</Steps>
## Troubleshooting
### Authentication Errors During Build
**Symptom**: Build fails with `401 Unauthorized` or `403 Forbidden` when resolving a private package.
**Check**:
- The `UV_INDEX_*` environment variable names match your index name exactly (uppercased, hyphens → underscores)
- Credentials are set in AMP environment variables, not just in a local `.env`
- Your token/PAT has the required read permissions for the package feed
- The token hasn't expired (especially relevant for AWS CodeArtifact)
### Package Not Found
**Symptom**: `No matching distribution found for my-private-package`.
**Check**:
- The index URL in `pyproject.toml` ends with `/simple/`
- The `[tool.uv.sources]` entry maps the correct package name to the correct index name
- The package is actually published to your private registry
- Run `uv lock` locally with the same credentials to verify resolution works
### Lock File Conflicts
**Symptom**: `uv lock` fails or produces unexpected results after adding a private index.
**Solution**: Set the credentials locally and regenerate:
```bash
export UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME=token
export UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD=your-pat
uv lock
```
Then commit the updated `uv.lock`.
## Related Guides
<CardGroup cols={3}>
<Card title="Prepare for Deployment" icon="clipboard-check" href="/en/enterprise/guides/prepare-for-deployment">
Verify project structure and dependencies before deploying.
</Card>
<Card title="Deploy to AMP" icon="rocket" href="/en/enterprise/guides/deploy-to-amp">
Deploy your crew or flow and configure environment variables.
</Card>
<Card title="Update Your Crew" icon="arrows-rotate" href="/en/enterprise/guides/update-crew">
Update environment variables and push changes to a running deployment.
</Card>
</CardGroup>

View File

@@ -0,0 +1,518 @@
---
title: "Moving from LangGraph to CrewAI: A Practical Guide for Engineers"
description: If you have already built with LangGraph, learn how to quickly port your projects to CrewAI
icon: switch
mode: "wide"
---
You've built agents with LangGraph. You've wrestled with `StateGraph`, wired up conditional edges, and debugged state dictionaries at 2 AM. It works — but somewhere along the way, you started wondering if there's a better path to production.
There is. **CrewAI Flows** gives you the same power — event-driven orchestration, conditional routing, shared state — with dramatically less boilerplate and a mental model that maps cleanly to how you actually think about multi-step AI workflows.
This article walks through the core concepts side by side, shows real code comparisons, and demonstrates why CrewAI Flows is the framework you'll want to reach for next.
---
## The Mental Model Shift
LangGraph asks you to think in **graphs**: nodes, edges, and state dictionaries. Every workflow is a directed graph where you explicitly wire transitions between computation steps. It's powerful, but the abstraction carries overhead — especially when your workflow is fundamentally sequential with a few decision points.
CrewAI Flows asks you to think in **events**: methods that start things, methods that listen for results, and methods that route execution. The topology of your workflow emerges from decorator annotations rather than explicit graph construction. This isn't just syntactic sugar — it changes how you design, read, and maintain your pipelines.
Here's the core mapping:
| LangGraph Concept | CrewAI Flows Equivalent |
| --- | --- |
| `StateGraph` class | `Flow` class |
| `add_node()` | Methods decorated with `@start`, `@listen` |
| `add_edge()` / `add_conditional_edges()` | `@listen()` / `@router()` decorators |
| `TypedDict` state | Pydantic `BaseModel` state |
| `START` / `END` constants | `@start()` decorator / natural method return |
| `graph.compile()` | `flow.kickoff()` |
| Checkpointer / persistence | Built-in memory (LanceDB-backed) |
Let's see what this looks like in practice.
---
## Demo 1: A Simple Sequential Pipeline
Imagine you're building a pipeline that takes a topic, researches it, writes a summary, and formats the output. Here's how each framework handles it.
### LangGraph Approach
```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class ResearchState(TypedDict):
    topic: str
    raw_research: str
    summary: str
    formatted_output: str


def research_topic(state: ResearchState) -> dict:
    # Call an LLM or search API
    result = llm.invoke(f"Research the topic: {state['topic']}")
    return {"raw_research": result}


def write_summary(state: ResearchState) -> dict:
    result = llm.invoke(
        f"Summarize this research:\n{state['raw_research']}"
    )
    return {"summary": result}


def format_output(state: ResearchState) -> dict:
    result = llm.invoke(
        f"Format this summary as a polished article section:\n{state['summary']}"
    )
    return {"formatted_output": result}


# Build the graph
graph = StateGraph(ResearchState)
graph.add_node("research", research_topic)
graph.add_node("summarize", write_summary)
graph.add_node("format", format_output)

graph.add_edge(START, "research")
graph.add_edge("research", "summarize")
graph.add_edge("summarize", "format")
graph.add_edge("format", END)

# Compile and run
app = graph.compile()
result = app.invoke({"topic": "quantum computing advances in 2026"})
print(result["formatted_output"])
```
You define functions, register them as nodes, and manually wire every transition. For a simple sequence like this, there's a lot of ceremony.
### CrewAI Flows Approach
```python
from crewai import LLM, Agent, Crew, Process, Task
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel

llm = LLM(model="openai/gpt-5.2")


class ResearchState(BaseModel):
    topic: str = ""
    raw_research: str = ""
    summary: str = ""
    formatted_output: str = ""


class ResearchFlow(Flow[ResearchState]):
    @start()
    def research_topic(self):
        # Option 1: Direct LLM call
        result = llm.call(f"Research the topic: {self.state.topic}")
        self.state.raw_research = result
        return result

    @listen(research_topic)
    def write_summary(self, research_output):
        # Option 2: A single agent
        summarizer = Agent(
            role="Research Summarizer",
            goal="Produce concise, accurate summaries of research content",
            backstory="You are an expert at distilling complex research into clear, "
            "digestible summaries.",
            llm=llm,
            verbose=True,
        )
        result = summarizer.kickoff(
            f"Summarize this research:\n{self.state.raw_research}"
        )
        self.state.summary = str(result)
        return self.state.summary

    @listen(write_summary)
    def format_output(self, summary_output):
        # Option 3: a complete crew (with one or more agents)
        formatter = Agent(
            role="Content Formatter",
            goal="Transform research summaries into polished, publication-ready article sections",
            backstory="You are a skilled editor with expertise in structuring and "
            "presenting technical content for a general audience.",
            llm=llm,
            verbose=True,
        )
        format_task = Task(
            description=f"Format this summary as a polished article section:\n{self.state.summary}",
            expected_output="A well-structured, polished article section ready for publication.",
            agent=formatter,
        )
        crew = Crew(
            agents=[formatter],
            tasks=[format_task],
            process=Process.sequential,
            verbose=True,
        )
        result = crew.kickoff()
        self.state.formatted_output = str(result)
        return self.state.formatted_output


# Run the flow
flow = ResearchFlow()
flow.state.topic = "quantum computing advances in 2026"
result = flow.kickoff()
print(flow.state.formatted_output)
```
Notice what's different: no graph construction, no edge wiring, no compile step. The execution order is declared right where the logic lives. `@start()` marks the entry point, and `@listen(method_name)` chains steps together. The state is a proper Pydantic model with type safety, validation, and IDE auto-completion.
---
## Demo 2: Conditional Routing
This is where things get interesting. Say you're building a content pipeline that routes to different processing paths based on the type of content detected.
### LangGraph Approach
```python
from typing import TypedDict, Literal

from langgraph.graph import StateGraph, START, END


class ContentState(TypedDict):
    input_text: str
    content_type: str
    result: str


def classify_content(state: ContentState) -> dict:
    content_type = llm.invoke(
        f"Classify this content as 'technical', 'creative', or 'business':\n{state['input_text']}"
    )
    return {"content_type": content_type.strip().lower()}


def process_technical(state: ContentState) -> dict:
    result = llm.invoke(f"Process as technical doc:\n{state['input_text']}")
    return {"result": result}


def process_creative(state: ContentState) -> dict:
    result = llm.invoke(f"Process as creative writing:\n{state['input_text']}")
    return {"result": result}


def process_business(state: ContentState) -> dict:
    result = llm.invoke(f"Process as business content:\n{state['input_text']}")
    return {"result": result}


# Routing function
def route_content(state: ContentState) -> Literal["technical", "creative", "business"]:
    return state["content_type"]


# Build the graph
graph = StateGraph(ContentState)
graph.add_node("classify", classify_content)
graph.add_node("technical", process_technical)
graph.add_node("creative", process_creative)
graph.add_node("business", process_business)

graph.add_edge(START, "classify")
graph.add_conditional_edges(
    "classify",
    route_content,
    {
        "technical": "technical",
        "creative": "creative",
        "business": "business",
    }
)
graph.add_edge("technical", END)
graph.add_edge("creative", END)
graph.add_edge("business", END)

app = graph.compile()
result = app.invoke({"input_text": "Explain how TCP handshakes work"})
```
You need a separate routing function, explicit conditional edge mapping, and termination edges for every branch. The routing logic is decoupled from the node that produces the routing decision.
### CrewAI Flows Approach
```python
from crewai import LLM, Agent
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel

llm = LLM(model="openai/gpt-5.2")


class ContentState(BaseModel):
    input_text: str = ""
    content_type: str = ""
    result: str = ""


class ContentFlow(Flow[ContentState]):
    @start()
    def classify_content(self):
        self.state.content_type = (
            llm.call(
                f"Classify this content as 'technical', 'creative', or 'business':\n"
                f"{self.state.input_text}"
            )
            .strip()
            .lower()
        )
        return self.state.content_type

    @router(classify_content)
    def route_content(self, classification):
        if classification == "technical":
            return "process_technical"
        elif classification == "creative":
            return "process_creative"
        else:
            return "process_business"

    @listen("process_technical")
    def handle_technical(self):
        agent = Agent(
            role="Technical Writer",
            goal="Produce clear, accurate technical documentation",
            backstory="You are an expert technical writer who specializes in "
            "explaining complex technical concepts precisely.",
            llm=llm,
            verbose=True,
        )
        self.state.result = str(
            agent.kickoff(f"Process as technical doc:\n{self.state.input_text}")
        )

    @listen("process_creative")
    def handle_creative(self):
        agent = Agent(
            role="Creative Writer",
            goal="Craft engaging and imaginative creative content",
            backstory="You are a talented creative writer with a flair for "
            "compelling storytelling and vivid expression.",
            llm=llm,
            verbose=True,
        )
        self.state.result = str(
            agent.kickoff(f"Process as creative writing:\n{self.state.input_text}")
        )

    @listen("process_business")
    def handle_business(self):
        agent = Agent(
            role="Business Writer",
            goal="Produce professional, results-oriented business content",
            backstory="You are an experienced business writer who communicates "
            "strategy and value clearly to professional audiences.",
            llm=llm,
            verbose=True,
        )
        self.state.result = str(
            agent.kickoff(f"Process as business content:\n{self.state.input_text}")
        )


flow = ContentFlow()
flow.state.input_text = "Explain how TCP handshakes work"
flow.kickoff()
print(flow.state.result)
```
The `@router()` decorator turns a method into a decision point. It returns a string that matches a listener — no mapping dictionaries, no separate routing functions. The branching logic reads like a Python `if` statement because it *is* one.
---
## Demo 3: Integrating AI Agent Crews into Flows
Here's where CrewAI's real power shines. Flows aren't just for chaining LLM calls — they orchestrate full **Crews** of autonomous agents. This is something LangGraph simply doesn't have a native equivalent for.
```python
from crewai import Agent, Task, Crew
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
class ArticleState(BaseModel):
topic: str = ""
research: str = ""
draft: str = ""
final_article: str = ""
class ArticleFlow(Flow[ArticleState]):
@start()
def run_research_crew(self):
"""A full Crew of agents handles research."""
researcher = Agent(
role="Senior Research Analyst",
goal=f"Produce comprehensive research on: {self.state.topic}",
backstory="You're a veteran analyst known for thorough, "
"well-sourced research reports.",
llm="gpt-4o"
)
research_task = Task(
description=f"Research '{self.state.topic}' thoroughly. "
"Cover key trends, data points, and expert opinions.",
expected_output="A detailed research brief with sources.",
agent=researcher
)
crew = Crew(agents=[researcher], tasks=[research_task])
result = crew.kickoff()
self.state.research = result.raw
return result.raw
@listen(run_research_crew)
def run_writing_crew(self, research_output):
"""A different Crew handles writing."""
writer = Agent(
role="Technical Writer",
goal="Write a compelling article based on provided research.",
backstory="You turn complex research into engaging, clear prose.",
llm="gpt-4o"
)
editor = Agent(
role="Senior Editor",
goal="Review and polish articles for publication quality.",
backstory="20 years of editorial experience at top tech publications.",
llm="gpt-4o"
)
write_task = Task(
description=f"Write an article based on this research:\n{self.state.research}",
expected_output="A well-structured draft article.",
agent=writer
)
edit_task = Task(
description="Review, fact-check, and polish the draft article.",
expected_output="A publication-ready article.",
agent=editor
)
crew = Crew(agents=[writer, editor], tasks=[write_task, edit_task])
result = crew.kickoff()
self.state.final_article = result.raw
return result.raw
# Run the full pipeline
flow = ArticleFlow()
flow.state.topic = "The Future of Edge AI"
flow.kickoff()
print(flow.state.final_article)
```
This is the key insight: **Flows provide the orchestration layer, and Crews provide the intelligence layer.** Each step in a Flow can spin up a full team of collaborating agents, each with their own roles, goals, and tools. You get structured, predictable control flow *and* autonomous agent collaboration — the best of both worlds.
In LangGraph, achieving something similar means manually implementing agent communication protocols, tool-calling loops, and delegation logic inside your node functions. It's possible, but it's plumbing you're building from scratch every time.
---
## Demo 4: Parallel Execution and Synchronization
Real-world pipelines often need to fan out work and join the results. CrewAI Flows handles this elegantly with `and_` and `or_` operators.
```python
from crewai import LLM
from crewai.flow.flow import Flow, and_, listen, start
from pydantic import BaseModel
llm = LLM(model="openai/gpt-5.2")
class AnalysisState(BaseModel):
topic: str = ""
market_data: str = ""
tech_analysis: str = ""
competitor_intel: str = ""
final_report: str = ""
class ParallelAnalysisFlow(Flow[AnalysisState]):
@start()
def start_method(self):
pass
@listen(start_method)
def gather_market_data(self):
# Your agentic or deterministic code
pass
@listen(start_method)
def run_tech_analysis(self):
# Your agentic or deterministic code
pass
@listen(start_method)
def gather_competitor_intel(self):
# Your agentic or deterministic code
pass
@listen(and_(gather_market_data, run_tech_analysis, gather_competitor_intel))
def synthesize_report(self):
# Your agentic or deterministic code
pass
flow = ParallelAnalysisFlow()
flow.state.topic = "AI-powered developer tools"
flow.kickoff()
```
In this example a single `@start()` method kicks things off, and the three `@listen(start_method)` methods all fire in parallel once it completes (you can also declare multiple `@start()` methods, which likewise run concurrently). The `and_()` combinator on the `@listen` decorator ensures `synthesize_report` only executes after *all three* upstream methods complete. There's also `or_()` for when you want to proceed as soon as *any* upstream task finishes, as sketched below.
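To make `or_()` concrete, here's a minimal sketch (hypothetical method names, same import location as `and_` above) in which a downstream step proceeds as soon as either of two sources responds:
```python
from crewai.flow.flow import Flow, listen, or_, start

class FirstResponderFlow(Flow):
    @start()
    def begin(self):
        pass

    @listen(begin)
    def query_primary_source(self):
        # e.g. a fast but rate-limited data source
        pass

    @listen(begin)
    def query_backup_source(self):
        # e.g. a slower fallback source
        pass

    @listen(or_(query_primary_source, query_backup_source))
    def handle_first_result(self):
        # runs as soon as either upstream method finishes
        pass
```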
In LangGraph, you'd need to build a fan-out/fan-in pattern with parallel branches, a synchronization node, and careful state merging — all wired explicitly through edges.
---
## Why CrewAI Flows for Production
Beyond cleaner syntax, Flows deliver several production-critical advantages:
**Built-in state persistence.** Flow state is backed by LanceDB, meaning your workflows can survive crashes, be resumed, and accumulate knowledge across runs. LangGraph requires you to configure a separate checkpointer.
**Type-safe state management.** Pydantic models give you validation, serialization, and IDE support out of the box. LangGraph's `TypedDict` states don't validate at runtime.
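To illustrate the difference, here's a standalone sketch with made-up field names, independent of any flow: Pydantic rejects a malformed state object at construction time, while a `TypedDict` only guides the type checker.
```python
from typing import TypedDict

from pydantic import BaseModel, ValidationError

class FlowState(BaseModel):
    topic: str = ""
    max_results: int = 10

class GraphState(TypedDict):
    topic: str
    max_results: int

try:
    FlowState(topic="edge AI", max_results="lots")  # wrong type, rejected immediately
except ValidationError as exc:
    print(exc)

# No runtime error here: the bad value only surfaces later, wherever it's used.
graph_state: GraphState = {"topic": "edge AI", "max_results": "lots"}  # type: ignore[typeddict-item]
```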
**First-class agent orchestration.** Crews are a native primitive. You define agents with roles, goals, backstories, and tools — and they collaborate autonomously within the structured envelope of a Flow. No need to reinvent multi-agent coordination.
**Simpler mental model.** Decorators declare intent. `@start` means "begin here." `@listen(x)` means "run after x." `@router(x)` means "decide where to go after x." The code reads like the workflow it describes.
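Put side by side, a bare-bones skeleton (placeholder names and trivial routing, purely illustrative) reads almost like those sentences:
```python
from crewai.flow.flow import Flow, listen, router, start

class SkeletonFlow(Flow):
    @start()            # "begin here"
    def ingest(self):
        ...

    @router(ingest)     # "decide where to go after ingest"
    def triage(self):
        # placeholder decision logic
        return "simple"

    @listen("simple")   # "run when the router returns 'simple'"
    def handle_simple(self):
        ...

    @listen("complex")  # "run when the router returns 'complex'"
    def handle_complex(self):
        ...
```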
**CLI integration.** Run flows with `crewai run`. No separate compilation step, no graph serialization. Your Flow is a Python class, and it runs like one.
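And because a Flow is an ordinary Python class, you can just as easily run it from a plain script, for example by reusing the `ContentFlow` from Demo 2:
```python
# run_flow.py (illustrative file name)
if __name__ == "__main__":
    flow = ContentFlow()  # the Flow class from Demo 2
    flow.state.input_text = "Explain how TCP handshakes work"
    flow.kickoff()
    print(flow.state.result)
```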
---
## Migration Cheat Sheet
If you're sitting on a LangGraph codebase and want to move to CrewAI Flows, here's a practical conversion guide (a short before/after sketch follows the list):
1. **Map your state.** Convert your `TypedDict` to a Pydantic `BaseModel`. Add default values for all fields.
2. **Convert nodes to methods.** Each `add_node` function becomes a method on your `Flow` subclass. Replace `state["field"]` reads with `self.state.field`.
3. **Replace edges with decorators.** Your `add_edge(START, "first_node")` becomes `@start()` on the first method. Sequential `add_edge("a", "b")` becomes `@listen(a)` on method `b`.
4. **Replace conditional edges with `@router`.** Your routing function and `add_conditional_edges()` mapping become a single `@router()` method that returns a route string.
5. **Replace compile + invoke with kickoff.** Drop `graph.compile()` and `app.invoke(...)`; call `flow.kickoff()` instead.
6. **Consider where Crews fit.** Any node where you have complex multi-step agent logic is a candidate for extraction into a Crew. This is where you'll see the biggest quality improvement.
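Here's a minimal before/after sketch of steps 1, 2, and 5, using made-up field names rather than code from any real project:
```python
# Before (LangGraph): TypedDict state, dict-style access, compile + invoke
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class GraphPipelineState(TypedDict):
    topic: str
    output: str

def do_work(state: GraphPipelineState) -> dict:
    return {"output": f"processed: {state['topic']}"}

graph = StateGraph(GraphPipelineState)
graph.add_node("work", do_work)
graph.add_edge(START, "work")
graph.add_edge("work", END)
app = graph.compile()
print(app.invoke({"topic": "edge AI"})["output"])

# After (CrewAI Flows): Pydantic state with defaults, attribute access, kickoff
from crewai.flow.flow import Flow, start
from pydantic import BaseModel

class FlowPipelineState(BaseModel):
    topic: str = ""
    output: str = ""

class PipelineFlow(Flow[FlowPipelineState]):
    @start()
    def do_work(self):
        self.state.output = f"processed: {self.state.topic}"

flow = PipelineFlow()
flow.state.topic = "edge AI"
flow.kickoff()
print(flow.state.output)
```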
---
## Getting Started
Install CrewAI and scaffold a new Flow project:
```bash
pip install crewai
crewai create flow my_first_flow
cd my_first_flow
```
This generates a project structure with a ready-to-edit Flow class, configuration files, and a `pyproject.toml` with `type = "flow"` already set under `[tool.crewai]`. Run it with:
```bash
crewai run
```
From there, add your agents, wire up your listeners, and ship it.
---
## Final Thoughts
LangGraph taught the ecosystem that AI workflows need structure. That was an important lesson. But CrewAI Flows takes that lesson and delivers it in a form that's faster to write, easier to read, and more powerful in production — especially when your workflows involve multiple collaborating agents.
If you're building anything beyond a single-agent chain, give Flows a serious look. The decorator-driven model, native Crew integration, and built-in state management mean you'll spend less time on plumbing and more time on the problems that matter.
Start with `crewai create flow`. You won't look back.

View File

@@ -7,7 +7,7 @@ mode: "wide"
## Connect CrewAI to LLMs
CrewAI uses LiteLLM to connect to a wide variety of Language Models (LLMs). This integration provides extensive versatility, allowing you to use models from numerous providers with a simple, unified interface.
CrewAI connects to LLMs through native SDK integrations for the most popular providers (OpenAI, Anthropic, Google Gemini, Azure, and AWS Bedrock), and uses LiteLLM as a flexible fallback for all other providers.
<Note>
By default, CrewAI uses the `gpt-4o-mini` model. This is determined by the `OPENAI_MODEL_NAME` environment variable, which defaults to "gpt-4o-mini" if not set.
@@ -41,6 +41,14 @@ LiteLLM supports a wide range of providers, including but not limited to:
For a complete and up-to-date list of supported providers, please refer to the [LiteLLM Providers documentation](https://docs.litellm.ai/docs/providers).
<Info>
To use any provider not covered by a native integration, add LiteLLM as a dependency to your project:
```bash
uv add 'crewai[litellm]'
```
Native providers (OpenAI, Anthropic, Google Gemini, Azure, AWS Bedrock) use their own SDK extras — see the [Provider Configuration Examples](/en/concepts/llms#provider-configuration-examples).
</Info>
## Changing the LLM
To use a different LLM with your CrewAI agents, you have several options:

View File

@@ -35,7 +35,7 @@ Visit [app.crewai.com](https://app.crewai.com) and create your free account. Thi
If you haven't already, install CrewAI with the CLI tools:
```bash
uv add crewai[tools]
uv add 'crewai[tools]'
```
Then authenticate your CLI with your CrewAI AMP account:

View File

@@ -18,77 +18,46 @@ Composio is an integration platform that allows you to connect your AI agents to
To incorporate Composio tools into your project, follow the instructions below:
```shell
pip install composio-crewai
pip install composio composio-crewai
pip install crewai
```
After the installation is complete, either run `composio login` or export your composio API key as `COMPOSIO_API_KEY`. Get your Composio API key from [here](https://app.composio.dev)
After the installation is complete, set your Composio API key as `COMPOSIO_API_KEY`. Get your Composio API key from [here](https://platform.composio.dev)
## Example
The following example demonstrates how to initialize the tool and execute a github action:
1. Initialize Composio toolset
1. Initialize Composio with CrewAI Provider
```python Code
from composio_crewai import ComposioToolSet, App, Action
from composio_crewai import ComposioProvider
from composio import Composio
from crewai import Agent, Task, Crew
toolset = ComposioToolSet()
composio = Composio(provider=ComposioProvider())
```
2. Connect your GitHub account
2. Create a new Composio Session and retrieve the tools
<CodeGroup>
```shell CLI
composio add github
```
```python Code
request = toolset.initiate_connection(app=App.GITHUB)
print(f"Open this URL to authenticate: {request.redirectUrl}")
```python
session = composio.create(
user_id="your-user-id",
toolkits=["gmail", "github"] # optional, default is all toolkits
)
tools = session.tools()
```
Read more about sessions and user management [here](https://docs.composio.dev/docs/configuring-sessions)
</CodeGroup>
3. Get Tools
3. Authenticating users manually
- Retrieving all the tools from an app (not recommended for production):
Composio automatically authenticates the users during the agent chat session. However, you can also authenticate the user manually by calling the `authorize` method.
```python Code
tools = toolset.get_tools(apps=[App.GITHUB])
connection_request = session.authorize("github")
print(f"Open this URL to authenticate: {connection_request.redirect_url}")
```
- Filtering tools based on tags:
```python Code
tag = "users"
filtered_action_enums = toolset.find_actions_by_tags(
App.GITHUB,
tags=[tag],
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
- Filtering tools based on use case:
```python Code
use_case = "Star a repository on GitHub"
filtered_action_enums = toolset.find_actions_by_use_case(
App.GITHUB, use_case=use_case, advanced=False
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
<Tip>Set `advanced` to True to get actions for complex use cases</Tip>
- Using specific tools:
In this demo, we will use the `GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER` action from the GitHub app.
```python Code
tools = toolset.get_tools(
actions=[Action.GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER]
)
```
Learn more about filtering actions [here](https://docs.composio.dev/patterns/tools/use-tools/use-specific-actions)
4. Define agent
```python Code
@@ -116,4 +85,4 @@ crew = Crew(agents=[crewai_agent], tasks=[task])
crew.kickoff()
```
* More detailed list of tools can be found [here](https://app.composio.dev)
* More detailed list of tools can be found [here](https://docs.composio.dev/toolkits)

View File

@@ -4,6 +4,138 @@ description: "CrewAI의 제품 업데이트, 개선 사항 및 버그 수정"
icon: "clock"
mode: "wide"
---
<Update label="2026년 3월 4일">
## v1.10.1
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.10.1)
## 변경 사항
### 기능
- Gemini GenAI 업그레이드
### 버그 수정
- 재귀를 피하기 위해 실행기 리스너 값을 조정
- Gemini에서 병렬 함수 응답 부분을 단일 Content 객체로 그룹화
- Gemini에서 사고 모델의 사고 출력을 표시
- 에이전트 도구가 None일 때 MCP 및 플랫폼 도구 로드
- A2A에서 실행 이벤트 루프가 있는 Jupyter 환경 지원
- 일시적인 추적을 위해 익명 ID 사용
- 조건부로 플러스 헤더 전달
- 원격 측정을 위해 비주 스레드에서 신호 처리기 등록 건너뛰기
- 도구 오류를 관찰로 주입하고 이름 충돌 해결
- Dependabot 경고를 해결하기 위해 pypdf를 4.x에서 6.7.4로 업그레이드
- 심각 및 높은 Dependabot 보안 경고 해결
### 문서
- Composio 도구 문서를 지역별로 동기화
## 기여자
@giulio-leone, @greysonlalonde, @haxzie, @joaomdmoura, @lorenzejay, @mattatcha, @mplachta, @nicoferdi96
</Update>
<Update label="2026년 2월 27일">
## v1.10.1a1
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.10.1a1)
## 변경 사항
### 기능
- 단계 콜백 메서드에서 비동기 호출 지원 구현
- 메모리 모듈의 무거운 의존성에 대한 지연 로딩 구현
### 문서
- v1.10.0에 대한 변경 로그 및 버전 업데이트
### 리팩토링
- 비동기 호출을 지원하기 위해 단계 콜백 메서드 리팩토링
- 메모리 모듈의 무거운 의존성에 대한 지연 로딩을 구현하기 위해 리팩토링
### 버그 수정
- 릴리스 노트의 분기 수정
## 기여자
@greysonlalonde, @joaomdmoura
</Update>
<Update label="2026년 2월 27일">
## v1.10.1a1
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.10.1a1)
## 변경 사항
### 리팩토링
- 비동기 호출을 지원하기 위해 단계 콜백 메서드 리팩토링
- 메모리 모듈의 무거운 의존성에 대해 지연 로딩 구현
### 문서화
- v1.10.0에 대한 변경 로그 및 버전 업데이트
### 버그 수정
- 릴리스 노트를 위한 브랜치 생성
## 기여자
@greysonlalonde, @joaomdmoura
</Update>
<Update label="2026년 2월 26일">
## v1.10.0
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.10.0)
## 변경 사항
### 기능
- MCP 도구 해상도 및 관련 이벤트 개선
- lancedb 버전 업데이트 및 lance-namespace 패키지 추가
- CrewAgentExecutor 및 BaseTool에서 JSON 인수 파싱 및 검증 개선
- CLI HTTP 클라이언트를 requests에서 httpx로 마이그레이션
- 버전화된 문서 추가
- 버전 노트에 대한 yanked 감지 추가
- Flows에서 사용자 입력 처리 구현
- 인간 피드백 통합 테스트에서 HITL 자기 루프 기능 개선
- eventbus에 started_event_id 추가 및 설정
- tools.specs 자동 업데이트
### 버그 수정
- 빈 경우에도 도구 kwargs를 검증하여 모호한 TypeError 방지
- LLM을 위한 도구 매개변수 스키마에서 null 타입 유지
- output_pydantic/output_json을 네이티브 구조화된 출력으로 매핑
- 약속이 있는 경우 콜백이 실행/대기되도록 보장
- 예외 컨텍스트에서 메서드 이름 캡처
- 라우터 결과에서 enum 타입 유지; 타입 개선
- 입력으로 지속성 ID가 전달될 때 조용히 깨지는 순환 흐름 수정
- CLI 플래그 형식을 --skip-provider에서 --skip_provider로 수정
- OpenAI 도구 호출 스트림이 완료되도록 보장
- MCP 도구에서 복잡한 스키마 $ref 포인터 해결
- 스키마에서 additionalProperties=false 강제 적용
- 크루 폴더에 대해 예약된 스크립트 이름 거부
- 가드레일 이벤트 방출 테스트에서 경쟁 조건 해결
### 문서
- 비네이티브 LLM 공급자를 위한 litellm 종속성 노트 추가
- NL2SQL 보안 모델 및 강화 지침 명확화
- 9개 통합에서 96개의 누락된 작업 추가
### 리팩토링
- crew를 provider로 리팩토링
- HITL을 provider 패턴으로 추출
- 훅 타이핑 및 등록 개선
## 기여자
@dependabot[bot], @github-actions[bot], @github-code-quality[bot], @greysonlalonde, @heitorado, @hobostay, @joaomdmoura, @johnvan7, @jonathansampson, @lorenzejay, @lucasgomide, @mattatcha, @mplachta, @nicoferdi96, @theCyberTech, @thiagomoretto, @vinibrsl
</Update>
<Update label="2026년 1월 26일">
## v1.9.0

View File

@@ -105,6 +105,15 @@ CrewAI 코드 내에는 사용할 모델을 지정할 수 있는 여러 위치
</Tab>
</Tabs>
<Info>
CrewAI는 OpenAI, Anthropic, Google (Gemini API), Azure, AWS Bedrock에 대해 네이티브 SDK 통합을 제공합니다 — 제공자별 extras(예: `uv add "crewai[openai]"`) 외에 추가 설치가 필요하지 않습니다.
그 외 모든 제공자는 **LiteLLM**을 통해 지원됩니다. 이를 사용하려면 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Info>
## 공급자 구성 예시
CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양한 LLM 공급자를 지원합니다.
@@ -214,6 +223,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
| `meta_llama/Llama-4-Maverick-17B-128E-Instruct-FP8` | 128k | 4028 | 텍스트, 이미지 | 텍스트 |
| `meta_llama/Llama-3.3-70B-Instruct` | 128k | 4028 | 텍스트 | 텍스트 |
| `meta_llama/Llama-3.3-8B-Instruct` | 128k | 4028 | 텍스트 | 텍스트 |
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Anthropic">
@@ -354,6 +368,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
| gemini-1.5-flash | 1M 토큰 | 밸런스 잡힌 멀티모달 모델, 대부분의 작업에 적합 |
| gemini-1.5-flash-8B | 1M 토큰 | 가장 빠르고, 비용 효율적, 고빈도 작업에 적합 |
| gemini-1.5-pro | 2M 토큰 | 최고의 성능, 논리적 추론, 코딩, 창의적 협업 등 다양한 추론 작업에 적합 |
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Azure">
@@ -439,6 +458,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
model="sagemaker/<my-endpoint>"
)
```
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Mistral">
@@ -454,6 +478,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
temperature=0.7
)
```
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Nvidia NIM">
@@ -540,6 +569,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
| rakuten/rakutenai-7b-instruct | 1,024 토큰 | 언어 이해, 추론, 텍스트 생성이 탁월한 최첨단 LLM |
| rakuten/rakutenai-7b-chat | 1,024 토큰 | 언어 이해, 추론, 텍스트 생성이 탁월한 최첨단 LLM |
| baichuan-inc/baichuan2-13b-chat | 4,096 토큰 | 중국어 및 영어 대화, 코딩, 수학, 지시 따르기, 퀴즈 풀이 지원 |
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Local NVIDIA NIM Deployed using WSL2">
@@ -580,6 +614,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
# ...
```
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Groq">
@@ -601,6 +640,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
| Llama 3.1 70B/8B| 131,072 토큰 | 고성능, 대용량 문맥 작업 |
| Llama 3.2 Series| 8,192 토큰 | 범용 작업 |
| Mixtral 8x7B | 32,768 토큰 | 성능과 문맥의 균형 |
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="IBM watsonx.ai">
@@ -623,6 +667,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
base_url="https://api.watsonx.ai/v1"
)
```
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Ollama (Local LLMs)">
@@ -636,6 +685,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
base_url="http://localhost:11434"
)
```
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Fireworks AI">
@@ -651,6 +705,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
temperature=0.7
)
```
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Perplexity AI">
@@ -666,6 +725,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
base_url="https://api.perplexity.ai/"
)
```
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Hugging Face">
@@ -680,6 +744,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct"
)
```
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="SambaNova">
@@ -703,6 +772,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
| Llama 3.2 Series| 8,192 토큰 | 범용, 멀티모달 작업 |
| Llama 3.3 70B | 최대 131,072 토큰 | 고성능, 높은 출력 품질 |
| Qwen2 familly | 8,192 토큰 | 고성능, 높은 출력 품질 |
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Cerebras">
@@ -728,6 +802,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
- 속도와 품질의 우수한 밸런스
- 긴 컨텍스트 윈도우 지원
</Info>
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Open Router">
@@ -750,6 +829,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
- openrouter/deepseek/deepseek-r1
- openrouter/deepseek/deepseek-chat
</Info>
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Nebius AI Studio">
@@ -772,6 +856,11 @@ CrewAI는 고유한 기능, 인증 방법, 모델 역량을 제공하는 다양
- 경쟁력 있는 가격
- 속도와 품질의 우수한 밸런스
</Info>
**참고:** 이 제공자는 LiteLLM을 사용합니다. 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
</AccordionGroup>

View File

@@ -176,6 +176,11 @@ Crew를 GitHub 저장소에 푸시해야 합니다. 아직 Crew를 만들지 않
![Set Environment Variables](/images/enterprise/set-env-variables.png)
</Frame>
<Info>
프라이빗 Python 패키지를 사용하시나요? 여기에 레지스트리 자격 증명도 추가해야 합니다.
필요한 변수는 [프라이빗 패키지 레지스트리](/ko/enterprise/guides/private-package-registry)를 참조하세요.
</Info>
</Step>
<Step title="Crew 배포하기">

View File

@@ -256,6 +256,12 @@ Crews와 Flows 모두 `src/project_name/main.py`에 진입점이 있습니다:
1. **LLM API 키** (OpenAI, Anthropic, Google 등)
2. **도구 API 키** - 외부 도구를 사용하는 경우 (Serper 등)
<Info>
프로젝트가 **프라이빗 PyPI 레지스트리**의 패키지에 의존하는 경우, 레지스트리 인증 자격 증명도
환경 변수로 구성해야 합니다. 자세한 내용은
[프라이빗 패키지 레지스트리](/ko/enterprise/guides/private-package-registry) 가이드를 참조하세요.
</Info>
<Tip>
구성 문제를 조기에 발견하기 위해 배포 전에 동일한 환경 변수로
로컬에서 프로젝트를 테스트하세요.

View File

@@ -0,0 +1,261 @@
---
title: "프라이빗 패키지 레지스트리"
description: "CrewAI AMP에서 인증된 PyPI 레지스트리의 프라이빗 Python 패키지 설치하기"
icon: "lock"
mode: "wide"
---
<Note>
이 가이드는 CrewAI AMP에 배포할 때 프라이빗 PyPI 레지스트리(Azure DevOps Artifacts, GitHub Packages,
GitLab, AWS CodeArtifact 등)에서 Python 패키지를 설치하도록 CrewAI 프로젝트를 구성하는 방법을 다룹니다.
</Note>
## 이 가이드가 필요한 경우
프로젝트가 공개 PyPI가 아닌 프라이빗 레지스트리에 호스팅된 내부 또는 독점 Python 패키지에
의존하는 경우, 다음을 수행해야 합니다:
1. UV에 패키지를 **어디서** 찾을지 알려줍니다 (index URL)
2. UV에 **어떤** 패키지가 해당 index에서 오는지 알려줍니다 (source 매핑)
3. UV가 설치 중에 인증할 수 있도록 **자격 증명**을 제공합니다
CrewAI AMP는 의존성 해결 및 설치에 [UV](https://docs.astral.sh/uv/)를 사용합니다.
UV는 `pyproject.toml` 구성과 자격 증명용 환경 변수를 결합하여 인증된 프라이빗 레지스트리를 지원합니다.
## 1단계: pyproject.toml 구성
`pyproject.toml`에서 세 가지 요소가 함께 작동합니다:
### 1a. 의존성 선언
프라이빗 패키지를 다른 의존성과 마찬가지로 `[project.dependencies]`에 추가합니다:
```toml
[project]
dependencies = [
"crewai[tools]>=0.100.1,<1.0.0",
"my-private-package>=1.2.0",
]
```
### 1b. index 정의
프라이빗 레지스트리를 `[[tool.uv.index]]` 아래에 명명된 index로 등록합니다:
```toml
[[tool.uv.index]]
name = "my-private-registry"
url = "https://pkgs.dev.azure.com/my-org/_packaging/my-feed/pypi/simple/"
explicit = true
```
<Info>
`name` 필드는 중요합니다 — UV는 이를 사용하여 인증을 위한 환경 변수 이름을
구성합니다 (아래 [2단계](#2단계-인증-자격-증명-설정)를 참조하세요).
`explicit = true`를 설정하면 UV가 모든 패키지에 대해 이 index를 검색하지 않습니다 —
`[tool.uv.sources]`에서 명시적으로 매핑한 패키지만 검색합니다. 이렇게 하면 프라이빗
레지스트리에 대한 불필요한 쿼리를 방지하고 의존성 혼동 공격을 차단할 수 있습니다.
</Info>
### 1c. 패키지를 index에 매핑
`[tool.uv.sources]`를 사용하여 프라이빗 index에서 해결해야 할 패키지를 UV에 알려줍니다:
```toml
[tool.uv.sources]
my-private-package = { index = "my-private-registry" }
```
### 전체 예시
```toml
[project]
name = "my-crew-project"
version = "0.1.0"
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.100.1,<1.0.0",
"my-private-package>=1.2.0",
]
[tool.crewai]
type = "crew"
[[tool.uv.index]]
name = "my-private-registry"
url = "https://pkgs.dev.azure.com/my-org/_packaging/my-feed/pypi/simple/"
explicit = true
[tool.uv.sources]
my-private-package = { index = "my-private-registry" }
```
`pyproject.toml`을 업데이트한 후 lock 파일을 다시 생성합니다:
```bash
uv lock
```
<Warning>
업데이트된 `uv.lock`을 항상 `pyproject.toml` 변경 사항과 함께 커밋하세요.
lock 파일은 배포에 필수입니다 — [배포 준비하기](/ko/enterprise/guides/prepare-for-deployment)를 참조하세요.
</Warning>
## 2단계: 인증 자격 증명 설정
UV는 `pyproject.toml`에서 정의한 index 이름을 기반으로 한 명명 규칙을 따르는
환경 변수를 사용하여 프라이빗 index에 인증합니다:
```
UV_INDEX_{UPPER_NAME}_USERNAME
UV_INDEX_{UPPER_NAME}_PASSWORD
```
여기서 `{UPPER_NAME}`은 index 이름을 **대문자**로 변환하고 **하이픈을 언더스코어로 대체**한 것입니다.
예를 들어, `my-private-registry`라는 이름의 index는 다음을 사용합니다:
| 변수 | 값 |
|------|-----|
| `UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME` | 레지스트리 사용자 이름 또는 토큰 이름 |
| `UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD` | 레지스트리 비밀번호 또는 토큰/PAT |
<Warning>
이 환경 변수는 CrewAI AMP **환경 변수** 설정을 통해 **반드시** 추가해야 합니다 —
전역적으로 또는 배포 수준에서. `.env` 파일에 설정하거나 프로젝트에 하드코딩할 수 없습니다.
아래 [AMP에서 환경 변수 설정](#amp에서-환경-변수-설정)을 참조하세요.
</Warning>
## 레지스트리 제공업체 참조
아래 표는 일반적인 레지스트리 제공업체의 index URL 형식과 자격 증명 값을 보여줍니다.
자리 표시자 값을 실제 조직 및 피드 세부 정보로 대체하세요.
| 제공업체 | Index URL | 사용자 이름 | 비밀번호 |
|---------|-----------|-----------|---------|
| **Azure DevOps Artifacts** | `https://pkgs.dev.azure.com/{org}/_packaging/{feed}/pypi/simple/` | 비어 있지 않은 임의의 문자열 (예: `token`) | Packaging Read 범위의 Personal Access Token (PAT) |
| **GitHub Packages** | `https://pypi.pkg.github.com/{owner}/simple/` | GitHub 사용자 이름 | `read:packages` 범위의 Personal Access Token (classic) |
| **GitLab Package Registry** | `https://gitlab.com/api/v4/projects/{project_id}/packages/pypi/simple/` | `__token__` | `read_api` 범위의 Project 또는 Personal Access Token |
| **AWS CodeArtifact** | `aws codeartifact get-repository-endpoint`의 URL 사용 | `aws` | `aws codeartifact get-authorization-token`의 토큰 |
| **Google Artifact Registry** | `https://{region}-python.pkg.dev/{project}/{repo}/simple/` | `_json_key_base64` | Base64로 인코딩된 서비스 계정 키 |
| **JFrog Artifactory** | `https://{instance}.jfrog.io/artifactory/api/pypi/{repo}/simple/` | 사용자 이름 또는 이메일 | API 키 또는 ID 토큰 |
| **자체 호스팅 (devpi, Nexus 등)** | 레지스트리의 simple API URL | 레지스트리 사용자 이름 | 레지스트리 비밀번호 |
<Tip>
**AWS CodeArtifact**의 경우 인증 토큰이 주기적으로 만료됩니다.
만료되면 `UV_INDEX_*_PASSWORD` 값을 갱신해야 합니다.
CI/CD 파이프라인에서 이를 자동화하는 것을 고려하세요.
</Tip>
## AMP에서 환경 변수 설정
프라이빗 레지스트리 자격 증명은 CrewAI AMP에서 환경 변수로 구성해야 합니다.
두 가지 옵션이 있습니다:
<Tabs>
<Tab title="웹 인터페이스">
1. [CrewAI AMP](https://app.crewai.com)에 로그인합니다
2. 자동화로 이동합니다
3. **Environment Variables** 탭을 엽니다
4. 각 변수 (`UV_INDEX_*_USERNAME` 및 `UV_INDEX_*_PASSWORD`)에 값을 추가합니다
자세한 내용은 [AMP에 배포하기 — 환경 변수 설정하기](/ko/enterprise/guides/deploy-to-amp#환경-변수-설정하기) 단계를 참조하세요.
</Tab>
<Tab title="CLI 배포">
`crewai deploy create`를 실행하기 전에 로컬 `.env` 파일에 변수를 추가합니다.
CLI가 이를 안전하게 플랫폼으로 전송합니다:
```bash
# .env
OPENAI_API_KEY=sk-...
UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME=token
UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD=your-pat-here
```
```bash
crewai deploy create
```
</Tab>
</Tabs>
<Warning>
자격 증명을 저장소에 **절대** 커밋하지 마세요. 모든 비밀 정보에는 AMP 환경 변수를 사용하세요.
`.env` 파일은 `.gitignore`에 포함되어야 합니다.
</Warning>
기존 배포의 자격 증명을 업데이트하려면 [Crew 업데이트하기 — 환경 변수](/ko/enterprise/guides/update-crew)를 참조하세요.
## 전체 동작 흐름
CrewAI AMP가 자동화를 빌드할 때, 해결 흐름은 다음과 같이 작동합니다:
<Steps>
<Step title="빌드 시작">
AMP가 저장소를 가져오고 `pyproject.toml`과 `uv.lock`을 읽습니다.
</Step>
<Step title="UV가 의존성 해결">
UV가 `[tool.uv.sources]`를 읽어 각 패키지가 어떤 index에서 와야 하는지 결정합니다.
</Step>
<Step title="UV가 인증">
각 프라이빗 index에 대해 UV가 AMP에서 구성한 환경 변수에서
`UV_INDEX_{NAME}_USERNAME`과 `UV_INDEX_{NAME}_PASSWORD`를 조회합니다.
</Step>
<Step title="패키지 설치">
UV가 공개(PyPI) 및 프라이빗(레지스트리) 패키지를 모두 다운로드하고 설치합니다.
</Step>
<Step title="자동화 실행">
모든 의존성이 사용 가능한 상태에서 crew 또는 flow가 시작됩니다.
</Step>
</Steps>
## 문제 해결
### 빌드 중 인증 오류
**증상**: 프라이빗 패키지를 해결할 때 `401 Unauthorized` 또는 `403 Forbidden`으로 빌드가 실패합니다.
**확인사항**:
- `UV_INDEX_*` 환경 변수 이름이 index 이름과 정확히 일치하는지 확인합니다 (대문자, 하이픈 -> 언더스코어)
- 자격 증명이 로컬 `.env`뿐만 아니라 AMP 환경 변수에 설정되어 있는지 확인합니다
- 토큰/PAT에 패키지 피드에 필요한 읽기 권한이 있는지 확인합니다
- 토큰이 만료되지 않았는지 확인합니다 (특히 AWS CodeArtifact의 경우)
### 패키지를 찾을 수 없음
**증상**: `No matching distribution found for my-private-package`.
**확인사항**:
- `pyproject.toml`의 index URL이 `/simple/`로 끝나는지 확인합니다
- `[tool.uv.sources]` 항목이 올바른 패키지 이름을 올바른 index 이름에 매핑하는지 확인합니다
- 패키지가 실제로 프라이빗 레지스트리에 게시되어 있는지 확인합니다
- 동일한 자격 증명으로 로컬에서 `uv lock`을 실행하여 해결이 작동하는지 확인합니다
### Lock 파일 충돌
**증상**: 프라이빗 index를 추가한 후 `uv lock`이 실패하거나 예상치 못한 결과를 생성합니다.
**해결책**: 로컬에서 자격 증명을 설정하고 다시 생성합니다:
```bash
export UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME=token
export UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD=your-pat
uv lock
```
그런 다음 업데이트된 `uv.lock`을 커밋합니다.
## 관련 가이드
<CardGroup cols={3}>
<Card title="배포 준비하기" icon="clipboard-check" href="/ko/enterprise/guides/prepare-for-deployment">
배포 전에 프로젝트 구조와 의존성을 확인합니다.
</Card>
<Card title="AMP에 배포하기" icon="rocket" href="/ko/enterprise/guides/deploy-to-amp">
crew 또는 flow를 배포하고 환경 변수를 구성합니다.
</Card>
<Card title="Crew 업데이트하기" icon="arrows-rotate" href="/ko/enterprise/guides/update-crew">
환경 변수를 업데이트하고 실행 중인 배포에 변경 사항을 푸시합니다.
</Card>
</CardGroup>

View File

@@ -0,0 +1,518 @@
---
title: "LangGraph에서 CrewAI로 옮기기: 엔지니어를 위한 실전 가이드"
description: LangGraph로 이미 구축했다면, 프로젝트를 CrewAI로 빠르게 옮기는 방법을 알아보세요
icon: switch
mode: "wide"
---
LangGraph로 에이전트를 구축해 왔습니다. `StateGraph`와 씨름하고, 조건부 에지를 연결하고, 새벽 2시에 상태 딕셔너리를 디버깅해 본 적도 있죠. 동작은 하지만 — 어느 순간부터 프로덕션으로 가는 더 나은 길이 없을까 고민하게 됩니다.
있습니다. **CrewAI Flows**는 이벤트 기반 오케스트레이션, 조건부 라우팅, 공유 상태라는 동일한 힘을 훨씬 적은 보일러플레이트와 실제로 다단계 AI 워크플로우를 생각하는 방식에 잘 맞는 정신적 모델로 제공합니다.
이 글은 핵심 개념을 나란히 비교하고 실제 코드 비교를 보여주며, 다음으로 손이 갈 프레임워크가 왜 CrewAI Flows인지 설명합니다.
---
## 정신적 모델의 전환
LangGraph는 **그래프**로 생각하라고 요구합니다: 노드, 에지, 그리고 상태 딕셔너리. 모든 워크플로우는 계산 단계 사이의 전이를 명시적으로 연결하는 방향 그래프입니다. 강력하지만, 특히 워크플로우가 몇 개의 결정 지점이 있는 순차적 흐름일 때 이 추상화는 오버헤드를 가져옵니다.
CrewAI Flows는 **이벤트**로 생각하라고 요구합니다: 시작하는 메서드, 결과를 듣는 메서드, 실행을 라우팅하는 메서드. 워크플로우의 토폴로지는 명시적 그래프 구성 대신 데코레이터 어노테이션에서 드러납니다. 이것은 단순한 문법 설탕이 아니라 — 파이프라인을 설계하고 읽고 유지하는 방식을 바꿉니다.
핵심 매핑은 다음과 같습니다:
| LangGraph 개념 | CrewAI Flows 대응 |
| --- | --- |
| `StateGraph` class | `Flow` class |
| `add_node()` | Methods decorated with `@start`, `@listen` |
| `add_edge()` / `add_conditional_edges()` | `@listen()` / `@router()` decorators |
| `TypedDict` state | Pydantic `BaseModel` state |
| `START` / `END` constants | `@start()` decorator / natural method return |
| `graph.compile()` | `flow.kickoff()` |
| Checkpointer / persistence | Built-in memory (LanceDB-backed) |
실제로 어떻게 보이는지 살펴보겠습니다.
---
## 데모 1: 간단한 순차 파이프라인
주제를 받아 조사하고, 요약을 작성한 뒤, 결과를 포맷팅하는 파이프라인을 만든다고 해봅시다. 각 프레임워크는 이렇게 처리합니다.
### LangGraph 방식
```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
class ResearchState(TypedDict):
topic: str
raw_research: str
summary: str
formatted_output: str
def research_topic(state: ResearchState) -> dict:
# Call an LLM or search API
result = llm.invoke(f"Research the topic: {state['topic']}")
return {"raw_research": result}
def write_summary(state: ResearchState) -> dict:
result = llm.invoke(
f"Summarize this research:\n{state['raw_research']}"
)
return {"summary": result}
def format_output(state: ResearchState) -> dict:
result = llm.invoke(
f"Format this summary as a polished article section:\n{state['summary']}"
)
return {"formatted_output": result}
# Build the graph
graph = StateGraph(ResearchState)
graph.add_node("research", research_topic)
graph.add_node("summarize", write_summary)
graph.add_node("format", format_output)
graph.add_edge(START, "research")
graph.add_edge("research", "summarize")
graph.add_edge("summarize", "format")
graph.add_edge("format", END)
# Compile and run
app = graph.compile()
result = app.invoke({"topic": "quantum computing advances in 2026"})
print(result["formatted_output"])
```
함수를 정의하고 노드로 등록한 다음, 모든 전이를 수동으로 연결합니다. 이렇게 단순한 순서인데도 의례처럼 해야 할 작업이 많습니다.
### CrewAI Flows 방식
```python
from crewai import LLM, Agent, Crew, Process, Task
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
llm = LLM(model="openai/gpt-5.2")
class ResearchState(BaseModel):
topic: str = ""
raw_research: str = ""
summary: str = ""
formatted_output: str = ""
class ResearchFlow(Flow[ResearchState]):
@start()
def research_topic(self):
# Option 1: Direct LLM call
result = llm.call(f"Research the topic: {self.state.topic}")
self.state.raw_research = result
return result
@listen(research_topic)
def write_summary(self, research_output):
# Option 2: A single agent
summarizer = Agent(
role="Research Summarizer",
goal="Produce concise, accurate summaries of research content",
backstory="You are an expert at distilling complex research into clear, "
"digestible summaries.",
llm=llm,
verbose=True,
)
result = summarizer.kickoff(
f"Summarize this research:\n{self.state.raw_research}"
)
self.state.summary = str(result)
return self.state.summary
@listen(write_summary)
def format_output(self, summary_output):
# Option 3: a complete crew (with one or more agents)
formatter = Agent(
role="Content Formatter",
goal="Transform research summaries into polished, publication-ready article sections",
backstory="You are a skilled editor with expertise in structuring and "
"presenting technical content for a general audience.",
llm=llm,
verbose=True,
)
format_task = Task(
description=f"Format this summary as a polished article section:\n{self.state.summary}",
expected_output="A well-structured, polished article section ready for publication.",
agent=formatter,
)
crew = Crew(
agents=[formatter],
tasks=[format_task],
process=Process.sequential,
verbose=True,
)
result = crew.kickoff()
self.state.formatted_output = str(result)
return self.state.formatted_output
# Run the flow
flow = ResearchFlow()
flow.state.topic = "quantum computing advances in 2026"
result = flow.kickoff()
print(flow.state.formatted_output)
```
눈에 띄는 차이점이 있습니다: 그래프 구성 없음, 에지 연결 없음, 컴파일 단계 없음. 실행 순서는 로직이 있는 곳에서 바로 선언됩니다. `@start()`는 진입점을 표시하고, `@listen(method_name)`은 단계들을 연결합니다. 상태는 타입 안전성, 검증, IDE 자동 완성까지 제공하는 제대로 된 Pydantic 모델입니다.
---
## 데모 2: 조건부 라우팅
여기서 흥미로워집니다. 콘텐츠 유형에 따라 서로 다른 처리 경로로 라우팅하는 파이프라인을 만든다고 해봅시다.
### LangGraph 방식
```python
from typing import TypedDict, Literal
from langgraph.graph import StateGraph, START, END
class ContentState(TypedDict):
input_text: str
content_type: str
result: str
def classify_content(state: ContentState) -> dict:
content_type = llm.invoke(
f"Classify this content as 'technical', 'creative', or 'business':\n{state['input_text']}"
)
return {"content_type": content_type.strip().lower()}
def process_technical(state: ContentState) -> dict:
result = llm.invoke(f"Process as technical doc:\n{state['input_text']}")
return {"result": result}
def process_creative(state: ContentState) -> dict:
result = llm.invoke(f"Process as creative writing:\n{state['input_text']}")
return {"result": result}
def process_business(state: ContentState) -> dict:
result = llm.invoke(f"Process as business content:\n{state['input_text']}")
return {"result": result}
# Routing function
def route_content(state: ContentState) -> Literal["technical", "creative", "business"]:
return state["content_type"]
# Build the graph
graph = StateGraph(ContentState)
graph.add_node("classify", classify_content)
graph.add_node("technical", process_technical)
graph.add_node("creative", process_creative)
graph.add_node("business", process_business)
graph.add_edge(START, "classify")
graph.add_conditional_edges(
"classify",
route_content,
{
"technical": "technical",
"creative": "creative",
"business": "business",
}
)
graph.add_edge("technical", END)
graph.add_edge("creative", END)
graph.add_edge("business", END)
app = graph.compile()
result = app.invoke({"input_text": "Explain how TCP handshakes work"})
```
별도의 라우팅 함수, 명시적 조건부 에지 매핑, 그리고 모든 분기에 대한 종료 에지가 필요합니다. 라우팅 결정 로직이 그 결정을 만들어 내는 노드와 분리됩니다.
### CrewAI Flows 방식
```python
from crewai import LLM, Agent
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel
llm = LLM(model="openai/gpt-5.2")
class ContentState(BaseModel):
input_text: str = ""
content_type: str = ""
result: str = ""
class ContentFlow(Flow[ContentState]):
@start()
def classify_content(self):
self.state.content_type = (
llm.call(
f"Classify this content as 'technical', 'creative', or 'business':\n"
f"{self.state.input_text}"
)
.strip()
.lower()
)
return self.state.content_type
@router(classify_content)
def route_content(self, classification):
if classification == "technical":
return "process_technical"
elif classification == "creative":
return "process_creative"
else:
return "process_business"
@listen("process_technical")
def handle_technical(self):
agent = Agent(
role="Technical Writer",
goal="Produce clear, accurate technical documentation",
backstory="You are an expert technical writer who specializes in "
"explaining complex technical concepts precisely.",
llm=llm,
verbose=True,
)
self.state.result = str(
agent.kickoff(f"Process as technical doc:\n{self.state.input_text}")
)
@listen("process_creative")
def handle_creative(self):
agent = Agent(
role="Creative Writer",
goal="Craft engaging and imaginative creative content",
backstory="You are a talented creative writer with a flair for "
"compelling storytelling and vivid expression.",
llm=llm,
verbose=True,
)
self.state.result = str(
agent.kickoff(f"Process as creative writing:\n{self.state.input_text}")
)
@listen("process_business")
def handle_business(self):
agent = Agent(
role="Business Writer",
goal="Produce professional, results-oriented business content",
backstory="You are an experienced business writer who communicates "
"strategy and value clearly to professional audiences.",
llm=llm,
verbose=True,
)
self.state.result = str(
agent.kickoff(f"Process as business content:\n{self.state.input_text}")
)
flow = ContentFlow()
flow.state.input_text = "Explain how TCP handshakes work"
flow.kickoff()
print(flow.state.result)
```
`@router()` 데코레이터는 메서드를 결정 지점으로 만듭니다. 리스너와 매칭되는 문자열을 반환하므로, 매핑 딕셔너리도, 별도의 라우팅 함수도 필요 없습니다. 분기 로직이 Python `if` 문처럼 읽히는 이유는, 실제로 `if` 문이기 때문입니다.
---
## 데모 3: AI 에이전트 Crew를 Flow에 통합하기
여기서 CrewAI의 진짜 힘이 드러납니다. Flows는 LLM 호출을 연결하는 것에 그치지 않고 자율적인 에이전트 **Crew** 전체를 오케스트레이션합니다. 이는 LangGraph에 기본으로 대응되는 개념이 없습니다.
```python
from crewai import Agent, Task, Crew
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
class ArticleState(BaseModel):
topic: str = ""
research: str = ""
draft: str = ""
final_article: str = ""
class ArticleFlow(Flow[ArticleState]):
@start()
def run_research_crew(self):
"""A full Crew of agents handles research."""
researcher = Agent(
role="Senior Research Analyst",
goal=f"Produce comprehensive research on: {self.state.topic}",
backstory="You're a veteran analyst known for thorough, "
"well-sourced research reports.",
llm="gpt-4o"
)
research_task = Task(
description=f"Research '{self.state.topic}' thoroughly. "
"Cover key trends, data points, and expert opinions.",
expected_output="A detailed research brief with sources.",
agent=researcher
)
crew = Crew(agents=[researcher], tasks=[research_task])
result = crew.kickoff()
self.state.research = result.raw
return result.raw
@listen(run_research_crew)
def run_writing_crew(self, research_output):
"""A different Crew handles writing."""
writer = Agent(
role="Technical Writer",
goal="Write a compelling article based on provided research.",
backstory="You turn complex research into engaging, clear prose.",
llm="gpt-4o"
)
editor = Agent(
role="Senior Editor",
goal="Review and polish articles for publication quality.",
backstory="20 years of editorial experience at top tech publications.",
llm="gpt-4o"
)
write_task = Task(
description=f"Write an article based on this research:\n{self.state.research}",
expected_output="A well-structured draft article.",
agent=writer
)
edit_task = Task(
description="Review, fact-check, and polish the draft article.",
expected_output="A publication-ready article.",
agent=editor
)
crew = Crew(agents=[writer, editor], tasks=[write_task, edit_task])
result = crew.kickoff()
self.state.final_article = result.raw
return result.raw
# Run the full pipeline
flow = ArticleFlow()
flow.state.topic = "The Future of Edge AI"
flow.kickoff()
print(flow.state.final_article)
```
핵심 인사이트는 다음과 같습니다: **Flows는 오케스트레이션 레이어를, Crews는 지능 레이어를 제공합니다.** Flow의 각 단계는 각자의 역할, 목표, 도구를 가진 협업 에이전트 팀을 띄울 수 있습니다. 구조화되고 예측 가능한 제어 흐름 *그리고* 자율적 에이전트 협업 — 두 세계의 장점을 모두 얻습니다.
LangGraph에서 비슷한 것을 하려면 노드 함수 안에 에이전트 통신 프로토콜, 도구 호출 루프, 위임 로직을 직접 구현해야 합니다. 가능하긴 하지만, 매번 처음부터 배관을 만드는 셈입니다.
---
## 데모 4: 병렬 실행과 동기화
실제 파이프라인은 종종 작업을 병렬로 분기하고 결과를 합쳐야 합니다. CrewAI Flows는 `and_`와 `or_` 연산자로 이를 우아하게 처리합니다.
```python
from crewai import LLM
from crewai.flow.flow import Flow, and_, listen, start
from pydantic import BaseModel
llm = LLM(model="openai/gpt-5.2")
class AnalysisState(BaseModel):
topic: str = ""
market_data: str = ""
tech_analysis: str = ""
competitor_intel: str = ""
final_report: str = ""
class ParallelAnalysisFlow(Flow[AnalysisState]):
@start()
def start_method(self):
pass
@listen(start_method)
def gather_market_data(self):
# Your agentic or deterministic code
pass
@listen(start_method)
def run_tech_analysis(self):
# Your agentic or deterministic code
pass
@listen(start_method)
def gather_competitor_intel(self):
# Your agentic or deterministic code
pass
@listen(and_(gather_market_data, run_tech_analysis, gather_competitor_intel))
def synthesize_report(self):
# Your agentic or deterministic code
pass
flow = ParallelAnalysisFlow()
flow.state.topic = "AI-powered developer tools"
flow.kickoff()
```
여러 `@start()` 데코레이터는 병렬로 실행됩니다. `@listen` 데코레이터의 `and_()` 결합자는 `synthesize_report`가 *세 가지* 상위 메서드가 모두 완료된 뒤에만 실행되도록 보장합니다. *어떤* 상위 작업이든 끝나는 즉시 진행하고 싶다면 `or_()`도 사용할 수 있습니다.
LangGraph에서는 병렬 분기, 동기화 노드, 신중한 상태 병합이 포함된 fan-out/fan-in 패턴을 만들어야 하며 — 모든 것을 에지로 명시적으로 연결해야 합니다.
---
## 프로덕션에서 CrewAI Flows를 쓰는 이유
깔끔한 문법을 넘어, Flows는 여러 프로덕션 핵심 이점을 제공합니다:
**내장 상태 지속성.** Flow 상태는 LanceDB에 의해 백업되므로 워크플로우가 크래시에서 살아남고, 재개될 수 있으며, 실행 간에 지식을 축적할 수 있습니다. LangGraph는 별도의 체크포인터를 구성해야 합니다.
**타입 안전한 상태 관리.** Pydantic 모델은 즉시 검증, 직렬화, IDE 지원을 제공합니다. LangGraph의 `TypedDict` 상태는 런타임 검증을 하지 않습니다.
**일급 에이전트 오케스트레이션.** Crews는 기본 프리미티브입니다. 역할, 목표, 배경, 도구를 가진 에이전트를 정의하고, Flow의 구조적 틀 안에서 자율적으로 협업하게 합니다. 다중 에이전트 조율을 다시 만들 필요가 없습니다.
**더 단순한 정신적 모델.** 데코레이터는 의도를 선언합니다. `@start`는 "여기서 시작", `@listen(x)`는 "x 이후 실행", `@router(x)`는 "x 이후 어디로 갈지 결정"을 의미합니다. 코드는 자신이 설명하는 워크플로우처럼 읽힙니다.
**CLI 통합.** `crewai run`으로 Flows를 실행합니다. 별도의 컴파일 단계나 그래프 직렬화가 없습니다. Flow는 Python 클래스이며, 그대로 실행됩니다.
---
## 마이그레이션 치트 시트
LangGraph 코드베이스를 CrewAI Flows로 옮기고 싶다면, 다음의 실전 변환 가이드를 참고하세요:
1. **상태를 매핑하세요.** `TypedDict`를 Pydantic `BaseModel`로 변환하고 모든 필드에 기본값을 추가하세요.
2. **노드를 메서드로 변환하세요.** 각 `add_node` 함수는 `Flow` 서브클래스의 메서드가 됩니다. `state["field"]` 읽기는 `self.state.field`로 바꾸세요.
3. **에지를 데코레이터로 교체하세요.** `add_edge(START, "first_node")`는 첫 메서드의 `@start()`가 됩니다. 순차적인 `add_edge("a", "b")`는 `b` 메서드의 `@listen(a)`가 됩니다.
4. **조건부 에지는 `@router`로 교체하세요.** 라우팅 함수와 `add_conditional_edges()` 매핑은 하나의 `@router()` 메서드로 통합하고, 라우트 문자열을 반환하세요.
5. **compile + invoke를 kickoff으로 교체하세요.** `graph.compile()`를 제거하고 `flow.kickoff()`를 호출하세요.
6. **Crew가 들어갈 지점을 고려하세요.** 복잡한 다단계 에이전트 로직이 있는 노드는 Crew로 분리할 후보입니다. 이 부분에서 가장 큰 품질 향상을 체감할 수 있습니다.
---
## 시작하기
CrewAI를 설치하고 새 Flow 프로젝트를 스캐폴딩하세요:
```bash
pip install crewai
crewai create flow my_first_flow
cd my_first_flow
```
이렇게 하면 바로 편집 가능한 Flow 클래스, 설정 파일, 그리고 `type = "flow"`가 이미 설정된 `pyproject.toml`이 포함된 프로젝트 구조가 생성됩니다. 다음으로 실행하세요:
```bash
crewai run
```
그 다음부터는 에이전트를 추가하고 리스너를 연결한 뒤, 배포하면 됩니다.
---
## 마무리
LangGraph는 AI 워크플로우에 구조가 필요하다는 사실을 생태계에 일깨워 주었습니다. 중요한 교훈이었습니다. 하지만 CrewAI Flows는 그 교훈을 더 빠르게 쓰고, 더 쉽게 읽으며, 프로덕션에서 더 강력한 형태로 제공합니다 — 특히 워크플로우에 여러 에이전트의 협업이 포함될 때 그렇습니다.
단일 에이전트 체인을 넘는 무엇인가를 만들고 있다면, Flows를 진지하게 검토해 보세요. 데코레이터 기반 모델, Crews의 네이티브 통합, 내장 상태 관리를 통해 배관 작업에 쓰는 시간을 줄이고, 중요한 문제에 더 많은 시간을 쓸 수 있습니다.
`crewai create flow`로 시작하세요. 후회하지 않을 겁니다.

View File

@@ -7,7 +7,7 @@ mode: "wide"
## CrewAI를 LLM에 연결하기
CrewAI는 LiteLLM을 사용하여 다양한 언어 모델(LLM)에 연결합니다. 이 통합은 높은 다양성을 제공하여, 여러 공급자의 모델을 간단하고 통합된 인터페이스로 사용할 수 있게 해줍니다.
CrewAI는 가장 인기 있는 제공자(OpenAI, Anthropic, Google Gemini, Azure, AWS Bedrock)에 대해 네이티브 SDK 통합을 통해 LLM에 연결하며, 그 외 모든 제공자에 대해서는 LiteLLM을 유연한 폴백으로 사용합니다.
<Note>
기본적으로 CrewAI는 `gpt-4o-mini` 모델을 사용합니다. 이는 `OPENAI_MODEL_NAME` 환경 변수에 의해 결정되며, 설정되지 않은 경우 기본값은 "gpt-4o-mini"입니다.
@@ -41,6 +41,14 @@ LiteLLM은 다음을 포함하되 이에 국한되지 않는 다양한 프로바
지원되는 프로바이더의 전체 및 최신 목록은 [LiteLLM 프로바이더 문서](https://docs.litellm.ai/docs/providers)를 참조하세요.
<Info>
네이티브 통합에서 지원하지 않는 제공자를 사용하려면 LiteLLM을 프로젝트에 의존성으로 추가하세요:
```bash
uv add 'crewai[litellm]'
```
네이티브 제공자(OpenAI, Anthropic, Google Gemini, Azure, AWS Bedrock)는 자체 SDK extras를 사용합니다 — [공급자 구성 예시](/ko/concepts/llms#공급자-구성-예시)를 참조하세요.
</Info>
## LLM 변경하기
CrewAI agent에서 다른 LLM을 사용하려면 여러 가지 방법이 있습니다:

View File

@@ -35,7 +35,7 @@ crewai login
아직 설치하지 않았다면 CLI 도구와 함께 CrewAI를 설치하세요:
```bash
uv add crewai[tools]
uv add 'crewai[tools]'
```
그런 다음 CrewAI AMP 계정으로 CLI를 인증하세요:

View File

@@ -18,77 +18,46 @@ Composio는 AI 에이전트를 250개 이상의 도구와 연결할 수 있는
Composio 도구를 프로젝트에 통합하려면 아래 지침을 따르세요:
```shell
pip install composio-crewai
pip install composio composio-crewai
pip install crewai
```
설치가 완료된 후, `composio login`을 실행하거나 Composio API 키를 `COMPOSIO_API_KEY`로 export하세요. Composio API 키는 [여기](https://app.composio.dev)에서 받을 수 있습니다.
설치가 완료되면 Composio API 키를 `COMPOSIO_API_KEY`로 설정하세요. Composio API 키는 [여기](https://platform.composio.dev)에서 받을 수 있습니다.
## 예시
다음 예시는 도구를 초기화하고 github action을 실행하는 방법을 보여줍니다:
다음 예시는 도구를 초기화하고 GitHub 액션을 실행하는 방법을 보여줍니다:
1. Composio 도구 세트 초기화
1. CrewAI Provider와 함께 Composio 초기화
```python Code
from composio_crewai import ComposioToolSet, App, Action
from composio_crewai import ComposioProvider
from composio import Composio
from crewai import Agent, Task, Crew
toolset = ComposioToolSet()
composio = Composio(provider=ComposioProvider())
```
2. GitHub 계정 연결
2. 새 Composio 세션을 만들고 도구 가져오기
<CodeGroup>
```shell CLI
composio add github
```
```python Code
request = toolset.initiate_connection(app=App.GITHUB)
print(f"Open this URL to authenticate: {request.redirectUrl}")
```python
session = composio.create(
user_id="your-user-id",
toolkits=["gmail", "github"] # optional, default is all toolkits
)
tools = session.tools()
```
세션 및 사용자 관리에 대한 자세한 내용은 [여기](https://docs.composio.dev/docs/configuring-sessions)를 참고하세요.
</CodeGroup>
3. 도구 가져오기
3. 사용자 수동 인증하기
- 앱에서 모든 도구를 가져오기 (프로덕션 환경에서는 권장하지 않음):
Composio는 에이전트 채팅 세션 중에 사용자를 자동으로 인증합니다. 하지만 `authorize` 메서드를 호출해 사용자를 수동으로 인증할 수도 있습니다.
```python Code
tools = toolset.get_tools(apps=[App.GITHUB])
connection_request = session.authorize("github")
print(f"Open this URL to authenticate: {connection_request.redirect_url}")
```
- 태그를 기반으로 도구 필터링:
```python Code
tag = "users"
filtered_action_enums = toolset.find_actions_by_tags(
App.GITHUB,
tags=[tag],
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
- 사용 사례를 기반으로 도구 필터링:
```python Code
use_case = "Star a repository on GitHub"
filtered_action_enums = toolset.find_actions_by_use_case(
App.GITHUB, use_case=use_case, advanced=False
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
<Tip>`advanced`를 True로 설정하면 복잡한 사용 사례를 위한 액션을 가져올 수 있습니다</Tip>
- 특정 도구 사용하기:
이 데모에서는 GitHub 앱의 `GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER` 액션을 사용합니다.
```python Code
tools = toolset.get_tools(
actions=[Action.GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER]
)
```
액션 필터링에 대해 더 자세한 내용을 보려면 [여기](https://docs.composio.dev/patterns/tools/use-tools/use-specific-actions)를 참고하세요.
4. 에이전트 정의
```python Code
@@ -116,4 +85,4 @@ crew = Crew(agents=[crewai_agent], tasks=[task])
crew.kickoff()
```
* 더욱 자세한 도구 리스트는 [여기](https://app.composio.dev)에서 확인하실 수 있습니다.
* 더욱 자세한 도구 목록은 [여기](https://docs.composio.dev/toolkits)에서 확인하실 수 있습니다.

View File

@@ -4,6 +4,138 @@ description: "Atualizações de produto, melhorias e correções do CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="04 mar 2026">
## v1.10.1
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.1)
## O que mudou
### Recursos
- Atualizar Gemini GenAI
### Correções de Bugs
- Ajustar o valor do listener do executor para evitar recursão
- Agrupar partes da resposta da função paralela em um único objeto Content no Gemini
- Exibir a saída de pensamento dos modelos de pensamento no Gemini
- Carregar ferramentas MCP e da plataforma quando as ferramentas do agente forem None
- Suportar ambientes Jupyter com loops de eventos em A2A
- Usar ID anônimo para rastreamentos efêmeros
- Passar condicionalmente o cabeçalho plus
- Ignorar o registro do manipulador de sinal em threads não principais para telemetria
- Injetar erros de ferramentas como observações e resolver colisões de nomes
- Atualizar pypdf de 4.x para 6.7.4 para resolver alertas do Dependabot
- Resolver alertas de segurança críticos e altos do Dependabot
### Documentação
- Sincronizar a documentação da ferramenta Composio entre locais
## Contribuidores
@giulio-leone, @greysonlalonde, @haxzie, @joaomdmoura, @lorenzejay, @mattatcha, @mplachta, @nicoferdi96
</Update>
<Update label="27 fev 2026">
## v1.10.1a1
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.1a1)
## O que Mudou
### Funcionalidades
- Implementar suporte a invocação assíncrona em métodos de callback de etapas
- Implementar carregamento sob demanda para dependências pesadas no módulo de Memória
### Documentação
- Atualizar changelog e versão para v1.10.0
### Refatoração
- Refatorar métodos de callback de etapas para suportar invocação assíncrona
- Refatorar para implementar carregamento sob demanda para dependências pesadas no módulo de Memória
### Correções de Bugs
- Corrigir branch para notas de lançamento
## Contribuidores
@greysonlalonde, @joaomdmoura
</Update>
<Update label="27 fev 2026">
## v1.10.1a1
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.1a1)
## O que Mudou
### Refatoração
- Refatorar métodos de callback de etapas para suportar invocação assíncrona
- Implementar carregamento sob demanda para dependências pesadas no módulo de Memória
### Documentação
- Atualizar changelog e versão para v1.10.0
### Correções de Bugs
- Criar branch para notas de lançamento
## Contribuidores
@greysonlalonde, @joaomdmoura
</Update>
<Update label="26 fev 2026">
## v1.10.0
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.0)
## O que Mudou
### Recursos
- Aprimorar a resolução da ferramenta MCP e eventos relacionados
- Atualizar a versão do lancedb e adicionar pacotes lance-namespace
- Aprimorar a análise e validação de argumentos JSON no CrewAgentExecutor e BaseTool
- Migrar o cliente HTTP da CLI de requests para httpx
- Adicionar documentação versionada
- Adicionar detecção de versões removidas para notas de versão
- Implementar tratamento de entrada do usuário em Flows
- Aprimorar a funcionalidade de auto-loop HITL nos testes de integração de feedback humano
- Adicionar started_event_id e definir no eventbus
- Atualizar automaticamente tools.specs
### Correções de Bugs
- Validar kwargs da ferramenta mesmo quando vazios para evitar TypeError crípticos
- Preservar tipos nulos nos esquemas de parâmetros da ferramenta para LLM
- Mapear output_pydantic/output_json para saída estruturada nativa
- Garantir que callbacks sejam executados/aguardados se forem promessas
- Capturar o nome do método no contexto da exceção
- Preservar tipo enum no resultado do roteador; melhorar tipos
- Corrigir fluxos cíclicos que quebram silenciosamente quando o ID de persistência é passado nas entradas
- Corrigir o formato da flag da CLI de --skip-provider para --skip_provider
- Garantir que o fluxo de chamada da ferramenta OpenAI seja finalizado
- Resolver ponteiros $ref de esquema complexos nas ferramentas MCP
- Impor additionalProperties=false nos esquemas
- Rejeitar nomes de scripts reservados para pastas de equipe
- Resolver condição de corrida no teste de emissão de eventos de guardrail
### Documentação
- Adicionar nota de dependência litellm para provedores de LLM não nativos
- Esclarecer o modelo de segurança NL2SQL e orientações de fortalecimento
- Adicionar 96 ações ausentes em 9 integrações
### Refatoração
- Refatorar crew para provider
- Extrair HITL para padrão de provider
- Melhorar tipagem e registro de hooks
## Contribuidores
@dependabot[bot], @github-actions[bot], @github-code-quality[bot], @greysonlalonde, @heitorado, @hobostay, @joaomdmoura, @johnvan7, @jonathansampson, @lorenzejay, @lucasgomide, @mattatcha, @mplachta, @nicoferdi96, @theCyberTech, @thiagomoretto, @vinibrsl
</Update>
<Update label="26 jan 2026">
## v1.9.0

View File

@@ -105,6 +105,15 @@ Existem diferentes locais no código do CrewAI onde você pode especificar o mod
</Tab>
</Tabs>
<Info>
O CrewAI oferece integrações nativas via SDK para OpenAI, Anthropic, Google (Gemini API), Azure e AWS Bedrock — sem necessidade de instalação extra além dos extras específicos do provedor (ex.: `uv add "crewai[openai]"`).
Todos os outros provedores são alimentados pelo **LiteLLM**. Se você planeja usar algum deles, adicione-o como dependência ao seu projeto:
```bash
uv add 'crewai[litellm]'
```
</Info>
## Exemplos de Configuração de Provedores
O CrewAI suporta uma grande variedade de provedores de LLM, cada um com recursos, métodos de autenticação e capacidades de modelo únicos.
@@ -214,6 +223,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
| `meta_llama/Llama-4-Maverick-17B-128E-Instruct-FP8` | 128k | 4028 | Texto, Imagem | Texto |
| `meta_llama/Llama-3.3-70B-Instruct` | 128k | 4028 | Texto | Texto |
| `meta_llama/Llama-3.3-8B-Instruct` | 128k | 4028 | Texto | Texto |
**Nota:** Este provedor usa o LiteLLM. Adicione-o como dependência ao seu projeto:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Anthropic">
@@ -354,6 +368,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
| gemini-1.5-flash | 1M tokens | Modelo multimodal equilibrado, bom para maioria das tarefas |
| gemini-1.5-flash-8B | 1M tokens | Mais rápido, mais eficiente em custo, adequado para tarefas de alta frequência |
| gemini-1.5-pro | 2M tokens | Melhor desempenho para uma ampla variedade de tarefas de raciocínio, incluindo lógica, codificação e colaboração criativa |
**Nota:** Este provedor usa o LiteLLM. Adicione-o como dependência ao seu projeto:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Azure">
@@ -438,6 +457,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
model="sagemaker/<my-endpoint>"
)
```
**Nota:** Este provedor usa o LiteLLM. Adicione-o como dependência ao seu projeto:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Mistral">
@@ -453,6 +477,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
temperature=0.7
)
```
**Nota:** Este provedor usa o LiteLLM. Adicione-o como dependência ao seu projeto:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Nvidia NIM">
@@ -539,6 +568,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
| rakuten/rakutenai-7b-instruct | 1.024 tokens | LLM topo de linha, compreensão, raciocínio e geração textual.|
| rakuten/rakutenai-7b-chat | 1.024 tokens | LLM topo de linha, compreensão, raciocínio e geração textual.|
| baichuan-inc/baichuan2-13b-chat | 4.096 tokens | Suporte a chat em chinês/inglês, programação, matemática, seguir instruções, resolver quizzes.|
**Nota:** Este provedor usa o LiteLLM. Adicione-o como dependência ao seu projeto:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Local NVIDIA NIM Deployed using WSL2">
@@ -579,6 +613,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
# ...
```
**Nota:** Este provedor usa o LiteLLM. Adicione-o como dependência ao seu projeto:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Groq">
@@ -600,6 +639,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
| Llama 3.1 70B/8B | 131.072 tokens | Alta performance e tarefas de contexto grande|
| Llama 3.2 Série | 8.192 tokens | Tarefas gerais |
| Mixtral 8x7B | 32.768 tokens | Equilíbrio entre performance e contexto |
**Nota:** Este provedor usa o LiteLLM. Adicione-o como dependência ao seu projeto:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="IBM watsonx.ai">
@@ -622,6 +666,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
base_url="https://api.watsonx.ai/v1"
)
```
**Nota:** Este provedor usa o LiteLLM. Adicione-o como dependência ao seu projeto:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Ollama (LLMs Locais)">
@@ -635,6 +684,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
base_url="http://localhost:11434"
)
```
**Nota:** Este provedor usa o LiteLLM. Adicione-o como dependência ao seu projeto:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Fireworks AI">
@@ -650,6 +704,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
temperature=0.7
)
```
**Nota:** Este provedor usa o LiteLLM. Adicione-o como dependência ao seu projeto:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Perplexity AI">
@@ -665,6 +724,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
base_url="https://api.perplexity.ai/"
)
```
**Nota:** Este provedor usa o LiteLLM. Adicione-o como dependência ao seu projeto:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Hugging Face">
@@ -679,6 +743,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct"
)
```
**Nota:** Este provedor usa o LiteLLM. Adicione-o como dependência ao seu projeto:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="SambaNova">
@@ -702,6 +771,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
| Llama 3.2 Série | 8.192 tokens | Tarefas gerais e multimodais |
| Llama 3.3 70B | Até 131.072 tokens | Desempenho e qualidade de saída elevada |
| Família Qwen2 | 8.192 tokens | Desempenho e qualidade de saída elevada |
**Nota:** Este provedor usa o LiteLLM. Adicione-o como dependência ao seu projeto:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Cerebras">
@@ -727,6 +801,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
- Equilíbrio entre velocidade e qualidade
- Suporte a longas janelas de contexto
</Info>
**Nota:** Este provedor usa o LiteLLM. Adicione-o como dependência ao seu projeto:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
<Accordion title="Open Router">
@@ -749,6 +828,11 @@ Nesta seção, você encontrará exemplos detalhados que ajudam a selecionar, co
- openrouter/deepseek/deepseek-r1
- openrouter/deepseek/deepseek-chat
</Info>
**Nota:** Este provedor usa o LiteLLM. Adicione-o como dependência ao seu projeto:
```bash
uv add 'crewai[litellm]'
```
</Accordion>
</AccordionGroup>

View File

@@ -176,6 +176,11 @@ Você precisa enviar seu crew para um repositório do GitHub. Caso ainda não te
![Definir Variáveis de Ambiente](/images/enterprise/set-env-variables.png)
</Frame>
<Info>
Usando pacotes Python privados? Você também precisará adicionar suas credenciais de registro aqui.
Consulte [Registros de Pacotes Privados](/pt-BR/enterprise/guides/private-package-registry) para as variáveis necessárias.
</Info>
</Step>
<Step title="Implante Seu Crew">

View File

@@ -256,6 +256,12 @@ Antes da implantação, certifique-se de ter:
1. **Chaves de API de LLM** prontas (OpenAI, Anthropic, Google, etc.)
2. **Chaves de API de ferramentas** se estiver usando ferramentas externas (Serper, etc.)
<Info>
Se seu projeto depende de pacotes de um **registro PyPI privado**, você também precisará configurar
credenciais de autenticação do registro como variáveis de ambiente. Consulte o guia
[Registros de Pacotes Privados](/pt-BR/enterprise/guides/private-package-registry) para mais detalhes.
</Info>
<Tip>
Teste seu projeto localmente com as mesmas variáveis de ambiente antes de implantar
para detectar problemas de configuração antecipadamente.

View File

@@ -0,0 +1,263 @@
---
title: "Registros de Pacotes Privados"
description: "Instale pacotes Python privados de registros PyPI autenticados no CrewAI AMP"
icon: "lock"
mode: "wide"
---
<Note>
Este guia aborda como configurar seu projeto CrewAI para instalar pacotes Python
de registros PyPI privados (Azure DevOps Artifacts, GitHub Packages, GitLab, AWS CodeArtifact, etc.)
ao implantar no CrewAI AMP.
</Note>
## Quando Você Precisa Disso
Se seu projeto depende de pacotes Python internos ou proprietários hospedados em um registro privado
em vez do PyPI público, você precisará:
1. Informar ao UV **onde** encontrar o pacote (uma URL de index)
2. Informar ao UV **quais** pacotes vêm desse index (um mapeamento de source)
3. Fornecer **credenciais** para que o UV possa autenticar durante a instalação
O CrewAI AMP usa [UV](https://docs.astral.sh/uv/) para resolução e instalação de dependências.
O UV suporta registros privados autenticados por meio da configuração do `pyproject.toml` combinada
com variáveis de ambiente para credenciais.
## Passo 1: Configurar o pyproject.toml
Três elementos trabalham juntos no seu `pyproject.toml`:
### 1a. Declarar a dependência
Adicione o pacote privado ao seu `[project.dependencies]` como qualquer outra dependência:
```toml
[project]
dependencies = [
"crewai[tools]>=0.100.1,<1.0.0",
"my-private-package>=1.2.0",
]
```
### 1b. Definir o index
Registre seu registro privado como um index nomeado em `[[tool.uv.index]]`:
```toml
[[tool.uv.index]]
name = "my-private-registry"
url = "https://pkgs.dev.azure.com/my-org/_packaging/my-feed/pypi/simple/"
explicit = true
```
<Info>
O campo `name` é importante — o UV o utiliza para construir os nomes das variáveis de ambiente
para autenticação (veja o [Passo 2](#passo-2-configurar-credenciais-de-autenticação) abaixo).
Definir `explicit = true` significa que o UV não consultará esse index para todos os pacotes — apenas
os que você mapear explicitamente em `[tool.uv.sources]`. Isso evita consultas desnecessárias
ao seu registro privado e protege contra ataques de confusão de dependências.
</Info>
### 1c. Mapear o pacote para o index
Informe ao UV quais pacotes devem ser resolvidos a partir do seu index privado usando `[tool.uv.sources]`:
```toml
[tool.uv.sources]
my-private-package = { index = "my-private-registry" }
```
### Exemplo completo
```toml
[project]
name = "my-crew-project"
version = "0.1.0"
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.100.1,<1.0.0",
"my-private-package>=1.2.0",
]
[tool.crewai]
type = "crew"
[[tool.uv.index]]
name = "my-private-registry"
url = "https://pkgs.dev.azure.com/my-org/_packaging/my-feed/pypi/simple/"
explicit = true
[tool.uv.sources]
my-private-package = { index = "my-private-registry" }
```
Após atualizar o `pyproject.toml`, regenere seu arquivo lock:
```bash
uv lock
```
<Warning>
Sempre faça commit do `uv.lock` atualizado junto com as alterações no `pyproject.toml`.
O arquivo lock é obrigatório para implantação — veja [Preparar para Implantação](/pt-BR/enterprise/guides/prepare-for-deployment).
</Warning>
## Passo 2: Configurar Credenciais de Autenticação
O UV autentica em indexes privados usando variáveis de ambiente que seguem uma convenção de nomenclatura
baseada no nome do index que você definiu no `pyproject.toml`:
```
UV_INDEX_{UPPER_NAME}_USERNAME
UV_INDEX_{UPPER_NAME}_PASSWORD
```
Onde `{UPPER_NAME}` é o nome do seu index convertido para **maiúsculas** com **hifens substituídos por underscores**.
Por exemplo, um index chamado `my-private-registry` usa:
| Variável | Valor |
|----------|-------|
| `UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME` | Seu nome de usuário ou nome do token do registro |
| `UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD` | Sua senha ou token/PAT do registro |
<Warning>
Essas variáveis de ambiente **devem** ser adicionadas pelas configurações de **Variáveis de Ambiente** do CrewAI AMP —
globalmente ou no nível da implantação. Elas não podem ser definidas em arquivos `.env` ou codificadas no seu projeto.
Veja [Configurar Variáveis de Ambiente no AMP](#configurar-variáveis-de-ambiente-no-amp) abaixo.
</Warning>
## Referência de Provedores de Registro
A tabela abaixo mostra o formato da URL de index e os valores de credenciais para provedores de registro comuns.
Substitua os valores de exemplo pelos detalhes reais da sua organização e feed.
| Provedor | URL do Index | Usuário | Senha |
|----------|-------------|---------|-------|
| **Azure DevOps Artifacts** | `https://pkgs.dev.azure.com/{org}/_packaging/{feed}/pypi/simple/` | Qualquer string não vazia (ex: `token`) | Personal Access Token (PAT) com escopo Packaging Read |
| **GitHub Packages** | `https://pypi.pkg.github.com/{owner}/simple/` | Nome de usuário do GitHub | Personal Access Token (classic) com escopo `read:packages` |
| **GitLab Package Registry** | `https://gitlab.com/api/v4/projects/{project_id}/packages/pypi/simple/` | `__token__` | Project ou Personal Access Token com escopo `read_api` |
| **AWS CodeArtifact** | Use a URL de `aws codeartifact get-repository-endpoint` | `aws` | Token de `aws codeartifact get-authorization-token` |
| **Google Artifact Registry** | `https://{region}-python.pkg.dev/{project}/{repo}/simple/` | `_json_key_base64` | Chave de conta de serviço codificada em Base64 |
| **JFrog Artifactory** | `https://{instance}.jfrog.io/artifactory/api/pypi/{repo}/simple/` | Nome de usuário ou email | Chave API ou token de identidade |
| **Auto-hospedado (devpi, Nexus, etc.)** | URL da API simple do seu registro | Nome de usuário do registro | Senha do registro |
<Tip>
Para **AWS CodeArtifact**, o token de autorização expira periodicamente.
Você precisará atualizar o valor de `UV_INDEX_*_PASSWORD` quando ele expirar.
Considere automatizar isso no seu pipeline de CI/CD.
</Tip>
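Por exemplo, um esboço de como renovar o token em um job de CI/CD usando a AWS CLI. Os valores de domínio e conta abaixo são hipotéticos; ajuste para o seu ambiente:
```bash
# Hypothetical domain/account values; adjust to your CodeArtifact setup
export UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME=aws
export UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD=$(aws codeartifact get-authorization-token \
  --domain my-domain \
  --domain-owner 123456789012 \
  --query authorizationToken \
  --output text)

# Use the fresh token locally (for example, to regenerate uv.lock) and copy the
# same value into the AMP environment variable before the next build.
uv lock
```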
## Configurar Variáveis de Ambiente no AMP
As credenciais do registro privado devem ser configuradas como variáveis de ambiente no CrewAI AMP.
Você tem duas opções:
<Tabs>
<Tab title="Interface Web">
1. Faça login no [CrewAI AMP](https://app.crewai.com)
2. Navegue até sua automação
3. Abra a aba **Environment Variables**
4. Adicione cada variável (`UV_INDEX_*_USERNAME` e `UV_INDEX_*_PASSWORD`) com seu valor
Veja o passo [Deploy para AMP — Definir Variáveis de Ambiente](/pt-BR/enterprise/guides/deploy-to-amp#definir-as-variáveis-de-ambiente) para detalhes.
</Tab>
<Tab title="Implantação via CLI">
Adicione as variáveis ao seu arquivo `.env` local antes de executar `crewai deploy create`.
A CLI as transferirá com segurança para a plataforma:
```bash
# .env
OPENAI_API_KEY=sk-...
UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME=token
UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD=your-pat-here
```
```bash
crewai deploy create
```
</Tab>
</Tabs>
<Warning>
**Nunca** faça commit de credenciais no seu repositório. Use variáveis de ambiente do AMP para todos os segredos.
O arquivo `.env` deve estar listado no `.gitignore`.
</Warning>
Para atualizar credenciais em uma implantação existente, veja [Atualizar Seu Crew — Variáveis de Ambiente](/pt-BR/enterprise/guides/update-crew).
## Como Tudo se Conecta
Quando o CrewAI AMP faz o build da sua automação, o fluxo de resolução funciona assim:
<Steps>
<Step title="Build inicia">
O AMP busca seu repositório e lê o `pyproject.toml` e o `uv.lock`.
</Step>
<Step title="UV resolve dependências">
O UV lê `[tool.uv.sources]` para determinar de qual index cada pacote deve vir.
</Step>
<Step title="UV autentica">
Para cada index privado, o UV busca `UV_INDEX_{UPPER_NAME}_USERNAME` e `UV_INDEX_{UPPER_NAME}_PASSWORD`
nas variáveis de ambiente que você configurou no AMP.
</Step>
<Step title="Pacotes são instalados">
O UV baixa e instala todos os pacotes — tanto públicos (do PyPI) quanto privados (do seu registro).
</Step>
<Step title="Automação executa">
Seu crew ou flow inicia com todas as dependências disponíveis.
</Step>
</Steps>
## Solução de Problemas
### Erros de Autenticação Durante o Build
**Sintoma**: Build falha com `401 Unauthorized` ou `403 Forbidden` ao resolver um pacote privado.
**Verifique**:
- Os nomes das variáveis de ambiente `UV_INDEX_*` correspondem exatamente ao nome do seu index (maiúsculas, hifens -> underscores)
- As credenciais estão definidas nas variáveis de ambiente do AMP, não apenas em um `.env` local
- Seu token/PAT tem as permissões de leitura necessárias para o feed de pacotes
- O token não expirou (especialmente relevante para AWS CodeArtifact)
### Pacote Não Encontrado
**Sintoma**: `No matching distribution found for my-private-package`.
**Verifique**:
- A URL do index no `pyproject.toml` termina com `/simple/`
- A entrada `[tool.uv.sources]` mapeia o nome correto do pacote para o nome correto do index
- O pacote está realmente publicado no seu registro privado
- Execute `uv lock` localmente com as mesmas credenciais para verificar se a resolução funciona
### Conflitos no Arquivo Lock
**Sintoma**: `uv lock` falha ou produz resultados inesperados após adicionar um index privado.
**Solução**: Defina as credenciais localmente e regenere:
```bash
export UV_INDEX_MY_PRIVATE_REGISTRY_USERNAME=token
export UV_INDEX_MY_PRIVATE_REGISTRY_PASSWORD=your-pat
uv lock
```
Em seguida, faça commit do `uv.lock` atualizado.
## Guias Relacionados
<CardGroup cols={3}>
<Card title="Preparar para Implantação" icon="clipboard-check" href="/pt-BR/enterprise/guides/prepare-for-deployment">
Verifique a estrutura do projeto e as dependências antes de implantar.
</Card>
<Card title="Deploy para AMP" icon="rocket" href="/pt-BR/enterprise/guides/deploy-to-amp">
Implante seu crew ou flow e configure variáveis de ambiente.
</Card>
<Card title="Atualizar Seu Crew" icon="arrows-rotate" href="/pt-BR/enterprise/guides/update-crew">
Atualize variáveis de ambiente e envie alterações para uma implantação em execução.
</Card>
</CardGroup>

View File

@@ -0,0 +1,518 @@
---
title: "Migrando do LangGraph para o CrewAI: um guia prático para engenheiros"
description: Se você já construiu com LangGraph, saiba como portar rapidamente seus projetos para o CrewAI
icon: switch
mode: "wide"
---
Você construiu agentes com LangGraph. Já lutou com o `StateGraph`, ligou arestas condicionais e depurou dicionários de estado às 2 da manhã. Funciona — mas, em algum momento, você começou a se perguntar se existe um caminho melhor para produção.
Existe. **CrewAI Flows** entrega o mesmo poder — orquestração orientada a eventos, roteamento condicional, estado compartilhado — com muito menos boilerplate e um modelo mental que se alinha a como você realmente pensa sobre fluxos de trabalho de IA em múltiplas etapas.
Este artigo apresenta os conceitos principais lado a lado, mostra comparações reais de código e demonstra por que o CrewAI Flows é o framework que você vai querer usar a seguir.
---
## A Mudança de Modelo Mental
LangGraph pede que você pense em **grafos**: nós, arestas e dicionários de estado. Todo workflow é um grafo direcionado em que você conecta explicitamente as transições entre as etapas de computação. É poderoso, mas a abstração traz overhead — especialmente quando o seu fluxo é fundamentalmente sequencial com alguns pontos de decisão.
CrewAI Flows pede que você pense em **eventos**: métodos que iniciam, métodos que escutam resultados e métodos que roteiam a execução. A topologia do workflow emerge de anotações com decorators, em vez de construção explícita do grafo. Isso não é apenas açúcar sintático — muda como você projeta, lê e mantém seus pipelines.
Veja o mapeamento principal:
| Conceito no LangGraph | Equivalente no CrewAI Flows |
| --- | --- |
| `StateGraph` class | `Flow` class |
| `add_node()` | Methods decorated with `@start`, `@listen` |
| `add_edge()` / `add_conditional_edges()` | `@listen()` / `@router()` decorators |
| `TypedDict` state | Pydantic `BaseModel` state |
| `START` / `END` constants | `@start()` decorator / natural method return |
| `graph.compile()` | `flow.kickoff()` |
| Checkpointer / persistence | Built-in memory (LanceDB-backed) |
Vamos ver como isso fica na prática.
---
## Demo 1: Um Pipeline Sequencial Simples
Imagine que você está construindo um pipeline que recebe um tema, pesquisa, escreve um resumo e formata a saída. Veja como cada framework lida com isso.
### Abordagem com LangGraph
```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
class ResearchState(TypedDict):
topic: str
raw_research: str
summary: str
formatted_output: str
def research_topic(state: ResearchState) -> dict:
# Call an LLM or search API
result = llm.invoke(f"Research the topic: {state['topic']}")
return {"raw_research": result}
def write_summary(state: ResearchState) -> dict:
result = llm.invoke(
f"Summarize this research:\n{state['raw_research']}"
)
return {"summary": result}
def format_output(state: ResearchState) -> dict:
result = llm.invoke(
f"Format this summary as a polished article section:\n{state['summary']}"
)
return {"formatted_output": result}
# Build the graph
graph = StateGraph(ResearchState)
graph.add_node("research", research_topic)
graph.add_node("summarize", write_summary)
graph.add_node("format", format_output)
graph.add_edge(START, "research")
graph.add_edge("research", "summarize")
graph.add_edge("summarize", "format")
graph.add_edge("format", END)
# Compile and run
app = graph.compile()
result = app.invoke({"topic": "quantum computing advances in 2026"})
print(result["formatted_output"])
```
Você define funções, registra-as como nós e conecta manualmente cada transição. Para uma sequência simples como essa, há muita cerimônia.
### Abordagem com CrewAI Flows
```python
from crewai import LLM, Agent, Crew, Process, Task
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
llm = LLM(model="openai/gpt-5.2")
class ResearchState(BaseModel):
topic: str = ""
raw_research: str = ""
summary: str = ""
formatted_output: str = ""
class ResearchFlow(Flow[ResearchState]):
@start()
def research_topic(self):
# Option 1: Direct LLM call
result = llm.call(f"Research the topic: {self.state.topic}")
self.state.raw_research = result
return result
@listen(research_topic)
def write_summary(self, research_output):
# Option 2: A single agent
summarizer = Agent(
role="Research Summarizer",
goal="Produce concise, accurate summaries of research content",
backstory="You are an expert at distilling complex research into clear, "
"digestible summaries.",
llm=llm,
verbose=True,
)
result = summarizer.kickoff(
f"Summarize this research:\n{self.state.raw_research}"
)
self.state.summary = str(result)
return self.state.summary
@listen(write_summary)
def format_output(self, summary_output):
# Option 3: a complete crew (with one or more agents)
formatter = Agent(
role="Content Formatter",
goal="Transform research summaries into polished, publication-ready article sections",
backstory="You are a skilled editor with expertise in structuring and "
"presenting technical content for a general audience.",
llm=llm,
verbose=True,
)
format_task = Task(
description=f"Format this summary as a polished article section:\n{self.state.summary}",
expected_output="A well-structured, polished article section ready for publication.",
agent=formatter,
)
crew = Crew(
agents=[formatter],
tasks=[format_task],
process=Process.sequential,
verbose=True,
)
result = crew.kickoff()
self.state.formatted_output = str(result)
return self.state.formatted_output
# Run the flow
flow = ResearchFlow()
flow.state.topic = "quantum computing advances in 2026"
result = flow.kickoff()
print(flow.state.formatted_output)
```
Repare a diferença: nada de construção de grafo, de ligação de arestas, nem de etapa de compilação. A ordem de execução é declarada exatamente onde a lógica vive. `@start()` marca o ponto de entrada, e `@listen(method_name)` encadeia as etapas. O estado é um modelo Pydantic de verdade, com segurança de tipos, validação e auto-complete na IDE.
---
## Demo 2: Roteamento Condicional
Aqui é que fica interessante. Digamos que você está construindo um pipeline de conteúdo que roteia para diferentes caminhos de processamento com base no tipo de conteúdo detectado.
### Abordagem com LangGraph
```python
from typing import TypedDict, Literal
from langgraph.graph import StateGraph, START, END
class ContentState(TypedDict):
input_text: str
content_type: str
result: str
def classify_content(state: ContentState) -> dict:
content_type = llm.invoke(
f"Classify this content as 'technical', 'creative', or 'business':\n{state['input_text']}"
)
return {"content_type": content_type.strip().lower()}
def process_technical(state: ContentState) -> dict:
result = llm.invoke(f"Process as technical doc:\n{state['input_text']}")
return {"result": result}
def process_creative(state: ContentState) -> dict:
result = llm.invoke(f"Process as creative writing:\n{state['input_text']}")
return {"result": result}
def process_business(state: ContentState) -> dict:
result = llm.invoke(f"Process as business content:\n{state['input_text']}")
return {"result": result}
# Routing function
def route_content(state: ContentState) -> Literal["technical", "creative", "business"]:
return state["content_type"]
# Build the graph
graph = StateGraph(ContentState)
graph.add_node("classify", classify_content)
graph.add_node("technical", process_technical)
graph.add_node("creative", process_creative)
graph.add_node("business", process_business)
graph.add_edge(START, "classify")
graph.add_conditional_edges(
"classify",
route_content,
{
"technical": "technical",
"creative": "creative",
"business": "business",
}
)
graph.add_edge("technical", END)
graph.add_edge("creative", END)
graph.add_edge("business", END)
app = graph.compile()
result = app.invoke({"input_text": "Explain how TCP handshakes work"})
```
Você precisa de uma função de roteamento separada, de um mapeamento explícito de arestas condicionais e de arestas de término para cada ramificação. A lógica de roteamento fica desacoplada do nó que produz a decisão.
### Abordagem com CrewAI Flows
```python
from crewai import LLM, Agent
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel
llm = LLM(model="openai/gpt-5.2")
class ContentState(BaseModel):
input_text: str = ""
content_type: str = ""
result: str = ""
class ContentFlow(Flow[ContentState]):
@start()
def classify_content(self):
self.state.content_type = (
llm.call(
f"Classify this content as 'technical', 'creative', or 'business':\n"
f"{self.state.input_text}"
)
.strip()
.lower()
)
return self.state.content_type
@router(classify_content)
def route_content(self, classification):
if classification == "technical":
return "process_technical"
elif classification == "creative":
return "process_creative"
else:
return "process_business"
@listen("process_technical")
def handle_technical(self):
agent = Agent(
role="Technical Writer",
goal="Produce clear, accurate technical documentation",
backstory="You are an expert technical writer who specializes in "
"explaining complex technical concepts precisely.",
llm=llm,
verbose=True,
)
self.state.result = str(
agent.kickoff(f"Process as technical doc:\n{self.state.input_text}")
)
@listen("process_creative")
def handle_creative(self):
agent = Agent(
role="Creative Writer",
goal="Craft engaging and imaginative creative content",
backstory="You are a talented creative writer with a flair for "
"compelling storytelling and vivid expression.",
llm=llm,
verbose=True,
)
self.state.result = str(
agent.kickoff(f"Process as creative writing:\n{self.state.input_text}")
)
@listen("process_business")
def handle_business(self):
agent = Agent(
role="Business Writer",
goal="Produce professional, results-oriented business content",
backstory="You are an experienced business writer who communicates "
"strategy and value clearly to professional audiences.",
llm=llm,
verbose=True,
)
self.state.result = str(
agent.kickoff(f"Process as business content:\n{self.state.input_text}")
)
flow = ContentFlow()
flow.state.input_text = "Explain how TCP handshakes work"
flow.kickoff()
print(flow.state.result)
```
O decorator `@router()` transforma um método em um ponto de decisão. Ele retorna uma string que corresponde a um listener — sem dicionários de mapeamento, sem funções de roteamento separadas. A lógica de ramificação parece um `if` em Python porque *é* um.
---
## Demo 3: Integrando Crews de Agentes de IA em Flows
É aqui que o verdadeiro poder do CrewAI aparece. Flows não servem apenas para encadear chamadas de LLM — elas orquestram **Crews** completas de agentes autônomos. Isso é algo para o qual o LangGraph simplesmente não tem um equivalente nativo.
```python
from crewai import Agent, Task, Crew
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
class ArticleState(BaseModel):
topic: str = ""
research: str = ""
draft: str = ""
final_article: str = ""
class ArticleFlow(Flow[ArticleState]):
@start()
def run_research_crew(self):
"""A full Crew of agents handles research."""
researcher = Agent(
role="Senior Research Analyst",
goal=f"Produce comprehensive research on: {self.state.topic}",
backstory="You're a veteran analyst known for thorough, "
"well-sourced research reports.",
llm="gpt-4o"
)
research_task = Task(
description=f"Research '{self.state.topic}' thoroughly. "
"Cover key trends, data points, and expert opinions.",
expected_output="A detailed research brief with sources.",
agent=researcher
)
crew = Crew(agents=[researcher], tasks=[research_task])
result = crew.kickoff()
self.state.research = result.raw
return result.raw
@listen(run_research_crew)
def run_writing_crew(self, research_output):
"""A different Crew handles writing."""
writer = Agent(
role="Technical Writer",
goal="Write a compelling article based on provided research.",
backstory="You turn complex research into engaging, clear prose.",
llm="gpt-4o"
)
editor = Agent(
role="Senior Editor",
goal="Review and polish articles for publication quality.",
backstory="20 years of editorial experience at top tech publications.",
llm="gpt-4o"
)
write_task = Task(
description=f"Write an article based on this research:\n{self.state.research}",
expected_output="A well-structured draft article.",
agent=writer
)
edit_task = Task(
description="Review, fact-check, and polish the draft article.",
expected_output="A publication-ready article.",
agent=editor
)
crew = Crew(agents=[writer, editor], tasks=[write_task, edit_task])
result = crew.kickoff()
self.state.final_article = result.raw
return result.raw
# Run the full pipeline
flow = ArticleFlow()
flow.state.topic = "The Future of Edge AI"
flow.kickoff()
print(flow.state.final_article)
```
Este é o insight-chave: **Flows fornecem a camada de orquestração, e Crews fornecem a camada de inteligência.** Cada etapa em um Flow pode subir uma equipe completa de agentes colaborativos, cada um com seus próprios papéis, objetivos e ferramentas. Você obtém fluxo de controle estruturado e previsível *e* colaboração autônoma de agentes — o melhor dos dois mundos.
No LangGraph, alcançar algo similar significa implementar manualmente protocolos de comunicação entre agentes, loops de chamada de ferramentas e lógica de delegação dentro das funções dos nós. É possível, mas é encanamento que você constrói do zero todas as vezes.
---
## Demo 4: Execução Paralela e Sincronização
Pipelines do mundo real frequentemente precisam dividir o trabalho e juntar os resultados. O CrewAI Flows lida com isso de forma elegante com os operadores `and_` e `or_`.
```python
from crewai import LLM
from crewai.flow.flow import Flow, and_, listen, start
from pydantic import BaseModel
llm = LLM(model="openai/gpt-5.2")
class AnalysisState(BaseModel):
topic: str = ""
market_data: str = ""
tech_analysis: str = ""
competitor_intel: str = ""
final_report: str = ""
class ParallelAnalysisFlow(Flow[AnalysisState]):
@start()
def start_method(self):
pass
@listen(start_method)
def gather_market_data(self):
# Your agentic or deterministic code
pass
@listen(start_method)
def run_tech_analysis(self):
# Your agentic or deterministic code
pass
@listen(start_method)
def gather_competitor_intel(self):
# Your agentic or deterministic code
pass
@listen(and_(gather_market_data, run_tech_analysis, gather_competitor_intel))
def synthesize_report(self):
# Your agentic or deterministic code
pass
flow = ParallelAnalysisFlow()
flow.state.topic = "AI-powered developer tools"
flow.kickoff()
```
Os três métodos que escutam `start_method` disparam em paralelo assim que o ponto de entrada termina. O combinador `and_()` no decorator `@listen` garante que `synthesize_report` só execute depois que *todos os três* métodos upstream forem concluídos. Também existe `or_()` para quando você quer prosseguir assim que *qualquer* etapa upstream terminar, como no esboço abaixo.
No LangGraph, você precisaria construir um padrão fan-out/fan-in com ramificações paralelas, um nó de sincronização e uma mesclagem de estado cuidadosa — tudo conectado explicitamente por arestas.
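Um esboço mínimo de `or_()` (métodos e retornos ilustrativos, não fazem parte do exemplo acima):
```python
from crewai.flow.flow import Flow, listen, or_, start


class FallbackFlow(Flow):
    @start()
    def fetch_from_cache(self):
        # Illustrative fast path
        return "cache result"

    @start()
    def fetch_from_api(self):
        # Illustrative slow path
        return "api result"

    @listen(or_(fetch_from_cache, fetch_from_api))
    def handle_first_result(self, result):
        # Runs as soon as EITHER upstream method finishes
        print(f"First result available: {result}")


FallbackFlow().kickoff()
```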
---
## Por que CrewAI Flows em Produção
Além de uma sintaxe mais limpa, Flows entrega várias vantagens críticas para produção:
**Persistência de estado integrada.** O estado do Flow é respaldado pelo LanceDB, o que significa que seus workflows podem sobreviver a falhas, ser retomados e acumular conhecimento entre execuções (veja o esboço de persistência logo após esta lista). No LangGraph, você precisa configurar um checkpointer separado.
**Gerenciamento de estado com segurança de tipos.** Modelos Pydantic oferecem validação, serialização e suporte de IDE prontos para uso. Estados `TypedDict` do LangGraph não validam em runtime.
**Orquestração de agentes de primeira classe.** Crews são um primitivo nativo. Você define agentes com papéis, objetivos, histórias e ferramentas — e eles colaboram de forma autônoma dentro do envelope estruturado de um Flow. Não é preciso reinventar a coordenação multiagente.
**Modelo mental mais simples.** Decorators declaram intenção. `@start` significa "comece aqui". `@listen(x)` significa "execute depois de x". `@router(x)` significa "decida para onde ir depois de x". O código lê como o workflow que ele descreve.
**Integração com CLI.** Execute flows com `crewai run`. Sem etapa de compilação separada, sem serialização de grafo. Seu Flow é uma classe Python, e ele roda como tal.
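Um esboço mínimo de persistência, assumindo o decorator `@persist` de `crewai.flow.persistence` com o backend padrão do CrewAI; consulte a documentação de Flows para configurar outro backend:
```python
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
from pydantic import BaseModel


class ReportState(BaseModel):
    topic: str = ""
    draft: str = ""


@persist()  # assumption: persists state after each step using the default backend
class PersistentFlow(Flow[ReportState]):
    @start()
    def draft_report(self):
        self.state.draft = f"Draft about {self.state.topic}"

    @listen(draft_report)
    def finalize(self):
        # If the process crashes before this step, a restart can resume
        # from the last persisted state instead of starting over.
        return self.state.draft.upper()


flow = PersistentFlow()
flow.state.topic = "edge AI"
flow.kickoff()
```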
---
## Cheat Sheet de Migração
Se você está com uma base de código LangGraph e quer migrar para o CrewAI Flows, aqui vai um guia prático de conversão (há um esqueleto mínimo logo após a lista):
1. **Mapeie seu estado.** Converta seu `TypedDict` para um `BaseModel` do Pydantic. Adicione valores padrão para todos os campos.
2. **Converta nós em métodos.** Cada função de `add_node` vira um método na sua subclasse de `Flow`. Substitua leituras `state["field"]` por `self.state.field`.
3. **Substitua arestas por decorators.** `add_edge(START, "first_node")` vira `@start()` no primeiro método. A sequência `add_edge("a", "b")` vira `@listen(a)` no método `b`.
4. **Substitua arestas condicionais por `@router`.** A função de roteamento e o mapeamento do `add_conditional_edges()` viram um único método `@router()` que retorna a string de rota.
5. **Troque compile + invoke por kickoff.** Remova `graph.compile()`. Chame `flow.kickoff()`.
6. **Considere onde as Crews se encaixam.** Qualquer nó com lógica complexa de agentes em múltiplas etapas é um candidato a extração para uma Crew. É aqui que você verá a maior melhoria de qualidade.
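Um esqueleto ilustrativo que condensa os passos acima (nomes de classes e métodos são hipotéticos):
```python
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel


class MyState(BaseModel):  # Step 1: TypedDict -> Pydantic BaseModel with defaults
    text: str = ""
    route: str = ""


class MyFlow(Flow[MyState]):
    @start()  # Step 3: add_edge(START, "first_node") -> @start()
    def first_node(self):
        self.state.route = "a" if "urgent" in self.state.text else "b"

    @router(first_node)  # Step 4: add_conditional_edges() -> @router()
    def decide(self):
        return "path_a" if self.state.route == "a" else "path_b"

    @listen("path_a")  # Step 3: add_edge("a", "b") -> @listen(...)
    def handle_a(self):
        return "handled urgent case"

    @listen("path_b")
    def handle_b(self):
        return "handled normal case"


MyFlow().kickoff()  # Step 5: graph.compile() + invoke() -> flow.kickoff()
```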
---
## Primeiros Passos
Instale o CrewAI e crie o scaffold de um novo projeto Flow:
```bash
pip install crewai
crewai create flow my_first_flow
cd my_first_flow
```
Isso gera uma estrutura de projeto com uma classe Flow pronta para edição, arquivos de configuração e um `pyproject.toml` com `type = "flow"` já definido. Execute com:
```bash
crewai run
```
A partir daí, adicione seus agentes, conecte seus listeners e publique.
---
## Considerações Finais
O LangGraph ensinou ao ecossistema que workflows de IA precisam de estrutura. Essa foi uma lição importante. Mas o CrewAI Flows pega essa lição e a entrega de um jeito mais rápido de escrever, mais fácil de ler e mais poderoso em produção — especialmente quando seus workflows envolvem múltiplos agentes colaborando.
Se você está construindo algo além de uma cadeia de agente único, dê uma olhada séria no Flows. O modelo baseado em decorators, a integração nativa com Crews e o gerenciamento de estado embutido significam menos tempo com encanamento e mais tempo nos problemas que importam.
Comece com `crewai create flow`. Você não vai olhar para trás.

View File

@@ -7,7 +7,7 @@ mode: "wide"
## Conecte o CrewAI a LLMs
O CrewAI utiliza o LiteLLM para conectar-se a uma grande variedade de Modelos de Linguagem (LLMs). Essa integração proporciona grande versatilidade, permitindo que você utilize modelos de inúmeros provedores por meio de uma interface simples e unificada.
O CrewAI conecta-se a LLMs por meio de integrações nativas via SDK para os provedores mais populares (OpenAI, Anthropic, Google Gemini, Azure e AWS Bedrock), e usa o LiteLLM como alternativa flexível para todos os demais provedores.
<Note>
Por padrão, o CrewAI usa o modelo `gpt-4o-mini`. Isso é determinado pela variável de ambiente `OPENAI_MODEL_NAME`, que tem como padrão "gpt-4o-mini" se não for definida.
@@ -40,6 +40,14 @@ O LiteLLM oferece suporte a uma ampla gama de provedores, incluindo, mas não se
Para uma lista completa e sempre atualizada dos provedores suportados, consulte a [documentação de Provedores do LiteLLM](https://docs.litellm.ai/docs/providers).
<Info>
Para usar qualquer provedor não coberto por uma integração nativa, adicione o LiteLLM como dependência ao seu projeto:
```bash
uv add 'crewai[litellm]'
```
Provedores nativos (OpenAI, Anthropic, Google Gemini, Azure, AWS Bedrock) usam seus próprios extras de SDK — consulte os [Exemplos de Configuração de Provedores](/pt-BR/concepts/llms#exemplos-de-configuração-de-provedores).
</Info>
## Alterando a LLM
Para utilizar uma LLM diferente com seus agentes CrewAI, você tem várias opções:

View File

@@ -11,84 +11,53 @@ mode: "wide"
Composio é uma plataforma de integração que permite conectar seus agentes de IA a mais de 250 ferramentas. Os principais recursos incluem:
- **Autenticação de Nível Empresarial**: Suporte integrado para OAuth, Chaves de API, JWT com atualização automática de token
- **Observabilidade Completa**: Logs detalhados de uso das ferramentas, registros de execução, e muito mais
- **Observabilidade Completa**: Logs detalhados de uso das ferramentas, carimbos de data/hora de execução e muito mais
## Instalação
Para incorporar as ferramentas Composio em seu projeto, siga as instruções abaixo:
```shell
pip install composio-crewai
pip install composio composio-crewai
pip install crewai
```
Após a conclusão da instalação, execute `composio login` ou exporte sua chave de API do composio como `COMPOSIO_API_KEY`. Obtenha sua chave de API Composio [aqui](https://app.composio.dev)
Após concluir a instalação, defina sua chave de API do Composio como `COMPOSIO_API_KEY`. Obtenha sua chave de API do Composio [aqui](https://platform.composio.dev)
## Exemplo
O exemplo a seguir demonstra como inicializar a ferramenta e executar uma ação do github:
O exemplo a seguir demonstra como inicializar a ferramenta e executar uma ação do GitHub:
1. Inicialize o conjunto de ferramentas Composio
1. Inicialize o Composio com o Provider do CrewAI
```python Code
from composio_crewai import ComposioToolSet, App, Action
from composio_crewai import ComposioProvider
from composio import Composio
from crewai import Agent, Task, Crew
toolset = ComposioToolSet()
composio = Composio(provider=ComposioProvider())
```
2. Conecte sua conta do GitHub
2. Crie uma nova sessão Composio e recupere as ferramentas
<CodeGroup>
```shell CLI
composio add github
```
```python Code
request = toolset.initiate_connection(app=App.GITHUB)
print(f"Open this URL to authenticate: {request.redirectUrl}")
```python
session = composio.create(
user_id="your-user-id",
toolkits=["gmail", "github"] # optional, default is all toolkits
)
tools = session.tools()
```
Leia mais sobre sessões e gerenciamento de usuários [aqui](https://docs.composio.dev/docs/configuring-sessions)
</CodeGroup>
3. Obtenha ferramentas
3. Autenticação manual dos usuários
- Recuperando todas as ferramentas de um app (não recomendado em produção):
O Composio autentica automaticamente os usuários durante a sessão de chat do agente. No entanto, você também pode autenticar o usuário manualmente chamando o método `authorize`.
```python Code
tools = toolset.get_tools(apps=[App.GITHUB])
connection_request = session.authorize("github")
print(f"Open this URL to authenticate: {connection_request.redirect_url}")
```
- Filtrando ferramentas com base em tags:
```python Code
tag = "users"
filtered_action_enums = toolset.find_actions_by_tags(
App.GITHUB,
tags=[tag],
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
- Filtrando ferramentas com base no caso de uso:
```python Code
use_case = "Star a repository on GitHub"
filtered_action_enums = toolset.find_actions_by_use_case(
App.GITHUB, use_case=use_case, advanced=False
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
<Tip>Defina `advanced` como True para obter ações para casos de uso complexos</Tip>
- Usando ferramentas específicas:
Neste exemplo, usaremos a ação `GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER` do app GitHub.
```python Code
tools = toolset.get_tools(
actions=[Action.GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER]
)
```
Saiba mais sobre como filtrar ações [aqui](https://docs.composio.dev/patterns/tools/use-tools/use-specific-actions)
4. Defina o agente
```python Code
@@ -116,4 +85,4 @@ crew = Crew(agents=[crewai_agent], tasks=[task])
crew.kickoff()
```
* Uma lista mais detalhada de ferramentas pode ser encontrada [aqui](https://app.composio.dev)
* Uma lista mais detalhada de ferramentas pode ser encontrada [aqui](https://docs.composio.dev/toolkits)

View File

@@ -8,8 +8,8 @@ authors = [
]
requires-python = ">=3.10, <3.14"
dependencies = [
"Pillow~=10.4.0",
"pypdf~=4.0.0",
"Pillow~=12.1.1",
"pypdf~=6.7.5",
"python-magic>=0.4.27",
"aiocache~=0.12.3",
"aiofiles~=24.1.0",

View File

@@ -152,4 +152,4 @@ __all__ = [
"wrap_file_source",
]
__version__ = "1.9.3"
__version__ = "1.10.1"

View File

@@ -8,12 +8,10 @@ authors = [
]
requires-python = ">=3.10, <3.14"
dependencies = [
"lancedb~=0.5.4",
"pytube~=15.0.0",
"requests~=2.32.5",
"docker~=7.1.0",
"crewai==1.9.3",
"lancedb~=0.5.4",
"crewai==1.10.1",
"tiktoken~=0.8.0",
"beautifulsoup4~=4.13.4",
"python-docx~=1.2.0",

View File

@@ -291,4 +291,4 @@ __all__ = [
"ZapierActionTools",
]
__version__ = "1.9.3"
__version__ = "1.10.1"

View File

@@ -10,6 +10,7 @@ from pydantic import BaseModel, Field
from pydantic.types import StringConstraints
import requests
load_dotenv()

View File

@@ -1,7 +1,7 @@
import os
from crewai import Agent, Crew, Task
from multion_tool import MultiOnTool # type: ignore[import-not-found]
from multion_tool import MultiOnTool # type: ignore[import-not-found]
os.environ["OPENAI_API_KEY"] = "Your Key"

View File

@@ -17,11 +17,11 @@ Usage:
import os
from crewai import Agent, Crew, Process, Task
from crewai.utilities.printer import Printer
from dotenv import load_dotenv
from stagehand.schemas import AvailableModel # type: ignore[import-untyped]
from crewai import Agent, Crew, Process, Task
from crewai_tools import StagehandTool

View File

@@ -20117,18 +20117,6 @@
"humanized_name": "Web Automation Tool",
"init_params_schema": {
"$defs": {
"AvailableModel": {
"enum": [
"gpt-4o",
"gpt-4o-mini",
"claude-3-5-sonnet-latest",
"claude-3-7-sonnet-latest",
"computer-use-preview",
"gemini-2.0-flash"
],
"title": "AvailableModel",
"type": "string"
},
"EnvVar": {
"properties": {
"default": {
@@ -20206,17 +20194,6 @@
"default": null,
"title": "Model Api Key"
},
"model_name": {
"anyOf": [
{
"$ref": "#/$defs/AvailableModel"
},
{
"type": "null"
}
],
"default": "claude-3-7-sonnet-latest"
},
"project_id": {
"anyOf": [
{

View File

@@ -21,7 +21,7 @@ dependencies = [
"opentelemetry-exporter-otlp-proto-http~=1.34.0",
# Data Handling
"chromadb~=1.1.0",
"tokenizers~=0.20.3",
"tokenizers>=0.21,<1",
"openpyxl~=3.1.5",
# Authentication and Security
"python-dotenv~=1.1.1",
@@ -42,7 +42,7 @@ dependencies = [
"mcp~=1.26.0",
"uv~=0.9.13",
"aiosqlite~=0.21.0",
"lancedb>=0.4.0",
"lancedb>=0.29.2",
]
[project.urls]
@@ -53,7 +53,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
"crewai-tools==1.9.3",
"crewai-tools==1.10.1",
]
embeddings = [
"tiktoken~=0.8.0"
@@ -66,7 +66,7 @@ openpyxl = [
]
mem0 = ["mem0ai~=0.1.94"]
docling = [
"docling~=2.63.0",
"docling~=2.75.0",
]
qdrant = [
"qdrant-client[fastembed]~=1.14.3",
@@ -88,7 +88,7 @@ bedrock = [
"boto3~=1.40.45",
]
google-genai = [
"google-genai~=1.49.0",
"google-genai~=1.65.0",
]
azure-ai-inference = [
"azure-ai-inference~=1.0.0b9",

View File

@@ -10,7 +10,6 @@ from crewai.flow.flow import Flow
from crewai.knowledge.knowledge import Knowledge
from crewai.llm import LLM
from crewai.llms.base_llm import BaseLLM
from crewai.memory.unified_memory import Memory
from crewai.process import Process
from crewai.task import Task
from crewai.tasks.llm_guardrail import LLMGuardrail
@@ -41,7 +40,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
__version__ = "1.9.3"
__version__ = "1.10.1"
_telemetry_submitted = False
@@ -72,6 +71,25 @@ def _track_install_async() -> None:
_track_install_async()
_LAZY_IMPORTS: dict[str, tuple[str, str]] = {
"Memory": ("crewai.memory.unified_memory", "Memory"),
}
def __getattr__(name: str) -> Any:
"""Lazily import heavy modules (e.g. Memory → lancedb) on first access."""
if name in _LAZY_IMPORTS:
module_path, attr = _LAZY_IMPORTS[name]
import importlib
mod = importlib.import_module(module_path)
val = getattr(mod, attr)
globals()[name] = val
return val
raise AttributeError(f"module 'crewai' has no attribute {name!r}")
__all__ = [
"LLM",
"Agent",

View File

@@ -4,6 +4,7 @@ from __future__ import annotations
import asyncio
from collections.abc import MutableMapping
import concurrent.futures
from functools import lru_cache
import ssl
import time
@@ -138,14 +139,17 @@ def fetch_agent_card(
ttl_hash = int(time.time() // cache_ttl)
return _fetch_agent_card_cached(endpoint, auth_hash, timeout, ttl_hash)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
coro = afetch_agent_card(endpoint=endpoint, auth=auth, timeout=timeout)
try:
return loop.run_until_complete(
afetch_agent_card(endpoint=endpoint, auth=auth, timeout=timeout)
)
finally:
loop.close()
asyncio.get_running_loop()
has_running_loop = True
except RuntimeError:
has_running_loop = False
if has_running_loop:
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
return pool.submit(asyncio.run, coro).result()
return asyncio.run(coro)
async def afetch_agent_card(
@@ -203,14 +207,17 @@ def _fetch_agent_card_cached(
"""Cached sync version of fetch_agent_card."""
auth = _auth_store.get(auth_hash)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
coro = _afetch_agent_card_impl(endpoint=endpoint, auth=auth, timeout=timeout)
try:
return loop.run_until_complete(
_afetch_agent_card_impl(endpoint=endpoint, auth=auth, timeout=timeout)
)
finally:
loop.close()
asyncio.get_running_loop()
has_running_loop = True
except RuntimeError:
has_running_loop = False
if has_running_loop:
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
return pool.submit(asyncio.run, coro).result()
return asyncio.run(coro)
@cached(ttl=300, serializer=PickleSerializer()) # type: ignore[untyped-decorator]

View File

@@ -5,6 +5,7 @@ from __future__ import annotations
import asyncio
import base64
from collections.abc import AsyncIterator, Callable, MutableMapping
import concurrent.futures
from contextlib import asynccontextmanager
import logging
from typing import TYPE_CHECKING, Any, Final, Literal
@@ -194,56 +195,43 @@ def execute_a2a_delegation(
Returns:
TaskStateResult with status, result/error, history, and agent_card.
Raises:
RuntimeError: If called from an async context with a running event loop.
"""
coro = aexecute_a2a_delegation(
endpoint=endpoint,
auth=auth,
timeout=timeout,
task_description=task_description,
context=context,
context_id=context_id,
task_id=task_id,
reference_task_ids=reference_task_ids,
metadata=metadata,
extensions=extensions,
conversation_history=conversation_history,
agent_id=agent_id,
agent_role=agent_role,
agent_branch=agent_branch,
response_model=response_model,
turn_number=turn_number,
updates=updates,
from_task=from_task,
from_agent=from_agent,
skill_id=skill_id,
client_extensions=client_extensions,
transport=transport,
accepted_output_modes=accepted_output_modes,
input_files=input_files,
)
try:
asyncio.get_running_loop()
raise RuntimeError(
"execute_a2a_delegation() cannot be called from an async context. "
"Use 'await aexecute_a2a_delegation()' instead."
)
except RuntimeError as e:
if "no running event loop" not in str(e).lower():
raise
has_running_loop = True
except RuntimeError:
has_running_loop = False
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
return loop.run_until_complete(
aexecute_a2a_delegation(
endpoint=endpoint,
auth=auth,
timeout=timeout,
task_description=task_description,
context=context,
context_id=context_id,
task_id=task_id,
reference_task_ids=reference_task_ids,
metadata=metadata,
extensions=extensions,
conversation_history=conversation_history,
agent_id=agent_id,
agent_role=agent_role,
agent_branch=agent_branch,
response_model=response_model,
turn_number=turn_number,
updates=updates,
from_task=from_task,
from_agent=from_agent,
skill_id=skill_id,
client_extensions=client_extensions,
transport=transport,
accepted_output_modes=accepted_output_modes,
input_files=input_files,
)
)
finally:
try:
loop.run_until_complete(loop.shutdown_asyncgens())
finally:
loop.close()
if has_running_loop:
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
return pool.submit(asyncio.run, coro).result()
return asyncio.run(coro)
async def aexecute_a2a_delegation(

View File

@@ -8,11 +8,9 @@ import time
from typing import (
TYPE_CHECKING,
Any,
Final,
Literal,
cast,
)
from urllib.parse import urlparse
from pydantic import (
BaseModel,
@@ -61,16 +59,8 @@ from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.lite_agent_output import LiteAgentOutput
from crewai.llms.base_llm import BaseLLM
from crewai.mcp import (
MCPClient,
MCPServerConfig,
MCPServerHTTP,
MCPServerSSE,
MCPServerStdio,
)
from crewai.mcp.transports.http import HTTPTransport
from crewai.mcp.transports.sse import SSETransport
from crewai.mcp.transports.stdio import StdioTransport
from crewai.mcp import MCPServerConfig
from crewai.mcp.tool_resolver import MCPToolResolver
from crewai.rag.embeddings.types import EmbedderConfig
from crewai.security.fingerprint import Fingerprint
from crewai.tools.agent_tools.agent_tools import AgentTools
@@ -111,18 +101,8 @@ if TYPE_CHECKING:
from crewai.utilities.types import LLMMessage
# MCP Connection timeout constants (in seconds)
MCP_CONNECTION_TIMEOUT: Final[int] = 10
MCP_TOOL_EXECUTION_TIMEOUT: Final[int] = 30
MCP_DISCOVERY_TIMEOUT: Final[int] = 15
MCP_MAX_RETRIES: Final[int] = 3
_passthrough_exceptions: tuple[type[Exception], ...] = ()
# Simple in-memory cache for MCP tool schemas (duration: 5 minutes)
_mcp_schema_cache: dict[str, Any] = {}
_cache_ttl: Final[int] = 300 # 5 minutes
class Agent(BaseAgent):
"""Represents an agent in a system.
@@ -154,7 +134,7 @@ class Agent(BaseAgent):
model_config = ConfigDict()
_times_executed: int = PrivateAttr(default=0)
_mcp_clients: list[Any] = PrivateAttr(default_factory=list)
_mcp_resolver: MCPToolResolver | None = PrivateAttr(default=None)
_last_messages: list[LLMMessage] = PrivateAttr(default_factory=list)
max_execution_time: int | None = Field(
default=None,
@@ -384,10 +364,10 @@ class Agent(BaseAgent):
)
if unified_memory is not None:
query = task.description
matches = unified_memory.recall(query, limit=10)
matches = unified_memory.recall(query, limit=5)
if matches:
memory = "Relevant memories:\n" + "\n".join(
f"- {m.record.content}" for m in matches
m.format() for m in matches
)
if memory.strip() != "":
task_prompt += self.i18n.slice("memory").format(memory=memory)
@@ -622,10 +602,10 @@ class Agent(BaseAgent):
)
if unified_memory is not None:
query = task.description
matches = unified_memory.recall(query, limit=10)
matches = unified_memory.recall(query, limit=5)
if matches:
memory = "Relevant memories:\n" + "\n".join(
f"- {m.record.content}" for m in matches
m.format() for m in matches
)
if memory.strip() != "":
task_prompt += self.i18n.slice("memory").format(memory=memory)
@@ -864,7 +844,11 @@ class Agent(BaseAgent):
respect_context_window=self.respect_context_window,
request_within_rpm_limit=rpm_limit_fn,
callbacks=[TokenCalcHandler(self._token_process)],
response_model=task.response_model if task else None,
response_model=(
task.response_model or task.output_pydantic or task.output_json
)
if task
else None,
)
def _update_executor_parameters(
@@ -893,7 +877,11 @@ class Agent(BaseAgent):
self.agent_executor.stop = stop_words
self.agent_executor.tools_names = get_tool_names(tools)
self.agent_executor.tools_description = render_text_description_and_args(tools)
self.agent_executor.response_model = task.response_model if task else None
self.agent_executor.response_model = (
(task.response_model or task.output_pydantic or task.output_json)
if task
else None
)
self.agent_executor.tools_handler = self.tools_handler
self.agent_executor.request_within_rpm_limit = rpm_limit_fn
@@ -926,544 +914,17 @@ class Agent(BaseAgent):
def get_mcp_tools(self, mcps: list[str | MCPServerConfig]) -> list[BaseTool]:
"""Convert MCP server references/configs to CrewAI tools.
Supports both string references (backwards compatible) and structured
configuration objects (MCPServerStdio, MCPServerHTTP, MCPServerSSE).
Args:
mcps: List of MCP server references (strings) or configurations.
Returns:
List of BaseTool instances from MCP servers.
Delegates to :class:`~crewai.mcp.tool_resolver.MCPToolResolver`.
"""
all_tools = []
clients = []
for mcp_config in mcps:
if isinstance(mcp_config, str):
tools = self._get_mcp_tools_from_string(mcp_config)
else:
tools, client = self._get_native_mcp_tools(mcp_config)
if client:
clients.append(client)
all_tools.extend(tools)
# Store clients for cleanup
self._mcp_clients.extend(clients)
return all_tools
self._cleanup_mcp_clients()
self._mcp_resolver = MCPToolResolver(agent=self, logger=self._logger)
return self._mcp_resolver.resolve(mcps)
def _cleanup_mcp_clients(self) -> None:
"""Cleanup MCP client connections after task execution."""
if not self._mcp_clients:
return
async def _disconnect_all() -> None:
for client in self._mcp_clients:
if client and hasattr(client, "connected") and client.connected:
await client.disconnect()
try:
asyncio.run(_disconnect_all())
except Exception as e:
self._logger.log("error", f"Error during MCP client cleanup: {e}")
finally:
self._mcp_clients.clear()
def _get_mcp_tools_from_string(self, mcp_ref: str) -> list[BaseTool]:
"""Get tools from legacy string-based MCP references.
This method maintains backwards compatibility with string-based
MCP references (https://... and crewai-amp:...).
Args:
mcp_ref: String reference to MCP server.
Returns:
List of BaseTool instances.
"""
if mcp_ref.startswith("crewai-amp:"):
return self._get_amp_mcp_tools(mcp_ref)
if mcp_ref.startswith("https://"):
return self._get_external_mcp_tools(mcp_ref)
return []
def _get_external_mcp_tools(self, mcp_ref: str) -> list[BaseTool]:
"""Get tools from external HTTPS MCP server with graceful error handling."""
from crewai.tools.mcp_tool_wrapper import MCPToolWrapper
# Parse server URL and optional tool name
if "#" in mcp_ref:
server_url, specific_tool = mcp_ref.split("#", 1)
else:
server_url, specific_tool = mcp_ref, None
server_params = {"url": server_url}
server_name = self._extract_server_name(server_url)
try:
# Get tool schemas with timeout and error handling
tool_schemas = self._get_mcp_tool_schemas(server_params)
if not tool_schemas:
self._logger.log(
"warning", f"No tools discovered from MCP server: {server_url}"
)
return []
tools = []
for tool_name, schema in tool_schemas.items():
# Skip if specific tool requested and this isn't it
if specific_tool and tool_name != specific_tool:
continue
try:
wrapper = MCPToolWrapper(
mcp_server_params=server_params,
tool_name=tool_name,
tool_schema=schema,
server_name=server_name,
)
tools.append(wrapper)
except Exception as e:
self._logger.log(
"warning",
f"Failed to create MCP tool wrapper for {tool_name}: {e}",
)
continue
if specific_tool and not tools:
self._logger.log(
"warning",
f"Specific tool '{specific_tool}' not found on MCP server: {server_url}",
)
return cast(list[BaseTool], tools)
except Exception as e:
self._logger.log(
"warning", f"Failed to connect to MCP server {server_url}: {e}"
)
return []
def _get_native_mcp_tools(
self, mcp_config: MCPServerConfig
) -> tuple[list[BaseTool], Any | None]:
"""Get tools from MCP server using structured configuration.
This method creates an MCP client based on the configuration type,
connects to the server, discovers tools, applies filtering, and
returns wrapped tools along with the client instance for cleanup.
Args:
mcp_config: MCP server configuration (MCPServerStdio, MCPServerHTTP, or MCPServerSSE).
Returns:
Tuple of (list of BaseTool instances, MCPClient instance for cleanup).
"""
from crewai.tools.base_tool import BaseTool
from crewai.tools.mcp_native_tool import MCPNativeTool
transport: StdioTransport | HTTPTransport | SSETransport
if isinstance(mcp_config, MCPServerStdio):
transport = StdioTransport(
command=mcp_config.command,
args=mcp_config.args,
env=mcp_config.env,
)
server_name = f"{mcp_config.command}_{'_'.join(mcp_config.args)}"
elif isinstance(mcp_config, MCPServerHTTP):
transport = HTTPTransport(
url=mcp_config.url,
headers=mcp_config.headers,
streamable=mcp_config.streamable,
)
server_name = self._extract_server_name(mcp_config.url)
elif isinstance(mcp_config, MCPServerSSE):
transport = SSETransport(
url=mcp_config.url,
headers=mcp_config.headers,
)
server_name = self._extract_server_name(mcp_config.url)
else:
raise ValueError(f"Unsupported MCP server config type: {type(mcp_config)}")
client = MCPClient(
transport=transport,
cache_tools_list=mcp_config.cache_tools_list,
)
async def _setup_client_and_list_tools() -> list[dict[str, Any]]:
"""Async helper to connect and list tools in same event loop."""
try:
if not client.connected:
await client.connect()
tools_list = await client.list_tools()
try:
await client.disconnect()
# Small delay to allow background tasks to finish cleanup
# This helps prevent "cancel scope in different task" errors
# when asyncio.run() closes the event loop
await asyncio.sleep(0.1)
except Exception as e:
self._logger.log("error", f"Error during disconnect: {e}")
return tools_list
except Exception as e:
if client.connected:
await client.disconnect()
await asyncio.sleep(0.1)
raise RuntimeError(
f"Error during setup client and list tools: {e}"
) from e
try:
try:
asyncio.get_running_loop()
import concurrent.futures
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(
asyncio.run, _setup_client_and_list_tools()
)
tools_list = future.result()
except RuntimeError:
try:
tools_list = asyncio.run(_setup_client_and_list_tools())
except RuntimeError as e:
error_msg = str(e).lower()
if "cancel scope" in error_msg or "task" in error_msg:
raise ConnectionError(
"MCP connection failed due to event loop cleanup issues. "
"This may be due to authentication errors or server unavailability."
) from e
except asyncio.CancelledError as e:
raise ConnectionError(
"MCP connection was cancelled. This may indicate an authentication "
"error or server unavailability."
) from e
if mcp_config.tool_filter:
filtered_tools = []
for tool in tools_list:
if callable(mcp_config.tool_filter):
try:
from crewai.mcp.filters import ToolFilterContext
context = ToolFilterContext(
agent=self,
server_name=server_name,
run_context=None,
)
if mcp_config.tool_filter(context, tool): # type: ignore[call-arg, arg-type]
filtered_tools.append(tool)
except (TypeError, AttributeError):
if mcp_config.tool_filter(tool): # type: ignore[call-arg, arg-type]
filtered_tools.append(tool)
else:
# Not callable - include tool
filtered_tools.append(tool)
tools_list = filtered_tools
tools = []
for tool_def in tools_list:
tool_name = tool_def.get("name", "")
if not tool_name:
continue
# Convert inputSchema to Pydantic model if present
args_schema = None
if tool_def.get("inputSchema"):
args_schema = self._json_schema_to_pydantic(
tool_name, tool_def["inputSchema"]
)
tool_schema = {
"description": tool_def.get("description", ""),
"args_schema": args_schema,
}
try:
native_tool = MCPNativeTool(
mcp_client=client,
tool_name=tool_name,
tool_schema=tool_schema,
server_name=server_name,
)
tools.append(native_tool)
except Exception as e:
self._logger.log("error", f"Failed to create native MCP tool: {e}")
continue
return cast(list[BaseTool], tools), client
except Exception as e:
if client.connected:
asyncio.run(client.disconnect())
raise RuntimeError(f"Failed to get native MCP tools: {e}") from e
def _get_amp_mcp_tools(self, amp_ref: str) -> list[BaseTool]:
"""Get tools from CrewAI AMP MCP marketplace."""
# Parse: "crewai-amp:mcp-name" or "crewai-amp:mcp-name#tool_name"
amp_part = amp_ref.replace("crewai-amp:", "")
if "#" in amp_part:
mcp_name, specific_tool = amp_part.split("#", 1)
else:
mcp_name, specific_tool = amp_part, None
# Call AMP API to get MCP server URLs
mcp_servers = self._fetch_amp_mcp_servers(mcp_name)
tools = []
for server_config in mcp_servers:
server_ref = server_config["url"]
if specific_tool:
server_ref += f"#{specific_tool}"
server_tools = self._get_external_mcp_tools(server_ref)
tools.extend(server_tools)
return tools
@staticmethod
def _extract_server_name(server_url: str) -> str:
"""Extract clean server name from URL for tool prefixing."""
parsed = urlparse(server_url)
domain = parsed.netloc.replace(".", "_")
path = parsed.path.replace("/", "_").strip("_")
return f"{domain}_{path}" if path else domain
def _get_mcp_tool_schemas(
self, server_params: dict[str, Any]
) -> dict[str, dict[str, Any]]:
"""Get tool schemas from MCP server for wrapper creation with caching."""
server_url = server_params["url"]
# Check cache first
cache_key = server_url
current_time = time.time()
if cache_key in _mcp_schema_cache:
cached_data, cache_time = _mcp_schema_cache[cache_key]
if current_time - cache_time < _cache_ttl:
self._logger.log(
"debug", f"Using cached MCP tool schemas for {server_url}"
)
return cached_data # type: ignore[no-any-return]
try:
schemas = asyncio.run(self._get_mcp_tool_schemas_async(server_params))
# Cache successful results
_mcp_schema_cache[cache_key] = (schemas, current_time)
return schemas
except Exception as e:
# Log warning but don't raise - this allows graceful degradation
self._logger.log(
"warning", f"Failed to get MCP tool schemas from {server_url}: {e}"
)
return {}
async def _get_mcp_tool_schemas_async(
self, server_params: dict[str, Any]
) -> dict[str, dict[str, Any]]:
"""Async implementation of MCP tool schema retrieval with timeouts and retries."""
server_url = server_params["url"]
return await self._retry_mcp_discovery(
self._discover_mcp_tools_with_timeout, server_url
)
async def _retry_mcp_discovery(
self, operation_func: Any, server_url: str
) -> dict[str, dict[str, Any]]:
"""Retry MCP discovery operation with exponential backoff, avoiding try-except in loop."""
last_error = None
for attempt in range(MCP_MAX_RETRIES):
# Execute single attempt outside try-except loop structure
result, error, should_retry = await self._attempt_mcp_discovery(
operation_func, server_url
)
# Success case - return immediately
if result is not None:
return result
# Non-retryable error - raise immediately
if not should_retry:
raise RuntimeError(error)
# Retryable error - continue with backoff
last_error = error
if attempt < MCP_MAX_RETRIES - 1:
wait_time = 2**attempt # Exponential backoff
await asyncio.sleep(wait_time)
raise RuntimeError(
f"Failed to discover MCP tools after {MCP_MAX_RETRIES} attempts: {last_error}"
)
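For reference, the backoff schedule this produces with MCP_MAX_RETRIES = 3 (a small arithmetic sketch, not part of the module):

MCP_MAX_RETRIES = 3

# wait_time = 2 ** attempt, and the sleep only happens between attempts,
# so three attempts wait 1s and then 2s before giving up.
waits = [2**attempt for attempt in range(MCP_MAX_RETRIES - 1)]
assert waits == [1, 2]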
@staticmethod
async def _attempt_mcp_discovery(
operation_func: Any, server_url: str
) -> tuple[dict[str, dict[str, Any]] | None, str, bool]:
"""Attempt single MCP discovery operation and return (result, error_message, should_retry)."""
try:
result = await operation_func(server_url)
return result, "", False
except ImportError:
return (
None,
"MCP library not available. Please install with: pip install mcp",
False,
)
except asyncio.TimeoutError:
return (
None,
f"MCP discovery timed out after {MCP_DISCOVERY_TIMEOUT} seconds",
True,
)
except Exception as e:
error_str = str(e).lower()
# Classify errors as retryable or non-retryable
if "authentication" in error_str or "unauthorized" in error_str:
return None, f"Authentication failed for MCP server: {e!s}", False
if "connection" in error_str or "network" in error_str:
return None, f"Network connection failed: {e!s}", True
if "json" in error_str or "parsing" in error_str:
return None, f"Server response parsing error: {e!s}", True
return None, f"MCP discovery error: {e!s}", False
async def _discover_mcp_tools_with_timeout(
self, server_url: str
) -> dict[str, dict[str, Any]]:
"""Discover MCP tools with timeout wrapper."""
return await asyncio.wait_for(
self._discover_mcp_tools(server_url), timeout=MCP_DISCOVERY_TIMEOUT
)
async def _discover_mcp_tools(self, server_url: str) -> dict[str, dict[str, Any]]:
"""Discover tools from MCP server with proper timeout handling."""
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
async with streamablehttp_client(server_url) as (read, write, _):
async with ClientSession(read, write) as session:
# Initialize the connection with timeout
await asyncio.wait_for(
session.initialize(), timeout=MCP_CONNECTION_TIMEOUT
)
# List available tools with timeout
tools_result = await asyncio.wait_for(
session.list_tools(),
timeout=MCP_DISCOVERY_TIMEOUT - MCP_CONNECTION_TIMEOUT,
)
schemas = {}
for tool in tools_result.tools:
args_schema = None
if hasattr(tool, "inputSchema") and tool.inputSchema:
args_schema = self._json_schema_to_pydantic(
sanitize_tool_name(tool.name), tool.inputSchema
)
schemas[sanitize_tool_name(tool.name)] = {
"description": getattr(tool, "description", ""),
"args_schema": args_schema,
}
return schemas
def _json_schema_to_pydantic(
self, tool_name: str, json_schema: dict[str, Any]
) -> type:
"""Convert JSON Schema to Pydantic model for tool arguments.
Args:
tool_name: Name of the tool (used for model naming)
json_schema: JSON Schema dict with 'properties', 'required', etc.
Returns:
Pydantic BaseModel class
"""
from pydantic import Field, create_model
properties = json_schema.get("properties", {})
required_fields = json_schema.get("required", [])
field_definitions: dict[str, Any] = {}
for field_name, field_schema in properties.items():
field_type = self._json_type_to_python(field_schema)
field_description = field_schema.get("description", "")
is_required = field_name in required_fields
if is_required:
field_definitions[field_name] = (
field_type,
Field(..., description=field_description),
)
else:
field_definitions[field_name] = (
field_type | None,
Field(default=None, description=field_description),
)
model_name = f"{tool_name.replace('-', '_').replace(' ', '_')}Schema"
return create_model(model_name, **field_definitions) # type: ignore[no-any-return]
def _json_type_to_python(self, field_schema: dict[str, Any]) -> type:
"""Convert JSON Schema type to Python type.
Args:
field_schema: JSON Schema field definition
Returns:
Python type
"""
json_type = field_schema.get("type")
if "anyOf" in field_schema:
types: list[type] = []
for option in field_schema["anyOf"]:
if "const" in option:
types.append(str)
else:
types.append(self._json_type_to_python(option))
unique_types = list(set(types))
if len(unique_types) > 1:
result: Any = unique_types[0]
for t in unique_types[1:]:
result = result | t
return result # type: ignore[no-any-return]
return unique_types[0]
type_mapping: dict[str | None, type] = {
"string": str,
"number": float,
"integer": int,
"boolean": bool,
"array": list,
"object": dict,
}
return type_mapping.get(json_type, Any)
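A minimal standalone sketch of the same JSON-Schema-to-Pydantic conversion, assuming a hypothetical tool schema with one required and one optional field (the real method additionally handles anyOf unions):

from pydantic import Field, create_model

json_schema = {
    "properties": {
        "query": {"type": "string", "description": "Search query"},
        "limit": {"type": "integer", "description": "Max results"},
    },
    "required": ["query"],
}
type_map = {"string": str, "integer": int, "number": float, "boolean": bool}

fields: dict = {}
for name, prop in json_schema["properties"].items():
    py_type = type_map.get(prop.get("type"), str)
    if name in json_schema.get("required", []):
        fields[name] = (py_type, Field(..., description=prop.get("description", "")))
    else:
        fields[name] = (py_type | None, Field(default=None, description=prop.get("description", "")))

SearchToolSchema = create_model("search_toolSchema", **fields)
assert SearchToolSchema(query="crewai").model_dump() == {"query": "crewai", "limit": None}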
@staticmethod
def _fetch_amp_mcp_servers(mcp_name: str) -> list[dict[str, Any]]:
"""Fetch MCP server configurations from CrewAI AMP API."""
# TODO: Implement AMP API call to "integrations/mcps" endpoint
# Should return list of server configs with URLs
return []
if self._mcp_resolver is not None:
self._mcp_resolver.cleanup()
self._mcp_resolver = None
@staticmethod
def get_multimodal_tools() -> Sequence[BaseTool]:
@@ -1695,11 +1156,15 @@ class Agent(BaseAgent):
# Process platform apps and MCP tools
if self.apps:
platform_tools = self.get_platform_tools(self.apps)
if platform_tools and self.tools is not None:
if platform_tools:
if self.tools is None:
self.tools = []
self.tools.extend(platform_tools)
if self.mcps:
mcps = self.get_mcp_tools(self.mcps)
if mcps and self.tools is not None:
if mcps:
if self.tools is None:
self.tools = []
self.tools.extend(mcps)
# Prepare tools
@@ -1712,7 +1177,8 @@ class Agent(BaseAgent):
existing_names = {sanitize_tool_name(t.name) for t in raw_tools}
raw_tools.extend(
mt for mt in create_memory_tools(agent_memory)
mt
for mt in create_memory_tools(agent_memory)
if sanitize_tool_name(mt.name) not in existing_names
)
@@ -1802,11 +1268,11 @@ class Agent(BaseAgent):
),
)
start_time = time.time()
matches = agent_memory.recall(formatted_messages, limit=10)
matches = agent_memory.recall(formatted_messages, limit=20)
memory_block = ""
if matches:
memory_block = "Relevant memories:\n" + "\n".join(
f"- {m.record.content}" for m in matches
m.format() for m in matches
)
if memory_block:
formatted_messages += "\n\n" + self.i18n.slice("memory").format(
@@ -1937,14 +1403,15 @@ class Agent(BaseAgent):
if isinstance(messages, str):
input_str = messages
else:
input_str = "\n".join(
str(msg.get("content", "")) for msg in messages if msg.get("content")
) or "User request"
raw = (
f"Input: {input_str}\n"
f"Agent: {self.role}\n"
f"Result: {output_text}"
)
input_str = (
"\n".join(
str(msg.get("content", ""))
for msg in messages
if msg.get("content")
)
or "User request"
)
raw = f"Input: {input_str}\nAgent: {self.role}\nResult: {output_text}"
extracted = agent_memory.extract_memories(raw)
if extracted:
agent_memory.remember_many(extracted)

View File

@@ -4,7 +4,8 @@ from abc import ABC, abstractmethod
from collections.abc import Callable
from copy import copy as shallow_copy
from hashlib import md5
from typing import Any, Literal
import re
from typing import Any, Final, Literal
import uuid
from pydantic import (
@@ -36,6 +37,11 @@ from crewai.utilities.rpm_controller import RPMController
from crewai.utilities.string_utils import interpolate_only
_SLUG_RE: Final[re.Pattern[str]] = re.compile(
r"^(?:crewai-amp:)?[a-zA-Z0-9][a-zA-Z0-9_-]*(?:#\w+)?$"
)
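Illustrative check of what the new slug pattern accepts; 'https://' URLs bypass the regex and are validated separately below:

import re

_SLUG_RE = re.compile(r"^(?:crewai-amp:)?[a-zA-Z0-9][a-zA-Z0-9_-]*(?:#\w+)?$")

accepted = ["notion", "notion#search", "crewai-amp:notion", "my-server_1"]
rejected = ["-leading-dash", "has space", "notion#bad-tool"]
assert all(_SLUG_RE.match(s) for s in accepted)
assert not any(_SLUG_RE.match(s) for s in rejected)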
PlatformApp = Literal[
"asana",
"box",
@@ -197,7 +203,7 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
)
mcps: list[str | MCPServerConfig] | None = Field(
default=None,
description="List of MCP server references. Supports 'https://server.com/path' for external servers and 'crewai-amp:mcp-name' for AMP marketplace. Use '#tool_name' suffix for specific tools.",
description="List of MCP server references. Supports 'https://server.com/path' for external servers and bare slugs like 'notion' for connected MCP integrations. Use '#tool_name' suffix for specific tools.",
)
memory: Any = Field(
default=None,
@@ -276,14 +282,16 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
validated_mcps: list[str | MCPServerConfig] = []
for mcp in mcps:
if isinstance(mcp, str):
if mcp.startswith(("https://", "crewai-amp:")):
if mcp.startswith("https://"):
validated_mcps.append(mcp)
elif _SLUG_RE.match(mcp):
validated_mcps.append(mcp)
else:
raise ValueError(
f"Invalid MCP reference: {mcp}. "
"String references must start with 'https://' or 'crewai-amp:'"
f"Invalid MCP reference: {mcp!r}. "
"String references must be an 'https://' URL or a valid "
"slug (e.g. 'notion', 'notion#search', 'crewai-amp:notion')."
)
elif isinstance(mcp, (MCPServerConfig)):
validated_mcps.append(mcp)
else:

View File

@@ -30,7 +30,7 @@ class CrewAgentExecutorMixin:
memory = getattr(self.agent, "memory", None) or (
getattr(self.crew, "_memory", None) if self.crew else None
)
if memory is None or not self.task:
if memory is None or not self.task or getattr(memory, "_read_only", False):
return
if (
f"Action: {sanitize_tool_name('Delegate work to coworker')}"

View File

@@ -1,5 +1,4 @@
from crewai.agents.cache.cache_handler import CacheHandler
__all__ = ["CacheHandler"]

View File

@@ -50,6 +50,7 @@ from crewai.utilities.agent_utils import (
handle_unknown_error,
has_reached_max_iterations,
is_context_length_exceeded,
parse_tool_call_args,
process_llm_response,
track_delegation_if_needed,
)
@@ -486,8 +487,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
# No tools available, fall back to simple LLM call
return self._invoke_loop_native_no_tools()
openai_tools, available_functions = convert_tools_to_openai_schema(
self.original_tools
openai_tools, available_functions, self._tool_name_mapping = (
convert_tools_to_openai_schema(self.original_tools)
)
while True:
@@ -699,9 +700,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
if not parsed_calls:
return None
original_tools_by_name: dict[str, Any] = {}
for tool in self.original_tools or []:
original_tools_by_name[sanitize_tool_name(tool.name)] = tool
original_tools_by_name: dict[str, Any] = dict(self._tool_name_mapping)
if len(parsed_calls) > 1:
has_result_as_answer_in_batch = any(
@@ -894,13 +893,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
ToolUsageStartedEvent,
)
if isinstance(func_args, str):
try:
args_dict = json.loads(func_args)
except json.JSONDecodeError:
args_dict = {}
else:
args_dict = func_args
args_dict, parse_error = parse_tool_call_args(func_args, func_name, call_id, original_tool)
if parse_error is not None:
return parse_error
if original_tool is None:
for tool in self.original_tools or []:
@@ -952,10 +947,16 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
track_delegation_if_needed(func_name, args_dict, self.task)
structured_tool: CrewStructuredTool | None = None
for structured in self.tools or []:
if sanitize_tool_name(structured.name) == func_name:
structured_tool = structured
break
if original_tool is not None:
for structured in self.tools or []:
if getattr(structured, "_original_tool", None) is original_tool:
structured_tool = structured
break
if structured_tool is None:
for structured in self.tools or []:
if sanitize_tool_name(structured.name) == func_name:
structured_tool = structured
break
hook_blocked = False
before_hook_context = ToolCallHookContext(
@@ -1262,7 +1263,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
formatted_answer, tool_result
)
self._invoke_step_callback(formatted_answer) # type: ignore[arg-type]
await self._ainvoke_step_callback(formatted_answer) # type: ignore[arg-type]
self._append_message(formatted_answer.text) # type: ignore[union-attr]
except OutputParserError as e:
@@ -1315,8 +1316,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
if not self.original_tools:
return await self._ainvoke_loop_native_no_tools()
openai_tools, available_functions = convert_tools_to_openai_schema(
self.original_tools
openai_tools, available_functions, self._tool_name_mapping = (
convert_tools_to_openai_schema(self.original_tools)
)
while True:
@@ -1377,7 +1378,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
output=answer,
text=answer,
)
self._invoke_step_callback(formatted_answer)
await self._ainvoke_step_callback(formatted_answer)
self._append_message(answer) # Save final answer to messages
self._show_logs(formatted_answer)
return formatted_answer
@@ -1389,7 +1390,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
output=answer,
text=output_json,
)
self._invoke_step_callback(formatted_answer)
await self._ainvoke_step_callback(formatted_answer)
self._append_message(output_json)
self._show_logs(formatted_answer)
return formatted_answer
@@ -1400,7 +1401,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
output=str(answer),
text=str(answer),
)
self._invoke_step_callback(formatted_answer)
await self._ainvoke_step_callback(formatted_answer)
self._append_message(str(answer)) # Save final answer to messages
self._show_logs(formatted_answer)
return formatted_answer
@@ -1494,7 +1495,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
def _invoke_step_callback(
self, formatted_answer: AgentAction | AgentFinish
) -> None:
"""Invoke step callback.
"""Invoke step callback (sync context).
Args:
formatted_answer: Current agent response.
@@ -1504,6 +1505,19 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
if inspect.iscoroutine(cb_result):
asyncio.run(cb_result)
async def _ainvoke_step_callback(
self, formatted_answer: AgentAction | AgentFinish
) -> None:
"""Invoke step callback (async context).
Args:
formatted_answer: Current agent response.
"""
if self.step_callback:
cb_result = self.step_callback(formatted_answer)
if inspect.iscoroutine(cb_result):
await cb_result
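Since both sync and async invocation paths now exist, a step callback can be either a plain function or a coroutine function; a minimal sketch (the callback signature is inferred from formatted_answer above):

def sync_step_callback(formatted_answer) -> None:
    # Called directly by _invoke_step_callback.
    print("step:", getattr(formatted_answer, "text", formatted_answer))

async def async_step_callback(formatted_answer) -> None:
    # The returned coroutine is awaited by _ainvoke_step_callback,
    # or run via asyncio.run() from the sync path.
    print("step:", getattr(formatted_answer, "text", formatted_answer))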
def _append_message(
self, text: str, role: Literal["user", "assistant", "system"] = "assistant"
) -> None:

View File

@@ -1,5 +1,4 @@
from crewai.cli.authentication.main import AuthenticationCommand
__all__ = ["AuthenticationCommand"]

View File

@@ -69,7 +69,7 @@ ENV_VARS: dict[str, list[dict[str, Any]]] = {
},
{
"prompt": "Enter your AWS Region Name (press Enter to skip)",
"key_name": "AWS_REGION_NAME",
"key_name": "AWS_DEFAULT_REGION",
},
],
"azure": [

View File

@@ -143,7 +143,7 @@ def create_folder_structure(
(folder_path / "src" / folder_name).mkdir(parents=True)
(folder_path / "src" / folder_name / "tools").mkdir(parents=True)
(folder_path / "src" / folder_name / "config").mkdir(parents=True)
# Copy AGENTS.md to project root (top-level projects only)
package_dir = Path(__file__).parent
agents_md_src = package_dir / "templates" / "AGENTS.md"

View File

@@ -1,5 +1,5 @@
import shutil
from pathlib import Path
import shutil
import click

View File

@@ -290,13 +290,20 @@ class MemoryTUI(App[None]):
if self._memory is None:
panel.update(self._init_error or "No memory loaded.")
return
display_limit = 1000
info = self._memory.info(path)
self._last_scope_info = info
self._entries = self._memory.list_records(scope=path, limit=200)
self._entries = self._memory.list_records(scope=path, limit=display_limit)
panel.update(_format_scope_info(info))
panel.border_title = "Detail"
entry_list = self.query_one("#entry-list", OptionList)
entry_list.border_title = f"Entries ({len(self._entries)})"
capped = info.record_count > display_limit
count_label = (
f"Entries (showing {display_limit} of {info.record_count} — display limit)"
if capped
else f"Entries ({len(self._entries)})"
)
entry_list.border_title = count_label
self._populate_entry_list()
def on_option_list_option_highlighted(
@@ -376,6 +383,11 @@ class MemoryTUI(App[None]):
return
info_lines: list[str] = []
info_lines.append(
"[dim italic]Searched the full dataset"
+ (f" within [bold]{scope}[/]" if scope else "")
+ " using the recall flow (semantic + recency + importance).[/]\n"
)
if not self._custom_embedder:
info_lines.append(
"[dim italic]Note: Using default OpenAI embedder. "

View File

@@ -22,14 +22,15 @@ class PlusAPI:
EPHEMERAL_TRACING_RESOURCE = "/crewai_plus/api/v1/tracing/ephemeral"
INTEGRATIONS_RESOURCE = "/crewai_plus/api/v1/integrations"
def __init__(self, api_key: str) -> None:
def __init__(self, api_key: str | None = None) -> None:
self.api_key = api_key
self.headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
"User-Agent": f"CrewAI-CLI/{get_crewai_version()}",
"X-Crewai-Version": get_crewai_version(),
}
if api_key:
self.headers["Authorization"] = f"Bearer {api_key}"
settings = Settings()
if settings.org_uuid:
self.headers["X-Crewai-Organization-Id"] = settings.org_uuid
@@ -48,8 +49,13 @@ class PlusAPI:
with httpx.Client(trust_env=False, verify=verify) as client:
return client.request(method, url, headers=self.headers, **kwargs)
def login_to_tool_repository(self) -> httpx.Response:
return self._make_request("POST", f"{self.TOOLS_RESOURCE}/login")
def login_to_tool_repository(
self, user_identifier: str | None = None
) -> httpx.Response:
payload = {}
if user_identifier:
payload["user_identifier"] = user_identifier
return self._make_request("POST", f"{self.TOOLS_RESOURCE}/login", json=payload)
def get_tool(self, handle: str) -> httpx.Response:
return self._make_request("GET", f"{self.TOOLS_RESOURCE}/{handle}")
@@ -190,6 +196,15 @@ class PlusAPI:
timeout=30,
)
def get_mcp_configs(self, slugs: list[str]) -> httpx.Response:
"""Get MCP server configurations for the given slugs."""
return self._make_request(
"GET",
f"{self.INTEGRATIONS_RESOURCE}/mcp_configs",
params={"slugs": ",".join(slugs)},
timeout=30,
)
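A hedged usage sketch of the new endpoint wrapper; the API key and slugs are placeholders and the response payload shape is not defined in this diff:

from crewai.cli.plus_api import PlusAPI

client = PlusAPI(api_key="sk-...")          # or PlusAPI() for unauthenticated requests
response = client.get_mcp_configs(["notion", "slack"])
if response.status_code == 200:
    configs = response.json()               # structure depends on the AMP backend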
def get_triggers(self) -> httpx.Response:
"""Get all available triggers from integrations."""
return self._make_request("GET", f"{self.INTEGRATIONS_RESOURCE}/apps")

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.9.3"
"crewai[tools]==1.10.1"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.9.3"
"crewai[tools]==1.10.1"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]>=0.203.1"
"crewai[tools]==1.10.1"
]
[tool.crewai]

View File

@@ -23,6 +23,7 @@ from crewai.cli.utils import (
tree_copy,
tree_find_and_replace,
)
from crewai.events.listeners.tracing.utils import get_user_id
console = Console()
@@ -169,7 +170,9 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
console.print(f"Successfully installed {handle}", style="bold green")
def login(self) -> None:
login_response = self.plus_api_client.login_to_tool_repository()
login_response = self.plus_api_client.login_to_tool_repository(
user_identifier=get_user_id()
)
if login_response.status_code != 200:
console.print(

View File

@@ -1,5 +1,4 @@
from crewai.crews.crew_output import CrewOutput
__all__ = ["CrewOutput"]

View File

@@ -63,6 +63,7 @@ from crewai.events.types.logging_events import (
AgentLogsStartedEvent,
)
from crewai.events.types.mcp_events import (
MCPConfigFetchFailedEvent,
MCPConnectionCompletedEvent,
MCPConnectionFailedEvent,
MCPConnectionStartedEvent,
@@ -165,6 +166,7 @@ __all__ = [
"LiteAgentExecutionCompletedEvent",
"LiteAgentExecutionErrorEvent",
"LiteAgentExecutionStartedEvent",
"MCPConfigFetchFailedEvent",
"MCPConnectionCompletedEvent",
"MCPConnectionFailedEvent",
"MCPConnectionStartedEvent",

View File

@@ -23,4 +23,3 @@ class BaseEventListener(ABC):
Args:
crewai_event_bus: The event bus to register listeners on.
"""
pass

View File

@@ -68,6 +68,7 @@ from crewai.events.types.logging_events import (
AgentLogsStartedEvent,
)
from crewai.events.types.mcp_events import (
MCPConfigFetchFailedEvent,
MCPConnectionCompletedEvent,
MCPConnectionFailedEvent,
MCPConnectionStartedEvent,
@@ -665,6 +666,16 @@ class EventListener(BaseEventListener):
event.error_type,
)
@crewai_event_bus.on(MCPConfigFetchFailedEvent)
def on_mcp_config_fetch_failed(
_: Any, event: MCPConfigFetchFailedEvent
) -> None:
self.formatter.handle_mcp_config_fetch_failed(
event.slug,
event.error,
event.error_type,
)
@crewai_event_bus.on(MCPToolExecutionStartedEvent)
def on_mcp_tool_execution_started(
_: Any, event: MCPToolExecutionStartedEvent

View File

@@ -67,6 +67,7 @@ from crewai.events.types.llm_guardrail_events import (
LLMGuardrailStartedEvent,
)
from crewai.events.types.mcp_events import (
MCPConfigFetchFailedEvent,
MCPConnectionCompletedEvent,
MCPConnectionFailedEvent,
MCPConnectionStartedEvent,
@@ -181,4 +182,5 @@ EventTypes = (
| MCPToolExecutionStartedEvent
| MCPToolExecutionCompletedEvent
| MCPToolExecutionFailedEvent
| MCPConfigFetchFailedEvent
)

View File

@@ -15,6 +15,7 @@ from crewai.cli.plus_api import PlusAPI
from crewai.cli.version import get_crewai_version
from crewai.events.listeners.tracing.types import TraceEvent
from crewai.events.listeners.tracing.utils import (
get_user_id,
is_tracing_enabled_in_context,
should_auto_collect_first_time_traces,
)
@@ -67,7 +68,7 @@ class TraceBatchManager:
api_key=get_auth_token(),
)
except AuthError:
self.plus_api = PlusAPI(api_key="")
self.plus_api = PlusAPI()
self.ephemeral_trace_url = None
def initialize_batch(
@@ -120,7 +121,6 @@ class TraceBatchManager:
payload = {
"trace_id": self.current_batch.batch_id,
"execution_type": execution_metadata.get("execution_type", "crew"),
"user_identifier": execution_metadata.get("user_context", None),
"execution_context": {
"crew_fingerprint": execution_metadata.get("crew_fingerprint"),
"crew_name": execution_metadata.get("crew_name", None),
@@ -140,6 +140,7 @@ class TraceBatchManager:
}
if use_ephemeral:
payload["ephemeral_trace_id"] = self.current_batch.batch_id
payload["user_identifier"] = get_user_id()
response = (
self.plus_api.initialize_ephemeral_trace_batch(payload)

View File

@@ -86,3 +86,11 @@ class LLMStreamChunkEvent(LLMEventBase):
tool_call: ToolCall | None = None
call_type: LLMCallType | None = None
response_id: str | None = None
class LLMThinkingChunkEvent(LLMEventBase):
"""Event emitted when a thinking/reasoning chunk is received from a thinking model"""
type: str = "llm_thinking_chunk"
chunk: str
response_id: str | None = None

View File

@@ -83,3 +83,16 @@ class MCPToolExecutionFailedEvent(MCPEvent):
error_type: str | None = None # "timeout", "validation", "server_error", etc.
started_at: datetime | None = None
failed_at: datetime | None = None
class MCPConfigFetchFailedEvent(BaseEvent):
"""Event emitted when fetching an AMP MCP server config fails.
This covers cases where the slug is not connected, the API call
failed, or native MCP resolution failed after config was fetched.
"""
type: str = "mcp_config_fetch_failed"
slug: str
error: str
error_type: str | None = None # "not_connected", "api_error", "connection_failed"

View File

@@ -1512,6 +1512,34 @@ To enable tracing, do any one of these:
self.print(panel)
self.print()
def handle_mcp_config_fetch_failed(
self,
slug: str,
error: str = "",
error_type: str | None = None,
) -> None:
"""Handle MCP config fetch failed event (AMP resolution failures)."""
if not self.verbose:
return
content = Text()
content.append("MCP Config Fetch Failed\n\n", style="red bold")
content.append("Server: ", style="white")
content.append(f"{slug}\n", style="red")
if error_type:
content.append("Error Type: ", style="white")
content.append(f"{error_type}\n", style="red")
if error:
content.append("\nError: ", style="white bold")
error_preview = error[:500] + "..." if len(error) > 500 else error
content.append(f"{error_preview}\n", style="red")
panel = self.create_panel(content, "❌ MCP Config Failed", "red")
self.print(panel)
self.print()
def handle_mcp_tool_execution_started(
self,
server_name: str,

View File

@@ -52,6 +52,8 @@ from crewai.hooks.types import (
BeforeLLMCallHookCallable,
BeforeLLMCallHookType,
)
from crewai.tools.base_tool import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.utilities.agent_utils import (
convert_tools_to_openai_schema,
enforce_rpm_limit,
@@ -66,6 +68,7 @@ from crewai.utilities.agent_utils import (
has_reached_max_iterations,
is_context_length_exceeded,
is_inside_event_loop,
parse_tool_call_args,
process_llm_response,
track_delegation_if_needed,
)
@@ -84,8 +87,6 @@ if TYPE_CHECKING:
from crewai.crew import Crew
from crewai.llms.base_llm import BaseLLM
from crewai.task import Task
from crewai.tools.base_tool import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.tools.tool_types import ToolResult
from crewai.utilities.prompts import StandardPromptResult, SystemPromptResult
@@ -301,6 +302,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
super().__init__(
suppress_flow_events=True,
tracing=current_tracing if current_tracing else None,
max_method_calls=self.max_iter * 10,
)
self._flow_initialized = True
@@ -320,7 +322,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
def _setup_native_tools(self) -> None:
"""Convert tools to OpenAI schema format for native function calling."""
if self.original_tools:
self._openai_tools, self._available_functions = (
self._openai_tools, self._available_functions, self._tool_name_mapping = (
convert_tools_to_openai_schema(self.original_tools)
)
@@ -402,7 +404,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
self._setup_native_tools()
return "initialized"
@listen("force_final_answer")
@listen("max_iterations_exceeded")
def force_final_answer(self) -> Literal["agent_finished"]:
"""Force agent to provide final answer when max iterations exceeded."""
formatted_answer = handle_max_iterations_exceeded(
@@ -593,21 +595,19 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
def execute_tool_action(self) -> Literal["tool_completed", "tool_result_is_final"]:
"""Execute the tool action and handle the result."""
action = cast(AgentAction, self.state.current_answer)
fingerprint_context = {}
if (
self.agent
and hasattr(self.agent, "security_config")
and hasattr(self.agent.security_config, "fingerprint")
):
fingerprint_context = {
"agent_fingerprint": str(self.agent.security_config.fingerprint)
}
try:
action = cast(AgentAction, self.state.current_answer)
# Extract fingerprint context for tool execution
fingerprint_context = {}
if (
self.agent
and hasattr(self.agent, "security_config")
and hasattr(self.agent.security_config, "fingerprint")
):
fingerprint_context = {
"agent_fingerprint": str(self.agent.security_config.fingerprint)
}
# Execute the tool
tool_result = execute_tool_and_check_finality(
agent_action=action,
fingerprint_context=fingerprint_context,
@@ -621,24 +621,19 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
function_calling_llm=self.function_calling_llm,
crew=self.crew,
)
except Exception as e:
if self.agent and self.agent.verbose:
self._printer.print(
content=f"Error in tool execution: {e}", color="red"
)
if self.task:
self.task.increment_tools_errors()
# Handle agent action and append observation to messages
result = self._handle_agent_action(action, tool_result)
self.state.current_answer = result
error_observation = f"\nObservation: Error executing tool: {e}"
action.text += error_observation
action.result = str(e)
self._append_message_to_state(action.text)
# Invoke step callback if configured
self._invoke_step_callback(result)
# Append result message to conversation state
if hasattr(result, "text"):
self._append_message_to_state(result.text)
# Check if tool result became a final answer (result_as_answer flag)
if isinstance(result, AgentFinish):
self.state.is_finished = True
return "tool_result_is_final"
# Inject post-tool reasoning prompt to enforce analysis
reasoning_prompt = self._i18n.slice("post_tool_reasoning")
reasoning_message: LLMMessage = {
"role": "user",
@@ -648,12 +643,26 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
return "tool_completed"
except Exception as e:
error_text = Text()
error_text.append("❌ Error in tool execution: ", style="red bold")
error_text.append(str(e), style="red")
self._console.print(error_text)
raise
result = self._handle_agent_action(action, tool_result)
self.state.current_answer = result
self._invoke_step_callback(result)
if hasattr(result, "text"):
self._append_message_to_state(result.text)
if isinstance(result, AgentFinish):
self.state.is_finished = True
return "tool_result_is_final"
reasoning_prompt = self._i18n.slice("post_tool_reasoning")
reasoning_message_post: LLMMessage = {
"role": "user",
"content": reasoning_prompt,
}
self.state.messages.append(reasoning_message_post)
return "tool_completed"
@listen("native_tool_calls")
def execute_native_tool(
@@ -727,7 +736,20 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
)
for future in as_completed(future_to_idx):
idx = future_to_idx[future]
ordered_results[idx] = future.result()
try:
ordered_results[idx] = future.result()
except Exception as e:
tool_call = runnable_tool_calls[idx]
info = extract_tool_call_info(tool_call)
call_id = info[0] if info else "unknown"
func_name = info[1] if info else "unknown"
ordered_results[idx] = {
"call_id": call_id,
"func_name": func_name,
"result": f"Error executing tool: {e}",
"from_cache": False,
"original_tool": None,
}
execution_results = [
result for result in ordered_results if result is not None
]
@@ -823,11 +845,17 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
continue
_, func_name, _ = info
original_tool = None
for tool in self.original_tools or []:
if sanitize_tool_name(tool.name) == func_name:
original_tool = tool
break
mapping = getattr(self, "_tool_name_mapping", None)
original_tool: BaseTool | None = None
if mapping and func_name in mapping:
mapped = mapping[func_name]
if isinstance(mapped, BaseTool):
original_tool = mapped
if original_tool is None:
for tool in self.original_tools or []:
if sanitize_tool_name(tool.name) == func_name:
original_tool = tool
break
if not original_tool:
continue
@@ -843,28 +871,41 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
"""Execute a single native tool call and return metadata/result."""
info = extract_tool_call_info(tool_call)
if not info:
raise ValueError("Invalid native tool call format")
call_id = (
getattr(tool_call, "id", None)
or (tool_call.get("id") if isinstance(tool_call, dict) else None)
or "unknown"
)
return {
"call_id": call_id,
"func_name": "unknown",
"result": "Error: Invalid native tool call format",
"from_cache": False,
"original_tool": None,
}
call_id, func_name, func_args = info
# Parse arguments
if isinstance(func_args, str):
try:
args_dict = json.loads(func_args)
except json.JSONDecodeError:
args_dict = {}
else:
args_dict = func_args
parsed_args, parse_error = parse_tool_call_args(func_args, func_name, call_id)
if parse_error is not None:
return parse_error
args_dict: dict[str, Any] = parsed_args or {}
# Get agent_key for event tracking
agent_key = getattr(self.agent, "key", "unknown") if self.agent else "unknown"
# Find original tool by matching sanitized name (needed for cache_function and result_as_answer)
original_tool = None
for tool in self.original_tools or []:
if sanitize_tool_name(tool.name) == func_name:
original_tool = tool
break
original_tool: BaseTool | None = None
mapping = getattr(self, "_tool_name_mapping", None)
if mapping and func_name in mapping:
mapped = mapping[func_name]
if isinstance(mapped, BaseTool):
original_tool = mapped
if original_tool is None:
for tool in self.original_tools or []:
if sanitize_tool_name(tool.name) == func_name:
original_tool = tool
break
# Check if tool has reached max usage count
max_usage_reached = False
@@ -907,10 +948,16 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
track_delegation_if_needed(func_name, args_dict, self.task)
structured_tool: CrewStructuredTool | None = None
for structured in self.tools or []:
if sanitize_tool_name(structured.name) == func_name:
structured_tool = structured
break
if original_tool is not None:
for structured in self.tools or []:
if getattr(structured, "_original_tool", None) is original_tool:
structured_tool = structured
break
if structured_tool is None:
for structured in self.tools or []:
if sanitize_tool_name(structured.name) == func_name:
structured_tool = structured
break
hook_blocked = False
before_hook_context = ToolCallHookContext(
@@ -1062,11 +1109,11 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
def check_max_iterations(
self,
) -> Literal[
"force_final_answer", "continue_reasoning", "continue_reasoning_native"
"max_iterations_exceeded", "continue_reasoning", "continue_reasoning_native"
]:
"""Check if max iterations reached before proceeding with reasoning."""
if has_reached_max_iterations(self.state.iterations, self.max_iter):
return "force_final_answer"
return "max_iterations_exceeded"
if self.state.use_native_tools:
return "continue_reasoning_native"
return "continue_reasoning"

View File

@@ -16,7 +16,7 @@ from collections.abc import (
Sequence,
ValuesView,
)
from concurrent.futures import Future
from concurrent.futures import Future, ThreadPoolExecutor
import copy
import enum
import inspect
@@ -692,6 +692,7 @@ class FlowMeta(type):
condition_type = getattr(
attr_value, "__condition_type__", OR_CONDITION
)
if (
hasattr(attr_value, "__trigger_condition__")
and attr_value.__trigger_condition__ is not None
@@ -769,6 +770,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
persistence: FlowPersistence | None = None,
tracing: bool | None = None,
suppress_flow_events: bool = False,
max_method_calls: int = 100,
**kwargs: Any,
) -> None:
"""Initialize a new Flow instance.
@@ -777,6 +779,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
persistence: Optional persistence backend for storing flow states
tracing: Whether to enable tracing. True=always enable, False=always disable, None=check environment/user settings
suppress_flow_events: Whether to suppress flow event emissions (internal use)
max_method_calls: Maximum times a single method can be called per execution before raising RecursionError
**kwargs: Additional state values to initialize or override
"""
# Initialize basic instance attributes
@@ -792,6 +795,8 @@ class Flow(Generic[T], metaclass=FlowMeta):
self._completed_methods: set[FlowMethodName] = (
set()
) # Track completed methods for reload
self._method_call_counts: dict[FlowMethodName, int] = {}
self._max_method_calls = max_method_calls
self._persistence: FlowPersistence | None = persistence
self._is_execution_resuming: bool = False
self._event_futures: list[Future[None]] = []
@@ -1739,7 +1744,12 @@ class Flow(Generic[T], metaclass=FlowMeta):
async def _run_flow() -> Any:
return await self.kickoff_async(inputs, input_files)
return asyncio.run(_run_flow())
try:
asyncio.get_running_loop()
with ThreadPoolExecutor(max_workers=1) as pool:
return pool.submit(asyncio.run, _run_flow()).result()
except RuntimeError:
return asyncio.run(_run_flow())
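Two user-facing effects of the changes in this file, shown as a minimal sketch (import path and kickoff usage assumed from the public Flow API): kickoff() now works when an event loop is already running, and a per-method call cap can be passed at construction time.

from crewai.flow.flow import Flow, start

class MyFlow(Flow):
    @start()
    def begin(self) -> str:
        return "done"

# max_method_calls caps how often any single method may run in one execution;
# exceeding it raises RecursionError instead of looping forever.
flow = MyFlow(max_method_calls=50)
result = flow.kickoff()  # safe even inside a running event loop (runs in a worker thread)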
async def kickoff_async(
self,
@@ -1823,6 +1833,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
self._method_outputs.clear()
self._pending_and_listeners.clear()
self._clear_or_listeners()
self._method_call_counts.clear()
else:
# Only enter resumption mode if there are completed methods to
# replay. When _completed_methods is empty (e.g. a pure
@@ -2564,6 +2575,16 @@ class Flow(Generic[T], metaclass=FlowMeta):
- Skips execution if method was already completed (e.g., after reload)
- Catches and logs any exceptions during execution, preventing individual listener failures from breaking the entire flow
"""
count = self._method_call_counts.get(listener_name, 0) + 1
if count > self._max_method_calls:
raise RecursionError(
f"Method '{listener_name}' has been called {self._max_method_calls} times in "
f"this flow execution, which indicates an infinite loop. "
f"This commonly happens when a @listen label matches the "
f"method's own name."
)
self._method_call_counts[listener_name] = count
if listener_name in self._completed_methods:
if self._is_execution_resuming:
# During resumption, skip execution but continue listeners

View File

@@ -2,10 +2,10 @@ from __future__ import annotations
import asyncio
from collections.abc import Callable
import time
from functools import wraps
import inspect
import json
import time
from types import MethodType
from typing import (
TYPE_CHECKING,
@@ -49,15 +49,20 @@ from crewai.events.types.agent_events import (
LiteAgentExecutionErrorEvent,
LiteAgentExecutionStartedEvent,
)
from crewai.events.types.logging_events import AgentLogsExecutionEvent
from crewai.events.types.memory_events import (
MemoryRetrievalCompletedEvent,
MemoryRetrievalFailedEvent,
MemoryRetrievalStartedEvent,
)
from crewai.events.types.logging_events import AgentLogsExecutionEvent
from crewai.flow.flow_trackable import FlowTrackable
from crewai.hooks.llm_hooks import get_after_llm_call_hooks, get_before_llm_call_hooks
from crewai.hooks.types import AfterLLMCallHookType, BeforeLLMCallHookType
from crewai.hooks.types import (
AfterLLMCallHookCallable,
AfterLLMCallHookType,
BeforeLLMCallHookCallable,
BeforeLLMCallHookType,
)
from crewai.lite_agent_output import LiteAgentOutput
from crewai.llm import LLM
from crewai.llms.base_llm import BaseLLM
@@ -270,11 +275,11 @@ class LiteAgent(FlowTrackable, BaseModel):
_guardrail: GuardrailCallable | None = PrivateAttr(default=None)
_guardrail_retry_count: int = PrivateAttr(default=0)
_callbacks: list[TokenCalcHandler] = PrivateAttr(default_factory=list)
_before_llm_call_hooks: list[BeforeLLMCallHookType] = PrivateAttr(
default_factory=get_before_llm_call_hooks
_before_llm_call_hooks: list[BeforeLLMCallHookType | BeforeLLMCallHookCallable] = (
PrivateAttr(default_factory=get_before_llm_call_hooks)
)
_after_llm_call_hooks: list[AfterLLMCallHookType] = PrivateAttr(
default_factory=get_after_llm_call_hooks
_after_llm_call_hooks: list[AfterLLMCallHookType | AfterLLMCallHookCallable] = (
PrivateAttr(default_factory=get_after_llm_call_hooks)
)
_memory: Any = PrivateAttr(default=None)
@@ -440,12 +445,16 @@ class LiteAgent(FlowTrackable, BaseModel):
return self.role
@property
def before_llm_call_hooks(self) -> list[BeforeLLMCallHookType]:
def before_llm_call_hooks(
self,
) -> list[BeforeLLMCallHookType | BeforeLLMCallHookCallable]:
"""Get the before_llm_call hooks for this agent."""
return self._before_llm_call_hooks
@property
def after_llm_call_hooks(self) -> list[AfterLLMCallHookType]:
def after_llm_call_hooks(
self,
) -> list[AfterLLMCallHookType | AfterLLMCallHookCallable]:
"""Get the after_llm_call hooks for this agent."""
return self._after_llm_call_hooks
@@ -482,11 +491,12 @@ class LiteAgent(FlowTrackable, BaseModel):
# Inject memory tools once if memory is configured (mirrors Agent._prepare_kickoff)
if self._memory is not None:
from crewai.tools.memory_tools import create_memory_tools
from crewai.utilities.agent_utils import sanitize_tool_name
from crewai.utilities.string_utils import sanitize_tool_name
existing_names = {sanitize_tool_name(t.name) for t in self._parsed_tools}
memory_tools = [
mt for mt in create_memory_tools(self._memory)
mt
for mt in create_memory_tools(self._memory)
if sanitize_tool_name(mt.name) not in existing_names
]
if memory_tools:
@@ -565,9 +575,10 @@ class LiteAgent(FlowTrackable, BaseModel):
if memory_block:
formatted = self.i18n.slice("memory").format(memory=memory_block)
if self._messages and self._messages[0].get("role") == "system":
self._messages[0]["content"] = (
self._messages[0].get("content", "") + "\n\n" + formatted
)
existing_content = self._messages[0].get("content", "")
if not isinstance(existing_content, str):
existing_content = ""
self._messages[0]["content"] = existing_content + "\n\n" + formatted
crewai_event_bus.emit(
self,
event=MemoryRetrievalCompletedEvent(
@@ -588,16 +599,12 @@ class LiteAgent(FlowTrackable, BaseModel):
)
def _save_to_memory(self, output_text: str) -> None:
"""Extract discrete memories from the run and remember each. No-op if _memory is None."""
if self._memory is None:
"""Extract discrete memories from the run and remember each. No-op if _memory is None or read-only."""
if self._memory is None or getattr(self._memory, "_read_only", False):
return
input_str = self._get_last_user_content() or "User request"
try:
raw = (
f"Input: {input_str}\n"
f"Agent: {self.role}\n"
f"Result: {output_text}"
)
raw = f"Input: {input_str}\nAgent: {self.role}\nResult: {output_text}"
extracted = self._memory.extract_memories(raw)
if extracted:
self._memory.remember_many(extracted, agent_role=self.role)
@@ -622,13 +629,20 @@ class LiteAgent(FlowTrackable, BaseModel):
)
# Execute the agent using invoke loop
agent_finish = self._invoke_loop()
active_response_format = response_format or self.response_format
agent_finish = self._invoke_loop(response_model=active_response_format)
if self._memory is not None:
self._save_to_memory(agent_finish.output)
output_text = (
agent_finish.output.model_dump_json()
if isinstance(agent_finish.output, BaseModel)
else agent_finish.output
)
self._save_to_memory(output_text)
formatted_result: BaseModel | None = None
active_response_format = response_format or self.response_format
if active_response_format:
if isinstance(agent_finish.output, BaseModel):
formatted_result = agent_finish.output
elif active_response_format:
try:
model_schema = generate_model_description(active_response_format)
schema = json.dumps(model_schema, indent=2)
@@ -660,8 +674,13 @@ class LiteAgent(FlowTrackable, BaseModel):
usage_metrics = self._token_process.get_summary()
# Create output
raw_output = (
agent_finish.output.model_dump_json()
if isinstance(agent_finish.output, BaseModel)
else agent_finish.output
)
output = LiteAgentOutput(
raw=agent_finish.output,
raw=raw_output,
pydantic=formatted_result,
agent_role=self.role,
usage_metrics=usage_metrics.model_dump() if usage_metrics else None,
@@ -838,10 +857,15 @@ class LiteAgent(FlowTrackable, BaseModel):
return formatted_messages
def _invoke_loop(self) -> AgentFinish:
def _invoke_loop(
self, response_model: type[BaseModel] | None = None
) -> AgentFinish:
"""
Run the agent's thought process until it reaches a conclusion or max iterations.
Args:
response_model: Optional Pydantic model for native structured output.
Returns:
AgentFinish: The final result of the agent execution.
"""
@@ -870,12 +894,19 @@ class LiteAgent(FlowTrackable, BaseModel):
printer=self._printer,
from_agent=self,
executor_context=self,
response_model=response_model,
verbose=self.verbose,
)
except Exception as e:
raise e
if isinstance(answer, BaseModel):
formatted_answer = AgentFinish(
thought="", output=answer, text=answer.model_dump_json()
)
break
formatted_answer = process_llm_response(
cast(str, answer), self.use_stop_words
)
@@ -901,7 +932,7 @@ class LiteAgent(FlowTrackable, BaseModel):
)
self._append_message(formatted_answer.text, role="assistant")
except OutputParserError as e: # noqa: PERF203
except OutputParserError as e:
if self.verbose:
self._printer.print(
content="Failed to parse LLM output. Retrying...",

View File

@@ -427,7 +427,7 @@ class LLM(BaseLLM):
f"installed.\n\n"
f"To fix this, either:\n"
f" 1. Install LiteLLM for broad model support: "
f"uv add litellm\n"
f"uv add 'crewai[litellm]'\n"
f"or\n"
f"pip install litellm\n\n"
f"For more details, see: "

View File

@@ -26,6 +26,7 @@ from crewai.events.types.llm_events import (
LLMCallStartedEvent,
LLMCallType,
LLMStreamChunkEvent,
LLMThinkingChunkEvent,
)
from crewai.events.types.tool_usage_events import (
ToolUsageErrorEvent,
@@ -368,9 +369,6 @@ class BaseLLM(ABC):
"""Emit LLM call started event."""
from crewai.utilities.serialization import to_serializable
if not hasattr(crewai_event_bus, "emit"):
raise ValueError("crewai_event_bus does not have an emit method") from None
crewai_event_bus.emit(
self,
event=LLMCallStartedEvent(
@@ -416,9 +414,6 @@ class BaseLLM(ABC):
from_agent: Agent | None = None,
) -> None:
"""Emit LLM call failed event."""
if not hasattr(crewai_event_bus, "emit"):
raise ValueError("crewai_event_bus does not have an emit method") from None
crewai_event_bus.emit(
self,
event=LLMCallFailedEvent(
@@ -449,9 +444,6 @@ class BaseLLM(ABC):
call_type: The type of LLM call (LLM_CALL or TOOL_CALL).
response_id: Unique ID for a particular LLM response, chunks have same response_id.
"""
if not hasattr(crewai_event_bus, "emit"):
raise ValueError("crewai_event_bus does not have an emit method") from None
crewai_event_bus.emit(
self,
event=LLMStreamChunkEvent(
@@ -465,6 +457,32 @@ class BaseLLM(ABC):
),
)
def _emit_thinking_chunk_event(
self,
chunk: str,
from_task: Task | None = None,
from_agent: Agent | None = None,
response_id: str | None = None,
) -> None:
"""Emit thinking/reasoning chunk event from a thinking model.
Args:
chunk: The thinking text content.
from_task: The task that initiated the call.
from_agent: The agent that initiated the call.
response_id: Unique ID for a particular LLM response.
"""
crewai_event_bus.emit(
self,
event=LLMThinkingChunkEvent(
chunk=chunk,
from_task=from_task,
from_agent=from_agent,
response_id=response_id,
call_id=get_current_call_id(),
),
)
def _handle_tool_execution(
self,
function_name: str,

View File

@@ -234,7 +234,7 @@ class BedrockCompletion(BaseLLM):
aws_access_key_id: str | None = None,
aws_secret_access_key: str | None = None,
aws_session_token: str | None = None,
region_name: str = "us-east-1",
region_name: str | None = None,
temperature: float | None = None,
max_tokens: int | None = None,
top_p: float | None = None,
@@ -287,15 +287,6 @@ class BedrockCompletion(BaseLLM):
**kwargs,
)
# Initialize Bedrock client with proper configuration
session = Session(
aws_access_key_id=aws_access_key_id or os.getenv("AWS_ACCESS_KEY_ID"),
aws_secret_access_key=aws_secret_access_key
or os.getenv("AWS_SECRET_ACCESS_KEY"),
aws_session_token=aws_session_token or os.getenv("AWS_SESSION_TOKEN"),
region_name=region_name,
)
# Configure client with timeouts and retries following AWS best practices
config = Config(
read_timeout=300,
@@ -306,8 +297,12 @@ class BedrockCompletion(BaseLLM):
tcp_keepalive=True,
)
self.client = session.client("bedrock-runtime", config=config)
self.region_name = region_name
self.region_name = (
region_name
or os.getenv("AWS_DEFAULT_REGION")
or os.getenv("AWS_REGION_NAME")
or "us-east-1"
)
self.aws_access_key_id = aws_access_key_id or os.getenv("AWS_ACCESS_KEY_ID")
self.aws_secret_access_key = aws_secret_access_key or os.getenv(
@@ -315,6 +310,16 @@ class BedrockCompletion(BaseLLM):
)
self.aws_session_token = aws_session_token or os.getenv("AWS_SESSION_TOKEN")
# Initialize Bedrock client with proper configuration
session = Session(
aws_access_key_id=self.aws_access_key_id,
aws_secret_access_key=self.aws_secret_access_key,
aws_session_token=self.aws_session_token,
region_name=self.region_name,
)
self.client = session.client("bedrock-runtime", config=config)
self._async_exit_stack = AsyncExitStack() if AIOBOTOCORE_AVAILABLE else None
self._async_client_initialized = False

View File

@@ -61,6 +61,7 @@ class GeminiCompletion(BaseLLM):
interceptor: BaseInterceptor[Any, Any] | None = None,
use_vertexai: bool | None = None,
response_format: type[BaseModel] | None = None,
thinking_config: types.ThinkingConfig | None = None,
**kwargs: Any,
):
"""Initialize Google Gemini chat completion client.
@@ -93,6 +94,10 @@ class GeminiCompletion(BaseLLM):
api_version="v1" is automatically configured.
response_format: Pydantic model for structured output. Used as default when
response_model is not passed to call()/acall() methods.
thinking_config: ThinkingConfig for thinking models (gemini-2.5+, gemini-3+).
Controls thought output via include_thoughts, thinking_budget,
and thinking_level. When None, thinking models automatically
get include_thoughts=True so thought content is surfaced.
**kwargs: Additional parameters
"""
if interceptor is not None:
@@ -139,6 +144,14 @@ class GeminiCompletion(BaseLLM):
version_match and float(version_match.group(1)) >= 2.0
)
self.thinking_config = thinking_config
if (
self.thinking_config is None
and version_match
and float(version_match.group(1)) >= 2.5
):
self.thinking_config = types.ThinkingConfig(include_thoughts=True)
@property
def stop(self) -> list[str]:
"""Get stop sequences sent to the API."""
@@ -520,6 +533,9 @@ class GeminiCompletion(BaseLLM):
if self.safety_settings:
config_params["safety_settings"] = self.safety_settings
if self.thinking_config is not None:
config_params["thinking_config"] = self.thinking_config
return types.GenerateContentConfig(**config_params)
def _convert_tools_for_interference( # type: ignore[override]
@@ -618,9 +634,17 @@ class GeminiCompletion(BaseLLM):
function_response_part = types.Part.from_function_response(
name=tool_name, response=response_data
)
contents.append(
types.Content(role="user", parts=[function_response_part])
)
if (
contents
and contents[-1].role == "user"
and contents[-1].parts
and contents[-1].parts[-1].function_response is not None
):
contents[-1].parts.append(function_response_part)
else:
contents.append(
types.Content(role="user", parts=[function_response_part])
)
elif role == "assistant" and message.get("tool_calls"):
raw_parts: list[Any] | None = message.get("raw_tool_call_parts")
if raw_parts and all(isinstance(p, types.Part) for p in raw_parts):
@@ -894,7 +918,7 @@ class GeminiCompletion(BaseLLM):
content = self._extract_text_from_response(response)
effective_response_model = None if self.tools else response_model
if not effective_response_model:
if not response_model:
content = self._apply_stop_words(content)
return self._finalize_completion_response(
@@ -931,15 +955,6 @@ class GeminiCompletion(BaseLLM):
if chunk.usage_metadata:
usage_data = self._extract_token_usage(chunk)
if chunk.text:
full_response += chunk.text
self._emit_stream_chunk_event(
chunk=chunk.text,
from_task=from_task,
from_agent=from_agent,
response_id=response_id,
)
if chunk.candidates:
candidate = chunk.candidates[0]
if candidate.content and candidate.content.parts:
@@ -976,6 +991,21 @@ class GeminiCompletion(BaseLLM):
call_type=LLMCallType.TOOL_CALL,
response_id=response_id,
)
elif part.thought and part.text:
self._emit_thinking_chunk_event(
chunk=part.text,
from_task=from_task,
from_agent=from_agent,
response_id=response_id,
)
elif part.text:
full_response += part.text
self._emit_stream_chunk_event(
chunk=part.text,
from_task=from_task,
from_agent=from_agent,
response_id=response_id,
)
return full_response, function_calls, usage_data
@@ -1329,7 +1359,7 @@ class GeminiCompletion(BaseLLM):
text_parts = [
part.text
for part in candidate.content.parts
if hasattr(part, "text") and part.text
if part.text and not part.thought
]
return "".join(text_parts)

View File

@@ -18,6 +18,7 @@ from crewai.mcp.filters import (
create_dynamic_tool_filter,
create_static_tool_filter,
)
from crewai.mcp.tool_resolver import MCPToolResolver
from crewai.mcp.transports.base import BaseTransport, TransportType
@@ -28,6 +29,7 @@ __all__ = [
"MCPServerHTTP",
"MCPServerSSE",
"MCPServerStdio",
"MCPToolResolver",
"StaticToolFilter",
"ToolFilter",
"ToolFilterContext",

View File

@@ -6,7 +6,7 @@ from contextlib import AsyncExitStack
from datetime import datetime
import logging
import time
from typing import Any
from typing import Any, NamedTuple
from typing_extensions import Self
@@ -34,6 +34,13 @@ from crewai.mcp.transports.stdio import StdioTransport
from crewai.utilities.string_utils import sanitize_tool_name
class _MCPToolResult(NamedTuple):
"""Internal result from an MCP tool call, carrying the ``isError`` flag."""
content: str
is_error: bool
# MCP Connection timeout constants (in seconds)
MCP_CONNECTION_TIMEOUT = 30 # Increased for slow servers
MCP_TOOL_EXECUTION_TIMEOUT = 30
@@ -420,6 +427,7 @@ class MCPClient:
return [
{
"name": sanitize_tool_name(tool.name),
"original_name": tool.name,
"description": getattr(tool, "description", ""),
"inputSchema": getattr(tool, "inputSchema", {}),
}
@@ -461,29 +469,46 @@ class MCPClient:
)
try:
result = await self._retry_operation(
tool_result: _MCPToolResult = await self._retry_operation(
lambda: self._call_tool_impl(tool_name, cleaned_arguments),
timeout=self.execution_timeout,
)
completed_at = datetime.now()
execution_duration_ms = (completed_at - started_at).total_seconds() * 1000
crewai_event_bus.emit(
self,
MCPToolExecutionCompletedEvent(
server_name=server_name,
server_url=server_url,
transport_type=transport_type,
tool_name=tool_name,
tool_args=cleaned_arguments,
result=result,
started_at=started_at,
completed_at=completed_at,
execution_duration_ms=execution_duration_ms,
),
)
finished_at = datetime.now()
execution_duration_ms = (finished_at - started_at).total_seconds() * 1000
return result
if tool_result.is_error:
crewai_event_bus.emit(
self,
MCPToolExecutionFailedEvent(
server_name=server_name,
server_url=server_url,
transport_type=transport_type,
tool_name=tool_name,
tool_args=cleaned_arguments,
error=tool_result.content,
error_type="tool_error",
started_at=started_at,
failed_at=finished_at,
),
)
else:
crewai_event_bus.emit(
self,
MCPToolExecutionCompletedEvent(
server_name=server_name,
server_url=server_url,
transport_type=transport_type,
tool_name=tool_name,
tool_args=cleaned_arguments,
result=tool_result.content,
started_at=started_at,
completed_at=finished_at,
execution_duration_ms=execution_duration_ms,
),
)
return tool_result.content
except Exception as e:
failed_at = datetime.now()
error_type = (
@@ -564,23 +589,27 @@ class MCPClient:
return cleaned
async def _call_tool_impl(self, tool_name: str, arguments: dict[str, Any]) -> Any:
async def _call_tool_impl(
self, tool_name: str, arguments: dict[str, Any]
) -> _MCPToolResult:
"""Internal implementation of call_tool."""
result = await asyncio.wait_for(
self.session.call_tool(tool_name, arguments),
timeout=self.execution_timeout,
)
is_error = getattr(result, "isError", False) or False
# Extract result content
if hasattr(result, "content") and result.content:
if isinstance(result.content, list) and len(result.content) > 0:
content_item = result.content[0]
if hasattr(content_item, "text"):
return str(content_item.text)
return str(content_item)
return str(result.content)
return _MCPToolResult(str(content_item.text), is_error)
return _MCPToolResult(str(content_item), is_error)
return _MCPToolResult(str(result.content), is_error)
return str(result)
return _MCPToolResult(str(result), is_error)
async def list_prompts(self) -> list[dict[str, Any]]:
"""List available prompts from MCP server.

View File

@@ -0,0 +1,592 @@
"""MCP tool resolution for CrewAI agents.
This module extracts all MCP-related tool resolution logic from the Agent class
into a standalone MCPToolResolver. It handles three flavours of MCP reference:
1. Native configs: MCPServerStdio / MCPServerHTTP / MCPServerSSE objects.
2. HTTPS URLs: e.g. "https://mcp.example.com/api"
3. AMP references: e.g. "notion" or "notion#search" (legacy "crewai-amp:" prefix also works)
"""
from __future__ import annotations
import asyncio
import time
from typing import TYPE_CHECKING, Any, Final, cast
from urllib.parse import urlparse
from crewai.mcp.client import MCPClient
from crewai.mcp.config import (
MCPServerConfig,
MCPServerHTTP,
MCPServerSSE,
MCPServerStdio,
)
from crewai.mcp.transports.http import HTTPTransport
from crewai.mcp.transports.sse import SSETransport
from crewai.mcp.transports.stdio import StdioTransport
if TYPE_CHECKING:
from crewai.tools.base_tool import BaseTool
from crewai.utilities.logger import Logger
MCP_CONNECTION_TIMEOUT: Final[int] = 10
MCP_TOOL_EXECUTION_TIMEOUT: Final[int] = 30
MCP_DISCOVERY_TIMEOUT: Final[int] = 15
MCP_MAX_RETRIES: Final[int] = 3
_mcp_schema_cache: dict[str, Any] = {}
_cache_ttl: Final[int] = 300 # 5 minutes
class MCPToolResolver:
"""Resolves MCP server references / configs into CrewAI ``BaseTool`` instances.
Typical lifecycle::
resolver = MCPToolResolver(agent=my_agent, logger=my_agent._logger)
tools = resolver.resolve(my_agent.mcps)
# … agent executes tasks using *tools* …
resolver.cleanup()
The resolver owns the MCP client connections it creates and is responsible
for tearing them down via :meth:`cleanup`.
"""
def __init__(self, agent: Any, logger: Logger) -> None:
self._agent = agent
self._logger = logger
self._clients: list[Any] = []
@property
def clients(self) -> list[Any]:
return list(self._clients)
def resolve(self, mcps: list[str | MCPServerConfig]) -> list[BaseTool]:
"""Convert MCP server references/configs to CrewAI tools."""
all_tools: list[BaseTool] = []
amp_refs: list[tuple[str, str | None]] = []
for mcp_config in mcps:
if isinstance(mcp_config, str) and mcp_config.startswith("https://"):
all_tools.extend(self._resolve_external(mcp_config))
elif isinstance(mcp_config, str):
amp_refs.append(self._parse_amp_ref(mcp_config))
else:
tools, client = self._resolve_native(mcp_config)
all_tools.extend(tools)
if client:
self._clients.append(client)
if amp_refs:
tools, clients = self._resolve_amp(amp_refs)
all_tools.extend(tools)
self._clients.extend(clients)
return all_tools
def cleanup(self) -> None:
"""Disconnect all MCP client connections."""
if not self._clients:
return
async def _disconnect_all() -> None:
for client in self._clients:
if client and hasattr(client, "connected") and client.connected:
await client.disconnect()
try:
asyncio.run(_disconnect_all())
except Exception as e:
self._logger.log("error", f"Error during MCP client cleanup: {e}")
finally:
self._clients.clear()
@staticmethod
def _parse_amp_ref(mcp_config: str) -> tuple[str, str | None]:
"""Parse an AMP reference into *(slug, optional tool name)*.
Accepts both bare slugs (``"notion"``, ``"notion#search"``) and the
legacy ``"crewai-amp:notion"`` form.
"""
bare = mcp_config.removeprefix("crewai-amp:")
slug, _, specific_tool = bare.partition("#")
return slug, specific_tool or None
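As a quick illustration, the parse rules above give:

assert MCPToolResolver._parse_amp_ref("notion") == ("notion", None)
assert MCPToolResolver._parse_amp_ref("notion#search") == ("notion", "search")
assert MCPToolResolver._parse_amp_ref("crewai-amp:notion#search") == ("notion", "search")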
def _resolve_amp(
self, amp_refs: list[tuple[str, str | None]]
) -> tuple[list[BaseTool], list[Any]]:
"""Fetch AMP configs in bulk and return their tools and clients.
Resolves each unique slug only once (single connection per server),
then applies per-ref tool filters to select specific tools.
"""
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.mcp_events import MCPConfigFetchFailedEvent
unique_slugs = list(dict.fromkeys(slug for slug, _ in amp_refs))
amp_configs_map = self._fetch_amp_mcp_configs(unique_slugs)
all_tools: list[BaseTool] = []
all_clients: list[Any] = []
resolved_cache: dict[str, tuple[list[BaseTool], Any | None]] = {}
for slug in unique_slugs:
config_dict = amp_configs_map.get(slug)
if not config_dict:
crewai_event_bus.emit(
self,
MCPConfigFetchFailedEvent(
slug=slug,
error=f"Config for '{slug}' not found. Make sure it is connected in your account.",
error_type="not_connected",
),
)
continue
mcp_server_config = self._build_mcp_config_from_dict(config_dict)
try:
tools, client = self._resolve_native(mcp_server_config)
resolved_cache[slug] = (tools, client)
if client:
all_clients.append(client)
except Exception as e:
crewai_event_bus.emit(
self,
MCPConfigFetchFailedEvent(
slug=slug,
error=str(e),
error_type="connection_failed",
),
)
for slug, specific_tool in amp_refs:
cached = resolved_cache.get(slug)
if not cached:
continue
slug_tools, _ = cached
if specific_tool:
all_tools.extend(
t for t in slug_tools if t.name.endswith(f"_{specific_tool}")
)
else:
all_tools.extend(slug_tools)
return all_tools, all_clients
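Note on the endswith filter above: native tool names are server-prefixed (MCPNativeTool, further down, builds f"{server_name}_{tool_name}"), so a ref like "notion#search" keeps any tool whose prefixed name ends in "_search". Illustrative check with a hypothetical prefixed name:

assert "notion_mcp_search".endswith("_search")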
def _fetch_amp_mcp_configs(self, slugs: list[str]) -> dict[str, dict[str, Any]]:
"""Fetch MCP server configurations via CrewAI+ API.
Sends a GET request to the CrewAI+ mcps/configs endpoint with
comma-separated slugs. CrewAI+ proxies the request to crewai-oauth.
API-level failures return ``{}``; individual slugs will then
surface as ``MCPConfigFetchFailedEvent`` in :meth:`_resolve_amp`.
"""
import httpx
try:
from crewai_tools.tools.crewai_platform_tools.misc import (
get_platform_integration_token,
)
from crewai.cli.plus_api import PlusAPI
plus_api = PlusAPI(api_key=get_platform_integration_token())
response = plus_api.get_mcp_configs(slugs)
if response.status_code == 200:
configs: dict[str, dict[str, Any]] = response.json().get("configs", {})
return configs
self._logger.log(
"debug",
f"Failed to fetch MCP configs: HTTP {response.status_code}",
)
return {}
except httpx.HTTPError as e:
self._logger.log("debug", f"Failed to fetch MCP configs: {e}")
return {}
except Exception as e:
self._logger.log("debug", f"Cannot fetch AMP MCP configs: {e}")
return {}
def _resolve_external(self, mcp_ref: str) -> list[BaseTool]:
"""Resolve an HTTPS MCP server URL into tools."""
from crewai.tools.mcp_tool_wrapper import MCPToolWrapper
if "#" in mcp_ref:
server_url, specific_tool = mcp_ref.split("#", 1)
else:
server_url, specific_tool = mcp_ref, None
server_params = {"url": server_url}
server_name = self._extract_server_name(server_url)
try:
tool_schemas = self._get_mcp_tool_schemas(server_params)
if not tool_schemas:
self._logger.log(
"warning", f"No tools discovered from MCP server: {server_url}"
)
return []
tools = []
for tool_name, schema in tool_schemas.items():
if specific_tool and tool_name != specific_tool:
continue
try:
wrapper = MCPToolWrapper(
mcp_server_params=server_params,
tool_name=tool_name,
tool_schema=schema,
server_name=server_name,
)
tools.append(wrapper)
except Exception as e:
self._logger.log(
"warning",
f"Failed to create MCP tool wrapper for {tool_name}: {e}",
)
continue
if specific_tool and not tools:
self._logger.log(
"warning",
f"Specific tool '{specific_tool}' not found on MCP server: {server_url}",
)
return cast(list[BaseTool], tools)
except Exception as e:
self._logger.log(
"warning", f"Failed to connect to MCP server {server_url}: {e}"
)
return []
def _resolve_native(
self, mcp_config: MCPServerConfig
) -> tuple[list[BaseTool], Any | None]:
"""Resolve an ``MCPServerConfig`` into tools, returning the client for cleanup."""
from crewai.tools.base_tool import BaseTool
from crewai.tools.mcp_native_tool import MCPNativeTool
transport: StdioTransport | HTTPTransport | SSETransport
if isinstance(mcp_config, MCPServerStdio):
transport = StdioTransport(
command=mcp_config.command,
args=mcp_config.args,
env=mcp_config.env,
)
server_name = f"{mcp_config.command}_{'_'.join(mcp_config.args)}"
elif isinstance(mcp_config, MCPServerHTTP):
transport = HTTPTransport(
url=mcp_config.url,
headers=mcp_config.headers,
streamable=mcp_config.streamable,
)
server_name = self._extract_server_name(mcp_config.url)
elif isinstance(mcp_config, MCPServerSSE):
transport = SSETransport(
url=mcp_config.url,
headers=mcp_config.headers,
)
server_name = self._extract_server_name(mcp_config.url)
else:
raise ValueError(f"Unsupported MCP server config type: {type(mcp_config)}")
client = MCPClient(
transport=transport,
cache_tools_list=mcp_config.cache_tools_list,
)
async def _setup_client_and_list_tools() -> list[dict[str, Any]]:
try:
if not client.connected:
await client.connect()
tools_list = await client.list_tools()
try:
await client.disconnect()
await asyncio.sleep(0.1)
except Exception as e:
self._logger.log("error", f"Error during disconnect: {e}")
return tools_list
except Exception as e:
if client.connected:
await client.disconnect()
await asyncio.sleep(0.1)
raise RuntimeError(
f"Error during setup client and list tools: {e}"
) from e
try:
try:
asyncio.get_running_loop()
import concurrent.futures
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(
asyncio.run, _setup_client_and_list_tools()
)
tools_list = future.result()
except RuntimeError:
try:
tools_list = asyncio.run(_setup_client_and_list_tools())
except RuntimeError as e:
error_msg = str(e).lower()
if "cancel scope" in error_msg or "task" in error_msg:
raise ConnectionError(
"MCP connection failed due to event loop cleanup issues. "
"This may be due to authentication errors or server unavailability."
) from e
except asyncio.CancelledError as e:
raise ConnectionError(
"MCP connection was cancelled. This may indicate an authentication "
"error or server unavailability."
) from e
if mcp_config.tool_filter:
filtered_tools = []
for tool in tools_list:
if callable(mcp_config.tool_filter):
try:
from crewai.mcp.filters import ToolFilterContext
context = ToolFilterContext(
agent=self._agent,
server_name=server_name,
run_context=None,
)
if mcp_config.tool_filter(context, tool): # type: ignore[call-arg, arg-type]
filtered_tools.append(tool)
except (TypeError, AttributeError):
if mcp_config.tool_filter(tool): # type: ignore[call-arg, arg-type]
filtered_tools.append(tool)
else:
filtered_tools.append(tool)
tools_list = filtered_tools
tools = []
for tool_def in tools_list:
tool_name = tool_def.get("name", "")
original_tool_name = tool_def.get("original_name", tool_name)
if not tool_name:
continue
args_schema = None
if tool_def.get("inputSchema"):
args_schema = self._json_schema_to_pydantic(
tool_name, tool_def["inputSchema"]
)
tool_schema = {
"description": tool_def.get("description", ""),
"args_schema": args_schema,
}
try:
native_tool = MCPNativeTool(
mcp_client=client,
tool_name=tool_name,
tool_schema=tool_schema,
server_name=server_name,
original_tool_name=original_tool_name,
)
tools.append(native_tool)
except Exception as e:
self._logger.log("error", f"Failed to create native MCP tool: {e}")
continue
return cast(list[BaseTool], tools), client
except Exception as e:
if client.connected:
asyncio.run(client.disconnect())
raise RuntimeError(f"Failed to get native MCP tools: {e}") from e
@staticmethod
def _build_mcp_config_from_dict(
config_dict: dict[str, Any],
) -> MCPServerConfig:
"""Convert a config dict from crewai-oauth into an MCPServerConfig."""
config_type = config_dict.get("type", "http")
if config_type == "sse":
return MCPServerSSE(
url=config_dict["url"],
headers=config_dict.get("headers"),
cache_tools_list=config_dict.get("cache_tools_list", False),
)
return MCPServerHTTP(
url=config_dict["url"],
headers=config_dict.get("headers"),
streamable=config_dict.get("streamable", True),
cache_tools_list=config_dict.get("cache_tools_list", False),
)
@staticmethod
def _extract_server_name(server_url: str) -> str:
"""Extract clean server name from URL for tool prefixing."""
parsed = urlparse(server_url)
domain = parsed.netloc.replace(".", "_")
path = parsed.path.replace("/", "_").strip("_")
return f"{domain}_{path}" if path else domain
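For example, following the urlparse logic above:

assert MCPToolResolver._extract_server_name("https://mcp.example.com/api") == "mcp_example_com_api"
assert MCPToolResolver._extract_server_name("https://mcp.example.com") == "mcp_example_com"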
def _get_mcp_tool_schemas(
self, server_params: dict[str, Any]
) -> dict[str, dict[str, Any]]:
"""Get tool schemas from MCP server with caching."""
server_url = server_params["url"]
cache_key = server_url
current_time = time.time()
if cache_key in _mcp_schema_cache:
cached_data, cache_time = _mcp_schema_cache[cache_key]
if current_time - cache_time < _cache_ttl:
self._logger.log(
"debug", f"Using cached MCP tool schemas for {server_url}"
)
return cached_data # type: ignore[no-any-return]
try:
schemas = asyncio.run(self._get_mcp_tool_schemas_async(server_params))
_mcp_schema_cache[cache_key] = (schemas, current_time)
return schemas
except Exception as e:
self._logger.log(
"warning", f"Failed to get MCP tool schemas from {server_url}: {e}"
)
return {}
async def _get_mcp_tool_schemas_async(
self, server_params: dict[str, Any]
) -> dict[str, dict[str, Any]]:
"""Async implementation of MCP tool schema retrieval."""
server_url = server_params["url"]
return await self._retry_mcp_discovery(
self._discover_mcp_tools_with_timeout, server_url
)
async def _retry_mcp_discovery(
self, operation_func: Any, server_url: str
) -> dict[str, dict[str, Any]]:
"""Retry MCP discovery with exponential backoff."""
last_error = None
for attempt in range(MCP_MAX_RETRIES):
result, error, should_retry = await self._attempt_mcp_discovery(
operation_func, server_url
)
if result is not None:
return result
if not should_retry:
raise RuntimeError(error)
last_error = error
if attempt < MCP_MAX_RETRIES - 1:
wait_time = 2**attempt
await asyncio.sleep(wait_time)
raise RuntimeError(
f"Failed to discover MCP tools after {MCP_MAX_RETRIES} attempts: {last_error}"
)
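Backoff timing sketch: with MCP_MAX_RETRIES = 3 the loop sleeps 2**attempt seconds after each retryable failure, so at most two waits occur before the final RuntimeError:

waits = [2**attempt for attempt in range(MCP_MAX_RETRIES - 1)]
assert waits == [1, 2]  # third failure raises instead of sleeping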
@staticmethod
async def _attempt_mcp_discovery(
operation_func: Any, server_url: str
) -> tuple[dict[str, dict[str, Any]] | None, str, bool]:
"""Attempt single MCP discovery; returns *(result, error_message, should_retry)*."""
try:
result = await operation_func(server_url)
return result, "", False
except ImportError:
return (
None,
"MCP library not available. Please install with: pip install mcp",
False,
)
except asyncio.TimeoutError:
return (
None,
f"MCP discovery timed out after {MCP_DISCOVERY_TIMEOUT} seconds",
True,
)
except Exception as e:
error_str = str(e).lower()
if "authentication" in error_str or "unauthorized" in error_str:
return None, f"Authentication failed for MCP server: {e!s}", False
if "connection" in error_str or "network" in error_str:
return None, f"Network connection failed: {e!s}", True
if "json" in error_str or "parsing" in error_str:
return None, f"Server response parsing error: {e!s}", True
return None, f"MCP discovery error: {e!s}", False
async def _discover_mcp_tools_with_timeout(
self, server_url: str
) -> dict[str, dict[str, Any]]:
"""Discover MCP tools with timeout wrapper."""
return await asyncio.wait_for(
self._discover_mcp_tools(server_url), timeout=MCP_DISCOVERY_TIMEOUT
)
async def _discover_mcp_tools(self, server_url: str) -> dict[str, dict[str, Any]]:
"""Discover tools from an MCP server (HTTPS / streamable-HTTP path)."""
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
from crewai.utilities.string_utils import sanitize_tool_name
async with streamablehttp_client(server_url) as (read, write, _):
async with ClientSession(read, write) as session:
await asyncio.wait_for(
session.initialize(), timeout=MCP_CONNECTION_TIMEOUT
)
tools_result = await asyncio.wait_for(
session.list_tools(),
timeout=MCP_DISCOVERY_TIMEOUT - MCP_CONNECTION_TIMEOUT,
)
schemas = {}
for tool in tools_result.tools:
args_schema = None
if hasattr(tool, "inputSchema") and tool.inputSchema:
args_schema = self._json_schema_to_pydantic(
sanitize_tool_name(tool.name), tool.inputSchema
)
schemas[sanitize_tool_name(tool.name)] = {
"description": getattr(tool, "description", ""),
"args_schema": args_schema,
}
return schemas
@staticmethod
def _json_schema_to_pydantic(tool_name: str, json_schema: dict[str, Any]) -> type:
"""Convert JSON Schema to a Pydantic model for tool arguments."""
from crewai.utilities.pydantic_schema_utils import create_model_from_schema
model_name = f"{tool_name.replace('-', '_').replace(' ', '_')}Schema"
return create_model_from_schema(
json_schema,
model_name=model_name,
enrich_descriptions=True,
)
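A hedged usage sketch; the schema and tool name here are illustrative, and the generated fields depend on create_model_from_schema:

schema = {"type": "object", "properties": {"query": {"type": "string"}}, "required": ["query"]}
Model = MCPToolResolver._json_schema_to_pydantic("search-tool", schema)
# the model name becomes "search_toolSchema" per the replace() calls above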

View File

@@ -1,6 +1,14 @@
"""Memory module: unified Memory with LLM analysis and pluggable storage."""
"""Memory module: unified Memory with LLM analysis and pluggable storage.
Heavy dependencies are lazily imported so that ``import crewai`` does not
initialise them at import time, which is critical for Celery pre-fork and
similar deployment patterns.
"""
from __future__ import annotations
from typing import Any
from crewai.memory.encoding_flow import EncodingFlow
from crewai.memory.memory_scope import MemoryScope, MemorySlice
from crewai.memory.types import (
MemoryMatch,
@@ -10,7 +18,25 @@ from crewai.memory.types import (
embed_text,
embed_texts,
)
from crewai.memory.unified_memory import Memory
_LAZY_IMPORTS: dict[str, tuple[str, str]] = {
"Memory": ("crewai.memory.unified_memory", "Memory"),
"EncodingFlow": ("crewai.memory.encoding_flow", "EncodingFlow"),
}
def __getattr__(name: str) -> Any:
"""Lazily import Memory / EncodingFlow to avoid pulling in lancedb at import time."""
if name in _LAZY_IMPORTS:
import importlib
module_path, attr = _LAZY_IMPORTS[name]
mod = importlib.import_module(module_path)
val = getattr(mod, attr)
globals()[name] = val
return val
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
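Illustrative effect of the module-level __getattr__ (PEP 562) above, assuming no remaining eager import pulls the storage backend in:

import crewai.memory                    # cheap: unified_memory / lancedb not loaded yet
Memory = crewai.memory.Memory           # first access triggers the lazy import
assert Memory is crewai.memory.Memory   # later accesses hit the cached global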
__all__ = [

View File

@@ -145,7 +145,7 @@ class MemoryScope:
class MemorySlice:
"""View over multiple scopes: recall searches all, remember requires explicit scope unless read_only."""
"""View over multiple scopes: recall searches all, remember is a no-op when read_only."""
def __init__(
self,
@@ -160,7 +160,7 @@ class MemorySlice:
memory: The underlying Memory instance.
scopes: List of scope paths to include.
categories: Optional category filter for recall.
read_only: If True, remember() raises PermissionError.
read_only: If True, remember() is a silent no-op.
"""
self._memory = memory
self._scopes = [s.rstrip("/") or "/" for s in scopes]
@@ -176,10 +176,10 @@ class MemorySlice:
importance: float | None = None,
source: str | None = None,
private: bool = False,
) -> MemoryRecord:
"""Remember into an explicit scope. Required when read_only=False."""
) -> MemoryRecord | None:
"""Remember into an explicit scope. No-op when read_only=True."""
if self._read_only:
raise PermissionError("This MemorySlice is read-only")
return None
return self._memory.remember(
content,
scope=scope,

View File

@@ -2,7 +2,6 @@
Implements adaptive-depth retrieval with:
- LLM query distillation into targeted sub-queries
- Keyword-driven category filtering
- Time-based filtering from temporal hints
- Parallel multi-query, multi-scope search
- Confidence-based routing with iterative deepening (budget loop)
@@ -37,7 +36,6 @@ class RecallState(BaseModel):
query: str = ""
scope: str | None = None
categories: list[str] | None = None
inferred_categories: list[str] = Field(default_factory=list)
time_cutoff: datetime | None = None
source: str | None = None
include_private: bool = False
@@ -82,11 +80,8 @@ class RecallFlow(Flow[RecallState]):
# ------------------------------------------------------------------
def _merged_categories(self) -> list[str] | None:
"""Merge caller-supplied and LLM-inferred categories."""
merged = list(
set((self.state.categories or []) + self.state.inferred_categories)
)
return merged or None
"""Return caller-supplied categories, or None if empty."""
return self.state.categories or None
def _do_search(self) -> list[dict[str, Any]]:
"""Run parallel search across (embeddings x scopes) with filters.
@@ -212,10 +207,6 @@ class RecallFlow(Flow[RecallState]):
)
self.state.query_analysis = analysis
# Wire keywords -> category filter
if analysis.keywords:
self.state.inferred_categories = analysis.keywords
# Parse time_filter into a datetime cutoff
if analysis.time_filter:
try:

View File

@@ -53,6 +53,7 @@ class LanceDBStorage:
path: str | Path | None = None,
table_name: str = "memories",
vector_dim: int | None = None,
compact_every: int = 100,
) -> None:
"""Initialize LanceDB storage.
@@ -64,6 +65,10 @@ class LanceDBStorage:
vector_dim: Dimensionality of the embedding vector. When ``None``
(default), the dimension is auto-detected from the existing
table schema or from the first saved embedding.
compact_every: Number of ``save()`` calls between automatic
background compactions. Each ``save()`` creates one new
fragment file; compaction merges them, keeping query
performance consistent. Set to 0 to disable.
"""
if path is None:
storage_dir = os.environ.get("CREWAI_STORAGE_DIR")
@@ -78,6 +83,22 @@ class LanceDBStorage:
self._table_name = table_name
self._db = lancedb.connect(str(self._path))
# On macOS and Linux the default per-process open-file limit is 256.
# A LanceDB table stores one file per fragment (one fragment per save()
# call by default). With hundreds of fragments, a single full-table
# scan opens all of them simultaneously, exhausting the limit.
# Raise it proactively so scans on large tables never hit OS error 24.
try:
import resource
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if soft < 4096:
resource.setrlimit(resource.RLIMIT_NOFILE, (min(hard, 4096), hard))
except Exception: # noqa: S110
pass # Windows or already at the max hard limit — safe to ignore
self._compact_every = compact_every
self._save_count = 0
# Get or create a shared write lock for this database path.
resolved = str(self._path.resolve())
with LanceDBStorage._path_locks_guard:
@@ -91,6 +112,11 @@ class LanceDBStorage:
try:
self._table: lancedb.table.Table | None = self._db.open_table(self._table_name)
self._vector_dim: int = self._infer_dim_from_table(self._table)
# Best-effort: create the scope index if it doesn't exist yet.
self._ensure_scope_index()
# Compact in the background if the table has accumulated many
# fragments from previous runs (each save() creates one).
self._compact_if_needed()
except Exception:
self._table = None
self._vector_dim = vector_dim or 0 # 0 = not yet known
@@ -178,6 +204,56 @@ class LanceDBStorage:
table.delete("id = '__schema_placeholder__'")
return table
def _ensure_scope_index(self) -> None:
"""Create a BTREE scalar index on the ``scope`` column if not present.
A scalar index lets LanceDB skip a full table scan when filtering by
scope prefix, which is the hot path for ``list_records``,
``get_scope_info``, and ``list_scopes``. The call is best-effort:
if the table is empty or the index already exists the exception is
swallowed silently.
"""
if self._table is None:
return
try:
self._table.create_scalar_index("scope", index_type="BTREE", replace=False)
except Exception: # noqa: S110
pass # index already exists, table empty, or unsupported version
# ------------------------------------------------------------------
# Automatic background compaction
# ------------------------------------------------------------------
def _compact_if_needed(self) -> None:
"""Spawn a background compaction on startup.
Called whenever an existing table is opened so that fragments
accumulated in previous sessions are silently merged before the
first query. ``optimize()`` returns quickly when the table is
already compact, so the cost is negligible in the common case.
"""
if self._table is None or self._compact_every <= 0:
return
self._compact_async()
def _compact_async(self) -> None:
"""Fire-and-forget: compact the table in a daemon background thread."""
threading.Thread(
target=self._compact_safe,
daemon=True,
name="lancedb-compact",
).start()
def _compact_safe(self) -> None:
"""Run ``table.optimize()`` in a background thread, absorbing errors."""
try:
if self._table is not None:
self._table.optimize()
# Refresh the scope index so new fragments are covered.
self._ensure_scope_index()
except Exception:
_logger.debug("LanceDB background compaction failed", exc_info=True)
def _ensure_table(self, vector_dim: int | None = None) -> lancedb.table.Table:
"""Return the table, creating it lazily if needed.
@@ -239,6 +315,7 @@ class LanceDBStorage:
if r.embedding and len(r.embedding) > 0:
dim = len(r.embedding)
break
is_new_table = self._table is None
with self._write_lock:
self._ensure_table(vector_dim=dim)
rows = [self._record_to_row(r) for r in records]
@@ -246,6 +323,13 @@ class LanceDBStorage:
if r["vector"] is None or len(r["vector"]) != self._vector_dim:
r["vector"] = [0.0] * self._vector_dim
self._retry_write("add", rows)
# Create the scope index on the first save so it covers the initial dataset.
if is_new_table:
self._ensure_scope_index()
# Auto-compact every N saves so fragment files don't pile up.
self._save_count += 1
if self._compact_every > 0 and self._save_count % self._compact_every == 0:
self._compact_async()
def update(self, record: MemoryRecord) -> None:
"""Update a record by ID. Preserves created_at, updates last_accessed."""
@@ -261,6 +345,10 @@ class LanceDBStorage:
def touch_records(self, record_ids: list[str]) -> None:
"""Update last_accessed to now for the given record IDs.
Uses a single batch ``table.update()`` call instead of N
delete-and-re-add cycles, which is both faster and avoids
unnecessary write amplification.
Args:
record_ids: IDs of records to touch.
"""
@@ -268,25 +356,20 @@ class LanceDBStorage:
return
with self._write_lock:
now = datetime.utcnow().isoformat()
for rid in record_ids:
safe_id = str(rid).replace("'", "''")
rows = (
self._table.search([0.0] * self._vector_dim)
.where(f"id = '{safe_id}'")
.limit(1)
.to_list()
)
if rows:
rows[0]["last_accessed"] = now
self._retry_write("delete", f"id = '{safe_id}'")
self._retry_write("add", [rows[0]])
safe_ids = [str(rid).replace("'", "''") for rid in record_ids]
ids_expr = ", ".join(f"'{rid}'" for rid in safe_ids)
self._retry_write(
"update",
where=f"id IN ({ids_expr})",
values={"last_accessed": now},
)
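For two IDs the generated predicate doubles any embedded quote, e.g. (IDs illustrative):

# record_ids ["abc", "de'f"]  ->  where = "id IN ('abc', 'de''f')"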
def get_record(self, record_id: str) -> MemoryRecord | None:
"""Return a single record by ID, or None if not found."""
if self._table is None:
return None
safe_id = str(record_id).replace("'", "''")
rows = self._table.search([0.0] * self._vector_dim).where(f"id = '{safe_id}'").limit(1).to_list()
rows = self._table.search().where(f"id = '{safe_id}'").limit(1).to_list()
if not rows:
return None
return self._row_to_record(rows[0])
@@ -374,13 +457,31 @@ class LanceDBStorage:
self._retry_write("delete", where_expr)
return before - self._table.count_rows()
def _scan_rows(self, scope_prefix: str | None = None, limit: int = _SCAN_ROWS_LIMIT) -> list[dict[str, Any]]:
"""Scan rows optionally filtered by scope prefix."""
def _scan_rows(
self,
scope_prefix: str | None = None,
limit: int = _SCAN_ROWS_LIMIT,
columns: list[str] | None = None,
) -> list[dict[str, Any]]:
"""Scan rows optionally filtered by scope prefix.
Uses a full table scan (no vector query) so the limit is applied after
the scope filter, not to ANN candidates before filtering.
Args:
scope_prefix: Optional scope path prefix to filter by.
limit: Maximum number of rows to return (applied after filtering).
columns: Optional list of column names to fetch. Pass only the
columns you need for metadata operations to avoid reading the
heavy ``vector`` column unnecessarily.
"""
if self._table is None:
return []
q = self._table.search([0.0] * self._vector_dim)
q = self._table.search()
if scope_prefix is not None and scope_prefix.strip("/"):
q = q.where(f"scope LIKE '{scope_prefix.rstrip('/')}%'")
if columns is not None:
q = q.select(columns)
return q.limit(limit).to_list()
def list_records(
@@ -406,7 +507,10 @@ class LanceDBStorage:
prefix = scope if scope != "/" else ""
if prefix and not prefix.startswith("/"):
prefix = "/" + prefix
rows = self._scan_rows(prefix or None)
rows = self._scan_rows(
prefix or None,
columns=["scope", "categories_str", "created_at"],
)
if not rows:
return ScopeInfo(
path=scope or "/",
@@ -453,7 +557,7 @@ class LanceDBStorage:
def list_scopes(self, parent: str = "/") -> list[str]:
parent = parent.rstrip("/") or ""
prefix = (parent + "/") if parent else "/"
rows = self._scan_rows(prefix if prefix != "/" else None)
rows = self._scan_rows(prefix if prefix != "/" else None, columns=["scope"])
children: set[str] = set()
for row in rows:
sc = str(row.get("scope", ""))
@@ -465,7 +569,7 @@ class LanceDBStorage:
return sorted(children)
def list_categories(self, scope_prefix: str | None = None) -> dict[str, int]:
rows = self._scan_rows(scope_prefix)
rows = self._scan_rows(scope_prefix, columns=["categories_str"])
counts: dict[str, int] = {}
for row in rows:
cat_str = row.get("categories_str") or "[]"
@@ -498,6 +602,21 @@ class LanceDBStorage:
if prefix:
self._table.delete(f"scope >= '{prefix}' AND scope < '{prefix}/\uFFFF'")
def optimize(self) -> None:
"""Compact the table synchronously and refresh the scope index.
Under normal usage this is called automatically in the background
(every ``compact_every`` saves and on startup when the table is
fragmented). Call this explicitly only when you need the compaction
to be complete before the next operation — for example immediately
after a large bulk import, before a latency-sensitive recall.
It is a no-op if the table does not exist.
"""
if self._table is None:
return
self._table.optimize()
self._ensure_scope_index()
async def asave(self, records: list[MemoryRecord]) -> None:
self.save(records)
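Usage sketch for the synchronous path, with variable names illustrative:

storage = LanceDBStorage()      # compact_every=100 by default
storage.save(bulk_records)      # each save() adds one fragment
storage.optimize()              # merge fragments now, before a latency-sensitive recall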

View File

@@ -87,6 +87,22 @@ class MemoryMatch(BaseModel):
description="Information the system looked for but could not find.",
)
def format(self) -> str:
"""Format this match as a human-readable string including metadata.
Returns:
A multi-line string with score, content, categories, and non-empty
metadata fields.
"""
lines = [f"- (score={self.score:.2f}) {self.record.content}"]
if self.record.categories:
lines.append(f" categories: {', '.join(self.record.categories)}")
if self.record.metadata:
for key, value in self.record.metadata.items():
if value is not None:
lines.append(f" {key}: {value}")
return "\n".join(lines)
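Illustrative format() output for a match with one category and one metadata field (all values hypothetical):

- (score=0.87) User adopted a Golden Retriever named Max
  categories: pets
  source: conversation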
class ScopeInfo(BaseModel):
"""Information about a scope in the memory hierarchy."""
@@ -291,7 +307,7 @@ def embed_text(embedder: Any, text: str) -> list[float]:
return []
first = result[0]
if hasattr(first, "tolist"):
return first.tolist()
return list(first.tolist())
if isinstance(first, list):
return [float(x) for x in first]
return list(first)

View File

@@ -6,7 +6,7 @@ from concurrent.futures import Future, ThreadPoolExecutor
from datetime import datetime
import threading
import time
from typing import Any, Literal
from typing import TYPE_CHECKING, Any, Literal
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.memory_events import (
@@ -21,7 +21,6 @@ from crewai.llms.base_llm import BaseLLM
from crewai.memory.analyze import extract_memories_from_content
from crewai.memory.recall_flow import RecallFlow
from crewai.memory.storage.backend import StorageBackend
from crewai.memory.storage.lancedb_storage import LanceDBStorage
from crewai.memory.types import (
MemoryConfig,
MemoryMatch,
@@ -30,13 +29,20 @@ from crewai.memory.types import (
compute_composite_score,
embed_text,
)
from crewai.rag.embeddings.factory import build_embedder
from crewai.rag.embeddings.providers.openai.types import OpenAIProviderSpec
def _default_embedder() -> Any:
if TYPE_CHECKING:
from chromadb.utils.embedding_functions.openai_embedding_function import (
OpenAIEmbeddingFunction,
)
def _default_embedder() -> OpenAIEmbeddingFunction:
"""Build default OpenAI embedder for memory."""
from crewai.rag.embeddings.factory import build_embedder
return build_embedder({"provider": "openai", "config": {}})
spec: OpenAIProviderSpec = {"provider": "openai", "config": {}}
return build_embedder(spec)
class Memory:
@@ -88,6 +94,10 @@ class Memory:
# Queries shorter than this skip LLM analysis (saving ~1-3s).
# Longer queries (full task descriptions) benefit from LLM distillation.
query_analysis_threshold: int = 200,
# When True, all write operations (remember, remember_many) are silently
# skipped. Useful for sharing a read-only view of memory across agents
# without any of them persisting new memories.
read_only: bool = False,
) -> None:
"""Initialize Memory.
@@ -107,7 +117,9 @@ class Memory:
complex_query_threshold: For complex queries, explore deeper below this confidence.
exploration_budget: Number of LLM-driven exploration rounds during deep recall.
query_analysis_threshold: Queries shorter than this skip LLM analysis during deep recall.
read_only: If True, remember() and remember_many() are silent no-ops.
"""
self._read_only = read_only
self._config = MemoryConfig(
recency_weight=recency_weight,
semantic_weight=semantic_weight,
@@ -130,14 +142,15 @@ class Memory:
self._llm_instance: BaseLLM | None = None if isinstance(llm, str) else llm
self._embedder_config: Any = embedder
self._embedder_instance: Any = (
embedder if (embedder is not None and not isinstance(embedder, dict)) else None
embedder
if (embedder is not None and not isinstance(embedder, dict))
else None
)
# Storage is initialized eagerly (local, no API key needed).
if storage == "lancedb":
self._storage = LanceDBStorage()
elif isinstance(storage, str):
self._storage = LanceDBStorage(path=storage)
if isinstance(storage, str):
from crewai.memory.storage.lancedb_storage import LanceDBStorage
self._storage = LanceDBStorage() if storage == "lancedb" else LanceDBStorage(path=storage)
else:
self._storage = storage
@@ -160,12 +173,17 @@ class Memory:
from crewai.llm import LLM
try:
self._llm_instance = LLM(model=self._llm_config)
model_name = (
self._llm_config
if isinstance(self._llm_config, str)
else str(self._llm_config)
)
self._llm_instance = LLM(model=model_name)
except Exception as e:
raise RuntimeError(
f"Memory requires an LLM for analysis but initialization failed: {e}\n\n"
"To fix this, do one of the following:\n"
' - Set OPENAI_API_KEY for the default model (gpt-4o-mini)\n'
" - Set OPENAI_API_KEY for the default model (gpt-4o-mini)\n"
' - Pass a different model: Memory(llm="anthropic/claude-3-haiku-20240307")\n'
' - Pass any LLM instance: Memory(llm=LLM(model="your-model"))\n'
" - To skip LLM analysis, pass all fields explicitly to remember()\n"
@@ -180,8 +198,6 @@ class Memory:
if self._embedder_instance is None:
try:
if isinstance(self._embedder_config, dict):
from crewai.rag.embeddings.factory import build_embedder
self._embedder_instance = build_embedder(self._embedder_config)
else:
self._embedder_instance = _default_embedder()
@@ -317,7 +333,7 @@ class Memory:
source: str | None = None,
private: bool = False,
agent_role: str | None = None,
) -> MemoryRecord:
) -> MemoryRecord | None:
"""Store a single item in memory (synchronous).
Routes through the same serialized save pool as ``remember_many``
@@ -335,11 +351,13 @@ class Memory:
agent_role: Optional agent role for event metadata.
Returns:
The created MemoryRecord.
The created MemoryRecord, or None if this memory is read-only.
Raises:
Exception: On save failure (events emitted).
"""
if self._read_only:
return None
_source_type = "unified_memory"
try:
crewai_event_bus.emit(
@@ -356,7 +374,13 @@ class Memory:
# then immediately wait for the result.
future = self._submit_save(
self._encode_batch,
[content], scope, categories, metadata, importance, source, private,
[content],
scope,
categories,
metadata,
importance,
source,
private,
)
records = future.result()
record = records[0] if records else None
@@ -420,13 +444,19 @@ class Memory:
Returns:
Empty list (records are not available until the background save completes).
"""
if not contents:
if not contents or self._read_only:
return []
self._submit_save(
self._background_encode_batch,
contents, scope, categories, metadata,
importance, source, private, agent_role,
contents,
scope,
categories,
metadata,
importance,
source,
private,
agent_role,
)
return []
@@ -566,14 +596,13 @@ class Memory:
# Privacy filter
if not include_private:
raw = [
(r, s) for r, s in raw
(r, s)
for r, s in raw
if not r.private or r.source == source
]
results = []
for r, s in raw:
composite, reasons = compute_composite_score(
r, s, self._config
)
composite, reasons = compute_composite_score(r, s, self._config)
results.append(
MemoryMatch(
record=r,
@@ -739,7 +768,9 @@ class Memory:
limit: Maximum number of records to return.
offset: Number of records to skip (for pagination).
"""
return self._storage.list_records(scope_prefix=scope, limit=limit, offset=offset)
return self._storage.list_records(
scope_prefix=scope, limit=limit, offset=offset
)
def info(self, path: str = "/") -> ScopeInfo:
"""Return scope info for path."""
@@ -781,7 +812,7 @@ class Memory:
importance: float | None = None,
source: str | None = None,
private: bool = False,
) -> MemoryRecord:
) -> MemoryRecord | None:
"""Async remember: delegates to sync for now."""
return self.remember(
content,
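Read-only behaviour sketch, consistent with the remember / remember_many changes above (names illustrative):

shared = Memory(read_only=True)
assert shared.remember("User prefers dark mode") is None  # silent no-op
assert shared.remember_many(["a", "b"]) == []             # also skipped
matches = shared.recall("dark mode")                      # reads still work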

View File

@@ -216,6 +216,10 @@ def build_embedder_from_dict(
def build_embedder_from_dict(spec: ONNXProviderSpec) -> ONNXMiniLM_L6_V2: ...
@overload
def build_embedder_from_dict(spec: dict[str, Any]) -> EmbeddingFunction[Any]: ...
def build_embedder_from_dict(spec): # type: ignore[no-untyped-def]
"""Build an embedding function instance from a dictionary specification.
@@ -341,6 +345,10 @@ def build_embedder(spec: Text2VecProviderSpec) -> Text2VecEmbeddingFunction: ...
def build_embedder(spec: ONNXProviderSpec) -> ONNXMiniLM_L6_V2: ...
@overload
def build_embedder(spec: dict[str, Any]) -> EmbeddingFunction[Any]: ...
def build_embedder(spec): # type: ignore[no-untyped-def]
"""Build an embedding function from either a provider spec or a provider instance.

View File

@@ -586,16 +586,29 @@ class Task(BaseModel):
self._post_agent_execution(agent)
if not self._guardrails and not self._guardrail:
if isinstance(result, BaseModel):
raw = result.model_dump_json()
if self.output_pydantic:
pydantic_output = result
json_output = None
elif self.output_json:
pydantic_output = None
json_output = result.model_dump()
else:
pydantic_output = None
json_output = None
elif not self._guardrails and not self._guardrail:
raw = result
pydantic_output, json_output = self._export_output(result)
else:
raw = result
pydantic_output, json_output = None, None
task_output = TaskOutput(
name=self.name or self.description,
description=self.description,
expected_output=self.expected_output,
raw=result,
raw=raw,
pydantic=pydantic_output,
json_dict=json_output,
agent=agent.role,
@@ -687,16 +700,29 @@ class Task(BaseModel):
self._post_agent_execution(agent)
if not self._guardrails and not self._guardrail:
if isinstance(result, BaseModel):
raw = result.model_dump_json()
if self.output_pydantic:
pydantic_output = result
json_output = None
elif self.output_json:
pydantic_output = None
json_output = result.model_dump()
else:
pydantic_output = None
json_output = None
elif not self._guardrails and not self._guardrail:
raw = result
pydantic_output, json_output = self._export_output(result)
else:
raw = result
pydantic_output, json_output = None, None
task_output = TaskOutput(
name=self.name or self.description,
description=self.description,
expected_output=self.expected_output,
raw=result,
raw=raw,
pydantic=pydantic_output,
json_dict=json_output,
agent=agent.role,

View File

@@ -1,5 +1,4 @@
from crewai.telemetry.telemetry import Telemetry
__all__ = ["Telemetry"]

View File

@@ -173,6 +173,12 @@ class Telemetry:
self._original_handlers: dict[int, Any] = {}
if threading.current_thread() is not threading.main_thread():
logger.debug(
"Skipping signal handler registration: not running in main thread"
)
return
self._register_signal_handler(signal.SIGTERM, SigTermEvent, shutdown=True)
self._register_signal_handler(signal.SIGINT, SigIntEvent, shutdown=True)
if hasattr(signal, "SIGHUP"):

View File

@@ -1,7 +1,6 @@
from crewai.tools.base_tool import BaseTool, EnvVar, tool
__all__ = [
"BaseTool",
"EnvVar",

View File

@@ -23,7 +23,7 @@ from pydantic import (
)
from typing_extensions import TypeIs
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.tools.structured_tool import CrewStructuredTool, build_schema_hint
from crewai.utilities.printer import Printer
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.string_utils import sanitize_tool_name
@@ -150,14 +150,39 @@ class BaseTool(BaseModel, ABC):
super().model_post_init(__context)
def _validate_kwargs(self, kwargs: dict[str, Any]) -> dict[str, Any]:
"""Validate keyword arguments against args_schema if present.
Args:
kwargs: The keyword arguments to validate.
Returns:
Validated (and possibly coerced) keyword arguments.
Raises:
ValueError: If validation against args_schema fails.
"""
if self.args_schema is not None and self.args_schema.model_fields:
try:
validated = self.args_schema.model_validate(kwargs)
return validated.model_dump()
except Exception as e:
hint = build_schema_hint(self.args_schema)
raise ValueError(
f"Tool '{self.name}' arguments validation failed: {e}{hint}"
) from e
return kwargs
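A sketch of the coercion _validate_kwargs performs, assuming a simple illustrative args_schema:

from pydantic import BaseModel

class AddArgs(BaseModel):
    n: int

# with args_schema=AddArgs, kwargs {"n": "3"} validate and coerce to {"n": 3};
# invalid kwargs raise ValueError carrying the build_schema_hint text
assert AddArgs.model_validate({"n": "3"}).model_dump() == {"n": 3}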
def run(
self,
*args: Any,
**kwargs: Any,
) -> Any:
if not args:
kwargs = self._validate_kwargs(kwargs)
result = self._run(*args, **kwargs)
# If _run returned a coroutine, run it to completion
if asyncio.iscoroutine(result):
result = asyncio.run(result)
@@ -179,6 +204,8 @@ class BaseTool(BaseModel, ABC):
Returns:
The result of the tool execution.
"""
if not args:
kwargs = self._validate_kwargs(kwargs)
result = await self._arun(*args, **kwargs)
self.current_usage_count += 1
return result
@@ -331,6 +358,9 @@ class Tool(BaseTool, Generic[P, R]):
Returns:
The result of the tool execution.
"""
if not args:
kwargs = self._validate_kwargs(kwargs) # type: ignore[assignment]
result = self.func(*args, **kwargs)
if asyncio.iscoroutine(result):
@@ -361,6 +391,8 @@ class Tool(BaseTool, Generic[P, R]):
Returns:
The result of the tool execution.
"""
if not args:
kwargs = self._validate_kwargs(kwargs) # type: ignore[assignment]
result = await self._arun(*args, **kwargs)
self.current_usage_count += 1
return result

View File

@@ -27,14 +27,16 @@ class MCPNativeTool(BaseTool):
tool_name: str,
tool_schema: dict[str, Any],
server_name: str,
original_tool_name: str | None = None,
) -> None:
"""Initialize native MCP tool.
Args:
mcp_client: MCPClient instance with active session.
tool_name: Original name of the tool on the MCP server.
tool_name: Name of the tool (may be prefixed).
tool_schema: Schema information for the tool.
server_name: Name of the MCP server for prefixing.
original_tool_name: Original name of the tool on the MCP server.
"""
# Create tool name with server prefix to avoid conflicts
prefixed_name = f"{server_name}_{tool_name}"
@@ -57,7 +59,7 @@ class MCPNativeTool(BaseTool):
# Set instance attributes after super().__init__
self._mcp_client = mcp_client
self._original_tool_name = tool_name
self._original_tool_name = original_tool_name or tool_name
self._server_name = server_name
# self._logger = logging.getLogger(__name__)

View File

@@ -20,14 +20,6 @@ class RecallMemorySchema(BaseModel):
"or multiple items to search for several things at once."
),
)
scope: str | None = Field(
default=None,
description="Optional scope to narrow the search (e.g. /project/alpha)",
)
depth: str = Field(
default="shallow",
description="'shallow' for fast vector search, 'deep' for LLM-analyzed retrieval",
)
class RecallMemoryTool(BaseTool):
@@ -41,32 +33,27 @@ class RecallMemoryTool(BaseTool):
def _run(
self,
queries: list[str] | str,
scope: str | None = None,
depth: str = "shallow",
**kwargs: Any,
) -> str:
"""Search memory for relevant information.
Args:
queries: One or more search queries (string or list of strings).
scope: Optional scope prefix to narrow the search.
depth: "shallow" for fast vector search, "deep" for LLM-analyzed retrieval.
Returns:
Formatted string of matching memories, or a message if none found.
"""
if isinstance(queries, str):
queries = [queries]
actual_depth = depth if depth in ("shallow", "deep") else "shallow"
all_lines: list[str] = []
seen_ids: set[str] = set()
for query in queries:
matches = self.memory.recall(query, scope=scope, limit=5, depth=actual_depth)
matches = self.memory.recall(query, limit=20)
for m in matches:
if m.record.id not in seen_ids:
seen_ids.add(m.record.id)
all_lines.append(f"- (score={m.score:.2f}) {m.record.content}")
all_lines.append(m.format())
if not all_lines:
return "No relevant memories found."
@@ -117,20 +104,28 @@ class RememberTool(BaseTool):
def create_memory_tools(memory: Any) -> list[BaseTool]:
"""Create Recall and Remember tools for the given memory instance.
When memory is read-only (``_read_only=True``), only the RecallMemoryTool
is returned — the RememberTool is omitted so agents are never offered a
save capability they cannot use.
Args:
memory: A Memory, MemoryScope, or MemorySlice instance.
Returns:
List containing a RecallMemoryTool and a RememberTool.
List containing a RecallMemoryTool and, if not read-only, a RememberTool.
"""
i18n = get_i18n()
return [
tools: list[BaseTool] = [
RecallMemoryTool(
memory=memory,
description=i18n.tools("recall_memory"),
),
RememberTool(
memory=memory,
description=i18n.tools("save_to_memory"),
),
]
if not getattr(memory, "_read_only", False):
tools.append(
RememberTool(
memory=memory,
description=i18n.tools("save_to_memory"),
)
)
return tools
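Sketch of the resulting tool set for a read-only memory:

tools = create_memory_tools(Memory(read_only=True))
assert [type(t).__name__ for t in tools] == ["RecallMemoryTool"]  # RememberTool omitted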

View File

@@ -17,6 +17,27 @@ if TYPE_CHECKING:
from crewai.tools.base_tool import BaseTool
def build_schema_hint(args_schema: type[BaseModel]) -> str:
"""Build a human-readable hint from a Pydantic model's JSON schema.
Args:
args_schema: The Pydantic model class to extract schema from.
Returns:
A formatted string with expected arguments and required fields,
or empty string if schema extraction fails.
"""
try:
schema = args_schema.model_json_schema()
return (
f"\nExpected arguments: "
f"{json.dumps(schema.get('properties', {}))}"
f"\nRequired: {json.dumps(schema.get('required', []))}"
)
except Exception:
return ""
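Illustrative hint for a model with a single required string field; the exact property JSON depends on Pydantic's schema output:

from pydantic import BaseModel

class SearchArgs(BaseModel):
    query: str

# build_schema_hint(SearchArgs) yields roughly:
#   Expected arguments: {"query": {"title": "Query", "type": "string"}}
#   Required: ["query"]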
class ToolUsageLimitExceededError(Exception):
"""Exception raised when a tool has reached its maximum usage limit."""
@@ -208,7 +229,8 @@ class CrewStructuredTool:
validated_args = self.args_schema.model_validate(raw_args)
return validated_args.model_dump()
except Exception as e:
raise ValueError(f"Arguments validation failed: {e}") from e
hint = build_schema_hint(self.args_schema)
raise ValueError(f"Arguments validation failed: {e}{hint}") from e
async def ainvoke(
self,

View File

@@ -7,7 +7,7 @@
"slices": {
"observation": "\nObservation:",
"task": "\nCurrent Task: {input}\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:",
"memory": "\n\n# Useful context: \n{memory}",
"memory": "\n\n# Memories from past conversations:\n{memory}\n\nIMPORTANT: The memories above are an automatic selection and may be INCOMPLETE. If the task involves counting, listing, or summing items (e.g. 'how many', 'total', 'list all'), you MUST use the Search memory tool with several different queries before answering — do NOT rely solely on the memories shown above. Enumerate each distinct item you find before giving a final count.",
"role_playing": "You are {role}. {backstory}\nYour personal goal is: {goal}",
"tools": "\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce all necessary information is gathered, return the following format:\n\n```\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n```",
"no_tools": "",
@@ -60,12 +60,12 @@
"description": "See image to understand its content, you can optionally ask a question about the image",
"default_action": "Please provide a detailed description of this image, including all visual elements, context, and any notable details you can observe."
},
"recall_memory": "Search through the team's shared memory for relevant information. Pass one or more queries to search for multiple things at once. Use this when you need to find facts, decisions, preferences, or past results that may have been stored previously.",
"recall_memory": "Search through the team's shared memory for relevant information. Pass one or more queries to search for multiple things at once. Use this when you need to find facts, decisions, preferences, or past results that may have been stored previously. IMPORTANT: For questions that require counting, summing, or listing items across multiple conversations (e.g. 'how many X', 'total Y', 'list all Z'), you MUST search multiple times with different phrasings to ensure you find ALL relevant items before giving a final count or total. Do not rely on a single search — items may be described differently across conversations.",
"save_to_memory": "Store one or more important facts, decisions, observations, or lessons in memory so they can be recalled later by you or other agents. Pass multiple items at once when you have several things worth remembering."
},
"memory": {
"query_system": "You analyze a query for searching memory.\nGiven the query and available scopes, output:\n1. keywords: Key entities or keywords that can be used to filter by category.\n2. suggested_scopes: Which available scopes are most relevant (empty for all).\n3. complexity: 'simple' or 'complex'.\n4. recall_queries: 1-3 short, targeted search phrases distilled from the query. Each should be a concise phrase optimized for semantic vector search. If the query is already short and focused, return it as-is in a single-item list. For long task descriptions, extract the distinct things worth searching for.\n5. time_filter: If the query references a time period (like 'last week', 'yesterday', 'in January'), return an ISO 8601 date string for the earliest relevant date (e.g. '2026-02-01'). Return null if no time constraint is implied.",
"extract_memories_system": "You extract discrete, reusable memory statements from raw content (e.g. a task description and its result).\n\nFor the given content, output a list of memory statements. Each memory must:\n- Be one clear sentence or short statement\n- Be understandable without the original context\n- Capture a decision, fact, outcome, preference, lesson, or observation worth remembering\n- NOT be a vague summary or a restatement of the task description\n- NOT duplicate the same idea in different words\n\nIf there is nothing worth remembering (e.g. empty result, no decisions or facts), return an empty list.\nOutput a JSON object with a single key \"memories\" whose value is a list of strings.",
"extract_memories_system": "You extract discrete, reusable memory statements from raw content (e.g. a task description and its result, or a conversation between a user and an assistant).\n\nFor the given content, output a list of memory statements. Each memory must:\n- Be one clear sentence or short statement\n- Be understandable without the original context\n- Capture a decision, fact, outcome, preference, lesson, or observation worth remembering\n- NOT be a vague summary or a restatement of the task description\n- NOT duplicate the same idea in different words\n\nWhen the content is a conversation, pay special attention to facts stated by the user (first-person statements). These personal facts are HIGH PRIORITY and must always be extracted:\n- What the user did, bought, made, visited, attended, or completed\n- Names of people, pets, places, brands, and specific items the user mentions\n- Quantities, durations, dates, and measurements the user states\n- Subordinate clauses and casual asides often contain important personal details (e.g. \"by the way, it took me 4 hours\" or \"my Golden Retriever Max\")\n\nPreserve exact names and numbers — never generalize (e.g. keep \"lavender gin fizz\" not just \"cocktail\", keep \"12 largemouth bass\" not just \"fish caught\", keep \"Golden Retriever\" not just \"dog\").\n\nAdditional extraction rules:\n- Presupposed facts: When the user reveals a fact indirectly in a question (e.g. \"What collar suits a Golden Retriever like Max?\" presupposes Max is a Golden Retriever), extract that fact as a separate memory.\n- Date precision: Always preserve the full date including day-of-month when stated (e.g. \"February 14th\" not just \"February\", \"March 5\" not just \"March\").\n- Life events in passing: When the user mentions a life event (birth, wedding, graduation, move, adoption) while discussing something else, extract the life event as its own memory (e.g. \"my friend David had a baby boy named Jasper\" is a birth fact, even if mentioned while planning to send congratulations).\n\nIf there is nothing worth remembering (e.g. empty result, no decisions or facts), return an empty list.\nOutput a JSON object with a single key \"memories\" whose value is a list of strings.",
"extract_memories_user": "Content:\n{content}\n\nExtract memory statements as described. Return structured output.",
"query_user": "Query: {query}\n\nAvailable scopes: {available_scopes}\n{scope_desc}\n\nReturn the analysis as structured output.",
"save_system": "You analyze content to be stored in a hierarchical memory system.\nGiven the content and the existing scopes and categories, output:\n1. suggested_scope: The best matching existing scope path, or a new path if none fit (use / for root).\n2. categories: A list of categories (reuse existing when relevant, add new ones if needed).\n3. importance: A number from 0.0 to 1.0 indicating how significant this memory is.\n4. extracted_metadata: A JSON object with any entities, dates, or topics you can extract.",

View File

@@ -139,7 +139,11 @@ def render_text_description_and_args(
def convert_tools_to_openai_schema(
tools: Sequence[BaseTool | CrewStructuredTool],
) -> tuple[list[dict[str, Any]], dict[str, Callable[..., Any]]]:
) -> tuple[
list[dict[str, Any]],
dict[str, Callable[..., Any]],
dict[str, BaseTool | CrewStructuredTool],
]:
"""Convert CrewAI tools to OpenAI function calling format.
This function converts CrewAI BaseTool and CrewStructuredTool objects
@@ -152,23 +156,21 @@ def convert_tools_to_openai_schema(
Returns:
Tuple containing:
- List of OpenAI-format tool schema dictionaries
- Dict mapping tool names to their callable run() methods
Example:
>>> tools = [CalculatorTool(), SearchTool()]
>>> schemas, functions = convert_tools_to_openai_schema(tools)
>>> # schemas can be passed to llm.call(tools=schemas)
>>> # functions can be passed to llm.call(available_functions=functions)
- Dict mapping sanitized tool names to their callable run() methods
- Dict mapping sanitized tool names to their original tool objects
"""
openai_tools: list[dict[str, Any]] = []
available_functions: dict[str, Callable[..., Any]] = {}
tool_name_mapping: dict[str, BaseTool | CrewStructuredTool] = {}
for tool in tools:
# Get the JSON schema for tool parameters
parameters: dict[str, Any] = {}
if hasattr(tool, "args_schema") and tool.args_schema is not None:
try:
schema_output = generate_model_description(tool.args_schema)
schema_output = generate_model_description(
tool.args_schema, strip_null_types=False
)
parameters = schema_output.get("json_schema", {}).get("schema", {})
# Remove title and description from schema root as they're redundant
parameters.pop("title", None)
@@ -184,6 +186,14 @@ def convert_tools_to_openai_schema(
sanitized_name = sanitize_tool_name(tool.name)
if sanitized_name in available_functions:
counter = 2
candidate = sanitize_tool_name(f"{sanitized_name}_{counter}")
while candidate in available_functions:
counter += 1
candidate = sanitize_tool_name(f"{sanitized_name}_{counter}")
sanitized_name = candidate
schema: dict[str, Any] = {
"type": "function",
"function": {
@@ -195,8 +205,9 @@ def convert_tools_to_openai_schema(
}
openai_tools.append(schema)
available_functions[sanitized_name] = tool.run # type: ignore[union-attr]
tool_name_mapping[sanitized_name] = tool
return openai_tools, available_functions
return openai_tools, available_functions, tool_name_mapping
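Name-collision sketch for the suffix loop above, assuming two distinct tools whose names both sanitize to "search":

# first registration: "search" -> t1.run
# second tool collides and becomes "search_2" -> t2.run (then "search_3", ...)
# tool_name_mapping keeps {"search": t1, "search_2": t2}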
def has_reached_max_iterations(iterations: int, max_iterations: int) -> bool:
@@ -1146,6 +1157,36 @@ def extract_tool_call_info(
return None
def parse_tool_call_args(
func_args: dict[str, Any] | str,
func_name: str,
call_id: str,
original_tool: Any = None,
) -> tuple[dict[str, Any], None] | tuple[None, dict[str, Any]]:
"""Parse tool call arguments from a JSON string or dict.
Returns:
``(args_dict, None)`` on success, or ``(None, error_result)`` on
JSON parse failure where ``error_result`` is a ready-to-return dict
with the same shape as ``_execute_single_native_tool_call`` return values.
"""
if isinstance(func_args, str):
try:
return json.loads(func_args), None
except json.JSONDecodeError as e:
return None, {
"call_id": call_id,
"func_name": func_name,
"result": (
f"Error: Failed to parse tool arguments as JSON: {e}. "
f"Please provide valid JSON arguments for the '{func_name}' tool."
),
"from_cache": False,
"original_tool": original_tool,
}
return func_args, None
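Usage sketch (arguments illustrative):

args, err = parse_tool_call_args('{"q": "hi"}', "search", "call_1")
assert args == {"q": "hi"} and err is None

args, err = parse_tool_call_args("not json", "search", "call_1")
assert args is None and err["result"].startswith("Error: Failed to parse")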
def _setup_before_llm_call_hooks(
executor_context: CrewAgentExecutor | AgentExecutor | LiteAgent | None,
printer: Printer,

View File

@@ -69,7 +69,7 @@ def create_llm(
UNACCEPTED_ATTRIBUTES: Final[list[str]] = [
"AWS_ACCESS_KEY_ID",
"AWS_SECRET_ACCESS_KEY",
"AWS_REGION_NAME",
"AWS_DEFAULT_REGION",
]
@@ -146,7 +146,7 @@ def _llm_via_environment_or_fallback() -> LLM | None:
unaccepted_attributes = [
"AWS_ACCESS_KEY_ID",
"AWS_SECRET_ACCESS_KEY",
"AWS_REGION_NAME",
"AWS_DEFAULT_REGION",
]
set_provider = model_name.partition("/")[0] if "/" in model_name else "openai"

View File

@@ -417,7 +417,11 @@ def strip_null_from_types(schema: dict[str, Any]) -> dict[str, Any]:
return schema
def generate_model_description(model: type[BaseModel]) -> ModelDescription:
def generate_model_description(
model: type[BaseModel],
*,
strip_null_types: bool = True,
) -> ModelDescription:
"""Generate JSON schema description of a Pydantic model.
This function takes a Pydantic model class and returns its JSON schema,
@@ -426,6 +430,9 @@ def generate_model_description(model: type[BaseModel]) -> ModelDescription:
Args:
model: A Pydantic model class.
strip_null_types: When ``True`` (default), remove ``null`` from
``anyOf`` / ``type`` arrays. Set to ``False`` to allow sending ``null`` for
optional fields.
Returns:
A ModelDescription with JSON schema representation of the model.
@@ -442,7 +449,9 @@ def generate_model_description(model: type[BaseModel]) -> ModelDescription:
json_schema = fix_discriminator_mappings(json_schema)
json_schema = convert_oneof_to_anyof(json_schema)
json_schema = ensure_all_properties_required(json_schema)
json_schema = strip_null_from_types(json_schema)
if strip_null_types:
json_schema = strip_null_from_types(json_schema)
return {
"type": "json_schema",
@@ -482,10 +491,66 @@ FORMAT_TYPE_MAP: dict[str, type[Any]] = {
}
def build_rich_field_description(prop_schema: dict[str, Any]) -> str:
"""Build a comprehensive field description including constraints.
Embeds format, enum, pattern, min/max, and example constraints into the
description text so that LLMs can understand tool parameter requirements
without inspecting the raw JSON Schema.
Args:
prop_schema: Property schema with description and constraints.
Returns:
Enhanced description with format, enum, and other constraints.
"""
parts: list[str] = []
description = prop_schema.get("description", "")
if description:
parts.append(description)
format_type = prop_schema.get("format")
if format_type:
parts.append(f"Format: {format_type}")
enum_values = prop_schema.get("enum")
if enum_values:
enum_str = ", ".join(repr(v) for v in enum_values)
parts.append(f"Allowed values: [{enum_str}]")
pattern = prop_schema.get("pattern")
if pattern:
parts.append(f"Pattern: {pattern}")
minimum = prop_schema.get("minimum")
maximum = prop_schema.get("maximum")
if minimum is not None:
parts.append(f"Minimum: {minimum}")
if maximum is not None:
parts.append(f"Maximum: {maximum}")
min_length = prop_schema.get("minLength")
max_length = prop_schema.get("maxLength")
if min_length is not None:
parts.append(f"Min length: {min_length}")
if max_length is not None:
parts.append(f"Max length: {max_length}")
examples = prop_schema.get("examples")
if examples:
examples_str = ", ".join(repr(e) for e in examples[:3])
parts.append(f"Examples: {examples_str}")
return ". ".join(parts) if parts else ""
def create_model_from_schema( # type: ignore[no-any-unimported]
json_schema: dict[str, Any],
*,
root_schema: dict[str, Any] | None = None,
model_name: str | None = None,
enrich_descriptions: bool = False,
__config__: ConfigDict | None = None,
__base__: type[BaseModel] | None = None,
__module__: str = __name__,
@@ -503,6 +568,13 @@ def create_model_from_schema( # type: ignore[no-any-unimported]
json_schema: A dictionary representing the JSON schema.
root_schema: The root schema containing $defs. If not provided, the
current schema is treated as the root schema.
model_name: Override for the model name. If not provided, the schema
``title`` field is used, falling back to ``"DynamicModel"``.
enrich_descriptions: When True, augment field descriptions with
constraint info (format, enum, pattern, min/max, examples) via
:func:`build_rich_field_description`. Useful for LLM-facing tool
schemas where constraints in the description help the model
understand parameter requirements.
__config__: Pydantic configuration for the generated model.
__base__: Base class for the generated model. Defaults to BaseModel.
__module__: Module name for the generated model class.
@@ -539,10 +611,14 @@ def create_model_from_schema( # type: ignore[no-any-unimported]
if "title" not in json_schema and "title" in (root_schema or {}):
json_schema["title"] = (root_schema or {}).get("title")
model_name = json_schema.get("title") or "DynamicModel"
effective_name = model_name or json_schema.get("title") or "DynamicModel"
field_definitions = {
name: _json_schema_to_pydantic_field(
name, prop, json_schema.get("required", []), effective_root
name,
prop,
json_schema.get("required", []),
effective_root,
enrich_descriptions=enrich_descriptions,
)
for name, prop in (json_schema.get("properties", {}) or {}).items()
}
@@ -550,7 +626,7 @@ def create_model_from_schema( # type: ignore[no-any-unimported]
effective_config = __config__ or ConfigDict(extra="forbid")
return create_model_base(
model_name,
effective_name,
__config__=effective_config,
__base__=__base__,
__module__=__module__,
@@ -565,6 +641,8 @@ def _json_schema_to_pydantic_field(
json_schema: dict[str, Any],
required: list[str],
root_schema: dict[str, Any],
*,
enrich_descriptions: bool = False,
) -> Any:
"""Convert a JSON schema property to a Pydantic field definition.
@@ -573,20 +651,29 @@ def _json_schema_to_pydantic_field(
json_schema: The JSON schema for this field.
required: List of required field names.
root_schema: The root schema for resolving $ref.
enrich_descriptions: When True, embed constraints in the description.
Returns:
A tuple of (type, Field) for use with create_model.
"""
type_ = _json_schema_to_pydantic_type(json_schema, root_schema, name_=name.title())
description = json_schema.get("description")
examples = json_schema.get("examples")
type_ = _json_schema_to_pydantic_type(
json_schema, root_schema, name_=name.title(), enrich_descriptions=enrich_descriptions
)
is_required = name in required
field_params: dict[str, Any] = {}
schema_extra: dict[str, Any] = {}
if description:
field_params["description"] = description
if enrich_descriptions:
rich_desc = build_rich_field_description(json_schema)
if rich_desc:
field_params["description"] = rich_desc
else:
description = json_schema.get("description")
if description:
field_params["description"] = description
examples = json_schema.get("examples")
if examples:
schema_extra["examples"] = examples
@@ -702,6 +789,7 @@ def _json_schema_to_pydantic_type(
root_schema: dict[str, Any],
*,
name_: str | None = None,
enrich_descriptions: bool = False,
) -> Any:
"""Convert a JSON schema to a Python/Pydantic type.
@@ -709,6 +797,7 @@ def _json_schema_to_pydantic_type(
json_schema: The JSON schema to convert.
root_schema: The root schema for resolving $ref.
name_: Optional name for nested models.
enrich_descriptions: Propagated to nested model creation.
Returns:
A Python type corresponding to the JSON schema.
@@ -716,7 +805,9 @@ def _json_schema_to_pydantic_type(
ref = json_schema.get("$ref")
if ref:
ref_schema = _resolve_ref(ref, root_schema)
return _json_schema_to_pydantic_type(ref_schema, root_schema, name_=name_)
return _json_schema_to_pydantic_type(
ref_schema, root_schema, name_=name_, enrich_descriptions=enrich_descriptions
)
enum_values = json_schema.get("enum")
if enum_values:
@@ -731,7 +822,10 @@ def _json_schema_to_pydantic_type(
if any_of_schemas:
any_of_types = [
_json_schema_to_pydantic_type(
schema, root_schema, name_=f"{name_ or 'Union'}Option{i}"
schema,
root_schema,
name_=f"{name_ or 'Union'}Option{i}",
enrich_descriptions=enrich_descriptions,
)
for i, schema in enumerate(any_of_schemas)
]
@@ -741,10 +835,14 @@ def _json_schema_to_pydantic_type(
if all_of_schemas:
if len(all_of_schemas) == 1:
return _json_schema_to_pydantic_type(
all_of_schemas[0], root_schema, name_=name_
all_of_schemas[0], root_schema, name_=name_,
enrich_descriptions=enrich_descriptions,
)
merged = _merge_all_of_schemas(all_of_schemas, root_schema)
return _json_schema_to_pydantic_type(merged, root_schema, name_=name_)
return _json_schema_to_pydantic_type(
merged, root_schema, name_=name_,
enrich_descriptions=enrich_descriptions,
)
type_ = json_schema.get("type")
@@ -760,7 +858,8 @@ def _json_schema_to_pydantic_type(
items_schema = json_schema.get("items")
if items_schema:
item_type = _json_schema_to_pydantic_type(
items_schema, root_schema, name_=name_
items_schema, root_schema, name_=name_,
enrich_descriptions=enrich_descriptions,
)
return list[item_type] # type: ignore[valid-type]
return list
@@ -770,7 +869,10 @@ def _json_schema_to_pydantic_type(
json_schema_ = json_schema.copy()
if json_schema_.get("title") is None:
json_schema_["title"] = name_ or "DynamicModel"
return create_model_from_schema(json_schema_, root_schema=root_schema)
return create_model_from_schema(
json_schema_, root_schema=root_schema,
enrich_descriptions=enrich_descriptions,
)
return dict
if type_ == "null":
return None
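# Illustrative sketch (assumption, not part of this diff): how the two new
# keyword flags are meant to combine for LLM-facing tool schemas. ``MyArgs``
# is a hypothetical Pydantic model.
#
#     desc = generate_model_description(MyArgs, strip_null_types=False)
#     # Optional fields keep their null variant (e.g. anyOf: [..., {"type": "null"}]),
#     # so the model may explicitly send null instead of omitting the field.
#
#     schema = desc["json_schema"]["schema"]
#     Model = create_model_from_schema(schema, enrich_descriptions=True)
#     # Field descriptions now embed format/enum/pattern/min/max constraints via
#     # build_rich_field_description, helping the LLM fill arguments correctly.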

View File

@@ -2,6 +2,7 @@
# https://github.com/un33k/python-slugify
# MIT License
import hashlib
import re
from typing import Any, Final
import unicodedata
@@ -40,7 +41,9 @@ def sanitize_tool_name(name: str, max_length: int = _MAX_TOOL_NAME_LENGTH) -> st
name = name.strip("_")
if len(name) > max_length:
name = name[:max_length].rstrip("_")
name_hash = hashlib.sha256(name.encode()).hexdigest()[:8]
suffix = f"_{name_hash}"
name = name[: max_length - len(suffix)].rstrip("_") + suffix
return name
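# Illustrative sketch (not part of this diff): long names that previously
# collided after plain truncation now stay distinct, because an 8-character
# sha256 digest of the full sanitized name is appended as a suffix.
#
#     long_a = "generate_report_for_" + "x" * 100 + "_q1"
#     long_b = "generate_report_for_" + "x" * 100 + "_q2"
#     assert sanitize_tool_name(long_a) != sanitize_tool_name(long_b)
#     assert len(sanitize_tool_name(long_a)) <= _MAX_TOOL_NAME_LENGTH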

View File

@@ -123,7 +123,7 @@ class TestAgentExecutor:
executor.state.iterations = 10
result = executor.check_max_iterations()
assert result == "force_final_answer"
assert result == "max_iterations_exceeded"
def test_route_by_answer_type_action(self, mock_dependencies):
"""Test routing for AgentAction."""

View File

@@ -659,7 +659,7 @@ def test_agent_kickoff_with_platform_tools(mock_get, mock_post):
@patch.dict("os.environ", {"EXA_API_KEY": "test_exa_key"})
@patch("crewai.agent.Agent._get_external_mcp_tools")
@patch("crewai.agent.Agent.get_mcp_tools")
@pytest.mark.vcr()
def test_agent_kickoff_with_mcp_tools(mock_get_mcp_tools):
"""Test that Agent.kickoff() properly integrates MCP tools with LiteAgent"""
@@ -691,7 +691,7 @@ def test_agent_kickoff_with_mcp_tools(mock_get_mcp_tools):
assert result.raw is not None
# Verify MCP tools were retrieved
mock_get_mcp_tools.assert_called_once_with("https://mcp.exa.ai/mcp?api_key=test_exa_key&profile=research")
mock_get_mcp_tools.assert_called_once_with(["https://mcp.exa.ai/mcp?api_key=test_exa_key&profile=research"])
# ============================================================================
@@ -1136,6 +1136,7 @@ def test_lite_agent_memory_instance_recall_and_save_called():
successful_requests=1,
)
mock_memory = Mock()
mock_memory._read_only = False
mock_memory.recall.return_value = []
mock_memory.extract_memories.return_value = ["Fact one.", "Fact two."]

View File

@@ -11,7 +11,7 @@ import os
import threading
import time
from collections import Counter
from unittest.mock import patch
from unittest.mock import Mock, patch
import pytest
from pydantic import BaseModel, Field
@@ -1129,3 +1129,150 @@ class TestMaxUsageCountWithNativeToolCalling:
# Verify the requested calls occurred while keeping usage bounded.
assert tool.current_usage_count >= 2
assert tool.current_usage_count <= tool.max_usage_count
# =============================================================================
# JSON Parse Error Handling Tests
# =============================================================================
class TestNativeToolCallingJsonParseError:
"""Tests that malformed JSON tool arguments produce clear errors
instead of silently dropping all arguments."""
def _make_executor(self, tools: list[BaseTool]) -> "CrewAgentExecutor":
"""Create a minimal CrewAgentExecutor with mocked dependencies."""
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.tools.base_tool import to_langchain
structured_tools = to_langchain(tools)
mock_agent = Mock()
mock_agent.key = "test_agent"
mock_agent.role = "tester"
mock_agent.verbose = False
mock_agent.fingerprint = None
mock_agent.tools_results = []
mock_task = Mock()
mock_task.name = "test"
mock_task.description = "test"
mock_task.id = "test-id"
executor = object.__new__(CrewAgentExecutor)
executor.agent = mock_agent
executor.task = mock_task
executor.crew = Mock()
executor.tools = structured_tools
executor.original_tools = tools
executor.tools_handler = None
executor._printer = Mock()
executor.messages = []
return executor
def test_malformed_json_returns_parse_error(self) -> None:
"""Malformed JSON args must return a descriptive error, not silently become {}."""
class CodeTool(BaseTool):
name: str = "execute_code"
description: str = "Run code"
def _run(self, code: str) -> str:
return f"ran: {code}"
tool = CodeTool()
executor = self._make_executor([tool])
from crewai.utilities.agent_utils import convert_tools_to_openai_schema
_, available_functions, _ = convert_tools_to_openai_schema([tool])
malformed_json = '{"code": "print("hello")"}'
result = executor._execute_single_native_tool_call(
call_id="call_123",
func_name="execute_code",
func_args=malformed_json,
available_functions=available_functions,
)
assert "Failed to parse tool arguments as JSON" in result["result"]
assert tool.current_usage_count == 0
def test_valid_json_still_executes_normally(self) -> None:
"""Valid JSON args should execute the tool as before."""
class CodeTool(BaseTool):
name: str = "execute_code"
description: str = "Run code"
def _run(self, code: str) -> str:
return f"ran: {code}"
tool = CodeTool()
executor = self._make_executor([tool])
from crewai.utilities.agent_utils import convert_tools_to_openai_schema
_, available_functions, _ = convert_tools_to_openai_schema([tool])
valid_json = '{"code": "print(1)"}'
result = executor._execute_single_native_tool_call(
call_id="call_456",
func_name="execute_code",
func_args=valid_json,
available_functions=available_functions,
)
assert result["result"] == "ran: print(1)"
def test_dict_args_bypass_json_parsing(self) -> None:
"""When func_args is already a dict, no JSON parsing occurs."""
class CodeTool(BaseTool):
name: str = "execute_code"
description: str = "Run code"
def _run(self, code: str) -> str:
return f"ran: {code}"
tool = CodeTool()
executor = self._make_executor([tool])
from crewai.utilities.agent_utils import convert_tools_to_openai_schema
_, available_functions, _ = convert_tools_to_openai_schema([tool])
result = executor._execute_single_native_tool_call(
call_id="call_789",
func_name="execute_code",
func_args={"code": "x = 42"},
available_functions=available_functions,
)
assert result["result"] == "ran: x = 42"
def test_schema_validation_catches_missing_args_on_native_path(self) -> None:
"""The native function calling path should now enforce args_schema,
catching missing required fields before _run is called."""
class StrictTool(BaseTool):
name: str = "strict_tool"
description: str = "A tool with required args"
def _run(self, code: str, language: str) -> str:
return f"{language}: {code}"
tool = StrictTool()
executor = self._make_executor([tool])
from crewai.utilities.agent_utils import convert_tools_to_openai_schema
_, available_functions, _ = convert_tools_to_openai_schema([tool])
result = executor._execute_single_native_tool_call(
call_id="call_schema",
func_name="strict_tool",
func_args={"code": "print(1)"},
available_functions=available_functions,
)
assert "Error" in result["result"]
assert "validation failed" in result["result"].lower() or "missing" in result["result"].lower()

Some files were not shown because too many files have changed in this diff.