Compare commits

..

78 Commits

Author SHA1 Message Date
iris-clawd
472abf89ad docs: add agent-level planning (PlanningConfig) docs, deprecate reasoning page
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-29 00:43:29 +00:00
Greyson LaLonde
160e25c1a9 docs: update changelog and version for v1.14.3a2
2026-04-22 03:14:00 +08:00
Greyson LaLonde
b34b336273 feat: bump versions to 1.14.3a2 2026-04-22 03:08:52 +08:00
Renato Nitta
42d6c03ebc fix: propagate implicit @CrewBase names to crew events (#5574)
* fix: propagate implicit @CrewBase names to crew events

* test: appease static analysis for @CrewBase kickoff test

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-04-21 15:57:19 -03:00
Greyson LaLonde
d4f9f875f7 fix: bump python-dotenv to >=1.2.2 for GHSA-mf9w-mj56-hr94
2026-04-22 01:22:19 +08:00
Lorenze Jay
6d153284d4 fix: merge execution metadata on duplicate batch initialization in Tr… (#5573)
* fix: merge execution metadata on duplicate batch initialization in TraceBatchManager

- Updated TraceBatchManager to merge execution metadata when a batch is initialized multiple times.
- Enhanced logging to reflect the merging of metadata during duplicate initialization.
- Added a test case to verify that execution metadata is correctly merged when initializing a batch after a lazy action.

* drop env events emitting from traces listener
2026-04-21 10:12:24 -07:00
Lorenze Jay
84a4d47aa7 updated descriptions and applied the actual translations (#5572) 2026-04-21 08:55:39 -07:00
Greyson LaLonde
9caed61f36 chore: remove scarf install tracking 2026-04-21 21:52:17 +08:00
MatthiasHowellYopp
d45ed61db5 feat: added bedrock V4 support 2026-04-21 21:09:13 +08:00
iris-clawd
3b01da9ad9 docs: add Build with AI to Get Started nav + page files for all languages (en, ko, pt-BR, ar) (#5567)
2026-04-20 23:43:37 -03:00
iris-clawd
874405b825 docs: Add 'Build with AI' page — AI-native docs for coding agents (#5558)
* docs: add Build with AI page for coding agents and AI assistants

* docs: add Build with AI section to README

* docs: trim README Build with AI section to skills install only

* docs: add skills.sh reference link for npx install

* docs: add coding agent logos to Build with AI page

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2026-04-20 16:09:37 -07:00
Greyson LaLonde
d6d04717c2 fix: serialize Task class-reference fields for checkpointing
Task fields that store class references (output_pydantic, output_json,
response_model, converter_cls) caused PydanticSerializationError when
RuntimeState serialized Crew entities during checkpointing. Serialize
to model_json_schema() and hydrate back via create_model_from_schema.
2026-04-21 03:15:06 +08:00
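A minimal sketch of the round-trip this commit describes, assuming a flat schema; `create_model_from_schema` is the crewAI helper named above and is only approximated here with pydantic's `create_model`:

```python
from pydantic import BaseModel, create_model


class Report(BaseModel):
    title: str
    score: float


# Serialize: the class reference becomes a plain JSON-schema dict that
# RuntimeState can store without a PydanticSerializationError.
schema = Report.model_json_schema()

# Hydrate: rebuild an equivalent model from the schema. This stands in for
# crewAI's create_model_from_schema and only handles a flat schema.
TYPES = {"string": str, "number": float, "integer": int, "boolean": bool}
fields = {
    name: (TYPES.get(prop.get("type"), str), ...)
    for name, prop in schema["properties"].items()
}
Restored = create_model(schema.get("title", "Restored"), **fields)

print(Restored(title="Q3 report", score=0.9))
```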
Greyson LaLonde
01b8437940 fix: handle BaseModel result in guardrail retry loop
The guardrail retry path passed a Pydantic object directly to
TaskOutput.raw (which expects a string), causing a ValidationError
when output_pydantic is set and a guardrail fails. Mirror the
BaseModel check from the initial execution path into both sync
and async retry loops.

Closes #5544 (part 1)
2026-04-21 01:59:42 +08:00
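A rough illustration of the fix, using stand-in models rather than crewAI's actual `TaskOutput`:

```python
from pydantic import BaseModel


class Answer(BaseModel):  # what the retry loop receives when output_pydantic is set
    value: int


class TaskOutput(BaseModel):  # stand-in: raw expects a string, as in the bug report
    raw: str


result = Answer(value=42)

# Before: TaskOutput(raw=result) raises ValidationError, since raw expects str.
# After: mirror the BaseModel check from the initial execution path.
raw = result.model_dump_json() if isinstance(result, BaseModel) else str(result)
print(TaskOutput(raw=raw))
```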
Lorenze Jay
2c08f54341 feat: add Daytona sandbox tools for enhanced functionality (#5530)
* feat: add Daytona sandbox tools for enhanced functionality

- Introduced DaytonaBaseTool as a shared base for tools interacting with Daytona sandboxes.
- Added DaytonaExecTool for executing shell commands within a sandbox.
- Implemented DaytonaFileTool for managing files (read, write, delete, etc.) in a sandbox.
- Created DaytonaPythonTool for running Python code in a sandbox environment.
- Updated pyproject.toml to include Daytona as a dependency.

* chore: update tool specifications

* refactor: enhance error handling and logging in Daytona tools

- Added logging for best-effort cleanup failures in DaytonaBaseTool and DaytonaFileTool to aid in debugging.
- Improved error message for ImportError in DaytonaPythonTool to provide clearer guidance on SDK compatibility issues.

* linted

* addressing comment

* pinning version

* supporting append

* chore: update tool specifications

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-04-20 10:17:11 -07:00
Greyson LaLonde
bc1f1b85a4 docs: update changelog and version for v1.14.3a1 2026-04-21 00:59:07 +08:00
Greyson LaLonde
0b408534ab feat: bump versions to 1.14.3a1 2026-04-21 00:53:50 +08:00
Greyson LaLonde
48f391092c fix: preserve thought_signature in Gemini streaming tool calls
Gemini thinking models (2.5+, 3.x) require thought_signature on
functionCall parts when sent back in conversation history. The streaming
path was extracting only name/args into plain dicts, losing the
signature. Return raw Part objects (matching the non-streaming path)
so the executor preserves them via raw_tool_call_parts.
2026-04-21 00:01:55 +08:00
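An illustrative sketch with hypothetical types (not the actual provider SDK): extracting only name/args drops provider-specific fields, while returning the raw part keeps them.

```python
from dataclasses import dataclass


@dataclass
class Part:  # hypothetical stand-in for the SDK's Part type
    name: str
    args: dict
    thought_signature: bytes | None  # required back in history by thinking models


part = Part(name="search", args={"q": "crewai"}, thought_signature=b"sig")

lossy = {"name": part.name, "args": part.args}  # streaming path before the fix
lossless = part  # return the raw Part, matching the non-streaming path

print(lossy)      # thought_signature is gone
print(lossless)   # full object survives via raw_tool_call_parts
```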
Greyson LaLonde
ae242c507d feat: add checkpoint and fork support to standalone agents
Add fork classmethod, _restore_runtime, and _restore_event_scope
to BaseAgent. Fix from_checkpoint to set runtime state on the
event bus and restore event scopes. Store kickoff event ID across
checkpoints to skip re-emission on resume. Handle agent entity
type in checkpoint CLI and TUI.
2026-04-20 22:47:37 +08:00
alex-clawd
0b120fac90 fix: use future dates in checkpoint prune tests to prevent time-dependent failures (#5543)
The test_older_than tests in both JSON and SQLite prune suites used
hardcoded 2026-04-17 timestamps for the 'new' checkpoint. Once that
date passes, the checkpoint is older than 1 day and gets pruned along
with the 'old' one, causing assert count >= 1 to fail (count=0).

Use 2099-01-01 for the 'new' checkpoint so tests remain stable.

Co-authored-by: Joao Moura <joaomdmoura@gmail.com>
2026-04-20 01:27:12 -03:00
Greyson LaLonde
f879909526 fix: emit task_started on fork resume, redesign checkpoint TUI
Redesign checkpoint TUI with tabbed detail panel, collapsible
agent rosters, keybinding actions, and human-readable timestamps.
2026-04-18 04:19:31 +08:00
Greyson LaLonde
c9b0004d0e fix: correct dry-run order and handle checked-out stale branch in devtools release
- Move _update_all_versions inside each dry-run branch so output order matches actual execution
- Switch to main before deleting the stale local branch in create_or_reset_branch
2026-04-17 23:26:52 +08:00
Greyson LaLonde
a8994347b0 docs: update changelog and version for v1.14.2 2026-04-17 22:08:25 +08:00
Greyson LaLonde
5ca62c20f2 feat: bump versions to 1.14.2 2026-04-17 22:01:27 +08:00
Greyson LaLonde
11989da4b1 fix: prompt on stale branch conflicts in devtools release 2026-04-17 21:55:48 +08:00
Greyson LaLonde
19ac7d2f64 fix: patch authlib, langchain-text-splitters, and pypdf vulnerabilities
- authlib 1.6.9 -> 1.6.11 (GHSA-jj8c-mmj3-mmgv)
- langchain-text-splitters 1.1.1 -> 1.1.2 (GHSA-fv5p-p927-qmxr)
- langchain-core 1.2.28 -> 1.2.31 (required by text-splitters 1.1.2)
- pypdf 6.10.1 -> 6.10.2 (GHSA-4pxv-j86v-mhcw, GHSA-7gw9-cf7v-778f, GHSA-x284-j5p8-9c5p)

Pinned tool.uv.exclude-newer to 2026-04-17 so the 2026-04-16 patch
releases fall inside the resolution window.
2026-04-17 21:25:47 +08:00
Lorenze Jay
2f48937ce4 docs(crews): document missing params and add Checkpointing section (OSS-32) (#5409)
- Add 8 missing parameters to the Crew Attributes table:
  chat_llm, before_kickoff_callbacks, after_kickoff_callbacks,
  tracing, skills, security_config, checkpoint
- Add new "## Checkpointing" section before "## Memory Utilization" with:
  - Quick-start checkpoint=True example
  - Full CheckpointConfig usage example
  - Crew.from_checkpoint() resume pattern
  - CheckpointConfig attributes table (location, on_events, provider, max_checkpoints)
  - Note on auto-restored checkpoint fields

Closes OSS-32
2026-04-16 16:57:00 -07:00
Greyson LaLonde
c5192b970c feat: add checkpoint resume, diff, prune commands and save discoverability
Add three new CLI subcommands to improve checkpoint UX:

- `crewai checkpoint resume [id]` skips the TUI and resumes from the
  latest or specified checkpoint directly
- `crewai checkpoint diff <id1> <id2>` compares two checkpoints showing
  changes in metadata, inputs, task status, and outputs
- `crewai checkpoint prune --keep N --older-than Xd` removes old
  checkpoints from JSON dirs or SQLite databases

Also writes a resume hint to stderr after every checkpoint save so
users discover the command without needing to know it exists.
2026-04-17 04:50:15 +08:00
Greyson LaLonde
54391fdbdf feat: add from_checkpoint parameter to Agent.kickoff, kickoff_async, akickoff 2026-04-17 03:40:37 +08:00
Greyson LaLonde
6136228a66 fix: scope streaming handlers to prevent cross-run chunk contamination
Concurrent streaming runs registered handlers on the singleton event bus
that received all LLMStreamChunkEvent emissions, causing chunks to fan
out across unrelated queues. Introduces a ContextVar-based stream scope
ID so each handler only accepts events from its own execution context.

Closes #5376
2026-04-17 03:02:03 +08:00
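A minimal sketch of the scoping approach, with illustrative names rather than crewAI's internals:

```python
import contextvars

stream_scope: contextvars.ContextVar[str | None] = contextvars.ContextVar(
    "stream_scope", default=None
)


def make_handler(queue: list, scope_id: str):
    def handler(event: dict) -> None:
        # Accept chunks only from this handler's own execution context.
        if stream_scope.get() == scope_id:
            queue.append(event)
    return handler


def run(scope_id: str, handlers: list, chunks: list[str]) -> None:
    token = stream_scope.set(scope_id)
    try:
        for chunk in chunks:
            for handler in handlers:  # singleton bus: every handler sees every event
                handler({"chunk": chunk})
    finally:
        stream_scope.reset(token)


q_a, q_b = [], []
handlers = [make_handler(q_a, "run-a"), make_handler(q_b, "run-b")]
run("run-a", handlers, ["hello"])
run("run-b", handlers, ["world"])
print(q_a, q_b)  # chunks no longer fan out across unrelated queues
```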
Greyson LaLonde
fbe2a04064 fix: mock Repository.__init__ in test_publish_when_not_in_sync 2026-04-17 02:39:22 +08:00
iris-clawd
baf91d8f0a fix: update broken enterprise link on installation page (OSS-36) (#5443)
* fix: update broken enterprise link on installation page (OSS-36)

The 'Explore Enterprise Options' card on the installation page linked to
https://crewai.com/enterprise which returns a 404. Updated the href to
https://crewai.com/amp across all locales (en, pt-BR, ko, ar).

* fix: use HubSpot form link for enterprise options card

Updated per team feedback — the enterprise card should link to the
HubSpot demo form instead of crewai.com/amp.

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2026-04-16 11:01:59 -07:00
Greyson LaLonde
7e01c5a030 fix: dispatch Flow checkpoints through Flow APIs in TUI 2026-04-17 01:34:06 +08:00
Lorenze Jay
105a9778cc feat: add template management commands for project templates (#5444)
* feat: add template management commands for project templates

- Introduced a command group to browse and install project templates.
- Added a command to display available templates.
- Implemented a command to install a selected template into the current directory.
- Created a class to handle template-related operations, including fetching templates from GitHub and managing installations.
- Enhanced telemetry to track template installations.

* linted

* addressing comments

* comment addressed
2026-04-16 10:18:15 -07:00
Greyson LaLonde
32ec4414bf fix: use recursive glob for JSON checkpoint discovery
Branch-aware checkpoint storage writes under subdirectories (e.g.
main/, fork/exp1/) but _list_json and _info_json_latest used flat
globs that missed them.
2026-04-17 00:13:35 +08:00
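A small sketch of the change, with an assumed directory layout:

```python
from pathlib import Path

checkpoint_dir = Path(".checkpoints")

flat = list(checkpoint_dir.glob("*.json"))        # misses main/ and fork/exp1/
recursive = list(checkpoint_dir.rglob("*.json"))  # walks branch subdirectories too
print(len(flat), len(recursive))
```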
Greyson LaLonde
63fc2e7588 fix: complete recursive MCP schema handling
resolve_refs now returns type-preserving stubs instead of {} for
circular $refs, and create_model_from_schema catches JsonRefError
to fall back to lazy top-level-only inlining.
2026-04-17 00:06:02 +08:00
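A simplified stand-in for the `resolve_refs` behavior described, assuming `$defs`-style refs:

```python
def resolve_refs(node, defs, seen=frozenset()):
    if isinstance(node, dict):
        ref = node.get("$ref")
        if ref is not None:
            if ref in seen:  # circular $ref: emit a typed stub instead of {}
                return {"type": defs.get(ref, {}).get("type", "object")}
            return resolve_refs(defs[ref], defs, seen | {ref})
        return {key: resolve_refs(value, defs, seen) for key, value in node.items()}
    if isinstance(node, list):
        return [resolve_refs(value, defs, seen) for value in node]
    return node


defs = {"#/$defs/Node": {"type": "object", "properties": {"next": {"$ref": "#/$defs/Node"}}}}
print(resolve_refs({"$ref": "#/$defs/Node"}, defs))
# {'type': 'object', 'properties': {'next': {'type': 'object'}}}
```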
Greyson LaLonde
749fe85325 fix: bump langsmith to 0.7.31 to patch GHSA-rr7j-v2q5-chgv
langsmith <0.7.31 bypasses output redaction for streaming token
events, leaking sensitive LLM outputs into LangSmith storage.
2026-04-16 23:55:30 +08:00
Greyson LaLonde
0bb6faa9d3 docs: update changelog and version for v1.14.2rc1
2026-04-16 05:24:57 +08:00
Greyson LaLonde
aa28eeab6a feat: bump versions to 1.14.2rc1 2026-04-16 05:18:24 +08:00
Greyson LaLonde
29b5531f78 fix: handle cyclic JSON schemas in MCP tool resolution 2026-04-16 05:03:00 +08:00
Greyson LaLonde
74d061e994 fix: bump python-multipart to 0.0.26 to patch GHSA-mj87-hwqh-73pj
Fixes GHSA-mj87-hwqh-73pj
2026-04-16 04:25:35 +08:00
Greyson LaLonde
18d0fd6b80 fix: bump pypdf to 6.10.1 to patch GHSA-jj6c-8h6c-hppx
Fixes GHSA-jj6c-8h6c-hppx
2026-04-16 04:11:08 +08:00
Greyson LaLonde
1c90d574ab docs: update changelog and version for v1.14.2a5
2026-04-15 22:45:15 +08:00
Greyson LaLonde
3a7c550512 feat: bump versions to 1.14.2a5 2026-04-15 22:40:48 +08:00
Greyson LaLonde
5b6f89fe64 docs: update changelog and version for v1.14.2a4
2026-04-15 02:34:32 +08:00
Greyson LaLonde
ad5e66d1d0 feat: bump versions to 1.14.2a4 2026-04-15 02:29:06 +08:00
Greyson LaLonde
94e7d86df1 fix: stop forwarding strict mode to Bedrock Converse API
Forwarding strict and sanitizing tool schemas for strict mode causes
Bedrock Converse requests to hang until timeout. Drop strict forwarding
and schema sanitization from the Bedrock provider.
2026-04-15 02:22:50 +08:00
Greyson LaLonde
0dba95e166 fix: bump pytest to 9.0.3 for GHSA-6w46-j5rx-g56g
pytest <9.0.3 has an insecure tmpdir vulnerability (CVE / GHSA-6w46-j5rx-g56g).
Bump pytest-split to 0.11.0 to satisfy the new pytest>=9 requirement.
2026-04-14 02:38:05 +08:00
Greyson LaLonde
58208fdbae fix: bump openai lower bound to >=2.0.0 2026-04-14 02:19:47 +08:00
Greyson LaLonde
655e75038b feat: add resume hints to devtools release on failure 2026-04-14 01:26:29 +08:00
Greyson LaLonde
8e2a529d94 chore: add deprecation decorator to LiteAgent 2026-04-14 00:51:11 +08:00
Greyson LaLonde
58bbd0a400 docs: update changelog and version for v1.14.2a3
2026-04-13 21:38:12 +08:00
Greyson LaLonde
9708b94979 feat: bump versions to 1.14.2a3 2026-04-13 21:30:14 +08:00
Greyson LaLonde
0b0521b315 chore: improve typing in task module 2026-04-13 21:21:18 +08:00
Greyson LaLonde
c8694fbed2 fix: override pypdf and uv to patched versions for CVE-2026-40260 and GHSA-pjjw-68hj-v9mw 2026-04-13 21:04:37 +08:00
Greyson LaLonde
a4e7b322c5 docs: clean up enterprise A2A language 2026-04-13 20:53:31 +08:00
Greyson LaLonde
ee049999cb docs: add enterprise A2A feature doc and update OSS A2A docs 2026-04-13 20:28:06 +08:00
Greyson LaLonde
1d6f84c7aa chore: clean up redundant inline docs in agents module
2026-04-13 11:00:42 +08:00
Greyson LaLonde
8dc2655cbf chore: clean up redundant inline docs in agent module 2026-04-13 10:55:29 +08:00
Greyson LaLonde
121720cbb3 chore: clean up redundant inline docs in a2a module 2026-04-13 10:49:59 +08:00
Greyson LaLonde
16bf24001e fix: upgrade requests to >=2.33.0 for CVE temp file vulnerability
2026-04-12 16:12:35 +08:00
Greyson LaLonde
29fc4ac226 feat: add deploy validation CLI and improve LLM initialization ergonomics
Add crewai deploy validate to check project structure, dependencies, imports, and env usage before deploy
Run validation automatically in deploy create and deploy push with skip flag support
Return structured findings with stable codes and hints
Add test coverage for validation scenarios

refactor: defer LLM client construction to first use

Move SDK client creation out of model initialization into lazy getters
Add _get_sync_client and _get_async_client across providers
Route all provider calls through lazy getters
Surface credential errors at first real invocation

refactor: standardize provider client access

Align async paths to use _get_async_client
Avoid client construction in lightweight config accessors
Simplify provider lifecycle and improve consistency

test: update suite for new behavior

Update tests for lazy initialization contract
Update CLI tests for validation flow and skip flag
Expand coverage for provider initialization paths
2026-04-12 16:00:46 +08:00
Yanhu
25fcf39cc1 fix: preserve Bedrock tool call arguments by removing truthy default
func_info.get('arguments', '{}') returns '{}' (truthy) when no
'function' wrapper exists (Bedrock format), causing the or-fallback
to tool_call.get('input', {}) to never execute. The actual Bedrock
arguments are silently discarded.

Remove the default so get('arguments') returns None (falsy) when
there's no function wrapper, allowing the or-chain to correctly
fall through to Bedrock's 'input' field.

Fixes #5275
2026-04-12 15:50:56 +08:00
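The bug is easy to reproduce in isolation; a sketch with an example Bedrock-style payload:

```python
tool_call = {"input": {"city": "Lisbon"}}  # Bedrock format: no 'function' wrapper
func_info = tool_call.get("function", {})

buggy = func_info.get("arguments", "{}") or tool_call.get("input", {})
# '{}' is truthy, so the or-fallback never runs and the real arguments are lost.

fixed = func_info.get("arguments") or tool_call.get("input", {})
# None is falsy, so the chain correctly falls through to Bedrock's 'input' field.

print(buggy)  # '{}'
print(fixed)  # {'city': 'Lisbon'}
```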
Greyson LaLonde
3b280e41fb chore: bump pypdf to 6.10.0 for GHSA-3crg-w4f6-42mx
Resolves CVE-2026-40260 where manipulated XMP metadata entity
declarations can exhaust RAM in pypdf <6.10.0.
2026-04-11 05:56:11 +08:00
Greyson LaLonde
8de4421705 fix: sanitize tool schemas for strict mode
Pydantic schemas intermittently fail strict tool-use on openai, anthropic,
and bedrock. All three reject nested objects missing additionalProperties:
false, and anthropic also rejects keywords like minLength and top-level
anyOf. Adds per-provider sanitizers that inline refs, close objects, mark
every property required, preserve nullable unions, and strip keywords each
grammar compiler rejects. Verified against real bedrock, anthropic, and
openai.
2026-04-11 05:26:48 +08:00
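A hedged sketch of one sanitizer pass (closing objects and requiring every property); the real per-provider sanitizers also inline refs, preserve nullable unions, and strip rejected keywords:

```python
def close_objects(schema: dict) -> dict:
    """Close every object and mark all properties required, recursively."""
    if schema.get("type") == "object":
        props = schema.get("properties", {})
        schema["additionalProperties"] = False
        schema["required"] = list(props)
        for sub in props.values():
            close_objects(sub)
    elif schema.get("type") == "array" and "items" in schema:
        close_objects(schema["items"])
    return schema


schema = {
    "type": "object",
    "properties": {
        "user": {"type": "object", "properties": {"name": {"type": "string"}}},
    },
}
print(close_objects(schema))
```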
Greyson LaLonde
62484934c1 chore: bump uv to 0.11.6 for GHSA-pjjw-68hj-v9mw
Low-severity advisory: malformed RECORD entries in wheels could delete
files outside the venv on uninstall. Fixed in uv 0.11.6.
2026-04-11 05:09:24 +08:00
Greyson LaLonde
298fc7b9c0 chore: drop tiktoken from anthropic async max_tokens test 2026-04-11 03:20:20 +08:00
Greyson LaLonde
9537ba0413 ci: add pip-audit pre-commit hook 2026-04-11 03:06:31 +08:00
Greyson LaLonde
ace9617722 test: re-record hierarchical verbose manager cassette 2026-04-11 02:35:00 +08:00
Greyson LaLonde
7e1672447b fix: deflake MemoryRecord embedding serialization test
Substring checks like `'0.1' not in json_str` collided with timestamps
such as `2026-04-10T13:00:50.140557` on CI. Round-trip through
`model_validate_json` to verify structurally that the embedding field
is absent from the serialized output.
2026-04-11 02:01:23 +08:00
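A sketch of the structural check, with a stand-in model rather than crewAI's `MemoryRecord`:

```python
from datetime import datetime

from pydantic import BaseModel, Field


class MemoryRecord(BaseModel):  # stand-in with an excluded embedding field
    text: str
    created_at: datetime
    embedding: list[float] | None = Field(default=None, exclude=True)


record = MemoryRecord(text="hi", created_at=datetime.now(), embedding=[0.1, 0.2])
json_str = record.model_dump_json()

# Flaky: '0.1' can appear inside a timestamp such as 13:00:50.140557.
# assert "0.1" not in json_str

# Stable: round-trip and assert structurally that embedding was not serialized.
assert MemoryRecord.model_validate_json(json_str).embedding is None
```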
Greyson LaLonde
ea58f8d34d docs: update changelog and version for v1.14.2a2 2026-04-10 21:58:55 +08:00
Greyson LaLonde
fe93333066 feat: bump versions to 1.14.2a2 2026-04-10 21:51:51 +08:00
Greyson LaLonde
1293dee241 feat: checkpoint TUI with tree view, fork support, editable inputs/outputs
- Rewrite TUI with Tree widget showing branch/fork lineage
- Add Resume and Fork buttons in detail panel with Collapsible entities
- Show branch and parent_id in detail panel and CLI info output
- Auto-detect .checkpoints.db when default dir missing
- Append .db to location for SqliteProvider when no extension set
- Fix RuntimeState.from_checkpoint not setting provider/location
- Fork now writes initial checkpoint on new branch
- Add from_checkpoint, fork, and CLI docs to checkpointing.mdx
2026-04-10 21:24:49 +08:00
Greyson LaLonde
6efa142e22 fix: forward strict mode to Anthropic and Bedrock providers
The OpenAI-format tool schema sets strict: true but this was dropped
during conversion to Anthropic/Bedrock formats, so neither provider
used constrained decoding. Without it, the model can return string
"None" instead of JSON null for nullable fields, causing Pydantic
validation failures.
2026-04-10 15:32:54 +08:00
Lucas Gomide
fc6792d067 feat: enrich LLM token tracking with reasoning tokens, cache creation tokens (#5389)
2026-04-10 00:22:27 -04:00
Greyson LaLonde
84b1b0a0b0 feat: add from_checkpoint parameter to kickoff methods
Accept CheckpointConfig on Crew and Flow kickoff/kickoff_async/akickoff.
When restore_from is set, the entity resumes from that checkpoint.
When only config fields are set, checkpointing is enabled for the run.
Adds restore_from field (Path | str | None) to CheckpointConfig.
2026-04-10 03:47:23 +08:00
Greyson LaLonde
56cf8a4384 feat: embed crewai_version in checkpoints with migration framework
Write the crewAI package version into every checkpoint blob. On restore,
run version-based migrations so older checkpoints can be transformed
forward to the current format. Adds crewai.utilities.version module.
2026-04-10 01:13:30 +08:00
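A hedged sketch of version-gated migrations; the registry and helpers are illustrative, not the actual API of crewai.utilities.version:

```python
from typing import Callable

MIGRATIONS: list[tuple[tuple[int, ...], Callable[[dict], dict]]] = []


def migration(version: str):
    """Register a transform that upgrades blobs written before `version`."""
    def register(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        MIGRATIONS.append((tuple(int(p) for p in version.split(".")), fn))
        return fn
    return register


@migration("1.14.2")
def add_branch_field(blob: dict) -> dict:
    blob.setdefault("branch", "main")  # field introduced by a newer format
    return blob


def migrate(blob: dict) -> dict:
    # Pre-release suffixes (a2, rc1) are ignored in this sketch.
    written = tuple(int(p) for p in blob.get("crewai_version", "0.0.0").split("."))
    for target, fn in sorted(MIGRATIONS, key=lambda m: m[0]):
        if written < target:  # transform older checkpoints forward, in order
            blob = fn(blob)
    return blob


print(migrate({"crewai_version": "1.14.1", "inputs": {}}))
```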
Greyson LaLonde
68c754883d feat: add checkpoint forking with lineage tracking 2026-04-10 00:03:28 +08:00
alex-clawd
ce56472fc3 fix: harden NL2SQLTool — read-only default, query validation, parameterized queries (#5311)
* fix: harden NL2SQLTool — read-only by default, parameterized queries, query validation

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: address CI lint failures and remove unused import

- Remove unused `sessionmaker` import from test_nl2sql_security.py
- Use `Self` return type on `_apply_env_override` (fixes UP037/F821)
- Fix ruff errors auto-fixed in lib/crewai (UP007, etc.)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: expand _WRITE_COMMANDS and block multi-statement semicolon injection

- Add missing write commands: UPSERT, LOAD, COPY, VACUUM, ANALYZE,
  ANALYSE, REINDEX, CLUSTER, REFRESH, COMMENT, SET, RESET
- _validate_query() now splits on ';' and validates each statement
  independently; multi-statement queries are rejected outright in
  read-only mode to prevent 'SELECT 1; DROP TABLE users' bypass
- Extract single-statement logic into _validate_statement() helper
- Add TestSemicolonInjection and TestExtendedWriteCommands test classes

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* ci: retrigger

* fix: use typing_extensions.Self for Python 3.10 compat

* chore: update tool specifications

* docs: document NL2SQLTool read-only default and DML configuration

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: close three NL2SQLTool security gaps (writable CTEs, EXPLAIN ANALYZE, multi-stmt commit)

- Remove WITH from _READ_ONLY_COMMANDS; scan CTE body for write keywords so
  writable CTEs like `WITH d AS (DELETE …) SELECT …` are blocked in read-only mode.
- EXPLAIN ANALYZE/ANALYSE now resolves the underlying command; EXPLAIN ANALYZE DELETE
  is treated as a write and blocked in read-only mode.
- execute_sql commit decision now checks ALL semicolon-separated statements so
  a SELECT-first batch like `SELECT 1; DROP TABLE t` still triggers a commit
  when allow_dml=True.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: handle parenthesized EXPLAIN options syntax; remove unused _seed_db

_validate_statement now strips parenthesized options from EXPLAIN (e.g.
EXPLAIN (ANALYZE) DELETE, EXPLAIN (ANALYZE, VERBOSE) DELETE) before
checking whether ANALYZE/ANALYSE is present — closing the bypass where
the options-list form was silently allowed in read-only mode.

Adds three new tests:
  - EXPLAIN (ANALYZE) DELETE  → blocked
  - EXPLAIN (ANALYZE, VERBOSE) DELETE  → blocked
  - EXPLAIN (VERBOSE) SELECT  → allowed

Also removes the unused _seed_db helper from test_nl2sql_security.py.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* chore: update tool specifications

* fix: smarter CTE write detection, fix commit logic for writable CTEs

- Replace naive token-set matching with positional AS() body inspection
  to avoid false positives on column names like 'comment', 'set', 'reset'
- Fix execute_sql commit logic to detect writable CTEs (WITH + DELETE/INSERT)
  not just top-level write commands
- Add tests for false positive cases and writable CTE commit behavior
- Format nl2sql_tool.py to pass ruff format check

* fix: catch write commands in CTE main query + handle whitespace in AS()

- WITH cte AS (SELECT 1) DELETE FROM users now correctly blocked
- AS followed by newline/tab/multi-space before ( now detected
- execute_sql commit logic updated for both cases
- 4 new tests

* fix: EXPLAIN ANALYZE VERBOSE handling, string literal paren bypass, commit logic for EXPLAIN ANALYZE

- EXPLAIN handler now consumes all known options (ANALYZE, ANALYSE, VERBOSE) before
  extracting the real command, fixing 'EXPLAIN ANALYZE VERBOSE SELECT' being blocked
- Paren walker in _extract_main_query_after_cte now skips string literals, preventing
  `WITH cte AS (SELECT '(' FROM t) DELETE FROM users` from bypassing detection
- _is_write_stmt in execute_sql now resolves EXPLAIN ANALYZE to underlying command
  via _resolve_explain_command, ensuring session.commit() fires for write operations
- 10 new tests covering all three fixes

* fix: deduplicate EXPLAIN parsing, fix AS( regex in strings, block unknown CTE commands, bump langchain-core

- Refactor _validate_statement to use _resolve_explain_command (single source of truth)
- _iter_as_paren_matches skips string literals so 'AS (' in data doesn't confuse CTE detection
- Unknown commands after CTE definitions now blocked in read-only mode
- Bump langchain-core override to >=1.2.28 (GHSA-926x-3r5x-gfhw)

* fix: add return type annotation to _iter_as_paren_matches

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-04-09 03:21:38 -03:00
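A simplified stand-in for the `_validate_query`/`_validate_statement` split described in this PR (the real validator also handles CTEs and EXPLAIN resolution):

```python
READ_ONLY_COMMANDS = {"SELECT", "SHOW", "DESCRIBE", "EXPLAIN"}


def validate_query(query: str, read_only: bool = True) -> None:
    # Split on ';' and validate each statement independently.
    statements = [s.strip() for s in query.split(";") if s.strip()]
    if read_only and len(statements) > 1:
        raise ValueError("multi-statement queries are blocked in read-only mode")
    for stmt in statements:
        command = stmt.split(None, 1)[0].upper()
        if read_only and command not in READ_ONLY_COMMANDS:
            raise ValueError(f"{command} is not allowed in read-only mode")


validate_query("SELECT * FROM users")             # ok
try:
    validate_query("SELECT 1; DROP TABLE users")  # injection attempt
except ValueError as err:
    print(err)
```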
133 changed files with 15271 additions and 2335 deletions

View File

@@ -24,6 +24,14 @@ repos:
    rev: 0.11.3
    hooks:
      - id: uv-lock
  - repo: local
    hooks:
      - id: pip-audit
        name: pip-audit
        entry: bash -c 'source .venv/bin/activate && uv run pip-audit --skip-editable --ignore-vuln CVE-2025-69872 --ignore-vuln CVE-2026-25645 --ignore-vuln CVE-2026-27448 --ignore-vuln CVE-2026-27459 --ignore-vuln PYSEC-2023-235' --
        language: system
        pass_filenames: false
        stages: [pre-push, manual]
  - repo: https://github.com/commitizen-tools/commitizen
    rev: v4.10.1
    hooks:

View File

@@ -83,6 +83,7 @@ intelligent automations.
## Table of contents
- [Build with AI](#build-with-ai)
- [Why CrewAI?](#why-crewai)
- [Getting Started](#getting-started)
- [Key Features](#key-features)
@@ -101,6 +102,32 @@ intelligent automations.
- [Telemetry](#telemetry)
- [License](#license)
## Build with AI
Using an AI coding agent? Teach it CrewAI best practices in one command:
**Claude Code:**
```shell
/plugin marketplace add crewAIInc/skills
/plugin install crewai-skills@crewai-plugins
/reload-plugins
```
Four skills that activate automatically when you ask relevant CrewAI questions:
| Skill | When it runs |
|-------|--------------|
| `getting-started` | Scaffolding new projects, choosing between `LLM.call()` / `Agent` / `Crew` / `Flow`, wiring `crew.py` / `main.py` |
| `design-agent` | Configuring agents — role, goal, backstory, tools, LLMs, memory, guardrails |
| `design-task` | Writing task descriptions, dependencies, structured output (`output_pydantic`, `output_json`), human review |
| `ask-docs` | Querying the live [CrewAI docs MCP server](https://docs.crewai.com/mcp) for up-to-date API details |
**Cursor, Codex, Windsurf, and others ([skills.sh](https://skills.sh/crewaiinc/skills)):**
```shell
npx skills add crewaiinc/skills
```
This installs the official [CrewAI Skills](https://github.com/crewAIInc/skills) — structured instructions that teach coding agents how to scaffold Flows, configure Crews, design agents and tasks, and follow CrewAI patterns.
## Why CrewAI?
<div align="center" style="margin-bottom: 30px;">

View File

@@ -4,6 +4,218 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="22 أبريل 2026">
## v1.14.3a2
[عرض الإصدار على GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a2)
## ما الذي تغير
### الميزات
- إضافة دعم لـ bedrock V4
- إضافة أدوات Daytona sandbox لوظائف محسّنة
- إضافة صفحة "البناء باستخدام الذكاء الاصطناعي" — مستندات أصلية للذكاء الاصطناعي لوكلاء البرمجة
- إضافة "البناء باستخدام الذكاء الاصطناعي" إلى التنقل في صفحة "البدء" وملفات الصفحات لجميع اللغات (en, ko, pt-BR, ar)
### إصلاحات الأخطاء
- إصلاح انتشار أسماء @CrewBase الضمنية إلى أحداث الطاقم
- حل مشكلة تكرار تهيئة الدفعات في دمج بيانات التنفيذ الوصفية
- إصلاح تسلسل حقول مرجع فئة Task لعمليات التحقق من النقاط
- التعامل مع نتيجة BaseModel في حلقة إعادة المحاولة للحدود
- تحديث python-dotenv إلى الإصدار >=1.2.2 للامتثال الأمني
### الوثائق
- تحديث سجل التغييرات والإصدار لـ v1.14.3a1
- تحديث الأوصاف وتطبيق الترجمات الفعلية
## المساهمون
@MatthiasHowellYopp, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @renatonitta
</Update>
<Update label="21 أبريل 2026">
## v1.14.3a1
[عرض الإصدار على GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a1)
## ما الذي تغير
### الميزات
- إضافة دعم نقاط التحقق والفروع لوكلاء مستقلين
### إصلاحات الأخطاء
- الحفاظ على thought_signature في استدعاءات أداة البث Gemini
- إصدار task_started عند استئناف الفرع وإعادة تصميم واجهة المستخدم النصية لنقاط التحقق
- تصحيح ترتيب التشغيل الجاف ومعالجة الفرع القديم الذي تم التحقق منه في إصدار أدوات التطوير
- استخدام تواريخ مستقبلية في اختبارات تقليم نقاط التحقق لمنع الفشل المعتمد على الوقت (#5543)
### الوثائق
- تحديث سجل التغييرات والإصدار لـ v1.14.2
## المساهمون
@alex-clawd, @greysonlalonde
</Update>
<Update label="17 أبريل 2026">
## v1.14.2
[عرض الإصدار على GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2)
## ما الذي تغير
### الميزات
- إضافة أوامر استئناف النقاط التفتيش، والاختلاف، والتنظيف مع تحسين إمكانية الاكتشاف.
- إضافة معلمة `from_checkpoint` إلى `Agent.kickoff` والطرق ذات الصلة.
- إضافة أوامر إدارة القوالب لقوالب المشاريع.
- إضافة تلميحات استئناف إلى إصدار أدوات المطور عند الفشل.
- إضافة واجهة سطر الأوامر للتحقق من النشر وتعزيز سهولة استخدام تهيئة LLM.
- إضافة تقسيم النقاط التفتيشية مع تتبع النسب.
- إثراء تتبع رموز LLM مع رموز الاستدلال ورموز إنشاء التخزين المؤقت.
### إصلاحات الأخطاء
- إصلاح المطالبة بشأن تعارضات الفروع القديمة في إصدار أدوات المطور.
- تصحيح الثغرات في `authlib` و `langchain-text-splitters` و `pypdf`.
- تحديد نطاق معالجات البث لمنع تلوث أجزاء التشغيل المتقاطعة.
- إرسال نقاط التفتيش عبر واجهات Flow في TUI.
- استخدام نمط البحث المتكرر لاكتشاف نقاط التفتيش بتنسيق JSON.
- التعامل مع مخططات JSON الدائرية في أداة حل MCP.
- الحفاظ على معلمات استدعاء أداة Bedrock من خلال إزالة القيمة الافتراضية الصحيحة.
- إصدار حدث flow_finished بعد استئناف HITL.
- إصلاح ثغرات متنوعة من خلال تحديث التبعيات، بما في ذلك `requests` و `cryptography` و `pytest`.
- إصلاح لإيقاف تمرير وضع صارم إلى واجهة برمجة التطبيقات Bedrock Converse.
### الوثائق
- توثيق المعلمات المفقودة وإضافة قسم النقاط التفتيشية.
- تحديث سجل التغييرات والإصدار للإصدار v1.14.2 ومرشحي الإصدار السابقين.
- إضافة توثيق ميزة A2A الخاصة بالشركات وتحديث وثائق A2A المفتوحة المصدر.
## المساهمون
@Yanhu007، @alex-clawd، @github-actions[bot]، @greysonlalonde، @iris-clawd، @lorenzejay، @lucasgomide
</Update>
<Update label="16 أبريل 2026">
## v1.14.2rc1
[عرض الإصدار على GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2rc1)
## ما الذي تغير
### إصلاحات الأخطاء
- إصلاح معالجة مخططات JSON الدائرية في أداة MCP
- إصلاح ثغرة أمنية من خلال تحديث python-multipart إلى 0.0.26
- إصلاح ثغرة أمنية من خلال تحديث pypdf إلى 6.10.1
### الوثائق
- تحديث سجل التغييرات والإصدار لـ v1.14.2a5
## المساهمون
@greysonlalonde
</Update>
<Update label="15 أبريل 2026">
## v1.14.2a5
[عرض الإصدار على GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a5)
## ما الذي تغير
### الوثائق
- تحديث سجل التغييرات والإصدار لـ v1.14.2a4
## المساهمون
@greysonlalonde
</Update>
<Update label="15 أبريل 2026">
## v1.14.2a4
[عرض الإصدار على GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a4)
## ما الذي تغير
### الميزات
- إضافة تلميحات استئناف إلى إصدار أدوات المطورين عند الفشل
### إصلاحات الأخطاء
- إصلاح توجيه وضع الصرامة إلى واجهة برمجة تطبيقات Bedrock Converse
- إصلاح إصدار pytest إلى 9.0.3 لثغرة الأمان GHSA-6w46-j5rx-g56g
- رفع الحد الأدنى لـ OpenAI إلى >=2.0.0
### الوثائق
- تحديث سجل التغييرات والإصدار لـ v1.14.2a3
## المساهمون
@greysonlalonde
</Update>
<Update label="13 أبريل 2026">
## v1.14.2a3
[عرض الإصدار على GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a3)
## ما الذي تغير
### الميزات
- إضافة واجهة سطر الأوامر للتحقق من النشر
- تحسين سهولة استخدام تهيئة LLM
### إصلاحات الأخطاء
- تجاوز pypdf و uv إلى إصدارات مصححة لـ CVE-2026-40260 و GHSA-pjjw-68hj-v9mw
- ترقية requests إلى >=2.33.0 لمعالجة ثغرة ملف مؤقت CVE
- الحفاظ على معلمات استدعاء أداة Bedrock من خلال إزالة القيمة الافتراضية الصحيحة
- تنظيف مخططات الأدوات لوضع صارم
- إصلاح اختبار تسلسل تضمين MemoryRecord
### الوثائق
- تنظيف لغة A2A الخاصة بالمؤسسات
- إضافة وثائق ميزات A2A الخاصة بالمؤسسات
- تحديث وثائق A2A الخاصة بالمصادر المفتوحة
- تحديث سجل التغييرات والإصدار لـ v1.14.2a2
## المساهمون
@Yanhu007, @greysonlalonde
</Update>
<Update label="10 أبريل 2026">
## v1.14.2a2
[عرض الإصدار على GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a2)
## ما الذي تغير
### الميزات
- إضافة واجهة مستخدم نصية لنقطة التحقق مع عرض شجري، ودعم التفرع، ومدخلات/مخرجات قابلة للتعديل
- إثراء تتبع رموز LLM مع رموز الاستدلال ورموز إنشاء التخزين المؤقت
- إضافة معلمة `from_checkpoint` إلى طرق الانطلاق
- تضمين `crewai_version` في نقاط التحقق مع إطار عمل الهجرة
- إضافة تفرع نقاط التحقق مع تتبع السلالة
### إصلاحات الأخطاء
- إصلاح توجيه الوضع الصارم إلى مزودي Anthropic وBedrock
- تعزيز NL2SQLTool مع وضع القراءة فقط الافتراضي، والتحقق من الاستعلامات، والاستعلامات المعلمة
### الوثائق
- تحديث سجل التغييرات والإصدار لـ v1.14.2a1
## المساهمون
@alex-clawd, @github-actions[bot], @greysonlalonde, @lucasgomide
</Update>
<Update label="9 أبريل 2026">
## v1.14.2a1

View File

@@ -0,0 +1,217 @@
---
title: "Build with AI"
description: "Everything AI coding agents need to build, deploy, and scale with CrewAI — skills, machine-readable docs, deployment, and enterprise features."
icon: robot
mode: "wide"
---
# Build with AI
CrewAI is AI-native by design. This page gathers what an AI coding agent needs to build with CrewAI — whether that's Claude Code, Codex, Cursor, Gemini CLI, or any other assistant helping a developer ship crews and flows.
### Supported coding agents
<CardGroup cols={5}>
<Card title="Claude Code" icon="message-bot" color="#D97706" />
<Card title="Cursor" icon="arrow-pointer" color="#3B82F6" />
<Card title="Codex" icon="terminal" color="#10B981" />
<Card title="Windsurf" icon="wind" color="#06B6D4" />
<Card title="Gemini CLI" icon="sparkles" color="#8B5CF6" />
</CardGroup>
<Note>
This page is written for humans and AI assistants alike. If you are a coding agent, start with **Skills** for CrewAI context, then use **llms.txt** for full access to the documentation.
</Note>
---
## 1. Skills — teach your agent CrewAI
**Skills** are instruction bundles that give coding agents deep CrewAI knowledge — how to scaffold Flows, configure Crews, use tools, and follow the framework's conventions.
<Tabs>
<Tab title="Claude Code (plugin marketplace)">
<img src="https://cdn.simpleicons.org/anthropic/D97706" alt="Anthropic" width="28" style={{display: "inline", verticalAlign: "middle", marginRight: "8px"}} />
CrewAI skills are available in the **Claude Code plugin marketplace** — the same distribution channel used by leading AI companies:
```shell
/plugin marketplace add crewAIInc/skills
/plugin install crewai-skills@crewai-plugins
/reload-plugins
```
Four skills activate automatically when you ask CrewAI-related questions:
| Skill | When it runs |
|-------|--------------|
| `getting-started` | New projects, choosing between `LLM.call()` / `Agent` / `Crew` / `Flow`, wiring `crew.py` / `main.py` |
| `design-agent` | Configuring agents — role, goal, backstory, tools, LLMs, memory, guardrails |
| `design-task` | Task descriptions, dependencies, structured output (`output_pydantic`, `output_json`), human review |
| `ask-docs` | Querying the [CrewAI docs MCP server](https://docs.crewai.com/mcp) for up-to-date API details |
</Tab>
<Tab title="npx (any agent)">
Works with Claude Code, Codex, Cursor, Gemini CLI, or any coding agent:
```shell
npx skills add crewaiinc/skills
```
Fetched from the [skills.sh registry](https://skills.sh/crewaiinc/skills).
</Tab>
</Tabs>
<Steps>
<Step title="Install the official skills bundle">
Use either method above — the Claude Code plugin marketplace or `npx skills add`. Both install the official [crewAIInc/skills](https://github.com/crewAIInc/skills) bundle.
</Step>
<Step title="Your agent instantly gains CrewAI expertise">
The bundle teaches your agent:
- **Flows** — stateful applications, steps, and running crews
- **Crews and agents** — YAML-first patterns, roles, tasks, delegation
- **Tools and integrations** — search, APIs, MCP servers, and common CrewAI tools
- **Project structure** — CLI scaffolds and repository conventions
- **Up-to-date patterns** — aligned with current CrewAI docs and best practices
</Step>
<Step title="Start building">
Your agent can now scaffold and build CrewAI projects without you re-explaining the framework every session.
</Step>
</Steps>
<CardGroup cols={2}>
<Card title="Skills concept" icon="bolt" href="/ar/concepts/skills">
How skills work in CrewAI agents — injection, activation, and patterns.
</Card>
<Card title="Skills page" icon="wand-magic-sparkles" href="/ar/skills">
A look at the crewAIInc/skills bundle and what it includes.
</Card>
<Card title="AGENTS.md and tooling" icon="terminal" href="/ar/guides/coding-tools/agents-md">
Setting up AGENTS.md for Claude Code, Codex, Cursor, and Gemini CLI.
</Card>
<Card title="skills.sh registry" icon="globe" href="https://skills.sh/crewaiinc/skills">
The official listing — skills, install stats, and audits.
</Card>
</CardGroup>
---
## 2. llms.txt — machine-readable docs
CrewAI publishes an `llms.txt` file that gives AI assistants direct access to the full documentation in machine-readable form.
```
https://docs.crewai.com/llms.txt
```
<Tabs>
<Tab title="What is llms.txt?">
[`llms.txt`](https://llmstxt.org/) is an emerging standard for making documentation consumable by large language models. Instead of scraping HTML, your agent can fetch a single structured text file with all the content it needs.
CrewAI's `llms.txt` is **live today** — your agent can use it right now.
</Tab>
<Tab title="How to use it">
Point your coding agent at the URL whenever it needs a CrewAI reference:
```
Fetch https://docs.crewai.com/llms.txt for CrewAI documentation.
```
Many coding agents (Claude Code, Cursor, and others) can fetch URLs directly. The file contains structured documentation covering CrewAI concepts, APIs, and guides.
</Tab>
<Tab title="Why it matters">
- **No web scraping** — clean, structured content in a single request
- **Always current** — served directly from docs.crewai.com
- **LLM-optimized** — formatted for context windows, not browsers
- **Complements Skills** — skills teach patterns, llms.txt provides the reference
</Tab>
</Tabs>
---
## 3. Enterprise deployment
Go from a local crew to production on **CrewAI AMP** (Agent Management Platform) in minutes.
<Steps>
<Step title="Build locally">
Scaffold and test your crew or flow:
```bash
crewai create crew my_crew
cd my_crew
crewai run
```
</Step>
<Step title="Prepare for deployment">
Make sure your project structure is ready:
```bash
crewai deploy --prepare
```
See the [preparation guide](/ar/enterprise/guides/prepare-for-deployment) for structure details and requirements.
</Step>
<Step title="Deploy to AMP">
Push to the CrewAI AMP platform:
```bash
crewai deploy
```
You can also deploy via the [GitHub integration](/ar/enterprise/guides/deploy-to-amp) or [Crew Studio](/ar/enterprise/guides/enable-crew-studio).
</Step>
<Step title="Access via API">
The deployed crew gets a REST endpoint. Integrate it into any application:
```bash
curl -X POST https://app.crewai.com/api/v1/crews/<crew-id>/kickoff \
-H "Authorization: Bearer $CREWAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{"inputs": {"topic": "AI agents"}}'
```
</Step>
</Steps>
<CardGroup cols={2}>
<Card title="Deploy to AMP" icon="rocket" href="/ar/enterprise/guides/deploy-to-amp">
The complete deployment guide — CLI, GitHub, and Crew Studio.
</Card>
<Card title="Introduction to AMP" icon="globe" href="/ar/enterprise/introduction">
A platform overview — what AMP provides for crews in production.
</Card>
</CardGroup>
---
## 4. Enterprise features
CrewAI AMP is built for production teams. Here's what you get after deploying.
<CardGroup cols={2}>
<Card title="Monitoring and observability" icon="chart-line">
Detailed execution traces, logs, and performance metrics for every crew run. Watch agent decisions, tool calls, and task completions in real time.
</Card>
<Card title="Crew Studio" icon="paintbrush">
A low/no-code interface for visually creating, customizing, and deploying crews — then export to code or deploy directly.
</Card>
<Card title="Webhook streaming" icon="webhook">
Stream real-time events from crew executions to your systems. Integrate with Slack, Zapier, or any webhook consumer.
</Card>
<Card title="Team management" icon="users">
SSO, RBAC, and organization-level controls. Manage who can create, deploy, and access crews.
</Card>
<Card title="Tools repository" icon="toolbox">
Publish and share custom tools across your organization. Install community tools from the registry.
</Card>
<Card title="Factory (self-hosted)" icon="server">
Run CrewAI AMP on your own infrastructure. Full platform capabilities with data residency and compliance controls.
</Card>
</CardGroup>
<AccordionGroup>
<Accordion title="Who is AMP for?">
Teams that need to move AI agent workflows from prototype to production — with monitoring, access controls, and scalable infrastructure. Whether you're a startup or a large enterprise, AMP handles the operational complexity so you can focus on building agents.
</Accordion>
<Accordion title="What deployment options are available?">
- **Cloud (app.crewai.com)** — managed by CrewAI, the fastest path to production
- **Factory (self-hosted)** — on your own infrastructure for full data control
- **Hybrid** — mix cloud and self-hosting based on data sensitivity
</Accordion>
<Accordion title="How does pricing work?">
Sign up at [app.crewai.com](https://app.crewai.com) to see current plans. Enterprise and Factory pricing is available on request.
</Accordion>
</AccordionGroup>
<Card title="Explore CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
Sign up and deploy your first crew to production.
</Card>

View File

@@ -196,7 +196,7 @@ python3 --version
- Supports any cloud provider, including on-premises deployment
- Integration with existing security systems
<Card title="Explore Enterprise Options" icon="building" href="https://crewai.com/enterprise">
<Card title="Explore Enterprise Options" icon="building" href="https://share.hsforms.com/1Ooo2UViKQ22UOzdr7i77iwr87kg">
Learn about CrewAI's enterprise offerings and schedule a demo
</Card>
</Note>

View File

@@ -11,7 +11,7 @@ mode: "wide"
This enables multi-step workflows, such as an agent accessing the database, retrieving information based on its goal, and then using that information to generate a response, report, or other output. It also gives the agent the ability to update the database based on its goal.
**Note**: Make sure the agent has access to a read-only replica, or that it is acceptable for the agent to run insert/update queries against the database.
**Note**: The tool is read-only by default (SELECT/SHOW/DESCRIBE/EXPLAIN only). Write operations require passing `allow_dml=True` or setting the `CREWAI_NL2SQL_ALLOW_DML=true` environment variable. When writes are enabled, make sure the agent uses a database user with limited privileges, or a read replica, whenever possible.
## Security model
@@ -36,6 +36,74 @@ mode: "wide"
- Add `before_tool_call` hooks to enforce allowed query patterns
- Enable query logging and alerting for destructive statements
## Read-only mode and DML configuration
`NL2SQLTool` runs in **read-only mode by default**. Only the following statement types are allowed without additional configuration:
- `SELECT`
- `SHOW`
- `DESCRIBE`
- `EXPLAIN`
Any attempt to run a write operation (`INSERT`, `UPDATE`, `DELETE`, `DROP`, `CREATE`, `ALTER`, `TRUNCATE`, etc.) raises an error unless DML is explicitly enabled.
Multi-statement queries containing a semicolon (such as `SELECT 1; DROP TABLE users`) are also blocked in read-only mode to prevent injection attacks.
### Enabling write operations
You can enable DML (data manipulation language) in two ways:
**Option 1 — constructor parameter:**
```python
from crewai_tools import NL2SQLTool
nl2sql = NL2SQLTool(
db_uri="postgresql://example@localhost:5432/test_db",
allow_dml=True,
)
```
**Option 2 — environment variable:**
```bash
CREWAI_NL2SQL_ALLOW_DML=true
```
```python
from crewai_tools import NL2SQLTool
# DML enabled via the environment variable
nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")
```
### Usage examples
**Read-only (the default) — safe for analytics and reporting:**
```python
from crewai_tools import NL2SQLTool
# Only SELECT/SHOW/DESCRIBE/EXPLAIN are allowed
nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")
```
**With DML enabled — required for write workloads:**
```python
from crewai_tools import NL2SQLTool
# INSERT, UPDATE, DELETE, DROP, and others are allowed
nl2sql = NL2SQLTool(
db_uri="postgresql://example@localhost:5432/test_db",
allow_dml=True,
)
```
<Warning>
Enabling DML gives the agent the ability to modify or delete data. Only enable it when your use case explicitly requires write access, and make sure the database credentials are limited to the minimum required privileges.
</Warning>
## Requirements
- SqlAlchemy

File diff suppressed because it is too large

View File

@@ -4,6 +4,218 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 22, 2026">
## v1.14.3a2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a2)
## What's Changed
### Features
- Add support for bedrock V4
- Add Daytona sandbox tools for enhanced functionality
- Add 'Build with AI' page — AI-native docs for coding agents
- Add Build with AI to Get Started navigation and page files for all languages (en, ko, pt-BR, ar)
### Bug Fixes
- Fix propagation of implicit @CrewBase names to crew events
- Merge execution metadata on duplicate batch initialization in TraceBatchManager
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel result in guardrail retry loop
- Bump python-dotenv to version >=1.2.2 for security compliance
### Documentation
- Update changelog and version for v1.14.3a1
- Update descriptions and apply actual translations
## Contributors
@MatthiasHowellYopp, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @renatonitta
</Update>
<Update label="Apr 21, 2026">
## v1.14.3a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a1)
## What's Changed
### Features
- Add checkpoint and fork support to standalone agents
### Bug Fixes
- Preserve thought_signature in Gemini streaming tool calls
- Emit task_started on fork resume and redesign checkpoint TUI
- Correct dry-run order and handle checked-out stale branch in devtools release
- Use future dates in checkpoint prune tests to prevent time-dependent failures (#5543)
### Documentation
- Update changelog and version for v1.14.2
## Contributors
@alex-clawd, @greysonlalonde
</Update>
<Update label="Apr 17, 2026">
## v1.14.2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2)
## What's Changed
### Features
- Add checkpoint resume, diff, and prune commands with improved discoverability.
- Add `from_checkpoint` parameter to `Agent.kickoff` and related methods.
- Add template management commands for project templates.
- Add resume hints to devtools release on failure.
- Add deploy validation CLI and enhance LLM initialization ergonomics.
- Add checkpoint forking with lineage tracking.
- Enrich LLM token tracking with reasoning tokens and cache creation tokens.
### Bug Fixes
- Fix prompt on stale branch conflicts in devtools release.
- Patch vulnerabilities in `authlib`, `langchain-text-splitters`, and `pypdf`.
- Scope streaming handlers to prevent cross-run chunk contamination.
- Dispatch Flow checkpoints through Flow APIs in TUI.
- Use recursive glob for JSON checkpoint discovery.
- Handle cyclic JSON schemas in MCP tool resolution.
- Preserve Bedrock tool call arguments by removing truthy default.
- Emit flow_finished event after HITL resume.
- Fix various vulnerabilities by updating dependencies, including `requests`, `cryptography`, and `pytest`.
- Stop forwarding strict mode to the Bedrock Converse API.
### Documentation
- Document missing parameters and add Checkpointing section.
- Update changelog and version for v1.14.2 and previous release candidates.
- Add enterprise A2A feature documentation and update OSS A2A docs.
## Contributors
@Yanhu007, @alex-clawd, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @lucasgomide
</Update>
<Update label="Apr 16, 2026">
## v1.14.2rc1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2rc1)
## What's Changed
### Bug Fixes
- Fix handling of cyclic JSON schemas in MCP tool resolution
- Fix vulnerability by bumping python-multipart to 0.0.26
- Fix vulnerability by bumping pypdf to 6.10.1
### Documentation
- Update changelog and version for v1.14.2a5
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 15, 2026">
## v1.14.2a5
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a5)
## What's Changed
### Documentation
- Update changelog and version for v1.14.2a4
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 15, 2026">
## v1.14.2a4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a4)
## What's Changed
### Features
- Add resume hints to devtools release on failure
### Bug Fixes
- Fix strict mode forwarding to Bedrock Converse API
- Pin pytest to 9.0.3 for security vulnerability GHSA-6w46-j5rx-g56g
- Bump OpenAI lower bound to >=2.0.0
### Documentation
- Update changelog and version for v1.14.2a3
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 13, 2026">
## v1.14.2a3
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a3)
## What's Changed
### Features
- Add deploy validation CLI
- Improve LLM initialization ergonomics
### Bug Fixes
- Override pypdf and uv to patched versions for CVE-2026-40260 and GHSA-pjjw-68hj-v9mw
- Upgrade requests to >=2.33.0 for CVE temp file vulnerability
- Preserve Bedrock tool call arguments by removing truthy default
- Sanitize tool schemas for strict mode
- Deflake MemoryRecord embedding serialization test
### Documentation
- Clean up enterprise A2A language
- Add enterprise A2A feature documentation
- Update OSS A2A documentation
- Update changelog and version for v1.14.2a2
## Contributors
@Yanhu007, @greysonlalonde
</Update>
<Update label="Apr 10, 2026">
## v1.14.2a2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a2)
## What's Changed
### Features
- Add checkpoint TUI with tree view, fork support, and editable inputs/outputs
- Enrich LLM token tracking with reasoning tokens and cache creation tokens
- Add `from_checkpoint` parameter to kickoff methods
- Embed `crewai_version` in checkpoints with migration framework
- Add checkpoint forking with lineage tracking
### Bug Fixes
- Fix strict mode forwarding to Anthropic and Bedrock providers
- Harden NL2SQLTool with read-only default, query validation, and parameterized queries
### Documentation
- Update changelog and version for v1.14.2a1
## Contributors
@alex-clawd, @github-actions[bot], @greysonlalonde, @lucasgomide
</Update>
<Update label="Apr 09, 2026">
## v1.14.2a1


@@ -54,6 +54,7 @@ crew = Crew(
| `on_events` | `list[str]` | `["task_completed"]` | Event types that trigger a checkpoint |
| `provider` | `BaseProvider` | `JsonProvider()` | Storage backend |
| `max_checkpoints` | `int \| None` | `None` | Max checkpoints to keep. Oldest are pruned after each write. Pruning is handled by the provider. |
| `restore_from` | `Path \| str \| None` | `None` | Path to a checkpoint to restore from. Used when passing config via a kickoff method's `from_checkpoint` parameter. |
### Inheritance and Opt-Out
@@ -79,13 +80,42 @@ crew = Crew(
## Resuming from a Checkpoint
Pass a `CheckpointConfig` with `restore_from` to any kickoff method. The crew restores from that checkpoint, skips completed tasks, and resumes.
```python
from crewai import Crew, CheckpointConfig
crew = Crew(agents=[...], tasks=[...])
result = crew.kickoff(
from_checkpoint=CheckpointConfig(
restore_from="./my_checkpoints/20260407T120000_abc123.json",
),
)
```
The restored crew skips already-completed tasks and resumes from the first incomplete one.
Remaining `CheckpointConfig` fields apply to the new run, so checkpointing continues after the restore.
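For example, a minimal sketch that restores a run and keeps checkpointing it with a retention limit (the checkpoint path is illustrative):
```python
from crewai import Crew, CheckpointConfig

crew = Crew(agents=[...], tasks=[...])

# Restore from a saved checkpoint; the remaining fields govern the resumed run,
# so new checkpoints keep being written, capped at the five most recent.
result = crew.kickoff(
    from_checkpoint=CheckpointConfig(
        restore_from="./my_checkpoints/20260407T120000_abc123.json",
        on_events=["task_completed"],
        max_checkpoints=5,
    ),
)
```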
You can also use the classmethod directly:
```python
config = CheckpointConfig(restore_from="./my_checkpoints/20260407T120000_abc123.json")
crew = Crew.from_checkpoint(config)
result = crew.kickoff()
```
## Forking from a Checkpoint
`fork()` restores a checkpoint and starts a new execution branch. Useful for exploring alternative paths from the same point.
```python
from crewai import Crew, CheckpointConfig
config = CheckpointConfig(restore_from="./my_checkpoints/20260407T120000_abc123.json")
crew = Crew.fork(config, branch="experiment-a")
result = crew.kickoff(inputs={"strategy": "aggressive"})
```
Each fork gets a unique lineage ID so checkpoints from different branches don't collide. The `branch` label is optional and auto-generated if omitted.
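For instance, a sketch comparing two strategies forked from the same checkpoint (branch labels and inputs are illustrative):
```python
from crewai import Crew, CheckpointConfig

config = CheckpointConfig(restore_from="./my_checkpoints/20260407T120000_abc123.json")

# Each fork gets its own lineage ID, so the two branches' checkpoints don't collide.
result_a = Crew.fork(config, branch="experiment-a").kickoff(inputs={"strategy": "aggressive"})
result_b = Crew.fork(config, branch="experiment-b").kickoff(inputs={"strategy": "conservative"})
```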
## Works on Crew, Flow, and Agent
@@ -125,7 +155,8 @@ flow = MyFlow(
result = flow.kickoff()
# Resume
config = CheckpointConfig(restore_from="./flow_cp/20260407T120000_abc123.json")
flow = MyFlow.from_checkpoint(config)
result = flow.kickoff()
```
@@ -231,3 +262,44 @@ async def on_llm_done_async(source, event, state):
The `state` argument is the `RuntimeState` passed automatically by the event bus when your handler accepts 3 parameters. You can register handlers on any event type listed in the [Event Listeners](/en/concepts/event-listener) documentation.
Checkpointing is best-effort: if a checkpoint write fails, the error is logged but execution continues uninterrupted.
## CLI
The `crewai checkpoint` command gives you a TUI for browsing, inspecting, resuming, and forking checkpoints. It auto-detects whether your checkpoints are JSON files or a SQLite database.
```bash
# Launch the TUI — auto-detects .checkpoints/ or .checkpoints.db
crewai checkpoint
# Point at a specific location
crewai checkpoint --location ./my_checkpoints
crewai checkpoint --location ./.checkpoints.db
```
<Frame>
<img src="/images/checkpointing.png" alt="Checkpoint TUI" />
</Frame>
The left panel is a tree view. Checkpoints are grouped by branch, and forks nest under the checkpoint they diverged from. Select a checkpoint to see its metadata, entity state, and task progress in the detail panel. Hit **Resume** to pick up where it left off, or **Fork** to start a new branch from that point.
### Editing inputs and task outputs
When a checkpoint is selected, the detail panel shows:
- **Inputs** — if the original kickoff had inputs (e.g. `{topic}`), they appear as editable fields pre-filled with the original values. Change them before resuming or forking.
- **Task outputs** — completed tasks show their output in editable text areas. Edit a task's output to change the context that downstream tasks receive. When you modify a task output and hit Fork, all subsequent tasks are invalidated and re-run with the new context.
This is useful for "what if" exploration — fork from a checkpoint, tweak a task's result, and see how it changes downstream behavior.
### Subcommands
```bash
# List all checkpoints
crewai checkpoint list ./my_checkpoints
# Inspect a specific checkpoint
crewai checkpoint info ./my_checkpoints/20260407T120000_abc123.json
# Inspect latest in a SQLite database
crewai checkpoint info ./.checkpoints.db
```


@@ -33,7 +33,14 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated, before each Crew iteration all Crew data is sent to an AgentPlanner that plans the tasks, and this plan is added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner during the planning process. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | Knowledge sources available at the crew level, accessible to all the agents. |
| **Stream** _(optional)_ | `stream` | Enable streaming output to receive real-time updates during crew execution. Returns a `CrewStreamingOutput` object that can be iterated for chunks. Defaults to `False`. |
| **Chat LLM** _(optional)_ | `chat_llm` | The language model used to orchestrate `crewai chat` CLI interactions with the crew. Accepts a model name string or `LLM` instance. Defaults to `None`. |
| **Before Kickoff Callbacks** _(optional)_ | `before_kickoff_callbacks` | A list of callable functions executed **before** the crew starts. Each callback receives and can modify the inputs dict. Distinct from the `@before_kickoff` decorator. Defaults to `[]`. |
| **After Kickoff Callbacks** _(optional)_ | `after_kickoff_callbacks` | A list of callable functions executed **after** the crew finishes. Each callback receives and can modify the `CrewOutput`. Distinct from the `@after_kickoff` decorator. Defaults to `[]`. |
| **Tracing** _(optional)_ | `tracing` | Controls OpenTelemetry tracing for the crew. `True` = always enable, `False` = always disable, `None` = inherit from environment / user settings. Defaults to `None`. |
| **Skills** _(optional)_ | `skills` | A list of `Path` objects (skill search directories) or pre-loaded `Skill` objects applied to all agents in the crew. Defaults to `None`. |
| **Security Config** _(optional)_ | `security_config` | A `SecurityConfig` instance managing crew fingerprinting and identity. Defaults to `SecurityConfig()`. |
| **Checkpoint** _(optional)_ | `checkpoint` | Enables automatic checkpointing. Pass `True` for sensible defaults, a `CheckpointConfig` for full control, `False` to opt out, or `None` to inherit. See the [Checkpointing](#checkpointing) section below. Defaults to `None`. |
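As a rough sketch of how several of these newer attributes combine on one crew (the callback bodies are illustrative; the attribute names are as documented above):
```python Code
from crewai import Crew, Process

def stamp_inputs(inputs: dict) -> dict:
    # before_kickoff_callbacks receive and may modify the inputs dict
    inputs["run_label"] = "nightly"
    return inputs

def log_result(output):
    # after_kickoff_callbacks receive and may modify the CrewOutput
    print(output.raw[:200])
    return output

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    before_kickoff_callbacks=[stamp_inputs],
    after_kickoff_callbacks=[log_result],
    tracing=True,      # always enable OpenTelemetry tracing
    checkpoint=True,   # automatic checkpointing with defaults
)
```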
<Tip>
**Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
@@ -271,6 +278,72 @@ crew = Crew(output_log_file = file_name.json) # Logs will be saved as file_name
## Checkpointing
Checkpointing lets a crew automatically save its state after key events (e.g. task completion) so that long-running or interrupted runs can be resumed exactly where they left off without re-executing completed tasks.
### Quick Start
Pass `checkpoint=True` to enable checkpointing with sensible defaults (saves to `.checkpoints/` after every task):
```python Code
from crewai import Crew, Process
crew = Crew(
agents=[researcher, writer],
tasks=[research_task, write_task],
process=Process.sequential,
checkpoint=True, # saves to .checkpoints/ after every task
)
crew.kickoff(inputs={"topic": "AI trends"})
```
### Full Control with `CheckpointConfig`
Use `CheckpointConfig` for fine-grained control over location, trigger events, storage backend, and retention:
```python Code
from crewai import Crew, Process
from crewai.state.checkpoint_config import CheckpointConfig
crew = Crew(
agents=[researcher, writer],
tasks=[research_task, write_task],
process=Process.sequential,
checkpoint=CheckpointConfig(
location="./.checkpoints", # directory for JSON files (default)
on_events=["task_completed"], # trigger after each task (default)
max_checkpoints=5, # keep only the 5 most recent checkpoints
),
)
crew.kickoff(inputs={"topic": "AI trends"})
```
### Resuming from a Checkpoint
Use `Crew.from_checkpoint()` to restore a crew from a saved checkpoint file, then call `kickoff()` to resume:
```python Code
# Resume from the most recent checkpoint
crew = Crew.from_checkpoint(".checkpoints/latest.json")
crew.kickoff()
```
<Note>
When restoring from a checkpoint, `checkpoint_inputs`, `checkpoint_train`, and `checkpoint_kickoff_event_id` are automatically reconstructed — you do not need to set these manually.
</Note>
### `CheckpointConfig` Attributes
| Attribute | Type | Default | Description |
| :----------------- | :------------------------------------- | :------------------- | :-------------------------------------------------------------------------------------------- |
| `location` | `str` | `"./.checkpoints"` | Storage destination. For `JsonProvider` this is a directory path; for `SqliteProvider` a database file path. |
| `on_events` | `list[str]` | `["task_completed"]` | Event types that trigger a checkpoint write. Use `["*"]` to checkpoint on every event. |
| `provider` | `JsonProvider \| SqliteProvider` | `JsonProvider()` | Storage backend. Defaults to `JsonProvider` (plain JSON files). |
| `max_checkpoints` | `int \| None` | `None` | Maximum checkpoints to keep. Oldest are pruned after each write. `None` keeps all. |
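For example, a sketch using the SQLite backend instead of JSON files. The `SqliteProvider` import path below is an assumption (mirroring where `CheckpointConfig` lives); adjust it to your installed version:
```python Code
from crewai import Crew, Process
from crewai.state.checkpoint_config import CheckpointConfig
from crewai.state.checkpoint_config import SqliteProvider  # import path assumed

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    checkpoint=CheckpointConfig(
        location="./.checkpoints.db",  # a database file path for SqliteProvider
        provider=SqliteProvider(),
        on_events=["*"],               # checkpoint on every event
        max_checkpoints=10,            # prune to the 10 most recent
    ),
)
```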
## Memory Utilization
Crews can utilize memory (short-term, long-term, and entity memory) to enhance their execution and learning over time. This feature allows crews to store and recall execution memories, aiding in decision-making and task execution strategies.


@@ -1,18 +1,27 @@
---
title: Planning
description: Learn how to add planning to CrewAI at the crew level (sequential task planning) and the agent level (Plan-and-Act with PlanningConfig).
icon: ruler-combined
mode: "wide"
---
## Overview
CrewAI provides two complementary planning systems:
- **Crew-level planning** — before each crew iteration, an `AgentPlanner` produces a step-by-step plan for every task and injects it into the task description. Useful when you want the crew to think through the *whole pipeline* before any agent starts working.
- **Agent-level planning (Plan-and-Act)** — a single agent builds an explicit multi-step plan, executes it step by step, and observes/replans as it goes. Configured per-agent via `PlanningConfig`. Useful when you want one agent to tackle a complex task adaptively.
The two are independent and can be combined: a crew can have planning enabled, and individual agents in that crew can also use `planning_config`.
## Crew-Level Planning
The crew-level planning feature adds planning capability to your crew. When enabled, before each Crew iteration,
all Crew information is sent to an `AgentPlanner` that will plan the tasks step by step, and this plan will be added to each task description.
### Using the Planning Feature
Getting started with crew-level planning is very easy; the only step required is to add `planning=True` to your Crew:
<CodeGroup>
```python Code
@@ -36,9 +45,9 @@ When planning is enabled, crewAI will use `gpt-4o-mini` as the default LLM for p
#### Planning LLM
Now you can define the LLM that will be used to plan the tasks.
When running the base case example, you will see something like the output below, which represents the output of the `AgentPlanner`
responsible for creating the step-by-step logic to add to the Agents' tasks.
<CodeGroup>
@@ -152,4 +161,191 @@ A list with 10 bullet points of the most relevant information about AI LLMs.
**Expected Output:**
A fully fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'.
```
</CodeGroup>
## Agent-Level Planning (Plan-and-Act)
Agent-level planning gives a single agent an explicit Plan-and-Act loop: it builds a structured multi-step plan up front, executes each step, observes the result, and can replan or refine when reality diverges from the plan. It's configured per-agent through `PlanningConfig`.
### Enabling Agent Planning
Pass a `PlanningConfig` to the agent. The presence of a `PlanningConfig` enables planning — you don't need a separate flag.
<CodeGroup>
```python Defaults
from crewai import Agent, PlanningConfig
agent = Agent(
role="Data Analyst",
goal="Analyze datasets and surface insights",
backstory="You are an experienced data analyst.",
planning_config=PlanningConfig(), # medium effort, defaults
)
```
```python Tuned
from crewai import Agent, PlanningConfig
agent = Agent(
role="Data Analyst",
goal="Analyze datasets and surface insights",
backstory="You are an experienced data analyst.",
planning_config=PlanningConfig(
reasoning_effort="high",
max_steps=10,
max_replans=2,
max_step_iterations=10,
step_timeout=120,
llm="gpt-4o-mini",
),
)
```
</CodeGroup>
### Reasoning Effort
`reasoning_effort` controls what happens *between steps* — how aggressively the agent observes, replans, and refines as it executes the plan. It is the most important knob for tuning latency vs. adaptiveness.
<ParamField body="low" type="string">
Observe each step for success validation only. Skip the decide/replan/refine pipeline; steps are marked complete and execution continues linearly. **Fastest option** — best when the plan is likely to be correct on the first try and you want minimal overhead per step.
</ParamField>
<ParamField body="medium" type="string" default="default">
Observe each step. On failure, trigger replanning. On success, skip refinement and continue. **Balanced option (default)** — replans only when something goes wrong, so you get adaptiveness without paying for it on the happy path.
</ParamField>
<ParamField body="high" type="string">
Full observation pipeline with `decide_next_action` after every step. Can trigger early goal achievement (finish before all steps run), full replanning, or lightweight step refinement. **Most adaptive, highest latency** — best for open-ended or exploratory tasks where the right path can't be predicted up front.
</ParamField>
### PlanningConfig Fields
<ParamField body="reasoning_effort" type="Literal['low', 'medium', 'high']" default="medium">
Post-step observation/replanning behavior. See above.
</ParamField>
<ParamField body="max_attempts" type="int | None" default="None">
Maximum number of planning refinement attempts during the initial plan creation. If `None`, the agent keeps refining until it indicates readiness.
</ParamField>
<ParamField body="max_steps" type="int" default="20">
Maximum number of steps in the generated plan. Must be `>= 1`. Lower this when you want concise plans; raise it for complex tasks that legitimately need many steps.
</ParamField>
<ParamField body="max_replans" type="int" default="3">
Maximum number of full replanning cycles allowed during execution. Must be `>= 0`. Set to `0` to forbid replanning entirely (the agent will stick to the original plan even if steps fail).
</ParamField>
<ParamField body="max_step_iterations" type="int" default="15">
Maximum LLM iterations per step inside the `StepExecutor` multi-turn loop. Must be `>= 1`. Lower values make individual steps faster but less thorough — useful when each step is a small, well-scoped action.
</ParamField>
<ParamField body="step_timeout" type="int | None" default="None">
Wall-clock seconds for a single step. If exceeded, the step is marked failed and observation decides whether to continue or replan. `None` means no per-step timeout.
</ParamField>
<ParamField body="system_prompt" type="str | None" default="None">
Override the default planning system prompt. Use this to inject domain-specific instructions for how plans should be structured.
</ParamField>
<ParamField body="plan_prompt" type="str | None" default="None">
Override the prompt used to create the initial plan. Supports template variables like `{description}`.
</ParamField>
<ParamField body="refine_prompt" type="str | None" default="None">
Override the prompt used to refine the plan during the `max_attempts` refinement loop.
</ParamField>
<ParamField body="llm" type="str | BaseLLM | None" default="None">
LLM used for planning. Falls back to the agent's own LLM if not provided. Pass either a model string (e.g., `"gpt-4o-mini"`) or a `BaseLLM` instance.
</ParamField>
### How the Plan-and-Act Loop Works
When `planning_config` is set, the agent executes the task as follows (a simplified sketch follows this list):
1. **Plan** — build an initial multi-step plan, refining up to `max_attempts` times until ready.
2. **Execute step** — run one step through the `StepExecutor` (up to `max_step_iterations` LLM turns, bounded by `step_timeout`).
3. **Observe** — validate whether the step succeeded.
4. **Decide next action** — depending on `reasoning_effort`:
- `low`: continue to the next step.
- `medium`: continue on success; replan on failure.
- `high`: route through `decide_next_action`, which can finish early, replan, refine the next step, or continue.
5. Repeat until the plan completes, the goal is achieved, or `max_replans` is exhausted.
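The sketch below is illustrative pseudocode of that loop, not CrewAI's actual internals; the helper functions (`create_plan`, `execute_step`, `observe`, `decide_next_action`) are hypothetical stand-ins:
```python
def plan_and_act(task, config):
    plan = create_plan(task, max_attempts=config.max_attempts)  # 1. Plan
    replans = 0
    for step in plan:
        result = execute_step(                                   # 2. Execute step
            step,
            max_iterations=config.max_step_iterations,
            timeout=config.step_timeout,
        )
        succeeded = observe(step, result)                        # 3. Observe
        if config.reasoning_effort == "low":
            continue                                             # linear execution
        if config.reasoning_effort == "medium":
            if not succeeded and replans < config.max_replans:
                plan.replan()                                    # replan on failure only
                replans += 1
            continue
        action = decide_next_action(plan, result)                # "high" effort
        if action == "finish":                                   # early goal achievement
            return plan.final_output()
        if action == "replan" and replans < config.max_replans:
            plan.replan()
            replans += 1
        elif action == "refine":
            plan.refine_next_step()                              # lightweight refinement
    return plan.final_output()
```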
### Custom Prompts Example
```python
from crewai import Agent, PlanningConfig
agent = Agent(
role="Researcher",
goal="Research topics",
backstory="Expert researcher",
planning_config=PlanningConfig(
reasoning_effort="high",
max_attempts=3,
max_steps=10,
plan_prompt="Create a focused plan for: {description}",
refine_prompt="Tighten this plan, removing any step that doesn't materially advance the goal.",
llm="gpt-4o-mini",
),
)
```
### Migration from `reasoning=True`
The original agent reasoning API used two fields directly on `Agent`:
- `reasoning: bool = False`
- `max_reasoning_attempts: int | None = None`
Both are **deprecated**. They still work — passing them emits a `DeprecationWarning` and CrewAI auto-migrates them to an equivalent `PlanningConfig` — but new code should use `PlanningConfig` directly.
<Warning>
`Agent(reasoning=True, ...)` and `Agent(max_reasoning_attempts=N, ...)` are deprecated and will be removed in a future release. Migrate to `planning_config=PlanningConfig(...)`.
</Warning>
<CodeGroup>
```python Before (deprecated)
from crewai import Agent
agent = Agent(
role="Data Analyst",
goal="Analyze data and provide insights",
backstory="Expert data analyst.",
reasoning=True,
max_reasoning_attempts=3,
)
```
```python After
from crewai import Agent, PlanningConfig
agent = Agent(
role="Data Analyst",
goal="Analyze data and provide insights",
backstory="Expert data analyst.",
planning_config=PlanningConfig(max_attempts=3),
)
```
</CodeGroup>
The mapping is direct:
- `reasoning=True` → presence of `planning_config` enables planning.
- `max_reasoning_attempts=N` → `PlanningConfig(max_attempts=N)`.
Everything else (`reasoning_effort`, `max_steps`, `max_replans`, `max_step_iterations`, `step_timeout`, custom prompts, dedicated planning LLM) is new functionality only available through `PlanningConfig`.
## Choosing Between Crew-Level and Agent-Level Planning
| Concern | Crew-level (`Crew(planning=True)`) | Agent-level (`PlanningConfig`) |
| --- | --- | --- |
| Scope | Plans every task in the crew up front | Plans one agent's task adaptively |
| When the plan is built | Once per crew iteration, before any task runs | At the start of each agent's task |
| Adapts mid-execution | No — the plan is injected as guidance | Yes — observes, replans, and refines per step |
| Best for | Multi-task pipelines where ordering and hand-offs matter | Open-ended tasks where the right path emerges as the agent works |
| Configuration surface | `planning`, `planning_llm` on `Crew` | `PlanningConfig` on `Agent` |
The two are complementary — you can enable crew-level planning to coordinate the overall pipeline and use `planning_config` on individual agents that need to think adaptively while executing their step.
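A minimal sketch of combining both levels (agent, task, and model names are illustrative):
```python
from crewai import Agent, Crew, PlanningConfig, Process

analyst = Agent(
    role="Data Analyst",
    goal="Analyze datasets and surface insights",
    backstory="You are an experienced data analyst.",
    planning_config=PlanningConfig(reasoning_effort="medium"),  # agent-level Plan-and-Act
)

crew = Crew(
    agents=[analyst, writer],
    tasks=[analysis_task, write_task],
    process=Process.sequential,
    planning=True,               # crew-level planning for the whole pipeline
    planning_llm="gpt-4o-mini",  # LLM used by the AgentPlanner
)
```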


@@ -1,148 +1,59 @@
---
title: Reasoning
description: "Learn how to enable and use agent reasoning to improve task execution."
description: "Agent reasoning has been renamed to planning_config. See the Planning page for the current API."
icon: brain
mode: "wide"
---
## Overview
Agent reasoning is a feature that allows agents to reflect on a task and create a plan before execution. This helps agents approach tasks more methodically and ensures they're ready to perform the assigned work.
<Warning>
The `reasoning=True` and `max_reasoning_attempts=N` arguments on `Agent` are **deprecated**. They still work for now — passing them emits a `DeprecationWarning` and CrewAI auto-migrates the values into a `PlanningConfig` — but they will be removed in a future release.
The replacement is **`planning_config`**, documented in full on the [Planning](/en/concepts/planning) page.
</Warning>
## Migration
The new API lives on `Agent.planning_config` and uses the `PlanningConfig` model. The presence of a `PlanningConfig` enables planning — there is no separate boolean flag.
<CodeGroup>
```python Before (deprecated)
from crewai import Agent

agent = Agent(
    role="Data Analyst",
    goal="Analyze data and provide insights",
    backstory="Expert data analyst.",
    reasoning=True,
    max_reasoning_attempts=3,
)
```
```python After
from crewai import Agent, PlanningConfig

agent = Agent(
    role="Data Analyst",
    goal="Analyze data and provide insights",
    backstory="Expert data analyst.",
    planning_config=PlanningConfig(max_attempts=3),
)
```
</CodeGroup>
Field mapping:
- `reasoning=True` → presence of `planning_config` enables planning.
- `max_reasoning_attempts=N` → `PlanningConfig(max_attempts=N)`.
## What's New
`PlanningConfig` exposes capabilities that the old `reasoning` flag did not, including:
- `reasoning_effort` (`"low"` / `"medium"` / `"high"`) to control post-step observation, replanning, and refinement.
- `max_steps`, `max_replans`, `max_step_iterations`, and `step_timeout` to bound plan size and execution.
- A dedicated planning `llm` separate from the agent's execution LLM.
- Custom `system_prompt`, `plan_prompt`, and `refine_prompt` overrides.
For the full field reference, the Plan-and-Act loop, and guidance on when to use agent-level planning vs. crew-level planning, see [Planning](/en/concepts/planning).


@@ -0,0 +1,227 @@
---
title: A2A on AMP
description: Production-grade Agent-to-Agent communication with distributed state and multi-scheme authentication
icon: "network-wired"
mode: "wide"
---
<Warning>
A2A server agents on AMP are in early release. APIs may change in future versions.
</Warning>
## Overview
CrewAI AMP extends the open-source [A2A protocol implementation](/en/learn/a2a-agent-delegation) with production infrastructure for deploying distributed agents at scale. AMP supports A2A protocol versions 0.2 and 0.3. When you deploy a crew or agent with A2A server configuration to AMP, the platform automatically provisions distributed state management, authentication, multi-transport endpoints, and lifecycle management.
<Note>
For A2A protocol fundamentals, client/server configuration, and authentication schemes, see the [A2A Agent Delegation](/en/learn/a2a-agent-delegation) documentation. This page covers what AMP adds on top of the open-source implementation.
</Note>
### Usage
Add `A2AServerConfig` to any agent in your crew and deploy to AMP. The platform detects agents with server configuration and automatically registers A2A endpoints, generates agent cards, and provisions the infrastructure described below.
```python
from crewai import Agent, Crew, Task
from crewai.a2a import A2AServerConfig
from crewai.a2a.auth import EnterpriseTokenAuth
agent = Agent(
role="Data Analyst",
goal="Analyze datasets and provide insights",
backstory="Expert data scientist with statistical analysis skills",
llm="gpt-4o",
a2a=A2AServerConfig(
auth=EnterpriseTokenAuth()
)
)
task = Task(
description="Analyze the provided dataset",
expected_output="Statistical summary with key insights",
agent=agent
)
crew = Crew(agents=[agent], tasks=[task])
```
After [deploying to AMP](/en/enterprise/guides/deploy-to-amp), the platform registers two levels of A2A endpoints:
- **Crew-level**: an aggregate agent card at `/.well-known/agent-card.json` where each agent with `A2AServerConfig` is listed as a skill, with a JSON-RPC endpoint at `/a2a`
- **Per-agent**: isolated agent cards and JSON-RPC endpoints mounted at `/a2a/agents/{role}/`, each with its own tenancy
Clients can interact with the crew as a whole or target a specific agent directly. To route a request to a specific agent through the crew-level endpoint, include `"target_agent"` in the message metadata with the agent's slugified role name (e.g., `"data-analyst"` for an agent with role `"Data Analyst"`). If no `target_agent` is provided, the request is handled by the first agent in the crew.
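As a rough sketch, a JSON-RPC `message/send` request routed to a specific agent might look like the following. The envelope layout is a best-effort reading of the A2A protocol, and the URL and token are placeholders; verify against your deployment:
```python
import uuid
import httpx

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Analyze last quarter's sales data"}],
            "messageId": str(uuid.uuid4()),
            # Slugified role name routes the request to one agent in the crew.
            "metadata": {"target_agent": "data-analyst"},
        }
    },
}

response = httpx.post(
    "https://<deployment-url>/a2a",  # crew-level JSON-RPC endpoint
    json=payload,
    headers={"Authorization": "Bearer <token>"},
)
print(response.json())
```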
See [A2A Agent Delegation](/en/learn/a2a-agent-delegation#server-configuration-options) for the full list of `A2AServerConfig` options.
<Warning>
Per the A2A protocol, agent cards are publicly accessible to enable discovery. This includes both the crew-level card at `/.well-known/agent-card.json` and per-agent cards at `/a2a/agents/{role}/.well-known/agent-card.json`. Do not include sensitive information in agent names, descriptions, or skill definitions.
</Warning>
### File Inputs and Structured Output
A2A on AMP supports passing files and requesting structured output in both directions. Clients can send files as `FilePart`s and request structured responses by embedding a JSON schema in the message. Server agents receive files as `input_files` on the task, and return structured data as `DataPart`s when a schema is provided. See [File Inputs and Structured Output](/en/learn/a2a-agent-delegation#file-inputs-and-structured-output) for details.
### What AMP Adds
<CardGroup cols={2}>
<Card title="Distributed State" icon="database">
Persistent task, context, and result storage
</Card>
<Card title="Enterprise Authentication" icon="shield-halved">
OIDC, OAuth2, mTLS, and Enterprise token validation beyond simple bearer tokens
</Card>
<Card title="gRPC Transport" icon="bolt">
Full gRPC server with TLS and authentication
</Card>
<Card title="Context Lifecycle" icon="clock-rotate-left">
Automatic idle detection, expiration, and cleanup of long-running conversations
</Card>
<Card title="Signed Webhooks" icon="signature">
HMAC-SHA256 signed push notifications with replay protection
</Card>
<Card title="Multi-Transport" icon="arrows-split-up-and-left">
REST, JSON-RPC, and gRPC endpoints served simultaneously from a single deployment
</Card>
</CardGroup>
---
## Distributed State Management
In the open-source implementation, task and context state lives in memory on a single process. AMP replaces this with persistent, distributed stores.
### Storage Layers
| Store | Purpose |
|---|---|
| **Task Store** | Persists A2A task state and metadata |
| **Context Store** | Tracks conversation context, creation time, last activity, and associated tasks |
| **Result Store** | Caches task results for retrieval |
| **Push Config Store** | Manages webhook subscriptions per task |
Multiple A2A deployments are automatically isolated from each other, preventing data collisions when sharing infrastructure.
---
## Enterprise Authentication
AMP supports six authentication schemes for incoming A2A requests, configurable per deployment. Authentication works across both HTTP and gRPC transports.
| Scheme | Description | Use Case |
|---|---|---|
| **SimpleTokenAuth** | Static bearer token from `AUTH_TOKEN` env var | Development, simple deployments |
| **EnterpriseTokenAuth** | Token verification via CrewAI PlusAPI with integration token claims | AMP-to-AMP agent communication |
| **OIDCAuth** | OpenID Connect JWT validation with JWKS endpoint caching | Enterprise SSO integration |
| **OAuth2ServerAuth** | OAuth2 with configurable scopes | Fine-grained access control |
| **APIKeyServerAuth** | API key validation via header or query parameter | Third-party integrations |
| **MTLSServerAuth** | Mutual TLS certificate-based authentication | Zero-trust environments |
The configured auth scheme automatically populates the agent card's `securitySchemes` and `security` fields. Clients discover authentication requirements by fetching the agent card before making requests.
---
## Extended Agent Cards
AMP supports role-based skill visibility through extended agent cards. Unauthenticated users see the standard agent card with public skills. Authenticated users receive an extended card with additional capabilities.
This enables patterns like:
- Public agents that expose basic skills to anyone, with advanced skills available to authenticated clients
- Internal agents that advertise different capabilities based on the caller's identity
---
## gRPC Transport
If enabled, AMP provides full gRPC support alongside the default JSON-RPC transport.
- **TLS termination** with configurable certificate and key paths
- **gRPC reflection** for debugging with tools like `grpcurl`
- **Authentication** using the same schemes available for HTTP
- **Extension validation** ensuring clients support required protocol extensions
- **Version negotiation** across A2A protocol versions 0.2 and 0.3
For deployments exposing multiple agents, AMP automatically allocates per-agent gRPC ports and coordinates TLS, startup, and shutdown across all servers.
---
## Context Lifecycle Management
AMP tracks the lifecycle of A2A conversation contexts and automatically manages cleanup.
### Lifecycle States
| State | Condition | Action |
|---|---|---|
| **Active** | Context has recent activity | None |
| **Idle** | No activity for a configured period | Marked idle, event emitted |
| **Expired** | Context exceeds its maximum lifetime | Marked expired, associated tasks cleaned up, event emitted |
A background cleanup task runs hourly to scan for idle and expired contexts. All state transitions emit CrewAI events that integrate with the platform's observability features.
---
## Signed Push Notifications
When an A2A agent sends push notifications to a client webhook, AMP signs each request with HMAC-SHA256 to ensure integrity and prevent tampering.
### Signature Headers
| Header | Purpose |
|---|---|
| `X-A2A-Signature` | HMAC-SHA256 signature in `sha256={hex_digest}` format |
| `X-A2A-Signature-Timestamp` | Unix timestamp bound to the signature |
| `X-A2A-Notification-Token` | Optional notification auth token |
### Security Properties
- **Integrity**: payload cannot be modified without invalidating the signature
- **Replay protection**: signatures are timestamp-bound with a configurable tolerance window
- **Retry with backoff**: failed deliveries retry with exponential backoff
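A receiving webhook can verify the signature with the standard library. The exact string AMP signs is not documented here, so the sketch assumes it covers `{timestamp}.{body}`; treat that construction as an assumption and confirm it against your deployment:
```python
import hashlib
import hmac
import time

def verify_push(body: bytes, signature: str, timestamp: str, secret: str,
                tolerance_seconds: int = 300) -> bool:
    # Enforce the replay-protection window by rejecting stale timestamps.
    if abs(time.time() - int(timestamp)) > tolerance_seconds:
        return False
    # Assumed signed string: "{timestamp}.{body}" -- confirm for your deployment.
    signed = f"{timestamp}.".encode() + body
    expected = "sha256=" + hmac.new(secret.encode(), signed, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature)
```
Here `signature` comes from the `X-A2A-Signature` header and `timestamp` from `X-A2A-Signature-Timestamp`.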
---
## Distributed Event Streaming
In the open-source implementation, SSE streaming works within a single process. AMP propagates SSE events across instances so that clients receive updates even when the instance holding the streaming connection differs from the instance executing the task.
---
## Multi-Transport Endpoints
AMP serves REST and JSON-RPC by default. gRPC is available as an additional transport if enabled.
| Transport | Path Convention | Description |
|---|---|---|
| **REST** | `/v1/message:send`, `/v1/message:stream`, `/v1/tasks` | Google API conventions |
| **JSON-RPC** | Standard A2A JSON-RPC endpoint | Default A2A protocol transport |
| **gRPC** | Per-agent port allocation | Optional, high-performance binary protocol |
All active transports share the same authentication, version negotiation, and extension validation. Agent cards are generated from agent and crew metadata — roles, goals, and tools become skills and descriptions — and automatically include interfaces for each active transport. They can also be manually configured via `A2AServerConfig`.
---
## Version and Extension Negotiation
AMP validates A2A protocol versions and extensions at the transport layer.
### Version Negotiation
- Clients send the `A2A-Version` header with their preferred version
- AMP validates against supported versions (0.2, 0.3) and falls back to 0.3 if unspecified
- The negotiated version is returned in the response headers
### Extension Validation
- Clients declare supported extensions via the `X-A2A-Extensions` header
- AMP validates that clients support all extensions the agent requires
- Requests from clients missing required extensions receive an `UnsupportedExtensionError`
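In practice a client sets both headers on its requests; a sketch follows (the URL is a placeholder, and echoing the negotiated version back in an `A2A-Version` response header is an assumption):
```python
import httpx

response = httpx.post(
    "https://<deployment-url>/a2a",
    json=payload,  # a JSON-RPC request body as sketched earlier
    headers={
        "A2A-Version": "0.3",                                  # preferred protocol version
        "X-A2A-Extensions": "https://example.com/ext/custom",  # declared extensions
        "Authorization": "Bearer <token>",
    },
)
print(response.headers.get("A2A-Version"))  # negotiated version (header name assumed)
```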
---
## Next Steps
- [A2A Agent Delegation](/en/learn/a2a-agent-delegation) — A2A protocol fundamentals and configuration
- [A2UI](/en/learn/a2ui) — Interactive UI rendering over A2A
- [Deploy to AMP](/en/enterprise/guides/deploy-to-amp) — General deployment guide
- [Webhook Streaming](/en/enterprise/features/webhook-streaming) — Event streaming for deployed automations


@@ -0,0 +1,217 @@
---
title: "Build with AI"
description: "Everything AI coding agents need to build, deploy, and scale with CrewAI — skills, machine-readable docs, deployment, and enterprise features."
icon: robot
mode: "wide"
---
# Build with AI
CrewAI is AI-native. This page brings together everything an AI coding agent needs to build with CrewAI — whether you're Claude Code, Codex, Cursor, Gemini CLI, or any other assistant helping a developer ship crews and flows.
### Supported Coding Agents
<CardGroup cols={5}>
<Card title="Claude Code" icon="message-bot" color="#D97706" />
<Card title="Cursor" icon="arrow-pointer" color="#3B82F6" />
<Card title="Codex" icon="terminal" color="#10B981" />
<Card title="Windsurf" icon="wind" color="#06B6D4" />
<Card title="Gemini CLI" icon="sparkles" color="#8B5CF6" />
</CardGroup>
<Note>
This page is designed to be consumed by both humans and AI assistants. If you're a coding agent, start with **Skills** to get CrewAI context, then use **llms.txt** for full docs access.
</Note>
---
## 1. Skills — Teach Your Agent CrewAI
**Skills** are instruction packs that give coding agents deep CrewAI knowledge — how to scaffold Flows, configure Crews, use tools, and follow framework conventions.
<Tabs>
<Tab title="Claude Code (Plugin Marketplace)">
<img src="https://cdn.simpleicons.org/anthropic/D97706" alt="Anthropic" width="28" style={{display: "inline", verticalAlign: "middle", marginRight: "8px"}} />
CrewAI skills are available in the **Claude Code plugin marketplace** — the same distribution channel used by top AI-native companies:
```shell
/plugin marketplace add crewAIInc/skills
/plugin install crewai-skills@crewai-plugins
/reload-plugins
```
Four skills activate automatically when you ask relevant CrewAI questions:
| Skill | When it runs |
|-------|--------------|
| `getting-started` | Scaffolding new projects, choosing between `LLM.call()` / `Agent` / `Crew` / `Flow`, wiring `crew.py` / `main.py` |
| `design-agent` | Configuring agents — role, goal, backstory, tools, LLMs, memory, guardrails |
| `design-task` | Writing task descriptions, dependencies, structured output (`output_pydantic`, `output_json`), human review |
| `ask-docs` | Querying the live [CrewAI docs MCP server](https://docs.crewai.com/mcp) for up-to-date API details |
</Tab>
<Tab title="npx (Any Agent)">
Works with Claude Code, Codex, Cursor, Gemini CLI, or any coding agent:
```shell
npx skills add crewaiinc/skills
```
Pulls from the [skills.sh registry](https://skills.sh/crewaiinc/skills).
</Tab>
</Tabs>
<Steps>
<Step title="Install the official skill pack">
Use either method above — the Claude Code plugin marketplace or `npx skills add`. Both install the official [crewAIInc/skills](https://github.com/crewAIInc/skills) pack.
</Step>
<Step title="Your agent gets instant CrewAI expertise">
The skill pack teaches your agent:
- **Flows** — stateful apps, steps, and crew kickoffs
- **Crews & Agents** — YAML-first patterns, roles, tasks, delegation
- **Tools & Integrations** — search, APIs, MCP servers, and common CrewAI tools
- **Project layout** — CLI scaffolds and repo conventions
- **Up-to-date patterns** — tracks current CrewAI docs and best practices
</Step>
<Step title="Start building">
Your agent can now scaffold and build CrewAI projects without you re-explaining the framework each session.
</Step>
</Steps>
<CardGroup cols={2}>
<Card title="Skills concept" icon="bolt" href="/en/concepts/skills">
How skills work in CrewAI agents — injection, activation, and patterns.
</Card>
<Card title="Skills landing page" icon="wand-magic-sparkles" href="/en/skills">
Overview of the crewAIInc/skills pack and what it includes.
</Card>
<Card title="AGENTS.md & coding tools" icon="terminal" href="/en/guides/coding-tools/agents-md">
Set up AGENTS.md for Claude Code, Codex, Cursor, and Gemini CLI.
</Card>
<Card title="Skills registry (skills.sh)" icon="globe" href="https://skills.sh/crewaiinc/skills">
Official listing — skills, install stats, and audits.
</Card>
</CardGroup>
---
## 2. llms.txt — Machine-Readable Docs
CrewAI publishes an `llms.txt` file that gives AI assistants direct access to the full documentation in a machine-readable format.
```
https://docs.crewai.com/llms.txt
```
<Tabs>
<Tab title="What is llms.txt?">
[`llms.txt`](https://llmstxt.org/) is an emerging standard for making documentation consumable by large language models. Instead of scraping HTML, your agent can fetch a single structured text file with all the content it needs.
CrewAI's `llms.txt` is **already live** — your agent can use it right now.
</Tab>
<Tab title="How to use it">
Point your coding agent at the URL when it needs CrewAI reference docs:
```
Fetch https://docs.crewai.com/llms.txt for CrewAI documentation.
```
Many coding agents (Claude Code, Cursor, etc.) can fetch URLs directly. The file contains structured documentation covering all CrewAI concepts, APIs, and guides.
</Tab>
<Tab title="Why it matters">
- **No scraping required** — clean, structured content in one request
- **Always up-to-date** — served directly from docs.crewai.com
- **Optimized for LLMs** — formatted for context windows, not browsers
- **Complements skills** — skills teach patterns, llms.txt provides reference
</Tab>
</Tabs>
---
## 3. Deploy to Enterprise
Go from a local crew to production on **CrewAI AMP** (Agent Management Platform) in minutes.
<Steps>
<Step title="Build locally">
Scaffold and test your crew or flow:
```bash
crewai create crew my_crew
cd my_crew
crewai run
```
</Step>
<Step title="Prepare for deployment">
Ensure your project structure is ready:
```bash
crewai deploy --prepare
```
See the [preparation guide](/en/enterprise/guides/prepare-for-deployment) for details on project structure and requirements.
</Step>
<Step title="Deploy to AMP">
Push to the CrewAI AMP platform:
```bash
crewai deploy
```
You can also deploy via [GitHub integration](/en/enterprise/guides/deploy-to-amp) or [Crew Studio](/en/enterprise/guides/enable-crew-studio).
</Step>
<Step title="Access via API">
Your deployed crew gets a REST API endpoint. Integrate it into any application:
```bash
curl -X POST https://app.crewai.com/api/v1/crews/<crew-id>/kickoff \
-H "Authorization: Bearer $CREWAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{"inputs": {"topic": "AI agents"}}'
```
</Step>
</Steps>
<CardGroup cols={2}>
<Card title="Deploy to AMP" icon="rocket" href="/en/enterprise/guides/deploy-to-amp">
Full deployment guide — CLI, GitHub, and Crew Studio methods.
</Card>
<Card title="AMP introduction" icon="globe" href="/en/enterprise/introduction">
Platform overview — what AMP provides for production crews.
</Card>
</CardGroup>
---
## 4. Enterprise Features
CrewAI AMP is built for production teams. Here's what you get beyond deployment.
<CardGroup cols={2}>
<Card title="Observability" icon="chart-line">
Detailed execution traces, logs, and performance metrics for every crew run. Monitor agent decisions, tool calls, and task completion in real time.
</Card>
<Card title="Crew Studio" icon="paintbrush">
No-code/low-code interface to create, customize, and deploy crews visually — then export to code or deploy directly.
</Card>
<Card title="Webhook Streaming" icon="webhook">
Stream real-time events from crew executions to your systems. Integrate with Slack, Zapier, or any webhook consumer.
</Card>
<Card title="Team Management" icon="users">
SSO, RBAC, and organization-level controls. Manage who can create, deploy, and access crews across your team.
</Card>
<Card title="Tool Repository" icon="toolbox">
Publish and share custom tools across your organization. Install community tools from the registry.
</Card>
<Card title="Factory (Self-Hosted)" icon="server">
Run CrewAI AMP on your own infrastructure. Full platform capabilities with data residency and compliance controls.
</Card>
</CardGroup>
<AccordionGroup>
<Accordion title="Who is AMP for?">
AMP is for teams that need to move AI agent workflows from prototypes to production — with observability, access controls, and scalable infrastructure. Whether you're a startup or enterprise, AMP handles the operational complexity so you can focus on building agents.
</Accordion>
<Accordion title="What deployment options are available?">
- **Cloud (app.crewai.com)** — managed by CrewAI, fastest path to production
- **Factory (self-hosted)** — run on your own infrastructure for full data control
- **Hybrid** — mix cloud and self-hosted based on sensitivity requirements
</Accordion>
<Accordion title="How does pricing work?">
Sign up at [app.crewai.com](https://app.crewai.com) to see current plans. Enterprise and Factory pricing is available on request.
</Accordion>
</AccordionGroup>
<Card title="Explore CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
Sign up and deploy your first crew to production.
</Card>


@@ -199,7 +199,7 @@ For teams and organizations, CrewAI offers enterprise deployment options that el
- Supports any hyperscaler, including on-prem deployments
- Integration with your existing security systems
<Card title="Explore Enterprise Options" icon="building" href="https://crewai.com/enterprise">
<Card title="Explore Enterprise Options" icon="building" href="https://share.hsforms.com/1Ooo2UViKQ22UOzdr7i77iwr87kg">
Learn about CrewAI's enterprise offerings and schedule a demo
</Card>
</Note>


@@ -7,6 +7,10 @@ mode: "wide"
## A2A Agent Delegation
<Info>
Deploying A2A agents to production? See [A2A on AMP](/en/enterprise/features/a2a) for distributed state, enterprise authentication, gRPC transport, and horizontal scaling.
</Info>
CrewAI treats [A2A protocol](https://a2a-protocol.org/latest/) as a first-class delegation primitive, enabling agents to delegate tasks, request information, and collaborate with remote agents, as well as act as A2A-compliant server agents.
In client mode, agents autonomously choose between local execution and remote delegation based on task requirements.
@@ -96,24 +100,28 @@ The `A2AClientConfig` class accepts the following parameters:
Update mechanism for receiving task status. Options: `StreamingConfig`, `PollingConfig`, or `PushNotificationConfig`.
</ParamField>
<ParamField path="transport_protocol" type="Literal['JSONRPC', 'GRPC', 'HTTP+JSON']" default="JSONRPC">
Transport protocol for A2A communication. Options: `JSONRPC` (default), `GRPC`, or `HTTP+JSON`.
</ParamField>
<ParamField path="accepted_output_modes" type="list[str]" default='["application/json"]'>
Media types the client can accept in responses.
</ParamField>
<ParamField path="supported_transports" type="list[str]" default='["JSONRPC"]'>
Ordered list of transport protocols the client supports.
</ParamField>
<ParamField path="use_client_preference" type="bool" default="False">
Whether to prioritize client transport preferences over server.
</ParamField>
<ParamField path="extensions" type="list[str]" default="[]">
A2A protocol extension URIs the client supports.
</ParamField>
<ParamField path="client_extensions" type="list[A2AExtension]" default="[]">
Client-side processing hooks for tool injection, prompt augmentation, and response modification.
</ParamField>
<ParamField path="transport" type="ClientTransportConfig" default="ClientTransportConfig()">
Transport configuration including preferred transport, supported transports for negotiation, and protocol-specific settings (gRPC message sizes, keepalive, etc.).
</ParamField>
<ParamField path="transport_protocol" type="Literal['JSONRPC', 'GRPC', 'HTTP+JSON']" default="None">
**Deprecated**: Use `transport=ClientTransportConfig(preferred=...)` instead.
</ParamField>
<ParamField path="supported_transports" type="list[str]" default="None">
**Deprecated**: Use `transport=ClientTransportConfig(supported=...)` instead.
</ParamField>
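A sketch of migrating off the deprecated transport fields, assuming `ClientTransportConfig` is importable from the same `crewai.a2a` namespace as the other config classes (adjust the import to your version):
```python
from crewai.a2a import A2AClientConfig, ClientTransportConfig  # import path assumed

# Before (deprecated):
#   A2AClientConfig(transport_protocol="GRPC", supported_transports=["GRPC", "JSONRPC"])

# After:
config = A2AClientConfig(
    transport=ClientTransportConfig(
        preferred="GRPC",               # replaces transport_protocol
        supported=["GRPC", "JSONRPC"],  # replaces supported_transports
    ),
)
```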
## Authentication
@@ -405,11 +413,7 @@ agent = Agent(
Preferred endpoint URL. If set, overrides the URL passed to `to_agent_card()`.
</ParamField>
<ParamField path="preferred_transport" type="Literal['JSONRPC', 'GRPC', 'HTTP+JSON']" default="JSONRPC">
Transport protocol for the preferred endpoint.
</ParamField>
<ParamField path="protocol_version" type="str" default="0.3">
<ParamField path="protocol_version" type="str" default="0.3.0">
A2A protocol version this agent supports.
</ParamField>
@@ -441,8 +445,36 @@ agent = Agent(
Whether agent provides extended card to authenticated users.
</ParamField>
<ParamField path="signatures" type="list[AgentCardSignature]" default="[]">
JSON Web Signatures for the AgentCard.
<ParamField path="extended_skills" type="list[AgentSkill]" default="[]">
Additional skills visible only to authenticated users in the extended agent card.
</ParamField>
<ParamField path="signing_config" type="AgentCardSigningConfig" default="None">
Configuration for signing the AgentCard with JWS. Supports RS256, ES256, PS256, and related algorithms.
</ParamField>
<ParamField path="server_extensions" type="list[ServerExtension]" default="[]">
Server-side A2A protocol extensions with `on_request`/`on_response` hooks that modify agent behavior.
</ParamField>
<ParamField path="push_notifications" type="ServerPushNotificationConfig" default="None">
Configuration for outgoing push notifications, including HMAC-SHA256 signing secret.
</ParamField>
<ParamField path="transport" type="ServerTransportConfig" default="ServerTransportConfig()">
Transport configuration including preferred transport, gRPC server settings, JSON-RPC paths, and HTTP+JSON settings.
</ParamField>
<ParamField path="auth" type="ServerAuthScheme" default="None">
Authentication scheme for incoming A2A requests. Defaults to `SimpleTokenAuth` using the `AUTH_TOKEN` environment variable.
</ParamField>
<ParamField path="preferred_transport" type="Literal['JSONRPC', 'GRPC', 'HTTP+JSON']" default="None">
**Deprecated**: Use `transport=ServerTransportConfig(preferred=...)` instead.
</ParamField>
<ParamField path="signatures" type="list[AgentCardSignature]" default="None">
**Deprecated**: Use `signing_config=AgentCardSigningConfig(...)` instead.
</ParamField>
### Combined Client and Server
@@ -468,6 +500,14 @@ agent = Agent(
)
```
### File Inputs and Structured Output
A2A supports passing files and requesting structured output in both directions.
**Client side**: When delegating to a remote A2A agent, files from the task's `input_files` are sent as `FilePart`s in the outgoing message. If `response_model` is set on the `A2AClientConfig`, the Pydantic model's JSON schema is embedded in the message metadata, requesting structured output from the remote agent.
**Server side**: Incoming `FilePart`s are extracted and passed to the agent's task as `input_files`. If the client included a JSON schema, the server creates a response model from it and applies it to the task. When the agent returns structured data, the response is sent back as a `DataPart` rather than plain text.
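On the client side, requesting structured output is a matter of handing the config a Pydantic model — a minimal sketch, assuming `A2AClientConfig` is importable as above:
```python
from pydantic import BaseModel

from crewai.a2a import A2AClientConfig  # illustrative import path


class Invoice(BaseModel):
    vendor: str
    total: float


# The model's JSON schema is embedded in the outgoing message metadata;
# a conforming remote agent replies with a DataPart matching this schema.
config = A2AClientConfig(response_model=Invoice)
```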
## Best Practices
<CardGroup cols={2}>

@@ -13,7 +13,7 @@ This tool is used to convert natural language to SQL queries. When passed to the
This enables multiple workflows, such as having an Agent access the database to fetch information based on its goal and then use that information to generate a response, report, or any other output.
It also gives the Agent the ability to update the database in line with its goal.
**Attention**: By default the tool is read-only (SELECT/SHOW/DESCRIBE/EXPLAIN only). Write operations require `allow_dml=True` or the `CREWAI_NL2SQL_ALLOW_DML=true` environment variable. When write access is enabled, make sure the Agent uses a scoped database user or a read replica where possible.
## Security Model
@@ -38,6 +38,74 @@ Use all of the following in production:
- Add `before_tool_call` hooks to enforce allowed query patterns (see the sketch below)
- Enable query logging and alerting for destructive statements
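A minimal sketch of such a pattern check — written as a plain validator you can call from the hook; the hook's exact signature varies by CrewAI version, so the wiring is left illustrative:
```python
import re

# Illustrative allowlist: tables this agent may query.
ALLOWED_TABLES = {"orders", "customers"}
_TABLE_RE = re.compile(r"\b(?:FROM|JOIN)\s+([A-Za-z_][A-Za-z0-9_]*)", re.IGNORECASE)


def validate_sql(sql: str) -> None:
    """Raise if the generated SQL references a table outside the allowlist."""
    for table in _TABLE_RE.findall(sql):
        if table.lower() not in ALLOWED_TABLES:
            raise ValueError(f"Query references disallowed table: {table!r}")
```
Raising from the hook aborts the tool call before the query ever reaches the database.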
## Read-Only Mode & DML Configuration
`NL2SQLTool` operates in **read-only mode by default**. Only the following statement types are permitted without additional configuration:
- `SELECT`
- `SHOW`
- `DESCRIBE`
- `EXPLAIN`
Any attempt to execute a write operation (`INSERT`, `UPDATE`, `DELETE`, `DROP`, `CREATE`, `ALTER`, `TRUNCATE`, etc.) will raise an error unless DML is explicitly enabled.
Multi-statement queries containing semicolons (e.g. `SELECT 1; DROP TABLE users`) are also blocked in read-only mode to prevent injection attacks.
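Concretely, a blocked call looks like this — a sketch, assuming the tool's query argument is named `sql_query` and that validation failures surface as raised errors:
```python
from crewai_tools import NL2SQLTool

nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")

nl2sql.run(sql_query="SELECT COUNT(*) FROM users")   # permitted: read-only
nl2sql.run(sql_query="DELETE FROM users")            # rejected: write command
nl2sql.run(sql_query="SELECT 1; DROP TABLE users")   # rejected: multi-statement
```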
### Enabling Write Operations
You can enable DML (Data Manipulation Language) in two ways:
**Option 1 — constructor parameter:**
```python
from crewai_tools import NL2SQLTool
nl2sql = NL2SQLTool(
db_uri="postgresql://example@localhost:5432/test_db",
allow_dml=True,
)
```
**Option 2 — environment variable:**
```bash
CREWAI_NL2SQL_ALLOW_DML=true
```
```python
from crewai_tools import NL2SQLTool
# DML enabled via environment variable
nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")
```
### Usage Examples
**Read-only (default) — safe for analytics and reporting:**
```python
from crewai_tools import NL2SQLTool
# Only SELECT/SHOW/DESCRIBE/EXPLAIN are permitted
nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")
```
**DML enabled — required for write workloads:**
```python
from crewai_tools import NL2SQLTool
# INSERT, UPDATE, DELETE, DROP, etc. are permitted
nl2sql = NL2SQLTool(
db_uri="postgresql://example@localhost:5432/test_db",
allow_dml=True,
)
```
<Warning>
Enabling DML gives the agent the ability to modify or destroy data. Only enable this when your use case explicitly requires write access, and ensure the database credentials are scoped to the minimum required privileges.
</Warning>
## Requirements
- SQLAlchemy

@@ -4,6 +4,218 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 22, 2026">
## v1.14.3a2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a2)
## What Changed
### Features
- Add Bedrock V4 support
- Add Daytona sandbox tools for enhanced functionality
- Add 'Build with AI' page — AI-native docs for coding agents
- Add Build with AI to the Get Started nav and page files for all languages (en, ko, pt-BR, ar)
### Bug Fixes
- Propagate implicit @CrewBase names to crew events
- Merge execution metadata on duplicate batch initialization
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel results in the guardrail retry loop
- Update python-dotenv to >=1.2.2 for security compliance
### Documentation
- Update changelog and version for v1.14.3a1
- Update descriptions and apply the actual translations
## Contributors
@MatthiasHowellYopp, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @renatonitta
</Update>
<Update label="Apr 21, 2026">
## v1.14.3a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a1)
## What Changed
### Features
- Add checkpoint and fork support for standalone agents
### Bug Fixes
- Preserve thought_signature in Gemini streaming tool calls
- Emit task_started on fork resume and redesign the checkpoint TUI
- Fix dry-run ordering and handle stale checked-out branches in devtools releases
- Use future dates in checkpoint pruning tests to avoid time-dependent failures (#5543)
### Documentation
- Update changelog and version for v1.14.2
## Contributors
@alex-clawd, @greysonlalonde
</Update>
<Update label="Apr 17, 2026">
## v1.14.2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2)
## What Changed
### Features
- Add checkpoint resume, diff, and prune commands with improved discoverability.
- Add the `from_checkpoint` parameter to `Agent.kickoff` and related methods.
- Add template management commands for project templates.
- Add resume hints to devtools releases on failure.
- Add a deployment validation CLI and improve LLM initialization ergonomics.
- Add checkpoint forking with lineage tracking.
- Enrich LLM token tracking with reasoning tokens and cache-creation tokens.
### Bug Fixes
- Fix the prompt on stale branch conflicts in devtools releases.
- Patch vulnerabilities in `authlib`, `langchain-text-splitters`, and `pypdf`.
- Scope streaming handlers to prevent cross-execution chunk contamination.
- Dispatch Flow checkpoints through the Flow APIs in the TUI.
- Use a recursive glob for JSON checkpoint discovery.
- Handle cyclic JSON schemas in MCP tool resolution.
- Preserve Bedrock tool-call arguments by removing the truthy default.
- Emit the flow_finished event after HITL resume.
- Fix various vulnerabilities by updating dependencies, including `requests`, `cryptography`, and `pytest`.
- Stop forwarding strict mode to the Bedrock Converse API.
### Documentation
- Document missing parameters and add a Checkpointing section.
- Update changelog and version for v1.14.2 and earlier release candidates.
- Add enterprise A2A feature docs and update the OSS A2A docs.
## Contributors
@Yanhu007, @alex-clawd, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @lucasgomide
</Update>
<Update label="Apr 16, 2026">
## v1.14.2rc1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2rc1)
## What Changed
### Bug Fixes
- Fix handling of cyclic JSON schemas in MCP tool resolution
- Fix vulnerability by updating python-multipart to 0.0.26
- Fix vulnerability by updating pypdf to 6.10.1
### Documentation
- Update changelog and version for v1.14.2a5
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 15, 2026">
## v1.14.2a5
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a5)
## What Changed
### Documentation
- Update changelog and version for v1.14.2a4
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 15, 2026">
## v1.14.2a4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a4)
## What Changed
### Features
- Add resume hints to devtools releases on failure
### Bug Fixes
- Fix strict-mode forwarding to the Bedrock Converse API
- Pin pytest to 9.0.3 for security vulnerability GHSA-6w46-j5rx-g56g
- Raise the OpenAI lower bound to >=2.0.0
### Documentation
- Update changelog and version for v1.14.2a3
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 13, 2026">
## v1.14.2a3
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a3)
## What Changed
### Features
- Add a deployment validation CLI
- Improve LLM initialization ergonomics
### Bug Fixes
- Override pypdf and uv to patched versions for CVE-2026-40260 and GHSA-pjjw-68hj-v9mw
- Upgrade requests to >=2.33.0 for the CVE temp-file vulnerability
- Preserve Bedrock tool-call arguments by removing the truthy default
- Sanitize tool schemas for strict mode
- Remove flakiness from the MemoryRecord embedding serialization test
### Documentation
- Clean up enterprise A2A language
- Add enterprise A2A feature docs
- Update the OSS A2A docs
- Update changelog and version for v1.14.2a2
## Contributors
@Yanhu007, @greysonlalonde
</Update>
<Update label="Apr 10, 2026">
## v1.14.2a2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a2)
## What Changed
### Features
- Add a checkpoint TUI with tree view, fork support, and editable inputs/outputs
- Enrich LLM token tracking with reasoning tokens and cache-creation tokens
- Add the `from_checkpoint` parameter to kickoff methods
- Embed `crewai_version` in checkpoints with a migration framework
- Add checkpoint forking with lineage tracking
### Bug Fixes
- Fix strict-mode forwarding to the Anthropic and Bedrock providers
- Harden NL2SQLTool with a read-only default, query validation, and parameterized queries
### Documentation
- Update changelog and version for v1.14.2a1
## Contributors
@alex-clawd, @github-actions[bot], @greysonlalonde, @lucasgomide
</Update>
<Update label="Apr 9, 2026">
## v1.14.2a1

@@ -0,0 +1,217 @@
---
title: "Build with AI"
description: "Everything AI coding agents need to build, deploy, and scale with CrewAI — skills, machine-readable docs, deployment, and enterprise features."
icon: robot
mode: "wide"
---
# Build with AI
CrewAI is AI-native. This page collects what an AI coding agent needs to build with CrewAI — whether Claude Code, Codex, Cursor, Gemini CLI, or any other assistant helping a developer ship crews and flows.
### Supported coding agents
<CardGroup cols={5}>
<Card title="Claude Code" icon="message-bot" color="#D97706" />
<Card title="Cursor" icon="arrow-pointer" color="#3B82F6" />
<Card title="Codex" icon="terminal" color="#10B981" />
<Card title="Windsurf" icon="wind" color="#06B6D4" />
<Card title="Gemini CLI" icon="sparkles" color="#8B5CF6" />
</CardGroup>
<Note>
This page is written for both humans and AI assistants. If you are a coding agent, start with **Skills** for CrewAI context, then use **llms.txt** for full documentation access.
</Note>
---
## 1. Skills — teach your agent CrewAI
**Skills** are instruction bundles that give coding agents deep CrewAI knowledge — scaffolding Flows, configuring Crews, using tools, and following framework conventions.
<Tabs>
<Tab title="Claude Code (Plugin Marketplace)">
<img src="https://cdn.simpleicons.org/anthropic/D97706" alt="Anthropic" width="28" style={{display: "inline", verticalAlign: "middle", marginRight: "8px"}} />
CrewAI skills ship on the **Claude Code plugin marketplace** — the same distribution channel AI-native companies use:
```shell
/plugin marketplace add crewAIInc/skills
/plugin install crewai-skills@crewai-plugins
/reload-plugins
```
Four skills activate automatically when you ask CrewAI-related questions:
| Skill | When it runs |
|-------|--------------|
| `getting-started` | Scaffolding new projects, choosing between `LLM.call()` / `Agent` / `Crew` / `Flow`, wiring `crew.py` / `main.py` |
| `design-agent` | Configuring agents — role, goal, backstory, tools, LLMs, memory, guardrails |
| `design-task` | Task descriptions, dependencies, structured output (`output_pydantic`, `output_json`), human review |
| `ask-docs` | Querying the [CrewAI docs MCP server](https://docs.crewai.com/mcp) for up-to-date API details |
</Tab>
<Tab title="npx (any agent)">
Works with Claude Code, Codex, Cursor, Gemini CLI, or any other coding agent:
```shell
npx skills add crewaiinc/skills
```
Fetched from the [skills.sh registry](https://skills.sh/crewaiinc/skills).
</Tab>
</Tabs>
<Steps>
<Step title="Install the official skill pack">
Use either method above — the Claude Code plugin marketplace or `npx skills add`. Both install the official [crewAIInc/skills](https://github.com/crewAIInc/skills) pack.
</Step>
<Step title="Your agent gains instant CrewAI expertise">
The skill pack teaches your agent:
- **Flows** — stateful apps, steps, and crew kickoffs
- **Crews and agents** — YAML-first patterns, roles, tasks, delegation
- **Tools and integrations** — search, APIs, MCP servers, and common CrewAI tools
- **Project layout** — CLI scaffolds and repo conventions
- **Up-to-date patterns** — aligned with current CrewAI docs and best practices
</Step>
<Step title="Start building">
Your agent can scaffold and build CrewAI projects without you re-explaining the framework every session.
</Step>
</Steps>
<CardGroup cols={2}>
<Card title="Skills concept" icon="bolt" href="/ko/concepts/skills">
How skills work in CrewAI agents — injection, activation, and patterns.
</Card>
<Card title="Skills landing page" icon="wand-magic-sparkles" href="/ko/skills">
Overview of the crewAIInc/skills pack and what it includes.
</Card>
<Card title="AGENTS.md and coding tools" icon="terminal" href="/ko/guides/coding-tools/agents-md">
Set up AGENTS.md for Claude Code, Codex, Cursor, and Gemini CLI.
</Card>
<Card title="Skills registry (skills.sh)" icon="globe" href="https://skills.sh/crewaiinc/skills">
The official listing — skills, install stats, and audits.
</Card>
</CardGroup>
---
## 2. llms.txt — machine-readable docs
CrewAI publishes an `llms.txt` file that gives AI assistants direct access to the full documentation in machine-readable form.
```
https://docs.crewai.com/llms.txt
```
<Tabs>
<Tab title="What is llms.txt?">
[`llms.txt`](https://llmstxt.org/) is an emerging standard for making documentation consumable by large language models. Instead of scraping HTML, your agent can fetch one structured text file with the content it needs.
CrewAI's `llms.txt` is **already live** — your agent can use it today.
</Tab>
<Tab title="How to use it">
Point your coding agent at the URL whenever it needs CrewAI reference material:
```
Fetch https://docs.crewai.com/llms.txt for CrewAI documentation.
```
Many coding agents — Claude Code, Cursor, and others — can fetch URLs directly. The file contains structured documentation covering CrewAI concepts, APIs, and guides.
</Tab>
<Tab title="Why it matters">
- **No scraping needed** — clean, structured content in a single request
- **Always current** — served directly from docs.crewai.com
- **Optimized for LLMs** — formatted for context windows, not browsers
- **Complements skills** — skills teach patterns; llms.txt provides the reference
</Tab>
</Tabs>
---
## 3. Deploy to enterprise
Take a local crew to production on **CrewAI AMP** (Agent Management Platform) in minutes.
<Steps>
<Step title="Build locally">
Scaffold and test your crew or flow:
```bash
crewai create crew my_crew
cd my_crew
crewai run
```
</Step>
<Step title="Prepare for deployment">
Make sure your project structure is ready:
```bash
crewai deploy --prepare
```
See the [preparation guide](/ko/enterprise/guides/prepare-for-deployment) for structure and requirements.
</Step>
<Step title="Deploy to AMP">
Push to the CrewAI AMP platform:
```bash
crewai deploy
```
You can also deploy via the [GitHub integration](/ko/enterprise/guides/deploy-to-amp) or [Crew Studio](/ko/enterprise/guides/enable-crew-studio).
</Step>
<Step title="Access via API">
Your deployed crew gets a REST API endpoint you can integrate into any application:
```bash
curl -X POST https://app.crewai.com/api/v1/crews/<crew-id>/kickoff \
-H "Authorization: Bearer $CREWAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{"inputs": {"topic": "AI agents"}}'
```
</Step>
</Steps>
<CardGroup cols={2}>
<Card title="Deploy to AMP" icon="rocket" href="/ko/enterprise/guides/deploy-to-amp">
The full deployment guide — CLI, GitHub, and Crew Studio methods.
</Card>
<Card title="AMP introduction" icon="globe" href="/ko/enterprise/introduction">
Platform overview — what AMP provides for production crews.
</Card>
</CardGroup>
---
## 4. Enterprise features
CrewAI AMP is built for production teams. Beyond deployment, you get:
<CardGroup cols={2}>
<Card title="Observability" icon="chart-line">
Detailed execution traces, logs, and performance metrics for every crew run. Monitor agent decisions, tool calls, and task completion in real time.
</Card>
<Card title="Crew Studio" icon="paintbrush">
A no-code/low-code interface to build, customize, and deploy crews visually — export to code or deploy directly.
</Card>
<Card title="Webhook streaming" icon="webhook">
Stream real-time events from crew executions into your systems. Integrate with Slack, Zapier, or any webhook consumer.
</Card>
<Card title="Team management" icon="users">
SSO, RBAC, and organization-level controls. Manage who can create, deploy, and access crews across your team.
</Card>
<Card title="Tool repository" icon="toolbox">
Publish and share custom tools across your organization. Install community tools from the registry.
</Card>
<Card title="Factory (self-hosted)" icon="server">
Run CrewAI AMP on your own infrastructure. Full platform capabilities with data residency and compliance controls.
</Card>
</CardGroup>
<AccordionGroup>
<Accordion title="Who is AMP for?">
Teams that need to take AI agent workflows from prototype to production — with observability, access controls, and scalable infrastructure. From startups to large enterprises, AMP handles the operational complexity so you can focus on building agents.
</Accordion>
<Accordion title="What deployment options exist?">
- **Cloud (app.crewai.com)** — managed by CrewAI, the fastest path to production
- **Factory (self-hosted)** — run on your own infrastructure for full data control
- **Hybrid** — mix cloud and self-hosted based on data sensitivity
</Accordion>
<Accordion title="How does pricing work?">
Sign up at [app.crewai.com](https://app.crewai.com) to see current plans. Enterprise and Factory pricing is available on request.
</Accordion>
</AccordionGroup>
<Card title="Explore CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
Sign up and deploy your first crew to production.
</Card>

@@ -189,7 +189,7 @@ CrewAI uses `uv` for dependency management and package handling
- Support for every hyperscaler, including on-premises deployments
- Integration with your existing security systems
<Card title="Explore Enterprise Options" icon="building" href="https://share.hsforms.com/1Ooo2UViKQ22UOzdr7i77iwr87kg">
Learn about CrewAI's enterprise offerings and schedule a demo
</Card>
</Note>

@@ -11,7 +11,75 @@ mode: "wide"
This enables multiple workflows, such as having an agent access the database to fetch information based on its goal and then use that information to generate a response, report, or other output. It also lets the agent update the database in line with its goal.
**Attention**: By default the tool operates in read-only mode (SELECT/SHOW/DESCRIBE/EXPLAIN only). Write operations require the `allow_dml=True` parameter or the `CREWAI_NL2SQL_ALLOW_DML=true` environment variable. When write access is enabled, use a database user with restricted privileges or a read replica where possible.
## Read-Only Mode & DML Configuration
`NL2SQLTool` operates in **read-only mode by default**. Only the following statement types are permitted without additional configuration:
- `SELECT`
- `SHOW`
- `DESCRIBE`
- `EXPLAIN`
Any attempt to execute a write operation (`INSERT`, `UPDATE`, `DELETE`, `DROP`, `CREATE`, `ALTER`, `TRUNCATE`, etc.) will raise an error unless DML is explicitly enabled.
Multi-statement queries containing semicolons (e.g. `SELECT 1; DROP TABLE users`) are also blocked in read-only mode to prevent injection attacks.
### Enabling Write Operations
You can enable DML (Data Manipulation Language) in two ways:
**Option 1 — constructor parameter:**
```python
from crewai_tools import NL2SQLTool
nl2sql = NL2SQLTool(
db_uri="postgresql://example@localhost:5432/test_db",
allow_dml=True,
)
```
**Option 2 — environment variable:**
```bash
CREWAI_NL2SQL_ALLOW_DML=true
```
```python
from crewai_tools import NL2SQLTool
# DML enabled via environment variable
nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")
```
### Usage Examples
**Read-only (default) — safe for analytics and reporting:**
```python
from crewai_tools import NL2SQLTool
# Only SELECT/SHOW/DESCRIBE/EXPLAIN are permitted
nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")
```
**DML enabled — required for write workloads:**
```python
from crewai_tools import NL2SQLTool
# INSERT, UPDATE, DELETE, DROP, etc. are permitted
nl2sql = NL2SQLTool(
db_uri="postgresql://example@localhost:5432/test_db",
allow_dml=True,
)
```
<Warning>
Enabling DML gives the agent the ability to modify or destroy data. Only enable it when your use case explicitly requires write access, and make sure the database credentials are scoped to the minimum required privileges.
</Warning>
## Requirements

@@ -4,6 +4,218 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 22, 2026">
## v1.14.3a2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a2)
## What Changed
### Features
- Add Bedrock V4 support
- Add Daytona sandbox tools for enhanced functionality
- Add 'Build with AI' page — AI-native docs for coding agents
- Add Build with AI to the Get Started nav and page files for all languages (en, ko, pt-BR, ar)
### Bug Fixes
- Propagate implicit @CrewBase names to crew events
- Merge execution metadata on duplicate batch initialization
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel results in the guardrail retry loop
- Update python-dotenv to >=1.2.2 for security compliance
### Documentation
- Update changelog and version for v1.14.3a1
- Update descriptions and apply the actual translations
## Contributors
@MatthiasHowellYopp, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @renatonitta
</Update>
<Update label="Apr 21, 2026">
## v1.14.3a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a1)
## What Changed
### Features
- Add checkpoint and fork support for standalone agents
### Bug Fixes
- Preserve thought_signature in Gemini streaming tool calls
- Emit task_started on fork resume and redesign the checkpoint TUI
- Fix dry-run ordering and handle stale checked-out branches in devtools releases
- Use future dates in checkpoint pruning tests to avoid time-dependent failures (#5543)
### Documentation
- Update changelog and version for v1.14.2
## Contributors
@alex-clawd, @greysonlalonde
</Update>
<Update label="Apr 17, 2026">
## v1.14.2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2)
## What Changed
### Features
- Add checkpoint resume, diff, and prune commands with improved discoverability.
- Add the `from_checkpoint` parameter to `Agent.kickoff` and related methods.
- Add template management commands for project templates.
- Add resume hints to devtools releases on failure.
- Add a deployment validation CLI and improve LLM initialization ergonomics.
- Add checkpoint forking with lineage tracking.
- Enrich LLM token tracking with reasoning tokens and cache-creation tokens.
### Bug Fixes
- Fix the prompt on stale branch conflicts in devtools releases.
- Patch vulnerabilities in `authlib`, `langchain-text-splitters`, and `pypdf`.
- Scope streaming handlers to prevent cross-execution chunk contamination.
- Dispatch Flow checkpoints through the Flow APIs in the TUI.
- Use a recursive glob for JSON checkpoint discovery.
- Handle cyclic JSON schemas in MCP tool resolution.
- Preserve Bedrock tool-call arguments by removing the truthy default.
- Emit the flow_finished event after HITL resume.
- Fix various vulnerabilities by updating dependencies, including `requests`, `cryptography`, and `pytest`.
- Stop forwarding strict mode to the Bedrock Converse API.
### Documentation
- Document missing parameters and add a Checkpointing section.
- Update changelog and version for v1.14.2 and earlier release candidates.
- Add enterprise A2A feature docs and update the OSS A2A docs.
## Contributors
@Yanhu007, @alex-clawd, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @lucasgomide
</Update>
<Update label="Apr 16, 2026">
## v1.14.2rc1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2rc1)
## What Changed
### Bug Fixes
- Fix handling of cyclic JSON schemas in MCP tool resolution
- Fix vulnerability by updating python-multipart to 0.0.26
- Fix vulnerability by updating pypdf to 6.10.1
### Documentation
- Update changelog and version for v1.14.2a5
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 15, 2026">
## v1.14.2a5
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a5)
## What Changed
### Documentation
- Update changelog and version for v1.14.2a4
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 15, 2026">
## v1.14.2a4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a4)
## What Changed
### Features
- Add resume hints to devtools releases on failure
### Bug Fixes
- Fix strict-mode forwarding to the Bedrock Converse API
- Pin pytest to 9.0.3 for security vulnerability GHSA-6w46-j5rx-g56g
- Raise the OpenAI lower bound to >=2.0.0
### Documentation
- Update changelog and version for v1.14.2a3
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 13, 2026">
## v1.14.2a3
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a3)
## What Changed
### Features
- Add a deployment validation CLI
- Improve LLM initialization ergonomics
### Bug Fixes
- Override pypdf and uv to patched versions for CVE-2026-40260 and GHSA-pjjw-68hj-v9mw
- Upgrade requests to >=2.33.0 for the CVE temp-file vulnerability
- Preserve Bedrock tool-call arguments by removing the truthy default
- Sanitize tool schemas for strict mode
- Remove flakiness from the MemoryRecord embedding serialization test
### Documentation
- Clean up enterprise A2A language
- Add enterprise A2A feature docs
- Update the OSS A2A docs
- Update changelog and version for v1.14.2a2
## Contributors
@Yanhu007, @greysonlalonde
</Update>
<Update label="Apr 10, 2026">
## v1.14.2a2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a2)
## What Changed
### Features
- Add a checkpoint TUI with tree view, fork support, and editable inputs/outputs
- Enrich LLM token tracking with reasoning tokens and cache-creation tokens
- Add the `from_checkpoint` parameter to kickoff methods
- Embed `crewai_version` in checkpoints with a migration framework
- Add checkpoint forking with lineage tracking
### Bug Fixes
- Fix strict-mode forwarding to the Anthropic and Bedrock providers
- Harden NL2SQLTool with a read-only default, query validation, and parameterized queries
### Documentation
- Update changelog and version for v1.14.2a1
## Contributors
@alex-clawd, @github-actions[bot], @greysonlalonde, @lucasgomide
</Update>
<Update label="Apr 9, 2026">
## v1.14.2a1

@@ -0,0 +1,217 @@
---
title: "Build with AI"
description: "Everything AI coding agents need to create, deploy, and scale with CrewAI — skills, machine-readable documentation, deployment, and enterprise features."
icon: robot
mode: "wide"
---
# Build with AI
CrewAI is AI-native. This page gathers what an AI coding agent needs to build with CrewAI — whether it is Claude Code, Codex, Cursor, Gemini CLI, or any other assistant helping a developer ship crews and flows.
### Supported coding agents
<CardGroup cols={5}>
<Card title="Claude Code" icon="message-bot" color="#D97706" />
<Card title="Cursor" icon="arrow-pointer" color="#3B82F6" />
<Card title="Codex" icon="terminal" color="#10B981" />
<Card title="Windsurf" icon="wind" color="#06B6D4" />
<Card title="Gemini CLI" icon="sparkles" color="#8B5CF6" />
</CardGroup>
<Note>
This page serves both humans and AI assistants. If you are a coding agent, start with **Skills** for CrewAI context and then use **llms.txt** for full documentation access.
</Note>
---
## 1. Skills — teach CrewAI to your agent
**Skills** are instruction bundles that give coding agents deep CrewAI knowledge — how to structure Flows, configure Crews, use tools, and follow framework conventions.
<Tabs>
<Tab title="Claude Code (Plugin Marketplace)">
<img src="https://cdn.simpleicons.org/anthropic/D97706" alt="Anthropic" width="28" style={{display: "inline", verticalAlign: "middle", marginRight: "8px"}} />
CrewAI skills are on the **Claude Code plugin marketplace** — the same channel used by leading AI companies:
```shell
/plugin marketplace add crewAIInc/skills
/plugin install crewai-skills@crewai-plugins
/reload-plugins
```
Four skills activate automatically when you ask relevant CrewAI questions:
| Skill | When it is used |
|-------|-----------------|
| `getting-started` | New projects, choosing between `LLM.call()` / `Agent` / `Crew` / `Flow`, `crew.py` / `main.py` files |
| `design-agent` | Configuring agents — role, goal, backstory, tools, LLMs, memory, guardrails |
| `design-task` | Describing tasks, dependencies, structured output (`output_pydantic`, `output_json`), human review |
| `ask-docs` | Querying the [CrewAI docs MCP server](https://docs.crewai.com/mcp) in real time for API details |
</Tab>
<Tab title="npx (any agent)">
Works with Claude Code, Codex, Cursor, Gemini CLI, or any coding agent:
```shell
npx skills add crewaiinc/skills
```
Fetched from the [skills.sh registry](https://skills.sh/crewaiinc/skills).
</Tab>
</Tabs>
<Steps>
<Step title="Install the official skill pack">
Use one of the methods above — the Claude Code plugin marketplace or `npx skills add`. Both install the official [crewAIInc/skills](https://github.com/crewAIInc/skills) pack.
</Step>
<Step title="Your agent gains immediate CrewAI expertise">
The pack teaches your agent:
- **Flows** — stateful apps, steps, and crew kickoffs
- **Crews and agents** — YAML-first patterns, roles, tasks, delegation
- **Tools and integrations** — search, APIs, MCP servers, and common CrewAI tools
- **Project structure** — CLI scaffolds and repository conventions
- **Up-to-date patterns** — aligned with current CrewAI documentation and best practices
</Step>
<Step title="Start building">
Your agent can scaffold and build CrewAI projects without you having to re-explain the framework every session.
</Step>
</Steps>
<CardGroup cols={2}>
<Card title="Skills concept" icon="bolt" href="/pt-BR/concepts/skills">
How skills work in CrewAI agents — injection, activation, and patterns.
</Card>
<Card title="Skills page" icon="wand-magic-sparkles" href="/pt-BR/skills">
Overview of the crewAIInc/skills pack and what it includes.
</Card>
<Card title="AGENTS.md and tools" icon="terminal" href="/pt-BR/guides/coding-tools/agents-md">
Configure AGENTS.md for Claude Code, Codex, Cursor, and Gemini CLI.
</Card>
<Card title="skills.sh registry" icon="globe" href="https://skills.sh/crewaiinc/skills">
Official listing — skills, install stats, and audits.
</Card>
</CardGroup>
---
## 2. llms.txt — machine-readable documentation
CrewAI publishes an `llms.txt` file that gives AI assistants direct access to the complete documentation in machine-readable format.
```
https://docs.crewai.com/llms.txt
```
<Tabs>
<Tab title="What is llms.txt?">
[`llms.txt`](https://llmstxt.org/) is an emerging standard for making documentation consumable by large language models. Instead of scraping HTML, your agent can fetch a single structured text file with the content it needs.
CrewAI's `llms.txt` is **already live** — your agent can use it now.
</Tab>
<Tab title="How to use it">
Point your coding agent at the URL whenever it needs the CrewAI reference:
```
Fetch https://docs.crewai.com/llms.txt for CrewAI documentation.
```
Many agents (Claude Code, Cursor, etc.) can fetch URLs directly. The file contains structured documentation about CrewAI concepts, APIs, and guides.
</Tab>
<Tab title="Why it matters">
- **No scraping** — clean, structured content in one request
- **Always up to date** — served directly from docs.crewai.com
- **Optimized for LLMs** — formatted for context windows, not browsers
- **Complements skills** — skills teach patterns; llms.txt provides the reference
</Tab>
</Tabs>
---
## 3. Enterprise deployment
From a local crew to production on **CrewAI AMP** (Agent Management Platform) in minutes.
<Steps>
<Step title="Build locally">
Scaffold and test your crew or flow:
```bash
crewai create crew my_crew
cd my_crew
crewai run
```
</Step>
<Step title="Prepare the deployment">
Make sure the project structure is ready:
```bash
crewai deploy --prepare
```
See the [preparation guide](/pt-BR/enterprise/guides/prepare-for-deployment) for structure and requirements.
</Step>
<Step title="Deploy to AMP">
Push to the CrewAI AMP platform:
```bash
crewai deploy
```
You can also deploy via the [GitHub integration](/pt-BR/enterprise/guides/deploy-to-amp) or [Crew Studio](/pt-BR/enterprise/guides/enable-crew-studio).
</Step>
<Step title="Access via API">
The deployed crew gets a REST endpoint. Integrate it into any application:
```bash
curl -X POST https://app.crewai.com/api/v1/crews/<crew-id>/kickoff \
-H "Authorization: Bearer $CREWAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{"inputs": {"topic": "AI agents"}}'
```
</Step>
</Steps>
<CardGroup cols={2}>
<Card title="Deploy to AMP" icon="rocket" href="/pt-BR/enterprise/guides/deploy-to-amp">
Complete deployment guide — CLI, GitHub, and Crew Studio methods.
</Card>
<Card title="Introduction to AMP" icon="globe" href="/pt-BR/enterprise/introduction">
Platform overview — what AMP offers for crews in production.
</Card>
</CardGroup>
---
## 4. Enterprise features
CrewAI AMP is built for teams in production. Beyond deployment, you get:
<CardGroup cols={2}>
<Card title="Observability" icon="chart-line">
Execution traces, logs, and performance metrics for every crew run. Monitor agent decisions, tool calls, and task completion in real time.
</Card>
<Card title="Crew Studio" icon="paintbrush">
A no-code/low-code interface to create, customize, and deploy crews visually — export to code or deploy directly.
</Card>
<Card title="Webhook streaming" icon="webhook">
Stream real-time events from executions to your systems. Integrate with Slack, Zapier, or any webhook consumer.
</Card>
<Card title="Team management" icon="users">
SSO, RBAC, and organization-level controls. Manage who can create, deploy, and access crews.
</Card>
<Card title="Tool repository" icon="toolbox">
Publish and share custom tools across the organization. Install community tools from the registry.
</Card>
<Card title="Factory (self-hosted)" icon="server">
Run CrewAI AMP on your own infrastructure. Full platform capabilities with data residency and compliance controls.
</Card>
</CardGroup>
<AccordionGroup>
<Accordion title="Who is AMP for?">
Teams that need to take AI agent workflows from prototype to production — with observability, access controls, and scalable infrastructure. From startups to large enterprises, AMP handles the operational complexity so you can focus on the agents.
</Accordion>
<Accordion title="What deployment options exist?">
- **Cloud (app.crewai.com)** — managed by CrewAI, the fastest path to production
- **Factory (self-hosted)** — on your infrastructure for full data control
- **Hybrid** — combine cloud and self-hosted based on data sensitivity
</Accordion>
<Accordion title="How does pricing work?">
Sign up at [app.crewai.com](https://app.crewai.com) to see current plans. Enterprise and Factory pricing is available on request.
</Accordion>
</AccordionGroup>
<Card title="Explore CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
Sign up and take your first crew to production.
</Card>

@@ -191,7 +191,7 @@ For teams and organizations, CrewAI offers corporate deployment options
- Support for any hyperscaler, including on-premises environments
- Integration with your existing security systems
<Card title="Explore Enterprise Options" icon="building" href="https://share.hsforms.com/1Ooo2UViKQ22UOzdr7i77iwr87kg">
Learn more about CrewAI's enterprise solutions and schedule a demo
</Card>
</Note>

@@ -11,7 +11,75 @@ This tool is used to convert natural language into SQL queries.
This enables multiple workflows, such as having an Agent access the database to fetch information based on a goal and then use that information to generate a response, report, or any other kind of output. It also allows the Agent to update the database in line with its goal.
**Attention**: By default, the tool operates in read-only mode (SELECT/SHOW/DESCRIBE/EXPLAIN only). Write operations require `allow_dml=True` or the `CREWAI_NL2SQL_ALLOW_DML=true` environment variable. When write access is enabled, make sure the Agent uses a database user with minimal privileges or a read replica whenever possible.
## Read-Only Mode and DML Configuration
`NL2SQLTool` operates in **read-only mode by default**. Only the following statement types are allowed without additional configuration:
- `SELECT`
- `SHOW`
- `DESCRIBE`
- `EXPLAIN`
Any attempt to execute a write operation (`INSERT`, `UPDATE`, `DELETE`, `DROP`, `CREATE`, `ALTER`, `TRUNCATE`, etc.) results in an error unless DML is explicitly enabled.
Multi-statement queries containing semicolons (e.g. `SELECT 1; DROP TABLE users`) are also blocked in read-only mode to prevent injection attacks.
### Enabling Write Operations
You can enable DML (Data Manipulation Language) in two ways:
**Option 1 — constructor parameter:**
```python
from crewai_tools import NL2SQLTool
nl2sql = NL2SQLTool(
db_uri="postgresql://example@localhost:5432/test_db",
allow_dml=True,
)
```
**Option 2 — environment variable:**
```bash
CREWAI_NL2SQL_ALLOW_DML=true
```
```python
from crewai_tools import NL2SQLTool
# DML enabled via environment variable
nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")
```
### Usage Examples
**Read-only (default) — safe for analysis and reporting:**
```python
from crewai_tools import NL2SQLTool
# Only SELECT/SHOW/DESCRIBE/EXPLAIN are allowed
nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")
```
**With DML enabled — required for write workloads:**
```python
from crewai_tools import NL2SQLTool
# INSERT, UPDATE, DELETE, DROP, etc. are allowed
nl2sql = NL2SQLTool(
db_uri="postgresql://example@localhost:5432/test_db",
allow_dml=True,
)
```
<Warning>
Enabling DML grants the agent the ability to modify or destroy data. Enable it only when your use case explicitly requires write access, and make sure the database credentials are limited to the minimum necessary privileges.
</Warning>
## Requirements

@@ -9,7 +9,7 @@ authors = [
requires-python = ">=3.10, <3.14"
dependencies = [
"Pillow~=12.1.1",
"pypdf~=6.9.1",
"pypdf~=6.10.0",
"python-magic>=0.4.27",
"aiocache~=0.12.3",
"aiofiles~=24.1.0",

@@ -152,4 +152,4 @@ __all__ = [
"wrap_file_source",
]
__version__ = "1.14.2a1"
__version__ = "1.14.3a2"

@@ -9,8 +9,8 @@ authors = [
requires-python = ">=3.10, <3.14"
dependencies = [
"pytube~=15.0.0",
"requests~=2.32.5",
"crewai==1.14.2a1",
"requests>=2.33.0,<3",
"crewai==1.14.3a2",
"tiktoken~=0.8.0",
"beautifulsoup4~=4.13.4",
"python-docx~=1.2.0",
@@ -139,6 +139,9 @@ contextual = [
"contextual-client>=0.1.0",
"nest-asyncio>=1.6.0",
]
daytona = [
"daytona~=0.140.0",
]
[tool.uv]

@@ -59,6 +59,11 @@ from crewai_tools.tools.dalle_tool.dalle_tool import DallETool
from crewai_tools.tools.databricks_query_tool.databricks_query_tool import (
DatabricksQueryTool,
)
from crewai_tools.tools.daytona_sandbox_tool import (
DaytonaExecTool,
DaytonaFileTool,
DaytonaPythonTool,
)
from crewai_tools.tools.directory_read_tool.directory_read_tool import (
DirectoryReadTool,
)
@@ -232,6 +237,9 @@ __all__ = [
"DOCXSearchTool",
"DallETool",
"DatabricksQueryTool",
"DaytonaExecTool",
"DaytonaFileTool",
"DaytonaPythonTool",
"DirectoryReadTool",
"DirectorySearchTool",
"EXASearchTool",
@@ -305,4 +313,4 @@ __all__ = [
"ZapierActionTools",
]
__version__ = "1.14.2a1"
__version__ = "1.14.3a2"

@@ -48,6 +48,11 @@ from crewai_tools.tools.dalle_tool.dalle_tool import DallETool
from crewai_tools.tools.databricks_query_tool.databricks_query_tool import (
DatabricksQueryTool,
)
from crewai_tools.tools.daytona_sandbox_tool import (
DaytonaExecTool,
DaytonaFileTool,
DaytonaPythonTool,
)
from crewai_tools.tools.directory_read_tool.directory_read_tool import (
DirectoryReadTool,
)
@@ -217,6 +222,9 @@ __all__ = [
"DOCXSearchTool",
"DallETool",
"DatabricksQueryTool",
"DaytonaExecTool",
"DaytonaFileTool",
"DaytonaPythonTool",
"DirectoryReadTool",
"DirectorySearchTool",
"EXASearchTool",

@@ -0,0 +1,107 @@
# Daytona Sandbox Tools
Run shell commands, execute Python, and manage files inside a [Daytona](https://www.daytona.io/) sandbox. Daytona provides isolated, ephemeral compute environments suitable for agent-driven code execution.
Three tools are provided so you can pick what the agent actually needs:
- **`DaytonaExecTool`** — run a shell command (`sandbox.process.exec`).
- **`DaytonaPythonTool`** — run a Python script (`sandbox.process.code_run`).
- **`DaytonaFileTool`** — read / write / append / list / delete files (`sandbox.fs.*`).
## Installation
```shell
uv add "crewai-tools[daytona]"
# or
pip install "crewai-tools[daytona]"
```
Set the API key:
```shell
export DAYTONA_API_KEY="..."
```
`DAYTONA_API_URL` and `DAYTONA_TARGET` are also respected if set.
## Sandbox lifecycle
All three tools share the same lifecycle controls from `DaytonaBaseTool`:
| Mode | When the sandbox is created | When it is deleted |
| --- | --- | --- |
| **Ephemeral** (default, `persistent=False`) | On every `_run` call | At the end of that same call |
| **Persistent** (`persistent=True`) | Lazily on first use | At process exit (via `atexit`), or manually via `tool.close()` |
| **Attach** (`sandbox_id="…"`) | Never — the tool attaches to an existing sandbox | Never — the tool will not delete a sandbox it did not create |
Ephemeral mode is the safe default: nothing leaks if the agent forgets to clean up. Use persistent mode when you want filesystem state or installed packages to carry across steps — this is typical when pairing `DaytonaFileTool` with `DaytonaExecTool`.
## Examples
### One-shot Python execution (ephemeral)
```python
from crewai_tools import DaytonaPythonTool
tool = DaytonaPythonTool()
result = tool.run(code="print(sum(range(10)))")
```
### Multi-step shell session (persistent)
```python
from crewai_tools import DaytonaExecTool, DaytonaFileTool
exec_tool = DaytonaExecTool(persistent=True)
file_tool = DaytonaFileTool(persistent=True)
# Agent writes a script, then runs it. Note that each tool lazily creates
# its *own* persistent sandbox, so the two tools above do NOT share state.
# If you need the same sandbox across two tools, create one tool, grab the
# sandbox id via `tool._persistent_sandbox.id` (after its first run), and
# pass it to the other via `sandbox_id=...`.
```
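A sketch of that handoff — note the persistent sandbox is created lazily, so the first tool must run once before its id exists (the file path is illustrative):
```python
from crewai_tools import DaytonaExecTool, DaytonaFileTool

file_tool = DaytonaFileTool(persistent=True)
file_tool.run(action="write", path="/tmp/hello.py", content="print('hi')")

# Attach a second tool to the very same sandbox via its id.
shared_id = file_tool._persistent_sandbox.id
exec_tool = DaytonaExecTool(sandbox_id=shared_id)
exec_tool.run(command="python /tmp/hello.py")
```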
### Attach to an existing sandbox
```python
from crewai_tools import DaytonaExecTool
tool = DaytonaExecTool(sandbox_id="my-long-lived-sandbox")
```
### Custom create params
Pass Daytona's `CreateSandboxFromSnapshotParams` kwargs via `create_params`:
```python
tool = DaytonaExecTool(
persistent=True,
create_params={
"language": "python",
"env_vars": {"MY_FLAG": "1"},
"labels": {"owner": "crewai-agent"},
},
)
```
## Tool arguments
### `DaytonaExecTool`
- `command: str` — shell command to run.
- `cwd: str | None` — working directory.
- `env: dict[str, str] | None` — extra env vars for this command.
- `timeout: int | None` — seconds.
### `DaytonaPythonTool`
- `code: str` — Python source to execute.
- `argv: list[str] | None` — argv forwarded via `CodeRunParams`.
- `env: dict[str, str] | None` — env vars forwarded via `CodeRunParams`.
- `timeout: int | None` — seconds.
### `DaytonaFileTool`
- `action: "read" | "write" | "append" | "list" | "delete" | "mkdir" | "info"`
- `path: str` — absolute path inside the sandbox.
- `content: str | None` — required for `append`; optional for `write` (omitting it creates an empty file).
- `binary: bool` — if `True`, `content` is base64 on write / returned as base64 on read.
- `recursive: bool` — for `delete`, removes directories recursively.
- `mode: str` — for `mkdir`, octal permission string (default `"0755"`).
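Putting the schema's chunked-write guidance into practice — one empty `write`, then ~4 KB `append` calls (paths and payload are illustrative):
```python
from crewai_tools import DaytonaFileTool

tool = DaytonaFileTool(persistent=True)
path = "/tmp/big_report.txt"   # illustrative absolute path in the sandbox
body = "x" * 20_000            # stand-in payload

tool.run(action="write", path=path, content="")  # create the file empty first
for i in range(0, len(body), 4096):              # then stream ~4 KB chunks
    tool.run(action="append", path=path, content=body[i : i + 4096])
```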

@@ -0,0 +1,13 @@
from crewai_tools.tools.daytona_sandbox_tool.daytona_base_tool import DaytonaBaseTool
from crewai_tools.tools.daytona_sandbox_tool.daytona_exec_tool import DaytonaExecTool
from crewai_tools.tools.daytona_sandbox_tool.daytona_file_tool import DaytonaFileTool
from crewai_tools.tools.daytona_sandbox_tool.daytona_python_tool import (
DaytonaPythonTool,
)
__all__ = [
"DaytonaBaseTool",
"DaytonaExecTool",
"DaytonaFileTool",
"DaytonaPythonTool",
]

@@ -0,0 +1,198 @@
from __future__ import annotations
import atexit
import logging
import os
import threading
from typing import Any, ClassVar
from crewai.tools import BaseTool, EnvVar
from pydantic import ConfigDict, Field, PrivateAttr
logger = logging.getLogger(__name__)
class DaytonaBaseTool(BaseTool):
"""Shared base for tools that act on a Daytona sandbox.
Lifecycle modes:
- persistent=False (default): create a fresh sandbox per `_run` call and
delete it when the call returns. Safer and stateless — nothing leaks if
the agent forgets cleanup.
- persistent=True: lazily create a single sandbox on first use, cache it
on the instance, and register an atexit hook to delete it at process
exit. Cheaper across many calls and lets files/state carry over.
- sandbox_id=<existing>: attach to a sandbox the caller already owns.
Never deleted by the tool.
"""
model_config = ConfigDict(arbitrary_types_allowed=True)
package_dependencies: list[str] = Field(default_factory=lambda: ["daytona"])
api_key: str | None = Field(
default_factory=lambda: os.getenv("DAYTONA_API_KEY"),
description="Daytona API key. Falls back to DAYTONA_API_KEY env var.",
json_schema_extra={"required": False},
)
api_url: str | None = Field(
default_factory=lambda: os.getenv("DAYTONA_API_URL"),
description="Daytona API URL override. Falls back to DAYTONA_API_URL env var.",
json_schema_extra={"required": False},
)
target: str | None = Field(
default_factory=lambda: os.getenv("DAYTONA_TARGET"),
description="Daytona target region. Falls back to DAYTONA_TARGET env var.",
json_schema_extra={"required": False},
)
persistent: bool = Field(
default=False,
description=(
"If True, reuse one sandbox across all calls to this tool instance "
"and delete it at process exit. Default False creates and deletes a "
"fresh sandbox per call."
),
)
sandbox_id: str | None = Field(
default=None,
description=(
"Attach to an existing sandbox by id or name instead of creating a "
"new one. The tool will never delete a sandbox it did not create."
),
)
create_params: dict[str, Any] | None = Field(
default=None,
description=(
"Optional kwargs forwarded to CreateSandboxFromSnapshotParams when "
"creating a sandbox (e.g. language, snapshot, env_vars, labels)."
),
)
sandbox_timeout: float = Field(
default=60.0,
description="Timeout in seconds for sandbox create/delete operations.",
)
env_vars: list[EnvVar] = Field(
default_factory=lambda: [
EnvVar(
name="DAYTONA_API_KEY",
description="API key for Daytona sandbox service",
required=False,
),
EnvVar(
name="DAYTONA_API_URL",
description="Daytona API base URL (optional)",
required=False,
),
EnvVar(
name="DAYTONA_TARGET",
description="Daytona target region (optional)",
required=False,
),
]
)
_client: Any | None = PrivateAttr(default=None)
_persistent_sandbox: Any | None = PrivateAttr(default=None)
_lock: threading.Lock = PrivateAttr(default_factory=threading.Lock)
_cleanup_registered: bool = PrivateAttr(default=False)
_sdk_cache: ClassVar[dict[str, Any]] = {}
@classmethod
def _import_sdk(cls) -> dict[str, Any]:
if cls._sdk_cache:
return cls._sdk_cache
try:
from daytona import (
CreateSandboxFromSnapshotParams,
Daytona,
DaytonaConfig,
)
except ImportError as exc:
raise ImportError(
"The 'daytona' package is required for Daytona sandbox tools. "
"Install it with: uv add daytona (or) pip install daytona"
) from exc
cls._sdk_cache = {
"Daytona": Daytona,
"DaytonaConfig": DaytonaConfig,
"CreateSandboxFromSnapshotParams": CreateSandboxFromSnapshotParams,
}
return cls._sdk_cache
def _get_client(self) -> Any:
if self._client is not None:
return self._client
sdk = self._import_sdk()
config_kwargs: dict[str, Any] = {}
if self.api_key:
config_kwargs["api_key"] = self.api_key
if self.api_url:
config_kwargs["api_url"] = self.api_url
if self.target:
config_kwargs["target"] = self.target
config = sdk["DaytonaConfig"](**config_kwargs) if config_kwargs else None
self._client = sdk["Daytona"](config) if config else sdk["Daytona"]()
return self._client
def _build_create_params(self) -> Any | None:
if not self.create_params:
return None
sdk = self._import_sdk()
return sdk["CreateSandboxFromSnapshotParams"](**self.create_params)
def _acquire_sandbox(self) -> tuple[Any, bool]:
"""Return (sandbox, should_delete_after_use)."""
client = self._get_client()
if self.sandbox_id:
return client.get(self.sandbox_id), False
if self.persistent:
with self._lock:
if self._persistent_sandbox is None:
self._persistent_sandbox = client.create(
self._build_create_params(),
timeout=self.sandbox_timeout,
)
if not self._cleanup_registered:
atexit.register(self.close)
self._cleanup_registered = True
return self._persistent_sandbox, False
sandbox = client.create(
self._build_create_params(),
timeout=self.sandbox_timeout,
)
return sandbox, True
def _release_sandbox(self, sandbox: Any, should_delete: bool) -> None:
if not should_delete:
return
try:
sandbox.delete(timeout=self.sandbox_timeout)
except Exception:
logger.debug(
"Best-effort sandbox cleanup failed after ephemeral use; "
"the sandbox may need manual deletion.",
exc_info=True,
)
def close(self) -> None:
"""Delete the cached persistent sandbox if one exists."""
with self._lock:
sandbox = self._persistent_sandbox
self._persistent_sandbox = None
if sandbox is None:
return
try:
sandbox.delete(timeout=self.sandbox_timeout)
except Exception:
logger.debug(
"Best-effort persistent sandbox cleanup failed at close(); "
"the sandbox may need manual deletion.",
exc_info=True,
)

@@ -0,0 +1,59 @@
from __future__ import annotations
from builtins import type as type_
from typing import Any
from pydantic import BaseModel, Field
from crewai_tools.tools.daytona_sandbox_tool.daytona_base_tool import DaytonaBaseTool
class DaytonaExecToolSchema(BaseModel):
command: str = Field(..., description="Shell command to execute in the sandbox.")
cwd: str | None = Field(
default=None,
description="Working directory to run the command in. Defaults to the sandbox work dir.",
)
env: dict[str, str] | None = Field(
default=None,
description="Optional environment variables to set for this command.",
)
timeout: int | None = Field(
default=None,
description="Maximum seconds to wait for the command to finish.",
)
class DaytonaExecTool(DaytonaBaseTool):
"""Run a shell command inside a Daytona sandbox."""
name: str = "Daytona Sandbox Exec"
description: str = (
"Execute a shell command inside a Daytona sandbox and return the exit "
"code and combined output. Use this to run builds, package installs, "
"git operations, or any one-off shell command."
)
args_schema: type_[BaseModel] = DaytonaExecToolSchema
def _run(
self,
command: str,
cwd: str | None = None,
env: dict[str, str] | None = None,
timeout: int | None = None,
) -> Any:
sandbox, should_delete = self._acquire_sandbox()
try:
response = sandbox.process.exec(
command,
cwd=cwd,
env=env,
timeout=timeout,
)
return {
"exit_code": getattr(response, "exit_code", None),
"result": getattr(response, "result", None),
"artifacts": getattr(response, "artifacts", None),
}
finally:
self._release_sandbox(sandbox, should_delete)

@@ -0,0 +1,205 @@
from __future__ import annotations
import base64
from builtins import type as type_
import logging
import posixpath
from typing import Any, Literal
from pydantic import BaseModel, Field, model_validator
from crewai_tools.tools.daytona_sandbox_tool.daytona_base_tool import DaytonaBaseTool
logger = logging.getLogger(__name__)
FileAction = Literal["read", "write", "append", "list", "delete", "mkdir", "info"]
class DaytonaFileToolSchema(BaseModel):
action: FileAction = Field(
...,
description=(
"The filesystem action to perform: 'read' (returns file contents), "
"'write' (create or replace a file with content), 'append' (append "
"content to an existing file — use this for writing large files in "
"chunks to avoid hitting tool-call size limits), 'list' (lists a "
"directory), 'delete' (removes a file/dir), 'mkdir' (creates a "
"directory), 'info' (returns file metadata)."
),
)
path: str = Field(..., description="Absolute path inside the sandbox.")
content: str | None = Field(
default=None,
description=(
"Content to write or append. If omitted for 'write', an empty file "
"is created. For files larger than a few KB, prefer one 'write' "
"with empty content followed by multiple 'append' calls of ~4KB "
"each to stay within tool-call payload limits."
),
)
binary: bool = Field(
default=False,
description=(
"For 'write': treat content as base64 and upload raw bytes. "
"For 'read': return contents as base64 instead of decoded utf-8."
),
)
recursive: bool = Field(
default=False,
description="For action='delete': remove directories recursively.",
)
mode: str = Field(
default="0755",
description="For action='mkdir': octal permission string (default 0755).",
)
@model_validator(mode="after")
def _validate_action_args(self) -> DaytonaFileToolSchema:
if self.action == "append" and self.content is None:
raise ValueError(
"action='append' requires 'content'. Pass the chunk to append "
"in the 'content' field."
)
return self
class DaytonaFileTool(DaytonaBaseTool):
"""Read, write, and manage files inside a Daytona sandbox.
Notes:
- Most useful with `persistent=True` or an explicit `sandbox_id`. With the
default ephemeral mode, files disappear when this tool call finishes.
"""
name: str = "Daytona Sandbox Files"
description: str = (
"Perform filesystem operations inside a Daytona sandbox: read a file, "
"write content to a path, append content to an existing file, list a "
"directory, delete a path, make a directory, or fetch file metadata. "
"For files larger than a few KB, create the file with action='write' "
"and empty content, then send the body via multiple 'append' calls of "
"~4KB each to stay within tool-call payload limits."
)
args_schema: type_[BaseModel] = DaytonaFileToolSchema
def _run(
self,
action: FileAction,
path: str,
content: str | None = None,
binary: bool = False,
recursive: bool = False,
mode: str = "0755",
) -> Any:
sandbox, should_delete = self._acquire_sandbox()
try:
if action == "read":
return self._read(sandbox, path, binary=binary)
if action == "write":
return self._write(sandbox, path, content or "", binary=binary)
if action == "append":
return self._append(sandbox, path, content or "", binary=binary)
if action == "list":
return self._list(sandbox, path)
if action == "delete":
sandbox.fs.delete_file(path, recursive=recursive)
return {"status": "deleted", "path": path}
if action == "mkdir":
sandbox.fs.create_folder(path, mode)
return {"status": "created", "path": path, "mode": mode}
if action == "info":
return self._info(sandbox, path)
raise ValueError(f"Unknown action: {action}")
finally:
self._release_sandbox(sandbox, should_delete)
def _read(self, sandbox: Any, path: str, *, binary: bool) -> dict[str, Any]:
data: bytes = sandbox.fs.download_file(path)
if binary:
return {
"path": path,
"encoding": "base64",
"content": base64.b64encode(data).decode("ascii"),
}
try:
return {"path": path, "encoding": "utf-8", "content": data.decode("utf-8")}
except UnicodeDecodeError:
return {
"path": path,
"encoding": "base64",
"content": base64.b64encode(data).decode("ascii"),
"note": "File was not valid utf-8; returned as base64.",
}
def _write(
self, sandbox: Any, path: str, content: str, *, binary: bool
) -> dict[str, Any]:
payload = base64.b64decode(content) if binary else content.encode("utf-8")
self._ensure_parent_dir(sandbox, path)
sandbox.fs.upload_file(payload, path)
return {"status": "written", "path": path, "bytes": len(payload)}
def _append(
self, sandbox: Any, path: str, content: str, *, binary: bool
) -> dict[str, Any]:
chunk = base64.b64decode(content) if binary else content.encode("utf-8")
self._ensure_parent_dir(sandbox, path)
try:
existing: bytes = sandbox.fs.download_file(path)
except Exception:
existing = b""
payload = existing + chunk
sandbox.fs.upload_file(payload, path)
return {
"status": "appended",
"path": path,
"appended_bytes": len(chunk),
"total_bytes": len(payload),
}
@staticmethod
def _ensure_parent_dir(sandbox: Any, path: str) -> None:
"""Make sure the parent directory of `path` exists.
Daytona's upload returns 400 if the parent directory is missing. We
best-effort mkdir the parent; any error (e.g. already exists) is
swallowed because `create_folder` is not idempotent on the server.
"""
parent = posixpath.dirname(path)
if not parent or parent in ("/", "."):
return
try:
sandbox.fs.create_folder(parent, "0755")
except Exception:
logger.debug(
"Best-effort parent-directory create failed for %s; "
"assuming it already exists and proceeding with the write.",
parent,
exc_info=True,
)
def _list(self, sandbox: Any, path: str) -> dict[str, Any]:
entries = sandbox.fs.list_files(path)
return {
"path": path,
"entries": [self._file_info_to_dict(entry) for entry in entries],
}
def _info(self, sandbox: Any, path: str) -> dict[str, Any]:
return self._file_info_to_dict(sandbox.fs.get_file_info(path))
@staticmethod
def _file_info_to_dict(info: Any) -> dict[str, Any]:
fields = (
"name",
"size",
"mode",
"permissions",
"is_dir",
"mod_time",
"owner",
"group",
)
return {field: getattr(info, field, None) for field in fields}

@@ -0,0 +1,82 @@
from __future__ import annotations
from builtins import type as type_
from typing import Any
from pydantic import BaseModel, Field
from crewai_tools.tools.daytona_sandbox_tool.daytona_base_tool import DaytonaBaseTool
class DaytonaPythonToolSchema(BaseModel):
code: str = Field(
...,
description="Python source to execute inside the sandbox.",
)
argv: list[str] | None = Field(
default=None,
description="Optional argv passed to the script (forwarded as params.argv).",
)
env: dict[str, str] | None = Field(
default=None,
description="Optional environment variables for the run (forwarded as params.env).",
)
timeout: int | None = Field(
default=None,
description="Maximum seconds to wait for the code to finish.",
)
class DaytonaPythonTool(DaytonaBaseTool):
"""Run Python source inside a Daytona sandbox."""
name: str = "Daytona Sandbox Python"
description: str = (
"Execute a block of Python code inside a Daytona sandbox and return the "
"exit code, captured stdout, and any produced artifacts. Use this for "
"data processing, quick scripts, or analysis that should run in an "
"isolated environment."
)
args_schema: type_[BaseModel] = DaytonaPythonToolSchema
def _run(
self,
code: str,
argv: list[str] | None = None,
env: dict[str, str] | None = None,
timeout: int | None = None,
) -> Any:
sandbox, should_delete = self._acquire_sandbox()
try:
params = self._build_code_run_params(argv=argv, env=env)
response = sandbox.process.code_run(code, params=params, timeout=timeout)
return {
"exit_code": getattr(response, "exit_code", None),
"result": getattr(response, "result", None),
"artifacts": getattr(response, "artifacts", None),
}
finally:
self._release_sandbox(sandbox, should_delete)
def _build_code_run_params(
self,
argv: list[str] | None,
env: dict[str, str] | None,
) -> Any | None:
if argv is None and env is None:
return None
try:
from daytona import CodeRunParams
except ImportError as exc:
raise ImportError(
"Could not import daytona.CodeRunParams while building "
"argv/env for sandbox.process.code_run. This usually means the "
"installed 'daytona' SDK is too old or incompatible. Upgrade "
"with: pip install -U 'crewai-tools[daytona]'"
) from exc
kwargs: dict[str, Any] = {}
if argv is not None:
kwargs["argv"] = argv
if env is not None:
kwargs["env"] = env
return CodeRunParams(**kwargs)
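# --- Usage sketch (illustrative, not part of the original file) -------------
# A hypothetical one-off run; the keys of the returned dict mirror _run above.
def _example_python_run() -> dict:
    tool = DaytonaPythonTool()
    return tool.run(
        code="print('hello from the sandbox')",
        env={"MODE": "demo"},  # forwarded as params.env
        timeout=60,
    )  # -> {"exit_code": ..., "result": ..., "artifacts": ...}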

View File

@@ -1,7 +1,17 @@
from collections.abc import Iterator
import logging
import os
import re
from typing import Any
try:
from typing import Self
except ImportError:
from typing_extensions import Self
from crewai.tools import BaseTool
from pydantic import BaseModel, Field
from pydantic import BaseModel, Field, model_validator
try:
@@ -12,6 +22,186 @@ try:
except ImportError:
SQLALCHEMY_AVAILABLE = False
logger = logging.getLogger(__name__)
# Commands allowed in read-only mode
# NOTE: WITH is intentionally excluded — writable CTEs start with WITH, so the
# CTE body must be inspected separately (see _validate_statement).
_READ_ONLY_COMMANDS = {"SELECT", "SHOW", "DESCRIBE", "DESC", "EXPLAIN"}
# Commands that mutate state and are blocked by default
_WRITE_COMMANDS = {
"INSERT",
"UPDATE",
"DELETE",
"DROP",
"ALTER",
"CREATE",
"TRUNCATE",
"GRANT",
"REVOKE",
"EXEC",
"EXECUTE",
"CALL",
"MERGE",
"REPLACE",
"UPSERT",
"LOAD",
"COPY",
"VACUUM",
"ANALYZE",
"ANALYSE",
"REINDEX",
"CLUSTER",
"REFRESH",
"COMMENT",
"SET",
"RESET",
}
# Subset of write commands that can realistically appear *inside* a CTE body.
# Narrower than _WRITE_COMMANDS to avoid false positives on identifiers like
# ``comment``, ``set``, or ``reset`` which are common column/table names.
_CTE_WRITE_INDICATORS = {
"INSERT",
"UPDATE",
"DELETE",
"DROP",
"ALTER",
"CREATE",
"TRUNCATE",
"MERGE",
}
_AS_PAREN_RE = re.compile(r"\bAS\s*\(", re.IGNORECASE)
def _iter_as_paren_matches(stmt: str) -> Iterator[re.Match[str]]:
"""Yield regex matches for ``AS\\s*(`` outside of string literals."""
# Build a set of character positions that are inside string literals.
in_string: set[int] = set()
i = 0
while i < len(stmt):
if stmt[i] == "'":
start = i
end = _skip_string_literal(stmt, i)
in_string.update(range(start, end))
i = end
else:
i += 1
for m in _AS_PAREN_RE.finditer(stmt):
if m.start() not in in_string:
yield m
def _detect_writable_cte(stmt: str) -> str | None:
"""Return the first write command inside a CTE body, or None.
Instead of tokenizing the whole statement (which falsely matches column
names like ``comment``), this walks through parenthesized CTE bodies and
checks only the *first keyword after* an opening ``AS (`` for a write
command. Uses a regex to handle any whitespace (spaces, tabs, newlines)
between ``AS`` and ``(``. Skips matches inside string literals.
"""
for m in _iter_as_paren_matches(stmt):
body = stmt[m.end() :].lstrip()
words = body.split()
first_word = words[0].upper().strip("()") if words else ""
if first_word in _CTE_WRITE_INDICATORS:
return first_word
return None
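# Illustrative behaviour (sketch):
#   _detect_writable_cte("WITH d AS (DELETE FROM t RETURNING *) SELECT * FROM d")
#       -> "DELETE"
#   _detect_writable_cte("WITH c AS (SELECT comment FROM posts) SELECT * FROM c")
#       -> None  (a column named 'comment' is not flagged as a write)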
def _skip_string_literal(stmt: str, pos: int) -> int:
"""Skip past a string literal starting at pos (single-quoted).
Handles escaped quotes ('') inside the literal.
Returns the index after the closing quote.
"""
quote_char = stmt[pos]
i = pos + 1
while i < len(stmt):
if stmt[i] == quote_char:
# Check for escaped quote ('')
if i + 1 < len(stmt) and stmt[i + 1] == quote_char:
i += 2
continue
return i + 1
i += 1
return i # Unterminated literal — return end
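# Illustrative behaviour (sketch): the escaped quote is skipped, and the
# returned index points just past the closing quote.
#   _skip_string_literal("'a''b' AND x", 0) -> 6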
def _find_matching_close_paren(stmt: str, start: int) -> int:
"""Find the matching close paren, skipping string literals."""
depth = 1
i = start
while i < len(stmt) and depth > 0:
ch = stmt[i]
if ch == "'":
i = _skip_string_literal(stmt, i)
continue
if ch == "(":
depth += 1
elif ch == ")":
depth -= 1
i += 1
return i
def _extract_main_query_after_cte(stmt: str) -> str | None:
"""Extract the main (outer) query that follows all CTE definitions.
For ``WITH cte AS (SELECT 1) DELETE FROM users``, returns ``DELETE FROM users``.
Returns None if no main query is found after the last CTE body.
Handles parentheses inside string literals (e.g., ``SELECT '(' FROM t``).
"""
last_cte_end = 0
for m in _iter_as_paren_matches(stmt):
last_cte_end = _find_matching_close_paren(stmt, m.end())
if last_cte_end > 0:
remainder = stmt[last_cte_end:].strip().lstrip(",").strip()
if remainder:
return remainder
return None
def _resolve_explain_command(stmt: str) -> str | None:
"""Resolve the underlying command from an EXPLAIN [ANALYZE] [VERBOSE] statement.
Returns the real command (e.g., 'DELETE') if ANALYZE is present, else None.
Handles both space-separated and parenthesized syntax.
"""
rest = stmt.strip()[len("EXPLAIN") :].strip()
if not rest:
return None
analyze_found = False
explain_opts = {"ANALYZE", "ANALYSE", "VERBOSE"}
if rest.startswith("("):
close = rest.find(")")
if close != -1:
options_str = rest[1:close].upper()
analyze_found = any(
opt.strip() in ("ANALYZE", "ANALYSE") for opt in options_str.split(",")
)
rest = rest[close + 1 :].strip()
else:
while rest:
first_opt = rest.split()[0].upper().rstrip(";") if rest.split() else ""
if first_opt in ("ANALYZE", "ANALYSE"):
analyze_found = True
if first_opt not in explain_opts:
break
rest = rest[len(first_opt) :].strip()
if analyze_found and rest:
return rest.split()[0].upper().rstrip(";")
return None
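# Illustrative behaviour (sketch):
#   _resolve_explain_command("EXPLAIN ANALYZE DELETE FROM users")            -> "DELETE"
#   _resolve_explain_command("EXPLAIN (ANALYZE, VERBOSE) DELETE FROM users") -> "DELETE"
#   _resolve_explain_command("EXPLAIN SELECT 1")                             -> None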
class NL2SQLToolInput(BaseModel):
sql_query: str = Field(
@@ -21,20 +211,70 @@ class NL2SQLToolInput(BaseModel):
class NL2SQLTool(BaseTool):
"""Tool that converts natural language to SQL and executes it against a database.
By default the tool operates in **read-only mode**: only SELECT, SHOW,
DESCRIBE, EXPLAIN, and read-only CTEs (WITH … SELECT) are permitted. Write
operations (INSERT, UPDATE, DELETE, DROP, ALTER, CREATE, TRUNCATE, …) are
blocked unless ``allow_dml=True`` is set explicitly or the environment
variable ``CREWAI_NL2SQL_ALLOW_DML=true`` is present.
Writable CTEs (``WITH d AS (DELETE …) SELECT …``) and
``EXPLAIN ANALYZE <write-stmt>`` are treated as write operations and are
blocked in read-only mode.
The ``_fetch_all_available_columns`` helper uses parameterised queries so
that table names coming from the database catalogue cannot be used as an
injection vector.
"""
name: str = "NL2SQLTool"
description: str = "Converts natural language to SQL queries and executes them."
description: str = (
"Converts natural language to SQL queries and executes them against a "
"database. Read-only by default — only SELECT/SHOW/DESCRIBE/EXPLAIN "
"queries (and read-only CTEs) are allowed unless configured with "
"allow_dml=True."
)
db_uri: str = Field(
title="Database URI",
description="The URI of the database to connect to.",
)
allow_dml: bool = Field(
default=False,
title="Allow DML",
description=(
"When False (default) only read statements are permitted. "
"Set to True to allow INSERT/UPDATE/DELETE/DROP and other "
"write operations."
),
)
tables: list[dict[str, Any]] = Field(default_factory=list)
columns: dict[str, list[dict[str, Any]] | str] = Field(default_factory=dict)
args_schema: type[BaseModel] = NL2SQLToolInput
@model_validator(mode="after")
def _apply_env_override(self) -> Self:
"""Allow CREWAI_NL2SQL_ALLOW_DML=true to override allow_dml at runtime."""
if os.environ.get("CREWAI_NL2SQL_ALLOW_DML", "").strip().lower() == "true":
if not self.allow_dml:
logger.warning(
"NL2SQLTool: CREWAI_NL2SQL_ALLOW_DML env var is set — "
"DML/DDL operations are enabled. Ensure this is intentional."
)
self.allow_dml = True
return self
def model_post_init(self, __context: Any) -> None:
if not SQLALCHEMY_AVAILABLE:
raise ImportError(
"sqlalchemy is not installed. Please install it with `pip install crewai-tools[sqlalchemy]`"
"sqlalchemy is not installed. Please install it with "
"`pip install crewai-tools[sqlalchemy]`"
)
if self.allow_dml:
logger.warning(
"NL2SQLTool: allow_dml=True — write operations (INSERT/UPDATE/"
"DELETE/DROP/…) are permitted. Use with caution."
)
data: dict[str, list[dict[str, Any]] | str] = {}
@@ -50,42 +290,216 @@ class NL2SQLTool(BaseTool):
self.tables = tables
self.columns = data
# ------------------------------------------------------------------
# Query validation
# ------------------------------------------------------------------
def _validate_query(self, sql_query: str) -> None:
"""Raise ValueError if *sql_query* is not permitted under the current config.
Splits the query on semicolons and validates each statement
independently. When ``allow_dml=False`` (the default), multi-statement
queries are rejected outright to prevent ``SELECT 1; DROP TABLE users``
style bypasses. When ``allow_dml=True`` every statement is checked and
a warning is emitted for write operations.
"""
statements = [s.strip() for s in sql_query.split(";") if s.strip()]
if not statements:
raise ValueError("NL2SQLTool received an empty SQL query.")
if not self.allow_dml and len(statements) > 1:
raise ValueError(
"NL2SQLTool blocked a multi-statement query in read-only mode. "
"Semicolons are not permitted when allow_dml=False."
)
for stmt in statements:
self._validate_statement(stmt)
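# Illustrative behaviour under the read-only default (sketch):
#   _validate_query("SELECT 1;")                   # ok — trailing semicolon only
#   _validate_query("SELECT 1; DROP TABLE users")  # ValueError: multi-statement
#   _validate_query("DROP TABLE users")            # ValueError: write command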
def _validate_statement(self, stmt: str) -> None:
"""Validate a single SQL statement (no semicolons)."""
command = self._extract_command(stmt)
# EXPLAIN ANALYZE / EXPLAIN ANALYSE actually *executes* the underlying
# query. Resolve the real command so write operations are caught.
# Handles both space-separated ("EXPLAIN ANALYZE DELETE …") and
# parenthesized ("EXPLAIN (ANALYZE) DELETE …", "EXPLAIN (ANALYZE, VERBOSE) DELETE …").
if command == "EXPLAIN":
resolved = _resolve_explain_command(stmt)
if resolved:
command = resolved
# WITH starts a CTE. Read-only CTEs are fine; writable CTEs
# (e.g. WITH d AS (DELETE …) SELECT …) must be blocked in read-only mode.
if command == "WITH":
# Check for write commands inside CTE bodies.
write_found = _detect_writable_cte(stmt)
if write_found:
found = write_found
if not self.allow_dml:
raise ValueError(
f"NL2SQLTool is configured in read-only mode and blocked a "
f"writable CTE containing a '{found}' statement. To allow "
f"write operations set allow_dml=True or "
f"CREWAI_NL2SQL_ALLOW_DML=true."
)
logger.warning(
"NL2SQLTool: executing writable CTE with '%s' because allow_dml=True.",
found,
)
return
# Check the main query after the CTE definitions.
main_query = _extract_main_query_after_cte(stmt)
if main_query:
main_cmd = main_query.split()[0].upper().rstrip(";")
if main_cmd in _WRITE_COMMANDS:
if not self.allow_dml:
raise ValueError(
f"NL2SQLTool is configured in read-only mode and blocked a "
f"'{main_cmd}' statement after a CTE. To allow write "
f"operations set allow_dml=True or "
f"CREWAI_NL2SQL_ALLOW_DML=true."
)
logger.warning(
"NL2SQLTool: executing '%s' after CTE because allow_dml=True.",
main_cmd,
)
elif main_cmd not in _READ_ONLY_COMMANDS:
if not self.allow_dml:
raise ValueError(
f"NL2SQLTool blocked an unrecognised SQL command '{main_cmd}' "
f"after a CTE. Only {sorted(_READ_ONLY_COMMANDS)} are allowed "
f"in read-only mode."
)
return
if command in _WRITE_COMMANDS:
if not self.allow_dml:
raise ValueError(
f"NL2SQLTool is configured in read-only mode and blocked a "
f"'{command}' statement. To allow write operations set "
f"allow_dml=True or CREWAI_NL2SQL_ALLOW_DML=true."
)
logger.warning(
"NL2SQLTool: executing write statement '%s' because allow_dml=True.",
command,
)
elif command not in _READ_ONLY_COMMANDS:
# Unknown command — block by default unless DML is explicitly enabled
if not self.allow_dml:
raise ValueError(
f"NL2SQLTool blocked an unrecognised SQL command '{command}'. "
f"Only {sorted(_READ_ONLY_COMMANDS)} are allowed in read-only "
f"mode."
)
@staticmethod
def _extract_command(sql_query: str) -> str:
"""Return the uppercased first keyword of *sql_query*."""
stripped = sql_query.strip().lstrip("(")
first_token = stripped.split()[0] if stripped.split() else ""
return first_token.upper().rstrip(";")
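# Illustrative behaviour (sketch):
#   _extract_command("  select * from t;")          -> "SELECT"
#   _extract_command("(SELECT 1) UNION (SELECT 2)") -> "SELECT"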
# ------------------------------------------------------------------
# Schema introspection helpers
# ------------------------------------------------------------------
def _fetch_available_tables(self) -> list[dict[str, Any]] | str:
return self.execute_sql(
"SELECT table_name FROM information_schema.tables WHERE table_schema = 'public';"
"SELECT table_name FROM information_schema.tables "
"WHERE table_schema = 'public';"
)
def _fetch_all_available_columns(
self, table_name: str
) -> list[dict[str, Any]] | str:
"""Fetch columns for *table_name* using a parameterised query.
The table name is bound via SQLAlchemy's ``:param`` syntax to prevent
SQL injection from catalogue values.
"""
return self.execute_sql(
f"SELECT column_name, data_type FROM information_schema.columns WHERE table_name = '{table_name}';" # noqa: S608
"SELECT column_name, data_type FROM information_schema.columns "
"WHERE table_name = :table_name",
params={"table_name": table_name},
)
# ------------------------------------------------------------------
# Core execution
# ------------------------------------------------------------------
def _run(self, sql_query: str) -> list[dict[str, Any]] | str:
try:
self._validate_query(sql_query)
data = self.execute_sql(sql_query)
except ValueError:
raise
except Exception as exc:
data = (
f"Based on these tables {self.tables} and columns {self.columns}, "
"you can create SQL queries to retrieve data from the database."
f"Get the original request {sql_query} and the error {exc} and create the correct SQL query."
"you can create SQL queries to retrieve data from the database. "
f"Get the original request {sql_query} and the error {exc} and "
"create the correct SQL query."
)
return data
def execute_sql(
self,
sql_query: str,
params: dict[str, Any] | None = None,
) -> list[dict[str, Any]] | str:
"""Execute *sql_query* and return the results as a list of dicts.
Parameters
----------
sql_query:
The SQL statement to run.
params:
Optional mapping of bind parameters (e.g. ``{"table_name": "users"}``).
"""
if not SQLALCHEMY_AVAILABLE:
raise ImportError(
"sqlalchemy is not installed. Please install it with `pip install crewai-tools[sqlalchemy]`"
"sqlalchemy is not installed. Please install it with "
"`pip install crewai-tools[sqlalchemy]`"
)
# Check ALL statements so that e.g. "SELECT 1; DROP TABLE t" triggers a
# commit when allow_dml=True, regardless of statement order.
_stmts = [s.strip() for s in sql_query.split(";") if s.strip()]
def _is_write_stmt(s: str) -> bool:
cmd = self._extract_command(s)
if cmd in _WRITE_COMMANDS:
return True
if cmd == "EXPLAIN":
# Resolve the underlying command for EXPLAIN ANALYZE
resolved = _resolve_explain_command(s)
if resolved and resolved in _WRITE_COMMANDS:
return True
if cmd == "WITH":
if _detect_writable_cte(s):
return True
main_q = _extract_main_query_after_cte(s)
if main_q:
return main_q.split()[0].upper().rstrip(";") in _WRITE_COMMANDS
return False
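# Illustrative behaviour (sketch):
#   _is_write_stmt("EXPLAIN ANALYZE DELETE FROM t") -> True
#   _is_write_stmt("WITH c AS (SELECT 1) SELECT 2") -> False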
is_write = any(_is_write_stmt(s) for s in _stmts)
engine = create_engine(self.db_uri)
Session = sessionmaker(bind=engine) # noqa: N806
session = Session()
try:
result = session.execute(text(sql_query), params or {})
# Only commit when the operation actually mutates state
if self.allow_dml and is_write:
session.commit()
if result.returns_rows: # type: ignore[attr-defined]
columns = result.keys()

View File

@@ -0,0 +1,671 @@
"""Security tests for NL2SQLTool.
Uses an in-memory SQLite database so no external service is needed.
SQLite does not have information_schema, so we patch the schema-introspection
helpers to avoid bootstrap failures and focus purely on the security logic.
"""
import os
from unittest.mock import MagicMock, patch
import pytest
# Skip the entire module if SQLAlchemy is not installed
pytest.importorskip("sqlalchemy")
from sqlalchemy import create_engine, text # noqa: E402
from crewai_tools.tools.nl2sql.nl2sql_tool import NL2SQLTool # noqa: E402
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
SQLITE_URI = "sqlite://" # in-memory
def _make_tool(allow_dml: bool = False, **kwargs) -> NL2SQLTool:
"""Return a NL2SQLTool wired to an in-memory SQLite DB.
Schema-introspection is patched out so we can create the tool without a
real PostgreSQL information_schema.
"""
with (
patch.object(NL2SQLTool, "_fetch_available_tables", return_value=[]),
patch.object(NL2SQLTool, "_fetch_all_available_columns", return_value=[]),
):
return NL2SQLTool(db_uri=SQLITE_URI, allow_dml=allow_dml, **kwargs)
# ---------------------------------------------------------------------------
# Read-only enforcement (allow_dml=False)
# ---------------------------------------------------------------------------
class TestReadOnlyMode:
def test_select_allowed_by_default(self):
tool = _make_tool()
# SQLite supports SELECT without information_schema
result = tool.execute_sql("SELECT 1 AS val")
assert result == [{"val": 1}]
@pytest.mark.parametrize(
"stmt",
[
"INSERT INTO t VALUES (1)",
"UPDATE t SET col = 1",
"DELETE FROM t",
"DROP TABLE t",
"ALTER TABLE t ADD col TEXT",
"CREATE TABLE t (id INTEGER)",
"TRUNCATE TABLE t",
"GRANT SELECT ON t TO user1",
"REVOKE SELECT ON t FROM user1",
"EXEC sp_something",
"EXECUTE sp_something",
"CALL proc()",
],
)
def test_write_statements_blocked_by_default(self, stmt: str):
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query(stmt)
def test_explain_allowed(self):
tool = _make_tool()
# Should not raise
tool._validate_query("EXPLAIN SELECT 1")
def test_read_only_cte_allowed(self):
tool = _make_tool()
tool._validate_query("WITH cte AS (SELECT 1) SELECT * FROM cte")
def test_show_allowed(self):
tool = _make_tool()
tool._validate_query("SHOW TABLES")
def test_describe_allowed(self):
tool = _make_tool()
tool._validate_query("DESCRIBE users")
# ---------------------------------------------------------------------------
# DML enabled (allow_dml=True)
# ---------------------------------------------------------------------------
class TestDMLEnabled:
def test_insert_allowed_when_dml_enabled(self):
tool = _make_tool(allow_dml=True)
# Should not raise
tool._validate_query("INSERT INTO t VALUES (1)")
def test_delete_allowed_when_dml_enabled(self):
tool = _make_tool(allow_dml=True)
tool._validate_query("DELETE FROM t WHERE id = 1")
def test_drop_allowed_when_dml_enabled(self):
tool = _make_tool(allow_dml=True)
tool._validate_query("DROP TABLE t")
def test_dml_actually_persists(self):
"""End-to-end: INSERT commits when allow_dml=True."""
# Use a file-based SQLite so we can verify persistence across sessions
import tempfile  # `os` is already imported at module level
with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as f:
db_path = f.name
uri = f"sqlite:///{db_path}"
try:
tool = _make_tool(allow_dml=True)
tool.db_uri = uri
engine = create_engine(uri)
with engine.connect() as conn:
conn.execute(text("CREATE TABLE items (id INTEGER PRIMARY KEY)"))
conn.commit()
tool.execute_sql("INSERT INTO items VALUES (42)")
with engine.connect() as conn:
rows = conn.execute(text("SELECT id FROM items")).fetchall()
assert (42,) in rows
finally:
os.unlink(db_path)
# ---------------------------------------------------------------------------
# Parameterised query — SQL injection prevention
# ---------------------------------------------------------------------------
class TestParameterisedQueries:
def test_table_name_is_parameterised(self):
"""_fetch_all_available_columns must not interpolate table_name into SQL."""
tool = _make_tool()
captured_calls = []
def recording_execute_sql(self_inner, sql_query, params=None):
captured_calls.append((sql_query, params))
return []
with patch.object(NL2SQLTool, "execute_sql", recording_execute_sql):
tool._fetch_all_available_columns("users'; DROP TABLE users; --")
assert len(captured_calls) == 1
sql, params = captured_calls[0]
# The raw SQL must NOT contain the injected string
assert "DROP" not in sql
# The table name must be passed as a parameter
assert params is not None
assert params.get("table_name") == "users'; DROP TABLE users; --"
# The SQL template must use the :param syntax
assert ":table_name" in sql
def test_injection_string_not_in_sql_template(self):
"""The f-string vulnerability is gone — table name never lands in the SQL."""
tool = _make_tool()
injection = "'; DROP TABLE users; --"
captured = {}
def spy(self_inner, sql_query, params=None):
captured["sql"] = sql_query
captured["params"] = params
return []
with patch.object(NL2SQLTool, "execute_sql", spy):
tool._fetch_all_available_columns(injection)
assert injection not in captured["sql"]
assert captured["params"]["table_name"] == injection
# ---------------------------------------------------------------------------
# session.commit() not called for read-only queries
# ---------------------------------------------------------------------------
class TestNoCommitForReadOnly:
def test_select_does_not_commit(self):
tool = _make_tool(allow_dml=False)
mock_session = MagicMock()
mock_result = MagicMock()
mock_result.returns_rows = True
mock_result.keys.return_value = ["val"]
mock_result.fetchall.return_value = [(1,)]
mock_session.execute.return_value = mock_result
mock_session_cls = MagicMock(return_value=mock_session)
with (
patch("crewai_tools.tools.nl2sql.nl2sql_tool.create_engine"),
patch(
"crewai_tools.tools.nl2sql.nl2sql_tool.sessionmaker",
return_value=mock_session_cls,
),
):
tool.execute_sql("SELECT 1")
mock_session.commit.assert_not_called()
def test_write_with_dml_enabled_does_commit(self):
tool = _make_tool(allow_dml=True)
mock_session = MagicMock()
mock_result = MagicMock()
mock_result.returns_rows = False
mock_session.execute.return_value = mock_result
mock_session_cls = MagicMock(return_value=mock_session)
with (
patch("crewai_tools.tools.nl2sql.nl2sql_tool.create_engine"),
patch(
"crewai_tools.tools.nl2sql.nl2sql_tool.sessionmaker",
return_value=mock_session_cls,
),
):
tool.execute_sql("INSERT INTO t VALUES (1)")
mock_session.commit.assert_called_once()
# ---------------------------------------------------------------------------
# Environment-variable escape hatch
# ---------------------------------------------------------------------------
class TestEnvVarEscapeHatch:
def test_env_var_enables_dml(self):
with patch.dict(os.environ, {"CREWAI_NL2SQL_ALLOW_DML": "true"}):
tool = _make_tool(allow_dml=False)
assert tool.allow_dml is True
def test_env_var_case_insensitive(self):
with patch.dict(os.environ, {"CREWAI_NL2SQL_ALLOW_DML": "TRUE"}):
tool = _make_tool(allow_dml=False)
assert tool.allow_dml is True
def test_env_var_absent_keeps_default(self):
env = {k: v for k, v in os.environ.items() if k != "CREWAI_NL2SQL_ALLOW_DML"}
with patch.dict(os.environ, env, clear=True):
tool = _make_tool(allow_dml=False)
assert tool.allow_dml is False
def test_env_var_false_does_not_enable_dml(self):
with patch.dict(os.environ, {"CREWAI_NL2SQL_ALLOW_DML": "false"}):
tool = _make_tool(allow_dml=False)
assert tool.allow_dml is False
def test_dml_write_blocked_without_env_var(self):
env = {k: v for k, v in os.environ.items() if k != "CREWAI_NL2SQL_ALLOW_DML"}
with patch.dict(os.environ, env, clear=True):
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query("DROP TABLE sensitive_data")
# ---------------------------------------------------------------------------
# _run() propagates ValueError from _validate_query
# ---------------------------------------------------------------------------
class TestRunValidation:
def test_run_raises_on_blocked_query(self):
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._run("DELETE FROM users")
def test_run_returns_results_for_select(self):
tool = _make_tool(allow_dml=False)
result = tool._run("SELECT 1 AS n")
assert result == [{"n": 1}]
# ---------------------------------------------------------------------------
# Multi-statement / semicolon injection prevention
# ---------------------------------------------------------------------------
class TestSemicolonInjection:
def test_multi_statement_blocked_in_read_only_mode(self):
"""SELECT 1; DROP TABLE users must be rejected when allow_dml=False."""
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="multi-statement"):
tool._validate_query("SELECT 1; DROP TABLE users")
def test_multi_statement_blocked_even_with_only_selects(self):
"""Two SELECT statements are still rejected in read-only mode."""
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="multi-statement"):
tool._validate_query("SELECT 1; SELECT 2")
def test_trailing_semicolon_allowed_single_statement(self):
"""A single statement with a trailing semicolon should pass."""
tool = _make_tool(allow_dml=False)
# Should not raise — the part after the semicolon is empty
tool._validate_query("SELECT 1;")
def test_multi_statement_allowed_when_dml_enabled(self):
"""Multiple statements are permitted when allow_dml=True."""
tool = _make_tool(allow_dml=True)
# Should not raise
tool._validate_query("SELECT 1; INSERT INTO t VALUES (1)")
def test_multi_statement_write_still_blocked_individually(self):
"""Even with allow_dml=False, a single write statement is blocked."""
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query("DROP TABLE users")
# ---------------------------------------------------------------------------
# Writable CTEs (WITH … DELETE/INSERT/UPDATE)
# ---------------------------------------------------------------------------
class TestWritableCTE:
def test_writable_cte_delete_blocked_in_read_only(self):
"""WITH d AS (DELETE FROM users RETURNING *) SELECT * FROM d — blocked."""
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query(
"WITH deleted AS (DELETE FROM users RETURNING *) SELECT * FROM deleted"
)
def test_writable_cte_insert_blocked_in_read_only(self):
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query(
"WITH ins AS (INSERT INTO t VALUES (1) RETURNING id) SELECT * FROM ins"
)
def test_writable_cte_update_blocked_in_read_only(self):
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query(
"WITH upd AS (UPDATE t SET x=1 RETURNING id) SELECT * FROM upd"
)
def test_writable_cte_allowed_when_dml_enabled(self):
tool = _make_tool(allow_dml=True)
# Should not raise
tool._validate_query(
"WITH deleted AS (DELETE FROM users RETURNING *) SELECT * FROM deleted"
)
def test_plain_read_only_cte_still_allowed(self):
tool = _make_tool(allow_dml=False)
# No write commands in the CTE body — must pass
tool._validate_query("WITH cte AS (SELECT id FROM users) SELECT * FROM cte")
def test_cte_with_comment_column_not_false_positive(self):
"""Column named 'comment' should NOT trigger writable CTE detection."""
tool = _make_tool(allow_dml=False)
# 'comment' is a column name, not a SQL command
tool._validate_query(
"WITH cte AS (SELECT comment FROM posts) SELECT * FROM cte"
)
def test_cte_with_set_column_not_false_positive(self):
"""Column named 'set' should NOT trigger writable CTE detection."""
tool = _make_tool(allow_dml=False)
tool._validate_query(
"WITH cte AS (SELECT set, reset FROM config) SELECT * FROM cte"
)
def test_cte_with_write_main_query_blocked(self):
"""WITH cte AS (SELECT 1) DELETE FROM users — main query must be caught."""
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query(
"WITH cte AS (SELECT 1) DELETE FROM users"
)
def test_cte_with_write_main_query_allowed_with_dml(self):
"""Main query write after CTE should pass when allow_dml=True."""
tool = _make_tool(allow_dml=True)
tool._validate_query(
"WITH cte AS (SELECT id FROM users) INSERT INTO archive SELECT * FROM cte"
)
def test_cte_with_newline_before_paren_blocked(self):
"""AS followed by newline then ( should still detect writable CTE."""
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query(
"WITH cte AS\n(DELETE FROM users RETURNING *) SELECT * FROM cte"
)
def test_cte_with_tab_before_paren_blocked(self):
"""AS followed by tab then ( should still detect writable CTE."""
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query(
"WITH cte AS\t(DELETE FROM users RETURNING *) SELECT * FROM cte"
)
# ---------------------------------------------------------------------------
# EXPLAIN ANALYZE executes the underlying query
# ---------------------------------------------------------------------------
class TestExplainAnalyze:
def test_explain_analyze_delete_blocked_in_read_only(self):
"""EXPLAIN ANALYZE DELETE actually runs the delete — block it."""
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query("EXPLAIN ANALYZE DELETE FROM users")
def test_explain_analyse_delete_blocked_in_read_only(self):
"""British spelling ANALYSE is also caught."""
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query("EXPLAIN ANALYSE DELETE FROM users")
def test_explain_analyze_drop_blocked_in_read_only(self):
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query("EXPLAIN ANALYZE DROP TABLE users")
def test_explain_analyze_select_allowed_in_read_only(self):
"""EXPLAIN ANALYZE on a SELECT is safe — must be permitted."""
tool = _make_tool(allow_dml=False)
tool._validate_query("EXPLAIN ANALYZE SELECT * FROM users")
def test_explain_without_analyze_allowed(self):
tool = _make_tool(allow_dml=False)
tool._validate_query("EXPLAIN SELECT * FROM users")
def test_explain_analyze_delete_allowed_when_dml_enabled(self):
tool = _make_tool(allow_dml=True)
tool._validate_query("EXPLAIN ANALYZE DELETE FROM users")
def test_explain_paren_analyze_delete_blocked_in_read_only(self):
"""EXPLAIN (ANALYZE) DELETE actually runs the delete — block it."""
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query("EXPLAIN (ANALYZE) DELETE FROM users")
def test_explain_paren_analyze_verbose_delete_blocked_in_read_only(self):
"""EXPLAIN (ANALYZE, VERBOSE) DELETE actually runs the delete — block it."""
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query("EXPLAIN (ANALYZE, VERBOSE) DELETE FROM users")
def test_explain_paren_verbose_select_allowed_in_read_only(self):
"""EXPLAIN (VERBOSE) SELECT is safe — no ANALYZE means no execution."""
tool = _make_tool(allow_dml=False)
tool._validate_query("EXPLAIN (VERBOSE) SELECT * FROM users")
# ---------------------------------------------------------------------------
# Multi-statement commit covers ALL statements (not just the first)
# ---------------------------------------------------------------------------
class TestMultiStatementCommit:
def test_select_then_insert_triggers_commit(self):
"""SELECT 1; INSERT … — commit must happen because INSERT is a write."""
tool = _make_tool(allow_dml=True)
mock_session = MagicMock()
mock_result = MagicMock()
mock_result.returns_rows = False
mock_session.execute.return_value = mock_result
mock_session_cls = MagicMock(return_value=mock_session)
with (
patch("crewai_tools.tools.nl2sql.nl2sql_tool.create_engine"),
patch(
"crewai_tools.tools.nl2sql.nl2sql_tool.sessionmaker",
return_value=mock_session_cls,
),
):
tool.execute_sql("SELECT 1; INSERT INTO t VALUES (1)")
mock_session.commit.assert_called_once()
def test_select_only_multi_statement_does_not_commit(self):
"""Two SELECTs must not trigger a commit even when allow_dml=True."""
tool = _make_tool(allow_dml=True)
mock_session = MagicMock()
mock_result = MagicMock()
mock_result.returns_rows = True
mock_result.keys.return_value = ["v"]
mock_result.fetchall.return_value = [(1,)]
mock_session.execute.return_value = mock_result
mock_session_cls = MagicMock(return_value=mock_session)
with (
patch("crewai_tools.tools.nl2sql.nl2sql_tool.create_engine"),
patch(
"crewai_tools.tools.nl2sql.nl2sql_tool.sessionmaker",
return_value=mock_session_cls,
),
):
tool.execute_sql("SELECT 1; SELECT 2")
def test_writable_cte_triggers_commit(self):
"""WITH d AS (DELETE ...) must trigger commit when allow_dml=True."""
tool = _make_tool(allow_dml=True)
mock_session = MagicMock()
mock_result = MagicMock()
mock_result.returns_rows = True
mock_result.keys.return_value = ["id"]
mock_result.fetchall.return_value = [(1,)]
mock_session.execute.return_value = mock_result
mock_session_cls = MagicMock(return_value=mock_session)
with (
patch("crewai_tools.tools.nl2sql.nl2sql_tool.create_engine"),
patch(
"crewai_tools.tools.nl2sql.nl2sql_tool.sessionmaker",
return_value=mock_session_cls,
),
):
tool.execute_sql(
"WITH d AS (DELETE FROM users RETURNING *) SELECT * FROM d"
)
mock_session.commit.assert_called_once()
# ---------------------------------------------------------------------------
# Extended _WRITE_COMMANDS coverage
# ---------------------------------------------------------------------------
class TestExtendedWriteCommands:
@pytest.mark.parametrize(
"stmt",
[
"UPSERT INTO t VALUES (1)",
"LOAD DATA INFILE 'f.csv' INTO TABLE t",
"COPY t FROM '/tmp/f.csv'",
"VACUUM ANALYZE t",
"ANALYZE t",
"ANALYSE t",
"REINDEX TABLE t",
"CLUSTER t USING idx",
"REFRESH MATERIALIZED VIEW v",
"COMMENT ON TABLE t IS 'desc'",
"SET search_path = myschema",
"RESET search_path",
],
)
def test_extended_write_commands_blocked_by_default(self, stmt: str):
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query(stmt)
# ---------------------------------------------------------------------------
# EXPLAIN ANALYZE VERBOSE handling
# ---------------------------------------------------------------------------
class TestExplainAnalyzeVerbose:
def test_explain_analyze_verbose_select_allowed(self):
"""EXPLAIN ANALYZE VERBOSE SELECT should be allowed (read-only)."""
tool = _make_tool(allow_dml=False)
tool._validate_query("EXPLAIN ANALYZE VERBOSE SELECT * FROM users")
def test_explain_analyze_verbose_delete_blocked(self):
"""EXPLAIN ANALYZE VERBOSE DELETE should be blocked."""
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query("EXPLAIN ANALYZE VERBOSE DELETE FROM users")
def test_explain_verbose_select_allowed(self):
"""EXPLAIN VERBOSE SELECT (no ANALYZE) should be allowed."""
tool = _make_tool(allow_dml=False)
tool._validate_query("EXPLAIN VERBOSE SELECT * FROM users")
# ---------------------------------------------------------------------------
# CTE with string literal parens
# ---------------------------------------------------------------------------
class TestCTEStringLiteralParens:
def test_cte_string_paren_does_not_bypass(self):
"""Parens inside string literals should not confuse the paren walker."""
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query(
"WITH cte AS (SELECT '(' FROM t) DELETE FROM users"
)
def test_cte_string_paren_read_only_allowed(self):
"""Read-only CTE with string literal parens should be allowed."""
tool = _make_tool(allow_dml=False)
tool._validate_query(
"WITH cte AS (SELECT '(' FROM t) SELECT * FROM cte"
)
# ---------------------------------------------------------------------------
# EXPLAIN ANALYZE commit logic
# ---------------------------------------------------------------------------
class TestExplainAnalyzeCommit:
def test_explain_analyze_delete_triggers_commit(self):
"""EXPLAIN ANALYZE DELETE should trigger commit when allow_dml=True."""
tool = _make_tool(allow_dml=True)
mock_session = MagicMock()
mock_result = MagicMock()
mock_result.returns_rows = True
mock_result.keys.return_value = ["QUERY PLAN"]
mock_result.fetchall.return_value = [("Delete on users",)]
mock_session.execute.return_value = mock_result
mock_session_cls = MagicMock(return_value=mock_session)
with (
patch("crewai_tools.tools.nl2sql.nl2sql_tool.create_engine"),
patch(
"crewai_tools.tools.nl2sql.nl2sql_tool.sessionmaker",
return_value=mock_session_cls,
),
):
tool.execute_sql("EXPLAIN ANALYZE DELETE FROM users")
mock_session.commit.assert_called_once()
# ---------------------------------------------------------------------------
# AS( inside string literals must not confuse CTE detection
# ---------------------------------------------------------------------------
class TestCTEStringLiteralAS:
def test_as_paren_inside_string_does_not_bypass(self):
"""'AS (' inside a string literal must not be treated as a CTE body."""
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="read-only mode"):
tool._validate_query(
"WITH cte AS (SELECT 'AS (' FROM t) DELETE FROM users"
)
def test_as_paren_inside_string_read_only_ok(self):
"""Read-only CTE with 'AS (' in a string should be allowed."""
tool = _make_tool(allow_dml=False)
tool._validate_query(
"WITH cte AS (SELECT 'AS (' FROM t) SELECT * FROM cte"
)
# ---------------------------------------------------------------------------
# Unknown command after CTE should be blocked
# ---------------------------------------------------------------------------
class TestCTEUnknownCommand:
def test_unknown_command_after_cte_blocked(self):
"""WITH cte AS (SELECT 1) FOOBAR should be blocked as unknown."""
tool = _make_tool(allow_dml=False)
with pytest.raises(ValueError, match="unrecognised"):
tool._validate_query("WITH cte AS (SELECT 1) FOOBAR")

View File

@@ -6976,6 +6976,634 @@
"type": "object"
}
},
{
"description": "Execute a shell command inside a Daytona sandbox and return the exit code and combined output. Use this to run builds, package installs, git operations, or any one-off shell command.",
"env_vars": [
{
"default": null,
"description": "API key for Daytona sandbox service",
"name": "DAYTONA_API_KEY",
"required": false
},
{
"default": null,
"description": "Daytona API base URL (optional)",
"name": "DAYTONA_API_URL",
"required": false
},
{
"default": null,
"description": "Daytona target region (optional)",
"name": "DAYTONA_TARGET",
"required": false
}
],
"humanized_name": "Daytona Sandbox Exec",
"init_params_schema": {
"$defs": {
"EnvVar": {
"properties": {
"default": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Default"
},
"description": {
"title": "Description",
"type": "string"
},
"name": {
"title": "Name",
"type": "string"
},
"required": {
"default": true,
"title": "Required",
"type": "boolean"
}
},
"required": [
"name",
"description"
],
"title": "EnvVar",
"type": "object"
}
},
"description": "Run a shell command inside a Daytona sandbox.",
"properties": {
"api_key": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"description": "Daytona API key. Falls back to DAYTONA_API_KEY env var.",
"required": false,
"title": "Api Key"
},
"api_url": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"description": "Daytona API URL override. Falls back to DAYTONA_API_URL env var.",
"required": false,
"title": "Api Url"
},
"create_params": {
"anyOf": [
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"description": "Optional kwargs forwarded to CreateSandboxFromSnapshotParams when creating a sandbox (e.g. language, snapshot, env_vars, labels).",
"title": "Create Params"
},
"persistent": {
"default": false,
"description": "If True, reuse one sandbox across all calls to this tool instance and delete it at process exit. Default False creates and deletes a fresh sandbox per call.",
"title": "Persistent",
"type": "boolean"
},
"sandbox_id": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Attach to an existing sandbox by id or name instead of creating a new one. The tool will never delete a sandbox it did not create.",
"title": "Sandbox Id"
},
"sandbox_timeout": {
"default": 60.0,
"description": "Timeout in seconds for sandbox create/delete operations.",
"title": "Sandbox Timeout",
"type": "number"
},
"target": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"description": "Daytona target region. Falls back to DAYTONA_TARGET env var.",
"required": false,
"title": "Target"
}
},
"required": [],
"title": "DaytonaExecTool",
"type": "object"
},
"name": "DaytonaExecTool",
"package_dependencies": [
"daytona"
],
"run_params_schema": {
"properties": {
"command": {
"description": "Shell command to execute in the sandbox.",
"title": "Command",
"type": "string"
},
"cwd": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Working directory to run the command in. Defaults to the sandbox work dir.",
"title": "Cwd"
},
"env": {
"anyOf": [
{
"additionalProperties": {
"type": "string"
},
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"description": "Optional environment variables to set for this command.",
"title": "Env"
},
"timeout": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "Maximum seconds to wait for the command to finish.",
"title": "Timeout"
}
},
"required": [
"command"
],
"title": "DaytonaExecToolSchema",
"type": "object"
}
},
{
"description": "Perform filesystem operations inside a Daytona sandbox: read a file, write content to a path, append content to an existing file, list a directory, delete a path, make a directory, or fetch file metadata. For files larger than a few KB, create the file with action='write' and empty content, then send the body via multiple 'append' calls of ~4KB each to stay within tool-call payload limits.",
"env_vars": [
{
"default": null,
"description": "API key for Daytona sandbox service",
"name": "DAYTONA_API_KEY",
"required": false
},
{
"default": null,
"description": "Daytona API base URL (optional)",
"name": "DAYTONA_API_URL",
"required": false
},
{
"default": null,
"description": "Daytona target region (optional)",
"name": "DAYTONA_TARGET",
"required": false
}
],
"humanized_name": "Daytona Sandbox Files",
"init_params_schema": {
"$defs": {
"EnvVar": {
"properties": {
"default": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Default"
},
"description": {
"title": "Description",
"type": "string"
},
"name": {
"title": "Name",
"type": "string"
},
"required": {
"default": true,
"title": "Required",
"type": "boolean"
}
},
"required": [
"name",
"description"
],
"title": "EnvVar",
"type": "object"
}
},
"description": "Read, write, and manage files inside a Daytona sandbox.\n\nNotes:\n - Most useful with `persistent=True` or an explicit `sandbox_id`. With the\n default ephemeral mode, files disappear when this tool call finishes.",
"properties": {
"api_key": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"description": "Daytona API key. Falls back to DAYTONA_API_KEY env var.",
"required": false,
"title": "Api Key"
},
"api_url": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"description": "Daytona API URL override. Falls back to DAYTONA_API_URL env var.",
"required": false,
"title": "Api Url"
},
"create_params": {
"anyOf": [
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"description": "Optional kwargs forwarded to CreateSandboxFromSnapshotParams when creating a sandbox (e.g. language, snapshot, env_vars, labels).",
"title": "Create Params"
},
"persistent": {
"default": false,
"description": "If True, reuse one sandbox across all calls to this tool instance and delete it at process exit. Default False creates and deletes a fresh sandbox per call.",
"title": "Persistent",
"type": "boolean"
},
"sandbox_id": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Attach to an existing sandbox by id or name instead of creating a new one. The tool will never delete a sandbox it did not create.",
"title": "Sandbox Id"
},
"sandbox_timeout": {
"default": 60.0,
"description": "Timeout in seconds for sandbox create/delete operations.",
"title": "Sandbox Timeout",
"type": "number"
},
"target": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"description": "Daytona target region. Falls back to DAYTONA_TARGET env var.",
"required": false,
"title": "Target"
}
},
"required": [],
"title": "DaytonaFileTool",
"type": "object"
},
"name": "DaytonaFileTool",
"package_dependencies": [
"daytona"
],
"run_params_schema": {
"properties": {
"action": {
"description": "The filesystem action to perform: 'read' (returns file contents), 'write' (create or replace a file with content), 'append' (append content to an existing file \u2014 use this for writing large files in chunks to avoid hitting tool-call size limits), 'list' (lists a directory), 'delete' (removes a file/dir), 'mkdir' (creates a directory), 'info' (returns file metadata).",
"enum": [
"read",
"write",
"append",
"list",
"delete",
"mkdir",
"info"
],
"title": "Action",
"type": "string"
},
"binary": {
"default": false,
"description": "For 'write': treat content as base64 and upload raw bytes. For 'read': return contents as base64 instead of decoded utf-8.",
"title": "Binary",
"type": "boolean"
},
"content": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Content to write or append. If omitted for 'write', an empty file is created. For files larger than a few KB, prefer one 'write' with empty content followed by multiple 'append' calls of ~4KB each to stay within tool-call payload limits.",
"title": "Content"
},
"mode": {
"default": "0755",
"description": "For action='mkdir': octal permission string (default 0755).",
"title": "Mode",
"type": "string"
},
"path": {
"description": "Absolute path inside the sandbox.",
"title": "Path",
"type": "string"
},
"recursive": {
"default": false,
"description": "For action='delete': remove directories recursively.",
"title": "Recursive",
"type": "boolean"
}
},
"required": [
"action",
"path"
],
"title": "DaytonaFileToolSchema",
"type": "object"
}
},
{
"description": "Execute a block of Python code inside a Daytona sandbox and return the exit code, captured stdout, and any produced artifacts. Use this for data processing, quick scripts, or analysis that should run in an isolated environment.",
"env_vars": [
{
"default": null,
"description": "API key for Daytona sandbox service",
"name": "DAYTONA_API_KEY",
"required": false
},
{
"default": null,
"description": "Daytona API base URL (optional)",
"name": "DAYTONA_API_URL",
"required": false
},
{
"default": null,
"description": "Daytona target region (optional)",
"name": "DAYTONA_TARGET",
"required": false
}
],
"humanized_name": "Daytona Sandbox Python",
"init_params_schema": {
"$defs": {
"EnvVar": {
"properties": {
"default": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Default"
},
"description": {
"title": "Description",
"type": "string"
},
"name": {
"title": "Name",
"type": "string"
},
"required": {
"default": true,
"title": "Required",
"type": "boolean"
}
},
"required": [
"name",
"description"
],
"title": "EnvVar",
"type": "object"
}
},
"description": "Run Python source inside a Daytona sandbox.",
"properties": {
"api_key": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"description": "Daytona API key. Falls back to DAYTONA_API_KEY env var.",
"required": false,
"title": "Api Key"
},
"api_url": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"description": "Daytona API URL override. Falls back to DAYTONA_API_URL env var.",
"required": false,
"title": "Api Url"
},
"create_params": {
"anyOf": [
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"description": "Optional kwargs forwarded to CreateSandboxFromSnapshotParams when creating a sandbox (e.g. language, snapshot, env_vars, labels).",
"title": "Create Params"
},
"persistent": {
"default": false,
"description": "If True, reuse one sandbox across all calls to this tool instance and delete it at process exit. Default False creates and deletes a fresh sandbox per call.",
"title": "Persistent",
"type": "boolean"
},
"sandbox_id": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Attach to an existing sandbox by id or name instead of creating a new one. The tool will never delete a sandbox it did not create.",
"title": "Sandbox Id"
},
"sandbox_timeout": {
"default": 60.0,
"description": "Timeout in seconds for sandbox create/delete operations.",
"title": "Sandbox Timeout",
"type": "number"
},
"target": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"description": "Daytona target region. Falls back to DAYTONA_TARGET env var.",
"required": false,
"title": "Target"
}
},
"required": [],
"title": "DaytonaPythonTool",
"type": "object"
},
"name": "DaytonaPythonTool",
"package_dependencies": [
"daytona"
],
"run_params_schema": {
"properties": {
"argv": {
"anyOf": [
{
"items": {
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Optional argv passed to the script (forwarded as params.argv).",
"title": "Argv"
},
"code": {
"description": "Python source to execute inside the sandbox.",
"title": "Code",
"type": "string"
},
"env": {
"anyOf": [
{
"additionalProperties": {
"type": "string"
},
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"description": "Optional environment variables for the run (forwarded as params.env).",
"title": "Env"
},
"timeout": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "Maximum seconds to wait for the code to finish.",
"title": "Timeout"
}
},
"required": [
"code"
],
"title": "DaytonaPythonToolSchema",
"type": "object"
}
},
{
"description": "A tool that can be used to recursively list a directory's content.",
"env_vars": [],
@@ -14051,7 +14679,7 @@
}
},
{
"description": "Converts natural language to SQL queries and executes them.",
"description": "Converts natural language to SQL queries and executes them against a database. Read-only by default \u2014 only SELECT/SHOW/DESCRIBE/EXPLAIN queries (and read-only CTEs) are allowed unless configured with allow_dml=True.",
"env_vars": [],
"humanized_name": "NL2SQLTool",
"init_params_schema": {
@@ -14092,7 +14720,14 @@
"type": "object"
}
},
"description": "Tool that converts natural language to SQL and executes it against a database.\n\nBy default the tool operates in **read-only mode**: only SELECT, SHOW,\nDESCRIBE, EXPLAIN, and read-only CTEs (WITH \u2026 SELECT) are permitted. Write\noperations (INSERT, UPDATE, DELETE, DROP, ALTER, CREATE, TRUNCATE, \u2026) are\nblocked unless ``allow_dml=True`` is set explicitly or the environment\nvariable ``CREWAI_NL2SQL_ALLOW_DML=true`` is present.\n\nWritable CTEs (``WITH d AS (DELETE \u2026) SELECT \u2026``) and\n``EXPLAIN ANALYZE <write-stmt>`` are treated as write operations and are\nblocked in read-only mode.\n\nThe ``_fetch_all_available_columns`` helper uses parameterised queries so\nthat table names coming from the database catalogue cannot be used as an\ninjection vector.",
"properties": {
"allow_dml": {
"default": false,
"description": "When False (default) only read statements are permitted. Set to True to allow INSERT/UPDATE/DELETE/DROP and other write operations.",
"title": "Allow DML",
"type": "boolean"
},
"columns": {
"additionalProperties": {
"anyOf": [

View File

@@ -10,7 +10,7 @@ requires-python = ">=3.10, <3.14"
dependencies = [
# Core Dependencies
"pydantic~=2.11.9",
"openai>=1.83.0,<3",
"openai>=2.0.0,<3",
"instructor>=1.3.3",
# Text Processing
"pdfplumber~=0.11.4",
@@ -24,7 +24,7 @@ dependencies = [
"tokenizers>=0.21,<1",
"openpyxl~=3.1.5",
# Authentication and Security
"python-dotenv~=1.1.1",
"python-dotenv>=1.2.2,<2",
"pyjwt>=2.9.0,<3",
# TUI
"textual>=7.5.0",
@@ -40,7 +40,7 @@ dependencies = [
"pydantic-settings~=2.10.1",
"httpx~=0.28.1",
"mcp~=1.26.0",
"uv~=0.9.13",
"uv~=0.11.6",
"aiosqlite~=0.21.0",
"pyyaml~=6.0",
"aiofiles~=24.1.0",
@@ -55,7 +55,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
"crewai-tools==1.14.2a1",
"crewai-tools==1.14.3a2",
]
embeddings = [
"tiktoken~=0.8.0"
@@ -74,8 +74,8 @@ qdrant = [
"qdrant-client[fastembed]~=1.14.3",
]
aws = [
"boto3~=1.40.38",
"aiobotocore~=2.25.2",
"boto3~=1.42.79",
"aiobotocore~=3.4.0",
]
watson = [
"ibm-watsonx-ai~=1.3.39",
@@ -87,7 +87,7 @@ litellm = [
"litellm~=1.83.0",
]
bedrock = [
"boto3~=1.40.45",
"boto3~=1.42.79",
]
google-genai = [
"google-genai~=1.65.0",

View File

@@ -1,10 +1,9 @@
import contextvars
import threading
from typing import Any
import urllib.request
import importlib
import sys
from typing import TYPE_CHECKING, Annotated, Any
import warnings
from pydantic import Field, PydanticUserError
from crewai.agent.core import Agent
from crewai.agent.planning_config import PlanningConfig
@@ -20,7 +19,10 @@ from crewai.state.checkpoint_config import CheckpointConfig # noqa: F401
from crewai.task import Task
from crewai.tasks.llm_guardrail import LLMGuardrail
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry.telemetry import Telemetry
if TYPE_CHECKING:
from crewai.memory.unified_memory import Memory
def _suppress_pydantic_deprecation_warnings() -> None:
@@ -46,38 +48,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
__version__ = "1.14.2a1"
_telemetry_submitted = False
def _track_install() -> None:
"""Track package installation/first-use via Scarf analytics."""
global _telemetry_submitted
if _telemetry_submitted or Telemetry._is_telemetry_disabled():
return
try:
pixel_url = "https://api.scarf.sh/v2/packages/CrewAI/crewai/docs/00f2dad1-8334-4a39-934e-003b2e1146db"
req = urllib.request.Request(pixel_url) # noqa: S310
req.add_header("User-Agent", f"CrewAI-Python/{__version__}")
with urllib.request.urlopen(req, timeout=2): # noqa: S310
_telemetry_submitted = True
except Exception: # noqa: S110
pass
def _track_install_async() -> None:
"""Track installation in background thread to avoid blocking imports."""
if not Telemetry._is_telemetry_disabled():
ctx = contextvars.copy_context()
thread = threading.Thread(target=ctx.run, args=(_track_install,), daemon=True)
thread.start()
_track_install_async()
__version__ = "1.14.3a2"
_LAZY_IMPORTS: dict[str, tuple[str, str]] = {
"Memory": ("crewai.memory.unified_memory", "Memory"),
@@ -88,8 +59,6 @@ def __getattr__(name: str) -> Any:
"""Lazily import heavy modules (e.g. Memory → lancedb) on first access."""
if name in _LAZY_IMPORTS:
module_path, attr = _LAZY_IMPORTS[name]
import importlib
mod = importlib.import_module(module_path)
val = getattr(mod, attr)
globals()[name] = val
@@ -147,8 +116,6 @@ try:
except ImportError:
pass
import sys
_full_namespace = {
**_base_namespace,
"ToolsHandler": _ToolsHandler,
@@ -191,10 +158,6 @@ try:
Flow.model_rebuild(force=True, _types_namespace=_full_namespace)
_AgentExecutor.model_rebuild(force=True, _types_namespace=_full_namespace)
from typing import Annotated
from pydantic import Field
from crewai.state.runtime import RuntimeState
Entity = Annotated[

View File

@@ -98,7 +98,6 @@ class A2AErrorCode(IntEnum):
"""The specified artifact was not found."""
# Error code to default message mapping
ERROR_MESSAGES: dict[int, str] = {
A2AErrorCode.JSON_PARSE_ERROR: "Parse error",
A2AErrorCode.INVALID_REQUEST: "Invalid Request",

View File

@@ -63,25 +63,21 @@ class A2AExtension(Protocol):
Example:
class MyExtension:
def inject_tools(self, agent: Agent) -> None:
# Add custom tools to the agent
pass
def extract_state_from_history(
self, conversation_history: Sequence[Message]
) -> ConversationState | None:
# Extract state from conversation
return None
def augment_prompt(
self, base_prompt: str, conversation_state: ConversationState | None
) -> str:
# Add custom instructions
return base_prompt
def process_response(
self, agent_response: Any, conversation_state: ConversationState | None
) -> Any:
# Modify response if needed
return agent_response
"""

View File

@@ -77,7 +77,6 @@ def extract_a2a_agent_ids_from_config(
else:
configs = a2a_config
# Filter to only client configs (those with endpoint)
client_configs: list[A2AClientConfigTypes] = [
config for config in configs if isinstance(config, (A2AConfig, A2AClientConfig))
]

View File

@@ -29,7 +29,7 @@ from pydantic import (
model_validator,
)
from pydantic.functional_serializers import PlainSerializer
from typing_extensions import Self
from typing_extensions import Self, TypeIs
from crewai.agent.planning_config import PlanningConfig
from crewai.agent.utils import (
@@ -84,6 +84,7 @@ from crewai.rag.embeddings.types import EmbedderConfig
from crewai.security.fingerprint import Fingerprint
from crewai.skills.loader import activate_skill, discover_skills
from crewai.skills.models import INSTRUCTIONS, Skill as SkillModel
from crewai.state.checkpoint_config import CheckpointConfig, apply_checkpoint
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.types.callback import SerializableCallable
from crewai.utilities.agent_utils import (
@@ -132,6 +133,13 @@ _EXECUTOR_CLASS_MAP: dict[str, type] = {
}
def _is_resuming_agent_executor(
executor: CrewAgentExecutor | AgentExecutor | None,
) -> TypeIs[AgentExecutor]:
"""Type guard: True when the executor is resuming from a checkpoint."""
return isinstance(executor, AgentExecutor) and executor._resuming
def _validate_executor_class(value: Any) -> Any:
if isinstance(value, str):
cls = _EXECUTOR_CLASS_MAP.get(value)
@@ -1341,7 +1349,6 @@ class Agent(BaseAgent):
raw_tools: list[BaseTool] = self.tools or []
# Inject memory tools for standalone kickoff (crew path handles its own)
agent_memory = getattr(self, "memory", None)
if agent_memory is not None:
from crewai.tools.memory_tools import create_memory_tools
@@ -1366,24 +1373,42 @@ class Agent(BaseAgent):
prompt, stop_words, rpm_limit_fn = self._build_execution_prompt(raw_tools)
executor = AgentExecutor(
llm=cast(BaseLLM, self.llm),
agent=self,
prompt=prompt,
max_iter=self.max_iter,
tools=parsed_tools,
tools_names=get_tool_names(parsed_tools),
stop_words=stop_words,
tools_description=render_text_description_and_args(parsed_tools),
tools_handler=self.tools_handler,
original_tools=raw_tools,
step_callback=self.step_callback,
function_calling_llm=self.function_calling_llm,
respect_context_window=self.respect_context_window,
request_within_rpm_limit=rpm_limit_fn,
callbacks=[TokenCalcHandler(self._token_process)],
response_model=response_format,
)
if _is_resuming_agent_executor(self.agent_executor):
executor = self.agent_executor
executor.tools = parsed_tools
executor.tools_names = get_tool_names(parsed_tools)
executor.tools_description = render_text_description_and_args(parsed_tools)
executor.original_tools = raw_tools
executor.prompt = prompt
executor.response_model = response_format
executor.stop_words = stop_words
executor.tools_handler = self.tools_handler
executor.step_callback = self.step_callback
executor.function_calling_llm = cast(
BaseLLM | None, self.function_calling_llm
)
executor.respect_context_window = self.respect_context_window
executor.request_within_rpm_limit = rpm_limit_fn
executor.callbacks = [TokenCalcHandler(self._token_process)]
else:
executor = AgentExecutor(
llm=cast(BaseLLM, self.llm),
agent=self,
prompt=prompt,
max_iter=self.max_iter,
tools=parsed_tools,
tools_names=get_tool_names(parsed_tools),
stop_words=stop_words,
tools_description=render_text_description_and_args(parsed_tools),
tools_handler=self.tools_handler,
original_tools=raw_tools,
step_callback=self.step_callback,
function_calling_llm=self.function_calling_llm,
respect_context_window=self.respect_context_window,
request_within_rpm_limit=rpm_limit_fn,
callbacks=[TokenCalcHandler(self._token_process)],
response_model=response_format,
)
all_files: dict[str, Any] = {}
if isinstance(messages, str):
@@ -1399,7 +1424,6 @@ class Agent(BaseAgent):
if input_files:
all_files.update(input_files)
# Inject memory context for standalone kickoff (recall before execution)
if agent_memory is not None:
try:
crewai_event_bus.emit(
@@ -1459,6 +1483,7 @@ class Agent(BaseAgent):
messages: str | list[LLMMessage],
response_format: type[Any] | None = None,
input_files: dict[str, FileInput] | None = None,
from_checkpoint: CheckpointConfig | None = None,
) -> LiteAgentOutput | Coroutine[Any, Any, LiteAgentOutput]:
"""Execute the agent with the given messages using the AgentExecutor.
@@ -1477,6 +1502,9 @@ class Agent(BaseAgent):
response_format: Optional Pydantic model for structured output.
input_files: Optional dict of named files to attach to the message.
Files can be paths, bytes, or File objects from crewai_files.
from_checkpoint: Optional checkpoint config. If ``restore_from``
is set, the agent resumes from that checkpoint. Remaining
config fields enable checkpointing for the run.
Returns:
LiteAgentOutput: The result of the agent execution.
@@ -1485,8 +1513,14 @@ class Agent(BaseAgent):
Note:
For explicit async usage outside of Flow, use kickoff_async() directly.
"""
# Magic auto-async: if inside event loop (e.g., inside a Flow),
# return coroutine for Flow to await
restored = apply_checkpoint(self, from_checkpoint)
if restored is not None:
return restored.kickoff( # type: ignore[no-any-return]
messages=messages,
response_format=response_format,
input_files=input_files,
)
if is_inside_event_loop():
return self.kickoff_async(messages, response_format, input_files)
@@ -1495,14 +1529,17 @@ class Agent(BaseAgent):
)
try:
crewai_event_bus.emit(
self,
event=LiteAgentExecutionStartedEvent(
if self.checkpoint_kickoff_event_id is not None:
self._kickoff_event_id = self.checkpoint_kickoff_event_id
self.checkpoint_kickoff_event_id = None
else:
started_event = LiteAgentExecutionStartedEvent(
agent_info=agent_info,
tools=parsed_tools,
messages=messages,
),
)
)
crewai_event_bus.emit(self, event=started_event)
self._kickoff_event_id = started_event.event_id
output = self._execute_and_build_output(executor, inputs, response_format)
return self._finalize_kickoff(
@@ -1637,7 +1674,7 @@ class Agent(BaseAgent):
if isinstance(conversion_result, BaseModel):
formatted_result = conversion_result
except ConverterError:
pass # Keep raw output if conversion fails
pass
else:
raw_output = str(output) if not isinstance(output, str) else output
@@ -1719,7 +1756,6 @@ class Agent(BaseAgent):
elif callable(self.guardrail):
guardrail_callable = self.guardrail
else:
# Should not happen if called from kickoff with guardrail check
return output
guardrail_result = process_guardrail(
@@ -1765,6 +1801,7 @@ class Agent(BaseAgent):
messages: str | list[LLMMessage],
response_format: type[Any] | None = None,
input_files: dict[str, FileInput] | None = None,
from_checkpoint: CheckpointConfig | None = None,
) -> LiteAgentOutput:
"""Execute the agent asynchronously with the given messages.
@@ -1780,23 +1817,36 @@ class Agent(BaseAgent):
response_format: Optional Pydantic model for structured output.
input_files: Optional dict of named files to attach to the message.
Files can be paths, bytes, or File objects from crewai_files.
from_checkpoint: Optional checkpoint config. If ``restore_from``
is set, the agent resumes from that checkpoint.
Returns:
LiteAgentOutput: The result of the agent execution.
"""
restored = apply_checkpoint(self, from_checkpoint)
if restored is not None:
return await restored.kickoff_async( # type: ignore[no-any-return]
messages=messages,
response_format=response_format,
input_files=input_files,
)
executor, inputs, agent_info, parsed_tools = self._prepare_kickoff(
messages, response_format, input_files
)
try:
crewai_event_bus.emit(
self,
event=LiteAgentExecutionStartedEvent(
if self.checkpoint_kickoff_event_id is not None:
self._kickoff_event_id = self.checkpoint_kickoff_event_id
self.checkpoint_kickoff_event_id = None
else:
started_event = LiteAgentExecutionStartedEvent(
agent_info=agent_info,
tools=parsed_tools,
messages=messages,
),
)
)
crewai_event_bus.emit(self, event=started_event)
self._kickoff_event_id = started_event.event_id
output = await self._execute_and_build_output_async(
executor, inputs, response_format
@@ -1813,6 +1863,7 @@ class Agent(BaseAgent):
messages: str | list[LLMMessage],
response_format: type[Any] | None = None,
input_files: dict[str, FileInput] | None = None,
from_checkpoint: CheckpointConfig | None = None,
) -> LiteAgentOutput:
"""Async version of kickoff. Alias for kickoff_async.
@@ -1820,8 +1871,12 @@ class Agent(BaseAgent):
messages: Either a string query or a list of message dictionaries.
response_format: Optional Pydantic model for structured output.
input_files: Optional dict of named files to attach to the message.
from_checkpoint: Optional checkpoint config. If ``restore_from``
is set, the agent resumes from that checkpoint.
Returns:
LiteAgentOutput: The result of the agent execution.
"""
return await self.kickoff_async(messages, response_format, input_files)
return await self.kickoff_async(
messages, response_format, input_files, from_checkpoint
)
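
The new `from_checkpoint` parameter threads through `kickoff`, `kickoff_async`, and `akickoff`. A minimal resume sketch, assuming a checkpoint already exists at the illustrative path below:

```python
from crewai import Agent
from crewai.state.checkpoint_config import CheckpointConfig

agent = Agent(role="Researcher", goal="Research topics", backstory="...")

# apply_checkpoint() restores state before the run, and the
# _is_resuming_agent_executor() guard reuses the restored executor.
result = agent.kickoff(
    messages="Continue where you left off.",
    from_checkpoint=CheckpointConfig(restore_from="./.checkpoints/run.json"),  # illustrative path
)
print(result.raw)
```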

View File

@@ -41,7 +41,6 @@ class PlanningConfig(BaseModel):
from crewai import Agent
from crewai.agent.planning_config import PlanningConfig
# Simple usage — fast, linear execution (default)
agent = Agent(
role="Researcher",
goal="Research topics",
@@ -49,7 +48,6 @@ class PlanningConfig(BaseModel):
planning_config=PlanningConfig(),
)
# Balanced — replan only when steps fail
agent = Agent(
role="Researcher",
goal="Research topics",
@@ -59,7 +57,6 @@ class PlanningConfig(BaseModel):
),
)
# Full adaptive planning with refinement and replanning
agent = Agent(
role="Researcher",
goal="Research topics",
@@ -69,7 +66,7 @@ class PlanningConfig(BaseModel):
max_attempts=3,
max_steps=10,
plan_prompt="Create a focused plan for: {description}",
llm="gpt-4o-mini", # Use cheaper model for planning
llm="gpt-4o-mini",
),
)
```

View File

@@ -39,7 +39,6 @@ def handle_reasoning(agent: Agent, task: Task) -> None:
agent: The agent performing the task.
task: The task to execute.
"""
# Check if planning is enabled using the planning_enabled property
if not getattr(agent, "planning_enabled", False):
return

View File

@@ -99,12 +99,10 @@ class OpenAIAgentToolAdapter(BaseToolAdapter):
Returns:
Tool execution result.
"""
# Get the parameter name from the schema
param_name: str = next(
iter(tool.args_schema.model_json_schema()["properties"].keys())
)
# Handle different argument types
args_dict: dict[str, Any]
if isinstance(arguments, dict):
args_dict = arguments
@@ -116,16 +114,13 @@ class OpenAIAgentToolAdapter(BaseToolAdapter):
else:
args_dict = {param_name: str(arguments)}
# Run the tool with the processed arguments
output: Any | Awaitable[Any] = tool._run(**args_dict)
# Await if the tool returned a coroutine
if inspect.isawaitable(output):
result: Any = await output
else:
result = output
# Ensure the result is JSON serializable
if isinstance(result, (dict, list, str, int, float, bool, type(None))):
return result
return str(result)
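
The adapter normalizes sync and async tools with a single conditional await. A generic sketch of that pattern; `run_tool` is a hypothetical helper, not the adapter's API:

```python
import asyncio
import inspect
from typing import Any

async def run_tool(tool_fn, **kwargs) -> Any:
    # A tool may return a plain value or a coroutine; awaiting
    # conditionally covers both without duplicating the call site.
    output = tool_fn(**kwargs)
    result = await output if inspect.isawaitable(output) else output
    # Coerce anything non-JSON-serializable to str, as the adapter does.
    if isinstance(result, (dict, list, str, int, float, bool, type(None))):
        return result
    return str(result)

async def main() -> None:
    print(await run_tool(lambda x: x * 2, x=21))              # sync tool -> 42
    print(await run_tool(asyncio.sleep, delay=0, result=42))  # async tool -> 42

asyncio.run(main())
```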

View File

@@ -28,6 +28,9 @@ from crewai.agents.agent_builder.base_agent_executor import BaseAgentExecutor
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.tools_handler import ToolsHandler
from crewai.events.base_events import set_emission_counter
from crewai.events.event_bus import crewai_event_bus
from crewai.events.event_context import restore_event_scope, set_last_event_id
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.knowledge_config import KnowledgeConfig
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
@@ -51,7 +54,7 @@ from crewai.utilities.string_utils import interpolate_only
if TYPE_CHECKING:
from crewai.context import ExecutionContext
from crewai.crew import Crew
from crewai.state.provider.core import BaseProvider
from crewai.state.runtime import RuntimeState
def _validate_crew_ref(value: Any) -> Any:
@@ -220,6 +223,7 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
_original_goal: str | None = PrivateAttr(default=None)
_original_backstory: str | None = PrivateAttr(default=None)
_token_process: TokenProcess = PrivateAttr(default_factory=TokenProcess)
_kickoff_event_id: str | None = PrivateAttr(default=None)
id: UUID4 = Field(default_factory=uuid.uuid4, frozen=True)
role: str = Field(description="Role of the agent")
goal: str = Field(description="Objective of the agent")
@@ -336,30 +340,89 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
min_length=1,
)
execution_context: ExecutionContext | None = Field(default=None)
checkpoint_kickoff_event_id: str | None = Field(default=None)
@classmethod
def from_checkpoint(
cls, path: str, *, provider: BaseProvider | None = None
) -> Self:
"""Restore an Agent from a checkpoint file."""
def from_checkpoint(cls, config: CheckpointConfig) -> Self:
"""Restore an Agent from a checkpoint, ready to resume via kickoff().
Args:
config: Checkpoint configuration with ``restore_from`` set to
the path of the checkpoint to load.
Returns:
An Agent instance. Call kickoff() to resume execution.
"""
from crewai.context import apply_execution_context
from crewai.state.provider.json_provider import JsonProvider
from crewai.state.runtime import RuntimeState
state = RuntimeState.from_checkpoint(
path,
provider=provider or JsonProvider(),
context={"from_checkpoint": True},
)
state = RuntimeState.from_checkpoint(config, context={"from_checkpoint": True})
crewai_event_bus.set_runtime_state(state)
for entity in state.root:
if isinstance(entity, cls):
if entity.execution_context is not None:
apply_execution_context(entity.execution_context)
if entity.agent_executor is not None:
entity.agent_executor.agent = entity
entity.agent_executor._resuming = True
entity._restore_runtime(state)
return entity
raise ValueError(f"No {cls.__name__} found in checkpoint: {path}")
raise ValueError(
f"No {cls.__name__} found in checkpoint: {config.restore_from}"
)
@classmethod
def fork(cls, config: CheckpointConfig, branch: str | None = None) -> Self:
"""Fork an Agent from a checkpoint, creating a new execution branch.
Args:
config: Checkpoint configuration with ``restore_from`` set.
branch: Branch label for the fork. Auto-generated if not provided.
Returns:
An Agent instance on the new branch. Call kickoff() to run.
"""
agent = cls.from_checkpoint(config)
state = crewai_event_bus._runtime_state
if state is None:
raise RuntimeError("Cannot fork: no runtime state on the event bus.")
state.fork(branch)
return agent
def _restore_runtime(self, state: RuntimeState) -> None:
"""Re-create runtime objects after restoring from a checkpoint.
Args:
state: The RuntimeState containing the event record.
"""
if self.agent_executor is not None:
self.agent_executor.agent = self
self.agent_executor._resuming = True
if self.checkpoint_kickoff_event_id is not None:
self._kickoff_event_id = self.checkpoint_kickoff_event_id
self._restore_event_scope(state)
def _restore_event_scope(self, state: RuntimeState) -> None:
"""Rebuild the event scope stack from the checkpoint's event record.
Args:
state: The RuntimeState containing the event record.
"""
stack: list[tuple[str, str]] = []
kickoff_id = self._kickoff_event_id
if kickoff_id:
stack.append((kickoff_id, "lite_agent_execution_started"))
restore_event_scope(tuple(stack))
last_event_id: str | None = None
max_seq = 0
for node in state.event_record.nodes.values():
seq = node.event.emission_sequence or 0
if seq > max_seq:
max_seq = seq
last_event_id = node.event.event_id
if last_event_id is not None:
set_last_event_id(last_event_id)
if max_seq > 0:
set_emission_counter(max_seq)
@model_validator(mode="before")
@classmethod
@@ -385,7 +448,6 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
if isinstance(tool, BaseTool):
processed_tools.append(tool)
elif all(hasattr(tool, attr) for attr in required_attrs):
# Tool has the required attributes, create a Tool instance
processed_tools.append(Tool.from_langchain(tool))
else:
raise ValueError(
@@ -450,14 +512,12 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
@model_validator(mode="after")
def validate_and_set_attributes(self) -> Self:
# Validate required fields
for field in ["role", "goal", "backstory"]:
if getattr(self, field) is None:
raise ValueError(
f"{field} must be provided either directly or through config"
)
# Set private attributes
self._logger = Logger(verbose=self.verbose)
if self.max_rpm and not self._rpm_controller:
self._rpm_controller = RPMController(
@@ -466,7 +526,6 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
if not self._token_process:
self._token_process = TokenProcess()
# Initialize security_config if not provided
if self.security_config is None:
self.security_config = SecurityConfig()
@@ -568,14 +627,11 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
"actions",
}
# Copy llm
existing_llm = shallow_copy(self.llm)
copied_knowledge = shallow_copy(self.knowledge)
copied_knowledge_storage = shallow_copy(self.knowledge_storage)
# Properly copy knowledge sources if they exist
existing_knowledge_sources = None
if self.knowledge_sources:
# Create a shared storage instance for all knowledge sources
shared_storage = (
self.knowledge_sources[0].storage if self.knowledge_sources else None
)
@@ -587,7 +643,6 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
if hasattr(source, "model_copy")
else shallow_copy(source)
)
# Ensure all copied sources use the same storage instance
copied_source.storage = shared_storage
existing_knowledge_sources.append(copied_source)
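
`from_checkpoint` now takes a `CheckpointConfig` rather than a bare path, and `fork` layers a new branch on top of the restore. A minimal sketch (the checkpoint path is illustrative):

```python
from crewai import Agent
from crewai.state.checkpoint_config import CheckpointConfig

config = CheckpointConfig(restore_from="./.checkpoints/run.json")

# Resume in place on the original branch.
agent = Agent.from_checkpoint(config)
result = agent.kickoff(messages="Resume execution.")

# Or fork: same restored state, but further events land on a new branch.
forked = Agent.fork(config, branch="experiment-1")
alt = forked.kickoff(messages="Try a different approach.")
```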

View File

@@ -4,8 +4,6 @@ import re
from typing import Final
# crewai.agents.parser constants
FINAL_ANSWER_ACTION: Final[str] = "Final Answer:"
MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE: Final[str] = (
"I did it wrong. Invalid Format: I missed the 'Action:' after 'Thought:'. I will do right next, and don't use a tool I have already used.\n"

View File

@@ -296,7 +296,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
Returns:
Final answer from the agent.
"""
# Check if model supports native function calling
use_native_tools = (
hasattr(self.llm, "supports_function_calling")
and callable(getattr(self.llm, "supports_function_calling", None))
@@ -307,7 +306,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
if use_native_tools:
return self._invoke_loop_native_tools()
# Fall back to ReAct text-based pattern
return self._invoke_loop_react()
def _invoke_loop_react(self) -> AgentFinish:
@@ -347,7 +345,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
executor_context=self,
verbose=self.agent.verbose,
)
# breakpoint()
if self.response_model is not None:
try:
if isinstance(answer, BaseModel):
@@ -365,7 +362,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
text=answer,
)
except ValidationError:
# If validation fails, convert BaseModel to JSON string for parsing
answer_str = (
answer.model_dump_json()
if isinstance(answer, BaseModel)
@@ -375,14 +371,12 @@ class CrewAgentExecutor(BaseAgentExecutor):
answer_str, self.use_stop_words
) # type: ignore[assignment]
else:
# When no response_model, answer should be a string
answer_str = str(answer) if not isinstance(answer, str) else answer
formatted_answer = process_llm_response(
answer_str, self.use_stop_words
) # type: ignore[assignment]
if isinstance(formatted_answer, AgentAction):
# Extract agent fingerprint if available
fingerprint_context = {}
if (
self.agent
@@ -426,7 +420,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
raise e
if is_context_length_exceeded(e):
handle_context_length(
@@ -443,10 +436,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
finally:
self.iterations += 1
# During the invoke loop, formatted_answer alternates between AgentAction
# (when the agent is using tools) and eventually becomes AgentFinish
# (when the agent reaches a final answer). This check confirms we've
# reached a final answer and helps type checking understand this transition.
if not isinstance(formatted_answer, AgentFinish):
raise RuntimeError(
"Agent execution ended without reaching a final answer. "
@@ -465,9 +454,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
Returns:
Final answer from the agent.
"""
# Convert tools to OpenAI schema format
if not self.original_tools:
# No tools available, fall back to simple LLM call
return self._invoke_loop_native_no_tools()
openai_tools, available_functions, self._tool_name_mapping = (
@@ -490,10 +477,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
enforce_rpm_limit(self.request_within_rpm_limit)
# Call LLM with native tools
# Pass available_functions=None so the LLM returns tool_calls
# without executing them. The executor handles tool execution
# via _handle_native_tool_calls to properly manage message history.
answer = get_llm_response(
llm=cast("BaseLLM", self.llm),
messages=self.messages,
@@ -508,32 +491,26 @@ class CrewAgentExecutor(BaseAgentExecutor):
verbose=self.agent.verbose,
)
# Check if the response is a list of tool calls
if (
isinstance(answer, list)
and answer
and self._is_tool_call_list(answer)
):
# Handle tool calls - execute tools and add results to messages
tool_finish = self._handle_native_tool_calls(
answer, available_functions
)
# If tool has result_as_answer=True, return immediately
if tool_finish is not None:
return tool_finish
# Continue loop to let LLM analyze results and decide next steps
continue
# Text or other response - handle as potential final answer
if isinstance(answer, str):
# Text response - this is the final answer
formatted_answer = AgentFinish(
thought="",
output=answer,
text=answer,
)
self._invoke_step_callback(formatted_answer)
self._append_message(answer) # Save final answer to messages
self._append_message(answer)
self._show_logs(formatted_answer)
return formatted_answer
@@ -549,14 +526,13 @@ class CrewAgentExecutor(BaseAgentExecutor):
self._show_logs(formatted_answer)
return formatted_answer
# Unexpected response type, treat as final answer
formatted_answer = AgentFinish(
thought="",
output=str(answer),
text=str(answer),
)
self._invoke_step_callback(formatted_answer)
self._append_message(str(answer)) # Save final answer to messages
self._append_message(str(answer))
self._show_logs(formatted_answer)
return formatted_answer
@@ -627,12 +603,10 @@ class CrewAgentExecutor(BaseAgentExecutor):
if not response:
return False
first_item = response[0]
# OpenAI-style
if hasattr(first_item, "function") or (
isinstance(first_item, dict) and "function" in first_item
):
return True
# Anthropic-style (object with attributes)
if (
hasattr(first_item, "type")
and getattr(first_item, "type", None) == "tool_use"
@@ -640,14 +614,12 @@ class CrewAgentExecutor(BaseAgentExecutor):
return True
if hasattr(first_item, "name") and hasattr(first_item, "input"):
return True
# Bedrock-style (dict with name and input keys)
if (
isinstance(first_item, dict)
and "name" in first_item
and "input" in first_item
):
return True
# Gemini-style
if hasattr(first_item, "function_call") and first_item.function_call:
return True
return False
@@ -706,8 +678,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
for _, func_name, _ in parsed_calls
)
# Preserve historical sequential behavior for result_as_answer batches.
# Also avoid threading around usage counters for max_usage_count tools.
if has_result_as_answer_in_batch or has_max_usage_count_in_batch:
logger.debug(
"Skipping parallel native execution because batch includes result_as_answer or max_usage_count tool"
@@ -773,7 +743,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
self.messages.append(reasoning_message)
return None
# Sequential behavior: process only first tool call, then force reflection.
call_id, func_name, func_args = parsed_calls[0]
self._append_assistant_tool_calls_message([(call_id, func_name, func_args)])
@@ -827,7 +796,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
func_name = sanitize_tool_name(
func_info.get("name", "") or tool_call.get("name", "")
)
func_args = func_info.get("arguments", "{}") or tool_call.get("input", {})
func_args = func_info.get("arguments") or tool_call.get("input", {})
return call_id, func_name, func_args
return None
@@ -1202,7 +1171,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
text=answer,
)
except ValidationError:
# If validation fails, convert BaseModel to JSON string for parsing
answer_str = (
answer.model_dump_json()
if isinstance(answer, BaseModel)
@@ -1212,7 +1180,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
answer_str, self.use_stop_words
) # type: ignore[assignment]
else:
# When no response_model, answer should be a string
answer_str = str(answer) if not isinstance(answer, str) else answer
formatted_answer = process_llm_response(
answer_str, self.use_stop_words
@@ -1319,10 +1286,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
enforce_rpm_limit(self.request_within_rpm_limit)
# Call LLM with native tools
# Pass available_functions=None so the LLM returns tool_calls
# without executing them. The executor handles tool execution
# via _handle_native_tool_calls to properly manage message history.
answer = await aget_llm_response(
llm=cast("BaseLLM", self.llm),
messages=self.messages,
@@ -1336,32 +1299,26 @@ class CrewAgentExecutor(BaseAgentExecutor):
executor_context=self,
verbose=self.agent.verbose,
)
# Check if the response is a list of tool calls
if (
isinstance(answer, list)
and answer
and self._is_tool_call_list(answer)
):
# Handle tool calls - execute tools and add results to messages
tool_finish = self._handle_native_tool_calls(
answer, available_functions
)
# If tool has result_as_answer=True, return immediately
if tool_finish is not None:
return tool_finish
# Continue loop to let LLM analyze results and decide next steps
continue
# Text or other response - handle as potential final answer
if isinstance(answer, str):
# Text response - this is the final answer
formatted_answer = AgentFinish(
thought="",
output=answer,
text=answer,
)
await self._ainvoke_step_callback(formatted_answer)
self._append_message(answer) # Save final answer to messages
self._append_message(answer)
self._show_logs(formatted_answer)
return formatted_answer
@@ -1377,14 +1334,13 @@ class CrewAgentExecutor(BaseAgentExecutor):
self._show_logs(formatted_answer)
return formatted_answer
# Unexpected response type, treat as final answer
formatted_answer = AgentFinish(
thought="",
output=str(answer),
text=str(answer),
)
await self._ainvoke_step_callback(formatted_answer)
self._append_message(str(answer)) # Save final answer to messages
self._append_message(str(answer))
self._show_logs(formatted_answer)
return formatted_answer
@@ -1455,7 +1411,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
Returns:
Updated action or final answer.
"""
# Special case for add_image_tool
add_image_tool = I18N_DEFAULT.tools("add_image")
if (
isinstance(add_image_tool, dict)
@@ -1575,17 +1530,14 @@ class CrewAgentExecutor(BaseAgentExecutor):
training_handler = CrewTrainingHandler(TRAINING_DATA_FILE)
training_data = training_handler.load() or {}
# Initialize or retrieve agent's training data
agent_training_data = training_data.get(agent_id, {})
if human_feedback is not None:
# Save initial output and human feedback
agent_training_data[train_iteration] = {
"initial_output": result.output,
"human_feedback": human_feedback,
}
else:
# Save improved output
if train_iteration in agent_training_data:
agent_training_data[train_iteration]["improved_output"] = result.output
else:
@@ -1599,7 +1551,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
)
return
# Update the training data and save
training_data[agent_id] = agent_training_data
training_handler.save(training_data)
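
In the native-tools loop above, the executor records the assistant's tool-call turn itself, executes the tools, then appends one tool-result message per call before looping, so the next LLM call sees the full history. An OpenAI-style sketch of that message choreography; `execute_tool` is a hypothetical stand-in for the real dispatch:

```python
from typing import Any

def execute_tool(name: str, arguments: str) -> Any:
    """Hypothetical dispatch stand-in; the executor resolves real tools here."""
    return f"ran {name} with {arguments}"

def handle_tool_calls(messages: list[dict], tool_calls: list[dict]) -> None:
    # 1. Record the assistant turn that requested the tools.
    messages.append({"role": "assistant", "content": None, "tool_calls": tool_calls})
    # 2. Append each result keyed by tool_call_id so the next LLM call
    #    can match results to requests and decide the next step.
    for call in tool_calls:
        result = execute_tool(call["function"]["name"], call["function"]["arguments"])
        messages.append(
            {"role": "tool", "tool_call_id": call["id"], "content": str(result)}
        )
```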

View File

@@ -94,11 +94,8 @@ def parse(text: str) -> AgentAction | AgentFinish:
if includes_answer:
final_answer = text.split(FINAL_ANSWER_ACTION)[-1].strip()
# Check whether the final answer ends with triple backticks.
if final_answer.endswith("```"):
# Count occurrences of triple backticks in the final answer.
count = final_answer.count("```")
# If count is odd then it's an unmatched trailing set; remove it.
if count % 2 != 0:
final_answer = final_answer[:-3].rstrip()
return AgentFinish(thought=thought, output=final_answer, text=text)
@@ -146,7 +143,6 @@ def _extract_thought(text: str) -> str:
if thought_index == -1:
return ""
thought = text[:thought_index].strip()
# Remove any triple backticks from the thought string
return thought.replace("```", "").strip()
@@ -171,18 +167,9 @@ def _safe_repair_json(tool_input: str) -> str:
Returns:
The repaired JSON string or original if repair fails.
"""
# Skip repair if the input starts and ends with square brackets
# Explanation: The JSON parser has issues handling inputs that are enclosed in square brackets ('[]').
# These are typically valid JSON arrays or strings that do not require repair. Attempting to repair such inputs
# might lead to unintended alterations, such as wrapping the entire input in additional layers or modifying
# the structure in a way that changes its meaning. By skipping the repair for inputs that start and end with
# square brackets, we preserve the integrity of these valid JSON structures and avoid unnecessary modifications.
if tool_input.startswith("[") and tool_input.endswith("]"):
return tool_input
# Before repair, handle common LLM issues:
# 1. Replace """ with " to avoid JSON parser errors
tool_input = tool_input.replace('"""', '"')
result = repair_json(tool_input)
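
`_safe_repair_json` applies two rules before delegating: bracket-wrapped inputs pass through untouched, and stray `"""` sequences collapse to `"`. A tiny illustration, assuming `repair_json` comes from the `json_repair` package the parser delegates to:

```python
from json_repair import repair_json  # assumed to be the parser's dependency

def safe_repair(tool_input: str) -> str:
    # Bracket-wrapped inputs are usually valid arrays; repairing them
    # risks re-wrapping or restructuring, so pass them through as-is.
    if tool_input.startswith("[") and tool_input.endswith("]"):
        return tool_input
    # LLMs sometimes emit """ where a single quote belongs.
    return repair_json(tool_input.replace('"""', '"'))

print(safe_repair('{"query": """crewai"""}'))  # -> {"query": "crewai"}
print(safe_repair('["left", "as", "is"]'))     # returned unchanged
```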

View File

@@ -83,10 +83,6 @@ class PlannerObserver:
return create_llm(config.llm)
return self.agent.llm
# ------------------------------------------------------------------
# Public API
# ------------------------------------------------------------------
def observe(
self,
completed_step: TodoItem,
@@ -182,9 +178,6 @@ class PlannerObserver:
),
)
# Don't force a full replan — the step may have succeeded even if the
# observer LLM failed to parse the result. Defaulting to "continue" is
# far less disruptive than wiping the entire plan on every observer error.
return StepObservation(
step_completed_successfully=True,
key_information_learned="",
@@ -221,10 +214,6 @@ class PlannerObserver:
return remaining_todos
# ------------------------------------------------------------------
# Internal: Message building
# ------------------------------------------------------------------
def _build_observation_messages(
self,
completed_step: TodoItem,
@@ -239,15 +228,11 @@ class PlannerObserver:
task_desc = self.task.description or ""
task_goal = self.task.expected_output or ""
elif self.kickoff_input:
# Standalone kickoff path — no Task object, but we have the raw input.
# Extract just the ## Task section so the observer sees the actual goal,
# not the full enriched instruction with env/tools/verification noise.
task_desc = extract_task_section(self.kickoff_input)
task_goal = "Complete the task successfully"
system_prompt = I18N_DEFAULT.retrieve("planning", "observation_system_prompt")
# Build context of what's been done
completed_summary = ""
if all_completed:
completed_lines = []
@@ -261,7 +246,6 @@ class PlannerObserver:
completed_lines
)
# Build remaining plan
remaining_summary = ""
if remaining_todos:
remaining_lines = [
@@ -306,17 +290,14 @@ class PlannerObserver:
if isinstance(response, StepObservation):
return response
# JSON string path — most common miss before this fix
if isinstance(response, str):
text = response.strip()
try:
return StepObservation.model_validate_json(text)
except Exception: # noqa: S110
pass
# Some LLMs wrap the JSON in markdown fences
if text.startswith("```"):
lines = text.split("\n")
# Strip first and last lines (``` markers)
inner = "\n".join(
lines[1:-1] if lines[-1].strip() == "```" else lines[1:]
)
@@ -325,14 +306,12 @@ class PlannerObserver:
except Exception: # noqa: S110
pass
# Dict path
if isinstance(response, dict):
try:
return StepObservation.model_validate(response)
except Exception: # noqa: S110
pass
# Last resort — log what we got so it's diagnosable
logger.warning(
"Could not parse observation response (type=%s). "
"Falling back to default failure observation. Preview: %.200s",

View File

@@ -108,7 +108,6 @@ class StepExecutor:
self.request_within_rpm_limit = request_within_rpm_limit
self.callbacks = callbacks or []
# Native tool support — set up once
self._use_native_tools = check_native_tool_support(
self.llm, self.original_tools
)
@@ -121,10 +120,6 @@ class StepExecutor:
_,
) = setup_native_tools(self.original_tools)
# ------------------------------------------------------------------
# Public API
# ------------------------------------------------------------------
def execute(
self,
todo: TodoItem,
@@ -190,10 +185,6 @@ class StepExecutor:
execution_time=elapsed,
)
# ------------------------------------------------------------------
# Internal: Message building
# ------------------------------------------------------------------
def _build_isolated_messages(
self, todo: TodoItem, context: StepExecutionContext
) -> list[LLMMessage]:
@@ -237,10 +228,6 @@ class StepExecutor:
"""Build the user prompt for this specific step."""
parts: list[str] = []
# Include overall task context so the executor knows the full goal and
# required output format/location — critical for knowing WHAT to produce.
# We extract only the task body (not tool instructions or verification
# sections) to avoid duplicating directives already in the system prompt.
if context.task_description:
task_section = extract_task_section(context.task_description)
if task_section:
@@ -267,7 +254,6 @@ class StepExecutor:
)
)
# Include dependency results (final results only, no traces)
if context.dependency_results:
parts.append(
I18N_DEFAULT.retrieve("planning", "step_executor_context_header")
@@ -283,10 +269,6 @@ class StepExecutor:
return "\n".join(parts)
# ------------------------------------------------------------------
# Internal: Multi-turn execution loop
# ------------------------------------------------------------------
def _execute_text_parsed(
self,
messages: list[LLMMessage],
@@ -306,7 +288,6 @@ class StepExecutor:
last_tool_result = ""
for _ in range(max_step_iterations):
# Check step timeout
if step_timeout and start_time:
elapsed = time.monotonic() - start_time
if elapsed >= step_timeout:
@@ -331,17 +312,12 @@ class StepExecutor:
tool_calls_made.append(formatted.tool)
tool_result = self._execute_text_tool_with_events(formatted)
last_tool_result = tool_result
# Append the assistant's reasoning + action, then the observation.
# _build_observation_message handles vision sentinels so the LLM
# receives an image content block instead of raw base64 text.
messages.append({"role": "assistant", "content": answer_str})
messages.append(self._build_observation_message(tool_result))
continue
# Raw text response with no Final Answer marker — treat as done
return answer_str
# Max iterations reached — return the last tool result we accumulated
return last_tool_result
def _execute_text_tool_with_events(self, formatted: AgentAction) -> str:
@@ -429,10 +405,6 @@ class StepExecutor:
return {"input": stripped_input}
return {"input": str(tool_input)}
# ------------------------------------------------------------------
# Internal: Vision support
# ------------------------------------------------------------------
@staticmethod
def _parse_vision_sentinel(raw: str) -> tuple[str, str] | None:
"""Parse a VISION_IMAGE sentinel into (media_type, base64_data), or None."""
@@ -517,7 +489,6 @@ class StepExecutor:
accumulated_results: list[str] = []
for _ in range(max_step_iterations):
# Check step timeout
if step_timeout and start_time:
elapsed = time.monotonic() - start_time
if elapsed >= step_timeout:
@@ -541,19 +512,14 @@ class StepExecutor:
return answer.model_dump_json()
if isinstance(answer, list) and answer and is_tool_call_list(answer):
# _execute_native_tool_calls appends assistant + tool messages
# to `messages` as a side-effect, so the next LLM call will
# see the full conversation history including tool outputs.
result = self._execute_native_tool_calls(
answer, messages, tool_calls_made
)
accumulated_results.append(result)
continue
# Text answer → LLM decided the step is done
return str(answer)
# Max iterations reached — return everything we accumulated
return "\n".join(filter(None, accumulated_results))
def _execute_native_tool_calls(
@@ -599,9 +565,6 @@ class StepExecutor:
parsed = self._parse_vision_sentinel(raw_content)
if parsed:
media_type, b64_data = parsed
# Replace the sentinel with a standard image_url content block.
# Each provider's _format_messages handles conversion to
# its native format (e.g. Anthropic image blocks).
modified: LLMMessage = cast(
LLMMessage, dict(call_result.tool_message)
)
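
When a tool returns an image, the executor swaps the sentinel for a standard `image_url` content block so each provider's `_format_messages` can convert it natively. A sketch of that substitution, assuming the OpenAI-style data-URL convention; the sentinel wire format itself is internal and not reproduced here:

```python
def to_image_block(media_type: str, b64_data: str) -> dict:
    # Standard image_url content block; providers translate this into
    # their native shape (e.g. Anthropic image source blocks).
    return {
        "type": "image_url",
        "image_url": {"url": f"data:{media_type};base64,{b64_data}"},
    }

# Replace a tool result's raw text content with an image content block
# (base64 payload truncated for illustration):
tool_message = {"role": "tool", "tool_call_id": "call_1", "content": "..."}
tool_message["content"] = [to_image_block("image/png", "iVBORw0KGgo")]
```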

View File

@@ -2,16 +2,20 @@
from __future__ import annotations
from datetime import datetime
from datetime import datetime, timedelta, timezone
import glob
import json
import os
import re
import sqlite3
from typing import Any
import click
_PLACEHOLDER_RE = re.compile(r"\{([A-Za-z_][A-Za-z0-9_\-]*)}")
_SQLITE_MAGIC = b"SQLite format 3\x00"
_SELECT_ALL = """
@@ -33,6 +37,45 @@ ORDER BY rowid DESC
LIMIT 1
"""
_DELETE_OLDER_THAN = """
DELETE FROM checkpoints
WHERE created_at < ?
"""
_DELETE_KEEP_N = """
DELETE FROM checkpoints WHERE rowid NOT IN (
SELECT rowid FROM checkpoints ORDER BY rowid DESC LIMIT ?
)
"""
_COUNT_CHECKPOINTS = "SELECT COUNT(*) FROM checkpoints"
_SELECT_LIKE = """
SELECT id, created_at, json(data)
FROM checkpoints
WHERE id LIKE ?
ORDER BY rowid DESC
"""
_DEFAULT_DIR = "./.checkpoints"
_DEFAULT_DB = "./.checkpoints.db"
def _detect_location(location: str) -> str:
"""Resolve the default checkpoint location.
When the caller passes the default directory path, check whether a
SQLite database exists at the conventional ``.db`` path and prefer it.
"""
if (
location == _DEFAULT_DIR
and not os.path.exists(_DEFAULT_DIR)
and os.path.exists(_DEFAULT_DB)
):
return _DEFAULT_DB
return location
def _is_sqlite(path: str) -> bool:
"""Check if a file is a SQLite database by reading its magic bytes."""
@@ -52,13 +95,7 @@ def _parse_checkpoint_json(raw: str, source: str) -> dict[str, Any]:
nodes = data.get("event_record", {}).get("nodes", {})
event_count = len(nodes)
trigger_event = None
if nodes:
last_node = max(
nodes.values(),
key=lambda n: n.get("event", {}).get("emission_sequence") or 0,
)
trigger_event = last_node.get("event", {}).get("type")
trigger_event = data.get("trigger")
parsed_entities: list[dict[str, Any]] = []
for entity in entities:
@@ -69,23 +106,87 @@ def _parse_checkpoint_json(raw: str, source: str) -> dict[str, Any]:
"name": entity.get("name"),
"id": entity.get("id"),
}
raw_agents = entity.get("agents", [])
agents_by_id: dict[str, dict[str, Any]] = {}
parsed_agents: list[dict[str, Any]] = []
for ag in raw_agents:
agent_info: dict[str, Any] = {
"id": ag.get("id", ""),
"role": ag.get("role", ""),
"goal": ag.get("goal", ""),
}
parsed_agents.append(agent_info)
if ag.get("id"):
agents_by_id[str(ag["id"])] = agent_info
if parsed_agents:
info["agents"] = parsed_agents
if tasks:
info["tasks_completed"] = completed
info["tasks_total"] = len(tasks)
info["tasks"] = [
{
parsed_tasks: list[dict[str, Any]] = []
for t in tasks:
task_info: dict[str, Any] = {
"description": t.get("description", ""),
"completed": t.get("output") is not None,
"output": (t.get("output") or {}).get("raw", ""),
}
for t in tasks
]
task_agent = t.get("agent")
if isinstance(task_agent, dict):
task_info["agent_role"] = task_agent.get("role", "")
task_info["agent_id"] = task_agent.get("id", "")
elif isinstance(task_agent, str) and task_agent in agents_by_id:
task_info["agent_role"] = agents_by_id[task_agent].get("role", "")
task_info["agent_id"] = task_agent
parsed_tasks.append(task_info)
info["tasks"] = parsed_tasks
if entity.get("entity_type") == "flow":
completed_methods = entity.get("checkpoint_completed_methods")
if completed_methods:
info["completed_methods"] = sorted(completed_methods)
state = entity.get("checkpoint_state")
if isinstance(state, dict):
info["flow_state"] = state
parsed_entities.append(info)
inputs: dict[str, Any] = {}
for entity in entities:
cp_inputs = entity.get("checkpoint_inputs")
if isinstance(cp_inputs, dict) and cp_inputs:
inputs = dict(cp_inputs)
break
for entity in entities:
for task in entity.get("tasks", []):
for field in (
"checkpoint_original_description",
"checkpoint_original_expected_output",
):
text = task.get(field) or ""
for match in _PLACEHOLDER_RE.findall(text):
if match not in inputs:
inputs[match] = ""
for agent in entity.get("agents", []):
for field in ("role", "goal", "backstory"):
text = agent.get(field) or ""
for match in _PLACEHOLDER_RE.findall(text):
if match not in inputs:
inputs[match] = ""
branch = data.get("branch", "main")
parent_id = data.get("parent_id")
return {
"source": source,
"event_count": event_count,
"trigger": trigger_event,
"entities": parsed_entities,
"branch": branch,
"parent_id": parent_id,
"inputs": inputs,
}
@@ -125,9 +226,11 @@ def _entity_summary(entities: list[dict[str, Any]]) -> str:
def _list_json(location: str) -> list[dict[str, Any]]:
pattern = os.path.join(location, "*.json")
pattern = os.path.join(location, "**", "*.json")
results = []
for path in sorted(glob.glob(pattern), key=os.path.getmtime, reverse=True):
for path in sorted(
glob.glob(pattern, recursive=True), key=os.path.getmtime, reverse=True
):
name = os.path.basename(path)
try:
with open(path) as f:
@@ -144,8 +247,10 @@ def _list_json(location: str) -> list[dict[str, Any]]:
def _info_json_latest(location: str) -> dict[str, Any] | None:
pattern = os.path.join(location, "*.json")
files = sorted(glob.glob(pattern), key=os.path.getmtime, reverse=True)
pattern = os.path.join(location, "**", "*.json")
files = sorted(
glob.glob(pattern, recursive=True), key=os.path.getmtime, reverse=True
)
if not files:
return None
path = files[0]
@@ -189,6 +294,7 @@ def _list_sqlite(db_path: str) -> list[dict[str, Any]]:
"entities": [],
"source": checkpoint_id,
}
meta["db"] = db_path
results.append(meta)
return results
@@ -209,6 +315,8 @@ def _info_sqlite_latest(db_path: str) -> dict[str, Any] | None:
def _info_sqlite_id(db_path: str, checkpoint_id: str) -> dict[str, Any] | None:
with sqlite3.connect(db_path) as conn:
row = conn.execute(_SELECT_ONE, (checkpoint_id,)).fetchone()
if not row:
row = conn.execute(_SELECT_LIKE, (f"%{checkpoint_id}%",)).fetchone()
if not row:
return None
cid, created_at, raw = row
@@ -311,6 +419,10 @@ def _print_info(meta: dict[str, Any]) -> None:
trigger = meta.get("trigger")
if trigger:
click.echo(f"Trigger: {trigger}")
click.echo(f"Branch: {meta.get('branch', 'main')}")
parent_id = meta.get("parent_id")
if parent_id:
click.echo(f"Parent: {parent_id}")
for ent in meta.get("entities", []):
eid = str(ent.get("id", ""))[:8]
@@ -327,3 +439,294 @@ def _print_info(meta: dict[str, Any]) -> None:
if len(desc) > 70:
desc = desc[:67] + "..."
click.echo(f" {i + 1}. [{status}] {desc}")
def _resolve_checkpoint(
location: str, checkpoint_id: str | None
) -> dict[str, Any] | None:
if _is_sqlite(location):
if checkpoint_id:
return _info_sqlite_id(location, checkpoint_id)
return _info_sqlite_latest(location)
if os.path.isdir(location):
if checkpoint_id:
from crewai.state.provider.json_provider import JsonProvider
_json_provider: JsonProvider = JsonProvider()
pattern: str = os.path.join(location, "**", "*.json")
all_files: list[str] = glob.glob(pattern, recursive=True)
matches: list[str] = [
f for f in all_files if checkpoint_id in _json_provider.extract_id(f)
]
matches.sort(key=os.path.getmtime, reverse=True)
if matches:
return _info_json_file(matches[0])
return None
return _info_json_latest(location)
if os.path.isfile(location):
return _info_json_file(location)
return None
def _entity_type_from_meta(meta: dict[str, Any]) -> str:
for ent in meta.get("entities", []):
if ent.get("type") == "flow":
return "flow"
if ent.get("type") == "agent":
return "agent"
return "crew"
def resume_checkpoint(location: str, checkpoint_id: str | None) -> None:
import asyncio
meta: dict[str, Any] | None = _resolve_checkpoint(location, checkpoint_id)
if meta is None:
if checkpoint_id:
click.echo(f"Checkpoint not found: {checkpoint_id}")
else:
click.echo(f"No checkpoints found in {location}")
return
restore_path: str = meta.get("path") or meta.get("source", "")
if meta.get("db"):
restore_path = f"{meta['db']}#{meta['name']}"
click.echo(f"Resuming from: {meta.get('name', restore_path)}")
_print_info(meta)
click.echo()
from crewai.state.checkpoint_config import CheckpointConfig
config: CheckpointConfig = CheckpointConfig(restore_from=restore_path)
entity_type: str = _entity_type_from_meta(meta)
inputs: dict[str, Any] | None = meta.get("inputs") or None
if entity_type == "flow":
from crewai.flow.flow import Flow
flow = Flow.from_checkpoint(config)
result = asyncio.run(flow.kickoff_async(inputs=inputs))
elif entity_type == "agent":
from crewai.agent import Agent
agent = Agent.from_checkpoint(config)
result = asyncio.run(agent.akickoff(messages="Resume execution."))
else:
from crewai.crew import Crew
crew = Crew.from_checkpoint(config)
result = asyncio.run(crew.akickoff(inputs=inputs))
click.echo(f"\nResult: {getattr(result, 'raw', result)}")
def _task_list_from_meta(meta: dict[str, Any]) -> list[dict[str, Any]]:
tasks: list[dict[str, Any]] = []
for ent in meta.get("entities", []):
tasks.extend(
{
"entity": ent.get("name", "unnamed"),
"description": t.get("description", ""),
"completed": t.get("completed", False),
"output": t.get("output", ""),
}
for t in ent.get("tasks", [])
)
return tasks
def diff_checkpoints(location: str, id1: str, id2: str) -> None:
meta1: dict[str, Any] | None = _resolve_checkpoint(location, id1)
meta2: dict[str, Any] | None = _resolve_checkpoint(location, id2)
if meta1 is None:
click.echo(f"Checkpoint not found: {id1}")
return
if meta2 is None:
click.echo(f"Checkpoint not found: {id2}")
return
name1: str = meta1.get("name", id1)
name2: str = meta2.get("name", id2)
click.echo(f"--- {name1}")
click.echo(f"+++ {name2}")
click.echo()
fields: list[tuple[str, str]] = [
("Time", "ts"),
("Branch", "branch"),
("Trigger", "trigger"),
("Events", "event_count"),
]
for label, key in fields:
v1: str = str(meta1.get(key, ""))
v2: str = str(meta2.get(key, ""))
if v1 != v2:
click.echo(f" {label}:")
click.echo(f" - {v1}")
click.echo(f" + {v2}")
inputs1: dict[str, Any] = meta1.get("inputs", {})
inputs2: dict[str, Any] = meta2.get("inputs", {})
all_keys: list[str] = sorted(set(list(inputs1.keys()) + list(inputs2.keys())))
changed_inputs: list[tuple[str, Any, Any]] = [
(k, inputs1.get(k, ""), inputs2.get(k, ""))
for k in all_keys
if inputs1.get(k) != inputs2.get(k)
]
if changed_inputs:
click.echo("\n Inputs:")
for key, v1, v2 in changed_inputs:
click.echo(f" {key}:")
click.echo(f" - {v1}")
click.echo(f" + {v2}")
tasks1: list[dict[str, Any]] = _task_list_from_meta(meta1)
tasks2: list[dict[str, Any]] = _task_list_from_meta(meta2)
max_tasks: int = max(len(tasks1), len(tasks2))
if max_tasks == 0:
return
click.echo("\n Tasks:")
for i in range(max_tasks):
t1: dict[str, Any] | None = tasks1[i] if i < len(tasks1) else None
t2: dict[str, Any] | None = tasks2[i] if i < len(tasks2) else None
if t1 is None:
desc: str = t2["description"][:60] if t2 else ""
click.echo(f" + {i + 1}. [new] {desc}")
continue
if t2 is None:
desc = t1["description"][:60]
click.echo(f" - {i + 1}. [removed] {desc}")
continue
desc = str(t1["description"][:60])
s1: str = "done" if t1["completed"] else "pending"
s2: str = "done" if t2["completed"] else "pending"
if s1 != s2:
click.echo(f" {i + 1}. {desc}")
click.echo(f" status: {s1} -> {s2}")
out1: str = (t1.get("output") or "").strip()
out2: str = (t2.get("output") or "").strip()
if out1 != out2:
if s1 == s2:
click.echo(f" {i + 1}. {desc}")
preview1: str = (
out1[:80] + ("..." if len(out1) > 80 else "") if out1 else "(empty)"
)
preview2: str = (
out2[:80] + ("..." if len(out2) > 80 else "") if out2 else "(empty)"
)
click.echo(" output:")
click.echo(f" - {preview1}")
click.echo(f" + {preview2}")
def _parse_duration(value: str) -> timedelta:
match: re.Match[str] | None = re.match(r"^(\d+)([dhm])$", value.strip())
if not match:
raise click.BadParameter(
f"Invalid duration: {value!r}. Use format like '7d', '24h', or '30m'."
)
amount: int = int(match.group(1))
unit: str = match.group(2)
if unit == "d":
return timedelta(days=amount)
if unit == "h":
return timedelta(hours=amount)
return timedelta(minutes=amount)
def _prune_json(location: str, keep: int | None, older_than: timedelta | None) -> int:
pattern: str = os.path.join(location, "**", "*.json")
files: list[str] = sorted(
glob.glob(pattern, recursive=True), key=os.path.getmtime, reverse=True
)
if not files:
return 0
to_delete: set[str] = set()
if keep is not None and len(files) > keep:
to_delete.update(files[keep:])
if older_than is not None:
cutoff: datetime = datetime.now(timezone.utc) - older_than
for path in files:
mtime: datetime = datetime.fromtimestamp(
os.path.getmtime(path), tz=timezone.utc
)
if mtime < cutoff:
to_delete.add(path)
deleted: int = 0
for path in to_delete:
try:
os.remove(path)
deleted += 1
except OSError: # noqa: PERF203
pass
for dirpath, dirnames, filenames in os.walk(location, topdown=False):
if dirpath != location and not filenames and not dirnames:
try:
os.rmdir(dirpath)
except OSError:
pass
return deleted
def _prune_sqlite(db_path: str, keep: int | None, older_than: timedelta | None) -> int:
deleted: int = 0
with sqlite3.connect(db_path) as conn:
if older_than is not None:
cutoff: str = (datetime.now(timezone.utc) - older_than).strftime(
"%Y%m%dT%H%M%S"
)
cursor: sqlite3.Cursor = conn.execute(_DELETE_OLDER_THAN, (cutoff,))
deleted += cursor.rowcount
if keep is not None:
cursor = conn.execute(_DELETE_KEEP_N, (keep,))
deleted += cursor.rowcount
conn.commit()
return deleted
def prune_checkpoints(
location: str, keep: int | None, older_than: str | None, dry_run: bool = False
) -> None:
if keep is None and older_than is None:
click.echo("Specify --keep N and/or --older-than DURATION (e.g. 7d, 24h)")
return
duration: timedelta | None = _parse_duration(older_than) if older_than else None
deleted: int
if _is_sqlite(location):
if dry_run:
with sqlite3.connect(location) as conn:
total: int = conn.execute(_COUNT_CHECKPOINTS).fetchone()[0]
click.echo(f"Would prune from {total} checkpoint(s) in {location}")
return
deleted = _prune_sqlite(location, keep, duration)
elif os.path.isdir(location):
if dry_run:
files: list[str] = glob.glob(
os.path.join(location, "**", "*.json"), recursive=True
)
click.echo(f"Would prune from {len(files)} checkpoint(s) in {location}")
return
deleted = _prune_json(location, keep, duration)
else:
click.echo(f"Not a directory or SQLite database: {location}")
return
click.echo(f"Pruned {deleted} checkpoint(s) from {location}")

File diff suppressed because it is too large.

View File

@@ -18,6 +18,7 @@ from crewai.cli.install_crew import install_crew
from crewai.cli.kickoff_flow import kickoff_flow
from crewai.cli.organization.main import OrganizationCommand
from crewai.cli.plot_flow import plot_flow
from crewai.cli.remote_template.main import TemplateCommand
from crewai.cli.replay_from_task import replay_task_command
from crewai.cli.reset_memories_command import reset_memories_command
from crewai.cli.run_crew import run_crew
@@ -392,10 +393,15 @@ def deploy() -> None:
@deploy.command(name="create")
@click.option("-y", "--yes", is_flag=True, help="Skip the confirmation prompt")
def deploy_create(yes: bool) -> None:
@click.option(
"--skip-validate",
is_flag=True,
help="Skip the pre-deploy validation checks.",
)
def deploy_create(yes: bool, skip_validate: bool) -> None:
"""Create a Crew deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.create_crew(yes)
deploy_cmd.create_crew(yes, skip_validate=skip_validate)
@deploy.command(name="list")
@@ -407,10 +413,28 @@ def deploy_list() -> None:
@deploy.command(name="push")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_push(uuid: str | None) -> None:
@click.option(
"--skip-validate",
is_flag=True,
help="Skip the pre-deploy validation checks.",
)
def deploy_push(uuid: str | None, skip_validate: bool) -> None:
"""Deploy the Crew."""
deploy_cmd = DeployCommand()
deploy_cmd.deploy(uuid=uuid)
deploy_cmd.deploy(uuid=uuid, skip_validate=skip_validate)
@deploy.command(name="validate")
def deploy_validate() -> None:
"""Validate the current project against common deployment failures.
Runs the same pre-deploy checks that `crewai deploy create` and
`crewai deploy push` run automatically, without contacting the platform.
Exits non-zero if any blocking issues are found.
"""
from crewai.cli.deploy.validate import run_validate_command
run_validate_command()
@deploy.command(name="status")
@@ -473,6 +497,33 @@ def tool_publish(is_public: bool, force: bool) -> None:
tool_cmd.publish(is_public, force)
@crewai.group()
def template() -> None:
"""Browse and install project templates."""
@template.command(name="list")
def template_list() -> None:
"""List available templates and select one to install."""
template_cmd = TemplateCommand()
template_cmd.list_templates()
@template.command(name="add")
@click.argument("name")
@click.option(
"-o",
"--output-dir",
type=str,
default=None,
help="Directory name for the template (defaults to template name)",
)
def template_add(name: str, output_dir: str | None) -> None:
"""Add a template to the current directory."""
template_cmd = TemplateCommand()
template_cmd.add_template(name, output_dir)
@crewai.group()
def flow() -> None:
"""Flow related commands."""
@@ -793,6 +844,9 @@ def traces_status() -> None:
@click.pass_context
def checkpoint(ctx: click.Context, location: str) -> None:
"""Browse and inspect checkpoints. Launches a TUI when called without a subcommand."""
from crewai.cli.checkpoint_cli import _detect_location
location = _detect_location(location)
ctx.ensure_object(dict)
ctx.obj["location"] = location
if ctx.invoked_subcommand is None:
@@ -805,18 +859,61 @@ def checkpoint(ctx: click.Context, location: str) -> None:
@click.argument("location", default="./.checkpoints")
def checkpoint_list(location: str) -> None:
"""List checkpoints in a directory."""
from crewai.cli.checkpoint_cli import list_checkpoints
from crewai.cli.checkpoint_cli import _detect_location, list_checkpoints
list_checkpoints(location)
list_checkpoints(_detect_location(location))
@checkpoint.command("info")
@click.argument("path", default="./.checkpoints")
def checkpoint_info(path: str) -> None:
"""Show details of a checkpoint. Pass a file or directory for latest."""
from crewai.cli.checkpoint_cli import info_checkpoint
from crewai.cli.checkpoint_cli import _detect_location, info_checkpoint
info_checkpoint(path)
info_checkpoint(_detect_location(path))
@checkpoint.command("resume")
@click.argument("checkpoint_id", required=False, default=None)
@click.pass_context
def checkpoint_resume(ctx: click.Context, checkpoint_id: str | None) -> None:
"""Resume from a checkpoint. Defaults to the most recent."""
from crewai.cli.checkpoint_cli import resume_checkpoint
resume_checkpoint(ctx.obj["location"], checkpoint_id)
@checkpoint.command("diff")
@click.argument("id1")
@click.argument("id2")
@click.pass_context
def checkpoint_diff(ctx: click.Context, id1: str, id2: str) -> None:
"""Compare two checkpoints side-by-side."""
from crewai.cli.checkpoint_cli import diff_checkpoints
diff_checkpoints(ctx.obj["location"], id1, id2)
@checkpoint.command("prune")
@click.option(
"--keep", type=int, default=None, help="Keep the N most recent checkpoints."
)
@click.option(
"--older-than",
default=None,
help="Remove checkpoints older than duration (e.g. 7d, 24h, 30m).",
)
@click.option(
"--dry-run", is_flag=True, help="Show what would be pruned without deleting."
)
@click.pass_context
def checkpoint_prune(
ctx: click.Context, keep: int | None, older_than: str | None, dry_run: bool
) -> None:
"""Remove old checkpoints."""
from crewai.cli.checkpoint_cli import prune_checkpoints
prune_checkpoints(ctx.obj["location"], keep, older_than, dry_run)
if __name__ == "__main__":
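
The new subcommands can be exercised without a shell via click's test runner; the `crewai.cli.cli` import path is assumed from the project layout:

```python
from click.testing import CliRunner

from crewai.cli.cli import crewai  # import path assumed

runner = CliRunner()
print(runner.invoke(crewai, ["checkpoint", "list"]).output)
print(runner.invoke(crewai, ["checkpoint", "prune", "--keep", "5", "--dry-run"]).output)
print(runner.invoke(crewai, ["deploy", "validate"]).output)
print(runner.invoke(crewai, ["template", "list"]).output)
```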

View File

@@ -13,7 +13,6 @@ from packaging import version
import tomli
from crewai.cli.utils import read_toml
from crewai.cli.version import get_crewai_version
from crewai.crew import Crew
from crewai.llm import LLM
from crewai.llms.base_llm import BaseLLM
@@ -21,6 +20,7 @@ from crewai.types.crew_chat import ChatInputField, ChatInputs
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.printer import PRINTER
from crewai.utilities.types import LLMMessage
from crewai.utilities.version import get_crewai_version
MIN_REQUIRED_VERSION: Final[Literal["0.98.0"]] = "0.98.0"

View File

@@ -4,12 +4,35 @@ from rich.console import Console
from crewai.cli import git
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli.deploy.validate import validate_project
from crewai.cli.utils import fetch_and_json_env_file, get_project_name
console = Console()
def _run_predeploy_validation(skip_validate: bool) -> bool:
"""Run pre-deploy validation unless skipped.
Returns True if deployment should proceed, False if it should abort.
"""
if skip_validate:
console.print(
"[yellow]Skipping pre-deploy validation (--skip-validate).[/yellow]"
)
return True
console.print("Running pre-deploy validation...", style="bold blue")
validator = validate_project()
if not validator.ok:
console.print(
"\n[bold red]Pre-deploy validation failed. "
"Fix the issues above or re-run with --skip-validate.[/bold red]"
)
return False
return True
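# Behavior sketch for the gate above (validate_project comes from
# crewai.cli.deploy.validate, imported at the top of this module):
_run_predeploy_validation(skip_validate=True)  # prints a skip notice, returns True
proceed = _run_predeploy_validation(skip_validate=False)  # runs the full suite
# proceed is False as soon as any ERROR-severity finding is reported.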
class DeployCommand(BaseCommand, PlusAPIMixin):
"""
A class to handle deployment-related operations for CrewAI projects.
@@ -60,13 +83,16 @@ class DeployCommand(BaseCommand, PlusAPIMixin):
f"{log_message['timestamp']} - {log_message['level']}: {log_message['message']}"
)
def deploy(self, uuid: str | None = None) -> None:
def deploy(self, uuid: str | None = None, skip_validate: bool = False) -> None:
"""
Deploy a crew using either UUID or project name.
Args:
uuid (Optional[str]): The UUID of the crew to deploy.
skip_validate (bool): Skip pre-deploy validation checks.
"""
if not _run_predeploy_validation(skip_validate):
return
self._telemetry.start_deployment_span(uuid)
console.print("Starting deployment...", style="bold blue")
if uuid:
@@ -80,10 +106,16 @@ class DeployCommand(BaseCommand, PlusAPIMixin):
self._validate_response(response)
self._display_deployment_info(response.json())
def create_crew(self, confirm: bool = False) -> None:
def create_crew(self, confirm: bool = False, skip_validate: bool = False) -> None:
"""
Create a new crew deployment.
Args:
confirm (bool): Whether to skip the interactive confirmation prompt.
skip_validate (bool): Skip pre-deploy validation checks.
"""
if not _run_predeploy_validation(skip_validate):
return
self._telemetry.create_crew_deployment_span()
console.print("Creating deployment...", style="bold blue")
env_vars = fetch_and_json_env_file()

View File

@@ -0,0 +1,845 @@
"""Pre-deploy validation for CrewAI projects.
Catches locally what a deploy would reject at build or runtime so users
don't burn deployment attempts on fixable project-structure problems.
Each check is grouped into one of:
- ERROR: will block a deployment; validator exits non-zero.
- WARNING: the project may still deploy, but the finding almost always
indicates a deployment bug; printed but does not block.
The individual checks mirror the categories observed in production
deployment-failure logs:
1. pyproject.toml present with ``[project].name``
2. lockfile (``uv.lock`` or ``poetry.lock``) present and not stale
3. package directory at ``src/<package>/`` exists (no empty name, no egg-info)
4. standard crew files: ``crew.py``, ``config/agents.yaml``, ``config/tasks.yaml``
5. flow entrypoint: ``main.py`` with a Flow subclass
6. hatch wheel target resolves (packages = [...] or default dir matches name)
7. crew/flow module imports cleanly (catches ``@CrewBase not found``,
``No Flow subclass found``, provider import errors)
8. environment variables referenced in code vs ``.env`` / deployment env
9. installed crewai vs lockfile pin (catches missing-attribute failures from
stale pins)
"""
from __future__ import annotations
from dataclasses import dataclass
from enum import Enum
import json
import logging
import os
from pathlib import Path
import re
import shutil
import subprocess
import sys
from typing import Any
from rich.console import Console
from crewai.cli.utils import parse_toml
console = Console()
logger = logging.getLogger(__name__)
class Severity(str, Enum):
"""Severity of a validation finding."""
ERROR = "error"
WARNING = "warning"
@dataclass
class ValidationResult:
"""A single finding from a validation check.
Attributes:
severity: whether this blocks deploy or is advisory.
code: stable short identifier, used in tests and docs
(e.g. ``missing_pyproject``, ``stale_lockfile``).
title: one-line summary shown to the user.
detail: optional multi-line explanation.
hint: optional remediation suggestion.
"""
severity: Severity
code: str
title: str
detail: str = ""
hint: str = ""
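# Illustrative finding (the code value mirrors one emitted later in this module):
finding = ValidationResult(
    severity=Severity.WARNING,
    code="stale_lockfile",
    title="uv.lock is older than pyproject.toml",
    hint="Run `uv lock` and commit the result.",
)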
# Maps known provider env var names → label used in hint messages.
_KNOWN_API_KEY_HINTS: dict[str, str] = {
"OPENAI_API_KEY": "OpenAI",
"ANTHROPIC_API_KEY": "Anthropic",
"GOOGLE_API_KEY": "Google",
"GEMINI_API_KEY": "Gemini",
"AZURE_OPENAI_API_KEY": "Azure OpenAI",
"AZURE_API_KEY": "Azure",
"AWS_ACCESS_KEY_ID": "AWS",
"AWS_SECRET_ACCESS_KEY": "AWS",
"COHERE_API_KEY": "Cohere",
"GROQ_API_KEY": "Groq",
"MISTRAL_API_KEY": "Mistral",
"TAVILY_API_KEY": "Tavily",
"SERPER_API_KEY": "Serper",
"SERPLY_API_KEY": "Serply",
"PERPLEXITY_API_KEY": "Perplexity",
"DEEPSEEK_API_KEY": "DeepSeek",
"OPENROUTER_API_KEY": "OpenRouter",
"FIRECRAWL_API_KEY": "Firecrawl",
"EXA_API_KEY": "Exa",
"BROWSERBASE_API_KEY": "Browserbase",
}
def normalize_package_name(project_name: str) -> str:
"""Normalize a pyproject project.name into a Python package directory name.
Mirrors the rules in ``crewai.cli.create_crew.create_crew`` so the
validator agrees with the scaffolder about where ``src/<pkg>/`` should
live.
"""
folder = project_name.replace(" ", "_").replace("-", "_").lower()
return re.sub(r"[^a-zA-Z0-9_]", "", folder)
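# Normalization examples:
normalize_package_name("My Crew-Project")  # -> "my_crew_project"
normalize_package_name("agent.io demo")    # -> "agentio_demo" (the dot is stripped)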
class DeployValidator:
"""Runs the full pre-deploy validation suite against a project directory."""
def __init__(self, project_root: Path | None = None) -> None:
self.project_root: Path = (project_root or Path.cwd()).resolve()
self.results: list[ValidationResult] = []
self._pyproject: dict[str, Any] | None = None
self._project_name: str | None = None
self._package_name: str | None = None
self._package_dir: Path | None = None
self._is_flow: bool = False
def _add(
self,
severity: Severity,
code: str,
title: str,
detail: str = "",
hint: str = "",
) -> None:
self.results.append(
ValidationResult(
severity=severity,
code=code,
title=title,
detail=detail,
hint=hint,
)
)
@property
def errors(self) -> list[ValidationResult]:
return [r for r in self.results if r.severity is Severity.ERROR]
@property
def warnings(self) -> list[ValidationResult]:
return [r for r in self.results if r.severity is Severity.WARNING]
@property
def ok(self) -> bool:
return not self.errors
def run(self) -> list[ValidationResult]:
"""Run all checks. Later checks are skipped when earlier ones make
them impossible (e.g. no pyproject.toml → no lockfile check)."""
if not self._check_pyproject():
return self.results
self._check_lockfile()
if not self._check_package_dir():
self._check_hatch_wheel_target()
return self.results
if self._is_flow:
self._check_flow_entrypoint()
else:
self._check_crew_entrypoint()
self._check_config_yamls()
self._check_hatch_wheel_target()
self._check_module_imports()
self._check_env_vars()
self._check_version_vs_lockfile()
return self.results
def _check_pyproject(self) -> bool:
pyproject_path = self.project_root / "pyproject.toml"
if not pyproject_path.exists():
self._add(
Severity.ERROR,
"missing_pyproject",
"Cannot find pyproject.toml",
detail=(
f"Expected pyproject.toml at {pyproject_path}. "
"CrewAI projects must be installable Python packages."
),
hint="Run `crewai create crew <name>` to scaffold a valid project layout.",
)
return False
try:
self._pyproject = parse_toml(pyproject_path.read_text())
except Exception as e:
self._add(
Severity.ERROR,
"invalid_pyproject",
"pyproject.toml is not valid TOML",
detail=str(e),
)
return False
project = self._pyproject.get("project") or {}
name = project.get("name")
if not isinstance(name, str) or not name.strip():
self._add(
Severity.ERROR,
"missing_project_name",
"pyproject.toml is missing [project].name",
detail=(
"Without a project name the platform cannot resolve your "
"package directory (this produces errors like "
"'Cannot find src//crew.py')."
),
hint='Set a `name = "..."` field under `[project]` in pyproject.toml.',
)
return False
self._project_name = name
self._package_name = normalize_package_name(name)
self._is_flow = (self._pyproject.get("tool") or {}).get("crewai", {}).get(
"type"
) == "flow"
return True
def _check_lockfile(self) -> None:
uv_lock = self.project_root / "uv.lock"
poetry_lock = self.project_root / "poetry.lock"
pyproject = self.project_root / "pyproject.toml"
if not uv_lock.exists() and not poetry_lock.exists():
self._add(
Severity.ERROR,
"missing_lockfile",
"Expected to find at least one of these files: uv.lock or poetry.lock",
hint=(
"Run `uv lock` (recommended) or `poetry lock` in your project "
"directory, commit the lockfile, then redeploy."
),
)
return
lockfile = uv_lock if uv_lock.exists() else poetry_lock
try:
if lockfile.stat().st_mtime < pyproject.stat().st_mtime:
self._add(
Severity.WARNING,
"stale_lockfile",
f"{lockfile.name} is older than pyproject.toml",
detail=(
"Your lockfile may not reflect recent dependency changes. "
"The platform resolves from the lockfile, so deployed "
"dependencies may differ from local."
),
hint="Run `uv lock` (or `poetry lock`) and commit the result.",
)
except OSError:
pass
def _check_package_dir(self) -> bool:
if self._package_name is None:
return False
src_dir = self.project_root / "src"
if not src_dir.is_dir():
self._add(
Severity.ERROR,
"missing_src_dir",
"Missing src/ directory",
detail=(
"CrewAI deployments expect a src-layout project: "
f"src/{self._package_name}/crew.py (or main.py for flows)."
),
hint="Run `crewai create crew <name>` to see the expected layout.",
)
return False
package_dir = src_dir / self._package_name
if not package_dir.is_dir():
siblings = [
p.name
for p in src_dir.iterdir()
if p.is_dir() and not p.name.endswith(".egg-info")
]
egg_info = [
p.name for p in src_dir.iterdir() if p.name.endswith(".egg-info")
]
hint_parts = [
f'Create src/{self._package_name}/ to match [project].name = "{self._project_name}".'
]
if siblings:
hint_parts.append(
f"Found other package directories: {', '.join(siblings)}. "
f"Either rename one to '{self._package_name}' or update [project].name."
)
if egg_info:
hint_parts.append(
f"Delete stale build artifacts: {', '.join(egg_info)} "
"(these confuse the platform's package discovery)."
)
self._add(
Severity.ERROR,
"missing_package_dir",
f"Cannot find src/{self._package_name}/",
detail=(
"The platform looks for your crew source under "
"src/<package_name>/, derived from [project].name."
),
hint=" ".join(hint_parts),
)
return False
for p in src_dir.iterdir():
if p.name.endswith(".egg-info"):
self._add(
Severity.WARNING,
"stale_egg_info",
f"Stale build artifact in src/: {p.name}",
detail=(
".egg-info directories can be mistaken for your package "
"and cause 'Cannot find src/<name>.egg-info/crew.py' errors."
),
hint=f"Delete {p} and add `*.egg-info/` to .gitignore.",
)
self._package_dir = package_dir
return True
def _check_crew_entrypoint(self) -> None:
if self._package_dir is None:
return
crew_py = self._package_dir / "crew.py"
if not crew_py.is_file():
self._add(
Severity.ERROR,
"missing_crew_py",
f"Cannot find {crew_py.relative_to(self.project_root)}",
detail=(
"Standard crew projects must define a Crew class decorated "
"with @CrewBase inside crew.py."
),
hint=(
"Create crew.py with an @CrewBase-annotated class, or set "
'`[tool.crewai] type = "flow"` in pyproject.toml if this is a flow.'
),
)
def _check_config_yamls(self) -> None:
if self._package_dir is None:
return
config_dir = self._package_dir / "config"
if not config_dir.is_dir():
self._add(
Severity.ERROR,
"missing_config_dir",
f"Cannot find {config_dir.relative_to(self.project_root)}",
hint="Create a config/ directory with agents.yaml and tasks.yaml.",
)
return
for yaml_name in ("agents.yaml", "tasks.yaml"):
yaml_path = config_dir / yaml_name
if not yaml_path.is_file():
self._add(
Severity.ERROR,
f"missing_{yaml_name.replace('.', '_')}",
f"Cannot find {yaml_path.relative_to(self.project_root)}",
detail=(
"CrewAI loads agent and task config from these files; "
"missing them causes empty-config warnings and runtime crashes."
),
)
def _check_flow_entrypoint(self) -> None:
if self._package_dir is None:
return
main_py = self._package_dir / "main.py"
if not main_py.is_file():
self._add(
Severity.ERROR,
"missing_flow_main",
f"Cannot find {main_py.relative_to(self.project_root)}",
detail=(
"Flow projects must define a Flow subclass in main.py. "
'This project has `[tool.crewai] type = "flow"` set.'
),
hint="Create main.py with a `class MyFlow(Flow[...])`.",
)
def _check_hatch_wheel_target(self) -> None:
if not self._pyproject:
return
build_system = self._pyproject.get("build-system") or {}
backend = build_system.get("build-backend", "")
if "hatchling" not in backend:
return
hatch_wheel = (
(self._pyproject.get("tool") or {})
.get("hatch", {})
.get("build", {})
.get("targets", {})
.get("wheel", {})
)
if hatch_wheel.get("packages") or hatch_wheel.get("only-include"):
return
if self._package_dir and self._package_dir.is_dir():
return
self._add(
Severity.ERROR,
"hatch_wheel_target_missing",
"Hatchling cannot determine which files to ship",
detail=(
"Your pyproject uses hatchling but has no "
"[tool.hatch.build.targets.wheel] configuration and no "
"directory matching your project name."
),
hint=(
"Add:\n"
" [tool.hatch.build.targets.wheel]\n"
f' packages = ["src/{self._package_name}"]'
),
)
def _check_module_imports(self) -> None:
"""Import the user's crew/flow via `uv run` so the check sees the same
package versions as `crewai run` would. Result is reported as JSON on
the subprocess's stdout."""
script = (
"import json, sys, traceback, os\n"
"os.chdir(sys.argv[1])\n"
"try:\n"
" from crewai.cli.utils import get_crews, get_flows\n"
" is_flow = sys.argv[2] == 'flow'\n"
" if is_flow:\n"
" instances = get_flows()\n"
" kind = 'flow'\n"
" else:\n"
" instances = get_crews()\n"
" kind = 'crew'\n"
" print(json.dumps({'ok': True, 'kind': kind, 'count': len(instances)}))\n"
"except BaseException as e:\n"
" print(json.dumps({\n"
" 'ok': False,\n"
" 'error_type': type(e).__name__,\n"
" 'error': str(e),\n"
" 'traceback': traceback.format_exc(),\n"
" }))\n"
)
uv_path = shutil.which("uv")
if uv_path is None:
self._add(
Severity.WARNING,
"uv_not_found",
"Skipping import check: `uv` not installed",
hint="Install uv: https://docs.astral.sh/uv/",
)
return
try:
proc = subprocess.run( # noqa: S603 - args constructed from trusted inputs
[
uv_path,
"run",
"python",
"-c",
script,
str(self.project_root),
"flow" if self._is_flow else "crew",
],
cwd=self.project_root,
capture_output=True,
text=True,
timeout=120,
check=False,
)
except subprocess.TimeoutExpired:
self._add(
Severity.ERROR,
"import_timeout",
"Importing your crew/flow module timed out after 120s",
detail=(
"User code may be making network calls or doing heavy work "
"at import time. Move that work into agent methods."
),
)
return
# The payload is the last JSON object on stdout; user code may print
# other lines before it.
payload: dict[str, Any] | None = None
for line in reversed(proc.stdout.splitlines()):
line = line.strip()
if line.startswith("{") and line.endswith("}"):
try:
payload = json.loads(line)
break
except json.JSONDecodeError:
continue
if payload is None:
self._add(
Severity.ERROR,
"import_failed",
"Could not import your crew/flow module",
detail=(proc.stderr or proc.stdout or "").strip()[:1500],
hint="Run `crewai run` locally first to reproduce the error.",
)
return
if payload.get("ok"):
if payload.get("count", 0) == 0:
kind = payload.get("kind", "crew")
if kind == "flow":
self._add(
Severity.ERROR,
"no_flow_subclass",
"No Flow subclass found in the module",
hint=(
"main.py must define a class extending "
"`crewai.flow.Flow`, instantiable with no arguments."
),
)
else:
self._add(
Severity.ERROR,
"no_crewbase_class",
"Crew class annotated with @CrewBase not found",
hint=(
"Decorate your crew class with @CrewBase from "
"crewai.project (see `crewai create crew` template)."
),
)
return
err_msg = str(payload.get("error", ""))
err_type = str(payload.get("error_type", "Exception"))
tb = str(payload.get("traceback", ""))
self._classify_import_error(err_type, err_msg, tb)
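# Shape of the probe's stdout as handled above -- user code may print noise
# before the payload, so the last JSON-looking line wins (sketch; the real
# loop also tolerates lines that fail json.loads):
raw = 'loading tools...\n{"ok": true, "kind": "crew", "count": 1}'
payload = next(
    json.loads(line.strip())
    for line in reversed(raw.splitlines())
    if line.strip().startswith("{") and line.strip().endswith("}")
)
# payload == {"ok": True, "kind": "crew", "count": 1}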
def _classify_import_error(self, err_type: str, err_msg: str, tb: str) -> None:
"""Turn a raw import-time exception into a user-actionable finding."""
# Must be checked before the generic "native provider" branch below:
# the extras-missing message contains the same phrase. Providers
# format the install command as plain text (`to install: uv add
# "crewai[extra]"`); also tolerate backtick-delimited variants.
m = re.search(
r"(?P<pkg>[A-Za-z0-9_ -]+?)\s+native provider not available"
r".*?to install:\s*`?(?P<cmd>uv add [\"']crewai\[[^\]]+\][\"'])`?",
err_msg,
)
if m:
self._add(
Severity.ERROR,
"missing_provider_extra",
f"{m.group('pkg').strip()} provider extra not installed",
hint=f"Run: {m.group('cmd')}",
)
return
# crewai.llm.LLM.__new__ wraps provider init errors as
# ImportError("Error importing native provider: ...").
if "Error importing native provider" in err_msg or "native provider" in err_msg:
missing_key = self._extract_missing_api_key(err_msg)
if missing_key:
provider = _KNOWN_API_KEY_HINTS.get(missing_key, missing_key)
self._add(
Severity.WARNING,
"llm_init_missing_key",
f"LLM is constructed at import time but {missing_key} is not set",
detail=(
f"Your crew instantiates a {provider} LLM during module "
"load (e.g. in a class field default or @crew method). "
f"The {provider} provider currently requires {missing_key} "
"at construction time, so this will fail on the platform "
"unless the key is set in your deployment environment."
),
hint=(
f"Add {missing_key} to your deployment's Environment "
"Variables before deploying, or move LLM construction "
"inside agent methods so it runs lazily."
),
)
return
self._add(
Severity.ERROR,
"llm_provider_init_failed",
"LLM native provider failed to initialize",
detail=err_msg,
hint=(
"Check your LLM(model=...) configuration and provider-specific "
"extras (e.g. `uv add 'crewai[azure-ai-inference]'` for Azure)."
),
)
return
if err_type == "KeyError":
key = err_msg.strip("'\"")
if key in _KNOWN_API_KEY_HINTS or key.endswith("_API_KEY"):
self._add(
Severity.WARNING,
"env_var_read_at_import",
f"{key} is read at import time via os.environ[...]",
detail=(
"Using os.environ[...] (rather than os.getenv(...)) "
"at module scope crashes the build if the key isn't set."
),
hint=(
f"Either add {key} as a deployment env var, or switch "
"to os.getenv() and move the access inside agent methods."
),
)
return
if "Crew class annotated with @CrewBase not found" in err_msg:
self._add(
Severity.ERROR,
"no_crewbase_class",
"Crew class annotated with @CrewBase not found",
detail=err_msg,
)
return
if "No Flow subclass found" in err_msg:
self._add(
Severity.ERROR,
"no_flow_subclass",
"No Flow subclass found in the module",
detail=err_msg,
)
return
if (
err_type == "AttributeError"
and "has no attribute '_load_response_format'" in err_msg
):
self._add(
Severity.ERROR,
"stale_crewai_pin",
"Your lockfile pins a crewai version missing `_load_response_format`",
detail=err_msg,
hint=(
"Run `uv lock --upgrade-package crewai` (or `poetry update crewai`) "
"to pin a newer release."
),
)
return
if "pydantic" in tb.lower() or "validation error" in err_msg.lower():
self._add(
Severity.ERROR,
"pydantic_validation_error",
"Pydantic validation failed while loading your crew",
detail=err_msg[:800],
hint=(
"Check agent/task configuration fields. `crewai run` locally "
"will show the full traceback."
),
)
return
self._add(
Severity.ERROR,
"import_failed",
f"Importing your crew failed: {err_type}",
detail=err_msg[:800],
hint="Run `crewai run` locally to see the full traceback.",
)
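# Classification sketch: a provider key error surfaces as a WARNING finding
# (the message text is illustrative):
probe = DeployValidator(project_root=Path("."))
probe._classify_import_error(
    "ImportError",
    "Error importing native provider: ANTHROPIC_API_KEY is required",
    tb="",
)
probe.results[0].code  # -> "llm_init_missing_key" (severity WARNING)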
@staticmethod
def _extract_missing_api_key(err_msg: str) -> str | None:
"""Pull 'FOO_API_KEY' out of '... FOO_API_KEY is required ...'."""
m = re.search(r"([A-Z][A-Z0-9_]*_API_KEY)\s+is required", err_msg)
if m:
return m.group(1)
m = re.search(r"['\"]([A-Z][A-Z0-9_]*_API_KEY)['\"]", err_msg)
if m:
return m.group(1)
return None
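# The two fallback patterns above in action:
DeployValidator._extract_missing_api_key("GROQ_API_KEY is required")    # -> "GROQ_API_KEY"
DeployValidator._extract_missing_api_key("KeyError: 'COHERE_API_KEY'")  # -> "COHERE_API_KEY"
DeployValidator._extract_missing_api_key("something else broke")        # -> None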
def _check_env_vars(self) -> None:
"""Warn about env vars referenced in user code but missing locally.
Best-effort only — the platform sets vars server-side, so we never error.
"""
if not self._package_dir:
return
referenced: set[str] = set()
pattern = re.compile(
r"""(?x)
(?:os\.environ\s*(?:\[\s*|\.get\s*\(\s*)
|os\.getenv\s*\(\s*
|getenv\s*\(\s*)
['"]([A-Z][A-Z0-9_]*)['"]
"""
)
for path in self._package_dir.rglob("*.py"):
try:
text = path.read_text(encoding="utf-8", errors="ignore")
except OSError:
continue
referenced.update(pattern.findall(text))
for path in self._package_dir.rglob("*.yaml"):
try:
text = path.read_text(encoding="utf-8", errors="ignore")
except OSError:
continue
referenced.update(re.findall(r"\$\{?([A-Z][A-Z0-9_]+)\}?", text))
env_file = self.project_root / ".env"
env_keys: set[str] = set()
if env_file.exists():
for line in env_file.read_text(errors="ignore").splitlines():
line = line.strip()
if not line or line.startswith("#") or "=" not in line:
continue
env_keys.add(line.split("=", 1)[0].strip())
missing_known: list[str] = sorted(
var
for var in referenced
if var in _KNOWN_API_KEY_HINTS
and var not in env_keys
and var not in os.environ
)
if missing_known:
self._add(
Severity.WARNING,
"env_vars_not_in_dotenv",
f"{len(missing_known)} referenced API key(s) not in .env",
detail=(
"These env vars are referenced in your source but not set "
f"locally: {', '.join(missing_known)}. Deploys will fail "
"unless they are added to the deployment's Environment "
"Variables in the CrewAI dashboard."
),
)
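# What the source-scanning regex above picks up (same pattern, restated in
# non-verbose form so the sketch is self-contained):
_pat = re.compile(
    r"(?:os\.environ\s*(?:\[\s*|\.get\s*\(\s*)|os\.getenv\s*\(\s*|getenv\s*\(\s*)"
    r"['\"]([A-Z][A-Z0-9_]*)['\"]"
)
sample = 'k = os.environ["OPENAI_API_KEY"]\nv = os.getenv("SERPER_API_KEY", "")'
_pat.findall(sample)  # -> ["OPENAI_API_KEY", "SERPER_API_KEY"]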
def _check_version_vs_lockfile(self) -> None:
"""Warn when the lockfile pins a crewai release older than 1.13.0,
which is where ``_load_response_format`` was introduced.
"""
uv_lock = self.project_root / "uv.lock"
poetry_lock = self.project_root / "poetry.lock"
lockfile = (
uv_lock
if uv_lock.exists()
else poetry_lock
if poetry_lock.exists()
else None
)
if lockfile is None:
return
try:
text = lockfile.read_text(errors="ignore")
except OSError:
return
m = re.search(
r'name\s*=\s*"crewai"\s*\nversion\s*=\s*"([^"]+)"',
text,
)
if not m:
return
locked = m.group(1)
try:
from packaging.version import Version
if Version(locked) < Version("1.13.0"):
self._add(
Severity.WARNING,
"old_crewai_pin",
f"Lockfile pins crewai=={locked} (older than 1.13.0)",
detail=(
"Older pinned versions are missing API surface the "
"platform builder expects (e.g. `_load_response_format`)."
),
hint="Run `uv lock --upgrade-package crewai` and redeploy.",
)
except Exception as e:
logger.debug("Could not parse crewai pin from lockfile: %s", e)
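# The lockfile probe above, against a minimal uv.lock fragment (sketch):
from packaging.version import Version
fragment = '[[package]]\nname = "crewai"\nversion = "1.12.4"\n'
pin = re.search(r'name\s*=\s*"crewai"\s*\nversion\s*=\s*"([^"]+)"', fragment)
assert pin is not None
Version(pin.group(1)) < Version("1.13.0")  # -> True, triggers old_crewai_pin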
def render_report(results: list[ValidationResult]) -> None:
"""Pretty-print results to the shared rich console."""
if not results:
console.print("[bold green]Pre-deploy validation passed.[/bold green]")
return
errors = [r for r in results if r.severity is Severity.ERROR]
warnings = [r for r in results if r.severity is Severity.WARNING]
for result in errors:
console.print(f"[bold red]ERROR[/bold red] [{result.code}] {result.title}")
if result.detail:
console.print(f" {result.detail}")
if result.hint:
console.print(f" [dim]hint:[/dim] {result.hint}")
for result in warnings:
console.print(
f"[bold yellow]WARNING[/bold yellow] [{result.code}] {result.title}"
)
if result.detail:
console.print(f" {result.detail}")
if result.hint:
console.print(f" [dim]hint:[/dim] {result.hint}")
summary_parts: list[str] = []
if errors:
summary_parts.append(f"[bold red]{len(errors)} error(s)[/bold red]")
if warnings:
summary_parts.append(f"[bold yellow]{len(warnings)} warning(s)[/bold yellow]")
console.print(f"\n{' / '.join(summary_parts)}")
def validate_project(project_root: Path | None = None) -> DeployValidator:
"""Entrypoint: run validation, render results, return the validator.
The caller inspects ``validator.ok`` to decide whether to proceed with a
deploy.
"""
validator = DeployValidator(project_root=project_root)
validator.run()
render_report(validator.results)
return validator
def run_validate_command() -> None:
"""Implementation of `crewai deploy validate`."""
validator = validate_project()
if not validator.ok:
sys.exit(1)
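# Programmatic usage sketch (import path matches what deploy/main.py uses;
# the project directory is illustrative):
from pathlib import Path
from crewai.cli.deploy.validate import validate_project

validator = validate_project(Path("./my_crew_project"))  # prints the report
if not validator.ok:
    for finding in validator.errors:
        print(finding.code, finding.title)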

View File

@@ -7,7 +7,7 @@ from rich.console import Console
from crewai.cli.authentication.main import Oauth2Settings, ProviderFactory
from crewai.cli.command import BaseCommand
from crewai.cli.settings.main import SettingsCommand
from crewai.cli.version import get_crewai_version
from crewai.utilities.version import get_crewai_version
console = Console()

View File

@@ -6,7 +6,7 @@ import httpx
from crewai.cli.config import Settings
from crewai.cli.constants import DEFAULT_CREWAI_ENTERPRISE_URL
from crewai.cli.version import get_crewai_version
from crewai.utilities.version import get_crewai_version
class PlusAPI:

View File

@@ -0,0 +1,250 @@
import io
import logging
import os
import shutil
from typing import Any
import zipfile
import click
import httpx
from rich.console import Console
from rich.panel import Panel
from rich.text import Text
from crewai.cli.command import BaseCommand
logger = logging.getLogger(__name__)
console = Console()
GITHUB_ORG = "crewAIInc"
TEMPLATE_PREFIX = "template_"
GITHUB_API_BASE = "https://api.github.com"
BANNER = """\
[bold white] ██████╗██████╗ ███████╗██╗ ██╗[/bold white] [bold red] █████╗ ██╗[/bold red]
[bold white]██╔════╝██╔══██╗██╔════╝██║ ██║[/bold white] [bold red]██╔══██╗██║[/bold red]
[bold white]██║ ██████╔╝█████╗ ██║ █╗ ██║[/bold white] [bold red]███████║██║[/bold red]
[bold white]██║ ██╔══██╗██╔══╝ ██║███╗██║[/bold white] [bold red]██╔══██║██║[/bold red]
[bold white]╚██████╗██║ ██║███████╗╚███╔███╔╝[/bold white] [bold red]██║ ██║██║[/bold red]
[bold white] ╚═════╝╚═╝ ╚═╝╚══════╝ ╚══╝╚══╝[/bold white] [bold red]╚═╝ ╚═╝╚═╝[/bold red]
[dim white]████████╗███████╗███╗ ███╗██████╗ ██╗ █████╗ ████████╗███████╗███████╗[/dim white]
[dim white]╚══██╔══╝██╔════╝████╗ ████║██╔══██╗██║ ██╔══██╗╚══██╔══╝██╔════╝██╔════╝[/dim white]
[dim white] ██║ █████╗ ██╔████╔██║██████╔╝██║ ███████║ ██║ █████╗ ███████╗[/dim white]
[dim white] ██║ ██╔══╝ ██║╚██╔╝██║██╔═══╝ ██║ ██╔══██║ ██║ ██╔══╝ ╚════██║[/dim white]
[dim white] ██║ ███████╗██║ ╚═╝ ██║██║ ███████╗██║ ██║ ██║ ███████╗███████║[/dim white]
[dim white] ╚═╝ ╚══════╝╚═╝ ╚═╝╚═╝ ╚══════╝╚═╝ ╚═╝ ╚═╝ ╚══════╝╚══════╝[/dim white]"""
class TemplateCommand(BaseCommand):
"""Handle template-related operations for CrewAI projects."""
def __init__(self) -> None:
super().__init__()
def list_templates(self) -> None:
"""List available templates with an interactive selector to install."""
templates = self._fetch_templates()
if not templates:
click.echo("No templates found.")
return
console.print(f"\n{BANNER}\n")
console.print(" [on cyan] templates [/on cyan]\n")
console.print(f" [green]o[/green] Source: https://github.com/{GITHUB_ORG}")
console.print(
f" [green]o[/green] Found [bold]{len(templates)}[/bold] templates\n"
)
console.print(" [green]o[/green] Select a template to install")
for idx, repo in enumerate(templates, start=1):
name = repo["name"].removeprefix(TEMPLATE_PREFIX)
description = repo.get("description") or ""
if description:
console.print(
f" [bold cyan]{idx}.[/bold cyan] [bold white]{name}[/bold white] [dim]({description})[/dim]"
)
else:
console.print(
f" [bold cyan]{idx}.[/bold cyan] [bold white]{name}[/bold white]"
)
console.print(" [bold cyan]q.[/bold cyan] [dim]Quit[/dim]\n")
while True:
choice = click.prompt("Enter your choice", type=str)
if choice.lower() == "q":
return
if choice.isdigit() and 1 <= int(choice) <= len(templates):
selected_index = int(choice) - 1
break
click.secho(
f"Please enter a number between 1 and {len(templates)}, or 'q' to quit.",
fg="yellow",
)
selected = templates[selected_index]
repo_name = selected["name"]
self._install_repo(repo_name)
def add_template(self, name: str, output_dir: str | None = None) -> None:
"""Download a template and copy it into the current working directory.
Args:
name: Template name (with or without the template_ prefix).
output_dir: Optional directory name. Defaults to the template name.
"""
repo_name = self._resolve_repo_name(name)
if repo_name is None:
click.secho(f"Template '{name}' not found.", fg="red")
click.echo("Run 'crewai template list' to see available templates.")
raise SystemExit(1)
self._install_repo(repo_name, output_dir)
def _install_repo(self, repo_name: str, output_dir: str | None = None) -> None:
"""Download and extract a template repo into the current directory.
Args:
repo_name: Full GitHub repo name (e.g. template_deep_research).
output_dir: Optional directory name. Defaults to the template name.
"""
folder_name = output_dir or repo_name.removeprefix(TEMPLATE_PREFIX)
dest = os.path.join(os.getcwd(), folder_name)
while os.path.exists(dest):
click.secho(f"Directory '{folder_name}' already exists.", fg="yellow")
folder_name = click.prompt(
"Enter a different directory name (or 'q' to quit)", type=str
)
if folder_name.lower() == "q":
return
dest = os.path.join(os.getcwd(), folder_name)
click.echo(
f"Downloading template '{repo_name.removeprefix(TEMPLATE_PREFIX)}'..."
)
zip_bytes = self._download_zip(repo_name)
self._extract_zip(zip_bytes, dest)
self._telemetry.template_installed_span(repo_name.removeprefix(TEMPLATE_PREFIX))
console.print(
f"\n [green]\u2713[/green] Installed template [bold white]{folder_name}[/bold white]"
f" [dim](source: github.com/{GITHUB_ORG}/{repo_name})[/dim]\n"
)
next_steps = Text()
next_steps.append(f" cd {folder_name}\n", style="bold white")
next_steps.append(" crewai install", style="bold white")
panel = Panel(
next_steps,
title="[green]\u25c7 Next steps[/green]",
title_align="left",
border_style="dim",
padding=(1, 2),
)
console.print(panel)
def _fetch_templates(self) -> list[dict[str, Any]]:
"""Fetch all template repos from the GitHub org."""
templates: list[dict[str, Any]] = []
page = 1
while True:
url = f"{GITHUB_API_BASE}/orgs/{GITHUB_ORG}/repos"
params: dict[str, str | int] = {
"per_page": 100,
"page": page,
"type": "public",
}
try:
response = httpx.get(url, params=params, timeout=15)
response.raise_for_status()
except httpx.HTTPError as e:
click.secho(f"Failed to fetch templates from GitHub: {e}", fg="red")
raise SystemExit(1) from e
repos = response.json()
if not repos:
break
templates.extend(
repo
for repo in repos
if repo["name"].startswith(TEMPLATE_PREFIX) and not repo.get("private")
)
page += 1
templates.sort(key=lambda r: r["name"])
return templates
def _resolve_repo_name(self, name: str) -> str | None:
"""Resolve user input to a full repo name, or None if not found."""
# Accept both 'deep_research' and 'template_deep_research'
candidates = [
f"{TEMPLATE_PREFIX}{name}"
if not name.startswith(TEMPLATE_PREFIX)
else name,
name,
]
templates = self._fetch_templates()
template_names = {t["name"] for t in templates}
for candidate in candidates:
if candidate in template_names:
return candidate
return None
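# Resolution sketch (calls the live GitHub API via _fetch_templates, so
# results depend on the org's published repos; names are illustrative):
cmd = TemplateCommand()
cmd._resolve_repo_name("deep_research")           # -> "template_deep_research" if published
cmd._resolve_repo_name("template_deep_research")  # -> same repo, prefix accepted as-is
cmd._resolve_repo_name("no_such_template")        # -> None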
def _download_zip(self, repo_name: str) -> bytes:
"""Download the default branch zipball for a repo."""
url = f"{GITHUB_API_BASE}/repos/{GITHUB_ORG}/{repo_name}/zipball"
try:
response = httpx.get(url, follow_redirects=True, timeout=60)
response.raise_for_status()
except httpx.HTTPError as e:
click.secho(f"Failed to download template: {e}", fg="red")
raise SystemExit(1) from e
return response.content
def _extract_zip(self, zip_bytes: bytes, dest: str) -> None:
"""Extract a GitHub zipball into dest, stripping the top-level directory."""
with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
# GitHub zipballs have a single top-level dir like 'crewAIInc-template_xxx-<sha>/'
members = zf.namelist()
if not members:
click.secho("Downloaded archive is empty.", fg="red")
raise SystemExit(1)
top_dir = members[0].split("/")[0] + "/"
os.makedirs(dest, exist_ok=True)
for member in members:
if member == top_dir or not member.startswith(top_dir):
continue
relative_path = member[len(top_dir) :]
if not relative_path:
continue
target = os.path.realpath(os.path.join(dest, relative_path))
if not target.startswith(
os.path.realpath(dest) + os.sep
) and target != os.path.realpath(dest):
continue
if member.endswith("/"):
os.makedirs(target, exist_ok=True)
else:
os.makedirs(os.path.dirname(target), exist_ok=True)
with zf.open(member) as src, open(target, "wb") as dst:
shutil.copyfileobj(src, dst)
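# The top-level-directory stripping performed above, on illustrative
# zipball member names:
members = [
    "crewAIInc-template_deep_research-abc123/",
    "crewAIInc-template_deep_research-abc123/README.md",
    "crewAIInc-template_deep_research-abc123/src/deep_research/crew.py",
]
top_dir = members[0].split("/")[0] + "/"
[m[len(top_dir):] for m in members[1:]]  # -> ["README.md", "src/deep_research/crew.py"]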

View File

@@ -5,7 +5,7 @@ import click
from packaging import version
from crewai.cli.utils import build_env_with_all_tool_credentials, read_toml
from crewai.cli.version import get_crewai_version
from crewai.utilities.version import get_crewai_version
class CrewType(Enum):

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.2a1"
"crewai[tools]==1.14.3a2"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.2a1"
"crewai[tools]==1.14.3a2"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.2a1"
"crewai[tools]==1.14.3a2"
]
[tool.crewai]

View File

@@ -3,7 +3,6 @@
from collections.abc import Mapping
from datetime import datetime, timedelta
from functools import lru_cache
import importlib.metadata
import json
from pathlib import Path
from typing import Any
@@ -13,6 +12,8 @@ from urllib.error import URLError
import appdirs
from packaging.version import InvalidVersion, Version, parse
from crewai.utilities.version import get_crewai_version
@lru_cache(maxsize=1)
def _get_cache_file() -> Path:
@@ -25,11 +26,6 @@ def _get_cache_file() -> Path:
return cache_dir / "version_cache.json"
def get_crewai_version() -> str:
"""Get the version number of CrewAI running the CLI."""
return importlib.metadata.version("crewai")
def _is_cache_valid(cache_data: Mapping[str, Any]) -> bool:
"""Check if the cache is still valid, less than 24 hours old."""
if "timestamp" not in cache_data:

View File

@@ -42,7 +42,6 @@ if TYPE_CHECKING:
from opentelemetry.trace import Span
from crewai.context import ExecutionContext
from crewai.state.provider.core import BaseProvider
try:
from crewai_files import get_supported_content_types
@@ -104,7 +103,11 @@ from crewai.rag.types import SearchResult
from crewai.security.fingerprint import Fingerprint
from crewai.security.security_config import SecurityConfig
from crewai.skills.models import Skill
from crewai.state.checkpoint_config import CheckpointConfig, _coerce_checkpoint
from crewai.state.checkpoint_config import (
CheckpointConfig,
_coerce_checkpoint,
apply_checkpoint,
)
from crewai.task import Task
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
@@ -365,32 +368,21 @@ class Crew(FlowTrackable, BaseModel):
checkpoint_kickoff_event_id: str | None = Field(default=None)
@classmethod
def from_checkpoint(
cls, path: str, *, provider: BaseProvider | None = None
) -> Crew:
"""Restore a Crew from a checkpoint file, ready to resume via kickoff().
def from_checkpoint(cls, config: CheckpointConfig) -> Crew:
"""Restore a Crew from a checkpoint, ready to resume via kickoff().
Args:
path: Path to a checkpoint JSON file.
provider: Storage backend to read from. Defaults to JsonProvider.
config: Checkpoint configuration with ``restore_from`` set to
the path of the checkpoint to load.
Returns:
A Crew instance. Call kickoff() to resume from the last completed task.
"""
from crewai.context import apply_execution_context
from crewai.events.event_bus import crewai_event_bus
from crewai.state.provider.json_provider import JsonProvider
from crewai.state.provider.utils import detect_provider
from crewai.state.runtime import RuntimeState
if provider is None:
provider = detect_provider(path)
state = RuntimeState.from_checkpoint(
path,
provider=provider or JsonProvider(),
context={"from_checkpoint": True},
)
state = RuntimeState.from_checkpoint(config, context={"from_checkpoint": True})
crewai_event_bus.set_runtime_state(state)
for entity in state.root:
if isinstance(entity, cls):
@@ -398,14 +390,61 @@ class Crew(FlowTrackable, BaseModel):
apply_execution_context(entity.execution_context)
entity._restore_runtime()
return entity
raise ValueError(f"No Crew found in checkpoint: {path}")
raise ValueError(f"No Crew found in checkpoint: {config.restore_from}")
@classmethod
def fork(
cls,
config: CheckpointConfig,
branch: str | None = None,
) -> Crew:
"""Fork a Crew from a checkpoint, creating a new execution branch.
Args:
config: Checkpoint configuration with ``restore_from`` set.
branch: Branch label for the fork. Auto-generated if not provided.
Returns:
A Crew instance on the new branch. Call kickoff() to run.
"""
crew = cls.from_checkpoint(config)
from crewai.events.event_bus import crewai_event_bus
state = crewai_event_bus._runtime_state
if state is None:
raise RuntimeError(
"Cannot fork: no runtime state on the event bus. "
"Ensure from_checkpoint() succeeded before calling fork()."
)
state.fork(branch)
return crew
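# Resume/fork usage sketch (checkpoint paths are illustrative; CheckpointConfig
# and its restore_from field come from crewai.state.checkpoint_config, imported above):
cfg = CheckpointConfig(restore_from="./.checkpoints/crew-run.json")
resumed = Crew.from_checkpoint(cfg)  # same branch, resumes after the last completed task
resumed.kickoff()
branched = Crew.fork(cfg, branch="retry")  # new execution branch from the same checkpoint
branched.kickoff()
# kickoff() can also restore in one call: my_crew.kickoff(from_checkpoint=cfg)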
def _restore_runtime(self) -> None:
"""Re-create runtime objects after restoring from a checkpoint."""
from crewai.events.event_bus import crewai_event_bus
started_task_ids: set[str] = set()
state = crewai_event_bus._runtime_state
if state is not None:
for node in state.event_record.nodes.values():
if node.event.type == "task_started" and node.event.task_id:
started_task_ids.add(node.event.task_id)
resuming_task_agent_roles: set[str] = set()
for task in self.tasks:
if (
task.output is None
and task.agent is not None
and str(task.id) in started_task_ids
):
resuming_task_agent_roles.add(task.agent.role)
for agent in self.agents:
agent.crew = self
executor = agent.agent_executor
if executor and executor.messages:
if (
executor
and executor.messages
and agent.role in resuming_task_agent_roles
):
executor.crew = self
executor.agent = agent
executor._resuming = True
@@ -419,6 +458,13 @@ class Crew(FlowTrackable, BaseModel):
if agent.agent_executor is not None and task.output is None:
agent.agent_executor.task = task
break
for task in self.tasks:
if task.checkpoint_original_description is not None:
task._original_description = task.checkpoint_original_description
if task.checkpoint_original_expected_output is not None:
task._original_expected_output = (
task.checkpoint_original_expected_output
)
if self.checkpoint_inputs is not None:
self._inputs = self.checkpoint_inputs
if self.checkpoint_kickoff_event_id is not None:
@@ -854,16 +900,23 @@ class Crew(FlowTrackable, BaseModel):
self,
inputs: dict[str, Any] | None = None,
input_files: dict[str, FileInput] | None = None,
from_checkpoint: CheckpointConfig | None = None,
) -> CrewOutput | CrewStreamingOutput:
"""Execute the crew's workflow.
Args:
inputs: Optional input dictionary for task interpolation.
input_files: Optional dict of named file inputs for the crew.
from_checkpoint: Optional checkpoint config. If ``restore_from``
is set, the crew resumes from that checkpoint. Remaining
config fields enable checkpointing for the run.
Returns:
CrewOutput or CrewStreamingOutput if streaming is enabled.
"""
restored = apply_checkpoint(self, from_checkpoint)
if restored is not None:
return restored.kickoff(inputs=inputs, input_files=input_files) # type: ignore[no-any-return]
get_env_context()
if self.stream:
enable_agent_streaming(self.agents)
@@ -976,12 +1029,15 @@ class Crew(FlowTrackable, BaseModel):
self,
inputs: dict[str, Any] | None = None,
input_files: dict[str, FileInput] | None = None,
from_checkpoint: CheckpointConfig | None = None,
) -> CrewOutput | CrewStreamingOutput:
"""Asynchronous kickoff method to start the crew execution.
Args:
inputs: Optional input dictionary for task interpolation.
input_files: Optional dict of named file inputs for the crew.
from_checkpoint: Optional checkpoint config. If ``restore_from``
is set, the crew resumes from that checkpoint.
Returns:
CrewOutput or CrewStreamingOutput if streaming is enabled.
@@ -990,6 +1046,9 @@ class Crew(FlowTrackable, BaseModel):
to get stream chunks. After iteration completes, access the final result
via .result.
"""
restored = apply_checkpoint(self, from_checkpoint)
if restored is not None:
return await restored.kickoff_async(inputs=inputs, input_files=input_files) # type: ignore[no-any-return]
inputs = inputs or {}
if self.stream:
@@ -1050,6 +1109,7 @@ class Crew(FlowTrackable, BaseModel):
self,
inputs: dict[str, Any] | None = None,
input_files: dict[str, FileInput] | None = None,
from_checkpoint: CheckpointConfig | None = None,
) -> CrewOutput | CrewStreamingOutput:
"""Native async kickoff method using async task execution throughout.
@@ -1060,10 +1120,15 @@ class Crew(FlowTrackable, BaseModel):
Args:
inputs: Optional input dictionary for task interpolation.
input_files: Optional dict of named file inputs for the crew.
from_checkpoint: Optional checkpoint config. If ``restore_from``
is set, the crew resumes from that checkpoint.
Returns:
CrewOutput or CrewStreamingOutput if streaming is enabled.
"""
restored = apply_checkpoint(self, from_checkpoint)
if restored is not None:
return await restored.akickoff(inputs=inputs, input_files=input_files) # type: ignore[no-any-return]
if self.stream:
enable_agent_streaming(self.agents)
ctx = StreamingContext(use_async=True)

View File

@@ -13,13 +13,13 @@ from crewai.cli.authentication.token import AuthError, get_auth_token
from crewai.cli.config import Settings
from crewai.cli.constants import DEFAULT_CREWAI_ENTERPRISE_URL
from crewai.cli.plus_api import PlusAPI
from crewai.cli.version import get_crewai_version
from crewai.events.listeners.tracing.types import TraceEvent
from crewai.events.listeners.tracing.utils import (
get_user_id,
is_tracing_enabled_in_context,
should_auto_collect_first_time_traces,
)
from crewai.utilities.version import get_crewai_version
logger = getLogger(__name__)
@@ -81,8 +81,11 @@ class TraceBatchManager:
"""Initialize a new trace batch (thread-safe)"""
with self._batch_ready_cv:
if self.current_batch is not None:
# Lazy init (e.g. DefaultEnvEvent) may have created the batch without
# execution_type; merge metadata from a later flow/crew initializer.
self.current_batch.execution_metadata.update(execution_metadata)
logger.debug(
"Batch already initialized, skipping duplicate initialization"
"Batch already initialized, merged execution metadata and skipped duplicate initialization"
)
return self.current_batch

View File

@@ -7,7 +7,6 @@ import uuid
from typing_extensions import Self
from crewai.cli.authentication.token import AuthError, get_auth_token
from crewai.cli.version import get_crewai_version
from crewai.events.base_event_listener import BaseEventListener
from crewai.events.base_events import BaseEvent
from crewai.events.event_bus import CrewAIEventsBus
@@ -61,12 +60,6 @@ from crewai.events.types.crew_events import (
CrewKickoffFailedEvent,
CrewKickoffStartedEvent,
)
from crewai.events.types.env_events import (
CCEnvEvent,
CodexEnvEvent,
CursorEnvEvent,
DefaultEnvEvent,
)
from crewai.events.types.flow_events import (
FlowCreatedEvent,
FlowFinishedEvent,
@@ -127,6 +120,7 @@ from crewai.events.types.tool_usage_events import (
ToolUsageStartedEvent,
)
from crewai.events.utils.console_formatter import ConsoleFormatter
from crewai.utilities.version import get_crewai_version
class TraceCollectionListener(BaseEventListener):
@@ -212,7 +206,6 @@ class TraceCollectionListener(BaseEventListener):
self._listeners_setup = True
return
self._register_env_event_handlers(crewai_event_bus)
self._register_flow_event_handlers(crewai_event_bus)
self._register_context_event_handlers(crewai_event_bus)
self._register_action_event_handlers(crewai_event_bus)
@@ -221,25 +214,6 @@ class TraceCollectionListener(BaseEventListener):
self._listeners_setup = True
def _register_env_event_handlers(self, event_bus: CrewAIEventsBus) -> None:
"""Register handlers for environment context events."""
@event_bus.on(CCEnvEvent)
def on_cc_env(source: Any, event: CCEnvEvent) -> None:
self._handle_action_event("cc_env", source, event)
@event_bus.on(CodexEnvEvent)
def on_codex_env(source: Any, event: CodexEnvEvent) -> None:
self._handle_action_event("codex_env", source, event)
@event_bus.on(CursorEnvEvent)
def on_cursor_env(source: Any, event: CursorEnvEvent) -> None:
self._handle_action_event("cursor_env", source, event)
@event_bus.on(DefaultEnvEvent)
def on_default_env(source: Any, event: DefaultEnvEvent) -> None:
self._handle_action_event("default_env", source, event)
def _register_flow_event_handlers(self, event_bus: CrewAIEventsBus) -> None:
"""Register handlers for flow events."""
@@ -286,8 +260,8 @@ class TraceCollectionListener(BaseEventListener):
if self.batch_manager.batch_owner_type != "flow":
# Always call _initialize_crew_batch to claim ownership.
# If batch was already initialized by a concurrent action event
# (race condition with DefaultEnvEvent), initialize_batch() returns
# early but batch_owner_type is still correctly set to "crew".
# (e.g. LLM/tool before crew_kickoff_started), initialize_batch()
# returns early but batch_owner_type is still correctly set to "crew".
# Skip only when a parent flow already owns the batch.
self._initialize_crew_batch(source, event)
self._handle_trace_event("crew_kickoff_started", source, event)

View File

@@ -87,7 +87,6 @@ class LLMStreamChunkEvent(LLMEventBase):
tool_call: ToolCall | None = None
call_type: LLMCallType | None = None
response_id: str | None = None
run_id: str | None = None
class LLMThinkingChunkEvent(LLMEventBase):

View File

@@ -113,7 +113,11 @@ from crewai.flow.utils import (
)
from crewai.memory.memory_scope import MemoryScope, MemorySlice
from crewai.memory.unified_memory import Memory
from crewai.state.checkpoint_config import CheckpointConfig, _coerce_checkpoint
from crewai.state.checkpoint_config import (
CheckpointConfig,
_coerce_checkpoint,
apply_checkpoint,
)
if TYPE_CHECKING:
@@ -122,7 +126,6 @@ if TYPE_CHECKING:
from crewai.context import ExecutionContext
from crewai.flow.async_feedback.types import PendingFeedbackContext
from crewai.llms.base_llm import BaseLLM
from crewai.state.provider.core import BaseProvider
from crewai.flow.visualization import build_flow_structure, render_interactive
from crewai.types.streaming import CrewStreamingOutput, FlowStreamingOutput
@@ -928,20 +931,21 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
] = Field(default=None)
@classmethod
def from_checkpoint(
cls, path: str, *, provider: BaseProvider | None = None
) -> Flow: # type: ignore[type-arg]
"""Restore a Flow from a checkpoint file."""
def from_checkpoint(cls, config: CheckpointConfig) -> Flow: # type: ignore[type-arg]
"""Restore a Flow from a checkpoint.
Args:
config: Checkpoint configuration with ``restore_from`` set to
the path of the checkpoint to load.
Returns:
A Flow instance ready to resume.
"""
from crewai.context import apply_execution_context
from crewai.events.event_bus import crewai_event_bus
from crewai.state.provider.json_provider import JsonProvider
from crewai.state.runtime import RuntimeState
state = RuntimeState.from_checkpoint(
path,
provider=provider or JsonProvider(),
context={"from_checkpoint": True},
)
state = RuntimeState.from_checkpoint(config, context={"from_checkpoint": True})
crewai_event_bus.set_runtime_state(state)
for entity in state.root:
if not isinstance(entity, Flow):
@@ -958,7 +962,32 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
instance.checkpoint_state = entity.checkpoint_state
instance._restore_from_checkpoint()
return instance
raise ValueError(f"No Flow found in checkpoint: {path}")
raise ValueError(f"No Flow found in checkpoint: {config.restore_from}")
@classmethod
def fork(
cls,
config: CheckpointConfig,
branch: str | None = None,
) -> Flow: # type: ignore[type-arg]
"""Fork a Flow from a checkpoint, creating a new execution branch.
Args:
config: Checkpoint configuration with ``restore_from`` set.
branch: Branch label for the fork. Auto-generated if not provided.
Returns:
A Flow instance on the new branch. Call kickoff() to run.
"""
flow = cls.from_checkpoint(config)
from crewai.events.event_bus import crewai_event_bus
state = crewai_event_bus._runtime_state
if state is None:
raise RuntimeError(
"Cannot fork: no runtime state on the event bus. "
"Ensure from_checkpoint() succeeded before calling fork()."
)
state.fork(branch)
return flow
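# Flow-side sketch (MyFlow is a hypothetical Flow subclass; the checkpoint
# path is illustrative):
cfg = CheckpointConfig(restore_from="./.checkpoints/flow-run.json")
resumed = MyFlow.from_checkpoint(cfg)
resumed.kickoff()
forked = MyFlow.fork(cfg, branch="alt-path")
forked.kickoff()
# Or restore directly through kickoff: MyFlow().kickoff(from_checkpoint=cfg)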
checkpoint_completed_methods: set[str] | None = Field(default=None)
checkpoint_method_outputs: list[Any] | None = Field(default=None)
@@ -1474,6 +1503,8 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
except Exception:
logger.warning("FlowStartedEvent handler failed", exc_info=True)
get_env_context()
context = self._pending_feedback_context
emit = context.emit
default_outcome = context.default_outcome
@@ -1956,6 +1987,7 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
self,
inputs: dict[str, Any] | None = None,
input_files: dict[str, FileInput] | None = None,
from_checkpoint: CheckpointConfig | None = None,
) -> Any | FlowStreamingOutput:
"""Start the flow execution in a synchronous context.
@@ -1965,11 +1997,15 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
Args:
inputs: Optional dictionary containing input values and/or a state ID.
input_files: Optional dict of named file inputs for the flow.
from_checkpoint: Optional checkpoint config. If ``restore_from``
is set, the flow resumes from that checkpoint.
Returns:
The final output from the flow or FlowStreamingOutput if streaming.
"""
get_env_context()
restored = apply_checkpoint(self, from_checkpoint)
if restored is not None:
return restored.kickoff(inputs=inputs, input_files=input_files)
if self.stream:
result_holder: list[Any] = []
current_task_info: TaskInfo = {
@@ -2025,6 +2061,7 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
self,
inputs: dict[str, Any] | None = None,
input_files: dict[str, FileInput] | None = None,
from_checkpoint: CheckpointConfig | None = None,
) -> Any | FlowStreamingOutput:
"""Start the flow execution asynchronously.
@@ -2036,10 +2073,15 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
Args:
inputs: Optional dictionary containing input values and/or a state ID for restoration.
input_files: Optional dict of named file inputs for the flow.
from_checkpoint: Optional checkpoint config. If ``restore_from``
is set, the flow resumes from that checkpoint.
Returns:
The final output from the flow, which is the result of the last executed method.
"""
restored = apply_checkpoint(self, from_checkpoint)
if restored is not None:
return await restored.kickoff_async(inputs=inputs, input_files=input_files)
if self.stream:
result_holder: list[Any] = []
current_task_info: TaskInfo = {
@@ -2097,7 +2139,9 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
try:
# Reset flow state for fresh execution unless restoring from persistence
is_restoring = inputs and "id" in inputs and self.persistence is not None
is_restoring = (
inputs and "id" in inputs and self.persistence is not None
) or self.checkpoint_completed_methods is not None
if not is_restoring:
# Clear completed methods and outputs for a fresh start
self._completed_methods.clear()
@@ -2163,6 +2207,10 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
f"Flow started with ID: {self.flow_id}", color="bold magenta"
)
# After FlowStarted (when not suppressed): env events must not pre-empt
# trace batch init with implicit "crew" execution_type.
get_env_context()
if inputs is not None and "id" not in inputs:
self._initialize_state(inputs)
@@ -2298,17 +2346,20 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
self,
inputs: dict[str, Any] | None = None,
input_files: dict[str, FileInput] | None = None,
from_checkpoint: CheckpointConfig | None = None,
) -> Any | FlowStreamingOutput:
"""Native async method to start the flow execution. Alias for kickoff_async.
Args:
inputs: Optional dictionary containing input values and/or a state ID for restoration.
input_files: Optional dict of named file inputs for the flow.
from_checkpoint: Optional checkpoint config. If ``restore_from``
is set, the flow resumes from that checkpoint.
Returns:
The final output from the flow, which is the result of the last executed method.
"""
return await self.kickoff_async(inputs, input_files)
return await self.kickoff_async(inputs, input_files, from_checkpoint)
async def _execute_start_method(self, start_method_name: FlowMethodName) -> None:
"""Executes a flow's start method and its triggered listeners.

View File

@@ -16,7 +16,6 @@ from typing import (
get_origin,
)
import uuid
import warnings
from pydantic import (
UUID4,
@@ -26,7 +25,7 @@ from pydantic import (
field_validator,
model_validator,
)
from typing_extensions import Self
from typing_extensions import Self, deprecated
if TYPE_CHECKING:
@@ -173,9 +172,12 @@ def _kickoff_with_a2a_support(
)
@deprecated(
"LiteAgent is deprecated and will be removed in v2.0.0.",
category=FutureWarning,
)
class LiteAgent(FlowTrackable, BaseModel):
"""
A lightweight agent that can process messages and use tools.
"""A lightweight agent that can process messages and use tools.
.. deprecated::
LiteAgent is deprecated and will be removed in v2.0.0.
@@ -278,18 +280,6 @@ class LiteAgent(FlowTrackable, BaseModel):
)
_memory: Any = PrivateAttr(default=None)
@model_validator(mode="after")
def emit_deprecation_warning(self) -> Self:
"""Emit deprecation warning for LiteAgent usage."""
warnings.warn(
"LiteAgent is deprecated and will be removed in a future version. "
"Use Agent().kickoff(messages) instead, which provides the same "
"functionality with additional features like memory and knowledge support.",
DeprecationWarning,
stacklevel=2,
)
return self
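# How the class-level @deprecated marker above behaves (sketch; FooAgent is
# hypothetical): the warning now fires at instantiation, replacing the
# model_validator-based warning removed here.
from typing_extensions import deprecated

@deprecated("FooAgent is deprecated.", category=FutureWarning)
class FooAgent: ...

FooAgent()  # emits FutureWarning at the call site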
@model_validator(mode="after")
def setup_llm(self) -> Self:
"""Set up the LLM and other components after initialization."""

View File

@@ -38,7 +38,6 @@ from crewai.llms.base_llm import (
get_current_call_id,
llm_call_context,
)
from crewai.utilities.streaming import get_current_stream_run_id
from crewai.llms.constants import (
ANTHROPIC_MODELS,
AZURE_MODELS,
@@ -52,6 +51,7 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
)
from crewai.utilities.logger_utils import suppress_warnings
from crewai.utilities.string_utils import sanitize_tool_name
from crewai.utilities.token_counter_callback import TokenCalcHandler
try:
@@ -76,8 +76,13 @@ try:
from litellm.types.utils import (
ChatCompletionDeltaToolCall,
Choices,
Delta as LiteLLMDelta,
Function,
Message,
ModelResponse,
ModelResponseBase,
ModelResponseStream,
StreamingChoices as LiteLLMStreamingChoices,
)
from litellm.utils import supports_response_schema
@@ -86,6 +91,11 @@ except ImportError:
LITELLM_AVAILABLE = False
litellm = None # type: ignore[assignment]
Choices = None # type: ignore[assignment, misc]
LiteLLMDelta = None # type: ignore[assignment, misc]
Message = None # type: ignore[assignment, misc]
ModelResponseBase = None # type: ignore[assignment, misc]
ModelResponseStream = None # type: ignore[assignment, misc]
LiteLLMStreamingChoices = None # type: ignore[assignment, misc]
get_supported_openai_params = None # type: ignore[assignment]
ChatCompletionDeltaToolCall = None # type: ignore[assignment, misc]
Function = None # type: ignore[assignment, misc]
@@ -165,6 +175,16 @@ LLM_CONTEXT_WINDOW_SIZES: Final[dict[str, int]] = {
"us.amazon.nova-pro-v1:0": 300000,
"us.amazon.nova-micro-v1:0": 128000,
"us.amazon.nova-lite-v1:0": 300000,
# Claude 4 models
"us.anthropic.claude-opus-4-7": 1000000,
"us.anthropic.claude-sonnet-4-6": 1000000,
"us.anthropic.claude-opus-4-6-v1": 1000000,
"us.anthropic.claude-opus-4-5-20251101-v1:0": 200000,
"us.anthropic.claude-haiku-4-5-20251001-v1:0": 200000,
"us.anthropic.claude-sonnet-4-5-20250929-v1:0": 200000,
"us.anthropic.claude-opus-4-1-20250805-v1:0": 200000,
"us.anthropic.claude-opus-4-20250514-v1:0": 200000,
"us.anthropic.claude-sonnet-4-20250514-v1:0": 200000,
"us.anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
"us.anthropic.claude-3-5-haiku-20241022-v1:0": 200000,
"us.anthropic.claude-3-5-sonnet-20241022-v2:0": 200000,
@@ -183,15 +203,44 @@ LLM_CONTEXT_WINDOW_SIZES: Final[dict[str, int]] = {
"eu.anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
"eu.anthropic.claude-3-sonnet-20240229-v1:0": 200000,
"eu.anthropic.claude-3-haiku-20240307-v1:0": 200000,
# Claude 4 EU
"eu.anthropic.claude-opus-4-7": 1000000,
"eu.anthropic.claude-sonnet-4-6": 1000000,
"eu.anthropic.claude-opus-4-6-v1": 1000000,
"eu.anthropic.claude-opus-4-5-20251101-v1:0": 200000,
"eu.anthropic.claude-haiku-4-5-20251001-v1:0": 200000,
"eu.anthropic.claude-sonnet-4-5-20250929-v1:0": 200000,
"eu.anthropic.claude-opus-4-1-20250805-v1:0": 200000,
"eu.anthropic.claude-opus-4-20250514-v1:0": 200000,
"eu.anthropic.claude-sonnet-4-20250514-v1:0": 200000,
"eu.meta.llama3-2-3b-instruct-v1:0": 131000,
"eu.meta.llama3-2-1b-instruct-v1:0": 131000,
"apac.anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
"apac.anthropic.claude-3-5-sonnet-20241022-v2:0": 200000,
"apac.anthropic.claude-3-sonnet-20240229-v1:0": 200000,
"apac.anthropic.claude-3-haiku-20240307-v1:0": 200000,
# Claude 4 APAC
"apac.anthropic.claude-opus-4-7": 1000000,
"apac.anthropic.claude-sonnet-4-6": 1000000,
"apac.anthropic.claude-opus-4-6-v1": 1000000,
"apac.anthropic.claude-opus-4-5-20251101-v1:0": 200000,
"apac.anthropic.claude-haiku-4-5-20251001-v1:0": 200000,
"apac.anthropic.claude-sonnet-4-5-20250929-v1:0": 200000,
"apac.anthropic.claude-opus-4-1-20250805-v1:0": 200000,
"apac.anthropic.claude-opus-4-20250514-v1:0": 200000,
"apac.anthropic.claude-sonnet-4-20250514-v1:0": 200000,
"amazon.nova-pro-v1:0": 300000,
"amazon.nova-micro-v1:0": 128000,
"amazon.nova-lite-v1:0": 300000,
"anthropic.claude-opus-4-7": 1000000,
"anthropic.claude-sonnet-4-6": 1000000,
"anthropic.claude-opus-4-6-v1": 1000000,
"anthropic.claude-opus-4-5-20251101-v1:0": 200000,
"anthropic.claude-haiku-4-5-20251001-v1:0": 200000,
"anthropic.claude-sonnet-4-5-20250929-v1:0": 200000,
"anthropic.claude-opus-4-1-20250805-v1:0": 200000,
"anthropic.claude-opus-4-20250514-v1:0": 200000,
"anthropic.claude-sonnet-4-20250514-v1:0": 200000,
"anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
"anthropic.claude-3-5-haiku-20241022-v1:0": 200000,
"anthropic.claude-3-5-sonnet-20241022-v2:0": 200000,
@@ -710,7 +759,7 @@ class LLM(BaseLLM):
chunk_content = None
response_id = None
if hasattr(chunk, "id"):
if isinstance(chunk, ModelResponseBase):
response_id = chunk.id
# Safely extract content from various chunk formats
@@ -719,18 +768,16 @@ class LLM(BaseLLM):
choices = None
if isinstance(chunk, dict) and "choices" in chunk:
choices = chunk["choices"]
elif hasattr(chunk, "choices"):
# Check if choices is not a type but an actual attribute with value
if not isinstance(chunk.choices, type):
choices = chunk.choices
elif isinstance(chunk, ModelResponseStream):
choices = chunk.choices
# Try to extract usage information if available
# NOTE: usage is a pydantic extra field on ModelResponseBase,
# so it must be accessed via model_extra.
if isinstance(chunk, dict) and "usage" in chunk:
usage_info = chunk["usage"]
elif hasattr(chunk, "usage"):
# Check if usage is not a type but an actual attribute with value
if not isinstance(chunk.usage, type):
usage_info = chunk.usage
elif isinstance(chunk, ModelResponseBase) and chunk.model_extra:
usage_info = chunk.model_extra.get("usage") or usage_info
if choices and len(choices) > 0:
choice = choices[0]
@@ -739,7 +786,7 @@ class LLM(BaseLLM):
delta = None
if isinstance(choice, dict) and "delta" in choice:
delta = choice["delta"]
elif hasattr(choice, "delta"):
elif isinstance(choice, LiteLLMStreamingChoices):
delta = choice.delta
# Extract content from delta
@@ -749,7 +796,7 @@ class LLM(BaseLLM):
if "content" in delta and delta["content"] is not None:
chunk_content = delta["content"]
# Handle object format
elif hasattr(delta, "content"):
elif isinstance(delta, LiteLLMDelta):
chunk_content = delta.content
# Handle case where content might be None or empty
@@ -791,7 +838,6 @@ class LLM(BaseLLM):
call_type=LLMCallType.LLM_CALL,
response_id=response_id,
call_id=get_current_call_id(),
run_id=get_current_stream_run_id(),
),
)
# --- 4) Fallback to non-streaming if no content received
@@ -823,9 +869,8 @@ class LLM(BaseLLM):
choices = None
if isinstance(last_chunk, dict) and "choices" in last_chunk:
choices = last_chunk["choices"]
elif hasattr(last_chunk, "choices"):
if not isinstance(last_chunk.choices, type):
choices = last_chunk.choices
elif isinstance(last_chunk, ModelResponseStream):
choices = last_chunk.choices
if choices and len(choices) > 0:
choice = choices[0]
@@ -834,14 +879,14 @@ class LLM(BaseLLM):
message = None
if isinstance(choice, dict) and "message" in choice:
message = choice["message"]
elif hasattr(choice, "message"):
elif isinstance(choice, Choices):
message = choice.message
if message:
content = None
if isinstance(message, dict) and "content" in message:
content = message["content"]
elif hasattr(message, "content"):
elif isinstance(message, Message):
content = message.content
if content:
@@ -868,24 +913,23 @@ class LLM(BaseLLM):
choices = None
if isinstance(last_chunk, dict) and "choices" in last_chunk:
choices = last_chunk["choices"]
elif hasattr(last_chunk, "choices"):
if not isinstance(last_chunk.choices, type):
choices = last_chunk.choices
elif isinstance(last_chunk, ModelResponseStream):
choices = last_chunk.choices
if choices and len(choices) > 0:
choice = choices[0]
message = None
if isinstance(choice, dict) and "message" in choice:
message = choice["message"]
elif hasattr(choice, "message"):
message = choice.message
delta = None
if isinstance(choice, dict) and "delta" in choice:
delta = choice["delta"]
elif isinstance(choice, LiteLLMStreamingChoices):
delta = choice.delta
if message:
if isinstance(message, dict) and "tool_calls" in message:
tool_calls = message["tool_calls"]
elif hasattr(message, "tool_calls"):
tool_calls = message.tool_calls
if delta:
if isinstance(delta, dict) and "tool_calls" in delta:
tool_calls = delta["tool_calls"]
elif isinstance(delta, LiteLLMDelta):
tool_calls = delta.tool_calls
except Exception as e:
logging.debug(f"Error checking for tool calls: {e}")
@@ -1005,7 +1049,6 @@ class LLM(BaseLLM):
call_type=LLMCallType.TOOL_CALL,
response_id=response_id,
call_id=get_current_call_id(),
run_id=get_current_stream_run_id(),
),
)
@@ -1040,7 +1083,7 @@ class LLM(BaseLLM):
"""
if callbacks and len(callbacks) > 0:
for callback in callbacks:
if hasattr(callback, "log_success_event"):
if isinstance(callback, TokenCalcHandler):
# Use the usage_info we've been tracking
if not usage_info:
# Try to get usage from the last chunk if we haven't already
@@ -1051,9 +1094,14 @@ class LLM(BaseLLM):
and "usage" in last_chunk
):
usage_info = last_chunk["usage"]
elif hasattr(last_chunk, "usage"):
if not isinstance(last_chunk.usage, type):
usage_info = last_chunk.usage
elif (
isinstance(last_chunk, ModelResponseBase)
and last_chunk.model_extra
):
usage_info = (
last_chunk.model_extra.get("usage")
or usage_info
)
except Exception as e:
logging.debug(f"Error extracting usage info: {e}")
@@ -1126,13 +1174,10 @@ class LLM(BaseLLM):
params["response_model"] = response_model
response = litellm.completion(**params)
if (
hasattr(response, "usage")
and not isinstance(response.usage, type)
and response.usage
):
usage_info = response.usage
self._track_token_usage_internal(usage_info)
if isinstance(response, ModelResponseBase) and response.model_extra:
usage_info = response.model_extra.get("usage")
if usage_info:
self._track_token_usage_internal(usage_info)
except LLMContextLengthExceededError:
# Re-raise our own context length error
@@ -1144,7 +1189,11 @@ class LLM(BaseLLM):
raise LLMContextLengthExceededError(error_msg) from e
raise
response_usage = self._usage_to_dict(getattr(response, "usage", None))
response_usage = self._usage_to_dict(
response.model_extra.get("usage")
if isinstance(response, ModelResponseBase) and response.model_extra
else None
)
# --- 2) Handle structured output response (when response_model is provided)
if response_model is not None:
@@ -1169,8 +1218,13 @@ class LLM(BaseLLM):
# --- 3) Handle callbacks with usage info
if callbacks and len(callbacks) > 0:
for callback in callbacks:
if hasattr(callback, "log_success_event"):
usage_info = getattr(response, "usage", None)
if isinstance(callback, TokenCalcHandler):
usage_info = (
response.model_extra.get("usage")
if isinstance(response, ModelResponseBase)
and response.model_extra
else None
)
if usage_info:
callback.log_success_event(
kwargs=params,
@@ -1179,7 +1233,7 @@ class LLM(BaseLLM):
end_time=0,
)
# --- 4) Check for tool calls
tool_calls = getattr(response_message, "tool_calls", [])
tool_calls = response_message.tool_calls or []
# --- 5) If no tool calls or no available functions, return the text response directly as long as there is a text response
if (not tool_calls or not available_functions) and text_response:
@@ -1272,13 +1326,10 @@ class LLM(BaseLLM):
params["response_model"] = response_model
response = await litellm.acompletion(**params)
if (
hasattr(response, "usage")
and not isinstance(response.usage, type)
and response.usage
):
usage_info = response.usage
self._track_token_usage_internal(usage_info)
if isinstance(response, ModelResponseBase) and response.model_extra:
usage_info = response.model_extra.get("usage")
if usage_info:
self._track_token_usage_internal(usage_info)
except LLMContextLengthExceededError:
# Re-raise our own context length error
@@ -1290,7 +1341,11 @@ class LLM(BaseLLM):
raise LLMContextLengthExceededError(error_msg) from e
raise
response_usage = self._usage_to_dict(getattr(response, "usage", None))
response_usage = self._usage_to_dict(
response.model_extra.get("usage")
if isinstance(response, ModelResponseBase) and response.model_extra
else None
)
if response_model is not None:
if isinstance(response, BaseModel):
@@ -1312,8 +1367,13 @@ class LLM(BaseLLM):
if callbacks and len(callbacks) > 0:
for callback in callbacks:
if hasattr(callback, "log_success_event"):
usage_info = getattr(response, "usage", None)
if isinstance(callback, TokenCalcHandler):
usage_info = (
response.model_extra.get("usage")
if isinstance(response, ModelResponseBase)
and response.model_extra
else None
)
if usage_info:
callback.log_success_event(
kwargs=params,
@@ -1322,7 +1382,7 @@ class LLM(BaseLLM):
end_time=0,
)
tool_calls = getattr(response_message, "tool_calls", [])
tool_calls = response_message.tool_calls or []
if (not tool_calls or not available_functions) and text_response:
self._handle_emit_call_events(
@@ -1397,18 +1457,19 @@ class LLM(BaseLLM):
async for chunk in await litellm.acompletion(**params):
chunk_count += 1
chunk_content = None
response_id = chunk.id if hasattr(chunk, "id") else None
response_id = chunk.id if isinstance(chunk, ModelResponseBase) else None
try:
choices = None
if isinstance(chunk, dict) and "choices" in chunk:
choices = chunk["choices"]
elif hasattr(chunk, "choices"):
if not isinstance(chunk.choices, type):
choices = chunk.choices
elif isinstance(chunk, ModelResponseStream):
choices = chunk.choices
if hasattr(chunk, "usage") and chunk.usage is not None:
usage_info = chunk.usage
if isinstance(chunk, ModelResponseBase) and chunk.model_extra:
chunk_usage = chunk.model_extra.get("usage")
if chunk_usage is not None:
usage_info = chunk_usage
if choices and len(choices) > 0:
first_choice = choices[0]
@@ -1416,19 +1477,19 @@ class LLM(BaseLLM):
if isinstance(first_choice, dict):
delta = first_choice.get("delta", {})
elif hasattr(first_choice, "delta"):
elif isinstance(first_choice, LiteLLMStreamingChoices):
delta = first_choice.delta
if delta:
if isinstance(delta, dict):
chunk_content = delta.get("content")
elif hasattr(delta, "content"):
elif isinstance(delta, LiteLLMDelta):
chunk_content = delta.content
tool_calls: list[ChatCompletionDeltaToolCall] | None = None
if isinstance(delta, dict):
tool_calls = delta.get("tool_calls")
elif hasattr(delta, "tool_calls"):
elif isinstance(delta, LiteLLMDelta):
tool_calls = delta.tool_calls
if tool_calls:
@@ -1459,13 +1520,12 @@ class LLM(BaseLLM):
from_agent=from_agent,
response_id=response_id,
call_id=get_current_call_id(),
run_id=get_current_stream_run_id(),
),
)
if callbacks and len(callbacks) > 0 and usage_info:
for callback in callbacks:
if hasattr(callback, "log_success_event"):
if isinstance(callback, TokenCalcHandler):
callback.log_success_event(
kwargs=params,
response_obj={"usage": usage_info},
@@ -1924,7 +1984,7 @@ class LLM(BaseLLM):
return None
if isinstance(usage, dict):
return usage
if hasattr(usage, "model_dump"):
if isinstance(usage, BaseModel):
result: dict[str, Any] = usage.model_dump()
return result
if hasattr(usage, "__dict__"):
@@ -1988,7 +2048,7 @@ class LLM(BaseLLM):
)
return messages
provider = getattr(self, "provider", None) or self.model
provider = self.provider or self.model
for msg in messages:
files = msg.get("files")
@@ -2039,7 +2099,7 @@ class LLM(BaseLLM):
)
return messages
provider = getattr(self, "provider", None) or self.model
provider = self.provider or self.model
for msg in messages:
files = msg.get("files")
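
# Illustrative sketch (plain pydantic, not litellm): why "usage" must be read
# via model_extra in the hunks above. On a model with extra="allow", unknown
# fields land in model_extra instead of becoming typed attributes.
# FakeResponse is a hypothetical stand-in for litellm's ModelResponseBase.
from pydantic import BaseModel, ConfigDict

class FakeResponse(BaseModel):
    model_config = ConfigDict(extra="allow")
    id: str

resp = FakeResponse(id="resp_1", usage={"prompt_tokens": 10, "completion_tokens": 2})
usage = resp.model_extra.get("usage") if resp.model_extra else None
print(usage)  # {'prompt_tokens': 10, 'completion_tokens': 2}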

View File

@@ -36,7 +36,6 @@ from crewai.events.types.llm_events import (
LLMStreamChunkEvent,
LLMThinkingChunkEvent,
)
from crewai.utilities.streaming import get_current_stream_run_id
from crewai.events.types.tool_usage_events import (
ToolUsageErrorEvent,
ToolUsageFinishedEvent,
@@ -173,6 +172,8 @@ class BaseLLM(BaseModel, ABC):
"completion_tokens": 0,
"successful_requests": 0,
"cached_prompt_tokens": 0,
"reasoning_tokens": 0,
"cache_creation_tokens": 0,
}
)
@@ -528,7 +529,6 @@ class BaseLLM(BaseModel, ABC):
call_type=call_type,
response_id=response_id,
call_id=get_current_call_id(),
run_id=get_current_stream_run_id(),
),
)
@@ -810,14 +810,24 @@ class BaseLLM(BaseModel, ABC):
cached_tokens = (
usage_data.get("cached_tokens")
or usage_data.get("cached_prompt_tokens")
or usage_data.get("cache_read_input_tokens")
or 0
)
if not cached_tokens:
prompt_details = usage_data.get("prompt_tokens_details")
if isinstance(prompt_details, dict):
cached_tokens = prompt_details.get("cached_tokens", 0) or 0
reasoning_tokens = usage_data.get("reasoning_tokens", 0) or 0
cache_creation_tokens = usage_data.get("cache_creation_tokens", 0) or 0
self._token_usage["prompt_tokens"] += prompt_tokens
self._token_usage["completion_tokens"] += completion_tokens
self._token_usage["total_tokens"] += prompt_tokens + completion_tokens
self._token_usage["successful_requests"] += 1
self._token_usage["cached_prompt_tokens"] += cached_tokens
self._token_usage["reasoning_tokens"] += reasoning_tokens
self._token_usage["cache_creation_tokens"] += cache_creation_tokens
def get_token_usage_summary(self) -> UsageMetrics:
"""Get summary of token usage for this LLM instance.

View File

@@ -423,6 +423,34 @@ AZURE_MODELS: list[AzureModels] = [
BedrockModels: TypeAlias = Literal[
# Inference profiles (regional) - Claude 4
"us.anthropic.claude-sonnet-4-5-20250929-v1:0",
"us.anthropic.claude-sonnet-4-20250514-v1:0",
"us.anthropic.claude-opus-4-5-20251101-v1:0",
"us.anthropic.claude-opus-4-20250514-v1:0",
"us.anthropic.claude-opus-4-1-20250805-v1:0",
"us.anthropic.claude-haiku-4-5-20251001-v1:0",
"us.anthropic.claude-sonnet-4-6",
"us.anthropic.claude-opus-4-6-v1",
# Inference profiles - shorter versions
"us.anthropic.claude-sonnet-4-5-v1:0",
"us.anthropic.claude-opus-4-5-v1:0",
"us.anthropic.claude-opus-4-6-v1:0",
"us.anthropic.claude-haiku-4-5-v1:0",
"eu.anthropic.claude-sonnet-4-5-v1:0",
"eu.anthropic.claude-opus-4-5-v1:0",
"eu.anthropic.claude-haiku-4-5-v1:0",
"apac.anthropic.claude-sonnet-4-5-v1:0",
"apac.anthropic.claude-opus-4-5-v1:0",
"apac.anthropic.claude-haiku-4-5-v1:0",
# Global inference profiles
"global.anthropic.claude-sonnet-4-5-20250929-v1:0",
"global.anthropic.claude-sonnet-4-20250514-v1:0",
"global.anthropic.claude-opus-4-5-20251101-v1:0",
"global.anthropic.claude-opus-4-6-v1",
"global.anthropic.claude-haiku-4-5-20251001-v1:0",
"global.anthropic.claude-sonnet-4-6",
# Direct model IDs
"ai21.jamba-1-5-large-v1:0",
"ai21.jamba-1-5-mini-v1:0",
"amazon.nova-lite-v1:0",
@@ -496,6 +524,34 @@ BedrockModels: TypeAlias = Literal[
"twelvelabs.pegasus-1-2-v1:0",
]
BEDROCK_MODELS: list[BedrockModels] = [
# Inference profiles (regional) - Claude 4
"us.anthropic.claude-sonnet-4-5-20250929-v1:0",
"us.anthropic.claude-sonnet-4-20250514-v1:0",
"us.anthropic.claude-opus-4-5-20251101-v1:0",
"us.anthropic.claude-opus-4-20250514-v1:0",
"us.anthropic.claude-opus-4-1-20250805-v1:0",
"us.anthropic.claude-haiku-4-5-20251001-v1:0",
"us.anthropic.claude-sonnet-4-6",
"us.anthropic.claude-opus-4-6-v1",
# Inference profiles - shorter versions
"us.anthropic.claude-sonnet-4-5-v1:0",
"us.anthropic.claude-opus-4-5-v1:0",
"us.anthropic.claude-opus-4-6-v1:0",
"us.anthropic.claude-haiku-4-5-v1:0",
"eu.anthropic.claude-sonnet-4-5-v1:0",
"eu.anthropic.claude-opus-4-5-v1:0",
"eu.anthropic.claude-haiku-4-5-v1:0",
"apac.anthropic.claude-sonnet-4-5-v1:0",
"apac.anthropic.claude-opus-4-5-v1:0",
"apac.anthropic.claude-haiku-4-5-v1:0",
# Global inference profiles
"global.anthropic.claude-sonnet-4-5-20250929-v1:0",
"global.anthropic.claude-sonnet-4-20250514-v1:0",
"global.anthropic.claude-opus-4-5-20251101-v1:0",
"global.anthropic.claude-opus-4-6-v1",
"global.anthropic.claude-haiku-4-5-20251001-v1:0",
"global.anthropic.claude-sonnet-4-6",
# Direct model IDs
"ai21.jamba-1-5-large-v1:0",
"ai21.jamba-1-5-mini-v1:0",
"amazon.nova-lite-v1:0",

View File

@@ -11,10 +11,14 @@ from crewai.events.types.llm_events import LLMCallType
from crewai.llms.base_llm import BaseLLM, JsonResponseFormat, llm_call_context
from crewai.llms.hooks.base import BaseInterceptor
from crewai.llms.hooks.transport import AsyncHTTPTransport, HTTPTransport
from crewai.llms.providers.utils.common import safe_tool_conversion
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
)
from crewai.utilities.pydantic_schema_utils import (
sanitize_tool_params_for_anthropic_strict,
)
from crewai.utilities.types import LLMMessage
@@ -189,16 +193,41 @@ class AnthropicCompletion(BaseLLM):
@model_validator(mode="after")
def _init_clients(self) -> AnthropicCompletion:
self._client = Anthropic(**self._get_client_params())
"""Eagerly build clients when the API key is available, otherwise
defer so ``LLM(model="anthropic/...")`` can be constructed at module
import time even before deployment env vars are set.
"""
try:
self._client = self._build_sync_client()
self._async_client = self._build_async_client()
except ValueError:
pass
return self
async_client_params = self._get_client_params()
def _build_sync_client(self) -> Any:
return Anthropic(**self._get_client_params())
def _build_async_client(self) -> Any:
# Skip the sync httpx.Client that `_get_client_params` would
# otherwise construct under `interceptor`; we attach an async one
# below and would leak the sync one if both were built.
async_client_params = self._get_client_params(include_http_client=False)
if self.interceptor:
async_transport = AsyncHTTPTransport(interceptor=self.interceptor)
async_http_client = httpx.AsyncClient(transport=async_transport)
async_client_params["http_client"] = async_http_client
async_client_params["http_client"] = httpx.AsyncClient(
transport=async_transport
)
return AsyncAnthropic(**async_client_params)
self._async_client = AsyncAnthropic(**async_client_params)
return self
def _get_sync_client(self) -> Any:
if self._client is None:
self._client = self._build_sync_client()
return self._client
def _get_async_client(self) -> Any:
if self._async_client is None:
self._async_client = self._build_async_client()
return self._async_client
def to_config_dict(self) -> dict[str, Any]:
"""Extend base config with Anthropic-specific fields."""
@@ -213,8 +242,15 @@ class AnthropicCompletion(BaseLLM):
config["timeout"] = self.timeout
return config
def _get_client_params(self) -> dict[str, Any]:
"""Get client parameters."""
def _get_client_params(self, include_http_client: bool = True) -> dict[str, Any]:
"""Get client parameters.
Args:
include_http_client: When True (default) and an interceptor is
set, attach a sync ``httpx.Client``. The async builder
passes ``False`` so it can attach its own async client
without leaking a sync one.
"""
if self.api_key is None:
self.api_key = os.getenv("ANTHROPIC_API_KEY")
@@ -228,7 +264,7 @@ class AnthropicCompletion(BaseLLM):
"max_retries": self.max_retries,
}
if self.interceptor:
if include_http_client and self.interceptor:
transport = HTTPTransport(interceptor=self.interceptor)
http_client = httpx.Client(transport=transport)
client_params["http_client"] = http_client # type: ignore[assignment]
@@ -473,10 +509,8 @@ class AnthropicCompletion(BaseLLM):
continue
try:
from crewai.llms.providers.utils.common import safe_tool_conversion
name, description, parameters = safe_tool_conversion(tool, "Anthropic")
except (ImportError, KeyError, ValueError) as e:
except (KeyError, ValueError) as e:
logging.error(f"Error converting tool to Anthropic format: {e}")
raise e
@@ -485,8 +519,15 @@ class AnthropicCompletion(BaseLLM):
"description": description,
}
func_info = tool.get("function", {})
strict_enabled = bool(func_info.get("strict"))
if parameters and isinstance(parameters, dict):
anthropic_tool["input_schema"] = parameters
anthropic_tool["input_schema"] = (
sanitize_tool_params_for_anthropic_strict(parameters)
if strict_enabled
else parameters
)
else:
anthropic_tool["input_schema"] = {
"type": "object",
@@ -494,6 +535,9 @@ class AnthropicCompletion(BaseLLM):
"required": [],
}
if strict_enabled:
anthropic_tool["strict"] = True
anthropic_tools.append(anthropic_tool)
return anthropic_tools
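
# Illustrative sketch of the strict-tool branch above: when the OpenAI-style
# tool dict carries function.strict, the input schema goes through a sanitizer
# before being handed to Anthropic. sanitize_params is a hypothetical stand-in
# for sanitize_tool_params_for_anthropic_strict.
from typing import Any

def sanitize_params(params: dict[str, Any]) -> dict[str, Any]:
    return params  # stand-in; the real sanitizer rewrites the JSON schema

def to_anthropic_tool(tool: dict[str, Any]) -> dict[str, Any]:
    func = tool.get("function", {})
    strict = bool(func.get("strict"))
    params = func.get("parameters")
    anthropic_tool: dict[str, Any] = {
        "name": func.get("name", ""),
        "description": func.get("description", ""),
    }
    if params and isinstance(params, dict):
        anthropic_tool["input_schema"] = sanitize_params(params) if strict else params
    else:
        anthropic_tool["input_schema"] = {"type": "object", "properties": {}, "required": []}
    if strict:
        anthropic_tool["strict"] = True
    return anthropic_tool

print(to_anthropic_tool({"function": {"name": "f", "strict": True,
                                      "parameters": {"type": "object"}}}))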
@@ -786,11 +830,11 @@ class AnthropicCompletion(BaseLLM):
try:
if betas:
params["betas"] = betas
response = self._client.beta.messages.create(
response = self._get_sync_client().beta.messages.create(
**params, extra_body=extra_body
)
else:
response = self._client.messages.create(**params)
response = self._get_sync_client().messages.create(**params)
except Exception as e:
if is_context_length_exceeded(e):
@@ -938,9 +982,11 @@ class AnthropicCompletion(BaseLLM):
current_tool_calls: dict[int, dict[str, Any]] = {}
stream_context = (
self._client.beta.messages.stream(**stream_params, extra_body=extra_body)
self._get_sync_client().beta.messages.stream(
**stream_params, extra_body=extra_body
)
if betas
else self._client.messages.stream(**stream_params)
else self._get_sync_client().messages.stream(**stream_params)
)
with stream_context as stream:
response_id = None
@@ -1219,7 +1265,9 @@ class AnthropicCompletion(BaseLLM):
try:
# Send tool results back to Claude for final response
final_response: Message = self._client.messages.create(**follow_up_params)
final_response: Message = self._get_sync_client().messages.create(
**follow_up_params
)
# Track token usage for follow-up call
follow_up_usage = self._extract_anthropic_token_usage(final_response)
@@ -1315,11 +1363,11 @@ class AnthropicCompletion(BaseLLM):
try:
if betas:
params["betas"] = betas
response = await self._async_client.beta.messages.create(
response = await self._get_async_client().beta.messages.create(
**params, extra_body=extra_body
)
else:
response = await self._async_client.messages.create(**params)
response = await self._get_async_client().messages.create(**params)
except Exception as e:
if is_context_length_exceeded(e):
@@ -1453,11 +1501,11 @@ class AnthropicCompletion(BaseLLM):
current_tool_calls: dict[int, dict[str, Any]] = {}
stream_context = (
self._async_client.beta.messages.stream(
self._get_async_client().beta.messages.stream(
**stream_params, extra_body=extra_body
)
if betas
else self._async_client.messages.stream(**stream_params)
else self._get_async_client().messages.stream(**stream_params)
)
async with stream_context as stream:
response_id = None
@@ -1622,7 +1670,7 @@ class AnthropicCompletion(BaseLLM):
]
try:
final_response: Message = await self._async_client.messages.create(
final_response: Message = await self._get_async_client().messages.create(
**follow_up_params
)
@@ -1704,18 +1752,23 @@ class AnthropicCompletion(BaseLLM):
def _extract_anthropic_token_usage(
response: Message | BetaMessage,
) -> dict[str, Any]:
"""Extract token usage from Anthropic response."""
"""Extract token usage and response metadata from Anthropic response."""
if hasattr(response, "usage") and response.usage:
usage = response.usage
input_tokens = getattr(usage, "input_tokens", 0)
output_tokens = getattr(usage, "output_tokens", 0)
cache_read_tokens = getattr(usage, "cache_read_input_tokens", 0) or 0
return {
cache_creation_tokens = (
getattr(usage, "cache_creation_input_tokens", 0) or 0
)
result: dict[str, Any] = {
"input_tokens": input_tokens,
"output_tokens": output_tokens,
"total_tokens": input_tokens + output_tokens,
"cached_prompt_tokens": cache_read_tokens,
"cache_creation_tokens": cache_creation_tokens,
}
return result
return {"total_tokens": 0}
def supports_multimodal(self) -> bool:
@@ -1745,8 +1798,8 @@ class AnthropicCompletion(BaseLLM):
from crewai_files.uploaders.anthropic import AnthropicFileUploader
return AnthropicFileUploader(
client=self._client,
async_client=self._async_client,
client=self._get_sync_client(),
async_client=self._get_async_client(),
)
except ImportError:
return None

View File

@@ -116,43 +116,100 @@ class AzureCompletion(BaseLLM):
data.get("api_version") or os.getenv("AZURE_API_VERSION") or "2024-06-01"
)
if not data["api_key"]:
raise ValueError(
"Azure API key is required. Set AZURE_API_KEY environment variable or pass api_key parameter."
)
if not data["endpoint"]:
raise ValueError(
"Azure endpoint is required. Set AZURE_ENDPOINT environment variable or pass endpoint parameter."
)
# Credentials and endpoint are validated lazily in `_init_clients`
# so the LLM can be constructed before deployment env vars are set.
model = data.get("model", "")
data["endpoint"] = AzureCompletion._validate_and_fix_endpoint(
data["endpoint"], model
if data["endpoint"]:
data["endpoint"] = AzureCompletion._validate_and_fix_endpoint(
data["endpoint"], model
)
data["is_azure_openai_endpoint"] = AzureCompletion._is_azure_openai_endpoint(
data["endpoint"]
)
data["is_openai_model"] = any(
prefix in model.lower() for prefix in ["gpt-", "o1-", "text-"]
)
parsed = urlparse(data["endpoint"])
hostname = parsed.hostname or ""
data["is_azure_openai_endpoint"] = (
hostname == "openai.azure.com" or hostname.endswith(".openai.azure.com")
) and "/openai/deployments/" in data["endpoint"]
return data
@staticmethod
def _is_azure_openai_endpoint(endpoint: str | None) -> bool:
if not endpoint:
return False
hostname = urlparse(endpoint).hostname or ""
return (
hostname == "openai.azure.com" or hostname.endswith(".openai.azure.com")
) and "/openai/deployments/" in endpoint
@model_validator(mode="after")
def _init_clients(self) -> AzureCompletion:
"""Eagerly build clients when credentials are available, otherwise
defer so ``LLM(model="azure/...")`` can be constructed at module
import time even before deployment env vars are set.
"""
try:
self._client = self._build_sync_client()
self._async_client = self._build_async_client()
except ValueError:
pass
return self
def _build_sync_client(self) -> Any:
return ChatCompletionsClient(**self._make_client_kwargs())
def _build_async_client(self) -> Any:
return AsyncChatCompletionsClient(**self._make_client_kwargs())
def _make_client_kwargs(self) -> dict[str, Any]:
# Re-read env vars so that a deferred build can pick up credentials
# that weren't set at instantiation time (e.g. LLM constructed at
# module import before deployment env vars were injected).
if not self.api_key:
raise ValueError("Azure API key is required.")
self.api_key = os.getenv("AZURE_API_KEY")
if not self.endpoint:
endpoint = (
os.getenv("AZURE_ENDPOINT")
or os.getenv("AZURE_OPENAI_ENDPOINT")
or os.getenv("AZURE_API_BASE")
)
if endpoint:
self.endpoint = AzureCompletion._validate_and_fix_endpoint(
endpoint, self.model
)
# Recompute the routing flag now that the endpoint is known —
# _prepare_completion_params uses it to decide whether to
# include `model` in the request body (Azure OpenAI endpoints
# embed the deployment name in the URL and reject it).
self.is_azure_openai_endpoint = (
AzureCompletion._is_azure_openai_endpoint(self.endpoint)
)
if not self.api_key:
raise ValueError(
"Azure API key is required. Set AZURE_API_KEY environment "
"variable or pass api_key parameter."
)
if not self.endpoint:
raise ValueError(
"Azure endpoint is required. Set AZURE_ENDPOINT environment "
"variable or pass endpoint parameter."
)
client_kwargs: dict[str, Any] = {
"endpoint": self.endpoint,
"credential": AzureKeyCredential(self.api_key),
}
if self.api_version:
client_kwargs["api_version"] = self.api_version
return client_kwargs
self._client = ChatCompletionsClient(**client_kwargs)
self._async_client = AsyncChatCompletionsClient(**client_kwargs)
return self
def _get_sync_client(self) -> Any:
if self._client is None:
self._client = self._build_sync_client()
return self._client
def _get_async_client(self) -> Any:
if self._async_client is None:
self._async_client = self._build_async_client()
return self._async_client
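
# Illustrative skeleton of the deferred-construction pattern above: the
# validator builds clients eagerly, swallows only the missing-credential
# ValueError, and the getter re-reads the environment and retries later.
# LazyClientExample is hypothetical; it is not the Azure SDK.
import os

class LazyClientExample:
    def __init__(self) -> None:
        self._client = None
        try:
            self._client = self._build()  # eager when env is ready
        except ValueError:
            pass  # defer: env vars may be injected after import

    def _build(self) -> dict:
        api_key = os.getenv("AZURE_API_KEY")
        endpoint = os.getenv("AZURE_ENDPOINT")
        if not api_key:
            raise ValueError("Azure API key is required.")
        if not endpoint:
            raise ValueError("Azure endpoint is required.")
        return {"endpoint": endpoint, "credential": api_key}

    def client(self) -> dict:
        if self._client is None:
            self._client = self._build()  # retry with the current env
        return self._client

lazy = LazyClientExample()  # constructed before credentials exist
os.environ["AZURE_API_KEY"] = "key"
os.environ["AZURE_ENDPOINT"] = "https://example.openai.azure.com/openai/deployments/d"
print(lazy.client()["endpoint"])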
def to_config_dict(self) -> dict[str, Any]:
"""Extend base config with Azure-specific fields."""
@@ -713,8 +770,7 @@ class AzureCompletion(BaseLLM):
) -> str | Any:
"""Handle non-streaming chat completion."""
try:
# Cast params to Any to avoid type checking issues with TypedDict unpacking
response: ChatCompletions = self._client.complete(**params)
response: ChatCompletions = self._get_sync_client().complete(**params)
return self._process_completion_response(
response=response,
params=params,
@@ -913,7 +969,7 @@ class AzureCompletion(BaseLLM):
tool_calls: dict[int, dict[str, Any]] = {}
usage_data: dict[str, Any] | None = None
for update in self._client.complete(**params):
for update in self._get_sync_client().complete(**params):
if isinstance(update, StreamingChatCompletionsUpdate):
if update.usage:
usage = update.usage
@@ -953,8 +1009,9 @@ class AzureCompletion(BaseLLM):
) -> str | Any:
"""Handle non-streaming chat completion asynchronously."""
try:
# Cast params to Any to avoid type checking issues with TypedDict unpacking
response: ChatCompletions = await self._async_client.complete(**params)
response: ChatCompletions = await self._get_async_client().complete(
**params
)
return self._process_completion_response(
response=response,
params=params,
@@ -980,7 +1037,7 @@ class AzureCompletion(BaseLLM):
usage_data: dict[str, Any] | None = None
stream = await self._async_client.complete(**params)
stream = await self._get_async_client().complete(**params)
async for update in stream:
if isinstance(update, StreamingChatCompletionsUpdate):
if hasattr(update, "usage") and update.usage:
@@ -1076,28 +1133,39 @@ class AzureCompletion(BaseLLM):
@staticmethod
def _extract_azure_token_usage(response: ChatCompletions) -> dict[str, Any]:
"""Extract token usage from Azure response."""
"""Extract token usage and response metadata from Azure response."""
if hasattr(response, "usage") and response.usage:
usage = response.usage
cached_tokens = 0
prompt_details = getattr(usage, "prompt_tokens_details", None)
if prompt_details:
cached_tokens = getattr(prompt_details, "cached_tokens", 0) or 0
return {
reasoning_tokens = 0
completion_details = getattr(usage, "completion_tokens_details", None)
if completion_details:
reasoning_tokens = (
getattr(completion_details, "reasoning_tokens", 0) or 0
)
result: dict[str, Any] = {
"prompt_tokens": getattr(usage, "prompt_tokens", 0),
"completion_tokens": getattr(usage, "completion_tokens", 0),
"total_tokens": getattr(usage, "total_tokens", 0),
"cached_prompt_tokens": cached_tokens,
"reasoning_tokens": reasoning_tokens,
}
return result
return {"total_tokens": 0}
async def aclose(self) -> None:
"""Close the async client and clean up resources.
This ensures proper cleanup of the underlying aiohttp session
to avoid unclosed connector warnings.
to avoid unclosed connector warnings. Accesses the cached client
directly rather than going through `_get_async_client` so a
cleanup on an uninitialized LLM is a harmless no-op rather than
a credential-required error.
"""
if hasattr(self._async_client, "close"):
if self._async_client is not None and hasattr(self._async_client, "close"):
await self._async_client.close()
async def __aenter__(self) -> Self:

View File

@@ -12,6 +12,7 @@ from typing_extensions import Required
from crewai.events.types.llm_events import LLMCallType
from crewai.llms.base_llm import BaseLLM, llm_call_context
from crewai.llms.providers.utils.common import safe_tool_conversion
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
@@ -302,6 +303,22 @@ class BedrockCompletion(BaseLLM):
@model_validator(mode="after")
def _init_clients(self) -> BedrockCompletion:
"""Eagerly build the sync client when AWS credentials resolve,
otherwise defer so ``LLM(model="bedrock/...")`` can be constructed
at module import time even before deployment env vars are set.
Only credential/SDK errors are caught — programming errors like
``TypeError`` or ``AttributeError`` propagate so real bugs aren't
silently swallowed.
"""
try:
self._client = self._build_sync_client()
except (BotoCoreError, ClientError, ValueError) as e:
logging.debug("Deferring Bedrock client construction: %s", e)
self._async_exit_stack = AsyncExitStack() if AIOBOTOCORE_AVAILABLE else None
return self
def _build_sync_client(self) -> Any:
config = Config(
read_timeout=300,
retries={"max_attempts": 3, "mode": "adaptive"},
@@ -313,9 +330,17 @@ class BedrockCompletion(BaseLLM):
aws_session_token=self.aws_session_token,
region_name=self.region_name,
)
self._client = session.client("bedrock-runtime", config=config)
self._async_exit_stack = AsyncExitStack() if AIOBOTOCORE_AVAILABLE else None
return self
return session.client("bedrock-runtime", config=config)
def _get_sync_client(self) -> Any:
if self._client is None:
self._client = self._build_sync_client()
return self._client
def _get_async_client(self) -> Any:
"""Async client is set up separately by ``_ensure_async_client``
using ``aiobotocore`` inside an exit stack."""
return self._async_client
def to_config_dict(self) -> dict[str, Any]:
"""Extend base config with Bedrock-specific fields."""
@@ -655,7 +680,7 @@ class BedrockCompletion(BaseLLM):
raise ValueError(f"Invalid message format at index {i}")
# Call Bedrock Converse API with proper error handling
response = self._client.converse(
response = self._get_sync_client().converse(
modelId=self.model_id,
messages=cast(
"Sequence[MessageTypeDef | MessageOutputTypeDef]",
@@ -944,7 +969,7 @@ class BedrockCompletion(BaseLLM):
usage_data: dict[str, Any] | None = None
try:
response = self._client.converse_stream(
response = self._get_sync_client().converse_stream(
modelId=self.model_id,
messages=cast(
"Sequence[MessageTypeDef | MessageOutputTypeDef]",
@@ -1948,8 +1973,6 @@ class BedrockCompletion(BaseLLM):
tools: list[dict[str, Any]],
) -> list[ConverseToolTypeDef]:
"""Convert CrewAI tools to Converse API format following AWS specification."""
from crewai.llms.providers.utils.common import safe_tool_conversion
converse_tools: list[ConverseToolTypeDef] = []
for tool in tools:
@@ -2025,11 +2048,18 @@ class BedrockCompletion(BaseLLM):
input_tokens = usage.get("inputTokens", 0)
output_tokens = usage.get("outputTokens", 0)
total_tokens = usage.get("totalTokens", input_tokens + output_tokens)
raw_cached = (
usage.get("cacheReadInputTokenCount")
or usage.get("cacheReadInputTokens")
or 0
)
cached_tokens = raw_cached if isinstance(raw_cached, int) else 0
self._token_usage["prompt_tokens"] += input_tokens
self._token_usage["completion_tokens"] += output_tokens
self._token_usage["total_tokens"] += total_tokens
self._token_usage["successful_requests"] += 1
self._token_usage["cached_prompt_tokens"] += cached_tokens
def supports_function_calling(self) -> bool:
"""Check if the model supports function calling."""
@@ -2045,6 +2075,9 @@ class BedrockCompletion(BaseLLM):
# Context window sizes for common Bedrock models
context_windows = {
"anthropic.claude-sonnet-4": 200000,
"anthropic.claude-opus-4": 200000,
"anthropic.claude-haiku-4": 200000,
"anthropic.claude-3-5-sonnet": 200000,
"anthropic.claude-3-5-haiku": 200000,
"anthropic.claude-3-opus": 200000,

View File

@@ -118,9 +118,33 @@ class GeminiCompletion(BaseLLM):
@model_validator(mode="after")
def _init_client(self) -> GeminiCompletion:
self._client = self._initialize_client(self.use_vertexai)
"""Eagerly build the client when credentials resolve, otherwise defer
so ``LLM(model="gemini/...")`` can be constructed at module import time
even before deployment env vars are set.
"""
try:
self._client = self._initialize_client(self.use_vertexai)
except ValueError:
pass
return self
def _get_sync_client(self) -> Any:
if self._client is None:
# Re-read env vars so a deferred build can pick up credentials
# that weren't set at instantiation time.
if not self.api_key:
self.api_key = os.getenv("GOOGLE_API_KEY") or os.getenv(
"GEMINI_API_KEY"
)
if not self.project:
self.project = os.getenv("GOOGLE_CLOUD_PROJECT")
self._client = self._initialize_client(self.use_vertexai)
return self._client
def _get_async_client(self) -> Any:
"""Gemini uses a single client for both sync and async calls."""
return self._get_sync_client()
def to_config_dict(self) -> dict[str, Any]:
"""Extend base config with Gemini/Vertex-specific fields."""
config = super().to_config_dict()
@@ -228,6 +252,7 @@ class GeminiCompletion(BaseLLM):
if (
hasattr(self, "client")
and self._client is not None
and hasattr(self._client, "vertexai")
and self._client.vertexai
):
@@ -951,6 +976,7 @@ class GeminiCompletion(BaseLLM):
"id": call_id,
"name": part.function_call.name,
"args": args_dict,
"raw_part": part,
}
self._emit_stream_chunk_event(
@@ -1035,29 +1061,20 @@ class GeminiCompletion(BaseLLM):
if call_data.get("name") != STRUCTURED_OUTPUT_TOOL_NAME
}
# If there are function calls but no available_functions,
# return them for the executor to handle
if non_structured_output_calls and not available_functions:
formatted_function_calls = [
{
"id": call_data["id"],
"function": {
"name": call_data["name"],
"arguments": json.dumps(call_data["args"]),
},
"type": "function",
}
raw_parts = [
call_data["raw_part"]
for call_data in non_structured_output_calls.values()
]
self._emit_call_completed_event(
response=formatted_function_calls,
response=raw_parts,
call_type=LLMCallType.TOOL_CALL,
from_task=from_task,
from_agent=from_agent,
messages=self._convert_contents_to_dict(contents),
usage=usage_data,
)
return formatted_function_calls
return raw_parts
# Handle completed function calls (excluding structured_output)
if non_structured_output_calls and available_functions:
@@ -1112,7 +1129,7 @@ class GeminiCompletion(BaseLLM):
try:
# The API accepts list[Content] but mypy is overly strict about variance
contents_for_api: Any = contents
response = self._client.models.generate_content(
response = self._get_sync_client().models.generate_content(
model=self.model,
contents=contents_for_api,
config=config,
@@ -1153,7 +1170,7 @@ class GeminiCompletion(BaseLLM):
# The API accepts list[Content] but mypy is overly strict about variance
contents_for_api: Any = contents
for chunk in self._client.models.generate_content_stream(
for chunk in self._get_sync_client().models.generate_content_stream(
model=self.model,
contents=contents_for_api,
config=config,
@@ -1191,7 +1208,7 @@ class GeminiCompletion(BaseLLM):
try:
# The API accepts list[Content] but mypy is overly strict about variance
contents_for_api: Any = contents
response = await self._client.aio.models.generate_content(
response = await self._get_async_client().aio.models.generate_content(
model=self.model,
contents=contents_for_api,
config=config,
@@ -1232,7 +1249,7 @@ class GeminiCompletion(BaseLLM):
# The API accepts list[Content] but mypy is overly strict about variance
contents_for_api: Any = contents
stream = await self._client.aio.models.generate_content_stream(
stream = await self._get_async_client().aio.models.generate_content_stream(
model=self.model,
contents=contents_for_api,
config=config,
@@ -1306,17 +1323,20 @@ class GeminiCompletion(BaseLLM):
@staticmethod
def _extract_token_usage(response: GenerateContentResponse) -> dict[str, Any]:
"""Extract token usage from Gemini response."""
"""Extract token usage and response metadata from Gemini response."""
if response.usage_metadata:
usage = response.usage_metadata
cached_tokens = getattr(usage, "cached_content_token_count", 0) or 0
return {
thinking_tokens = getattr(usage, "thoughts_token_count", 0) or 0
result: dict[str, Any] = {
"prompt_token_count": getattr(usage, "prompt_token_count", 0),
"candidates_token_count": getattr(usage, "candidates_token_count", 0),
"total_token_count": getattr(usage, "total_token_count", 0),
"total_tokens": getattr(usage, "total_token_count", 0),
"cached_prompt_tokens": cached_tokens,
"reasoning_tokens": thinking_tokens,
}
return result
return {"total_tokens": 0}
@staticmethod
@@ -1436,6 +1456,6 @@ class GeminiCompletion(BaseLLM):
try:
from crewai_files.uploaders.gemini import GeminiFileUploader
return GeminiFileUploader(client=self._client)
return GeminiFileUploader(client=self._get_sync_client())
except ImportError:
return None

View File

@@ -32,11 +32,15 @@ from crewai.events.types.llm_events import LLMCallType
from crewai.llms.base_llm import BaseLLM, JsonResponseFormat, llm_call_context
from crewai.llms.hooks.base import BaseInterceptor
from crewai.llms.hooks.transport import AsyncHTTPTransport, HTTPTransport
from crewai.llms.providers.utils.common import safe_tool_conversion
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
)
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.pydantic_schema_utils import (
generate_model_description,
sanitize_tool_params_for_openai_strict,
)
from crewai.utilities.types import LLMMessage
@@ -253,22 +257,40 @@ class OpenAICompletion(BaseLLM):
@model_validator(mode="after")
def _init_clients(self) -> OpenAICompletion:
"""Eagerly build clients when the API key is available, otherwise
defer so ``LLM(model="openai/...")`` can be constructed at module
import time even before deployment env vars are set.
"""
try:
self._client = self._build_sync_client()
self._async_client = self._build_async_client()
except ValueError:
pass
return self
def _build_sync_client(self) -> Any:
client_config = self._get_client_params()
if self.interceptor:
transport = HTTPTransport(interceptor=self.interceptor)
http_client = httpx.Client(transport=transport)
client_config["http_client"] = http_client
client_config["http_client"] = httpx.Client(transport=transport)
return OpenAI(**client_config)
self._client = OpenAI(**client_config)
async_client_config = self._get_client_params()
def _build_async_client(self) -> Any:
client_config = self._get_client_params()
if self.interceptor:
async_transport = AsyncHTTPTransport(interceptor=self.interceptor)
async_http_client = httpx.AsyncClient(transport=async_transport)
async_client_config["http_client"] = async_http_client
transport = AsyncHTTPTransport(interceptor=self.interceptor)
client_config["http_client"] = httpx.AsyncClient(transport=transport)
return AsyncOpenAI(**client_config)
self._async_client = AsyncOpenAI(**async_client_config)
return self
def _get_sync_client(self) -> Any:
if self._client is None:
self._client = self._build_sync_client()
return self._client
def _get_async_client(self) -> Any:
if self._async_client is None:
self._async_client = self._build_async_client()
return self._async_client
@property
def last_response_id(self) -> str | None:
@@ -764,8 +786,6 @@ class OpenAICompletion(BaseLLM):
"function": {"name": "...", "description": "...", "parameters": {...}}
}
"""
from crewai.llms.providers.utils.common import safe_tool_conversion
responses_tools = []
for tool in tools:
@@ -797,7 +817,7 @@ class OpenAICompletion(BaseLLM):
) -> str | ResponsesAPIResult | Any:
"""Handle non-streaming Responses API call."""
try:
response: Response = self._client.responses.create(**params)
response: Response = self._get_sync_client().responses.create(**params)
# Track response ID for auto-chaining
if self.auto_chain and response.id:
@@ -933,7 +953,9 @@ class OpenAICompletion(BaseLLM):
) -> str | ResponsesAPIResult | Any:
"""Handle async non-streaming Responses API call."""
try:
response: Response = await self._async_client.responses.create(**params)
response: Response = await self._get_async_client().responses.create(
**params
)
# Track response ID for auto-chaining
if self.auto_chain and response.id:
@@ -1069,7 +1091,7 @@ class OpenAICompletion(BaseLLM):
final_response: Response | None = None
usage: dict[str, Any] | None = None
stream = self._client.responses.create(**params)
stream = self._get_sync_client().responses.create(**params)
response_id_stream = None
for event in stream:
@@ -1197,7 +1219,7 @@ class OpenAICompletion(BaseLLM):
final_response: Response | None = None
usage: dict[str, Any] | None = None
stream = await self._async_client.responses.create(**params)
stream = await self._get_async_client().responses.create(**params)
response_id_stream = None
async for event in stream:
@@ -1324,19 +1346,23 @@ class OpenAICompletion(BaseLLM):
]
def _extract_responses_token_usage(self, response: Response) -> dict[str, Any]:
"""Extract token usage from Responses API response."""
"""Extract token usage and response metadata from Responses API response."""
if response.usage:
result = {
result: dict[str, Any] = {
"prompt_tokens": response.usage.input_tokens,
"completion_tokens": response.usage.output_tokens,
"total_tokens": response.usage.total_tokens,
}
# Extract cached prompt tokens from input_tokens_details
input_details = getattr(response.usage, "input_tokens_details", None)
if input_details:
result["cached_prompt_tokens"] = (
getattr(input_details, "cached_tokens", 0) or 0
)
output_details = getattr(response.usage, "output_tokens_details", None)
if output_details:
result["reasoning_tokens"] = (
getattr(output_details, "reasoning_tokens", 0) or 0
)
return result
return {"total_tokens": 0}
@@ -1544,11 +1570,6 @@ class OpenAICompletion(BaseLLM):
self, tools: list[dict[str, BaseTool]]
) -> list[dict[str, Any]]:
"""Convert CrewAI tool format to OpenAI function calling format."""
from crewai.llms.providers.utils.common import safe_tool_conversion
from crewai.utilities.pydantic_schema_utils import (
force_additional_properties_false,
)
openai_tools = []
for tool in tools:
@@ -1567,8 +1588,9 @@ class OpenAICompletion(BaseLLM):
params_dict = (
parameters if isinstance(parameters, dict) else dict(parameters)
)
params_dict = force_additional_properties_false(params_dict)
openai_tool["function"]["parameters"] = params_dict
openai_tool["function"]["parameters"] = (
sanitize_tool_params_for_openai_strict(params_dict)
)
openai_tools.append(openai_tool)
return openai_tools
@@ -1587,7 +1609,7 @@ class OpenAICompletion(BaseLLM):
parse_params = {
k: v for k, v in params.items() if k != "response_format"
}
parsed_response = self._client.beta.chat.completions.parse(
parsed_response = self._get_sync_client().beta.chat.completions.parse(
**parse_params,
response_format=response_model,
)
@@ -1611,7 +1633,9 @@ class OpenAICompletion(BaseLLM):
)
return parsed_object
response: ChatCompletion = self._client.chat.completions.create(**params)
response: ChatCompletion = self._get_sync_client().chat.completions.create(
**params
)
usage = self._extract_openai_token_usage(response)
@@ -1838,7 +1862,7 @@ class OpenAICompletion(BaseLLM):
}
stream: ChatCompletionStream[BaseModel]
with self._client.beta.chat.completions.stream(
with self._get_sync_client().beta.chat.completions.stream(
**parse_params, response_format=response_model
) as stream:
for chunk in stream:
@@ -1875,7 +1899,7 @@ class OpenAICompletion(BaseLLM):
return ""
completion_stream: Stream[ChatCompletionChunk] = (
self._client.chat.completions.create(**params)
self._get_sync_client().chat.completions.create(**params)
)
usage_data: dict[str, Any] | None = None
@@ -1972,9 +1996,11 @@ class OpenAICompletion(BaseLLM):
parse_params = {
k: v for k, v in params.items() if k != "response_format"
}
parsed_response = await self._async_client.beta.chat.completions.parse(
**parse_params,
response_format=response_model,
parsed_response = (
await self._get_async_client().beta.chat.completions.parse(
**parse_params,
response_format=response_model,
)
)
math_reasoning = parsed_response.choices[0].message
@@ -1996,8 +2022,8 @@ class OpenAICompletion(BaseLLM):
)
return parsed_object
response: ChatCompletion = await self._async_client.chat.completions.create(
**params
response: ChatCompletion = (
await self._get_async_client().chat.completions.create(**params)
)
usage = self._extract_openai_token_usage(response)
@@ -2123,7 +2149,7 @@ class OpenAICompletion(BaseLLM):
if response_model:
completion_stream: AsyncIterator[
ChatCompletionChunk
] = await self._async_client.chat.completions.create(**params)
] = await self._get_async_client().chat.completions.create(**params)
accumulated_content = ""
usage_data: dict[str, Any] | None = None
@@ -2179,7 +2205,7 @@ class OpenAICompletion(BaseLLM):
stream: AsyncIterator[
ChatCompletionChunk
] = await self._async_client.chat.completions.create(**params)
] = await self._get_async_client().chat.completions.create(**params)
usage_data = None
@@ -2307,20 +2333,24 @@ class OpenAICompletion(BaseLLM):
def _extract_openai_token_usage(
self, response: ChatCompletion | ChatCompletionChunk
) -> dict[str, Any]:
"""Extract token usage from OpenAI ChatCompletion or ChatCompletionChunk response."""
"""Extract token usage and response metadata from OpenAI ChatCompletion."""
if hasattr(response, "usage") and response.usage:
usage = response.usage
result = {
result: dict[str, Any] = {
"prompt_tokens": getattr(usage, "prompt_tokens", 0),
"completion_tokens": getattr(usage, "completion_tokens", 0),
"total_tokens": getattr(usage, "total_tokens", 0),
}
# Extract cached prompt tokens from prompt_tokens_details
prompt_details = getattr(usage, "prompt_tokens_details", None)
if prompt_details:
result["cached_prompt_tokens"] = (
getattr(prompt_details, "cached_tokens", 0) or 0
)
completion_details = getattr(usage, "completion_tokens_details", None)
if completion_details:
result["reasoning_tokens"] = (
getattr(completion_details, "reasoning_tokens", 0) or 0
)
return result
return {"total_tokens": 0}
@@ -2371,8 +2401,8 @@ class OpenAICompletion(BaseLLM):
from crewai_files.uploaders.openai import OpenAIFileUploader
return OpenAIFileUploader(
client=self._client,
async_client=self._async_client,
client=self._get_sync_client(),
async_client=self._get_async_client(),
)
except ImportError:
return None

View File

@@ -417,9 +417,18 @@ class MCPToolResolver:
args_schema = None
if tool_def.get("inputSchema"):
args_schema = self._json_schema_to_pydantic(
tool_name, tool_def["inputSchema"]
)
try:
args_schema = self._json_schema_to_pydantic(
tool_name, tool_def["inputSchema"]
)
except Exception as e:
self._logger.log(
"warning",
f"Failed to build args schema for MCP tool "
f"'{tool_name}': {e}. Registering tool without a "
"typed schema.",
)
args_schema = None
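
# Illustrative sketch of the graceful fallback above: a schema that fails to
# convert no longer aborts tool registration. build_schema is a hypothetical
# stand-in for _json_schema_to_pydantic.
import logging

def build_schema(name: str, input_schema: dict):
    raise ValueError("unsupported JSON schema construct")  # simulate a failure

def resolve_tool(tool_def: dict) -> dict:
    args_schema = None
    if tool_def.get("inputSchema"):
        try:
            args_schema = build_schema(tool_def["name"], tool_def["inputSchema"])
        except Exception as e:
            logging.warning(
                "Failed to build args schema for MCP tool %r: %s. "
                "Registering the tool without a typed schema.",
                tool_def["name"], e,
            )
    return {"description": tool_def.get("description", ""), "args_schema": args_schema}

print(resolve_tool({"name": "search", "inputSchema": {"type": "object"}}))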
tool_schema = {
"description": tool_def.get("description", ""),

View File

@@ -237,6 +237,8 @@ def crew(
self.tasks = instantiated_tasks
crew_instance: Crew = _call_method(meth, self, *args, **kwargs)
if "name" not in crew_instance.model_fields_set:
crew_instance.name = getattr(self, "_crew_name", None) or crew_instance.name
def callback_wrapper(
hook: Callable[Concatenate[CrewInstance, P2], R2], instance: CrewInstance

View File

@@ -2,6 +2,7 @@
from __future__ import annotations
from pathlib import Path
from typing import Annotated, Any, Literal
from pydantic import BaseModel, Field, model_validator
@@ -201,11 +202,20 @@ class CheckpointConfig(BaseModel):
description="Maximum checkpoints to keep. Oldest are pruned after "
"each write. None means keep all.",
)
restore_from: Path | str | None = Field(
default=None,
description="Path or location of a checkpoint to restore from. "
"When passed via a kickoff method's from_checkpoint parameter, "
"the crew or flow resumes from this checkpoint.",
)
@model_validator(mode="after")
def _register_handlers(self) -> CheckpointConfig:
from crewai.state.checkpoint_listener import _ensure_handlers_registered
if isinstance(self.provider, SqliteProvider) and not Path(self.location).suffix:
self.location = f"{self.location}.db"
_ensure_handlers_registered()
return self
@@ -216,3 +226,25 @@ class CheckpointConfig(BaseModel):
@property
def trigger_events(self) -> set[str]:
return set(self.on_events)
def apply_checkpoint(instance: Any, from_checkpoint: CheckpointConfig | None) -> Any:
"""Handle checkpoint config for a kickoff method.
If *from_checkpoint* carries a ``restore_from`` path, builds and returns a
restored instance (with ``restore_from`` cleared). The caller should
dispatch into its own kickoff variant on that restored instance.
If *from_checkpoint* is present but has no ``restore_from``, sets
``instance.checkpoint`` and returns ``None`` (proceed normally).
If *from_checkpoint* is ``None``, returns ``None`` immediately.
"""
if from_checkpoint is None:
return None
if from_checkpoint.restore_from is not None:
restored = type(instance).from_checkpoint(from_checkpoint)
restored.checkpoint = from_checkpoint.model_copy(update={"restore_from": None})
return restored
instance.checkpoint = from_checkpoint
return None
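
# Illustrative sketch of apply_checkpoint's three outcomes, with toy stand-ins
# for the crew/flow class and config (Checkpoint and Runner are hypothetical).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Checkpoint:  # toy stand-in for CheckpointConfig
    restore_from: Optional[str] = None

@dataclass
class Runner:  # toy stand-in for a crew or flow
    checkpoint: Optional[Checkpoint] = None

    @classmethod
    def from_checkpoint(cls, cfg: Checkpoint) -> "Runner":
        return cls()  # pretend state was rebuilt from disk

def apply_checkpoint(instance, cfg):
    if cfg is None:
        return None  # no checkpointing requested
    if cfg.restore_from is not None:
        restored = type(instance).from_checkpoint(cfg)
        restored.checkpoint = Checkpoint(restore_from=None)  # clear restore_from
        return restored  # caller dispatches its kickoff on this instance
    instance.checkpoint = cfg  # enable checkpointing, proceed normally
    return None

r = Runner()
print(apply_checkpoint(r, None))                                # None
print(apply_checkpoint(r, Checkpoint()), r.checkpoint)          # None + cfg set
print(apply_checkpoint(r, Checkpoint(restore_from="ckpt.db")))  # restored Runner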

View File

@@ -7,6 +7,7 @@ avoids per-event overhead when no entity uses checkpointing.
from __future__ import annotations
import json
import logging
import threading
from typing import Any
@@ -102,14 +103,31 @@ def _find_checkpoint(source: Any) -> CheckpointConfig | None:
return None
def _do_checkpoint(state: RuntimeState, cfg: CheckpointConfig) -> None:
def _do_checkpoint(
state: RuntimeState, cfg: CheckpointConfig, event: BaseEvent | None = None
) -> None:
"""Write a checkpoint and prune old ones if configured."""
_prepare_entities(state.root)
data = state.model_dump_json()
cfg.provider.checkpoint(data, cfg.location)
payload = state.model_dump(mode="json")
if event is not None:
payload["trigger"] = event.type
data = json.dumps(payload)
location = cfg.provider.checkpoint(
data,
cfg.location,
parent_id=state._parent_id,
branch=state._branch,
)
state._chain_lineage(cfg.provider, location)
checkpoint_id: str = cfg.provider.extract_id(location)
msg: str = (
f"Checkpoint saved. Resume with: crewai checkpoint resume {checkpoint_id}"
)
logger.info(msg)
if cfg.max_checkpoints is not None:
cfg.provider.prune(cfg.location, cfg.max_checkpoints)
cfg.provider.prune(cfg.location, cfg.max_checkpoints, branch=state._branch)
def _should_checkpoint(source: Any, event: BaseEvent) -> CheckpointConfig | None:
@@ -128,7 +146,7 @@ def _on_any_event(source: Any, event: BaseEvent, state: Any) -> None:
if cfg is None:
return
try:
_do_checkpoint(state, cfg)
_do_checkpoint(state, cfg, event)
except Exception:
logger.warning("Auto-checkpoint failed for event %s", event.type, exc_info=True)

View File

@@ -17,12 +17,21 @@ class BaseProvider(BaseModel, ABC):
provider_type: str = "base"
@abstractmethod
def checkpoint(self, data: str, location: str) -> str:
def checkpoint(
self,
data: str,
location: str,
*,
parent_id: str | None = None,
branch: str = "main",
) -> str:
"""Persist a snapshot synchronously.
Args:
data: The serialized string to persist.
location: Storage destination (directory, file path, URI, etc.).
parent_id: ID of the parent checkpoint for lineage tracking.
branch: Branch label for this checkpoint.
Returns:
A location identifier for the saved checkpoint.
@@ -30,12 +39,21 @@ class BaseProvider(BaseModel, ABC):
...
@abstractmethod
async def acheckpoint(self, data: str, location: str) -> str:
async def acheckpoint(
self,
data: str,
location: str,
*,
parent_id: str | None = None,
branch: str = "main",
) -> str:
"""Persist a snapshot asynchronously.
Args:
data: The serialized string to persist.
location: Storage destination (directory, file path, URI, etc.).
parent_id: ID of the parent checkpoint for lineage tracking.
branch: Branch label for this checkpoint.
Returns:
A location identifier for the saved checkpoint.
@@ -43,12 +61,25 @@ class BaseProvider(BaseModel, ABC):
...
@abstractmethod
def prune(self, location: str, max_keep: int) -> None:
"""Remove old checkpoints, keeping at most *max_keep*.
def prune(self, location: str, max_keep: int, *, branch: str = "main") -> None:
"""Remove old checkpoints, keeping at most *max_keep* per branch.
Args:
location: The storage destination passed to ``checkpoint``.
max_keep: Maximum number of checkpoints to retain.
branch: Only prune checkpoints on this branch.
"""
...
@abstractmethod
def extract_id(self, location: str) -> str:
"""Extract the checkpoint ID from a location string.
Args:
location: The identifier returned by a previous ``checkpoint`` call.
Returns:
The checkpoint ID.
"""
...
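
# Illustrative in-memory provider satisfying the extended interface above:
# keyword-only parent_id/branch on checkpoint, per-branch prune, extract_id.
# InMemoryProvider is hypothetical and skips the async variant for brevity.
from collections import defaultdict

class InMemoryProvider:
    def __init__(self) -> None:
        self._store: dict[str, list[tuple[str, str]]] = defaultdict(list)

    def checkpoint(self, data: str, location: str, *,
                   parent_id: str | None = None, branch: str = "main") -> str:
        ckpt_id = f"{len(self._store[branch]):04d}"
        self._store[branch].append((ckpt_id, data))
        return f"{location}/{branch}/{ckpt_id}"

    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> None:
        self._store[branch] = self._store[branch][-max_keep:] if max_keep else []

    def extract_id(self, location: str) -> str:
        return location.rsplit("/", 1)[-1]

provider = InMemoryProvider()
loc = provider.checkpoint("{}", "mem", branch="experiment")
print(provider.extract_id(loc))  # 0000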

View File

@@ -19,48 +19,87 @@ from crewai.state.provider.core import BaseProvider
logger = logging.getLogger(__name__)
def _safe_branch(base: str, branch: str) -> None:
"""Validate that a branch name doesn't escape the base directory.
Raises:
ValueError: If the branch resolves outside the base directory.
"""
base_resolved = str(Path(base).resolve())
target_resolved = str((Path(base) / branch).resolve())
if (
not target_resolved.startswith(base_resolved + os.sep)
and target_resolved != base_resolved
):
raise ValueError(f"Branch name escapes checkpoint directory: {branch!r}")
class JsonProvider(BaseProvider):
"""Persists runtime state checkpoints as JSON files on the local filesystem."""
provider_type: Literal["json"] = "json"
def checkpoint(self, data: str, location: str) -> str:
def checkpoint(
self,
data: str,
location: str,
*,
parent_id: str | None = None,
branch: str = "main",
) -> str:
"""Write a JSON checkpoint file.
Args:
data: The serialized JSON string to persist.
location: Directory where the checkpoint will be saved.
location: Base directory where checkpoints are saved.
parent_id: ID of the parent checkpoint for lineage tracking.
Encoded in the filename for queryable lineage without
parsing the blob.
branch: Branch label. Files are stored under ``location/branch/``.
Returns:
The path to the written checkpoint file.
"""
file_path = _build_path(location)
file_path = _build_path(location, branch, parent_id)
file_path.parent.mkdir(parents=True, exist_ok=True)
with open(file_path, "w") as f:
f.write(data)
return str(file_path)
async def acheckpoint(self, data: str, location: str) -> str:
async def acheckpoint(
self,
data: str,
location: str,
*,
parent_id: str | None = None,
branch: str = "main",
) -> str:
"""Write a JSON checkpoint file asynchronously.
Args:
data: The serialized JSON string to persist.
location: Directory where the checkpoint will be saved.
location: Base directory where checkpoints are saved.
parent_id: ID of the parent checkpoint for lineage tracking.
Encoded in the filename for queryable lineage without
parsing the blob.
branch: Branch label. Files are stored under ``location/branch/``.
Returns:
The path to the written checkpoint file.
"""
file_path = _build_path(location)
file_path = _build_path(location, branch, parent_id)
await aiofiles.os.makedirs(str(file_path.parent), exist_ok=True)
async with aiofiles.open(file_path, "w") as f:
await f.write(data)
return str(file_path)
def prune(self, location: str, max_keep: int) -> None:
"""Remove oldest checkpoint files beyond *max_keep*."""
pattern = os.path.join(location, "*.json")
def prune(self, location: str, max_keep: int, *, branch: str = "main") -> None:
"""Remove oldest checkpoint files beyond *max_keep* on a branch."""
_safe_branch(location, branch)
branch_dir = os.path.join(location, branch)
pattern = os.path.join(branch_dir, "*.json")
files = sorted(glob.glob(pattern), key=os.path.getmtime)
for path in files if max_keep == 0 else files[:-max_keep]:
try:
@@ -68,6 +107,16 @@ class JsonProvider(BaseProvider):
except OSError: # noqa: PERF203
logger.debug("Failed to remove %s", path, exc_info=True)
def extract_id(self, location: str) -> str:
"""Extract the checkpoint ID from a file path.
The filename format is ``{ts}_{uuid8}_p-{parent}.json``.
The checkpoint ID is the ``{ts}_{uuid8}`` prefix.
"""
stem = Path(location).stem
idx = stem.find("_p-")
return stem[:idx] if idx != -1 else stem
def from_checkpoint(self, location: str) -> str:
"""Read a JSON checkpoint file.
@@ -92,15 +141,24 @@ class JsonProvider(BaseProvider):
return await f.read()
def _build_path(directory: str) -> Path:
"""Build a timestamped checkpoint file path.
def _build_path(
directory: str, branch: str = "main", parent_id: str | None = None
) -> Path:
"""Build a timestamped checkpoint file path under a branch subdirectory.
Filename format: ``{ts}_{uuid8}_p-{parent_id}.json``
Args:
directory: Parent directory for the checkpoint file.
directory: Base directory for checkpoints.
branch: Branch label used as a subdirectory name.
parent_id: Parent checkpoint ID to encode in the filename.
Returns:
The target file path.
"""
_safe_branch(directory, branch)
ts = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
filename = f"{ts}_{uuid.uuid4().hex[:8]}.json"
return Path(directory) / filename
short_uuid = uuid.uuid4().hex[:8]
parent_suffix = parent_id or "none"
filename = f"{ts}_{short_uuid}_p-{parent_suffix}.json"
return Path(directory) / branch / filename

View File
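The filename scheme makes lineage greppable without opening any file. A quick illustration of how extract_id recovers the {ts}_{uuid8} prefix from the _p- suffix described above:

from pathlib import Path

path = Path("/tmp/ckpts/main/20260421T031400_deadbeef_p-none.json")
stem = path.stem                       # "20260421T031400_deadbeef_p-none"
idx = stem.find("_p-")
checkpoint_id = stem[:idx]             # "20260421T031400_deadbeef"
parent_id = stem[idx + len("_p-"):]    # "none" means a root checkpoint
assert checkpoint_id == "20260421T031400_deadbeef"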

@@ -17,15 +17,20 @@ _CREATE_TABLE = """
CREATE TABLE IF NOT EXISTS checkpoints (
id TEXT PRIMARY KEY,
created_at TEXT NOT NULL,
parent_id TEXT,
branch TEXT NOT NULL DEFAULT 'main',
data JSONB NOT NULL
)
"""
_INSERT = "INSERT INTO checkpoints (id, created_at, data) VALUES (?, ?, jsonb(?))"
_INSERT = (
"INSERT INTO checkpoints (id, created_at, parent_id, branch, data) "
"VALUES (?, ?, ?, ?, jsonb(?))"
)
_SELECT = "SELECT json(data) FROM checkpoints WHERE id = ?"
_PRUNE = """
DELETE FROM checkpoints WHERE rowid NOT IN (
SELECT rowid FROM checkpoints ORDER BY rowid DESC LIMIT ?
DELETE FROM checkpoints WHERE branch = ? AND rowid NOT IN (
SELECT rowid FROM checkpoints WHERE branch = ? ORDER BY rowid DESC LIMIT ?
)
"""
@@ -50,12 +55,21 @@ class SqliteProvider(BaseProvider):
provider_type: Literal["sqlite"] = "sqlite"
def checkpoint(self, data: str, location: str) -> str:
def checkpoint(
self,
data: str,
location: str,
*,
parent_id: str | None = None,
branch: str = "main",
) -> str:
"""Write a checkpoint to the SQLite database.
Args:
data: The serialized JSON string to persist.
location: Path to the SQLite database file.
parent_id: ID of the parent checkpoint for lineage tracking.
branch: Branch label for this checkpoint.
Returns:
A location string in the format ``"db_path#checkpoint_id"``.
@@ -65,16 +79,25 @@ class SqliteProvider(BaseProvider):
with sqlite3.connect(location) as conn:
conn.execute("PRAGMA journal_mode=WAL")
conn.execute(_CREATE_TABLE)
conn.execute(_INSERT, (checkpoint_id, ts, data))
conn.execute(_INSERT, (checkpoint_id, ts, parent_id, branch, data))
conn.commit()
return f"{location}#{checkpoint_id}"
async def acheckpoint(self, data: str, location: str) -> str:
async def acheckpoint(
self,
data: str,
location: str,
*,
parent_id: str | None = None,
branch: str = "main",
) -> str:
"""Write a checkpoint to the SQLite database asynchronously.
Args:
data: The serialized JSON string to persist.
location: Path to the SQLite database file.
parent_id: ID of the parent checkpoint for lineage tracking.
branch: Branch label for this checkpoint.
Returns:
A location string in the format ``"db_path#checkpoint_id"``.
@@ -84,16 +107,20 @@ class SqliteProvider(BaseProvider):
async with aiosqlite.connect(location) as db:
await db.execute("PRAGMA journal_mode=WAL")
await db.execute(_CREATE_TABLE)
await db.execute(_INSERT, (checkpoint_id, ts, data))
await db.execute(_INSERT, (checkpoint_id, ts, parent_id, branch, data))
await db.commit()
return f"{location}#{checkpoint_id}"
def prune(self, location: str, max_keep: int) -> None:
"""Remove oldest checkpoint rows beyond *max_keep*."""
def prune(self, location: str, max_keep: int, *, branch: str = "main") -> None:
"""Remove oldest checkpoint rows beyond *max_keep* on a branch."""
with sqlite3.connect(location) as conn:
conn.execute(_PRUNE, (max_keep,))
conn.execute(_PRUNE, (branch, branch, max_keep))
conn.commit()
def extract_id(self, location: str) -> str:
"""Extract the checkpoint ID from a ``db_path#id`` string."""
return location.rsplit("#", 1)[1]
def from_checkpoint(self, location: str) -> str:
"""Read a checkpoint from the SQLite database.

View File
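Because parent_id and branch are now first-class columns, ancestry can be walked in SQL without touching the JSONB blobs. A small sketch against the table layout in _CREATE_TABLE:

import sqlite3

def lineage(db_path: str, checkpoint_id: str) -> list[str]:
    """Return checkpoint ids from the given id back to the root."""
    chain: list[str] = []
    current: str | None = checkpoint_id
    with sqlite3.connect(db_path) as conn:
        while current is not None:
            chain.append(current)
            row = conn.execute(
                "SELECT parent_id FROM checkpoints WHERE id = ?", (current,)
            ).fetchone()
            current = row[0] if row else None
    return chain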

@@ -9,8 +9,11 @@ via ``RuntimeState.model_rebuild()``.
from __future__ import annotations
import logging
from typing import TYPE_CHECKING, Any
import uuid
from packaging.version import Version
from pydantic import (
ModelWrapValidatorHandler,
PrivateAttr,
@@ -20,9 +23,14 @@ from pydantic import (
)
from crewai.context import capture_execution_context
from crewai.state.checkpoint_config import CheckpointConfig
from crewai.state.event_record import EventRecord
from crewai.state.provider.core import BaseProvider
from crewai.state.provider.json_provider import JsonProvider
from crewai.utilities.version import get_crewai_version
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
@@ -36,9 +44,12 @@ def _sync_checkpoint_fields(entity: object) -> None:
entity: The entity whose private runtime attributes will be
copied into its public checkpoint fields.
"""
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.crew import Crew
from crewai.flow.flow import Flow
if isinstance(entity, BaseAgent):
entity.checkpoint_kickoff_event_id = entity._kickoff_event_id
if isinstance(entity, Flow):
entity.checkpoint_completed_methods = (
set(entity._completed_methods) if entity._completed_methods else None
@@ -58,12 +69,51 @@ def _sync_checkpoint_fields(entity: object) -> None:
entity.checkpoint_inputs = entity._inputs
entity.checkpoint_train = entity._train
entity.checkpoint_kickoff_event_id = entity._kickoff_event_id
for task in entity.tasks:
task.checkpoint_original_description = task._original_description
task.checkpoint_original_expected_output = task._original_expected_output
def _migrate(data: dict[str, Any]) -> dict[str, Any]:
"""Apply version-based migrations to checkpoint data.
Each block handles checkpoints older than a specific version,
transforming them forward to the current format. Blocks run in
version order so migrations compose.
Args:
data: The raw deserialized checkpoint dict.
Returns:
The migrated checkpoint dict.
"""
raw = data.get("crewai_version")
current = Version(get_crewai_version())
stored = Version(raw) if raw else Version("0.0.0")
if raw is None:
logger.warning("Checkpoint has no crewai_version — treating as 0.0.0")
elif stored != current:
logger.debug(
"Migrating checkpoint from crewAI %s to %s",
stored,
current,
)
# --- migrations in version order ---
# if stored < Version("X.Y.Z"):
# data.setdefault("some_field", "default")
return data
class RuntimeState(RootModel): # type: ignore[type-arg]
root: list[Entity]
_provider: BaseProvider = PrivateAttr(default_factory=JsonProvider)
_event_record: EventRecord = PrivateAttr(default_factory=EventRecord)
_checkpoint_id: str | None = PrivateAttr(default=None)
_parent_id: str | None = PrivateAttr(default=None)
_branch: str = PrivateAttr(default="main")
@property
def event_record(self) -> EventRecord:
@@ -73,8 +123,11 @@ class RuntimeState(RootModel): # type: ignore[type-arg]
@model_serializer(mode="plain")
def _serialize(self) -> dict[str, Any]:
return {
"crewai_version": get_crewai_version(),
"parent_id": self._parent_id,
"branch": self._branch,
"entities": [e.model_dump(mode="json") for e in self.root],
"event_record": self._event_record.model_dump(),
"event_record": self._event_record.model_dump(mode="json"),
}
@model_validator(mode="wrap")
@@ -83,13 +136,29 @@ class RuntimeState(RootModel): # type: ignore[type-arg]
cls, data: Any, handler: ModelWrapValidatorHandler[RuntimeState]
) -> RuntimeState:
if isinstance(data, dict) and "entities" in data:
data = _migrate(data)
record_data = data.get("event_record")
state = handler(data["entities"])
if record_data:
state._event_record = EventRecord.model_validate(record_data)
state._parent_id = data.get("parent_id")
state._branch = data.get("branch", "main")
return state
return handler(data)
def _chain_lineage(self, provider: BaseProvider, location: str) -> None:
"""Update lineage fields after a successful checkpoint write.
Sets ``_checkpoint_id`` and ``_parent_id`` so the next write
records the correct parent in the lineage chain.
Args:
provider: The provider that performed the write.
location: The location string returned by the provider.
"""
self._checkpoint_id = provider.extract_id(location)
self._parent_id = self._checkpoint_id
def checkpoint(self, location: str) -> str:
"""Write a checkpoint.
@@ -101,7 +170,14 @@ class RuntimeState(RootModel): # type: ignore[type-arg]
A location identifier for the saved checkpoint.
"""
_prepare_entities(self.root)
return self._provider.checkpoint(self.model_dump_json(), location)
result = self._provider.checkpoint(
self.model_dump_json(),
location,
parent_id=self._parent_id,
branch=self._branch,
)
self._chain_lineage(self._provider, result)
return result
async def acheckpoint(self, location: str) -> str:
"""Async version of :meth:`checkpoint`.
@@ -114,41 +190,84 @@ class RuntimeState(RootModel): # type: ignore[type-arg]
A location identifier for the saved checkpoint.
"""
_prepare_entities(self.root)
return await self._provider.acheckpoint(self.model_dump_json(), location)
result = await self._provider.acheckpoint(
self.model_dump_json(),
location,
parent_id=self._parent_id,
branch=self._branch,
)
self._chain_lineage(self._provider, result)
return result
def fork(self, branch: str | None = None) -> None:
"""Create a new execution branch and write an initial checkpoint.
If this state was restored from a checkpoint, an initial checkpoint
is written on the new branch so the fork point is recorded.
Args:
branch: Branch label. Auto-generated from the current checkpoint
ID if not provided. Always unique — safe to call multiple
times without collisions.
"""
if branch:
self._branch = branch
elif self._checkpoint_id:
self._branch = f"fork/{self._checkpoint_id}_{uuid.uuid4().hex[:6]}"
else:
self._branch = f"fork/{uuid.uuid4().hex[:8]}"
@classmethod
def from_checkpoint(
cls, location: str, provider: BaseProvider, **kwargs: Any
) -> RuntimeState:
def from_checkpoint(cls, config: CheckpointConfig, **kwargs: Any) -> RuntimeState:
"""Restore a RuntimeState from a checkpoint.
Args:
location: The identifier returned by a previous ``checkpoint`` call.
provider: The storage backend to read from.
config: Checkpoint configuration with ``restore_from`` set.
**kwargs: Passed to ``model_validate_json``.
Returns:
A restored RuntimeState.
"""
from crewai.state.provider.utils import detect_provider
if config.restore_from is None:
raise ValueError("CheckpointConfig.restore_from must be set")
location = str(config.restore_from)
provider = detect_provider(location)
raw = provider.from_checkpoint(location)
return cls.model_validate_json(raw, **kwargs)
state = cls.model_validate_json(raw, **kwargs)
state._provider = provider
checkpoint_id = provider.extract_id(location)
state._checkpoint_id = checkpoint_id
state._parent_id = checkpoint_id
return state
@classmethod
async def afrom_checkpoint(
cls, location: str, provider: BaseProvider, **kwargs: Any
cls, config: CheckpointConfig, **kwargs: Any
) -> RuntimeState:
"""Async version of :meth:`from_checkpoint`.
Args:
location: The identifier returned by a previous ``acheckpoint`` call.
provider: The storage backend to read from.
config: Checkpoint configuration with ``restore_from`` set.
**kwargs: Passed to ``model_validate_json``.
Returns:
A restored RuntimeState.
"""
from crewai.state.provider.utils import detect_provider
if config.restore_from is None:
raise ValueError("CheckpointConfig.restore_from must be set")
location = str(config.restore_from)
provider = detect_provider(location)
raw = await provider.afrom_checkpoint(location)
return cls.model_validate_json(raw, **kwargs)
state = cls.model_validate_json(raw, **kwargs)
state._provider = provider
checkpoint_id = provider.extract_id(location)
state._checkpoint_id = checkpoint_id
state._parent_id = checkpoint_id
return state
def _prepare_entities(root: list[Entity]) -> None:

View File
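The config-driven restore plus fork() composes into a short round trip. A sketch with an assumed RuntimeState import path; CheckpointConfig fields follow the signatures above, and detect_provider infers the backend from the restore_from path:

from crewai.state.checkpoint_config import CheckpointConfig
from crewai.state.runtime_state import RuntimeState  # import path assumed

cfg = CheckpointConfig(
    restore_from="/tmp/ckpts/main/20260421T031400_deadbeef_p-none.json"
)
state = RuntimeState.from_checkpoint(cfg)
# Lineage is primed: the restored checkpoint becomes the parent of the
# next write. fork() switches to a fresh, collision-free branch label.
state.fork()  # e.g. "fork/20260421T031400_deadbeef_a1b2c3"
new_location = state.checkpoint("/tmp/ckpts")  # first write on the fork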

@@ -32,6 +32,7 @@ from pydantic import (
field_validator,
model_validator,
)
from pydantic.functional_serializers import PlainSerializer
from pydantic_core import PydanticCustomError
from typing_extensions import Self
@@ -45,6 +46,7 @@ from crewai.events.types.task_events import (
TaskStartedEvent,
)
from crewai.llms.base_llm import BaseLLM
from crewai.llms.providers.openai.completion import OpenAICompletion
from crewai.security import Fingerprint, SecurityConfig
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput
@@ -85,6 +87,22 @@ from crewai.utilities.printer import PRINTER
from crewai.utilities.string_utils import interpolate_only
def _serialize_model_class(v: type[BaseModel] | None) -> dict[str, Any] | None:
"""Serialize a Pydantic model class reference to its JSON schema."""
return v.model_json_schema() if v else None
def _deserialize_model_class(v: Any) -> type[BaseModel] | None:
"""Hydrate a model class reference from checkpoint data."""
if v is None or isinstance(v, type):
return v
if isinstance(v, dict):
from crewai.utilities.pydantic_schema_utils import create_model_from_schema
return create_model_from_schema(v)
return None
class Task(BaseModel):
"""Class that represents a task to be executed.
@@ -140,15 +158,33 @@ class Task(BaseModel):
description="Whether the task should be executed asynchronously or not.",
default=False,
)
output_json: type[BaseModel] | None = Field(
output_json: Annotated[
type[BaseModel] | None,
BeforeValidator(_deserialize_model_class),
PlainSerializer(
_serialize_model_class, return_type=dict | None, when_used="json"
),
] = Field(
description="A Pydantic model to be used to create a JSON output.",
default=None,
)
output_pydantic: type[BaseModel] | None = Field(
output_pydantic: Annotated[
type[BaseModel] | None,
BeforeValidator(_deserialize_model_class),
PlainSerializer(
_serialize_model_class, return_type=dict | None, when_used="json"
),
] = Field(
description="A Pydantic model to be used to create a Pydantic output.",
default=None,
)
response_model: type[BaseModel] | None = Field(
response_model: Annotated[
type[BaseModel] | None,
BeforeValidator(_deserialize_model_class),
PlainSerializer(
_serialize_model_class, return_type=dict | None, when_used="json"
),
] = Field(
description="A Pydantic model for structured LLM outputs using native provider features.",
default=None,
)
@@ -188,7 +224,13 @@ class Task(BaseModel):
description="Whether the task should instruct the agent to return the final answer formatted in Markdown",
default=False,
)
converter_cls: type[Converter] | None = Field(
converter_cls: Annotated[
type[Converter] | None,
BeforeValidator(lambda v: v if v is None or isinstance(v, type) else None),
PlainSerializer(
_serialize_model_class, return_type=dict | None, when_used="json"
),
] = Field(
description="A converter class used to export structured output",
default=None,
)
@@ -230,6 +272,8 @@ class Task(BaseModel):
_original_description: str | None = PrivateAttr(default=None)
_original_expected_output: str | None = PrivateAttr(default=None)
_original_output_file: str | None = PrivateAttr(default=None)
checkpoint_original_description: str | None = Field(default=None, exclude=False)
checkpoint_original_expected_output: str | None = Field(default=None, exclude=False)
_thread: threading.Thread | None = PrivateAttr(default=None)
model_config = {"arbitrary_types_allowed": True}
@@ -299,12 +343,14 @@ class Task(BaseModel):
@model_validator(mode="after")
def validate_required_fields(self) -> Self:
required_fields = ["description", "expected_output"]
for field in required_fields:
if getattr(self, field) is None:
raise ValueError(
f"{field} must be provided either directly or through config"
)
if self.description is None:
raise ValueError(
"description must be provided either directly or through config"
)
if self.expected_output is None:
raise ValueError(
"expected_output must be provided either directly or through config"
)
return self
@model_validator(mode="after")
@@ -836,8 +882,8 @@ class Task(BaseModel):
should_inject = self.allow_crewai_trigger_context
if should_inject and self.agent:
crew = getattr(self.agent, "crew", None)
if crew and hasattr(crew, "_inputs") and crew._inputs:
crew = self.agent.crew
if crew and not isinstance(crew, str) and crew._inputs:
trigger_payload = crew._inputs.get("crewai_trigger_payload")
if trigger_payload is not None:
description += f"\n\nTrigger Payload: {trigger_payload}"
@@ -850,11 +896,12 @@ class Task(BaseModel):
isinstance(self.agent.llm, BaseLLM)
and self.agent.llm.supports_multimodal()
):
provider: str = str(
getattr(self.agent.llm, "provider", None)
or getattr(self.agent.llm, "model", "openai")
)
api: str | None = getattr(self.agent.llm, "api", None)
provider: str = self.agent.llm.provider or self.agent.llm.model
api: str | None = (
self.agent.llm.api
if isinstance(self.agent.llm, OpenAICompletion)
else None
)
supported_types = get_supported_content_types(provider, api)
def is_auto_injected(content_type: str) -> bool:
@@ -1235,12 +1282,26 @@ Follow these guidelines:
tools=tools,
)
pydantic_output, json_output = self._export_output(result)
if isinstance(result, BaseModel):
raw = result.model_dump_json()
if self.output_pydantic:
pydantic_output = result
json_output = None
elif self.output_json:
pydantic_output = None
json_output = result.model_dump()
else:
pydantic_output = None
json_output = None
else:
raw = result
pydantic_output, json_output = self._export_output(result)
task_output = TaskOutput(
name=self.name or self.description,
description=self.description,
expected_output=self.expected_output,
raw=result,
raw=raw,
pydantic=pydantic_output,
json_dict=json_output,
agent=agent.role,
@@ -1331,12 +1392,26 @@ Follow these guidelines:
tools=tools,
)
pydantic_output, json_output = self._export_output(result)
if isinstance(result, BaseModel):
raw = result.model_dump_json()
if self.output_pydantic:
pydantic_output = result
json_output = None
elif self.output_json:
pydantic_output = None
json_output = result.model_dump()
else:
pydantic_output = None
json_output = None
else:
raw = result
pydantic_output, json_output = self._export_output(result)
task_output = TaskOutput(
name=self.name or self.description,
description=self.description,
expected_output=self.expected_output,
raw=result,
raw=raw,
pydantic=pydantic_output,
json_dict=json_output,
agent=agent.role,

View File
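The Annotated serializer/validator pairs above are what let model-class fields survive a JSON checkpoint. A minimal sketch of the round trip, reusing create_model_from_schema from later in this diff:

from pydantic import BaseModel
from crewai.utilities.pydantic_schema_utils import create_model_from_schema

class Report(BaseModel):
    title: str
    score: float

# What _serialize_model_class writes into the checkpoint:
schema = Report.model_json_schema()
# What _deserialize_model_class rebuilds on restore:
Rebuilt = create_model_from_schema(schema)
print(Rebuilt(title="q3", score=0.9).model_dump())  # {'title': 'q3', 'score': 0.9}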

@@ -1058,3 +1058,20 @@ class Telemetry:
close_span(span)
self._safe_telemetry_operation(_operation)
def template_installed_span(self, template_name: str) -> None:
"""Records when a template is downloaded and installed.
Args:
template_name: Name of the template that was installed
(without the template_ prefix).
"""
def _operation() -> None:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Template Installed")
self._add_attribute(span, "crewai_version", version("crewai"))
self._add_attribute(span, "template_name", template_name)
close_span(span)
self._safe_telemetry_operation(_operation)

View File
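For reference, the new span fires like the other helpers on this class (a hypothetical call site; the real caller is presumably the CLI template installer):

from crewai.telemetry import Telemetry  # import path assumed

Telemetry().template_installed_span("crew_basic")  # name without the template_ prefix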

@@ -29,6 +29,14 @@ class UsageMetrics(BaseModel):
completion_tokens: int = Field(
default=0, description="Number of tokens used in completions."
)
reasoning_tokens: int = Field(
default=0,
description="Number of reasoning/thinking tokens (e.g. OpenAI o-series, Gemini thinking).",
)
cache_creation_tokens: int = Field(
default=0,
description="Number of cache creation tokens (e.g. Anthropic cache writes).",
)
successful_requests: int = Field(
default=0, description="Number of successful requests made."
)
@@ -43,4 +51,6 @@ class UsageMetrics(BaseModel):
self.prompt_tokens += usage_metrics.prompt_tokens
self.cached_prompt_tokens += usage_metrics.cached_prompt_tokens
self.completion_tokens += usage_metrics.completion_tokens
self.reasoning_tokens += usage_metrics.reasoning_tokens
self.cache_creation_tokens += usage_metrics.cache_creation_tokens
self.successful_requests += usage_metrics.successful_requests

View File
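The two new counters aggregate exactly like the existing ones. A quick check (the summing method's name is assumed, since only its body appears in the hunk; import path assumed):

from crewai.types.usage_metrics import UsageMetrics  # import path assumed

total = UsageMetrics()
total.add_usage_metrics(UsageMetrics(prompt_tokens=100, reasoning_tokens=40))
total.add_usage_metrics(UsageMetrics(completion_tokens=25, cache_creation_tokens=10))
assert (total.reasoning_tokens, total.cache_creation_tokens) == (40, 10)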

@@ -19,7 +19,18 @@ from collections.abc import Callable
from copy import deepcopy
import datetime
import logging
from typing import TYPE_CHECKING, Annotated, Any, Final, Literal, TypedDict, Union
from typing import (
TYPE_CHECKING,
Annotated,
Any,
Final,
ForwardRef,
Literal,
Optional,
TypedDict,
Union,
cast,
)
import uuid
import jsonref # type: ignore[import-untyped]
@@ -99,15 +110,26 @@ def resolve_refs(schema: dict[str, Any]) -> dict[str, Any]:
"""
defs = schema.get("$defs", {})
schema_copy = deepcopy(schema)
expanding: set[str] = set()
def _resolve(node: Any) -> Any:
if isinstance(node, dict):
ref = node.get("$ref")
if isinstance(ref, str) and ref.startswith("#/$defs/"):
def_name = ref.replace("#/$defs/", "")
if def_name in defs:
if def_name not in defs:
raise KeyError(f"Definition '{def_name}' not found in $defs.")
if def_name in expanding:
def_schema = defs[def_name]
stub: dict[str, Any] = {"type": def_schema.get("type", "object")}
if "description" in def_schema:
stub["description"] = def_schema["description"]
return stub
expanding.add(def_name)
try:
return _resolve(deepcopy(defs[def_name]))
raise KeyError(f"Definition '{def_name}' not found in $defs.")
finally:
expanding.discard(def_name)
return {k: _resolve(v) for k, v in node.items()}
if isinstance(node, list):
@@ -119,7 +141,11 @@ def resolve_refs(schema: dict[str, Any]) -> dict[str, Any]:
def add_key_in_dict_recursively(
d: dict[str, Any], key: str, value: Any, criteria: Callable[[dict[str, Any]], bool]
d: dict[str, Any],
key: str,
value: Any,
criteria: Callable[[dict[str, Any]], bool],
_seen: set[int] | None = None,
) -> dict[str, Any]:
"""Recursively adds a key/value pair to all nested dicts matching `criteria`.
@@ -128,22 +154,31 @@ def add_key_in_dict_recursively(
key: The key to add.
value: The value to add.
criteria: A function that returns True for dicts that should receive the key.
_seen: Internal set of visited ``id()``s, used to guard cyclic schemas.
Returns:
The modified dictionary.
"""
if _seen is None:
_seen = set()
if isinstance(d, dict):
if id(d) in _seen:
return d
_seen.add(id(d))
if criteria(d) and key not in d:
d[key] = value
for v in d.values():
add_key_in_dict_recursively(v, key, value, criteria)
add_key_in_dict_recursively(v, key, value, criteria, _seen)
elif isinstance(d, list):
if id(d) in _seen:
return d
_seen.add(id(d))
for i in d:
add_key_in_dict_recursively(i, key, value, criteria)
add_key_in_dict_recursively(i, key, value, criteria, _seen)
return d
def force_additional_properties_false(d: Any) -> Any:
def force_additional_properties_false(d: Any, _seen: set[int] | None = None) -> Any:
"""Force additionalProperties=false on all object-type dicts recursively.
OpenAI strict mode requires all objects to have additionalProperties=false.
@@ -154,11 +189,17 @@ def force_additional_properties_false(d: Any) -> Any:
Args:
d: The dictionary/list to modify.
_seen: Internal set of visited ``id()``s, used to guard cyclic schemas.
Returns:
The modified dictionary/list.
"""
if _seen is None:
_seen = set()
if isinstance(d, dict):
if id(d) in _seen:
return d
_seen.add(id(d))
if d.get("type") == "object":
d["additionalProperties"] = False
if "properties" not in d:
@@ -166,10 +207,13 @@ def force_additional_properties_false(d: Any) -> Any:
if "required" not in d:
d["required"] = []
for v in d.values():
force_additional_properties_false(v)
force_additional_properties_false(v, _seen)
elif isinstance(d, list):
if id(d) in _seen:
return d
_seen.add(id(d))
for i in d:
force_additional_properties_false(i)
force_additional_properties_false(i, _seen)
return d
@@ -183,7 +227,7 @@ OPENAI_SUPPORTED_FORMATS: Final[
}
def strip_unsupported_formats(d: Any) -> Any:
def strip_unsupported_formats(d: Any, _seen: set[int] | None = None) -> Any:
"""Remove format annotations that OpenAI strict mode doesn't support.
OpenAI only supports: date-time, date, time, duration.
@@ -191,11 +235,17 @@ def strip_unsupported_formats(d: Any) -> Any:
Args:
d: The dictionary/list to modify.
_seen: Internal set of visited ``id()``s, used to guard cyclic schemas.
Returns:
The modified dictionary/list.
"""
if _seen is None:
_seen = set()
if isinstance(d, dict):
if id(d) in _seen:
return d
_seen.add(id(d))
format_value = d.get("format")
if (
isinstance(format_value, str)
@@ -203,14 +253,17 @@ def strip_unsupported_formats(d: Any) -> Any:
):
del d["format"]
for v in d.values():
strip_unsupported_formats(v)
strip_unsupported_formats(v, _seen)
elif isinstance(d, list):
if id(d) in _seen:
return d
_seen.add(id(d))
for i in d:
strip_unsupported_formats(i)
strip_unsupported_formats(i, _seen)
return d
def ensure_type_in_schemas(d: Any) -> Any:
def ensure_type_in_schemas(d: Any, _seen: set[int] | None = None) -> Any:
"""Ensure all schema objects in anyOf/oneOf have a 'type' key.
OpenAI strict mode requires every schema to have a 'type' key.
@@ -218,11 +271,17 @@ def ensure_type_in_schemas(d: Any) -> Any:
Args:
d: The dictionary/list to modify.
_seen: Internal set of visited ``id()``s, used to guard cyclic schemas.
Returns:
The modified dictionary/list.
"""
if _seen is None:
_seen = set()
if isinstance(d, dict):
if id(d) in _seen:
return d
_seen.add(id(d))
for key in ("anyOf", "oneOf"):
if key in d:
schema_list = d[key]
@@ -230,12 +289,15 @@ def ensure_type_in_schemas(d: Any) -> Any:
if isinstance(schema, dict) and schema == {}:
schema_list[i] = {"type": "object"}
else:
ensure_type_in_schemas(schema)
ensure_type_in_schemas(schema, _seen)
for v in d.values():
ensure_type_in_schemas(v)
ensure_type_in_schemas(v, _seen)
elif isinstance(d, list):
if id(d) in _seen:
return d
_seen.add(id(d))
for item in d:
ensure_type_in_schemas(item)
ensure_type_in_schemas(item, _seen)
return d
@@ -318,7 +380,9 @@ def add_const_to_oneof_variants(schema: dict[str, Any]) -> dict[str, Any]:
return _process_oneof(deepcopy(schema))
def convert_oneof_to_anyof(schema: dict[str, Any]) -> dict[str, Any]:
def convert_oneof_to_anyof(
schema: dict[str, Any], _seen: set[int] | None = None
) -> dict[str, Any]:
"""Convert oneOf to anyOf for OpenAI compatibility.
OpenAI's Structured Outputs support anyOf better than oneOf.
@@ -326,26 +390,37 @@ def convert_oneof_to_anyof(schema: dict[str, Any]) -> dict[str, Any]:
Args:
schema: JSON schema dictionary.
_seen: Internal set of visited ``id()``s, used to guard cyclic schemas.
Returns:
Modified schema with anyOf instead of oneOf.
"""
if _seen is None:
_seen = set()
if isinstance(schema, dict):
if id(schema) in _seen:
return schema
_seen.add(id(schema))
if "oneOf" in schema:
schema["anyOf"] = schema.pop("oneOf")
for value in schema.values():
if isinstance(value, dict):
convert_oneof_to_anyof(value)
convert_oneof_to_anyof(value, _seen)
elif isinstance(value, list):
if id(value) in _seen:
continue
_seen.add(id(value))
for item in value:
if isinstance(item, dict):
convert_oneof_to_anyof(item)
convert_oneof_to_anyof(item, _seen)
return schema
def ensure_all_properties_required(schema: dict[str, Any]) -> dict[str, Any]:
def ensure_all_properties_required(
schema: dict[str, Any], _seen: set[int] | None = None
) -> dict[str, Any]:
"""Ensure all properties are in the required array for OpenAI strict mode.
OpenAI's strict structured outputs require all properties to be listed
@@ -354,11 +429,17 @@ def ensure_all_properties_required(schema: dict[str, Any]) -> dict[str, Any]:
Args:
schema: JSON schema dictionary.
_seen: Internal set of visited ``id()``s, used to guard cyclic schemas.
Returns:
Modified schema with all properties marked as required.
"""
if _seen is None:
_seen = set()
if isinstance(schema, dict):
if id(schema) in _seen:
return schema
_seen.add(id(schema))
if schema.get("type") == "object" and "properties" in schema:
properties = schema["properties"]
if properties:
@@ -366,16 +447,21 @@ def ensure_all_properties_required(schema: dict[str, Any]) -> dict[str, Any]:
for value in schema.values():
if isinstance(value, dict):
ensure_all_properties_required(value)
ensure_all_properties_required(value, _seen)
elif isinstance(value, list):
if id(value) in _seen:
continue
_seen.add(id(value))
for item in value:
if isinstance(item, dict):
ensure_all_properties_required(item)
ensure_all_properties_required(item, _seen)
return schema
def strip_null_from_types(schema: dict[str, Any]) -> dict[str, Any]:
def strip_null_from_types(
schema: dict[str, Any], _seen: set[int] | None = None
) -> dict[str, Any]:
"""Remove null type from anyOf/type arrays.
Pydantic generates `T | None` for optional fields, which creates schemas with
@@ -384,11 +470,17 @@ def strip_null_from_types(schema: dict[str, Any]) -> dict[str, Any]:
Args:
schema: JSON schema dictionary.
_seen: Internal set of visited ``id()``s, used to guard cyclic schemas.
Returns:
Modified schema with null types removed.
"""
if _seen is None:
_seen = set()
if isinstance(schema, dict):
if id(schema) in _seen:
return schema
_seen.add(id(schema))
if "anyOf" in schema:
any_of = schema["anyOf"]
non_null = [opt for opt in any_of if opt.get("type") != "null"]
@@ -408,15 +500,141 @@ def strip_null_from_types(schema: dict[str, Any]) -> dict[str, Any]:
for value in schema.values():
if isinstance(value, dict):
strip_null_from_types(value)
strip_null_from_types(value, _seen)
elif isinstance(value, list):
if id(value) in _seen:
continue
_seen.add(id(value))
for item in value:
if isinstance(item, dict):
strip_null_from_types(item)
strip_null_from_types(item, _seen)
return schema
_STRICT_METADATA_KEYS: Final[tuple[str, ...]] = (
"title",
"default",
"examples",
"example",
"$comment",
"readOnly",
"writeOnly",
"deprecated",
)
_CLAUDE_STRICT_UNSUPPORTED: Final[tuple[str, ...]] = (
"minimum",
"maximum",
"exclusiveMinimum",
"exclusiveMaximum",
"multipleOf",
"minLength",
"maxLength",
"pattern",
"minItems",
"maxItems",
"uniqueItems",
"minContains",
"maxContains",
"minProperties",
"maxProperties",
"patternProperties",
"propertyNames",
"dependentRequired",
"dependentSchemas",
)
def _strip_keys_recursive(
d: Any, keys: tuple[str, ...], _seen: set[int] | None = None
) -> Any:
"""Recursively delete a fixed set of keys from a schema."""
if _seen is None:
_seen = set()
if isinstance(d, dict):
if id(d) in _seen:
return d
_seen.add(id(d))
for key in keys:
d.pop(key, None)
for v in d.values():
_strip_keys_recursive(v, keys, _seen)
elif isinstance(d, list):
if id(d) in _seen:
return d
_seen.add(id(d))
for i in d:
_strip_keys_recursive(i, keys, _seen)
return d
def lift_top_level_anyof(schema: dict[str, Any]) -> dict[str, Any]:
"""Unwrap a top-level anyOf/oneOf/allOf wrapping a single object variant.
Anthropic's strict ``input_schema`` rejects top-level union keywords. When
exactly one variant is an object schema, lift it so the root is a plain
object; otherwise leave the schema alone.
"""
for key in ("anyOf", "oneOf", "allOf"):
variants = schema.get(key)
if not isinstance(variants, list):
continue
object_variants = [
v for v in variants if isinstance(v, dict) and v.get("type") == "object"
]
if len(object_variants) == 1:
lifted = deepcopy(object_variants[0])
schema.pop(key)
schema.update(lifted)
break
return schema
def _common_strict_pipeline(params: dict[str, Any]) -> dict[str, Any]:
"""Shared strict sanitization: inline refs, close objects, require all properties."""
sanitized = resolve_refs(deepcopy(params))
sanitized.pop("$defs", None)
sanitized = convert_oneof_to_anyof(sanitized)
sanitized = ensure_type_in_schemas(sanitized)
sanitized = force_additional_properties_false(sanitized)
sanitized = ensure_all_properties_required(sanitized)
return cast(dict[str, Any], _strip_keys_recursive(sanitized, _STRICT_METADATA_KEYS))
def sanitize_tool_params_for_openai_strict(
params: dict[str, Any],
) -> dict[str, Any]:
"""Sanitize a JSON schema for OpenAI strict function calling."""
if not isinstance(params, dict):
return params
return cast(
dict[str, Any], strip_unsupported_formats(_common_strict_pipeline(params))
)
def sanitize_tool_params_for_anthropic_strict(
params: dict[str, Any],
) -> dict[str, Any]:
"""Sanitize a JSON schema for Anthropic strict tool use."""
if not isinstance(params, dict):
return params
sanitized = lift_top_level_anyof(_common_strict_pipeline(params))
sanitized = _strip_keys_recursive(sanitized, _CLAUDE_STRICT_UNSUPPORTED)
return cast(dict[str, Any], strip_unsupported_formats(sanitized))
def sanitize_tool_params_for_bedrock_strict(
params: dict[str, Any],
) -> dict[str, Any]:
"""Sanitize a JSON schema for Bedrock Converse strict tool use.
Bedrock Converse uses the same grammar compiler as the underlying Claude
model, so the constraints match Anthropic's.
"""
return sanitize_tool_params_for_anthropic_strict(params)
def generate_model_description(
model: type[BaseModel],
*,
@@ -545,6 +763,25 @@ def build_rich_field_description(prop_schema: dict[str, Any]) -> str:
return ". ".join(parts) if parts else ""
def _inline_top_level_ref(schema: dict[str, Any]) -> dict[str, Any]:
"""Resolve only the top-level ``$ref``, preserving ``$defs`` for lazy inner resolution.
Used as a fallback when ``jsonref.replace_refs`` fails on circular schemas.
Inner ``$ref`` pointers are left intact so that :func:`_resolve_ref` can
resolve them during model construction, with cycle detection via ``in_progress``.
"""
schema = deepcopy(schema)
ref = schema.get("$ref")
if isinstance(ref, str) and ref.startswith("#/$defs/"):
def_name = ref[len("#/$defs/") :]
defs = schema.get("$defs", {})
if def_name in defs:
resolved: dict[str, Any] = deepcopy(defs[def_name])
resolved.setdefault("$defs", defs)
return resolved
return schema
def create_model_from_schema( # type: ignore[no-any-unimported]
json_schema: dict[str, Any],
*,
@@ -599,19 +836,80 @@ def create_model_from_schema( # type: ignore[no-any-unimported]
>>> person.name
'John'
"""
json_schema = dict(jsonref.replace_refs(json_schema, proxies=False))
try:
json_schema = dict(jsonref.replace_refs(json_schema, proxies=False))
except (jsonref.JsonRefError, RecursionError):
json_schema = _inline_top_level_ref(json_schema)
effective_root = root_schema or json_schema
json_schema = force_additional_properties_false(json_schema)
effective_root = force_additional_properties_false(effective_root)
in_progress: dict[int, Any] = {}
model = _build_model_from_schema(
json_schema,
effective_root,
model_name=model_name,
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
__config__=__config__,
__base__=__base__,
__module__=__module__,
__validators__=__validators__,
__cls_kwargs__=__cls_kwargs__,
)
types_namespace: dict[str, Any] = {
entry.__name__: entry
for entry in in_progress.values()
if isinstance(entry, type) and issubclass(entry, BaseModel)
}
for entry in in_progress.values():
if (
isinstance(entry, type)
and issubclass(entry, BaseModel)
and not getattr(entry, "__pydantic_complete__", True)
):
try:
entry.model_rebuild(_types_namespace=types_namespace)
except Exception as e:
logger.debug("model_rebuild failed for %s: %s", entry.__name__, e)
return model
def _build_model_from_schema( # type: ignore[no-any-unimported]
json_schema: dict[str, Any],
effective_root: dict[str, Any],
*,
model_name: str | None,
enrich_descriptions: bool,
in_progress: dict[int, Any],
__config__: ConfigDict | None = None,
__base__: type[BaseModel] | None = None,
__module__: str = __name__,
__validators__: dict[str, AnyClassMethod] | None = None,
__cls_kwargs__: dict[str, Any] | None = None,
) -> type[BaseModel]:
"""Inner builder shared by the public entry point and recursive nested-object creation.
Preprocessing via ``jsonref.replace_refs`` and the sanitization walkers is
run once by the public entry; this helper walks the already-normalized
schema and emits Pydantic models. ``in_progress`` maps ``id(schema)`` to
the model being built for that schema, so a cyclic ``$ref`` graph
degrades to a ``ForwardRef`` back-edge instead of blowing the stack.
"""
original_id = id(json_schema)
if "allOf" in json_schema:
json_schema = _merge_all_of_schemas(json_schema["allOf"], effective_root)
if "title" not in json_schema and "title" in (root_schema or {}):
json_schema["title"] = (root_schema or {}).get("title")
effective_name = model_name or json_schema.get("title") or "DynamicModel"
schema_id = id(json_schema)
in_progress[original_id] = effective_name
if schema_id != original_id:
in_progress[schema_id] = effective_name
field_definitions = {
name: _json_schema_to_pydantic_field(
name,
@@ -619,13 +917,14 @@ def create_model_from_schema( # type: ignore[no-any-unimported]
json_schema.get("required", []),
effective_root,
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
for name, prop in (json_schema.get("properties", {}) or {}).items()
}
effective_config = __config__ or ConfigDict(extra="forbid")
return create_model_base(
model = create_model_base(
effective_name,
__config__=effective_config,
__base__=__base__,
@@ -634,6 +933,10 @@ def create_model_from_schema( # type: ignore[no-any-unimported]
__cls_kwargs__=__cls_kwargs__,
**field_definitions,
)
in_progress[original_id] = model
if schema_id != original_id:
in_progress[schema_id] = model
return model
def _json_schema_to_pydantic_field(
@@ -643,6 +946,7 @@ def _json_schema_to_pydantic_field(
root_schema: dict[str, Any],
*,
enrich_descriptions: bool = False,
in_progress: dict[int, Any] | None = None,
) -> Any:
"""Convert a JSON schema property to a Pydantic field definition.
@@ -661,6 +965,7 @@ def _json_schema_to_pydantic_field(
root_schema,
name_=name.title(),
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
is_required = name in required
@@ -720,7 +1025,7 @@ def _json_schema_to_pydantic_field(
field_params["pattern"] = json_schema["pattern"]
if not is_required:
type_ = type_ | None
type_ = Optional[type_] # noqa: UP045 - ForwardRef does not support `|`
if schema_extra:
field_params["json_schema_extra"] = schema_extra
@@ -793,6 +1098,7 @@ def _json_schema_to_pydantic_type(
*,
name_: str | None = None,
enrich_descriptions: bool = False,
in_progress: dict[int, Any] | None = None,
) -> Any:
"""Convert a JSON schema to a Python/Pydantic type.
@@ -801,10 +1107,23 @@ def _json_schema_to_pydantic_type(
root_schema: The root schema for resolving $ref.
name_: Optional name for nested models.
enrich_descriptions: Propagated to nested model creation.
in_progress: Map of ``id(schema_dict)`` to the Pydantic model
currently being built for that schema, or to a placeholder name
as a plain ``str`` while the model is still being constructed.
Populated by :func:`_build_model_from_schema`. Enables cycle
detection so a self-referential ``$ref`` graph resolves to a
:class:`ForwardRef` back-edge rather than recursing forever.
Returns:
A Python type corresponding to the JSON schema.
"""
if in_progress is not None:
cached = in_progress.get(id(json_schema))
if isinstance(cached, str):
return ForwardRef(cached)
if cached is not None:
return cached
ref = json_schema.get("$ref")
if ref:
ref_schema = _resolve_ref(ref, root_schema)
@@ -813,6 +1132,7 @@ def _json_schema_to_pydantic_type(
root_schema,
name_=name_,
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
enum_values = json_schema.get("enum")
@@ -832,6 +1152,7 @@ def _json_schema_to_pydantic_type(
root_schema,
name_=f"{name_ or 'Union'}Option{i}",
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
for i, schema in enumerate(any_of_schemas)
]
@@ -845,6 +1166,15 @@ def _json_schema_to_pydantic_type(
root_schema,
name_=name_,
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
if in_progress is not None:
return _build_model_from_schema(
json_schema,
root_schema,
model_name=name_,
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
merged = _merge_all_of_schemas(all_of_schemas, root_schema)
return _json_schema_to_pydantic_type(
@@ -852,6 +1182,7 @@ def _json_schema_to_pydantic_type(
root_schema,
name_=name_,
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
type_ = json_schema.get("type")
@@ -872,12 +1203,21 @@ def _json_schema_to_pydantic_type(
root_schema,
name_=name_,
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
return list[item_type] # type: ignore[valid-type]
return list
if type_ == "object":
properties = json_schema.get("properties")
if properties:
if in_progress is not None:
return _build_model_from_schema(
json_schema,
root_schema,
model_name=name_,
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
json_schema_ = json_schema.copy()
if json_schema_.get("title") is None:
json_schema_["title"] = name_ or "DynamicModel"

View File
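The expanding guard in resolve_refs is the simplest place to see the new cycle handling: a back-edge collapses into a typed stub rather than recursing. An illustration straight from the code above:

from crewai.utilities.pydantic_schema_utils import resolve_refs

cyclic = {
    "type": "object",
    "properties": {"child": {"$ref": "#/$defs/Node"}},
    "$defs": {
        "Node": {
            "type": "object",
            "description": "recursive node",
            "properties": {"child": {"$ref": "#/$defs/Node"}},
        }
    },
}
out = resolve_refs(cyclic)
# One level is inlined; the inner back-edge becomes a stub that keeps
# only the type and description:
assert out["properties"]["child"]["properties"]["child"] == {
    "type": "object",
    "description": "recursive node",
}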

@@ -26,18 +26,11 @@ from crewai.utilities.string_utils import sanitize_tool_name
logger = logging.getLogger(__name__)
# ContextVar holding the active streaming scope ids for the current context.
# Extended by the chunk generators so handlers can filter chunks by stream_id.
_current_stream_run_id: contextvars.ContextVar[str | None] = contextvars.ContextVar(
"_current_stream_run_id", default=None
_current_stream_ids: contextvars.ContextVar[tuple[str, ...]] = contextvars.ContextVar(
"_current_stream_ids", default=()
)
def get_current_stream_run_id() -> str | None:
"""Return the active streaming run_id for the current context, if any."""
return _current_stream_run_id.get()
class TaskInfo(TypedDict):
"""Task context information for streaming."""
@@ -57,6 +50,7 @@ class StreamingState(NamedTuple):
async_queue: asyncio.Queue[StreamChunk | None | Exception] | None
loop: asyncio.AbstractEventLoop | None
handler: Callable[[Any, BaseEvent], None]
stream_id: str | None = None
def _extract_tool_call_info(
@@ -118,7 +112,7 @@ def _create_stream_handler(
sync_queue: queue.Queue[StreamChunk | None | Exception],
async_queue: asyncio.Queue[StreamChunk | None | Exception] | None = None,
loop: asyncio.AbstractEventLoop | None = None,
run_id: str | None = None,
stream_id: str | None = None,
) -> Callable[[Any, BaseEvent], None]:
"""Create a stream handler function.
@@ -127,26 +121,17 @@ def _create_stream_handler(
sync_queue: Synchronous queue for chunks.
async_queue: Optional async queue for chunks.
loop: Optional event loop for async operations.
run_id: Unique identifier for this streaming run. When set, the handler
only accepts events whose ``run_id`` matches, preventing cross-run
chunk contamination in concurrent streaming scenarios.
stream_id: Stream scope ID for concurrent isolation.
Returns:
Handler function that can be registered with the event bus.
"""
def stream_handler(_: Any, event: BaseEvent) -> None:
"""Handle LLM stream chunk events and enqueue them.
Args:
_: Event source (unused).
event: The event to process.
"""
if not isinstance(event, LLMStreamChunkEvent):
return
# Filter: only accept events belonging to this streaming run.
if run_id is not None and event.run_id is not None and event.run_id != run_id:
if stream_id is not None and stream_id not in _current_stream_ids.get():
return
chunk = _create_stream_chunk(event, current_task_info)
@@ -207,16 +192,6 @@ def create_streaming_state(
) -> StreamingState:
"""Create and register streaming state.
Each call assigns a ``run_id`` that is:
* stored in a ``contextvars.ContextVar`` so that downstream LLM emit
paths can stamp ``LLMStreamChunkEvent.run_id`` automatically, and
* passed to the stream handler so it only accepts events with a
matching ``run_id``, preventing cross-run chunk contamination.
If the current context already carries a ``run_id`` (e.g. a parent
flow already created a streaming state), the existing value is reused
so that nested streaming (flow → crew) shares the same scope.
Args:
current_task_info: Task context info.
result_holder: List to hold the final result.
@@ -225,9 +200,6 @@ def create_streaming_state(
Returns:
Initialized StreamingState with registered handler.
"""
run_id = _current_stream_run_id.get() or str(uuid.uuid4())
_current_stream_run_id.set(run_id)
sync_queue: queue.Queue[StreamChunk | None | Exception] = queue.Queue()
async_queue: asyncio.Queue[StreamChunk | None | Exception] | None = None
loop: asyncio.AbstractEventLoop | None = None
@@ -236,8 +208,10 @@ def create_streaming_state(
async_queue = asyncio.Queue()
loop = asyncio.get_event_loop()
stream_id = str(uuid.uuid4())
handler = _create_stream_handler(
current_task_info, sync_queue, async_queue, loop, run_id=run_id
current_task_info, sync_queue, async_queue, loop, stream_id=stream_id
)
crewai_event_bus.register_handler(LLMStreamChunkEvent, handler)
@@ -248,6 +222,7 @@ def create_streaming_state(
async_queue=async_queue,
loop=loop,
handler=handler,
stream_id=stream_id,
)
@@ -295,7 +270,12 @@ def create_chunk_generator(
Yields:
StreamChunk objects as they arrive.
"""
ctx = contextvars.copy_context()
if state.stream_id is not None:
token = _current_stream_ids.set((*_current_stream_ids.get(), state.stream_id))
ctx = contextvars.copy_context()
_current_stream_ids.reset(token)
else:
ctx = contextvars.copy_context()
thread = threading.Thread(target=ctx.run, args=(run_func,), daemon=True)
thread.start()
@@ -335,7 +315,12 @@ async def create_async_chunk_generator(
"Async queue not initialized. Use create_streaming_state(use_async=True)."
)
task = asyncio.create_task(run_coro())
if state.stream_id is not None:
token = _current_stream_ids.set((*_current_stream_ids.get(), state.stream_id))
task = asyncio.create_task(run_coro())
_current_stream_ids.reset(token)
else:
task = asyncio.create_task(run_coro())
try:
while True:

View File
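The contextvars mechanics behind the new scoping are worth spelling out: the generator extends the tuple, snapshots the context for the worker, then resets, so only code launched inside that snapshot sees the stream_id. A standalone sketch:

import contextvars

_stream_ids: contextvars.ContextVar[tuple[str, ...]] = contextvars.ContextVar(
    "_stream_ids", default=()
)

def accepts(stream_id: str) -> bool:
    # Mirrors the handler's filter: membership in the context's tuple.
    return stream_id in _stream_ids.get()

token = _stream_ids.set((*_stream_ids.get(), "run-A"))
ctx = contextvars.copy_context()   # snapshot carries ("run-A",)
_stream_ids.reset(token)

assert ctx.run(accepts, "run-A")   # worker launched via ctx: in scope
assert not accepts("run-A")        # caller after reset: out of scope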

@@ -0,0 +1,12 @@
"""Version utilities for crewAI."""
from __future__ import annotations
from functools import cache
import importlib.metadata
@cache
def get_crewai_version() -> str:
"""Get the installed crewAI version string."""
return importlib.metadata.version("crewai")

View File
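Since the lookup is memoized, hot callers such as the RuntimeState serializer pay the importlib.metadata cost once per process:

from crewai.utilities.version import get_crewai_version

v = get_crewai_version()           # reads package metadata once
assert get_crewai_version() is v   # functools.cache: same object back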

@@ -1051,7 +1051,7 @@ def test_lite_agent_verbose_false_suppresses_printer_output():
successful_requests=1,
)
with pytest.warns(DeprecationWarning):
with pytest.warns(FutureWarning):
agent = LiteAgent(
role="Test Agent",
goal="Test goal",

View File

@@ -55,7 +55,7 @@ interactions:
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
- 2.31.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
@@ -63,50 +63,51 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.12
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-DIqxWpJbbFJoV8WlXhb9UYFbCmdPk\",\n \"object\":
\"chat.completion\",\n \"created\": 1773385850,\n \"model\": \"gpt-4o-2024-08-06\",\n
string: "{\n \"id\": \"chatcmpl-DTApYQx2LepfeRL1XcDKPgrhMFnQr\",\n \"object\":
\"chat.completion\",\n \"created\": 1775845516,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_G2i9RJGNXKVfnd8ZTaBG8Fwi\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"ask_question_to_coworker\",\n
\ \"arguments\": \"{\\\"question\\\": \\\"What are some trending
topics or ideas in various fields that could be explored for an article?\\\",
\\\"context\\\": \\\"We need to generate a list of 5 interesting ideas to
explore for an article. These ideas should be engaging and relevant to current
trends or captivating subjects.\\\", \\\"coworker\\\": \\\"Researcher\\\"}\"\n
\ }\n },\n {\n \"id\": \"call_j4KH2SGZvNeioql0HcRQ9NTp\",\n
\ \"id\": \"call_BCh6lXsBTdixRuRh6OTBPoIJ\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"delegate_work_to_coworker\",\n
\ \"arguments\": \"{\\\"task\\\": \\\"Come up with a list of 5
interesting ideas to explore for an article.\\\", \\\"context\\\": \\\"We
need five intriguing ideas worth exploring for an article. Each idea should
have potential for in-depth exploration and appeal to a broad audience, possibly
touching on current trends, historical insights, future possibilities, or
human interest stories.\\\", \\\"coworker\\\": \\\"Researcher\\\"}\"\n }\n
\ },\n {\n \"id\": \"call_rAQFeCrS4ogsqvIWRGAYFHGI\",\n
\ \"type\": \"function\",\n \"function\": {\n \"name\":
\"ask_question_to_coworker\",\n \"arguments\": \"{\\\"question\\\":
\\\"What unique angles or perspectives could we explore to make articles more
compelling and engaging?\\\", \\\"context\\\": \\\"Our task involves coming
up with 5 ideas for articles, each with an exciting paragraph highlight that
illustrates the promise and intrigue of the topic. We want them to be more
than generic concepts, shining for readers with fresh insights or engaging
twists.\\\", \\\"coworker\\\": \\\"Senior Writer\\\"}\"\n }\n }\n
\ ],\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 476,\n \"completion_tokens\":
183,\n \"total_tokens\": 659,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
\"delegate_work_to_coworker\",\n \"arguments\": \"{\\\"task\\\":
\\\"Write one amazing paragraph highlight for each of 5 ideas that showcases
how good an article about this topic could be.\\\", \\\"context\\\": \\\"Upon
receiving five intriguing ideas from the Researcher, create a compelling paragraph
for each idea that highlights its potential as a fascinating article. These
paragraphs must capture the essence of the topic and explain why it would
captivate readers, incorporating possible themes and insights.\\\", \\\"coworker\\\":
\\\"Senior Writer\\\"}\"\n }\n }\n ],\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"tool_calls\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
476,\n \"completion_tokens\": 201,\n \"total_tokens\": 677,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_b7c8e3f100\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_2ca5b70601\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-Ray:
- 9db9389a3f9e424c-EWR
- 9ea3cb06ba66b301-TPE
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Fri, 13 Mar 2026 07:10:53 GMT
- Fri, 10 Apr 2026 18:25:18 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -122,7 +123,7 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '2402'
- '1981'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
@@ -154,13 +155,14 @@ interactions:
You work as a freelancer and is now working on doing research and analysis for
a new customer.\nYour personal goal is: Make the best research and analysis
on content about AI and AI agents"},{"role":"user","content":"\nCurrent Task:
What are some trending topics or ideas in various fields that could be explored
for an article?\n\nThis is the expected criteria for your final answer: Your
best answer to your coworker asking you this, accounting for the context shared.\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nThis
is the context you''re working with:\nWe need to generate a list of 5 interesting
ideas to explore for an article. These ideas should be engaging and relevant
to current trends or captivating subjects.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini"}'
Come up with a list of 5 interesting ideas to explore for an article.\n\nThis
is the expected criteria for your final answer: Your best answer to your coworker
asking you this, accounting for the context shared.\nyou MUST return the actual
complete content as the final answer, not a summary.\n\nThis is the context
you''re working with:\nWe need five intriguing ideas worth exploring for an
article. Each idea should have potential for in-depth exploration and appeal
to a broad audience, possibly touching on current trends, historical insights,
future possibilities, or human interest stories.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -173,7 +175,7 @@ interactions:
connection:
- keep-alive
content-length:
- '978'
- '1046'
content-type:
- application/json
host:
@@ -187,7 +189,7 @@ interactions:
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
- 2.31.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
@@ -195,63 +197,69 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.12
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-DIqxak88AexErt9PGFGHnWPIJLwNV\",\n \"object\":
\"chat.completion\",\n \"created\": 1773385854,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
string: "{\n \"id\": \"chatcmpl-DTApalbfnYkqIc8slLS3DKwo9KXbc\",\n \"object\":
\"chat.completion\",\n \"created\": 1775845518,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Here are five trending and engaging
topics across various fields that could be explored for an article:\\n\\n1.
**The Rise of Autonomous AI Agents and Their Impact on the Future of Work**
\ \\nExplore how autonomous AI agents\u2014systems capable of performing complex
tasks independently\u2014are transforming industries such as customer service,
software development, and logistics. Discuss implications for job automation,
human-AI collaboration, and ethical considerations surrounding decision-making
autonomy.\\n\\n2. **Generative AI Beyond Text: Innovations in Audio, Video,
and 3D Content Creation** \\nDelve into advancements in generative AI models
that create not only text but also realistic audio, video content, virtual
environments, and 3D models. Highlight applications in gaming, entertainment,
education, and digital marketing, as well as challenges like misinformation
and deepfake detection.\\n\\n3. **AI-Driven Climate Modeling: Enhancing Predictive
Accuracy to Combat Climate Change** \\nExamine how AI and machine learning
are improving climate models by analyzing vast datasets, uncovering patterns,
and simulating environmental scenarios. Discuss how these advances are aiding
policymakers in making informed decisions to address climate risks and sustainability
goals.\\n\\n4. **The Ethical Frontiers of AI in Healthcare: Balancing Innovation
with Patient Privacy** \\nInvestigate ethical challenges posed by AI applications
in healthcare, including diagnosis, personalized treatment, and patient data
management. Focus on balancing rapid technological innovation with privacy,
bias mitigation, and regulatory frameworks to ensure equitable access and
trust.\\n\\n5. **Quantum Computing Meets AI: Exploring the Next Leap in Computational
Power** \\nCover the intersection of quantum computing and artificial intelligence,
exploring how quantum algorithms could accelerate AI training processes and
solve problems beyond the reach of classical computers. Outline current research,
potential breakthroughs, and the timeline for real-world applications.\\n\\nEach
of these topics is timely, relevant, and has the potential to engage readers
interested in cutting-edge technology, societal impact, and future trends.
Let me know if you want me to help develop an outline or deeper research into
any of these areas!\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 178,\n \"completion_tokens\":
402,\n \"total_tokens\": 580,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
\"assistant\",\n \"content\": \"Certainly! Here are five intriguing
article ideas that offer rich potential for deep exploration and broad audience
appeal, especially aligned with current trends and human interest in AI and
technology:\\n\\n1. **The Evolution of AI Agents: From Rule-Based Bots to
Autonomous Decision Makers** \\n Explore the historical development of
AI agents, tracing the journey from simple scripted chatbots to advanced autonomous
systems capable of complex decision-making and learning. Dive into key technological
milestones, breakthroughs in machine learning, and current state-of-the-art
AI agents. Discuss implications for industries such as customer service, healthcare,
and autonomous vehicles, highlighting both opportunities and ethical concerns.\\n\\n2.
**AI in Daily Life: How Intelligent Agents Are Reshaping Human Routines**
\ \\n Investigate the integration of AI agents in everyday life\u2014from
virtual assistants like Siri and Alexa to personalized recommendation systems
and smart home devices. Analyze how these AI tools influence productivity,
privacy, and social behavior. Include human interest elements through stories
of individuals or communities who have embraced or resisted these technologies.\\n\\n3.
**The Future of Work: AI Agents as Collaborative Colleagues** \\n Examine
how AI agents are transforming workplaces by acting as collaborators rather
than just tools. Cover applications in creative fields, data analysis, and
decision support, while addressing potential challenges such as job displacement,
new skill requirements, and the evolving definition of teamwork. Use expert
opinions and case studies to paint a nuanced future outlook.\\n\\n4. **Ethics
and Accountability in AI Agent Development** \\n Delve into the ethical
dilemmas posed by increasingly autonomous AI agents\u2014topics like bias
in algorithms, data privacy, and accountability for AI-driven decisions. Explore
measures being taken globally to regulate AI, frameworks for responsible AI
development, and the role of public awareness. Include historical context
about technology ethics to provide depth.\\n\\n5. **Human-AI Symbiosis: Stories
of Innovative Partnerships Shaping Our World** \\n Tell compelling human
interest stories about individuals or organizations pioneering collaborative
projects with AI agents that lead to breakthroughs in science, art, or social
good. Highlight how these partnerships transcend traditional human-machine
interaction and open new creative and problem-solving possibilities, inspiring
readers about the potential of human-AI synergy.\\n\\nThese ideas are designed
to be both engaging and informative, offering multiple angles\u2014technical,
historical, ethical, and personal\u2014to keep readers captivated while providing
substantial content for in-depth analysis.\",\n \"refusal\": null,\n
\ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 189,\n \"completion_tokens\":
472,\n \"total_tokens\": 661,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_e76a310957\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_fbf43a1ff3\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-Ray:
- 9db938b0493c4b9f-EWR
- 9ea3cb1b5c943323-TPE
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Fri, 13 Mar 2026 07:10:59 GMT
- Fri, 10 Apr 2026 18:25:25 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -267,7 +275,7 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '5699'
- '6990'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
@@ -298,15 +306,16 @@ interactions:
a senior writer, specialized in technology, software engineering, AI and startups.
You work as a freelancer and are now working on writing content for a new customer.\nYour
personal goal is: Write the best content about AI and AI agents."},{"role":"user","content":"\nCurrent
Task: What unique angles or perspectives could we explore to make articles more
compelling and engaging?\n\nThis is the expected criteria for your final answer:
Your best answer to your coworker asking you this, accounting for the context
shared.\nyou MUST return the actual complete content as the final answer, not
a summary.\n\nThis is the context you''re working with:\nOur task involves coming
up with 5 ideas for articles, each with an exciting paragraph highlight that
illustrates the promise and intrigue of the topic. We want them to be more than
generic concepts, shining for readers with fresh insights or engaging twists.\n\nProvide
your complete response:"}],"model":"gpt-4.1-mini"}'
Task: Write one amazing paragraph highlight for each of 5 ideas that showcases
how good an article about this topic could be.\n\nThis is the expected criteria
for your final answer: Your best answer to your coworker asking you this, accounting
for the context shared.\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nThis is the context you''re working with:\nUpon
receiving five intriguing ideas from the Researcher, create a compelling paragraph
for each idea that highlights its potential as a fascinating article. These
paragraphs must capture the essence of the topic and explain why it would captivate
readers, incorporating possible themes and insights.\n\nProvide your complete
response:"}],"model":"gpt-4.1-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -319,7 +328,7 @@ interactions:
connection:
- keep-alive
content-length:
- '1041'
- '1103'
content-type:
- application/json
host:
@@ -333,7 +342,7 @@ interactions:
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
- 2.31.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
@@ -341,78 +350,83 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.12
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-DIqxZCl1kFIE7WXznIKow9QFNZ2QT\",\n \"object\":
\"chat.completion\",\n \"created\": 1773385853,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
string: "{\n \"id\": \"chatcmpl-DTApbrh9Z4yFAKPHIR48ubdB1R5xK\",\n \"object\":
\"chat.completion\",\n \"created\": 1775845519,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Absolutely! To create compelling and
engaging AI articles that stand out, we need to go beyond surface-level discussions
and deliver fresh perspectives that challenge assumptions and spark curiosity.
Here are five unique angles with their highlight paragraphs that could really
captivate our readers:\\n\\n1. **The Hidden Psychology of AI Agents: How They
Learn Human Biases and What That Means for Our Future** \\n*Highlight:* AI
agents don\u2019t just process data\u2014they absorb the subtle nuances and
biases embedded in human language, behavior, and culture. This article dives
deep into the psychological parallels between AI learning mechanisms and human
cognitive biases, revealing surprising ways AI can both mirror and amplify
our prejudices. Understanding these dynamics is crucial for building trustworthy
AI systems and reshaping the future relationship between humans and machines.\\n\\n2.
**From Assistants to Autonomous Creators: The Rise of AI Agents as Artists,
Writers, and Innovators** \\n*Highlight:* What do we lose and gain when AI
agents start producing original art, literature, and innovations? This piece
explores groundbreaking examples where AI isn\u2019t just a tool but a creative
partner that challenges our definition of authorship and genius. We\u2019ll
examine ethical dilemmas, collaborative workflows, and the exciting frontier
where human intuition meets algorithmic originality.\\n\\n3. **AI Agents in
the Wild: How Decentralized Autonomous Organizations Could Redefine Economy
and Governance** \\n*Highlight:* Imagine AI agents operating autonomously
in decentralized networks, making real-time decisions that affect finances,
resource management, and governance without human intervention. This article
uncovers how DAOs powered by AI agents might spontaneously evolve new forms
of organization\u2014transparent, efficient, and resistant to traditional
corruption. We\u2019ll investigate early case studies and speculate on how
this might disrupt centuries-old societal structures.\\n\\n4. **Beyond Chatbots:
The Next Generation of AI Agents as Empathetic Digital Companions** \\n*Highlight:*
Moving past scripted conversations, emerging AI agents simulate empathy and
emotional intelligence in ways that can transform mental health care, education,
and companionship. This article provides an insider look at the complex algorithms
and biofeedback mechanisms enabling AI to recognize, respond to, and foster
human emotions\u2014potentially filling gaps in underserved populations while
raising profound questions about authenticity and connection.\\n\\n5. **The
Environmental Toll of AI Agents: Unmasking the Ecological Cost of Intelligent
Automation** \\n*Highlight:* While AI promises efficiency and innovation,
the environmental footprint of training and deploying millions of AI agents
is rarely discussed. This eye-opening article quantifies the energy demands
of current models, challenges the narrative of AI as an unequivocal green
solution, and explores emerging approaches pathing toward sustainable intelligent
automation\u2014an urgent conversation for an increasingly eco-conscious tech
landscape.\\n\\nEach of these angles opens a door to rich storytelling that
blends technical depth, ethical inquiry, and visionary implications\u2014perfect
for readers hungry for insight that\u2019s both sophisticated and accessible.
Let me know which ones resonate most, or if you want me to refine any into
full article outlines!\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 188,\n \"completion_tokens\":
595,\n \"total_tokens\": 783,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
\"assistant\",\n \"content\": \"1. **The Rise of Autonomous AI Agents:
Revolutionizing Everyday Tasks** \\nImagine a world where AI agents autonomously
manage your daily schedule, optimize your work routines, and even handle complex
decision-making with minimal human intervention. An article exploring the
rise of autonomous AI agents would captivate readers by diving into how advancements
in machine learning and natural language processing have matured these agents
from simple chatbots to intelligent collaborators. Themes could include practical
applications in industries like healthcare, finance, and personal productivity,
the challenges of trust and transparency, and a glimpse into the ethical questions
surrounding AI autonomy. This topic not only showcases cutting-edge technology
but also invites readers to envision the near future of human-AI synergy.\\n\\n2.
**Building Ethical AI Agents: Balancing Innovation with Responsibility** \\nAs
AI agents become more powerful and independent, the imperative to embed ethical
frameworks within their design comes sharply into focus. An insightful article
on this theme would engage readers by unpacking the complexities of programming
morality, fairness, and accountability into AI systems that influence critical
decisions\u2014whether in hiring processes, law enforcement, or digital content
moderation. Exploring real-world case studies alongside philosophical and
regulatory perspectives, the piece could illuminate the delicate balance between
technological innovation and societal values, offering a nuanced discussion
that appeals to technologists, ethicists, and everyday users alike.\\n\\n3.
**AI Agents in Startups: Accelerating Growth and Disrupting Markets** \\nStartups
are uniquely positioned to leverage AI agents as game-changers that turbocharge
growth, optimize workflows, and unlock new business models. This article could
enthrall readers by detailing how nimble companies integrate AI-driven agents
for customer engagement, market analysis, and personalized product recommendations\u2014outpacing
larger incumbents. It would also examine hurdles such as data privacy, scaling
complexities, and the human-AI collaboration dynamic, providing actionable
insights for entrepreneurs and investors. The story of AI agents fueling startup
innovation not only inspires but also outlines the practical pathways and
pitfalls on the frontier of modern entrepreneurship.\\n\\n4. **The Future
of Work with AI Agents: Redefining Roles and Skills** \\nAI agents are redefining
professional landscapes by automating routine tasks and augmenting human creativity
and decision-making. An article on this topic could engage readers by painting
a vivid picture of the evolving workplace, where collaboration between humans
and AI agents becomes the norm. Delving into emerging roles, necessary skill
sets, and how education and training must adapt, the piece would offer a forward-thinking
analysis that resonates deeply with employees, managers, and policymakers.
Exploring themes of workforce transformation, productivity gains, and potential
socioeconomic impacts, it provides a comprehensive outlook on an AI-integrated
work environment.\\n\\n5. **From Reactive to Proactive: How Next-Gen AI Agents
Anticipate Needs** \\nThe leap from reactive AI assistants to truly proactive
AI agents signifies one of the most thrilling advances in artificial intelligence.
An article centered on this evolution would captivate readers by illustrating
how these agents utilize predictive analytics, contextual understanding, and
continuous learning to anticipate user needs before they are expressed. By
showcasing pioneering applications in personalized healthcare management,
smart homes, and adaptive learning platforms, the article would highlight
the profound shift toward intuitive, anticipatory technology. This theme not
only excites with futuristic promise but also probes the technical and privacy
challenges that come with increased agency and foresight.\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
197,\n \"completion_tokens\": 666,\n \"total_tokens\": 863,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_ae0f8c9a7b\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_d45f83c5fd\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-Ray:
- 9db938b0489680d4-EWR
- 9ea3cb1cbfe2b312-TPE
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Fri, 13 Mar 2026 07:11:02 GMT
- Fri, 10 Apr 2026 18:25:28 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -428,7 +442,7 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '8310'
- '9479'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
@@ -467,91 +481,105 @@ interactions:
good an article about this topic could be. Return the list of ideas with their
paragraph and your notes.\\n\\nThis is the expected criteria for your final
answer: 5 bullet points with a paragraph for each idea.\\nyou MUST return the
actual complete content as the final answer, not a summary.\"},{\"role\":\"assistant\",\"content\":null,\"tool_calls\":[{\"id\":\"call_G2i9RJGNXKVfnd8ZTaBG8Fwi\",\"type\":\"function\",\"function\":{\"name\":\"ask_question_to_coworker\",\"arguments\":\"{\\\"question\\\":
\\\"What are some trending topics or ideas in various fields that could be explored
for an article?\\\", \\\"context\\\": \\\"We need to generate a list of 5 interesting
ideas to explore for an article. These ideas should be engaging and relevant
to current trends or captivating subjects.\\\", \\\"coworker\\\": \\\"Researcher\\\"}\"}},{\"id\":\"call_j4KH2SGZvNeioql0HcRQ9NTp\",\"type\":\"function\",\"function\":{\"name\":\"ask_question_to_coworker\",\"arguments\":\"{\\\"question\\\":
\\\"What unique angles or perspectives could we explore to make articles more
compelling and engaging?\\\", \\\"context\\\": \\\"Our task involves coming
up with 5 ideas for articles, each with an exciting paragraph highlight that
illustrates the promise and intrigue of the topic. We want them to be more than
generic concepts, shining for readers with fresh insights or engaging twists.\\\",
\\\"coworker\\\": \\\"Senior Writer\\\"}\"}}]},{\"role\":\"tool\",\"tool_call_id\":\"call_G2i9RJGNXKVfnd8ZTaBG8Fwi\",\"name\":\"ask_question_to_coworker\",\"content\":\"Here
are five trending and engaging topics across various fields that could be explored
for an article:\\n\\n1. **The Rise of Autonomous AI Agents and Their Impact
on the Future of Work** \\nExplore how autonomous AI agents\u2014systems capable
of performing complex tasks independently\u2014are transforming industries such
as customer service, software development, and logistics. Discuss implications
for job automation, human-AI collaboration, and ethical considerations surrounding
decision-making autonomy.\\n\\n2. **Generative AI Beyond Text: Innovations in
Audio, Video, and 3D Content Creation** \\nDelve into advancements in generative
AI models that create not only text but also realistic audio, video content,
virtual environments, and 3D models. Highlight applications in gaming, entertainment,
education, and digital marketing, as well as challenges like misinformation
and deepfake detection.\\n\\n3. **AI-Driven Climate Modeling: Enhancing Predictive
Accuracy to Combat Climate Change** \\nExamine how AI and machine learning
are improving climate models by analyzing vast datasets, uncovering patterns,
and simulating environmental scenarios. Discuss how these advances are aiding
policymakers in making informed decisions to address climate risks and sustainability
goals.\\n\\n4. **The Ethical Frontiers of AI in Healthcare: Balancing Innovation
with Patient Privacy** \\nInvestigate ethical challenges posed by AI applications
in healthcare, including diagnosis, personalized treatment, and patient data
management. Focus on balancing rapid technological innovation with privacy,
bias mitigation, and regulatory frameworks to ensure equitable access and trust.\\n\\n5.
**Quantum Computing Meets AI: Exploring the Next Leap in Computational Power**
\ \\nCover the intersection of quantum computing and artificial intelligence,
exploring how quantum algorithms could accelerate AI training processes and
solve problems beyond the reach of classical computers. Outline current research,
potential breakthroughs, and the timeline for real-world applications.\\n\\nEach
of these topics is timely, relevant, and has the potential to engage readers
interested in cutting-edge technology, societal impact, and future trends. Let
me know if you want me to help develop an outline or deeper research into any
of these areas!\"},{\"role\":\"tool\",\"tool_call_id\":\"call_j4KH2SGZvNeioql0HcRQ9NTp\",\"name\":\"ask_question_to_coworker\",\"content\":\"Absolutely!
To create compelling and engaging AI articles that stand out, we need to go
beyond surface-level discussions and deliver fresh perspectives that challenge
assumptions and spark curiosity. Here are five unique angles with their highlight
paragraphs that could really captivate our readers:\\n\\n1. **The Hidden Psychology
of AI Agents: How They Learn Human Biases and What That Means for Our Future**
\ \\n*Highlight:* AI agents don\u2019t just process data\u2014they absorb the
subtle nuances and biases embedded in human language, behavior, and culture.
This article dives deep into the psychological parallels between AI learning
mechanisms and human cognitive biases, revealing surprising ways AI can both
mirror and amplify our prejudices. Understanding these dynamics is crucial for
building trustworthy AI systems and reshaping the future relationship between
humans and machines.\\n\\n2. **From Assistants to Autonomous Creators: The Rise
of AI Agents as Artists, Writers, and Innovators** \\n*Highlight:* What do
we lose and gain when AI agents start producing original art, literature, and
innovations? This piece explores groundbreaking examples where AI isn\u2019t
just a tool but a creative partner that challenges our definition of authorship
and genius. We\u2019ll examine ethical dilemmas, collaborative workflows, and
the exciting frontier where human intuition meets algorithmic originality.\\n\\n3.
**AI Agents in the Wild: How Decentralized Autonomous Organizations Could Redefine
Economy and Governance** \\n*Highlight:* Imagine AI agents operating autonomously
in decentralized networks, making real-time decisions that affect finances,
resource management, and governance without human intervention. This article
uncovers how DAOs powered by AI agents might spontaneously evolve new forms
of organization\u2014transparent, efficient, and resistant to traditional corruption.
We\u2019ll investigate early case studies and speculate on how this might disrupt
centuries-old societal structures.\\n\\n4. **Beyond Chatbots: The Next Generation
of AI Agents as Empathetic Digital Companions** \\n*Highlight:* Moving past
scripted conversations, emerging AI agents simulate empathy and emotional intelligence
in ways that can transform mental health care, education, and companionship.
This article provides an insider look at the complex algorithms and biofeedback
mechanisms enabling AI to recognize, respond to, and foster human emotions\u2014potentially
filling gaps in underserved populations while raising profound questions about
authenticity and connection.\\n\\n5. **The Environmental Toll of AI Agents:
Unmasking the Ecological Cost of Intelligent Automation** \\n*Highlight:* While
AI promises efficiency and innovation, the environmental footprint of training
and deploying millions of AI agents is rarely discussed. This eye-opening article
quantifies the energy demands of current models, challenges the narrative of
AI as an unequivocal green solution, and explores emerging approaches pathing
toward sustainable intelligent automation\u2014an urgent conversation for an
increasingly eco-conscious tech landscape.\\n\\nEach of these angles opens a
door to rich storytelling that blends technical depth, ethical inquiry, and
visionary implications\u2014perfect for readers hungry for insight that\u2019s
both sophisticated and accessible. Let me know which ones resonate most, or
if you want me to refine any into full article outlines!\"},{\"role\":\"user\",\"content\":\"Analyze
actual complete content as the final answer, not a summary.\"},{\"role\":\"assistant\",\"content\":null,\"tool_calls\":[{\"id\":\"call_BCh6lXsBTdixRuRh6OTBPoIJ\",\"type\":\"function\",\"function\":{\"name\":\"delegate_work_to_coworker\",\"arguments\":\"{\\\"task\\\":
\\\"Come up with a list of 5 interesting ideas to explore for an article.\\\",
\\\"context\\\": \\\"We need five intriguing ideas worth exploring for an article.
Each idea should have potential for in-depth exploration and appeal to a broad
audience, possibly touching on current trends, historical insights, future possibilities,
or human interest stories.\\\", \\\"coworker\\\": \\\"Researcher\\\"}\"}},{\"id\":\"call_rAQFeCrS4ogsqvIWRGAYFHGI\",\"type\":\"function\",\"function\":{\"name\":\"delegate_work_to_coworker\",\"arguments\":\"{\\\"task\\\":
\\\"Write one amazing paragraph highlight for each of 5 ideas that showcases
how good an article about this topic could be.\\\", \\\"context\\\": \\\"Upon
receiving five intriguing ideas from the Researcher, create a compelling paragraph
for each idea that highlights its potential as a fascinating article. These
paragraphs must capture the essence of the topic and explain why it would captivate
readers, incorporating possible themes and insights.\\\", \\\"coworker\\\":
\\\"Senior Writer\\\"}\"}}]},{\"role\":\"tool\",\"tool_call_id\":\"call_BCh6lXsBTdixRuRh6OTBPoIJ\",\"name\":\"delegate_work_to_coworker\",\"content\":\"Certainly!
Here are five intriguing article ideas that offer rich potential for deep exploration
and broad audience appeal, especially aligned with current trends and human
interest in AI and technology:\\n\\n1. **The Evolution of AI Agents: From Rule-Based
Bots to Autonomous Decision Makers** \\n Explore the historical development
of AI agents, tracing the journey from simple scripted chatbots to advanced
autonomous systems capable of complex decision-making and learning. Dive into
key technological milestones, breakthroughs in machine learning, and current
state-of-the-art AI agents. Discuss implications for industries such as customer
service, healthcare, and autonomous vehicles, highlighting both opportunities
and ethical concerns.\\n\\n2. **AI in Daily Life: How Intelligent Agents Are
Reshaping Human Routines** \\n Investigate the integration of AI agents in
everyday life\u2014from virtual assistants like Siri and Alexa to personalized
recommendation systems and smart home devices. Analyze how these AI tools influence
productivity, privacy, and social behavior. Include human interest elements
through stories of individuals or communities who have embraced or resisted
these technologies.\\n\\n3. **The Future of Work: AI Agents as Collaborative
Colleagues** \\n Examine how AI agents are transforming workplaces by acting
as collaborators rather than just tools. Cover applications in creative fields,
data analysis, and decision support, while addressing potential challenges such
as job displacement, new skill requirements, and the evolving definition of
teamwork. Use expert opinions and case studies to paint a nuanced future outlook.\\n\\n4.
**Ethics and Accountability in AI Agent Development** \\n Delve into the
ethical dilemmas posed by increasingly autonomous AI agents\u2014topics like
bias in algorithms, data privacy, and accountability for AI-driven decisions.
Explore measures being taken globally to regulate AI, frameworks for responsible
AI development, and the role of public awareness. Include historical context
about technology ethics to provide depth.\\n\\n5. **Human-AI Symbiosis: Stories
of Innovative Partnerships Shaping Our World** \\n Tell compelling human
interest stories about individuals or organizations pioneering collaborative
projects with AI agents that lead to breakthroughs in science, art, or social
good. Highlight how these partnerships transcend traditional human-machine interaction
and open new creative and problem-solving possibilities, inspiring readers about
the potential of human-AI synergy.\\n\\nThese ideas are designed to be both
engaging and informative, offering multiple angles\u2014technical, historical,
ethical, and personal\u2014to keep readers captivated while providing substantial
content for in-depth analysis.\"},{\"role\":\"tool\",\"tool_call_id\":\"call_rAQFeCrS4ogsqvIWRGAYFHGI\",\"name\":\"delegate_work_to_coworker\",\"content\":\"1.
**The Rise of Autonomous AI Agents: Revolutionizing Everyday Tasks** \\nImagine
a world where AI agents autonomously manage your daily schedule, optimize your
work routines, and even handle complex decision-making with minimal human intervention.
An article exploring the rise of autonomous AI agents would captivate readers
by diving into how advancements in machine learning and natural language processing
have matured these agents from simple chatbots to intelligent collaborators.
Themes could include practical applications in industries like healthcare, finance,
and personal productivity, the challenges of trust and transparency, and a glimpse
into the ethical questions surrounding AI autonomy. This topic not only showcases
cutting-edge technology but also invites readers to envision the near future
of human-AI synergy.\\n\\n2. **Building Ethical AI Agents: Balancing Innovation
with Responsibility** \\nAs AI agents become more powerful and independent,
the imperative to embed ethical frameworks within their design comes sharply
into focus. An insightful article on this theme would engage readers by unpacking
the complexities of programming morality, fairness, and accountability into
AI systems that influence critical decisions\u2014whether in hiring processes,
law enforcement, or digital content moderation. Exploring real-world case studies
alongside philosophical and regulatory perspectives, the piece could illuminate
the delicate balance between technological innovation and societal values, offering
a nuanced discussion that appeals to technologists, ethicists, and everyday
users alike.\\n\\n3. **AI Agents in Startups: Accelerating Growth and Disrupting
Markets** \\nStartups are uniquely positioned to leverage AI agents as game-changers
that turbocharge growth, optimize workflows, and unlock new business models.
This article could enthrall readers by detailing how nimble companies integrate
AI-driven agents for customer engagement, market analysis, and personalized
product recommendations\u2014outpacing larger incumbents. It would also examine
hurdles such as data privacy, scaling complexities, and the human-AI collaboration
dynamic, providing actionable insights for entrepreneurs and investors. The
story of AI agents fueling startup innovation not only inspires but also outlines
the practical pathways and pitfalls on the frontier of modern entrepreneurship.\\n\\n4.
**The Future of Work with AI Agents: Redefining Roles and Skills** \\nAI agents
are redefining professional landscapes by automating routine tasks and augmenting
human creativity and decision-making. An article on this topic could engage
readers by painting a vivid picture of the evolving workplace, where collaboration
between humans and AI agents becomes the norm. Delving into emerging roles,
necessary skill sets, and how education and training must adapt, the piece would
offer a forward-thinking analysis that resonates deeply with employees, managers,
and policymakers. Exploring themes of workforce transformation, productivity
gains, and potential socioeconomic impacts, it provides a comprehensive outlook
on an AI-integrated work environment.\\n\\n5. **From Reactive to Proactive:
How Next-Gen AI Agents Anticipate Needs** \\nThe leap from reactive AI assistants
to truly proactive AI agents signifies one of the most thrilling advances in
artificial intelligence. An article centered on this evolution would captivate
readers by illustrating how these agents utilize predictive analytics, contextual
understanding, and continuous learning to anticipate user needs before they
are expressed. By showcasing pioneering applications in personalized healthcare
management, smart homes, and adaptive learning platforms, the article would
highlight the profound shift toward intuitive, anticipatory technology. This
theme not only excites with futuristic promise but also probes the technical
and privacy challenges that come with increased agency and foresight.\"},{\"role\":\"user\",\"content\":\"Analyze
the tool result. If requirements are met, provide the Final Answer. Otherwise,
call the next tool. Deliver only the answer without meta-commentary.\"}],\"model\":\"gpt-4o\",\"tool_choice\":\"auto\",\"tools\":[{\"type\":\"function\",\"function\":{\"name\":\"delegate_work_to_coworker\",\"description\":\"Delegate
a specific task to one of the following coworkers: Researcher, Senior Writer\\nThe
@@ -582,7 +610,7 @@ interactions:
connection:
- keep-alive
content-length:
- '9923'
- '11056'
content-type:
- application/json
cookie:
@@ -598,7 +626,7 @@ interactions:
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
- 2.31.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
@@ -606,58 +634,64 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.12
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-DIqxidsfoqQl7qXSIVHfSCyETUwlU\",\n \"object\":
\"chat.completion\",\n \"created\": 1773385862,\n \"model\": \"gpt-4o-2024-08-06\",\n
string: "{\n \"id\": \"chatcmpl-DTApljTaq8nDgNMS21B319i56seCn\",\n \"object\":
\"chat.completion\",\n \"created\": 1775845529,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"1. **The Rise of Autonomous AI Agents
and Their Impact on the Future of Work** \\nExplore how autonomous AI agents\u2014systems
capable of performing complex tasks independently\u2014are transforming industries
such as customer service, software development, and logistics. Discuss implications
for job automation, human-AI collaboration, and ethical considerations surrounding
decision-making autonomy.\\n\\n2. **Generative AI Beyond Text: Innovations
in Audio, Video, and 3D Content Creation** \\nDelve into advancements in
generative AI models that create not only text but also realistic audio, video
content, virtual environments, and 3D models. Highlight applications in gaming,
entertainment, education, and digital marketing, as well as challenges like
misinformation and deepfake detection.\\n\\n3. **AI-Driven Climate Modeling:
Enhancing Predictive Accuracy to Combat Climate Change** \\nExamine how AI
and machine learning are improving climate models by analyzing vast datasets,
uncovering patterns, and simulating environmental scenarios. Discuss how these
advances are aiding policymakers in making informed decisions to address climate
risks and sustainability goals.\\n\\n4. **The Ethical Frontiers of AI in Healthcare:
Balancing Innovation with Patient Privacy** \\nInvestigate ethical challenges
posed by AI applications in healthcare, including diagnosis, personalized
treatment, and patient data management. Focus on balancing rapid technological
innovation with privacy, bias mitigation, and regulatory frameworks to ensure
equitable access and trust.\\n\\n5. **Quantum Computing Meets AI: Exploring
the Next Leap in Computational Power** \\nCover the intersection of quantum
computing and artificial intelligence, exploring how quantum algorithms could
accelerate AI training processes and solve problems beyond the reach of classical
computers. Outline current research, potential breakthroughs, and the timeline
for real-world applications.\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 1748,\n \"completion_tokens\":
335,\n \"total_tokens\": 2083,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
\"assistant\",\n \"content\": \"- **The Evolution of AI Agents: From
Rule-Based Bots to Autonomous Decision Makers** \\n Explore the historical
development of AI agents, tracing the journey from simple scripted chatbots
to advanced autonomous systems capable of complex decision-making and learning.
Dive into key technological milestones, breakthroughs in machine learning,
and current state-of-the-art AI agents. Discuss implications for industries
such as customer service, healthcare, and autonomous vehicles, highlighting
both opportunities and ethical concerns.\\n\\n- **AI in Daily Life: How Intelligent
Agents Are Reshaping Human Routines** \\n Investigate the integration of
AI agents in everyday life\u2014from virtual assistants like Siri and Alexa
to personalized recommendation systems and smart home devices. Analyze how
these AI tools influence productivity, privacy, and social behavior. Include
human interest elements through stories of individuals or communities who
have embraced or resisted these technologies.\\n\\n- **The Future of Work:
AI Agents as Collaborative Colleagues** \\n Examine how AI agents are transforming
workplaces by acting as collaborators rather than just tools. Cover applications
in creative fields, data analysis, and decision support, while addressing
potential challenges such as job displacement, new skill requirements, and
the evolving definition of teamwork. Use expert opinions and case studies
to paint a nuanced future outlook.\\n\\n- **Ethics and Accountability in AI
Agent Development** \\n Delve into the ethical dilemmas posed by increasingly
autonomous AI agents\u2014topics like bias in algorithms, data privacy, and
accountability for AI-driven decisions. Explore measures being taken globally
to regulate AI, frameworks for responsible AI development, and the role of
public awareness. Include historical context about technology ethics to provide
depth.\\n\\n- **Human-AI Symbiosis: Stories of Innovative Partnerships Shaping
Our World** \\n Tell compelling human interest stories about individuals
or organizations pioneering collaborative projects with AI agents that lead
to breakthroughs in science, art, or social good. Highlight how these partnerships
transcend traditional human-machine interaction and open new creative and
problem-solving possibilities, inspiring readers about the potential of human-AI
synergy.\",\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 1903,\n \"completion_tokens\": 399,\n
\ \"total_tokens\": 2302,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_b7c8e3f100\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_df40ab6c25\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-Ray:
- 9db938e60d5bc5e7-EWR
- 9ea3cb5a6957b301-TPE
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Fri, 13 Mar 2026 07:11:04 GMT
- Fri, 10 Apr 2026 18:25:31 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -673,7 +707,7 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '2009'
- '2183'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:

Some files were not shown because too many files have changed in this diff.
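
The fixture diffed above follows the vcrpy cassette layout (an `interactions:` list of request/response pairs with secrets redacted as `X-...-XXX` placeholders), which is an assumption inferred from the recorded OpenAI traffic rather than stated in this diff. A minimal sketch of how such a cassette is typically recorded and replayed with pytest-recording is below; the test name, crew scenario, and filter options are hypothetical, not taken from this changeset.

```python
# Minimal sketch, assuming these YAML fixtures are vcrpy cassettes driven by
# pytest-recording; the test name and options here are hypothetical examples.
import pytest


@pytest.mark.vcr(filter_headers=["authorization"])  # keep API keys out of the cassette
def test_hierarchical_crew_delegation():
    # First run (record mode): real HTTP calls to api.openai.com are written to
    # a YAML cassette, one entry per request/response pair, as in the diff above.
    # Later runs: responses are replayed from the cassette. Re-recording after a
    # prompt or SDK change therefore rewrites request bodies, content-lengths,
    # completion ids, timestamps, and system_fingerprint values - exactly the
    # churn visible in this diff.
    ...
```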