* feat(azure): forward credential_scopes to Azure AI Inference client
Adds a credential_scopes field to the native Azure AI Inference
provider and a matching AZURE_CREDENTIAL_SCOPES env var
(comma-separated). The value is forwarded to ChatCompletionsClient /
AsyncChatCompletionsClient when set, letting keyless / Entra-based
callers target a specific Azure AD audience (e.g.
https://cognitiveservices.azure.com/.default) without subclassing the
provider. Matches the upstream azure.ai.inference SDK kwarg of the
same name.
Lazy build re-reads the env var so an LLM constructed at module
import (before deployment env vars are set) still picks up scopes —
same pattern as the existing AZURE_API_KEY / AZURE_ENDPOINT lazy
reads. to_config_dict round-trips the field.
* refactor(azure): tighten credential_scopes env handling
Address review feedback:
- Move os.getenv into the helper so AZURE_CREDENTIAL_SCOPES appears once
- Match the surrounding api_key/endpoint `or` style in the validator
- Drop the list() defensive copy in to_config_dict — every other field
in that method (and the base class's `stop`) is assigned by reference
* feat(flow): add optional key param to @persist decorator
Allows users to specify which state attribute to use as the
persistence key instead of always defaulting to state.id.
Usage: @persist(key='conversation_id')
Falls back to state.id when key is not provided (no breaking change).
Raises ValueError if the specified key is missing or falsy on state.
* docs(flow): document @persist key parameter for custom persistence keys
* fix(flow): use explicit None check for persist key to avoid empty-string fallback
---------
Co-authored-by: iris-clawd <iris-clawd@anthropic.com>
Co-authored-by: iris-clawd <iris@crewai.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
CrewAgentExecutor is reused across sequential tasks but invoke/ainvoke
only appended to self.messages and never reset self.iterations, so
task 2 inherited task 1's history and iteration count.
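The fix implied here, resetting per-task state at the top of invoke, can be sketched with a minimal stand-in (not the real CrewAgentExecutor, which carries much more state):

```python
# Illustrative stand-in for the reset-per-task fix
class Executor:
    def __init__(self) -> None:
        self.messages: list[str] = []
        self.iterations: int = 0

    def invoke(self, prompt: str) -> str:
        # Reset per-task state up front so a reused executor does not
        # inherit the previous task's history and iteration count
        self.messages = []
        self.iterations = 0
        self.messages.append(prompt)
        self.iterations += 1
        return prompt

executor = Executor()
executor.invoke("task 1")
executor.invoke("task 2")
```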
* fix(flow): add execution_id separate from state.id (COR-48)
When a consumer passes `id` in `kickoff(inputs=...)`, that value
overwrites the flow's state.id — which was also being used as the
execution tracking identity for telemetry, tracing, and external
correlation. Two kickoffs sharing the same consumer id ended up
with the same tracking id, breaking any downstream system that
joins on it.
Introduces `Flow.execution_id`: a stable per-run identifier stored
as a `PrivateAttr` on the `Flow` model, exposed via property +
setter. It defaults to a fresh `uuid4` per instance, is never
touched by `inputs["id"]`, and can be assigned by outer systems
that already have an execution identity (e.g. a task id).
Switches the `current_flow_id` / `current_flow_request_id`
ContextVars to seed from `execution_id` so OTel spans emitted by
`FlowTrackable` children correlate on the stable tracking key.
`state.id` keeps its existing override semantics for
persistence/restore — consumers resuming a persisted flow via
`inputs["id"]` work exactly as before.
Adds tests covering default uniqueness per instance, immunity to
consumer `inputs["id"]`, context-var propagation, absence from
serialized state, and parity for dict-state flows.
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
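The PrivateAttr-plus-property pattern described in this commit looks roughly like the following (a simplified stand-in for the real Flow model, shown only to illustrate why the identifier never leaks into serialized state and is never touched by `inputs["id"]`):

```python
from uuid import uuid4
from pydantic import BaseModel, PrivateAttr

# Simplified, hypothetical stand-in for the real Flow model
class Flow(BaseModel):
    _execution_id: str = PrivateAttr(default_factory=lambda: str(uuid4()))

    @property
    def execution_id(self) -> str:
        return self._execution_id

    @execution_id.setter
    def execution_id(self, value: str) -> None:
        # Outer systems that already have an execution identity
        # (e.g. a task id) can assign it here
        self._execution_id = value

f1, f2 = Flow(), Flow()
```

PrivateAttr fields are excluded from `model_dump()`, which is what keeps the tracking id out of persisted state.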
Enables keyless Azure auth (OIDC Workload Identity Federation, Managed
Identity, Azure CLI, env-configured Service Principal) without any
crewAI-specific configuration. Customers whose deployment environment
already sets the standard azure-identity env vars get keyless auth for
free; the existing API-key path is unchanged.
Linear: FAC-40
* fix: merge execution metadata on duplicate batch initialization in TraceBatchManager
- Updated TraceBatchManager to merge execution metadata when a batch is initialized multiple times.
- Enhanced logging to reflect the merging of metadata during duplicate initialization.
- Added a test case to verify that execution metadata is correctly merged when initializing a batch after a lazy action.
* drop env event emission from the traces listener
Add fork classmethod, _restore_runtime, and _restore_event_scope
to BaseAgent. Fix from_checkpoint to set runtime state on the
event bus and restore event scopes. Store kickoff event ID across
checkpoints to skip re-emission on resume. Handle agent entity
type in checkpoint CLI and TUI.
The test_older_than tests in both JSON and SQLite prune suites used
hardcoded 2026-04-17 timestamps for the 'new' checkpoint. Once that
date passes, the checkpoint is older than 1 day and gets pruned along
with the 'old' one, causing assert count >= 1 to fail (count=0).
Use 2099-01-01 for the 'new' checkpoint so tests remain stable.
Co-authored-by: Joao Moura <joaomdmoura@gmail.com>
Add three new CLI subcommands to improve checkpoint UX:
- `crewai checkpoint resume [id]` skips the TUI and resumes from the
latest or specified checkpoint directly
- `crewai checkpoint diff <id1> <id2>` compares two checkpoints showing
changes in metadata, inputs, task status, and outputs
- `crewai checkpoint prune --keep N --older-than Xd` removes old
checkpoints from JSON dirs or SQLite databases
Also writes a resume hint to stderr after every checkpoint save so
users discover the command without needing to know it exists.
Concurrent streaming runs registered handlers on the singleton event bus
that received all LLMStreamChunkEvent emissions, causing chunks to fan
out across unrelated queues. Introduces a ContextVar-based stream scope
ID so each handler only accepts events from its own execution context.
Closes #5376
* feat: add template management commands for project templates
- Introduced command group to browse and install project templates.
- Added command to display available templates.
- Implemented command to install a selected template into the current directory.
- Created class to handle template-related operations, including fetching templates from GitHub and managing installations.
- Enhanced telemetry to track template installations.
* linted
* addressing comments
* comment addressed
resolve_refs now returns type-preserving stubs instead of {} for
circular $refs, and create_model_from_schema catches JsonRefError
to fall back to lazy top-level-only inlining.
Add crewai deploy validate to check project structure, dependencies, imports, and env usage before deploy
Run validation automatically in deploy create and deploy push with skip flag support
Return structured findings with stable codes and hints
Add test coverage for validation scenarios
refactor: defer LLM client construction to first use
Move SDK client creation out of model initialization into lazy getters
Add _get_sync_client and _get_async_client across providers
Route all provider calls through lazy getters
Surface credential errors at first real invocation
refactor: standardize provider client access
Align async paths to use _get_async_client
Avoid client construction in lightweight config accessors
Simplify provider lifecycle and improve consistency
test: update suite for new behavior
Update tests for lazy initialization contract
Update CLI tests for validation flow and skip flag
Expand coverage for provider initialization paths
Substring checks like `'0.1' not in json_str` collided with timestamps
such as `2026-04-10T13:00:50.140557` on CI. Round-trip through
`model_validate_json` to verify structurally that the embedding field
is absent from the serialized output.
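A minimal reproduction of the flakiness and the structural alternative (model names are illustrative, not the actual test fixtures):

```python
import json
from datetime import datetime
from pydantic import BaseModel

# Illustrative model: a timestamp whose digits collide with the value
# being asserted absent
class Snapshot(BaseModel):
    created_at: datetime
    note: str

snap = Snapshot(created_at=datetime(2026, 4, 10, 13, 0, 50, 140557), note="ok")
json_str = snap.model_dump_json()
# Fragile: '0.1' matches inside '...:50.140557' even though no field holds 0.1
fragile_hit = "0.1" in json_str
# Robust: parse and check the key structurally; digits can't collide
parsed = json.loads(json_str)
structural_absent = "embedding" not in parsed
```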
- Rewrite TUI with Tree widget showing branch/fork lineage
- Add Resume and Fork buttons in detail panel with Collapsible entities
- Show branch and parent_id in detail panel and CLI info output
- Auto-detect .checkpoints.db when default dir missing
- Append .db to location for SqliteProvider when no extension set
- Fix RuntimeState.from_checkpoint not setting provider/location
- Fork now writes initial checkpoint on new branch
- Add from_checkpoint, fork, and CLI docs to checkpointing.mdx
Accept CheckpointConfig on Crew and Flow kickoff/kickoff_async/akickoff.
When restore_from is set, the entity resumes from that checkpoint.
When only config fields are set, checkpointing is enabled for the run.
Adds restore_from field (Path | str | None) to CheckpointConfig.
Write the crewAI package version into every checkpoint blob. On restore,
run version-based migrations so older checkpoints can be transformed
forward to the current format. Adds crewai.utilities.version module.
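A hypothetical sketch of version-gated restore migrations. The registry, migration names, and the naive lexicographic version compare are all illustrative (the real crewai.utilities.version module would parse versions properly):

```python
from typing import Callable

# Hypothetical migration registry keyed by the version that introduced
# each format change
MIGRATIONS: list[tuple[str, Callable[[dict], dict]]] = []

def migration(introduced_in: str):
    def register(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        MIGRATIONS.append((introduced_in, fn))
        return fn
    return register

@migration("0.90.0")
def rename_raw_field(blob: dict) -> dict:
    # Example format change: a field renamed between checkpoint versions
    if "raw" in blob:
        blob["raw_output"] = blob.pop("raw")
    return blob

def restore(blob: dict, current_version: str = "1.0.0") -> dict:
    saved = blob.get("crewai_version", "0.0.0")
    for introduced_in, migrate in MIGRATIONS:
        if saved < introduced_in <= current_version:  # naive string compare
            blob = migrate(blob)
    blob["crewai_version"] = current_version
    return blob

migrated = restore({"crewai_version": "0.5.0", "raw": "hello"})
```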
* refactor: remove CodeInterpreterTool and deprecate code execution params
CodeInterpreterTool has been removed. The allow_code_execution and
code_execution_mode parameters on Agent are deprecated and will be
removed in v2.0. Use dedicated sandbox services (E2B, Modal, etc.)
for code execution needs.
Changes:
- Remove CodeInterpreterTool from crewai-tools (tool, Dockerfile, tests, imports)
- Remove docker dependency from crewai-tools
- Deprecate allow_code_execution and code_execution_mode on Agent
- get_code_execution_tools() returns empty list with deprecation warning
- _validate_docker_installation() is a no-op with deprecation warning
- Bedrock CodeInterpreter (AWS hosted) and OpenAI code_interpreter are NOT affected
* fix: remove empty code_interpreter imports and unused stdlib imports
- Remove empty `from code_interpreter_tool import ()` blocks in both
crewai_tools/__init__.py and tools/__init__.py that caused SyntaxError
after CodeInterpreterTool was removed
- Remove unused `shutil` and `subprocess` imports from agent/core.py
left over from the code execution params deprecation
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix: remove redundant _validate_docker_installation call and fix list type annotation
- Drop the _validate_docker_installation() call inside the allow_code_execution
block — it fired a second DeprecationWarning identical to the one emitted
just above it, making the warning fire twice.
- Annotate get_code_execution_tools() return type as list[Any] to satisfy mypy
(bare `list` fails the type-arg check introduced by this branch).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* ci: retrigger
* fix: update test_crew.py to remove CodeInterpreterTool references
CodeInterpreterTool was removed from crewai_tools. Update tests to
reflect that get_code_execution_tools() now returns an empty list.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* chore: update tool specifications
---------
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
- Pass RuntimeState through the event bus and enable entity auto-registration
- Introduce checkpointing API:
- .checkpoint(), .from_checkpoint(), and async checkpoint support
- Provider-based storage with BaseProvider and JsonProvider
- Mid-task resume and kickoff() integration
- Add EventRecord tracking and full event serialization with subtype preservation
- Enable checkpoint fidelity via llm_type and executor_type discriminators
- Refactor executor architecture:
- Convert executors, tools, prompts, and TokenProcess to BaseModel
- Introduce proper base classes with typed fields (CrewAgentExecutorMixin, BaseAgentExecutor)
- Add generic from_checkpoint with full LLM serialization
- Support executor back-references and resume-safe initialization
- Refactor runtime state system:
- Move RuntimeState into state/ module with async checkpoint support
- Add entity serialization improvements and JSON-safe round-tripping
- Implement event scope tracking and replay for accurate resume behavior
- Improve tool and schema handling:
- Make BaseTool fully serializable with JSON round-trip support
- Serialize args_schema via JSON schema and dynamically reconstruct models
- Add automatic subclass restoration via tool_type discriminator
- Enhance Flow checkpointing:
- Support restoring execution state and subclass-aware deserialization
- Performance improvements:
- Cache handler signature inspection
- Optimize event emission and metadata preparation
- General cleanup:
- Remove dead checkpoint payload structures
- Simplify entity registration and serialization logic
* fix: exclude embedding vector from MemoryRecord serialization
MemoryRecord.embedding (1536 floats for OpenAI embeddings) was included
in model_dump()/JSON serialization and repr. When recall results flow
to agents or get logged, these vectors burn tokens for zero value —
agents never need the raw embedding.
Added exclude=True and repr=False to the embedding field. The storage
layer accesses record.embedding directly (not via model_dump), so
persistence is unaffected.
* test: validate embedding excluded from serialization
Two tests:
1. MemoryRecord — model_dump, model_dump_json, and repr all exclude
embedding. Direct attribute access still works for storage layer.
2. MemoryMatch — nested record serialization also excludes embedding.
* fix: bump litellm to >=1.83.0 to address CVE-2026-35030
Bump litellm from <=1.82.6 to >=1.83.0 to fix JWT auth bypass via
OIDC cache key collision (CVE-2026-35030). Also widen devtools openai
pin from ~=1.83.0 to >=1.83.0,<3 to resolve the version conflict
(litellm 1.83.0 requires openai>=2.8.0).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: resolve mypy errors from litellm bump
- Remove unused type: ignore[import-untyped] on instructor import
- Remove all unused type: ignore[union-attr] comments (litellm types fixed)
- Add hasattr guard for tool_call.function — new litellm adds
ChatCompletionMessageCustomToolCall to the union which lacks .function
* fix: tighten litellm pin to ~=1.83.0 (patch-only bumps)
>=1.83.0,<2 is too wide — litellm has had breaking changes between
minors. ~=1.83.0 means >=1.83.0,<1.84.0 — gets CVE patches but won't
pull in breaking minor releases.
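The compatible-release semantics can be checked directly with the `packaging` library (assuming it is available in the environment):

```python
from packaging.specifiers import SpecifierSet

# PEP 440 compatible release: ~=1.83.0 pins the minor, floats the patch
spec = SpecifierSet("~=1.83.0")
patch_ok = "1.83.5" in spec       # patch bumps allowed (CVE fixes)
minor_blocked = "1.84.0" not in spec  # minor bumps excluded (potential breaks)
```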
* ci: bump uv from 0.8.4 to 0.11.3
* fix: resolve mypy errors in openai completion from 2.x type changes
Use isinstance checks with concrete openai response types instead of
string comparisons for proper type narrowing. Update code interpreter
handling for outputs/OutputImage API changes in openai 2.x.
* fix: pre-cache tiktoken encoding before VCR intercepts requests
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Alex <alex@crewai.com>
Co-authored-by: Greyson LaLonde <greyson@crewai.com>