- traces enable/disable now use update_user_data (file-locked atomic
read-modify-write) instead of bare _load_user_data + _save_user_data —
prevents data loss under concurrent CLI invocations.
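The locked read-modify-write pattern can be sketched as follows. This is a minimal illustration, not the actual crewai implementation: the helper name mirrors `update_user_data`, but the body uses stdlib `fcntl` in place of the project's real locking dependency, and the JSON layout is invented for the example.

```python
import fcntl
import json
import tempfile
from pathlib import Path


def update_user_data(path: Path, mutate) -> dict:
    """Atomically read-modify-write a JSON file under an exclusive lock.

    Hypothetical sketch: the lock is held across the whole read/modify/write
    cycle, so two concurrent CLI invocations cannot interleave and silently
    drop each other's changes (the bug with bare load + save).
    """
    with open(path, "a+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # exclusive lock for the full cycle
        f.seek(0)
        raw = f.read()
        data = json.loads(raw) if raw else {}
        mutate(data)        # caller's in-place modification
        f.seek(0)
        f.truncate()
        json.dump(data, f)  # lock released when the file is closed
    return data


# Usage: flip a traces flag without clobbering other keys.
tmp = Path(tempfile.mkdtemp()) / "user_data.json"
tmp.write_text('{"user_id": "abc"}')
result = update_user_data(tmp, lambda d: d.update(traces_enabled=True))
```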
- Drop unused _get_nested_value / _get_project_attribute re-exports from
crewai_cli.utils and crewai.utilities.project_utils — no callers.
- Mark crewai.events.listeners.tracing.utils re-exports via __all__ so
CodeQL recognizes _save_user_data, is_tracing_enabled, update_user_data
(and the other tracing helpers defined in the file) as public API.
- Add __all__ to all re-export shims (crewai.{settings,constants,utilities.{
version,paths,printer,lock_store,constants,project_utils},auth.token_manager,
events.utils.console_formatter}, crewai_cli.{config,shared.token_manager},
crewai_cli.utils) to silence CodeQL "unused import" findings — the imports
are the module's public API.
- Migrate three CLI internal callers (cli.py, authentication/main.py,
authentication/token.py) off the deprecated crewai_cli.shared.token_manager
shim, importing crewai_core.token_manager directly. Avoids a self-imposed
DeprecationWarning on every CLI startup.
- Add Telemetry._safe_telemetry_procedure for void operations; switch the
CLI-facing span methods (deploy/template/flow_creation/signup, _add_attribute)
off _safe_telemetry_operation since they don't return a span.
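The split can be sketched like this. The method names follow the commit, but the bodies are an assumption about the intent (swallow telemetry failures; only the span-returning wrapper propagates a value), not the real implementation:

```python
from typing import Callable, Optional, TypeVar

T = TypeVar("T")


class Telemetry:
    """Sketch of the two wrappers; real error handling may differ."""

    def _safe_telemetry_operation(self, operation: Callable[[], T]) -> Optional[T]:
        # For calls that return a span the caller needs.
        try:
            return operation()
        except Exception:
            return None

    def _safe_telemetry_procedure(self, procedure: Callable[[], None]) -> None:
        # For fire-and-forget calls: nothing to return, so no Optional
        # leaks into signatures that never produce a span.
        try:
            procedure()
        except Exception:
            pass


telemetry = Telemetry()
span = telemetry._safe_telemetry_operation(lambda: "span-object")
```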
- Delete unused crewai_cli.utils.update_env_vars (had a latent type-cast bug
and zero callers).
- New crewai_core.{settings,token_manager,tool_credentials} absorb the previously
duplicated Settings class (~200 LOC), TokenManager (~185 LOC), and
build_env_with_*_credentials helpers from both crewai and crewai-cli.
- crewai_core.constants gains the OAuth2/enterprise URL constants.
- crewai.settings, crewai.auth.token_manager, crewai_cli.config, and
crewai_cli.shared.token_manager are now thin re-export shims (deprecation
warnings on the crewai.* paths; crewai_cli.* paths kept silent re-exports).
- Internal callers (plus_api, auth.token, auth.oauth2, agent_utils,
trace_batch_manager) migrated to crewai_core.* imports.
- Tests updated to patch crewai_core.{settings,token_manager}.* paths.
- crewai-core gains pydantic, cryptography, tomli deps; crewai-cli's redundant
cryptography dep can stay (still imported by crewai_cli.shared.token_manager
shim users) — no behavior change.
- Standalone CLI smoke still passes; crewai's full mypy (471 files) clean.
- New lib/crewai-core/ package: version, paths, constants, lock_store, user_data,
printer, telemetry. Pure leaf — depends only on appdirs/portalocker/rich/otel.
- crewai now depends on crewai-core; old crewai.utilities.{version,paths,printer,
lock_store} and the user-data block of events/listeners/tracing/utils.py become
one-shot DeprecationWarning shims that re-export from crewai_core.
- crewai-cli drops its hard dep on crewai and depends only on crewai-core. CLI
imports for telemetry/version/printer/constants now point at crewai_core.
- tools/main.py lazy-imports project_utils + get_user_id; the publish/login
subcommands print a friendly "requires crewai" error if it's missing.
- crewai-cli is now genuinely standalone: 'crewai --help', 'version', 'login',
'config', 'traces', 'create', 'template' all work without crewai installed.
- 351 CLI tests + 9 crewai-core smoke tests + crewai's full mypy (471 files) clean.
Mirrors the parameter that origin/main added on crewai.cli.plus_api so the
relocated crewai.plus_api stays in sync. Also fix a stale crewai_cli
patch target in the lib/crewai plus_api test.
These tests target crewai.auth.oauth2.AuthenticationCommand but exercise
_login_to_tool_repository, which lives only on the standalone
crewai_cli.authentication.main.AuthenticationCommand. The same tests
already exist in lib/cli/tests/authentication/test_auth_main.py against
the correct class.
These functions live in crewai_cli.utils, not crewai.utilities.project_utils.
The same tests already exist in lib/cli/tests/test_utils.py against the
correct module.
Resolve conflicts from origin/main: relocate new CLI additions
(checkpoint_tui, deploy/validate, remote_template, content_crew
templates) into lib/cli, rewrite imports for the standalone
crewai-cli package, port main's trained_agents_file param and
predeploy validation, and bump python-dotenv/pydantic in
crewai-cli to match crewai's constraints. Add the new
mark_ephemeral_trace_batch_as_failed method to the relocated
crewai.plus_api. Update tests for the new payload field, deploy
--skip-validate kwarg, and crewai_cli import paths.
In `_execute_task_with_a2a` and its async variant, the try body
sets `task.output_pydantic = None` before returning an A2A
response. The finally block then checks
`if task.output_pydantic is not None` before restoring the
original value — but since it was just set to None, the condition
is always False and the original value is never restored. This
permanently mutates the Task object.
Remove the guard so `output_pydantic` is unconditionally restored,
matching the unconditional restoration of `description` and
`response_model` in the same block.
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
When a tool with result_as_answer=True raises an exception, the agent
was receiving result_as_answer=True and returning the error string as
the final answer. Now we set result_as_answer=False when an error event
is emitted, allowing the agent to reflect and retry.
Fixes crewAIInc/crewAI#5156
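The behavior change can be sketched as below. The function and event shape are hypothetical simplifications of the agent's tool-error path, not the real code:

```python
def run_tool(tool_fn, result_as_answer: bool):
    """Sketch: demote result_as_answer when the tool raises.

    Returns (output, result_as_answer). On error the flag is forced to
    False, so the error string is fed back to the agent for reflection
    and retry instead of being promoted to the final answer.
    """
    try:
        return tool_fn(), result_as_answer
    except Exception as exc:
        return f"Tool error: {exc}", False
```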
---------
Co-authored-by: NIK-TIGER-BILL <nik.tiger.bill@github.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
## Summary
- Reverts `b0e2fda` ("fix(flow): add execution_id separate from state.id", COR-48): removes `Flow.execution_id` and points `current_flow_id` / `current_flow_request_id` back at `flow_id` (i.e. `state.id`). The separate per-run tracking id was no longer the right abstraction once `restore_from_state_id` reshapes how `state.id` is assigned.
- Adds an optional `restore_from_state_id` kwarg to `Flow.kickoff` / `Flow.kickoff_async` that hydrates state from a previously-persisted flow's latest snapshot
- Reassigns `state.id` to a fresh value (or `inputs["id"]` if pinned) so the new run's `@persist` writes don't extend the source's history
- Existing `inputs["id"]` resume, `@persist`, and `from_checkpoint` paths are unchanged
## Problem
`@persist` only supports *resume* today: `kickoff(inputs={"id": <uuid>})` hydrates state and continues writing under the same `flow_uuid`. There's no way to **fork** — hydrate from a snapshot but persist under a separate key, leaving the source's history intact. This PR adds that.
| | `state.id` after kickoff | `@persist` writes land under |
|---|---|---|
| `inputs["id"]` (resume) | supplied id | supplied id (extends history) |
| `restore_from_state_id` (fork) | fresh id, or `inputs["id"]` if pinned | new id (source preserved) |
## Behavior
| `inputs.id` | `restore_from_state_id` | Effect |
|---|---|---|
| — | — | Fresh kickoff |
| set | — | Existing resume |
| — | UUID | Fork — new `state.id`, hydrated from source |
| set | UUID | Fork into a pinned `state.id`, hydrated from source |
- Source not found → silent fallback (mirrors existing resume)
- Both `from_checkpoint` and `restore_from_state_id` set → `ValueError`
- `restore_from_state_id=None` → byte-identical to current main
## Design
Fork hydration runs before the existing `inputs` block in `kickoff_async`. On a hit, it calls the same `_restore_state` primitive used by resume, then overwrites `state.id` with a fresh UUID (or `inputs["id"]`). A `fork_succeeded` flag gates the existing `inputs["id"]` path so we don't double-load. `_completed_methods` / `_is_execution_resuming` are intentionally untouched — skip-completed-methods remains the territory of `apply_checkpoint` and `from_pending`.
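The control flow above can be sketched in miniature. This is a simplified model, not the real `kickoff_async`: `storage` stands in for the persistence backend as a plain dict, and `state` is a dict rather than a pydantic model.

```python
import uuid


def hydrate_for_kickoff(state, inputs, restore_from_state_id, storage):
    """Sketch of the fork-vs-resume gating described in the Design section."""
    fork_succeeded = False
    if restore_from_state_id is not None:
        snapshot = storage.get(restore_from_state_id)
        if snapshot is not None:       # source not found -> silent fallback
            state.update(snapshot)     # same primitive resume uses
            # Reassign id so @persist writes don't extend the source's
            # history; inputs["id"] may pin the forked run to a chosen id.
            state["id"] = (inputs or {}).get("id") or str(uuid.uuid4())
            fork_succeeded = True
    # fork_succeeded gates the existing resume path so we don't double-load.
    if not fork_succeeded and inputs and "id" in inputs:
        state["id"] = inputs["id"]
    return state


storage = {"src-id": {"id": "src-id", "count": 3}}
forked = hydrate_for_kickoff({"id": "a"}, None, "src-id", storage)
pinned = hydrate_for_kickoff({"id": "b"}, {"id": "pin"}, "src-id", storage)
fallback = hydrate_for_kickoff({"id": "c"}, None, "missing", storage)
```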
## Test plan
- [ ] `pytest tests/test_flow_persistence.py` — 5 new tests (four-row matrix, not-found fallback, default no-op, conflict raise) + 6 existing as regression
- [ ] `pytest tests/test_flow.py` — broader flow suite
- [ ] Manual end-to-end against an HITL `@persist` flow
* feat(crewai-tools): add highlights to ExaSearchTool, rename from EXASearchTool
- Add a highlights init param so agents can get token-efficient excerpts instead of full pages
- Rename EXASearchTool to ExaSearchTool; keep EXASearchTool as a deprecated alias so existing imports keep working
- Update the docs and example to use highlights as the recommended option
- Add a small note that says Exa is the fastest and most accurate web search API
- Add tests for the new highlights param and the deprecation alias
* fix(crewai-tools): import order and module-level Exa for tests
- Reorder std-lib imports so ruff is happy with force-sort-within-sections.
- Import Exa at module level (with a fallback) so the existing test mocks resolve.
The lazy install prompt still works if exa_py is missing.
- Allow content and summary to be a dict, matching highlights.
- Trim test file to the cases this PR introduces (highlights param and the
EXASearchTool deprecation alias). Existing init-shape tests stay.
Co-Authored-By: ishan <ishan@exa.ai>
* chore(crewai-tools): drop self-explanatory comment on schema alias
Co-Authored-By: ishan <ishan@exa.ai>
* docs(crewai-tools): default highlights to True, drop summary from examples
Co-Authored-By: ishan <ishan@exa.ai>
* docs(crewai-tools): simplify highlights examples to highlights=True
Co-Authored-By: ishan <ishan@exa.ai>
* feat(crewai-tools): add x-exa-integration header for usage tracking
Co-Authored-By: ishan <ishan@exa.ai>
* docs(crewai-tools): add Exa MCP section and resources links
Co-Authored-By: ishan <ishan@exa.ai>
---------
Co-authored-by: ishan <ishan@exa.ai>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
* feat(azure): forward credential_scopes to Azure AI Inference client
Adds a credential_scopes field to the native Azure AI Inference
provider and a matching AZURE_CREDENTIAL_SCOPES env var
(comma-separated). The value is forwarded to ChatCompletionsClient /
AsyncChatCompletionsClient when set, letting keyless / Entra-based
callers target a specific Azure AD audience (e.g.
https://cognitiveservices.azure.com/.default) without subclassing the
provider. Matches the upstream azure.ai.inference SDK kwarg of the
same name.
Lazy build re-reads the env var so an LLM constructed at module
import (before deployment env vars are set) still picks up scopes —
same pattern as the existing AZURE_API_KEY / AZURE_ENDPOINT lazy
reads. to_config_dict round-trips the field.
* refactor(azure): tighten credential_scopes env handling
Address review feedback:
- Move os.getenv into the helper so AZURE_CREDENTIAL_SCOPES appears once
- Match the surrounding api_key/endpoint `or` style in the validator
- Drop the list() defensive copy in to_config_dict — every other field
in that method (and the base class's `stop`) is assigned by reference
* feat(flow): add optional key param to @persist decorator
Allows users to specify which state attribute to use as the
persistence key instead of always defaulting to state.id.
Usage: @persist(key='conversation_id')
Falls back to state.id when key is not provided (no breaking change).
Raises ValueError if the specified key is missing or falsy on state.
* docs(flow): document @persist key parameter for custom persistence keys
* fix(flow): use explicit None check for persist key to avoid empty-string fallback
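The key-resolution rules above can be sketched like this. The function name is invented and `state` is modeled as a dict; only the `key` semantics (fallback to `state.id`, `ValueError` on missing/falsy, explicit `None` check) come from the commits.

```python
import uuid


def resolve_persistence_key(state, key=None):
    """Sketch of @persist(key=...) key lookup.

    `key is None` (identity, not truthiness) decides whether a custom key
    was requested, so an explicit key="" raises instead of silently
    falling back to state.id.
    """
    if key is None:
        return state["id"]
    value = state.get(key)
    if not value:
        raise ValueError(f"Persistence key {key!r} is missing or falsy on state")
    return value


state = {"id": str(uuid.uuid4()), "conversation_id": "conv-42"}
```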
---------
Co-authored-by: iris-clawd <iris-clawd@anthropic.com>
Co-authored-by: iris-clawd <iris@crewai.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
CrewAgentExecutor is reused across sequential tasks but invoke/ainvoke
only appended to self.messages and never reset self.iterations, so
task 2 inherited task 1's history and iteration count.
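The fix amounts to resetting per-task state at the top of each invoke. A minimal model of the behavior (the real executor's message handling is far richer):

```python
class CrewAgentExecutor:
    """Sketch: the executor instance is reused across sequential tasks."""

    def __init__(self):
        self.messages = []
        self.iterations = 0

    def invoke(self, prompt: str) -> str:
        # Reset per-task state up front; without this, task 2 starts with
        # task 1's message history and iteration count already in place.
        self.messages = []
        self.iterations = 0
        self.messages.append({"role": "user", "content": prompt})
        self.iterations += 1
        return "done"


executor = CrewAgentExecutor()
executor.invoke("task 1")
executor.invoke("task 2")
```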
* fix(flow): add execution_id separate from state.id (COR-48)
When a consumer passes `id` in `kickoff(inputs=...)`, that value
overwrites the flow's state.id — which was also being used as the
execution tracking identity for telemetry, tracing, and external
correlation. Two kickoffs sharing the same consumer id ended up
with the same tracking id, breaking any downstream system that
joins on it.
Introduces `Flow.execution_id`: a stable per-run identifier stored
as a `PrivateAttr` on the `Flow` model, exposed via property +
setter. It defaults to a fresh `uuid4` per instance, is never
touched by `inputs["id"]`, and can be assigned by outer systems
that already have an execution identity (e.g. a task id).
Switches the `current_flow_id` / `current_flow_request_id`
ContextVars to seed from `execution_id` so OTel spans emitted by
`FlowTrackable` children correlate on the stable tracking key.
`state.id` keeps its existing override semantics for
persistence/restore — consumers resuming a persisted flow via
`inputs["id"]` work exactly as before.
Adds tests covering default uniqueness per instance, immunity to
consumer `inputs["id"]`, context-var propagation, absence from
serialized state, and parity for dict-state flows.
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>