When a tool with result_as_answer=True raised an exception, the agent
still treated the run as result_as_answer=True and returned the error
string as the final answer. Now result_as_answer is set to False when
an error event is emitted, allowing the agent to reflect and retry.
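A minimal sketch of the fix's behavior, not crewAI's actual implementation: `run_tool` and `ToolOutcome` are hypothetical names standing in for the executor's tool-invocation path.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ToolOutcome:
    output: str
    result_as_answer: bool  # when True, the agent returns output verbatim


def run_tool(func: Callable[[], str], result_as_answer: bool) -> ToolOutcome:
    try:
        return ToolOutcome(func(), result_as_answer)
    except Exception as exc:
        # On error, drop result_as_answer so the agent can reflect and
        # retry instead of surfacing the error string as the final answer.
        return ToolOutcome(f"Tool failed: {exc}", result_as_answer=False)
```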
Fixes crewAIInc/crewAI#5156
---------
Co-authored-by: NIK-TIGER-BILL <nik.tiger.bill@github.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
## Summary
- Reverts `b0e2fda` ("fix(flow): add execution_id separate from state.id", COR-48): removes `Flow.execution_id` and points `current_flow_id` / `current_flow_request_id` back at `flow_id` (i.e. `state.id`). The separate per-run tracking id was no longer the right abstraction once `restore_from_state_id` reshapes how `state.id` is assigned
- Adds an optional `restore_from_state_id` kwarg to `Flow.kickoff` / `Flow.kickoff_async` that hydrates state from a previously-persisted flow's latest snapshot
- Reassigns `state.id` to a fresh value (or `inputs["id"]` if pinned) so the new run's `@persist` writes don't extend the source's history
- Existing `inputs["id"]` resume, `@persist`, and `from_checkpoint` paths are unchanged
## Problem
`@persist` only supports *resume* today: `kickoff(inputs={"id": <uuid>})` hydrates state and continues writing under the same `flow_uuid`. There's no way to **fork** — hydrate from a snapshot but persist under a separate key, leaving the source's history intact. This PR adds that.
| | `state.id` after kickoff | `@persist` writes land under |
|---|---|---|
| `inputs["id"]` (resume) | supplied id | supplied id (extends history) |
| `restore_from_state_id` (fork) | fresh id, or `inputs["id"]` if pinned | new id (source preserved) |
## Behavior
| `inputs.id` | `restore_from_state_id` | Effect |
|---|---|---|
| — | — | Fresh kickoff |
| set | — | Existing resume |
| — | UUID | Fork — new `state.id`, hydrated from source |
| set | UUID | Fork into a pinned `state.id`, hydrated from source |
- Source not found → silent fallback (mirrors existing resume)
- Both `from_checkpoint` and `restore_from_state_id` set → `ValueError`
- `restore_from_state_id=None` → byte-identical to current main
## Design
Fork hydration runs before the existing `inputs` block in `kickoff_async`. On a hit, it calls the same `_restore_state` primitive used by resume, then overwrites `state.id` with a fresh UUID (or `inputs["id"]`). A `fork_succeeded` flag gates the existing `inputs["id"]` path so we don't double-load. `_completed_methods` / `_is_execution_resuming` are intentionally untouched — skip-completed-methods remains the territory of `apply_checkpoint` and `from_pending`.
## Test plan
- [ ] `pytest tests/test_flow_persistence.py` — 5 new tests (four-row matrix, not-found fallback, default no-op, conflict raise) + 6 existing as regression
- [ ] `pytest tests/test_flow.py` — broader flow suite
- [ ] Manual end-to-end against an HITL `@persist` flow
* feat(crewai-tools): add highlights to ExaSearchTool, rename from EXASearchTool
- Add a highlights init param so agents can get token-efficient excerpts instead of full pages
- Rename EXASearchTool to ExaSearchTool; keep EXASearchTool as a deprecated alias so existing imports keep working
- Update the docs and example to use highlights as the recommended option
- Add a small note that says Exa is the fastest and most accurate web search API
- Add tests for the new highlights param and the deprecation alias
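A hypothetical sketch of the deprecated-alias pattern this PR uses: the class names come from the PR, but the class bodies here are invented for illustration.

```python
import warnings


class ExaSearchTool:
    def __init__(self, highlights: bool = False):
        # highlights=True asks Exa for token-efficient excerpts
        # instead of full page contents.
        self.highlights = highlights


class EXASearchTool(ExaSearchTool):
    """Deprecated alias kept so existing imports keep working."""

    def __init__(self, **kwargs):
        warnings.warn(
            "EXASearchTool is deprecated; use ExaSearchTool instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(**kwargs)
```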
* fix(crewai-tools): import order and module-level Exa for tests
- Reorder std-lib imports so ruff is happy with force-sort-within-sections.
- Import Exa at module level (with a fallback) so the existing test mocks resolve.
The lazy install prompt still works if exa_py is missing.
- Allow content and summary to be a dict, matching highlights.
- Trim test file to the cases this PR introduces (highlights param and the
EXASearchTool deprecation alias). Existing init-shape tests stay.
Co-Authored-By: ishan <ishan@exa.ai>
* chore(crewai-tools): drop self-explanatory comment on schema alias
Co-Authored-By: ishan <ishan@exa.ai>
* docs(crewai-tools): default highlights to True, drop summary from examples
Co-Authored-By: ishan <ishan@exa.ai>
* docs(crewai-tools): simplify highlights examples to highlights=True
Co-Authored-By: ishan <ishan@exa.ai>
* feat(crewai-tools): add x-exa-integration header for usage tracking
Co-Authored-By: ishan <ishan@exa.ai>
* docs(crewai-tools): add Exa MCP section and resources links
Co-Authored-By: ishan <ishan@exa.ai>
---------
Co-authored-by: ishan <ishan@exa.ai>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
* feat(azure): forward credential_scopes to Azure AI Inference client
Adds a credential_scopes field to the native Azure AI Inference
provider and a matching AZURE_CREDENTIAL_SCOPES env var
(comma-separated). The value is forwarded to ChatCompletionsClient /
AsyncChatCompletionsClient when set, letting keyless / Entra-based
callers target a specific Azure AD audience (e.g.
https://cognitiveservices.azure.com/.default) without subclassing the
provider. Matches the upstream azure.ai.inference SDK kwarg of the
same name.
Lazy build re-reads the env var so an LLM constructed at module
import (before deployment env vars are set) still picks up scopes —
same pattern as the existing AZURE_API_KEY / AZURE_ENDPOINT lazy
reads. to_config_dict round-trips the field.
* refactor(azure): tighten credential_scopes env handling
Address review feedback:
- Move os.getenv into the helper so AZURE_CREDENTIAL_SCOPES appears once
- Match the surrounding api_key/endpoint `or` style in the validator
- Drop the list() defensive copy in to_config_dict — every other field
in that method (and the base class's `stop`) is assigned by reference
* feat(flow): add optional key param to @persist decorator
Allows users to specify which state attribute to use as the
persistence key instead of always defaulting to state.id.
Usage: @persist(key='conversation_id')
Falls back to state.id when key is not provided (no breaking change).
Raises ValueError if the specified key is missing or falsy on state.
* docs(flow): document @persist key parameter for custom persistence keys
* fix(flow): use explicit None check for persist key to avoid empty-string fallback
---------
Co-authored-by: iris-clawd <iris-clawd@anthropic.com>
Co-authored-by: iris-clawd <iris@crewai.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
CrewAgentExecutor is reused across sequential tasks, but invoke/ainvoke
only appended to self.messages and never reset self.iterations, so
task 2 inherited task 1's history and iteration count.
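A minimal sketch of the per-task reset this fix implies; `Executor` is a stand-in, not crewAI's CrewAgentExecutor.

```python
class Executor:
    def __init__(self):
        self.messages: list[str] = []
        self.iterations = 0

    def invoke(self, task_prompt: str) -> None:
        # Reset per-task state so a reused executor doesn't inherit the
        # previous task's history and iteration count.
        self.messages = []
        self.iterations = 0
        self.messages.append(task_prompt)
        self.iterations += 1
```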
* Add Tavily Research and Get Research tools
- Added the Tavily Research tool, with docs, to crewAI
- Added the Tavily Get Research tool, with docs, to crewAI
* Update `tavily-python` installation instructions and adjust version constraints
- Changed installation command from `pip install` to `uv add` for `tavily-python` in multiple documentation files.
- Updated version constraint for `tavily-python` in `pyproject.toml` from `>=0.7.14` to `~=0.7.14`.
- Modified the `exclude-newer` date in `uv.lock` to `2026-04-23T07:00:00Z`.
* Add Tavily Research Tool documentation in multiple languages
- Introduced `TavilyResearchTool` documentation in English, Arabic, Korean, and Portuguese.
- Updated `docs.json` to include paths for the new documentation files.
- The `TavilyResearchTool` allows CrewAI agents to perform multi-step research tasks and generate cited reports using the Tavily Research API.
* Fix Tavily research CI failures
---------
Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
Co-authored-by: Evan Rimer <evan.rimer@tavily.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
* fix(flow): add execution_id separate from state.id (COR-48)
When a consumer passes `id` in `kickoff(inputs=...)`, that value
overwrites the flow's state.id — which was also being used as the
execution tracking identity for telemetry, tracing, and external
correlation. Two kickoffs sharing the same consumer id ended up
with the same tracking id, breaking any downstream system that
joins on it.
Introduces `Flow.execution_id`: a stable per-run identifier stored
as a `PrivateAttr` on the `Flow` model, exposed via property +
setter. It defaults to a fresh `uuid4` per instance, is never
touched by `inputs["id"]`, and can be assigned by outer systems
that already have an execution identity (e.g. a task id).
Switches the `current_flow_id` / `current_flow_request_id`
ContextVars to seed from `execution_id` so OTel spans emitted by
`FlowTrackable` children correlate on the stable tracking key.
`state.id` keeps its existing override semantics for
persistence/restore — consumers resuming a persisted flow via
`inputs["id"]` work exactly as before.
Adds tests covering default uniqueness per instance, immunity to
consumer `inputs["id"]`, context-var propagation, absence from
serialized state, and parity for dict-state flows.
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Enables keyless Azure auth (OIDC Workload Identity Federation, Managed
Identity, Azure CLI, env-configured Service Principal) without any
crewAI-specific configuration. Customers whose deployment environment
already sets the standard azure-identity env vars get keyless auth for
free; the existing API-key path is unchanged.
Linear: FAC-40
* perf: defer MCP SDK import by fixing import path in agent/core.py
- Change 'from crewai.mcp import MCPServerConfig' to direct path
'from crewai.mcp.config import MCPServerConfig' to avoid triggering
mcp/__init__.py which eagerly loads the full mcp SDK (~300-400ms)
- Move MCPToolResolver import into get_mcp_tools() method body since
it's only used at runtime, not in type annotations
Saves ~200ms on 'import crewai' cold start.
* perf: lazy-load heavy MCP imports in mcp/__init__.py
MCPClient, MCPToolResolver, BaseTransport, and TransportType now use
__getattr__ lazy loading. These pull in the full mcp SDK (~400ms) but
are only needed at runtime when agents actually connect to MCP servers.
Lightweight config and filter types remain eagerly imported.
* perf: lazy-load all event type modules in events/__init__.py
Previously only agent_events were lazy-loaded; all other event type
modules (crew, flow, knowledge, llm, guardrail, logging, mcp, memory,
reasoning, skill, task, tool_usage) were eagerly imported at package
init time. Since events/__init__.py runs whenever ANY crewai.events.*
submodule is accessed, this loaded ~12 Pydantic model modules
unnecessarily.
Now all event types use the same __getattr__ lazy-loading pattern,
with TYPE_CHECKING imports preserved for IDE/type-checker support.
Saves ~550ms on 'import crewai' cold start.
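The `__getattr__` lazy-loading pattern used by both MCP and events commits is PEP 562 module-level attribute access. A self-contained sketch using a dynamically created module (a real package would put `__getattr__` in its `__init__.py` and call `importlib.import_module`):

```python
import types

lazy_module = types.ModuleType("lazy_events")
lazy_module.LOADED = []  # tracks which heavy names were resolved


def _getattr(name: str):
    # Resolve heavy names on first access instead of at import time.
    if name == "AgentEvent":
        lazy_module.LOADED.append(name)  # stands in for importlib.import_module
        cls = type("AgentEvent", (), {})
        setattr(lazy_module, name, cls)  # cache so later accesses skip __getattr__
        return cls
    raise AttributeError(name)


lazy_module.__getattr__ = _getattr
```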
* chore: remove UNKNOWN.egg-info from version control
* fix: add MCPToolResolver to TYPE_CHECKING imports
Fixes F821 (ruff) and name-defined (mypy) from lazy-loading the
MCP import. The type annotation on _mcp_resolver needs the name
available at type-check time.
* fix: bump lxml to >=5.4.0 for GHSA-vfmq-68hx-4jfw
lxml 5.3.2 has a known vulnerability. Bump to 5.4.0+ which
includes the fix (libxml2 2.13.8). The previous <5.4.0 pin
was for etree import issues that have since been resolved.
* fix: bump exclude-newer to 2026-04-22 for lxml 6.1.0 resolution
lxml 6.1.0 (GHSA fix) was released April 17 but the exclude-newer
date was set to April 17, missing it by timestamp. Bump to April 22.
* perf: add import time benchmark script
scripts/benchmark_import_time.py measures import crewai cold start
in fresh subprocesses. Supports --runs, --json (for CI), and
--threshold (fail if median exceeds N seconds).
The companion GitHub Action workflow needs to be pushed separately
(requires workflow scope).
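A hedged sketch of the subprocess-per-run timing the script performs; the internals are guessed from the description, and `json` stands in for `crewai` so the sketch runs without crewAI installed.

```python
import statistics
import subprocess
import sys


def benchmark_import(module: str = "json", runs: int = 3) -> float:
    """Median cold-start import time, measured in fresh interpreters."""
    timings = []
    for _ in range(runs):
        # A fresh subprocess per run avoids a warm sys.modules cache.
        code = (
            "import time; t0 = time.perf_counter(); "
            f"import {module}; print(time.perf_counter() - t0)"
        )
        out = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, check=True,
        )
        timings.append(float(out.stdout))
    return statistics.median(timings)
```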
* ci: add the import-time benchmark GitHub Action
* Potential fix for pull request finding 'CodeQL / Workflow does not contain permissions'
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
---------
Co-authored-by: Joao Moura <joaomdmoura@gmail.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* fix: merge execution metadata on duplicate batch initialization in TraceBatchManager
- Updated TraceBatchManager to merge execution metadata when a batch is initialized multiple times.
- Enhanced logging to reflect the merging of metadata during duplicate initialization.
- Added a test case to verify that execution metadata is correctly merged when initializing a batch after a lazy action.
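An illustrative sketch of the merge-on-duplicate-init behavior; the class internals here are invented, only the merge semantics follow the PR.

```python
class TraceBatchManager:
    def __init__(self):
        self._batches: dict[str, dict] = {}

    def initialize_batch(self, batch_id: str, execution_metadata: dict) -> dict:
        existing = self._batches.get(batch_id)
        if existing is not None:
            # Duplicate initialization: merge instead of replace, so
            # metadata added by a lazy action isn't lost.
            existing.update(execution_metadata)
            return existing
        self._batches[batch_id] = dict(execution_metadata)
        return self._batches[batch_id]
```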
* drop env events emitting from traces listener