Adds docs for DaytonaExecTool, DaytonaPythonTool, and DaytonaFileTool
introduced in PR #5530. Covers installation, lifecycle modes, examples,
and full parameter reference. Registered in docs.json nav for all
languages and versions.
Co-authored-by: iris-clawd <iris@crewai.com>
* docs: add Vertex AI workload identity setup guide
Walks SaaS customers through configuring CrewAI AMP to authenticate to
Google Vertex AI via GCP Workload Identity Federation, eliminating the
need for long-lived service account keys.
* docs: restrict Vertex WI guide to v1.14.3+ navigation
The guide requires `crewai>=1.14.3`, so registering it under older
version snapshots is misleading. Keep the entry only in the v1.14.3
English nav.
* docs: clarify crewai-vertex SA name is an example
* docs: add You.com MCP integration documentation for crewAI
Add documentation pages for integrating You.com's remote MCP server
with crewAI agents, covering web search, research, and content
extraction tools via the MCP protocol.
Pages added:
- Overview with DSL and MCPServerAdapter integration approaches
- you-search: web/news search with advanced filtering
- you-research: multi-source research with cited answers
- you-contents: full page content extraction
- Security considerations (prompt injection, API key management)
Co-authored-by: factory-droid[bot] <138933559+factory-droid-oss@users.noreply.github.com>
* docs: add You.com MCP search, research, and content extraction guides
Add two documentation pages for integrating You.com's remote MCP server
with crewAI agents:
- search-research/youai-search.mdx: you-search (web/news search)
and you-research (synthesized cited answers) via DSL or MCPServerAdapter.
Includes free tier support (100 queries/day, no API key).
- web-scraping/youai-contents.mdx: you-contents (full page content
extraction) via MCPServerAdapter with schema patching helpers.
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
* fix: add tool_filter to DSL search agent in youai-contents combo example
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
---------
Co-authored-by: factory-droid[bot] <138933559+factory-droid-oss@users.noreply.github.com>
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
* Add Tavily Research and Get Research tools
- Added the Tavily Research tool with docs to CrewAI
- Added the Tavily Get Research tool with docs to CrewAI
* Update `tavily-python` installation instructions and adjust version constraints
- Changed installation command from `pip install` to `uv add` for `tavily-python` in multiple documentation files.
- Updated version constraint for `tavily-python` in `pyproject.toml` from `>=0.7.14` to `~=0.7.14`.
- Modified the `exclude-newer` date in `uv.lock` to `2026-04-23T07:00:00Z`.
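For context on what the tightened constraint permits: `~=0.7.14` is PEP 440's compatible-release operator, expanding to `>=0.7.14, ==0.7.*`. A minimal stdlib sketch of that rule (simplified; real resolvers like uv and pip also handle pre-releases, epochs, and local versions):

```python
def compatible_release(candidate: str, pin: str = "0.7.14") -> bool:
    """Approximate PEP 440 '~=' semantics: >= pin, within the same release series."""
    cand = tuple(int(p) for p in candidate.split("."))
    base = tuple(int(p) for p in pin.split("."))
    # ~=0.7.14 expands to >=0.7.14, ==0.7.*
    return cand >= base and cand[: len(base) - 1] == base[: len(base) - 1]

assert compatible_release("0.7.15")      # patch bump: allowed
assert not compatible_release("0.8.0")   # minor bump: excluded
```

So the change keeps tavily-python on patch releases of 0.7.x instead of floating to any newer minor.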
* Add Tavily Research Tool documentation in multiple languages
- Introduced `TavilyResearchTool` documentation in English, Arabic, Korean, and Portuguese.
- Updated `docs.json` to include paths for the new documentation files.
- The `TavilyResearchTool` allows CrewAI agents to perform multi-step research tasks and generate cited reports using the Tavily Research API.
* Fix Tavily research CI failures
---------
Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
Co-authored-by: Evan Rimer <evan.rimer@tavily.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
GitHub doesn't expose repo secrets to pull_request events from forks, so
${{ secrets.CREWAI_TOOL_SPECS_APP_ID }} resolves to an empty string and
tibdex/github-app-token@v2 errors with "Input required and not supplied:
app_id". The job also tries to push commits to the PR branch, which it
can't do on a fork regardless. Skip it for cross-repo PRs and keep it
for same-repo PRs and manual dispatch.
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* fix(flow): add execution_id separate from state.id (COR-48)
When a consumer passes `id` in `kickoff(inputs=...)`, that value
overwrites the flow's state.id — which was also being used as the
execution tracking identity for telemetry, tracing, and external
correlation. Two kickoffs sharing the same consumer id ended up
with the same tracking id, breaking any downstream system that
joins on it.
Introduces `Flow.execution_id`: a stable per-run identifier stored
as a `PrivateAttr` on the `Flow` model, exposed via property +
setter. It defaults to a fresh `uuid4` per instance, is never
touched by `inputs["id"]`, and can be assigned by outer systems
that already have an execution identity (e.g. a task id).
Switches the `current_flow_id` / `current_flow_request_id`
ContextVars to seed from `execution_id` so OTel spans emitted by
`FlowTrackable` children correlate on the stable tracking key.
`state.id` keeps its existing override semantics for
persistence/restore — consumers resuming a persisted flow via
`inputs["id"]` work exactly as before.
Adds tests covering default uniqueness per instance, immunity to
consumer `inputs["id"]`, context-var propagation, absence from
serialized state, and parity for dict-state flows.
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Enables keyless Azure auth (OIDC Workload Identity Federation, Managed
Identity, Azure CLI, env-configured Service Principal) without any
crewAI-specific configuration. Customers whose deployment environment
already sets the standard azure-identity env vars get keyless auth for
free; the existing API-key path is unchanged.
Linear: FAC-40
* perf: defer MCP SDK import by fixing import path in agent/core.py
- Change 'from crewai.mcp import MCPServerConfig' to direct path
'from crewai.mcp.config import MCPServerConfig' to avoid triggering
mcp/__init__.py which eagerly loads the full mcp SDK (~300-400ms)
- Move MCPToolResolver import into get_mcp_tools() method body since
it's only used at runtime, not in type annotations
Saves ~200ms on 'import crewai' cold start.
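The pattern is generic: keep type-only names under `TYPE_CHECKING` and defer the runtime import into the function body. A stdlib sketch, using `decimal` as a stand-in for the heavy mcp SDK:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by type checkers; never imported at runtime.
    from decimal import Decimal


def to_decimal(value: str) -> "Decimal":
    # Deferred import: the module loads on first call, not at 'import' time.
    from decimal import Decimal
    return Decimal(value)
```

Annotations stay precise for IDEs and mypy, while the import cost is paid only when the code path actually runs.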
* perf: lazy-load heavy MCP imports in mcp/__init__.py
MCPClient, MCPToolResolver, BaseTransport, and TransportType now use
__getattr__ lazy loading. These pull in the full mcp SDK (~400ms) but
are only needed at runtime when agents actually connect to MCP servers.
Lightweight config and filter types remain eagerly imported.
* perf: lazy-load all event type modules in events/__init__.py
Previously only agent_events were lazy-loaded; all other event type
modules (crew, flow, knowledge, llm, guardrail, logging, mcp, memory,
reasoning, skill, task, tool_usage) were eagerly imported at package
init time. Since events/__init__.py runs whenever ANY crewai.events.*
submodule is accessed, this loaded ~12 Pydantic model modules
unnecessarily.
Now all event types use the same __getattr__ lazy-loading pattern,
with TYPE_CHECKING imports preserved for IDE/type-checker support.
Saves ~550ms on 'import crewai' cold start.
* chore: remove UNKNOWN.egg-info from version control
* fix: add MCPToolResolver to TYPE_CHECKING imports
Fixes F821 (ruff) and name-defined (mypy) from lazy-loading the
MCP import. The type annotation on _mcp_resolver needs the name
available at type-check time.
* fix: bump lxml to >=5.4.0 for GHSA-vfmq-68hx-4jfw
lxml 5.3.2 has a known vulnerability. Bump to 5.4.0+ which
includes the fix (libxml2 2.13.8). The previous <5.4.0 pin
was for etree import issues that have since been resolved.
* fix: bump exclude-newer to 2026-04-22 for lxml 6.1.0 resolution
lxml 6.1.0 (GHSA fix) was released April 17 but the exclude-newer
date was set to April 17, missing it by timestamp. Bump to April 22.
* perf: add import time benchmark script
scripts/benchmark_import_time.py measures import crewai cold start
in fresh subprocesses. Supports --runs, --json (for CI), and
--threshold (fail if median exceeds N seconds).
The companion GitHub Action workflow needs to be pushed separately
(requires workflow scope).
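A minimal version of such a benchmark looks like this (measuring `json` here as a stand-in; the actual script's flags and output format may differ):

```python
import statistics
import subprocess
import sys
import time


def median_import_time(module: str, runs: int = 5) -> float:
    """Time 'import <module>' in a fresh interpreter so module caches don't help."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([sys.executable, "-c", f"import {module}"], check=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)
```

Spawning a subprocess per run is the key detail: re-importing inside one interpreter hits `sys.modules` and measures nothing.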
* ci: add companion GitHub Actions workflow for the import benchmark
* Potential fix for pull request finding 'CodeQL / Workflow does not contain permissions'
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
---------
Co-authored-by: Joao Moura <joaomdmoura@gmail.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* fix: merge execution metadata on duplicate batch initialization in TraceBatchManager
- Updated TraceBatchManager to merge execution metadata when a batch is initialized multiple times.
- Enhanced logging to reflect the merging of metadata during duplicate initialization.
- Added a test case to verify that execution metadata is correctly merged when initializing a batch after a lazy action.
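A reduced sketch of the merge-on-duplicate-init behavior (a hypothetical shape; the real TraceBatchManager carries more state and logs the merge):

```python
class TraceBatchManager:
    def __init__(self) -> None:
        self._batches: dict[str, dict] = {}

    def initialize_batch(self, batch_id: str, execution_metadata: dict) -> dict:
        existing = self._batches.get(batch_id)
        if existing is not None:
            # Duplicate initialization: merge new metadata instead of dropping it.
            existing.update(execution_metadata)
        else:
            self._batches[batch_id] = dict(execution_metadata)
        return self._batches[batch_id]
```

Metadata supplied by a lazy action before the "real" initialization is preserved rather than overwritten.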
* drop env events emitting from traces listener
* docs: add Build with AI page for coding agents and AI assistants
* docs: add Build with AI section to README
* docs: trim README Build with AI section to skills install only
* docs: add skills.sh reference link for npx install
* docs: add coding agent logos to Build with AI page
---------
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
Task fields that store class references (output_pydantic, output_json,
response_model, converter_cls) caused PydanticSerializationError when
RuntimeState serialized Crew entities during checkpointing. Serialize
to model_json_schema() and hydrate back via create_model_from_schema.
The guardrail retry path passed a Pydantic object directly to
TaskOutput.raw (which expects a string), causing a ValidationError
when output_pydantic is set and a guardrail fails. Mirror the
BaseModel check from the initial execution path into both sync
and async retry loops.
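The class-reference fix can be illustrated with a toy model (`Report` and the flat-field rebuild are illustrative; crewAI's `create_model_from_schema` handles nested and constrained fields):

```python
from pydantic import BaseModel, create_model


class Report(BaseModel):
    title: str
    score: float


# Checkpoint: store the JSON schema (a plain dict) instead of the class object,
# which Pydantic cannot serialize.
schema = Report.model_json_schema()

# Restore: rebuild a stand-in model from the schema (flat string/number fields only).
_JSON_TYPES = {"string": str, "number": float, "integer": int, "boolean": bool}
fields = {
    name: (_JSON_TYPES[spec["type"]], ...)
    for name, spec in schema["properties"].items()
}
Restored = create_model(schema.get("title", "Restored"), **fields)
```

The round-tripped class validates the same flat payloads, which is enough for checkpoint/resume to keep working.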
Closes #5544 (part 1)
* feat: add Daytona sandbox tools for enhanced functionality
- Introduced DaytonaBaseTool as a shared base for tools interacting with Daytona sandboxes.
- Added DaytonaExecTool for executing shell commands within a sandbox.
- Implemented DaytonaFileTool for managing files (read, write, delete, etc.) in a sandbox.
- Created DaytonaPythonTool for running Python code in a sandbox environment.
- Updated pyproject.toml to include Daytona as a dependency.
* chore: update tool specifications
* refactor: enhance error handling and logging in Daytona tools
- Added logging for best-effort cleanup failures in DaytonaBaseTool and DaytonaFileTool to aid in debugging.
- Improved error message for ImportError in DaytonaPythonTool to provide clearer guidance on SDK compatibility issues.
* style: lint fixes
* refactor: address review comments
* chore: pin Daytona SDK version
* feat: support append mode in DaytonaFileTool
* chore: update tool specifications
---------
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Gemini thinking models (2.5+, 3.x) require thought_signature on
functionCall parts when sent back in conversation history. The streaming
path was extracting only name/args into plain dicts, losing the
signature. Return raw Part objects (matching the non-streaming path)
so the executor preserves them via raw_tool_call_parts.
Add fork classmethod, _restore_runtime, and _restore_event_scope
to BaseAgent. Fix from_checkpoint to set runtime state on the
event bus and restore event scopes. Store kickoff event ID across
checkpoints to skip re-emission on resume. Handle agent entity
type in checkpoint CLI and TUI.
The test_older_than tests in both JSON and SQLite prune suites used
hardcoded 2026-04-17 timestamps for the 'new' checkpoint. Once that
date passes, the checkpoint is older than 1 day and gets pruned along
with the 'old' one, causing assert count >= 1 to fail (count=0).
Use 2099-01-01 for the 'new' checkpoint so tests remain stable.
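The failure mode reduces to a simple age predicate; with a far-future fixed date, the 'new' checkpoint stays on the right side of the cutoff regardless of when the suite runs (a sketch; the real prune logic lives in the checkpoint stores):

```python
from datetime import datetime, timedelta, timezone


def is_prunable(created_at: datetime, now: datetime,
                max_age: timedelta = timedelta(days=1)) -> bool:
    """True if the checkpoint is older than max_age at time `now`."""
    return now - created_at > max_age


now = datetime.now(timezone.utc)
old = now - timedelta(days=2)                    # always pruned
new = datetime(2099, 1, 1, tzinfo=timezone.utc)  # never pruned (until 2099)

assert is_prunable(old, now)
assert not is_prunable(new, now)
```

A hardcoded 2026-04-17 'new' timestamp satisfies the second assertion only until that date passes, which is exactly how the tests went stale.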
Co-authored-by: Joao Moura <joaomdmoura@gmail.com>
- Move _update_all_versions inside each dry-run branch so output order matches actual execution
- Switch to main before deleting the stale local branch in create_or_reset_branch