CodeQL flagged the `"test.openai.azure.com" in llm.endpoint` substring
check as incomplete URL sanitization — the substring could match in
an arbitrary position. Parse the URL and assert against
`urlparse(...).hostname` instead, which is the precise check we want.
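As a minimal illustration of why the substring form is flagged (the values and the spoofed endpoint are made up):

```python
from urllib.parse import urlparse

endpoint = "https://evil.example.com/?marker=test.openai.azure.com"

# Substring check: matches even when the marker only appears in the query
# string or path, which is exactly what CodeQL calls incomplete sanitization.
assert "test.openai.azure.com" in endpoint

# Hostname check: parse the URL and compare only the network location,
# so the spoofed endpoint above no longer passes.
assert urlparse(endpoint).hostname != "test.openai.azure.com"
```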
`_normalize_gemini_fields` captures `GOOGLE_API_KEY` / `GEMINI_API_KEY`
/ `GOOGLE_CLOUD_PROJECT` at construction time, so an LLM constructed
before deployment env vars are set would freeze `self.api_key = None`
and the lazy `_get_sync_client` build would then always auth-fail.
Re-read the env vars inside `_get_sync_client` when `self._client` is
None and the corresponding field is still unset, matching the pattern
used for the other native providers.
Add a regression test that constructs `GeminiCompletion` with no env
vars set, patches them in afterwards, and asserts the lazy build
succeeds and writes the resolved key back onto the LLM.
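A rough sketch of the re-read, assuming the attribute names above (`_client`, `api_key`) and an illustrative `_build_client` helper:

```python
import os

def _get_sync_client(self):
    """Build the Gemini client on first use, re-reading env vars that were
    not set when the LLM was constructed (sketch, not the actual code)."""
    if self._client is None:
        if self.api_key is None:
            self.api_key = os.getenv("GEMINI_API_KEY") or os.getenv("GOOGLE_API_KEY")
        if getattr(self, "project", None) is None:
            self.project = os.getenv("GOOGLE_CLOUD_PROJECT")
        if self.api_key is None and getattr(self, "project", None) is None:
            # The descriptive error now surfaces at first real use, not at construction.
            raise ValueError("GEMINI_API_KEY or GOOGLE_API_KEY is required")
        self._client = self._build_client()  # hypothetical build helper
    return self._client
```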
`_prepare_completion_params` uses `is_azure_openai_endpoint` to decide
whether to include the `model` parameter in requests — Azure OpenAI
endpoints embed the deployment name in the URL and reject a `model`
field. When the endpoint was resolved lazily from env vars, the flag
stayed at its pre-resolve `False` value, causing every lazily-inited
Azure OpenAI request to include `model` and fail.
Factor the classification into `_is_azure_openai_endpoint` and call
it from both `_normalize_azure_fields` and `_make_client_kwargs`.
Extend the lazy-build regression test to assert the flag flips to
`True` once the endpoint is resolved.
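A sketch of the shared classifier; only the helper name comes from the text, the hostname suffix and signature are assumptions:

```python
from urllib.parse import urlparse

def _is_azure_openai_endpoint(endpoint: str | None) -> bool:
    """True when the endpoint hosts an Azure OpenAI deployment, i.e. the
    deployment name lives in the URL and requests must omit `model`."""
    if not endpoint:
        return False
    hostname = urlparse(endpoint).hostname or ""
    return hostname.endswith(".openai.azure.com")
```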
Azure's `_normalize_azure_fields` captures env vars at construction
time. When `LLM(model="azure/...")` is instantiated before deployment
env vars are set, `self.api_key` / `self.endpoint` freeze as `None`
and the lazy client builder then always raises — defeating the point
of deferred init for Azure.
Re-read `AZURE_API_KEY` / `AZURE_ENDPOINT` (and friends) inside
`_make_client_kwargs` when the fields are still unset, matching
OpenAI's `_get_client_params` pattern, and run the endpoint validator
on any env-provided value so the same normalization applies.
Add a regression test that constructs the LLM with no env vars set,
then patches them in afterwards and asserts `_get_sync_client()`
successfully builds a client and writes the resolved values back
onto the LLM instance.
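A pytest-style sketch of that regression test; the env var and attribute names follow the text above, while the endpoint value and any extra Azure settings are illustrative:

```python
from crewai import LLM

def test_azure_lazy_client_resolves_env_after_construction(monkeypatch):
    monkeypatch.delenv("AZURE_API_KEY", raising=False)
    monkeypatch.delenv("AZURE_ENDPOINT", raising=False)

    # Construction succeeds even though no credentials are set yet.
    llm = LLM(model="azure/gpt-4o")

    # Simulate the deployment exporting credentials after import time.
    monkeypatch.setenv("AZURE_API_KEY", "test-key")
    monkeypatch.setenv("AZURE_ENDPOINT", "https://example.openai.azure.com")

    assert llm._get_sync_client() is not None
    # The lazy build writes the resolved values back onto the instance.
    assert llm.api_key == "test-key"
    assert llm.endpoint == "https://example.openai.azure.com"
```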
`LLM(model="gpt-4o")` no longer raises at construction when
`OPENAI_API_KEY` is missing — the descriptive error now surfaces when
the client is actually built. Update the test to assert that contract:
`create_llm` succeeds, and `llm._get_sync_client()` raises.
`deploy_push` gained a `--skip-validate` flag that is forwarded to
`DeployCommand.deploy()` as `skip_validate` (default `False`).
Update the two CLI tests that pin the exact call args.
The lazy-init refactor rewrote `aclose` to access the async client via
`_get_async_client()`, which forces lazy construction. When an
`AzureCompletion` is instantiated without credentials (the whole point
of deferred init), that call raises `ValueError: "Azure API key is
required"` during cleanup — including via `async with` / `__aexit__`.
Access the cached `_async_client` attribute directly so cleanup on an
uninitialized LLM is a harmless no-op. Add a regression test that
enters and exits an `async with` block against a credentials-less
`AzureCompletion`.
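A sketch of the cleanup path, assuming `_async_client` is the cached attribute and the underlying SDK client exposes an async `close()`:

```python
async def aclose(self) -> None:
    """Release the async client only if one was ever built; a
    credentials-less LLM cleans up as a harmless no-op (sketch)."""
    client = getattr(self, "_async_client", None)
    if client is not None:
        await client.close()
        self._async_client = None
```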
Adds a new `crewai deploy validate` command that checks a project
locally against the most common categories of deploy-time failures,
so users don't burn attempts on fixable project-structure problems.
`crewai deploy create` and `crewai deploy push` now run the same
checks automatically and abort on errors; `--skip-validate` opts out.
Checks (errors block, warnings print only):
1. pyproject.toml present with `[project].name`
2. lockfile (uv.lock or poetry.lock) present and not stale
3. src/<package>/ resolves, rejecting empty names and .egg-info dirs
4. crew.py, config/agents.yaml, config/tasks.yaml for standard crews
5. main.py for flow projects
6. hatchling wheel target resolves
7. crew/flow module imports cleanly in a `uv run` subprocess, with
classification of common failures (missing provider extras,
missing API keys at import, stale crewai pins, pydantic errors)
8. env vars referenced in source vs .env (warning only)
9. crewai lockfile pin vs a known-bad cutoff (warning only)
Each finding has a stable code and a structured title/detail/hint so
downstream tooling and tests can pin behavior. 33 tests cover the
checks 1:1 against the failure patterns observed in practice.
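A sketch of what a structured finding and one check could look like; the field names mirror the description above (stable code plus title/detail/hint), not the actual implementation:

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass(frozen=True)
class Finding:
    code: str            # stable identifier tests can pin, e.g. "missing-lockfile"
    severity: str        # "error" blocks deploy, "warning" only prints
    title: str
    detail: str
    hint: str | None = None

def check_lockfile(project_dir: Path) -> list[Finding]:
    """Check 2: a lockfile must exist next to pyproject.toml (sketch)."""
    if not any((project_dir / name).exists() for name in ("uv.lock", "poetry.lock")):
        return [Finding(
            code="missing-lockfile",
            severity="error",
            title="No lockfile found",
            detail="Neither uv.lock nor poetry.lock is present in the project root.",
            hint="Run `uv lock` and commit the result.",
        )]
    return []
```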
All native LLM providers built their SDK clients inside
`@model_validator(mode="after")`, which required the API key at
`LLM(...)` construction time. Instantiating an LLM at module scope
(e.g. `chat_llm=LLM(model="openai/gpt-4o-mini")` on a `@crew` method)
crashed during downstream crew-metadata extraction with a confusing
`ImportError: Error importing native provider: 1 validation error...`
before the process env vars were ever consulted.
Wrap eager client construction in a try/except in each provider and
add `_get_sync_client` / `_get_async_client` methods that build on
first use. OpenAI call sites are routed through the lazy getters so
calls made without eager construction still work. The descriptive
"X_API_KEY is required" errors are re-raised from the lazy path at
first real call.
Update two Azure tests that asserted the old eager-error contract to
assert the new lazy-error contract.
Substring checks like `'0.1' not in json_str` collided with timestamps
such as `2026-04-10T13:00:50.140557` on CI. Round-trip through
`model_validate_json` to verify structurally that the embedding field
is absent from the serialized output.
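A sketch of the structural assertion; `MemoryRecord` fields other than `embedding` are assumed for illustration:

```python
import json

def test_embedding_absent_from_serialized_output():
    record = MemoryRecord(content="hello", embedding=[0.1] * 1536)
    payload = json.loads(record.model_dump_json())

    # Structural check instead of substring matching: '0.1' can legitimately
    # appear in timestamps such as 2026-04-10T13:00:50.140557.
    assert "embedding" not in payload

    # Round-tripping through the model still validates without the field.
    MemoryRecord.model_validate_json(record.model_dump_json())
```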
- Rewrite TUI with Tree widget showing branch/fork lineage
- Add Resume and Fork buttons in detail panel with Collapsible entities
- Show branch and parent_id in detail panel and CLI info output
- Auto-detect .checkpoints.db when default dir missing
- Append .db to location for SqliteProvider when no extension set
- Fix RuntimeState.from_checkpoint not setting provider/location
- Fork now writes initial checkpoint on new branch
- Add from_checkpoint, fork, and CLI docs to checkpointing.mdx
Accept CheckpointConfig on Crew and Flow kickoff/kickoff_async/akickoff.
When restore_from is set, the entity resumes from that checkpoint.
When only config fields are set, checkpointing is enabled for the run.
Adds restore_from field (Path | str | None) to CheckpointConfig.
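Illustrative usage only; the import path, the kickoff keyword, and the non-`restore_from` fields are assumptions:

```python
from crewai.checkpointing import CheckpointConfig  # hypothetical import path

crew = build_crew()  # hypothetical factory

# Enable checkpointing for this run (config fields set, no restore_from).
crew.kickoff(checkpoint=CheckpointConfig(location="run.db"))

# Resume from a previously written checkpoint.
crew.kickoff(checkpoint=CheckpointConfig(restore_from="run.db"))
```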
Write the crewAI package version into every checkpoint blob. On restore,
run version-based migrations so older checkpoints can be transformed
forward to the current format. Adds crewai.utilities.version module.
* refactor: remove CodeInterpreterTool and deprecate code execution params
CodeInterpreterTool has been removed. The allow_code_execution and
code_execution_mode parameters on Agent are deprecated and will be
removed in v2.0. Use dedicated sandbox services (E2B, Modal, etc.)
for code execution needs.
Changes:
- Remove CodeInterpreterTool from crewai-tools (tool, Dockerfile, tests, imports)
- Remove docker dependency from crewai-tools
- Deprecate allow_code_execution and code_execution_mode on Agent
- get_code_execution_tools() returns empty list with deprecation warning
- _validate_docker_installation() is a no-op with deprecation warning
- Bedrock CodeInterpreter (AWS hosted) and OpenAI code_interpreter are NOT affected
* fix: remove empty code_interpreter imports and unused stdlib imports
- Remove empty `from code_interpreter_tool import ()` blocks in both
crewai_tools/__init__.py and tools/__init__.py that caused SyntaxError
after CodeInterpreterTool was removed
- Remove unused `shutil` and `subprocess` imports from agent/core.py
left over from the code execution params deprecation
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix: remove redundant _validate_docker_installation call and fix list type annotation
- Drop the _validate_docker_installation() call inside the allow_code_execution
  block; it emitted a second DeprecationWarning identical to the one raised
  just above it, so the warning fired twice.
- Annotate get_code_execution_tools() return type as list[Any] to satisfy mypy
(bare `list` fails the type-arg check introduced by this branch).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* ci: retrigger
* fix: update test_crew.py to remove CodeInterpreterTool references
CodeInterpreterTool was removed from crewai_tools. Update tests to
reflect that get_code_execution_tools() now returns an empty list.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* chore: update tool specifications
---------
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
- Pass RuntimeState through the event bus and enable entity auto-registration
- Introduce checkpointing API:
- .checkpoint(), .from_checkpoint(), and async checkpoint support
- Provider-based storage with BaseProvider and JsonProvider
- Mid-task resume and kickoff() integration
- Add EventRecord tracking and full event serialization with subtype preservation
- Enable checkpoint fidelity via llm_type and executor_type discriminators
- Refactor executor architecture:
- Convert executors, tools, prompts, and TokenProcess to BaseModel
- Introduce proper base classes with typed fields (CrewAgentExecutorMixin, BaseAgentExecutor)
- Add generic from_checkpoint with full LLM serialization
- Support executor back-references and resume-safe initialization
- Refactor runtime state system:
- Move RuntimeState into state/ module with async checkpoint support
- Add entity serialization improvements and JSON-safe round-tripping
- Implement event scope tracking and replay for accurate resume behavior
- Improve tool and schema handling:
- Make BaseTool fully serializable with JSON round-trip support
- Serialize args_schema via JSON schema and dynamically reconstruct models
- Add automatic subclass restoration via tool_type discriminator
- Enhance Flow checkpointing:
- Support restoring execution state and subclass-aware deserialization
- Performance improvements:
- Cache handler signature inspection
- Optimize event emission and metadata preparation
- General cleanup:
- Remove dead checkpoint payload structures
- Simplify entity registration and serialization logic
* fix: exclude embedding vector from MemoryRecord serialization
MemoryRecord.embedding (1536 floats for OpenAI embeddings) was included
in model_dump()/JSON serialization and repr. When recall results flow
to agents or get logged, these vectors burn tokens for zero value —
agents never need the raw embedding.
Added exclude=True and repr=False to the embedding field. The storage
layer accesses record.embedding directly (not via model_dump), so
persistence is unaffected.
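The field change, roughly (the real model has more fields than shown):

```python
from pydantic import BaseModel, Field

class MemoryRecord(BaseModel):
    content: str
    # exclude=True drops the vector from model_dump()/model_dump_json();
    # repr=False keeps 1536 floats out of logs. Direct attribute access
    # (record.embedding) still works, so the storage layer is unaffected.
    embedding: list[float] | None = Field(default=None, exclude=True, repr=False)
```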
* test: validate embedding excluded from serialization
Two tests:
1. MemoryRecord — model_dump, model_dump_json, and repr all exclude
embedding. Direct attribute access still works for storage layer.
2. MemoryMatch — nested record serialization also excludes embedding.
* fix: bump litellm to >=1.83.0 to address CVE-2026-35030
Bump litellm from <=1.82.6 to >=1.83.0 to fix JWT auth bypass via
OIDC cache key collision (CVE-2026-35030). Also widen devtools openai
pin from ~=1.83.0 to >=1.83.0,<3 to resolve the version conflict
(litellm 1.83.0 requires openai>=2.8.0).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: resolve mypy errors from litellm bump
- Remove unused type: ignore[import-untyped] on instructor import
- Remove all unused type: ignore[union-attr] comments (litellm types fixed)
- Add hasattr guard for tool_call.function — new litellm adds
ChatCompletionMessageCustomToolCall to the union which lacks .function
* fix: tighten litellm pin to ~=1.83.0 (patch-only bumps)
>=1.83.0,<2 is too wide — litellm has had breaking changes between
minors. ~=1.83.0 means >=1.83.0,<1.84.0 — gets CVE patches but won't
pull in breaking minor releases.
* ci: bump uv from 0.8.4 to 0.11.3
* fix: resolve mypy errors in openai completion from 2.x type changes
Use isinstance checks with concrete openai response types instead of
string comparisons for proper type narrowing. Update code interpreter
handling for outputs/OutputImage API changes in openai 2.x.
* fix: pre-cache tiktoken encoding before VCR intercepts requests
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Alex <alex@crewai.com>
Co-authored-by: Greyson LaLonde <greyson@crewai.com>
* fix: add tool repository credentials to uv build in tool publish
When running 'uv build' during tool publish, the build process now has access
to tool repository credentials. This mirrors the pattern used in run_crew.py,
ensuring private package indexes are properly authenticated during the build.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* fix: add env kwarg to subprocess.run mock assertions in publish tests
The actual code passes env= to subprocess.run but the test assertions
were missing this parameter, causing assertion failures.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Introduce the A2UI extension for declarative UI generation, including
support for both v0.8 and v0.9 protocol specs. Add A2UI content type
integration in A2A utils, along with schema definitions, catalog models,
and client extension improvements.
Enhance models with explicit defaults, field descriptions, and ConfigDict,
and improve typing and instance state handling across the extension.
Add schema conformance tests and align test structure.
Add and register A2UI documentation, including extension guide and
navigation updates.
* perf: reduce framework overhead for NVIDIA benchmarks
- Lazy initialize event bus thread pool and event loop on first emit()
instead of at import time (~200ms savings)
- Skip trace listener registration (50+ handlers) when tracing disabled
- Skip trace prompt in non-interactive contexts (isatty check) to avoid
20s timeout in CI/Docker/API servers
- Skip flush() when no events were emitted (avoids 30s timeout waste)
- Add _has_pending_events flag to track if any events were emitted
- Add _executor_initialized flag for lazy init double-checked locking
All existing behavior preserved when tracing IS enabled. No public APIs
changed - only conditional guards added.
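A sketch of the lazy, double-checked executor init plus the pending-events guard; class and attribute names are illustrative:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class EventBusSketch:
    def __init__(self) -> None:
        self._executor: ThreadPoolExecutor | None = None
        self._executor_initialized = False
        self._init_lock = threading.Lock()
        self._has_pending_events = False

    def _ensure_executor(self) -> ThreadPoolExecutor:
        # Fast path avoids the lock once initialized; the second check under
        # the lock prevents double construction (double-checked locking).
        if not self._executor_initialized:
            with self._init_lock:
                if not self._executor_initialized:
                    self._executor = ThreadPoolExecutor(max_workers=4)
                    self._executor_initialized = True
        return self._executor

    def emit(self, event) -> None:
        self._has_pending_events = True
        self._ensure_executor().submit(self._dispatch, event)

    def _dispatch(self, event) -> None:
        ...

    def flush(self) -> None:
        # Skip the potentially long wait when nothing was ever emitted.
        if not self._has_pending_events or self._executor is None:
            return
        self._executor.shutdown(wait=True)
```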
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* fix: address PR review comments — tracing override, executor init order, stdin guard, unused import
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* style: fix ruff formatting in trace_listener.py and utils.py
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Iris Clawd <iris@crewai.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* refactor: replace InstanceOf[T] with plain type annotations
InstanceOf[] is a Pydantic validation wrapper that adds runtime
isinstance checks. Plain type annotations are sufficient here since
the models already use arbitrary_types_allowed or the types are
BaseModel subclasses.
* refactor: convert BaseKnowledgeStorage to BaseModel
* fix: update tests for BaseKnowledgeStorage BaseModel conversion
* fix: correct embedder config structure in test
This commit cleans up the class by removing methods that are no longer
needed, streamlining the code and improving maintainability.
GPT-5.x models reject the `stop` parameter at the API level with "Unsupported parameter: 'stop' is not supported with this model". This breaks CrewAI executions when routing through LiteLLM (e.g. via
OpenAI-compatible gateways like Asimov), because the LiteLLM fallback path always includes `stop` in the API request params.
The native OpenAI provider was unaffected because it never sends `stop` to the API — it applies stop words client-side via `_apply_stop_words()`. However, when the request goes through LiteLLM (custom endpoints, proxy gateways),
`stop` is sent as an API parameter and GPT-5.x rejects it.
Additionally, the existing retry logic that catches this error only matched the OpenAI API error format ("Unsupported parameter") but missed
LiteLLM's own pre-validation error format ("does not support parameters"), so the self-healing retry never triggered for LiteLLM-routed calls.
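A sketch of a retry that recognizes both error formats; the function shape is illustrative, only the two marker strings come from the description above:

```python
UNSUPPORTED_STOP_MARKERS = (
    "Unsupported parameter",        # OpenAI API error format
    "does not support parameters",  # LiteLLM pre-validation error format
)

def call_with_stop_retry(completion, params: dict):
    """Call once; if the provider rejects `stop`, retry without it (sketch)."""
    try:
        return completion(**params)
    except Exception as exc:
        message = str(exc)
        if "stop" in params and any(m in message for m in UNSUPPORTED_STOP_MARKERS):
            return completion(**{k: v for k, v in params.items() if k != "stop"})
        raise
```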
* Exporting tool's metadata to AMP - initial work
* Fix payload (nest under `tools` key)
* Remove debug message + code simplification
* Printing out detected tools
* Extract module name
* fix: address PR review feedback for tool metadata extraction
- Use sha256 instead of md5 for module name hashing (lint S324)
- Filter required list to match filtered properties in JSON schema
* fix: Use sha256 instead of md5 for module name hashing (lint S324)
- Add missing mocks to metadata extraction failure test
* style: fix ruff formatting
* fix: resolve mypy type errors in utils.py
* fix: address bot review feedback on tool metadata
- Use `is not None` instead of truthiness check so empty tools list
is sent to the API rather than being silently dropped as None
- Strip __init__ suffix from module path for tools in __init__.py files
- Extend _unwrap_schema to handle function-before, function-wrap, and
definitions wrapper types
* fix: capture env_vars declared with Field(default_factory=...)
When env_vars uses default_factory, pydantic stores a callable in the
schema instead of a static default value. Fall back to calling the
factory when no static default is present.
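A sketch of that fallback against pydantic's field metadata (the helper name is illustrative):

```python
from pydantic.fields import FieldInfo
from pydantic_core import PydanticUndefined

def resolve_field_default(field: FieldInfo):
    """Prefer a static default; otherwise call default_factory, as with
    env_vars=Field(default_factory=list)."""
    if field.default is not PydanticUndefined:
        return field.default
    if field.default_factory is not None:
        return field.default_factory()
    return None
```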
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* fix: preserve method return value as flow output for @human_feedback with emit
When a @human_feedback decorated method with emit= is the final method in a
flow (no downstream listeners triggered), the flow's final output was
incorrectly set to the collapsed outcome string (e.g., 'approved') instead
of the method's actual return value (e.g., a state dict).
Root cause: _process_feedback() returns the collapsed_outcome string when
emit is set, and this string was being stored as the method's result in
_method_outputs.
The fix:
1. In human_feedback.py: After _process_feedback, stash the real method_output
on the flow instance as _human_feedback_method_output when emit is set.
2. In flow.py: After appending a method result to _method_outputs, check if
_human_feedback_method_output is set. If so, replace the last entry with
the stashed real output and clear the stash.
This ensures:
- Routing still works correctly (collapsed outcome used for @listen matching)
- The flow's final result is the actual method return value
- If downstream listeners execute, their results become the final output
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* style: ruff format flow.py
* fix: use per-method dict stash for concurrency safety and None returns
Addresses review comments:
- Replace single flow-level slot with dict keyed by method name,
safe under concurrent @human_feedback+emit execution
- Dict key presence (not value) indicates stashed output,
correctly preserving None return values
- Added test for None return value preservation
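A self-contained sketch of the stash mechanics (names are illustrative, not the actual Flow attributes):

```python
class FlowStashSketch:
    def __init__(self) -> None:
        self._method_outputs: list[object] = []
        self._hf_stash: dict[str, object] = {}  # keyed by method name

    def after_human_feedback(self, method_name: str, real_output: object,
                             collapsed_outcome: str) -> str:
        # The collapsed outcome keeps @listen routing working; the real
        # return value is stashed so it can become the flow's final output.
        self._hf_stash[method_name] = real_output
        return collapsed_outcome

    def record_result(self, method_name: str, result: object) -> None:
        self._method_outputs.append(result)
        # Key presence, not truthiness, decides whether to swap, so a
        # method that returned None is still preserved.
        if method_name in self._hf_stash:
            self._method_outputs[-1] = self._hf_stash.pop(method_name)
```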
---------
Co-authored-by: Joao Moura <joao@crewai.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
- Delegate supports_function_calling() to parent (handles o1 models via OpenRouter)
- Guard empty env vars in base_url resolution
- Fix misleading comment about model validation rules
- Remove unused MagicMock import
- Use 'is not None' for env var restoration in tests
Co-authored-by: Joao Moura <joao@crewai.com>
## Summary
### Core fixes
<details>
<summary><b>Fix silent 404 cascade on trace event send</b></summary>
When `_initialize_backend_batch` failed, `trace_batch_id` was left populated with a client-generated UUID never registered server-side. All subsequent event sends hit a non-existent batch endpoint and returned 404. Now all three failure paths (None response, non-2xx status, exception) clear `trace_batch_id`.
</details>
<details>
<summary><b>Fix first-time deferred batch init silently skipped</b></summary>
First-time users have `is_tracing_enabled_in_context() = False` by design. This caused `_initialize_backend_batch` to return early without creating the batch, and `finalize_batch` to skip finalization (same guard). The first-time handler now passes `skip_context_check=True` to bypass both guards and calls `_finalize_backend_batch` directly. It also gates `backend_initialized` on actual success, checks the `_send_events_to_backend` return status (marking the batch as failed on 500), captures the event count, duration, and batch ID before they are consumed by send/finalize, and cleans up all singleton state via `_reset_batch_state()` on every exit path.
</details>
<details>
<summary><b>Sync <code>is_current_batch_ephemeral</code> on batch creation success</b></summary>
When the batch is successfully created on the server, `is_current_batch_ephemeral` is now synced with the actual `use_ephemeral` value used. This prevents endpoint mismatches where the batch was created on one endpoint but events and finalization were sent to a different one, resulting in 404.
</details>
<details>
<summary><b>Route <code>mark_trace_batch_as_failed</code> to correct endpoint for ephemeral batches</b></summary>
`mark_trace_batch_as_failed` always routed to the non-ephemeral endpoint (`/tracing/batches/{id}`), causing 404s when called on ephemeral batches — the same class of endpoint mismatch this PR aims to fix. Added `mark_ephemeral_trace_batch_as_failed` to `PlusAPI` and a `_mark_batch_as_failed` helper on `TraceBatchManager` that routes based on `is_current_batch_ephemeral`.
</details>
<details>
<summary><b>Gate <code>backend_initialized</code> on actual init success (non-first-time path)</b></summary>
On the non-first-time path, `backend_initialized` was set to `True` unconditionally after `_initialize_backend_batch` returned. With the new failure-path cleanup that clears `trace_batch_id`, this created an inconsistent state: `backend_initialized=True` + `trace_batch_id=None`. Now set via `self.trace_batch_id is not None`.
</details>
### Resilience improvements
<details>
<summary><b>Retry transient failures on batch creation</b></summary>
`_initialize_backend_batch` now retries up to 2 times with 200ms backoff on transient failures (None response, 5xx, network errors). Non-transient 4xx errors are not retried. The short backoff minimizes lock hold time on the non-first-time path where `_batch_ready_cv` is held.
</details>
<details>
<summary><b>Fall back to ephemeral on server auth rejection</b></summary>
When the non-ephemeral endpoint returns 401/403 (expired token, revoked credentials, key rotation), the client automatically switches to ephemeral tracing instead of losing traces. The fallback forwards `skip_context_check` and is guarded against infinite recursion — if ephemeral also fails, `trace_batch_id` is cleared normally.
</details>
<details>
<summary><b>Fix action-event race initializing batch as non-ephemeral</b></summary>
`_handle_action_event` called `batch_manager.initialize_batch()` directly, defaulting `use_ephemeral=False`. When a `DefaultEnvEvent` or `LLMCallStartedEvent` fired before `CrewKickoffStartedEvent` in the thread pool, the batch was locked in as non-ephemeral. Now routes through `_initialize_batch()` which computes `use_ephemeral` from `_check_authenticated()`.
</details>
<details>
<summary><b>Guard <code>_mark_batch_as_failed</code> against cascading network errors</b></summary>
When `_finalize_backend_batch` failed with a network error (e.g. `[Errno 54] Connection reset by peer`), the exception handler called `_mark_batch_as_failed` — which also makes an HTTP request on the same dead connection. That second failure was unhandled. Now wrapped in a try/except so it logs at debug level instead of propagating.
</details>
<details>
<summary><b>Design decision: first-time users always use ephemeral</b></summary>
First-time trace collection **always creates ephemeral batches**, regardless of authentication status. This is intentional:
1. **The first-time handler UX is built around ephemeral traces** — it displays an access code, a 24-hour expiry link, and opens the browser to the ephemeral trace viewer. Non-ephemeral batches don't produce these artifacts, so the handler would fall through to the "Local Traces Collected" fallback even when traces were successfully sent.
2. **The server handles account linking automatically** — `LinkEphemeralTracesJob` runs on user signup and migrates ephemeral traces to permanent records. Logged-in users can access their traces via their dashboard regardless.
3. **Checking auth during batch setup broke event collection** — moving `_check_authenticated()` into `_initialize_batch` caused the batch initialization to fail silently during the flow/crew start event handler, preventing all event collection. Keeping the first-time path fast and side-effect-free preserves event collection.
The auth check is deferred to the non-first-time path (second run onwards), where `is_tracing_enabled_in_context()` is `True` and the normal tracing pipeline handles everything — including the 401/403 ephemeral fallback.
</details>
### Manual tests
<details>
<summary><b>Matrix</b></summary>
| Scenario | First run | Second run |
|----------|-----------|------------|
| Logged out, fresh `.crewai_user.json` | Ephemeral trace created, URL returned | Ephemeral trace created, URL returned |
| Logged in, fresh `.crewai_user.json` | Ephemeral trace created, URL returned | Trace batch finalized, URL returned |
| Flow execution | Tested with `poem_flow` | Tested with `poem_flow` |
| Crew execution | Tested with `hitl_crew` | Tested with `hitl_crew` |
</details>
When a method has both @listen and @human_feedback(emit=[...]),
the FlowMeta metaclass registered it as a router but only used
get_possible_return_constants() to detect paths. This fails for
@human_feedback methods since the paths come from the decorator's
emit param, not from return statements in the source code.
Now checks __router_paths__ first (set by @human_feedback), then
falls back to source code analysis for plain @router methods.
This was causing missing edges in the flow serializer output —
e.g. the whitepaper generator's review_infographic -> handle_cancelled,
send_slack_notification, classify_feedback edges were all missing.
Adds test: @listen + @human_feedback(emit=[...]) generates correct
router edges in serialized output.
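Roughly, the detection order now looks like this (the helper name is illustrative; `get_possible_return_constants` is the existing source-analysis fallback):

```python
def resolve_router_paths(method) -> list[str] | None:
    # @human_feedback(emit=[...]) sets __router_paths__ explicitly; plain
    # @router methods fall back to scanning return statements.
    paths = getattr(method, "__router_paths__", None)
    if paths:
        return list(paths)
    return get_possible_return_constants(method)
```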
Co-authored-by: Joao Moura <joao@crewai.com>
* feat: add native OpenAI-compatible providers (OpenRouter, DeepSeek, Ollama, vLLM, Cerebras, Dashscope)
Add a data-driven OpenAI-compatible provider system that enables
native support for multiple third-party APIs that implement the
OpenAI API specification.
New providers:
- OpenRouter: 500+ models via openrouter.ai
- DeepSeek: deepseek-chat, deepseek-coder, deepseek-reasoner
- Ollama: local models (llama3, mistral, codellama, etc.)
- hosted_vllm: self-hosted vLLM servers
- Cerebras: ultra-fast inference
- Dashscope: Alibaba Qwen models (qwen-turbo, qwen-max, etc.)
Architecture:
- Single OpenAICompatibleCompletion class extends OpenAICompletion
- ProviderConfig dataclass stores per-provider settings
- Registry dict makes adding new providers a single config entry
- Handles provider-specific quirks (OpenRouter headers, Ollama
base URL normalization, optional API keys)
Usage:
LLM(model="deepseek/deepseek-chat")
LLM(model="ollama/llama3")
LLM(model="openrouter/anthropic/claude-3-opus")
LLM(model="llama3", provider="ollama")
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* fix: add is_litellm=True to tests that test litellm-specific methods
Tests for _get_custom_llm_provider and _validate_call_params used
openrouter/ model prefix which now routes to native provider.
Added is_litellm=True to force litellm path since these test
litellm-specific internals.
---------
Co-authored-by: Joao Moura <joao@crewai.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Introduce the agent skills standard for packaging reusable instructions that agents can discover and activate at runtime.
- skills defined via SKILL.md with yaml frontmatter and markdown body
- three-level progressive disclosure: metadata, instructions, resources
- filesystem discovery with directory name validation
- skill lifecycle events (discovery, loaded, activated, failed)
- crew-level skills resolved once and shared across agents
- skill context injected into both task execution and standalone kickoff
* feat: automatic root_scope for hierarchical memory isolation
Crews and flows now automatically scope their memories hierarchically.
The encoding flow's LLM-inferred scope becomes a sub-scope under the
structural root, preventing memory pollution across crews/agents.
Scope hierarchy:
/crew/{crew_name}/agent/{agent_role}/{llm-inferred}
/flow/{flow_name}/{llm-inferred}
Changes:
- Memory class: new root_scope field, passed through remember/remember_many
- EncodingFlow: prepends root_scope to resolved scope in both fast path
(Group A) and LLM path (Group C/D)
- Crew: auto-sets root_scope=/crew/{sanitized_name} on memory creation
- Agent executor: extends crew root with /agent/{sanitized_role} per save
- Flow: auto-sets root_scope=/flow/{sanitized_name} on memory creation
- New utils: sanitize_scope_name, normalize_scope_path, join_scope_paths
Backward compatible — no root_scope means no prefix (existing behavior).
Old memories at '/' remain accessible.
51 new tests, all existing tests pass.
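A sketch of the path helpers named above (the exact sanitization rules are assumptions):

```python
import re

def sanitize_scope_name(name: str) -> str:
    """Lowercase and replace anything that is not path-safe with '-' (sketch)."""
    return re.sub(r"[^a-z0-9_-]+", "-", name.strip().lower()).strip("-")

def join_scope_paths(*parts: str) -> str:
    """Join scope segments into a single '/'-rooted path."""
    segments = [p.strip("/") for p in parts if p and p.strip("/")]
    return "/" + "/".join(segments)

root = join_scope_paths("crew", sanitize_scope_name("Research Crew"),
                        "agent", sanitize_scope_name("Senior Analyst"))
# -> "/crew/research-crew/agent/senior-analyst"
```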
* ci: retrigger tests
* fix: don't auto-set root_scope on user-provided Memory instances
When users pass their own Memory instance to a Crew (memory=mem),
respect their configuration — don't auto-set root_scope.
Auto-scoping only applies when memory=True (Crew creates Memory).
Fixes: test_crew_memory_with_google_vertex_embedder which passes
Memory(embedder=...) to Crew and expects remember(scope='/test')
to produce scope '/test', not '/crew/crew/test'.
* fix: address 6 review comments — true scope isolation for reads, writes, and consolidation
1. Constrain similarity search to root_scope boundary (no cross-crew consolidation)
2. Remove unused self._root_scope from EncodingFlow
3. Apply root_scope to recall/list/info/reset (true read isolation)
4. Only extend agent root_scope when crew has one (backward compat)
5. Fix docstring example for sanitize_scope_name
6. Verify code comments match behavior
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
---------
Co-authored-by: Joao Moura <joao@crewai.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>