* fix: add tool repository credentials to uv build in tool publish
When running 'uv build' during tool publish, the build process now has access
to tool repository credentials. This mirrors the pattern used in run_crew.py,
ensuring private package indexes are properly authenticated during the build.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* fix: add env kwarg to subprocess.run mock assertions in publish tests
The actual code passes env= to subprocess.run but the test assertions
were missing this parameter, causing assertion failures.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Introduce the A2UI extension for declarative UI generation, including
support for both v0.8 and v0.9 protocol specs. Add A2UI content type
integration in A2A utils, along with schema definitions, catalog models,
and client extension improvements.
Enhance models with explicit defaults, field descriptions, and ConfigDict,
and improve typing and instance state handling across the extension.
Add schema conformance tests and align test structure.
Add and register A2UI documentation, including extension guide and
navigation updates.
* perf: reduce framework overhead for NVIDIA benchmarks
- Lazy initialize event bus thread pool and event loop on first emit()
instead of at import time (~200ms savings)
- Skip trace listener registration (50+ handlers) when tracing disabled
- Skip trace prompt in non-interactive contexts (isatty check) to avoid
20s timeout in CI/Docker/API servers
- Skip flush() when no events were emitted (avoids 30s timeout waste)
- Add _has_pending_events flag to track if any events were emitted
- Add _executor_initialized flag for lazy init double-checked locking
All existing behavior preserved when tracing IS enabled. No public APIs
changed - only conditional guards added.
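The lazy-initialization pattern above can be sketched as follows; the `_executor_initialized` and `_has_pending_events` flags mirror the commit, while the `EventBus` shape and worker count are illustrative:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class EventBus:
    """Sketch of lazy event-bus initialization (class shape is illustrative)."""

    def __init__(self) -> None:
        # Nothing expensive happens at import/construction time.
        self._executor: ThreadPoolExecutor | None = None
        self._executor_initialized = False
        self._init_lock = threading.Lock()
        self._has_pending_events = False

    def _ensure_executor(self) -> ThreadPoolExecutor:
        # Double-checked locking: the fast path skips the lock once initialized.
        if not self._executor_initialized:
            with self._init_lock:
                if not self._executor_initialized:
                    self._executor = ThreadPoolExecutor(max_workers=4)
                    self._executor_initialized = True
        assert self._executor is not None
        return self._executor

    def emit(self, event: object) -> None:
        self._has_pending_events = True
        self._ensure_executor().submit(lambda: None)  # stand-in for handler dispatch

    def flush(self) -> None:
        # Skip the potentially slow wait when nothing was ever emitted.
        if not self._has_pending_events or self._executor is None:
            return
        self._executor.shutdown(wait=True)

bus = EventBus()
assert bus._executor is None   # no thread pool at construction time
bus.flush()                    # no-op: no events emitted
bus.emit("started")
assert bus._executor is not None  # created lazily on first emit()
bus.flush()
```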
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* fix: address PR review comments — tracing override, executor init order, stdin guard, unused import
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* style: fix ruff formatting in trace_listener.py and utils.py
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Iris Clawd <iris@crewai.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* refactor: replace InstanceOf[T] with plain type annotations
InstanceOf[] is a Pydantic validation wrapper that adds runtime
isinstance checks. Plain type annotations are sufficient here since
the models already use arbitrary_types_allowed or the types are
BaseModel subclasses.
* refactor: convert BaseKnowledgeStorage to BaseModel
* fix: update tests for BaseKnowledgeStorage BaseModel conversion
* fix: correct embedder config structure in test
This commit cleans up the class by removing methods that are no longer needed. The changes help streamline the code and improve maintainability.
GPT-5.x models reject the `stop` parameter at the API level with "Unsupported parameter: 'stop' is not supported with this model". This breaks CrewAI executions when routing through LiteLLM (e.g. via
OpenAI-compatible gateways like Asimov), because the LiteLLM fallback path always includes `stop` in the API request params.
The native OpenAI provider was unaffected because it never sends `stop` to the API — it applies stop words client-side via `_apply_stop_words()`. However, when the request goes through LiteLLM (custom endpoints, proxy gateways),
`stop` is sent as an API parameter and GPT-5.x rejects it.
Additionally, the existing retry logic that catches this error only matched the OpenAI API error format ("Unsupported parameter") but missed
LiteLLM's own pre-validation error format ("does not support parameters"), so the self-healing retry never triggered for LiteLLM-routed calls.
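A sketch of the two behaviors described above: client-side stop-word truncation (what the native provider does instead of sending `stop`), and an error matcher that covers both wordings so the self-healing retry triggers on either path. Function bodies are illustrative:

```python
def apply_stop_words(text: str, stop: list[str]) -> str:
    """Truncate the completion at the first stop sequence, client-side."""
    for word in stop:
        idx = text.find(word)
        if idx != -1:
            text = text[:idx]
    return text

def is_unsupported_stop_error(message: str) -> bool:
    """Match both the OpenAI API wording and LiteLLM's pre-validation wording."""
    return ("Unsupported parameter" in message
            or "does not support parameters" in message)

assert apply_stop_words("Thought: done\nObservation:", ["\nObservation:"]) == "Thought: done"
assert is_unsupported_stop_error(
    "Unsupported parameter: 'stop' is not supported with this model"
)
assert is_unsupported_stop_error("gpt-5.1 does not support parameters: ['stop']")
```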
* Exporting tool's metadata to AMP - initial work
* Fix payload (nest under `tools` key)
* Remove debug message + code simplification
* Printing out detected tools
* Extract module name
* fix: address PR review feedback for tool metadata extraction
- Use sha256 instead of md5 for module name hashing (lint S324)
- Filter required list to match filtered properties in JSON schema
* fix: Use sha256 instead of md5 for module name hashing (lint S324)
- Add missing mocks to metadata extraction failure test
* style: fix ruff formatting
* fix: resolve mypy type errors in utils.py
* fix: address bot review feedback on tool metadata
- Use `is not None` instead of truthiness check so empty tools list
is sent to the API rather than being silently dropped as None
- Strip __init__ suffix from module path for tools in __init__.py files
- Extend _unwrap_schema to handle function-before, function-wrap, and
definitions wrapper types
* fix: capture env_vars declared with Field(default_factory=...)
When env_vars uses default_factory, pydantic stores a callable in the
schema instead of a static default value. Fall back to calling the
factory when no static default is present.
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* fix: preserve method return value as flow output for @human_feedback with emit
When a @human_feedback decorated method with emit= is the final method in a
flow (no downstream listeners triggered), the flow's final output was
incorrectly set to the collapsed outcome string (e.g., 'approved') instead
of the method's actual return value (e.g., a state dict).
Root cause: _process_feedback() returns the collapsed_outcome string when
emit is set, and this string was being stored as the method's result in
_method_outputs.
The fix:
1. In human_feedback.py: After _process_feedback, stash the real method_output
on the flow instance as _human_feedback_method_output when emit is set.
2. In flow.py: After appending a method result to _method_outputs, check if
_human_feedback_method_output is set. If so, replace the last entry with
the stashed real output and clear the stash.
This ensures:
- Routing still works correctly (collapsed outcome used for @listen matching)
- The flow's final result is the actual method return value
- If downstream listeners execute, their results become the final output
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* style: ruff format flow.py
* fix: use per-method dict stash for concurrency safety and None returns
Addresses review comments:
- Replace single flow-level slot with dict keyed by method name,
safe under concurrent @human_feedback+emit execution
- Dict key presence (not value) indicates stashed output,
correctly preserving None return values
- Added test for None return value preservation
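The per-method stash can be sketched as below; the `_method_outputs` list and the key-presence semantics come from the description, the stash attribute name and method shape are illustrative:

```python
class Flow:
    """Sketch: stash keyed by method name, where key presence (not value)
    marks a stashed output, so a None return value is preserved."""

    def __init__(self) -> None:
        self._method_outputs: list[object] = []
        self._hf_stash: dict[str, object] = {}

    def record_result(self, name: str, result: object) -> None:
        self._method_outputs.append(result)
        # Swap the collapsed outcome for the real return value if stashed.
        if name in self._hf_stash:
            self._method_outputs[-1] = self._hf_stash.pop(name)

flow = Flow()
flow._hf_stash["review"] = None            # the method really returned None
flow.record_result("review", "approved")   # collapsed outcome string
assert flow._method_outputs[-1] is None    # None preserved via key presence
```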
---------
Co-authored-by: Joao Moura <joao@crewai.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
- Delegate supports_function_calling() to parent (handles o1 models via OpenRouter)
- Guard empty env vars in base_url resolution
- Fix misleading comment about model validation rules
- Remove unused MagicMock import
- Use 'is not None' for env var restoration in tests
Co-authored-by: Joao Moura <joao@crewai.com>
## Summary
### Core fixes
<details>
<summary><b>Fix silent 404 cascade on trace event send</b></summary>
When `_initialize_backend_batch` failed, `trace_batch_id` was left populated with a client-generated UUID never registered server-side. All subsequent event sends hit a non-existent batch endpoint and returned 404. Now all three failure paths (None response, non-2xx status, exception) clear `trace_batch_id`.
</details>
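A condensed sketch of the cleanup, with the HTTP call reduced to a status code; names beyond `trace_batch_id` are illustrative:

```python
class TraceBatchManager:
    """Clear trace_batch_id on every failure path so later event sends
    don't target a batch the server never registered."""

    def __init__(self, api) -> None:
        self.api = api
        self.trace_batch_id: str | None = None

    def initialize_backend_batch(self, batch_id: str) -> bool:
        self.trace_batch_id = batch_id  # client-generated UUID
        try:
            # Simplified: the API call returns an HTTP status code.
            status = self.api.create_batch(batch_id)
        except Exception:
            status = None
        if status is None or not (200 <= status < 300):
            self.trace_batch_id = None  # covers all three failure paths
            return False
        return True

class FailingAPI:
    def create_batch(self, batch_id: str) -> int:
        raise ConnectionError("network down")

mgr = TraceBatchManager(FailingAPI())
assert mgr.initialize_backend_batch("uuid-123") is False
assert mgr.trace_batch_id is None  # no 404 cascade on subsequent sends
```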
<details>
<summary><b>Fix first-time deferred batch init silently skipped</b></summary>
First-time users have `is_tracing_enabled_in_context() = False` by design. This caused `_initialize_backend_batch` to return early without creating the batch, and `finalize_batch` to skip finalization (same guard). The first-time handler now passes `skip_context_check=True` to bypass both guards, calls `_finalize_backend_batch` directly, gates `backend_initialized` on actual success, checks `_send_events_to_backend` return status (marking batch as failed on 500), captures event count/duration/batch ID before they're consumed by send/finalize, and cleans up all singleton state via `_reset_batch_state()` on every exit path.
</details>
<details>
<summary><b>Sync <code>is_current_batch_ephemeral</code> on batch creation success</b></summary>
When the batch is successfully created on the server, `is_current_batch_ephemeral` is now synced with the actual `use_ephemeral` value used. This prevents endpoint mismatches where the batch was created on one endpoint but events and finalization were sent to a different one, resulting in 404.
</details>
<details>
<summary><b>Route <code>mark_trace_batch_as_failed</code> to correct endpoint for ephemeral batches</b></summary>
`mark_trace_batch_as_failed` always routed to the non-ephemeral endpoint (`/tracing/batches/{id}`), causing 404s when called on ephemeral batches — the same class of endpoint mismatch this PR aims to fix. Added `mark_ephemeral_trace_batch_as_failed` to `PlusAPI` and a `_mark_batch_as_failed` helper on `TraceBatchManager` that routes based on `is_current_batch_ephemeral`.
</details>
<details>
<summary><b>Gate <code>backend_initialized</code> on actual init success (non-first-time path)</b></summary>
On the non-first-time path, `backend_initialized` was set to `True` unconditionally after `_initialize_backend_batch` returned. With the new failure-path cleanup that clears `trace_batch_id`, this created an inconsistent state: `backend_initialized=True` + `trace_batch_id=None`. Now set via `self.trace_batch_id is not None`.
</details>
### Resilience improvements
<details>
<summary><b>Retry transient failures on batch creation</b></summary>
`_initialize_backend_batch` now retries up to 2 times with 200ms backoff on transient failures (None response, 5xx, network errors). Non-transient 4xx errors are not retried. The short backoff minimizes lock hold time on the non-first-time path where `_batch_ready_cv` is held.
</details>
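The retry policy can be sketched as follows (helper name and exact timings are illustrative; "transient" means a None response, a 5xx, or a network error):

```python
import time

def create_batch_with_retry(send, retries: int = 2, backoff: float = 0.2):
    """Retry transient failures; return 4xx immediately without retrying."""
    status = None
    for attempt in range(retries + 1):
        try:
            status = send()  # returns an HTTP status code (simplified)
        except OSError:      # network error: treat as transient
            status = None
        if status is not None and 200 <= status < 300:
            return status
        if status is not None and 400 <= status < 500:
            return status    # non-transient: do not retry
        if attempt < retries:
            time.sleep(backoff)  # short backoff keeps lock hold time low
    return status

calls: list[int] = []
def flaky() -> int:
    calls.append(1)
    return 503 if len(calls) < 2 else 201

assert create_batch_with_retry(flaky, backoff=0.0) == 201
assert len(calls) == 2  # one retry after the transient 503
```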
<details>
<summary><b>Fall back to ephemeral on server auth rejection</b></summary>
When the non-ephemeral endpoint returns 401/403 (expired token, revoked credentials, key rotation), the client automatically switches to ephemeral tracing instead of losing traces. The fallback forwards `skip_context_check` and is guarded against infinite recursion — if ephemeral also fails, `trace_batch_id` is cleared normally.
</details>
<details>
<summary><b>Fix action-event race initializing batch as non-ephemeral</b></summary>
`_handle_action_event` called `batch_manager.initialize_batch()` directly, defaulting `use_ephemeral=False`. When a `DefaultEnvEvent` or `LLMCallStartedEvent` fired before `CrewKickoffStartedEvent` in the thread pool, the batch was locked in as non-ephemeral. Now routes through `_initialize_batch()` which computes `use_ephemeral` from `_check_authenticated()`.
</details>
<details>
<summary><b>Guard <code>_mark_batch_as_failed</code> against cascading network errors</b></summary>
When `_finalize_backend_batch` failed with a network error (e.g. `[Errno 54] Connection reset by peer`), the exception handler called `_mark_batch_as_failed` — which also makes an HTTP request on the same dead connection. That second failure was unhandled. Now wrapped in a try/except so it logs at debug level instead of propagating.
</details>
<details>
<summary><b>Design decision: first-time users always use ephemeral</b></summary>
First-time trace collection **always creates ephemeral batches**, regardless of authentication status. This is intentional:
1. **The first-time handler UX is built around ephemeral traces** — it displays an access code, a 24-hour expiry link, and opens the browser to the ephemeral trace viewer. Non-ephemeral batches don't produce these artifacts, so the handler would fall through to the "Local Traces Collected" fallback even when traces were successfully sent.
2. **The server handles account linking automatically** — `LinkEphemeralTracesJob` runs on user signup and migrates ephemeral traces to permanent records. Logged-in users can access their traces via their dashboard regardless.
3. **Checking auth during batch setup broke event collection** — moving `_check_authenticated()` into `_initialize_batch` caused the batch initialization to fail silently during the flow/crew start event handler, preventing all event collection. Keeping the first-time path fast and side-effect-free preserves event collection.
The auth check is deferred to the non-first-time path (second run onwards), where `is_tracing_enabled_in_context()` is `True` and the normal tracing pipeline handles everything — including the 401/403 ephemeral fallback.
</details>
### Manual tests
<details>
<summary><b>Matrix</b></summary>
| Scenario | First run | Second run |
|----------|-----------|------------|
| Logged out, fresh `.crewai_user.json` | Ephemeral trace created, URL returned | Ephemeral trace created, URL returned |
| Logged in, fresh `.crewai_user.json` | Ephemeral trace created, URL returned | Trace batch finalized, URL returned |
| Flow execution | Tested with `poem_flow` | Tested with `poem_flow` |
| Crew execution | Tested with `hitl_crew` | Tested with `hitl_crew` |
</details>
When a method has both @listen and @human_feedback(emit=[...]),
the FlowMeta metaclass registered it as a router but only used
get_possible_return_constants() to detect paths. This fails for
@human_feedback methods since the paths come from the decorator's
emit param, not from return statements in the source code.
Now checks __router_paths__ first (set by @human_feedback), then
falls back to source code analysis for plain @router methods.
This was causing missing edges in the flow serializer output —
e.g. the whitepaper generator's review_infographic -> handle_cancelled,
send_slack_notification, classify_feedback edges were all missing.
Adds test: @listen + @human_feedback(emit=[...]) generates correct
router edges in serialized output.
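The detection order can be sketched as follows; for illustration the fallback parses a source string directly rather than pulling it via inspect:

```python
import ast

def return_constants(source: str) -> list[str]:
    """Fallback: collect string constants from return statements."""
    return [
        node.value.value
        for node in ast.walk(ast.parse(source))
        if isinstance(node, ast.Return)
        and isinstance(node.value, ast.Constant)
        and isinstance(node.value.value, str)
    ]

def router_paths(method, source: str) -> list[str]:
    # __router_paths__ (set by @human_feedback) wins: emit paths never
    # appear as return statements, so source analysis alone misses them.
    declared = getattr(method, "__router_paths__", None)
    if declared:
        return list(declared)
    return return_constants(source)

PLAIN_SRC = 'def route(self):\n    return "approved"\n'

def plain(self):
    return "approved"

def hf(self):
    return {"state": "done"}  # real return value, not a route

hf.__router_paths__ = ["approved", "rejected", "cancelled"]

assert router_paths(plain, PLAIN_SRC) == ["approved"]
assert router_paths(hf, PLAIN_SRC) == ["approved", "rejected", "cancelled"]
```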
Co-authored-by: Joao Moura <joao@crewai.com>
* feat: add native OpenAI-compatible providers (OpenRouter, DeepSeek, Ollama, vLLM, Cerebras, Dashscope)
Add a data-driven OpenAI-compatible provider system that enables
native support for multiple third-party APIs that implement the
OpenAI API specification.
New providers:
- OpenRouter: 500+ models via openrouter.ai
- DeepSeek: deepseek-chat, deepseek-coder, deepseek-reasoner
- Ollama: local models (llama3, mistral, codellama, etc.)
- hosted_vllm: self-hosted vLLM servers
- Cerebras: ultra-fast inference
- Dashscope: Alibaba Qwen models (qwen-turbo, qwen-max, etc.)
Architecture:
- Single OpenAICompatibleCompletion class extends OpenAICompletion
- ProviderConfig dataclass stores per-provider settings
- Registry dict makes adding new providers a single config entry
- Handles provider-specific quirks (OpenRouter headers, Ollama
base URL normalization, optional API keys)
Usage:
LLM(model="deepseek/deepseek-chat")
LLM(model="ollama/llama3")
LLM(model="openrouter/anthropic/claude-3-opus")
LLM(model="llama3", provider="ollama")
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* fix: add is_litellm=True to tests that test litellm-specific methods
Tests for _get_custom_llm_provider and _validate_call_params used
openrouter/ model prefix which now routes to native provider.
Added is_litellm=True to force litellm path since these test
litellm-specific internals.
---------
Co-authored-by: Joao Moura <joao@crewai.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Introduce the agent skills standard for packaging reusable instructions that agents can discover and activate at runtime.
- skills defined via SKILL.md with yaml frontmatter and markdown body
- three-level progressive disclosure: metadata, instructions, resources
- filesystem discovery with directory name validation
- skill lifecycle events (discovery, loaded, activated, failed)
- crew-level skills resolved once and shared across agents
- skill context injected into both task execution and standalone kickoff
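A minimal sketch of the SKILL.md format and a toy frontmatter split (one `key: value` per line; a real implementation would use a YAML parser, and the field names here are assumptions):

```python
SKILL_MD = """---
name: code-review
description: Review diffs for style and safety issues
---
When activated, inspect each hunk and flag violations.
"""

def parse_skill(text: str) -> tuple[dict[str, str], str]:
    """Split YAML frontmatter from the markdown body."""
    _, _, rest = text.partition("---\n")
    header, _, body = rest.partition("---\n")
    meta = {}
    for line in header.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

meta, body = parse_skill(SKILL_MD)
assert meta["name"] == "code-review"         # level 1: metadata only
assert body.startswith("When activated")     # level 2: full instructions
```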
* feat: automatic root_scope for hierarchical memory isolation
Crews and flows now automatically scope their memories hierarchically.
The encoding flow's LLM-inferred scope becomes a sub-scope under the
structural root, preventing memory pollution across crews/agents.
Scope hierarchy:
/crew/{crew_name}/agent/{agent_role}/{llm-inferred}
/flow/{flow_name}/{llm-inferred}
Changes:
- Memory class: new root_scope field, passed through remember/remember_many
- EncodingFlow: prepends root_scope to resolved scope in both fast path
(Group A) and LLM path (Group C/D)
- Crew: auto-sets root_scope=/crew/{sanitized_name} on memory creation
- Agent executor: extends crew root with /agent/{sanitized_role} per save
- Flow: auto-sets root_scope=/flow/{sanitized_name} on memory creation
- New utils: sanitize_scope_name, normalize_scope_path, join_scope_paths
Backward compatible — no root_scope means no prefix (existing behavior).
Old memories at '/' remain accessible.
51 new tests, all existing tests pass.
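The scope utilities might look like this; the helper names come from the commit, but the sanitization rules are an assumption:

```python
import re

def sanitize_scope_name(name: str) -> str:
    """Lowercase and collapse runs of non-alphanumerics to '-' (illustrative)."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

def join_scope_paths(*parts: str) -> str:
    """Join scope segments into a single normalized /a/b/c path."""
    segments = [seg for part in parts for seg in part.split("/") if seg]
    return "/" + "/".join(segments)

root = join_scope_paths("/crew", sanitize_scope_name("Research Crew"),
                        "agent", sanitize_scope_name("Senior Analyst"))
assert root == "/crew/research-crew/agent/senior-analyst"

# No root_scope means no prefix: old memories at '/' stay addressable.
assert join_scope_paths("", "/test") == "/test"
```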
* ci: retrigger tests
* fix: don't auto-set root_scope on user-provided Memory instances
When users pass their own Memory instance to a Crew (memory=mem),
respect their configuration — don't auto-set root_scope.
Auto-scoping only applies when memory=True (Crew creates Memory).
Fixes: test_crew_memory_with_google_vertex_embedder which passes
Memory(embedder=...) to Crew and expects remember(scope='/test')
to produce scope '/test', not '/crew/crew/test'.
* fix: address 6 review comments — true scope isolation for reads, writes, and consolidation
1. Constrain similarity search to root_scope boundary (no cross-crew consolidation)
2. Remove unused self._root_scope from EncodingFlow
3. Apply root_scope to recall/list/info/reset (true read isolation)
4. Only extend agent root_scope when crew has one (backward compat)
5. Fix docstring example for sanitize_scope_name
6. Verify code comments match behavior
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
---------
Co-authored-by: Joao Moura <joao@crewai.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* feat: add flow_structure() serializer for Flow class introspection
Adds a new flow_serializer module that introspects a Flow class and returns
a JSON-serializable dictionary describing its complete graph structure.
This enables Studio UI to render visual flow graphs (analogous to how
crew_structure() works for Crews).
The serializer extracts:
- Method metadata (type, triggers, conditions, router paths)
- Edge graph (listen and route edges between methods)
- State schema (from Pydantic model if typed)
- Human feedback and Crew reference detection
- Flow input detection
Includes 23 comprehensive tests covering linear flows, routers,
AND/OR conditions, human feedback, crew detection, state schemas,
edge cases, and JSON serialization.
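For orientation, the serialized structure might look roughly like this; the keys are assumptions based on the bullet list above, not the actual schema:

```python
import json

# Hypothetical output shape for a two-method flow.
structure = {
    "methods": {
        "generate": {"type": "start", "triggers": [], "router_paths": None},
        "review": {"type": "listener", "triggers": ["generate"], "router_paths": None},
    },
    "edges": [{"source": "generate", "target": "review", "kind": "listen"}],
    "state_schema": {"topic": {"type": "string", "default": None}},
}

# The key property: the dict survives a JSON round trip unchanged.
assert json.loads(json.dumps(structure)) == structure
```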
* fix: lint — ruff check + format compliance for flow_serializer
* fix: address review — PydanticUndefined bug, FlowCondition tuple handling, dead code cleanup, inheritance tests
1. Fix PydanticUndefined default handling (real bug) — required fields
were serialized with sentinel value instead of null
2. Fix FlowCondition tuple type in _extract_all_methods_from_condition —
tuple conditions now properly extracted
3. Remove dead get_flow_inputs branch that did nothing
4. Document _detect_crew_reference as best-effort heuristic
5. Add 2 inheritance tests (parent→child method propagation)
---------
Co-authored-by: Joao Moura <joao@crewai.com>
When a flow with @human_feedback(llm=create_llm()) pauses for HITL and
later resumes:
1. The LLM object was being serialized to just a model string via
_serialize_llm_for_context() (e.g. 'gemini/gemini-3.1-flash-lite-preview')
2. On resume, resume_async() was creating LLM(model=string) with NO
credentials, project, location, safety_settings, or client_params
3. OpenAI worked by accident (OPENAI_API_KEY from env), but Gemini with
service accounts broke
This fix:
- Stashes the live LLM object on the wrapper as _hf_llm attribute
- On resume, looks up the method and retrieves the live LLM if available
- Falls back to the serialized string for backward compatibility
- Preserves _hf_llm through FlowMethod wrapper decorators
Co-authored-by: Joao Moura <joao@crewai.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* Fix lock_store crash when redis package is not installed
`REDIS_URL` being set was enough to trigger a Redis lock, which would
raise `ImportError` if the `redis` package wasn't available. Added
`_redis_available()` to guard on both the env var and the import.
* Simplify tests
* Simplify tests #2
* fix: enhance LLM response handling and serialization
* Updated the Flow class to improve error handling when both structured and simple prompting fail, ensuring the first outcome is returned as a fallback.
* Introduced a new function, _serialize_llm_for_context, to properly serialize LLM objects with provider prefixes for better context management.
* Added tests to validate the new serialization logic and ensure correct behavior when LLM calls fail.
This update enhances the robustness of LLM interactions and improves the overall flow of handling outcomes.
* fix: patch VCR response handling to prevent httpx.ResponseNotRead errors (#4917)
VCR's _from_serialized_response mocks httpx.Response.read(), which
prevents the response's internal _content attribute from being properly
initialized. When OpenAI's client (using with_raw_response) accesses
response.content, httpx raises ResponseNotRead.
This patch explicitly sets response._content after the response is
created, ensuring that tests using VCR cassettes work correctly with
the OpenAI client's raw response handling.
Fixes tests:
- test_hierarchical_crew_creation_tasks_with_sync_last
- test_conditional_task_last_task_when_conditional_is_false
- test_crew_log_file_output
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
---------
Co-authored-by: Joao Moura <joaomdmoura@gmail.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
---------
Co-authored-by: alex-clawd <alex@crewai.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* feat: introduce PlanningConfig for enhanced agent planning capabilities (#4344)
* feat: introduce PlanningConfig for enhanced agent planning capabilities
This update adds a new PlanningConfig class to manage agent planning configurations, allowing for customizable planning behavior before task execution. The existing reasoning parameter is deprecated in favor of this new configuration, ensuring backward compatibility while enhancing the planning process. Additionally, the Agent class has been updated to utilize this new configuration, and relevant utility functions have been adjusted accordingly. Tests have been added to validate the new planning functionality and ensure proper integration with existing agent workflows.
* dropping redundancy
* fix test
* revert handle_reasoning here
* refactor: update reasoning handling in Agent class
This commit modifies the Agent class to conditionally call the handle_reasoning function based on the executor class being used. The legacy CrewAgentExecutor will continue to utilize handle_reasoning, while the new AgentExecutor will manage planning internally. Additionally, the PlanningConfig class has been referenced in the documentation to clarify its role in enabling or disabling planning. Tests have been updated to reflect these changes and ensure proper functionality.
* improve planning prompts
* matching
* refactor: remove default enabled flag from PlanningConfig in Agent class
* more cassettes
* fix test
* refactor: update planning prompt and remove deprecated methods in reasoning handler
* improve planning prompt
* Lorenze/feat planning pt 2 todo list gen (#4449)
* feat: enhance agent planning with structured todo management
This commit introduces a new planning system within the AgentExecutor class, allowing for the creation of structured todo items from planning steps. The TodoList and TodoItem models have been added to facilitate tracking of plan execution. The reasoning plan now includes a list of steps, improving the clarity and organization of agent tasks. Additionally, tests have been added to validate the new planning functionality and ensure proper integration with existing workflows.
* improve handler
* linted
* linted
* Lorenze/feat/planning pt 3 todo list execution (#4450)
* execute todos and be able to track them
* feat: introduce PlannerObserver and StepExecutor for enhanced plan execution
This commit adds the PlannerObserver and StepExecutor classes to the CrewAI framework, implementing the observation phase of the Plan-and-Execute architecture. The PlannerObserver analyzes step execution results, determines plan validity, and suggests refinements, while the StepExecutor executes individual todo items in isolation. These additions improve the overall planning and execution process, allowing for more dynamic and responsive agent behavior. Additionally, new observation events have been defined to facilitate monitoring and logging of the planning process.
* refactor: enhance final answer synthesis in AgentExecutor
This commit improves the synthesis of final answers in the AgentExecutor class by implementing a more coherent approach to combining results from multiple todo items. The method now utilizes a single LLM call to generate a polished response, falling back to concatenation if the synthesis fails. Additionally, the test cases have been updated to reflect the changes in planning and execution, ensuring that the results are properly validated and that the plan-and-execute architecture is functioning as intended.
* refactor: implement structured output handling in final answer synthesis
This commit enhances the final answer synthesis process in the AgentExecutor class by introducing support for structured outputs when a response model is specified. The synthesis method now utilizes the response model to produce outputs that conform to the expected schema, while still falling back to concatenation in case of synthesis failures. This change ensures that intermediate steps yield free-text results, but the final output can be structured, improving the overall coherence and usability of the synthesized answers.
* regen tests
* linted
* fix
* Enhance PlanningConfig and AgentExecutor with Reasoning Effort Levels
This update introduces a new reasoning effort attribute in the PlanningConfig class, allowing users to customize the observation and replanning behavior during task execution. The AgentExecutor class has been modified to utilize this new attribute, routing step observations based on the specified reasoning effort level: low, medium, or high.
Additionally, tests have been added to validate the functionality of the reasoning effort levels, ensuring that the agent behaves as expected under different configurations. This enhancement improves the adaptability and efficiency of the planning process in agent execution.
* regen cassettes for test and fix test
* cassette regen
* fixing tests
* dry
* Refactor PlannerObserver and StepExecutor to Utilize I18N for Prompts
This update enhances the PlannerObserver and StepExecutor classes by integrating the I18N utility for managing prompts and messages. The system and user prompts are now retrieved from the I18N module, allowing for better localization and maintainability. Additionally, the code has been cleaned up to remove hardcoded strings, improving readability and consistency across the planning and execution processes.
* consolidate agent logic
* fix datetime
* improving step executor
* refactor: streamline observation and refinement process in PlannerObserver
- Updated the PlannerObserver to apply structured refinements directly from observations without requiring a second LLM call.
- Renamed a method for clarity.
- Enhanced documentation to reflect changes in how refinements are handled.
- Removed unnecessary LLM message building and parsing logic, simplifying the refinement process.
- Updated event emissions to include summaries of refinements instead of raw data.
* enhance step executor with tool usage events and validation
- Added event emissions for tool usage, including started and finished events, to track tool execution.
- Implemented validation to ensure expected tools are called during step execution, raising errors when not.
- Refactored the method to handle tool execution with event logging.
- Introduced a new method for parsing tool input into a structured format.
- Updated tests to cover new functionality and ensure correct behavior of tool usage events.
* refactor: enhance final answer synthesis logic in AgentExecutor
- Updated the finalization process to conditionally skip synthesis when the last todo result is sufficient as a complete answer.
- Introduced a new method to determine if the last todo result can be used directly, improving efficiency.
- Added tests to verify the new behavior, ensuring synthesis is skipped when appropriate and maintained when a response model is set.
* fix: update observation handling in PlannerObserver for LLM errors
- Modified the error handling in the PlannerObserver to default to a conservative replan when an LLM call fails.
- Updated the return values to indicate that the step was not completed successfully and that a full replan is needed.
- Added a new test to verify the behavior of the observer when an LLM error occurs, ensuring the correct replan logic is triggered.
* refactor: enhance planning and execution flow in agents
- Updated the PlannerObserver to accept a kickoff input for standalone task execution, improving flexibility in task handling.
- Refined the step execution process in StepExecutor to support multi-turn action loops, allowing for iterative tool execution and observation.
- Introduced a method to extract relevant task sections from descriptions, ensuring clarity in task requirements.
- Enhanced the AgentExecutor to manage step failures more effectively, triggering replans only when necessary and preserving completed task history.
- Updated translations to reflect changes in planning principles and execution prompts, emphasizing concrete and executable steps.
* refactor: update setup_native_tools to include tool_name_mapping
- Modified the setup_native_tools function to return an additional mapping of tool names.
- Updated StepExecutor and AgentExecutor classes to accommodate the new return value from setup_native_tools.
* fix tests
* linted
* linted
* feat: enhance image block handling in Anthropic provider and update AgentExecutor logic
- Added a method to convert OpenAI-style image_url blocks to Anthropic's required format.
- Updated AgentExecutor to handle cases where no todos are ready, introducing a needs_replan return state.
- Improved fallback answer generation in AgentExecutor to prevent RuntimeErrors when no final output is produced.
* lint
* lint
* 1. Added "failed" to TodoStatus (planning_types.py)
- TodoStatus now includes "failed" as a valid state: Literal["pending", "running", "completed", "failed"]
- Added mark_failed(step_number, result) method to TodoList
- Added get_failed_todos() method to TodoList
- Updated is_complete to treat both "completed" and "failed" as terminal states
- Updated replace_pending_todos docstring to mention failed items are preserved
2. Mark running todos as failed before replan (agent_executor.py)
All three effort-level handlers now call mark_failed() on the current todo before routing to replan_now:
- Low effort (handle_step_observed_low): hard-failure branch
- Medium effort (handle_step_observed_medium): needs_full_replan branch
- High effort (decide_next_action): both needs_full_replan and step_completed_successfully=False branches
3. Updated _should_replan to use get_failed_todos()
Previously filtered on todo.status == "failed", which was dead code. Now uses the proper accessor method that will actually find failed items.
What this fixes: Before these changes, a step that triggered a replan would stay in running status permanently, causing is_complete to never
return True and next_pending to skip it — leading to stuck execution states. Now failed steps are properly tracked, replanning context correctly
reports them, and LiteAgentOutput.failed_todos will actually return results.
* fix test
* imp on failed states
* adjusted the var name from AgentReActState to AgentExecutorState
* addressed p0 bugs
* more improvements
* linted
* regen cassette
* addressing critical comments
* ensure configurable timeouts, max_replans and max step iterations
* adjusted tools
* dropping debug statements
* addressed comment
* fix linter
* lints and test fixes
* fix: default observation parse fallback to failure and clean up plan-execute types
When _parse_observation_response fails all parse attempts, default to
step_completed_successfully=False instead of True to avoid silently
masking failures. Extract duplicate _extract_task_section into a shared
utility in agent_utils. Type PlanningConfig.llm as str | BaseLLM | None
instead of str | Any | None. Make StepResult a frozen dataclass for
immutability consistency with StepExecutionContext.
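The conservative-fallback and immutability points above can be sketched together. `parse_observation` is a hypothetical stand-in for `_parse_observation_response`; the field set is illustrative.

```python
import json
from dataclasses import dataclass


@dataclass(frozen=True)  # immutable, matching StepExecutionContext
class StepResult:
    step_completed_successfully: bool
    notes: str = ""


def parse_observation(raw: str) -> StepResult:
    """Parse the observer's JSON response; if parsing fails, default
    to failure instead of silently masking it as success."""
    try:
        data = json.loads(raw)
        return StepResult(bool(data["step_completed_successfully"]),
                          data.get("notes", ""))
    except (json.JSONDecodeError, KeyError, TypeError):
        # Conservative fallback: assume the step did NOT complete.
        return StepResult(step_completed_successfully=False,
                          notes="unparseable observation")
```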
* fix: remove Any from function_calling_llm union type in step_executor
* fix: make BaseTool usage count thread-safe for parallel step execution
Add _usage_lock and _claim_usage() to BaseTool for atomic
check-and-increment of current_usage_count. This prevents race
conditions when parallel plan steps invoke the same tool concurrently
via execute_todos_parallel. Remove the racy pre-check from
execute_single_native_tool_call since the limit is now enforced
atomically inside tool.run().
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Greyson LaLonde <greyson@crewai.com>
* [SECURITY] Fix sandbox escape vulnerability in CodeInterpreterTool (F-001)
This commit addresses a critical security vulnerability where the CodeInterpreterTool
could be exploited via sandbox escape attacks when Docker was unavailable.
Changes:
- Remove insecure fallback to restricted sandbox in run_code_safety()
- Now fails closed with RuntimeError when Docker is unavailable
- Mark run_code_in_restricted_sandbox() as deprecated and insecure
- Add clear security warnings to SandboxPython class documentation
- Update tests to reflect secure-by-default behavior
- Add test demonstrating the sandbox escape vulnerability
- Update README with security requirements and best practices
The previous implementation would fall back to a Python-based 'restricted sandbox'
when Docker was unavailable. However, this sandbox could be easily bypassed using
Python object introspection to recover the original __import__ function, allowing
arbitrary module access and command execution on the host.
The fix enforces Docker as a requirement for safe code execution. Users who cannot
use Docker must explicitly enable unsafe_mode=True, acknowledging the security risks.
Security Impact:
- Prevents RCE via sandbox escape when Docker is unavailable
- Enforces fail-closed security model
- Maintains backward compatibility via unsafe_mode flag
References:
- https://docs.crewai.com/tools/ai-ml/codeinterpretertool
Co-authored-by: Rip&Tear <theCyberTech@users.noreply.github.com>
* Add security fix documentation for F-001
Co-authored-by: Rip&Tear <theCyberTech@users.noreply.github.com>
* Add Slack summary for security fix
Co-authored-by: Rip&Tear <theCyberTech@users.noreply.github.com>
* Delete SECURITY_FIX_F001.md
* Delete SLACK_SUMMARY.md
* chore: regen cassettes
* chore: regen more cassettes
* Potential fix for pull request finding
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
* Potential fix for pull request finding
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: Rip&Tear <theCyberTech@users.noreply.github.com>
Co-authored-by: Greyson LaLonde <greyson@crewai.com>
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
* fix(bedrock): group parallel tool results in single user message
When an AWS Bedrock model makes multiple tool calls in a single
response, the Converse API requires all corresponding tool results
to be sent back in a single user message. Previously, each tool
result was emitted as a separate user message, causing:
ValidationException: Expected toolResult blocks at messages.2.content
Fix: When processing consecutive tool messages, append the toolResult
block to the preceding user message (if it already contains
toolResult blocks) instead of creating a new message. This groups
all parallel tool results together while keeping tool results from
different assistant turns separate.
Fixes #4749
Signed-off-by: Giulio Leone <6887247+giulio-leone@users.noreply.github.com>
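The grouping rule above can be sketched as follows. `append_tool_result` is a hypothetical helper; the message shape follows the Bedrock Converse format (`role`/`content` with `toolResult` blocks).

```python
def append_tool_result(messages: list[dict], tool_result_block: dict) -> list[dict]:
    """Append a toolResult block, grouping consecutive results from the
    same assistant turn into a single user message as the Converse API
    requires for parallel tool calls."""
    if (messages
            and messages[-1]["role"] == "user"
            and any("toolResult" in block for block in messages[-1]["content"])):
        # The preceding user message already carries tool results:
        # append to it instead of starting a new message.
        messages[-1]["content"].append(tool_result_block)
    else:
        # First result of this assistant turn: open a new user message.
        messages.append({"role": "user", "content": [tool_result_block]})
    return messages
```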
* Update lib/crewai/tests/llms/bedrock/test_bedrock.py
* fix: group bedrock tool results
Co-authored-by: João Moura <joaomdmoura@gmail.com>
---------
Signed-off-by: Giulio Leone <6887247+giulio-leone@users.noreply.github.com>
Co-authored-by: Giulio Leone <6887247+giulio-leone@users.noreply.github.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
* fix: allow hyphenated tool names in MCP references like notion#get-page
The _SLUG_RE regex on BaseAgent rejected MCP tool references containing
hyphens (e.g. "notion#get-page") because the fragment pattern only
matched \w (word chars).
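A sketch of the pattern change (the exact regex is an assumption; only the `_SLUG_RE` name and the server#tool reference shape come from the commit):

```python
import re

# Before, the fragment after "#" matched only \w, so "notion#get-page"
# was rejected. Allowing "-" in both parts accepts hyphenated names.
_SLUG_RE = re.compile(r"^[\w-]+(?:#[\w-]+)?$")


def is_valid_tool_ref(ref: str) -> bool:
    return _SLUG_RE.fullmatch(ref) is not None
```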
* fix: create fresh MCP client per tool invocation to prevent parallel call races
When the LLM dispatches parallel calls to MCP tools on the same server, the executor runs them concurrently via ThreadPoolExecutor. Previously, all tools from a server shared a single MCPClient instance, and even the same tool called twice would reuse one client. Since each thread creates its own asyncio event loop via asyncio.run(), concurrent connect/disconnect calls on the shared client caused anyio cancel-scope errors ("Attempted to exit cancel scope in a different task than it was entered in").
The fix introduces a client_factory pattern: MCPNativeTool now receives a zero-arg callable that produces a fresh MCPClient + transport on every
_run_async() invocation. This eliminates all shared mutable connection state between concurrent calls, whether to the same tool or different tools from the same server.
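The client_factory pattern described above can be sketched like this. `MCPClient` here is a stub standing in for the real client; only the `MCPNativeTool`/`client_factory`/`_run_async` names come from the commit.

```python
import asyncio
from typing import Callable


class MCPClient:
    """Stub client; the real one manages an MCP transport."""

    async def connect(self) -> None:
        self.connected = True

    async def call_tool(self, name: str) -> str:
        return f"result:{name}"

    async def disconnect(self) -> None:
        self.connected = False


class MCPNativeTool:
    def __init__(self, tool_name: str, client_factory: Callable[[], MCPClient]):
        # A zero-arg factory instead of a shared client: every invocation
        # gets fresh connection state, so concurrent calls running in
        # separate event loops cannot trip over each other's cancel scopes.
        self._tool_name = tool_name
        self._client_factory = client_factory

    def run(self) -> str:
        return asyncio.run(self._run_async())

    async def _run_async(self) -> str:
        client = self._client_factory()  # fresh client per call
        await client.connect()
        try:
            return await client.call_tool(self._tool_name)
        finally:
            await client.disconnect()
```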
* test: ensure we can filter hyphenated MCP tool
* refactor(memory): convert Memory, MemoryScope, and MemorySlice to BaseModel
* fix(test): update mock memory attribute from _read_only to read_only
* fix: handle re-validation in wrap validators and patch BaseModel class in tests
* fix(telemetry): skip signal handler registration in non-main threads
When CrewAI is initialized from a non-main thread (e.g. Streamlit, Flask,
Django, Jupyter), the telemetry module attempted to register signal handlers
which only work in the main thread. This caused multiple noisy ValueError
tracebacks to be printed to stderr, confusing users even though the errors
were caught and non-fatal.
Check `threading.current_thread() is not threading.main_thread()` before
attempting signal registration, and skip silently with a debug-level log
message instead of printing full tracebacks.
Fixes crewAIInc/crewAI#4289
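The guard described above amounts to the following sketch (`register_shutdown_handler` is a hypothetical wrapper around the telemetry module's signal setup):

```python
import logging
import signal
import threading

logger = logging.getLogger(__name__)


def register_shutdown_handler(handler) -> bool:
    """Register a SIGTERM handler only on the main thread. signal.signal
    raises ValueError elsewhere, so skip silently with a debug log."""
    if threading.current_thread() is not threading.main_thread():
        logger.debug("Not on main thread; skipping signal handler setup.")
        return False
    signal.signal(signal.SIGTERM, handler)
    return True
```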
* fix(test): move Telemetry() inside signal.signal mock context
Refs: #4649
* fix(telemetry): move signal.signal mock inside thread to wrap Telemetry() construction
The patch context now activates inside init_in_thread so the mock
is guaranteed to be active before and during Telemetry.__init__,
addressing the Copilot review feedback.
Refs: #4289
* fix(test): mock logger.debug instead of capsys for deterministic assertion
Replace signal.signal-only mock with combined logger + signal mock.
Assert logger.debug was called with the skip message and signal.signal
was never invoked from the non-main thread.
Refs: #4289
* chore(deps): update lancedb version and add lance-namespace packages
- Updated lancedb dependency version from 0.4.0 to 0.29.2 in multiple files.
- Added new packages: lance-namespace and lance-namespace-urllib3-client (version 0.5.2), including their dependencies and installation details.
- Enhanced MemoryTUI to display a limit on entries and improved the LanceDBStorage class with automatic background compaction and index creation for better performance.
* linter
* refactor: update memory recall limit and formatting in Agent class
- Reduced the memory recall limit from 10 to 5 in multiple locations within the Agent class.
- Updated the memory formatting to use a new `format` method in the MemoryMatch class for improved readability and metadata inclusion.
* refactor: enhance memory handling with read-only support
- Updated memory-related classes and methods to support read-only functionality, allowing for silent no-ops when attempting to remember data in read-only mode.
- Modified the LiteAgent and CrewAgentExecutorMixin classes to check for read-only status before saving memories.
- Adjusted MemorySlice and Memory classes to reflect changes in behavior when read-only is enabled.
- Updated tests to verify that memory operations behave correctly under read-only conditions.
* test: set mock memory to read-write in unit tests
- Updated unit tests in test_unified_memory.py to set mock_memory._read_only to False, ensuring that memory operations can be tested in a writable state.
* fix test
* fix: preserve falsy metadata values and fix remember() return type
---------
Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
Co-authored-by: Greyson LaLonde <greyson@crewai.com>
* fix: preserve null types in tool parameter schemas for LLM
Tool parameter schemas were stripping null from optional fields via
generate_model_description, forcing the LLM to provide non-null values
even for fields that should accept null.
Adds strip_null_types parameter to generate_model_description and passes False when generating tool
schemas, so optional fields keep anyOf: [{type: T}, {type: null}]
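A sketch of the switch over plain JSON-schema dicts. `tool_param_schema` is a hypothetical stand-in for generate_model_description; only the `strip_null_types` parameter name comes from the commit.

```python
def tool_param_schema(schema: dict, strip_null_types: bool = True) -> dict:
    """When strip_null_types is False, optional fields keep their
    anyOf [{type: T}, {type: "null"}] form so the LLM knows null is
    an acceptable value; when True, the null branch is collapsed."""
    out = {"properties": {}}
    for name, prop in schema.get("properties", {}).items():
        prop = dict(prop)  # avoid mutating the caller's schema
        variants = prop.get("anyOf")
        if strip_null_types and variants:
            non_null = [v for v in variants if v.get("type") != "null"]
            if len(non_null) == 1:
                # Collapse anyOf down to the single non-null type.
                prop.pop("anyOf")
                prop.update(non_null[0])
        out["properties"][name] = prop
    return out
```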
* Update lib/crewai/src/crewai/utilities/pydantic_schema_utils.py
Co-authored-by: Gabe Milani <gabriel@crewai.com>
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Gabe Milani <gabriel@crewai.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
* fix: map output_pydantic/output_json to native structured output
* test: add crew+tools+structured output integration test for Gemini
* fix: re-record stale cassette for test_crew_testing_function
* fix: re-record remaining stale cassettes for native structured output
* fix: enable native structured output for lite agent and fix mypy errors