Previously, _initialize_batch forced use_ephemeral=True for all
first-time users, bypassing _check_authenticated() entirely. This
meant logged-in users in a new project directory were routed to the
ephemeral endpoint instead of their account's tracing endpoint.
Now _check_authenticated() runs for all users including first-time.
Authenticated first-time users get non-ephemeral tracing (traces
linked to their account); only unauthenticated first-time users
fall back to ephemeral. The deferred backend init in
FirstTimeTraceHandler also reads is_current_batch_ephemeral instead
of hardcoding use_ephemeral=True.
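The corrected routing can be sketched as follows. This is a minimal illustration, not the real class: `BatchInitializer`, its constructor arguments, and the stubbed `_check_authenticated` are hypothetical stand-ins for the handler described above.

```python
# Hypothetical sketch of the corrected routing: authentication is checked
# for every user; only unauthenticated first-time users go ephemeral.
class BatchInitializer:
    def __init__(self, authenticated: bool, first_time: bool):
        self._authenticated = authenticated  # stand-in for real credential lookup
        self.first_time = first_time
        self.is_current_batch_ephemeral = False

    def _check_authenticated(self) -> bool:
        return self._authenticated

    def _initialize_batch(self) -> None:
        # Previously: if self.first_time: use_ephemeral = True (the bug).
        # Now authentication decides, regardless of first-time status.
        self.is_current_batch_ephemeral = not self._check_authenticated()

batch = BatchInitializer(authenticated=True, first_time=True)
batch._initialize_batch()
```

A deferred consumer (like FirstTimeTraceHandler) then reads `is_current_batch_ephemeral` instead of assuming `True`.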
Transient failures (None response, 5xx, network errors) during
_initialize_backend_batch now retry up to 2 times with a 1s backoff.
Non-transient 4xx errors (422 validation, 401 auth) are not retried
since the same payload would fail again. If all retries are exhausted,
trace_batch_id is cleared per the existing safety net.
This runs post-execution when the user has already answered "y" to
view traces, so the ~2s worst-case delay is acceptable.
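The retry policy above can be sketched like this; `create_batch` and the exception classes standing in for transient failures are illustrative, not the actual client API.

```python
import time

# Stand-ins for "None response, 5xx, network errors" transient failures.
TRANSIENT = (TimeoutError, ConnectionError)

def initialize_with_retry(create_batch, max_retries: int = 2, backoff: float = 1.0):
    """Retry transient failures; give up immediately on non-transient ones.

    `create_batch` returns a batch id, returns None on a 5xx-style soft
    failure, or raises. Names here are illustrative, not the real API.
    """
    attempts = max_retries + 1
    for attempt in range(attempts):
        try:
            batch_id = create_batch()
        except TRANSIENT:
            batch_id = None  # treat like a soft failure and retry
        except Exception:
            return None  # 4xx-style error: the same payload would fail again
        if batch_id is not None:
            return batch_id
        if attempt < attempts - 1:
            time.sleep(backoff)
    return None  # retries exhausted: caller clears trace_batch_id
```

Worst case is max_retries sleeps of `backoff` seconds, matching the ~2s bound mentioned above.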
In first_time_trace_handler._initialize_backend_and_send_events,
backend_initialized was set to True unconditionally after calling
_initialize_backend_batch, regardless of whether the server-side
batch was actually created. This caused _send_events_to_backend and
finalize_batch to run against a non-existent batch.
Now check trace_batch_id after _initialize_backend_batch returns;
if None (batch creation failed), call _gracefully_fail and return
early, skipping event send and finalization.
When _initialize_backend_batch fails (None response, non-2xx status,
or exception), trace_batch_id was left populated with a client-generated
UUID that was never registered server-side. Subsequent calls to
_send_events_to_backend would see the stale ID and POST events to
/ephemeral/batches/{id}/events, resulting in a 404 from the server.
Nullify trace_batch_id on all three failure paths so downstream methods
skip event sending instead of hitting a non-existent batch.
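The nullify-on-failure contract plus the downstream guard can be sketched together. The class, the injected `post` callable, and the endpoint shape are assumptions for illustration only.

```python
class TraceBatchClient:
    """Illustrative sketch: trace_batch_id is cleared on every failure path
    so downstream senders skip rather than 404 against a missing batch."""

    def __init__(self, post):
        self._post = post  # injected HTTP call: returns a status code or raises
        self.trace_batch_id = None
        self.sent = []

    def _initialize_backend_batch(self, batch_id: str) -> None:
        self.trace_batch_id = batch_id
        try:
            status = self._post(f"/ephemeral/batches/{batch_id}")
        except Exception:
            self.trace_batch_id = None  # network-error failure path
            return
        if status is None or not (200 <= status < 300):
            self.trace_batch_id = None  # None-response / non-2xx failure paths

    def _send_events_to_backend(self, events) -> bool:
        if self.trace_batch_id is None:
            return False  # batch never existed server-side; skip sending
        self.sent.extend(events)
        return True
```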
* Fix lock_store crash when redis package is not installed
`REDIS_URL` being set was enough to trigger a Redis lock, which would
raise `ImportError` if the `redis` package wasn't available. Added
`_redis_available()` to guard on both the env var and the import.
* Simplify tests
* Simplify tests #2
* fix: enhance LLM response handling and serialization
* Updated the Flow class to improve error handling when both structured and simple prompting fail, ensuring the first outcome is returned as a fallback.
* Introduced a new function, _serialize_llm_for_context, to properly serialize LLM objects with provider prefixes for better context management.
* Added tests to validate the new serialization logic and ensure correct behavior when LLM calls fail.
This update enhances the robustness of LLM interactions and improves the overall flow of handling outcomes.
* fix: patch VCR response handling to prevent httpx.ResponseNotRead errors (#4917)
* fix: patch VCR response handling to prevent httpx.ResponseNotRead errors
VCR's _from_serialized_response mocks httpx.Response.read(), which
prevents the response's internal _content attribute from being properly
initialized. When OpenAI's client (using with_raw_response) accesses
response.content, httpx raises ResponseNotRead.
This patch explicitly sets response._content after the response is
created, ensuring that tests using VCR cassettes work correctly with
the OpenAI client's raw response handling.
Fixes tests:
- test_hierarchical_crew_creation_tasks_with_sync_last
- test_conditional_task_last_task_when_conditional_is_false
- test_crew_log_file_output
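The failure mode and the patch can be illustrated with a stand-in (real httpx/VCR internals are not reproduced here; `FakeResponse` and `patch_serialized_response` are hypothetical):

```python
# Minimal stand-in illustrating the failure mode: when read() is mocked
# out, the response's _content is never populated and .content raises.
class FakeResponse:
    def __init__(self):
        self._content = None  # a mocked read() never fills this

    @property
    def content(self) -> bytes:
        if self._content is None:
            raise RuntimeError("ResponseNotRead")  # httpx.ResponseNotRead analogue
        return self._content

def patch_serialized_response(response: FakeResponse, body: bytes) -> FakeResponse:
    # The fix: set _content explicitly after the response is created,
    # so raw-response accessors see it as already read.
    response._content = body
    return response
```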
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
---------
Co-authored-by: Joao Moura <joaomdmoura@gmail.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
---------
Co-authored-by: alex-clawd <alex@crewai.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* fix: replace os.system with subprocess.run in unsafe mode pip install
Eliminates shell injection risk (A05) where a malicious library name like
"pkg; rm -rf /" could execute arbitrary host commands. Using list-form
subprocess.run with shell=False ensures the library name is always treated
as a single argument with no shell metacharacter expansion.
Adds two tests: one verifying list-form invocation, one verifying that
shell metacharacters in a library name cannot trigger shell execution.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix: use sys.executable -m pip to satisfy S607 linting rule
S607 flags partial executable paths like ["pip", ...]. Using
[sys.executable, "-m", "pip", ...] provides an absolute path and also
ensures installation targets the correct Python environment.
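Both fixes compose into one call shape. A sketch, with illustrative function names (the real helper in the tool is not shown here):

```python
import subprocess
import sys

def build_pip_argv(library: str) -> list[str]:
    # List-form argv: the library name is a single literal argument, never
    # interpreted by a shell, so metacharacters cannot expand. Using
    # sys.executable -m pip gives an absolute path (satisfying S607) and
    # targets the current interpreter's environment.
    return [sys.executable, "-m", "pip", "install", library]

def install_library(library: str) -> subprocess.CompletedProcess:
    """Sketch of the hardened invocation described above."""
    return subprocess.run(build_pip_argv(library), shell=False, capture_output=True)
```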
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* feat(a2a): add plus api token auth
* feat(a2a): use stub for plus api
* fix: use dynamic separator in slugify for dedup and strip
* fix: remove unused _DUPLICATE_SEPARATOR_PATTERN, cache compiled regex in slugify
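The two slugify commits can be sketched together; the exact character classes and helper names are assumptions, the point is the dynamic separator in dedup/strip and the cached compiled regex:

```python
import functools
import re

@functools.lru_cache(maxsize=None)
def _separator_run_re(separator: str) -> re.Pattern[str]:
    # Compile (and cache) a pattern for runs of the chosen separator,
    # rather than hardcoding a "-" pattern at module scope.
    return re.compile(re.escape(separator) + "{2,}")

def slugify(text: str, separator: str = "-") -> str:
    """Illustrative slugify: non-alphanumeric runs become the separator,
    duplicate separators collapse, and leading/trailing separators are
    stripped, all using the caller-supplied separator."""
    slug = re.sub(r"[^a-z0-9]+", separator, text.lower())
    slug = _separator_run_re(separator).sub(separator, slug)
    return slug.strip(separator)
```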
* feat: introduce PlanningConfig for enhanced agent planning capabilities (#4344)
* feat: introduce PlanningConfig for enhanced agent planning capabilities
This update adds a new PlanningConfig class to manage agent planning configurations, allowing for customizable planning behavior before task execution. The existing reasoning parameter is deprecated in favor of this new configuration, ensuring backward compatibility while enhancing the planning process. Additionally, the Agent class has been updated to utilize this new configuration, and relevant utility functions have been adjusted accordingly. Tests have been added to validate the new planning functionality and ensure proper integration with existing agent workflows.
* dropping redundancy
* fix test
* revert handle_reasoning here
* refactor: update reasoning handling in Agent class
This commit modifies the Agent class to conditionally call the handle_reasoning function based on the executor class being used. The legacy CrewAgentExecutor will continue to utilize handle_reasoning, while the new AgentExecutor will manage planning internally. Additionally, the PlanningConfig class has been referenced in the documentation to clarify its role in enabling or disabling planning. Tests have been updated to reflect these changes and ensure proper functionality.
* improve planning prompts
* matching
* refactor: remove default enabled flag from PlanningConfig in Agent class
* more cassettes
* fix test
* refactor: update planning prompt and remove deprecated methods in reasoning handler
* improve planning prompt
* Lorenze/feat planning pt 2 todo list gen (#4449)
* feat: enhance agent planning with structured todo management
This commit introduces a new planning system within the AgentExecutor class, allowing for the creation of structured todo items from planning steps. The TodoList and TodoItem models have been added to facilitate tracking of plan execution. The reasoning plan now includes a list of steps, improving the clarity and organization of agent tasks. Additionally, tests have been added to validate the new planning functionality and ensure proper integration with existing workflows.
* refactor: update planning prompt and remove deprecated methods in reasoning handler
* improve planning prompt
* improve handler
* linted
* linted
* Lorenze/feat/planning pt 3 todo list execution (#4450)
* execute todos and be able to track them
* feat: introduce PlannerObserver and StepExecutor for enhanced plan execution
This commit adds the PlannerObserver and StepExecutor classes to the CrewAI framework, implementing the observation phase of the Plan-and-Execute architecture. The PlannerObserver analyzes step execution results, determines plan validity, and suggests refinements, while the StepExecutor executes individual todo items in isolation. These additions improve the overall planning and execution process, allowing for more dynamic and responsive agent behavior. Additionally, new observation events have been defined to facilitate monitoring and logging of the planning process.
* refactor: enhance final answer synthesis in AgentExecutor
This commit improves the synthesis of final answers in the AgentExecutor class by implementing a more coherent approach to combining results from multiple todo items. The method now utilizes a single LLM call to generate a polished response, falling back to concatenation if the synthesis fails. Additionally, the test cases have been updated to reflect the changes in planning and execution, ensuring that the results are properly validated and that the plan-and-execute architecture is functioning as intended.
* refactor: implement structured output handling in final answer synthesis
This commit enhances the final answer synthesis process in the AgentExecutor class by introducing support for structured outputs when a response model is specified. The synthesis method now utilizes the response model to produce outputs that conform to the expected schema, while still falling back to concatenation in case of synthesis failures. This change ensures that intermediate steps yield free-text results, but the final output can be structured, improving the overall coherence and usability of the synthesized answers.
* regen tests
* linted
* fix
* Enhance PlanningConfig and AgentExecutor with Reasoning Effort Levels
This update introduces a reasoning effort attribute on PlanningConfig, allowing users to customize the observation and replanning behavior during task execution. AgentExecutor has been modified to utilize this new attribute, routing step observations based on the specified reasoning effort level: low, medium, or high.
Additionally, tests have been added to validate the reasoning effort levels, ensuring that the agent behaves as expected under different configurations. This enhancement improves the adaptability and efficiency of the planning process in agent execution.
* regen cassettes for test and fix test
* cassette regen
* fixing tests
* dry
* Refactor PlannerObserver and StepExecutor to Utilize I18N for Prompts
This update enhances the PlannerObserver and StepExecutor classes by integrating the I18N utility for managing prompts and messages. The system and user prompts are now retrieved from the I18N module, allowing for better localization and maintainability. Additionally, the code has been cleaned up to remove hardcoded strings, improving readability and consistency across the planning and execution processes.
* consolidate agent logic
* fix datetime
* improving step executor
* refactor: streamline observation and refinement process in PlannerObserver
- Updated the PlannerObserver to apply structured refinements directly from observations without requiring a second LLM call.
- Renamed the refinement method for clarity.
- Enhanced documentation to reflect changes in how refinements are handled.
- Removed unnecessary LLM message building and parsing logic, simplifying the refinement process.
- Updated event emissions to include summaries of refinements instead of raw data.
* enhance step executor with tool usage events and validation
- Added event emissions for tool usage, including started and finished events, to track tool execution.
- Implemented validation to ensure expected tools are called during step execution, raising errors when not.
- Refactored the method to handle tool execution with event logging.
- Introduced a new method for parsing tool input into a structured format.
- Updated tests to cover new functionality and ensure correct behavior of tool usage events.
* refactor: enhance final answer synthesis logic in AgentExecutor
- Updated the finalization process to conditionally skip synthesis when the last todo result is sufficient as a complete answer.
- Introduced a new method to determine if the last todo result can be used directly, improving efficiency.
- Added tests to verify the new behavior, ensuring synthesis is skipped when appropriate and maintained when a response model is set.
* fix: update observation handling in PlannerObserver for LLM errors
- Modified the error handling in the PlannerObserver to default to a conservative replan when an LLM call fails.
- Updated the return values to indicate that the step was not completed successfully and that a full replan is needed.
- Added a new test to verify the behavior of the observer when an LLM error occurs, ensuring the correct replan logic is triggered.
* refactor: enhance planning and execution flow in agents
- Updated the PlannerObserver to accept a kickoff input for standalone task execution, improving flexibility in task handling.
- Refined the step execution process in StepExecutor to support multi-turn action loops, allowing for iterative tool execution and observation.
- Introduced a method to extract relevant task sections from descriptions, ensuring clarity in task requirements.
- Enhanced the AgentExecutor to manage step failures more effectively, triggering replans only when necessary and preserving completed task history.
- Updated translations to reflect changes in planning principles and execution prompts, emphasizing concrete and executable steps.
* refactor: update setup_native_tools to include tool_name_mapping
- Modified the setup_native_tools function to return an additional mapping of tool names.
- Updated StepExecutor and AgentExecutor classes to accommodate the new return value from setup_native_tools.
* fix tests
* linted
* linted
* feat: enhance image block handling in Anthropic provider and update AgentExecutor logic
- Added a method to convert OpenAI-style image_url blocks to Anthropic's required format.
- Updated AgentExecutor to handle cases where no todos are ready, introducing a needs_replan return state.
- Improved fallback answer generation in AgentExecutor to prevent RuntimeErrors when no final output is produced.
* lint
* lint
* 1. Added failed to TodoStatus (planning_types.py)
- TodoStatus now includes failed as a valid state: Literal[pending, running, completed, failed]
- Added mark_failed(step_number, result) method to TodoList
- Added get_failed_todos() method to TodoList
- Updated is_complete to treat both completed and failed as terminal states
- Updated replace_pending_todos docstring to mention failed items are preserved
2. Mark running todos as failed before replan (agent_executor.py)
All three effort-level handlers now call mark_failed() on the current todo before routing to replan_now:
- Low effort (handle_step_observed_low): hard-failure branch
- Medium effort (handle_step_observed_medium): needs_full_replan branch
- High effort (decide_next_action): both needs_full_replan and step_completed_successfully=False branches
3. Updated _should_replan to use get_failed_todos()
Previously filtered on todo.status == failed which was dead code. Now uses the proper accessor method that will actually find failed items.
What this fixes: Before these changes, a step that triggered a replan would stay in running status permanently, causing is_complete to never
return True and next_pending to skip it — leading to stuck execution states. Now failed steps are properly tracked, replanning context correctly
reports them, and LiteAgentOutput.failed_todos will actually return results.
* fix test
* imp on failed states
* adjusted the var name from AgentReActState to AgentExecutorState
* addressed p0 bugs
* more improvements
* linted
* regen cassette
* addressing crictical comments
* ensure configurable timeouts, max_replans and max step iterations
* adjusted tools
* dropping debug statements
* addressed comment
* fix linter
* lints and test fixes
* fix: default observation parse fallback to failure and clean up plan-execute types
When _parse_observation_response fails all parse attempts, default to
step_completed_successfully=False instead of True to avoid silently
masking failures. Extract duplicate _extract_task_section into a shared
utility in agent_utils. Type PlanningConfig.llm as str | BaseLLM | None
instead of str | Any | None. Make StepResult a frozen dataclass for
immutability consistency with StepExecutionContext.
* fix: remove Any from function_calling_llm union type in step_executor
* fix: make BaseTool usage count thread-safe for parallel step execution
Add _usage_lock and _claim_usage() to BaseTool for atomic
check-and-increment of current_usage_count. This prevents race
conditions when parallel plan steps invoke the same tool concurrently
via execute_todos_parallel. Remove the racy pre-check from
execute_single_native_tool_call since the limit is now enforced
atomically inside tool.run().
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Greyson LaLonde <greyson@crewai.com>
* [SECURITY] Fix sandbox escape vulnerability in CodeInterpreterTool (F-001)
This commit addresses a critical security vulnerability where the CodeInterpreterTool
could be exploited via sandbox escape attacks when Docker was unavailable.
Changes:
- Remove insecure fallback to restricted sandbox in run_code_safety()
- Now fails closed with RuntimeError when Docker is unavailable
- Mark run_code_in_restricted_sandbox() as deprecated and insecure
- Add clear security warnings to SandboxPython class documentation
- Update tests to reflect secure-by-default behavior
- Add test demonstrating the sandbox escape vulnerability
- Update README with security requirements and best practices
The previous implementation would fall back to a Python-based 'restricted sandbox'
when Docker was unavailable. However, this sandbox could be easily bypassed using
Python object introspection to recover the original __import__ function, allowing
arbitrary module access and command execution on the host.
The fix enforces Docker as a requirement for safe code execution. Users who cannot
use Docker must explicitly enable unsafe_mode=True, acknowledging the security risks.
Security Impact:
- Prevents RCE via sandbox escape when Docker is unavailable
- Enforces fail-closed security model
- Maintains backward compatibility via unsafe_mode flag
References:
- https://docs.crewai.com/tools/ai-ml/codeinterpretertool
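The fail-closed policy can be sketched as follows; the signature and messages are illustrative stand-ins for the real run_code_safety(), and the execution branches are stubbed out:

```python
def run_code_safety(code: str, docker_available: bool, unsafe_mode: bool = False) -> str:
    """Sketch of the fail-closed policy: no silent fallback to the
    bypassable restricted sandbox when Docker is missing."""
    if docker_available:
        return f"ran in docker: {len(code)} bytes"  # stand-in for the Docker path
    if unsafe_mode:
        # Explicit opt-in: the user acknowledges host execution risk.
        return f"ran on host (unsafe_mode): {len(code)} bytes"
    raise RuntimeError(
        "Docker is required for safe code execution; "
        "set unsafe_mode=True only if you accept the risk."
    )
```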
Co-authored-by: Rip&Tear <theCyberTech@users.noreply.github.com>
* Add security fix documentation for F-001
Co-authored-by: Rip&Tear <theCyberTech@users.noreply.github.com>
* Add Slack summary for security fix
Co-authored-by: Rip&Tear <theCyberTech@users.noreply.github.com>
* Delete SECURITY_FIX_F001.md
* Delete SLACK_SUMMARY.md
* chore: regen cassettes
* chore: regen more cassettes
* Potential fix for pull request finding
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
* Potential fix for pull request finding
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: Rip&Tear <theCyberTech@users.noreply.github.com>
Co-authored-by: Greyson LaLonde <greyson@crewai.com>
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
* fix: remove exclusive locks from read-only storage operations to eliminate lock contention
read operations like search, list_scopes, get_scope_info, count across
LanceDB, ChromaDB, and RAG adapters were holding exclusive locks unnecessarily.
under multi-process prefork workers this caused RedisLock contention triggering
a portalocker bug where AlreadyLocked is raised with the exceptions module as its arg.
- remove store_lock from 7 LanceDB read methods since MVCC handles concurrent reads
- remove store_lock from ChromaDB search/asearch which are thread-safe since v0.4
- remove store_lock from RAG core query and LanceDB adapter query
- wrap lock_store BaseLockException with actionable error message
- add exception handling in encoding_flow/recall_flow ThreadPoolExecutor calls
- fix flow.py double-logging of ancestor listener errors
* fix: remove dead conditional in filter_and_chunk fallback
both branches of the if/else and the except all produced the same
candidates = [scope_prefix] result, making the get_scope_info call
and conditional pointless
* fix: separate lock acquisition from caller body in lock_store
the try/except wrapped the yield inside the contextmanager, which meant
any BaseLockException raised by the caller's code inside the with block
would be caught and re-raised with a misleading "Failed to acquire lock"
message. split into acquire-then-yield so only actual acquisition
failures get the actionable error message.
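The acquire-then-yield split can be sketched like so; BaseLockException here is a local stand-in for portalocker's exception, and the lock protocol is reduced to acquire/release:

```python
from contextlib import contextmanager

class BaseLockException(Exception):
    pass  # stand-in for portalocker's exception hierarchy

@contextmanager
def lock_store(lock):
    """Sketch of the split: only acquisition failures get the actionable
    message; exceptions raised by the caller's body propagate untouched."""
    try:
        lock.acquire()
    except BaseLockException as e:
        raise BaseLockException(
            "Failed to acquire storage lock; check for stale lock files "
            "or contending worker processes."
        ) from e
    try:
        yield lock
    finally:
        lock.release()
```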
* feat(devtools): add release command and fix automerge on protected branches
Replace gh pr merge --auto with polling-based merge wait that prints the
PR URL for manual review. Add unified release command that chains bump
and tag into a single end-to-end workflow.
* feat(devtools): trigger PyPI publish workflow after GitHub release
* refactor(devtools): extract shared helpers to eliminate duplication
Extract _poll_pr_until_merged, _update_all_versions,
_generate_release_notes, _update_docs_and_create_pr,
_create_tag_and_release, and _trigger_pypi_publish into reusable
helpers. All three commands (bump, tag, release) now compose from
these shared functions.
threading.Thread() does not inherit the parent's contextvars.Context,
causing ContextVar-based state (OpenTelemetry spans, Langfuse trace IDs,
and any other request-scoped vars) to be silently dropped in async tasks.
Fix by calling contextvars.copy_context() before spawning each thread and
using ctx.run() as the thread target, which runs the function inside the
captured context.
Affected locations:
- task.py: execute_async() — the primary async task execution path
- utilities/streaming.py: create_chunk_generator() — streaming execution path
Fixes: #4822
Related: #4168, #4286
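The fix pattern can be shown in isolation; `request_id` and `spawn_with_context` are illustrative names, not the actual task.py helpers:

```python
import contextvars
import threading

request_id: contextvars.ContextVar[str] = contextvars.ContextVar("request_id", default="unset")

def spawn_with_context(fn, *args) -> threading.Thread:
    """threading.Thread does not inherit the parent's contextvars, so copy
    the current context and run the target inside it (the fix above)."""
    ctx = contextvars.copy_context()
    thread = threading.Thread(target=ctx.run, args=(fn, *args))
    thread.start()
    return thread

seen: list[str] = []
request_id.set("req-42")
t = spawn_with_context(lambda: seen.append(request_id.get()))
t.join()
```

Without the `ctx.run` wrapper the worker would read the ContextVar's default, which is exactly how spans and trace IDs were being dropped.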
Co-authored-by: Claude <noreply@anthropic.com>
* fix(bedrock): group parallel tool results in single user message
When an AWS Bedrock model makes multiple tool calls in a single
response, the Converse API requires all corresponding tool results
to be sent back in a single user message. Previously, each tool
result was emitted as a separate user message, causing:
ValidationException: Expected toolResult blocks at messages.2.content
Fix: When processing consecutive tool messages, append the toolResult
block to the preceding user message (if it already contains
toolResult blocks) instead of creating a new message. This groups
all parallel tool results together while keeping tool results from
different assistant turns separate.
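The grouping rule reduces to this sketch; message dicts loosely follow the Converse API shape, and the helper name is illustrative:

```python
def append_tool_result(messages: list[dict], tool_result: dict) -> None:
    """Sketch of the grouping rule: if the last message is already a user
    message carrying toolResult blocks, append to it; otherwise start a
    new user message. Results from different assistant turns therefore
    stay in separate messages."""
    last = messages[-1] if messages else None
    if (
        last is not None
        and last["role"] == "user"
        and any("toolResult" in block for block in last["content"])
    ):
        last["content"].append({"toolResult": tool_result})
    else:
        messages.append({"role": "user", "content": [{"toolResult": tool_result}]})
```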
Fixes #4749
Signed-off-by: Giulio Leone <6887247+giulio-leone@users.noreply.github.com>
* Update lib/crewai/tests/llms/bedrock/test_bedrock.py
* fix: group bedrock tool results
Co-authored-by: João Moura <joaomdmoura@gmail.com>
---------
Signed-off-by: Giulio Leone <6887247+giulio-leone@users.noreply.github.com>
Co-authored-by: Giulio Leone <6887247+giulio-leone@users.noreply.github.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
* fix: allow hyphenated tool names in MCP references like notion#get-page
The _SLUG_RE regex on BaseAgent rejected MCP tool references containing
hyphens (e.g. "notion#get-page") because the fragment pattern only
matched \w (word characters).
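A plausible shape for the widened pattern (the real _SLUG_RE may differ; this only illustrates admitting hyphens in both the server slug and the fragment):

```python
import re

# Before: the fragment allowed only \w, so "notion#get-page" was rejected.
# After (sketch): allow hyphens on both sides of the optional "#fragment".
_SLUG_RE = re.compile(r"^[\w-]+(#[\w-]+)?$")

def is_valid_tool_ref(ref: str) -> bool:
    return _SLUG_RE.fullmatch(ref) is not None
```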
* fix: create fresh MCP client per tool invocation to prevent parallel call races
When the LLM dispatches parallel calls to MCP tools on the same server, the executor runs them concurrently via ThreadPoolExecutor. Previously, all tools from a server shared a single MCPClient instance, and even the same tool called twice would reuse one client. Since each thread creates its own asyncio event loop via asyncio.run(), concurrent connect/disconnect calls on the shared client caused anyio cancel-scope errors ("Attempted to exit cancel scope in a different task than it was entered in").
The fix introduces a client_factory pattern: MCPNativeTool now receives a zero-arg callable that produces a fresh MCPClient + transport on every
_run_async() invocation. This eliminates all shared mutable connection state between concurrent calls, whether to the same tool or different tools from the same server.
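The client_factory pattern can be sketched with stand-ins (FakeMCPClient replaces the real MCPClient plus transport; the tool class is simplified to the relevant parts):

```python
from typing import Callable

class FakeMCPClient:
    """Stand-in client: constructed fresh per invocation, no shared state."""
    def __init__(self, server: str):
        self.server = server
    def call_tool(self, name: str, args: dict) -> str:
        return f"{self.server}:{name}({args})"

class MCPNativeToolSketch:
    def __init__(self, tool_name: str, client_factory: Callable[[], FakeMCPClient]):
        # A zero-arg factory, not a client instance: every invocation gets
        # a fresh client, so concurrent calls (same tool or siblings on the
        # same server) never share connection state or event loops.
        self._tool_name = tool_name
        self._client_factory = client_factory

    def run(self, args: dict) -> str:
        client = self._client_factory()  # fresh per call (the fix above)
        return client.call_tool(self._tool_name, args)
```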
* test: ensure we can filter hyphenated MCP tool
* feat: add dedicated Brave Search tools for web, news, image, video, local POIs, and Brave's newest LLM Context endpoint
* fix: normalize transformed response shape
* revert legacy tool name
* fix: schema change prevented property resolution
* Update tool.specs.json
* fix: add fallback for search_langugage
* simplify exports
* makes rate-limiting logic per-instance
* fix(brave-tools): correct _refine_response return type annotations
The abstract method and subclasses annotated _refine_response as returning
dict[str, Any] but most implementations actually return list[dict[str, Any]].
Updated base to return Any, and each subclass to match its actual return type.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Joao Moura <joaomdmoura@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(memory): convert Memory, MemoryScope, and MemorySlice to BaseModel
* fix(test): update mock memory attribute from _read_only to read_only
* fix: handle re-validation in wrap validators and patch BaseModel class in tests
ThreadPoolExecutor threads do not inherit the calling thread's contextvars
context, causing _event_id_stack and _current_celery_task_id to be empty
in worker threads. This broke OTel span parenting for parallel tool calls
(missing parent_event_id) and lost the Celery task ID in the enterprise
tracking layer ([Task ID: no-task]).
Fix by capturing an independent context copy per submission via
contextvars.copy_context().run in CrewAgentExecutor._handle_native_tool_calls,
so each worker thread starts with the correct inherited context without
sharing mutable state across threads.
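The per-submission capture reduces to this pattern; `_event_id` and `submit_with_context` are illustrative names for the ContextVars and wrapper described above:

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

_event_id: contextvars.ContextVar[str] = contextvars.ContextVar("_event_id", default="missing")

def submit_with_context(executor: ThreadPoolExecutor, fn, *args):
    """Capture an independent context copy per submission so each worker
    thread starts from the caller's context without sharing mutable state
    with sibling tasks (the fix above)."""
    ctx = contextvars.copy_context()
    return executor.submit(ctx.run, fn, *args)

_event_id.set("evt-7")
with ThreadPoolExecutor(max_workers=2) as pool:
    results = [submit_with_context(pool, _event_id.get).result() for _ in range(3)]
```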
* feat: enhance memory recall limits and update documentation
- Increased the memory recall limit in the Agent class from 5 to 15.
- Updated the RecallMemoryTool to allow a recall limit of 20.
- Expanded the documentation for the recall_memory feature to emphasize the importance of multiple queries for comprehensive results.
* feat: increase memory recall limit and enhance memory context documentation
- Increased the memory recall limit in the Agent class from 15 to 20.
- Updated the memory context message to clarify the nature of the memories presented and the importance of using the Search memory tool for comprehensive results.
* refactor: remove inferred_categories from RecallState and update category merging logic
- Removed the inferred_categories field from RecallState to simplify state management.
- Updated the _merged_categories method to only merge caller-supplied categories, enhancing clarity in category handling.
* refactor: simplify category handling in RecallFlow
- Updated the _merged_categories method to return only caller-supplied categories, removing the previous merging logic for inferred categories. This change enhances clarity and maintains consistency in category management.
* fix(gemini): group parallel function_response parts in a single Content object
When Gemini makes N parallel tool calls, the API requires all N function_response parts in one Content object. Previously each tool result created a separate Content, causing 400 INVALID_ARGUMENT errors. Merge consecutive function_response parts into the existing Content instead of appending new ones.
* Address change requested
- function_response is a declared field on the types.Part Pydantic model so hasattr can be replaced with p.function_response is not None