Update documentation to use underscore instead of hyphen in the `--skip_provider` flag across all CLI command examples for consistency with actual CLI implementation.
* docs: add missing integration actions from OAuth config
Sync enterprise integration docs with crewai-oauth apps.js config.
Adds ~96 missing actions across 9 integrations:
- Google Contacts: 4 contact group actions
- Google Slides: 14 slide manipulation/content actions
- Microsoft SharePoint: 27 file, Excel, and Word actions
- Microsoft Excel: 2 actions (get_used_range_metadata, get_table_data)
- Microsoft Word: 2 actions (copy_document, move_document)
- Google Docs: 27 text formatting, table, and header/footer actions
- Microsoft Outlook: 7 message and calendar event actions
- Microsoft OneDrive: 5 path-based and discovery actions
- Microsoft Teams: 8 meeting, channel, and reply actions
* docs: add missing integration actions from OAuth config
Sync pt-BR enterprise integration docs with crewai-oauth apps.js config.
Adds ~96 missing actions across 9 integrations, translated to Portuguese:
- Google Contacts: 4 contact group actions
- Google Slides: 14 slide manipulation/content actions
- Microsoft SharePoint: 27 file, Excel, and Word actions
- Microsoft Excel: 2 actions (get_used_range_metadata, get_table_data)
- Microsoft Word: 2 actions (copy_document, move_document)
- Google Docs: 27 text formatting, table, and header/footer actions
- Microsoft Outlook: 7 message and calendar event actions
- Microsoft OneDrive: 5 path-based and discovery actions
- Microsoft Teams: 8 meeting, channel, and reply actions
* docs: add missing integration actions from OAuth config
Sync Korean enterprise integration docs with crewai-oauth apps.js config.
Adds ~96 missing actions across 9 integrations, translated to Korean:
- Google Contacts: 4 contact group actions
- Google Slides: 14 slide manipulation/content actions
- Microsoft SharePoint: 27 file, Excel, and Word actions
- Microsoft Excel: 2 actions (get_used_range_metadata, get_table_data)
- Microsoft Word: 2 actions (copy_document, move_document)
- Google Docs: 27 text formatting, table, and header/footer actions
- Microsoft Outlook: 7 message and calendar event actions
- Microsoft OneDrive: 5 path-based and discovery actions
- Microsoft Teams: 8 meeting, channel, and reply actions
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* fix(security): bump regex from 2024.9.11 to 2026.1.15
Address security vulnerability flagged in regex==2024.9.11
* bump mcp from 1.23.1 to 1.26.0
Address security vulnerability flagged in mcp==1.16.0 (previously resolved to 1.23.3).
Introduce asynchronous human-in-the-loop handling and related fixes.
- Extend human_input provider with async support: AsyncExecutorContext, handle_feedback_async, async prompt helpers (_prompt_input_async, _async_readline), and async training/regular feedback loops in SyncHumanInputProvider.
- Add async handler methods in CrewAgentExecutor and AgentExecutor (_ahandle_human_feedback, _ainvoke_loop) to integrate async provider flows.
- Change PlusAPI.get_agent to an async httpx call and adapt caller in agent_utils to run it via asyncio.run.
- Simplify listener execution in flow.Flow to correctly pass HumanFeedbackResult to listeners and unify execution path for router outcomes.
- Remove deprecated types/hitl.py definitions.
- Add tests covering chained router feedback, rejected paths, and mixed router/non-router listeners to prevent regressions.
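A minimal sketch of the async prompt pattern this enables, assuming blocking stdin reads are pushed to a worker thread so the event loop stays responsive (the helper names mirror the commit text, but the bodies are illustrative):

```python
import asyncio

async def _async_readline(prompt: str = "") -> str:
    # Run blocking input() in a worker thread so the event loop can
    # keep servicing other handlers while waiting on the human.
    return await asyncio.to_thread(input, prompt)

async def _prompt_input_async(question: str) -> str:
    return await _async_readline(f"{question}\n> ")
```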
* fix: add current task id context and flow updates
Introduce a context var for the current task id in `crewai.context` to track task scope. Update `Flow._execute_single_listener` to return `(result, event_id)` and adjust callers to unpack it and append `FlowMethodName(str(result))` to `router_results`. Set/reset the current task id at the start/end of task execution (async and sync), with minor import and call-site tweaks.
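A minimal sketch of the context-var scope, with illustrative helper names (the real module may expose a different API):

```python
from contextvars import ContextVar, Token

current_task_id: ContextVar[str | None] = ContextVar("current_task_id", default=None)

def enter_task_scope(task_id: str) -> Token:
    # Set at the start of task execution; the token restores the prior value.
    return current_task_id.set(task_id)

def exit_task_scope(token: Token) -> None:
    # Reset at the end of task execution (sync and async paths alike).
    current_task_id.reset(token)
```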
* fix: await event futures and flush event bus
Call `crewai_event_bus.flush()` after crew kickoff. In `Flow`, await event handler futures instead of just collecting them: await pending `_event_futures` before finishing, await emitted futures immediately with try/except to log failures, then clear `_event_futures`. This ensures handlers complete and errors surface.
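A hedged sketch of the awaiting behavior, assuming a list of pending asyncio futures (illustrative, not the exact Flow internals):

```python
import asyncio
import logging

async def drain_event_futures(event_futures: list[asyncio.Future]) -> None:
    # Await each handler future so failures surface in the logs
    # instead of being silently dropped, then clear the backlog.
    for future in event_futures:
        try:
            await future
        except Exception:
            logging.exception("Event handler failed")
    event_futures.clear()
```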
* fix: continue iteration on tool completion events
Expand the loop bridge listener to also trigger on tool completion events (`tool_completed` and `native_tool_completed`) so agent iteration resumes after tools finish. Add a `requests.post` mock and response fixture in the liteagent test to simulate platform tool execution. Refresh and sanitize VCR cassettes (updated model responses, timestamps, and header placeholders) to reflect tool-call flows and new recordings.
* fix: thread-safe state proxies & native routing
add thread-safe state proxies and refactor native tool routing.
* introduce `LockedListProxy` and `LockedDictProxy` in `flow.py` and update `StateProxy` to return them for list/dict attrs so mutations are protected by the flow lock.
* update `AgentExecutor` to use `StateProxy` on flow init, guard the messages setter with the state lock, and return a `StateProxy` from the temp state accessor.
* convert `call_llm_native_tools` into a listener (no direct routing return) and add `route_native_tool_result` to route based on state (pending tool calls, final answer, or context error).
* minor cleanup in `continue_iteration` to drop orphan listeners on init.
* update test cassettes for new native tool call responses, timestamps, and ids.
improves concurrency safety for shared state and makes native tool routing explicit.
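An illustrative sketch of the lock-guarded proxy idea; the real LockedListProxy likely covers more of the list interface:

```python
import threading
from typing import Any

class LockedListProxy:
    def __init__(self, target: list, lock: threading.Lock) -> None:
        self._target = target
        self._lock = lock

    def append(self, item: Any) -> None:
        with self._lock:  # mutations happen under the flow lock
            self._target.append(item)

    def __getitem__(self, index: int) -> Any:
        with self._lock:
            return self._target[index]

    def __len__(self) -> int:
        with self._lock:
            return len(self._target)
```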
* chore: regen cassettes
* chore: regen cassettes, remove duplicate listener call path
* feat: add started_event_id and set it in the event bus
* chore: update additional test assumption
* fix: restore event bus handlers on context exit
Fix rollback in the crewai event bus so that exiting the context restores the previous `_sync_handlers`, `_async_handlers`, `_handler_dependencies`, and `_execution_plan_cache` by assigning shallow copies of the saved dicts. Previously these were set to empty dicts on exit, which caused registered handlers and cached execution plans to be lost.
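A minimal sketch of the save/restore semantics, assuming dict-backed registries on the bus (registry names match the commit text; the context-manager shape is illustrative):

```python
from contextlib import contextmanager

REGISTRIES = ("_sync_handlers", "_async_handlers",
              "_handler_dependencies", "_execution_plan_cache")

@contextmanager
def scoped_handlers(bus):
    saved = {name: dict(getattr(bus, name)) for name in REGISTRIES}
    try:
        yield bus
    finally:
        # Restore shallow copies rather than empty dicts, so handlers
        # registered before the scope survive the exit.
        for name, snapshot in saved.items():
            setattr(bus, name, dict(snapshot))
```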
Introduce ContextVar-backed hooks and small API/behavior changes to improve extensibility and testability.
Changes include:
- agents: mark configure_structured_output as abstract and change its parameter to task to reflect use of task metadata.
- tracing: convert _first_time_trace_hook to a ContextVar and call .get() to safely retrieve the hook.
- console formatter: add _disable_version_check ContextVar and skip version checks when set (avoids noisy checks in certain contexts).
- flow: use current_triggering_event_id variable when scheduling listener tasks to keep naming consistent.
- hallucination guardrail: make context optional, add _validate_output_hook to allow custom validation hooks, update examples and return contract to allow hooks to override behavior.
- agent utilities: add _create_plus_client_hook for injecting a Plus client (used in tests/alternate flows), ensure structured tools have current_usage_count initialized and propagate to original tool, and fall back to creating PlusAPI client when no hook is provided.
Refactor agent executor to delegate human interactions to a provider: add messages and ask_for_human_input properties, implement _invoke_loop and _format_feedback_message, and replace the internal iterative/training feedback logic with a call to get_provider().handle_feedback.
Make LLMGuardrail kickoff coroutine-aware by detecting coroutines and running them via asyncio.run so both sync and async agents are supported.
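A sketch of the coroutine detection, under the assumption that no event loop is already running when the sync path is taken:

```python
import asyncio
import inspect

def run_maybe_async(result):
    # Async agents return coroutines; bridge them into the sync flow.
    if inspect.iscoroutine(result):
        return asyncio.run(result)
    return result
```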
Make telemetry more robust by safely handling missing task.output (use empty string) and returning early if span is None before setting attributes.
Improve serialization to detect circular references via an _ancestors set, propagate it through recursive calls, and pass exclude/max_depth/_current_depth consistently to prevent infinite recursion and produce stable serializable output.
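An illustrative sketch of the cycle-safe recursion (parameter names follow the description; the real function handles more types):

```python
from typing import Any

def to_serializable(obj: Any, exclude: frozenset = frozenset(),
                    max_depth: int = 5, _current_depth: int = 0,
                    _ancestors: frozenset = frozenset()) -> Any:
    # Stop on the depth limit, or when obj is its own ancestor (a cycle).
    if _current_depth >= max_depth or id(obj) in _ancestors:
        return "<truncated>"
    if isinstance(obj, (str, int, float, bool, type(None))):
        return obj
    branch = _ancestors | {id(obj)}  # propagate ancestry to children
    if isinstance(obj, dict):
        return {str(k): to_serializable(v, exclude, max_depth,
                                        _current_depth + 1, branch)
                for k, v in obj.items() if k not in exclude}
    if isinstance(obj, (list, tuple, set)):
        return [to_serializable(v, exclude, max_depth,
                                _current_depth + 1, branch) for v in obj]
    return repr(obj)
```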
Allow hook registration to accept both typed hook types and plain callables by importing and using the After*/Before*CallHookCallable types, and add explicit LLMCallHookContext and ToolCallHookContext typing in crew_base. Related changes:
- Introduce a post-initialize crew hook list and invoke hooks after Crew instance initialization.
- Refactor filtered hook factory functions with precise typing and clearer local names (before_llm_hook/after_llm_hook/before_tool_hook/after_tool_hook), and register those with the instance.
- Update the CrewInstance protocol to include _registered_hook_functions and _hooks_being_registered fields.
When a tool raises an error, both ToolUsageErrorEvent and
ToolUsageFinishedEvent were being emitted. Since both events pop the
event scope stack, this caused the agent scope to be incorrectly popped
along with the tool scope.
Enable dynamic extension exports and apply small behavior fixes across the events and flow modules:
- events/__init__.py: Added _extension_exports and extended __getattr__ to lazily resolve registered extension values or import paths.
- events/event_bus.py: Implemented off() to unregister sync/async handlers, clean handler dependencies, and invalidate execution plan cache.
- events/listeners/tracing/utils.py: Added Callable import and _first_time_trace_hook to allow overriding first-time trace auto-collection behavior.
- events/types/tool_usage_events.py: Changed ToolUsageEvent.run_attempts default from None to 0 to avoid nullable handling.
- events/utils/console_formatter.py: Respect CREWAI_DISABLE_VERSION_CHECK env var to skip version checks in CI-like flows.
- flow/async_feedback/__init__.py: Added typing.Any import, _extension_exports and __getattr__ to support extensions via attribute lookup.
These changes add extension points and safer defaults, and provide a way to unregister event handlers.
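The lazy lookup likely follows the PEP 562 module-level `__getattr__` pattern; a hedged sketch, treating string values as "module:attr" import paths:

```python
from importlib import import_module

_extension_exports: dict[str, object] = {}

def __getattr__(name: str):
    if name in _extension_exports:
        value = _extension_exports[name]
        if isinstance(value, str):
            # Resolve "package.module:attr" paths on first access.
            module_path, _, attr = value.partition(":")
            value = getattr(import_module(module_path), attr)
        return value
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```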
* refactor: extract hitl to provider pattern
- add HumanInputProvider protocol with setup_messages and handle_feedback
- move sync HITL logic from the executor to SyncHumanInputProvider
- add _passthrough_exceptions extension point in agent/core.py
- create crewai.core.providers module for extensible components
- remove _ask_human_input from base_agent_executor_mixin
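A sketch of the protocol seam; the method signatures are inferred from the bullet names, not the actual definitions:

```python
from typing import Any, Protocol

class HumanInputProvider(Protocol):
    """Pluggable seam for human-in-the-loop interactions."""

    def setup_messages(self, messages: list[dict[str, Any]]) -> None: ...

    def handle_feedback(self, feedback: str) -> Any: ...
```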
* feat: enhance AnthropicCompletion to support available functions in tool execution
- Updated the `_prepare_completion_params` method to accept `available_functions` for better tool handling.
- Modified tool execution logic to directly return results from tools when `available_functions` is provided, aligning behavior with OpenAI's model.
- Added new test cases to validate the execution of tools with available functions, ensuring correct argument passing and result formatting.
This change improves the flexibility and usability of the Anthropic LLM integration, allowing for more complex interactions with tools.
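Roughly, the direct-execution path works like this (illustrative names, not the provider's actual internals):

```python
def execute_tool_call(tool_name: str, tool_args: dict, available_functions=None):
    if available_functions and tool_name in available_functions:
        # OpenAI-style behavior: invoke the callable and hand its
        # result straight back to the model loop.
        return available_functions[tool_name](**tool_args)
    raise KeyError(f"No callable registered for tool {tool_name!r}")
```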
* refactor: remove redundant event emission in AnthropicCompletion
* fix test
* dry up
Dependabot's uv updater defaults to Python 3.14.2, which is incompatible
with the project's requires-python constraint (>=3.10, <3.14). Adding
.python-version pins the Python version to 3.13 for dependency updates.
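For reference, the pin file contains just the bare version string:

```
3.13
```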
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* limit to 0.5.9 due to breaking changes + add env var requirements
* fix tool spec extraction that was ignoring fields with defaults
* original tool spec
* update spec
When monitoring LLM events, consumers need to know which events belong
to the same API call. Before this change, there was no way to correlate
LLMCallStartedEvent, LLMStreamChunkEvent, and LLMCallCompletedEvent
belonging to the same request.
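A hedged sketch of correlation via a shared id; the field name here is illustrative (the actual change may use started_event_id, mentioned earlier):

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class LLMCallStartedEvent:
    call_id: str = field(default_factory=lambda: str(uuid.uuid4()))

@dataclass
class LLMStreamChunkEvent:
    call_id: str  # same id as the started event for this request
    chunk: str = ""

@dataclass
class LLMCallCompletedEvent:
    call_id: str
```

Consumers can then group events by the shared id to reassemble one API call's lifecycle.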
* feat: add server-side auth schemes and protocol extensions
- add server auth scheme base class and implementations (api key, bearer token, basic/digest auth, mtls)
- add server-side extension system for a2a protocol extensions
- add extensions middleware for x-a2a-extensions header management
- add extension validation and registry utilities
- enhance auth utilities with server-side support
- add async intercept method to match client call interceptor protocol
- fix type_checking import to resolve mypy errors with a2aconfig
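An illustrative sketch of the scheme shape for the API-key case (the real classes likely carry more protocol context):

```python
import hmac
from abc import ABC, abstractmethod

class ServerAuthScheme(ABC):
    """Base class for server-side request authentication."""

    @abstractmethod
    def authenticate(self, headers: dict[str, str]) -> bool: ...

class APIKeyAuthScheme(ServerAuthScheme):
    def __init__(self, expected_key: str, header: str = "X-API-Key") -> None:
        self._expected = expected_key
        self._header = header

    def authenticate(self, headers: dict[str, str]) -> bool:
        # Constant-time comparison to avoid timing side channels.
        return hmac.compare_digest(headers.get(self._header, ""), self._expected)
```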
* feat: add transport negotiation and content type handling
- add transport negotiation logic with fallback support
- add content type parser and encoder utilities
- add transport configuration models (client and server)
- add transport types and enums
- enhance config with transport settings
- add negotiation events for transport and content type
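The fallback logic presumably reduces to a preference-order intersection; a minimal sketch with assumed names:

```python
def negotiate_transport(client_prefs: list[str], server_supported: set[str],
                        fallback: str = "jsonrpc") -> str:
    # Honor the client's preference order, falling back to a default
    # transport when nothing matches.
    for transport in client_prefs:
        if transport in server_supported:
            return transport
    return fallback
```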
* feat: add a2a delegation support to LiteAgent
* feat: add file input support to a2a delegation and tasks
Introduces handling of file inputs in A2A delegation flows by converting file dictionaries to protocol-compatible parts and propagating them through delegation and task execution functions. Updates include utility functions for file conversion, changes to message construction, and passing input_files through relevant APIs.
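A hedged sketch of the file-to-part conversion; the dict keys and part shape are illustrative, not the exact A2A schema:

```python
import base64

def file_dict_to_part(file: dict) -> dict:
    # Binary content is base64-encoded for protocol transport.
    return {
        "kind": "file",
        "name": file.get("name", "unnamed"),
        "mime_type": file.get("mime_type", "application/octet-stream"),
        "bytes": base64.b64encode(file["content"]).decode("ascii"),
    }
```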
* feat: add LiteAgent a2a delegation support to kickoff methods
* fix: improve output handling and response model integration in agents
- Refactored output handling in the Agent class to ensure proper conversion and formatting of outputs, including support for BaseModel instances.
- Enhanced the AgentExecutor class to correctly utilize response models during execution, improving the handling of structured outputs.
- Updated the Gemini and Anthropic completion providers to ensure compatibility with new response model handling, including the addition of strict mode for function definitions.
- Improved the OpenAI completion provider to enforce strict adherence to function schemas.
- Adjusted translations to clarify instructions regarding output formatting and schema adherence.
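Strict mode in the OpenAI tools format looks roughly like this (the weather schema is a stand-in example):

```python
tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "strict": True,  # enforce exact adherence to the schema
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],            # strict mode requires all
            "additionalProperties": False,   # properties pinned down
        },
    },
}
```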
* drop a leftover print that didn't get deleted properly
* fixes Gemini
* Azure working
* Bedrock works
* added tests
* adjust test
* fix tests and regen
* fix tests and regen
* refactor: ensure stop words are applied correctly in Azure, Gemini, and OpenAI completions; add tests to validate behavior with structured outputs
* linting
* no need for post-tool reflection on native tools
* refactor: update prompt generation to prevent thought leakage
- Modified the prompt structure to ensure agents without tools use a simplified format, avoiding ReAct instructions.
- Introduced a new 'task_no_tools' slice for agents lacking tools, ensuring clean output without Thought: prefixes.
- Enhanced test coverage to verify that prompts do not encourage thought leakage, ensuring outputs remain focused and direct.
- Added integration tests to validate that real LLM calls produce clean outputs without internal reasoning artifacts.
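The slice selection reduces to a tool check; a minimal sketch with assumed names:

```python
def select_task_slice(agent_tools: list) -> str:
    # Tool-less agents get the simplified slice, skipping the ReAct
    # "Thought:" scaffolding that can leak into final output.
    return "task" if agent_tools else "task_no_tools"
```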
* don't forget the cassettes
- add Gemini 2.0 schema support using response_json_schema with propertyOrdering while retaining backward compatibility for earlier models
- refactor LLM completions to return validated Pydantic models when a response_model is provided, updating hooks, types, and tests for consistent structured outputs
- extend AgentFinish and executors to support BaseModel outputs, improve Anthropic structured parsing, and clean up schema utilities, tests, and original_json handling
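A sketch of the validated-return behavior, assuming Pydantic v2 (Answer is a stand-in model):

```python
from pydantic import BaseModel

class Answer(BaseModel):
    city: str
    population: int

def parse_completion(raw_json: str, response_model: type[BaseModel] | None = None):
    if response_model is not None:
        # Return a validated model instance instead of raw text.
        return response_model.model_validate_json(raw_json)
    return raw_json
```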