Compare commits

...

64 Commits

Author SHA1 Message Date
Greyson LaLonde
fb0a8c4549 Merge branch 'main' into codex/nl2sql-security-docs 2026-02-12 10:59:37 -05:00
Giovanni Vella
6c0fb7f970 fix broken tasks table
Signed-off-by: Giovanni Vella <giovanni.vella98@gmail.com>
2026-02-12 10:55:40 -05:00
Rip&Tear
db0c8ff468 Merge branch 'main' into codex/nl2sql-security-docs 2026-02-12 13:44:54 +08:00
Greyson LaLonde
cde33fd981 feat: add yanked detection for version notes
2026-02-11 23:31:06 -05:00
theCyberTech
1f8cfe3282 docs: clarify NL2SQL security model and hardening guidance 2026-02-12 10:32:56 +08:00
Lorenze Jay
2ed0c2c043 imp compaction (#4399)
* imp compaction

* fix lint

* cassette gen

* cassette gen

* improve assert

* adding azure

* fix global docstring
2026-02-11 15:52:03 -08:00
Lorenze Jay
0341e5aee7 supporting prompt cache results show (#4447)
* supporting prompt cache

* dropped azure tests

* fix tests

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-02-11 14:07:15 -08:00
Mike Plachta
397d14c772 fix: correct CLI flag format from --skip-provider to --skip_provider (#4462)
Update documentation to use underscore instead of hyphen in the `--skip_provider` flag across all CLI command examples for consistency with actual CLI implementation.
2026-02-11 13:51:54 -08:00
Lucas Gomide
fc3e86e9a3 docs: add 96 missing actions across 9 integrations (#4460)
* docs: add missing integration actions from OAuth config

Sync enterprise integration docs with crewai-oauth apps.js config.
Adds ~96 missing actions across 9 integrations:
- Google Contacts: 4 contact group actions
- Google Slides: 14 slide manipulation/content actions
- Microsoft SharePoint: 27 file, Excel, and Word actions
- Microsoft Excel: 2 actions (get_used_range_metadata, get_table_data)
- Microsoft Word: 2 actions (copy_document, move_document)
- Google Docs: 27 text formatting, table, and header/footer actions
- Microsoft Outlook: 7 message and calendar event actions
- Microsoft OneDrive: 5 path-based and discovery actions
- Microsoft Teams: 8 meeting, channel, and reply actions

* docs: add missing integration actions from OAuth config

Sync pt-BR enterprise integration docs with crewai-oauth apps.js config.
Adds ~96 missing actions across 9 integrations, translated to Portuguese:
- Google Contacts: 2 contact group actions
- Google Slides: 14 slide manipulation/content actions
- Microsoft SharePoint: 27 file, Excel, and Word actions
- Microsoft Excel: 2 actions (get_used_range_metadata, get_table_data)
- Microsoft Word: 2 actions (copy_document, move_document)
- Google Docs: 27 text formatting, table, and header/footer actions
- Microsoft Outlook: 7 message and calendar event actions
- Microsoft OneDrive: 5 path-based and discovery actions
- Microsoft Teams: 8 meeting, channel, and reply actions

* docs: add missing integration actions from OAuth config

Sync Korean enterprise integration docs with crewai-oauth apps.js config.
Adds ~96 missing actions across 9 integrations, translated to Korean:
- Google Contacts: 2 contact group actions
- Google Slides: 14 slide manipulation/content actions
- Microsoft SharePoint: 27 file, Excel, and Word actions
- Microsoft Excel: 2 actions (get_used_range_metadata, get_table_data)
- Microsoft Word: 2 actions (copy_document, move_document)
- Google Docs: 27 text formatting, table, and header/footer actions
- Microsoft Outlook: 7 message and calendar event actions
- Microsoft OneDrive: 5 path-based and discovery actions
- Microsoft Teams: 8 meeting, channel, and reply actions

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-02-11 15:17:54 -05:00
Mike Plachta
2882df5daf replace old .cursorrules with AGENTS.md (#4451)
* chore: remove .cursorrules file
feat: add AGENTS.md file to any newly created file

* move the copy of the tests
2026-02-11 10:07:24 -08:00
Greyson LaLonde
3a22e80764 fix: ensure openai tool call stream is finalized
2026-02-11 10:02:31 -05:00
Greyson LaLonde
9b585a934d fix: pass started_event_id to crew 2026-02-11 09:30:07 -05:00
Rip&Tear
46e1b02154 chore: fix codeql coverage and action version (#4454) 2026-02-11 18:20:07 +08:00
Rip&Tear
87675b49fd test: avoid URL substring assertion in brave search test (#4453)
2026-02-11 14:32:10 +08:00
Lucas Gomide
a3bee66be8 Address OpenSSL CVE-2025-15467 vulnerability (#4426)
* fix(security): bump regex from 2024.9.11 to 2026.1.15

Address security vulnerability flagged in regex==2024.9.11

* bump mcp from 1.23.1 to 1.26.0

Address security vulnerability flagged in mcp==1.16.0 (resolved to 1.23.3)
2026-02-10 09:39:35 -08:00
Greyson LaLonde
f6fa04528a fix: add async HITL support and chained-router tests
Adds asynchronous human-in-the-loop handling and related fixes.

- Extend human_input provider with async support: AsyncExecutorContext, handle_feedback_async, async prompt helpers (_prompt_input_async, _async_readline), and async training/regular feedback loops in SyncHumanInputProvider.
- Add async handler methods in CrewAgentExecutor and AgentExecutor (_ahandle_human_feedback, _ainvoke_loop) to integrate async provider flows.
- Change PlusAPI.get_agent to an async httpx call and adapt caller in agent_utils to run it via asyncio.run.
- Simplify listener execution in flow.Flow to correctly pass HumanFeedbackResult to listeners and unify execution path for router outcomes.
- Remove deprecated types/hitl.py definitions.
- Add tests covering chained router feedback, rejected paths, and mixed router/non-router listeners to prevent regressions.
2026-02-06 16:29:27 -05:00
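
To make the async feedback path above concrete, here is a minimal sketch of an async prompt helper in the spirit of `_prompt_input_async`; the helper name and surrounding flow are assumptions, not the actual crewai code.

```python
# Illustrative only: an async input prompt that keeps the event loop free,
# assuming a helper shaped like the _prompt_input_async mentioned above.
import asyncio

async def prompt_input_async(prompt: str) -> str:
    loop = asyncio.get_running_loop()
    # Run blocking input() in a worker thread so event handlers keep running.
    return await loop.run_in_executor(None, input, prompt)

async def feedback_loop() -> str:
    feedback = await prompt_input_async("Feedback (empty line to accept): ")
    return feedback.strip()

if __name__ == "__main__":
    print(asyncio.run(feedback_loop()))
```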
Greyson LaLonde
7d498b29be fix: event ordering; flow state locks, routing
* fix: add current task id context and flow updates

introduce a context var for the current task id in `crewai.context` to track task scope. update `Flow._execute_single_listener` to return `(result, event_id)` and adjust callers to unpack it and append `FlowMethodName(str(result))` to `router_results`. set/reset the current task id at the start/end of task execution (async + sync) with minor import and call-site tweaks.

* fix: await event futures and flush event bus

call `crewai_event_bus.flush()` after crew kickoff. in `Flow`, await event handler futures instead of just collecting them: await pending `_event_futures` before finishing, await emitted futures immediately with try/except to log failures, then clear `_event_futures`. ensures handlers complete and errors surface.

* fix: continue iteration on tool completion events

expand the loop bridge listener to also trigger on tool completion events (`tool_completed` and `native_tool_completed`) so agent iteration resumes after tools finish. add a `requests.post` mock and response fixture in the liteagent test to simulate platform tool execution. refresh and sanitize vcr cassettes (updated model responses, timestamps, and header placeholders) to reflect tool-call flows and new recordings.

* fix: thread-safe state proxies & native routing

add thread-safe state proxies and refactor native tool routing.

* introduce `LockedListProxy` and `LockedDictProxy` in `flow.py` and update `StateProxy` to return them for list/dict attrs so mutations are protected by the flow lock.
* update `AgentExecutor` to use `StateProxy` on flow init, guard the messages setter with the state lock, and return a `StateProxy` from the temp state accessor.
* convert `call_llm_native_tools` into a listener (no direct routing return) and add `route_native_tool_result` to route based on state (pending tool calls, final answer, or context error).
* minor cleanup in `continue_iteration` to drop orphan listeners on init.
* update test cassettes for new native tool call responses, timestamps, and ids.

improves concurrency safety for shared state and makes native tool routing explicit.

* chore: regen cassettes

* chore: regen cassettes, remove duplicate listener call path
2026-02-06 14:02:43 -05:00
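
A minimal sketch of the lock-guarded proxy idea from the commit above, assuming a shape like `LockedListProxy`; illustrative only, not the real `flow.py` implementation.

```python
import threading
from typing import Any

class LockedListProxy:
    """Route list reads and mutations through a shared lock (sketch)."""

    def __init__(self, data: list[Any], lock: threading.Lock) -> None:
        self._data = data
        self._lock = lock

    def append(self, item: Any) -> None:
        with self._lock:
            self._data.append(item)

    def __getitem__(self, index: int) -> Any:
        with self._lock:
            return self._data[index]

    def __len__(self) -> int:
        with self._lock:
            return len(self._data)

# Concurrent writers share one lock, so mutations cannot interleave:
lock = threading.Lock()
messages = LockedListProxy([], lock)
messages.append({"role": "user", "content": "hi"})
```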
Greyson LaLonde
1308bdee63 feat: add started_event_id and set in eventbus
* feat: add started_event_id and set in eventbus

* chore: update additional test assumption

* fix: restore event bus handlers on context exit

fix rollback in crewai events bus so that exiting the context restores
the previous _sync_handlers, _async_handlers, _handler_dependencies, and _execution_plan_cache by assigning shallow copies of the saved dicts. previously these
were set to empty dicts on exit, which caused registered handlers and cached execution plans to be lost.
2026-02-05 21:28:23 -05:00
Greyson LaLonde
6bb1b178a1 chore: extension points
Introduce ContextVar-backed hooks and small API/behavior changes to improve extensibility and testability.

Changes include:
- agents: mark configure_structured_output as abstract and change its parameter to task to reflect use of task metadata.
- tracing: convert _first_time_trace_hook to a ContextVar and call .get() to safely retrieve the hook.
- console formatter: add _disable_version_check ContextVar and skip version checks when set (avoids noisy checks in certain contexts).
- flow: use current_triggering_event_id variable when scheduling listener tasks to keep naming consistent.
- hallucination guardrail: make context optional, add _validate_output_hook to allow custom validation hooks, update examples and return contract to allow hooks to override behavior.
- agent utilities: add _create_plus_client_hook for injecting a Plus client (used in tests/alternate flows), ensure structured tools have current_usage_count initialized and propagate to original tool, and fall back to creating PlusAPI client when no hook is provided.
2026-02-05 12:49:54 -05:00
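
A small sketch of a ContextVar-backed hook as described above; the hook name comes from the commit, the call sites are assumptions.

```python
from contextvars import ContextVar
from typing import Callable, Optional

_first_time_trace_hook: ContextVar[Optional[Callable[[], None]]] = ContextVar(
    "_first_time_trace_hook", default=None
)

def run_first_time_trace() -> None:
    hook = _first_time_trace_hook.get()  # .get() is safe when unset
    if hook is not None:
        hook()

# Tests can override the hook without touching module-global state:
token = _first_time_trace_hook.set(lambda: print("custom trace bootstrap"))
run_first_time_trace()
_first_time_trace_hook.reset(token)
```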
Greyson LaLonde
fe2a4b4e40 chore: bug fixes and more refactor
Refactor agent executor to delegate human interactions to a provider: add messages and ask_for_human_input properties, implement _invoke_loop and _format_feedback_message, and replace the internal iterative/training feedback logic with a call to get_provider().handle_feedback.

Make LLMGuardrail kickoff coroutine-aware by detecting coroutines and running them via asyncio.run so both sync and async agents are supported.

Make telemetry more robust by safely handling missing task.output (use empty string) and returning early if span is None before setting attributes.

Improve serialization to detect circular references via an _ancestors set, propagate it through recursive calls, and pass exclude/max_depth/_current_depth consistently to prevent infinite recursion and produce stable serializable output.
2026-02-04 21:21:54 -05:00
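
A minimal sketch of the circular-reference guard described above, assuming parameter names like `_ancestors` from the commit message; not the actual crewAI serializer.

```python
from typing import Any

def to_serializable(
    obj: Any,
    exclude: set[str] | None = None,
    max_depth: int = 5,
    _current_depth: int = 0,
    _ancestors: set[int] | None = None,
) -> Any:
    """Recursively convert obj, replacing cycles and over-deep values."""
    exclude = exclude or set()
    _ancestors = _ancestors or set()
    if _current_depth >= max_depth:
        return str(obj)
    if id(obj) in _ancestors:  # cycle: this object is its own ancestor
        return "<circular reference>"
    if isinstance(obj, dict):
        _ancestors = _ancestors | {id(obj)}  # propagate through recursion
        return {
            k: to_serializable(v, exclude, max_depth, _current_depth + 1, _ancestors)
            for k, v in obj.items()
            if k not in exclude
        }
    if isinstance(obj, (list, tuple, set)):
        _ancestors = _ancestors | {id(obj)}
        return [
            to_serializable(v, exclude, max_depth, _current_depth + 1, _ancestors)
            for v in obj
        ]
    return obj if isinstance(obj, (str, int, float, bool, type(None))) else str(obj)
```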
Greyson LaLonde
711e7171e1 chore: improve hook typing and registration
Allow hook registration to accept both typed hook types and plain callables by importing and using After*/Before*CallHookCallable types; add explicit LLMCallHookContext and ToolCallHookContext typing in crew_base. Introduce a post-initialize crew hook list and invoke hooks after Crew instance initialization. Refactor filtered hook factory functions to include precise typing and clearer local names (before_llm_hook/after_llm_hook/before_tool_hook/after_tool_hook) and register those with the instance. Update CrewInstance protocol to include _registered_hook_functions and _hooks_being_registered fields.
2026-02-04 21:16:20 -05:00
Vini Brasil
76b5f72e81 Fix tool error causing double event scope pop (#4373)
When a tool raises an error, both ToolUsageErrorEvent and
ToolUsageFinishedEvent were being emitted. Since both events pop the
event scope stack, this caused the agent scope to be incorrectly popped
along with the tool scope.
2026-02-04 20:34:08 -03:00
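
A toy model of the double-pop bug and its fix, assuming a simple list-based scope stack; only the event names come from the commit.

```python
scope_stack: list[str] = []

def emit_tool_events(tool_failed: bool) -> None:
    scope_stack.append("tool")
    if tool_failed:
        # The ToolUsageErrorEvent handler pops the tool scope...
        scope_stack.pop()
        # ...and before the fix ToolUsageFinishedEvent also fired,
        # popping the enclosing agent scope by mistake:
        # scope_stack.pop()
    else:
        # The ToolUsageFinishedEvent handler pops the tool scope.
        scope_stack.pop()

scope_stack.append("agent")
emit_tool_events(tool_failed=True)
assert scope_stack == ["agent"]  # the agent scope survives after the fix
```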
Greyson LaLonde
d86d43d3e0 chore: refactor crew to provider
Enable dynamic extension exports and small behavior fixes across events and flow modules:

- events/__init__.py: Added _extension_exports and extended __getattr__ to lazily resolve registered extension values or import paths.
- events/event_bus.py: Implemented off() to unregister sync/async handlers, clean handler dependencies, and invalidate execution plan cache.
- events/listeners/tracing/utils.py: Added Callable import and _first_time_trace_hook to allow overriding first-time trace auto-collection behavior.
- events/types/tool_usage_events.py: Changed ToolUsageEvent.run_attempts default from None to 0 to avoid nullable handling.
- events/utils/console_formatter.py: Respect CREWAI_DISABLE_VERSION_CHECK env var to skip version checks in CI-like flows.
- flow/async_feedback/__init__.py: Added typing.Any import, _extension_exports and __getattr__ to support extensions via attribute lookup.

These changes add extension points and safer defaults, and provide a way to unregister event handlers.
2026-02-04 16:05:21 -05:00
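
A sketch of handler unregistration in the spirit of the `off()` described above; the internal structure is an assumption, not the real `event_bus` module.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self) -> None:
        self._sync_handlers: dict[type, list[Callable]] = defaultdict(list)
        self._execution_plan_cache: dict[type, list[Callable]] = {}

    def on(self, event_type: type, handler: Callable) -> None:
        self._sync_handlers[event_type].append(handler)
        self._execution_plan_cache.pop(event_type, None)

    def off(self, event_type: type, handler: Callable) -> None:
        handlers = self._sync_handlers.get(event_type, [])
        if handler in handlers:
            handlers.remove(handler)
        # Invalidate any cached execution plan for this event type.
        self._execution_plan_cache.pop(event_type, None)
```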
Greyson LaLonde
6bfc98e960 refactor: extract hitl to provider pattern
* refactor: extract hitl to provider pattern

- add humaninputprovider protocol with setup_messages and handle_feedback
- move sync hitl logic from executor to SyncHumanInputProvider
- add _passthrough_exceptions extension point in agent/core.py
- create crewai.core.providers module for extensible components
- remove _ask_human_input from base_agent_executor_mixin
2026-02-04 15:40:22 -05:00
Greyson LaLonde
3cc33ef6ab fix: resolve complex schema $ref pointers in mcp tools
* fix: resolve complex schema $ref pointers in mcp tools

* chore: update tool specifications

* fix: adapt mcp tools; sanitize pydantic json schemas

* fix: strip nulls from json schemas and simplify mcp args

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-02-03 20:47:58 -05:00
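
A simplified sketch of resolving local `$ref` pointers in a JSON schema, in the spirit of the fix above; it assumes acyclic refs and is not the crewAI implementation.

```python
from typing import Any

def resolve_refs(node: Any, root: dict) -> Any:
    """Inline local "#/..." pointers; simplified, assumes acyclic refs."""
    if isinstance(node, dict):
        if "$ref" in node:
            target: Any = root
            for part in node["$ref"].removeprefix("#/").split("/"):
                target = target[part]  # walk e.g. "#/$defs/Point"
            return resolve_refs(target, root)
        return {k: resolve_refs(v, root) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_refs(v, root) for v in node]
    return node

schema = {
    "$defs": {"Point": {"type": "object",
                        "properties": {"x": {"type": "number"}}}},
    "properties": {"origin": {"$ref": "#/$defs/Point"}},
}
assert resolve_refs(schema, schema)["properties"]["origin"]["type"] == "object"
```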
Lorenze Jay
3fec4669af Lorenze/fix/anthropic available functions call (#4360)
* feat: enhance AnthropicCompletion to support available functions in tool execution

- Updated the `_prepare_completion_params` method to accept `available_functions` for better tool handling.
- Modified tool execution logic to directly return results from tools when `available_functions` is provided, aligning behavior with OpenAI's model.
- Added new test cases to validate the execution of tools with available functions, ensuring correct argument passing and result formatting.

This change improves the flexibility and usability of the Anthropic LLM integration, allowing for more complex interactions with tools.

* refactor: remove redundant event emission in AnthropicCompletion

* fix test

* dry up
2026-02-03 16:30:43 -08:00
dependabot[bot]
d3f424fd8f chore(deps-dev): bump types-aiofiles
Bumps [types-aiofiles](https://github.com/typeshed-internal/stub_uploader) from 24.1.0.20250822 to 25.1.0.20251011.
- [Commits](https://github.com/typeshed-internal/stub_uploader/commits)

---
updated-dependencies:
- dependency-name: types-aiofiles
  dependency-version: 25.1.0.20251011
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-03 12:02:28 -05:00
Matt Aitchison
fee9445067 fix: add .python-version to fix Dependabot uv updates (#4352)
Dependabot's uv updater defaults to Python 3.14.2, which is incompatible
with the project's requires-python constraint (>=3.10, <3.14). Adding
.python-version pins the Python version to 3.13 for dependency updates.

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-02-03 10:55:01 -06:00
Greyson LaLonde
a3c01265ee feat: add version check & integrate update notices 2026-02-03 10:17:50 -05:00
Matt Aitchison
aa7e7785bc chore: group dependabot security updates into single PR (#4351)
Configure dependabot to batch security updates together while keeping
regular version updates as separate PRs.
2026-02-03 08:53:28 -06:00
Thiago Moretto
e30645e855 limit stagehand dep version to 0.5.9 due to breaking changes (#4339)
* limit to 0.5.9 due to breaking changes + add env vars requirements

* fix tool spec extract that was ignoring with default

* original tool spec

* update spec
2026-02-03 09:43:24 -05:00
Greyson LaLonde
c1d2801be2 fix: reject reserved script names for crew folders 2026-02-03 09:16:55 -05:00
Greyson LaLonde
6a8483fcb6 fix: resolve race condition in guardrail event emission test 2026-02-03 09:06:48 -05:00
Greyson LaLonde
5fb602dff2 fix: replace timing-based concurrency test with state tracking 2026-02-03 08:58:51 -05:00
Greyson LaLonde
b90cff580a fix: relax openai and litellm dependency constraints 2026-02-03 08:51:55 -05:00
Vini Brasil
576b74b2ef Add call_id to LLM events for correlating requests (#4281)
When monitoring LLM events, consumers need to know which events belong
to the same API call. Before this change, there was no way to correlate
LLMCallStartedEvent, LLMStreamChunkEvent, and LLMCallCompletedEvent
belonging to the same request.
2026-02-03 10:10:33 -03:00
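
A minimal sketch of correlating streamed chunks by `call_id`; only the `call_id` attribute comes from the PR, the handler shapes are assumptions.

```python
from collections import defaultdict

chunks_by_call: dict[str, list[str]] = defaultdict(list)

def on_stream_chunk(call_id: str, chunk: str) -> None:
    chunks_by_call[call_id].append(chunk)

def on_call_completed(call_id: str) -> None:
    text = "".join(chunks_by_call.pop(call_id, []))
    print(f"LLM call {call_id} streamed {len(text)} characters")

on_stream_chunk("abc123", "Hel")
on_stream_chunk("abc123", "lo")
on_call_completed("abc123")  # -> LLM call abc123 streamed 5 characters
```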
Greyson LaLonde
7590d4c6e3 fix: enforce additionalProperties=false in schemas
* fix: enforce additionalProperties=false in schemas

* fix: ensure nested items have required properties
2026-02-02 22:19:04 -05:00
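
A simplified sketch of recursively enforcing `additionalProperties: false` and required properties on nested objects; the actual crewAI schema utilities may differ.

```python
from typing import Any

def enforce_strict_objects(schema: dict[str, Any]) -> dict[str, Any]:
    """Recursively pin object schemas shut; simplified illustration."""
    if schema.get("type") == "object":
        schema["additionalProperties"] = False
        # Strict providers also expect every property listed as required.
        schema.setdefault("required", list(schema.get("properties", {})))
        for prop in schema.get("properties", {}).values():
            if isinstance(prop, dict):
                enforce_strict_objects(prop)
    items = schema.get("items")
    if isinstance(items, dict):  # ensure nested items are covered too
        enforce_strict_objects(items)
    return schema

print(enforce_strict_objects(
    {"type": "object",
     "properties": {"tags": {"type": "array",
                             "items": {"type": "object",
                                       "properties": {"name": {"type": "string"}}}}}}
))
```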
Sampson
8c6436234b adds additional search params (#4321)
Introduces support for additional Brave Search API web-search parameters.
2026-02-02 11:17:02 -08:00
Lucas Gomide
96bde4510b feat: auto update tools.specs (#4341) 2026-02-02 12:52:00 -05:00
Greyson LaLonde
9d7f45376a fix: use contextvars for flow execution context 2026-02-02 11:24:02 -05:00
Thiago Moretto
536447ab0e declare stagehand package as dep for StagehandTool (#4336) 2026-02-02 09:45:47 -05:00
Lorenze Jay
63a508f601 feat: bump versions to 1.9.3 (#4316)
* feat: bump versions to 1.9.3

* bump bump
2026-01-30 14:24:25 -08:00
Greyson LaLonde
102b6ae855 feat: add a2a liteagent, auth, transport negotiation, and file support
* feat: add server-side auth schemes and protocol extensions

- add server auth scheme base class and implementations (api key, bearer token, basic/digest auth, mtls)
- add server-side extension system for a2a protocol extensions
- add extensions middleware for x-a2a-extensions header management
- add extension validation and registry utilities
- enhance auth utilities with server-side support
- add async intercept method to match client call interceptor protocol
- fix type_checking import to resolve mypy errors with a2aconfig

* feat: add transport negotiation and content type handling

- add transport negotiation logic with fallback support
- add content type parser and encoder utilities
- add transport configuration models (client and server)
- add transport types and enums
- enhance config with transport settings
- add negotiation events for transport and content type

* feat: add a2a delegation support to LiteAgent

* feat: add file input support to a2a delegation and tasks

Introduces handling of file inputs in A2A delegation flows by converting file dictionaries to protocol-compatible parts and propagating them through delegation and task execution functions. Updates include utility functions for file conversion, changes to message construction, and passing input_files through relevant APIs.

* feat: liteagent a2a delegation support to kickoff methods
2026-01-30 17:10:00 -05:00
Lorenze Jay
19ce56032c fix: improve output handling and response model integration in agents (#4307)
* fix: improve output handling and response model integration in agents

- Refactored output handling in the Agent class to ensure proper conversion and formatting of outputs, including support for BaseModel instances.
- Enhanced the AgentExecutor class to correctly utilize response models during execution, improving the handling of structured outputs.
- Updated the Gemini and Anthropic completion providers to ensure compatibility with new response model handling, including the addition of strict mode for function definitions.
- Improved the OpenAI completion provider to enforce strict adherence to function schemas.
- Adjusted translations to clarify instructions regarding output formatting and schema adherence.

* drop what was a print that didn't get deleted properly

* fixes gemini

* azure working

* bedrock works

* added tests

* adjust test

* fix tests and regen

* fix tests and regen

* refactor: ensure stop words are applied correctly in Azure, Gemini, and OpenAI completions; add tests to validate behavior with structured outputs

* linting
2026-01-30 12:27:46 -08:00
Joao Moura
85f31459c1 docs link
2026-01-30 09:05:36 -08:00
Joao Moura
6fcf748dae refactor: update Flow HITL Management documentation to emphasize email-first notifications, routing rules, and auto-response capabilities; remove outdated references to assignment and SLA management 2026-01-30 08:44:54 -08:00
Joao Moura
38065e29ce updating docs 2026-01-30 08:44:54 -08:00
Lorenze Jay
e291a97bdd chore: update version to 1.9.2 across all relevant files (#4299)
2026-01-28 17:11:44 -08:00
Lorenze Jay
2d05e59223 Lorenze/improve tool response pt2 (#4297)
* no need post tool reflection on native tools

* refactor: update prompt generation to prevent thought leakage

- Modified the prompt structure to ensure agents without tools use a simplified format, avoiding ReAct instructions.
- Introduced a new 'task_no_tools' slice for agents lacking tools, ensuring clean output without Thought: prefixes.
- Enhanced test coverage to verify that prompts do not encourage thought leakage, ensuring outputs remain focused and direct.
- Added integration tests to validate that real LLM calls produce clean outputs without internal reasoning artifacts.

* dont forget the cassettes
2026-01-28 16:53:19 -08:00
Greyson LaLonde
a731efac8d fix: improve structured output handling across providers and agents
- add gemini 2.0 schema support using response_json_schema with propertyordering while retaining backward compatibility for earlier models
- refactor llm completions to return validated pydantic models when a response_model is provided, updating hooks, types, and tests for consistent structured outputs
- extend agentfinish and executors to support basemodel outputs, improve anthropic structured parsing, and clean up schema utilities, tests, and original_json handling
2026-01-28 16:59:55 -05:00
Greyson LaLonde
1e27cf3f0f fix: ensure verbosity flag is applied
2026-01-28 11:52:47 -05:00
Lorenze Jay
381ad3a9a8 chore: update version to 1.9.1
2026-01-27 20:08:53 -05:00
Lorenze Jay
f53bdb28ac feat: implement before and after tool call hooks in CrewAgentExecutor… (#4287)
* feat: implement before and after tool call hooks in CrewAgentExecutor and AgentExecutor

- Added support for before and after tool call hooks in both CrewAgentExecutor and AgentExecutor classes.
- Introduced ToolCallHookContext to manage context for hooks, allowing for enhanced control over tool execution.
- Implemented logic to block tool execution based on before hooks and to modify results based on after hooks.
- Added integration tests to validate the functionality of the new hooks, ensuring they work as expected in various scenarios.
- Enhanced the overall flexibility and extensibility of tool interactions within the CrewAI framework.

* Potential fix for pull request finding 'Unused local variable'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

* Potential fix for pull request finding 'Unused local variable'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

* test: add integration test for before hook blocking tool execution in Crew

- Implemented a new test to verify that the before hook can successfully block the execution of a tool within a crew.
- The test checks that the tool is not executed when the before hook returns False, ensuring proper control over tool interactions.
- Enhanced the validation of hook calls to confirm that both before and after hooks are triggered appropriately, even when execution is blocked.
- This addition strengthens the testing coverage for tool call hooks in the CrewAI framework.

* drop unused

* refactor(tests): remove OPENAI_API_KEY check from tool hook tests

- Eliminated the check for the OPENAI_API_KEY environment variable in the test cases for tool hooks.
- This change simplifies the test setup and allows for running tests without requiring the API key to be set, improving test accessibility and flexibility.

---------

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
2026-01-27 14:56:50 -08:00
Greyson LaLonde
3b17026082 fix: correct tool-calling content handling and schema serialization
- fix(gemini): prevent tool calls from using stale text content; correct key refs
- fix(agent-executor): resolve type errors
- refactor(schema): extract Pydantic schema utilities from platform tools
- fix(schema): properly serialize schemas and ensure Responses API uses a separate structure
- fix: preserve list identity to avoid mutation/aliasing issues
- chore(tests): update assumptions to match new behavior
2026-01-27 15:47:29 -05:00
Greyson LaLonde
d52dbc1f4b chore: add missing change logs (#4285)
* chore: add missing change logs

* chore: add translations
2026-01-26 18:26:01 -08:00
Lorenze Jay
6b926b90d0 chore: update version to 1.9.0 across all relevant files (#4284)
- Bumped the version number to 1.9.0 in pyproject.toml files and __init__.py files across the CrewAI library and its tools.
- Updated dependencies to use the new version of crewai-tools (1.9.0) for improved functionality and compatibility.
- Ensured consistency in versioning across the codebase to reflect the latest updates.
2026-01-26 16:36:35 -08:00
Lorenze Jay
fc84daadbb fix: enhance file store with fallback memory cache when aiocache is n… (#4283)
* fix: enhance file store with fallback memory cache when aiocache is not installed

- Added a simple in-memory cache implementation to serve as a fallback when the aiocache library is unavailable.
- Improved error handling for the aiocache import, ensuring that the application can still function without it.
- This change enhances the robustness of the file store utility by providing a reliable caching mechanism in various environments.

* drop fallback

* Potential fix for pull request finding 'Unused global variable'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

---------

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
2026-01-26 15:12:34 -08:00
Lorenze Jay
58b866a83d Lorenze/supporting vertex embeddings (#4282)
* feat: introduce GoogleGenAIVertexEmbeddingFunction for dual SDK support

- Added a new embedding function to support both the legacy vertexai.language_models SDK and the new google-genai SDK for Google Vertex AI.
- Updated factory methods to route to the new embedding function.
- Enhanced VertexAIProvider and related configurations to accommodate the new model options.
- Added integration tests for Google Vertex embeddings with Crew memory, ensuring compatibility and functionality with both authentication methods.

This update improves the flexibility and compatibility of Google Vertex AI embeddings within the CrewAI framework.

* fix test count

* rm comment

* regen cassettes

* regen

* drop variable from .envtest

* direct to relevant test only
2026-01-26 14:55:03 -08:00
Greyson LaLonde
9797567342 feat: add structured outputs and response_format support across providers (#4280)
* feat: add response_format parameter to Azure and Gemini providers

* feat: add structured outputs support to Bedrock and Anthropic providers

* chore: bump anthropic dep

* fix: use beta structured output for new models
2026-01-26 11:03:33 -08:00
Greyson LaLonde
a32de6bdac fix: ensure doc list is not empty
2026-01-26 05:08:01 -05:00
Vidit Ostwal
06a58e463c feat: adding response_id in streaming response 2026-01-26 04:20:04 -05:00
Vidit Ostwal
3d771f03fa fix: ensure bedrock client handles stop sequences properly
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-01-25 20:28:09 -05:00
Greyson LaLonde
db7aeb5a00 chore: disable chroma telemetry
2026-01-25 16:23:29 -05:00
Lorenze Jay
0cb40374de Enhance Gemini LLM Tool Handling and Add Test for Float Responses (#4273)
- Updated the GeminiCompletion class to handle non-dict values returned from tools, ensuring that floats are wrapped in a dictionary format for consistent response handling.
- Introduced a new YAML cassette to test the Gemini LLM's ability to process tools that return float values, verifying that the agent can correctly utilize the sum_numbers tool and return the expected results.
- Added a comprehensive test case to validate the integration of the sum_numbers tool within the Gemini LLM, ensuring accurate calculations and proper response formatting.

These changes improve the robustness of tool interactions within the Gemini LLM and enhance testing coverage for float return values.

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-01-25 12:50:49 -08:00
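
A one-function sketch of the float-wrapping behavior described above; the wrapper key is an assumption.

```python
def normalize_tool_result(value):
    """Gemini function responses must be dicts; wrap scalars (illustrative)."""
    return value if isinstance(value, dict) else {"result": value}

assert normalize_tool_result(7.5) == {"result": 7.5}
assert normalize_tool_result({"sum": 7.5}) == {"sum": 7.5}
```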
287 changed files with 69958 additions and 7284 deletions

File diff suppressed because it is too large


@@ -14,13 +14,18 @@ paths-ignore:
- "lib/crewai/src/crewai/experimental/a2a/**"
paths:
# Include GitHub Actions workflows/composite actions for CodeQL actions analysis
- ".github/workflows/**"
- ".github/actions/**"
# Include all Python source code from workspace packages
- "lib/crewai/src/**"
- "lib/crewai-tools/src/**"
- "lib/crewai-files/src/**"
- "lib/devtools/src/**"
# Include tests (but exclude cassettes via paths-ignore)
- "lib/crewai/tests/**"
- "lib/crewai-tools/tests/**"
- "lib/crewai-files/tests/**"
- "lib/devtools/tests/**"
# Configure specific queries or packs if needed


@@ -5,7 +5,12 @@
version: 2
updates:
- package-ecosystem: uv # See documentation for possible values
directory: "/" # Location of package manifests
- package-ecosystem: uv
directory: "/"
schedule:
interval: "weekly"
groups:
security-updates:
applies-to: security-updates
patterns:
- "*"


@@ -69,7 +69,7 @@ jobs:
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
uses: github/codeql-action/init@v4
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
@@ -98,6 +98,6 @@ jobs:
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
uses: github/codeql-action/analyze@v4
with:
category: "/language:${{matrix.language}}"


@@ -0,0 +1,63 @@
name: Generate Tool Specifications
on:
pull_request:
branches:
- main
paths:
- 'lib/crewai-tools/src/crewai_tools/**'
workflow_dispatch:
permissions:
contents: write
pull-requests: write
jobs:
generate-specs:
runs-on: ubuntu-latest
env:
PYTHONUNBUFFERED: 1
steps:
- name: Generate GitHub App token
id: app-token
uses: tibdex/github-app-token@v2
with:
app_id: ${{ secrets.CREWAI_TOOL_SPECS_APP_ID }}
private_key: ${{ secrets.CREWAI_TOOL_SPECS_PRIVATE_KEY }}
- name: Checkout code
uses: actions/checkout@v4
with:
ref: ${{ github.head_ref }}
token: ${{ steps.app-token.outputs.token }}
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: "3.12"
enable-cache: true
- name: Install the project
working-directory: lib/crewai-tools
run: uv sync --dev --all-extras
- name: Generate tool specifications
working-directory: lib/crewai-tools
run: uv run python src/crewai_tools/generate_tool_specs.py
- name: Check for changes and commit
run: |
git config user.name "github-actions[bot]"
git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
git add lib/crewai-tools/tool.specs.json
if git diff --quiet --staged; then
echo "No changes detected in tool.specs.json"
else
echo "Changes detected in tool.specs.json, committing..."
git commit -m "chore: update tool specifications"
git push
fi

.python-version (new file)

@@ -0,0 +1 @@
3.13


@@ -111,6 +111,13 @@
"en/guides/flows/mastering-flow-state"
]
},
{
"group": "Coding Tools",
"icon": "terminal",
"pages": [
"en/guides/coding-tools/agents-md"
]
},
{
"group": "Advanced",
"icon": "gear",
@@ -370,7 +377,8 @@
"pages": [
"en/enterprise/features/traces",
"en/enterprise/features/webhook-streaming",
"en/enterprise/features/hallucination-guardrail"
"en/enterprise/features/hallucination-guardrail",
"en/enterprise/features/flow-hitl-management"
]
},
{
@@ -823,7 +831,8 @@
"pages": [
"pt-BR/enterprise/features/traces",
"pt-BR/enterprise/features/webhook-streaming",
"pt-BR/enterprise/features/hallucination-guardrail"
"pt-BR/enterprise/features/hallucination-guardrail",
"pt-BR/enterprise/features/flow-hitl-management"
]
},
{
@@ -1287,7 +1296,8 @@
"pages": [
"ko/enterprise/features/traces",
"ko/enterprise/features/webhook-streaming",
"ko/enterprise/features/hallucination-guardrail"
"ko/enterprise/features/hallucination-guardrail",
"ko/enterprise/features/flow-hitl-management"
]
},
{
@@ -1568,4 +1578,4 @@
"reddit": "https://www.reddit.com/r/crewAIInc/"
}
}
}
}


@@ -4,6 +4,74 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Jan 26, 2026">
## v1.9.0
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.9.0)
## What's Changed
### Features
- Add structured outputs and response_format support across providers
- Add response ID to streaming responses
- Add event ordering with parent-child hierarchies
- Add Keycloak SSO authentication support
- Add multimodal file handling capabilities
- Add native OpenAI responses API support
- Add A2A task execution utilities
- Add A2A server configuration and agent card generation
- Enhance event system and expand transport options
- Improve tool calling mechanisms
### Bug Fixes
- Enhance file store with fallback memory cache when aiocache is not available
- Ensure document list is not empty
- Handle Bedrock stop sequences properly
- Add Google Vertex API key support
- Enhance Azure model stop word detection
- Improve error handling for HumanFeedbackPending in flow execution
- Fix execution span task unlinking
### Documentation
- Add native file handling documentation
- Add OpenAI responses API documentation
- Add agent card implementation guidance
- Refine A2A documentation
- Update changelog for v1.8.0
### Contributors
@Anaisdg, @GininDenis, @Vidit-Ostwal, @greysonlalonde, @heitorado, @joaomdmoura, @koushiv777, @lorenzejay, @nicoferdi96, @vinibrsl
</Update>
<Update label="Jan 15, 2026">
## v1.8.1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.8.1)
## What's Changed
### Features
- Add A2A task execution utilities
- Add A2A server configuration and agent card generation
- Add additional transport mechanisms
- Add Galileo integration support
### Bug Fixes
- Improve Azure model compatibility
- Expand frame inspection depth to detect parent_flow
- Resolve task execution span management issues
- Enhance error handling for human feedback scenarios during flow execution
### Documentation
- Add A2A agent card documentation
- Add PII redaction feature documentation
### Contributors
@Anaisdg, @GininDenis, @greysonlalonde, @joaomdmoura, @koushiv777, @lorenzejay, @vinibrsl
</Update>
<Update label="Jan 08, 2026">
## v1.8.0


@@ -401,23 +401,58 @@ crew = Crew(
### Vertex AI Embeddings
For Google Cloud users with Vertex AI access.
For Google Cloud users with Vertex AI access. Supports both legacy and new embedding models with automatic SDK selection.
<Note>
**Deprecation Notice:** Legacy models (`textembedding-gecko*`) use the deprecated `vertexai.language_models` SDK which will be removed after June 24, 2026. Consider migrating to newer models like `gemini-embedding-001`. See the [Google migration guide](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/deprecations/genai-vertexai-sdk) for details.
</Note>
```python
# Recommended: Using new models with google-genai SDK
crew = Crew(
memory=True,
embedder={
"provider": "vertexai",
"provider": "google-vertex",
"config": {
"project_id": "your-gcp-project-id",
"region": "us-central1", # or your preferred region
"api_key": "your-service-account-key",
"model_name": "textembedding-gecko"
"location": "us-central1",
"model_name": "gemini-embedding-001", # or "text-embedding-005", "text-multilingual-embedding-002"
"task_type": "RETRIEVAL_DOCUMENT", # Optional
"output_dimensionality": 768 # Optional
}
}
)
# Using API key authentication (Exp)
crew = Crew(
memory=True,
embedder={
"provider": "google-vertex",
"config": {
"api_key": "your-google-api-key",
"model_name": "gemini-embedding-001"
}
}
)
# Legacy models (backwards compatible, emits deprecation warning)
crew = Crew(
memory=True,
embedder={
"provider": "google-vertex",
"config": {
"project_id": "your-gcp-project-id",
"region": "us-central1", # or "location" (region is deprecated)
"model_name": "textembedding-gecko" # Legacy model
}
}
)
```
**Available models:**
- **New SDK models** (recommended): `gemini-embedding-001`, `text-embedding-005`, `text-multilingual-embedding-002`
- **Legacy models** (deprecated): `textembedding-gecko`, `textembedding-gecko@001`, `textembedding-gecko-multilingual`
### Ollama Embeddings (Local)
Run embeddings locally for privacy and cost savings.
@@ -569,7 +604,7 @@ mem0_client_embedder_config = {
"project_id": "my_project_id", # Optional
"api_key": "custom-api-key" # Optional - overrides env var
"run_id": "my_run_id", # Optional - for short-term memory
"includes": "include1", # Optional
"includes": "include1", # Optional
"excludes": "exclude1", # Optional
"infer": True # Optional defaults to True
"custom_categories": new_categories # Optional - custom categories for user memory
@@ -591,7 +626,7 @@ crew = Crew(
### Choosing the Right Embedding Provider
When selecting an embedding provider, consider factors like performance, privacy, cost, and integration needs.
When selecting an embedding provider, consider factors like performance, privacy, cost, and integration needs.
Below is a comparison to help you decide:
| Provider | Best For | Pros | Cons |
@@ -749,7 +784,7 @@ Entity Memory supports batching when saving multiple entities at once. When you
This improves performance and observability when writing many entities in one operation.
## 2. External Memory
## 2. External Memory
External Memory provides a standalone memory system that operates independently from the crew's built-in memory. This is ideal for specialized memory providers or cross-application memory sharing.
### Basic External Memory with Mem0
@@ -819,7 +854,7 @@ external_memory = ExternalMemory(
"project_id": "my_project_id", # Optional
"api_key": "custom-api-key" # Optional - overrides env var
"run_id": "my_run_id", # Optional - for short-term memory
"includes": "include1", # Optional
"includes": "include1", # Optional
"excludes": "exclude1", # Optional
"infer": True # Optional defaults to True
"custom_categories": new_categories # Optional - custom categories for user memory


@@ -46,7 +46,7 @@ crew = Crew(
## Task Attributes
| Attribute | Parameters | Type | Description |
| :------------------------------------- | :---------------------- | :-------------------------- | :-------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------- |
| :------------------------------------- | :---------------------- | :-------------------------- | :-------------------------------------------------------------------------------------------------------------- |
| **Description** | `description` | `str` | A clear, concise statement of what the task entails. |
| **Expected Output** | `expected_output` | `str` | A detailed description of what the task's completion looks like. |
| **Name** _(optional)_ | `name` | `Optional[str]` | A name identifier for the task. |
@@ -63,7 +63,7 @@ crew = Crew(
| **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | A Pydantic model for task output. |
| **Callback** _(optional)_ | `callback` | `Optional[Any]` | Function/object to be executed after task completion. |
| **Guardrail** _(optional)_ | `guardrail` | `Optional[Callable]` | Function to validate task output before proceeding to next task. |
| **Guardrails** _(optional)_ | `guardrails` | `Optional[List[Callable] | List[str]]` | List of guardrails to validate task output before proceeding to next task. |
| **Guardrails** _(optional)_ | `guardrails` | `Optional[List[Callable]]` | List of guardrails to validate task output before proceeding to next task. |
| **Guardrail Max Retries** _(optional)_ | `guardrail_max_retries` | `Optional[int]` | Maximum number of retries when guardrail validation fails. Defaults to 3. |
<Note type="warning" title="Deprecated: max_retries">


@@ -0,0 +1,563 @@
---
title: "Flow HITL Management"
description: "Enterprise-grade human review for Flows with email-first notifications, routing rules, and auto-response capabilities"
icon: "users-gear"
mode: "wide"
---
<Note>
Flow HITL Management features require the `@human_feedback` decorator, available in **CrewAI version 1.8.0 or higher**. These features apply specifically to **Flows**, not Crews.
</Note>
CrewAI Enterprise provides a comprehensive Human-in-the-Loop (HITL) management system for Flows that transforms AI workflows into collaborative human-AI processes. The platform uses an **email-first architecture** that enables anyone with an email address to respond to review requests—no platform account required.
## Overview
<CardGroup cols={3}>
<Card title="Email-First Design" icon="envelope">
Responders can reply directly to notification emails to provide feedback
</Card>
<Card title="Flexible Routing" icon="route">
Route requests to specific emails based on method patterns or flow state
</Card>
<Card title="Auto-Response" icon="clock">
Configure automatic fallback responses when no human replies in time
</Card>
</CardGroup>
### Key Benefits
- **Simple mental model**: Email addresses are universal; no need to manage platform users or roles
- **External responders**: Anyone with an email can respond, even non-platform users
- **Dynamic assignment**: Pull assignee email directly from flow state (e.g., `sales_rep_email`)
- **Reduced configuration**: Fewer settings to configure, faster time to value
- **Email as primary channel**: Most users prefer responding via email over logging into a dashboard
## Setting Up Human Review Points in Flows
Configure human review checkpoints within your Flows using the `@human_feedback` decorator. When execution reaches a review point, the system pauses, notifies the assignee via email, and waits for a response.
```python
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult
class ContentApprovalFlow(Flow):
@start()
def generate_content(self):
# AI generates content
return "Generated marketing copy for Q1 campaign..."
@listen(generate_content)
@human_feedback(
message="Please review this content for brand compliance:",
emit=["approved", "rejected", "needs_revision"],
)
def review_content(self, content):
return content
@listen("approved")
def publish_content(self, result: HumanFeedbackResult):
print(f"Publishing approved content. Reviewer notes: {result.feedback}")
@listen("rejected")
def archive_content(self, result: HumanFeedbackResult):
print(f"Content rejected. Reason: {result.feedback}")
@listen("needs_revision")
def revise_content(self, result: HumanFeedbackResult):
print(f"Revision requested: {result.feedback}")
```
For complete implementation details, see the [Human Feedback in Flows](/en/learn/human-feedback-in-flows) guide.
### Decorator Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `message` | `str` | The message displayed to the human reviewer |
| `emit` | `list[str]` | Valid response options (displayed as buttons in UI) |
## Platform Configuration
Access HITL configuration from: **Deployment → Settings → Human in the Loop Configuration**
<Frame>
<img src="/images/enterprise/hitl-settings-overview.png" alt="HITL Configuration Settings" />
</Frame>
### Email Notifications
Toggle to enable or disable email notifications for HITL requests.
| Setting | Default | Description |
|---------|---------|-------------|
| Email Notifications | Enabled | Send emails when feedback is requested |
<Note>
When disabled, responders must use the dashboard UI or you must configure webhooks for custom notification systems.
</Note>
### SLA Target
Set a target response time for tracking and metrics purposes.
| Setting | Description |
|---------|-------------|
| SLA Target (minutes) | Target response time. Used for dashboard metrics and SLA tracking |
Leave empty to disable SLA tracking.
## Email Notifications & Responses
The HITL system uses an email-first architecture where responders can reply directly to notification emails.
### How Email Responses Work
<Steps>
<Step title="Notification Sent">
When a HITL request is created, an email is sent to the assigned responder with the review content and context.
</Step>
<Step title="Reply-To Address">
The email includes a special reply-to address with a signed token for authentication.
</Step>
<Step title="User Replies">
The responder simply replies to the email with their feedback—no login required.
</Step>
<Step title="Token Validation">
The platform receives the reply, verifies the signed token, and matches the sender email.
</Step>
<Step title="Flow Resumes">
The feedback is recorded and the flow continues with the human's input.
</Step>
</Steps>
### Response Format
Responders can reply with:
- **Emit option**: If the reply matches an `emit` option (e.g., "approved"), it's used directly
- **Free-form text**: Any text response is passed to the flow as feedback
- **Plain text**: The first line of the reply body is used as feedback
### Confirmation Emails
After processing a reply, the responder receives a confirmation email indicating whether the feedback was successfully submitted or if an error occurred.
### Email Token Security
- Tokens are cryptographically signed for security
- Tokens expire after 7 days
- Sender email must match the token's authorized email
- Confirmation/error emails are sent after processing
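The platform's actual token format is not documented here; the sketch below models the listed guarantees with HMAC, and every name and field in it is an assumption.
```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical key material
SEVEN_DAYS = 7 * 24 * 3600

def make_token(request_id: str, email: str) -> str:
    payload = f"{request_id}:{email}:{int(time.time())}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(token: str, sender_email: str) -> bool:
    request_id, email, issued, sig = token.rsplit(":", 3)
    payload = f"{request_id}:{email}:{issued}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                      # tampered or wrong key
    if time.time() - int(issued) > SEVEN_DAYS:
        return False                      # expired after 7 days
    return sender_email == email          # sender must match authorized email

token = make_token("req-42", "alice@company.com")
assert verify_token(token, "alice@company.com")
assert not verify_token(token, "mallory@evil.test")
```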
## Routing Rules
Route HITL requests to specific email addresses based on method patterns.
<Frame>
<img src="/images/enterprise/hitl-settings-routing-rules.png" alt="HITL Routing Rules Configuration" />
</Frame>
### Rule Structure
```json
{
"name": "Approvals to Finance",
"match": {
"method_name": "approve_*"
},
"assign_to_email": "finance@company.com",
"assign_from_input": "manager_email"
}
```
### Matching Patterns
| Pattern | Description | Example Match |
|---------|-------------|---------------|
| `approve_*` | Wildcard (any chars) | `approve_payment`, `approve_vendor` |
| `review_?` | Single char | `review_a`, `review_1` |
| `validate_payment` | Exact match | `validate_payment` only |
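These patterns follow shell-style wildcard semantics; the sketch below evaluates rules with Python's `fnmatch`, which is an assumption about the underlying matcher.
```python
from fnmatch import fnmatch

rules = [
    {"name": "Approvals to Finance", "pattern": "approve_*",
     "assign_to_email": "finance@company.com"},
    {"name": "Single-char reviews", "pattern": "review_?",
     "assign_to_email": "qa@company.com"},
]

def route(method_name: str) -> str | None:
    for rule in rules:
        if fnmatch(method_name, rule["pattern"]):
            return rule["assign_to_email"]
    return None  # no rule matched; the platform falls back to the deployment creator

assert route("approve_payment") == "finance@company.com"
assert route("review_a") == "qa@company.com"
assert route("validate_payment") is None
```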
### Assignment Priority
1. **Dynamic assignment** (`assign_from_input`): If configured, pulls email from flow state
2. **Static email** (`assign_to_email`): Falls back to configured email
3. **Deployment creator**: If no rule matches, the deployment creator's email is used
### Dynamic Assignment Example
If your flow state contains `{"sales_rep_email": "alice@company.com"}`, configure:
```json
{
"name": "Route to Sales Rep",
"match": {
"method_name": "review_*"
},
"assign_from_input": "sales_rep_email"
}
```
The request will be assigned to `alice@company.com` automatically.
<Tip>
**Use Case**: Pull the assignee from your CRM, database, or previous flow step to dynamically route reviews to the right person.
</Tip>
## Auto-Response
Automatically respond to HITL requests if no human responds within a timeout. This ensures flows don't hang indefinitely.
### Configuration
| Setting | Description |
|---------|-------------|
| Enabled | Toggle to enable auto-response |
| Timeout (minutes) | Time to wait before auto-responding |
| Default Outcome | The response value (must match an `emit` option) |
<Frame>
<img src="/images/enterprise/hitl-settings-auto-respond.png" alt="HITL Auto-Response Configuration" />
</Frame>
### Use Cases
- **SLA compliance**: Ensure flows don't hang indefinitely
- **Default approval**: Auto-approve low-risk requests after timeout
- **Graceful degradation**: Continue with a safe default when reviewers are unavailable
<Warning>
Use auto-response carefully. Only enable it for non-critical reviews where a default response is acceptable.
</Warning>
## Review Process
### Dashboard Interface
The HITL review interface provides a clean, focused experience for reviewers:
- **Markdown Rendering**: Rich formatting for review content with syntax highlighting
- **Context Panel**: View flow state, execution history, and related information
- **Feedback Input**: Provide detailed feedback and comments with your decision
- **Quick Actions**: One-click emit option buttons with optional comments
<Frame>
<img src="/images/enterprise/hitl-list-pending-feedbacks.png" alt="HITL Pending Requests List" />
</Frame>
### Response Methods
Reviewers can respond via three channels:
| Method | Description |
|--------|-------------|
| **Email Reply** | Reply directly to the notification email |
| **Dashboard** | Use the Enterprise dashboard UI |
| **API/Webhook** | Programmatic response via API |
### History & Audit Trail
Every HITL interaction is tracked with a complete timeline:
- Decision history (approve/reject/revise)
- Reviewer identity and timestamp
- Feedback and comments provided
- Response method (email/dashboard/API)
- Response time metrics
## Analytics & Monitoring
Track HITL performance with comprehensive analytics.
### Performance Dashboard
<Frame>
<img src="/images/enterprise/hitl-metrics.png" alt="HITL Metrics Dashboard" />
</Frame>
<CardGroup cols={2}>
<Card title="Response Times" icon="stopwatch">
Monitor average and median response times by reviewer or flow.
</Card>
<Card title="Volume Trends" icon="chart-bar">
Analyze review volume patterns to optimize team capacity.
</Card>
<Card title="Decision Distribution" icon="chart-pie">
View approval/rejection rates across different review types.
</Card>
<Card title="SLA Tracking" icon="chart-line">
Track percentage of reviews completed within SLA targets.
</Card>
</CardGroup>
### Audit & Compliance
Enterprise-ready audit capabilities for regulatory requirements:
- Complete decision history with timestamps
- Reviewer identity verification
- Immutable audit logs
- Export capabilities for compliance reporting
## Common Use Cases
<AccordionGroup>
<Accordion title="Security Reviews" icon="shield-halved">
**Use Case**: Internal security questionnaire automation with human validation
- AI generates responses to security questionnaires
- Security team reviews and validates accuracy via email
- Approved responses are compiled into final submission
- Full audit trail for compliance
</Accordion>
<Accordion title="Content Approval" icon="file-lines">
**Use Case**: Marketing content requiring legal/brand review
- AI generates marketing copy or social media content
- Route to brand team email for voice/tone review
- Automatic publishing upon approval
</Accordion>
<Accordion title="Financial Approvals" icon="money-bill">
**Use Case**: Expense reports, contract terms, budget allocations
- AI pre-processes and categorizes financial requests
- Route based on amount thresholds using dynamic assignment
- Maintain complete audit trail for financial compliance
</Accordion>
<Accordion title="Dynamic Assignment from CRM" icon="database">
**Use Case**: Route reviews to account owners from your CRM
- Flow fetches account owner email from CRM
- Store email in flow state (e.g., `account_owner_email`)
- Use `assign_from_input` to route to the right person automatically
</Accordion>
<Accordion title="Quality Assurance" icon="magnifying-glass">
**Use Case**: AI output validation before customer delivery
- AI generates customer-facing content or responses
- QA team reviews via email notification
- Feedback loops improve AI performance over time
</Accordion>
</AccordionGroup>
## Webhooks API
When your Flows pause for human feedback, you can configure webhooks to send request data to your own application. This enables:
- Building custom approval UIs
- Integrating with internal tools (Jira, ServiceNow, custom dashboards)
- Routing approvals to third-party systems
- Mobile app notifications
- Automated decision systems
<Frame>
<img src="/images/enterprise/hitl-settings-webhook.png" alt="HITL Webhook Configuration" />
</Frame>
### Configuring Webhooks
<Steps>
<Step title="Navigate to Settings">
Go to your **Deployment** → **Settings** → **Human in the Loop**
</Step>
<Step title="Expand Webhooks Section">
Click to expand the **Webhooks** configuration
</Step>
<Step title="Add Your Webhook URL">
Enter your webhook URL (must be HTTPS in production)
</Step>
<Step title="Save Configuration">
Click **Save Configuration** to activate
</Step>
</Steps>
You can configure multiple webhooks. Each active webhook receives all HITL events.
### Webhook Events
Your endpoint will receive HTTP POST requests for these events:
| Event Type | When Triggered |
|------------|----------------|
| `new_request` | A flow pauses and requests human feedback |
### Webhook Payload
All webhooks receive a JSON payload with this structure:
```json
{
"event": "new_request",
"request": {
"id": "550e8400-e29b-41d4-a716-446655440000",
"flow_id": "flow_abc123",
"method_name": "review_article",
"message": "Please review this article for publication.",
"emit_options": ["approved", "rejected", "request_changes"],
"state": {
"article_id": 12345,
"author": "john@example.com",
"category": "technology"
},
"metadata": {},
"created_at": "2026-01-14T12:00:00Z"
},
"deployment": {
"id": 456,
"name": "Content Review Flow",
"organization_id": 789
},
"callback_url": "https://api.crewai.com/...",
"assigned_to_email": "reviewer@company.com"
}
```
### Responding to Requests
To submit feedback, **POST to the `callback_url`** included in the webhook payload.
```http
POST {callback_url}
Content-Type: application/json
{
"feedback": "Approved. Great article!",
"source": "my_custom_app"
}
```
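For example, a minimal client using the `requests` library (the payload mirrors the HTTP example above; any additional authentication the callback may require is not shown):
```python
import requests

def submit_feedback(callback_url: str, feedback: str,
                    source: str = "my_custom_app") -> None:
    """POST a reviewer's decision back to the paused flow."""
    response = requests.post(
        callback_url,
        json={"feedback": feedback, "source": source},
        timeout=10,
    )
    response.raise_for_status()  # raise on 4xx/5xx so failures are visible

# e.g. submit_feedback(payload["callback_url"], "Approved. Great article!")
```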
### Security
<Info>
All webhook requests are cryptographically signed using HMAC-SHA256 to ensure authenticity and prevent tampering.
</Info>
#### Webhook Security
- **HMAC-SHA256 signatures**: Every webhook includes a cryptographic signature
- **Per-webhook secrets**: Each webhook has its own unique signing secret
- **Encrypted at rest**: Signing secrets are encrypted in our database
- **Timestamp verification**: Prevents replay attacks
#### Signature Headers
Each webhook request includes these headers:
| Header | Description |
|--------|-------------|
| `X-Signature` | HMAC-SHA256 signature: `sha256=<hex_digest>` |
| `X-Timestamp` | Unix timestamp when the request was signed |
#### Verification
To verify a request, recompute the HMAC-SHA256 of `{timestamp}.{payload}` with your signing secret and compare it to the `X-Signature` header in constant time:
```python
import hashlib
import hmac

def verify_signature(signing_secret: str, timestamp: str,
                     payload: str, signature: str) -> bool:
    """Return True if the X-Signature header matches the request body."""
    expected = hmac.new(
        signing_secret.encode(),
        f"{timestamp}.{payload}".encode(),
        hashlib.sha256,
    ).hexdigest()
    # The header arrives as "sha256=<hex_digest>"; strip the prefix first
    provided = signature.removeprefix("sha256=")
    return hmac.compare_digest(expected, provided)
```
### Error Handling
Your webhook endpoint should return a 2xx status code to acknowledge receipt:
| Your Response | Our Behavior |
|---------------|--------------|
| 2xx | Webhook delivered successfully |
| 4xx/5xx | Logged as failed, no retry |
| Timeout (30s) | Logged as failed, no retry |
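Putting the pieces together, a minimal receiving endpoint might look like the sketch below, assuming Flask and the `verify_signature` helper from the Verification section; the 300-second freshness window is an arbitrary illustrative choice, not a documented requirement:
```python
import time

from flask import Flask, abort, request

app = Flask(__name__)
SIGNING_SECRET = "your-webhook-signing-secret"  # placeholder: use your real secret

@app.post("/hitl-webhook")
def hitl_webhook():
    timestamp = request.headers.get("X-Timestamp", "0")
    signature = request.headers.get("X-Signature", "")
    payload = request.get_data(as_text=True)
    # Reject stale requests to guard against replays (window is an assumption)
    if abs(time.time() - float(timestamp)) > 300:
        abort(401)
    # verify_signature is the helper defined under Verification above
    if not verify_signature(SIGNING_SECRET, timestamp, payload, signature):
        abort(401)
    # Respond within the 30s timeout; defer heavy processing to a worker
    return "", 204
```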
## Security & RBAC
### Dashboard Access
HITL access is controlled at the deployment level:
| Permission | Capability |
|------------|------------|
| `manage_human_feedback` | Configure HITL settings, view all requests |
| `respond_to_human_feedback` | Respond to requests, view assigned requests |
### Email Response Authorization
For email replies:
1. The reply-to token encodes the authorized email
2. Sender email must match the token's email
3. Token must not be expired (7-day default)
4. Request must still be pending
### Audit Trail
All HITL actions are logged:
- Request creation
- Assignment changes
- Response submission (with source: dashboard/email/API)
- Flow resume status
## Troubleshooting
### Emails Not Sending
1. Check "Email Notifications" is enabled in configuration
2. Verify routing rules match the method name
3. Verify assignee email is valid
4. Check deployment creator fallback if no routing rules match
### Email Replies Not Processing
1. Check token hasn't expired (7-day default)
2. Verify sender email matches assigned email
3. Ensure request is still pending (not already responded)
### Flow Not Resuming
1. Check request status in dashboard
2. Verify callback URL is accessible
3. Ensure deployment is still running
## Best Practices
<Tip>
**Start Simple**: Begin with email notifications to deployment creator, then add routing rules as your workflows mature.
</Tip>
1. **Use Dynamic Assignment**: Pull assignee emails from your flow state for flexible routing.
2. **Configure Auto-Response**: Set up a fallback for non-critical reviews to prevent flows from hanging.
3. **Monitor Response Times**: Use analytics to identify bottlenecks and optimize your review process.
4. **Keep Review Messages Clear**: Write clear, actionable messages in the `@human_feedback` decorator.
5. **Test Email Flow**: Send test requests to verify email delivery before going to production.
## Related Resources
<CardGroup cols={2}>
<Card title="Human Feedback in Flows" icon="code" href="/en/learn/human-feedback-in-flows">
Implementation guide for the `@human_feedback` decorator
</Card>
<Card title="Flow HITL Workflow Guide" icon="route" href="/en/enterprise/guides/human-in-the-loop">
Step-by-step guide for setting up HITL workflows
</Card>
<Card title="RBAC Configuration" icon="shield-check" href="/en/enterprise/features/rbac">
Configure role-based access control for your organization
</Card>
<Card title="Webhook Streaming" icon="bolt" href="/en/enterprise/features/webhook-streaming">
Set up real-time event notifications
</Card>
</CardGroup>

View File

@@ -5,9 +5,54 @@ icon: "user-check"
mode: "wide"
---
Human-In-The-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. This guide shows you how to implement HITL within CrewAI.
Human-In-The-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. This guide shows you how to implement HITL within CrewAI Enterprise.
## Setting Up HITL Workflows
## HITL Approaches in CrewAI
CrewAI offers two approaches for implementing human-in-the-loop workflows:
| Approach | Best For | Version |
|----------|----------|---------|
| **Flow-based** (`@human_feedback` decorator) | Production with Enterprise UI, email-first workflows, full platform features | **1.8.0+** |
| **Webhook-based** | Custom integrations, external systems (Slack, Teams, etc.), legacy setups | All versions |
## Flow-Based HITL with Enterprise Platform
<Note>
The `@human_feedback` decorator requires **CrewAI version 1.8.0 or higher**.
</Note>
When using the `@human_feedback` decorator in your Flows, CrewAI Enterprise provides an **email-first HITL system** that enables anyone with an email address to respond to review requests:
<CardGroup cols={2}>
<Card title="Email-First Design" icon="envelope">
Responders receive email notifications and can reply directly—no login required.
</Card>
<Card title="Dashboard Review" icon="desktop">
Review and respond to HITL requests in the Enterprise dashboard when preferred.
</Card>
<Card title="Flexible Routing" icon="route">
Route requests to specific emails based on method patterns or pull from flow state.
</Card>
<Card title="Auto-Response" icon="clock">
Configure automatic fallback responses when no human replies within the timeout.
</Card>
</CardGroup>
### Key Benefits
- **External responders**: Anyone with an email can respond, even non-platform users
- **Dynamic assignment**: Pull assignee email from flow state (e.g., `account_owner_email`)
- **Simple configuration**: Email-based routing is easier to set up than user/role management
- **Deployment creator fallback**: If no routing rule matches, the deployment creator is notified
<Tip>
For implementation details on the `@human_feedback` decorator, see the [Human Feedback in Flows](/en/learn/human-feedback-in-flows) guide.
</Tip>
## Setting Up Webhook-Based HITL Workflows
For custom integrations with external systems like Slack, Microsoft Teams, or your own applications, you can use the webhook-based approach:
<Steps>
<Step title="Configure Your Task">
@@ -99,3 +144,14 @@ HITL workflows are particularly valuable for:
- Sensitive or high-stakes operations
- Creative tasks requiring human judgment
- Compliance and regulatory reviews
## Learn More
<CardGroup cols={2}>
<Card title="Flow HITL Management" icon="users-gear" href="/en/enterprise/features/flow-hitl-management">
Explore the full Enterprise Flow HITL platform capabilities including email notifications, routing rules, auto-response, and analytics.
</Card>
<Card title="Human Feedback in Flows" icon="code" href="/en/learn/human-feedback-in-flows">
Implementation guide for the `@human_feedback` decorator in your Flows.
</Card>
</CardGroup>

View File

@@ -224,6 +224,60 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `groupFields` (string, optional): Fields to include (e.g., 'name,memberCount,clientData'). Default: name,memberCount
</Accordion>
<Accordion title="google_contacts/get_contact_group">
**Description:** Get a specific contact group by resource name.
**Parameters:**
- `resourceName` (string, required): The resource name of the contact group (e.g., 'contactGroups/myContactGroup')
- `maxMembers` (integer, optional): Maximum number of members to include. Minimum: 0, Maximum: 20000
- `groupFields` (string, optional): Fields to include (e.g., 'name,memberCount,clientData'). Default: name,memberCount
</Accordion>
<Accordion title="google_contacts/create_contact_group">
**Description:** Create a new contact group (label).
**Parameters:**
- `name` (string, required): The name of the contact group
- `clientData` (array, optional): Client-specific data
```json
[
{
"key": "data_key",
"value": "data_value"
}
]
```
</Accordion>
<Accordion title="google_contacts/update_contact_group">
**Description:** Update a contact group's information.
**Parameters:**
- `resourceName` (string, required): The resource name of the contact group (e.g., 'contactGroups/myContactGroup')
- `name` (string, required): The name of the contact group
- `clientData` (array, optional): Client-specific data
```json
[
{
"key": "data_key",
"value": "data_value"
}
]
```
</Accordion>
<Accordion title="google_contacts/delete_contact_group">
**Description:** Delete a contact group.
**Parameters:**
- `resourceName` (string, required): The resource name of the contact group to delete (e.g., 'contactGroups/myContactGroup')
- `deleteContacts` (boolean, optional): Whether to delete contacts in the group as well. Default: false
</Accordion>
</AccordionGroup>
## Usage Examples

View File

@@ -132,6 +132,297 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `endIndex` (integer, required): The end index of the range.
</Accordion>
<Accordion title="google_docs/create_document_with_content">
**Description:** Create a new Google Document with content in one action.
**Parameters:**
- `title` (string, required): The title for the new document. Appears at the top of the document and in Google Drive.
- `content` (string, optional): The text content to insert into the document. Use `\n` for new paragraphs.
</Accordion>
<Accordion title="google_docs/append_text">
**Description:** Append text to the end of a Google Document. Automatically inserts at the document end without needing to specify an index.
**Parameters:**
- `documentId` (string, required): The document ID from create_document response or URL.
- `text` (string, required): Text to append at the end of the document. Use `\n` for new paragraphs.
</Accordion>
<Accordion title="google_docs/set_text_bold">
**Description:** Make text bold or remove bold formatting in a Google Document.
**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of text to format.
- `endIndex` (integer, required): End position of text to format (exclusive).
- `bold` (boolean, required): Set `true` to make bold, `false` to remove bold.
</Accordion>
<Accordion title="google_docs/set_text_italic">
**Description:** Make text italic or remove italic formatting in a Google Document.
**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of text to format.
- `endIndex` (integer, required): End position of text to format (exclusive).
- `italic` (boolean, required): Set `true` to make italic, `false` to remove italic.
</Accordion>
<Accordion title="google_docs/set_text_underline">
**Description:** Add or remove underline formatting from text in a Google Document.
**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of text to format.
- `endIndex` (integer, required): End position of text to format (exclusive).
- `underline` (boolean, required): Set `true` to underline, `false` to remove underline.
</Accordion>
<Accordion title="google_docs/set_text_strikethrough">
**Description:** Add or remove strikethrough formatting from text in a Google Document.
**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of text to format.
- `endIndex` (integer, required): End position of text to format (exclusive).
- `strikethrough` (boolean, required): Set `true` to add strikethrough, `false` to remove.
</Accordion>
<Accordion title="google_docs/set_font_size">
**Description:** Change the font size of text in a Google Document.
**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of text to format.
- `endIndex` (integer, required): End position of text to format (exclusive).
- `fontSize` (number, required): Font size in points. Common sizes: 10, 11, 12, 14, 16, 18, 24, 36.
</Accordion>
<Accordion title="google_docs/set_text_color">
**Description:** Change the color of text using RGB values (0-1 scale) in a Google Document.
**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of text to format.
- `endIndex` (integer, required): End position of text to format (exclusive).
- `red` (number, required): Red component (0-1). Example: `1` for full red.
- `green` (number, required): Green component (0-1). Example: `0.5` for half green.
- `blue` (number, required): Blue component (0-1). Example: `0` for no blue.
</Accordion>
<Accordion title="google_docs/create_hyperlink">
**Description:** Turn existing text into a clickable hyperlink in a Google Document.
**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of text to make into a link.
- `endIndex` (integer, required): End position of text to make into a link (exclusive).
- `url` (string, required): The URL the link should point to. Example: `"https://example.com"`.
</Accordion>
<Accordion title="google_docs/apply_heading_style">
**Description:** Apply a heading or paragraph style to a text range in a Google Document.
**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of paragraph(s) to style.
- `endIndex` (integer, required): End position of paragraph(s) to style.
- `style` (string, required): The style to apply. Enum: `NORMAL_TEXT`, `TITLE`, `SUBTITLE`, `HEADING_1`, `HEADING_2`, `HEADING_3`, `HEADING_4`, `HEADING_5`, `HEADING_6`.
</Accordion>
<Accordion title="google_docs/set_paragraph_alignment">
**Description:** Set text alignment for paragraphs in a Google Document.
**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of paragraph(s) to align.
- `endIndex` (integer, required): End position of paragraph(s) to align.
- `alignment` (string, required): Text alignment. Enum: `START` (left), `CENTER`, `END` (right), `JUSTIFIED`.
</Accordion>
<Accordion title="google_docs/set_line_spacing">
**Description:** Set line spacing for paragraphs in a Google Document.
**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of paragraph(s).
- `endIndex` (integer, required): End position of paragraph(s).
- `lineSpacing` (number, required): Line spacing as percentage. `100` = single, `115` = 1.15x, `150` = 1.5x, `200` = double.
</Accordion>
<Accordion title="google_docs/create_paragraph_bullets">
**Description:** Convert paragraphs to a bulleted or numbered list in a Google Document.
**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of paragraphs to convert to list.
- `endIndex` (integer, required): End position of paragraphs to convert to list.
- `bulletPreset` (string, required): Bullet/numbering style. Enum: `BULLET_DISC_CIRCLE_SQUARE`, `BULLET_DIAMONDX_ARROW3D_SQUARE`, `BULLET_CHECKBOX`, `BULLET_ARROW_DIAMOND_DISC`, `BULLET_STAR_CIRCLE_SQUARE`, `NUMBERED_DECIMAL_ALPHA_ROMAN`, `NUMBERED_DECIMAL_ALPHA_ROMAN_PARENS`, `NUMBERED_DECIMAL_NESTED`, `NUMBERED_UPPERALPHA_ALPHA_ROMAN`, `NUMBERED_UPPERROMAN_UPPERALPHA_DECIMAL`.
</Accordion>
<Accordion title="google_docs/delete_paragraph_bullets">
**Description:** Remove bullets or numbering from paragraphs in a Google Document.
**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of list paragraphs.
- `endIndex` (integer, required): End position of list paragraphs.
</Accordion>
<Accordion title="google_docs/insert_table_with_content">
**Description:** Insert a table with content into a Google Document in one action. Provide content as a 2D array.
**Parameters:**
- `documentId` (string, required): The document ID.
- `rows` (integer, required): Number of rows in the table.
- `columns` (integer, required): Number of columns in the table.
- `index` (integer, optional): Position to insert the table. If not provided, the table is inserted at the end of the document.
- `content` (array, required): Table content as a 2D array. Each inner array is a row. Example: `[["Year", "Revenue"], ["2023", "$43B"], ["2024", "$45B"]]`.
</Accordion>
<Accordion title="google_docs/insert_table_row">
**Description:** Insert a new row above or below a reference cell in an existing table.
**Parameters:**
- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table. Get from get_document.
- `rowIndex` (integer, required): Row index (0-based) of reference cell.
- `columnIndex` (integer, optional): Column index (0-based) of reference cell. Default is `0`.
- `insertBelow` (boolean, optional): If `true`, insert below the reference row. If `false`, insert above. Default is `true`.
</Accordion>
<Accordion title="google_docs/insert_table_column">
**Description:** Insert a new column left or right of a reference cell in an existing table.
**Parameters:**
- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, optional): Row index (0-based) of reference cell. Default is `0`.
- `columnIndex` (integer, required): Column index (0-based) of reference cell.
- `insertRight` (boolean, optional): If `true`, insert to the right. If `false`, insert to the left. Default is `true`.
</Accordion>
<Accordion title="google_docs/delete_table_row">
**Description:** Delete a row from an existing table in a Google Document.
**Parameters:**
- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, required): Row index (0-based) to delete.
- `columnIndex` (integer, optional): Column index (0-based) of any cell in the row. Default is `0`.
</Accordion>
<Accordion title="google_docs/delete_table_column">
**Description:** Delete a column from an existing table in a Google Document.
**Parameters:**
- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, optional): Row index (0-based) of any cell in the column. Default is `0`.
- `columnIndex` (integer, required): Column index (0-based) to delete.
</Accordion>
<Accordion title="google_docs/merge_table_cells">
**Description:** Merge a range of table cells into a single cell. Content from all cells is preserved.
**Parameters:**
- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, required): Starting row index (0-based) for the merge.
- `columnIndex` (integer, required): Starting column index (0-based) for the merge.
- `rowSpan` (integer, required): Number of rows to merge.
- `columnSpan` (integer, required): Number of columns to merge.
</Accordion>
<Accordion title="google_docs/unmerge_table_cells">
**Description:** Unmerge previously merged table cells back into individual cells.
**Parameters:**
- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, required): Row index (0-based) of the merged cell.
- `columnIndex` (integer, required): Column index (0-based) of the merged cell.
- `rowSpan` (integer, required): Number of rows the merged cell spans.
- `columnSpan` (integer, required): Number of columns the merged cell spans.
</Accordion>
<Accordion title="google_docs/insert_inline_image">
**Description:** Insert an image from a public URL into a Google Document. The image must be publicly accessible, under 50MB, and in PNG/JPEG/GIF format.
**Parameters:**
- `documentId` (string, required): The document ID.
- `uri` (string, required): Public URL of the image. Must be accessible without authentication.
- `index` (integer, optional): Position to insert the image. If not provided, the image is inserted at the end of the document.
</Accordion>
<Accordion title="google_docs/insert_section_break">
**Description:** Insert a section break to create document sections with different formatting.
**Parameters:**
- `documentId` (string, required): The document ID.
- `index` (integer, required): Position to insert the section break.
- `sectionType` (string, required): The type of section break. Enum: `CONTINUOUS` (stays on same page), `NEXT_PAGE` (starts a new page).
</Accordion>
<Accordion title="google_docs/create_header">
**Description:** Create a header for the document. Returns a headerId which can be used with insert_text to add header content.
**Parameters:**
- `documentId` (string, required): The document ID.
- `type` (string, optional): Header type. Enum: `DEFAULT`. Default is `DEFAULT`.
</Accordion>
<Accordion title="google_docs/create_footer">
**Description:** Create a footer for the document. Returns a footerId which can be used with insert_text to add footer content.
**Parameters:**
- `documentId` (string, required): The document ID.
- `type` (string, optional): Footer type. Enum: `DEFAULT`. Default is `DEFAULT`.
</Accordion>
<Accordion title="google_docs/delete_header">
**Description:** Delete a header from the document. Use get_document to find the headerId.
**Parameters:**
- `documentId` (string, required): The document ID.
- `headerId` (string, required): The header ID to delete. Get from get_document response.
</Accordion>
<Accordion title="google_docs/delete_footer">
**Description:** Delete a footer from the document. Use get_document to find the footerId.
**Parameters:**
- `documentId` (string, required): The document ID.
- `footerId` (string, required): The footer ID to delete. Get from get_document response.
</Accordion>
</AccordionGroup>
## Usage Examples

View File

@@ -62,6 +62,22 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>
<Accordion title="google_slides/get_presentation_metadata">
**Description:** Get lightweight metadata about a presentation (title, slide count, slide IDs). Use this first before fetching full content.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation to retrieve.
</Accordion>
<Accordion title="google_slides/get_presentation_text">
**Description:** Extract all text content from a presentation. Returns slide IDs and text from shapes and tables only (no formatting).
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
</Accordion>
<Accordion title="google_slides/get_presentation">
**Description:** Retrieves a presentation by ID.
@@ -96,6 +112,15 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>
<Accordion title="google_slides/get_slide_text">
**Description:** Extract text content from a single slide. Returns only text from shapes and tables (no formatting or styling).
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `pageObjectId` (string, required): The ID of the slide/page to get text from.
</Accordion>
<Accordion title="google_slides/get_page">
**Description:** Retrieves a specific page by its ID.
@@ -114,6 +139,120 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>
<Accordion title="google_slides/create_slide">
**Description:** Add an additional blank slide to a presentation. New presentations already have one blank slide - check get_presentation_metadata first. For slides with title/body areas, use create_slide_with_layout instead.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `insertionIndex` (integer, optional): Where to insert the slide (0-based). If omitted, adds at the end.
</Accordion>
<Accordion title="google_slides/create_slide_with_layout">
**Description:** Create a slide with a predefined layout containing placeholder areas for title, body, etc. This is better than create_slide for structured content. After creating, use get_page to find placeholder IDs, then insert text into them.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `layout` (string, required): Layout type. One of: `BLANK`, `TITLE`, `TITLE_AND_BODY`, `TITLE_AND_TWO_COLUMNS`, `TITLE_ONLY`, `SECTION_HEADER`, `ONE_COLUMN_TEXT`, `MAIN_POINT`, `BIG_NUMBER`. TITLE_AND_BODY is best for title+description. TITLE for title-only slides. SECTION_HEADER for section dividers.
- `insertionIndex` (integer, optional): Where to insert (0-based). Omit to add at end.
</Accordion>
<Accordion title="google_slides/create_text_box">
**Description:** Create a text box on a slide with content. Use this for titles, descriptions, paragraphs - not tables. Optionally specify position (x, y) and size (width, height) in EMU units (914400 EMU = 1 inch).
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the text box to.
- `text` (string, required): The text content for the text box.
- `x` (integer, optional): X position in EMU (914400 = 1 inch). Default: 914400 (1 inch from left).
- `y` (integer, optional): Y position in EMU (914400 = 1 inch). Default: 914400 (1 inch from top).
- `width` (integer, optional): Width in EMU. Default: 7315200 (~8 inches).
- `height` (integer, optional): Height in EMU. Default: 914400 (~1 inch).
</Accordion>
<Accordion title="google_slides/delete_slide">
**Description:** Remove a slide from the presentation. Use get_presentation first to find the slide ID.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The object ID of the slide to delete. Get from get_presentation.
</Accordion>
<Accordion title="google_slides/duplicate_slide">
**Description:** Create a copy of an existing slide. The duplicate is inserted immediately after the original.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The object ID of the slide to duplicate. Get from get_presentation.
</Accordion>
<Accordion title="google_slides/move_slides">
**Description:** Reorder slides by moving them to a new position. Slide IDs must be in their current presentation order (no duplicates).
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideIds` (array of strings, required): Array of slide IDs to move. Must be in current presentation order.
- `insertionIndex` (integer, required): Target position (0-based). 0 = beginning, slide count = end.
</Accordion>
<Accordion title="google_slides/insert_youtube_video">
**Description:** Embed a YouTube video on a slide. The video ID is the value after "v=" in YouTube URLs (e.g., for youtube.com/watch?v=abc123, use "abc123").
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the video to. Get from get_presentation.
- `videoId` (string, required): The YouTube video ID (the value after v= in the URL).
</Accordion>
<Accordion title="google_slides/insert_drive_video">
**Description:** Embed a video from Google Drive on a slide. The file ID can be found in the Drive file URL.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the video to. Get from get_presentation.
- `fileId` (string, required): The Google Drive file ID of the video.
</Accordion>
<Accordion title="google_slides/set_slide_background_image">
**Description:** Set a background image for a slide. The image URL must be publicly accessible.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to set the background for. Get from get_presentation.
- `imageUrl` (string, required): Publicly accessible URL of the image to use as background.
</Accordion>
<Accordion title="google_slides/create_table">
**Description:** Create an empty table on a slide. To create a table with content, use create_table_with_content instead.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the table to. Get from get_presentation.
- `rows` (integer, required): Number of rows in the table.
- `columns` (integer, required): Number of columns in the table.
</Accordion>
<Accordion title="google_slides/create_table_with_content">
**Description:** Create a table with content in one action. Provide content as a 2D array where each inner array is a row. Example: [["Header1", "Header2"], ["Row1Col1", "Row1Col2"]].
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the table to. Get from get_presentation.
- `rows` (integer, required): Number of rows in the table.
- `columns` (integer, required): Number of columns in the table.
- `content` (array, required): Table content as 2D array. Each inner array is a row. Example: [["Year", "Revenue"], ["2023", "$10M"]].
</Accordion>
<Accordion title="google_slides/import_data_from_sheet">
**Description:** Imports data from a Google Sheet into a presentation.

View File

@@ -169,6 +169,16 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>
<Accordion title="microsoft_excel/get_table_data">
**Description:** Get data from a specific table in an Excel worksheet.
**Parameters:**
- `file_id` (string, required): The ID of the Excel file
- `worksheet_name` (string, required): Name of the worksheet
- `table_name` (string, required): Name of the table
</Accordion>
<Accordion title="microsoft_excel/create_chart">
**Description:** Create a chart in an Excel worksheet.
@@ -201,6 +211,15 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>
<Accordion title="microsoft_excel/get_used_range_metadata">
**Description:** Get the used range metadata (dimensions only, no data) of an Excel worksheet.
**Parameters:**
- `file_id` (string, required): The ID of the Excel file
- `worksheet_name` (string, required): Name of the worksheet
</Accordion>
<Accordion title="microsoft_excel/list_charts">
**Description:** Get all charts in an Excel worksheet.

View File

@@ -151,6 +151,49 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `item_id` (string, required): The ID of the file.
</Accordion>
<Accordion title="microsoft_onedrive/list_files_by_path">
**Description:** List files and folders in a specific OneDrive path.
**Parameters:**
- `folder_path` (string, required): The folder path (e.g., 'Documents/Reports').
- `top` (integer, optional): Number of items to retrieve (max 1000). Default is `50`.
- `orderby` (string, optional): Order by field (e.g., "name asc", "lastModifiedDateTime desc"). Default is "name asc".
</Accordion>
<Accordion title="microsoft_onedrive/get_recent_files">
**Description:** Get recently accessed files from OneDrive.
**Parameters:**
- `top` (integer, optional): Number of items to retrieve (max 200). Default is `25`.
</Accordion>
<Accordion title="microsoft_onedrive/get_shared_with_me">
**Description:** Get files and folders shared with the user.
**Parameters:**
- `top` (integer, optional): Number of items to retrieve (max 200). Default is `50`.
- `orderby` (string, optional): Order by field. Default is "name asc".
</Accordion>
<Accordion title="microsoft_onedrive/get_file_by_path">
**Description:** Get information about a specific file or folder by path.
**Parameters:**
- `file_path` (string, required): The file or folder path (e.g., 'Documents/report.docx').
</Accordion>
<Accordion title="microsoft_onedrive/download_file_by_path">
**Description:** Download a file from OneDrive by its path.
**Parameters:**
- `file_path` (string, required): The file path (e.g., 'Documents/report.docx').
</Accordion>
</AccordionGroup>
## Usage Examples

View File

@@ -133,6 +133,74 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `companyName` (string, optional): Contact's company name.
</Accordion>
<Accordion title="microsoft_outlook/get_message">
**Description:** Get a specific email message by ID.
**Parameters:**
- `message_id` (string, required): The unique identifier of the message. Obtain from get_messages action.
- `select` (string, optional): Comma-separated list of properties to return. Example: "id,subject,body,from,receivedDateTime". Default is "id,subject,body,from,toRecipients,receivedDateTime".
</Accordion>
<Accordion title="microsoft_outlook/reply_to_email">
**Description:** Reply to an email message.
**Parameters:**
- `message_id` (string, required): The unique identifier of the message to reply to. Obtain from get_messages action.
- `comment` (string, required): The reply message content. Can be plain text or HTML. The original message will be quoted below this content.
</Accordion>
<Accordion title="microsoft_outlook/forward_email">
**Description:** Forward an email message.
**Parameters:**
- `message_id` (string, required): The unique identifier of the message to forward. Obtain from get_messages action.
- `to_recipients` (array, required): Array of recipient email addresses to forward to. Example: ["john@example.com", "jane@example.com"].
- `comment` (string, optional): Optional message to include above the forwarded content. Can be plain text or HTML.
</Accordion>
<Accordion title="microsoft_outlook/mark_message_read">
**Description:** Mark a message as read or unread.
**Parameters:**
- `message_id` (string, required): The unique identifier of the message. Obtain from get_messages action.
- `is_read` (boolean, required): Set to true to mark as read, false to mark as unread.
</Accordion>
<Accordion title="microsoft_outlook/delete_message">
**Description:** Delete an email message.
**Parameters:**
- `message_id` (string, required): The unique identifier of the message to delete. Obtain from get_messages action.
</Accordion>
<Accordion title="microsoft_outlook/update_event">
**Description:** Update an existing calendar event.
**Parameters:**
- `event_id` (string, required): The unique identifier of the event. Obtain from get_calendar_events action.
- `subject` (string, optional): New subject/title for the event.
- `start_time` (string, optional): New start time in ISO 8601 format (e.g., "2024-01-20T10:00:00"). REQUIRED: Must also provide start_timezone when using this field.
- `start_timezone` (string, optional): Timezone for start time. REQUIRED when updating start_time. Examples: "Pacific Standard Time", "Eastern Standard Time", "UTC".
- `end_time` (string, optional): New end time in ISO 8601 format. REQUIRED: Must also provide end_timezone when using this field.
- `end_timezone` (string, optional): Timezone for end time. REQUIRED when updating end_time. Examples: "Pacific Standard Time", "Eastern Standard Time", "UTC".
- `location` (string, optional): New location for the event.
- `body` (string, optional): New body/description for the event. Supports HTML formatting.
</Accordion>
<Accordion title="microsoft_outlook/delete_event">
**Description:** Delete a calendar event.
**Parameters:**
- `event_id` (string, required): The unique identifier of the event to delete. Obtain from get_calendar_events action.
</Accordion>
</AccordionGroup>
## Usage Examples

View File

@@ -78,6 +78,17 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>
<Accordion title="microsoft_sharepoint/get_drives">
**Description:** List all document libraries (drives) in a SharePoint site. Use this to discover available libraries before using file operations.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `top` (integer, optional): Maximum number of drives to return per page (1-999). Default is 100
- `skip_token` (string, optional): Pagination token from a previous response to fetch the next page of results
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'id,name,webUrl,driveType')
</Accordion>
<Accordion title="microsoft_sharepoint/get_site_lists">
**Description:** Get all lists in a SharePoint site.
@@ -159,20 +170,317 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>
<Accordion title="microsoft_sharepoint/get_drive_items">
**Description:** Get files and folders from a SharePoint document library.
<Accordion title="microsoft_sharepoint/list_files">
**Description:** Retrieve files and folders from a SharePoint document library. By default lists the root folder, but you can navigate into subfolders by providing a folder_id.
**Parameters:**
- `site_id` (string, required): The ID of the SharePoint site
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `folder_id` (string, optional): The ID of the folder to list contents from. Use 'root' for the root folder, or provide a folder ID from a previous list_files call. Default is 'root'
- `top` (integer, optional): Maximum number of items to return per page (1-1000). Default is 50
- `skip_token` (string, optional): Pagination token from a previous response to fetch the next page of results
- `orderby` (string, optional): Sort order for results (e.g., 'name asc', 'size desc', 'lastModifiedDateTime desc'). Default is 'name asc'
- `filter` (string, optional): OData filter to narrow results (e.g., 'file ne null' for files only, 'folder ne null' for folders only)
- `select` (string, optional): Comma-separated list of fields to return (e.g., 'id,name,size,folder,file,webUrl,lastModifiedDateTime')
</Accordion>
<Accordion title="microsoft_sharepoint/delete_drive_item">
**Description:** Delete a file or folder from SharePoint document library.
<Accordion title="microsoft_sharepoint/delete_file">
**Description:** Delete a file or folder from a SharePoint document library. For folders, all contents are deleted recursively. Items are moved to the site recycle bin.
**Parameters:**
- `site_id` (string, required): The ID of the SharePoint site
- `item_id` (string, required): The ID of the file or folder to delete
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the file or folder to delete. Obtain from list_files
</Accordion>
<Accordion title="microsoft_sharepoint/list_files_by_path">
**Description:** List files and folders in a SharePoint document library folder by its path. More efficient than multiple list_files calls for deep navigation.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `folder_path` (string, required): The full path to the folder without leading/trailing slashes (e.g., 'Documents', 'Reports/2024/Q1')
- `top` (integer, optional): Maximum number of items to return per page (1-1000). Default is 50
- `skip_token` (string, optional): Pagination token from a previous response to fetch the next page of results
- `orderby` (string, optional): Sort order for results (e.g., 'name asc', 'size desc'). Default is 'name asc'
- `select` (string, optional): Comma-separated list of fields to return (e.g., 'id,name,size,folder,file,webUrl,lastModifiedDateTime')
</Accordion>
<Accordion title="microsoft_sharepoint/download_file">
**Description:** Download raw file content from a SharePoint document library. Use only for plain text files (.txt, .csv, .json). For Excel files, use the Excel-specific actions. For Word files, use get_word_document_content.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the file to download. Obtain from list_files or list_files_by_path
</Accordion>
<Accordion title="microsoft_sharepoint/get_file_info">
**Description:** Retrieve detailed metadata for a specific file or folder in a SharePoint document library, including name, size, created/modified dates, and author information.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the file or folder. Obtain from list_files or list_files_by_path
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'id,name,size,createdDateTime,lastModifiedDateTime,webUrl,createdBy,lastModifiedBy')
</Accordion>
<Accordion title="microsoft_sharepoint/create_folder">
**Description:** Create a new folder in a SharePoint document library. By default creates the folder in the root; use parent_id to create subfolders.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `folder_name` (string, required): Name for the new folder. Cannot contain: \ / : * ? " < > |
- `parent_id` (string, optional): The ID of the parent folder. Use 'root' for the document library root, or provide a folder ID from list_files. Default is 'root'
</Accordion>
<Accordion title="microsoft_sharepoint/search_files">
**Description:** Search for files and folders in a SharePoint document library by keywords. Searches file names, folder names, and file contents for Office documents. Do not use wildcards or special characters.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `query` (string, required): Search keywords (e.g., 'report', 'budget 2024'). Wildcards like *.txt are not supported
- `top` (integer, optional): Maximum number of results to return per page (1-1000). Default is 50
- `skip_token` (string, optional): Pagination token from a previous response to fetch the next page of results
- `select` (string, optional): Comma-separated list of fields to return (e.g., 'id,name,size,folder,file,webUrl,lastModifiedDateTime')
</Accordion>
<Accordion title="microsoft_sharepoint/copy_file">
**Description:** Copy a file or folder to a new location within SharePoint. The original item remains unchanged. The copy operation is asynchronous for large files.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the file or folder to copy. Obtain from list_files or search_files
- `destination_folder_id` (string, required): The ID of the destination folder. Use 'root' for the root folder, or a folder ID from list_files
- `new_name` (string, optional): New name for the copy. If not provided, the original name is used
</Accordion>
<Accordion title="microsoft_sharepoint/move_file">
**Description:** Move a file or folder to a new location within SharePoint. The item is removed from its original location. For folders, all contents are moved as well.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the file or folder to move. Obtain from list_files or search_files
- `destination_folder_id` (string, required): The ID of the destination folder. Use 'root' for the root folder, or a folder ID from list_files
- `new_name` (string, optional): New name for the moved item. If not provided, the original name is kept
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_worksheets">
**Description:** List all worksheets (tabs) in an Excel workbook stored in a SharePoint document library. Use the returned worksheet name with other Excel actions.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'id,name,position,visibility')
- `filter` (string, optional): OData filter expression (e.g., "visibility eq 'Visible'" to exclude hidden sheets)
- `top` (integer, optional): Maximum number of worksheets to return. Minimum: 1, Maximum: 999
- `orderby` (string, optional): Sort order (e.g., 'position asc' to return sheets in tab order)
</Accordion>
<Accordion title="microsoft_sharepoint/create_excel_worksheet">
**Description:** Create a new worksheet (tab) in an Excel workbook stored in a SharePoint document library. The new sheet is added at the end of the tab list.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `name` (string, required): Name for the new worksheet. Maximum 31 characters. Cannot contain: \ / * ? : [ ]. Must be unique within the workbook
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_range_data">
**Description:** Retrieve cell values from a specific range in an Excel worksheet stored in SharePoint. For reading all data without knowing dimensions, use get_excel_used_range instead.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet (tab) to read from. Obtain from get_excel_worksheets. Case-sensitive
- `range` (string, required): Cell range in A1 notation (e.g., 'A1:C10', 'A:C', '1:5', 'A1')
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'address,values,formulas,numberFormat,text')
</Accordion>
<Accordion title="microsoft_sharepoint/update_excel_range_data">
**Description:** Write values to a specific range in an Excel worksheet stored in SharePoint. Overwrites existing cell contents. The values array dimensions must match the range dimensions exactly.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet (tab) to update. Obtain from get_excel_worksheets. Case-sensitive
- `range` (string, required): Cell range in A1 notation where values will be written (e.g., 'A1:C3' for a 3x3 block)
- `values` (array, required): 2D array of values (rows containing cells). Example for A1:B2: [["Header1", "Header2"], ["Value1", "Value2"]]. Use null to clear a cell
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_used_range_metadata">
**Description:** Return only the metadata (address and dimensions) of the used range in a worksheet, without the actual cell values. Ideal for large files to understand spreadsheet size before reading data in chunks.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet (tab) to read. Obtain from get_excel_worksheets. Case-sensitive
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_used_range">
**Description:** Retrieve all cells containing data in a worksheet stored in SharePoint. Do not use for files larger than 2MB. For large files, use get_excel_used_range_metadata first, then get_excel_range_data to read in smaller chunks.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet (tab) to read. Obtain from get_excel_worksheets. Case-sensitive
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'address,values,formulas,numberFormat,text,rowCount,columnCount')
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_cell">
**Description:** Retrieve the value of a single cell by row and column index from an Excel file in SharePoint. Indices are 0-based (row 0 = Excel row 1, column 0 = column A).
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet (tab). Obtain from get_excel_worksheets. Case-sensitive
- `row` (integer, required): 0-based row index (row 0 = Excel row 1). Valid range: 0-1048575
- `column` (integer, required): 0-based column index (column 0 = A, column 1 = B). Valid range: 0-16383
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'address,values,formulas,numberFormat,text')
</Accordion>
<Accordion title="microsoft_sharepoint/add_excel_table">
**Description:** Convert a cell range into a formatted Excel table with filtering, sorting, and structured data capabilities. Tables enable add_excel_table_row for appending data.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet containing the data range. Obtain from get_excel_worksheets
- `range` (string, required): Cell range to convert into a table, including headers and data (e.g., 'A1:D10' where A1:D1 contains column headers)
- `has_headers` (boolean, optional): Set to true if the first row contains column headers. Default is true
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_tables">
**Description:** List all tables in a specific Excel worksheet stored in SharePoint. Returns table properties including id, name, showHeaders, and showTotals.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet to get tables from. Obtain from get_excel_worksheets
</Accordion>
<Accordion title="microsoft_sharepoint/add_excel_table_row">
**Description:** Append a new row to the end of an Excel table in a SharePoint file. The values array must have the same number of elements as the table has columns.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet containing the table. Obtain from get_excel_worksheets
- `table_name` (string, required): Name of the table to add the row to (e.g., 'Table1'). Obtain from get_excel_tables. Case-sensitive
- `values` (array, required): Array of cell values for the new row, one per column in table order (e.g., ["John Doe", "john@example.com", 25])
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_table_data">
**Description:** Get all rows from an Excel table in a SharePoint file as a data range. Easier than get_excel_range_data when working with structured tables since you don't need to know the exact range.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet containing the table. Obtain from get_excel_worksheets
- `table_name` (string, required): Name of the table to get data from (e.g., 'Table1'). Obtain from get_excel_tables. Case-sensitive
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'address,values,formulas,numberFormat,text')
</Accordion>
<Accordion title="microsoft_sharepoint/create_excel_chart">
**Description:** Create a chart visualization in an Excel worksheet stored in SharePoint from a data range. The chart is embedded in the worksheet.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet where the chart will be created. Obtain from get_excel_worksheets
- `chart_type` (string, required): Chart type (e.g., 'ColumnClustered', 'ColumnStacked', 'Line', 'LineMarkers', 'Pie', 'Bar', 'BarClustered', 'Area', 'Scatter', 'Doughnut')
- `source_data` (string, required): Data range for the chart in A1 notation, including headers (e.g., 'A1:B10')
- `series_by` (string, optional): How data series are organized: 'Auto', 'Columns', or 'Rows'. Default is 'Auto'
</Accordion>
<Accordion title="microsoft_sharepoint/list_excel_charts">
**Description:** List all charts embedded in an Excel worksheet stored in SharePoint. Returns chart properties including id, name, chartType, height, width, and position.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet to list charts from. Obtain from get_excel_worksheets
</Accordion>
<Accordion title="microsoft_sharepoint/delete_excel_worksheet">
**Description:** Permanently remove a worksheet (tab) and all its contents from an Excel workbook stored in SharePoint. Cannot be undone. A workbook must have at least one worksheet.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet to delete. Case-sensitive. All data, tables, and charts on this sheet will be permanently removed
</Accordion>
<Accordion title="microsoft_sharepoint/delete_excel_table">
**Description:** Remove a table from an Excel worksheet in SharePoint. This deletes the table structure (filtering, formatting, table features) but preserves the underlying cell data.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet containing the table. Obtain from get_excel_worksheets
- `table_name` (string, required): Name of the table to delete (e.g., 'Table1'). Obtain from get_excel_tables. The data in the cells will remain after table deletion
</Accordion>
<Accordion title="microsoft_sharepoint/list_excel_names">
**Description:** Retrieve all named ranges defined in an Excel workbook stored in SharePoint. Named ranges are user-defined labels for cell ranges (e.g., 'SalesData' for A1:D100).
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
</Accordion>
<Accordion title="microsoft_sharepoint/get_word_document_content">
**Description:** Download and extract text content from a Word document (.docx) stored in a SharePoint document library. This is the recommended way to read Word documents from SharePoint.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Word document (.docx) in SharePoint. Obtain from list_files or search_files
</Accordion>
</AccordionGroup>
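To make the Excel actions above concrete, here is a minimal sketch in the same style as the other integration examples; the workbook, worksheet, and table names are illustrative:
```python
from crewai import Agent, Task, Crew

# Agent specialized in working with Excel files stored in SharePoint
spreadsheet_analyst = Agent(
    role="Spreadsheet Analyst",
    goal="Read and summarize data from Excel workbooks in SharePoint",
    backstory="An AI assistant experienced with SharePoint document libraries and Excel data.",
    apps=['microsoft_sharepoint/search_files', 'microsoft_sharepoint/get_excel_used_range_metadata', 'microsoft_sharepoint/get_excel_table_data']
)

# Task that follows the recommended metadata-first flow for large files
summarize_task = Task(
    description="Find the 'Sales2024' workbook, check the used-range size of the 'Summary' worksheet, then read the 'Table1' table and summarize its rows.",
    agent=spreadsheet_analyst,
    expected_output="A short summary of the table data with row and column counts."
)

crew = Crew(
    agents=[spreadsheet_analyst],
    tasks=[summarize_task]
)

crew.kickoff()
```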

View File

@@ -108,6 +108,86 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `join_web_url` (string, required): The join web URL of the meeting to search for.
</Accordion>
<Accordion title="microsoft_teams/search_online_meetings_by_meeting_id">
**Description:** Search online meetings by external Meeting ID.
**Parameters:**
- `join_meeting_id` (string, required): The meeting ID (numeric code) that attendees use to join. This is the joinMeetingId shown in meeting invitations, not the Graph API meeting id.
</Accordion>
<Accordion title="microsoft_teams/get_meeting">
**Description:** Get details of a specific online meeting.
**Parameters:**
- `meeting_id` (string, required): The Graph API meeting ID (a long alphanumeric string). Obtain from create_meeting or search_online_meetings actions. Different from the numeric joinMeetingId.
</Accordion>
<Accordion title="microsoft_teams/get_team_members">
**Description:** Get members of a specific team.
**Parameters:**
- `team_id` (string, required): The unique identifier of the team. Obtain from get_teams action.
- `top` (integer, optional): Maximum number of members to retrieve per page (1-999). Default is `100`.
- `skip_token` (string, optional): Pagination token from a previous response. When the response includes @odata.nextLink, extract the $skiptoken parameter value and pass it here to get the next page of results.
</Accordion>
<Accordion title="microsoft_teams/create_channel">
**Description:** Create a new channel in a team.
**Parameters:**
- `team_id` (string, required): The unique identifier of the team. Obtain from get_teams action.
- `display_name` (string, required): Name of the channel as displayed in Teams. Must be unique within the team. Max 50 characters.
- `description` (string, optional): Optional description explaining the channel's purpose. Visible in channel details. Max 1024 characters.
- `membership_type` (string, optional): Channel visibility. Enum: `standard`, `private`. "standard" = visible to all team members, "private" = visible only to specifically added members. Default is `standard`.
</Accordion>
<Accordion title="microsoft_teams/get_message_replies">
**Description:** Get replies to a specific message in a channel.
**Parameters:**
- `team_id` (string, required): The unique identifier of the team. Obtain from get_teams action.
- `channel_id` (string, required): The unique identifier of the channel. Obtain from get_channels action.
- `message_id` (string, required): The unique identifier of the parent message. Obtain from get_messages action.
- `top` (integer, optional): Maximum number of replies to retrieve per page (1-50). Default is `50`.
- `skip_token` (string, optional): Pagination token from a previous response. When the response includes @odata.nextLink, extract the $skiptoken parameter value and pass it here to get the next page of results.
</Accordion>
<Accordion title="microsoft_teams/reply_to_message">
**Description:** Reply to a message in a Teams channel.
**Parameters:**
- `team_id` (string, required): The unique identifier of the team. Obtain from get_teams action.
- `channel_id` (string, required): The unique identifier of the channel. Obtain from get_channels action.
- `message_id` (string, required): The unique identifier of the message to reply to. Obtain from get_messages action.
- `message` (string, required): The reply content. For HTML, include formatting tags. For text, plain text only.
- `content_type` (string, optional): Content format. Enum: `html`, `text`. "text" for plain text, "html" for rich text with formatting. Default is `text`.
</Accordion>
<Accordion title="microsoft_teams/update_meeting">
**Description:** Update an existing online meeting.
**Parameters:**
- `meeting_id` (string, required): The unique identifier of the meeting. Obtain from create_meeting or search_online_meetings actions.
- `subject` (string, optional): New meeting title.
- `startDateTime` (string, optional): New start time in ISO 8601 format with timezone. Example: "2024-01-20T10:00:00-08:00".
- `endDateTime` (string, optional): New end time in ISO 8601 format with timezone.
</Accordion>
<Accordion title="microsoft_teams/delete_meeting">
**Description:** Delete an online meeting.
**Parameters:**
- `meeting_id` (string, required): The unique identifier of the meeting to delete. Obtain from create_meeting or search_online_meetings actions.
</Accordion>
</AccordionGroup>
## Usage Examples
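As a minimal sketch of the Teams actions above (the channel and task wording are illustrative):
```python
from crewai import Agent, Task, Crew

# Agent that monitors a channel and replies to threads
teams_assistant = Agent(
    role="Teams Channel Assistant",
    goal="Keep channel threads answered and organized",
    backstory="An AI assistant that helps teams stay on top of channel conversations.",
    apps=['microsoft_teams/get_message_replies', 'microsoft_teams/reply_to_message']
)

# Task to answer an open question in a thread
reply_task = Task(
    description="Check the replies to the pinned status message in the 'general' channel and post a short summary reply if no one has responded yet.",
    agent=teams_assistant,
    expected_output="A reply posted to the thread, or a note that it was already answered."
)

crew = Crew(
    agents=[teams_assistant],
    tasks=[reply_task]
)

crew.kickoff()
```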

View File

@@ -98,6 +98,26 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `file_id` (string, required): The ID of the document to delete.
</Accordion>
<Accordion title="microsoft_word/copy_document">
**Description:** Copy a document to a new location in OneDrive.
**Parameters:**
- `file_id` (string, required): The ID of the document to copy
- `name` (string, optional): New name for the copied document
- `parent_id` (string, optional): The ID of the destination folder (defaults to root)
</Accordion>
<Accordion title="microsoft_word/move_document">
**Description:** Move a document to a new location in OneDrive.
**Parameters:**
- `file_id` (string, required): The ID of the document to move
- `parent_id` (string, required): The ID of the destination folder
- `name` (string, optional): New name for the moved document
</Accordion>
</AccordionGroup>
## Usage Examples
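As a minimal sketch of the copy/move actions above (the document and folder names are illustrative):
```python
from crewai import Agent, Task, Crew

# Agent that manages Word documents in OneDrive
document_librarian = Agent(
    role="Document Librarian",
    goal="Keep Word documents copied and filed in the right folders",
    backstory="An AI assistant that organizes document libraries.",
    apps=['microsoft_word/copy_document', 'microsoft_word/move_document']
)

# Task to back up and archive a document
archive_task = Task(
    description="Copy the contract document as 'Contract-backup', then move the original into the 'Archive' folder.",
    agent=document_librarian,
    expected_output="The backup copy was created and the original moved to 'Archive'."
)

crew = Crew(
    agents=[document_librarian],
    tasks=[archive_task]
)

crew.kickoff()
```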

View File

@@ -0,0 +1,61 @@
---
title: Coding Tools
description: Use AGENTS.md to guide coding agents and IDEs across your CrewAI projects.
icon: terminal
mode: "wide"
---
## Why AGENTS.md
`AGENTS.md` is a lightweight, repo-local instruction file that gives coding agents consistent, project-specific guidance. Keep it in the project root and treat it as the source of truth for how you want assistants to work: conventions, commands, architecture notes, and guardrails.
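As an illustration, a minimal `AGENTS.md` might look like the following; the sections and commands are examples, not a required schema:
```md
# AGENTS.md

## Project overview
A CrewAI project with one crew and two flows; see src/ for the package layout.

## Commands
- Install dependencies: `uv sync`
- Run tests: `uv run pytest`
- Lint: `uv run ruff check .`

## Conventions
- Python 3.12, type hints everywhere, docstrings on public functions.
- Keep agent and task definitions in YAML config where possible.

## Guardrails
- Never commit secrets; configuration comes from `.env`.
```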
## Create a Project with the CLI
Use the CrewAI CLI to scaffold a project; `AGENTS.md` is added automatically at the project root.
```bash
# Crew
crewai create crew my_crew
# Flow
crewai create flow my_flow
# Tool repository
crewai tool create my_tool
```
## Tool Setup: Point Assistants to AGENTS.md
### Codex
Codex can be guided by `AGENTS.md` files placed in your repository. Use them to supply persistent project context such as conventions, commands, and workflow expectations.
### Claude Code
Claude Code stores project memory in `CLAUDE.md`. You can bootstrap it with `/init` and edit it using `/memory`. Claude Code also supports imports inside `CLAUDE.md`, so you can add a single line like `@AGENTS.md` to pull in the shared instructions without duplicating them.
Alternatively, you can simply rename the file:
```bash
mv AGENTS.md CLAUDE.md
```
### Gemini CLI and Google Antigravity
Gemini CLI and Antigravity load a project context file (default: `GEMINI.md`) from the repo root and parent directories. You can configure them to read `AGENTS.md` instead (or in addition) by setting `context.fileName` in your Gemini CLI settings. For example, set it to `AGENTS.md` only, or include both `AGENTS.md` and `GEMINI.md` if you want to keep each tool's format.
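A settings entry along the following lines keeps both files loaded; treat the exact file location and nesting as an assumption to verify against your Gemini CLI version:
```json
{
  "context": {
    "fileName": ["AGENTS.md", "GEMINI.md"]
  }
}
```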
Alternatively, you can simply rename the file:
```bash
mv AGENTS.md GEMINI.md
```
### Cursor
Cursor supports `AGENTS.md` as a project instruction file. Place it at the project root to provide guidance for Cursor's coding assistant.
### Windsurf
Claude Code provides an official integration with Windsurf. If you use Claude Code inside Windsurf, follow the Claude Code guidance above and import `AGENTS.md` from `CLAUDE.md`.
If you are using Windsurf's native assistant, configure its project rules or instructions feature (if available) to read from `AGENTS.md`, or paste the contents directly.

View File

@@ -151,3 +151,9 @@ HITL workflows are particularly valuable for:
- Sensitive or high-stakes operations
- Creative tasks requiring human judgment
- Compliance and regulatory reviews
## Enterprise Features
<Card title="Flow HITL Management Platform" icon="users-gear" href="/en/enterprise/features/flow-hitl-management">
CrewAI Enterprise provides a comprehensive HITL management system for Flows with in-platform review, responder assignment, permissions, escalation policies, SLA management, dynamic routing, and full analytics. [Learn more →](/en/enterprise/features/flow-hitl-management)
</Card>

View File

@@ -15,6 +15,29 @@ Along with that provides the ability for the Agent to update the database based
**Attention**: Make sure the Agent has access to a read replica, or that it is acceptable for the Agent to run insert/update queries against the database.
## Security Model
`NL2SQLTool` is an execution-capable tool. It runs model-generated SQL directly against the configured database connection.
This means risk depends on your deployment choices:
- Which credentials you provide in `db_uri`
- Whether untrusted input can influence prompts
- Whether you add tool-call guardrails before execution
If you route untrusted input to agents using this tool, treat it as a high-risk integration.
## Hardening Recommendations
Use all of the following in production:
- Use a read-only database user whenever possible
- Prefer a read replica for analytics/retrieval workloads
- Grant least privilege (no superuser/admin roles, no file/system-level capabilities)
- Apply database-side resource limits (statement timeout, lock timeout, cost/row limits)
- Add `before_tool_call` hooks to enforce allowed query patterns (see the sketch below)
- Enable query logging and alerting for destructive statements
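As a minimal sketch of such a guardrail, the function below allows only a single read-only `SELECT` statement and rejects destructive keywords. The hook wiring varies by CrewAI version, so treat the registration itself as an assumption; this check complements, rather than replaces, database-side limits:
```python
import re

# Keywords we never want model-generated SQL to execute
FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|truncate|grant|create)\b", re.IGNORECASE
)

def sql_guard(sql: str) -> str:
    """Raise if the generated SQL is not a single read-only SELECT."""
    statement = sql.strip().rstrip(";")
    if ";" in statement:
        raise ValueError("Multiple SQL statements are not allowed")
    if not statement.lower().startswith("select"):
        raise ValueError("Only SELECT statements are allowed")
    if FORBIDDEN.search(statement):
        raise ValueError("Destructive SQL keyword detected")
    return statement

# Call sql_guard(...) on the tool input from a before_tool_call hook
# before letting NL2SQLTool execute it (hook signature depends on version).
```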
## Requirements
- SQLAlchemy

Six binary image files added (not shown).

View File

@@ -4,6 +4,74 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="January 26, 2026">
## v1.9.0
[View the GitHub release](https://github.com/crewAIInc/crewAI/releases/tag/1.9.0)
## Changes
### Features
- Added structured output and response_format support across providers
- Added response IDs to streaming responses
- Added event ordering with parent-child hierarchy
- Added Keycloak SSO authentication support
- Added multimodal file handling
- Added native OpenAI responses API support
- Added A2A task execution utilities
- Added A2A server configuration and agent card generation
- Enhanced the event system and expanded transport options
- Improved the tool-calling mechanism
### Bug Fixes
- Enhanced file storage with a fallback in-memory cache when aiocache is unavailable
- Ensured document lists are never empty
- Properly handled Bedrock stop sequences
- Added Google Vertex API key support
- Improved stop-word detection for Azure models
- Improved HumanFeedbackPending error handling during flow execution
- Fixed execution span task disassociation
### Documentation
- Added native file handling documentation
- Added OpenAI responses API documentation
- Added an agent card implementation guide
- Improved A2A documentation
- Updated the v1.8.0 changelog
### Contributors
@Anaisdg, @GininDenis, @Vidit-Ostwal, @greysonlalonde, @heitorado, @joaomdmoura, @koushiv777, @lorenzejay, @nicoferdi96, @vinibrsl
</Update>
<Update label="January 15, 2026">
## v1.8.1
[View the GitHub release](https://github.com/crewAIInc/crewAI/releases/tag/1.8.1)
## Changes
### Features
- Added A2A task execution utilities
- Added A2A server configuration and agent card generation
- Added additional transport mechanisms
- Added Galileo integration support
### Bug Fixes
- Improved Azure model compatibility
- Extended frame inspection depth for parent_flow detection
- Resolved task execution span management issues
- Enhanced error handling for human feedback scenarios during flow execution
### Documentation
- Added A2A agent card documentation
- Added PII redaction documentation
### Contributors
@Anaisdg, @GininDenis, @greysonlalonde, @joaomdmoura, @koushiv777, @lorenzejay, @vinibrsl
</Update>
<Update label="January 8, 2026">
## v1.8.0
View File

@@ -0,0 +1,563 @@
---
title: "Flow HITL Management"
description: "Enterprise-grade human review for Flows with email-first notifications, routing rules, and auto-respond"
icon: "users-gear"
mode: "wide"
---
<Note>
Flow HITL management requires the `@human_feedback` decorator, available in **CrewAI version 1.8.0 and later**. This feature applies only to **Flows**, not Crews.
</Note>
CrewAI Enterprise provides a comprehensive Human-in-the-Loop (HITL) management system for Flows that turns AI workflows into collaborative human-AI processes. The platform uses an **email-first architecture**, so anyone with an email address can respond to review requests without a platform account.
## Overview
<CardGroup cols={3}>
<Card title="Email-First Design" icon="envelope">
Responders can provide feedback by replying directly to notification emails
</Card>
<Card title="Flexible Routing" icon="route">
Route requests to specific emails based on method patterns or Flow state
</Card>
<Card title="Auto-Respond" icon="clock">
Configure automatic fallback responses when no human responds in time
</Card>
</CardGroup>
### Key Benefits
- **Simple mental model**: Email addresses are universal; there are no platform users or roles to manage
- **External responders**: Anyone with an email can respond, even non-platform users
- **Dynamic assignment**: Pull the assignee's email directly from Flow state (e.g., `sales_rep_email`)
- **Streamlined configuration**: Fewer things to set up means faster time to value
- **Email as the primary channel**: Most users prefer replying by email over logging into a dashboard
## Setting Up Human Review Points in a Flow
Use the `@human_feedback` decorator to configure human review checkpoints inside a Flow. When execution reaches a review point, the system pauses, notifies the assignee by email, and waits for a response.
```python
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult

class ContentApprovalFlow(Flow):
    @start()
    def generate_content(self):
        # AI generates the content
        return "Generating marketing copy for the Q1 campaign..."

    @listen(generate_content)
    @human_feedback(
        message="Please review this content for brand compliance:",
        emit=["approved", "rejected", "needs_revision"],
    )
    def review_content(self, content):
        return content

    @listen("approved")
    def publish_content(self, result: HumanFeedbackResult):
        print(f"Publishing approved content. Reviewer notes: {result.feedback}")

    @listen("rejected")
    def archive_content(self, result: HumanFeedbackResult):
        print(f"Content rejected. Reason: {result.feedback}")

    @listen("needs_revision")
    def revise_content(self, result: HumanFeedbackResult):
        print(f"Revision requested: {result.feedback}")
```
For full implementation details, see the [Human Feedback in Flows](/ko/learn/human-feedback-in-flows) guide.
### Decorator Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `message` | `str` | The message shown to the human reviewer |
| `emit` | `list[str]` | Valid response options (shown as buttons in the UI) |
## Platform Configuration
Access the HITL configuration at: **Deployment** → **Settings** → **Human in the Loop Configuration**
<Frame>
<img src="/images/enterprise/hitl-settings-overview.png" alt="HITL configuration settings" />
</Frame>
### Email Notifications
A toggle that enables or disables email notifications for HITL requests.
| Setting | Default | Description |
|---------|---------|-------------|
| Email notifications | Enabled | Send an email when feedback is requested |
<Note>
When disabled, responders must use the dashboard UI, or you can configure webhooks for a custom notification system.
</Note>
### SLA Target
Set a target response time for tracking and metrics purposes.
| Setting | Description |
|---------|-------------|
| SLA target (minutes) | Target response time, used for dashboard metrics and SLA tracking |
Leave it blank to disable SLA tracking.
## Email Notifications and Responses
The HITL system uses an email-first architecture that lets responders reply directly to notification emails.
### How Email Responses Work
<Steps>
<Step title="Notification sent">
When a HITL request is created, an email with the review content and context is sent to the assigned responder.
</Step>
<Step title="Reply-To address">
The email carries a special reply-to address containing a signed token for authentication.
</Step>
<Step title="User replies">
The responder simply replies to the email with their feedback—no login required.
</Step>
<Step title="Token validation">
The platform receives the reply, verifies the signed token, and matches the sender's email.
</Step>
<Step title="Flow resumes">
The feedback is recorded and the Flow continues with the human input.
</Step>
</Steps>
### Response Format
Responders can reply with:
- **Emit options**: If the reply matches an `emit` option (e.g., "approved"), it is used directly
- **Free-form text**: Any text response is passed to the Flow as feedback
- **Plain text**: The first line of the reply body is used as the feedback
### Confirmation Emails
After a reply is processed, the responder receives a confirmation email indicating whether the feedback was submitted successfully or an error occurred.
### Email Token Security
- Tokens are cryptographically signed for security
- Tokens expire after 7 days
- The sender's email must match the authorized email in the token
- A confirmation/error email is sent after processing
## Routing Rules
Route HITL requests to specific email addresses based on method patterns.
<Frame>
<img src="/images/enterprise/hitl-settings-routing-rules.png" alt="HITL routing rules configuration" />
</Frame>
### Rule Structure
```json
{
  "name": "Approvals to finance team",
  "match": {
    "method_name": "approve_*"
  },
  "assign_to_email": "finance@company.com",
  "assign_from_input": "manager_email"
}
```
### Match Patterns
| Pattern | Description | Example matches |
|---------|-------------|-----------------|
| `approve_*` | Wildcard (any characters) | `approve_payment`, `approve_vendor` |
| `review_?` | Single character | `review_a`, `review_1` |
| `validate_payment` | Exact match | `validate_payment` only |
### Assignment Priority
1. **Dynamic assignment** (`assign_from_input`): If configured, pulls the email from Flow state
2. **Static email** (`assign_to_email`): Falls back to the configured email
3. **Deployment creator**: If no rule matches, the deployment creator's email is used
### Dynamic Assignment Example
If the Flow state contains `{"sales_rep_email": "alice@company.com"}`:
```json
{
  "name": "Route to sales rep",
  "match": {
    "method_name": "review_*"
  },
  "assign_from_input": "sales_rep_email"
}
```
The request is automatically assigned to `alice@company.com`.
<Tip>
**Use case**: Pull the assignee from a CRM, database, or earlier Flow step to dynamically route reviews to the right person.
</Tip>
## Auto-Respond
Automatically answer HITL requests when no human responds in time, ensuring Flows never stall indefinitely.
### Configuration
| Setting | Description |
|---------|-------------|
| Enabled | Toggle to enable auto-respond |
| Timeout (minutes) | How long to wait before responding automatically |
| Default outcome | The response value (must match an `emit` option) |
<Frame>
<img src="/images/enterprise/hitl-settings-auto-respond.png" alt="HITL auto-respond configuration" />
</Frame>
### Use Cases
- **SLA compliance**: Guarantee Flows never stall indefinitely
- **Default approval**: Auto-approve low-risk requests after a timeout
- **Graceful degradation**: Continue with a safe default when no reviewer is available
<Warning>
Use auto-respond carefully. Enable it only for non-critical reviews where a default response is acceptable.
</Warning>
## The Review Process
### Dashboard Interface
The HITL review interface gives reviewers a clean, focused experience:
- **Markdown rendering**: Richly formatted review content with syntax highlighting
- **Context panel**: View Flow state, execution history, and related information
- **Feedback input**: Provide detailed feedback and comments along with the decision
- **Quick actions**: One-click emit-option buttons with optional comments
<Frame>
<img src="/images/enterprise/hitl-list-pending-feedbacks.png" alt="HITL pending requests list" />
</Frame>
### Response Methods
Reviewers can respond through three channels:
| Method | Description |
|--------|-------------|
| **Email reply** | Reply directly to the notification email |
| **Dashboard** | Use the Enterprise dashboard UI |
| **API/Webhook** | Respond programmatically via the API |
### History and Audit Trail
Every HITL interaction is tracked with a complete timeline:
- Decision record (approved/rejected/revised)
- Reviewer identity and timestamp
- Feedback and comments provided
- Response method (email/dashboard/API)
- Response time metrics
## Analytics and Monitoring
Track HITL performance with comprehensive analytics.
### Performance Dashboard
<Frame>
<img src="/images/enterprise/hitl-metrics.png" alt="HITL metrics dashboard" />
</Frame>
<CardGroup cols={2}>
<Card title="Response Times" icon="stopwatch">
Monitor average and median response times by reviewer or Flow.
</Card>
<Card title="Volume Trends" icon="chart-bar">
Analyze review volume patterns to optimize team capacity.
</Card>
<Card title="Decision Distribution" icon="chart-pie">
View approval/rejection ratios across different review types.
</Card>
<Card title="SLA Tracking" icon="chart-line">
Track the percentage of reviews completed within the SLA target.
</Card>
</CardGroup>
### Audit and Compliance
Enterprise-grade audit capabilities for regulatory requirements:
- Complete decision history with timestamps
- Reviewer identity verification
- Immutable audit logs
- Export capabilities for compliance reporting
## Common Use Cases
<AccordionGroup>
<Accordion title="Security Reviews" icon="shield-halved">
**Use case**: Automating internal security questionnaires with human validation
- AI generates responses to security questionnaires
- The security team reviews and validates accuracy via email
- Approved responses are compiled into the final submission
- Complete audit trail for compliance
</Accordion>
<Accordion title="Content Approval" icon="file-lines">
**Use case**: Marketing content that needs legal/brand review
- AI generates marketing copy or social media content
- Routed to the brand team's email for voice/tone review
- Published automatically upon approval
</Accordion>
<Accordion title="Financial Approvals" icon="money-bill">
**Use case**: Expense reports, contract terms, budget allocations
- AI pre-processes and categorizes financial requests
- Use dynamic assignment to route based on amount thresholds
- Maintain a complete audit trail for financial compliance
</Accordion>
<Accordion title="Dynamic Assignment from a CRM" icon="database">
**Use case**: Routing reviews to account owners from a CRM
- The Flow pulls the account owner's email from the CRM
- Stores the email in Flow state (e.g., `account_owner_email`)
- Uses `assign_from_input` to route automatically to the right person
</Accordion>
<Accordion title="Quality Assurance" icon="magnifying-glass">
**Use case**: Validating AI output before customer delivery
- AI generates customer-facing content or responses
- The QA team reviews via email notifications
- The feedback loop improves AI performance over time
</Accordion>
</AccordionGroup>
## Webhook API
When a Flow pauses for human feedback, you can configure webhooks to send the request data to your own applications. This enables:
- Building custom approval UIs
- Integrating with internal tools (Jira, ServiceNow, custom dashboards)
- Routing approvals to third-party systems
- Mobile app notifications
- Automated decision systems
<Frame>
<img src="/images/enterprise/hitl-settings-webhook.png" alt="HITL webhook configuration" />
</Frame>
### Webhook Configuration
<Steps>
<Step title="Navigate to settings">
Go to **Deployment** → **Settings** → **Human in the Loop**
</Step>
<Step title="Expand the webhook section">
Click **Webhooks** to expand the configuration
</Step>
<Step title="Add a webhook URL">
Enter your webhook URL (HTTPS required in production)
</Step>
<Step title="Save the configuration">
Click **Save Configuration** to activate
</Step>
</Steps>
You can configure multiple webhooks. Every active webhook receives all HITL events.
### Webhook Events
Your endpoint receives an HTTP POST request for the following events:
| Event type | Triggered when |
|------------|----------------|
| `new_request` | A Flow pauses and requests human feedback |
### Webhook Payload
Every webhook receives a JSON payload with the following structure:
```json
{
  "event": "new_request",
  "request": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "flow_id": "flow_abc123",
    "method_name": "review_article",
    "message": "Please review this article for publication.",
    "emit_options": ["approved", "rejected", "request_changes"],
    "state": {
      "article_id": 12345,
      "author": "john@example.com",
      "category": "technology"
    },
    "metadata": {},
    "created_at": "2026-01-14T12:00:00Z"
  },
  "deployment": {
    "id": 456,
    "name": "Content Review Flow",
    "organization_id": 789
  },
  "callback_url": "https://api.crewai.com/...",
  "assigned_to_email": "reviewer@company.com"
}
```
### Responding to a Request
To submit feedback, **POST to the `callback_url`** included in the webhook payload.
```http
POST {callback_url}
Content-Type: application/json

{
  "feedback": "Approved. Great article!",
  "source": "my_custom_app"
}
```
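The same call from Python might look like the sketch below; `requests` is used only for illustration, and any extra authentication your deployment requires is not shown:
```python
import requests

def submit_feedback(callback_url: str, feedback: str) -> None:
    """Post a HITL response to the callback_url from a webhook payload."""
    response = requests.post(
        callback_url,
        json={"feedback": feedback, "source": "my_custom_app"},
        timeout=10,
    )
    response.raise_for_status()  # surface 4xx/5xx errors

# Example: approve the request received in the webhook payload
# submit_feedback(payload["callback_url"], "Approved. Great article!")
```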
### Security
<Info>
All webhook requests are cryptographically signed with HMAC-SHA256 to guarantee authenticity and prevent tampering.
</Info>
#### Webhook Security
- **HMAC-SHA256 signatures**: Every webhook includes a cryptographic signature
- **Per-webhook secrets**: Each webhook has its own signing secret
- **Encrypted at rest**: Signing secrets are encrypted in the database
- **Timestamp validation**: Prevents replay attacks
#### Signature Headers
Every webhook request includes the following headers:
| Header | Description |
|--------|-------------|
| `X-Signature` | HMAC-SHA256 signature: `sha256=<hex_digest>` |
| `X-Timestamp` | Unix timestamp when the request was signed |
#### Verification
Verify by computing the expected signature and comparing it in constant time:
```python
import hashlib
import hmac

def verify_webhook(signing_secret: str, timestamp: str, payload: str, signature: str) -> bool:
    """Return True if the X-Signature digest matches the signed body."""
    expected = hmac.new(
        signing_secret.encode(),
        f"{timestamp}.{payload}".encode(),
        hashlib.sha256,
    ).hexdigest()
    # Constant-time comparison prevents timing attacks
    return hmac.compare_digest(expected, signature)
```
### Error Handling
Your webhook endpoint must return a 2xx status code to acknowledge receipt:
| Response | Behavior |
|----------|----------|
| 2xx | Webhook delivered successfully |
| 4xx/5xx | Logged as a failure, no retry |
| Timeout (30s) | Logged as a failure, no retry |
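Putting the pieces together, a minimal receiver might look like the sketch below. Flask is used only for illustration, and the environment variable name for the signing secret is an assumption:
```python
import os

from flask import Flask, request

app = Flask(__name__)
SIGNING_SECRET = os.environ["HITL_WEBHOOK_SECRET"]  # assumed variable name

@app.route("/hitl-webhook", methods=["POST"])
def hitl_webhook():
    # Verify the signature before trusting the payload (verify_webhook from above)
    signature = request.headers.get("X-Signature", "").removeprefix("sha256=")
    timestamp = request.headers.get("X-Timestamp", "")
    body = request.get_data(as_text=True)
    if not verify_webhook(SIGNING_SECRET, timestamp, body, signature):
        return "invalid signature", 401
    # Acknowledge quickly (within the 30s timeout); process asynchronously if needed
    event = request.get_json()
    print(f"New HITL request: {event['request']['method_name']}")
    return "", 204
```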
## Security and RBAC
### Dashboard Access
HITL access is controlled at the deployment level:
| Permission | Capability |
|------------|------------|
| `manage_human_feedback` | Configure HITL settings, view all requests |
| `respond_to_human_feedback` | Respond to requests, view assigned requests |
### Email Response Authentication
For email replies:
1. The reply-to token encodes the authorized email
2. The sender's email must match the email in the token
3. The token must not be expired (7 days by default)
4. The request must still be pending
### Audit Trail
Every HITL action is logged:
- Request creation
- Assignment changes
- Response submission (source: dashboard/email/API)
- Flow resume status
## Troubleshooting
### Emails Are Not Being Sent
1. Confirm "Email notifications" is enabled in the configuration
2. Check that a routing rule matches the method name
3. Verify the assignee's email is valid
4. Check the deployment-creator fallback if no routing rule matches
### Email Replies Are Not Processed
1. Check that the token has not expired (7 days by default)
2. Verify the sender's email matches the assigned email
3. Confirm the request is still pending (not yet answered)
### The Flow Does Not Resume
1. Check the request status in the dashboard
2. Verify the callback URL is reachable
3. Confirm the deployment is still running
## Best Practices
<Tip>
**Start simple**: Begin with email notifications to the deployment creator, then add routing rules as your workflows mature.
</Tip>
1. **Use dynamic assignment**: Pull assignee emails from Flow state for flexible routing.
2. **Configure auto-respond**: Set fallbacks for non-critical reviews so Flows never stall.
3. **Monitor response times**: Use analytics to spot bottlenecks and optimize the review process.
4. **Keep review messages clear**: Write clear, actionable messages in the `@human_feedback` decorator.
5. **Test the email flow**: Send a test request to confirm email delivery before going to production.
## Related Resources
<CardGroup cols={2}>
<Card title="Human Feedback in Flows" icon="code" href="/ko/learn/human-feedback-in-flows">
Implementation guide for the `@human_feedback` decorator
</Card>
<Card title="Flow HITL Workflow Guide" icon="route" href="/ko/enterprise/guides/human-in-the-loop">
Step-by-step guide to setting up HITL workflows
</Card>
<Card title="RBAC Configuration" icon="shield-check" href="/ko/enterprise/features/rbac">
Configure role-based access control for your organization
</Card>
<Card title="Webhook Streaming" icon="bolt" href="/ko/enterprise/features/webhook-streaming">
Set up real-time event notifications
</Card>
</CardGroup>

View File

@@ -5,9 +5,54 @@ icon: "user-check"
mode: "wide"
---
Human-In-The-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to strengthen decision-making and improve task outcomes. This guide shows how to implement HITL within CrewAI Enterprise.
## CrewAI's Approaches to HITL
CrewAI offers two approaches for implementing human-in-the-loop workflows:
| Approach | Best for | Version |
|----------|----------|---------|
| **Flow-based** (`@human_feedback` decorator) | Production with the Enterprise UI, email-first workflows, full platform features | **1.8.0+** |
| **Webhook-based** | Custom integrations, external systems (Slack, Teams, etc.), legacy setups | All versions |
## Flow-Based HITL with the Enterprise Platform
<Note>
The `@human_feedback` decorator requires **CrewAI version 1.8.0 or later**.
</Note>
When you use the `@human_feedback` decorator in a Flow, CrewAI Enterprise provides an **email-first HITL system** where anyone with an email address can respond to review requests:
<CardGroup cols={2}>
<Card title="Email-First Design" icon="envelope">
Responders receive email notifications and can reply directly—no login required.
</Card>
<Card title="Dashboard Review" icon="desktop">
Review and respond to HITL requests from the Enterprise dashboard whenever you prefer.
</Card>
<Card title="Flexible Routing" icon="route">
Route requests to specific emails based on method patterns, or pull them from Flow state.
</Card>
<Card title="Auto-Respond" icon="clock">
Configure automatic fallback responses when no human responds within the timeout.
</Card>
</CardGroup>
### Key Benefits
- **External responders**: Anyone with an email can respond, even non-platform users
- **Dynamic assignment**: Pull assignee emails from Flow state (e.g., `account_owner_email`)
- **Simple configuration**: Email-based routing is easier to set up than managing users/roles
- **Deployment-creator fallback**: If no routing rule matches, the deployment creator is notified
<Tip>
For implementation details of the `@human_feedback` decorator, see the [Human Feedback in Flows](/ko/learn/human-feedback-in-flows) guide.
</Tip>
## Setting Up a Webhook-Based HITL Workflow
For custom integrations with external systems such as Slack, Microsoft Teams, or your own applications, you can use the webhook-based approach:
<Steps>
<Step title="Configure the task">
@@ -99,3 +144,14 @@ HITL workflows are particularly useful for:
- Sensitive or high-stakes operations
- Creative tasks requiring human judgment
- Compliance and regulatory reviews
## Learn More
<CardGroup cols={2}>
<Card title="Flow HITL Management" icon="users-gear" href="/ko/enterprise/features/flow-hitl-management">
Explore the full Enterprise Flow HITL platform capabilities, including email notifications, routing rules, auto-respond, and analytics.
</Card>
<Card title="Human Feedback in Flows" icon="code" href="/ko/learn/human-feedback-in-flows">
Implementation guide for the `@human_feedback` decorator in Flows.
</Card>
</CardGroup>

View File

@@ -200,6 +200,25 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `clientData` (array, optional): Client-specific data. Each entry is an object with a `key` (string) and a `value` (string).
</Accordion>
<Accordion title="google_contacts/update_contact_group">
**Description:** Update a contact group's information.
**Parameters:**
- `resourceName` (string, required): The resource name of the contact group (e.g., 'contactGroups/myContactGroup').
- `name` (string, required): The name of the contact group.
- `clientData` (array, optional): Client-specific data. Each entry is an object with a `key` (string) and a `value` (string).
</Accordion>
<Accordion title="google_contacts/delete_contact_group">
**Description:** Delete a contact group.
**Parameters:**
- `resourceName` (string, required): The resource name of the contact group to delete (e.g., 'contactGroups/myContactGroup').
- `deleteContacts` (boolean, optional): Whether to also delete the contacts in the group. Default: false
</Accordion>
</AccordionGroup>
## Usage Examples
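As a minimal sketch in the same style as the other integration examples (the group name and task wording are illustrative):
```python
from crewai import Agent, Task, Crew

# Agent specialized in managing contact groups
contacts_manager = Agent(
    role="Contacts Manager",
    goal="Keep contact groups organized and up to date",
    backstory="An AI assistant skilled at organizing contact information.",
    apps=['google_contacts/update_contact_group', 'google_contacts/delete_contact_group']
)

# Task to rename a contact group
rename_group_task = Task(
    description="Rename the contact group 'contactGroups/myContactGroup' to 'Key Accounts'.",
    agent=contacts_manager,
    expected_output="The contact group was renamed successfully."
)

crew = Crew(
    agents=[contacts_manager],
    tasks=[rename_group_task]
)

crew.kickoff()
```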

View File

@@ -131,6 +131,297 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `endIndex` (integer, required): The end index of the range.
</Accordion>
<Accordion title="google_docs/create_document_with_content">
**Description:** Create a new Google Doc with content in a single call.
**Parameters:**
- `title` (string, required): The title of the new document. Shown at the top of the document and in Google Drive.
- `content` (string, optional): Text content to insert into the document. Use `\n` for new paragraphs.
</Accordion>
<Accordion title="google_docs/append_text">
**Description:** Append text to the end of a Google Doc. The text is inserted at the end automatically, with no index required.
**Parameters:**
- `documentId` (string, required): The document ID from a create_document response or from the URL.
- `text` (string, required): The text to append to the end of the document. Use `\n` for new paragraphs.
</Accordion>
<Accordion title="google_docs/set_text_bold">
**Description:** Make text bold, or remove bold formatting, in a Google Doc.
**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): The start position of the text to format.
- `endIndex` (integer, required): The end position of the text to format (exclusive).
- `bold` (boolean, required): Set to `true` to make the text bold, `false` to remove bold.
</Accordion>
<Accordion title="google_docs/set_text_italic">
**Description:** Make text italic, or remove italic formatting, in a Google Doc.
**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): The start position of the text to format.
- `endIndex` (integer, required): The end position of the text to format (exclusive).
- `italic` (boolean, required): Set to `true` to italicize the text, `false` to remove italics.
</Accordion>
<Accordion title="google_docs/set_text_underline">
**Description:** Add or remove underline formatting on text in a Google Doc.
**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): The start position of the text to format.
- `endIndex` (integer, required): The end position of the text to format (exclusive).
- `underline` (boolean, required): Set to `true` to add an underline, `false` to remove it.
</Accordion>
<Accordion title="google_docs/set_text_strikethrough">
**Description:** Add or remove strikethrough formatting on text in a Google Doc.
**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): The start position of the text to format.
- `endIndex` (integer, required): The end position of the text to format (exclusive).
- `strikethrough` (boolean, required): Set to `true` to add strikethrough, `false` to remove it.
</Accordion>
<Accordion title="google_docs/set_font_size">
**Description:** Change the font size of text in a Google Doc.
**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): The start position of the text to format.
- `endIndex` (integer, required): The end position of the text to format (exclusive).
- `fontSize` (number, required): Font size in points. Common sizes: 10, 11, 12, 14, 16, 18, 24, 36.
</Accordion>
<Accordion title="google_docs/set_text_color">
**설명:** Google 문서에서 RGB 값(0-1 스케일)을 사용하여 텍스트 색상을 변경합니다.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `startIndex` (integer, 필수): 서식을 지정할 텍스트의 시작 위치.
- `endIndex` (integer, 필수): 서식을 지정할 텍스트의 끝 위치 (배타적).
- `red` (number, 필수): 빨강 구성 요소 (0-1). 예: `1`은 완전한 빨강.
- `green` (number, 필수): 초록 구성 요소 (0-1). 예: `0.5`는 절반 초록.
- `blue` (number, 필수): 파랑 구성 요소 (0-1). 예: `0`은 파랑 없음.
</Accordion>
<Accordion title="google_docs/create_hyperlink">
**설명:** Google 문서에서 기존 텍스트를 클릭 가능한 하이퍼링크로 변환합니다.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `startIndex` (integer, 필수): 링크로 만들 텍스트의 시작 위치.
- `endIndex` (integer, 필수): 링크로 만들 텍스트의 끝 위치 (배타적).
- `url` (string, 필수): 링크가 가리킬 URL. 예: `"https://example.com"`.
</Accordion>
<Accordion title="google_docs/apply_heading_style">
**설명:** Google 문서에서 텍스트 범위에 제목 또는 단락 스타일을 적용합니다.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `startIndex` (integer, 필수): 스타일을 적용할 단락의 시작 위치.
- `endIndex` (integer, 필수): 스타일을 적용할 단락의 끝 위치.
- `style` (string, 필수): 적용할 스타일. 옵션: `NORMAL_TEXT`, `TITLE`, `SUBTITLE`, `HEADING_1`, `HEADING_2`, `HEADING_3`, `HEADING_4`, `HEADING_5`, `HEADING_6`.
</Accordion>
<Accordion title="google_docs/set_paragraph_alignment">
**설명:** Google 문서에서 단락의 텍스트 정렬을 설정합니다.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `startIndex` (integer, 필수): 정렬할 단락의 시작 위치.
- `endIndex` (integer, 필수): 정렬할 단락의 끝 위치.
- `alignment` (string, 필수): 텍스트 정렬. 옵션: `START` (왼쪽), `CENTER`, `END` (오른쪽), `JUSTIFIED`.
</Accordion>
<Accordion title="google_docs/set_line_spacing">
**설명:** Google 문서에서 단락의 줄 간격을 설정합니다.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `startIndex` (integer, 필수): 단락의 시작 위치.
- `endIndex` (integer, 필수): 단락의 끝 위치.
- `lineSpacing` (number, 필수): 백분율로 나타낸 줄 간격. `100` = 단일, `115` = 1.15배, `150` = 1.5배, `200` = 이중.
</Accordion>
<Accordion title="google_docs/create_paragraph_bullets">
**설명:** Google 문서에서 단락을 글머리 기호 또는 번호 매기기 목록으로 변환합니다.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `startIndex` (integer, 필수): 목록으로 변환할 단락의 시작 위치.
- `endIndex` (integer, 필수): 목록으로 변환할 단락의 끝 위치.
- `bulletPreset` (string, 필수): 글머리 기호/번호 매기기 스타일. 옵션: `BULLET_DISC_CIRCLE_SQUARE`, `BULLET_DIAMONDX_ARROW3D_SQUARE`, `BULLET_CHECKBOX`, `BULLET_ARROW_DIAMOND_DISC`, `BULLET_STAR_CIRCLE_SQUARE`, `NUMBERED_DECIMAL_ALPHA_ROMAN`, `NUMBERED_DECIMAL_ALPHA_ROMAN_PARENS`, `NUMBERED_DECIMAL_NESTED`, `NUMBERED_UPPERALPHA_ALPHA_ROMAN`, `NUMBERED_UPPERROMAN_UPPERALPHA_DECIMAL`.
</Accordion>
<Accordion title="google_docs/delete_paragraph_bullets">
**설명:** Google 문서에서 단락의 글머리 기호 또는 번호 매기기를 제거합니다.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `startIndex` (integer, 필수): 목록 단락의 시작 위치.
- `endIndex` (integer, 필수): 목록 단락의 끝 위치.
</Accordion>
<Accordion title="google_docs/insert_table_with_content">
**설명:** Google 문서에 내용이 포함된 표를 한 번에 삽입합니다. 내용은 2D 배열로 제공하세요.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `rows` (integer, 필수): 표의 행 수.
- `columns` (integer, 필수): 표의 열 수.
- `index` (integer, 선택사항): 표를 삽입할 위치. 제공하지 않으면 문서 끝에 삽입됩니다.
- `content` (array, 필수): 2D 배열로 된 표 내용. 각 내부 배열은 행입니다. 예: `[["Year", "Revenue"], ["2023", "$43B"], ["2024", "$45B"]]`.
</Accordion>
<Accordion title="google_docs/insert_table_row">
**설명:** 기존 표의 참조 셀 위 또는 아래에 새 행을 삽입합니다.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `tableStartIndex` (integer, 필수): 표의 시작 인덱스. get_document에서 가져오세요.
- `rowIndex` (integer, 필수): 참조 셀의 행 인덱스 (0 기반).
- `columnIndex` (integer, 선택사항): 참조 셀의 열 인덱스 (0 기반). 기본값: `0`.
- `insertBelow` (boolean, 선택사항): `true`이면 참조 행 아래에, `false`이면 위에 삽입. 기본값: `true`.
</Accordion>
<Accordion title="google_docs/insert_table_column">
**설명:** 기존 표의 참조 셀 왼쪽 또는 오른쪽에 새 열을 삽입합니다.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `tableStartIndex` (integer, 필수): 표의 시작 인덱스.
- `rowIndex` (integer, 선택사항): 참조 셀의 행 인덱스 (0 기반). 기본값: `0`.
- `columnIndex` (integer, 필수): 참조 셀의 열 인덱스 (0 기반).
- `insertRight` (boolean, 선택사항): `true`이면 오른쪽에, `false`이면 왼쪽에 삽입. 기본값: `true`.
</Accordion>
<Accordion title="google_docs/delete_table_row">
**설명:** Google 문서의 기존 표에서 행을 삭제합니다.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `tableStartIndex` (integer, 필수): 표의 시작 인덱스.
- `rowIndex` (integer, 필수): 삭제할 행 인덱스 (0 기반).
- `columnIndex` (integer, 선택사항): 행의 아무 셀의 열 인덱스 (0 기반). 기본값: `0`.
</Accordion>
<Accordion title="google_docs/delete_table_column">
**설명:** Google 문서의 기존 표에서 열을 삭제합니다.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `tableStartIndex` (integer, 필수): 표의 시작 인덱스.
- `rowIndex` (integer, 선택사항): 열의 아무 셀의 행 인덱스 (0 기반). 기본값: `0`.
- `columnIndex` (integer, 필수): 삭제할 열 인덱스 (0 기반).
</Accordion>
<Accordion title="google_docs/merge_table_cells">
**설명:** 표 셀 범위를 단일 셀로 병합합니다. 모든 셀의 내용이 보존됩니다.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `tableStartIndex` (integer, 필수): 표의 시작 인덱스.
- `rowIndex` (integer, 필수): 병합의 시작 행 인덱스 (0 기반).
- `columnIndex` (integer, 필수): 병합의 시작 열 인덱스 (0 기반).
- `rowSpan` (integer, 필수): 병합할 행 수.
- `columnSpan` (integer, 필수): 병합할 열 수.
</Accordion>
<Accordion title="google_docs/unmerge_table_cells">
**설명:** 이전에 병합된 표 셀을 개별 셀로 분리합니다.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `tableStartIndex` (integer, 필수): 표의 시작 인덱스.
- `rowIndex` (integer, 필수): 병합된 셀의 행 인덱스 (0 기반).
- `columnIndex` (integer, 필수): 병합된 셀의 열 인덱스 (0 기반).
- `rowSpan` (integer, 필수): 병합된 셀이 차지하는 행 수.
- `columnSpan` (integer, 필수): 병합된 셀이 차지하는 열 수.
</Accordion>
<Accordion title="google_docs/insert_inline_image">
**설명:** 공개 URL에서 Google 문서에 이미지를 삽입합니다. 이미지는 공개적으로 접근 가능해야 하고, 50MB 미만이며, PNG/JPEG/GIF 형식이어야 합니다.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `uri` (string, 필수): 이미지의 공개 URL. 인증 없이 접근 가능해야 합니다.
- `index` (integer, 선택사항): 이미지를 삽입할 위치. 제공하지 않으면 문서 끝에 삽입됩니다. 기본값: `1`.
</Accordion>
<Accordion title="google_docs/insert_section_break">
**설명:** 서로 다른 서식을 가진 문서 섹션을 만들기 위해 섹션 나누기를 삽입합니다.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `index` (integer, 필수): 섹션 나누기를 삽입할 위치.
- `sectionType` (string, 필수): 섹션 나누기의 유형. 옵션: `CONTINUOUS` (같은 페이지에 유지), `NEXT_PAGE` (새 페이지 시작).
</Accordion>
<Accordion title="google_docs/create_header">
**설명:** 문서의 머리글을 만듭니다. insert_text를 사용하여 머리글 내용을 추가할 수 있는 headerId를 반환합니다.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `type` (string, 선택사항): 머리글 유형. 옵션: `DEFAULT`. 기본값: `DEFAULT`.
</Accordion>
<Accordion title="google_docs/create_footer">
**설명:** 문서의 바닥글을 만듭니다. insert_text를 사용하여 바닥글 내용을 추가할 수 있는 footerId를 반환합니다.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `type` (string, 선택사항): 바닥글 유형. 옵션: `DEFAULT`. 기본값: `DEFAULT`.
</Accordion>
<Accordion title="google_docs/delete_header">
**설명:** 문서에서 머리글을 삭제합니다. headerId를 찾으려면 get_document를 사용하세요.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `headerId` (string, 필수): 삭제할 머리글 ID. get_document 응답에서 가져오세요.
</Accordion>
<Accordion title="google_docs/delete_footer">
**설명:** 문서에서 바닥글을 삭제합니다. footerId를 찾으려면 get_document를 사용하세요.
**매개변수:**
- `documentId` (string, 필수): 문서 ID.
- `footerId` (string, 필수): 삭제할 바닥글 ID. get_document 응답에서 가져오세요.
</Accordion>
</AccordionGroup>
## 사용 예제
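As a minimal sketch in the same style as the other integration examples (the document title and task wording are illustrative):
```python
from crewai import Agent, Task, Crew

# Agent specialized in drafting and formatting documents
doc_writer = Agent(
    role="Document Writer",
    goal="Create well-structured, formatted Google Docs",
    backstory="An AI assistant skilled at drafting and formatting documents.",
    apps=['google_docs/create_document_with_content', 'google_docs/append_text', 'google_docs/apply_heading_style']
)

# Task to draft a formatted report
draft_report_task = Task(
    description="Create a document titled 'Q1 Report' with a short summary paragraph, then append a 'Next Steps' section with a heading style applied.",
    agent=doc_writer,
    expected_output="A formatted Google Doc containing the summary and next steps."
)

crew = Crew(
    agents=[doc_writer],
    tasks=[draft_report_task]
)

crew.kickoff()
```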

View File

@@ -61,6 +61,22 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>
<Accordion title="google_slides/get_presentation_metadata">
**Description:** Get lightweight metadata about a presentation (title, slide count, slide IDs). Use this first before fetching the full content.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation to retrieve.
</Accordion>
<Accordion title="google_slides/get_presentation_text">
**Description:** Extract all text content from a presentation. Returns only the slide IDs and the text from shapes and tables (no formatting).
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
</Accordion>
<Accordion title="google_slides/get_presentation">
**Description:** Retrieve a presentation by ID.
@@ -80,6 +96,15 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>
<Accordion title="google_slides/get_slide_text">
**Description:** Extract the text content from a single slide. Returns only the text from shapes and tables (no formatting or styling).
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `pageObjectId` (string, required): The ID of the slide/page to get text from.
</Accordion>
<Accordion title="google_slides/get_page">
**Description:** Retrieve a specific page by ID.
@@ -98,6 +123,120 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>
<Accordion title="google_slides/create_slide">
**Description:** Add an additional blank slide to a presentation. A new presentation already contains one blank slide, so check get_presentation_metadata first. For slides with title/body areas, use create_slide_with_layout.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `insertionIndex` (integer, optional): The position to insert the slide (0-based). If omitted, it is added at the end.
</Accordion>
<Accordion title="google_slides/create_slide_with_layout">
**Description:** Create a slide with a predefined layout containing placeholder areas such as a title and body. Better than create_slide for structured content. After creating, use get_page to find the placeholder IDs, then insert text into them.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `layout` (string, required): The layout type. Options: `BLANK`, `TITLE`, `TITLE_AND_BODY`, `TITLE_AND_TWO_COLUMNS`, `TITLE_ONLY`, `SECTION_HEADER`, `ONE_COLUMN_TEXT`, `MAIN_POINT`, `BIG_NUMBER`. TITLE_AND_BODY suits a title plus description, TITLE suits a title-only slide, and SECTION_HEADER suits section dividers.
- `insertionIndex` (integer, optional): The position to insert (0-based). If omitted, it is added at the end.
</Accordion>
<Accordion title="google_slides/create_text_box">
**설명:** 콘텐츠가 있는 텍스트 상자를 슬라이드에 만듭니다. 제목, 설명, 단락에 사용합니다. 테이블에는 사용하지 마세요. 선택적으로 EMU 단위로 위치(x, y)와 크기(width, height)를 지정할 수 있습니다 (914400 EMU = 1 인치).
**매개변수:**
- `presentationId` (string, 필수): 프레젠테이션의 ID.
- `slideId` (string, 필수): 텍스트 상자를 추가할 슬라이드의 ID.
- `text` (string, 필수): 텍스트 상자의 텍스트 내용.
- `x` (integer, 선택사항): EMU 단위 X 위치 (914400 = 1 인치). 기본값: 914400 (왼쪽에서 1 인치).
- `y` (integer, 선택사항): EMU 단위 Y 위치 (914400 = 1 인치). 기본값: 914400 (위에서 1 인치).
- `width` (integer, 선택사항): EMU 단위 너비. 기본값: 7315200 (약 8 인치).
- `height` (integer, 선택사항): EMU 단위 높이. 기본값: 914400 (약 1 인치).
</Accordion>
<Accordion title="google_slides/delete_slide">
**설명:** 프레젠테이션에서 슬라이드를 제거합니다. 슬라이드 ID를 찾으려면 먼저 get_presentation을 사용하세요.
**매개변수:**
- `presentationId` (string, 필수): 프레젠테이션의 ID.
- `slideId` (string, 필수): 삭제할 슬라이드의 객체 ID. get_presentation에서 가져옵니다.
</Accordion>
<Accordion title="google_slides/duplicate_slide">
**설명:** 기존 슬라이드의 복사본을 만듭니다. 복사본은 원본 바로 다음에 삽입됩니다.
**매개변수:**
- `presentationId` (string, 필수): 프레젠테이션의 ID.
- `slideId` (string, 필수): 복제할 슬라이드의 객체 ID. get_presentation에서 가져옵니다.
</Accordion>
<Accordion title="google_slides/move_slides">
**설명:** 슬라이드를 새 위치로 이동하여 순서를 변경합니다. 슬라이드 ID는 현재 프레젠테이션 순서대로 있어야 합니다 (중복 없음).
**매개변수:**
- `presentationId` (string, 필수): 프레젠테이션의 ID.
- `slideIds` (string 배열, 필수): 이동할 슬라이드 ID 배열. 현재 프레젠테이션 순서대로 있어야 합니다.
- `insertionIndex` (integer, 필수): 대상 위치 (0 기반). 0 = 맨 앞, 슬라이드 수 = 맨 끝.
</Accordion>
<Accordion title="google_slides/insert_youtube_video">
**설명:** 슬라이드에 YouTube 동영상을 삽입합니다. 동영상 ID는 YouTube URL의 "v=" 다음 값입니다 (예: youtube.com/watch?v=abc123의 경우 "abc123" 사용).
**매개변수:**
- `presentationId` (string, 필수): 프레젠테이션의 ID.
- `slideId` (string, 필수): 동영상을 추가할 슬라이드의 ID. get_presentation에서 가져옵니다.
- `videoId` (string, 필수): YouTube 동영상 ID (URL의 v= 다음 값).
</Accordion>
<Accordion title="google_slides/insert_drive_video">
**설명:** 슬라이드에 Google Drive의 동영상을 삽입합니다. 파일 ID는 Drive 파일 URL에서 찾을 수 있습니다.
**매개변수:**
- `presentationId` (string, 필수): 프레젠테이션의 ID.
- `slideId` (string, 필수): 동영상을 추가할 슬라이드의 ID. get_presentation에서 가져옵니다.
- `fileId` (string, 필수): 동영상의 Google Drive 파일 ID.
</Accordion>
<Accordion title="google_slides/set_slide_background_image">
**설명:** 슬라이드의 배경 이미지를 설정합니다. 이미지 URL은 공개적으로 액세스 가능해야 합니다.
**매개변수:**
- `presentationId` (string, 필수): 프레젠테이션의 ID.
- `slideId` (string, 필수): 배경을 설정할 슬라이드의 ID. get_presentation에서 가져옵니다.
- `imageUrl` (string, 필수): 배경으로 사용할 이미지의 공개적으로 액세스 가능한 URL.
</Accordion>
<Accordion title="google_slides/create_table">
**설명:** 슬라이드에 빈 테이블을 만듭니다. 콘텐츠가 있는 테이블을 만들려면 create_table_with_content를 사용하세요.
**매개변수:**
- `presentationId` (string, 필수): 프레젠테이션의 ID.
- `slideId` (string, 필수): 테이블을 추가할 슬라이드의 ID. get_presentation에서 가져옵니다.
- `rows` (integer, 필수): 테이블의 행 수.
- `columns` (integer, 필수): 테이블의 열 수.
</Accordion>
<Accordion title="google_slides/create_table_with_content">
**설명:** 한 번의 작업으로 콘텐츠가 있는 테이블을 만듭니다. 콘텐츠는 2D 배열로 제공하며, 각 내부 배열은 행을 나타냅니다. 예: [["Header1", "Header2"], ["Row1Col1", "Row1Col2"]].
**매개변수:**
- `presentationId` (string, 필수): 프레젠테이션의 ID.
- `slideId` (string, 필수): 테이블을 추가할 슬라이드의 ID. get_presentation에서 가져옵니다.
- `rows` (integer, 필수): 테이블의 행 수.
- `columns` (integer, 필수): 테이블의 열 수.
- `content` (array, 필수): 2D 배열 형태의 테이블 콘텐츠. 각 내부 배열은 행입니다. 예: [["Year", "Revenue"], ["2023", "$10M"]].
</Accordion>
<Accordion title="google_slides/import_data_from_sheet">
**설명:** Google 시트에서 프레젠테이션으로 데이터를 가져옵니다.

View File

@@ -148,6 +148,16 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>
<Accordion title="microsoft_excel/get_table_data">
**Description:** Get data from a specific table in an Excel worksheet.
**Parameters:**
- `file_id` (string, required): The ID of the Excel file.
- `worksheet_name` (string, required): The name of the worksheet.
- `table_name` (string, required): The name of the table.
</Accordion>
<Accordion title="microsoft_excel/create_chart">
**Description:** Create a chart in an Excel worksheet.
@@ -180,6 +190,15 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>
<Accordion title="microsoft_excel/get_used_range_metadata">
**Description:** Get the used-range metadata (dimensions only, no data) of an Excel worksheet.
**Parameters:**
- `file_id` (string, required): The ID of the Excel file.
- `worksheet_name` (string, required): The name of the worksheet.
</Accordion>
<Accordion title="microsoft_excel/list_charts">
**Description:** Get all charts in an Excel worksheet.

View File

@@ -150,6 +150,49 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `item_id` (string, required): The ID of the file.
</Accordion>
<Accordion title="microsoft_onedrive/list_files_by_path">
**Description:** List the files and folders at a specific OneDrive path.
**Parameters:**
- `folder_path` (string, required): The folder path (e.g., 'Documents/Reports').
- `top` (integer, optional): The number of items to retrieve (max 1000). Default: 50.
- `orderby` (string, optional): Sort by field (e.g., "name asc", "lastModifiedDateTime desc"). Default: "name asc".
</Accordion>
<Accordion title="microsoft_onedrive/get_recent_files">
**Description:** Get recently accessed files from OneDrive.
**Parameters:**
- `top` (integer, optional): The number of items to retrieve (max 200). Default: 25.
</Accordion>
<Accordion title="microsoft_onedrive/get_shared_with_me">
**Description:** Get the files and folders shared with the user.
**Parameters:**
- `top` (integer, optional): The number of items to retrieve (max 200). Default: 50.
- `orderby` (string, optional): Sort by field. Default: "name asc".
</Accordion>
<Accordion title="microsoft_onedrive/get_file_by_path">
**Description:** Get information about a specific file or folder by path.
**Parameters:**
- `file_path` (string, required): The file or folder path (e.g., 'Documents/report.docx').
</Accordion>
<Accordion title="microsoft_onedrive/download_file_by_path">
**Description:** Download a file from OneDrive by path.
**Parameters:**
- `file_path` (string, required): The file path (e.g., 'Documents/report.docx').
</Accordion>
</AccordionGroup>
## Usage Examples
@@ -183,6 +226,62 @@ crew = Crew(
crew.kickoff()
```
### Uploading and Managing Files
```python
from crewai import Agent, Task, Crew

# Create an agent specialized in file operations
file_operator = Agent(
    role="File Operator",
    goal="Upload, download, and manage files accurately",
    backstory="An AI assistant skilled at file handling and content management.",
    apps=['microsoft_onedrive/upload_file', 'microsoft_onedrive/download_file', 'microsoft_onedrive/get_file_info']
)

# Task to upload and manage a file
file_management_task = Task(
    description="Upload a text file named 'report.txt' with the content 'This is a sample report for the project.', then retrieve information about the uploaded file.",
    agent=file_operator,
    expected_output="The file was uploaded successfully and its information retrieved."
)

crew = Crew(
    agents=[file_operator],
    tasks=[file_management_task]
)

crew.kickoff()
```
### Organizing and Sharing Files
```python
from crewai import Agent, Task, Crew

# Create an agent for organizing and sharing files
file_organizer = Agent(
    role="File Organizer",
    goal="Organize files and create share links for collaboration",
    backstory="An AI assistant that excels at organizing files and managing sharing permissions.",
    apps=['microsoft_onedrive/search_files', 'microsoft_onedrive/move_item', 'microsoft_onedrive/share_item', 'microsoft_onedrive/create_folder']
)

# Task to organize and share files
organize_share_task = Task(
    description="Search for files whose names contain 'presentation', create a folder named 'Presentations', move the files you find into this folder, and create a read-only share link for the folder.",
    agent=file_organizer,
    expected_output="Files organized into the 'Presentations' folder and a share link created."
)

crew = Crew(
    agents=[file_organizer],
    tasks=[organize_share_task]
)

crew.kickoff()
```
## Troubleshooting
### Common Issues
@@ -196,6 +295,30 @@ crew.kickoff()
- Make sure `file_name` and `content` are provided when uploading a file.
- For binary files, the content must be Base64-encoded.
- Confirm you have write permissions for OneDrive.
**File/folder ID issues**
- Double-check that the item ID is correct when accessing a specific file or folder.
- Item IDs are returned by other actions such as `list_files` or `search_files`.
- Confirm the item you are referencing exists and is accessible.
**Search and filter operations**
- Use appropriate search terms for the `search_files` action.
- Use correct OData syntax for the `filter` parameter.
**File operations (copy/move)**
- For `move_item`, make sure both `item_id` and `parent_id` are provided.
- For `copy_item`, only `item_id` is required; `parent_id` defaults to the root if not specified.
- Confirm the destination folder exists and is accessible.
**Creating share links**
- Confirm the item exists before creating a share link.
- Choose an appropriate `type` and `scope` for your sharing requirements.
- The `anonymous` scope allows access without signing in; `organization` requires an organizational account.
### Getting Help

View File

@@ -132,6 +132,74 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `companyName` (string, optional): The contact's company name.
</Accordion>
<Accordion title="microsoft_outlook/get_message">
**Description:** Get a specific email message by ID.
**Parameters:**
- `message_id` (string, required): The unique identifier of the message. Obtain from the get_messages action.
- `select` (string, optional): Comma-separated list of properties to return. Example: "id,subject,body,from,receivedDateTime". Default: "id,subject,body,from,toRecipients,receivedDateTime".
</Accordion>
<Accordion title="microsoft_outlook/reply_to_email">
**Description:** Reply to an email message.
**Parameters:**
- `message_id` (string, required): The unique identifier of the message to reply to. Obtain from the get_messages action.
- `comment` (string, required): The reply message content. Can be plain text or HTML. The original message is quoted below this content.
</Accordion>
<Accordion title="microsoft_outlook/forward_email">
**Description:** Forward an email message.
**Parameters:**
- `message_id` (string, required): The unique identifier of the message to forward. Obtain from the get_messages action.
- `to_recipients` (array, required): Array of recipient email addresses to forward to. Example: ["john@example.com", "jane@example.com"].
- `comment` (string, optional): An optional message to include above the forwarded content. Can be plain text or HTML.
</Accordion>
<Accordion title="microsoft_outlook/mark_message_read">
**Description:** Mark a message as read or unread.
**Parameters:**
- `message_id` (string, required): The unique identifier of the message. Obtain from the get_messages action.
- `is_read` (boolean, required): Set to true to mark as read, false to mark as unread.
</Accordion>
<Accordion title="microsoft_outlook/delete_message">
**Description:** Delete an email message.
**Parameters:**
- `message_id` (string, required): The unique identifier of the message to delete. Obtain from the get_messages action.
</Accordion>
<Accordion title="microsoft_outlook/update_event">
**Description:** Update an existing calendar event.
**Parameters:**
- `event_id` (string, required): The unique identifier of the event. Obtain from the get_calendar_events action.
- `subject` (string, optional): The new title/subject of the event.
- `start_time` (string, optional): The new start time in ISO 8601 format (e.g., "2024-01-20T10:00:00"). Required: when using this field, you must also provide start_timezone.
- `start_timezone` (string, optional): The timezone of the start time. Required when updating start_time. Examples: "Pacific Standard Time", "Eastern Standard Time", "UTC".
- `end_time` (string, optional): The new end time in ISO 8601 format. Required: when using this field, you must also provide end_timezone.
- `end_timezone` (string, optional): The timezone of the end time. Required when updating end_time. Examples: "Pacific Standard Time", "Eastern Standard Time", "UTC".
- `location` (string, optional): The new location of the event.
- `body` (string, optional): The new body/description of the event. Supports HTML formatting.
</Accordion>
<Accordion title="microsoft_outlook/delete_event">
**Description:** Delete a calendar event.
**Parameters:**
- `event_id` (string, required): The unique identifier of the event to delete. Obtain from the get_calendar_events action.
</Accordion>
</AccordionGroup>
## Usage Examples
@@ -165,6 +233,62 @@ crew = Crew(
crew.kickoff()
```
### Managing and Searching Email
```python
from crewai import Agent, Task, Crew

# Create an agent specialized in email management
email_manager = Agent(
    role="Email Manager",
    goal="Search, retrieve, and organize email messages",
    backstory="An AI assistant skilled at email organization and management.",
    apps=['microsoft_outlook/get_messages']
)

# Task to search and retrieve emails
search_emails_task = Task(
    description="Retrieve the 20 most recent unread emails and provide a summary of the most important ones.",
    agent=email_manager,
    expected_output="A summary of the key unread emails with their important details."
)

crew = Crew(
    agents=[email_manager],
    tasks=[search_emails_task]
)

crew.kickoff()
```
### Managing Calendars and Contacts
```python
from crewai import Agent, Task, Crew

# Create an agent for calendar and contact management
scheduler = Agent(
    role="Calendar and Contacts Manager",
    goal="Manage calendar events and keep contact information current",
    backstory="An AI assistant responsible for scheduling and contact organization.",
    apps=['microsoft_outlook/create_calendar_event', 'microsoft_outlook/get_calendar_events', 'microsoft_outlook/create_contact']
)

# Task to create a meeting and add a contact
schedule_task = Task(
    description="Create a calendar event titled 'Team Meeting' for tomorrow at 2 PM in 'Conference Room A', and add a new contact for 'John Smith' with the email 'john.smith@example.com' and the job title 'Project Manager'.",
    agent=scheduler,
    expected_output="The calendar event was created and the new contact added."
)

crew = Crew(
    agents=[scheduler],
    tasks=[schedule_task]
)

crew.kickoff()
```
## Troubleshooting
### Common Issues
@@ -173,11 +297,29 @@ crew.kickoff()
- Make sure your Microsoft account has the permissions needed for email, calendar, and contact access.
- Required scopes: `Mail.Read`, `Mail.Send`, `Calendars.Read`, `Calendars.ReadWrite`, `Contacts.Read`, `Contacts.ReadWrite`.
- Confirm the OAuth connection includes all required scopes.
**Sending email issues**
- Make sure `to_recipients`, `subject`, and `body` are provided to `send_email`.
- Confirm the email addresses are correctly formatted.
- Confirm the account has the `Mail.Send` permission.
**Creating calendar events**
- Make sure `subject`, `start_datetime`, and `end_datetime` are provided.
- Use proper ISO 8601 format for date/time fields (e.g., '2024-01-20T10:00:00').
- Check the timezone settings if events appear at the wrong time.
**Contact management**
- For `create_contact`, make sure the required `displayName` is provided.
- When providing `emailAddresses`, use the correct object format with `address` and `name` properties.
**Search and filter issues**
- Use correct OData syntax for the `filter` parameter.
- For date filters, use ISO 8601 format (e.g., "receivedDateTime ge '2024-01-01T00:00:00Z'").
### Getting Help

View File

@@ -77,6 +77,17 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>
<Accordion title="microsoft_sharepoint/get_drives">
**설명:** SharePoint 사이트의 모든 문서 라이브러리(드라이브)를 나열합니다. 파일 작업을 사용하기 전에 사용 가능한 라이브러리를 찾으려면 이 작업을 사용하세요.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `top` (integer, 선택사항): 페이지당 반환할 최대 드라이브 수 (1-999). 기본값: 100
- `skip_token` (string, 선택사항): 다음 결과 페이지를 가져오기 위한 이전 응답의 페이지네이션 토큰.
- `select` (string, 선택사항): 반환할 속성의 쉼표로 구분된 목록 (예: 'id,name,webUrl,driveType').
</Accordion>
<Accordion title="microsoft_sharepoint/get_site_lists">
**설명:** SharePoint 사이트의 모든 목록을 가져옵니다.
@@ -145,20 +156,317 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>
<Accordion title="microsoft_sharepoint/list_files">
**설명:** SharePoint 문서 라이브러리에서 파일과 폴더를 가져옵니다. 기본적으로 루트 폴더를 나열하지만 folder_id를 제공하여 하위 폴더로 이동할 수 있습니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `folder_id` (string, 선택사항): 내용을 나열할 폴더의 ID. 루트 폴더의 경우 'root'를 사용하거나 이전 list_files 호출에서 가져온 폴더 ID를 제공하세요. 기본값: 'root'
- `top` (integer, 선택사항): 페이지당 반환할 최대 항목 수 (1-1000). 기본값: 50
- `skip_token` (string, 선택사항): 다음 결과 페이지를 가져오기 위한 이전 응답의 페이지네이션 토큰.
- `orderby` (string, 선택사항): 결과 정렬 순서 (예: 'name asc', 'size desc', 'lastModifiedDateTime desc'). 기본값: 'name asc'
- `filter` (string, 선택사항): 결과를 좁히기 위한 OData 필터 (예: 'file ne null'은 파일만, 'folder ne null'은 폴더만).
- `select` (string, 선택사항): 반환할 필드의 쉼표로 구분된 목록 (예: 'id,name,size,folder,file,webUrl,lastModifiedDateTime').
</Accordion>
<Accordion title="microsoft_sharepoint/delete_file">
**설명:** SharePoint 문서 라이브러리에서 파일 또는 폴더를 삭제합니다. 폴더의 경우 모든 내용이 재귀적으로 삭제됩니다. 항목은 사이트 휴지통으로 이동됩니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): 삭제할 파일 또는 폴더의 고유 식별자. list_files에서 가져오세요.
</Accordion>
<Accordion title="microsoft_sharepoint/list_files_by_path">
**설명:** 경로로 SharePoint 문서 라이브러리 폴더의 파일과 폴더를 나열합니다. 깊은 탐색을 위해 여러 list_files 호출보다 더 효율적입니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `folder_path` (string, 필수): 앞뒤 슬래시 없이 폴더의 전체 경로 (예: 'Documents', 'Reports/2024/Q1').
- `top` (integer, 선택사항): 페이지당 반환할 최대 항목 수 (1-1000). 기본값: 50
- `skip_token` (string, 선택사항): 다음 결과 페이지를 가져오기 위한 이전 응답의 페이지네이션 토큰.
- `orderby` (string, 선택사항): 결과 정렬 순서 (예: 'name asc', 'size desc'). 기본값: 'name asc'
- `select` (string, 선택사항): 반환할 필드의 쉼표로 구분된 목록 (예: 'id,name,size,folder,file,webUrl,lastModifiedDateTime').
</Accordion>
<Accordion title="microsoft_sharepoint/download_file">
**설명:** SharePoint 문서 라이브러리에서 원시 파일 내용을 다운로드합니다. 일반 텍스트 파일(.txt, .csv, .json)에만 사용하세요. Excel 파일의 경우 Excel 전용 작업을 사용하세요. Word 파일의 경우 get_word_document_content를 사용하세요.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): 다운로드할 파일의 고유 식별자. list_files 또는 list_files_by_path에서 가져오세요.
</Accordion>
<Accordion title="microsoft_sharepoint/get_file_info">
**설명:** SharePoint 문서 라이브러리의 특정 파일 또는 폴더에 대한 자세한 메타데이터를 가져옵니다. 이름, 크기, 생성/수정 날짜 및 작성자 정보가 포함됩니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): 파일 또는 폴더의 고유 식별자. list_files 또는 list_files_by_path에서 가져오세요.
- `select` (string, 선택사항): 반환할 속성의 쉼표로 구분된 목록 (예: 'id,name,size,createdDateTime,lastModifiedDateTime,webUrl,createdBy,lastModifiedBy').
</Accordion>
<Accordion title="microsoft_sharepoint/create_folder">
**설명:** SharePoint 문서 라이브러리에 새 폴더를 만듭니다. 기본적으로 루트에 폴더를 만들며 하위 폴더를 만들려면 parent_id를 사용하세요.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `folder_name` (string, 필수): 새 폴더의 이름. 사용할 수 없는 문자: \ / : * ? " < > |
- `parent_id` (string, 선택사항): 상위 폴더의 ID. 문서 라이브러리 루트의 경우 'root'를 사용하거나 list_files에서 가져온 폴더 ID를 제공하세요. 기본값: 'root'
</Accordion>
<Accordion title="microsoft_sharepoint/search_files">
**설명:** 키워드로 SharePoint 문서 라이브러리에서 파일과 폴더를 검색합니다. 파일 이름, 폴더 이름 및 Office 문서의 파일 내용을 검색합니다. 와일드카드나 특수 문자를 사용하지 마세요.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `query` (string, 필수): 검색 키워드 (예: 'report', 'budget 2024'). *.txt와 같은 와일드카드는 지원되지 않습니다.
- `top` (integer, 선택사항): 페이지당 반환할 최대 결과 수 (1-1000). 기본값: 50
- `skip_token` (string, 선택사항): 다음 결과 페이지를 가져오기 위한 이전 응답의 페이지네이션 토큰.
- `select` (string, 선택사항): 반환할 필드의 쉼표로 구분된 목록 (예: 'id,name,size,folder,file,webUrl,lastModifiedDateTime').
</Accordion>
<Accordion title="microsoft_sharepoint/copy_file">
**설명:** SharePoint 내에서 파일 또는 폴더를 새 위치로 복사합니다. 원본 항목은 변경되지 않습니다. 대용량 파일의 경우 복사 작업은 비동기적입니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): 복사할 파일 또는 폴더의 고유 식별자. list_files 또는 search_files에서 가져오세요.
- `destination_folder_id` (string, 필수): 대상 폴더의 ID. 루트 폴더의 경우 'root'를 사용하거나 list_files에서 가져온 폴더 ID를 사용하세요.
- `new_name` (string, 선택사항): 복사본의 새 이름. 제공하지 않으면 원래 이름이 사용됩니다.
</Accordion>
<Accordion title="microsoft_sharepoint/move_file">
**설명:** SharePoint 내에서 파일 또는 폴더를 새 위치로 이동합니다. 항목은 원래 위치에서 제거됩니다. 폴더의 경우 모든 내용도 함께 이동됩니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): 이동할 파일 또는 폴더의 고유 식별자. list_files 또는 search_files에서 가져오세요.
- `destination_folder_id` (string, 필수): 대상 폴더의 ID. 루트 폴더의 경우 'root'를 사용하거나 list_files에서 가져온 폴더 ID를 사용하세요.
- `new_name` (string, 선택사항): 이동된 항목의 새 이름. 제공하지 않으면 원래 이름이 유지됩니다.
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_worksheets">
**설명:** SharePoint 문서 라이브러리에 저장된 Excel 통합 문서의 모든 워크시트(탭)를 나열합니다. 반환된 워크시트 이름을 다른 Excel 작업에 사용하세요.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): SharePoint에 있는 Excel 파일의 고유 식별자. list_files 또는 search_files에서 가져오세요.
- `select` (string, 선택사항): 반환할 속성의 쉼표로 구분된 목록 (예: 'id,name,position,visibility').
- `filter` (string, 선택사항): OData 필터 표현식 (예: "visibility eq 'Visible'"로 숨겨진 시트 제외).
- `top` (integer, 선택사항): 반환할 최대 워크시트 수. 최소: 1, 최대: 999
- `orderby` (string, 선택사항): 정렬 순서 (예: 'position asc'로 탭 순서대로 반환).
</Accordion>
<Accordion title="microsoft_sharepoint/create_excel_worksheet">
**설명:** SharePoint 문서 라이브러리에 저장된 Excel 통합 문서에 새 워크시트(탭)를 만듭니다. 새 시트는 탭 목록의 끝에 추가됩니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): SharePoint에 있는 Excel 파일의 고유 식별자. list_files 또는 search_files에서 가져오세요.
- `name` (string, 필수): 새 워크시트의 이름. 최대 31자. 사용할 수 없는 문자: \ / * ? : [ ]. 통합 문서 내에서 고유해야 합니다.
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_range_data">
**설명:** SharePoint에 저장된 Excel 워크시트의 특정 범위에서 셀 값을 가져옵니다. 크기를 모르는 상태에서 모든 데이터를 읽으려면 대신 get_excel_used_range를 사용하세요.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): SharePoint에 있는 Excel 파일의 고유 식별자. list_files 또는 search_files에서 가져오세요.
- `worksheet_name` (string, 필수): 읽을 워크시트(탭)의 이름. get_excel_worksheets에서 가져오세요. 대소문자를 구분합니다.
- `range` (string, 필수): A1 표기법의 셀 범위 (예: 'A1:C10', 'A:C', '1:5', 'A1').
- `select` (string, 선택사항): 반환할 속성의 쉼표로 구분된 목록 (예: 'address,values,formulas,numberFormat,text').
</Accordion>
<Accordion title="microsoft_sharepoint/update_excel_range_data">
**설명:** SharePoint에 저장된 Excel 워크시트의 특정 범위에 값을 씁니다. 기존 셀 내용을 덮어씁니다. values 배열의 크기는 범위 크기와 정확히 일치해야 합니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): SharePoint에 있는 Excel 파일의 고유 식별자. list_files 또는 search_files에서 가져오세요.
- `worksheet_name` (string, 필수): 업데이트할 워크시트(탭)의 이름. get_excel_worksheets에서 가져오세요. 대소문자를 구분합니다.
- `range` (string, 필수): 값을 쓸 A1 표기법의 셀 범위 (예: 'A1:C3'은 3x3 블록).
- `values` (array, 필수): 2D 값 배열 (셀을 포함하는 행). A1:B2의 예: [["Header1", "Header2"], ["Value1", "Value2"]]. 셀을 지우려면 null을 사용하세요.
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_used_range_metadata">
**설명:** 실제 셀 값 없이 워크시트에서 사용된 범위의 메타데이터(주소 및 크기)만 반환합니다. 대용량 파일에서 데이터를 청크로 읽기 전에 스프레드시트 크기를 파악하는 데 이상적입니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): SharePoint에 있는 Excel 파일의 고유 식별자. list_files 또는 search_files에서 가져오세요.
- `worksheet_name` (string, 필수): 읽을 워크시트(탭)의 이름. get_excel_worksheets에서 가져오세요. 대소문자를 구분합니다.
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_used_range">
**설명:** SharePoint에 저장된 워크시트에서 데이터가 포함된 모든 셀을 가져옵니다. 2MB보다 큰 파일에는 사용하지 마세요. 대용량 파일의 경우 먼저 get_excel_used_range_metadata를 사용한 다음 get_excel_range_data로 작은 청크로 읽으세요 (아래 작업 목록 끝의 예시 참조).
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): SharePoint에 있는 Excel 파일의 고유 식별자. list_files 또는 search_files에서 가져오세요.
- `worksheet_name` (string, 필수): 읽을 워크시트(탭)의 이름. get_excel_worksheets에서 가져오세요. 대소문자를 구분합니다.
- `select` (string, 선택사항): 반환할 속성의 쉼표로 구분된 목록 (예: 'address,values,formulas,numberFormat,text,rowCount,columnCount').
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_cell">
**설명:** SharePoint의 Excel 파일에서 행과 열 인덱스로 단일 셀의 값을 가져옵니다. 인덱스는 0 기반입니다 (행 0 = Excel 행 1, 열 0 = 열 A).
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): SharePoint에 있는 Excel 파일의 고유 식별자. list_files 또는 search_files에서 가져오세요.
- `worksheet_name` (string, 필수): 워크시트(탭)의 이름. get_excel_worksheets에서 가져오세요. 대소문자를 구분합니다.
- `row` (integer, 필수): 0 기반 행 인덱스 (행 0 = Excel 행 1). 유효 범위: 0-1048575
- `column` (integer, 필수): 0 기반 열 인덱스 (열 0 = A, 열 1 = B). 유효 범위: 0-16383
- `select` (string, 선택사항): 반환할 속성의 쉼표로 구분된 목록 (예: 'address,values,formulas,numberFormat,text').
</Accordion>
<Accordion title="microsoft_sharepoint/add_excel_table">
**설명:** 셀 범위를 필터링, 정렬 및 구조화된 데이터 기능이 있는 서식이 지정된 Excel 테이블로 변환합니다. 테이블을 만들면 add_excel_table_row로 데이터를 추가할 수 있습니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): SharePoint에 있는 Excel 파일의 고유 식별자. list_files 또는 search_files에서 가져오세요.
- `worksheet_name` (string, 필수): 데이터 범위가 포함된 워크시트의 이름. get_excel_worksheets에서 가져오세요.
- `range` (string, 필수): 헤더와 데이터를 포함하여 테이블로 변환할 셀 범위 (예: 'A1:D10'에서 A1:D1은 열 헤더).
- `has_headers` (boolean, 선택사항): 첫 번째 행에 열 헤더가 포함되어 있으면 true로 설정. 기본값: true
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_tables">
**설명:** SharePoint에 저장된 특정 Excel 워크시트의 모든 테이블을 나열합니다. id, name, showHeaders 및 showTotals를 포함한 테이블 속성을 반환합니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): SharePoint에 있는 Excel 파일의 고유 식별자. list_files 또는 search_files에서 가져오세요.
- `worksheet_name` (string, 필수): 테이블을 가져올 워크시트의 이름. get_excel_worksheets에서 가져오세요.
</Accordion>
<Accordion title="microsoft_sharepoint/add_excel_table_row">
**설명:** SharePoint 파일의 Excel 테이블 끝에 새 행을 추가합니다. values 배열은 테이블의 열 수와 같은 수의 요소를 가져야 합니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): SharePoint에 있는 Excel 파일의 고유 식별자. list_files 또는 search_files에서 가져오세요.
- `worksheet_name` (string, 필수): 테이블이 포함된 워크시트의 이름. get_excel_worksheets에서 가져오세요.
- `table_name` (string, 필수): 행을 추가할 테이블의 이름 (예: 'Table1'). get_excel_tables에서 가져오세요. 대소문자를 구분합니다.
- `values` (array, 필수): 새 행의 셀 값 배열로 테이블 순서대로 열당 하나씩 (예: ["John Doe", "john@example.com", 25]).
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_table_data">
**설명:** SharePoint 파일의 Excel 테이블에서 모든 행을 데이터 범위로 가져옵니다. 정확한 범위를 알 필요가 없으므로 구조화된 테이블 작업 시 get_excel_range_data보다 쉽습니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): SharePoint에 있는 Excel 파일의 고유 식별자. list_files 또는 search_files에서 가져오세요.
- `worksheet_name` (string, 필수): 테이블이 포함된 워크시트의 이름. get_excel_worksheets에서 가져오세요.
- `table_name` (string, 필수): 데이터를 가져올 테이블의 이름 (예: 'Table1'). get_excel_tables에서 가져오세요. 대소문자를 구분합니다.
- `select` (string, 선택사항): 반환할 속성의 쉼표로 구분된 목록 (예: 'address,values,formulas,numberFormat,text').
</Accordion>
<Accordion title="microsoft_sharepoint/create_excel_chart">
**설명:** SharePoint에 저장된 Excel 워크시트에 데이터 범위에서 차트 시각화를 만듭니다. 차트는 워크시트에 포함됩니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): SharePoint에 있는 Excel 파일의 고유 식별자. list_files 또는 search_files에서 가져오세요.
- `worksheet_name` (string, 필수): 차트를 만들 워크시트의 이름. get_excel_worksheets에서 가져오세요.
- `chart_type` (string, 필수): 차트 유형 (예: 'ColumnClustered', 'ColumnStacked', 'Line', 'LineMarkers', 'Pie', 'Bar', 'BarClustered', 'Area', 'Scatter', 'Doughnut').
- `source_data` (string, 필수): 헤더를 포함한 A1 표기법의 차트 데이터 범위 (예: 'A1:B10').
- `series_by` (string, 선택사항): 데이터 계열 구성 방법: 'Auto', 'Columns' 또는 'Rows'. 기본값: 'Auto'
</Accordion>
<Accordion title="microsoft_sharepoint/list_excel_charts">
**설명:** SharePoint에 저장된 Excel 워크시트에 포함된 모든 차트를 나열합니다. id, name, chartType, height, width 및 position을 포함한 차트 속성을 반환합니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): SharePoint에 있는 Excel 파일의 고유 식별자. list_files 또는 search_files에서 가져오세요.
- `worksheet_name` (string, 필수): 차트를 나열할 워크시트의 이름. get_excel_worksheets에서 가져오세요.
</Accordion>
<Accordion title="microsoft_sharepoint/delete_excel_worksheet">
**설명:** SharePoint에 저장된 Excel 통합 문서에서 워크시트(탭)와 모든 내용을 영구적으로 제거합니다. 실행 취소할 수 없습니다. 통합 문서에는 최소 하나의 워크시트가 있어야 합니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): SharePoint에 있는 Excel 파일의 고유 식별자. list_files 또는 search_files에서 가져오세요.
- `worksheet_name` (string, 필수): 삭제할 워크시트의 이름. 대소문자를 구분합니다. 이 시트의 모든 데이터, 테이블 및 차트가 영구적으로 제거됩니다.
</Accordion>
<Accordion title="microsoft_sharepoint/delete_excel_table">
**설명:** SharePoint의 Excel 워크시트에서 테이블을 제거합니다. 테이블 구조(필터링, 서식, 테이블 기능)는 삭제되지만 기본 셀 데이터는 보존됩니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): SharePoint에 있는 Excel 파일의 고유 식별자. list_files 또는 search_files에서 가져오세요.
- `worksheet_name` (string, 필수): 테이블이 포함된 워크시트의 이름. get_excel_worksheets에서 가져오세요.
- `table_name` (string, 필수): 삭제할 테이블의 이름 (예: 'Table1'). get_excel_tables에서 가져오세요. 테이블 삭제 후에도 셀의 데이터는 유지됩니다.
</Accordion>
<Accordion title="microsoft_sharepoint/list_excel_names">
**설명:** SharePoint에 저장된 Excel 통합 문서에 정의된 모든 명명된 범위를 가져옵니다. 명명된 범위는 셀 범위에 대한 사용자 정의 레이블입니다 (예: 'SalesData'는 A1:D100을 가리킴).
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): SharePoint에 있는 Excel 파일의 고유 식별자. list_files 또는 search_files에서 가져오세요.
</Accordion>
<Accordion title="microsoft_sharepoint/get_word_document_content">
**설명:** SharePoint 문서 라이브러리에 저장된 Word 문서(.docx)에서 텍스트 내용을 다운로드하고 추출합니다. SharePoint에서 Word 문서를 읽는 권장 방법입니다.
**매개변수:**
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
- `item_id` (string, 필수): SharePoint에 있는 Word 문서(.docx)의 고유 식별자. list_files 또는 search_files에서 가져오세요.
</Accordion>
</AccordionGroup>
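아래는 대용량 Excel 파일을 위의 get_excel_used_range_metadata와 get_excel_range_data 작업으로 청크 단위로 읽는 흐름을 보여주는 최소 스케치입니다. 워크시트 이름('Sales')과 청크 크기는 설명을 위한 가정이며, 사이트/드라이브/파일 ID는 에이전트가 get_sites, get_drives, list_files로 먼저 조회한다고 가정합니다:
```python
from crewai import Agent, Task, Crew

# 가상의 예시: 사용된 범위 메타데이터 확인 후 청크 단위로 읽기
excel_reader = Agent(
    role="Excel 데이터 분석가",
    goal="대용량 스프레드시트를 안전하게 청크 단위로 읽어 요약",
    backstory="SharePoint에 저장된 대규모 Excel 데이터를 다루는 AI 어시스턴트.",
    apps=[
        'microsoft_sharepoint/get_sites',
        'microsoft_sharepoint/get_drives',
        'microsoft_sharepoint/list_files',
        'microsoft_sharepoint/get_excel_used_range_metadata',
        'microsoft_sharepoint/get_excel_range_data'
    ]
)

chunked_read_task = Task(
    description=(
        "'Sales' 워크시트의 사용된 범위 크기를 get_excel_used_range_metadata로 확인한 뒤, "
        "get_excel_range_data로 1000행씩 나누어 읽고 전체 데이터를 요약하세요."
    ),
    agent=excel_reader,
    expected_output="전체 데이터 요약과 읽은 범위 목록."
)

crew = Crew(agents=[excel_reader], tasks=[chunked_read_task])
crew.kickoff()
```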


@@ -107,6 +107,86 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `join_web_url` (string, 필수): 검색할 회의의 웹 참가 URL.
</Accordion>
<Accordion title="microsoft_teams/search_online_meetings_by_meeting_id">
**설명:** 외부 Meeting ID로 온라인 회의를 검색합니다.
**매개변수:**
- `join_meeting_id` (string, 필수): 참석자가 참가할 때 사용하는 회의 ID(숫자 코드). 회의 초대에 표시되는 joinMeetingId이며, Graph API meeting id가 아닙니다.
</Accordion>
<Accordion title="microsoft_teams/get_meeting">
**설명:** 특정 온라인 회의의 세부 정보를 가져옵니다.
**매개변수:**
- `meeting_id` (string, 필수): Graph API 회의 ID(긴 영숫자 문자열). create_meeting 또는 search_online_meetings 작업에서 얻을 수 있습니다. 숫자 joinMeetingId와 다릅니다.
</Accordion>
<Accordion title="microsoft_teams/get_team_members">
**설명:** 특정 팀의 멤버를 가져옵니다.
**매개변수:**
- `team_id` (string, 필수): 팀의 고유 식별자. get_teams 작업에서 얻을 수 있습니다.
- `top` (integer, 선택사항): 페이지당 검색할 멤버 수 (1-999). 기본값: 100.
- `skip_token` (string, 선택사항): 이전 응답의 페이지네이션 토큰. 응답에 @odata.nextLink가 포함된 경우 $skiptoken 매개변수 값을 추출하여 여기에 전달하면 다음 페이지 결과를 가져올 수 있습니다.
</Accordion>
<Accordion title="microsoft_teams/create_channel">
**설명:** 팀에 새 채널을 만듭니다.
**매개변수:**
- `team_id` (string, 필수): 팀의 고유 식별자. get_teams 작업에서 얻을 수 있습니다.
- `display_name` (string, 필수): Teams에 표시되는 채널 이름. 팀 내에서 고유해야 합니다. 최대 50자.
- `description` (string, 선택사항): 채널 목적을 설명하는 선택적 설명. 채널 세부 정보에 표시됩니다. 최대 1024자.
- `membership_type` (string, 선택사항): 채널 가시성. 옵션: standard, private. "standard" = 모든 팀 멤버에게 표시, "private" = 명시적으로 추가된 멤버에게만 표시. 기본값: standard.
</Accordion>
<Accordion title="microsoft_teams/get_message_replies">
**설명:** 채널의 특정 메시지에 대한 회신을 가져옵니다.
**매개변수:**
- `team_id` (string, 필수): 팀의 고유 식별자. get_teams 작업에서 얻을 수 있습니다.
- `channel_id` (string, 필수): 채널의 고유 식별자. get_channels 작업에서 얻을 수 있습니다.
- `message_id` (string, 필수): 상위 메시지의 고유 식별자. get_messages 작업에서 얻을 수 있습니다.
- `top` (integer, 선택사항): 페이지당 검색할 회신 수 (1-50). 기본값: 50.
- `skip_token` (string, 선택사항): 이전 응답의 페이지네이션 토큰. 응답에 @odata.nextLink가 포함된 경우 $skiptoken 매개변수 값을 추출하여 여기에 전달하면 다음 페이지 결과를 가져올 수 있습니다 (아래 작업 목록 끝의 예시 참조).
</Accordion>
<Accordion title="microsoft_teams/reply_to_message">
**설명:** Teams 채널의 메시지에 회신합니다.
**매개변수:**
- `team_id` (string, 필수): 팀의 고유 식별자. get_teams 작업에서 얻을 수 있습니다.
- `channel_id` (string, 필수): 채널의 고유 식별자. get_channels 작업에서 얻을 수 있습니다.
- `message_id` (string, 필수): 회신할 메시지의 고유 식별자. get_messages 작업에서 얻을 수 있습니다.
- `message` (string, 필수): 회신 내용. HTML의 경우 서식 태그 포함. 텍스트의 경우 일반 텍스트만.
- `content_type` (string, 선택사항): 콘텐츠 형식. 옵션: html, text. "text"는 일반 텍스트, "html"은 서식이 있는 리치 텍스트. 기본값: text.
</Accordion>
<Accordion title="microsoft_teams/update_meeting">
**설명:** 기존 온라인 회의를 업데이트합니다.
**매개변수:**
- `meeting_id` (string, 필수): 회의의 고유 식별자. create_meeting 또는 search_online_meetings 작업에서 얻을 수 있습니다.
- `subject` (string, 선택사항): 새 회의 제목.
- `startDateTime` (string, 선택사항): 시간대가 포함된 ISO 8601 형식의 새 시작 시간. 예: "2024-01-20T10:00:00-08:00".
- `endDateTime` (string, 선택사항): 시간대가 포함된 ISO 8601 형식의 새 종료 시간.
</Accordion>
<Accordion title="microsoft_teams/delete_meeting">
**설명:** 온라인 회의를 삭제합니다.
**매개변수:**
- `meeting_id` (string, 필수): 삭제할 회의의 고유 식별자. create_meeting 또는 search_online_meetings 작업에서 얻을 수 있습니다.
</Accordion>
</AccordionGroup>
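위 작업들의 skip_token 페이지네이션을 처리할 때 @odata.nextLink에서 $skiptoken 값을 추출하는 방법을 보여주는 최소 스케치입니다 (URL 값은 설명을 위한 가정입니다):
```python
from urllib.parse import urlparse, parse_qs

# 가상의 예시: 이전 응답의 @odata.nextLink에서 $skiptoken 값 추출
next_link = "https://graph.microsoft.com/v1.0/teams/123/members?$skiptoken=abc123"

query = parse_qs(urlparse(next_link).query)
skip_token = query.get("$skiptoken", [None])[0]

print(skip_token)  # 'abc123': 다음 페이지 요청의 skip_token 매개변수로 전달
```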
## 사용 예제
@@ -140,6 +220,62 @@ crew = Crew(
crew.kickoff()
```
### 메시징 및 커뮤니케이션
```python
from crewai import Agent, Task, Crew

# 메시징에 특화된 에이전트 생성
messenger = Agent(
    role="Teams 메신저",
    goal="Teams 채널에서 메시지 전송 및 검색",
    backstory="팀 커뮤니케이션 및 메시지 관리에 능숙한 AI 어시스턴트.",
    apps=['microsoft_teams/send_message', 'microsoft_teams/get_messages']
)

# 메시지 전송 및 최근 메시지 검색 작업
messaging_task = Task(
    description="'your_team_id' 팀의 General 채널에 'Hello team! This is an automated update from our AI assistant.' 메시지를 보낸 다음 해당 채널의 최근 10개 메시지를 검색하세요.",
    agent=messenger,
    expected_output="메시지가 성공적으로 전송되고 최근 메시지가 검색됨."
)

crew = Crew(
    agents=[messenger],
    tasks=[messaging_task]
)

crew.kickoff()
```
### 회의 관리
```python
from crewai import Agent, Task, Crew

# 회의 관리를 위한 에이전트 생성
meeting_scheduler = Agent(
    role="회의 스케줄러",
    goal="Teams 회의 생성 및 관리",
    backstory="회의 일정 관리 및 정리를 담당하는 AI 어시스턴트.",
    apps=['microsoft_teams/create_meeting', 'microsoft_teams/search_online_meetings_by_join_url']
)

# 회의 생성 작업
schedule_meeting_task = Task(
    description="내일 오전 10시에 1시간 동안 진행되는 '주간 팀 동기화' 제목의 Teams 회의를 생성하세요 (시간대가 포함된 적절한 ISO 8601 형식 사용).",
    agent=meeting_scheduler,
    expected_output="회의 세부 정보와 함께 Teams 회의가 성공적으로 생성됨."
)

crew = Crew(
    agents=[meeting_scheduler],
    tasks=[schedule_meeting_task]
)

crew.kickoff()
```
## 문제 해결
### 일반적인 문제
@@ -148,11 +284,35 @@ crew.kickoff()
- Microsoft 계정이 Teams 액세스에 필요한 권한을 가지고 있는지 확인하세요.
- 필요한 범위: `Team.ReadBasic.All`, `Channel.ReadBasic.All`, `ChannelMessage.Send`, `ChannelMessage.Read.All`, `OnlineMeetings.ReadWrite`, `OnlineMeetings.Read`.
- OAuth 연결에 필요한 모든 범위가 포함되어 있는지 확인하세요.
**팀 및 채널 액세스**
- 액세스하려는 팀의 멤버인지 확인하세요.
- 팀 및 채널 ID가 올바른지 다시 확인하세요.
- 팀 및 채널 ID는 `get_teams` 및 `get_channels` 작업을 사용하여 얻을 수 있습니다.
**메시지 전송 문제**
- `send_message`에 `team_id`, `channel_id`, `message`가 제공되는지 확인하세요.
- 지정된 채널에 메시지를 보낼 권한이 있는지 확인하세요.
- 메시지 형식에 따라 적절한 `content_type`(text 또는 html)을 선택하세요.
**회의 생성**
- `subject`, `startDateTime`, `endDateTime`이 제공되는지 확인하세요.
- 날짜/시간 필드에 시간대가 포함된 적절한 ISO 8601 형식을 사용하세요 (예: '2024-01-20T10:00:00-08:00'; 아래 예시 참조).
- 회의 시간이 미래인지 확인하세요.
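시간대가 포함된 ISO 8601 문자열을 만드는 최소 스케치입니다 (시간대와 날짜 값은 설명을 위한 가정입니다):
```python
from datetime import datetime, timedelta, timezone

# 가상의 예시: 태평양 표준시(UTC-8) 기준 1시간짜리 회의 시간 생성
pst = timezone(timedelta(hours=-8))
start = datetime(2024, 1, 20, 10, 0, tzinfo=pst)
end = start + timedelta(hours=1)

print(start.isoformat())  # 2024-01-20T10:00:00-08:00
print(end.isoformat())    # 2024-01-20T11:00:00-08:00
```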
**메시지 검색 제한**
- `get_messages` 작업은 요청당 최대 50개 메시지만 검색할 수 있습니다.
- 메시지는 역시간순(최신순)으로 반환됩니다.
**회의 검색**
- `search_online_meetings_by_join_url`의 경우 참가 URL이 정확하고 올바르게 형식화되어 있는지 확인하세요.
- URL은 완전한 Teams 회의 참가 URL이어야 합니다.
### 도움 받기


@@ -97,6 +97,26 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `file_id` (string, 필수): 삭제할 문서의 ID.
</Accordion>
<Accordion title="microsoft_word/copy_document">
**설명:** OneDrive의 새 위치에 문서를 복사합니다.
**매개변수:**
- `file_id` (string, 필수): 복사할 문서의 ID.
- `name` (string, 선택사항): 복사된 문서의 새 이름.
- `parent_id` (string, 선택사항): 대상 폴더의 ID (기본값: 루트).
</Accordion>
<Accordion title="microsoft_word/move_document">
**설명:** OneDrive의 새 위치로 문서를 이동합니다.
**매개변수:**
- `file_id` (string, 필수): 이동할 문서의 ID.
- `parent_id` (string, 필수): 대상 폴더의 ID.
- `name` (string, 선택사항): 이동된 문서의 새 이름.
</Accordion>
</AccordionGroup>
## 사용 예제


@@ -112,3 +112,9 @@ HITL 워크플로우는 다음과 같은 경우에 특히 유용합니다:
- 민감하거나 고위험 작업
- 인간의 판단이 필요한 창의적 과제
- 컴플라이언스 및 규제 검토
## Enterprise 기능
<Card title="Flow HITL 관리 플랫폼" icon="users-gear" href="/ko/enterprise/features/flow-hitl-management">
CrewAI Enterprise는 플랫폼 내 검토, 응답자 할당, 권한, 에스컬레이션 정책, SLA 관리, 동적 라우팅 및 전체 분석을 갖춘 Flow용 포괄적인 HITL 관리 시스템을 제공합니다. [자세히 알아보기 →](/ko/enterprise/features/flow-hitl-management)
</Card>


@@ -4,6 +4,74 @@ description: "Atualizações de produto, melhorias e correções do CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="26 jan 2026">
## v1.9.0
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.9.0)
## O que Mudou
### Funcionalidades
- Adicionar suporte a saídas estruturadas e response_format em vários provedores
- Adicionar ID de resposta às respostas de streaming
- Adicionar ordenação de eventos com hierarquias pai-filho
- Adicionar suporte à autenticação SSO Keycloak
- Adicionar capacidades de manipulação de arquivos multimodais
- Adicionar suporte nativo à API de respostas OpenAI
- Adicionar utilitários de execução de tarefas A2A
- Adicionar configuração de servidor A2A e geração de cartão de agente
- Aprimorar sistema de eventos e expandir opções de transporte
- Melhorar mecanismos de chamada de ferramentas
### Correções de Bugs
- Aprimorar armazenamento de arquivos com cache de memória de fallback quando aiocache não está disponível
- Garantir que lista de documentos não esteja vazia
- Tratar sequências de parada do Bedrock adequadamente
- Adicionar suporte à chave de API do Google Vertex
- Aprimorar detecção de palavras de parada do modelo Azure
- Melhorar tratamento de erros para HumanFeedbackPending na execução de fluxo
- Corrigir desvinculação de tarefa do span de execução
### Documentação
- Adicionar documentação de manipulação nativa de arquivos
- Adicionar documentação da API de respostas OpenAI
- Adicionar orientação de implementação de cartão de agente
- Refinar documentação A2A
- Atualizar changelog para v1.8.0
### Contribuidores
@Anaisdg, @GininDenis, @Vidit-Ostwal, @greysonlalonde, @heitorado, @joaomdmoura, @koushiv777, @lorenzejay, @nicoferdi96, @vinibrsl
</Update>
<Update label="15 jan 2026">
## v1.8.1
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.8.1)
## O que Mudou
### Funcionalidades
- Adicionar utilitários de execução de tarefas A2A
- Adicionar configuração de servidor A2A e geração de cartão de agente
- Adicionar mecanismos de transporte adicionais
- Adicionar suporte à integração Galileo
### Correções de Bugs
- Melhorar compatibilidade do modelo Azure
- Expandir profundidade de inspeção de frame para detectar parent_flow
- Resolver problemas de gerenciamento de span de execução de tarefas
- Aprimorar tratamento de erros para cenários de feedback humano durante execução de fluxo
### Documentação
- Adicionar documentação de cartão de agente A2A
- Adicionar documentação de recurso de redação de PII
### Contribuidores
@Anaisdg, @GininDenis, @greysonlalonde, @joaomdmoura, @koushiv777, @lorenzejay, @vinibrsl
</Update>
<Update label="08 jan 2026">
## v1.8.0


@@ -0,0 +1,563 @@
---
title: "Gerenciamento HITL para Flows"
description: "Revisão humana de nível empresarial para Flows com notificações por email, regras de roteamento e capacidades de resposta automática"
icon: "users-gear"
mode: "wide"
---
<Note>
Os recursos de gerenciamento HITL para Flows requerem o decorador `@human_feedback`, disponível no **CrewAI versão 1.8.0 ou superior**. Estes recursos aplicam-se especificamente a **Flows**, não a Crews.
</Note>
O CrewAI Enterprise oferece um sistema abrangente de gerenciamento Human-in-the-Loop (HITL) para Flows que transforma fluxos de trabalho de IA em processos colaborativos humano-IA. A plataforma usa uma **arquitetura email-first** que permite que qualquer pessoa com um endereço de email responda a solicitações de revisão—sem necessidade de conta na plataforma.
## Visão Geral
<CardGroup cols={3}>
<Card title="Design Email-First" icon="envelope">
Respondentes podem responder diretamente aos emails de notificação para fornecer feedback
</Card>
<Card title="Roteamento Flexível" icon="route">
Direcione solicitações para emails específicos com base em padrões de método ou estado do flow
</Card>
<Card title="Resposta Automática" icon="clock">
Configure respostas automáticas de fallback quando nenhum humano responder a tempo
</Card>
</CardGroup>
### Principais Benefícios
- **Modelo mental simples**: Endereços de email são universais; não é necessário gerenciar usuários ou funções da plataforma
- **Respondentes externos**: Qualquer pessoa com email pode responder, mesmo não sendo usuário da plataforma
- **Atribuição dinâmica**: Obtenha o email do responsável diretamente do estado do flow (ex: `sales_rep_email`)
- **Configuração reduzida**: Menos configurações para definir, tempo mais rápido para gerar valor
- **Email como canal principal**: A maioria dos usuários prefere responder via email do que fazer login em um dashboard
## Configurando Pontos de Revisão Humana em Flows
Configure checkpoints de revisão humana em seus Flows usando o decorador `@human_feedback`. Quando a execução atinge um ponto de revisão, o sistema pausa, notifica o responsável via email e aguarda uma resposta.
```python
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult


class ContentApprovalFlow(Flow):
    @start()
    def generate_content(self):
        # IA gera conteúdo
        return "Texto de marketing gerado para campanha Q1..."

    @listen(generate_content)
    @human_feedback(
        message="Por favor, revise este conteúdo para conformidade com a marca:",
        emit=["approved", "rejected", "needs_revision"],
    )
    def review_content(self, content):
        return content

    @listen("approved")
    def publish_content(self, result: HumanFeedbackResult):
        print(f"Publicando conteúdo aprovado. Notas do revisor: {result.feedback}")

    @listen("rejected")
    def archive_content(self, result: HumanFeedbackResult):
        print(f"Conteúdo rejeitado. Motivo: {result.feedback}")

    @listen("needs_revision")
    def revise_content(self, result: HumanFeedbackResult):
        print(f"Revisão solicitada: {result.feedback}")
```
Para detalhes completos de implementação, consulte o guia [Feedback Humano em Flows](/pt-BR/learn/human-feedback-in-flows).
### Parâmetros do Decorador
| Parâmetro | Tipo | Descrição |
|-----------|------|-----------|
| `message` | `str` | A mensagem exibida para o revisor humano |
| `emit` | `list[str]` | Opções de resposta válidas (exibidas como botões na UI) |
## Configuração da Plataforma
Acesse a configuração HITL em: **Deployment** → **Settings** → **Human in the Loop Configuration**
<Frame>
<img src="/images/enterprise/hitl-settings-overview.png" alt="Configurações HITL" />
</Frame>
### Notificações por Email
Toggle para ativar ou desativar notificações por email para solicitações HITL.
| Configuração | Padrão | Descrição |
|--------------|--------|-----------|
| Notificações por Email | Ativado | Enviar emails quando feedback for solicitado |
<Note>
Quando desativado, os respondentes devem usar a UI do dashboard ou você deve configurar webhooks para sistemas de notificação personalizados.
</Note>
### Meta de SLA
Defina um tempo de resposta alvo para fins de rastreamento e métricas.
| Configuração | Descrição |
|--------------|-----------|
| Meta de SLA (minutos) | Tempo de resposta alvo. Usado para métricas do dashboard e rastreamento de SLA |
Deixe vazio para desativar o rastreamento de SLA.
## Notificações e Respostas por Email
O sistema HITL usa uma arquitetura email-first onde os respondentes podem responder diretamente aos emails de notificação.
### Como Funcionam as Respostas por Email
<Steps>
<Step title="Notificação Enviada">
Quando uma solicitação HITL é criada, um email é enviado ao respondente atribuído com o conteúdo e contexto da revisão.
</Step>
<Step title="Endereço Reply-To">
O email inclui um endereço reply-to especial com um token assinado para autenticação.
</Step>
<Step title="Usuário Responde">
O respondente simplesmente responde ao email com seu feedback—nenhum login necessário.
</Step>
<Step title="Validação do Token">
A plataforma recebe a resposta, verifica o token assinado e corresponde o email do remetente.
</Step>
<Step title="Flow Continua">
O feedback é registrado e o flow continua com a entrada humana.
</Step>
</Steps>
### Formato de Resposta
Respondentes podem responder com:
- **Opção emit**: Se a resposta corresponder a uma opção `emit` (ex: "approved"), ela é usada diretamente
- **Texto livre**: Qualquer outra resposta em texto é passada ao flow como feedback; apenas a primeira linha do corpo da resposta é considerada
### Emails de Confirmação
Após processar uma resposta, o respondente recebe um email de confirmação indicando se o feedback foi enviado com sucesso ou se ocorreu um erro.
### Segurança do Token de Email
- Tokens são assinados criptograficamente para segurança
- Tokens expiram após 7 dias
- Email do remetente deve corresponder ao email autorizado do token
- Emails de confirmação/erro são enviados após o processamento
## Regras de Roteamento
Direcione solicitações HITL para endereços de email específicos com base em padrões de método.
<Frame>
<img src="/images/enterprise/hitl-settings-routing-rules.png" alt="Configuração de Regras de Roteamento HITL" />
</Frame>
### Estrutura da Regra
```json
{
  "name": "Aprovações para Financeiro",
  "match": {
    "method_name": "approve_*"
  },
  "assign_to_email": "financeiro@empresa.com",
  "assign_from_input": "manager_email"
}
```
### Padrões de Correspondência
| Padrão | Descrição | Exemplo de Correspondência |
|--------|-----------|---------------------------|
| `approve_*` | Wildcard (qualquer caractere) | `approve_payment`, `approve_vendor` |
| `review_?` | Caractere único | `review_a`, `review_1` |
| `validate_payment` | Correspondência exata | apenas `validate_payment` |
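Um esboço mínimo e hipotético de como esses padrões podem ser avaliados, assumindo semântica no estilo `fnmatch` ('*' = qualquer sequência, '?' = um único caractere):
```python
from fnmatch import fnmatch

# Regras hipotéticas de roteamento (apenas para ilustração)
rules = [
    {"pattern": "approve_*", "email": "financeiro@empresa.com"},
    {"pattern": "review_?", "email": "revisao@empresa.com"},
]

def resolve_assignee(method_name: str, default_email: str) -> str:
    """Retorna o email da primeira regra cujo padrão corresponde ao método."""
    for rule in rules:
        if fnmatch(method_name, rule["pattern"]):
            return rule["email"]
    # Fallback: criador do deployment
    return default_email

print(resolve_assignee("approve_payment", "owner@empresa.com"))   # financeiro@empresa.com
print(resolve_assignee("validate_payment", "owner@empresa.com"))  # owner@empresa.com
```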
### Prioridade de Atribuição
1. **Atribuição dinâmica** (`assign_from_input`): Se configurado, obtém email do estado do flow
2. **Email estático** (`assign_to_email`): Fallback para email configurado
3. **Criador do deployment**: Se nenhuma regra corresponder, o email do criador do deployment é usado
### Exemplo de Atribuição Dinâmica
Se seu estado do flow contém `{"sales_rep_email": "alice@empresa.com"}`, configure:
```json
{
  "name": "Direcionar para Representante de Vendas",
  "match": {
    "method_name": "review_*"
  },
  "assign_from_input": "sales_rep_email"
}
```
A solicitação será atribuída automaticamente para `alice@empresa.com`.
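Um esboço mínimo (hipotético) mostrando como gravar o email do responsável no estado do flow para que `assign_from_input` possa encontrá-lo; o nome do método e o valor do email são suposições ilustrativas:
```python
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback

class DealReviewFlow(Flow):
    @start()
    def fetch_deal(self):
        # Suposição: o email do representante vem do seu CRM
        self.state["sales_rep_email"] = "alice@empresa.com"
        return "Proposta comercial gerada..."

    @listen(fetch_deal)
    @human_feedback(
        message="Revise a proposta antes do envio ao cliente:",
        emit=["approved", "rejected"],
    )
    def review_deal(self, content):
        return content
```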
<Tip>
**Caso de Uso**: Obtenha o responsável do seu CRM, banco de dados ou etapa anterior do flow para direcionar revisões dinamicamente para a pessoa certa.
</Tip>
## Resposta Automática
Responda automaticamente a solicitações HITL se nenhum humano responder dentro do timeout. Isso garante que os flows não fiquem travados indefinidamente.
### Configuração
| Configuração | Descrição |
|--------------|-----------|
| Ativado | Toggle para ativar resposta automática |
| Timeout (minutos) | Tempo de espera antes de responder automaticamente |
| Resultado Padrão | O valor da resposta (deve corresponder a uma opção `emit`) |
<Frame>
<img src="/images/enterprise/hitl-settings-auto-respond.png" alt="Configuração de Resposta Automática HITL" />
</Frame>
### Casos de Uso
- **Conformidade com SLA**: Garante que flows não fiquem travados indefinidamente
- **Aprovação padrão**: Aprove automaticamente solicitações de baixo risco após timeout
- **Degradação graciosa**: Continue com um padrão seguro quando revisores não estiverem disponíveis
<Warning>
Use resposta automática com cuidado. Ative apenas para revisões não críticas onde uma resposta padrão é aceitável.
</Warning>
## Processo de Revisão
### Interface do Dashboard
A interface de revisão HITL oferece uma experiência limpa e focada para revisores:
- **Renderização Markdown**: Formatação rica para conteúdo de revisão com destaque de sintaxe
- **Painel de Contexto**: Visualize estado do flow, histórico de execução e informações relacionadas
- **Entrada de Feedback**: Forneça feedback detalhado e comentários com sua decisão
- **Ações Rápidas**: Botões de opção emit com um clique com comentários opcionais
<Frame>
<img src="/images/enterprise/hitl-list-pending-feedbacks.png" alt="Lista de Solicitações HITL Pendentes" />
</Frame>
### Métodos de Resposta
Revisores podem responder por três canais:
| Método | Descrição |
|--------|-----------|
| **Resposta por Email** | Responda diretamente ao email de notificação |
| **Dashboard** | Use a UI do dashboard Enterprise |
| **API/Webhook** | Resposta programática via API |
### Histórico e Trilha de Auditoria
Toda interação HITL é rastreada com uma linha do tempo completa:
- Histórico de decisões (aprovar/rejeitar/revisar)
- Identidade do revisor e timestamp
- Feedback e comentários fornecidos
- Método de resposta (email/dashboard/API)
- Métricas de tempo de resposta
## Análise e Monitoramento
Acompanhe o desempenho HITL com análises abrangentes.
### Dashboard de Desempenho
<Frame>
<img src="/images/enterprise/hitl-metrics.png" alt="Dashboard de Métricas HITL" />
</Frame>
<CardGroup cols={2}>
<Card title="Tempos de Resposta" icon="stopwatch">
Monitore tempos de resposta médios e medianos por revisor ou flow.
</Card>
<Card title="Tendências de Volume" icon="chart-bar">
Analise padrões de volume de revisão para otimizar capacidade da equipe.
</Card>
<Card title="Distribuição de Decisões" icon="chart-pie">
Visualize taxas de aprovação/rejeição em diferentes tipos de revisão.
</Card>
<Card title="Rastreamento de SLA" icon="chart-line">
Acompanhe a porcentagem de revisões concluídas dentro das metas de SLA.
</Card>
</CardGroup>
### Auditoria e Conformidade
Capacidades de auditoria prontas para empresas para requisitos regulatórios:
- Histórico completo de decisões com timestamps
- Verificação de identidade do revisor
- Logs de auditoria imutáveis
- Capacidades de exportação para relatórios de conformidade
## Casos de Uso Comuns
<AccordionGroup>
<Accordion title="Revisões de Segurança" icon="shield-halved">
**Caso de Uso**: Automação de questionários de segurança internos com validação humana
- IA gera respostas para questionários de segurança
- Equipe de segurança revisa e valida precisão via email
- Respostas aprovadas são compiladas na submissão final
- Trilha de auditoria completa para conformidade
</Accordion>
<Accordion title="Aprovação de Conteúdo" icon="file-lines">
**Caso de Uso**: Conteúdo de marketing que requer revisão legal/marca
- IA gera texto de marketing ou conteúdo de mídia social
- Roteie para email da equipe de marca para revisão de voz/tom
- Publicação automática após aprovação
</Accordion>
<Accordion title="Aprovações Financeiras" icon="money-bill">
**Caso de Uso**: Relatórios de despesas, termos de contrato, alocações de orçamento
- IA pré-processa e categoriza solicitações financeiras
- Roteie com base em limites de valor usando atribuição dinâmica
- Mantenha trilha de auditoria completa para conformidade financeira
</Accordion>
<Accordion title="Atribuição Dinâmica do CRM" icon="database">
**Caso de Uso**: Direcione revisões para proprietários de conta do seu CRM
- Flow obtém email do proprietário da conta do CRM
- Armazene email no estado do flow (ex: `account_owner_email`)
- Use `assign_from_input` para direcionar automaticamente para a pessoa certa
</Accordion>
<Accordion title="Garantia de Qualidade" icon="magnifying-glass">
**Caso de Uso**: Validação de saída de IA antes da entrega ao cliente
- IA gera conteúdo ou respostas voltadas ao cliente
- Equipe de QA revisa via notificação por email
- Loops de feedback melhoram desempenho da IA ao longo do tempo
</Accordion>
</AccordionGroup>
## API de Webhooks
Quando seus Flows pausam para feedback humano, você pode configurar webhooks para enviar dados da solicitação para sua própria aplicação. Isso permite:
- Construir UIs de aprovação personalizadas
- Integrar com ferramentas internas (Jira, ServiceNow, dashboards personalizados)
- Rotear aprovações para sistemas de terceiros
- Notificações em apps mobile
- Sistemas de decisão automatizados
<Frame>
<img src="/images/enterprise/hitl-settings-webhook.png" alt="Configuração de Webhook HITL" />
</Frame>
### Configurando Webhooks
<Steps>
<Step title="Navegue até Configurações">
Vá para **Deployment** → **Settings** → **Human in the Loop**
</Step>
<Step title="Expanda a Seção Webhooks">
Clique para expandir a configuração de **Webhooks**
</Step>
<Step title="Adicione sua URL de Webhook">
Digite sua URL de webhook (deve ser HTTPS em produção)
</Step>
<Step title="Salve a Configuração">
Clique em **Salvar Configuração** para ativar
</Step>
</Steps>
Você pode configurar múltiplos webhooks. Cada webhook ativo recebe todos os eventos HITL.
### Eventos de Webhook
Seu endpoint receberá requisições HTTP POST para estes eventos:
| Tipo de Evento | Quando é Disparado |
|----------------|-------------------|
| `new_request` | Um flow pausa e solicita feedback humano |
### Payload do Webhook
Todos os webhooks recebem um payload JSON com esta estrutura:
```json
{
  "event": "new_request",
  "request": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "flow_id": "flow_abc123",
    "method_name": "review_article",
    "message": "Por favor, revise este artigo para publicação.",
    "emit_options": ["approved", "rejected", "request_changes"],
    "state": {
      "article_id": 12345,
      "author": "john@example.com",
      "category": "technology"
    },
    "metadata": {},
    "created_at": "2026-01-14T12:00:00Z"
  },
  "deployment": {
    "id": 456,
    "name": "Content Review Flow",
    "organization_id": 789
  },
  "callback_url": "https://api.crewai.com/...",
  "assigned_to_email": "reviewer@company.com"
}
```
### Respondendo a Solicitações
Para enviar feedback, **faça POST para a `callback_url`** incluída no payload do webhook.
```http
POST {callback_url}
Content-Type: application/json

{
  "feedback": "Aprovado. Ótimo artigo!",
  "source": "my_custom_app"
}
```
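Por exemplo, um esboço mínimo em Python usando `requests` (a `callback_url` abaixo é um placeholder; use a URL recebida no payload do webhook):
```python
import requests

# A callback_url real vem do payload do webhook
callback_url = "https://api.crewai.com/..."

response = requests.post(
    callback_url,
    json={
        "feedback": "Aprovado. Ótimo artigo!",
        "source": "my_custom_app",
    },
    timeout=30,
)
response.raise_for_status()  # Falha explícita se a resposta não for 2xx
```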
### Segurança
<Info>
Todas as requisições de webhook são assinadas criptograficamente usando HMAC-SHA256 para garantir autenticidade e prevenir adulteração.
</Info>
#### Segurança do Webhook
- **Assinaturas HMAC-SHA256**: Todo webhook inclui uma assinatura criptográfica
- **Secrets por webhook**: Cada webhook tem seu próprio secret de assinatura único
- **Criptografado em repouso**: Os secrets de assinatura são criptografados no nosso banco de dados
- **Verificação de timestamp**: Previne ataques de replay
#### Headers de Assinatura
Cada requisição de webhook inclui estes headers:
| Header | Descrição |
|--------|-----------|
| `X-Signature` | Assinatura HMAC-SHA256: `sha256=<hex_digest>` |
| `X-Timestamp` | Timestamp Unix de quando a requisição foi assinada |
#### Verificação
Verifique computando:
```python
import hmac
import hashlib

def verify_webhook_signature(signing_secret: str, timestamp: str, payload: str, signature: str) -> bool:
    """Valida o header X-Signature ('signature' é o hex digest, sem o prefixo 'sha256=')."""
    expected = hmac.new(
        signing_secret.encode(),
        f"{timestamp}.{payload}".encode(),
        hashlib.sha256,
    ).hexdigest()
    # Comparação em tempo constante para evitar ataques de timing
    return hmac.compare_digest(expected, signature)
```
### Tratamento de Erros
Seu endpoint de webhook deve retornar um código de status 2xx para confirmar o recebimento:
| Sua Resposta | Nosso Comportamento |
|--------------|---------------------|
| 2xx | Webhook entregue com sucesso |
| 4xx/5xx | Registrado como falha, sem retry |
| Timeout (30s) | Registrado como falha, sem retry |
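Um esboço mínimo de endpoint (assumindo Flask; o caminho da rota é hipotético) que confirma o recebimento com 2xx dentro do limite de 30 segundos:
```python
from flask import Flask, request

app = Flask(__name__)

@app.post("/hitl-webhook")
def hitl_webhook():
    event = request.get_json()
    # Responda rápido (timeout de 30s); processe trabalho pesado de forma assíncrona
    if event.get("event") == "new_request":
        print("Nova solicitação HITL:", event["request"]["id"])
    return {"status": "ok"}, 200  # 2xx confirma o recebimento
```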
## Segurança e RBAC
### Acesso ao Dashboard
O acesso HITL é controlado no nível do deployment:
| Permissão | Capacidade |
|-----------|------------|
| `manage_human_feedback` | Configurar settings HITL, ver todas as solicitações |
| `respond_to_human_feedback` | Responder a solicitações, ver solicitações atribuídas |
### Autorização de Resposta por Email
Para respostas por email:
1. O token reply-to codifica o email autorizado
2. Email do remetente deve corresponder ao email do token
3. Token não deve estar expirado (padrão 7 dias)
4. Solicitação ainda deve estar pendente
### Trilha de Auditoria
Todas as ações HITL são registradas:
- Criação de solicitação
- Mudanças de atribuição
- Submissão de resposta (com fonte: dashboard/email/API)
- Status de retomada do flow
## Solução de Problemas
### Emails Não Enviando
1. Verifique se "Notificações por Email" está ativado na configuração
2. Verifique se as regras de roteamento correspondem ao nome do método
3. Verifique se o email do responsável é válido
4. Verifique o fallback do criador do deployment se nenhuma regra de roteamento corresponder
### Respostas de Email Não Processando
1. Verifique se o token não expirou (padrão 7 dias)
2. Verifique se o email do remetente corresponde ao email atribuído
3. Garanta que a solicitação ainda está pendente (não respondida ainda)
### Flow Não Retomando
1. Verifique o status da solicitação no dashboard
2. Verifique se a URL de callback está acessível
3. Garanta que o deployment ainda está rodando
## Melhores Práticas
<Tip>
**Comece Simples**: Comece com notificações por email para o criador do deployment, depois adicione regras de roteamento conforme seus fluxos de trabalho amadurecem.
</Tip>
1. **Use Atribuição Dinâmica**: Obtenha emails de responsáveis do seu estado do flow para roteamento flexível.
2. **Configure Resposta Automática**: Defina um fallback para revisões não críticas para evitar que flows fiquem travados.
3. **Monitore Tempos de Resposta**: Use análises para identificar gargalos e otimizar seu processo de revisão.
4. **Mantenha Mensagens de Revisão Claras**: Escreva mensagens claras e acionáveis no decorador `@human_feedback`.
5. **Teste o Fluxo de Email**: Envie solicitações de teste para verificar a entrega de email antes de ir para produção.
## Recursos Relacionados
<CardGroup cols={2}>
<Card title="Feedback Humano em Flows" icon="code" href="/pt-BR/learn/human-feedback-in-flows">
Guia de implementação para o decorador `@human_feedback`
</Card>
<Card title="Guia de Workflow HITL para Flows" icon="route" href="/pt-BR/enterprise/guides/human-in-the-loop">
Guia passo a passo para configurar workflows HITL
</Card>
<Card title="Configuração RBAC" icon="shield-check" href="/pt-BR/enterprise/features/rbac">
Configure controle de acesso baseado em função para sua organização
</Card>
<Card title="Streaming de Webhook" icon="bolt" href="/pt-BR/enterprise/features/webhook-streaming">
Configure notificações de eventos em tempo real
</Card>
</CardGroup>


@@ -5,9 +5,54 @@ icon: "user-check"
mode: "wide"
---
Human-In-The-Loop (HITL) é uma abordagem poderosa que combina inteligência artificial com expertise humana para aprimorar a tomada de decisão e melhorar os resultados das tarefas. Este guia mostra como implementar HITL dentro do CrewAI Enterprise.
## Abordagens HITL no CrewAI
CrewAI oferece duas abordagens para implementar workflows human-in-the-loop:
| Abordagem | Melhor Para | Versão |
|----------|----------|---------|
| **Baseada em Flow** (decorador `@human_feedback`) | Produção com UI Enterprise, workflows email-first, recursos completos da plataforma | **1.8.0+** |
| **Baseada em Webhook** | Integrações customizadas, sistemas externos (Slack, Teams, etc.), configurações legadas | Todas as versões |
## HITL Baseado em Flow com Plataforma Enterprise
<Note>
O decorador `@human_feedback` requer **CrewAI versão 1.8.0 ou superior**.
</Note>
Ao usar o decorador `@human_feedback` em seus Flows, o CrewAI Enterprise oferece um **sistema HITL email-first** que permite que qualquer pessoa com um endereço de email responda a solicitações de revisão:
<CardGroup cols={2}>
<Card title="Design Email-First" icon="envelope">
Respondentes recebem notificações por email e podem responder diretamente—nenhum login necessário.
</Card>
<Card title="Revisão no Dashboard" icon="desktop">
Revise e responda a solicitações HITL no dashboard Enterprise quando preferir.
</Card>
<Card title="Roteamento Flexível" icon="route">
Direcione solicitações para emails específicos com base em padrões de método ou obtenha do estado do flow.
</Card>
<Card title="Resposta Automática" icon="clock">
Configure respostas automáticas de fallback quando nenhum humano responder dentro do timeout.
</Card>
</CardGroup>
### Principais Benefícios
- **Respondentes externos**: Qualquer pessoa com email pode responder, mesmo não sendo usuário da plataforma
- **Atribuição dinâmica**: Obtenha o email do responsável do estado do flow (ex: `account_owner_email`)
- **Configuração simples**: Roteamento baseado em email é mais fácil de configurar do que gerenciamento de usuários/funções
- **Fallback do criador do deployment**: Se nenhuma regra de roteamento corresponder, o criador do deployment é notificado
<Tip>
Para detalhes de implementação do decorador `@human_feedback`, consulte o guia [Feedback Humano em Flows](/pt-BR/learn/human-feedback-in-flows).
</Tip>
## Configurando Workflows HITL Baseados em Webhook
Para integrações customizadas com sistemas externos como Slack, Microsoft Teams ou suas próprias aplicações, você pode usar a abordagem baseada em webhook:
<Steps>
<Step title="Configure Sua Tarefa">
@@ -99,3 +144,14 @@ Workflows HITL são particularmente valiosos para:
- Operações sensíveis ou de alto risco
- Tarefas criativas que exigem julgamento humano
- Revisões de conformidade e regulatórias
## Saiba Mais
<CardGroup cols={2}>
<Card title="Gerenciamento HITL para Flows" icon="users-gear" href="/pt-BR/enterprise/features/flow-hitl-management">
Explore os recursos completos da plataforma HITL para Flows, incluindo notificações por email, regras de roteamento, resposta automática e análises.
</Card>
<Card title="Feedback Humano em Flows" icon="code" href="/pt-BR/learn/human-feedback-in-flows">
Guia de implementação para o decorador `@human_feedback` em seus Flows.
</Card>
</CardGroup>


@@ -200,6 +200,25 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=seu_enterprise_token
- `clientData` (array, opcional): Dados específicos do cliente. Cada item é um objeto com `key` (string) e `value` (string).
</Accordion>
<Accordion title="google_contacts/update_contact_group">
**Descrição:** Atualizar informações de um grupo de contatos.
**Parâmetros:**
- `resourceName` (string, obrigatório): O nome do recurso do grupo de contatos (ex: 'contactGroups/myContactGroup').
- `name` (string, obrigatório): O nome do grupo de contatos.
- `clientData` (array, opcional): Dados específicos do cliente. Cada item é um objeto com `key` (string) e `value` (string).
</Accordion>
<Accordion title="google_contacts/delete_contact_group">
**Descrição:** Excluir um grupo de contatos.
**Parâmetros:**
- `resourceName` (string, obrigatório): O nome do recurso do grupo de contatos a excluir (ex: 'contactGroups/myContactGroup').
- `deleteContacts` (boolean, opcional): Se os contatos do grupo também devem ser excluídos. Padrão: false
</Accordion>
</AccordionGroup>
## Exemplos de Uso
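Um exemplo mínimo (hipotético) usando as novas ações de grupos de contatos; os nomes de recurso dos grupos são apenas ilustrativos:
```python
from crewai import Agent, Task, Crew

# Exemplo hipotético: gerenciar grupos de contatos do Google
group_manager = Agent(
    role="Gerente de Contatos",
    goal="Manter grupos de contatos organizados",
    backstory="Assistente de IA especializado em organização de contatos.",
    apps=['google_contacts/update_contact_group', 'google_contacts/delete_contact_group']
)

cleanup_task = Task(
    description=(
        "Renomeie o grupo 'contactGroups/myContactGroup' para 'Clientes 2026' e, em seguida, "
        "exclua o grupo 'contactGroups/oldGroup' mantendo os contatos (deleteContacts=false)."
    ),
    agent=group_manager,
    expected_output="Grupo renomeado e grupo antigo excluído."
)

crew = Crew(agents=[group_manager], tasks=[cleanup_task])
crew.kickoff()
```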


@@ -131,6 +131,297 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=seu_enterprise_token
- `endIndex` (integer, obrigatório): O índice final do intervalo.
</Accordion>
<Accordion title="google_docs/create_document_with_content">
**Descrição:** Criar um novo documento do Google com conteúdo em uma única ação.
**Parâmetros:**
- `title` (string, obrigatório): O título para o novo documento. Aparece no topo do documento e no Google Drive.
- `content` (string, opcional): O conteúdo de texto a inserir no documento. Use `\n` para novos parágrafos.
</Accordion>
<Accordion title="google_docs/append_text">
**Descrição:** Adicionar texto ao final de um documento do Google. Insere automaticamente no final do documento sem necessidade de especificar um índice.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento obtido da resposta de create_document ou URL.
- `text` (string, obrigatório): Texto a adicionar ao final do documento. Use `\n` para novos parágrafos.
</Accordion>
<Accordion title="google_docs/set_text_bold">
**Descrição:** Aplicar ou remover formatação de negrito em texto de um documento do Google.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `startIndex` (integer, obrigatório): Posição inicial do texto a formatar.
- `endIndex` (integer, obrigatório): Posição final do texto a formatar (exclusivo).
- `bold` (boolean, obrigatório): Defina `true` para aplicar negrito, `false` para remover negrito.
</Accordion>
<Accordion title="google_docs/set_text_italic">
**Descrição:** Aplicar ou remover formatação de itálico em texto de um documento do Google.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `startIndex` (integer, obrigatório): Posição inicial do texto a formatar.
- `endIndex` (integer, obrigatório): Posição final do texto a formatar (exclusivo).
- `italic` (boolean, obrigatório): Defina `true` para aplicar itálico, `false` para remover itálico.
</Accordion>
<Accordion title="google_docs/set_text_underline">
**Descrição:** Adicionar ou remover formatação de sublinhado em texto de um documento do Google.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `startIndex` (integer, obrigatório): Posição inicial do texto a formatar.
- `endIndex` (integer, obrigatório): Posição final do texto a formatar (exclusivo).
- `underline` (boolean, obrigatório): Defina `true` para sublinhar, `false` para remover sublinhado.
</Accordion>
<Accordion title="google_docs/set_text_strikethrough">
**Descrição:** Adicionar ou remover formatação de tachado em texto de um documento do Google.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `startIndex` (integer, obrigatório): Posição inicial do texto a formatar.
- `endIndex` (integer, obrigatório): Posição final do texto a formatar (exclusivo).
- `strikethrough` (boolean, obrigatório): Defina `true` para adicionar tachado, `false` para remover.
</Accordion>
<Accordion title="google_docs/set_font_size">
**Descrição:** Alterar o tamanho da fonte do texto em um documento do Google.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `startIndex` (integer, obrigatório): Posição inicial do texto a formatar.
- `endIndex` (integer, obrigatório): Posição final do texto a formatar (exclusivo).
- `fontSize` (number, obrigatório): Tamanho da fonte em pontos. Tamanhos comuns: 10, 11, 12, 14, 16, 18, 24, 36.
</Accordion>
<Accordion title="google_docs/set_text_color">
**Descrição:** Alterar a cor do texto usando valores RGB (escala 0-1) em um documento do Google.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `startIndex` (integer, obrigatório): Posição inicial do texto a formatar.
- `endIndex` (integer, obrigatório): Posição final do texto a formatar (exclusivo).
- `red` (number, obrigatório): Componente vermelho (0-1). Exemplo: `1` para vermelho total.
- `green` (number, obrigatório): Componente verde (0-1). Exemplo: `0.5` para metade verde.
- `blue` (number, obrigatório): Componente azul (0-1). Exemplo: `0` para sem azul.
</Accordion>
<Accordion title="google_docs/create_hyperlink">
**Description:** Turn existing text into a clickable hyperlink in a Google Docs document.
**Parameters:**
- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the text to turn into a link.
- `endIndex` (integer, required): End position of the text to turn into a link (exclusive).
- `url` (string, required): The URL the link should point to. Example: `"https://example.com"`.
</Accordion>
<Accordion title="google_docs/apply_heading_style">
**Description:** Apply a heading or paragraph style to a range of text in a Google Docs document.
**Parameters:**
- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the paragraph(s) to style.
- `endIndex` (integer, required): End position of the paragraph(s) to style.
- `style` (string, required): The style to apply. Options: `NORMAL_TEXT`, `TITLE`, `SUBTITLE`, `HEADING_1`, `HEADING_2`, `HEADING_3`, `HEADING_4`, `HEADING_5`, `HEADING_6`.
</Accordion>
<Accordion title="google_docs/set_paragraph_alignment">
**Description:** Set the text alignment for paragraphs in a Google Docs document.
**Parameters:**
- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the paragraph(s) to align.
- `endIndex` (integer, required): End position of the paragraph(s) to align.
- `alignment` (string, required): Text alignment. Options: `START` (left), `CENTER`, `END` (right), `JUSTIFIED`.
</Accordion>
<Accordion title="google_docs/set_line_spacing">
**Description:** Set the line spacing for paragraphs in a Google Docs document.
**Parameters:**
- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the paragraph(s).
- `endIndex` (integer, required): End position of the paragraph(s).
- `lineSpacing` (number, required): Line spacing as a percentage. `100` = single, `115` = 1.15x, `150` = 1.5x, `200` = double.
</Accordion>
<Accordion title="google_docs/create_paragraph_bullets">
**Description:** Convert paragraphs into a bulleted or numbered list in a Google Docs document.
**Parameters:**
- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the paragraphs to convert into a list.
- `endIndex` (integer, required): End position of the paragraphs to convert into a list.
- `bulletPreset` (string, required): Bullet/numbering style. Options: `BULLET_DISC_CIRCLE_SQUARE`, `BULLET_DIAMONDX_ARROW3D_SQUARE`, `BULLET_CHECKBOX`, `BULLET_ARROW_DIAMOND_DISC`, `BULLET_STAR_CIRCLE_SQUARE`, `NUMBERED_DECIMAL_ALPHA_ROMAN`, `NUMBERED_DECIMAL_ALPHA_ROMAN_PARENS`, `NUMBERED_DECIMAL_NESTED`, `NUMBERED_UPPERALPHA_ALPHA_ROMAN`, `NUMBERED_UPPERROMAN_UPPERALPHA_DECIMAL`.
</Accordion>
<Accordion title="google_docs/delete_paragraph_bullets">
**Description:** Remove bullets or numbering from paragraphs in a Google Docs document.
**Parameters:**
- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the list paragraphs.
- `endIndex` (integer, required): End position of the list paragraphs.
</Accordion>
<Accordion title="google_docs/insert_table_with_content">
**Description:** Insert a table with content into a Google Docs document in a single action. Provide the content as a 2D array.
**Parameters:**
- `documentId` (string, required): The ID of the document.
- `rows` (integer, required): Number of rows in the table.
- `columns` (integer, required): Number of columns in the table.
- `index` (integer, optional): Position at which to insert the table. If not provided, the table is inserted at the end of the document.
- `content` (array, required): Table content as a 2D array. Each inner array is a row. Example: `[["Year", "Revenue"], ["2023", "$43B"], ["2024", "$45B"]]`.
</Accordion>
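As a concrete illustration of how these parameters combine, here is a minimal sketch of a request payload for insert_table_with_content. Only the parameter names come from the list above; the `call_action` helper and the document ID are hypothetical placeholders for however your integration client dispatches the action.

```python
# Hypothetical payload for google_docs/insert_table_with_content.
# "call_action" and "DOC_ID" are illustrative stand-ins, not a real API.
payload = {
    "documentId": "DOC_ID",      # target document (placeholder)
    "rows": 3,                   # must match len(content)
    "columns": 2,                # must match len(content[0])
    "content": [                 # 2D array: one inner list per row
        ["Year", "Revenue"],
        ["2023", "$43B"],
        ["2024", "$45B"],
    ],
}
# call_action("google_docs/insert_table_with_content", payload)
```

Note that `rows` and `columns` describe the same dimensions as the `content` array (here 3x2).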
<Accordion title="google_docs/insert_table_row">
**Descrição:** Inserir uma nova linha acima ou abaixo de uma célula de referência em uma tabela existente.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `tableStartIndex` (integer, obrigatório): O índice inicial da tabela. Obtenha de get_document.
- `rowIndex` (integer, obrigatório): Índice da linha (baseado em 0) da célula de referência.
- `columnIndex` (integer, opcional): Índice da coluna (baseado em 0) da célula de referência. Padrão: `0`.
- `insertBelow` (boolean, opcional): Se `true`, insere abaixo da linha de referência. Se `false`, insere acima. Padrão: `true`.
</Accordion>
<Accordion title="google_docs/insert_table_column">
**Descrição:** Inserir uma nova coluna à esquerda ou à direita de uma célula de referência em uma tabela existente.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `tableStartIndex` (integer, obrigatório): O índice inicial da tabela.
- `rowIndex` (integer, opcional): Índice da linha (baseado em 0) da célula de referência. Padrão: `0`.
- `columnIndex` (integer, obrigatório): Índice da coluna (baseado em 0) da célula de referência.
- `insertRight` (boolean, opcional): Se `true`, insere à direita. Se `false`, insere à esquerda. Padrão: `true`.
</Accordion>
<Accordion title="google_docs/delete_table_row">
**Descrição:** Excluir uma linha de uma tabela existente em um documento do Google.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `tableStartIndex` (integer, obrigatório): O índice inicial da tabela.
- `rowIndex` (integer, obrigatório): Índice da linha (baseado em 0) a excluir.
- `columnIndex` (integer, opcional): Índice da coluna (baseado em 0) de qualquer célula na linha. Padrão: `0`.
</Accordion>
<Accordion title="google_docs/delete_table_column">
**Descrição:** Excluir uma coluna de uma tabela existente em um documento do Google.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `tableStartIndex` (integer, obrigatório): O índice inicial da tabela.
- `rowIndex` (integer, opcional): Índice da linha (baseado em 0) de qualquer célula na coluna. Padrão: `0`.
- `columnIndex` (integer, obrigatório): Índice da coluna (baseado em 0) a excluir.
</Accordion>
<Accordion title="google_docs/merge_table_cells">
**Descrição:** Mesclar um intervalo de células de tabela em uma única célula. O conteúdo de todas as células é preservado.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `tableStartIndex` (integer, obrigatório): O índice inicial da tabela.
- `rowIndex` (integer, obrigatório): Índice da linha inicial (baseado em 0) para a mesclagem.
- `columnIndex` (integer, obrigatório): Índice da coluna inicial (baseado em 0) para a mesclagem.
- `rowSpan` (integer, obrigatório): Número de linhas a mesclar.
- `columnSpan` (integer, obrigatório): Número de colunas a mesclar.
</Accordion>
<Accordion title="google_docs/unmerge_table_cells">
**Descrição:** Desfazer a mesclagem de células de tabela previamente mescladas, retornando-as a células individuais.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `tableStartIndex` (integer, obrigatório): O índice inicial da tabela.
- `rowIndex` (integer, obrigatório): Índice da linha (baseado em 0) da célula mesclada.
- `columnIndex` (integer, obrigatório): Índice da coluna (baseado em 0) da célula mesclada.
- `rowSpan` (integer, obrigatório): Número de linhas que a célula mesclada abrange.
- `columnSpan` (integer, obrigatório): Número de colunas que a célula mesclada abrange.
</Accordion>
<Accordion title="google_docs/insert_inline_image">
**Descrição:** Inserir uma imagem de uma URL pública em um documento do Google. A imagem deve ser publicamente acessível, ter menos de 50MB e estar no formato PNG/JPEG/GIF.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `uri` (string, obrigatório): URL pública da imagem. Deve ser acessível sem autenticação.
- `index` (integer, opcional): Posição para inserir a imagem. Se não fornecido, a imagem é inserida no final do documento. Padrão: `1`.
</Accordion>
<Accordion title="google_docs/insert_section_break">
**Descrição:** Inserir uma quebra de seção para criar seções de documento com formatação diferente.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `index` (integer, obrigatório): Posição para inserir a quebra de seção.
- `sectionType` (string, obrigatório): O tipo de quebra de seção. Opções: `CONTINUOUS` (permanece na mesma página), `NEXT_PAGE` (inicia uma nova página).
</Accordion>
<Accordion title="google_docs/create_header">
**Descrição:** Criar um cabeçalho para o documento. Retorna um headerId que pode ser usado com insert_text para adicionar conteúdo ao cabeçalho.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `type` (string, opcional): Tipo de cabeçalho. Opções: `DEFAULT`. Padrão: `DEFAULT`.
</Accordion>
<Accordion title="google_docs/create_footer">
**Descrição:** Criar um rodapé para o documento. Retorna um footerId que pode ser usado com insert_text para adicionar conteúdo ao rodapé.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `type` (string, opcional): Tipo de rodapé. Opções: `DEFAULT`. Padrão: `DEFAULT`.
</Accordion>
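Header content is a two-step flow: create_header returns a `headerId`, which is then used with the insert_text action mentioned above. A minimal sketch, assuming a hypothetical `call_action` dispatch helper; the `segmentId` targeting parameter is an assumption, not confirmed by this page:

```python
# Step 1: create the header; the response carries a headerId (documented).
header = call_action("google_docs/create_header", {
    "documentId": "DOC_ID",   # placeholder
    "type": "DEFAULT",
})

# Step 2: write into the header via insert_text. The parameter used to
# target the header segment ("segmentId") is an assumption for this sketch.
call_action("google_docs/insert_text", {
    "documentId": "DOC_ID",
    "segmentId": header["headerId"],
    "text": "Quarterly Report",
})
```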
<Accordion title="google_docs/delete_header">
**Descrição:** Excluir um cabeçalho do documento. Use get_document para encontrar o headerId.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `headerId` (string, obrigatório): O ID do cabeçalho a excluir. Obtenha da resposta de get_document.
</Accordion>
<Accordion title="google_docs/delete_footer">
**Descrição:** Excluir um rodapé do documento. Use get_document para encontrar o footerId.
**Parâmetros:**
- `documentId` (string, obrigatório): O ID do documento.
- `footerId` (string, obrigatório): O ID do rodapé a excluir. Obtenha da resposta de get_document.
</Accordion>
</AccordionGroup>
## Usage Examples


@@ -61,6 +61,22 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=seu_enterprise_token
</Accordion>
<Accordion title="google_slides/get_presentation_metadata">
**Description:** Get lightweight metadata for a presentation (title, slide count, slide IDs). Use this first before retrieving the full content.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation to retrieve.
</Accordion>
<Accordion title="google_slides/get_presentation_text">
**Description:** Extract all text content from a presentation. Returns slide IDs and text from shapes and tables only (no formatting).
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
</Accordion>
<Accordion title="google_slides/get_presentation">
**Description:** Retrieve a presentation by ID.
@@ -80,6 +96,15 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=seu_enterprise_token
</Accordion>
<Accordion title="google_slides/get_slide_text">
**Description:** Extract text content from a single slide. Returns only text from shapes and tables (no formatting or styling).
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `pageObjectId` (string, required): The ID of the slide/page to get text from.
</Accordion>
<Accordion title="google_slides/get_page">
**Description:** Retrieve a specific page by its ID.
@@ -98,6 +123,120 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=seu_enterprise_token
</Accordion>
<Accordion title="google_slides/create_slide">
**Description:** Add an additional blank slide to a presentation. New presentations already have one blank slide - check get_presentation_metadata first. For slides with title/body areas, use create_slide_with_layout.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `insertionIndex` (integer, optional): Where to insert the slide (0-based). If omitted, appends at the end.
</Accordion>
<Accordion title="google_slides/create_slide_with_layout">
**Description:** Create a slide with a predefined layout containing placeholder areas for title, body, etc. Better than create_slide for structured content. After creating, use get_page to find the placeholder IDs, then insert text into them.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `layout` (string, required): Layout type. One of: `BLANK`, `TITLE`, `TITLE_AND_BODY`, `TITLE_AND_TWO_COLUMNS`, `TITLE_ONLY`, `SECTION_HEADER`, `ONE_COLUMN_TEXT`, `MAIN_POINT`, `BIG_NUMBER`. TITLE_AND_BODY is best for title+description. TITLE for title-only slides. SECTION_HEADER for section dividers.
- `insertionIndex` (integer, optional): Where to insert (0-based). If omitted, appends at the end.
</Accordion>
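The create-then-fill workflow described above can be sketched end to end. The `call_action` helper is a hypothetical dispatch stand-in, and the `objectId` response field name is an assumption; the action names and documented parameters come from this page:

```python
# Hypothetical flow: create a layout slide, then discover its placeholders.
slide = call_action("google_slides/create_slide_with_layout", {
    "presentationId": "PRES_ID",   # placeholder
    "layout": "TITLE_AND_BODY",    # title + body placeholder areas
})

# get_page returns the page elements, including placeholder object IDs,
# which you would then fill with a text-insertion action.
page = call_action("google_slides/get_page", {
    "presentationId": "PRES_ID",
    "pageObjectId": slide["objectId"],  # assumed response field name
})
```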
<Accordion title="google_slides/create_text_box">
**Descrição:** Criar uma caixa de texto em um slide com conteúdo. Use para títulos, descrições, parágrafos - não para tabelas. Opcionalmente especifique posição (x, y) e tamanho (width, height) em unidades EMU (914400 EMU = 1 polegada).
**Parâmetros:**
- `presentationId` (string, obrigatório): O ID da apresentação.
- `slideId` (string, obrigatório): O ID do slide para adicionar a caixa de texto.
- `text` (string, obrigatório): O conteúdo de texto da caixa de texto.
- `x` (integer, opcional): Posição X em EMU (914400 = 1 polegada). Padrão: 914400 (1 polegada da esquerda).
- `y` (integer, opcional): Posição Y em EMU (914400 = 1 polegada). Padrão: 914400 (1 polegada do topo).
- `width` (integer, opcional): Largura em EMU. Padrão: 7315200 (~8 polegadas).
- `height` (integer, opcional): Altura em EMU. Padrão: 914400 (~1 polegada).
</Accordion>
<Accordion title="google_slides/delete_slide">
**Descrição:** Remover um slide de uma apresentação. Use get_presentation primeiro para encontrar o ID do slide.
**Parâmetros:**
- `presentationId` (string, obrigatório): O ID da apresentação.
- `slideId` (string, obrigatório): O ID do objeto do slide a excluir. Obtenha de get_presentation.
</Accordion>
<Accordion title="google_slides/duplicate_slide">
**Descrição:** Criar uma cópia de um slide existente. A duplicata é inserida imediatamente após o original.
**Parâmetros:**
- `presentationId` (string, obrigatório): O ID da apresentação.
- `slideId` (string, obrigatório): O ID do objeto do slide a duplicar. Obtenha de get_presentation.
</Accordion>
<Accordion title="google_slides/move_slides">
**Descrição:** Reordenar slides movendo-os para uma nova posição. Os IDs dos slides devem estar na ordem atual da apresentação (sem duplicatas).
**Parâmetros:**
- `presentationId` (string, obrigatório): O ID da apresentação.
- `slideIds` (array de strings, obrigatório): Array de IDs dos slides a mover. Obrigatoriamente na ordem atual da apresentação.
- `insertionIndex` (integer, obrigatório): Posição de destino (baseado em 0). 0 = início, número de slides = final.
</Accordion>
<Accordion title="google_slides/insert_youtube_video">
**Descrição:** Incorporar um vídeo do YouTube em um slide. O ID do vídeo é o valor após "v=" nas URLs do YouTube (ex: para youtube.com/watch?v=abc123, use "abc123").
**Parâmetros:**
- `presentationId` (string, obrigatório): O ID da apresentação.
- `slideId` (string, obrigatório): O ID do slide para adicionar o vídeo. Obtenha de get_presentation.
- `videoId` (string, obrigatório): O ID do vídeo do YouTube (o valor após v= na URL).
</Accordion>
<Accordion title="google_slides/insert_drive_video">
**Descrição:** Incorporar um vídeo do Google Drive em um slide. O ID do arquivo pode ser encontrado na URL do arquivo no Drive.
**Parâmetros:**
- `presentationId` (string, obrigatório): O ID da apresentação.
- `slideId` (string, obrigatório): O ID do slide para adicionar o vídeo. Obtenha de get_presentation.
- `fileId` (string, obrigatório): O ID do arquivo do Google Drive do vídeo.
</Accordion>
<Accordion title="google_slides/set_slide_background_image">
**Descrição:** Definir uma imagem de fundo para um slide. A URL da imagem deve ser publicamente acessível.
**Parâmetros:**
- `presentationId` (string, obrigatório): O ID da apresentação.
- `slideId` (string, obrigatório): O ID do slide para definir o fundo. Obtenha de get_presentation.
- `imageUrl` (string, obrigatório): URL publicamente acessível da imagem a usar como fundo.
</Accordion>
<Accordion title="google_slides/create_table">
**Descrição:** Criar uma tabela vazia em um slide. Para criar uma tabela com conteúdo, use create_table_with_content.
**Parâmetros:**
- `presentationId` (string, obrigatório): O ID da apresentação.
- `slideId` (string, obrigatório): O ID do slide para adicionar a tabela. Obtenha de get_presentation.
- `rows` (integer, obrigatório): Número de linhas na tabela.
- `columns` (integer, obrigatório): Número de colunas na tabela.
</Accordion>
<Accordion title="google_slides/create_table_with_content">
**Descrição:** Criar uma tabela com conteúdo em uma única ação. Forneça o conteúdo como uma matriz 2D onde cada array interno é uma linha. Exemplo: [["Cabeçalho1", "Cabeçalho2"], ["Linha1Col1", "Linha1Col2"]].
**Parâmetros:**
- `presentationId` (string, obrigatório): O ID da apresentação.
- `slideId` (string, obrigatório): O ID do slide para adicionar a tabela. Obtenha de get_presentation.
- `rows` (integer, obrigatório): Número de linhas na tabela.
- `columns` (integer, obrigatório): Número de colunas na tabela.
- `content` (array, obrigatório): Conteúdo da tabela como matriz 2D. Cada array interno é uma linha. Exemplo: [["Ano", "Receita"], ["2023", "$10M"]].
</Accordion>
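The same dimension rule as the Google Docs table action applies here: `rows` and `columns` must describe the shape of `content`. A minimal payload sketch (the `call_action` helper and IDs are hypothetical):

```python
# Hypothetical payload for google_slides/create_table_with_content.
# rows/columns (2x2) match the dimensions of the 2D content array.
payload = {
    "presentationId": "PRES_ID",   # placeholder
    "slideId": "SLIDE_ID",         # from get_presentation
    "rows": 2,
    "columns": 2,
    "content": [
        ["Header1", "Header2"],    # row 1
        ["Row1Col1", "Row1Col2"],  # row 2
    ],
}
# call_action("google_slides/create_table_with_content", payload)
```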
<Accordion title="google_slides/import_data_from_sheet">
**Description:** Import data from a Google Sheets spreadsheet into a presentation.


@@ -148,6 +148,16 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=seu_enterprise_token
</Accordion>
<Accordion title="microsoft_excel/get_table_data">
**Description:** Get data from a specific table in an Excel worksheet.
**Parameters:**
- `file_id` (string, required): The ID of the Excel file.
- `worksheet_name` (string, required): Name of the worksheet.
- `table_name` (string, required): Name of the table.
</Accordion>
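For reference, a request for this action might look like the sketch below; only the three documented parameters are real, while the `call_action` helper and IDs are illustrative placeholders:

```python
# Hypothetical call for microsoft_excel/get_table_data.
rows = call_action("microsoft_excel/get_table_data", {
    "file_id": "FILE_ID",        # placeholder Excel file ID
    "worksheet_name": "Sheet1",  # worksheet (tab) name
    "table_name": "Table1",      # table name within the worksheet
})
```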
<Accordion title="microsoft_excel/create_chart">
**Description:** Create a chart in an Excel worksheet.
@@ -180,6 +190,15 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=seu_enterprise_token
</Accordion>
<Accordion title="microsoft_excel/get_used_range_metadata">
**Description:** Get the used range metadata (dimensions only, no data) from an Excel worksheet.
**Parameters:**
- `file_id` (string, required): The ID of the Excel file.
- `worksheet_name` (string, required): Name of the worksheet.
</Accordion>
<Accordion title="microsoft_excel/list_charts">
**Description:** Get all charts in an Excel worksheet.


@@ -150,6 +150,49 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=seu_enterprise_token
- `item_id` (string, required): The ID of the file.
</Accordion>
<Accordion title="microsoft_onedrive/list_files_by_path">
**Description:** List files and folders at a specific OneDrive path.
**Parameters:**
- `folder_path` (string, required): The folder path (e.g., 'Documents/Reports').
- `top` (integer, optional): Number of items to retrieve (max 1000). Default: 50.
- `orderby` (string, optional): Sort by field (e.g., "name asc", "lastModifiedDateTime desc"). Default: "name asc".
</Accordion>
<Accordion title="microsoft_onedrive/get_recent_files">
**Description:** Get recently accessed files in OneDrive.
**Parameters:**
- `top` (integer, optional): Number of items to retrieve (max 200). Default: 25.
</Accordion>
<Accordion title="microsoft_onedrive/get_shared_with_me">
**Description:** Get files and folders shared with the user.
**Parameters:**
- `top` (integer, optional): Number of items to retrieve (max 200). Default: 50.
- `orderby` (string, optional): Sort by field. Default: "name asc".
</Accordion>
<Accordion title="microsoft_onedrive/get_file_by_path">
**Description:** Get information about a specific file or folder by path.
**Parameters:**
- `file_path` (string, required): The path of the file or folder (e.g., 'Documents/report.docx').
</Accordion>
<Accordion title="microsoft_onedrive/download_file_by_path">
**Description:** Download a file from OneDrive by its path.
**Parameters:**
- `file_path` (string, required): The path of the file (e.g., 'Documents/report.docx').
</Accordion>
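The path-based actions above spare you the ID-lookup round trip when you already know where a file lives. A minimal sketch, assuming a hypothetical `call_action` dispatch helper and illustrative paths:

```python
# Hypothetical usage of the path-based OneDrive actions documented above.
listing = call_action("microsoft_onedrive/list_files_by_path", {
    "folder_path": "Documents/Reports",       # path, not an item ID
    "top": 100,
    "orderby": "lastModifiedDateTime desc",   # newest first
})

content = call_action("microsoft_onedrive/download_file_by_path", {
    "file_path": "Documents/Reports/q1.csv",
})
```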
</AccordionGroup>
## Usage Examples


@@ -132,6 +132,74 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=seu_enterprise_token
- `companyName` (string, optional): The contact's company name.
</Accordion>
<Accordion title="microsoft_outlook/get_message">
**Description:** Get a specific email message by ID.
**Parameters:**
- `message_id` (string, required): The unique identifier of the message. Obtain it from the get_messages action.
- `select` (string, optional): Comma-separated list of properties to return. Example: "id,subject,body,from,receivedDateTime". Default: "id,subject,body,from,toRecipients,receivedDateTime".
</Accordion>
<Accordion title="microsoft_outlook/reply_to_email">
**Description:** Reply to an email message.
**Parameters:**
- `message_id` (string, required): The unique identifier of the message to reply to. Obtain it from the get_messages action.
- `comment` (string, required): The content of the reply message. Can be plain text or HTML. The original message will be quoted below this content.
</Accordion>
<Accordion title="microsoft_outlook/forward_email">
**Description:** Forward an email message.
**Parameters:**
- `message_id` (string, required): The unique identifier of the message to forward. Obtain it from the get_messages action.
- `to_recipients` (array, required): Array of recipient email addresses. Example: ["john@example.com", "jane@example.com"].
- `comment` (string, optional): Optional message to include above the forwarded content. Can be plain text or HTML.
</Accordion>
<Accordion title="microsoft_outlook/mark_message_read">
**Description:** Mark a message as read or unread.
**Parameters:**
- `message_id` (string, required): The unique identifier of the message. Obtain it from the get_messages action.
- `is_read` (boolean, required): Set to true to mark as read, false to mark as unread.
</Accordion>
<Accordion title="microsoft_outlook/delete_message">
**Description:** Delete an email message.
**Parameters:**
- `message_id` (string, required): The unique identifier of the message to delete. Obtain it from the get_messages action.
</Accordion>
<Accordion title="microsoft_outlook/update_event">
**Description:** Update an existing calendar event.
**Parameters:**
- `event_id` (string, required): The unique identifier of the event. Obtain it from the get_calendar_events action.
- `subject` (string, optional): New subject/title for the event.
- `start_time` (string, optional): New start time in ISO 8601 format (e.g., "2024-01-20T10:00:00"). REQUIRED: You must also provide start_timezone when using this field.
- `start_timezone` (string, optional): Timezone of the start time. REQUIRED when updating start_time. Examples: "Pacific Standard Time", "Eastern Standard Time", "UTC".
- `end_time` (string, optional): New end time in ISO 8601 format. REQUIRED: You must also provide end_timezone when using this field.
- `end_timezone` (string, optional): Timezone of the end time. REQUIRED when updating end_time. Examples: "Pacific Standard Time", "Eastern Standard Time", "UTC".
- `location` (string, optional): New location for the event.
- `body` (string, optional): New body/description for the event. Supports HTML formatting.
</Accordion>
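Note the pairing rule above: updating `start_time` or `end_time` requires the matching timezone field. A hypothetical request that honors it (the `call_action` helper and the event ID are placeholders):

```python
# Hypothetical microsoft_outlook/update_event call. Per the docs above,
# start_timezone is required whenever start_time is provided, and
# likewise for end_time/end_timezone.
call_action("microsoft_outlook/update_event", {
    "event_id": "EVENT_ID",                    # from get_calendar_events
    "start_time": "2024-01-20T10:00:00",       # ISO 8601, no offset
    "start_timezone": "Pacific Standard Time",
    "end_time": "2024-01-20T11:00:00",
    "end_timezone": "Pacific Standard Time",
})
```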
<Accordion title="microsoft_outlook/delete_event">
**Descrição:** Excluir um evento de calendário.
**Parâmetros:**
- `event_id` (string, obrigatório): O identificador único do evento a excluir. Obter pela ação get_calendar_events.
</Accordion>
</AccordionGroup>
## Usage Examples


@@ -77,6 +77,17 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=seu_enterprise_token
</Accordion>
<Accordion title="microsoft_sharepoint/get_drives">
**Description:** List all document libraries (drives) in a SharePoint site. Use this to discover available libraries before using file operations.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `top` (integer, optional): Maximum number of drives to return per page (1-999). Default: 100
- `skip_token` (string, optional): Pagination token from a previous response to fetch the next page of results.
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'id,name,webUrl,driveType').
</Accordion>
<Accordion title="microsoft_sharepoint/get_site_lists">
**Description:** Get all lists in a SharePoint site.
@@ -145,20 +156,317 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=seu_enterprise_token
</Accordion>
<Accordion title="microsoft_sharepoint/get_drive_items">
**Description:** Get files and folders from a SharePoint document library.
<Accordion title="microsoft_sharepoint/list_files">
**Description:** Retrieve files and folders from a SharePoint document library. By default this lists the root folder, but you can navigate into subfolders by providing a folder_id.
**Parameters:**
- `site_id` (string, required): The ID of the SharePoint site.
- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `folder_id` (string, optional): The ID of the folder whose contents to list. Use 'root' for the root folder, or provide a folder ID from a previous list_files call. Default: 'root'
- `top` (integer, optional): Maximum number of items to return per page (1-1000). Default: 50
- `skip_token` (string, optional): Pagination token from a previous response to fetch the next page of results.
- `orderby` (string, optional): Sort order for the results (e.g., 'name asc', 'size desc', 'lastModifiedDateTime desc'). Default: 'name asc'
- `filter` (string, optional): OData filter to narrow the results (e.g., 'file ne null' for files only, 'folder ne null' for folders only).
- `select` (string, optional): Comma-separated list of fields to return (e.g., 'id,name,size,folder,file,webUrl,lastModifiedDateTime').
</Accordion>
<Accordion title="microsoft_sharepoint/delete_drive_item">
**Description:** Delete a file or folder from the SharePoint document library.
<Accordion title="microsoft_sharepoint/delete_file">
**Description:** Delete a file or folder from a SharePoint document library. For folders, all contents are deleted recursively. Items are moved to the site's recycle bin.
**Parameters:**
- `site_id` (string, required): The ID of the SharePoint site.
- `item_id` (string, required): The ID of the file or folder to delete.
- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the file or folder to delete. Obtain it from list_files.
</Accordion>
<Accordion title="microsoft_sharepoint/list_files_by_path">
**Description:** List files and folders in a SharePoint document library folder by path. More efficient than multiple list_files calls for deep navigation.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `folder_path` (string, required): The full path to the folder without leading/trailing slashes (e.g., 'Documents', 'Reports/2024/Q1').
- `top` (integer, optional): Maximum number of items to return per page (1-1000). Default: 50
- `skip_token` (string, optional): Pagination token from a previous response to fetch the next page of results.
- `orderby` (string, optional): Sort order for the results (e.g., 'name asc', 'size desc'). Default: 'name asc'
- `select` (string, optional): Comma-separated list of fields to return (e.g., 'id,name,size,folder,file,webUrl,lastModifiedDateTime').
</Accordion>
<Accordion title="microsoft_sharepoint/download_file">
**Description:** Download raw file content from a SharePoint document library. Use only for plain-text files (.txt, .csv, .json). For Excel files, use the Excel-specific actions. For Word files, use get_word_document_content.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the file to download. Obtain it from list_files or list_files_by_path.
</Accordion>
<Accordion title="microsoft_sharepoint/get_file_info">
**Description:** Retrieve detailed metadata for a specific file or folder in a SharePoint document library, including name, size, created/modified dates, and author information.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the file or folder. Obtain it from list_files or list_files_by_path.
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'id,name,size,createdDateTime,lastModifiedDateTime,webUrl,createdBy,lastModifiedBy').
</Accordion>
<Accordion title="microsoft_sharepoint/create_folder">
**Description:** Create a new folder in a SharePoint document library. By default the folder is created at the root; use parent_id to create subfolders.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `folder_name` (string, required): Name for the new folder. Cannot contain: \ / : * ? " < > |
- `parent_id` (string, optional): The ID of the parent folder. Use 'root' for the document library root, or provide a folder ID from list_files. Default: 'root'
</Accordion>
<Accordion title="microsoft_sharepoint/search_files">
**Description:** Search for files and folders in a SharePoint document library by keywords. Searches file names, folder names, and file content for Office documents. Do not use wildcards or special characters.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `query` (string, required): Search keywords (e.g., 'report', 'budget 2024'). Wildcards such as *.txt are not supported.
- `top` (integer, optional): Maximum number of results to return per page (1-1000). Default: 50
- `skip_token` (string, optional): Pagination token from a previous response to fetch the next page of results.
- `select` (string, optional): Comma-separated list of fields to return (e.g., 'id,name,size,folder,file,webUrl,lastModifiedDateTime').
</Accordion>
<Accordion title="microsoft_sharepoint/copy_file">
**Description:** Copy a file or folder to a new location within SharePoint. The original item remains unchanged. The copy operation is asynchronous for large files.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the file or folder to copy. Obtain it from list_files or search_files.
- `destination_folder_id` (string, required): The ID of the destination folder. Use 'root' for the root folder, or a folder ID from list_files.
- `new_name` (string, optional): New name for the copy. If not provided, the original name is used.
</Accordion>
<Accordion title="microsoft_sharepoint/move_file">
**Description:** Move a file or folder to a new location within SharePoint. The item is removed from its original location. For folders, all contents are moved as well.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the file or folder to move. Obtain it from list_files or search_files.
- `destination_folder_id` (string, required): The ID of the destination folder. Use 'root' for the root folder, or a folder ID from list_files.
- `new_name` (string, optional): New name for the moved item. If not provided, the original name is kept.
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_worksheets">
**Description:** List all worksheets (tabs) in an Excel workbook stored in a SharePoint document library. Use the returned worksheet name with other Excel actions.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain it from list_files or search_files.
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'id,name,position,visibility').
- `filter` (string, optional): OData filter expression (e.g., "visibility eq 'Visible'" to exclude hidden worksheets).
- `top` (integer, optional): Maximum number of worksheets to return. Minimum: 1, Maximum: 999
- `orderby` (string, optional): Sort order (e.g., 'position asc' to return worksheets in tab order).
</Accordion>
<Accordion title="microsoft_sharepoint/create_excel_worksheet">
**Description:** Create a new worksheet (tab) in an Excel workbook stored in a SharePoint document library. The new worksheet is added at the end of the tab list.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain it from list_files or search_files.
- `name` (string, required): Name for the new worksheet. Maximum of 31 characters. Cannot contain: \ / * ? : [ ]. Must be unique within the workbook.
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_range_data">
**Description:** Retrieve cell values from a specific range in an Excel worksheet stored in SharePoint. To read all data without knowing the dimensions, use get_excel_used_range instead.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain it from list_files or search_files.
- `worksheet_name` (string, required): Name of the worksheet (tab) to read. Obtain it from get_excel_worksheets. Case-sensitive.
- `range` (string, required): Cell range in A1 notation (e.g., 'A1:C10', 'A:C', '1:5', 'A1').
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'address,values,formulas,numberFormat,text').
</Accordion>
<Accordion title="microsoft_sharepoint/update_excel_range_data">
**Description:** Write values to a specific range in an Excel worksheet stored in SharePoint. Overwrites existing cell contents. The dimensions of the values array must exactly match the dimensions of the range.
**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain it from list_files or search_files.
- `worksheet_name` (string, required): Name of the worksheet (tab) to update. Obtain it from get_excel_worksheets. Case-sensitive.
- `range` (string, required): Cell range in A1 notation where the values will be written (e.g., 'A1:C3' for a 3x3 block).
- `values` (array, required): 2D array of values (rows containing cells). Example for A1:B2: [["Header1", "Header2"], ["Value1", "Value2"]]. Use null to clear a cell.
</Accordion>
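The dimension rule above is strict: the `values` array must match the `range` shape exactly. A hypothetical write of a 2x2 block (the `call_action` helper and IDs are placeholders):

```python
# Hypothetical microsoft_sharepoint/update_excel_range_data call.
# The 2D values array (2 rows x 2 columns) matches the A1:B2 range.
call_action("microsoft_sharepoint/update_excel_range_data", {
    "site_id": "SITE_ID",        # from get_sites
    "drive_id": "DRIVE_ID",      # from get_drives
    "item_id": "FILE_ID",        # from list_files or search_files
    "worksheet_name": "Sheet1",  # case-sensitive
    "range": "A1:B2",
    "values": [
        ["Header1", "Header2"],
        ["Value1", None],        # None/null clears the cell
    ],
})
```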
<Accordion title="microsoft_sharepoint/get_excel_used_range_metadata">
**Descrição:** Retornar apenas os metadados (endereço e dimensões) do intervalo utilizado em uma planilha, sem os valores reais das células. Ideal para arquivos grandes para entender o tamanho da planilha antes de ler dados em blocos.
**Parâmetros:**
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
- `worksheet_name` (string, obrigatório): Nome da planilha (aba) para leitura. Obtenha de get_excel_worksheets. Sensível a maiúsculas e minúsculas.
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_used_range">
**Descrição:** Recuperar todas as células contendo dados em uma planilha armazenada no SharePoint. Não use para arquivos maiores que 2MB. Para arquivos grandes, use get_excel_used_range_metadata primeiro, depois get_excel_range_data para ler em blocos menores.
**Parâmetros:**
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
- `worksheet_name` (string, obrigatório): Nome da planilha (aba) para leitura. Obtenha de get_excel_worksheets. Sensível a maiúsculas e minúsculas.
- `select` (string, opcional): Lista de propriedades separadas por vírgula para retornar (ex: 'address,values,formulas,numberFormat,text,rowCount,columnCount').
</Accordion>
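For large workbooks the two actions above compose into a metadata-then-chunks pattern: probe the used-range dimensions first, then page through with get_excel_range_data. A sketch under the same hypothetical `call_action` helper; the `rowCount` response field name is an assumption:

```python
# Hypothetical chunked read: probe dimensions, then fetch row blocks.
meta = call_action("microsoft_sharepoint/get_excel_used_range_metadata", {
    "site_id": "SITE_ID",
    "drive_id": "DRIVE_ID",
    "item_id": "FILE_ID",
    "worksheet_name": "Sheet1",
})

chunk_rows = 500
for start in range(1, meta["rowCount"] + 1, chunk_rows):  # assumed field
    end = min(start + chunk_rows - 1, meta["rowCount"])
    block = call_action("microsoft_sharepoint/get_excel_range_data", {
        "site_id": "SITE_ID",
        "drive_id": "DRIVE_ID",
        "item_id": "FILE_ID",
        "worksheet_name": "Sheet1",
        "range": f"A{start}:Z{end}",  # column span chosen for illustration
    })
```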
<Accordion title="microsoft_sharepoint/get_excel_cell">
**Descrição:** Recuperar o valor de uma única célula por índice de linha e coluna de um arquivo Excel no SharePoint. Os índices são baseados em 0 (linha 0 = linha 1 do Excel, coluna 0 = coluna A).
**Parâmetros:**
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
- `worksheet_name` (string, obrigatório): Nome da planilha (aba). Obtenha de get_excel_worksheets. Sensível a maiúsculas e minúsculas.
- `row` (integer, obrigatório): Índice de linha baseado em 0 (linha 0 = linha 1 do Excel). Intervalo válido: 0-1048575
- `column` (integer, obrigatório): Índice de coluna baseado em 0 (coluna 0 = A, coluna 1 = B). Intervalo válido: 0-16383
- `select` (string, opcional): Lista de propriedades separadas por vírgula para retornar (ex: 'address,values,formulas,numberFormat,text').
</Accordion>
<Accordion title="microsoft_sharepoint/add_excel_table">
**Descrição:** Converter um intervalo de células em uma tabela Excel formatada com recursos de filtragem, classificação e dados estruturados. Tabelas habilitam add_excel_table_row para adicionar dados.
**Parâmetros:**
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
- `worksheet_name` (string, obrigatório): Nome da planilha contendo o intervalo de dados. Obtenha de get_excel_worksheets.
- `range` (string, obrigatório): Intervalo de células para converter em tabela, incluindo cabeçalhos e dados (ex: 'A1:D10' onde A1:D1 contém cabeçalhos de coluna).
- `has_headers` (boolean, opcional): Defina como true se a primeira linha contém cabeçalhos de coluna. Padrão: true
</Accordion>
<Accordion title="microsoft_sharepoint/get_excel_tables">
**Descrição:** Listar todas as tabelas em uma planilha Excel específica armazenada no SharePoint. Retorna propriedades da tabela incluindo id, name, showHeaders e showTotals.
**Parâmetros:**
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
- `worksheet_name` (string, obrigatório): Nome da planilha para obter tabelas. Obtenha de get_excel_worksheets.
</Accordion>
<Accordion title="microsoft_sharepoint/add_excel_table_row">
**Descrição:** Adicionar uma nova linha ao final de uma tabela Excel em um arquivo do SharePoint. O array de valores deve ter o mesmo número de elementos que o número de colunas da tabela.
**Parâmetros:**
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
- `worksheet_name` (string, obrigatório): Nome da planilha contendo a tabela. Obtenha de get_excel_worksheets.
- `table_name` (string, obrigatório): Nome da tabela para adicionar a linha (ex: 'Table1'). Obtenha de get_excel_tables. Sensível a maiúsculas e minúsculas.
- `values` (array, obrigatório): Array de valores de células para a nova linha, um por coluna na ordem da tabela (ex: ["João Silva", "joao@exemplo.com", 25]).
</Accordion>
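Appending table rows preserves column order: `values` supplies one element per table column, left to right. A hypothetical call (placeholders throughout, as before):

```python
# Hypothetical microsoft_sharepoint/add_excel_table_row call; the values
# array must have exactly as many elements as the table has columns.
call_action("microsoft_sharepoint/add_excel_table_row", {
    "site_id": "SITE_ID",
    "drive_id": "DRIVE_ID",
    "item_id": "FILE_ID",
    "worksheet_name": "Sheet1",
    "table_name": "Table1",      # case-sensitive, from get_excel_tables
    "values": ["John Smith", "john@example.com", 25],
})
```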
<Accordion title="microsoft_sharepoint/get_excel_table_data">
**Descrição:** Obter todas as linhas de uma tabela Excel em um arquivo do SharePoint como um intervalo de dados. Mais fácil do que get_excel_range_data ao trabalhar com tabelas estruturadas, pois não é necessário saber o intervalo exato.
**Parâmetros:**
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
- `worksheet_name` (string, obrigatório): Nome da planilha contendo a tabela. Obtenha de get_excel_worksheets.
- `table_name` (string, obrigatório): Nome da tabela para obter dados (ex: 'Table1'). Obtenha de get_excel_tables. Sensível a maiúsculas e minúsculas.
- `select` (string, opcional): Lista de propriedades separadas por vírgula para retornar (ex: 'address,values,formulas,numberFormat,text').
</Accordion>
<Accordion title="microsoft_sharepoint/create_excel_chart">
**Descrição:** Criar uma visualização de gráfico em uma planilha Excel armazenada no SharePoint a partir de um intervalo de dados. O gráfico é incorporado na planilha.
**Parâmetros:**
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
- `worksheet_name` (string, obrigatório): Nome da planilha onde o gráfico será criado. Obtenha de get_excel_worksheets.
- `chart_type` (string, obrigatório): Tipo de gráfico (ex: 'ColumnClustered', 'ColumnStacked', 'Line', 'LineMarkers', 'Pie', 'Bar', 'BarClustered', 'Area', 'Scatter', 'Doughnut').
- `source_data` (string, obrigatório): Intervalo de dados para o gráfico em notação A1, incluindo cabeçalhos (ex: 'A1:B10').
- `series_by` (string, opcional): Como as séries de dados são organizadas: 'Auto', 'Columns' ou 'Rows'. Padrão: 'Auto'
</Accordion>
<Accordion title="microsoft_sharepoint/list_excel_charts">
**Descrição:** Listar todos os gráficos incorporados em uma planilha Excel armazenada no SharePoint. Retorna propriedades do gráfico incluindo id, name, chartType, height, width e position.
**Parâmetros:**
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
- `worksheet_name` (string, obrigatório): Nome da planilha para listar gráficos. Obtenha de get_excel_worksheets.
</Accordion>
<Accordion title="microsoft_sharepoint/delete_excel_worksheet">
**Descrição:** Remover permanentemente uma planilha (aba) e todo seu conteúdo de uma pasta de trabalho Excel armazenada no SharePoint. Não pode ser desfeito. Uma pasta de trabalho deve ter pelo menos uma planilha.
**Parâmetros:**
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
- `worksheet_name` (string, obrigatório): Nome da planilha a excluir. Sensível a maiúsculas e minúsculas. Todos os dados, tabelas e gráficos nesta planilha serão permanentemente removidos.
</Accordion>
<Accordion title="microsoft_sharepoint/delete_excel_table">
**Descrição:** Remover uma tabela de uma planilha Excel no SharePoint. Isto exclui a estrutura da tabela (filtragem, formatação, recursos de tabela) mas preserva os dados subjacentes das células.
**Parâmetros:**
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
- `worksheet_name` (string, obrigatório): Nome da planilha contendo a tabela. Obtenha de get_excel_worksheets.
- `table_name` (string, obrigatório): Nome da tabela a excluir (ex: 'Table1'). Obtenha de get_excel_tables. Os dados nas células permanecerão após a exclusão da tabela.
</Accordion>
<Accordion title="microsoft_sharepoint/list_excel_names">
**Descrição:** Recuperar todos os intervalos nomeados definidos em uma pasta de trabalho Excel armazenada no SharePoint. Intervalos nomeados são rótulos definidos pelo usuário para intervalos de células (ex: 'DadosVendas' para A1:D100).
**Parâmetros:**
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
</Accordion>
<Accordion title="microsoft_sharepoint/get_word_document_content">
**Descrição:** Baixar e extrair conteúdo de texto de um documento Word (.docx) armazenado em uma biblioteca de documentos do SharePoint. Esta é a maneira recomendada de ler documentos Word do SharePoint.
**Parâmetros:**
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
- `item_id` (string, obrigatório): O identificador único do documento Word (.docx) no SharePoint. Obtenha de list_files ou search_files.
</Accordion>
</AccordionGroup>


@@ -107,6 +107,86 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=seu_enterprise_token
- `join_web_url` (string, required): The web join URL of the meeting to search for.
</Accordion>
<Accordion title="microsoft_teams/search_online_meetings_by_meeting_id">
**Description:** Search for online meetings by external meeting ID.
**Parameters:**
- `join_meeting_id` (string, required): The meeting ID (numeric code) that participants use to join. This is the joinMeetingId shown in meeting invites, not the Graph API meeting id.
</Accordion>
<Accordion title="microsoft_teams/get_meeting">
**Description:** Get details of a specific online meeting.
**Parameters:**
- `meeting_id` (string, required): The Graph API meeting ID (long alphanumeric string). Obtain it from the create_meeting or search_online_meetings actions. Different from the numeric joinMeetingId.
</Accordion>
<Accordion title="microsoft_teams/get_team_members">
**Description:** Get the members of a specific team.
**Parameters:**
- `team_id` (string, required): The unique identifier of the team. Obtain it from the get_teams action.
- `top` (integer, optional): Maximum number of members to retrieve per page (1-999). Default: 100.
- `skip_token` (string, optional): Pagination token from a previous response. When the response includes @odata.nextLink, extract the $skiptoken parameter value and pass it here to get the next page of results.
</Accordion>
<Accordion title="microsoft_teams/create_channel">
**Description:** Create a new channel in a team.
**Parameters:**
- `team_id` (string, required): The unique identifier of the team. Obtain it from the get_teams action.
- `display_name` (string, required): Channel name shown in Teams. Must be unique within the team. Max 50 characters.
- `description` (string, optional): Optional description explaining the channel's purpose. Visible in the channel details. Max 1024 characters.
- `membership_type` (string, optional): Channel visibility. Options: standard, private. "standard" = visible to all team members, "private" = visible only to specifically added members. Default: standard.
</Accordion>
<Accordion title="microsoft_teams/get_message_replies">
**Description:** Get the replies to a specific message in a channel.
**Parameters:**
- `team_id` (string, required): The unique identifier of the team. Obtain it from the get_teams action.
- `channel_id` (string, required): The unique identifier of the channel. Obtain it from the get_channels action.
- `message_id` (string, required): The unique identifier of the parent message. Obtain it from the get_messages action.
- `top` (integer, optional): Maximum number of replies to retrieve per page (1-50). Default: 50.
- `skip_token` (string, optional): Pagination token from a previous response. When the response includes @odata.nextLink, extract the $skiptoken parameter value and pass it here to get the next page of results.
</Accordion>
<Accordion title="microsoft_teams/reply_to_message">
**Description:** Reply to a message in a Teams channel.
**Parameters:**
- `team_id` (string, required): The unique identifier of the team. Obtain it from the get_teams action.
- `channel_id` (string, required): The unique identifier of the channel. Obtain it from the get_channels action.
- `message_id` (string, required): The unique identifier of the message to reply to. Obtain it from the get_messages action.
- `message` (string, required): The reply content. For HTML, include formatting tags. For text, use plain text only.
- `content_type` (string, optional): Content format. Options: html, text. "text" for plain text, "html" for rich text with formatting. Default: text.
</Accordion>
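A hypothetical reply using the HTML content type documented above (the `call_action` helper and IDs are placeholders):

```python
# Hypothetical microsoft_teams/reply_to_message call with a rich-text body.
call_action("microsoft_teams/reply_to_message", {
    "team_id": "TEAM_ID",         # from get_teams
    "channel_id": "CHANNEL_ID",   # from get_channels
    "message_id": "MESSAGE_ID",   # from get_messages
    "message": "<b>Done:</b> deploy finished.",
    "content_type": "html",       # defaults to "text" when omitted
})
```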
<Accordion title="microsoft_teams/update_meeting">
**Descrição:** Atualizar uma reunião online existente.
**Parâmetros:**
- `meeting_id` (string, obrigatório): O identificador único da reunião. Obter pelas ações create_meeting ou search_online_meetings.
- `subject` (string, opcional): Novo título da reunião.
- `startDateTime` (string, opcional): Nova hora de início no formato ISO 8601 com fuso horário. Exemplo: "2024-01-20T10:00:00-08:00".
- `endDateTime` (string, opcional): Nova hora de término no formato ISO 8601 com fuso horário.
</Accordion>
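Unlike the Outlook update_event action, which splits the timezone into a separate field, here the UTC offset rides inside the ISO 8601 timestamp itself. A hypothetical call:

```python
# Hypothetical microsoft_teams/update_meeting call; note the embedded
# UTC offset ("-08:00") rather than a separate timezone parameter.
call_action("microsoft_teams/update_meeting", {
    "meeting_id": "MEETING_ID",                   # Graph API meeting ID
    "subject": "Sprint review (rescheduled)",
    "startDateTime": "2024-01-20T10:00:00-08:00",
    "endDateTime": "2024-01-20T10:30:00-08:00",
})
```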
<Accordion title="microsoft_teams/delete_meeting">
**Descrição:** Excluir uma reunião online.
**Parâmetros:**
- `meeting_id` (string, obrigatório): O identificador único da reunião a excluir. Obter pelas ações create_meeting ou search_online_meetings.
</Accordion>
</AccordionGroup>
## Usage Examples


@@ -97,6 +97,26 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=seu_enterprise_token
- `file_id` (string, required): The ID of the document to delete.
</Accordion>
<Accordion title="microsoft_word/copy_document">
**Description:** Copy a document to a new location in OneDrive.
**Parameters:**
- `file_id` (string, required): The ID of the document to copy.
- `name` (string, optional): New name for the copied document.
- `parent_id` (string, optional): The ID of the destination folder (default: root).
</Accordion>
<Accordion title="microsoft_word/move_document">
**Description:** Move a document to a new location in OneDrive.
**Parameters:**
- `file_id` (string, required): The ID of the document to move.
- `parent_id` (string, required): The ID of the destination folder.
- `name` (string, optional): New name for the moved document.
</Accordion>
</AccordionGroup>
## Usage Examples


@@ -112,3 +112,9 @@ HITL workflows are particularly valuable for:
- Sensitive or high-risk operations
- Creative tasks that require human judgment
- Compliance and regulatory reviews
## Enterprise Features
<Card title="HITL Management Platform for Flows" icon="users-gear" href="/pt-BR/enterprise/features/flow-hitl-management">
CrewAI Enterprise offers a comprehensive HITL management system for Flows with in-platform review, responder assignment, permissions, escalation policies, SLA management, dynamic routing, and full analytics. [Learn more →](/pt-BR/enterprise/features/flow-hitl-management)
</Card>


@@ -152,4 +152,4 @@ __all__ = [
"wrap_file_source",
]
__version__ = "1.8.1"
__version__ = "1.9.3"


@@ -12,7 +12,7 @@ dependencies = [
"pytube~=15.0.0",
"requests~=2.32.5",
"docker~=7.1.0",
"crewai==1.8.1",
"crewai==1.9.3",
"lancedb~=0.5.4",
"tiktoken~=0.8.0",
"beautifulsoup4~=4.13.4",


@@ -291,4 +291,4 @@ __all__ = [
"ZapierActionTools",
]
__version__ = "1.8.1"
__version__ = "1.9.3"


@@ -2,29 +2,95 @@
from __future__ import annotations
from collections.abc import Callable
import logging
from typing import TYPE_CHECKING, Any
from crewai.tools import BaseTool
from crewai.utilities.pydantic_schema_utils import create_model_from_schema
from crewai.utilities.string_utils import sanitize_tool_name
from pydantic import BaseModel
from crewai_tools.adapters.tool_collection import ToolCollection
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from mcp import StdioServerParameters
from mcpadapt.core import MCPAdapt
from mcpadapt.crewai_adapter import CrewAIAdapter
from mcp.types import CallToolResult, TextContent, Tool
from mcpadapt.core import MCPAdapt, ToolAdapter
logger = logging.getLogger(__name__)
try:
from mcp import StdioServerParameters
from mcpadapt.core import MCPAdapt
from mcpadapt.crewai_adapter import CrewAIAdapter
from mcp.types import CallToolResult, TextContent, Tool
from mcpadapt.core import MCPAdapt, ToolAdapter
class CrewAIToolAdapter(ToolAdapter):
"""Adapter that creates CrewAI tools with properly normalized JSON schemas.
This adapter bypasses mcpadapt's model creation which adds invalid null values
to field schemas, instead using CrewAI's own schema utilities.
"""
def adapt(
self,
func: Callable[[dict[str, Any] | None], CallToolResult],
mcp_tool: Tool,
) -> BaseTool:
"""Adapt a MCP tool to a CrewAI tool.
Args:
func: The function to call when the tool is invoked.
mcp_tool: The MCP tool definition to adapt.
Returns:
A CrewAI BaseTool instance.
"""
tool_name = sanitize_tool_name(mcp_tool.name)
tool_description = mcp_tool.description or ""
args_model = create_model_from_schema(mcp_tool.inputSchema)
class CrewAIMCPTool(BaseTool):
name: str = tool_name
description: str = tool_description
args_schema: type[BaseModel] = args_model
def _run(self, **kwargs: Any) -> Any:
result = func(kwargs)
if len(result.content) == 1:
first_content = result.content[0]
if isinstance(first_content, TextContent):
return first_content.text
return str(first_content)
return str(
[
content.text
for content in result.content
if isinstance(content, TextContent)
]
)
def _generate_description(self) -> None:
schema = self.args_schema.model_json_schema()
schema.pop("$defs", None)
self.description = (
f"Tool Name: {self.name}\n"
f"Tool Arguments: {schema}\n"
f"Tool Description: {self.description}"
)
return CrewAIMCPTool()
async def async_adapt(self, afunc: Any, mcp_tool: Tool) -> Any:
"""Async adaptation is not supported by CrewAI."""
raise NotImplementedError("async is not supported by the CrewAI framework.")
MCP_AVAILABLE = True
except ImportError:
except ImportError as e:
logger.debug(f"MCP packages not available: {e}")
MCP_AVAILABLE = False
@@ -34,9 +100,6 @@ class MCPServerAdapter:
Note: tools can only be accessed after the server has been started with the
`start()` method.
Attributes:
tools: The CrewAI tools available from the MCP server.
Usage:
# context manager + stdio
with MCPServerAdapter(...) as tools:
@@ -89,7 +152,9 @@ class MCPServerAdapter:
super().__init__()
self._adapter = None
self._tools = None
self._tool_names = list(tool_names) if tool_names else None
self._tool_names = (
[sanitize_tool_name(name) for name in tool_names] if tool_names else None
)
if not MCP_AVAILABLE:
import click
@@ -100,7 +165,7 @@ class MCPServerAdapter:
import subprocess
try:
subprocess.run(["uv", "add", "mcp crewai-tools[mcp]"], check=True) # noqa: S607
subprocess.run(["uv", "add", "mcp crewai-tools'[mcp]'"], check=True) # noqa: S607
except subprocess.CalledProcessError as e:
raise ImportError("Failed to install mcp package") from e
@@ -112,7 +177,7 @@ class MCPServerAdapter:
try:
self._serverparams = serverparams
self._adapter = MCPAdapt(
self._serverparams, CrewAIAdapter(), connect_timeout
self._serverparams, CrewAIToolAdapter(), connect_timeout
)
self.start()
@@ -124,13 +189,13 @@ class MCPServerAdapter:
logger.error(f"Error during stop cleanup: {stop_e}")
raise RuntimeError(f"Failed to initialize MCP Adapter: {e}") from e
def start(self):
def start(self) -> None:
"""Start the MCP server and initialize the tools."""
self._tools = self._adapter.__enter__()
self._tools = self._adapter.__enter__() # type: ignore[union-attr]
def stop(self):
def stop(self) -> None:
"""Stop the MCP server."""
self._adapter.__exit__(None, None, None) # type: ignore[union-attr]
@property
def tools(self) -> ToolCollection[BaseTool]:
@@ -152,12 +217,19 @@ class MCPServerAdapter:
return tools_collection.filter_by_names(self._tool_names)
return tools_collection
def __enter__(self) -> ToolCollection[BaseTool]:
"""Enter the context manager.
Note that `__init__()` already starts the MCP server,
so tools should already be available.
"""
return self.tools
def __exit__(
self,
exc_type: type[BaseException] | None,
exc_value: BaseException | None,
traceback: Any,
) -> None:
"""Exit the context manager."""
self._adapter.__exit__(exc_type, exc_value, traceback) # type: ignore[union-attr]

View File

@@ -8,8 +8,9 @@ from typing import Any
from crewai.tools.base_tool import BaseTool, EnvVar
from pydantic import BaseModel
from pydantic.fields import FieldInfo
from pydantic.json_schema import GenerateJsonSchema
from pydantic_core import PydanticOmit, PydanticUndefined
from crewai_tools import tools
@@ -44,6 +45,9 @@ class ToolSpecExtractor:
schema = self._unwrap_schema(core_schema)
fields = schema.get("schema", {}).get("fields", {})
# Use model_fields to get defaults (handles both default and default_factory)
model_fields = tool_class.model_fields
tool_info = {
"name": tool_class.__name__,
"humanized_name": self._extract_field_default(
@@ -54,9 +58,9 @@ class ToolSpecExtractor:
).strip(),
"run_params_schema": self._extract_params(fields.get("args_schema")),
"init_params_schema": self._extract_init_params(tool_class),
"env_vars": self._extract_env_vars(fields.get("env_vars")),
"package_dependencies": self._extract_field_default(
fields.get("package_dependencies"), fallback=[]
"env_vars": self._extract_env_vars_from_model_fields(model_fields),
"package_dependencies": self._extract_package_deps_from_model_fields(
model_fields
),
}
@@ -103,10 +107,27 @@ class ToolSpecExtractor:
return {}
@staticmethod
def _get_field_default(field: FieldInfo | None) -> Any:
"""Get default value from a FieldInfo, handling both default and default_factory."""
if not field:
return None
default_value = field.default
if default_value is PydanticUndefined or default_value is None:
if field.default_factory:
return field.default_factory()
return None
return default_value
@staticmethod
def _extract_env_vars_from_model_fields(
model_fields: dict[str, FieldInfo],
) -> list[dict[str, Any]]:
default_value = ToolSpecExtractor._get_field_default(
model_fields.get("env_vars")
)
if not default_value:
return []
return [
@@ -116,10 +137,22 @@ class ToolSpecExtractor:
"required": env_var.required,
"default": env_var.default,
}
for env_var in default_value
if isinstance(env_var, EnvVar)
]
@staticmethod
def _extract_package_deps_from_model_fields(
model_fields: dict[str, FieldInfo],
) -> list[str]:
default_value = ToolSpecExtractor._get_field_default(
model_fields.get("package_dependencies")
)
if not isinstance(default_value, list):
return []
return default_value
@staticmethod
def _extract_init_params(tool_class: type[BaseTool]) -> dict[str, Any]:
ignored_init_params = [
@@ -152,7 +185,7 @@ class ToolSpecExtractor:
if __name__ == "__main__":
output_file = Path(__file__).parent.parent.parent / "tool.specs.json"
extractor = ToolSpecExtractor()
extractor.extract_all_tools()
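The switch to `model_fields` matters because Pydantic v2 stores `default_factory` defaults unevaluated: the raw field info exposes `PydanticUndefined` rather than the computed value. A minimal sketch of the behavior `_get_field_default` handles:

```python
from pydantic import BaseModel, Field
from pydantic_core import PydanticUndefined

class Example(BaseModel):
    tags: list[str] = Field(default_factory=lambda: ["a", "b"])

field_info = Example.model_fields["tags"]
assert field_info.default is PydanticUndefined  # no plain default is stored
assert field_info.default_factory() == ["a", "b"]  # the factory holds the value
```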

View File

@@ -1,12 +1,17 @@
from datetime import datetime
import json
import os
import time
from typing import Annotated, Any, ClassVar, Literal
from crewai.tools import BaseTool, EnvVar
from dotenv import load_dotenv
from pydantic import BaseModel, Field
from pydantic.types import StringConstraints
import requests
load_dotenv()
def _save_results_to_file(content: str) -> None:
"""Saves the search results to a file."""
@@ -15,37 +20,72 @@ def _save_results_to_file(content: str) -> None:
file.write(content)
FreshnessPreset = Literal["pd", "pw", "pm", "py"]
FreshnessRange = Annotated[
str, StringConstraints(pattern=r"^\d{4}-\d{2}-\d{2}to\d{4}-\d{2}-\d{2}$")
]
Freshness = FreshnessPreset | FreshnessRange
SafeSearch = Literal["off", "moderate", "strict"]
class BraveSearchToolSchema(BaseModel):
"""Input for BraveSearchTool"""
query: str = Field(..., description="Search query to perform")
country: str | None = Field(
default=None,
description="Country code for geo-targeting (e.g., 'US', 'BR').",
)
search_language: str | None = Field(
default=None,
description="Language code for the search results (e.g., 'en', 'es').",
)
count: int | None = Field(
default=None,
description="The maximum number of results to return. Actual number may be less.",
)
offset: int | None = Field(
default=None, description="Skip the first N result sets/pages. Max is 9."
)
safesearch: SafeSearch | None = Field(
default=None,
description="Filter out explicit content. Options: off/moderate/strict",
)
spellcheck: bool | None = Field(
default=None,
description="Attempt to correct spelling errors in the search query.",
)
freshness: Freshness | None = Field(
default=None,
description="Enforce freshness of results. Options: pd/pw/pm/py, or YYYY-MM-DDtoYYYY-MM-DD",
)
text_decorations: bool | None = Field(
default=None,
description="Include markup to highlight search terms in the results.",
)
extra_snippets: bool | None = Field(
default=None,
description="Include up to 5 text snippets for each page if possible.",
)
operators: bool | None = Field(
default=None,
description="Whether to apply search operators (e.g., site:example.com).",
)
# TODO: Extend support to additional endpoints (e.g., /images, /news, etc.)
class BraveSearchTool(BaseTool):
"""BraveSearchTool - A tool for performing web searches using the Brave Search API.
"""A tool that performs web searches using the Brave Search API."""
name: str = "Brave Web Search the internet"
name: str = "Brave Search"
description: str = (
"A tool that can be used to search the internet with a search_query."
"A tool that performs web searches using the Brave Search API. "
"Results are returned as structured JSON data."
)
args_schema: type[BaseModel] = BraveSearchToolSchema
search_url: str = "https://api.search.brave.com/res/v1/web/search"
country: str | None = ""
n_results: int = 10
save_file: bool = False
env_vars: list[EnvVar] = Field(
default_factory=lambda: [
EnvVar(
@@ -55,6 +95,9 @@ class BraveSearchTool(BaseTool):
),
]
)
# Rate limiting parameters
_last_request_time: ClassVar[float] = 0
_min_request_interval: ClassVar[float] = 1.0 # seconds
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
@@ -73,19 +116,64 @@ class BraveSearchTool(BaseTool):
self._min_request_interval - (current_time - self._last_request_time)
)
BraveSearchTool._last_request_time = time.time()
# Construct and send the request
try:
# Maintain both "search_query" and "query" for backwards compatibility
query = kwargs.get("search_query") or kwargs.get("query")
if not query:
raise ValueError("Query is required")
payload = {"q": query}
if country := kwargs.get("country"):
payload["country"] = country
if search_language := kwargs.get("search_language"):
payload["search_language"] = search_language
# Fallback to deprecated n_results parameter if no count is provided
count = kwargs.get("count")
if count is not None:
payload["count"] = count
else:
payload["count"] = self.n_results
# Offset may be 0, so avoid truthiness check
offset = kwargs.get("offset")
if offset is not None:
payload["offset"] = offset
if safesearch := kwargs.get("safesearch"):
payload["safesearch"] = safesearch
save_file = kwargs.get("save_file", self.save_file)
if freshness := kwargs.get("freshness"):
payload["freshness"] = freshness
payload = {"q": search_query, "count": n_results}
# Boolean parameters
spellcheck = kwargs.get("spellcheck")
if spellcheck is not None:
payload["spellcheck"] = spellcheck
if self.country != "":
payload["country"] = self.country
text_decorations = kwargs.get("text_decorations")
if text_decorations is not None:
payload["text_decorations"] = text_decorations
extra_snippets = kwargs.get("extra_snippets")
if extra_snippets is not None:
payload["extra_snippets"] = extra_snippets
operators = kwargs.get("operators")
if operators is not None:
payload["operators"] = operators
# Limit the result types to "web" since there is presently no
# handling of other types like "discussions", "faq", "infobox",
# "news", "videos", or "locations".
payload["result_filter"] = "web"
# Setup Request Headers
headers = {
"X-Subscription-Token": os.environ["BRAVE_API_KEY"],
"Accept": "application/json",
@@ -97,25 +185,32 @@ class BraveSearchTool(BaseTool):
response.raise_for_status() # Handle non-200 responses
results = response.json()
# TODO: Handle other result types like "discussions", "faq", etc.
web_results_items = []
if "web" in results:
results = results["web"]["results"]
string = []
for result in results:
try:
string.append(
"\n".join(
[
f"Title: {result['title']}",
f"Link: {result['url']}",
f"Snippet: {result['description']}",
"---",
]
)
)
except KeyError: # noqa: PERF203
continue
web_results = results["web"]["results"]
for result in web_results:
url = result.get("url")
title = result.get("title")
# If, for whatever reason, this entry does not have a title
# or url, skip it.
if not url or not title:
continue
item = {
"url": url,
"title": title,
}
description = result.get("description")
if description:
item["description"] = description
snippets = result.get("extra_snippets")
if snippets:
item["snippets"] = snippets
web_results_items.append(item)
content = json.dumps(web_results_items)
except requests.RequestException as e:
return f"Error performing search: {e!s}"
except KeyError as e:
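Taken together, the reworked schema supports calls like the following sketch (argument values are illustrative; `BRAVE_API_KEY` must be set in the environment):

```python
import json

tool = BraveSearchTool()
raw = tool.run(
    query="open source agent frameworks",
    count=5,
    country="US",
    safesearch="moderate",
    freshness="pw",  # results from the past week
)
# The tool now returns structured JSON rather than formatted text.
results = json.loads(raw)  # list of {"url", "title", "description", ...} items
```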

View File

@@ -1,10 +1,11 @@
"""Crewai Enterprise Tools."""
import json
import os
from typing import Any
from crewai.tools import BaseTool
from crewai.utilities.pydantic_schema_utils import create_model_from_schema
from pydantic import Field, create_model
import requests
@@ -14,77 +15,6 @@ from crewai_tools.tools.crewai_platform_tools.misc import (
)
class AllOfSchemaAnalyzer:
"""Helper class to analyze and merge allOf schemas."""
def __init__(self, schemas: list[dict[str, Any]]):
self.schemas = schemas
self._explicit_types: list[str] = []
self._merged_properties: dict[str, Any] = {}
self._merged_required: list[str] = []
self._analyze_schemas()
def _analyze_schemas(self) -> None:
"""Analyze all schemas and extract relevant information."""
for schema in self.schemas:
if "type" in schema:
self._explicit_types.append(schema["type"])
# Merge object properties
if schema.get("type") == "object" and "properties" in schema:
self._merged_properties.update(schema["properties"])
if "required" in schema:
self._merged_required.extend(schema["required"])
def has_consistent_type(self) -> bool:
"""Check if all schemas have the same explicit type."""
return len(set(self._explicit_types)) == 1 if self._explicit_types else False
def get_consistent_type(self) -> type[Any]:
"""Get the consistent type if all schemas agree."""
if not self.has_consistent_type():
raise ValueError("No consistent type found")
type_mapping = {
"string": str,
"integer": int,
"number": float,
"boolean": bool,
"array": list,
"object": dict,
"null": type(None),
}
return type_mapping.get(self._explicit_types[0], str)
def has_object_schemas(self) -> bool:
"""Check if any schemas are object types with properties."""
return bool(self._merged_properties)
def get_merged_properties(self) -> dict[str, Any]:
"""Get merged properties from all object schemas."""
return self._merged_properties
def get_merged_required_fields(self) -> list[str]:
"""Get merged required fields from all object schemas."""
return list(set(self._merged_required)) # Remove duplicates
def get_fallback_type(self) -> type[Any]:
"""Get a fallback type when merging fails."""
if self._explicit_types:
# Use the first explicit type
type_mapping = {
"string": str,
"integer": int,
"number": float,
"boolean": bool,
"array": list,
"object": dict,
"null": type(None),
}
return type_mapping.get(self._explicit_types[0], str)
return str
class CrewAIPlatformActionTool(BaseTool):
action_name: str = Field(default="", description="The name of the action")
action_schema: dict[str, Any] = Field(
@@ -97,42 +27,19 @@ class CrewAIPlatformActionTool(BaseTool):
action_name: str,
action_schema: dict[str, Any],
):
        parameters = action_schema.get("function", {}).get("parameters", {})

        if parameters and parameters.get("properties"):
            try:
                if "title" not in parameters:
                    parameters = {**parameters, "title": f"{action_name}Schema"}
                if "type" not in parameters:
                    parameters = {**parameters, "type": "object"}
                args_schema = create_model_from_schema(parameters)
            except Exception:
                args_schema = create_model(f"{action_name}Schema")
        else:
            args_schema = create_model(f"{action_name}Schema")
super().__init__(
name=action_name.lower().replace(" ", "_"),
@@ -142,285 +49,12 @@ class CrewAIPlatformActionTool(BaseTool):
self.action_name = action_name
self.action_schema = action_schema
@staticmethod
def _sanitize_name(name: str) -> str:
name = name.lower().replace(" ", "_")
sanitized = re.sub(r"[^a-zA-Z0-9_]", "", name)
parts = sanitized.split("_")
return "".join(word.capitalize() for word in parts if word)
@staticmethod
def _extract_schema_info(
action_schema: dict[str, Any],
) -> tuple[dict[str, Any], list[str]]:
schema_props = (
action_schema.get("function", {})
.get("parameters", {})
.get("properties", {})
)
required = (
action_schema.get("function", {}).get("parameters", {}).get("required", [])
)
return schema_props, required
def _process_schema_type(self, schema: dict[str, Any], type_name: str) -> type[Any]:
"""
Process a JSON Schema type definition into a Python type.
Handles complex schema constructs like anyOf, oneOf, allOf, enums, arrays, and objects.
"""
# Handle composite schema types (anyOf, oneOf, allOf)
if composite_type := self._process_composite_schema(schema, type_name):
return composite_type
# Handle primitive types and simple constructs
return self._process_primitive_schema(schema, type_name)
def _process_composite_schema(
self, schema: dict[str, Any], type_name: str
) -> type[Any] | None:
"""Process composite schema types: anyOf, oneOf, allOf."""
if "anyOf" in schema:
return self._process_any_of_schema(schema["anyOf"], type_name)
if "oneOf" in schema:
return self._process_one_of_schema(schema["oneOf"], type_name)
if "allOf" in schema:
return self._process_all_of_schema(schema["allOf"], type_name)
return None
def _process_any_of_schema(
self, any_of_types: list[dict[str, Any]], type_name: str
) -> type[Any]:
"""Process anyOf schema - creates Union of possible types."""
is_nullable = any(t.get("type") == "null" for t in any_of_types)
non_null_types = [t for t in any_of_types if t.get("type") != "null"]
if not non_null_types:
return cast(
type[Any], cast(object, str | None)
) # fallback for only-null case
base_type = (
self._process_schema_type(non_null_types[0], type_name)
if len(non_null_types) == 1
else self._create_union_type(non_null_types, type_name, "AnyOf")
)
return base_type | None if is_nullable else base_type # type: ignore[return-value]
def _process_one_of_schema(
self, one_of_types: list[dict[str, Any]], type_name: str
) -> type[Any]:
"""Process oneOf schema - creates Union of mutually exclusive types."""
return (
self._process_schema_type(one_of_types[0], type_name)
if len(one_of_types) == 1
else self._create_union_type(one_of_types, type_name, "OneOf")
)
def _process_all_of_schema(
self, all_of_schemas: list[dict[str, Any]], type_name: str
) -> type[Any]:
"""Process allOf schema - merges schemas that must all be satisfied."""
if len(all_of_schemas) == 1:
return self._process_schema_type(all_of_schemas[0], type_name)
return self._merge_all_of_schemas(all_of_schemas, type_name)
def _create_union_type(
self, schemas: list[dict[str, Any]], type_name: str, prefix: str
) -> type[Any]:
"""Create a Union type from multiple schemas."""
return Union[ # type: ignore # noqa: UP007
tuple(
self._process_schema_type(schema, f"{type_name}{prefix}{i}")
for i, schema in enumerate(schemas)
)
]
def _process_primitive_schema(
self, schema: dict[str, Any], type_name: str
) -> type[Any]:
"""Process primitive schema types: string, number, array, object, etc."""
json_type = schema.get("type", "string")
if "enum" in schema:
return self._process_enum_schema(schema, json_type)
if json_type == "array":
return self._process_array_schema(schema, type_name)
if json_type == "object":
return self._create_nested_model(schema, type_name)
return self._map_json_type_to_python(json_type)
def _process_enum_schema(self, schema: dict[str, Any], json_type: str) -> type[Any]:
"""Process enum schema - currently falls back to base type."""
enum_values = schema["enum"]
if not enum_values:
return self._map_json_type_to_python(json_type)
# For Literal types, we need to pass the values directly, not as a tuple
# This is a workaround since we can't dynamically create Literal types easily
# Fall back to the base JSON type for now
return self._map_json_type_to_python(json_type)
def _process_array_schema(
self, schema: dict[str, Any], type_name: str
) -> type[Any]:
items_schema = schema.get("items", {"type": "string"})
item_type = self._process_schema_type(items_schema, f"{type_name}Item")
return list[item_type] # type: ignore
def _merge_all_of_schemas(
self, schemas: list[dict[str, Any]], type_name: str
) -> type[Any]:
schema_analyzer = AllOfSchemaAnalyzer(schemas)
if schema_analyzer.has_consistent_type():
return schema_analyzer.get_consistent_type()
if schema_analyzer.has_object_schemas():
return self._create_merged_object_model(
schema_analyzer.get_merged_properties(),
schema_analyzer.get_merged_required_fields(),
type_name,
)
return schema_analyzer.get_fallback_type()
def _create_merged_object_model(
self, properties: dict[str, Any], required: list[str], model_name: str
) -> type[Any]:
full_model_name = f"{self._base_name}{model_name}AllOf"
if full_model_name in self._model_registry:
return self._model_registry[full_model_name]
if not properties:
return dict
field_definitions = self._build_field_definitions(
properties, required, model_name
)
try:
merged_model = create_model(full_model_name, **field_definitions)
self._model_registry[full_model_name] = merged_model
return merged_model
except Exception:
return dict
def _build_field_definitions(
self, properties: dict[str, Any], required: list[str], model_name: str
) -> dict[str, Any]:
field_definitions = {}
for prop_name, prop_schema in properties.items():
prop_desc = prop_schema.get("description", "")
is_required = prop_name in required
try:
prop_type = self._process_schema_type(
prop_schema, f"{model_name}{self._sanitize_name(prop_name).title()}"
)
except Exception:
prop_type = str
field_definitions[prop_name] = self._create_field_definition(
prop_type, is_required, prop_desc
)
return field_definitions
def _create_nested_model(
self, schema: dict[str, Any], model_name: str
) -> type[Any]:
full_model_name = f"{self._base_name}{model_name}"
if full_model_name in self._model_registry:
return self._model_registry[full_model_name]
properties = schema.get("properties", {})
required_fields = schema.get("required", [])
if not properties:
return dict
field_definitions = {}
for prop_name, prop_schema in properties.items():
prop_desc = prop_schema.get("description", "")
is_required = prop_name in required_fields
try:
prop_type = self._process_schema_type(
prop_schema, f"{model_name}{self._sanitize_name(prop_name).title()}"
)
except Exception:
prop_type = str
field_definitions[prop_name] = self._create_field_definition(
prop_type, is_required, prop_desc
)
try:
nested_model = create_model(full_model_name, **field_definitions) # type: ignore
self._model_registry[full_model_name] = nested_model
return nested_model
except Exception:
return dict
def _create_field_definition(
self, field_type: type[Any], is_required: bool, description: str
) -> tuple:
if is_required:
return (field_type, Field(description=description))
if get_origin(field_type) is Union:
return (field_type, Field(default=None, description=description))
return (
Optional[field_type], # noqa: UP045
Field(default=None, description=description),
)
def _map_json_type_to_python(self, json_type: str) -> type[Any]:
type_mapping = {
"string": str,
"integer": int,
"number": float,
"boolean": bool,
"array": list,
"object": dict,
"null": type(None),
}
return type_mapping.get(json_type, str)
def _get_required_nullable_fields(self) -> list[str]:
schema_props, required = self._extract_schema_info(self.action_schema)
required_nullable_fields = []
for param_name in required:
param_details = schema_props.get(param_name, {})
if self._is_nullable_type(param_details):
required_nullable_fields.append(param_name)
return required_nullable_fields
def _is_nullable_type(self, schema: dict[str, Any]) -> bool:
if "anyOf" in schema:
return any(t.get("type") == "null" for t in schema["anyOf"])
return schema.get("type") == "null"
def _run(self, **kwargs: Any) -> str:
try:
cleaned_kwargs = {
key: value for key, value in kwargs.items() if value is not None
}
api_url = (
f"{get_platform_api_base_url()}/actions/{self.action_name}/execute"
)
@@ -429,7 +63,9 @@ class CrewAIPlatformActionTool(BaseTool):
"Authorization": f"Bearer {token}",
"Content-Type": "application/json",
}
payload = {
"integration": cleaned_kwargs if cleaned_kwargs else {"_noop": True}
}
response = requests.post(
url=api_url,
@@ -441,7 +77,14 @@ class CrewAIPlatformActionTool(BaseTool):
data = response.json()
if not response.ok:
if isinstance(data, dict):
error_info = data.get("error", {})
if isinstance(error_info, dict):
error_message = error_info.get("message", json.dumps(data))
else:
error_message = str(error_info)
else:
error_message = str(data)
return f"API request failed: {error_message}"
return json.dumps(data, indent=2)

View File

@@ -1,5 +1,10 @@
"""CrewAI platform tool builder for fetching and creating action tools."""
import logging
import os
from types import TracebackType
from typing import Any
from crewai.tools import BaseTool
import requests
@@ -12,22 +17,29 @@ from crewai_tools.tools.crewai_platform_tools.misc import (
)
logger = logging.getLogger(__name__)
class CrewaiPlatformToolBuilder:
"""Builds platform tools from remote action schemas."""
def __init__(
self,
apps: list[str],
):
) -> None:
self._apps = apps
self._actions_schema: dict[str, dict[str, Any]] = {}
self._tools: list[BaseTool] | None = None
def tools(self) -> list[BaseTool]:
"""Fetch actions and return built tools."""
if self._tools is None:
self._fetch_actions()
self._create_tools()
return self._tools if self._tools is not None else []
def _fetch_actions(self) -> None:
"""Fetch action schemas from the platform API."""
actions_url = f"{get_platform_api_base_url()}/actions"
headers = {"Authorization": f"Bearer {get_platform_integration_token()}"}
@@ -40,7 +52,8 @@ class CrewaiPlatformToolBuilder:
verify=os.environ.get("CREWAI_FACTORY", "false").lower() != "true",
)
response.raise_for_status()
except Exception as e:
logger.error(f"Failed to fetch platform tools for apps {self._apps}: {e}")
return
raw_data = response.json()
@@ -51,6 +64,8 @@ class CrewaiPlatformToolBuilder:
for app, action_list in action_categories.items():
if isinstance(action_list, list):
for action in action_list:
if not isinstance(action, dict):
continue
if action_name := action.get("name"):
action_schema = {
"function": {
@@ -64,72 +79,16 @@ class CrewaiPlatformToolBuilder:
}
self._actions_schema[action_name] = action_schema
def _generate_detailed_description(
self, schema: dict[str, Any], indent: int = 0
) -> list[str]:
descriptions = []
indent_str = " " * indent
schema_type = schema.get("type", "string")
if schema_type == "object":
properties = schema.get("properties", {})
required_fields = schema.get("required", [])
if properties:
descriptions.append(f"{indent_str}Object with properties:")
for prop_name, prop_schema in properties.items():
prop_desc = prop_schema.get("description", "")
is_required = prop_name in required_fields
req_str = " (required)" if is_required else " (optional)"
descriptions.append(
f"{indent_str} - {prop_name}: {prop_desc}{req_str}"
)
if prop_schema.get("type") == "object":
descriptions.extend(
self._generate_detailed_description(prop_schema, indent + 2)
)
elif prop_schema.get("type") == "array":
items_schema = prop_schema.get("items", {})
if items_schema.get("type") == "object":
descriptions.append(f"{indent_str} Array of objects:")
descriptions.extend(
self._generate_detailed_description(
items_schema, indent + 3
)
)
elif "enum" in items_schema:
descriptions.append(
f"{indent_str} Array of enum values: {items_schema['enum']}"
)
elif "enum" in prop_schema:
descriptions.append(
f"{indent_str} Enum values: {prop_schema['enum']}"
)
return descriptions
def _create_tools(self) -> None:
"""Create tool instances from fetched action schemas."""
tools: list[BaseTool] = []
for action_name, action_schema in self._actions_schema.items():
function_details = action_schema.get("function", {})
description = function_details.get("description", f"Execute {action_name}")
tool = CrewAIPlatformActionTool(
description=description,
action_name=action_name,
action_schema=action_schema,
)
@@ -138,8 +97,14 @@ class CrewaiPlatformToolBuilder:
self._tools = tools
def __enter__(self) -> list[BaseTool]:
"""Enter context manager and return tools."""
return self.tools()
def __exit__(
self,
exc_type: type[BaseException] | None,
exc_val: BaseException | None,
exc_tb: TracebackType | None,
) -> None:
"""Exit context manager."""

View File

@@ -8,6 +8,29 @@ This enables multiple workflows like having an Agent to access the database fetc
**Attention**: Make sure that the Agent has access to a read replica, or that it is acceptable for the Agent to run insert/update queries on the database.
## Security Model
`NL2SQLTool` is an execution-capable tool. It runs model-generated SQL directly against the configured database connection.
Risk depends on deployment choices:
- Which credentials are used in `db_uri`
- Whether untrusted input can influence prompts
- Whether tool-call guardrails are enforced before execution
If untrusted input can reach this tool, treat the integration as high risk.
## Hardening Recommendations
Use all of the following in production:
- Use a read-only database user whenever possible
- Prefer a read replica for analytics/retrieval workloads
- Grant least privilege (no superuser/admin roles, no file/system-level capabilities)
- Apply database-side resource limits (statement timeout, lock timeout, cost/row limits)
- Add `before_tool_call` hooks to enforce allowed query patterns (see the sketch after this list)
- Enable query logging and alerting for destructive statements
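As a sketch of the hook-based guardrail, the check below rejects anything other than a single `SELECT` statement. The hook signature and the `nl2sql_tool`/`sql_query` names are assumptions for illustration; adapt them to the tool-call hook API of your CrewAI version:

```python
import re

_SELECT_ONLY = re.compile(r"^\s*select\b", re.IGNORECASE)
_FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|truncate|grant|revoke)\b|;",
    re.IGNORECASE,
)

def guard_nl2sql(tool_name: str, tool_input: dict) -> None:
    """Raise before execution if the generated SQL is not a single SELECT."""
    if tool_name != "nl2sql_tool":  # assumed tool name
        return
    sql = str(tool_input.get("sql_query", "")).strip().rstrip(";")
    if not _SELECT_ONLY.match(sql) or _FORBIDDEN.search(sql):
        raise ValueError("Blocked SQL that is not a single SELECT statement")
```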
## Requirements
- SqlAlchemy
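For illustration, a hardened connection combining a read-only role, a replica host, and a database-side timeout might look like the sketch below (assumes Postgres; the role, host, and database names are placeholders):

```python
from sqlalchemy import create_engine

# Pre-created SELECT-only role pointed at a read replica.
db_uri = "postgresql://reporting_ro:REDACTED@replica.internal:5432/analytics"

engine = create_engine(
    db_uri,
    connect_args={"options": "-c statement_timeout=5000"},  # Postgres-specific 5s cap
)
# The same URI can be passed to the tool via its db_uri parameter.
```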

View File

@@ -4,7 +4,7 @@ import os
import re
from typing import Any
from crewai.tools import BaseTool, EnvVar
from pydantic import BaseModel, Field
@@ -137,6 +137,21 @@ class StagehandTool(BaseTool):
- 'observe': For finding elements in a specific area
"""
args_schema: type[BaseModel] = StagehandToolSchema
package_dependencies: list[str] = Field(default_factory=lambda: ["stagehand<=0.5.9"])
env_vars: list[EnvVar] = Field(
default_factory=lambda: [
EnvVar(
name="BROWSERBASE_API_KEY",
description="API key for Browserbase services",
required=False,
),
EnvVar(
name="BROWSERBASE_PROJECT_ID",
description="Project ID for Browserbase services",
required=False,
),
]
)
# Stagehand configuration
api_key: str | None = None

View File

@@ -23,23 +23,26 @@ class MockTool(BaseTool):
)
my_parameter: str = Field("This is default value", description="What a description")
my_parameter_bool: bool = Field(False)
# Use default_factory like real tools do (not direct default)
package_dependencies: list[str] = Field(
["this-is-a-required-package", "another-required-package"], description=""
default_factory=lambda: ["this-is-a-required-package", "another-required-package"]
)
env_vars: list[EnvVar] = Field(
default_factory=lambda: [
EnvVar(
name="SERPER_API_KEY",
description="API key for Serper",
required=True,
default=None,
),
EnvVar(
name="API_RATE_LIMIT",
description="API rate limit",
required=False,
default="100",
),
]
)
@pytest.fixture

View File

@@ -1,8 +1,10 @@
import json
from unittest.mock import patch
import pytest
from crewai_tools.tools.brave_search_tool.brave_search_tool import BraveSearchTool
@pytest.fixture
def brave_tool():
@@ -30,16 +32,46 @@ def test_brave_tool_search(mock_get, brave_tool):
}
mock_get.return_value.json.return_value = mock_response
result = brave_tool.run(search_query="test")
assert "Test Title" in result
assert "http://test.com" in result
result = brave_tool.run(query="test")
data = json.loads(result)
assert isinstance(data, list)
assert len(data) >= 1
assert data[0]["title"] == "Test Title"
assert data[0]["url"] == "http://test.com"
@patch("requests.get")
def test_brave_tool(mock_get):
mock_response = {
"web": {
"results": [
{
"title": "Brave Browser",
"url": "https://brave.com",
"description": "Brave Browser description",
}
]
}
}
mock_get.return_value.json.return_value = mock_response
tool = BraveSearchTool(n_results=2)
result = tool.run(query="Brave Browser")
assert result is not None
# Parse JSON so we can examine the structure
data = json.loads(result)
assert isinstance(data, list)
assert len(data) >= 1
# First item should have expected fields: title, url, and description
first = data[0]
assert "title" in first
assert first["title"] == "Brave Browser"
assert "url" in first
assert first["url"] == "https://brave.com"
assert "description" in first
assert first["description"] == "Brave Browser description"
if __name__ == "__main__":

View File

@@ -1,4 +1,3 @@
from typing import Union, get_args, get_origin
from unittest.mock import patch, Mock
import os
@@ -7,251 +6,6 @@ from crewai_tools.tools.crewai_platform_tools.crewai_platform_action_tool import
)
class TestSchemaProcessing:
def setup_method(self):
self.base_action_schema = {
"function": {
"parameters": {
"properties": {},
"required": []
}
}
}
def create_test_tool(self, action_name="test_action"):
return CrewAIPlatformActionTool(
description="Test tool",
action_name=action_name,
action_schema=self.base_action_schema
)
def test_anyof_multiple_types(self):
tool = self.create_test_tool()
test_schema = {
"anyOf": [
{"type": "string"},
{"type": "number"},
{"type": "integer"}
]
}
result_type = tool._process_schema_type(test_schema, "TestField")
assert get_origin(result_type) is Union
args = get_args(result_type)
expected_types = (str, float, int)
for expected_type in expected_types:
assert expected_type in args
def test_anyof_with_null(self):
tool = self.create_test_tool()
test_schema = {
"anyOf": [
{"type": "string"},
{"type": "number"},
{"type": "null"}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldNullable")
assert get_origin(result_type) is Union
args = get_args(result_type)
assert type(None) in args
assert str in args
assert float in args
def test_anyof_single_type(self):
tool = self.create_test_tool()
test_schema = {
"anyOf": [
{"type": "string"}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldSingle")
assert result_type is str
def test_oneof_multiple_types(self):
tool = self.create_test_tool()
test_schema = {
"oneOf": [
{"type": "string"},
{"type": "boolean"}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldOneOf")
assert get_origin(result_type) is Union
args = get_args(result_type)
expected_types = (str, bool)
for expected_type in expected_types:
assert expected_type in args
def test_oneof_single_type(self):
tool = self.create_test_tool()
test_schema = {
"oneOf": [
{"type": "integer"}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldOneOfSingle")
assert result_type is int
def test_basic_types(self):
tool = self.create_test_tool()
test_cases = [
({"type": "string"}, str),
({"type": "integer"}, int),
({"type": "number"}, float),
({"type": "boolean"}, bool),
({"type": "array", "items": {"type": "string"}}, list),
]
for schema, expected_type in test_cases:
result_type = tool._process_schema_type(schema, "TestField")
if schema["type"] == "array":
assert get_origin(result_type) is list
else:
assert result_type is expected_type
def test_enum_handling(self):
tool = self.create_test_tool()
test_schema = {
"type": "string",
"enum": ["option1", "option2", "option3"]
}
result_type = tool._process_schema_type(test_schema, "TestFieldEnum")
assert result_type is str
def test_nested_anyof(self):
tool = self.create_test_tool()
test_schema = {
"anyOf": [
{"type": "string"},
{
"anyOf": [
{"type": "integer"},
{"type": "boolean"}
]
}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldNested")
assert get_origin(result_type) is Union
args = get_args(result_type)
assert str in args
if len(args) == 3:
assert int in args
assert bool in args
else:
nested_union = next(arg for arg in args if get_origin(arg) is Union)
nested_args = get_args(nested_union)
assert int in nested_args
assert bool in nested_args
def test_allof_same_types(self):
tool = self.create_test_tool()
test_schema = {
"allOf": [
{"type": "string"},
{"type": "string", "maxLength": 100}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldAllOfSame")
assert result_type is str
def test_allof_object_merge(self):
tool = self.create_test_tool()
test_schema = {
"allOf": [
{
"type": "object",
"properties": {
"name": {"type": "string"},
"age": {"type": "integer"}
},
"required": ["name"]
},
{
"type": "object",
"properties": {
"email": {"type": "string"},
"age": {"type": "integer"}
},
"required": ["email"]
}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldAllOfMerged")
# Should create a merged model with all properties
# The implementation might fall back to dict if model creation fails
# Let's just verify it's not a basic scalar type
assert result_type is not str
assert result_type is not int
assert result_type is not bool
# It could be dict (fallback) or a proper model class
assert result_type in (dict, type) or hasattr(result_type, '__name__')
def test_allof_single_schema(self):
"""Test that allOf with single schema works correctly."""
tool = self.create_test_tool()
test_schema = {
"allOf": [
{"type": "boolean"}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldAllOfSingle")
# Should be just bool
assert result_type is bool
def test_allof_mixed_types(self):
tool = self.create_test_tool()
test_schema = {
"allOf": [
{"type": "string"},
{"type": "integer"}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldAllOfMixed")
assert result_type is str
class TestCrewAIPlatformActionToolVerify:
"""Test suite for SSL verification behavior based on CREWAI_FACTORY environment variable"""

View File

@@ -224,43 +224,6 @@ class TestCrewaiPlatformToolBuilder(unittest.TestCase):
_, kwargs = mock_get.call_args
assert kwargs["params"]["apps"] == ""
def test_detailed_description_generation(self):
builder = CrewaiPlatformToolBuilder(apps=["test"])
complex_schema = {
"type": "object",
"properties": {
"simple_string": {"type": "string", "description": "A simple string"},
"nested_object": {
"type": "object",
"properties": {
"inner_prop": {
"type": "integer",
"description": "Inner property",
}
},
"description": "Nested object",
},
"array_prop": {
"type": "array",
"items": {"type": "string"},
"description": "Array of strings",
},
},
}
descriptions = builder._generate_detailed_description(complex_schema)
assert isinstance(descriptions, list)
assert len(descriptions) > 0
description_text = "\n".join(descriptions)
assert "simple_string" in description_text
assert "nested_object" in description_text
assert "array_prop" in description_text
class TestCrewaiPlatformToolBuilderVerify(unittest.TestCase):
"""Test suite for SSL verification behavior in CrewaiPlatformToolBuilder"""

File diff suppressed because it is too large

View File

@@ -10,11 +10,11 @@ requires-python = ">=3.10, <3.14"
dependencies = [
# Core Dependencies
"pydantic~=2.11.9",
"openai~=1.83.0",
"openai>=1.83.0,<3",
"instructor>=1.3.3",
# Text Processing
"pdfplumber~=0.11.4",
"regex~=2024.9.11",
"regex~=2026.1.15",
# Telemetry and Monitoring
"opentelemetry-api~=1.34.0",
"opentelemetry-sdk~=1.34.0",
@@ -36,7 +36,7 @@ dependencies = [
"json5~=0.10.0",
"portalocker~=2.7.0",
"pydantic-settings~=2.10.1",
"mcp~=1.23.1",
"mcp~=1.26.0",
"uv~=0.9.13",
"aiosqlite~=0.21.0",
]
@@ -49,7 +49,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
"crewai-tools==1.8.1",
"crewai-tools==1.9.3",
]
embeddings = [
"tiktoken~=0.8.0"
@@ -78,7 +78,7 @@ voyageai = [
"voyageai~=0.3.5",
]
litellm = [
"litellm~=1.74.9",
"litellm>=1.74.9,<3",
]
bedrock = [
"boto3~=1.40.45",
@@ -90,7 +90,7 @@ azure-ai-inference = [
"azure-ai-inference~=1.0.0b9",
]
anthropic = [
"anthropic~=0.71.0",
"anthropic~=0.73.0",
]
a2a = [
"a2a-sdk~=0.3.10",

View File

@@ -40,7 +40,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
__version__ = "1.8.1"
__version__ = "1.9.3"
_telemetry_submitted = False

View File

@@ -1,20 +1,36 @@
"""A2A authentication schemas."""
from crewai.a2a.auth.client_schemes import (
APIKeyAuth,
AuthScheme,
BearerTokenAuth,
ClientAuthScheme,
HTTPBasicAuth,
HTTPDigestAuth,
OAuth2AuthorizationCode,
OAuth2ClientCredentials,
TLSConfig,
)
from crewai.a2a.auth.server_schemes import (
AuthenticatedUser,
OIDCAuth,
ServerAuthScheme,
SimpleTokenAuth,
)
__all__ = [
"APIKeyAuth",
"AuthScheme",
"AuthenticatedUser",
"BearerTokenAuth",
"ClientAuthScheme",
"HTTPBasicAuth",
"HTTPDigestAuth",
"OAuth2AuthorizationCode",
"OAuth2ClientCredentials",
"OIDCAuth",
"ServerAuthScheme",
"SimpleTokenAuth",
"TLSConfig",
]

View File

@@ -0,0 +1,550 @@
"""Authentication schemes for A2A protocol clients.
Supported authentication methods:
- Bearer tokens
- OAuth2 (Client Credentials, Authorization Code)
- API Keys (header, query, cookie)
- HTTP Basic authentication
- HTTP Digest authentication
- mTLS (mutual TLS) client certificate authentication
"""
from __future__ import annotations
from abc import ABC, abstractmethod
import asyncio
import base64
from collections.abc import Awaitable, Callable, MutableMapping
from pathlib import Path
import ssl
import time
from typing import TYPE_CHECKING, ClassVar, Literal
import urllib.parse
import httpx
from httpx import DigestAuth
from pydantic import BaseModel, ConfigDict, Field, FilePath, PrivateAttr
from typing_extensions import deprecated
if TYPE_CHECKING:
import grpc # type: ignore[import-untyped]
class TLSConfig(BaseModel):
"""TLS/mTLS configuration for secure client connections.
Supports mutual TLS (mTLS) where the client presents a certificate to the server,
and standard TLS with custom CA verification.
Attributes:
client_cert_path: Path to client certificate file (PEM format) for mTLS.
client_key_path: Path to client private key file (PEM format) for mTLS.
ca_cert_path: Path to CA certificate bundle for server verification.
verify: Whether to verify server certificates. Set False only for development.
"""
model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")
client_cert_path: FilePath | None = Field(
default=None,
description="Path to client certificate file (PEM format) for mTLS",
)
client_key_path: FilePath | None = Field(
default=None,
description="Path to client private key file (PEM format) for mTLS",
)
ca_cert_path: FilePath | None = Field(
default=None,
description="Path to CA certificate bundle for server verification",
)
verify: bool = Field(
default=True,
description="Whether to verify server certificates. Set False only for development.",
)
def get_httpx_ssl_context(self) -> ssl.SSLContext | bool | str:
"""Build SSL context for httpx client.
Returns:
SSL context if certificates configured, True for default verification,
False if verification disabled, or path to CA bundle.
"""
if not self.verify:
return False
if self.client_cert_path and self.client_key_path:
context = ssl.create_default_context()
if self.ca_cert_path:
context.load_verify_locations(cafile=str(self.ca_cert_path))
context.load_cert_chain(
certfile=str(self.client_cert_path),
keyfile=str(self.client_key_path),
)
return context
if self.ca_cert_path:
return str(self.ca_cert_path)
return True
def get_grpc_credentials(self) -> grpc.ChannelCredentials | None: # type: ignore[no-any-unimported]
"""Build gRPC channel credentials for secure connections.
Returns:
gRPC SSL credentials if certificates configured, None otherwise.
"""
try:
import grpc
except ImportError:
return None
if not self.verify and not self.client_cert_path:
return None
root_certs: bytes | None = None
private_key: bytes | None = None
certificate_chain: bytes | None = None
if self.ca_cert_path:
root_certs = Path(self.ca_cert_path).read_bytes()
if self.client_cert_path and self.client_key_path:
private_key = Path(self.client_key_path).read_bytes()
certificate_chain = Path(self.client_cert_path).read_bytes()
return grpc.ssl_channel_credentials(
root_certificates=root_certs,
private_key=private_key,
certificate_chain=certificate_chain,
)
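For orientation, a sketch of wiring `TLSConfig` into an httpx client (the paths are placeholders, and the `FilePath` fields require the files to exist at validation time):

```python
import httpx

tls = TLSConfig(
    client_cert_path="certs/client.pem",
    client_key_path="certs/client.key",
    ca_cert_path="certs/ca.pem",
)
# httpx accepts an ssl.SSLContext, a CA-bundle path, or a bool for verify.
client = httpx.AsyncClient(verify=tls.get_httpx_ssl_context())
```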
class ClientAuthScheme(ABC, BaseModel):
"""Base class for client-side authentication schemes.
Client auth schemes apply credentials to outgoing requests.
Attributes:
tls: Optional TLS/mTLS configuration for secure connections.
"""
tls: TLSConfig | None = Field(
default=None,
description="TLS/mTLS configuration for secure connections",
)
@abstractmethod
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply authentication to request headers.
Args:
client: HTTP client for making auth requests.
headers: Current request headers.
Returns:
Updated headers with authentication applied.
"""
...
@deprecated("Use ClientAuthScheme instead", category=FutureWarning)
class AuthScheme(ClientAuthScheme):
"""Deprecated: Use ClientAuthScheme instead."""
class BearerTokenAuth(ClientAuthScheme):
"""Bearer token authentication (Authorization: Bearer <token>).
Attributes:
token: Bearer token for authentication.
"""
token: str = Field(description="Bearer token")
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply Bearer token to Authorization header.
Args:
client: HTTP client for making auth requests.
headers: Current request headers.
Returns:
Updated headers with Bearer token in Authorization header.
"""
headers["Authorization"] = f"Bearer {self.token}"
return headers
class HTTPBasicAuth(ClientAuthScheme):
"""HTTP Basic authentication.
Attributes:
username: Username for Basic authentication.
password: Password for Basic authentication.
"""
username: str = Field(description="Username")
password: str = Field(description="Password")
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply HTTP Basic authentication.
Args:
client: HTTP client for making auth requests.
headers: Current request headers.
Returns:
Updated headers with Basic auth in Authorization header.
"""
credentials = f"{self.username}:{self.password}"
encoded = base64.b64encode(credentials.encode()).decode()
headers["Authorization"] = f"Basic {encoded}"
return headers
class HTTPDigestAuth(ClientAuthScheme):
"""HTTP Digest authentication.
Note: Uses httpx's built-in DigestAuth for the digest implementation.
Attributes:
username: Username for Digest authentication.
password: Password for Digest authentication.
"""
username: str = Field(description="Username")
password: str = Field(description="Password")
_configured_client_id: int | None = PrivateAttr(default=None)
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Digest auth is handled by httpx auth flow, not headers.
Args:
client: HTTP client for making auth requests.
headers: Current request headers.
Returns:
Unchanged headers (Digest auth handled by httpx auth flow).
"""
return headers
def configure_client(self, client: httpx.AsyncClient) -> None:
"""Configure client with Digest auth.
Idempotent: Only configures the client once. Subsequent calls on the same
client instance are no-ops to prevent overwriting auth configuration.
Args:
client: HTTP client to configure with Digest authentication.
"""
client_id = id(client)
if self._configured_client_id == client_id:
return
client.auth = DigestAuth(self.username, self.password)
self._configured_client_id = client_id
class APIKeyAuth(ClientAuthScheme):
"""API Key authentication (header, query, or cookie).
Attributes:
api_key: API key value for authentication.
location: Where to send the API key (header, query, or cookie).
name: Parameter name for the API key (default: X-API-Key).
"""
api_key: str = Field(description="API key value")
location: Literal["header", "query", "cookie"] = Field(
default="header", description="Where to send the API key"
)
name: str = Field(default="X-API-Key", description="Parameter name for the API key")
_configured_client_ids: set[int] = PrivateAttr(default_factory=set)
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply API key authentication.
Args:
client: HTTP client for making auth requests.
headers: Current request headers.
Returns:
Updated headers with API key (for header/cookie locations).
"""
if self.location == "header":
headers[self.name] = self.api_key
elif self.location == "cookie":
headers["Cookie"] = f"{self.name}={self.api_key}"
return headers
def configure_client(self, client: httpx.AsyncClient) -> None:
"""Configure client for query param API keys.
Idempotent: Only adds the request hook once per client instance.
Subsequent calls on the same client are no-ops to prevent hook accumulation.
Args:
client: HTTP client to configure with query param API key hook.
"""
if self.location == "query":
client_id = id(client)
if client_id in self._configured_client_ids:
return
async def _add_api_key_param(request: httpx.Request) -> None:
url = httpx.URL(request.url)
request.url = url.copy_add_param(self.name, self.api_key)
client.event_hooks["request"].append(_add_api_key_param)
self._configured_client_ids.add(client_id)
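A short sketch of the query-parameter mode (the key and parameter name are placeholders):

```python
import httpx

auth = APIKeyAuth(api_key="sk-demo", location="query", name="api_key")
client = httpx.AsyncClient()
auth.configure_client(client)  # installs the request hook once; re-calls are no-ops
```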
class OAuth2ClientCredentials(ClientAuthScheme):
"""OAuth2 Client Credentials flow authentication.
Thread-safe implementation with asyncio.Lock to prevent concurrent token fetches
when multiple requests share the same auth instance.
Attributes:
token_url: OAuth2 token endpoint URL.
client_id: OAuth2 client identifier.
client_secret: OAuth2 client secret.
scopes: List of required OAuth2 scopes.
"""
token_url: str = Field(description="OAuth2 token endpoint")
client_id: str = Field(description="OAuth2 client ID")
client_secret: str = Field(description="OAuth2 client secret")
scopes: list[str] = Field(
default_factory=list, description="Required OAuth2 scopes"
)
_access_token: str | None = PrivateAttr(default=None)
_token_expires_at: float | None = PrivateAttr(default=None)
_lock: asyncio.Lock = PrivateAttr(default_factory=asyncio.Lock)
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply OAuth2 access token to Authorization header.
Uses asyncio.Lock to ensure only one coroutine fetches tokens at a time,
preventing race conditions when multiple concurrent requests use the same
auth instance.
Args:
client: HTTP client for making token requests.
headers: Current request headers.
Returns:
Updated headers with OAuth2 access token in Authorization header.
"""
if (
self._access_token is None
or self._token_expires_at is None
or time.time() >= self._token_expires_at
):
async with self._lock:
if (
self._access_token is None
or self._token_expires_at is None
or time.time() >= self._token_expires_at
):
await self._fetch_token(client)
if self._access_token:
headers["Authorization"] = f"Bearer {self._access_token}"
return headers
async def _fetch_token(self, client: httpx.AsyncClient) -> None:
"""Fetch OAuth2 access token using client credentials flow.
Args:
client: HTTP client for making token request.
Raises:
httpx.HTTPStatusError: If token request fails.
"""
data = {
"grant_type": "client_credentials",
"client_id": self.client_id,
"client_secret": self.client_secret,
}
if self.scopes:
data["scope"] = " ".join(self.scopes)
response = await client.post(self.token_url, data=data)
response.raise_for_status()
token_data = response.json()
self._access_token = token_data["access_token"]
expires_in = token_data.get("expires_in", 3600)
self._token_expires_at = time.time() + expires_in - 60
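A usage sketch for the client-credentials flow (endpoints and credentials are placeholders):

```python
import asyncio
import httpx

async def main() -> None:
    auth = OAuth2ClientCredentials(
        token_url="https://auth.example.com/oauth/token",
        client_id="my-client",
        client_secret="my-secret",
        scopes=["agents:invoke"],
    )
    async with httpx.AsyncClient() as client:
        # First call fetches and caches the token; later calls reuse it
        # until roughly 60 seconds before expiry.
        headers = await auth.apply_auth(client, {})
        await client.get("https://a2a.example.com/agent", headers=headers)

asyncio.run(main())
```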
class OAuth2AuthorizationCode(ClientAuthScheme):
"""OAuth2 Authorization Code flow authentication.
Thread-safe implementation with asyncio.Lock to prevent concurrent token operations.
Note: Requires interactive authorization.
Attributes:
authorization_url: OAuth2 authorization endpoint URL.
token_url: OAuth2 token endpoint URL.
client_id: OAuth2 client identifier.
client_secret: OAuth2 client secret.
redirect_uri: OAuth2 redirect URI for callback.
scopes: List of required OAuth2 scopes.
"""
authorization_url: str = Field(description="OAuth2 authorization endpoint")
token_url: str = Field(description="OAuth2 token endpoint")
client_id: str = Field(description="OAuth2 client ID")
client_secret: str = Field(description="OAuth2 client secret")
redirect_uri: str = Field(description="OAuth2 redirect URI")
scopes: list[str] = Field(
default_factory=list, description="Required OAuth2 scopes"
)
_access_token: str | None = PrivateAttr(default=None)
_refresh_token: str | None = PrivateAttr(default=None)
_token_expires_at: float | None = PrivateAttr(default=None)
_authorization_callback: Callable[[str], Awaitable[str]] | None = PrivateAttr(
default=None
)
_lock: asyncio.Lock = PrivateAttr(default_factory=asyncio.Lock)
def set_authorization_callback(
self, callback: Callable[[str], Awaitable[str]] | None
) -> None:
"""Set callback to handle authorization URL.
Args:
callback: Async function that receives authorization URL and returns auth code.
"""
self._authorization_callback = callback
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply OAuth2 access token to Authorization header.
Uses asyncio.Lock to ensure only one coroutine handles token operations
(initial fetch or refresh) at a time.
Args:
client: HTTP client for making token requests.
headers: Current request headers.
Returns:
Updated headers with OAuth2 access token in Authorization header.
Raises:
ValueError: If authorization callback is not set.
"""
if self._access_token is None:
if self._authorization_callback is None:
msg = "Authorization callback not set. Use set_authorization_callback()"
raise ValueError(msg)
async with self._lock:
if self._access_token is None:
await self._fetch_initial_token(client)
elif self._token_expires_at and time.time() >= self._token_expires_at:
async with self._lock:
if self._token_expires_at and time.time() >= self._token_expires_at:
await self._refresh_access_token(client)
if self._access_token:
headers["Authorization"] = f"Bearer {self._access_token}"
return headers
async def _fetch_initial_token(self, client: httpx.AsyncClient) -> None:
"""Fetch initial access token using authorization code flow.
Args:
client: HTTP client for making token request.
Raises:
ValueError: If authorization callback is not set.
httpx.HTTPStatusError: If token request fails.
"""
params = {
"response_type": "code",
"client_id": self.client_id,
"redirect_uri": self.redirect_uri,
"scope": " ".join(self.scopes),
}
auth_url = f"{self.authorization_url}?{urllib.parse.urlencode(params)}"
if self._authorization_callback is None:
msg = "Authorization callback not set"
raise ValueError(msg)
auth_code = await self._authorization_callback(auth_url)
data = {
"grant_type": "authorization_code",
"code": auth_code,
"client_id": self.client_id,
"client_secret": self.client_secret,
"redirect_uri": self.redirect_uri,
}
response = await client.post(self.token_url, data=data)
response.raise_for_status()
token_data = response.json()
self._access_token = token_data["access_token"]
self._refresh_token = token_data.get("refresh_token")
expires_in = token_data.get("expires_in", 3600)
self._token_expires_at = time.time() + expires_in - 60
async def _refresh_access_token(self, client: httpx.AsyncClient) -> None:
"""Refresh the access token using refresh token.
Args:
client: HTTP client for making token request.
Raises:
httpx.HTTPStatusError: If token refresh request fails.
"""
if not self._refresh_token:
await self._fetch_initial_token(client)
return
data = {
"grant_type": "refresh_token",
"refresh_token": self._refresh_token,
"client_id": self.client_id,
"client_secret": self.client_secret,
}
response = await client.post(self.token_url, data=data)
response.raise_for_status()
token_data = response.json()
self._access_token = token_data["access_token"]
if "refresh_token" in token_data:
self._refresh_token = token_data["refresh_token"]
expires_in = token_data.get("expires_in", 3600)
self._token_expires_at = time.time() + expires_in - 60
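Because the authorization-code flow is interactive, a callback must supply the code; a sketch with placeholder endpoints:

```python
async def prompt_for_code(auth_url: str) -> str:
    # Placeholder flow: surface the URL and collect the code out of band.
    print(f"Authorize at: {auth_url}")
    return input("Paste the authorization code: ")

auth = OAuth2AuthorizationCode(
    authorization_url="https://auth.example.com/authorize",
    token_url="https://auth.example.com/oauth/token",
    client_id="my-client",
    client_secret="my-secret",
    redirect_uri="http://localhost:8080/callback",
)
auth.set_authorization_callback(prompt_for_code)
```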

View File

@@ -1,392 +1,71 @@
"""Authentication schemes for A2A protocol agents.
"""Deprecated: Authentication schemes for A2A protocol agents.
This module is deprecated. Import from crewai.a2a.auth instead:
- crewai.a2a.auth.ClientAuthScheme (replaces AuthScheme)
- crewai.a2a.auth.BearerTokenAuth
- crewai.a2a.auth.HTTPBasicAuth
- crewai.a2a.auth.HTTPDigestAuth
- crewai.a2a.auth.APIKeyAuth
- crewai.a2a.auth.OAuth2ClientCredentials
- crewai.a2a.auth.OAuth2AuthorizationCode
"""
from __future__ import annotations
from abc import ABC, abstractmethod
import base64
from collections.abc import Awaitable, Callable, MutableMapping
import time
from typing import Literal
import urllib.parse
from typing_extensions import deprecated
import httpx
from httpx import DigestAuth
from pydantic import BaseModel, Field, PrivateAttr
from crewai.a2a.auth.client_schemes import (
APIKeyAuth as _APIKeyAuth,
BearerTokenAuth as _BearerTokenAuth,
ClientAuthScheme as _ClientAuthScheme,
HTTPBasicAuth as _HTTPBasicAuth,
HTTPDigestAuth as _HTTPDigestAuth,
OAuth2AuthorizationCode as _OAuth2AuthorizationCode,
OAuth2ClientCredentials as _OAuth2ClientCredentials,
)
class AuthScheme(ABC, BaseModel):
"""Base class for authentication schemes."""
@deprecated("Use ClientAuthScheme from crewai.a2a.auth instead", category=FutureWarning)
class AuthScheme(_ClientAuthScheme):
"""Deprecated: Use ClientAuthScheme from crewai.a2a.auth instead."""
@abstractmethod
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply authentication to request headers.
Args:
client: HTTP client for making auth requests.
headers: Current request headers.
@deprecated("Import from crewai.a2a.auth instead", category=FutureWarning)
class BearerTokenAuth(_BearerTokenAuth):
"""Deprecated: Import from crewai.a2a.auth instead."""
Returns:
Updated headers with authentication applied.
"""
...
@deprecated("Import from crewai.a2a.auth instead", category=FutureWarning)
class HTTPBasicAuth(_HTTPBasicAuth):
"""Deprecated: Import from crewai.a2a.auth instead."""
class BearerTokenAuth(AuthScheme):
"""Bearer token authentication (Authorization: Bearer <token>).
Attributes:
token: Bearer token for authentication.
"""
@deprecated("Import from crewai.a2a.auth instead", category=FutureWarning)
class HTTPDigestAuth(_HTTPDigestAuth):
"""Deprecated: Import from crewai.a2a.auth instead."""
token: str = Field(description="Bearer token")
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply Bearer token to Authorization header.
@deprecated("Import from crewai.a2a.auth instead", category=FutureWarning)
class APIKeyAuth(_APIKeyAuth):
"""Deprecated: Import from crewai.a2a.auth instead."""
Args:
client: HTTP client for making auth requests.
headers: Current request headers.
Returns:
Updated headers with Bearer token in Authorization header.
"""
headers["Authorization"] = f"Bearer {self.token}"
return headers
@deprecated("Import from crewai.a2a.auth instead", category=FutureWarning)
class OAuth2ClientCredentials(_OAuth2ClientCredentials):
"""Deprecated: Import from crewai.a2a.auth instead."""
class HTTPBasicAuth(AuthScheme):
"""HTTP Basic authentication.
@deprecated("Import from crewai.a2a.auth instead", category=FutureWarning)
class OAuth2AuthorizationCode(_OAuth2AuthorizationCode):
"""Deprecated: Import from crewai.a2a.auth instead."""
Attributes:
username: Username for Basic authentication.
password: Password for Basic authentication.
"""
username: str = Field(description="Username")
password: str = Field(description="Password")
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply HTTP Basic authentication.
Args:
client: HTTP client for making auth requests.
headers: Current request headers.
Returns:
Updated headers with Basic auth in Authorization header.
"""
credentials = f"{self.username}:{self.password}"
encoded = base64.b64encode(credentials.encode()).decode()
headers["Authorization"] = f"Basic {encoded}"
return headers
class HTTPDigestAuth(AuthScheme):
"""HTTP Digest authentication.
Note: Uses httpx's built-in DigestAuth for the digest implementation.
Attributes:
username: Username for Digest authentication.
password: Password for Digest authentication.
"""
username: str = Field(description="Username")
password: str = Field(description="Password")
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Digest auth is handled by httpx auth flow, not headers.
Args:
client: HTTP client for making auth requests.
headers: Current request headers.
Returns:
Unchanged headers (Digest auth handled by httpx auth flow).
"""
return headers
def configure_client(self, client: httpx.AsyncClient) -> None:
"""Configure client with Digest auth.
Args:
client: HTTP client to configure with Digest authentication.
"""
client.auth = DigestAuth(self.username, self.password)
class APIKeyAuth(AuthScheme):
"""API Key authentication (header, query, or cookie).
Attributes:
api_key: API key value for authentication.
location: Where to send the API key (header, query, or cookie).
name: Parameter name for the API key (default: X-API-Key).
"""
api_key: str = Field(description="API key value")
location: Literal["header", "query", "cookie"] = Field(
default="header", description="Where to send the API key"
)
name: str = Field(default="X-API-Key", description="Parameter name for the API key")
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply API key authentication.
Args:
client: HTTP client for making auth requests.
headers: Current request headers.
Returns:
Updated headers with API key (for header/cookie locations).
"""
if self.location == "header":
headers[self.name] = self.api_key
elif self.location == "cookie":
headers["Cookie"] = f"{self.name}={self.api_key}"
return headers
def configure_client(self, client: httpx.AsyncClient) -> None:
"""Configure client for query param API keys.
Args:
client: HTTP client to configure with query param API key hook.
"""
if self.location == "query":
async def _add_api_key_param(request: httpx.Request) -> None:
url = httpx.URL(request.url)
request.url = url.copy_add_param(self.name, self.api_key)
client.event_hooks["request"].append(_add_api_key_param)
class OAuth2ClientCredentials(AuthScheme):
"""OAuth2 Client Credentials flow authentication.
Attributes:
token_url: OAuth2 token endpoint URL.
client_id: OAuth2 client identifier.
client_secret: OAuth2 client secret.
scopes: List of required OAuth2 scopes.
"""
token_url: str = Field(description="OAuth2 token endpoint")
client_id: str = Field(description="OAuth2 client ID")
client_secret: str = Field(description="OAuth2 client secret")
scopes: list[str] = Field(
default_factory=list, description="Required OAuth2 scopes"
)
_access_token: str | None = PrivateAttr(default=None)
_token_expires_at: float | None = PrivateAttr(default=None)
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply OAuth2 access token to Authorization header.
Args:
client: HTTP client for making token requests.
headers: Current request headers.
Returns:
Updated headers with OAuth2 access token in Authorization header.
"""
if (
self._access_token is None
or self._token_expires_at is None
or time.time() >= self._token_expires_at
):
await self._fetch_token(client)
if self._access_token:
headers["Authorization"] = f"Bearer {self._access_token}"
return headers
async def _fetch_token(self, client: httpx.AsyncClient) -> None:
"""Fetch OAuth2 access token using client credentials flow.
Args:
client: HTTP client for making token request.
Raises:
httpx.HTTPStatusError: If token request fails.
"""
data = {
"grant_type": "client_credentials",
"client_id": self.client_id,
"client_secret": self.client_secret,
}
if self.scopes:
data["scope"] = " ".join(self.scopes)
response = await client.post(self.token_url, data=data)
response.raise_for_status()
token_data = response.json()
self._access_token = token_data["access_token"]
expires_in = token_data.get("expires_in", 3600)
self._token_expires_at = time.time() + expires_in - 60
class OAuth2AuthorizationCode(AuthScheme):
"""OAuth2 Authorization Code flow authentication.
Note: Requires interactive authorization.
Attributes:
authorization_url: OAuth2 authorization endpoint URL.
token_url: OAuth2 token endpoint URL.
client_id: OAuth2 client identifier.
client_secret: OAuth2 client secret.
redirect_uri: OAuth2 redirect URI for callback.
scopes: List of required OAuth2 scopes.
"""
authorization_url: str = Field(description="OAuth2 authorization endpoint")
token_url: str = Field(description="OAuth2 token endpoint")
client_id: str = Field(description="OAuth2 client ID")
client_secret: str = Field(description="OAuth2 client secret")
redirect_uri: str = Field(description="OAuth2 redirect URI")
scopes: list[str] = Field(
default_factory=list, description="Required OAuth2 scopes"
)
_access_token: str | None = PrivateAttr(default=None)
_refresh_token: str | None = PrivateAttr(default=None)
_token_expires_at: float | None = PrivateAttr(default=None)
_authorization_callback: Callable[[str], Awaitable[str]] | None = PrivateAttr(
default=None
)
def set_authorization_callback(
self, callback: Callable[[str], Awaitable[str]] | None
) -> None:
"""Set callback to handle authorization URL.
Args:
callback: Async function that receives authorization URL and returns auth code.
"""
self._authorization_callback = callback
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply OAuth2 access token to Authorization header.
Args:
client: HTTP client for making token requests.
headers: Current request headers.
Returns:
Updated headers with OAuth2 access token in Authorization header.
Raises:
ValueError: If authorization callback is not set.
"""
if self._access_token is None:
if self._authorization_callback is None:
msg = "Authorization callback not set. Use set_authorization_callback()"
raise ValueError(msg)
await self._fetch_initial_token(client)
elif self._token_expires_at and time.time() >= self._token_expires_at:
await self._refresh_access_token(client)
if self._access_token:
headers["Authorization"] = f"Bearer {self._access_token}"
return headers
async def _fetch_initial_token(self, client: httpx.AsyncClient) -> None:
"""Fetch initial access token using authorization code flow.
Args:
client: HTTP client for making token request.
Raises:
ValueError: If authorization callback is not set.
httpx.HTTPStatusError: If token request fails.
"""
params = {
"response_type": "code",
"client_id": self.client_id,
"redirect_uri": self.redirect_uri,
"scope": " ".join(self.scopes),
}
auth_url = f"{self.authorization_url}?{urllib.parse.urlencode(params)}"
if self._authorization_callback is None:
msg = "Authorization callback not set"
raise ValueError(msg)
auth_code = await self._authorization_callback(auth_url)
data = {
"grant_type": "authorization_code",
"code": auth_code,
"client_id": self.client_id,
"client_secret": self.client_secret,
"redirect_uri": self.redirect_uri,
}
response = await client.post(self.token_url, data=data)
response.raise_for_status()
token_data = response.json()
self._access_token = token_data["access_token"]
self._refresh_token = token_data.get("refresh_token")
expires_in = token_data.get("expires_in", 3600)
self._token_expires_at = time.time() + expires_in - 60
async def _refresh_access_token(self, client: httpx.AsyncClient) -> None:
"""Refresh the access token using refresh token.
Args:
client: HTTP client for making token request.
Raises:
httpx.HTTPStatusError: If token refresh request fails.
"""
if not self._refresh_token:
await self._fetch_initial_token(client)
return
data = {
"grant_type": "refresh_token",
"refresh_token": self._refresh_token,
"client_id": self.client_id,
"client_secret": self.client_secret,
}
response = await client.post(self.token_url, data=data)
response.raise_for_status()
token_data = response.json()
self._access_token = token_data["access_token"]
if "refresh_token" in token_data:
self._refresh_token = token_data["refresh_token"]
expires_in = token_data.get("expires_in", 3600)
self._token_expires_at = time.time() + expires_in - 60
__all__ = [
"APIKeyAuth",
"AuthScheme",
"BearerTokenAuth",
"HTTPBasicAuth",
"HTTPDigestAuth",
"OAuth2AuthorizationCode",
"OAuth2ClientCredentials",
]
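Migrating off the deprecated module is a one-line import change; a sketch (the classes are interchangeable at runtime, but the old path emits a FutureWarning on use):

# Before (deprecated, warns at use):
from crewai.a2a.auth.schemas import BearerTokenAuth

# After:
from crewai.a2a.auth import BearerTokenAuth

auth = BearerTokenAuth(token="my-token")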


@@ -0,0 +1,739 @@
"""Server-side authentication schemes for A2A protocol.
These schemes validate incoming requests to A2A server endpoints.
Supported authentication methods:
- Simple token validation with static bearer tokens
- OpenID Connect with JWT validation using JWKS
- OAuth2 with JWT validation or token introspection
"""
from __future__ import annotations
from abc import ABC, abstractmethod
from dataclasses import dataclass
import logging
import os
from typing import TYPE_CHECKING, Annotated, Any, ClassVar, Literal
import jwt
from jwt import PyJWKClient
from pydantic import (
BaseModel,
BeforeValidator,
ConfigDict,
Field,
HttpUrl,
PrivateAttr,
SecretStr,
model_validator,
)
from typing_extensions import Self
if TYPE_CHECKING:
from a2a.types import OAuth2SecurityScheme
logger = logging.getLogger(__name__)
try:
from fastapi import HTTPException, status as http_status
HTTP_401_UNAUTHORIZED = http_status.HTTP_401_UNAUTHORIZED
HTTP_500_INTERNAL_SERVER_ERROR = http_status.HTTP_500_INTERNAL_SERVER_ERROR
HTTP_503_SERVICE_UNAVAILABLE = http_status.HTTP_503_SERVICE_UNAVAILABLE
except ImportError:
class HTTPException(Exception): # type: ignore[no-redef] # noqa: N818
"""Fallback HTTPException when FastAPI is not installed."""
def __init__(
self,
status_code: int,
detail: str | None = None,
headers: dict[str, str] | None = None,
) -> None:
self.status_code = status_code
self.detail = detail
self.headers = headers
super().__init__(detail)
HTTP_401_UNAUTHORIZED = 401
HTTP_500_INTERNAL_SERVER_ERROR = 500
HTTP_503_SERVICE_UNAVAILABLE = 503
def _coerce_secret_str(v: str | SecretStr | None) -> SecretStr | None:
"""Coerce string to SecretStr."""
if v is None or isinstance(v, SecretStr):
return v
return SecretStr(v)
CoercedSecretStr = Annotated[SecretStr, BeforeValidator(_coerce_secret_str)]
JWTAlgorithm = Literal[
"RS256",
"RS384",
"RS512",
"ES256",
"ES384",
"ES512",
"PS256",
"PS384",
"PS512",
]
@dataclass
class AuthenticatedUser:
"""Result of successful authentication.
Attributes:
token: The original token that was validated.
scheme: Name of the authentication scheme used.
claims: JWT claims from OIDC or OAuth2 authentication.
"""
token: str
scheme: str
claims: dict[str, Any] | None = None
class ServerAuthScheme(ABC, BaseModel):
"""Base class for server-side authentication schemes.
Each scheme validates incoming requests and returns an AuthenticatedUser
on success, or raises HTTPException on failure.
"""
model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")
@abstractmethod
async def authenticate(self, token: str) -> AuthenticatedUser:
"""Authenticate the provided token.
Args:
token: The bearer token to authenticate.
Returns:
AuthenticatedUser on successful authentication.
Raises:
HTTPException: If authentication fails.
"""
...
class SimpleTokenAuth(ServerAuthScheme):
"""Simple bearer token authentication.
Validates tokens against a configured static token or AUTH_TOKEN env var.
Attributes:
token: Expected token value. Falls back to AUTH_TOKEN env var if not set.
"""
token: CoercedSecretStr | None = Field(
default=None,
description="Expected token. Falls back to AUTH_TOKEN env var.",
)
def _get_expected_token(self) -> str | None:
"""Get the expected token value."""
if self.token:
return self.token.get_secret_value()
return os.environ.get("AUTH_TOKEN")
async def authenticate(self, token: str) -> AuthenticatedUser:
"""Authenticate using simple token comparison.
Args:
token: The bearer token to authenticate.
Returns:
AuthenticatedUser on successful authentication.
Raises:
HTTPException: If authentication fails.
"""
expected = self._get_expected_token()
if expected is None:
logger.warning(
"Simple token authentication failed",
extra={"reason": "no_token_configured"},
)
raise HTTPException(
status_code=HTTP_401_UNAUTHORIZED,
detail="Authentication not configured",
)
if token != expected:
raise HTTPException(
status_code=HTTP_401_UNAUTHORIZED,
detail="Invalid or missing authentication credentials",
)
return AuthenticatedUser(
token=token,
scheme="simple_token",
)
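A quick sketch of the static-token path using the AUTH_TOKEN fallback (the token value is illustrative):

import asyncio
import os
from crewai.a2a.auth.server_schemes import SimpleTokenAuth

os.environ["AUTH_TOKEN"] = "s3cret"
auth = SimpleTokenAuth()  # no explicit token, so AUTH_TOKEN is used
user = asyncio.run(auth.authenticate("s3cret"))
print(user.scheme)  # simple_token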
class OIDCAuth(ServerAuthScheme):
"""OpenID Connect authentication.
Validates JWTs using JWKS with caching support via PyJWT.
Attributes:
issuer: The OpenID Connect issuer URL.
audience: The expected audience claim.
jwks_url: Optional explicit JWKS URL. Derived from issuer if not set.
algorithms: List of allowed signing algorithms.
required_claims: List of claims that must be present in the token.
jwks_cache_ttl: TTL for JWKS cache in seconds.
clock_skew_seconds: Allowed clock skew for token validation.
"""
issuer: HttpUrl = Field(
description="OpenID Connect issuer URL (e.g., https://auth.example.com)"
)
audience: str = Field(description="Expected audience claim (e.g., api://my-agent)")
jwks_url: HttpUrl | None = Field(
default=None,
description="Explicit JWKS URL. Derived from issuer if not set.",
)
algorithms: list[str] = Field(
default_factory=lambda: ["RS256"],
description="List of allowed signing algorithms (RS256, ES256, etc.)",
)
required_claims: list[str] = Field(
default_factory=lambda: ["exp", "iat", "iss", "aud", "sub"],
description="List of claims that must be present in the token",
)
jwks_cache_ttl: int = Field(
default=3600,
description="TTL for JWKS cache in seconds",
ge=60,
)
clock_skew_seconds: float = Field(
default=30.0,
description="Allowed clock skew for token validation",
ge=0.0,
)
_jwk_client: PyJWKClient | None = PrivateAttr(default=None)
@model_validator(mode="after")
def _init_jwk_client(self) -> Self:
"""Initialize the JWK client after model creation."""
jwks_url = (
str(self.jwks_url)
if self.jwks_url
else f"{str(self.issuer).rstrip('/')}/.well-known/jwks.json"
)
self._jwk_client = PyJWKClient(jwks_url, lifespan=self.jwks_cache_ttl)
return self
async def authenticate(self, token: str) -> AuthenticatedUser:
"""Authenticate using OIDC JWT validation.
Args:
token: The JWT to authenticate.
Returns:
AuthenticatedUser on successful authentication.
Raises:
HTTPException: If authentication fails.
"""
if self._jwk_client is None:
raise HTTPException(
status_code=HTTP_500_INTERNAL_SERVER_ERROR,
detail="OIDC not initialized",
)
try:
signing_key = self._jwk_client.get_signing_key_from_jwt(token)
claims = jwt.decode(
token,
signing_key.key,
algorithms=self.algorithms,
audience=self.audience,
issuer=str(self.issuer).rstrip("/"),
leeway=self.clock_skew_seconds,
options={
"require": self.required_claims,
},
)
return AuthenticatedUser(
token=token,
scheme="oidc",
claims=claims,
)
except jwt.ExpiredSignatureError:
logger.debug(
"OIDC authentication failed",
extra={"reason": "token_expired", "scheme": "oidc"},
)
raise HTTPException(
status_code=HTTP_401_UNAUTHORIZED,
detail="Token has expired",
) from None
except jwt.InvalidAudienceError:
logger.debug(
"OIDC authentication failed",
extra={"reason": "invalid_audience", "scheme": "oidc"},
)
raise HTTPException(
status_code=HTTP_401_UNAUTHORIZED,
detail="Invalid token audience",
) from None
except jwt.InvalidIssuerError:
logger.debug(
"OIDC authentication failed",
extra={"reason": "invalid_issuer", "scheme": "oidc"},
)
raise HTTPException(
status_code=HTTP_401_UNAUTHORIZED,
detail="Invalid token issuer",
) from None
except jwt.MissingRequiredClaimError as e:
logger.debug(
"OIDC authentication failed",
extra={"reason": "missing_claim", "claim": e.claim, "scheme": "oidc"},
)
raise HTTPException(
status_code=HTTP_401_UNAUTHORIZED,
detail=f"Missing required claim: {e.claim}",
) from None
except jwt.PyJWKClientError as e:
logger.error(
"OIDC authentication failed",
extra={
"reason": "jwks_client_error",
"error": str(e),
"scheme": "oidc",
},
)
raise HTTPException(
status_code=HTTP_503_SERVICE_UNAVAILABLE,
detail="Unable to fetch signing keys",
) from None
except jwt.InvalidTokenError as e:
logger.debug(
"OIDC authentication failed",
extra={"reason": "invalid_token", "error": str(e), "scheme": "oidc"},
)
raise HTTPException(
status_code=HTTP_401_UNAUTHORIZED,
detail="Invalid or missing authentication credentials",
) from None
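A configuration sketch, assuming a standard issuer layout (the issuer and audience values are illustrative):

from crewai.a2a.auth.server_schemes import OIDCAuth

auth = OIDCAuth(
    issuer="https://auth.example.com",
    audience="api://my-agent",
)
# jwks_url defaults to https://auth.example.com/.well-known/jwks.json;
# `await auth.authenticate(token)` returns an AuthenticatedUser with JWT
# claims, or raises HTTPException (401/503).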
class OAuth2ServerAuth(ServerAuthScheme):
"""OAuth2 authentication for A2A server.
Declares OAuth2 security scheme in AgentCard and validates tokens using
either JWKS for JWT tokens or token introspection for opaque tokens.
This is distinct from OIDCAuth in that it declares an explicit OAuth2SecurityScheme
with flows, rather than an OpenIdConnectSecurityScheme with discovery URL.
Attributes:
token_url: OAuth2 token endpoint URL for client_credentials flow.
authorization_url: OAuth2 authorization endpoint for authorization_code flow.
refresh_url: Optional refresh token endpoint URL.
scopes: Available OAuth2 scopes with descriptions.
jwks_url: JWKS URL for JWT validation. Required if not using introspection.
introspection_url: Token introspection endpoint (RFC 7662). Alternative to JWKS.
introspection_client_id: Client ID for introspection endpoint authentication.
introspection_client_secret: Client secret for introspection endpoint.
audience: Expected audience claim for JWT validation.
issuer: Expected issuer claim for JWT validation.
algorithms: Allowed JWT signing algorithms.
required_claims: Claims that must be present in the token.
jwks_cache_ttl: TTL for JWKS cache in seconds.
clock_skew_seconds: Allowed clock skew for token validation.
"""
token_url: HttpUrl = Field(
description="OAuth2 token endpoint URL",
)
authorization_url: HttpUrl | None = Field(
default=None,
description="OAuth2 authorization endpoint URL for authorization_code flow",
)
refresh_url: HttpUrl | None = Field(
default=None,
description="OAuth2 refresh token endpoint URL",
)
scopes: dict[str, str] = Field(
default_factory=dict,
description="Available OAuth2 scopes with descriptions",
)
jwks_url: HttpUrl | None = Field(
default=None,
description="JWKS URL for JWT validation. Required if not using introspection.",
)
introspection_url: HttpUrl | None = Field(
default=None,
description="Token introspection endpoint (RFC 7662). Alternative to JWKS.",
)
introspection_client_id: str | None = Field(
default=None,
description="Client ID for introspection endpoint authentication",
)
introspection_client_secret: CoercedSecretStr | None = Field(
default=None,
description="Client secret for introspection endpoint authentication",
)
audience: str | None = Field(
default=None,
description="Expected audience claim for JWT validation",
)
issuer: str | None = Field(
default=None,
description="Expected issuer claim for JWT validation",
)
algorithms: list[str] = Field(
default_factory=lambda: ["RS256"],
description="Allowed JWT signing algorithms",
)
required_claims: list[str] = Field(
default_factory=lambda: ["exp", "iat"],
description="Claims that must be present in the token",
)
jwks_cache_ttl: int = Field(
default=3600,
description="TTL for JWKS cache in seconds",
ge=60,
)
clock_skew_seconds: float = Field(
default=30.0,
description="Allowed clock skew for token validation",
ge=0.0,
)
_jwk_client: PyJWKClient | None = PrivateAttr(default=None)
@model_validator(mode="after")
def _validate_and_init(self) -> Self:
"""Validate configuration and initialize JWKS client if needed."""
if not self.jwks_url and not self.introspection_url:
raise ValueError(
"Either jwks_url or introspection_url must be provided for token validation"
)
if self.introspection_url:
if not self.introspection_client_id or not self.introspection_client_secret:
raise ValueError(
"introspection_client_id and introspection_client_secret are required "
"when using token introspection"
)
if self.jwks_url:
self._jwk_client = PyJWKClient(
str(self.jwks_url), lifespan=self.jwks_cache_ttl
)
return self
async def authenticate(self, token: str) -> AuthenticatedUser:
"""Authenticate using OAuth2 token validation.
Uses JWKS validation if jwks_url is configured, otherwise falls back
to token introspection.
Args:
token: The OAuth2 access token to authenticate.
Returns:
AuthenticatedUser on successful authentication.
Raises:
HTTPException: If authentication fails.
"""
if self._jwk_client:
return await self._authenticate_jwt(token)
return await self._authenticate_introspection(token)
async def _authenticate_jwt(self, token: str) -> AuthenticatedUser:
"""Authenticate using JWKS JWT validation."""
if self._jwk_client is None:
raise HTTPException(
status_code=HTTP_500_INTERNAL_SERVER_ERROR,
detail="OAuth2 JWKS not initialized",
)
try:
signing_key = self._jwk_client.get_signing_key_from_jwt(token)
decode_options: dict[str, Any] = {
"require": self.required_claims,
}
claims = jwt.decode(
token,
signing_key.key,
algorithms=self.algorithms,
audience=self.audience,
issuer=self.issuer,
leeway=self.clock_skew_seconds,
options=decode_options,
)
return AuthenticatedUser(
token=token,
scheme="oauth2",
claims=claims,
)
except jwt.ExpiredSignatureError:
logger.debug(
"OAuth2 authentication failed",
extra={"reason": "token_expired", "scheme": "oauth2"},
)
raise HTTPException(
status_code=HTTP_401_UNAUTHORIZED,
detail="Token has expired",
) from None
except jwt.InvalidAudienceError:
logger.debug(
"OAuth2 authentication failed",
extra={"reason": "invalid_audience", "scheme": "oauth2"},
)
raise HTTPException(
status_code=HTTP_401_UNAUTHORIZED,
detail="Invalid token audience",
) from None
except jwt.InvalidIssuerError:
logger.debug(
"OAuth2 authentication failed",
extra={"reason": "invalid_issuer", "scheme": "oauth2"},
)
raise HTTPException(
status_code=HTTP_401_UNAUTHORIZED,
detail="Invalid token issuer",
) from None
except jwt.MissingRequiredClaimError as e:
logger.debug(
"OAuth2 authentication failed",
extra={"reason": "missing_claim", "claim": e.claim, "scheme": "oauth2"},
)
raise HTTPException(
status_code=HTTP_401_UNAUTHORIZED,
detail=f"Missing required claim: {e.claim}",
) from None
except jwt.PyJWKClientError as e:
logger.error(
"OAuth2 authentication failed",
extra={
"reason": "jwks_client_error",
"error": str(e),
"scheme": "oauth2",
},
)
raise HTTPException(
status_code=HTTP_503_SERVICE_UNAVAILABLE,
detail="Unable to fetch signing keys",
) from None
except jwt.InvalidTokenError as e:
logger.debug(
"OAuth2 authentication failed",
extra={"reason": "invalid_token", "error": str(e), "scheme": "oauth2"},
)
raise HTTPException(
status_code=HTTP_401_UNAUTHORIZED,
detail="Invalid or missing authentication credentials",
) from None
async def _authenticate_introspection(self, token: str) -> AuthenticatedUser:
"""Authenticate using OAuth2 token introspection (RFC 7662)."""
import httpx
if not self.introspection_url:
raise HTTPException(
status_code=HTTP_500_INTERNAL_SERVER_ERROR,
detail="OAuth2 introspection not configured",
)
try:
async with httpx.AsyncClient() as client:
response = await client.post(
str(self.introspection_url),
data={"token": token},
auth=(
self.introspection_client_id or "",
self.introspection_client_secret.get_secret_value()
if self.introspection_client_secret
else "",
),
)
response.raise_for_status()
introspection_result = response.json()
except httpx.HTTPStatusError as e:
logger.error(
"OAuth2 introspection failed",
extra={"reason": "http_error", "status_code": e.response.status_code},
)
raise HTTPException(
status_code=HTTP_503_SERVICE_UNAVAILABLE,
detail="Token introspection service unavailable",
) from None
except Exception as e:
logger.error(
"OAuth2 introspection failed",
extra={"reason": "unexpected_error", "error": str(e)},
)
raise HTTPException(
status_code=HTTP_503_SERVICE_UNAVAILABLE,
detail="Token introspection failed",
) from None
if not introspection_result.get("active", False):
logger.debug(
"OAuth2 authentication failed",
extra={"reason": "token_not_active", "scheme": "oauth2"},
)
raise HTTPException(
status_code=HTTP_401_UNAUTHORIZED,
detail="Token is not active",
)
return AuthenticatedUser(
token=token,
scheme="oauth2",
claims=introspection_result,
)
def to_security_scheme(self) -> OAuth2SecurityScheme:
"""Generate OAuth2SecurityScheme for AgentCard declaration.
Creates an OAuth2SecurityScheme with appropriate flows based on
the configured URLs. Includes client_credentials flow if token_url
is set, and authorization_code flow if authorization_url is set.
Returns:
OAuth2SecurityScheme suitable for use in AgentCard security_schemes.
"""
from a2a.types import (
AuthorizationCodeOAuthFlow,
ClientCredentialsOAuthFlow,
OAuth2SecurityScheme,
OAuthFlows,
)
client_credentials = None
authorization_code = None
if self.token_url:
client_credentials = ClientCredentialsOAuthFlow(
token_url=str(self.token_url),
refresh_url=str(self.refresh_url) if self.refresh_url else None,
scopes=self.scopes,
)
if self.authorization_url:
authorization_code = AuthorizationCodeOAuthFlow(
authorization_url=str(self.authorization_url),
token_url=str(self.token_url),
refresh_url=str(self.refresh_url) if self.refresh_url else None,
scopes=self.scopes,
)
return OAuth2SecurityScheme(
flows=OAuthFlows(
client_credentials=client_credentials,
authorization_code=authorization_code,
),
description="OAuth2 authentication",
)
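A sketch of the introspection path for opaque tokens (all URLs and credentials are illustrative):

from crewai.a2a.auth.server_schemes import OAuth2ServerAuth

auth = OAuth2ServerAuth(
    token_url="https://auth.example.com/token",
    introspection_url="https://auth.example.com/introspect",
    introspection_client_id="resource-server",
    introspection_client_secret="rs-secret",
    scopes={"agent.read": "Read access to agent tasks"},
)
scheme = auth.to_security_scheme()  # declared in the AgentCard's security_schemes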
class APIKeyServerAuth(ServerAuthScheme):
"""API Key authentication for A2A server.
Validates requests using an API key in a header, query parameter, or cookie.
Attributes:
name: The name of the API key parameter (default: X-API-Key).
location: Where to look for the API key (header, query, or cookie).
api_key: The expected API key value.
"""
name: str = Field(
default="X-API-Key",
description="Name of the API key parameter",
)
location: Literal["header", "query", "cookie"] = Field(
default="header",
description="Where to look for the API key",
)
api_key: CoercedSecretStr = Field(
description="Expected API key value",
)
async def authenticate(self, token: str) -> AuthenticatedUser:
"""Authenticate using API key comparison.
Args:
token: The API key to authenticate.
Returns:
AuthenticatedUser on successful authentication.
Raises:
HTTPException: If authentication fails.
"""
if token != self.api_key.get_secret_value():
raise HTTPException(
status_code=HTTP_401_UNAUTHORIZED,
detail="Invalid API key",
)
return AuthenticatedUser(
token=token,
scheme="api_key",
)
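A header-based key check in two lines (key value illustrative):

auth = APIKeyServerAuth(api_key="k-123", name="X-API-Key", location="header")
# await auth.authenticate(request.headers["X-API-Key"]) -> AuthenticatedUser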
class MTLSServerAuth(ServerAuthScheme):
"""Mutual TLS authentication marker for AgentCard declaration.
This scheme is primarily for AgentCard security_schemes declaration.
Actual mTLS verification happens at the TLS/transport layer, not
at the application layer via token validation.
When configured, this signals to clients that the server requires
client certificates for authentication.
"""
description: str = Field(
default="Mutual TLS certificate authentication",
description="Description for the security scheme",
)
async def authenticate(self, token: str) -> AuthenticatedUser:
"""Return authenticated user for mTLS.
mTLS verification happens at the transport layer before this is called.
If we reach this point, the TLS handshake with client cert succeeded.
Args:
token: Certificate subject or identifier (from TLS layer).
Returns:
AuthenticatedUser indicating mTLS authentication.
"""
return AuthenticatedUser(
token=token or "mtls-verified",
scheme="mtls",
)


@@ -6,8 +6,10 @@ OAuth2, API keys, and HTTP authentication methods.
import asyncio
from collections.abc import Awaitable, Callable, MutableMapping
import hashlib
import re
from typing import Final
import threading
from typing import Final, Literal, cast
from a2a.client.errors import A2AClientHTTPError
from a2a.types import (
@@ -18,10 +20,10 @@ from a2a.types import (
)
from httpx import AsyncClient, Response
from crewai.a2a.auth.schemas import (
from crewai.a2a.auth.client_schemes import (
APIKeyAuth,
AuthScheme,
BearerTokenAuth,
ClientAuthScheme,
HTTPBasicAuth,
HTTPDigestAuth,
OAuth2AuthorizationCode,
@@ -29,12 +31,44 @@ from crewai.a2a.auth.schemas import (
)
_auth_store: dict[int, AuthScheme | None] = {}
class _AuthStore:
"""Store for authentication schemes with safe concurrent access."""
def __init__(self) -> None:
self._store: dict[str, ClientAuthScheme | None] = {}
self._lock = threading.RLock()
@staticmethod
def compute_key(auth_type: str, auth_data: str) -> str:
"""Compute a collision-resistant key using SHA-256."""
content = f"{auth_type}:{auth_data}"
return hashlib.sha256(content.encode()).hexdigest()
def set(self, key: str, auth: ClientAuthScheme | None) -> None:
"""Store an auth scheme."""
with self._lock:
self._store[key] = auth
def get(self, key: str) -> ClientAuthScheme | None:
"""Retrieve an auth scheme by key."""
with self._lock:
return self._store.get(key)
def __setitem__(self, key: str, value: ClientAuthScheme | None) -> None:
with self._lock:
self._store[key] = value
def __getitem__(self, key: str) -> ClientAuthScheme | None:
with self._lock:
return self._store[key]
_auth_store = _AuthStore()
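A sketch of how the store is keyed (the inputs to compute_key are illustrative):

key = _AuthStore.compute_key("bearer", "https://agent.example.com")
_auth_store[key] = BearerTokenAuth(token="t0k3n")
assert _auth_store[key] is not None  # same key resolves to the same scheme across threads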
_SCHEME_PATTERN: Final[re.Pattern[str]] = re.compile(r"(\w+)\s+(.+?)(?=,\s*\w+\s+|$)")
_PARAM_PATTERN: Final[re.Pattern[str]] = re.compile(r'(\w+)=(?:"([^"]*)"|([^\s,]+))')
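To illustrate what the two patterns extract, a sketch applying them directly to a sample WWW-Authenticate value (the header content is illustrative):

header = 'Bearer realm="agent", error="invalid_token"'
for scheme, params in _SCHEME_PATTERN.findall(header):
    pairs = {k: quoted or bare for k, quoted, bare in _PARAM_PATTERN.findall(params)}
    print(scheme, pairs)
# Bearer {'realm': 'agent', 'error': 'invalid_token'}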
_SCHEME_AUTH_MAPPING: Final[dict[type, tuple[type[AuthScheme], ...]]] = {
_SCHEME_AUTH_MAPPING: Final[dict[type, tuple[type[ClientAuthScheme], ...]]] = {
OAuth2SecurityScheme: (
OAuth2ClientCredentials,
OAuth2AuthorizationCode,
@@ -43,7 +77,9 @@ _SCHEME_AUTH_MAPPING: Final[dict[type, tuple[type[AuthScheme], ...]]] = {
APIKeySecurityScheme: (APIKeyAuth,),
}
_HTTP_SCHEME_MAPPING: Final[dict[str, type[AuthScheme]]] = {
_HTTPSchemeType = Literal["basic", "digest", "bearer"]
_HTTP_SCHEME_MAPPING: Final[dict[_HTTPSchemeType, type[ClientAuthScheme]]] = {
"basic": HTTPBasicAuth,
"digest": HTTPDigestAuth,
"bearer": BearerTokenAuth,
@@ -51,8 +87,8 @@ _HTTP_SCHEME_MAPPING: Final[dict[str, type[AuthScheme]]] = {
def _raise_auth_mismatch(
expected_classes: type[AuthScheme] | tuple[type[AuthScheme], ...],
provided_auth: AuthScheme,
expected_classes: type[ClientAuthScheme] | tuple[type[ClientAuthScheme], ...],
provided_auth: ClientAuthScheme,
) -> None:
"""Raise authentication mismatch error.
@@ -111,7 +147,7 @@ def parse_www_authenticate(header_value: str) -> dict[str, dict[str, str]]:
def validate_auth_against_agent_card(
agent_card: AgentCard, auth: AuthScheme | None
agent_card: AgentCard, auth: ClientAuthScheme | None
) -> None:
"""Validate that provided auth matches AgentCard security requirements.
@@ -145,7 +181,8 @@ def validate_auth_against_agent_card(
return
if isinstance(scheme, HTTPAuthSecurityScheme):
if required_class := _HTTP_SCHEME_MAPPING.get(scheme.scheme.lower()):
scheme_key = cast(_HTTPSchemeType, scheme.scheme.lower())
if required_class := _HTTP_SCHEME_MAPPING.get(scheme_key):
if not isinstance(auth, required_class):
_raise_auth_mismatch(required_class, auth)
return
@@ -156,7 +193,7 @@ def validate_auth_against_agent_card(
async def retry_on_401(
request_func: Callable[[], Awaitable[Response]],
auth_scheme: AuthScheme | None,
auth_scheme: ClientAuthScheme | None,
client: AsyncClient,
headers: MutableMapping[str, str],
max_retries: int = 3,


@@ -5,14 +5,25 @@ This module is separate from experimental.a2a to avoid circular imports.
from __future__ import annotations
from importlib.metadata import version
from typing import Any, ClassVar, Literal
from pathlib import Path
from typing import Any, ClassVar, Literal, cast
import warnings
from pydantic import BaseModel, ConfigDict, Field
from typing_extensions import deprecated
from pydantic import (
BaseModel,
ConfigDict,
Field,
FilePath,
PrivateAttr,
SecretStr,
model_validator,
)
from typing_extensions import Self, deprecated
from crewai.a2a.auth.schemas import AuthScheme
from crewai.a2a.types import TransportType, Url
from crewai.a2a.auth.client_schemes import ClientAuthScheme
from crewai.a2a.auth.server_schemes import ServerAuthScheme
from crewai.a2a.extensions.base import ValidatedA2AExtension
from crewai.a2a.types import ProtocolVersion, TransportType, Url
try:
@@ -25,16 +36,17 @@ try:
SecurityScheme,
)
from crewai.a2a.extensions.server import ServerExtension
from crewai.a2a.updates import UpdateConfig
except ImportError:
UpdateConfig = Any
AgentCapabilities = Any
AgentCardSignature = Any
AgentInterface = Any
AgentProvider = Any
SecurityScheme = Any
AgentSkill = Any
UpdateConfig = Any # type: ignore[misc,assignment]
UpdateConfig: Any = Any # type: ignore[no-redef]
AgentCapabilities: Any = Any # type: ignore[no-redef]
AgentCardSignature: Any = Any # type: ignore[no-redef]
AgentInterface: Any = Any # type: ignore[no-redef]
AgentProvider: Any = Any # type: ignore[no-redef]
SecurityScheme: Any = Any # type: ignore[no-redef]
AgentSkill: Any = Any # type: ignore[no-redef]
ServerExtension: Any = Any # type: ignore[no-redef]
def _get_default_update_config() -> UpdateConfig:
@@ -43,6 +55,309 @@ def _get_default_update_config() -> UpdateConfig:
return StreamingConfig()
SigningAlgorithm = Literal[
"RS256",
"RS384",
"RS512",
"ES256",
"ES384",
"ES512",
"PS256",
"PS384",
"PS512",
]
class AgentCardSigningConfig(BaseModel):
"""Configuration for AgentCard JWS signing.
Provides the private key and algorithm settings for signing AgentCards.
Either private_key_path or private_key_pem must be provided, but not both.
Attributes:
private_key_path: Path to a PEM-encoded private key file.
private_key_pem: PEM-encoded private key as a secret string.
key_id: Optional key identifier for the JWS header (kid claim).
algorithm: Signing algorithm (RS256, ES256, PS256, etc.).
"""
model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")
private_key_path: FilePath | None = Field(
default=None,
description="Path to PEM-encoded private key file",
)
private_key_pem: SecretStr | None = Field(
default=None,
description="PEM-encoded private key",
)
key_id: str | None = Field(
default=None,
description="Key identifier for JWS header (kid claim)",
)
algorithm: SigningAlgorithm = Field(
default="RS256",
description="Signing algorithm (RS256, ES256, PS256, etc.)",
)
@model_validator(mode="after")
def _validate_key_source(self) -> Self:
"""Ensure exactly one key source is provided."""
has_path = self.private_key_path is not None
has_pem = self.private_key_pem is not None
if not has_path and not has_pem:
raise ValueError(
"Either private_key_path or private_key_pem must be provided"
)
if has_path and has_pem:
raise ValueError(
"Only one of private_key_path or private_key_pem should be provided"
)
return self
def get_private_key(self) -> str:
"""Get the private key content.
Returns:
The PEM-encoded private key as a string.
"""
if self.private_key_pem:
return self.private_key_pem.get_secret_value()
if self.private_key_path:
return Path(self.private_key_path).read_text()
raise ValueError("No private key configured")
class GRPCServerConfig(BaseModel):
"""gRPC server transport configuration.
Presence of this config in ServerTransportConfig.grpc enables gRPC transport.
Attributes:
host: Hostname to advertise in agent cards (default: localhost).
Use docker service name (e.g., 'web') for docker-compose setups.
port: Port for the gRPC server.
tls_cert_path: Path to TLS certificate file for gRPC.
tls_key_path: Path to TLS private key file for gRPC.
max_workers: Maximum number of workers for the gRPC thread pool.
reflection_enabled: Whether to enable gRPC reflection for debugging.
"""
model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")
host: str = Field(
default="localhost",
description="Hostname to advertise in agent cards for gRPC connections",
)
port: int = Field(
default=50051,
description="Port for the gRPC server",
)
tls_cert_path: str | None = Field(
default=None,
description="Path to TLS certificate file for gRPC",
)
tls_key_path: str | None = Field(
default=None,
description="Path to TLS private key file for gRPC",
)
max_workers: int = Field(
default=10,
description="Maximum number of workers for the gRPC thread pool",
)
reflection_enabled: bool = Field(
default=False,
description="Whether to enable gRPC reflection for debugging",
)
class GRPCClientConfig(BaseModel):
"""gRPC client transport configuration.
Attributes:
max_send_message_length: Maximum size for outgoing messages in bytes.
max_receive_message_length: Maximum size for incoming messages in bytes.
keepalive_time_ms: Time between keepalive pings in milliseconds.
keepalive_timeout_ms: Timeout for keepalive ping response in milliseconds.
"""
model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")
max_send_message_length: int | None = Field(
default=None,
description="Maximum size for outgoing messages in bytes",
)
max_receive_message_length: int | None = Field(
default=None,
description="Maximum size for incoming messages in bytes",
)
keepalive_time_ms: int | None = Field(
default=None,
description="Time between keepalive pings in milliseconds",
)
keepalive_timeout_ms: int | None = Field(
default=None,
description="Timeout for keepalive ping response in milliseconds",
)
class JSONRPCServerConfig(BaseModel):
"""JSON-RPC server transport configuration.
Presence of this config in ServerTransportConfig.jsonrpc enables JSON-RPC transport.
Attributes:
rpc_path: URL path for the JSON-RPC endpoint.
agent_card_path: URL path for the agent card endpoint.
"""
model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")
rpc_path: str = Field(
default="/a2a",
description="URL path for the JSON-RPC endpoint",
)
agent_card_path: str = Field(
default="/.well-known/agent-card.json",
description="URL path for the agent card endpoint",
)
class JSONRPCClientConfig(BaseModel):
"""JSON-RPC client transport configuration.
Attributes:
max_request_size: Maximum request body size in bytes.
"""
model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")
max_request_size: int | None = Field(
default=None,
description="Maximum request body size in bytes",
)
class HTTPJSONConfig(BaseModel):
"""HTTP+JSON transport configuration.
Presence of this config in ServerTransportConfig.http_json enables HTTP+JSON transport.
"""
model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")
class ServerPushNotificationConfig(BaseModel):
"""Configuration for outgoing webhook push notifications.
Controls how the server signs and delivers push notifications to clients.
Attributes:
signature_secret: Shared secret for HMAC-SHA256 signing of outgoing webhooks.
"""
model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")
signature_secret: SecretStr | None = Field(
default=None,
description="Shared secret for HMAC-SHA256 signing of outgoing push notifications",
)
class ServerTransportConfig(BaseModel):
"""Transport configuration for A2A server.
Groups all transport-related settings including preferred transport
and protocol-specific configurations.
Attributes:
preferred: Transport protocol for the preferred endpoint.
jsonrpc: JSON-RPC server transport configuration.
grpc: gRPC server transport configuration.
http_json: HTTP+JSON transport configuration.
"""
model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")
preferred: TransportType = Field(
default="JSONRPC",
description="Transport protocol for the preferred endpoint",
)
jsonrpc: JSONRPCServerConfig = Field(
default_factory=JSONRPCServerConfig,
description="JSON-RPC server transport configuration",
)
grpc: GRPCServerConfig | None = Field(
default=None,
description="gRPC server transport configuration",
)
http_json: HTTPJSONConfig | None = Field(
default=None,
description="HTTP+JSON transport configuration",
)
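A sketch enabling gRPC alongside the default JSON-RPC endpoints (host and port are illustrative; 'web' mirrors the docker-compose note above):

transport = ServerTransportConfig(
    preferred="GRPC",
    grpc=GRPCServerConfig(host="web", port=50051),
    # jsonrpc keeps its defaults: /a2a and /.well-known/agent-card.json
)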
def _migrate_client_transport_fields(
transport: ClientTransportConfig,
transport_protocol: TransportType | None,
supported_transports: list[TransportType] | None,
) -> None:
"""Migrate deprecated transport fields to new config."""
if transport_protocol is not None:
warnings.warn(
"transport_protocol is deprecated, use transport=ClientTransportConfig(preferred=...) instead",
FutureWarning,
stacklevel=5,
)
object.__setattr__(transport, "preferred", transport_protocol)
if supported_transports is not None:
warnings.warn(
"supported_transports is deprecated, use transport=ClientTransportConfig(supported=...) instead",
FutureWarning,
stacklevel=5,
)
object.__setattr__(transport, "supported", supported_transports)
class ClientTransportConfig(BaseModel):
"""Transport configuration for A2A client.
Groups all client transport-related settings including preferred transport,
supported transports for negotiation, and protocol-specific configurations.
Transport negotiation logic:
1. If `preferred` is set and server supports it → use client's preferred
2. Otherwise, if server's preferred is in client's `supported` → use server's preferred
3. Otherwise, find first match from client's `supported` in server's interfaces (a sketch of this order follows the class)
Attributes:
preferred: Client's preferred transport. If set, client preference takes priority.
supported: Transports the client can use, in order of preference.
jsonrpc: JSON-RPC client transport configuration.
grpc: gRPC client transport configuration.
"""
model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")
preferred: TransportType | None = Field(
default=None,
description="Client's preferred transport. If set, takes priority over server preference.",
)
supported: list[TransportType] = Field(
default_factory=lambda: cast(list[TransportType], ["JSONRPC"]),
description="Transports the client can use, in order of preference",
)
jsonrpc: JSONRPCClientConfig = Field(
default_factory=JSONRPCClientConfig,
description="JSON-RPC client transport configuration",
)
grpc: GRPCClientConfig = Field(
default_factory=GRPCClientConfig,
description="gRPC client transport configuration",
)
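A sketch of the three-step negotiation order described in the docstring above (the helper name and the shape of server_interfaces are assumptions, not library API):

def negotiate(cfg: ClientTransportConfig,
              server_preferred: str,
              server_interfaces: list[str]) -> str | None:
    if cfg.preferred and cfg.preferred in server_interfaces:
        return cfg.preferred          # 1. client preference wins when supported
    if server_preferred in cfg.supported:
        return server_preferred       # 2. otherwise defer to the server's choice
    for candidate in cfg.supported:   # 3. first mutual match, in client order
        if candidate in server_interfaces:
            return candidate
    return None                       # no compatible transport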
@deprecated(
"""
`crewai.a2a.config.A2AConfig` is deprecated and will be removed in v2.0.0,
@@ -65,13 +380,14 @@ class A2AConfig(BaseModel):
fail_fast: If True, raise error when agent unreachable; if False, skip and continue.
trust_remote_completion_status: If True, return A2A agent's result directly when completed.
updates: Update mechanism config.
transport_protocol: A2A transport protocol (grpc, jsonrpc, http+json).
client_extensions: Client-side processing hooks for tool injection and prompt augmentation.
transport: Transport configuration (preferred, supported transports, gRPC settings).
"""
model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")
endpoint: Url = Field(description="A2A agent endpoint URL")
auth: AuthScheme | None = Field(
auth: ClientAuthScheme | None = Field(
default=None,
description="Authentication scheme",
)
@@ -95,10 +411,48 @@ class A2AConfig(BaseModel):
default_factory=_get_default_update_config,
description="Update mechanism config",
)
transport_protocol: Literal["JSONRPC", "GRPC", "HTTP+JSON"] = Field(
default="JSONRPC",
description="Specified mode of A2A transport protocol",
client_extensions: list[ValidatedA2AExtension] = Field(
default_factory=list,
description="Client-side processing hooks for tool injection and prompt augmentation",
)
transport: ClientTransportConfig = Field(
default_factory=ClientTransportConfig,
description="Transport configuration (preferred, supported transports, gRPC settings)",
)
transport_protocol: TransportType | None = Field(
default=None,
description="Deprecated: Use transport.preferred instead",
exclude=True,
)
supported_transports: list[TransportType] | None = Field(
default=None,
description="Deprecated: Use transport.supported instead",
exclude=True,
)
use_client_preference: bool | None = Field(
default=None,
description="Deprecated: Set transport.preferred to enable client preference",
exclude=True,
)
_parallel_delegation: bool = PrivateAttr(default=False)
@model_validator(mode="after")
def _migrate_deprecated_transport_fields(self) -> Self:
"""Migrate deprecated transport fields to new config."""
_migrate_client_transport_fields(
self.transport, self.transport_protocol, self.supported_transports
)
if self.use_client_preference is not None:
warnings.warn(
"use_client_preference is deprecated, set transport.preferred to enable client preference",
FutureWarning,
stacklevel=4,
)
if self.use_client_preference and self.transport.supported:
object.__setattr__(
self.transport, "preferred", self.transport.supported[0]
)
return self
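A sketch of the migration behavior (the endpoint URL is illustrative): the deprecated field still takes effect, but emits a FutureWarning.

import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    cfg = A2AConfig(
        endpoint="https://agent.example.com/a2a",
        transport_protocol="GRPC",  # deprecated spelling
    )
assert cfg.transport.preferred == "GRPC"
assert any(issubclass(w.category, FutureWarning) for w in caught)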
class A2AClientConfig(BaseModel):
@@ -114,15 +468,15 @@ class A2AClientConfig(BaseModel):
trust_remote_completion_status: If True, return A2A agent's result directly when completed.
updates: Update mechanism config.
accepted_output_modes: Media types the client can accept in responses.
supported_transports: Ordered list of transport protocols the client supports.
use_client_preference: Whether to prioritize client transport preferences over server.
extensions: Extension URIs the client supports.
extensions: Extension URIs the client supports (A2A protocol extensions).
client_extensions: Client-side processing hooks for tool injection and prompt augmentation.
transport: Transport configuration (preferred, supported transports, gRPC settings).
"""
model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")
endpoint: Url = Field(description="A2A agent endpoint URL")
auth: AuthScheme | None = Field(
auth: ClientAuthScheme | None = Field(
default=None,
description="Authentication scheme",
)
@@ -150,22 +504,37 @@ class A2AClientConfig(BaseModel):
default_factory=lambda: ["application/json"],
description="Media types the client can accept in responses",
)
supported_transports: list[str] = Field(
default_factory=lambda: ["JSONRPC"],
description="Ordered list of transport protocols the client supports",
)
use_client_preference: bool = Field(
default=False,
description="Whether to prioritize client transport preferences over server",
)
extensions: list[str] = Field(
default_factory=list,
description="Extension URIs the client supports",
)
transport_protocol: Literal["JSONRPC", "GRPC", "HTTP+JSON"] = Field(
default="JSONRPC",
description="Specified mode of A2A transport protocol",
client_extensions: list[ValidatedA2AExtension] = Field(
default_factory=list,
description="Client-side processing hooks for tool injection and prompt augmentation",
)
transport: ClientTransportConfig = Field(
default_factory=ClientTransportConfig,
description="Transport configuration (preferred, supported transports, gRPC settings)",
)
transport_protocol: TransportType | None = Field(
default=None,
description="Deprecated: Use transport.preferred instead",
exclude=True,
)
supported_transports: list[TransportType] | None = Field(
default=None,
description="Deprecated: Use transport.supported instead",
exclude=True,
)
_parallel_delegation: bool = PrivateAttr(default=False)
@model_validator(mode="after")
def _migrate_deprecated_transport_fields(self) -> Self:
"""Migrate deprecated transport fields to new config."""
_migrate_client_transport_fields(
self.transport, self.transport_protocol, self.supported_transports
)
return self
class A2AServerConfig(BaseModel):
@@ -182,7 +551,6 @@ class A2AServerConfig(BaseModel):
default_input_modes: Default supported input MIME types.
default_output_modes: Default supported output MIME types.
capabilities: Declaration of optional capabilities.
preferred_transport: Transport protocol for the preferred endpoint.
protocol_version: A2A protocol version this agent supports.
provider: Information about the agent's service provider.
documentation_url: URL to the agent's documentation.
@@ -192,7 +560,12 @@ class A2AServerConfig(BaseModel):
security_schemes: Security schemes available to authorize requests.
supports_authenticated_extended_card: Whether agent provides extended card to authenticated users.
url: Preferred endpoint URL for the agent.
signatures: JSON Web Signatures for the AgentCard.
signing_config: Configuration for signing the AgentCard with JWS.
signatures: Deprecated. Pre-computed JWS signatures. Use signing_config instead.
server_extensions: Server-side A2A protocol extensions with on_request/on_response hooks.
push_notifications: Configuration for outgoing push notifications.
transport: Transport configuration (preferred transport, gRPC, REST settings).
auth: Authentication scheme for A2A endpoints.
"""
model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")
@@ -228,12 +601,8 @@ class A2AServerConfig(BaseModel):
),
description="Declaration of optional capabilities supported by the agent",
)
preferred_transport: TransportType = Field(
default="JSONRPC",
description="Transport protocol for the preferred endpoint",
)
protocol_version: str = Field(
default_factory=lambda: version("a2a-sdk"),
protocol_version: ProtocolVersion = Field(
default="0.3.0",
description="A2A protocol version this agent supports",
)
provider: AgentProvider | None = Field(
@@ -250,7 +619,7 @@ class A2AServerConfig(BaseModel):
)
additional_interfaces: list[AgentInterface] = Field(
default_factory=list,
description="Additional supported interfaces (transport and URL combinations)",
description="Additional supported interfaces.",
)
security: list[dict[str, list[str]]] = Field(
default_factory=list,
@@ -268,7 +637,54 @@ class A2AServerConfig(BaseModel):
default=None,
description="Preferred endpoint URL for the agent. Set at runtime if not provided.",
)
signatures: list[AgentCardSignature] = Field(
default_factory=list,
description="JSON Web Signatures for the AgentCard",
signing_config: AgentCardSigningConfig | None = Field(
default=None,
description="Configuration for signing the AgentCard with JWS",
)
signatures: list[AgentCardSignature] | None = Field(
default=None,
description="Deprecated: Use signing_config instead. Pre-computed JWS signatures for the AgentCard.",
exclude=True,
deprecated=True,
)
server_extensions: list[ServerExtension] = Field(
default_factory=list,
description="Server-side A2A protocol extensions that modify agent behavior",
)
push_notifications: ServerPushNotificationConfig | None = Field(
default=None,
description="Configuration for outgoing push notifications",
)
transport: ServerTransportConfig = Field(
default_factory=ServerTransportConfig,
description="Transport configuration (preferred transport, gRPC, REST settings)",
)
preferred_transport: TransportType | None = Field(
default=None,
description="Deprecated: Use transport.preferred instead",
exclude=True,
deprecated=True,
)
auth: ServerAuthScheme | None = Field(
default=None,
description="Authentication scheme for A2A endpoints. Defaults to SimpleTokenAuth using AUTH_TOKEN env var.",
)
@model_validator(mode="after")
def _migrate_deprecated_fields(self) -> Self:
"""Migrate deprecated fields to new config."""
if self.preferred_transport is not None:
warnings.warn(
"preferred_transport is deprecated, use transport=ServerTransportConfig(preferred=...) instead",
FutureWarning,
stacklevel=4,
)
object.__setattr__(self.transport, "preferred", self.preferred_transport)
if self.signatures is not None:
warnings.warn(
"signatures is deprecated, use signing_config=AgentCardSigningConfig(...) instead. "
"The signatures field will be removed in v2.0.0.",
FutureWarning,
stacklevel=4,
)
return self


@@ -1,7 +1,491 @@
"""A2A protocol error types."""
"""A2A error codes and error response utilities.
This module provides a centralized mapping of all A2A protocol error codes
as defined in the A2A specification, plus custom CrewAI extensions.
Error codes follow JSON-RPC 2.0 conventions:
- -32700 to -32600: Standard JSON-RPC errors
- -32099 to -32000: Server errors (A2A-specific)
- -32768 to -32100: Reserved for implementation-defined errors
"""
from __future__ import annotations
from dataclasses import dataclass, field
from enum import IntEnum
from typing import Any
from a2a.client.errors import A2AClientTimeoutError
class A2APollingTimeoutError(A2AClientTimeoutError):
"""Raised when polling exceeds the configured timeout."""
class A2AErrorCode(IntEnum):
"""A2A protocol error codes.
Codes follow JSON-RPC 2.0 specification with A2A-specific extensions.
"""
# JSON-RPC 2.0 Standard Errors (-32700 to -32600)
JSON_PARSE_ERROR = -32700
"""Invalid JSON was received by the server."""
INVALID_REQUEST = -32600
"""The JSON sent is not a valid Request object."""
METHOD_NOT_FOUND = -32601
"""The method does not exist / is not available."""
INVALID_PARAMS = -32602
"""Invalid method parameter(s)."""
INTERNAL_ERROR = -32603
"""Internal JSON-RPC error."""
# A2A-Specific Errors (-32099 to -32000)
TASK_NOT_FOUND = -32001
"""The specified task was not found."""
TASK_NOT_CANCELABLE = -32002
"""The task cannot be canceled (already completed/failed)."""
PUSH_NOTIFICATION_NOT_SUPPORTED = -32003
"""Push notifications are not supported by this agent."""
UNSUPPORTED_OPERATION = -32004
"""The requested operation is not supported."""
CONTENT_TYPE_NOT_SUPPORTED = -32005
"""Incompatible content types between client and server."""
INVALID_AGENT_RESPONSE = -32006
"""The agent produced an invalid response."""
# CrewAI Custom Extensions (-32009 to -32018, within the A2A server error range)
UNSUPPORTED_VERSION = -32009
"""The requested A2A protocol version is not supported."""
UNSUPPORTED_EXTENSION = -32010
"""Client does not support required protocol extensions."""
AUTHENTICATION_REQUIRED = -32011
"""Authentication is required for this operation."""
AUTHORIZATION_FAILED = -32012
"""Authorization check failed (insufficient permissions)."""
RATE_LIMIT_EXCEEDED = -32013
"""Rate limit exceeded for this client/operation."""
TASK_TIMEOUT = -32014
"""Task execution timed out."""
TRANSPORT_NEGOTIATION_FAILED = -32015
"""Failed to negotiate a compatible transport protocol."""
CONTEXT_NOT_FOUND = -32016
"""The specified context was not found."""
SKILL_NOT_FOUND = -32017
"""The specified skill was not found."""
ARTIFACT_NOT_FOUND = -32018
"""The specified artifact was not found."""
# Error code to default message mapping
ERROR_MESSAGES: dict[int, str] = {
A2AErrorCode.JSON_PARSE_ERROR: "Parse error",
A2AErrorCode.INVALID_REQUEST: "Invalid Request",
A2AErrorCode.METHOD_NOT_FOUND: "Method not found",
A2AErrorCode.INVALID_PARAMS: "Invalid params",
A2AErrorCode.INTERNAL_ERROR: "Internal error",
A2AErrorCode.TASK_NOT_FOUND: "Task not found",
A2AErrorCode.TASK_NOT_CANCELABLE: "Task not cancelable",
A2AErrorCode.PUSH_NOTIFICATION_NOT_SUPPORTED: "Push Notification is not supported",
A2AErrorCode.UNSUPPORTED_OPERATION: "This operation is not supported",
A2AErrorCode.CONTENT_TYPE_NOT_SUPPORTED: "Incompatible content types",
A2AErrorCode.INVALID_AGENT_RESPONSE: "Invalid agent response",
A2AErrorCode.UNSUPPORTED_VERSION: "Unsupported A2A version",
A2AErrorCode.UNSUPPORTED_EXTENSION: "Client does not support required extensions",
A2AErrorCode.AUTHENTICATION_REQUIRED: "Authentication required",
A2AErrorCode.AUTHORIZATION_FAILED: "Authorization failed",
A2AErrorCode.RATE_LIMIT_EXCEEDED: "Rate limit exceeded",
A2AErrorCode.TASK_TIMEOUT: "Task execution timed out",
A2AErrorCode.TRANSPORT_NEGOTIATION_FAILED: "Transport negotiation failed",
A2AErrorCode.CONTEXT_NOT_FOUND: "Context not found",
A2AErrorCode.SKILL_NOT_FOUND: "Skill not found",
A2AErrorCode.ARTIFACT_NOT_FOUND: "Artifact not found",
}
@dataclass
class A2AError(Exception):
"""Base exception for A2A protocol errors.
Attributes:
code: The A2A/JSON-RPC error code.
message: Human-readable error message.
data: Optional additional error data.
"""
code: int
message: str | None = None
data: Any = None
def __post_init__(self) -> None:
if self.message is None:
self.message = ERROR_MESSAGES.get(self.code, "Unknown error")
super().__init__(self.message)
def to_dict(self) -> dict[str, Any]:
"""Convert to JSON-RPC error object format."""
error: dict[str, Any] = {
"code": self.code,
"message": self.message,
}
if self.data is not None:
error["data"] = self.data
return error
def to_response(self, request_id: str | int | None = None) -> dict[str, Any]:
"""Convert to full JSON-RPC error response."""
return {
"jsonrpc": "2.0",
"error": self.to_dict(),
"id": request_id,
}
@dataclass
class JSONParseError(A2AError):
"""Invalid JSON was received."""
code: int = field(default=A2AErrorCode.JSON_PARSE_ERROR, init=False)
@dataclass
class InvalidRequestError(A2AError):
"""The JSON sent is not a valid Request object."""
code: int = field(default=A2AErrorCode.INVALID_REQUEST, init=False)
@dataclass
class MethodNotFoundError(A2AError):
"""The method does not exist / is not available."""
code: int = field(default=A2AErrorCode.METHOD_NOT_FOUND, init=False)
method: str | None = None
def __post_init__(self) -> None:
if self.message is None and self.method:
self.message = f"Method not found: {self.method}"
super().__post_init__()
@dataclass
class InvalidParamsError(A2AError):
"""Invalid method parameter(s)."""
code: int = field(default=A2AErrorCode.INVALID_PARAMS, init=False)
param: str | None = None
reason: str | None = None
def __post_init__(self) -> None:
if self.message is None:
if self.param and self.reason:
self.message = f"Invalid parameter '{self.param}': {self.reason}"
elif self.param:
self.message = f"Invalid parameter: {self.param}"
super().__post_init__()
@dataclass
class InternalError(A2AError):
"""Internal JSON-RPC error."""
code: int = field(default=A2AErrorCode.INTERNAL_ERROR, init=False)
@dataclass
class TaskNotFoundError(A2AError):
"""The specified task was not found."""
code: int = field(default=A2AErrorCode.TASK_NOT_FOUND, init=False)
task_id: str | None = None
def __post_init__(self) -> None:
if self.message is None and self.task_id:
self.message = f"Task not found: {self.task_id}"
super().__post_init__()
@dataclass
class TaskNotCancelableError(A2AError):
"""The task cannot be canceled."""
code: int = field(default=A2AErrorCode.TASK_NOT_CANCELABLE, init=False)
task_id: str | None = None
reason: str | None = None
def __post_init__(self) -> None:
if self.message is None:
if self.task_id and self.reason:
self.message = f"Task {self.task_id} cannot be canceled: {self.reason}"
elif self.task_id:
self.message = f"Task {self.task_id} cannot be canceled"
super().__post_init__()
@dataclass
class PushNotificationNotSupportedError(A2AError):
"""Push notifications are not supported."""
code: int = field(default=A2AErrorCode.PUSH_NOTIFICATION_NOT_SUPPORTED, init=False)
@dataclass
class UnsupportedOperationError(A2AError):
"""The requested operation is not supported."""
code: int = field(default=A2AErrorCode.UNSUPPORTED_OPERATION, init=False)
operation: str | None = None
def __post_init__(self) -> None:
if self.message is None and self.operation:
self.message = f"Operation not supported: {self.operation}"
super().__post_init__()
@dataclass
class ContentTypeNotSupportedError(A2AError):
"""Incompatible content types."""
code: int = field(default=A2AErrorCode.CONTENT_TYPE_NOT_SUPPORTED, init=False)
requested_types: list[str] | None = None
supported_types: list[str] | None = None
def __post_init__(self) -> None:
if self.message is None and self.requested_types and self.supported_types:
self.message = (
f"Content type not supported. Requested: {self.requested_types}, "
f"Supported: {self.supported_types}"
)
super().__post_init__()
@dataclass
class InvalidAgentResponseError(A2AError):
"""The agent produced an invalid response."""
code: int = field(default=A2AErrorCode.INVALID_AGENT_RESPONSE, init=False)
@dataclass
class UnsupportedVersionError(A2AError):
"""The requested A2A version is not supported."""
code: int = field(default=A2AErrorCode.UNSUPPORTED_VERSION, init=False)
requested_version: str | None = None
supported_versions: list[str] | None = None
def __post_init__(self) -> None:
if self.message is None and self.requested_version:
msg = f"Unsupported A2A version: {self.requested_version}"
if self.supported_versions:
msg += f". Supported versions: {', '.join(self.supported_versions)}"
self.message = msg
super().__post_init__()
@dataclass
class UnsupportedExtensionError(A2AError):
"""Client does not support required extensions."""
code: int = field(default=A2AErrorCode.UNSUPPORTED_EXTENSION, init=False)
required_extensions: list[str] | None = None
def __post_init__(self) -> None:
if self.message is None and self.required_extensions:
self.message = f"Client does not support required extensions: {', '.join(self.required_extensions)}"
super().__post_init__()
@dataclass
class AuthenticationRequiredError(A2AError):
"""Authentication is required."""
code: int = field(default=A2AErrorCode.AUTHENTICATION_REQUIRED, init=False)
@dataclass
class AuthorizationFailedError(A2AError):
"""Authorization check failed."""
code: int = field(default=A2AErrorCode.AUTHORIZATION_FAILED, init=False)
required_scope: str | None = None
def __post_init__(self) -> None:
if self.message is None and self.required_scope:
self.message = (
f"Authorization failed. Required scope: {self.required_scope}"
)
super().__post_init__()
@dataclass
class RateLimitExceededError(A2AError):
"""Rate limit exceeded."""
code: int = field(default=A2AErrorCode.RATE_LIMIT_EXCEEDED, init=False)
retry_after: int | None = None
def __post_init__(self) -> None:
if self.message is None and self.retry_after:
self.message = (
f"Rate limit exceeded. Retry after {self.retry_after} seconds"
)
if self.retry_after:
self.data = {"retry_after": self.retry_after}
super().__post_init__()
@dataclass
class TaskTimeoutError(A2AError):
"""Task execution timed out."""
code: int = field(default=A2AErrorCode.TASK_TIMEOUT, init=False)
task_id: str | None = None
timeout_seconds: float | None = None
def __post_init__(self) -> None:
if self.message is None:
if self.task_id and self.timeout_seconds:
self.message = (
f"Task {self.task_id} timed out after {self.timeout_seconds}s"
)
elif self.task_id:
self.message = f"Task {self.task_id} timed out"
super().__post_init__()
@dataclass
class TransportNegotiationFailedError(A2AError):
"""Failed to negotiate a compatible transport protocol."""
code: int = field(default=A2AErrorCode.TRANSPORT_NEGOTIATION_FAILED, init=False)
client_transports: list[str] | None = None
server_transports: list[str] | None = None
def __post_init__(self) -> None:
if self.message is None and self.client_transports and self.server_transports:
self.message = (
f"Transport negotiation failed. Client: {self.client_transports}, "
f"Server: {self.server_transports}"
)
super().__post_init__()
@dataclass
class ContextNotFoundError(A2AError):
"""The specified context was not found."""
code: int = field(default=A2AErrorCode.CONTEXT_NOT_FOUND, init=False)
context_id: str | None = None
def __post_init__(self) -> None:
if self.message is None and self.context_id:
self.message = f"Context not found: {self.context_id}"
super().__post_init__()
@dataclass
class SkillNotFoundError(A2AError):
"""The specified skill was not found."""
code: int = field(default=A2AErrorCode.SKILL_NOT_FOUND, init=False)
skill_id: str | None = None
def __post_init__(self) -> None:
if self.message is None and self.skill_id:
self.message = f"Skill not found: {self.skill_id}"
super().__post_init__()
@dataclass
class ArtifactNotFoundError(A2AError):
"""The specified artifact was not found."""
code: int = field(default=A2AErrorCode.ARTIFACT_NOT_FOUND, init=False)
artifact_id: str | None = None
def __post_init__(self) -> None:
if self.message is None and self.artifact_id:
self.message = f"Artifact not found: {self.artifact_id}"
super().__post_init__()
def create_error_response(
code: int | A2AErrorCode,
message: str | None = None,
data: Any = None,
request_id: str | int | None = None,
) -> dict[str, Any]:
"""Create a JSON-RPC error response.
Args:
code: Error code (A2AErrorCode or int).
message: Optional error message (uses default if not provided).
data: Optional additional error data.
request_id: Request ID for correlation.
Returns:
Dict in JSON-RPC error response format.
"""
error = A2AError(code=int(code), message=message, data=data)
return error.to_response(request_id)
def is_retryable_error(code: int) -> bool:
"""Check if an error is potentially retryable.
Args:
code: Error code to check.
Returns:
True if the error might be resolved by retrying.
"""
retryable_codes = {
A2AErrorCode.INTERNAL_ERROR,
A2AErrorCode.RATE_LIMIT_EXCEEDED,
A2AErrorCode.TASK_TIMEOUT,
}
return code in retryable_codes
def is_client_error(code: int) -> bool:
"""Check if an error is a client-side error.
Args:
code: Error code to check.
Returns:
True if the error is due to client request issues.
"""
client_error_codes = {
A2AErrorCode.JSON_PARSE_ERROR,
A2AErrorCode.INVALID_REQUEST,
A2AErrorCode.METHOD_NOT_FOUND,
A2AErrorCode.INVALID_PARAMS,
A2AErrorCode.TASK_NOT_FOUND,
A2AErrorCode.CONTENT_TYPE_NOT_SUPPORTED,
A2AErrorCode.UNSUPPORTED_VERSION,
A2AErrorCode.UNSUPPORTED_EXTENSION,
A2AErrorCode.CONTEXT_NOT_FOUND,
A2AErrorCode.SKILL_NOT_FOUND,
A2AErrorCode.ARTIFACT_NOT_FOUND,
}
return code in client_error_codes
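
A short sketch of the error utilities above, using only names defined in this module:

# Typed errors derive their default message from context.
try:
    raise TaskNotFoundError(task_id="task-123")
except A2AError as exc:
    assert exc.message == "Task not found: task-123"
    payload = exc.to_response(request_id="req-1")  # full JSON-RPC error response

# Or build a wire-format response directly from a code:
resp = create_error_response(A2AErrorCode.TASK_NOT_FOUND, request_id="req-1")
assert resp["error"] == {"code": -32001, "message": "Task not found"}

# The classification helpers drive retry policy:
assert is_retryable_error(A2AErrorCode.RATE_LIMIT_EXCEEDED)
assert is_client_error(A2AErrorCode.TASK_NOT_FOUND)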

View File

@@ -1,4 +1,37 @@
"""A2A Protocol Extensions for CrewAI.
This module contains extensions to the A2A (Agent-to-Agent) protocol.
**Client-side extensions** (A2AExtension) allow customizing how the A2A wrapper
processes requests and responses during delegation to remote agents. These provide
hooks for tool injection, prompt augmentation, and response processing.
**Server-side extensions** (ServerExtension) allow agents to offer additional
functionality beyond the core A2A specification. Clients activate extensions
via the X-A2A-Extensions header.
See: https://a2a-protocol.org/latest/topics/extensions/
"""
from crewai.a2a.extensions.base import (
A2AExtension,
ConversationState,
ExtensionRegistry,
ValidatedA2AExtension,
)
from crewai.a2a.extensions.server import (
ExtensionContext,
ServerExtension,
ServerExtensionRegistry,
)
__all__ = [
"A2AExtension",
"ConversationState",
"ExtensionContext",
"ExtensionRegistry",
"ServerExtension",
"ServerExtensionRegistry",
"ValidatedA2AExtension",
]

View File

@@ -1,14 +1,20 @@
"""Base extension interface for A2A wrapper integrations.
"""Base extension interface for CrewAI A2A wrapper processing hooks.
This module defines the protocol for extending A2A wrapper functionality
with custom logic for conversation processing, prompt augmentation, and
agent response handling.
This module defines the protocol for extending CrewAI's A2A wrapper functionality
with custom logic for tool injection, prompt augmentation, and response processing.
Note: These are CrewAI-specific processing hooks, NOT A2A protocol extensions.
A2A protocol extensions are capability declarations using AgentExtension objects
in AgentCard.capabilities.extensions, activated via the X-A2A-Extensions HTTP header.
See: https://a2a-protocol.org/latest/topics/extensions/
"""
from __future__ import annotations
from collections.abc import Sequence
from typing import TYPE_CHECKING, Any, Protocol
from typing import TYPE_CHECKING, Annotated, Any, Protocol, runtime_checkable
from pydantic import BeforeValidator
if TYPE_CHECKING:
@@ -17,6 +23,20 @@ if TYPE_CHECKING:
from crewai.agent.core import Agent
def _validate_a2a_extension(v: Any) -> Any:
"""Validate that value implements A2AExtension protocol."""
if not isinstance(v, A2AExtension):
raise ValueError(
f"Value must implement A2AExtension protocol. "
f"Got {type(v).__name__} which is missing required methods."
)
return v
ValidatedA2AExtension = Annotated[Any, BeforeValidator(_validate_a2a_extension)]
@runtime_checkable
class ConversationState(Protocol):
"""Protocol for extension-specific conversation state.
@@ -33,11 +53,36 @@ class ConversationState(Protocol):
...
@runtime_checkable
class A2AExtension(Protocol):
"""Protocol for A2A wrapper extensions.
Extensions can implement this protocol to inject custom logic into
the A2A conversation flow at various integration points.
Example:
class MyExtension:
def inject_tools(self, agent: Agent) -> None:
# Add custom tools to the agent
pass
def extract_state_from_history(
self, conversation_history: Sequence[Message]
) -> ConversationState | None:
# Extract state from conversation
return None
def augment_prompt(
self, base_prompt: str, conversation_state: ConversationState | None
) -> str:
# Add custom instructions
return base_prompt
def process_response(
self, agent_response: Any, conversation_state: ConversationState | None
) -> Any:
# Modify response if needed
return agent_response
"""
def inject_tools(self, agent: Agent) -> None:
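
Because the protocol is `@runtime_checkable`, `_validate_a2a_extension` can rely on a plain `isinstance` check. A minimal sketch, assuming the protocol's hook set matches the docstring example above; note that `isinstance` against a runtime-checkable Protocol only verifies method presence, not signatures:

class NoOpExtension:
    def inject_tools(self, agent): ...
    def extract_state_from_history(self, conversation_history): return None
    def augment_prompt(self, base_prompt, conversation_state): return base_prompt
    def process_response(self, agent_response, conversation_state): return agent_response

assert isinstance(NoOpExtension(), A2AExtension)  # structural check only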

View File

@@ -1,34 +1,170 @@
"""Extension registry factory for A2A configurations.
"""A2A Protocol extension utilities.
This module provides utilities for creating extension registries from A2A configurations.
This module provides utilities for working with A2A protocol extensions as
defined in the A2A specification. Extensions are capability declarations in
AgentCard.capabilities.extensions using AgentExtension objects, activated
via the X-A2A-Extensions HTTP header.
See: https://a2a-protocol.org/latest/topics/extensions/
"""
from __future__ import annotations
from typing import TYPE_CHECKING
from typing import Any
from a2a.client.middleware import ClientCallContext, ClientCallInterceptor
from a2a.extensions.common import (
HTTP_EXTENSION_HEADER,
)
from a2a.types import AgentCard, AgentExtension
from crewai.a2a.config import A2AClientConfig, A2AConfig
from crewai.a2a.extensions.base import ExtensionRegistry
if TYPE_CHECKING:
from crewai.a2a.config import A2AConfig
def get_extensions_from_config(
a2a_config: list[A2AConfig | A2AClientConfig] | A2AConfig | A2AClientConfig,
) -> list[str]:
"""Extract extension URIs from A2A configuration.
Args:
a2a_config: A2A configuration (single or list).
Returns:
Deduplicated list of extension URIs from all configs.
"""
configs = a2a_config if isinstance(a2a_config, list) else [a2a_config]
seen: set[str] = set()
result: list[str] = []
for config in configs:
if not isinstance(config, A2AClientConfig):
continue
for uri in config.extensions:
if uri not in seen:
seen.add(uri)
result.append(uri)
return result
class ExtensionsMiddleware(ClientCallInterceptor):
"""Middleware to add X-A2A-Extensions header to requests.
This middleware adds the extensions header to all outgoing requests,
declaring which A2A protocol extensions the client supports.
"""
def __init__(self, extensions: list[str]) -> None:
"""Initialize with extension URIs.
Args:
extensions: List of extension URIs the client supports.
"""
self._extensions = extensions
async def intercept(
self,
method_name: str,
request_payload: dict[str, Any],
http_kwargs: dict[str, Any],
agent_card: AgentCard | None,
context: ClientCallContext | None,
) -> tuple[dict[str, Any], dict[str, Any]]:
"""Add extensions header to the request.
Args:
method_name: The A2A method being called.
request_payload: The JSON-RPC request payload.
http_kwargs: HTTP request kwargs (headers, etc).
agent_card: The target agent's card.
context: Optional call context.
Returns:
Tuple of (request_payload, modified_http_kwargs).
"""
if self._extensions:
headers = http_kwargs.setdefault("headers", {})
headers[HTTP_EXTENSION_HEADER] = ",".join(self._extensions)
return request_payload, http_kwargs
def validate_required_extensions(
agent_card: AgentCard,
client_extensions: list[str] | None,
) -> list[AgentExtension]:
"""Validate that client supports all required extensions from agent.
Args:
agent_card: The agent's card with declared extensions.
client_extensions: Extension URIs the client supports.
Returns:
List of unsupported required extensions.
Note:
This function does not raise; callers decide how to handle any
unsupported required extensions in the returned list.
"""
unsupported: list[AgentExtension] = []
client_set = set(client_extensions or [])
if not agent_card.capabilities or not agent_card.capabilities.extensions:
return unsupported
unsupported.extend(
ext
for ext in agent_card.capabilities.extensions
if ext.required and ext.uri not in client_set
)
return unsupported
def create_extension_registry_from_config(
a2a_config: list[A2AConfig] | A2AConfig,
a2a_config: list[A2AConfig | A2AClientConfig] | A2AConfig | A2AClientConfig,
) -> ExtensionRegistry:
"""Create an extension registry from A2A configuration.
"""Create an extension registry from A2A client configuration.
Extracts client_extensions from each A2AClientConfig and registers them
with the ExtensionRegistry. These extensions provide CrewAI-specific
processing hooks (tool injection, prompt augmentation, response processing).
Note: A2A protocol extensions (URI strings sent via X-A2A-Extensions header)
are handled separately via get_extensions_from_config() and ExtensionsMiddleware.
Args:
a2a_config: A2A configuration (single or list)
a2a_config: A2A configuration (single or list).
Returns:
Configured extension registry with all applicable extensions
Extension registry with all client_extensions registered.
Example:
class LoggingExtension:
def inject_tools(self, agent): pass
def extract_state_from_history(self, history): return None
def augment_prompt(self, prompt, state): return prompt
def process_response(self, response, state):
print(f"Response: {response}")
return response
config = A2AClientConfig(
endpoint="https://agent.example.com",
client_extensions=[LoggingExtension()],
)
registry = create_extension_registry_from_config(config)
"""
registry = ExtensionRegistry()
configs = a2a_config if isinstance(a2a_config, list) else [a2a_config]
for _ in configs:
pass
seen: set[int] = set()
for config in configs:
if isinstance(config, (A2AConfig, A2AClientConfig)):
client_exts = getattr(config, "client_extensions", [])
for extension in client_exts:
ext_id = id(extension)
if ext_id not in seen:
seen.add(ext_id)
registry.register(extension)
return registry
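
A sketch of the protocol-extension (URI) side, complementing the `client_extensions` example in the docstring above. The endpoint and extension URI are hypothetical:

from a2a.types import AgentCard

config = A2AClientConfig(
    endpoint="https://agent.example.com",
    extensions=["urn:example:ext:tracing/v1"],
)

uris = get_extensions_from_config(config)
middleware = ExtensionsMiddleware(uris)  # adds X-A2A-Extensions to each request

def check_remote(agent_card: AgentCard) -> None:
    """Fail fast if the remote requires extensions this client lacks."""
    missing = validate_required_extensions(agent_card, client_extensions=uris)
    if missing:
        raise RuntimeError(
            f"Remote requires unsupported extensions: {[ext.uri for ext in missing]}"
        )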

View File

@@ -0,0 +1,305 @@
"""A2A protocol server extensions for CrewAI agents.
This module provides the base class and context for implementing A2A protocol
extensions on the server side. Extensions allow agents to offer additional
functionality beyond the core A2A specification.
See: https://a2a-protocol.org/latest/topics/extensions/
"""
from __future__ import annotations
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
import logging
from typing import TYPE_CHECKING, Annotated, Any
from a2a.types import AgentExtension
from pydantic_core import CoreSchema, core_schema
if TYPE_CHECKING:
from a2a.server.context import ServerCallContext
from pydantic import GetCoreSchemaHandler
logger = logging.getLogger(__name__)
@dataclass
class ExtensionContext:
"""Context passed to extension hooks during request processing.
Provides access to request metadata, client extensions, and shared state
that extensions can read from and write to.
Attributes:
metadata: Request metadata dict, includes extension-namespaced keys.
client_extensions: Set of extension URIs the client declared support for.
state: Mutable dict for extensions to share data during request lifecycle.
server_context: The underlying A2A server call context.
"""
metadata: dict[str, Any]
client_extensions: set[str]
state: dict[str, Any] = field(default_factory=dict)
server_context: ServerCallContext | None = None
def get_extension_metadata(self, uri: str, key: str) -> Any | None:
"""Get extension-specific metadata value.
Extension metadata uses namespaced keys in the format:
"{extension_uri}/{key}"
Args:
uri: The extension URI.
key: The metadata key within the extension namespace.
Returns:
The metadata value, or None if not present.
"""
full_key = f"{uri}/{key}"
return self.metadata.get(full_key)
def set_extension_metadata(self, uri: str, key: str, value: Any) -> None:
"""Set extension-specific metadata value.
Args:
uri: The extension URI.
key: The metadata key within the extension namespace.
value: The value to set.
"""
full_key = f"{uri}/{key}"
self.metadata[full_key] = value
class ServerExtension(ABC):
"""Base class for A2A protocol server extensions.
Subclass this to create custom extensions that modify agent behavior
when clients activate them. Extensions are identified by URI and can
be marked as required.
Example:
class SamplingExtension(ServerExtension):
uri = "urn:crewai:ext:sampling/v1"
required = True
def __init__(self, max_tokens: int = 4096):
self.max_tokens = max_tokens
@property
def params(self) -> dict[str, Any]:
return {"max_tokens": self.max_tokens}
async def on_request(self, context: ExtensionContext) -> None:
limit = context.get_extension_metadata(self.uri, "limit")
if limit:
context.state["token_limit"] = int(limit)
async def on_response(self, context: ExtensionContext, result: Any) -> Any:
return result
"""
uri: Annotated[str, "Extension URI identifier. Must be unique."]
required: Annotated[bool, "Whether clients must support this extension."] = False
description: Annotated[
str | None, "Human-readable description of the extension."
] = None
@classmethod
def __get_pydantic_core_schema__(
cls,
_source_type: Any,
_handler: GetCoreSchemaHandler,
) -> CoreSchema:
"""Tell Pydantic how to validate ServerExtension instances."""
return core_schema.is_instance_schema(cls)
@property
def params(self) -> dict[str, Any] | None:
"""Extension parameters to advertise in AgentCard.
Override this property to expose configuration that clients can read.
Returns:
Dict of parameter names to values, or None.
"""
return None
def agent_extension(self) -> AgentExtension:
"""Generate the AgentExtension object for the AgentCard.
Returns:
AgentExtension with this extension's URI, required flag, and params.
"""
return AgentExtension(
uri=self.uri,
required=self.required if self.required else None,
description=self.description,
params=self.params,
)
def is_active(self, context: ExtensionContext) -> bool:
"""Check if this extension is active for the current request.
An extension is active if the client declared support for it.
Args:
context: The extension context for the current request.
Returns:
True if the client supports this extension.
"""
return self.uri in context.client_extensions
@abstractmethod
async def on_request(self, context: ExtensionContext) -> None:
"""Called before agent execution if extension is active.
Use this hook to:
- Read extension-specific metadata from the request
- Set up state for the execution
- Modify execution parameters via context.state
Args:
context: The extension context with request metadata and state.
"""
...
@abstractmethod
async def on_response(self, context: ExtensionContext, result: Any) -> Any:
"""Called after agent execution if extension is active.
Use this hook to:
- Modify or enhance the result
- Add extension-specific metadata to the response
- Clean up any resources
Args:
context: The extension context with request metadata and state.
result: The agent execution result.
Returns:
The result, potentially modified.
"""
...
class ServerExtensionRegistry:
"""Registry for managing server-side A2A protocol extensions.
Collects extensions and provides methods to generate AgentCapabilities
and invoke extension hooks during request processing.
"""
def __init__(self, extensions: list[ServerExtension] | None = None) -> None:
"""Initialize the registry with optional extensions.
Args:
extensions: Initial list of extensions to register.
"""
self._extensions: list[ServerExtension] = list(extensions) if extensions else []
self._by_uri: dict[str, ServerExtension] = {
ext.uri: ext for ext in self._extensions
}
def register(self, extension: ServerExtension) -> None:
"""Register an extension.
Args:
extension: The extension to register.
Raises:
ValueError: If an extension with the same URI is already registered.
"""
if extension.uri in self._by_uri:
raise ValueError(f"Extension already registered: {extension.uri}")
self._extensions.append(extension)
self._by_uri[extension.uri] = extension
def get_agent_extensions(self) -> list[AgentExtension]:
"""Get AgentExtension objects for all registered extensions.
Returns:
List of AgentExtension objects for the AgentCard.
"""
return [ext.agent_extension() for ext in self._extensions]
def get_extension(self, uri: str) -> ServerExtension | None:
"""Get an extension by URI.
Args:
uri: The extension URI.
Returns:
The extension, or None if not found.
"""
return self._by_uri.get(uri)
@staticmethod
def create_context(
metadata: dict[str, Any],
client_extensions: set[str],
server_context: ServerCallContext | None = None,
) -> ExtensionContext:
"""Create an ExtensionContext for a request.
Args:
metadata: Request metadata dict.
client_extensions: Set of extension URIs from client.
server_context: Optional server call context.
Returns:
ExtensionContext for use in hooks.
"""
return ExtensionContext(
metadata=metadata,
client_extensions=client_extensions,
server_context=server_context,
)
async def invoke_on_request(self, context: ExtensionContext) -> None:
"""Invoke on_request hooks for all active extensions.
Tracks activated extensions and isolates errors from individual hooks.
Args:
context: The extension context for the request.
"""
for extension in self._extensions:
if extension.is_active(context):
try:
await extension.on_request(context)
if context.server_context is not None:
context.server_context.activated_extensions.add(extension.uri)
except Exception:
logger.exception(
"Extension on_request hook failed",
extra={"extension": extension.uri},
)
async def invoke_on_response(self, context: ExtensionContext, result: Any) -> Any:
"""Invoke on_response hooks for all active extensions.
Isolates errors from individual hooks to prevent one failing extension
from breaking the entire response.
Args:
context: The extension context for the request.
result: The agent execution result.
Returns:
The result after all extensions have processed it.
"""
processed = result
for extension in self._extensions:
if extension.is_active(context):
try:
processed = await extension.on_response(context, processed)
except Exception:
logger.exception(
"Extension on_response hook failed",
extra={"extension": extension.uri},
)
return processed
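
A sketch of the registry lifecycle for one request, using a minimal extension with a hypothetical URI:

from typing import Any

class EchoExtension(ServerExtension):
    uri = "urn:example:ext:echo/v1"

    async def on_request(self, context: ExtensionContext) -> None:
        context.state["tag"] = context.get_extension_metadata(self.uri, "tag")

    async def on_response(self, context: ExtensionContext, result: Any) -> Any:
        return f"[{context.state.get('tag')}] {result}"

async def handle(metadata: dict[str, Any], client_exts: set[str]) -> str:
    registry = ServerExtensionRegistry([EchoExtension()])
    context = ServerExtensionRegistry.create_context(metadata, client_exts)
    await registry.invoke_on_request(context)
    result = "agent output"  # stand-in for actual agent execution
    return await registry.invoke_on_response(context, result)

# awaiting handle({"urn:example:ext:echo/v1/tag": "traced"}, {"urn:example:ext:echo/v1"})
# returns "[traced] agent output"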

View File

@@ -51,6 +51,13 @@ ACTIONABLE_STATES: frozenset[TaskState] = frozenset(
}
)
PENDING_STATES: frozenset[TaskState] = frozenset(
{
TaskState.submitted,
TaskState.working,
}
)
class TaskStateResult(TypedDict):
"""Result dictionary from processing A2A task state."""
@@ -272,6 +279,9 @@ def process_task_state(
history=new_messages,
)
if a2a_task.status.state in PENDING_STATES:
return None
return None

View File

@@ -38,3 +38,18 @@ You MUST now:
DO NOT send another request - the task is already done.
</REMOTE_AGENT_STATUS>
"""
REMOTE_AGENT_RESPONSE_NOTICE: Final[str] = """
<REMOTE_AGENT_STATUS>
STATUS: RESPONSE_RECEIVED
The remote agent has responded. Their response is in the conversation history above.
You MUST now:
1. Set is_a2a=false (the remote task is complete and cannot receive more messages)
2. Provide YOUR OWN response to the original task based on the information received
IMPORTANT: Your response should be addressed to the USER who gave you the original task.
Report what the remote agent told you in THIRD PERSON (e.g., "The remote agent said..." or "I learned that...").
Do NOT address the remote agent directly or use "you" to refer to them.
</REMOTE_AGENT_STATUS>
"""

View File

@@ -36,6 +36,17 @@ except ImportError:
TransportType = Literal["JSONRPC", "GRPC", "HTTP+JSON"]
ProtocolVersion = Literal[
"0.2.0",
"0.2.1",
"0.2.2",
"0.2.3",
"0.2.4",
"0.2.5",
"0.2.6",
"0.3.0",
"0.4.0",
]
http_url_adapter: TypeAdapter[HttpUrl] = TypeAdapter(HttpUrl)
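
The `ProtocolVersion` Literal can be validated at runtime with a `TypeAdapter`, mirroring the `http_url_adapter` pattern above:

version_adapter: TypeAdapter[ProtocolVersion] = TypeAdapter(ProtocolVersion)
version_adapter.validate_python("0.3.0")    # ok
# version_adapter.validate_python("9.9.9")  # raises pydantic.ValidationError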

View File

@@ -2,12 +2,28 @@
from __future__ import annotations
from typing import TYPE_CHECKING, Any, Protocol, TypedDict
from typing import TYPE_CHECKING, Any, NamedTuple, Protocol, TypedDict
from pydantic import GetCoreSchemaHandler
from pydantic_core import CoreSchema, core_schema
class CommonParams(NamedTuple):
"""Common parameters shared across all update handlers.
Groups the frequently-passed parameters to reduce duplication.
"""
turn_number: int
is_multiturn: bool
agent_role: str | None
endpoint: str
a2a_agent_name: str | None
context_id: str | None
from_task: Any
from_agent: Any
if TYPE_CHECKING:
from a2a.client import Client
from a2a.types import AgentCard, Message, Task
@@ -63,8 +79,8 @@ class PushNotificationResultStore(Protocol):
@classmethod
def __get_pydantic_core_schema__(
cls,
source_type: Any,
handler: GetCoreSchemaHandler,
_source_type: Any,
_handler: GetCoreSchemaHandler,
) -> CoreSchema:
return core_schema.any_schema()
@@ -130,3 +146,31 @@ class UpdateHandler(Protocol):
Result dictionary with status, result/error, and history.
"""
...
def extract_common_params(kwargs: BaseHandlerKwargs) -> CommonParams:
"""Extract common parameters from handler kwargs.
Args:
kwargs: Handler kwargs dict.
Returns:
CommonParams with extracted values.
Raises:
ValueError: If endpoint is not provided.
"""
endpoint = kwargs.get("endpoint")
if endpoint is None:
raise ValueError("endpoint is required for update handlers")
return CommonParams(
turn_number=kwargs.get("turn_number", 0),
is_multiturn=kwargs.get("is_multiturn", False),
agent_role=kwargs.get("agent_role"),
endpoint=endpoint,
a2a_agent_name=kwargs.get("a2a_agent_name"),
context_id=kwargs.get("context_id"),
from_task=kwargs.get("from_task"),
from_agent=kwargs.get("from_agent"),
)
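
A small sketch with hypothetical handler kwargs, showing the fallback defaults and the required-endpoint check:

kwargs = {
    "endpoint": "https://agent.example.com",
    "turn_number": 2,
    "is_multiturn": True,
    "agent_role": "researcher",
}
params = extract_common_params(kwargs)  # type: ignore[arg-type]
assert params.endpoint == "https://agent.example.com"
assert params.a2a_agent_name is None  # unset keys fall back to defaults

# extract_common_params({}) would raise ValueError: endpoint is required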

View File

@@ -94,7 +94,7 @@ async def _poll_task_until_complete(
A2APollingStatusEvent(
task_id=task_id,
context_id=effective_context_id,
state=str(task.status.state.value) if task.status.state else "unknown",
state=str(task.status.state.value),
elapsed_seconds=elapsed,
poll_count=poll_count,
endpoint=endpoint,
@@ -325,7 +325,7 @@ class PollingHandler:
crewai_event_bus.emit(
agent_branch,
A2AConnectionErrorEvent(
endpoint=endpoint or "",
endpoint=endpoint,
error=str(e),
error_type="unexpected_error",
a2a_agent_name=a2a_agent_name,

View File

@@ -2,10 +2,30 @@
from __future__ import annotations
from typing import Annotated
from a2a.types import PushNotificationAuthenticationInfo
from pydantic import AnyHttpUrl, BaseModel, Field
from pydantic import AnyHttpUrl, BaseModel, BeforeValidator, Field
from crewai.a2a.updates.base import PushNotificationResultStore
from crewai.a2a.updates.push_notifications.signature import WebhookSignatureConfig
def _coerce_signature(
value: str | WebhookSignatureConfig | None,
) -> WebhookSignatureConfig | None:
"""Convert string secret to WebhookSignatureConfig."""
if value is None:
return None
if isinstance(value, str):
return WebhookSignatureConfig.hmac_sha256(secret=value)
return value
SignatureInput = Annotated[
WebhookSignatureConfig | None,
BeforeValidator(_coerce_signature),
]
class PushNotificationConfig(BaseModel):
@@ -19,6 +39,8 @@ class PushNotificationConfig(BaseModel):
timeout: Max seconds to wait for task completion.
interval: Seconds between result polling attempts.
result_store: Store for receiving push notification results.
signature: HMAC signature config. Pass a string (secret) for defaults,
or WebhookSignatureConfig for custom settings.
"""
url: AnyHttpUrl = Field(description="Callback URL for push notifications")
@@ -36,3 +58,8 @@ class PushNotificationConfig(BaseModel):
result_store: PushNotificationResultStore | None = Field(
default=None, description="Result store for push notification handling"
)
signature: SignatureInput = Field(
default=None,
description="HMAC signature config. Pass a string (secret) for simple usage, "
"or WebhookSignatureConfig for custom headers/tolerance.",
)
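
A sketch of the two equivalent ways to enable signing, assuming the fields not shown in this hunk have defaults:

simple = PushNotificationConfig(
    url="https://callbacks.example.com/a2a",  # hypothetical callback URL
    signature="my-shared-secret",  # coerced via WebhookSignatureConfig.hmac_sha256
)
explicit = PushNotificationConfig(
    url="https://callbacks.example.com/a2a",
    signature=WebhookSignatureConfig.hmac_sha256(
        secret="my-shared-secret",
        timestamp_tolerance_seconds=60,  # tighter replay window than the 300s default
    ),
)
assert simple.signature is not None and explicit.signature is not None
assert simple.signature.mode == explicit.signature.mode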

View File

@@ -24,8 +24,10 @@ from crewai.a2a.task_helpers import (
send_message_and_get_task_id,
)
from crewai.a2a.updates.base import (
CommonParams,
PushNotificationHandlerKwargs,
PushNotificationResultStore,
extract_common_params,
)
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.a2a_events import (
@@ -39,10 +41,81 @@ from crewai.events.types.a2a_events import (
if TYPE_CHECKING:
from a2a.types import Task as A2ATask
logger = logging.getLogger(__name__)
def _handle_push_error(
error: Exception,
error_msg: str,
error_type: str,
new_messages: list[Message],
agent_branch: Any | None,
params: CommonParams,
task_id: str | None,
status_code: int | None = None,
) -> TaskStateResult:
"""Handle push notification errors with consistent event emission.
Args:
error: The exception that occurred.
error_msg: Formatted error message for the result.
error_type: Type of error for the event.
new_messages: List to append error message to.
agent_branch: Agent tree branch for events.
params: Common handler parameters.
task_id: A2A task ID.
status_code: HTTP status code if applicable.
Returns:
TaskStateResult with failed status.
"""
error_message = Message(
role=Role.agent,
message_id=str(uuid.uuid4()),
parts=[Part(root=TextPart(text=error_msg))],
context_id=params.context_id,
task_id=task_id,
)
new_messages.append(error_message)
crewai_event_bus.emit(
agent_branch,
A2AConnectionErrorEvent(
endpoint=params.endpoint,
error=str(error),
error_type=error_type,
status_code=status_code,
a2a_agent_name=params.a2a_agent_name,
operation="push_notification",
context_id=params.context_id,
task_id=task_id,
from_task=params.from_task,
from_agent=params.from_agent,
),
)
crewai_event_bus.emit(
agent_branch,
A2AResponseReceivedEvent(
response=error_msg,
turn_number=params.turn_number,
context_id=params.context_id,
is_multiturn=params.is_multiturn,
status="failed",
final=True,
agent_role=params.agent_role,
endpoint=params.endpoint,
a2a_agent_name=params.a2a_agent_name,
from_task=params.from_task,
from_agent=params.from_agent,
),
)
return TaskStateResult(
status=TaskState.failed,
error=error_msg,
history=new_messages,
)
async def _wait_for_push_result(
task_id: str,
result_store: PushNotificationResultStore,
@@ -126,15 +199,8 @@ class PushNotificationHandler:
polling_timeout = kwargs.get("polling_timeout", 300.0)
polling_interval = kwargs.get("polling_interval", 2.0)
agent_branch = kwargs.get("agent_branch")
turn_number = kwargs.get("turn_number", 0)
is_multiturn = kwargs.get("is_multiturn", False)
agent_role = kwargs.get("agent_role")
context_id = kwargs.get("context_id")
task_id = kwargs.get("task_id")
endpoint = kwargs.get("endpoint")
a2a_agent_name = kwargs.get("a2a_agent_name")
from_task = kwargs.get("from_task")
from_agent = kwargs.get("from_agent")
params = extract_common_params(kwargs)
if config is None:
error_msg = (
@@ -143,15 +209,15 @@ class PushNotificationHandler:
crewai_event_bus.emit(
agent_branch,
A2AConnectionErrorEvent(
endpoint=endpoint or "",
endpoint=params.endpoint,
error=error_msg,
error_type="configuration_error",
a2a_agent_name=a2a_agent_name,
a2a_agent_name=params.a2a_agent_name,
operation="push_notification",
context_id=context_id,
context_id=params.context_id,
task_id=task_id,
from_task=from_task,
from_agent=from_agent,
from_task=params.from_task,
from_agent=params.from_agent,
),
)
return TaskStateResult(
@@ -167,15 +233,15 @@ class PushNotificationHandler:
crewai_event_bus.emit(
agent_branch,
A2AConnectionErrorEvent(
endpoint=endpoint or "",
endpoint=params.endpoint,
error=error_msg,
error_type="configuration_error",
a2a_agent_name=a2a_agent_name,
a2a_agent_name=params.a2a_agent_name,
operation="push_notification",
context_id=context_id,
context_id=params.context_id,
task_id=task_id,
from_task=from_task,
from_agent=from_agent,
from_task=params.from_task,
from_agent=params.from_agent,
),
)
return TaskStateResult(
@@ -189,14 +255,14 @@ class PushNotificationHandler:
event_stream=client.send_message(message),
new_messages=new_messages,
agent_card=agent_card,
turn_number=turn_number,
is_multiturn=is_multiturn,
agent_role=agent_role,
from_task=from_task,
from_agent=from_agent,
endpoint=endpoint,
a2a_agent_name=a2a_agent_name,
context_id=context_id,
turn_number=params.turn_number,
is_multiturn=params.is_multiturn,
agent_role=params.agent_role,
from_task=params.from_task,
from_agent=params.from_agent,
endpoint=params.endpoint,
a2a_agent_name=params.a2a_agent_name,
context_id=params.context_id,
)
if not isinstance(result_or_task_id, str):
@@ -208,12 +274,12 @@ class PushNotificationHandler:
agent_branch,
A2APushNotificationRegisteredEvent(
task_id=task_id,
context_id=context_id,
context_id=params.context_id,
callback_url=str(config.url),
endpoint=endpoint,
a2a_agent_name=a2a_agent_name,
from_task=from_task,
from_agent=from_agent,
endpoint=params.endpoint,
a2a_agent_name=params.a2a_agent_name,
from_task=params.from_task,
from_agent=params.from_agent,
),
)
@@ -229,11 +295,11 @@ class PushNotificationHandler:
timeout=polling_timeout,
poll_interval=polling_interval,
agent_branch=agent_branch,
from_task=from_task,
from_agent=from_agent,
context_id=context_id,
endpoint=endpoint,
a2a_agent_name=a2a_agent_name,
from_task=params.from_task,
from_agent=params.from_agent,
context_id=params.context_id,
endpoint=params.endpoint,
a2a_agent_name=params.a2a_agent_name,
)
if final_task is None:
@@ -247,13 +313,13 @@ class PushNotificationHandler:
a2a_task=final_task,
new_messages=new_messages,
agent_card=agent_card,
turn_number=turn_number,
is_multiturn=is_multiturn,
agent_role=agent_role,
endpoint=endpoint,
a2a_agent_name=a2a_agent_name,
from_task=from_task,
from_agent=from_agent,
turn_number=params.turn_number,
is_multiturn=params.is_multiturn,
agent_role=params.agent_role,
endpoint=params.endpoint,
a2a_agent_name=params.a2a_agent_name,
from_task=params.from_task,
from_agent=params.from_agent,
)
if result:
return result
@@ -265,98 +331,24 @@ class PushNotificationHandler:
)
except A2AClientHTTPError as e:
error_msg = f"HTTP Error {e.status_code}: {e!s}"
error_message = Message(
role=Role.agent,
message_id=str(uuid.uuid4()),
parts=[Part(root=TextPart(text=error_msg))],
context_id=context_id,
return _handle_push_error(
error=e,
error_msg=f"HTTP Error {e.status_code}: {e!s}",
error_type="http_error",
new_messages=new_messages,
agent_branch=agent_branch,
params=params,
task_id=task_id,
)
new_messages.append(error_message)
crewai_event_bus.emit(
agent_branch,
A2AConnectionErrorEvent(
endpoint=endpoint or "",
error=str(e),
error_type="http_error",
status_code=e.status_code,
a2a_agent_name=a2a_agent_name,
operation="push_notification",
context_id=context_id,
task_id=task_id,
from_task=from_task,
from_agent=from_agent,
),
)
crewai_event_bus.emit(
agent_branch,
A2AResponseReceivedEvent(
response=error_msg,
turn_number=turn_number,
context_id=context_id,
is_multiturn=is_multiturn,
status="failed",
final=True,
agent_role=agent_role,
endpoint=endpoint,
a2a_agent_name=a2a_agent_name,
from_task=from_task,
from_agent=from_agent,
),
)
return TaskStateResult(
status=TaskState.failed,
error=error_msg,
history=new_messages,
status_code=e.status_code,
)
except Exception as e:
error_msg = f"Unexpected error during push notification: {e!s}"
error_message = Message(
role=Role.agent,
message_id=str(uuid.uuid4()),
parts=[Part(root=TextPart(text=error_msg))],
context_id=context_id,
return _handle_push_error(
error=e,
error_msg=f"Unexpected error during push notification: {e!s}",
error_type="unexpected_error",
new_messages=new_messages,
agent_branch=agent_branch,
params=params,
task_id=task_id,
)
new_messages.append(error_message)
crewai_event_bus.emit(
agent_branch,
A2AConnectionErrorEvent(
endpoint=endpoint or "",
error=str(e),
error_type="unexpected_error",
a2a_agent_name=a2a_agent_name,
operation="push_notification",
context_id=context_id,
task_id=task_id,
from_task=from_task,
from_agent=from_agent,
),
)
crewai_event_bus.emit(
agent_branch,
A2AResponseReceivedEvent(
response=error_msg,
turn_number=turn_number,
context_id=context_id,
is_multiturn=is_multiturn,
status="failed",
final=True,
agent_role=agent_role,
endpoint=endpoint,
a2a_agent_name=a2a_agent_name,
from_task=from_task,
from_agent=from_agent,
),
)
return TaskStateResult(
status=TaskState.failed,
error=error_msg,
history=new_messages,
)

View File

@@ -0,0 +1,87 @@
"""Webhook signature configuration for push notifications."""
from __future__ import annotations
from enum import Enum
import secrets
from pydantic import BaseModel, Field, SecretStr
class WebhookSignatureMode(str, Enum):
"""Signature mode for webhook push notifications."""
NONE = "none"
HMAC_SHA256 = "hmac_sha256"
class WebhookSignatureConfig(BaseModel):
"""Configuration for webhook signature verification.
Provides cryptographic integrity verification and replay attack protection
for A2A push notifications.
Attributes:
mode: Signature mode (none or hmac_sha256).
secret: Shared secret for HMAC computation (required for hmac_sha256 mode).
timestamp_tolerance_seconds: Max allowed age of timestamps for replay protection.
header_name: HTTP header name for the signature.
timestamp_header_name: HTTP header name for the timestamp.
"""
mode: WebhookSignatureMode = Field(
default=WebhookSignatureMode.NONE,
description="Signature verification mode",
)
secret: SecretStr | None = Field(
default=None,
description="Shared secret for HMAC computation",
)
timestamp_tolerance_seconds: int = Field(
default=300,
ge=0,
description="Max allowed timestamp age in seconds (5 min default)",
)
header_name: str = Field(
default="X-A2A-Signature",
description="HTTP header name for the signature",
)
timestamp_header_name: str = Field(
default="X-A2A-Signature-Timestamp",
description="HTTP header name for the timestamp",
)
@classmethod
def generate_secret(cls, length: int = 32) -> str:
"""Generate a cryptographically secure random secret.
Args:
length: Number of random bytes to generate (default 32).
Returns:
URL-safe base64-encoded secret string.
"""
return secrets.token_urlsafe(length)
@classmethod
def hmac_sha256(
cls,
secret: str | SecretStr,
timestamp_tolerance_seconds: int = 300,
) -> WebhookSignatureConfig:
"""Create an HMAC-SHA256 signature configuration.
Args:
secret: Shared secret for HMAC computation.
timestamp_tolerance_seconds: Max allowed timestamp age in seconds.
Returns:
Configured WebhookSignatureConfig for HMAC-SHA256.
"""
if isinstance(secret, str):
secret = SecretStr(secret)
return cls(
mode=WebhookSignatureMode.HMAC_SHA256,
secret=secret,
timestamp_tolerance_seconds=timestamp_tolerance_seconds,
)
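
The signing routine itself is not part of this module; below is a sketch of how a typical timestamped HMAC-SHA256 scheme would use this config (the `timestamp.body` payload layout is an assumption, not the documented wire format):

import hashlib
import hmac
import time

cfg = WebhookSignatureConfig.hmac_sha256(secret=WebhookSignatureConfig.generate_secret())
assert cfg.secret is not None

body = b'{"task_id": "task-123"}'
timestamp = str(int(time.time()))
digest = hmac.new(
    cfg.secret.get_secret_value().encode(),
    timestamp.encode() + b"." + body,  # assumed payload layout
    hashlib.sha256,
).hexdigest()

headers = {
    cfg.header_name: digest,               # X-A2A-Signature
    cfg.timestamp_header_name: timestamp,  # X-A2A-Signature-Timestamp
}
# Receivers recompute the digest over the same payload and reject timestamps
# older than cfg.timestamp_tolerance_seconds to block replays.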

View File

@@ -2,6 +2,9 @@
from __future__ import annotations
import asyncio
import logging
from typing import Final
import uuid
from a2a.client import Client
@@ -11,7 +14,10 @@ from a2a.types import (
Message,
Part,
Role,
Task,
TaskArtifactUpdateEvent,
TaskIdParams,
TaskQueryParams,
TaskState,
TaskStatusUpdateEvent,
TextPart,
@@ -24,7 +30,10 @@ from crewai.a2a.task_helpers import (
TaskStateResult,
process_task_state,
)
from crewai.a2a.updates.base import StreamingHandlerKwargs
from crewai.a2a.updates.base import StreamingHandlerKwargs, extract_common_params
from crewai.a2a.updates.streaming.params import (
process_status_update,
)
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.a2a_events import (
A2AArtifactReceivedEvent,
@@ -35,9 +44,194 @@ from crewai.events.types.a2a_events import (
)
logger = logging.getLogger(__name__)
MAX_RESUBSCRIBE_ATTEMPTS: Final[int] = 3
RESUBSCRIBE_BACKOFF_BASE: Final[float] = 1.0
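# Illustrative: resubscribe delays grow as RESUBSCRIBE_BACKOFF_BASE * (2 ** attempt).
# Attempt 0 runs immediately (the handler sleeps only when attempt > 0), so
# attempts 1 and 2 are preceded by 2.0s and 4.0s sleeps respectively.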
class StreamingHandler:
"""SSE streaming-based update handler."""
@staticmethod
async def _try_recover_from_interruption( # type: ignore[misc]
client: Client,
task_id: str,
new_messages: list[Message],
agent_card: AgentCard,
result_parts: list[str],
**kwargs: Unpack[StreamingHandlerKwargs],
) -> TaskStateResult | None:
"""Attempt to recover from a stream interruption by checking task state.
If the task completed while we were disconnected, returns the result.
If the task is still running, attempts to resubscribe and continue.
Args:
client: A2A client instance.
task_id: The task ID to recover.
new_messages: List of collected messages.
agent_card: The agent card.
result_parts: Accumulated result text parts.
**kwargs: Handler parameters.
Returns:
TaskStateResult if recovery succeeded (task finished or resubscribe worked).
None if recovery not possible (caller should handle failure).
Note:
When None is returned, recovery failed and the original exception should
be handled by the caller. All recovery attempts are logged.
"""
params = extract_common_params(kwargs) # type: ignore[arg-type]
try:
a2a_task: Task = await client.get_task(TaskQueryParams(id=task_id))
if a2a_task.status.state in TERMINAL_STATES:
logger.info(
"Task completed during stream interruption",
extra={"task_id": task_id, "state": str(a2a_task.status.state)},
)
return process_task_state(
a2a_task=a2a_task,
new_messages=new_messages,
agent_card=agent_card,
turn_number=params.turn_number,
is_multiturn=params.is_multiturn,
agent_role=params.agent_role,
result_parts=result_parts,
endpoint=params.endpoint,
a2a_agent_name=params.a2a_agent_name,
from_task=params.from_task,
from_agent=params.from_agent,
)
if a2a_task.status.state in ACTIONABLE_STATES:
logger.info(
"Task in actionable state during stream interruption",
extra={"task_id": task_id, "state": str(a2a_task.status.state)},
)
return process_task_state(
a2a_task=a2a_task,
new_messages=new_messages,
agent_card=agent_card,
turn_number=params.turn_number,
is_multiturn=params.is_multiturn,
agent_role=params.agent_role,
result_parts=result_parts,
endpoint=params.endpoint,
a2a_agent_name=params.a2a_agent_name,
from_task=params.from_task,
from_agent=params.from_agent,
is_final=False,
)
logger.info(
"Task still running, attempting resubscribe",
extra={"task_id": task_id, "state": str(a2a_task.status.state)},
)
for attempt in range(MAX_RESUBSCRIBE_ATTEMPTS):
try:
backoff = RESUBSCRIBE_BACKOFF_BASE * (2**attempt)
if attempt > 0:
await asyncio.sleep(backoff)
event_stream = client.resubscribe(TaskIdParams(id=task_id))
async for event in event_stream:
if isinstance(event, tuple):
resubscribed_task, update = event
is_final_update = (
process_status_update(update, result_parts)
if isinstance(update, TaskStatusUpdateEvent)
else False
)
if isinstance(update, TaskArtifactUpdateEvent):
artifact = update.artifact
result_parts.extend(
part.root.text
for part in artifact.parts
if part.root.kind == "text"
)
if (
is_final_update
or resubscribed_task.status.state
in TERMINAL_STATES | ACTIONABLE_STATES
):
return process_task_state(
a2a_task=resubscribed_task,
new_messages=new_messages,
agent_card=agent_card,
turn_number=params.turn_number,
is_multiturn=params.is_multiturn,
agent_role=params.agent_role,
result_parts=result_parts,
endpoint=params.endpoint,
a2a_agent_name=params.a2a_agent_name,
from_task=params.from_task,
from_agent=params.from_agent,
is_final=is_final_update,
)
elif isinstance(event, Message):
new_messages.append(event)
result_parts.extend(
part.root.text
for part in event.parts
if part.root.kind == "text"
)
final_task = await client.get_task(TaskQueryParams(id=task_id))
return process_task_state(
a2a_task=final_task,
new_messages=new_messages,
agent_card=agent_card,
turn_number=params.turn_number,
is_multiturn=params.is_multiturn,
agent_role=params.agent_role,
result_parts=result_parts,
endpoint=params.endpoint,
a2a_agent_name=params.a2a_agent_name,
from_task=params.from_task,
from_agent=params.from_agent,
)
except Exception as resubscribe_error: # noqa: PERF203
logger.warning(
"Resubscribe attempt failed",
extra={
"task_id": task_id,
"attempt": attempt + 1,
"max_attempts": MAX_RESUBSCRIBE_ATTEMPTS,
"error": str(resubscribe_error),
},
)
if attempt == MAX_RESUBSCRIBE_ATTEMPTS - 1:
return None
except Exception as e:
logger.warning(
"Failed to recover from stream interruption due to unexpected error",
extra={
"task_id": task_id,
"error": str(e),
"error_type": type(e).__name__,
},
exc_info=True,
)
return None
logger.warning(
"Recovery exhausted all resubscribe attempts without success",
extra={"task_id": task_id, "max_attempts": MAX_RESUBSCRIBE_ATTEMPTS},
)
return None
@staticmethod
async def execute(
client: Client,
@@ -58,42 +252,40 @@ class StreamingHandler:
Returns:
Dictionary with status, result/error, and history.
"""
context_id = kwargs.get("context_id")
task_id = kwargs.get("task_id")
turn_number = kwargs.get("turn_number", 0)
is_multiturn = kwargs.get("is_multiturn", False)
agent_role = kwargs.get("agent_role")
endpoint = kwargs.get("endpoint")
a2a_agent_name = kwargs.get("a2a_agent_name")
from_task = kwargs.get("from_task")
from_agent = kwargs.get("from_agent")
agent_branch = kwargs.get("agent_branch")
params = extract_common_params(kwargs)
result_parts: list[str] = []
final_result: TaskStateResult | None = None
event_stream = client.send_message(message)
chunk_index = 0
current_task_id: str | None = task_id
crewai_event_bus.emit(
agent_branch,
A2AStreamingStartedEvent(
task_id=task_id,
context_id=context_id,
endpoint=endpoint or "",
a2a_agent_name=a2a_agent_name,
turn_number=turn_number,
is_multiturn=is_multiturn,
agent_role=agent_role,
from_task=from_task,
from_agent=from_agent,
context_id=params.context_id,
endpoint=params.endpoint,
a2a_agent_name=params.a2a_agent_name,
turn_number=params.turn_number,
is_multiturn=params.is_multiturn,
agent_role=params.agent_role,
from_task=params.from_task,
from_agent=params.from_agent,
),
)
try:
async for event in event_stream:
if isinstance(event, tuple):
a2a_task, _ = event
current_task_id = a2a_task.id
if isinstance(event, Message):
new_messages.append(event)
message_context_id = event.context_id or context_id
message_context_id = event.context_id or params.context_id
for part in event.parts:
if part.root.kind == "text":
text = part.root.text
@@ -105,12 +297,12 @@ class StreamingHandler:
context_id=message_context_id,
chunk=text,
chunk_index=chunk_index,
endpoint=endpoint,
a2a_agent_name=a2a_agent_name,
turn_number=turn_number,
is_multiturn=is_multiturn,
from_task=from_task,
from_agent=from_agent,
endpoint=params.endpoint,
a2a_agent_name=params.a2a_agent_name,
turn_number=params.turn_number,
is_multiturn=params.is_multiturn,
from_task=params.from_task,
from_agent=params.from_agent,
),
)
chunk_index += 1
@@ -128,12 +320,12 @@ class StreamingHandler:
artifact_size = None
if artifact.parts:
artifact_size = sum(
len(p.root.text.encode("utf-8"))
len(p.root.text.encode())
if p.root.kind == "text"
else len(getattr(p.root, "data", b""))
for p in artifact.parts
)
effective_context_id = a2a_task.context_id or context_id
effective_context_id = a2a_task.context_id or params.context_id
crewai_event_bus.emit(
agent_branch,
A2AArtifactReceivedEvent(
@@ -147,29 +339,21 @@ class StreamingHandler:
size_bytes=artifact_size,
append=update.append or False,
last_chunk=update.last_chunk or False,
endpoint=endpoint,
a2a_agent_name=a2a_agent_name,
endpoint=params.endpoint,
a2a_agent_name=params.a2a_agent_name,
context_id=effective_context_id,
turn_number=turn_number,
is_multiturn=is_multiturn,
from_task=from_task,
from_agent=from_agent,
turn_number=params.turn_number,
is_multiturn=params.is_multiturn,
from_task=params.from_task,
from_agent=params.from_agent,
),
)
is_final_update = False
if isinstance(update, TaskStatusUpdateEvent):
is_final_update = update.final
if (
update.status
and update.status.message
and update.status.message.parts
):
result_parts.extend(
part.root.text
for part in update.status.message.parts
if part.root.kind == "text" and part.root.text
)
is_final_update = (
process_status_update(update, result_parts)
if isinstance(update, TaskStatusUpdateEvent)
else False
)
if (
not is_final_update
@@ -182,27 +366,68 @@ class StreamingHandler:
a2a_task=a2a_task,
new_messages=new_messages,
agent_card=agent_card,
turn_number=turn_number,
is_multiturn=is_multiturn,
agent_role=agent_role,
turn_number=params.turn_number,
is_multiturn=params.is_multiturn,
agent_role=params.agent_role,
result_parts=result_parts,
endpoint=endpoint,
a2a_agent_name=a2a_agent_name,
from_task=from_task,
from_agent=from_agent,
endpoint=params.endpoint,
a2a_agent_name=params.a2a_agent_name,
from_task=params.from_task,
from_agent=params.from_agent,
is_final=is_final_update,
)
if final_result:
break
except A2AClientHTTPError as e:
if current_task_id:
logger.info(
"Stream interrupted with HTTP error, attempting recovery",
extra={
"task_id": current_task_id,
"error": str(e),
"status_code": e.status_code,
},
)
recovery_kwargs = {k: v for k, v in kwargs.items() if k != "task_id"}
recovered_result = (
await StreamingHandler._try_recover_from_interruption(
client=client,
task_id=current_task_id,
new_messages=new_messages,
agent_card=agent_card,
result_parts=result_parts,
**recovery_kwargs,
)
)
if recovered_result:
logger.info(
"Successfully recovered task after HTTP error",
extra={
"task_id": current_task_id,
"status": str(recovered_result.get("status")),
},
)
return recovered_result
logger.warning(
"Failed to recover from HTTP error, returning failure",
extra={
"task_id": current_task_id,
"status_code": e.status_code,
"original_error": str(e),
},
)
error_msg = f"HTTP Error {e.status_code}: {e!s}"
error_type = "http_error"
status_code = e.status_code
error_message = Message(
role=Role.agent,
message_id=str(uuid.uuid4()),
parts=[Part(root=TextPart(text=error_msg))],
context_id=context_id,
context_id=params.context_id,
task_id=task_id,
)
new_messages.append(error_message)
@@ -210,32 +435,118 @@ class StreamingHandler:
crewai_event_bus.emit(
agent_branch,
A2AConnectionErrorEvent(
endpoint=endpoint or "",
endpoint=params.endpoint,
error=str(e),
error_type="http_error",
status_code=e.status_code,
a2a_agent_name=a2a_agent_name,
error_type=error_type,
status_code=status_code,
a2a_agent_name=params.a2a_agent_name,
operation="streaming",
context_id=context_id,
context_id=params.context_id,
task_id=task_id,
from_task=from_task,
from_agent=from_agent,
from_task=params.from_task,
from_agent=params.from_agent,
),
)
crewai_event_bus.emit(
agent_branch,
A2AResponseReceivedEvent(
response=error_msg,
turn_number=turn_number,
context_id=context_id,
is_multiturn=is_multiturn,
turn_number=params.turn_number,
context_id=params.context_id,
is_multiturn=params.is_multiturn,
status="failed",
final=True,
agent_role=agent_role,
endpoint=endpoint,
a2a_agent_name=a2a_agent_name,
from_task=from_task,
from_agent=from_agent,
agent_role=params.agent_role,
endpoint=params.endpoint,
a2a_agent_name=params.a2a_agent_name,
from_task=params.from_task,
from_agent=params.from_agent,
),
)
return TaskStateResult(
status=TaskState.failed,
error=error_msg,
history=new_messages,
)
except (asyncio.TimeoutError, asyncio.CancelledError, ConnectionError) as e:
error_type = type(e).__name__.lower()
if current_task_id:
logger.info(
f"Stream interrupted with {error_type}, attempting recovery",
extra={"task_id": current_task_id, "error": str(e)},
)
recovery_kwargs = {k: v for k, v in kwargs.items() if k != "task_id"}
recovered_result = (
await StreamingHandler._try_recover_from_interruption(
client=client,
task_id=current_task_id,
new_messages=new_messages,
agent_card=agent_card,
result_parts=result_parts,
**recovery_kwargs,
)
)
if recovered_result:
logger.info(
f"Successfully recovered task after {error_type}",
extra={
"task_id": current_task_id,
"status": str(recovered_result.get("status")),
},
)
return recovered_result
logger.warning(
f"Failed to recover from {error_type}, returning failure",
extra={
"task_id": current_task_id,
"error_type": error_type,
"original_error": str(e),
},
)
error_msg = f"Connection error during streaming: {e!s}"
status_code = None
error_message = Message(
role=Role.agent,
message_id=str(uuid.uuid4()),
parts=[Part(root=TextPart(text=error_msg))],
context_id=params.context_id,
task_id=task_id,
)
new_messages.append(error_message)
crewai_event_bus.emit(
agent_branch,
A2AConnectionErrorEvent(
endpoint=params.endpoint,
error=str(e),
error_type=error_type,
status_code=status_code,
a2a_agent_name=params.a2a_agent_name,
operation="streaming",
context_id=params.context_id,
task_id=task_id,
from_task=params.from_task,
from_agent=params.from_agent,
),
)
crewai_event_bus.emit(
agent_branch,
A2AResponseReceivedEvent(
response=error_msg,
turn_number=params.turn_number,
context_id=params.context_id,
is_multiturn=params.is_multiturn,
status="failed",
final=True,
agent_role=params.agent_role,
endpoint=params.endpoint,
a2a_agent_name=params.a2a_agent_name,
from_task=params.from_task,
from_agent=params.from_agent,
),
)
return TaskStateResult(
@@ -245,13 +556,23 @@ class StreamingHandler:
)
except Exception as e:
error_msg = f"Unexpected error during streaming: {e!s}"
logger.exception(
"Unexpected error during streaming",
extra={
"task_id": current_task_id,
"error_type": type(e).__name__,
"endpoint": params.endpoint,
},
)
error_msg = f"Unexpected error during streaming: {type(e).__name__}: {e!s}"
error_type = "unexpected_error"
status_code = None
error_message = Message(
role=Role.agent,
message_id=str(uuid.uuid4()),
parts=[Part(root=TextPart(text=error_msg))],
context_id=context_id,
context_id=params.context_id,
task_id=task_id,
)
new_messages.append(error_message)
@@ -259,31 +580,32 @@ class StreamingHandler:
crewai_event_bus.emit(
agent_branch,
A2AConnectionErrorEvent(
endpoint=endpoint or "",
endpoint=params.endpoint,
error=str(e),
error_type="unexpected_error",
a2a_agent_name=a2a_agent_name,
error_type=error_type,
status_code=status_code,
a2a_agent_name=params.a2a_agent_name,
operation="streaming",
context_id=context_id,
context_id=params.context_id,
task_id=task_id,
from_task=from_task,
from_agent=from_agent,
from_task=params.from_task,
from_agent=params.from_agent,
),
)
crewai_event_bus.emit(
agent_branch,
A2AResponseReceivedEvent(
response=error_msg,
turn_number=turn_number,
context_id=context_id,
is_multiturn=is_multiturn,
turn_number=params.turn_number,
context_id=params.context_id,
is_multiturn=params.is_multiturn,
status="failed",
final=True,
agent_role=agent_role,
endpoint=endpoint,
a2a_agent_name=a2a_agent_name,
from_task=from_task,
from_agent=from_agent,
agent_role=params.agent_role,
endpoint=params.endpoint,
a2a_agent_name=params.a2a_agent_name,
from_task=params.from_task,
from_agent=params.from_agent,
),
)
return TaskStateResult(
@@ -301,15 +623,15 @@ class StreamingHandler:
crewai_event_bus.emit(
agent_branch,
A2AConnectionErrorEvent(
endpoint=endpoint or "",
endpoint=params.endpoint,
error=str(close_error),
error_type="stream_close_error",
a2a_agent_name=a2a_agent_name,
a2a_agent_name=params.a2a_agent_name,
operation="stream_close",
context_id=context_id,
context_id=params.context_id,
task_id=task_id,
from_task=from_task,
from_agent=from_agent,
from_task=params.from_task,
from_agent=params.from_agent,
),
)

View File

@@ -0,0 +1,28 @@
"""Common parameter extraction for streaming handlers."""
from __future__ import annotations
from a2a.types import TaskStatusUpdateEvent
def process_status_update(
update: TaskStatusUpdateEvent,
result_parts: list[str],
) -> bool:
"""Process a status update event and extract text parts.
Args:
update: The status update event.
result_parts: List to append text parts to (modified in place).
Returns:
True if this is a final update, False otherwise.
"""
is_final = update.final
if update.status and update.status.message and update.status.message.parts:
result_parts.extend(
part.root.text
for part in update.status.message.parts
if part.root.kind == "text" and part.root.text
)
return is_final
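
A minimal, runnable sketch of the consuming loop this helper serves. The `SimpleNamespace` stand-ins mimic only the attributes read above (`final`, `status.message.parts`, `part.root.kind`, `part.root.text`), and the function body is repeated locally so the demo needs no installed package:

```python
from types import SimpleNamespace

def process_status_update(update, result_parts):
    # Same logic as the helper above, repeated for a standalone demo.
    is_final = update.final
    if update.status and update.status.message and update.status.message.parts:
        result_parts.extend(
            part.root.text
            for part in update.status.message.parts
            if part.root.kind == "text" and part.root.text
        )
    return is_final

def status_update(final, *texts):
    # Stand-in for a TaskStatusUpdateEvent carrying text Parts.
    parts = [SimpleNamespace(root=SimpleNamespace(kind="text", text=t)) for t in texts]
    return SimpleNamespace(final=final, status=SimpleNamespace(message=SimpleNamespace(parts=parts)))

collected: list[str] = []
for update in (status_update(False, "Hel"), status_update(True, "lo")):
    if process_status_update(update, collected):
        break
print("".join(collected))  # -> Hello
```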

View File

@@ -5,6 +5,7 @@ from __future__ import annotations
import asyncio
from collections.abc import MutableMapping
from functools import lru_cache
import ssl
import time
from types import MethodType
from typing import TYPE_CHECKING
@@ -15,7 +16,7 @@ from aiocache import cached # type: ignore[import-untyped]
from aiocache.serializers import PickleSerializer # type: ignore[import-untyped]
import httpx
from crewai.a2a.auth.schemas import APIKeyAuth, HTTPDigestAuth
from crewai.a2a.auth.client_schemes import APIKeyAuth, HTTPDigestAuth
from crewai.a2a.auth.utils import (
_auth_store,
configure_auth_client,
@@ -32,11 +33,51 @@ from crewai.events.types.a2a_events import (
if TYPE_CHECKING:
from crewai.a2a.auth.schemas import AuthScheme
from crewai.a2a.auth.client_schemes import ClientAuthScheme
from crewai.agent import Agent
from crewai.task import Task
def _get_tls_verify(auth: ClientAuthScheme | None) -> ssl.SSLContext | bool | str:
"""Get TLS verify parameter from auth scheme.
Args:
auth: Optional authentication scheme with TLS config.
Returns:
SSL context, CA cert path, True for default verification,
or False if verification disabled.
"""
if auth and auth.tls:
return auth.tls.get_httpx_ssl_context()
return True
async def _prepare_auth_headers(
auth: ClientAuthScheme | None,
timeout: int,
) -> tuple[MutableMapping[str, str], ssl.SSLContext | bool | str]:
"""Prepare authentication headers and TLS verification settings.
Args:
auth: Optional authentication scheme.
timeout: Request timeout in seconds.
Returns:
Tuple of (headers dict, TLS verify setting).
"""
headers: MutableMapping[str, str] = {}
verify = _get_tls_verify(auth)
if auth:
async with httpx.AsyncClient(
timeout=timeout, verify=verify
) as temp_auth_client:
if isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
configure_auth_client(auth, temp_auth_client)
headers = await auth.apply_auth(temp_auth_client, {})
return headers, verify
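
A sketch of how a caller might compose these two helpers when opening an authenticated client (`open_client` is illustrative, not part of this changeset):

```python
import httpx

async def open_client(auth, timeout: int = 30) -> httpx.AsyncClient:
    # headers arrive pre-authorized; verify carries the scheme's TLS
    # settings (an ssl.SSLContext, a CA bundle path, or a bool).
    headers, verify = await _prepare_auth_headers(auth, timeout)
    return httpx.AsyncClient(timeout=timeout, headers=headers, verify=verify)
```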
def _get_server_config(agent: Agent) -> A2AServerConfig | None:
"""Get A2AServerConfig from an agent's a2a configuration.
@@ -59,7 +100,7 @@ def _get_server_config(agent: Agent) -> A2AServerConfig | None:
def fetch_agent_card(
endpoint: str,
auth: AuthScheme | None = None,
auth: ClientAuthScheme | None = None,
timeout: int = 30,
use_cache: bool = True,
cache_ttl: int = 300,
@@ -68,7 +109,7 @@ def fetch_agent_card(
Args:
endpoint: A2A agent endpoint URL (AgentCard URL).
auth: Optional AuthScheme for authentication.
auth: Optional ClientAuthScheme for authentication.
timeout: Request timeout in seconds.
use_cache: Whether to use caching (default True).
cache_ttl: Cache TTL in seconds (default 300 = 5 minutes).
@@ -90,10 +131,10 @@ def fetch_agent_card(
"_authorization_callback",
}
)
auth_hash = hash((type(auth).__name__, auth_data))
auth_hash = _auth_store.compute_key(type(auth).__name__, auth_data)
else:
auth_hash = 0
_auth_store[auth_hash] = auth
auth_hash = _auth_store.compute_key("none", "")
_auth_store.set(auth_hash, auth)
ttl_hash = int(time.time() // cache_ttl)
return _fetch_agent_card_cached(endpoint, auth_hash, timeout, ttl_hash)
@@ -109,7 +150,7 @@ def fetch_agent_card(
async def afetch_agent_card(
endpoint: str,
auth: AuthScheme | None = None,
auth: ClientAuthScheme | None = None,
timeout: int = 30,
use_cache: bool = True,
) -> AgentCard:
@@ -119,7 +160,7 @@ async def afetch_agent_card(
Args:
endpoint: A2A agent endpoint URL (AgentCard URL).
auth: Optional AuthScheme for authentication.
auth: Optional ClientAuthScheme for authentication.
timeout: Request timeout in seconds.
use_cache: Whether to use caching (default True).
@@ -140,10 +181,10 @@ async def afetch_agent_card(
"_authorization_callback",
}
)
auth_hash = hash((type(auth).__name__, auth_data))
auth_hash = _auth_store.compute_key(type(auth).__name__, auth_data)
else:
auth_hash = 0
_auth_store[auth_hash] = auth
auth_hash = _auth_store.compute_key("none", "")
_auth_store.set(auth_hash, auth)
agent_card: AgentCard = await _afetch_agent_card_cached(
endpoint, auth_hash, timeout
)
@@ -155,7 +196,7 @@ async def afetch_agent_card(
@lru_cache()
def _fetch_agent_card_cached(
endpoint: str,
auth_hash: int,
auth_hash: str,
timeout: int,
_ttl_hash: int,
) -> AgentCard:
@@ -175,7 +216,7 @@ def _fetch_agent_card_cached(
@cached(ttl=300, serializer=PickleSerializer()) # type: ignore[untyped-decorator]
async def _afetch_agent_card_cached(
endpoint: str,
auth_hash: int,
auth_hash: str,
timeout: int,
) -> AgentCard:
"""Cached async implementation of AgentCard fetching."""
@@ -185,7 +226,7 @@ async def _afetch_agent_card_cached(
async def _afetch_agent_card_impl(
endpoint: str,
auth: AuthScheme | None,
auth: ClientAuthScheme | None,
timeout: int,
) -> AgentCard:
"""Internal async implementation of AgentCard fetching."""
@@ -197,16 +238,17 @@ async def _afetch_agent_card_impl(
else:
url_parts = endpoint.split("/", 3)
base_url = f"{url_parts[0]}//{url_parts[2]}"
agent_card_path = f"/{url_parts[3]}" if len(url_parts) > 3 else "/"
agent_card_path = (
f"/{url_parts[3]}"
if len(url_parts) > 3 and url_parts[3]
else "/.well-known/agent-card.json"
)
headers: MutableMapping[str, str] = {}
if auth:
async with httpx.AsyncClient(timeout=timeout) as temp_auth_client:
if isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
configure_auth_client(auth, temp_auth_client)
headers = await auth.apply_auth(temp_auth_client, {})
headers, verify = await _prepare_auth_headers(auth, timeout)
async with httpx.AsyncClient(timeout=timeout, headers=headers) as temp_client:
async with httpx.AsyncClient(
timeout=timeout, headers=headers, verify=verify
) as temp_client:
if auth and isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
configure_auth_client(auth, temp_client)
@@ -434,6 +476,7 @@ def _agent_to_agent_card(agent: Agent, url: str) -> AgentCard:
"""Generate an A2A AgentCard from an Agent instance.
Uses A2AServerConfig values when available, falling back to agent properties.
If signing_config is provided, the card will be signed with JWS.
Args:
agent: The Agent instance to generate a card for.
@@ -442,6 +485,8 @@ def _agent_to_agent_card(agent: Agent, url: str) -> AgentCard:
Returns:
AgentCard describing the agent's capabilities.
"""
from crewai.a2a.utils.agent_card_signing import sign_agent_card
server_config = _get_server_config(agent) or A2AServerConfig()
name = server_config.name or agent.role
@@ -472,15 +517,31 @@ def _agent_to_agent_card(agent: Agent, url: str) -> AgentCard:
)
)
return AgentCard(
capabilities = server_config.capabilities
if server_config.server_extensions:
from crewai.a2a.extensions.server import ServerExtensionRegistry
registry = ServerExtensionRegistry(server_config.server_extensions)
ext_list = registry.get_agent_extensions()
existing_exts = list(capabilities.extensions) if capabilities.extensions else []
existing_uris = {e.uri for e in existing_exts}
for ext in ext_list:
if ext.uri not in existing_uris:
existing_exts.append(ext)
capabilities = capabilities.model_copy(update={"extensions": existing_exts})
card = AgentCard(
name=name,
description=description,
url=server_config.url or url,
version=server_config.version,
capabilities=server_config.capabilities,
capabilities=capabilities,
default_input_modes=server_config.default_input_modes,
default_output_modes=server_config.default_output_modes,
skills=skills,
preferred_transport=server_config.transport.preferred,
protocol_version=server_config.protocol_version,
provider=server_config.provider,
documentation_url=server_config.documentation_url,
@@ -489,9 +550,21 @@ def _agent_to_agent_card(agent: Agent, url: str) -> AgentCard:
security=server_config.security,
security_schemes=server_config.security_schemes,
supports_authenticated_extended_card=server_config.supports_authenticated_extended_card,
signatures=server_config.signatures,
)
if server_config.signing_config:
signature = sign_agent_card(
card,
private_key=server_config.signing_config.get_private_key(),
key_id=server_config.signing_config.key_id,
algorithm=server_config.signing_config.algorithm,
)
card = card.model_copy(update={"signatures": [signature]})
elif server_config.signatures:
card = card.model_copy(update={"signatures": server_config.signatures})
return card
def inject_a2a_server_methods(agent: Agent) -> None:
"""Inject A2A server methods onto an Agent instance.

View File

@@ -0,0 +1,236 @@
"""AgentCard JWS signing utilities.
This module provides functions for signing and verifying AgentCards using
JSON Web Signatures (JWS) as per RFC 7515. Signed agent cards allow clients
to verify the authenticity and integrity of agent card information.
Example:
>>> from crewai.a2a.utils.agent_card_signing import sign_agent_card
>>> signature = sign_agent_card(agent_card, private_key_pem, key_id="key-1")
>>> card_with_sig = agent_card.model_copy(update={"signatures": [signature]})
"""
from __future__ import annotations
import base64
import json
import logging
from typing import Any, Literal
from a2a.types import AgentCard, AgentCardSignature
import jwt
from pydantic import SecretStr
logger = logging.getLogger(__name__)
SigningAlgorithm = Literal[
"RS256", "RS384", "RS512", "ES256", "ES384", "ES512", "PS256", "PS384", "PS512"
]
def _normalize_private_key(private_key: str | bytes | SecretStr) -> bytes:
"""Normalize private key to bytes format.
Args:
private_key: PEM-encoded private key as string, bytes, or SecretStr.
Returns:
Private key as bytes.
"""
if isinstance(private_key, SecretStr):
private_key = private_key.get_secret_value()
if isinstance(private_key, str):
private_key = private_key.encode()
return private_key
def _serialize_agent_card(agent_card: AgentCard) -> str:
"""Serialize AgentCard to canonical JSON for signing.
Excludes the signatures field to avoid circular reference during signing.
Uses sorted keys and compact separators for deterministic output.
Args:
agent_card: The AgentCard to serialize.
Returns:
Canonical JSON string representation.
"""
card_dict = agent_card.model_dump(exclude={"signatures"}, exclude_none=True)
return json.dumps(card_dict, sort_keys=True, separators=(",", ":"))
def _base64url_encode(data: bytes | str) -> str:
"""Encode data to URL-safe base64 without padding.
Args:
data: Data to encode.
Returns:
URL-safe base64 encoded string without padding.
"""
if isinstance(data, str):
data = data.encode()
return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")
def sign_agent_card(
agent_card: AgentCard,
private_key: str | bytes | SecretStr,
key_id: str | None = None,
algorithm: SigningAlgorithm = "RS256",
) -> AgentCardSignature:
"""Sign an AgentCard using JWS (RFC 7515).
Creates a detached JWS signature for the AgentCard. The signature covers
all fields except the signatures field itself.
Args:
agent_card: The AgentCard to sign.
private_key: PEM-encoded private key (RSA, EC, or RSA-PSS).
key_id: Optional key identifier for the JWS header (kid claim).
algorithm: Signing algorithm (RS256, ES256, PS256, etc.).
Returns:
AgentCardSignature with protected header and signature.
Raises:
jwt.exceptions.InvalidKeyError: If the private key is invalid.
ValueError: If the algorithm is not supported for the key type.
Example:
>>> signature = sign_agent_card(
... agent_card,
... private_key="-----BEGIN PRIVATE KEY-----...",
... key_id="my-key-id",
... )
"""
key_bytes = _normalize_private_key(private_key)
payload = _serialize_agent_card(agent_card)
protected_header: dict[str, Any] = {"typ": "JWS"}
if key_id:
protected_header["kid"] = key_id
jws_token = jwt.api_jws.encode(
payload.encode(),
key_bytes,
algorithm=algorithm,
headers=protected_header,
)
parts = jws_token.split(".")
protected_b64 = parts[0]
signature_b64 = parts[2]
header: dict[str, Any] | None = None
if key_id:
header = {"kid": key_id}
return AgentCardSignature(
protected=protected_b64,
signature=signature_b64,
header=header,
)
def verify_agent_card_signature(
agent_card: AgentCard,
signature: AgentCardSignature,
public_key: str | bytes,
algorithms: list[str] | None = None,
) -> bool:
"""Verify an AgentCard JWS signature.
Validates that the signature was created with the corresponding private key
and that the AgentCard content has not been modified.
Args:
agent_card: The AgentCard to verify.
signature: The AgentCardSignature to validate.
public_key: PEM-encoded public key (RSA, EC, or RSA-PSS).
algorithms: List of allowed algorithms. Defaults to common asymmetric algorithms.
Returns:
True if signature is valid, False otherwise.
Example:
>>> is_valid = verify_agent_card_signature(
... agent_card, signature, public_key="-----BEGIN PUBLIC KEY-----..."
... )
"""
if algorithms is None:
algorithms = [
"RS256",
"RS384",
"RS512",
"ES256",
"ES384",
"ES512",
"PS256",
"PS384",
"PS512",
]
if isinstance(public_key, str):
public_key = public_key.encode()
payload = _serialize_agent_card(agent_card)
payload_b64 = _base64url_encode(payload)
jws_token = f"{signature.protected}.{payload_b64}.{signature.signature}"
try:
jwt.api_jws.decode(
jws_token,
public_key,
algorithms=algorithms,
)
return True
except jwt.InvalidSignatureError:
logger.debug(
"AgentCard signature verification failed",
extra={"reason": "invalid_signature"},
)
return False
except jwt.DecodeError as e:
logger.debug(
"AgentCard signature verification failed",
extra={"reason": "decode_error", "error": str(e)},
)
return False
except jwt.InvalidAlgorithmError as e:
logger.debug(
"AgentCard signature verification failed",
extra={"reason": "algorithm_error", "error": str(e)},
)
return False
def get_key_id_from_signature(signature: AgentCardSignature) -> str | None:
"""Extract the key ID (kid) from an AgentCardSignature.
Checks both the unprotected header and the protected header for the kid claim.
Args:
signature: The AgentCardSignature to extract from.
Returns:
The key ID if present, None otherwise.
"""
if signature.header and "kid" in signature.header:
kid: str = signature.header["kid"]
return kid
try:
protected = signature.protected
padding_needed = 4 - (len(protected) % 4)
if padding_needed != 4:
protected += "=" * padding_needed
protected_json = base64.urlsafe_b64decode(protected).decode()
protected_header: dict[str, Any] = json.loads(protected_json)
return protected_header.get("kid")
except (ValueError, json.JSONDecodeError):
return None
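
An end-to-end round trip over the helpers above, assuming `cryptography` is installed and `agent_card` is any a2a `AgentCard`; the RSA key is a throwaway generated for the demo:

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def sign_and_verify(agent_card) -> bool:
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    priv_pem = key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    )
    pub_pem = key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    # Sign with the default RS256, then validate the detached signature.
    signature = sign_agent_card(agent_card, priv_pem, key_id="demo-key")
    assert get_key_id_from_signature(signature) == "demo-key"
    return verify_agent_card_signature(agent_card, signature, pub_pem)  # -> True
```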

View File

@@ -0,0 +1,339 @@
"""Content type negotiation for A2A protocol.
This module handles negotiation of input/output MIME types between A2A clients
and servers based on AgentCard capabilities.
"""
from __future__ import annotations
from dataclasses import dataclass
from typing import TYPE_CHECKING, Annotated, Final, Literal, cast
from a2a.types import Part
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.a2a_events import A2AContentTypeNegotiatedEvent
if TYPE_CHECKING:
from a2a.types import AgentCard, AgentSkill
TEXT_PLAIN: Literal["text/plain"] = "text/plain"
APPLICATION_JSON: Literal["application/json"] = "application/json"
IMAGE_PNG: Literal["image/png"] = "image/png"
IMAGE_JPEG: Literal["image/jpeg"] = "image/jpeg"
IMAGE_WILDCARD: Literal["image/*"] = "image/*"
APPLICATION_PDF: Literal["application/pdf"] = "application/pdf"
APPLICATION_OCTET_STREAM: Literal["application/octet-stream"] = (
"application/octet-stream"
)
DEFAULT_CLIENT_INPUT_MODES: Final[list[Literal["text/plain", "application/json"]]] = [
TEXT_PLAIN,
APPLICATION_JSON,
]
DEFAULT_CLIENT_OUTPUT_MODES: Final[list[Literal["text/plain", "application/json"]]] = [
TEXT_PLAIN,
APPLICATION_JSON,
]
@dataclass
class NegotiatedContentTypes:
"""Result of content type negotiation."""
input_modes: Annotated[list[str], "Negotiated input MIME types the client can send"]
output_modes: Annotated[
list[str], "Negotiated output MIME types the server will produce"
]
effective_input_modes: Annotated[list[str], "Server's effective input modes"]
effective_output_modes: Annotated[list[str], "Server's effective output modes"]
skill_name: Annotated[
str | None, "Skill name if negotiation was skill-specific"
] = None
class ContentTypeNegotiationError(Exception):
"""Raised when no compatible content types can be negotiated."""
def __init__(
self,
client_input_modes: list[str],
client_output_modes: list[str],
server_input_modes: list[str],
server_output_modes: list[str],
direction: str = "both",
message: str | None = None,
) -> None:
self.client_input_modes = client_input_modes
self.client_output_modes = client_output_modes
self.server_input_modes = server_input_modes
self.server_output_modes = server_output_modes
self.direction = direction
if message is None:
if direction == "input":
message = (
f"No compatible input content types. "
f"Client supports: {client_input_modes}, "
f"Server accepts: {server_input_modes}"
)
elif direction == "output":
message = (
f"No compatible output content types. "
f"Client accepts: {client_output_modes}, "
f"Server produces: {server_output_modes}"
)
else:
message = (
f"No compatible content types. "
f"Input - Client: {client_input_modes}, Server: {server_input_modes}. "
f"Output - Client: {client_output_modes}, Server: {server_output_modes}"
)
super().__init__(message)
def _normalize_mime_type(mime_type: str) -> str:
"""Normalize MIME type for comparison (lowercase, strip whitespace)."""
return mime_type.lower().strip()
def _mime_types_compatible(client_type: str, server_type: str) -> bool:
"""Check if two MIME types are compatible.
Handles wildcards like image/* matching image/png.
"""
client_normalized = _normalize_mime_type(client_type)
server_normalized = _normalize_mime_type(server_type)
if client_normalized == server_normalized:
return True
if "*" in client_normalized or "*" in server_normalized:
client_parts = client_normalized.split("/")
server_parts = server_normalized.split("/")
if len(client_parts) == 2 and len(server_parts) == 2:
type_match = (
client_parts[0] == server_parts[0]
or client_parts[0] == "*"
or server_parts[0] == "*"
)
subtype_match = (
client_parts[1] == server_parts[1]
or client_parts[1] == "*"
or server_parts[1] == "*"
)
return type_match and subtype_match
return False
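
A few concrete cases of the matching rule, runnable against the helper above:

```python
assert _mime_types_compatible("image/png", " IMAGE/PNG")    # normalization
assert _mime_types_compatible("image/*", "image/png")       # subtype wildcard
assert _mime_types_compatible("*/*", "application/pdf")     # full wildcard
assert not _mime_types_compatible("image/*", "text/plain")  # top-level type differs
```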
def _find_compatible_modes(
client_modes: list[str], server_modes: list[str]
) -> list[str]:
"""Find compatible MIME types between client and server.
Returns modes in client preference order.
"""
compatible = []
for client_mode in client_modes:
for server_mode in server_modes:
if _mime_types_compatible(client_mode, server_mode):
if "*" in client_mode and "*" not in server_mode:
if server_mode not in compatible:
compatible.append(server_mode)
else:
if client_mode not in compatible:
compatible.append(client_mode)
break
return compatible
def _get_effective_modes(
agent_card: AgentCard,
skill_name: str | None = None,
) -> tuple[list[str], list[str], AgentSkill | None]:
"""Get effective input/output modes from agent card.
If skill_name is provided and the skill has custom modes, those are used.
Otherwise, falls back to agent card defaults.
"""
skill: AgentSkill | None = None
if skill_name and agent_card.skills:
for s in agent_card.skills:
if s.name == skill_name or s.id == skill_name:
skill = s
break
if skill:
input_modes = (
skill.input_modes if skill.input_modes else agent_card.default_input_modes
)
output_modes = (
skill.output_modes
if skill.output_modes
else agent_card.default_output_modes
)
else:
input_modes = agent_card.default_input_modes
output_modes = agent_card.default_output_modes
return input_modes, output_modes, skill
def negotiate_content_types(
agent_card: AgentCard,
client_input_modes: list[str] | None = None,
client_output_modes: list[str] | None = None,
skill_name: str | None = None,
emit_event: bool = True,
endpoint: str | None = None,
a2a_agent_name: str | None = None,
strict: bool = False,
) -> NegotiatedContentTypes:
"""Negotiate content types between client and server.
Args:
agent_card: The remote agent's card with capability info.
client_input_modes: MIME types the client can send. Defaults to text/plain and application/json.
client_output_modes: MIME types the client can accept. Defaults to text/plain and application/json.
skill_name: Optional skill to use for mode lookup.
emit_event: Whether to emit a content type negotiation event.
endpoint: Agent endpoint (for event metadata).
a2a_agent_name: Agent name (for event metadata).
strict: If True, raises error when no compatible types found.
If False, returns empty lists for incompatible directions.
Returns:
NegotiatedContentTypes with compatible input and output modes.
Raises:
ContentTypeNegotiationError: If strict=True and no compatible types found.
"""
if client_input_modes is None:
client_input_modes = cast(list[str], DEFAULT_CLIENT_INPUT_MODES.copy())
if client_output_modes is None:
client_output_modes = cast(list[str], DEFAULT_CLIENT_OUTPUT_MODES.copy())
server_input_modes, server_output_modes, skill = _get_effective_modes(
agent_card, skill_name
)
compatible_input = _find_compatible_modes(client_input_modes, server_input_modes)
compatible_output = _find_compatible_modes(client_output_modes, server_output_modes)
if strict:
if not compatible_input and not compatible_output:
raise ContentTypeNegotiationError(
client_input_modes=client_input_modes,
client_output_modes=client_output_modes,
server_input_modes=server_input_modes,
server_output_modes=server_output_modes,
)
if not compatible_input:
raise ContentTypeNegotiationError(
client_input_modes=client_input_modes,
client_output_modes=client_output_modes,
server_input_modes=server_input_modes,
server_output_modes=server_output_modes,
direction="input",
)
if not compatible_output:
raise ContentTypeNegotiationError(
client_input_modes=client_input_modes,
client_output_modes=client_output_modes,
server_input_modes=server_input_modes,
server_output_modes=server_output_modes,
direction="output",
)
result = NegotiatedContentTypes(
input_modes=compatible_input,
output_modes=compatible_output,
effective_input_modes=server_input_modes,
effective_output_modes=server_output_modes,
skill_name=skill.name if skill else None,
)
if emit_event:
crewai_event_bus.emit(
None,
A2AContentTypeNegotiatedEvent(
endpoint=endpoint or agent_card.url,
a2a_agent_name=a2a_agent_name or agent_card.name,
skill_name=skill_name,
client_input_modes=client_input_modes,
client_output_modes=client_output_modes,
server_input_modes=server_input_modes,
server_output_modes=server_output_modes,
negotiated_input_modes=compatible_input,
negotiated_output_modes=compatible_output,
negotiation_success=bool(compatible_input and compatible_output),
),
)
return result
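
A sketch of a strict-mode caller (`pick_output_modes` is illustrative): prefer JSON, and fall back to plain text only when the server offers nothing compatible:

```python
def pick_output_modes(agent_card) -> list[str]:
    try:
        negotiated = negotiate_content_types(
            agent_card,
            client_output_modes=["application/json", "text/plain"],
            strict=True,
        )
    except ContentTypeNegotiationError:
        return [TEXT_PLAIN]  # conservative fallback
    return negotiated.output_modes
```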
def validate_content_type(
content_type: str,
allowed_modes: list[str],
) -> bool:
"""Validate that a content type is allowed by a list of modes.
Args:
content_type: The MIME type to validate.
allowed_modes: List of allowed MIME types (may include wildcards).
Returns:
True if content_type is compatible with any allowed mode.
"""
for mode in allowed_modes:
if _mime_types_compatible(content_type, mode):
return True
return False
def get_part_content_type(part: Part) -> str:
"""Extract MIME type from an A2A Part.
Args:
part: A Part object containing TextPart, DataPart, or FilePart.
Returns:
The MIME type string for this part.
"""
root = part.root
if root.kind == "text":
return TEXT_PLAIN
if root.kind == "data":
return APPLICATION_JSON
if root.kind == "file":
return root.file.mime_type or APPLICATION_OCTET_STREAM
return APPLICATION_OCTET_STREAM
def validate_message_parts(
parts: list[Part],
allowed_modes: list[str],
) -> list[str]:
"""Validate that all message parts have allowed content types.
Args:
parts: List of Parts from the incoming message.
allowed_modes: List of allowed MIME types (from default_input_modes).
Returns:
List of invalid content types found (empty if all valid).
"""
invalid_types: list[str] = []
for part in parts:
content_type = get_part_content_type(part)
if not validate_content_type(content_type, allowed_modes):
if content_type not in invalid_types:
invalid_types.append(content_type)
return invalid_types
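
A server-side gate built on this validator (`ensure_parts_allowed` is illustrative), rejecting a message before task execution rather than failing mid-run:

```python
def ensure_parts_allowed(parts: list[Part], allowed_modes: list[str]) -> None:
    invalid = validate_message_parts(parts, allowed_modes)
    if invalid:
        raise ValueError(f"Unsupported content types: {invalid}")
```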

View File

@@ -3,14 +3,18 @@
from __future__ import annotations
import asyncio
from collections.abc import AsyncIterator, MutableMapping
import base64
from collections.abc import AsyncIterator, Callable, MutableMapping
from contextlib import asynccontextmanager
from typing import TYPE_CHECKING, Any, Literal
import logging
from typing import TYPE_CHECKING, Any, Final, Literal
import uuid
from a2a.client import Client, ClientConfig, ClientFactory
from a2a.types import (
AgentCard,
FilePart,
FileWithBytes,
Message,
Part,
PushNotificationConfig as A2APushNotificationConfig,
@@ -20,18 +24,24 @@ from a2a.types import (
import httpx
from pydantic import BaseModel
from crewai.a2a.auth.schemas import APIKeyAuth, HTTPDigestAuth
from crewai.a2a.auth.client_schemes import APIKeyAuth, HTTPDigestAuth
from crewai.a2a.auth.utils import (
_auth_store,
configure_auth_client,
validate_auth_against_agent_card,
)
from crewai.a2a.config import ClientTransportConfig, GRPCClientConfig
from crewai.a2a.extensions.registry import (
ExtensionsMiddleware,
validate_required_extensions,
)
from crewai.a2a.task_helpers import TaskStateResult
from crewai.a2a.types import (
HANDLER_REGISTRY,
HandlerType,
PartsDict,
PartsMetadataDict,
TransportType,
)
from crewai.a2a.updates import (
PollingConfig,
@@ -39,7 +49,20 @@ from crewai.a2a.updates import (
StreamingHandler,
UpdateConfig,
)
from crewai.a2a.utils.agent_card import _afetch_agent_card_cached
from crewai.a2a.utils.agent_card import (
_afetch_agent_card_cached,
_get_tls_verify,
_prepare_auth_headers,
)
from crewai.a2a.utils.content_type import (
DEFAULT_CLIENT_OUTPUT_MODES,
negotiate_content_types,
)
from crewai.a2a.utils.transport import (
NegotiatedTransport,
TransportNegotiationError,
negotiate_transport,
)
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.a2a_events import (
A2AConversationStartedEvent,
@@ -49,10 +72,48 @@ from crewai.events.types.a2a_events import (
)
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from a2a.types import Message
from crewai.a2a.auth.schemas import AuthScheme
from crewai.a2a.auth.client_schemes import ClientAuthScheme
_DEFAULT_TRANSPORT: Final[TransportType] = "JSONRPC"
def _create_file_parts(input_files: dict[str, Any] | None) -> list[Part]:
"""Convert FileInput dictionary to FilePart objects.
Args:
input_files: Dictionary mapping names to FileInput objects.
Returns:
List of Part objects containing FilePart data.
"""
if not input_files:
return []
try:
import crewai_files # noqa: F401
except ImportError:
logger.debug("crewai_files not installed, skipping file parts")
return []
parts: list[Part] = []
for name, file_input in input_files.items():
content_bytes = file_input.read()
content_base64 = base64.b64encode(content_bytes).decode()
file_with_bytes = FileWithBytes(
bytes=content_base64,
mimeType=file_input.content_type,
name=file_input.filename or name,
)
parts.append(Part(root=FilePart(file=file_with_bytes)))
return parts
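
The same `Part` payload built by hand, using the constructor shape the helper relies on (base64 text in `bytes`, camelCase `mimeType`):

```python
import base64

from a2a.types import FilePart, FileWithBytes, Part

content = b"%PDF-1.4 ..."  # any raw bytes
pdf_part = Part(
    root=FilePart(
        file=FileWithBytes(
            bytes=base64.b64encode(content).decode(),
            mimeType="application/pdf",
            name="report.pdf",
        )
    )
)
```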
def get_handler(config: UpdateConfig | None) -> HandlerType:
@@ -71,8 +132,7 @@ def get_handler(config: UpdateConfig | None) -> HandlerType:
def execute_a2a_delegation(
endpoint: str,
transport_protocol: Literal["JSONRPC", "GRPC", "HTTP+JSON"],
auth: AuthScheme | None,
auth: ClientAuthScheme | None,
timeout: int,
task_description: str,
context: str | None = None,
@@ -91,32 +151,24 @@ def execute_a2a_delegation(
from_task: Any | None = None,
from_agent: Any | None = None,
skill_id: str | None = None,
client_extensions: list[str] | None = None,
transport: ClientTransportConfig | None = None,
accepted_output_modes: list[str] | None = None,
input_files: dict[str, Any] | None = None,
) -> TaskStateResult:
"""Execute a task delegation to a remote A2A agent synchronously.
This is the sync wrapper around aexecute_a2a_delegation. For async contexts,
use aexecute_a2a_delegation directly.
WARNING: This function blocks the entire thread by creating and running a new
event loop. Prefer using 'await aexecute_a2a_delegation()' in async contexts
for better performance and resource efficiency.
This is a synchronous wrapper around aexecute_a2a_delegation that creates a
new event loop to run the async implementation. It is provided for compatibility
with synchronous code paths only.
Args:
endpoint: A2A agent endpoint URL (AgentCard URL)
transport_protocol: Optional A2A transport protocol (grpc, jsonrpc, http+json)
auth: Optional AuthScheme for authentication (Bearer, OAuth2, API Key, HTTP Basic/Digest)
timeout: Request timeout in seconds
task_description: The task to delegate
context: Optional context information
context_id: Context ID for correlating messages/tasks
task_id: Specific task identifier
reference_task_ids: List of related task IDs
metadata: Additional metadata (external_id, request_id, etc.)
extensions: Protocol extensions for custom fields
conversation_history: Previous Message objects from conversation
agent_id: Agent identifier for logging
agent_role: Role of the CrewAI agent delegating the task
agent_branch: Optional agent tree branch for logging
response_model: Optional Pydantic model for structured outputs
turn_number: Optional turn number for multi-turn conversations
endpoint: A2A agent endpoint URL.
auth: Optional AuthScheme for authentication.
endpoint: A2A agent endpoint URL (AgentCard URL).
auth: Optional ClientAuthScheme for authentication.
timeout: Request timeout in seconds.
task_description: The task to delegate.
context: Optional context information.
@@ -135,10 +187,27 @@ def execute_a2a_delegation(
from_task: Optional CrewAI Task object for event metadata.
from_agent: Optional CrewAI Agent object for event metadata.
skill_id: Optional skill ID to target a specific agent capability.
client_extensions: A2A protocol extension URIs the client supports.
transport: Transport configuration (preferred, supported transports, gRPC settings).
accepted_output_modes: MIME types the client can accept in responses.
input_files: Optional dictionary of files to send to remote agent.
Returns:
TaskStateResult with status, result/error, history, and agent_card.
Raises:
RuntimeError: If called from an async context with a running event loop.
"""
try:
asyncio.get_running_loop()
raise RuntimeError(
"execute_a2a_delegation() cannot be called from an async context. "
"Use 'await aexecute_a2a_delegation()' instead."
)
except RuntimeError as e:
if "no running event loop" not in str(e).lower():
raise
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
@@ -159,12 +228,15 @@ def execute_a2a_delegation(
agent_role=agent_role,
agent_branch=agent_branch,
response_model=response_model,
transport_protocol=transport_protocol,
turn_number=turn_number,
updates=updates,
from_task=from_task,
from_agent=from_agent,
skill_id=skill_id,
client_extensions=client_extensions,
transport=transport,
accepted_output_modes=accepted_output_modes,
input_files=input_files,
)
)
finally:
@@ -176,8 +248,7 @@ def execute_a2a_delegation(
async def aexecute_a2a_delegation(
endpoint: str,
transport_protocol: Literal["JSONRPC", "GRPC", "HTTP+JSON"],
auth: AuthScheme | None,
auth: ClientAuthScheme | None,
timeout: int,
task_description: str,
context: str | None = None,
@@ -196,6 +267,10 @@ async def aexecute_a2a_delegation(
from_task: Any | None = None,
from_agent: Any | None = None,
skill_id: str | None = None,
client_extensions: list[str] | None = None,
transport: ClientTransportConfig | None = None,
accepted_output_modes: list[str] | None = None,
input_files: dict[str, Any] | None = None,
) -> TaskStateResult:
"""Execute a task delegation to a remote A2A agent asynchronously.
@@ -203,25 +278,8 @@ async def aexecute_a2a_delegation(
in an async context (e.g., with Crew.akickoff() or agent.aexecute_task()).
Args:
endpoint: A2A agent endpoint URL
transport_protocol: Optional A2A transport protocol (grpc, jsonrpc, http+json)
auth: Optional AuthScheme for authentication
timeout: Request timeout in seconds
task_description: Task to delegate
context: Optional context
context_id: Context ID for correlation
task_id: Specific task identifier
reference_task_ids: Related task IDs
metadata: Additional metadata
extensions: Protocol extensions
conversation_history: Previous Message objects
turn_number: Current turn number
agent_branch: Agent tree branch for logging
agent_id: Agent identifier for logging
agent_role: Agent role for logging
response_model: Optional Pydantic model for structured outputs
endpoint: A2A agent endpoint URL.
auth: Optional AuthScheme for authentication.
auth: Optional ClientAuthScheme for authentication.
timeout: Request timeout in seconds.
task_description: The task to delegate.
context: Optional context information.
@@ -240,6 +298,10 @@ async def aexecute_a2a_delegation(
from_task: Optional CrewAI Task object for event metadata.
from_agent: Optional CrewAI Agent object for event metadata.
skill_id: Optional skill ID to target a specific agent capability.
client_extensions: A2A protocol extension URIs the client supports.
transport: Transport configuration (preferred, supported transports, gRPC settings).
accepted_output_modes: MIME types the client can accept in responses.
input_files: Optional dictionary of files to send to remote agent.
Returns:
TaskStateResult with status, result/error, history, and agent_card.
@@ -271,10 +333,13 @@ async def aexecute_a2a_delegation(
agent_role=agent_role,
response_model=response_model,
updates=updates,
transport_protocol=transport_protocol,
from_task=from_task,
from_agent=from_agent,
skill_id=skill_id,
client_extensions=client_extensions,
transport=transport,
accepted_output_modes=accepted_output_modes,
input_files=input_files,
)
except Exception as e:
crewai_event_bus.emit(
@@ -294,7 +359,7 @@ async def aexecute_a2a_delegation(
)
raise
agent_card_data: dict[str, Any] = result.get("agent_card") or {}
agent_card_data = result.get("agent_card")
crewai_event_bus.emit(
agent_branch,
A2ADelegationCompletedEvent(
@@ -306,7 +371,7 @@ async def aexecute_a2a_delegation(
endpoint=endpoint,
a2a_agent_name=result.get("a2a_agent_name"),
agent_card=agent_card_data,
provider=agent_card_data.get("provider"),
provider=agent_card_data.get("provider") if agent_card_data else None,
metadata=metadata,
extensions=list(extensions.keys()) if extensions else None,
from_task=from_task,
@@ -319,8 +384,7 @@ async def aexecute_a2a_delegation(
async def _aexecute_a2a_delegation_impl(
endpoint: str,
transport_protocol: Literal["JSONRPC", "GRPC", "HTTP+JSON"],
auth: AuthScheme | None,
auth: ClientAuthScheme | None,
timeout: int,
task_description: str,
context: str | None,
@@ -340,8 +404,14 @@ async def _aexecute_a2a_delegation_impl(
from_task: Any | None = None,
from_agent: Any | None = None,
skill_id: str | None = None,
client_extensions: list[str] | None = None,
transport: ClientTransportConfig | None = None,
accepted_output_modes: list[str] | None = None,
input_files: dict[str, Any] | None = None,
) -> TaskStateResult:
"""Internal async implementation of A2A delegation."""
if transport is None:
transport = ClientTransportConfig()
if auth:
auth_data = auth.model_dump_json(
exclude={
@@ -351,22 +421,70 @@ async def _aexecute_a2a_delegation_impl(
"_authorization_callback",
}
)
auth_hash = hash((type(auth).__name__, auth_data))
auth_hash = _auth_store.compute_key(type(auth).__name__, auth_data)
else:
auth_hash = 0
_auth_store[auth_hash] = auth
auth_hash = _auth_store.compute_key("none", endpoint)
_auth_store.set(auth_hash, auth)
agent_card = await _afetch_agent_card_cached(
endpoint=endpoint, auth_hash=auth_hash, timeout=timeout
)
validate_auth_against_agent_card(agent_card, auth)
headers: MutableMapping[str, str] = {}
if auth:
async with httpx.AsyncClient(timeout=timeout) as temp_auth_client:
if isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
configure_auth_client(auth, temp_auth_client)
headers = await auth.apply_auth(temp_auth_client, {})
unsupported_exts = validate_required_extensions(agent_card, client_extensions)
if unsupported_exts:
ext_uris = [ext.uri for ext in unsupported_exts]
raise ValueError(
f"Agent requires extensions not supported by client: {ext_uris}"
)
negotiated: NegotiatedTransport | None = None
effective_transport: TransportType = transport.preferred or _DEFAULT_TRANSPORT
effective_url = endpoint
client_transports: list[str] = (
list(transport.supported) if transport.supported else [_DEFAULT_TRANSPORT]
)
try:
negotiated = negotiate_transport(
agent_card=agent_card,
client_supported_transports=client_transports,
client_preferred_transport=transport.preferred,
endpoint=endpoint,
a2a_agent_name=agent_card.name,
)
effective_transport = negotiated.transport # type: ignore[assignment]
effective_url = negotiated.url
except TransportNegotiationError as e:
logger.warning(
"Transport negotiation failed, using fallback",
extra={
"error": str(e),
"fallback_transport": effective_transport,
"fallback_url": effective_url,
"endpoint": endpoint,
"client_transports": client_transports,
"server_transports": [
iface.transport for iface in agent_card.additional_interfaces or []
]
+ [agent_card.preferred_transport or "JSONRPC"],
},
)
effective_output_modes = accepted_output_modes or DEFAULT_CLIENT_OUTPUT_MODES.copy()
content_negotiated = negotiate_content_types(
agent_card=agent_card,
client_output_modes=accepted_output_modes,
skill_name=skill_id,
endpoint=endpoint,
a2a_agent_name=agent_card.name,
)
if content_negotiated.output_modes:
effective_output_modes = content_negotiated.output_modes
headers, _ = await _prepare_auth_headers(auth, timeout)
a2a_agent_name = None
if agent_card.name:
@@ -441,10 +559,13 @@ async def _aexecute_a2a_delegation_impl(
if skill_id:
message_metadata["skill_id"] = skill_id
parts_list: list[Part] = [Part(root=TextPart(**parts))]
parts_list.extend(_create_file_parts(input_files))
message = Message(
role=Role.user,
message_id=str(uuid.uuid4()),
parts=[Part(root=TextPart(**parts))],
parts=parts_list,
context_id=context_id,
task_id=task_id,
reference_task_ids=reference_task_ids,
@@ -513,15 +634,22 @@ async def _aexecute_a2a_delegation_impl(
use_streaming = not use_polling and push_config_for_client is None
client_agent_card = agent_card
if effective_url != agent_card.url:
client_agent_card = agent_card.model_copy(update={"url": effective_url})
async with _create_a2a_client(
agent_card=agent_card,
transport_protocol=transport_protocol,
agent_card=client_agent_card,
transport_protocol=effective_transport,
timeout=timeout,
headers=headers,
streaming=use_streaming,
auth=auth,
use_polling=use_polling,
push_notification_config=push_config_for_client,
client_extensions=client_extensions,
accepted_output_modes=effective_output_modes, # type: ignore[arg-type]
grpc_config=transport.grpc,
) as client:
result = await handler.execute(
client=client,
@@ -535,6 +663,245 @@ async def _aexecute_a2a_delegation_impl(
return result
def _normalize_grpc_metadata(
metadata: tuple[tuple[str, str], ...] | None,
) -> tuple[tuple[str, str], ...] | None:
"""Lowercase all gRPC metadata keys.
gRPC requires lowercase metadata keys, but some libraries (like the A2A SDK)
use mixed-case headers like 'X-A2A-Extensions'. This normalizes them.
"""
if metadata is None:
return None
return tuple((key.lower(), value) for key, value in metadata)
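
Concretely:

```python
assert _normalize_grpc_metadata((("X-A2A-Extensions", "urn:example:ext"),)) == (
    ("x-a2a-extensions", "urn:example:ext"),
)
assert _normalize_grpc_metadata(None) is None
```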
def _create_grpc_interceptors(
auth_metadata: list[tuple[str, str]] | None = None,
) -> list[Any]:
"""Create gRPC interceptors for metadata normalization and auth injection.
Args:
auth_metadata: Optional auth metadata to inject into all calls.
Used for insecure channels that need auth (non-localhost without TLS).
Returns:
Interceptors that lowercase metadata keys for gRPC compatibility.
Must be called after grpc is imported.
"""
import grpc.aio # type: ignore[import-untyped]
def _merge_metadata(
existing: tuple[tuple[str, str], ...] | None,
auth: list[tuple[str, str]] | None,
) -> tuple[tuple[str, str], ...] | None:
"""Merge existing metadata with auth metadata and normalize keys."""
merged: list[tuple[str, str]] = []
if existing:
merged.extend(existing)
if auth:
merged.extend(auth)
if not merged:
return None
return tuple((key.lower(), value) for key, value in merged)
def _inject_metadata(client_call_details: Any) -> Any:
"""Inject merged metadata into call details."""
return client_call_details._replace(
metadata=_merge_metadata(client_call_details.metadata, auth_metadata)
)
class MetadataUnaryUnary(grpc.aio.UnaryUnaryClientInterceptor): # type: ignore[misc,no-any-unimported]
"""Interceptor for unary-unary calls that injects auth metadata."""
async def intercept_unary_unary( # type: ignore[no-untyped-def]
self, continuation, client_call_details, request
):
"""Intercept unary-unary call and inject metadata."""
return await continuation(_inject_metadata(client_call_details), request)
class MetadataUnaryStream(grpc.aio.UnaryStreamClientInterceptor): # type: ignore[misc,no-any-unimported]
"""Interceptor for unary-stream calls that injects auth metadata."""
async def intercept_unary_stream( # type: ignore[no-untyped-def]
self, continuation, client_call_details, request
):
"""Intercept unary-stream call and inject metadata."""
return await continuation(_inject_metadata(client_call_details), request)
class MetadataStreamUnary(grpc.aio.StreamUnaryClientInterceptor): # type: ignore[misc,no-any-unimported]
"""Interceptor for stream-unary calls that injects auth metadata."""
async def intercept_stream_unary( # type: ignore[no-untyped-def]
self, continuation, client_call_details, request_iterator
):
"""Intercept stream-unary call and inject metadata."""
return await continuation(
_inject_metadata(client_call_details), request_iterator
)
class MetadataStreamStream(grpc.aio.StreamStreamClientInterceptor): # type: ignore[misc,no-any-unimported]
"""Interceptor for stream-stream calls that injects auth metadata."""
async def intercept_stream_stream( # type: ignore[no-untyped-def]
self, continuation, client_call_details, request_iterator
):
"""Intercept stream-stream call and inject metadata."""
return await continuation(
_inject_metadata(client_call_details), request_iterator
)
return [
MetadataUnaryUnary(),
MetadataUnaryStream(),
MetadataStreamUnary(),
MetadataStreamStream(),
]
def _create_grpc_channel_factory(
grpc_config: GRPCClientConfig,
auth: ClientAuthScheme | None = None,
) -> Callable[[str], Any]:
"""Create a gRPC channel factory with the given configuration.
Args:
grpc_config: gRPC client configuration with channel options.
auth: Optional ClientAuthScheme for TLS and auth configuration.
Returns:
A callable that creates gRPC channels from URLs.
"""
try:
import grpc
except ImportError as e:
raise ImportError(
"gRPC transport requires grpcio. Install with: pip install a2a-sdk[grpc]"
) from e
auth_metadata: list[tuple[str, str]] = []
if auth is not None:
from crewai.a2a.auth.client_schemes import (
APIKeyAuth,
BearerTokenAuth,
HTTPBasicAuth,
HTTPDigestAuth,
OAuth2AuthorizationCode,
OAuth2ClientCredentials,
)
if isinstance(auth, HTTPDigestAuth):
raise ValueError(
"HTTPDigestAuth is not supported with gRPC transport. "
"Digest authentication requires HTTP challenge-response flow. "
"Use BearerTokenAuth, HTTPBasicAuth, APIKeyAuth (header), or OAuth2 instead."
)
if isinstance(auth, APIKeyAuth) and auth.location in ("query", "cookie"):
raise ValueError(
f"APIKeyAuth with location='{auth.location}' is not supported with gRPC transport. "
"gRPC only supports header-based authentication. "
"Use APIKeyAuth with location='header' instead."
)
if isinstance(auth, BearerTokenAuth):
auth_metadata.append(("authorization", f"Bearer {auth.token}"))
elif isinstance(auth, HTTPBasicAuth):
import base64
basic_credentials = f"{auth.username}:{auth.password}"
encoded = base64.b64encode(basic_credentials.encode()).decode()
auth_metadata.append(("authorization", f"Basic {encoded}"))
elif isinstance(auth, APIKeyAuth) and auth.location == "header":
header_name = auth.name.lower()
auth_metadata.append((header_name, auth.api_key))
elif isinstance(auth, (OAuth2ClientCredentials, OAuth2AuthorizationCode)):
if auth._access_token:
auth_metadata.append(("authorization", f"Bearer {auth._access_token}"))
def factory(url: str) -> Any:
"""Create a gRPC channel for the given URL."""
target = url
use_tls = False
if url.startswith("grpcs://"):
target = url[8:]
use_tls = True
elif url.startswith("grpc://"):
target = url[7:]
elif url.startswith("https://"):
target = url[8:]
use_tls = True
elif url.startswith("http://"):
target = url[7:]
options: list[tuple[str, Any]] = []
if grpc_config.max_send_message_length is not None:
options.append(
("grpc.max_send_message_length", grpc_config.max_send_message_length)
)
if grpc_config.max_receive_message_length is not None:
options.append(
(
"grpc.max_receive_message_length",
grpc_config.max_receive_message_length,
)
)
if grpc_config.keepalive_time_ms is not None:
options.append(("grpc.keepalive_time_ms", grpc_config.keepalive_time_ms))
if grpc_config.keepalive_timeout_ms is not None:
options.append(
("grpc.keepalive_timeout_ms", grpc_config.keepalive_timeout_ms)
)
channel_credentials = None
if auth and hasattr(auth, "tls") and auth.tls:
channel_credentials = auth.tls.get_grpc_credentials()
elif use_tls:
channel_credentials = grpc.ssl_channel_credentials()
if channel_credentials and auth_metadata:
class AuthMetadataPlugin(grpc.AuthMetadataPlugin): # type: ignore[misc,no-any-unimported]
"""gRPC auth metadata plugin that adds auth headers as metadata."""
def __init__(self, metadata: list[tuple[str, str]]) -> None:
self._metadata = tuple(metadata)
def __call__( # type: ignore[no-any-unimported]
self,
context: grpc.AuthMetadataContext,
callback: grpc.AuthMetadataPluginCallback,
) -> None:
callback(self._metadata, None)
call_creds = grpc.metadata_call_credentials(
AuthMetadataPlugin(auth_metadata)
)
credentials = grpc.composite_channel_credentials(
channel_credentials, call_creds
)
interceptors = _create_grpc_interceptors()
return grpc.aio.secure_channel(
target, credentials, options=options or None, interceptors=interceptors
)
if channel_credentials:
interceptors = _create_grpc_interceptors()
return grpc.aio.secure_channel(
target,
channel_credentials,
options=options or None,
interceptors=interceptors,
)
interceptors = _create_grpc_interceptors(
auth_metadata=auth_metadata if auth_metadata else None
)
return grpc.aio.insecure_channel(
target, options=options or None, interceptors=interceptors
)
return factory
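
The target derivation inside `factory`, pulled out for a quick check (gRPC targets are bare `host:port`; `grpcs://` and `https://` imply TLS):

```python
def _split_target(url: str) -> tuple[str, bool]:
    # Reproduces the prefix handling in `factory` above.
    for prefix, tls in (("grpcs://", True), ("grpc://", False),
                        ("https://", True), ("http://", False)):
        if url.startswith(prefix):
            return url[len(prefix):], tls
    return url, False

assert _split_target("grpcs://agent.example.com:443") == ("agent.example.com:443", True)
assert _split_target("grpc://localhost:50051") == ("localhost:50051", False)
assert _split_target("agent.example.com:50051") == ("agent.example.com:50051", False)
```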
@asynccontextmanager
async def _create_a2a_client(
agent_card: AgentCard,
@@ -542,9 +909,12 @@ async def _create_a2a_client(
timeout: int,
headers: MutableMapping[str, str],
streaming: bool,
auth: AuthScheme | None = None,
auth: ClientAuthScheme | None = None,
use_polling: bool = False,
push_notification_config: PushNotificationConfig | None = None,
client_extensions: list[str] | None = None,
accepted_output_modes: list[str] | None = None,
grpc_config: GRPCClientConfig | None = None,
) -> AsyncIterator[Client]:
"""Create and configure an A2A client.
@@ -554,16 +924,21 @@ async def _create_a2a_client(
timeout: Request timeout in seconds.
headers: HTTP headers (already with auth applied).
streaming: Enable streaming responses.
auth: Optional AuthScheme for client configuration.
auth: Optional ClientAuthScheme for client configuration.
use_polling: Enable polling mode.
push_notification_config: Optional push notification config.
client_extensions: A2A protocol extension URIs to declare support for.
accepted_output_modes: MIME types the client can accept in responses.
grpc_config: Optional gRPC client configuration.
Yields:
Configured A2A client instance.
"""
verify = _get_tls_verify(auth)
async with httpx.AsyncClient(
timeout=timeout,
headers=headers,
verify=verify,
) as httpx_client:
if auth and isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
configure_auth_client(auth, httpx_client)
@@ -579,15 +954,27 @@ async def _create_a2a_client(
)
)
grpc_channel_factory = None
if transport_protocol == "GRPC":
grpc_channel_factory = _create_grpc_channel_factory(
grpc_config or GRPCClientConfig(),
auth=auth,
)
config = ClientConfig(
httpx_client=httpx_client,
supported_transports=[transport_protocol],
streaming=streaming and not use_polling,
polling=use_polling,
accepted_output_modes=["application/json"],
accepted_output_modes=accepted_output_modes or DEFAULT_CLIENT_OUTPUT_MODES, # type: ignore[arg-type]
push_notification_configs=push_configs,
grpc_channel_factory=grpc_channel_factory,
)
factory = ClientFactory(config)
client = factory.create(agent_card)
if client_extensions:
await client.add_request_middleware(ExtensionsMiddleware(client_extensions))
yield client

View File

@@ -0,0 +1,131 @@
"""Structured JSON logging utilities for A2A module."""
from __future__ import annotations
from contextvars import ContextVar
from datetime import datetime, timezone
import json
import logging
from typing import Any
_log_context: ContextVar[dict[str, Any] | None] = ContextVar(
"log_context", default=None
)
class JSONFormatter(logging.Formatter):
"""JSON formatter for structured logging.
Outputs logs as JSON with consistent fields for log aggregators.
"""
def format(self, record: logging.LogRecord) -> str:
"""Format log record as JSON string."""
log_data: dict[str, Any] = {
"timestamp": datetime.now(timezone.utc).isoformat(),
"level": record.levelname,
"logger": record.name,
"message": record.getMessage(),
}
if record.exc_info:
log_data["exception"] = self.formatException(record.exc_info)
context = _log_context.get()
if context is not None:
log_data.update(context)
if hasattr(record, "task_id"):
log_data["task_id"] = record.task_id
if hasattr(record, "context_id"):
log_data["context_id"] = record.context_id
if hasattr(record, "agent"):
log_data["agent"] = record.agent
if hasattr(record, "endpoint"):
log_data["endpoint"] = record.endpoint
if hasattr(record, "extension"):
log_data["extension"] = record.extension
if hasattr(record, "error"):
log_data["error"] = record.error
for key, value in record.__dict__.items():
if key.startswith("_") or key in (
"name",
"msg",
"args",
"created",
"filename",
"funcName",
"levelname",
"levelno",
"lineno",
"module",
"msecs",
"pathname",
"process",
"processName",
"relativeCreated",
"stack_info",
"exc_info",
"exc_text",
"thread",
"threadName",
"taskName",
"message",
):
continue
if key not in log_data:
log_data[key] = value
return json.dumps(log_data, default=str)
class LogContext:
"""Context manager for adding fields to all logs within a scope.
Example:
with LogContext(task_id="abc", context_id="xyz"):
logger.info("Processing task") # Includes task_id and context_id
"""
def __init__(self, **fields: Any) -> None:
self._fields = fields
self._token: Any = None
def __enter__(self) -> LogContext:
current = _log_context.get() or {}
new_context = {**current, **self._fields}
self._token = _log_context.set(new_context)
return self
def __exit__(self, *args: Any) -> None:
_log_context.reset(self._token)
def configure_json_logging(logger_name: str = "crewai.a2a") -> None:
"""Configure JSON logging for the A2A module.
Args:
logger_name: Logger name to configure.
"""
logger = logging.getLogger(logger_name)
for handler in logger.handlers[:]:
logger.removeHandler(handler)
handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
logger.addHandler(handler)
def get_logger(name: str) -> logging.Logger:
"""Get a logger configured for structured JSON output.
Args:
name: Logger name.
Returns:
Configured logger instance.
"""
return logging.getLogger(name)
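
Typical wiring, runnable as-is: switch the A2A logger tree to JSON output, then scope request fields onto every record emitted inside the block:

```python
import logging

configure_json_logging("crewai.a2a")
log = logging.getLogger("crewai.a2a.demo")

with LogContext(task_id="task-123", context_id="ctx-456"):
    # Both the context fields and the `extra` field land in the JSON record.
    log.warning("stream interrupted", extra={"endpoint": "https://agent.example.com"})
```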

View File

@@ -7,26 +7,40 @@ import base64
from collections.abc import Callable, Coroutine
from datetime import datetime
from functools import wraps
import json
import logging
import os
from typing import TYPE_CHECKING, Any, ParamSpec, TypeVar, cast
from typing import TYPE_CHECKING, Any, ParamSpec, TypeVar, TypedDict, cast
from urllib.parse import urlparse
from a2a.server.agent_execution import RequestContext
from a2a.server.events import EventQueue
from a2a.types import (
Artifact,
FileWithBytes,
FileWithUri,
InternalError,
InvalidParamsError,
Message,
Part,
Task as A2ATask,
TaskState,
TaskStatus,
TaskStatusUpdateEvent,
)
from a2a.utils import new_agent_text_message, new_text_artifact
from a2a.utils import (
get_data_parts,
get_file_parts,
new_agent_text_message,
new_data_artifact,
new_text_artifact,
)
from a2a.utils.errors import ServerError
from aiocache import SimpleMemoryCache, caches # type: ignore[import-untyped]
from pydantic import BaseModel
from crewai.a2a.utils.agent_card import _get_server_config
from crewai.a2a.utils.content_type import validate_message_parts
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.a2a_events import (
A2AServerTaskCanceledEvent,
@@ -35,9 +49,11 @@ from crewai.events.types.a2a_events import (
A2AServerTaskStartedEvent,
)
from crewai.task import Task
from crewai.utilities.pydantic_schema_utils import create_model_from_schema

if TYPE_CHECKING:
from crewai.a2a.extensions.server import ExtensionContext, ServerExtensionRegistry
from crewai.agent import Agent
@@ -47,7 +63,17 @@ P = ParamSpec("P")
T = TypeVar("T")


def _parse_redis_url(url: str) -> dict[str, Any]:
class RedisCacheConfig(TypedDict, total=False):
    """Configuration for aiocache Redis backend."""

    cache: str
    endpoint: str
    port: int
    db: int
    password: str


def _parse_redis_url(url: str) -> RedisCacheConfig:
"""Parse a Redis URL into aiocache configuration.
Args:
@@ -56,9 +82,8 @@ def _parse_redis_url(url: str) -> dict[str, Any]:
Returns:
Configuration dict for aiocache.RedisCache.
"""
parsed = urlparse(url)
config: dict[str, Any] = {
config: RedisCacheConfig = {
"cache": "aiocache.RedisCache",
"endpoint": parsed.hostname or "localhost",
"port": parsed.port or 6379,
@@ -138,7 +163,10 @@ def cancellable(
if message["type"] == "message":
return True
except (OSError, ConnectionError) as e:
logger.warning("Cancel watcher error for task_id=%s: %s", task_id, e)
logger.warning(
"Cancel watcher Redis error, falling back to polling",
extra={"task_id": task_id, "error": str(e)},
)
return await poll_for_cancel()
return False
@@ -166,7 +194,98 @@ def cancellable(
return wrapper
@cancellable


def _convert_a2a_files_to_file_inputs(
    a2a_files: list[FileWithBytes | FileWithUri],
) -> dict[str, Any]:
    """Convert a2a file types to crewai FileInput dict.

    Args:
        a2a_files: List of FileWithBytes or FileWithUri from a2a SDK.

    Returns:
        Dictionary mapping file names to FileInput objects.
    """
    try:
        from crewai_files import File, FileBytes
    except ImportError:
        logger.debug("crewai_files not installed, returning empty file dict")
        return {}

    file_dict: dict[str, Any] = {}
    for idx, a2a_file in enumerate(a2a_files):
        if isinstance(a2a_file, FileWithBytes):
            file_bytes = base64.b64decode(a2a_file.bytes)
            name = a2a_file.name or f"file_{idx}"
            file_source = FileBytes(data=file_bytes, filename=a2a_file.name)
            file_dict[name] = File(source=file_source)
        elif isinstance(a2a_file, FileWithUri):
            name = a2a_file.name or f"file_{idx}"
            file_dict[name] = File(source=a2a_file.uri)
    return file_dict
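
# Sketch of the conversion above (assumes crewai_files is installed; field
# names on FileWithBytes/FileWithUri follow the a2a SDK, where bytes are
# base64-encoded strings).
import base64
from a2a.types import FileWithBytes, FileWithUri

files = _convert_a2a_files_to_file_inputs([
    FileWithBytes(bytes=base64.b64encode(b"hello").decode(), name="greeting.txt"),
    FileWithUri(uri="https://example.com/report.pdf", name="report.pdf"),
])
# -> {"greeting.txt": File(source=FileBytes(...)), "report.pdf": File(source="https://...")}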
def _extract_response_schema(parts: list[Part]) -> dict[str, Any] | None:
    """Extract response schema from message parts metadata.

    The client may include a JSON schema in TextPart metadata to specify
    the expected response format (see delegation.py line 463).

    Args:
        parts: List of message parts.

    Returns:
        JSON schema dict if found, None otherwise.
    """
    for part in parts:
        if part.root.kind == "text" and part.root.metadata:
            schema = part.root.metadata.get("schema")
            if schema and isinstance(schema, dict):
                return schema  # type: ignore[no-any-return]
    return None
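
# Example of the metadata convention scanned for above: a TextPart whose
# metadata carries a JSON schema under the "schema" key (the schema payload
# itself is illustrative).
from a2a.types import Part, TextPart

part = Part(
    root=TextPart(
        text="Summarize the quarterly report",
        metadata={"schema": {"type": "object", "properties": {"summary": {"type": "string"}}}},
    )
)
assert _extract_response_schema([part]) == part.root.metadata["schema"]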
def _create_result_artifact(
    result: Any,
    task_id: str,
) -> Artifact:
    """Create artifact from task result, using DataPart for structured data.

    Args:
        result: The task execution result.
        task_id: The task ID for naming the artifact.

    Returns:
        Artifact with appropriate part type (DataPart for dict/Pydantic,
        TextPart for strings).
    """
    artifact_name = f"result_{task_id}"
    if isinstance(result, dict):
        return new_data_artifact(artifact_name, result)
    if isinstance(result, BaseModel):
        return new_data_artifact(artifact_name, result.model_dump())
    return new_text_artifact(artifact_name, str(result))
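
# How the dispatch above resolves for each result shape (Pydantic models are
# dumped to dicts so they travel as DataParts rather than stringified text):
from pydantic import BaseModel

class _Summary(BaseModel):
    text: str

dict_artifact = _create_result_artifact({"score": 0.9}, "t1")      # DataPart artifact
model_artifact = _create_result_artifact(_Summary(text="ok"), "t2")  # DataPart artifact
text_artifact = _create_result_artifact("plain answer", "t3")        # TextPart artifact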
def _build_task_description(
    user_message: str,
    structured_inputs: list[dict[str, Any]],
) -> str:
    """Build task description including structured data if present.

    Args:
        user_message: The original user message text.
        structured_inputs: List of structured data from DataParts.

    Returns:
        Task description with structured data appended if present.
    """
    if not structured_inputs:
        return user_message

    structured_json = json.dumps(structured_inputs, indent=2)
    return f"{user_message}\n\nStructured Data:\n{structured_json}"
async def execute(
agent: Agent,
context: RequestContext,
@@ -178,15 +297,54 @@ async def execute(
agent: The CrewAI agent to execute the task.
context: The A2A request context containing the user's message.
event_queue: The event queue for sending responses back.
TODOs:
* need to impl both of structured output and file inputs, depends on `file_inputs` for
`crewai.task.Task`, pass the below two to Task. both utils in `a2a.utils.parts`
* structured outputs ingestion, `structured_inputs = get_data_parts(parts=context.message.parts)`
* file inputs ingestion, `file_inputs = get_file_parts(parts=context.message.parts)`
"""
await _execute_impl(agent, context, event_queue, None, None)
@cancellable
async def _execute_impl(
agent: Agent,
context: RequestContext,
event_queue: EventQueue,
extension_registry: ServerExtensionRegistry | None,
extension_context: ExtensionContext | None,
) -> None:
"""Internal implementation for task execution with optional extensions."""
server_config = _get_server_config(agent)
if context.message and context.message.parts and server_config:
allowed_modes = server_config.default_input_modes
invalid_types = validate_message_parts(context.message.parts, allowed_modes)
if invalid_types:
raise ServerError(
InvalidParamsError(
message=f"Unsupported content type(s): {', '.join(invalid_types)}. "
f"Supported: {', '.join(allowed_modes)}"
)
)
if extension_registry and extension_context:
await extension_registry.invoke_on_request(extension_context)
user_message = context.get_user_input()
response_model: type[BaseModel] | None = None
structured_inputs: list[dict[str, Any]] = []
a2a_files: list[FileWithBytes | FileWithUri] = []
if context.message and context.message.parts:
schema = _extract_response_schema(context.message.parts)
if schema:
try:
response_model = create_model_from_schema(schema)
except Exception as e:
logger.debug(
"Failed to create response model from schema",
extra={"error": str(e), "schema_title": schema.get("title")},
)
structured_inputs = get_data_parts(context.message.parts)
a2a_files = get_file_parts(context.message.parts)
task_id = context.task_id
context_id = context.context_id
if task_id is None or context_id is None:
@@ -203,9 +361,11 @@ async def execute(
raise ServerError(InvalidParamsError(message=msg)) from None
task = Task(
description=user_message,
description=_build_task_description(user_message, structured_inputs),
expected_output="Response to the user's request",
agent=agent,
response_model=response_model,
input_files=_convert_a2a_files_to_file_inputs(a2a_files),
)
crewai_event_bus.emit(
@@ -220,6 +380,10 @@ async def execute(
try:
result = await agent.aexecute_task(task=task, tools=agent.tools)
if extension_registry and extension_context:
result = await extension_registry.invoke_on_response(
extension_context, result
)
result_str = str(result)
history: list[Message] = [context.message] if context.message else []
history.append(new_agent_text_message(result_str, context_id, task_id))
@@ -227,8 +391,8 @@ async def execute(
A2ATask(
id=task_id,
context_id=context_id,
status=TaskStatus(state=TaskState.input_required),
artifacts=[new_text_artifact(result_str, f"result_{task_id}")],
status=TaskStatus(state=TaskState.completed),
artifacts=[_create_result_artifact(result, task_id)],
history=history,
)
)
@@ -269,6 +433,27 @@ async def execute(
    ) from e


async def execute_with_extensions(
    agent: Agent,
    context: RequestContext,
    event_queue: EventQueue,
    extension_registry: ServerExtensionRegistry,
    extension_context: ExtensionContext,
) -> None:
    """Execute an A2A task with extension hooks.

    Args:
        agent: The CrewAI agent to execute the task.
        context: The A2A request context containing the user's message.
        event_queue: The event queue for sending responses back.
        extension_registry: Registry of server extensions.
        extension_context: Context for extension hooks.
    """
    await _execute_impl(
        agent, context, event_queue, extension_registry, extension_context
    )


async def cancel(
context: RequestContext,
event_queue: EventQueue,

Some files were not shown because too many files have changed in this diff.