Compare commits

...

112 Commits
1.2.0 ... main

Author SHA1 Message Date
Lorenze Jay
8945457883 Lorenze/metrics for human feedback flows (#4188)
Some checks are pending
CodeQL Advanced / Analyze (actions) (push) Waiting to run
CodeQL Advanced / Analyze (python) (push) Waiting to run
Notify Downstream / notify-downstream (push) Waiting to run
* measuring human feedback feat

* add some tests
2026-01-06 16:12:34 -08:00
Mike Plachta
b787d7e591 Update webhook-streaming.mdx (#4184)
2026-01-06 09:09:48 -08:00
Lorenze Jay
25c0c030ce adjust aop to amp docs lang (#4179)
* adjust aop to amp docs lang

* whoop no print
2026-01-05 15:30:21 -08:00
Greyson LaLonde
f8deb0fd18 feat: add streaming tool call events; fix provider id tracking; add tests and cassettes
Adds support for streaming tool call events with test coverage, fixes tool-stream ID tracking (including OpenAI-style tracking for Azure), improves Gemini tool calling + streaming tests, adds Anthropic tests, generates Azure cassettes, and fixes Azure cassette URIs.
2026-01-05 14:33:36 -05:00
Lorenze Jay
f3c17a249b feat: Introduce production-ready Flows and Crews architecture with ne… (#4003)
* feat: Introduce production-ready Flows and Crews architecture with new runner and updated documentation across multiple languages.

* ko and pt-br for tracing missing links

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-12-31 14:29:42 -08:00
Lorenze Jay
467ee2917e Improve EventListener and TraceCollectionListener for improved event… (#4160)
* Refactor EventListener and TraceCollectionListener for improved event handling

- Removed unused threading and method branches from EventListener to simplify the code.
- Updated event handling methods in EventListener to use new formatter methods for better clarity and consistency.
- Refactored TraceCollectionListener to eliminate unnecessary parameters in formatter calls, enhancing readability.
- Simplified ConsoleFormatter by removing outdated tree management methods and focusing on panel-based output for status updates.
- Enhanced ToolUsage to track run attempts for better tool usage metrics.

* clearer for knowledge retrieval and dropped some redundancies

* Refactor EventListener and ConsoleFormatter for improved clarity and consistency

- Removed the MCPToolExecutionCompletedEvent handler from EventListener to streamline event processing.
- Updated ConsoleFormatter to enhance output formatting by adding line breaks for better readability in status content.
- Renamed status messages for MCP Tool execution to provide clearer context during tool operations.

* fix run attempt incrementation

* task name consistency

* memory events consistency

* ensure hitl works

* linting
2025-12-30 11:36:31 -08:00
Lorenze Jay
b9dd166a6b Lorenze/agent executor flow pattern (#3975)
* WIP gh pr refactor: update agent executor handling and introduce flow-based executor

* wip

* refactor: clean up comments and improve code clarity in agent executor flow

- Removed outdated comments and unnecessary explanations in  and  classes to enhance code readability.
- Simplified parameter updates in the agent executor to avoid confusion regarding executor recreation.
- Improved clarity in the  method to ensure proper handling of non-final answers without raising errors.

* bumping pytest-randomly numpy

* also bump versions of anthropic sdk

* ensure flow logs are not passed if it's on the executor

* revert anthropic bump

* fix

* refactor: update dependency markers in uv.lock for platform compatibility

- Enhanced dependency markers for , , , and others to ensure compatibility across different platforms (Linux, Darwin, and architecture-specific conditions).
- Removed unnecessary event emission in the  class during kickoff.
- Cleaned up commented-out code in the  class for better readability and maintainability.

* drop duplicate

* test: enhance agent executor creation and stop word assertions

- Added calls to create_agent_executor in multiple test cases to ensure proper agent execution setup.
- Updated assertions for stop words in the agent tests to remove unnecessary checks and improve clarity.
- Ensured consistency in task handling by invoking create_agent_executor with the appropriate task parameter.

* refactor: reorganize agent executor imports and introduce CrewAgentExecutorFlow

- Removed the old import of CrewAgentExecutorFlow and replaced it with the new import from the experimental module.
- Updated relevant references in the codebase to ensure compatibility with the new structure.
- Enhanced the organization of imports in core.py and base_agent.py for better clarity and maintainability.

* updating name

* dropped usage of printer here for rich console and dropped non-added value logging

* address i18n

* Enhance concurrency control in CrewAgentExecutorFlow by introducing a threading lock to prevent concurrent executions. This change ensures that the executor instance cannot be invoked while already running, improving stability and reliability during flow execution.

* string literal returns

* string literal returns

* Enhance CrewAgentExecutor initialization by allowing optional i18n parameter for improved internationalization support. This change ensures that the executor can utilize a provided i18n instance or fallback to the default, enhancing flexibility in multilingual contexts.

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-12-28 10:21:32 -08:00
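The concurrency control described in the commit above (a threading lock that prevents an executor instance from being invoked while already running) can be sketched roughly as follows. This is a minimal illustration of the pattern; the class and method names are hypothetical, not crewAI's actual API:

```python
import threading


class GuardedExecutor:
    """Sketch: reject concurrent invocations of a single executor instance."""

    def __init__(self) -> None:
        self._lock = threading.Lock()

    def invoke(self, payload: str) -> str:
        # Non-blocking acquire: if the lock is already held, another
        # invocation is in flight, so fail fast instead of interleaving state.
        if not self._lock.acquire(blocking=False):
            raise RuntimeError("Executor is already running")
        try:
            return f"handled: {payload}"
        finally:
            self._lock.release()
```

Failing fast with an exception (rather than blocking) makes a re-entrant call an explicit error instead of a silent deadlock risk.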
João Moura
c73b36a4c5 Adding HITL for Flows (#4143)
* feat: introduce human feedback events and decorator for flow methods

- Added HumanFeedbackRequestedEvent and HumanFeedbackReceivedEvent classes to handle human feedback interactions within flows.
- Implemented the @human_feedback decorator to facilitate human-in-the-loop workflows, allowing for feedback collection and routing based on responses.
- Enhanced Flow class to store human feedback history and manage feedback outcomes.
- Updated flow wrappers to preserve attributes from methods decorated with @human_feedback.
- Added integration and unit tests for the new human feedback functionality, ensuring proper validation and routing behavior.

* adding deployment docs

* New docs

* fix printer

* wrong change

* Adding Async Support
feat: enhance human feedback support in flows

- Updated the @human_feedback decorator to use 'message' parameter instead of 'request' for clarity.
- Introduced new FlowPausedEvent and MethodExecutionPausedEvent to handle flow and method pauses during human feedback.
- Added ConsoleProvider for synchronous feedback collection and integrated async feedback capabilities.
- Implemented SQLite persistence for managing pending feedback context.
- Expanded documentation to include examples of async human feedback usage and best practices.

* linter

* fix

* migrating off printer

* updating docs

* new tests

* doc update
2025-12-25 21:04:10 -03:00
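The @human_feedback decorator introduced above pairs a method's result with feedback collected from a human. A minimal sketch of that general decorator pattern follows; the `message` parameter name appears in the commit, but the signature, the injectable `provider` callable, and the return shape here are assumptions for illustration, not crewAI's actual API:

```python
from functools import wraps
from typing import Callable


def human_feedback(message: str, provider: Callable[[str], str] = input):
    """Sketch of a human-in-the-loop decorator: run the wrapped step, present
    its result together with `message`, and return (result, feedback)."""

    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            # The provider defaults to console input; tests can inject a stub.
            feedback = provider(f"{message}\n{result}\n> ")
            return result, feedback

        return wrapper

    return decorator


@human_feedback("Approve this draft?", provider=lambda prompt: "approved")
def draft_step():
    return "draft v1"
```

Making the feedback source injectable is what allows the async and SQLite-backed providers mentioned in the commit to slot in behind the same decorator.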
Lucas Gomide
0c020991c4 docs: fix wrong trigger name in sample docs (#4147)
2025-12-23 08:41:51 -05:00
Heitor Carvalho
be70a04153 fix: correct error fetching for workos login polling (#4124)
2025-12-19 20:00:26 -03:00
Greyson LaLonde
0c359f4df8 feat: bump versions to 1.7.2
2025-12-19 15:47:00 -05:00
Lucas Gomide
fe288dbe73 Resolving some connection issues (#4129)
* fix: use CREWAI_PLUS_URL env var in precedence over PlusAPI configured value

* feat: bypass TLS certificate verification when calling platform

* test: fix test
2025-12-19 10:15:20 -05:00
Heitor Carvalho
dc63bc2319 chore: remove CREWAI_BASE_URL and fetch url from settings instead
2025-12-18 15:41:38 -03:00
Greyson LaLonde
8d0effafec chore: add commitizen pre-commit hook
2025-12-17 15:49:24 -05:00
Greyson LaLonde
1cdbe79b34 chore: add deployment action, trigger for releases 2025-12-17 08:40:14 -05:00
Lorenze Jay
84328d9311 fixed api-reference/status docs page (#4109)
2025-12-16 15:31:30 -08:00
Lorenze Jay
88d3c0fa97 feat: bump versions to 1.7.1 (#4092)
* feat: bump versions to 1.7.1

* bump projects
2025-12-15 21:51:53 -08:00
Matt Aitchison
75ff7dce0c feat: add --no-commit flag to bump command (#4087)
Allows updating version files without creating a commit, branch, or PR.
2025-12-15 15:32:37 -06:00
Greyson LaLonde
38b0b125d3 feat: use json schema for tool argument serialization
- Replace Python representation with JsonSchema for tool arguments
  - Remove deprecated PydanticSchemaParser in favor of direct schema generation
  - Add handling for VAR_POSITIONAL and VAR_KEYWORD parameters
  - Improve tool argument schema collection
2025-12-11 15:50:19 -05:00
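Generating a JSON Schema from a tool's signature, skipping VAR_POSITIONAL and VAR_KEYWORD parameters as the commit above describes, can be sketched like this. This is a minimal stdlib illustration, not crewAI's actual schema generator:

```python
import inspect

# Minimal Python-type to JSON-Schema-type mapping for the sketch.
_PY_TO_JSON = {int: "integer", float: "number", str: "string",
               bool: "boolean", list: "array", dict: "object"}


def tool_args_schema(fn) -> dict:
    """Build a minimal JSON Schema for a tool's arguments from its signature,
    skipping *args / **kwargs (VAR_POSITIONAL / VAR_KEYWORD)."""
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        if param.kind in (inspect.Parameter.VAR_POSITIONAL,
                          inspect.Parameter.VAR_KEYWORD):
            continue
        props[name] = {"type": _PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {"type": "object", "properties": props, "required": required}


def search(query: str, limit: int = 5, *args, **kwargs) -> list:
    return []
```

Parameters without defaults land in `required`, which is the information a model needs to call the tool correctly.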
Vini Brasil
9bd8ad51f7 Add docs for AOP Deploy API (#4076)
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-12-11 15:58:17 -03:00
Heitor Carvalho
0632a054ca chore: display error message from response when tool repository login fails (#4075)
2025-12-11 14:56:00 -03:00
Dragos Ciupureanu
feec6b440e fix: gracefully terminate the future when executing a task async
* fix: gracefully terminate the future when executing a task async

* core: add unit test

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-12-11 12:03:33 -05:00
Greyson LaLonde
e43c7debbd fix: add idx for task ordering, tests 2025-12-11 10:18:15 -05:00
Greyson LaLonde
8ef9fe2cab fix: check platform compat for windows signals 2025-12-11 08:38:19 -05:00
Alex Larionov
807f97114f fix: set rpm controller timer as daemon to prevent process hang
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-12-11 02:59:55 -05:00
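The daemon-timer fix above can be sketched in a few lines: a `threading.Timer` is a thread, and a non-daemon thread keeps the interpreter alive until it fires. The function name here is illustrative, not the actual crewAI code:

```python
import threading


def start_rpm_reset_timer(interval: float, callback) -> threading.Timer:
    """Sketch: a timer marked as a daemon thread, so a pending RPM-counter
    reset can never keep the process alive at shutdown."""
    timer = threading.Timer(interval, callback)
    timer.daemon = True  # must be set before start(); non-daemon timers block exit
    timer.start()
    return timer
```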
Greyson LaLonde
bdafe0fac7 fix: ensure token usage recording, validate response model on stream
2025-12-10 20:32:10 -05:00
Greyson LaLonde
8e99d490b0 chore: add translated docs for async
* chore: add translated docs for async

* chore: add missing pages
2025-12-10 14:17:10 -05:00
Gil Feig
34b909367b Add docs for the agent handler connector (#4012)
* Add docs for the agent handler connector

* Fix links

* Update docs
2025-12-09 15:49:52 -08:00
Greyson LaLonde
22684b513e chore: add docs on native async
2025-12-08 20:49:18 -05:00
Lorenze Jay
3e3b9df761 feat: bump versions to 1.7.0 (#4051)
* feat: bump versions to 1.7.0

* bump
2025-12-08 16:42:12 -08:00
Greyson LaLonde
177294f588 fix: ensure nonetypes are not passed to otel (#4052)
* fix: ensure nonetypes are not passed to otel

* fix: ensure attribute is always set in span
2025-12-08 16:27:42 -08:00
Greyson LaLonde
beef712646 fix: ensure token store file ops do not deadlock
* fix: ensure token store file ops do not deadlock
* chore: update test method reference
2025-12-08 19:04:21 -05:00
Lorenze Jay
6125b866fd supporting thinking for anthropic models (#3978)
* supporting thinking for anthropic models

* drop comments here

* thinking and tool calling support

* fix: properly mock tool use and text block types in Anthropic tests

- Updated the test for the Anthropic tool use conversation flow to include type attributes for mocked ToolUseBlock and text blocks, ensuring accurate simulation of tool interactions during testing.

* feat: add AnthropicThinkingConfig for enhanced thinking capabilities

This update introduces the AnthropicThinkingConfig class to manage thinking parameters for the Anthropic completion model. The LLM and AnthropicCompletion classes have been updated to utilize this new configuration. Additionally, new test cassettes have been added to validate the functionality of thinking blocks across interactions.
2025-12-08 15:34:54 -08:00
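The thinking-config class introduced above can be pictured as a small holder that serializes into the request parameter Anthropic's extended-thinking API expects. The class and method names below are assumptions for illustration; only the general `{"type": "enabled", "budget_tokens": ...}` request shape follows Anthropic's documented API, and this is not crewAI's actual AnthropicThinkingConfig:

```python
from dataclasses import dataclass


@dataclass
class ThinkingConfigSketch:
    """Sketch of a thinking-config holder for Anthropic completion calls."""

    budget_tokens: int = 1024

    def to_request_param(self) -> dict:
        # Shape matches Anthropic's extended-thinking request parameter.
        return {"type": "enabled", "budget_tokens": self.budget_tokens}
```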
Greyson LaLonde
f2f994612c fix: ensure otel span is closed
2025-12-05 13:23:26 -05:00
Greyson LaLonde
7fff2b654c fix: use HuggingFaceEmbeddingFunction for embeddings, update keys and add tests (#4005)
2025-12-04 15:05:50 -08:00
Greyson LaLonde
34e09162ba feat: async flow kickoff
Introduces akickoff alias to flows, improves tool decorator typing, ensures _run backward compatibility, updates docs and docstrings, adds tests, and removes duplicated logic.
2025-12-04 17:08:08 -05:00
Greyson LaLonde
24d1fad7ab feat: async crew support
native async crew execution. Improves tool decorator typing, ensures _run backward compatibility, updates docs and docstrings, adds tests, and removes duplicated logic.
2025-12-04 16:53:19 -05:00
Greyson LaLonde
9b8f31fa07 feat: async task support (#4024)
* feat: add async support for tools, add async tool tests

* chore: improve tool decorator typing

* fix: ensure _run backward compat

* chore: update docs

* chore: make docstrings a little more readable

* feat: add async execution support to agent executor

* chore: add tests

* feat: add aiosqlite dep; regenerate lockfile

* feat: add async ops to memory feat; create tests

* feat: async knowledge support; add tests

* feat: add async task support

* chore: dry out duplicate logic
2025-12-04 13:34:29 -08:00
Greyson LaLonde
d898d7c02c feat: async knowledge support (#4023)
* feat: add async support for tools, add async tool tests

* chore: improve tool decorator typing

* fix: ensure _run backward compat

* chore: update docs

* chore: make docstrings a little more readable

* feat: add async execution support to agent executor

* chore: add tests

* feat: add aiosqlite dep; regenerate lockfile

* feat: add async ops to memory feat; create tests

* feat: async knowledge support; add tests

* chore: regenerate lockfile
2025-12-04 10:27:52 -08:00
Greyson LaLonde
f04c40babf feat: async memory support
Adds async support for tools with tests, async execution in the agent executor, and async operations for memory (with aiosqlite). Improves tool decorator typing, ensures _run backward compatibility, updates docs and docstrings, adds tests, and regenerates lockfiles.
2025-12-04 12:54:49 -05:00
Lorenze Jay
c456e5c5fa Lorenze/ensure hooks work with lite agents flows (#3981)
* liteagent support hooks

* wip llm.call hooks work - needs tests for this

* fix tests

* fixed more

* more tool hooks test cassettes
2025-12-04 09:38:39 -08:00
Greyson LaLonde
633e279b51 feat: add async support for tools and agent executor; improve typing and docs
Introduces async tool support with new tests, adds async execution to the agent executor, improves tool decorator typing, ensures _run backward compatibility, updates docs and docstrings, and adds additional tests.
2025-12-03 20:13:03 -05:00
Greyson LaLonde
a25778974d feat: a2a extensions API and async agent card caching; fix task propagation & streaming
Adds initial extensions API (with registry temporarily no-op), introduces aiocache for async caching, ensures reference task IDs propagate correctly, fixes streamed response model handling, updates streaming tests, and regenerates lockfiles.
2025-12-03 16:29:48 -05:00
Greyson LaLonde
09f1ba6956 feat: native async tool support
- add async support for tools
- add async tool tests
- improve tool decorator typing
- fix _run backward compatibility
- update docs and improve readability of docstrings
2025-12-02 16:39:58 -05:00
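The `_run` backward compatibility mentioned above means a tool that only defines a synchronous `_run` must still be usable from async code. One common way to do that, sketched here with hypothetical names rather than crewAI's actual base class, is to fall back to running the sync method in a worker thread:

```python
import asyncio


class SketchTool:
    """Sketch: a tool with only a synchronous _run still works when
    invoked asynchronously."""

    def _run(self, text: str) -> str:
        return text.upper()

    async def arun(self, text: str) -> str:
        # No native async implementation: run the sync _run in a worker
        # thread so the event loop is never blocked.
        return await asyncio.to_thread(self._run, text)
```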
Greyson LaLonde
20704742e2 feat: async llm support
feat: introduce async contract to BaseLLM

feat: add async call support for the Azure, Anthropic, OpenAI, Gemini, Bedrock, and LiteLLM providers

chore: expand scrubbed header fields (conftest, anthropic, bedrock)

chore: update docs to cover async functionality

chore: update and harden tests to support acall; re-add uri for cassette compatibility

chore: generate missing cassette

fix: ensure acall is non-abstract and set supports_tools = true for supported Anthropic models

chore: improve Bedrock async docstring and general test robustness
2025-12-01 18:56:56 -05:00
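The "async contract" with a non-abstract acall described above can be sketched as a base class where the sync `call` stays abstract but `acall` has a default implementation that delegates to it. The class names and the thread-delegation strategy here are illustrative assumptions, not crewAI's actual BaseLLM:

```python
import asyncio
from abc import ABC, abstractmethod


class BaseLLMSketch(ABC):
    """Sketch of the async contract: `call` is abstract, `acall` is
    non-abstract so existing sync-only providers keep working."""

    @abstractmethod
    def call(self, prompt: str) -> str: ...

    async def acall(self, prompt: str) -> str:
        # Default: delegate to the sync call off the event loop; providers
        # with native async clients override this with a real async path.
        return await asyncio.to_thread(self.call, prompt)


class EchoLLM(BaseLLMSketch):
    def call(self, prompt: str) -> str:
        return f"echo: {prompt}"
```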
Greyson LaLonde
59180e9c9f fix: ensure supports_tools is true for all supported anthropic models
2025-12-01 07:21:09 -05:00
Greyson LaLonde
3ce019b07b chore: pin dependencies in crewai, crewai-tools, devtools
2025-11-30 19:51:20 -05:00
Greyson LaLonde
2355ec0733 feat: create sys event types and handler
feat: add system event types and handler

chore: add tests and improve signal-related error logging
2025-11-30 17:44:40 -05:00
Greyson LaLonde
c925d2d519 chore: restructure test env, cassettes, and conftest; fix flaky tests
Consolidates pytest config, standardizes env handling, reorganizes cassette layout, removes outdated VCR configs, improves sync with threading.Condition, updates event-waiting logic, ensures cleanup, regenerates Gemini cassettes, and reverts unintended test changes.
2025-11-29 16:55:24 -05:00
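The `threading.Condition`-based synchronization used above to deflake event-waiting tests can be sketched as follows: producers append and notify, waiters block on a predicate with a timeout instead of polling. The class is illustrative, not the actual test helper:

```python
import threading


class EventSink:
    """Sketch of Condition-based event waiting: block until a predicate
    over collected events holds, or a timeout expires."""

    def __init__(self) -> None:
        self._cond = threading.Condition()
        self.events: list[str] = []

    def emit(self, event: str) -> None:
        with self._cond:
            self.events.append(event)
            self._cond.notify_all()

    def wait_for_count(self, n: int, timeout: float = 2.0) -> bool:
        with self._cond:
            # wait_for re-checks the predicate on every notify and on timeout.
            return self._cond.wait_for(lambda: len(self.events) >= n,
                                       timeout=timeout)
```

Compared with sleep-and-poll loops, `wait_for` wakes exactly when the state changes, which is what removes the flakiness.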
Lorenze Jay
bc4e6a3127 feat: bump versions to 1.6.1 (#3993)
* feat: bump versions to 1.6.1

* chore: update crewAI dependency version to 1.6.1 in project templates
2025-11-28 17:57:15 -08:00
Vidit Ostwal
37526c693b Fixing ChatCompletionsClient call (#3910)
* Fixing ChatCompletionsClient call

* Moving from json-object -> JsonSchemaFormat

* Regex handling

* Adding additionalProperties explicitly

* fix: ensure additionalProperties is recursive

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-11-28 17:33:53 -08:00
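Making `additionalProperties` recursive, as the last fix in the commit above describes, means walking the whole schema tree rather than touching only the root object. A minimal sketch (function name is illustrative):

```python
def close_schema(schema: dict) -> dict:
    """Sketch: recursively set additionalProperties: false on every
    object node in a JSON Schema, including nested objects and array items."""
    if schema.get("type") == "object":
        schema.setdefault("additionalProperties", False)
        for sub in schema.get("properties", {}).values():
            close_schema(sub)
    if isinstance(schema.get("items"), dict):
        close_schema(schema["items"])
    return schema
```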
Greyson LaLonde
c59173a762 fix: ensure async methods are executable for annotations 2025-11-28 19:54:40 -05:00
Lorenze Jay
4d8eec96e8 refactor: enhance model validation and provider inference in LLM class (#3976)
* refactor: enhance model validation and provider inference in LLM class

- Updated the model validation logic to support pattern matching for new models and "latest" versions, improving flexibility for various providers.
- Refactored the `_validate_model_in_constants` method to first check hardcoded constants and then fall back to pattern matching.
- Introduced `_matches_provider_pattern` to streamline provider-specific model checks.
- Enhanced the `_infer_provider_from_model` method to utilize pattern matching for better provider inference.

This refactor aims to improve the extensibility of the LLM class, allowing it to accommodate new models without requiring constant updates to the hardcoded lists.

* feat: add new Anthropic model versions to constants

- Introduced "claude-opus-4-5-20251101" and "claude-opus-4-5" to the AnthropicModels and ANTHROPIC_MODELS lists for enhanced model support.
- Added "anthropic.claude-opus-4-5-20251101-v1:0" to BedrockModels and BEDROCK_MODELS to ensure compatibility with the latest model offerings.
- Updated test cases to ensure proper environment variable handling for model validation, improving robustness in testing scenarios.

* dont infer this way - dropped
2025-11-28 13:54:40 -08:00
Greyson LaLonde
2025a26fc3 fix: ensure parameters in RagTool.add, add typing, tests (#3979)
* fix: ensure parameters in RagTool.add, add typing, tests

* feat: substitute pymupdf for pypdf, better parsing performance

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-11-26 22:32:43 -08:00
Greyson LaLonde
bed9a3847a fix: remove invalid param from sse client (#3980) 2025-11-26 21:37:55 -08:00
Heitor Carvalho
5239dc9859 fix: erase 'oauth2_extra' setting on 'crewai config reset' command
2025-11-26 18:43:44 -05:00
Lorenze Jay
52444ad390 feat: bump versions to 1.6.0 (#3974)
* feat: bump versions to 1.6.0

* bump project templates
2025-11-24 17:56:30 -08:00
Greyson LaLonde
f070595e65 fix: ensure custom rag store persist path is set if passed
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-11-24 20:03:57 -05:00
Lorenze Jay
69c5eace2d Update references from AMP to AOP in documentation (#3972)
- Changed "AMP" to "AOP" in multiple locations across JSON and MDX files to reflect the correct terminology for the Agent Operations Platform.
- Updated the introduction sections in English, Korean, and Portuguese to ensure consistency in the platform's naming.
2025-11-24 16:43:30 -08:00
Vidit Ostwal
d88ac338d5 Adding drop parameters in ChatCompletionsClient
* Adding drop parameters

* Adding test case

* Just some spacing addition

* Adding drop params to maintain consistency

* Changing variable name

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-11-24 19:16:36 -05:00
Lorenze Jay
4ae8c36815 feat: enhance flow event state management (#3952)
* feat: enhance flow event state management

- Added `state` attribute to `FlowFinishedEvent` to capture the flow's state as a JSON-serialized dictionary.
- Updated flow event emissions to include the serialized state, improving traceability and debugging capabilities during flow execution.

* fix: improve state serialization in Flow class

- Enhanced the `_copy_and_serialize_state` method to handle exceptions during JSON serialization of Pydantic models, ensuring robustness in state management.
- Updated test assertions to access the state as a dictionary, aligning with the new state structure.

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-11-24 15:55:49 -08:00
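The defensive state serialization this PR describes (serialize the flow state to a JSON-safe dict, handling values that refuse to serialize) might look roughly like this sketch. The real `_copy_and_serialize_state` works on Pydantic models and is more involved; this only illustrates the fallback idea:

```python
import json


def copy_and_serialize_state(state: dict) -> dict:
    """Return a JSON-safe copy of a flow state dict, falling back to repr()
    for values json cannot handle (illustrative sketch, not CrewAI's code)."""
    safe = {}
    for key, value in state.items():
        try:
            json.dumps(value)  # probe: is this value JSON-serializable?
            safe[key] = value
        except (TypeError, ValueError):
            safe[key] = repr(value)  # best-effort fallback for odd values
    return safe
```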
Greyson LaLonde
b049b73f2e fix: ensure fuzzy returns are more strict, show type warning 2025-11-24 17:35:12 -05:00
Greyson LaLonde
d2b9c54931 fix: re-add openai response_format param, add test 2025-11-24 17:13:20 -05:00
Greyson LaLonde
a928cde6ee fix: rag tool embeddings config
* fix: ensure config is not flattened, add tests

* chore: refactor inits to model_validator

* chore: refactor rag tool config parsing

* chore: add initial docs

* chore: add additional validation aliases for provider env vars

* chore: add solid docs

* chore: move imports to top

* fix: revert circular import

* fix: lazy import qdrant-client

* fix: allow collection name config

* chore: narrow model names for google

* chore: update additional docs

* chore: add backward compat on model name aliases

* chore: add tests for config changes
2025-11-24 16:51:28 -05:00
João Moura
9c84475691 Update AMP to AOP (#3941)
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-11-24 13:15:24 -08:00
Greyson LaLonde
f3c5d1e351 feat: add streaming result support to flows and crews
* feat: add streaming result support to flows and crews
* docs: add streaming execution documentation and integration tests
2025-11-24 15:43:48 -05:00
Mark McDonald
a978267fa2 feat: Add gemini-3-pro-preview (#3950)
* Add gemini-3-pro-preview

Also refactors the tool support check for better forward compatibility.

* Add cassette for Gemini 3 Pro

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-11-24 14:49:29 -05:00
Heitor Carvalho
b759654e7d feat: support CLI login with Entra ID (#3943) 2025-11-24 15:35:59 -03:00
Greyson LaLonde
9da1f0c0aa fix: ensure flow execution start panel is not shown on plot 2025-11-24 12:50:18 -05:00
Greyson LaLonde
a559cedbd1 chore: ensure proper cassettes for agent tests
* chore: ensure proper cassettes for agent tests
* chore: tweak eval test to avoid race condition
2025-11-24 12:29:11 -05:00
Gil Feig
bcc3e358cb feat: Add Merge Agent Handler tool (#3911)
* feat: Add Merge Agent Handler tool

* Fix linting issues

* Empty
2025-11-20 16:58:41 -08:00
Greyson LaLonde
d160f0874a chore: don't fail on cleanup error
2025-11-19 01:28:25 -05:00
Lorenze Jay
9fcf55198f feat: bump versions to 1.5.0 (#3924)
* feat: bump versions to 1.5.0

* chore: update crewAI tools dependency to version 1.5.0 in project templates
2025-11-15 18:00:11 -08:00
Lorenze Jay
f46a846ddc chore: remove unused hooks test file (#3923)
- Deleted the `__init__.py` file from the tests/hooks directory as it contained no tests or functionality. This cleanup helps maintain a tidy test structure.
2025-11-15 17:51:42 -08:00
Greyson LaLonde
b546982690 fix: ensure instrumentation flags 2025-11-15 20:48:40 -05:00
Greyson LaLonde
d7bdac12a2 feat: a2a trust remote completion status flag
- add `trust_remote_completion_status` flag to A2AConfig, controlling whether to trust the remote A2A agent's reported completion status. Resolves #3899
- update docs
2025-11-13 13:43:09 -05:00
Lorenze Jay
528d812263 Lorenze/feat hooks (#3902)
* feat: implement LLM call hooks and enhance agent execution context

- Introduced LLM call hooks to allow modification of messages and responses during LLM interactions.
- Added support for before and after hooks in the CrewAgentExecutor, enabling dynamic adjustments to the execution flow.
- Created LLMCallHookContext for comprehensive access to the executor state, facilitating in-place modifications.
- Added validation for hook callables to ensure proper functionality.
- Enhanced tests for LLM hooks and tool hooks to verify their behavior and error handling capabilities.
- Updated LiteAgent and CrewAgentExecutor to accommodate the new crew context in their execution processes.

* feat: implement LLM call hooks and enhance agent execution context

- Introduced LLM call hooks to allow modification of messages and responses during LLM interactions.
- Added support for before and after hooks in the CrewAgentExecutor, enabling dynamic adjustments to the execution flow.
- Created LLMCallHookContext for comprehensive access to the executor state, facilitating in-place modifications.
- Added validation for hook callables to ensure proper functionality.
- Enhanced tests for LLM hooks and tool hooks to verify their behavior and error handling capabilities.
- Updated LiteAgent and CrewAgentExecutor to accommodate the new crew context in their execution processes.

* fix verbose

* feat: introduce crew-scoped hook decorators and refactor hook registration

- Added decorators for before and after LLM and tool calls to enhance flexibility in modifying execution behavior.
- Implemented a centralized hook registration mechanism within CrewBase to automatically register crew-scoped hooks.
- Removed the obsolete base.py file as its functionality has been integrated into the new decorators and registration system.
- Enhanced tests for the new hook decorators to ensure proper registration and execution flow.
- Updated existing hook handling to accommodate the new decorator-based approach, improving code organization and maintainability.

* feat: enhance hook management with clear and unregister functions

- Introduced functions to unregister specific before and after hooks for both LLM and tool calls, improving flexibility in hook management.
- Added clear functions to remove all registered hooks of each type, facilitating easier state management and cleanup.
- Implemented a convenience function to clear all global hooks in one call, streamlining the process for testing and execution context resets.
- Enhanced tests to verify the functionality of unregistering and clearing hooks, ensuring robust behavior in various scenarios.

* refactor: enhance hook type management for LLM and tool hooks

- Updated hook type definitions to use generic protocols for better type safety and flexibility.
- Replaced Callable type annotations with specific BeforeLLMCallHookType and AfterLLMCallHookType for clarity.
- Improved the registration and retrieval functions for before and after hooks to align with the new type definitions.
- Enhanced the setup functions to handle hook execution results, allowing for blocking of LLM calls based on hook logic.
- Updated related tests to ensure proper functionality and type adherence across the hook management system.

* feat: add execution and tool hooks documentation

- Introduced new documentation for execution hooks, LLM call hooks, and tool call hooks to provide comprehensive guidance on their usage and implementation in CrewAI.
- Updated existing documentation to include references to the new hooks, enhancing the learning resources available for users.
- Ensured consistency across multiple languages (English, Portuguese, Korean) for the new documentation, improving accessibility for a wider audience.
- Added examples and troubleshooting sections to assist users in effectively utilizing hooks for agent operations.

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-11-13 10:11:50 -08:00
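The hook registry this PR builds (register/unregister/clear before- and after-LLM-call hooks, with before-hooks able to block a call) can be sketched in miniature. All names below are simplified stand-ins; CrewAI's real API (`LLMCallHookContext`, crew-scoped decorators, typed hook protocols) is considerably richer:

```python
from typing import Any, Callable

_before_hooks: list[Callable] = []
_after_hooks: list[Callable] = []


def register_before_llm_call(hook: Callable) -> Callable:
    _before_hooks.append(hook)
    return hook


def register_after_llm_call(hook: Callable) -> Callable:
    _after_hooks.append(hook)
    return hook


def clear_all_hooks() -> None:
    """Remove every registered hook (useful for test cleanup)."""
    _before_hooks.clear()
    _after_hooks.clear()


def call_llm(messages: list[dict], llm: Callable) -> Any:
    for hook in _before_hooks:
        if hook(messages) is False:  # a before-hook may block the call
            return None
    response = llm(messages)
    for hook in _after_hooks:
        # an after-hook may rewrite the response; falsy means keep it
        response = hook(messages, response) or response
    return response
```

Before-hooks mutate the message list in place; after-hooks get a chance to replace the response, mirroring the "in-place modifications" the PR describes.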
Greyson LaLonde
ffd717c51a fix: custom tool docs links, add mintlify broken links action (#3903)
* fix: update docs links to point to correct endpoints

* fix: update all broken doc links
2025-11-12 22:55:10 -08:00
Heitor Carvalho
fbe4aa4bd1 feat: fetch and store more data about okta authorization server (#3894)
2025-11-12 15:28:00 -03:00
Lorenze Jay
c205d2e8de feat: implement before and after LLM call hooks in CrewAgentExecutor (#3893)
- Added support for before and after LLM call hooks to allow modification of messages and responses during LLM interactions.
- Introduced LLMCallHookContext to provide hooks with access to the executor state, enabling in-place modifications of messages.
- Updated get_llm_response function to utilize the new hooks, ensuring that modifications persist across iterations.
- Enhanced tests to verify the functionality of the hooks and their error handling capabilities, ensuring robust execution flow.
2025-11-12 08:38:13 -08:00
Daniel Barreto
fcb5b19b2e Enhance schema description of QdrantVectorSearchTool (#3891)
2025-11-11 14:33:33 -08:00
Rip&Tear
01f0111d52 dependabot.yml creation (#3868)
* dependabot.yml creation

* Configure dependabot for pip package updates

Co-authored-by: matt <matt@crewai.com>

* Fix Dependabot package ecosystem

* Refactor: Use uv package-ecosystem in dependabot

Co-authored-by: matt <matt@crewai.com>

* fix: ensure dependabot uses uv ecosystem

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: matt <matt@crewai.com>
2025-11-11 12:14:16 +08:00
Lorenze Jay
6b52587c67 feat: expose messages to TaskOutput and LiteAgentOutputs (#3880)
* feat: add messages to task and agent outputs

- Introduced a new  field in  and  to capture messages from the last task execution.
- Updated the  class to store the last messages and provide a property for easy access.
- Enhanced the  and  classes to include messages in their outputs.
- Added tests to ensure that messages are correctly included in task outputs and agent outputs during execution.

* using typing_extensions for 3.10 compatibility

* feat: add last_messages attribute to agent for improved task tracking

- Introduced a new `last_messages` attribute in the agent class to store messages from the last task execution.
- Updated the `Crew` class to handle the new messages attribute in task outputs.
- Enhanced existing tests to ensure that the `last_messages` attribute is correctly initialized and utilized across various guardrail scenarios.

* fix: add messages field to TaskOutput in tests for consistency

- Updated multiple test cases to include the new `messages` field in the `TaskOutput` instances.
- Ensured that all relevant tests reflect the latest changes in the TaskOutput structure, maintaining consistency across the test suite.
- This change aligns with the recent addition of the `last_messages` attribute in the agent class for improved task tracking.

* feat: preserve messages in task outputs during replay

- Added functionality to the Crew class to store and retrieve messages in task outputs.
- Enhanced the replay mechanism to ensure that messages from stored task outputs are preserved and accessible.
- Introduced a new test case to verify that messages are correctly stored and replayed, ensuring consistency in task execution and output handling.
- This change improves the overall tracking and context retention of task interactions within the CrewAI framework.

* fix original test, prev was debugging
2025-11-10 17:38:30 -08:00
Lorenze Jay
629f7f34ce docs: enhance task guardrail documentation with LLM-based validation support (#3879)
- Added section on LLM-based guardrails, explaining their usage and requirements.
- Updated examples to demonstrate the implementation of multiple guardrails, including both function-based and LLM-based approaches.
- Clarified the distinction between single and multiple guardrails in task configurations.
- Improved explanations of guardrail functionality to ensure better understanding of validation processes.
2025-11-10 15:35:42 -08:00
Lorenze Jay
0f1c173d02 feat: bump versions to 1.4.1 (#3862)
* feat: bump versions to 1.4.1

* chore: update crewAI tools dependency to version 1.4.1 in project templates
2025-11-07 11:19:07 -08:00
Greyson LaLonde
19c5b9a35e fix: properly handle agent max iterations
fixes #3847
2025-11-07 13:54:11 -05:00
Greyson LaLonde
1ed307b58c fix: route llm model syntax to litellm
* fix: route llm model syntax to litellm

* wip: add list of supported models
2025-11-07 13:34:15 -05:00
Lorenze Jay
d29867bbb6 chore: update version numbers to 1.4.0
2025-11-06 23:04:44 -05:00
Lorenze Jay
b2c278ed22 refactor: improve MCP tool execution handling with concurrent futures (#3854)
- Enhanced the MCP tool execution in both synchronous and asynchronous contexts by utilizing  for better event loop management.
- Updated error handling to provide clearer messages for connection issues and task cancellations.
- Added tests to validate MCP tool execution in both sync and async scenarios, ensuring robust functionality across different contexts.
2025-11-06 19:28:08 -08:00
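Running an async MCP tool from synchronous code via concurrent futures, as this PR's title describes, can be sketched like so. The tool coroutine and function names are hypothetical; the actual CrewAI handling covers both sync and async callers:

```python
import asyncio
import concurrent.futures


async def run_tool(name: str) -> str:
    """Stand-in for an async MCP tool round trip."""
    await asyncio.sleep(0)
    return f"{name}: ok"


def execute_tool_sync(name: str) -> str:
    """Run the coroutine on a fresh event loop in a worker thread, which is
    safe even when the caller is already inside a running event loop."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(asyncio.run, run_tool(name))
        return future.result(timeout=30)
```

Delegating to a worker thread avoids the `RuntimeError` you would get from calling `asyncio.run` directly inside an already-running loop.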
Greyson LaLonde
f6aed9798b feat: allow non-ast plot routes 2025-11-06 21:17:29 -05:00
Greyson LaLonde
40a2d387a1 fix: keep stopwords updated 2025-11-06 21:10:25 -05:00
Lorenze Jay
6f36d7003b Lorenze/feat mcp first class support (#3850)
* WIP transport support mcp

* refactor: streamline MCP tool loading and error handling

* linted

* Self type from typing with typing_extensions in MCP transport modules

* added tests for mcp setup

* added tests for mcp setup

* docs: enhance MCP overview with detailed integration examples and structured configurations

* feat: implement MCP event handling and logging in event listener and client

- Added MCP event types and handlers for connection and tool execution events.
- Enhanced MCPClient to emit events on connection status and tool execution.
- Updated ConsoleFormatter to handle MCP event logging.
- Introduced new MCP event types for better integration and monitoring.
2025-11-06 17:45:16 -08:00
Greyson LaLonde
9e5906c52f feat: add pydantic validation dunder to BaseInterceptor
2025-11-06 15:27:07 -05:00
Lorenze Jay
fc521839e4 Lorenze/fix duplicating doc ids for knowledge (#3840)
* fix: update document ID handling in ChromaDB utility functions to use SHA-256 hashing and include index for uniqueness

* test: add tests for hash-based ID generation in ChromaDB utility functions

* drop idx for preventing dups, upsert should handle dups

* fix: update document ID extraction logic in ChromaDB utility functions to check for doc_id at the top level of the document

* fix: enhance document ID generation in ChromaDB utility functions to deduplicate documents and ensure unique hash-based IDs without suffixes

* fix: improve error handling and document ID generation in ChromaDB utility functions to ensure robust processing and uniqueness
2025-11-06 10:59:52 -08:00
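The final shape this PR lands on (deduplicate documents, then derive a stable SHA-256 id per document so upserts cannot create duplicates) might look like this sketch, with illustrative names rather than the actual ChromaDB utility functions:

```python
import hashlib


def generate_doc_ids(documents: list[str]) -> tuple[list[str], list[str]]:
    """Deduplicate documents and derive a stable SHA-256 id for each.

    Because the id is a pure function of the content, upserting the same
    document twice resolves to the same id instead of a duplicate entry.
    """
    seen: set[str] = set()
    unique_docs: list[str] = []
    ids: list[str] = []
    for doc in documents:
        doc_id = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if doc_id in seen:
            continue  # drop exact-content duplicates up front
        seen.add(doc_id)
        unique_docs.append(doc)
        ids.append(doc_id)
    return unique_docs, ids
```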
Greyson LaLonde
e4cc9a664c fix: handle unpickleable values in flow state
2025-11-06 01:29:21 -05:00
Greyson LaLonde
7e6171d5bc fix: ensure lite agents course-correct on validation errors
* fix: ensure lite agents course-correct on validation errors

* chore: update cassettes and test expectations

* fix: ensure multiple guardrails propagate
2025-11-05 19:02:11 -05:00
Greyson LaLonde
61ad1fb112 feat: add support for llm message interceptor hooks 2025-11-05 11:38:44 -05:00
Greyson LaLonde
54710a8711 fix: hash callback args correctly to ensure caching works
2025-11-05 07:19:09 -05:00
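Hashing call arguments into a stable cache key, as this fix concerns, can be sketched as below. The function name and argument shape are assumptions; the actual fix addresses how CrewAI hashes callback args:

```python
import hashlib
import json


def cache_key(*args, **kwargs) -> str:
    """Serialize the arguments deterministically (sorted keys, repr fallback
    for non-JSON values) so equal inputs always hash to the same key."""
    payload = json.dumps([args, kwargs], sort_keys=True, default=repr)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```

The `sort_keys=True` is what makes keyword-argument order irrelevant; without it, `f(x=1, y=2)` and `f(y=2, x=1)` could miss the cache despite being the same call.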
Lucas Gomide
5abf976373 fix: allow adding RAG source content from valid URLs (#3831)
2025-11-04 07:58:40 -05:00
Greyson LaLonde
329567153b fix: make plot node selection smoother
2025-11-03 07:49:31 -05:00
Greyson LaLonde
60332e0b19 feat: cache i18n prompts for efficient use 2025-11-03 07:39:05 -05:00
Lorenze Jay
40932af3fa feat: bump versions to 1.3.0 (#3820)
* feat: bump versions to 1.3.0

* chore: update crew and flow templates to use crewai[tools] version 1.3.0
2025-10-31 18:54:02 -07:00
Greyson LaLonde
e134e5305b Gl/feat/a2a refactor (#3793)
* feat: agent metaclass, refactor a2a to wrappers

* feat: a2a schemas and utils

* chore: move agent class, update imports

* refactor: organize imports to avoid circularity, add a2a to console

* feat: pass response_model through call chain

* feat: add standard openapi spec serialization to tools and structured output

* feat: a2a events

* chore: add a2a to pyproject

* docs: minimal base for learn docs

* fix: adjust a2a conversation flow, allow llm to decide exit until max_retries

* fix: inject agent skills into initial prompt

* fix: format agent card as json in prompt

* refactor: simplify A2A agent prompt formatting and improve skill display

* chore: wide cleanup

* chore: cleanup logic, add auth cache, use json for messages in prompt

* chore: update docs

* fix: doc snippets formatting

* feat: optimize A2A agent card fetching and improve error reporting

* chore: move imports to top of file

* chore: refactor hasattr check

* chore: add httpx-auth, update lockfile

* feat: create base public api

* chore: cleanup modules, add docstrings, types

* fix: exclude extra fields in prompt

* chore: update docs

* tests: update to correct import

* chore: lint for ruff, add missing import

* fix: tweak openai streaming logic for response model

* tests: add reimport for test

* tests: add reimport for test

* fix: don't set a2a attr if not set

* fix: don't set a2a attr if not set

* chore: update cassettes

* tests: fix tests

* fix: use instructor and don't pass response_format for litellm

* chore: consolidate event listeners, add typing

* fix: address race condition in test, update cassettes

* tests: add correct mocks, rerun cassette for json

* tests: update cassette

* chore: regenerate cassette after new run

* fix: make token manager access-safe

* fix: make token manager access-safe

* merge

* chore: update test and cassette for output pydantic

* fix: tweak to disallow deadlock

* chore: linter

* fix: adjust event ordering for threading

* fix: use conditional for batch check

* tests: tweak for emission

* tests: simplify api + event check

* fix: ensure non-function calling llms see json formatted string

* tests: tweak message comparison

* fix: use internal instructor for litellm structure responses

---------

Co-authored-by: Mike Plachta <mike@crewai.com>
2025-10-31 18:42:03 -07:00
Greyson LaLonde
e229ef4e19 refactor: improve flow handling, typing, and logging; update UI and tests
fix: refine nested flow conditionals and ensure router methods and routes are fully parsed
fix: improve docstrings, typing, and logging coverage across all events
feat: update flow.plot feature with new UI enhancements
chore: apply Ruff linting, reorganize imports, and remove deprecated utilities/files
chore: split constants and utils, clean JS comments, and add typing for linters
tests: strengthen test coverage for flow execution paths and router logic
2025-10-31 21:15:06 -04:00
Greyson LaLonde
2e9eb8c32d fix: refactor use_stop_words to property, add check for stop words
2025-10-29 19:14:01 +01:00
Lucas Gomide
4ebb5114ed Fix Firecrawl tools & adding tests (#3810)
* fix: fix Firecrawl Scrape tool

* fix: fix Firecrawl Search tool

* fix: fix Firecrawl Website tool

* tests: adding tests for Firecrawl
2025-10-29 13:37:57 -04:00
Daniel Barreto
70b083945f Enhance QdrantVectorSearchTool (#3806)
2025-10-28 13:42:40 -04:00
Tony Kipkemboi
410db1ff39 docs: migrate embedder→embedding_model and require vectordb across tool docs; add provider examples (en/ko/pt-BR) (#3804)
* docs(tools): migrate embedder->embedding_model, require vectordb; add Chroma/Qdrant examples across en/ko/pt-BR PDF/TXT/XML/MDX/DOCX/CSV/Directory docs

* docs(observability): apply latest Datadog tweaks in ko and pt-BR
2025-10-27 13:29:21 -04:00
Lorenze Jay
5d6b4c922b feat: bump versions to 1.2.1 (#3800)
* feat: bump versions to 1.2.1

* updated templates too
2025-10-27 09:12:04 -07:00
Lucas Gomide
b07c0fc45c docs: describe mandatory env-var to call Platform tools for each integration (#3803) 2025-10-27 10:01:41 -04:00
Sam Brenner
97853199c7 Add Datadog Integration Documentation (#3642)
* add datadog llm observability integration guide

* spacing fix

* wording changes

* alphabetize docs listing

* Update docs/en/observability/datadog.mdx

Co-authored-by: Barry Eom <31739208+barieom@users.noreply.github.com>

* add translations

* fix korean code block

---------

Co-authored-by: Barry Eom <31739208+barieom@users.noreply.github.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-27 09:48:38 -04:00
Lorenze Jay
494ed7e671 liteagent supports apps and mcps (#3794)
* liteagent supports apps and mcps

* generated cassettes for these
2025-10-24 18:42:08 -07:00
1072 changed files with 138439 additions and 97801 deletions

.env.test (new file, 161 lines)
@@ -0,0 +1,161 @@
# =============================================================================
# Test Environment Variables
# =============================================================================
# This file contains all environment variables needed to run tests locally
# in a way that mimics the GitHub Actions CI environment.
# =============================================================================
# -----------------------------------------------------------------------------
# LLM Provider API Keys
# -----------------------------------------------------------------------------
OPENAI_API_KEY=fake-api-key
ANTHROPIC_API_KEY=fake-anthropic-key
GEMINI_API_KEY=fake-gemini-key
AZURE_API_KEY=fake-azure-key
OPENROUTER_API_KEY=fake-openrouter-key
# -----------------------------------------------------------------------------
# AWS Credentials
# -----------------------------------------------------------------------------
AWS_ACCESS_KEY_ID=fake-aws-access-key
AWS_SECRET_ACCESS_KEY=fake-aws-secret-key
AWS_DEFAULT_REGION=us-east-1
AWS_REGION_NAME=us-east-1
# -----------------------------------------------------------------------------
# Azure OpenAI Configuration
# -----------------------------------------------------------------------------
AZURE_ENDPOINT=https://fake-azure-endpoint.openai.azure.com
AZURE_OPENAI_ENDPOINT=https://fake-azure-endpoint.openai.azure.com
AZURE_OPENAI_API_KEY=fake-azure-openai-key
AZURE_API_VERSION=2024-02-15-preview
OPENAI_API_VERSION=2024-02-15-preview
# -----------------------------------------------------------------------------
# Google Cloud Configuration
# -----------------------------------------------------------------------------
#GOOGLE_CLOUD_PROJECT=fake-gcp-project
#GOOGLE_CLOUD_LOCATION=us-central1
# -----------------------------------------------------------------------------
# OpenAI Configuration
# -----------------------------------------------------------------------------
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_API_BASE=https://api.openai.com/v1
# -----------------------------------------------------------------------------
# Search & Scraping Tool API Keys
# -----------------------------------------------------------------------------
SERPER_API_KEY=fake-serper-key
EXA_API_KEY=fake-exa-key
BRAVE_API_KEY=fake-brave-key
FIRECRAWL_API_KEY=fake-firecrawl-key
TAVILY_API_KEY=fake-tavily-key
SERPAPI_API_KEY=fake-serpapi-key
SERPLY_API_KEY=fake-serply-key
LINKUP_API_KEY=fake-linkup-key
PARALLEL_API_KEY=fake-parallel-key
# -----------------------------------------------------------------------------
# Exa Configuration
# -----------------------------------------------------------------------------
EXA_BASE_URL=https://api.exa.ai
# -----------------------------------------------------------------------------
# Web Scraping & Automation
# -----------------------------------------------------------------------------
BRIGHT_DATA_API_KEY=fake-brightdata-key
BRIGHT_DATA_ZONE=fake-zone
BRIGHTDATA_API_URL=https://api.brightdata.com
BRIGHTDATA_DEFAULT_TIMEOUT=600
BRIGHTDATA_DEFAULT_POLLING_INTERVAL=1
OXYLABS_USERNAME=fake-oxylabs-user
OXYLABS_PASSWORD=fake-oxylabs-pass
SCRAPFLY_API_KEY=fake-scrapfly-key
SCRAPEGRAPH_API_KEY=fake-scrapegraph-key
BROWSERBASE_API_KEY=fake-browserbase-key
BROWSERBASE_PROJECT_ID=fake-browserbase-project
HYPERBROWSER_API_KEY=fake-hyperbrowser-key
MULTION_API_KEY=fake-multion-key
APIFY_API_TOKEN=fake-apify-token
# -----------------------------------------------------------------------------
# Database & Vector Store Credentials
# -----------------------------------------------------------------------------
SINGLESTOREDB_URL=mysql://fake:fake@localhost:3306/fake
SINGLESTOREDB_HOST=localhost
SINGLESTOREDB_PORT=3306
SINGLESTOREDB_USER=fake-user
SINGLESTOREDB_PASSWORD=fake-password
SINGLESTOREDB_DATABASE=fake-database
SINGLESTOREDB_CONNECT_TIMEOUT=30
SNOWFLAKE_USER=fake-snowflake-user
SNOWFLAKE_PASSWORD=fake-snowflake-password
SNOWFLAKE_ACCOUNT=fake-snowflake-account
SNOWFLAKE_WAREHOUSE=fake-snowflake-warehouse
SNOWFLAKE_DATABASE=fake-snowflake-database
SNOWFLAKE_SCHEMA=fake-snowflake-schema
WEAVIATE_URL=http://localhost:8080
WEAVIATE_API_KEY=fake-weaviate-key
EMBEDCHAIN_DB_URI=sqlite:///test.db
# Databricks Credentials
DATABRICKS_HOST=https://fake-databricks.cloud.databricks.com
DATABRICKS_TOKEN=fake-databricks-token
DATABRICKS_CONFIG_PROFILE=fake-profile
# MongoDB Credentials
MONGODB_URI=mongodb://fake:fake@localhost:27017/fake
# -----------------------------------------------------------------------------
# CrewAI Platform & Enterprise
# -----------------------------------------------------------------------------
# setting CREWAI_PLATFORM_INTEGRATION_TOKEN causes these tests to fail:
#=========================== short test summary info ============================
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_platform_context_manager_basic_usage - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_context_var_isolation_between_tests - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_multiple_sequential_context_managers - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#CREWAI_PLATFORM_INTEGRATION_TOKEN=fake-platform-token
CREWAI_PERSONAL_ACCESS_TOKEN=fake-personal-token
CREWAI_PLUS_URL=https://fake.crewai.com
# -----------------------------------------------------------------------------
# Other Service API Keys
# -----------------------------------------------------------------------------
ZAPIER_API_KEY=fake-zapier-key
PATRONUS_API_KEY=fake-patronus-key
MINDS_API_KEY=fake-minds-key
HF_TOKEN=fake-hf-token
# -----------------------------------------------------------------------------
# Feature Flags/Testing Modes
# -----------------------------------------------------------------------------
CREWAI_DISABLE_TELEMETRY=true
OTEL_SDK_DISABLED=true
CREWAI_TESTING=true
CREWAI_TRACING_ENABLED=false
# -----------------------------------------------------------------------------
# Testing/CI Configuration
# -----------------------------------------------------------------------------
# VCR recording mode: "none" (default), "new_episodes", "all", "once"
PYTEST_VCR_RECORD_MODE=none
# Set to "true" by GitHub when running in GitHub Actions
# GITHUB_ACTIONS=false
# -----------------------------------------------------------------------------
# Python Configuration
# -----------------------------------------------------------------------------
PYTHONUNBUFFERED=1
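The workspace `conftest.py` loads this file with `python-dotenv` (`.env.test` first, then any real `.env` overriding it). A minimal stdlib-only sketch of that KEY=VALUE loading behavior — `load_env_file` is an illustrative helper, not part of the repo:

```python
import os


def load_env_file(path: str, override: bool = True) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and '#' comments,
    roughly mimicking load_dotenv(path, override=True)."""
    loaded: dict[str, str] = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            key, value = key.strip(), value.strip()
            if override or key not in os.environ:
                os.environ[key] = value
                loaded[key] = value
    return loaded
```

With `override=True`, later loads win — which is why a developer's own `.env`, loaded second, can shadow these fake test values.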

.github/dependabot.yml (new file, 11 lines)

@@ -0,0 +1,11 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
version: 2
updates:
  - package-ecosystem: uv # See documentation for possible values
    directory: "/" # Location of package manifests
    schedule:
      interval: "weekly"

.github/workflows/docs-broken-links.yml (new file, 35 lines)

@@ -0,0 +1,35 @@
name: Check Documentation Broken Links
on:
  pull_request:
    paths:
      - "docs/**"
      - "docs.json"
  push:
    branches:
      - main
    paths:
      - "docs/**"
      - "docs.json"
  workflow_dispatch:
jobs:
  check-links:
    name: Check broken links
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Node
        uses: actions/setup-node@v4
        with:
          node-version: "latest"
      - name: Install Mintlify CLI
        run: npm i -g mintlify
      - name: Run broken link checker
        run: |
          # Auto-answer the prompt with yes command
          yes "" | mintlify broken-links || test $? -eq 141
        working-directory: ./docs
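The `|| test $? -eq 141` guard in the workflow above tolerates one specific failure: a process killed by SIGPIPE exits with status 128 + 13 = 141, which happens when the `yes` stream's reader closes the pipe. A small sketch of that logic (the `tolerated` helper is illustrative, not part of the workflow):

```python
import signal
import subprocess

# A process killed by SIGPIPE exits with 128 + SIGPIPE = 141 on POSIX.
SIGPIPE_EXIT = 128 + signal.SIGPIPE


def tolerated(exit_code: int) -> bool:
    """Mirror `cmd || test $? -eq 141`: accept success or death-by-SIGPIPE,
    while any other non-zero code still fails the step."""
    return exit_code == 0 or exit_code == SIGPIPE_EXIT


# `yes ''` streams blank lines forever; once `head` exits, `yes` is killed
# by SIGPIPE, but the pipeline's exit status is the last command's (head's).
proc = subprocess.run("yes '' | head -n 1 > /dev/null", shell=True)
print(tolerated(proc.returncode), tolerated(1))
```

This is why the workflow step passes even when Mintlify's interactive prompt handling closes the `yes` pipe early.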


@@ -1,9 +1,14 @@
 name: Publish to PyPI
 on:
-  release:
-    types: [ published ]
+  repository_dispatch:
+    types: [deployment-tests-passed]
   workflow_dispatch:
+    inputs:
+      release_tag:
+        description: 'Release tag to publish'
+        required: false
+        type: string
 jobs:
   build:
@@ -12,7 +17,21 @@ jobs:
     permissions:
       contents: read
     steps:
+      - name: Determine release tag
+        id: release
+        run: |
+          # Priority: workflow_dispatch input > repository_dispatch payload > default branch
+          if [ -n "${{ inputs.release_tag }}" ]; then
+            echo "tag=${{ inputs.release_tag }}" >> $GITHUB_OUTPUT
+          elif [ -n "${{ github.event.client_payload.release_tag }}" ]; then
+            echo "tag=${{ github.event.client_payload.release_tag }}" >> $GITHUB_OUTPUT
+          else
+            echo "tag=" >> $GITHUB_OUTPUT
+          fi
       - uses: actions/checkout@v4
+        with:
+          ref: ${{ steps.release.outputs.tag || github.ref }}
       - name: Set up Python
         uses: actions/setup-python@v5
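The "Determine release tag" step resolves a tag from two possible event sources with a fixed priority. The same decision, written as a plain function for clarity — `pick_release_tag` is a hypothetical stand-in, not code from the workflow:

```python
def pick_release_tag(input_tag: str, payload_tag: str) -> str:
    """Mirror the workflow's priority: the workflow_dispatch input wins,
    then the repository_dispatch payload; an empty result means checkout
    falls back to github.ref (the default branch)."""
    if input_tag:
        return input_tag
    if payload_tag:
        return payload_tag
    return ""


print(pick_release_tag("1.2.1", "1.2.0"))  # manual dispatch wins: 1.2.1
print(pick_release_tag("", "1.2.0"))       # payload fallback: 1.2.0
```

Keeping the manual input highest lets a maintainer re-publish a specific tag even after the automated dispatch path has run.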


@@ -5,18 +5,6 @@ on: [pull_request]
 permissions:
   contents: read
-env:
-  OPENAI_API_KEY: fake-api-key
-  PYTHONUNBUFFERED: 1
-  BRAVE_API_KEY: fake-brave-key
-  SNOWFLAKE_USER: fake-snowflake-user
-  SNOWFLAKE_PASSWORD: fake-snowflake-password
-  SNOWFLAKE_ACCOUNT: fake-snowflake-account
-  SNOWFLAKE_WAREHOUSE: fake-snowflake-warehouse
-  SNOWFLAKE_DATABASE: fake-snowflake-database
-  SNOWFLAKE_SCHEMA: fake-snowflake-schema
-  EMBEDCHAIN_DB_URI: sqlite:///test.db
 jobs:
   tests:
     name: tests (${{ matrix.python-version }})
@@ -84,26 +72,20 @@ jobs:
         # fi
         cd lib/crewai && uv run pytest \
-          --block-network \
-          --timeout=30 \
           -vv \
           --splits 8 \
           --group ${{ matrix.group }} \
           $DURATIONS_ARG \
           --durations=10 \
-          -n auto \
           --maxfail=3
     - name: Run tool tests (group ${{ matrix.group }} of 8)
       run: |
         cd lib/crewai-tools && uv run pytest \
-          --block-network \
-          --timeout=30 \
           -vv \
           --splits 8 \
           --group ${{ matrix.group }} \
           --durations=10 \
-          -n auto \
           --maxfail=3


@@ -0,0 +1,18 @@
name: Trigger Deployment Tests
on:
  release:
    types: [published]
jobs:
  trigger:
    name: Trigger deployment tests
    runs-on: ubuntu-latest
    steps:
      - name: Trigger deployment tests
        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.CREWAI_DEPLOYMENTS_PAT }}
          repository: ${{ secrets.CREWAI_DEPLOYMENTS_REPOSITORY }}
          event-type: crewai-release
          client-payload: '{"release_tag": "${{ github.event.release.tag_name }}", "release_name": "${{ github.event.release.name }}"}'


@@ -19,8 +19,15 @@ repos:
       language: system
       pass_filenames: true
       types: [python]
+      exclude: ^(lib/crewai/src/crewai/cli/templates/|lib/crewai/tests/|lib/crewai-tools/tests/)
   - repo: https://github.com/astral-sh/uv-pre-commit
     rev: 0.9.3
     hooks:
       - id: uv-lock
+  - repo: https://github.com/commitizen-tools/commitizen
+    rev: v4.10.1
+    hooks:
+      - id: commitizen
+      - id: commitizen-branch
+        stages: [ pre-push ]


@@ -57,7 +57,7 @@
 > It empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario.
 - **CrewAI Crews**: Optimize for autonomy and collaborative intelligence.
-- **CrewAI Flows**: Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively
+- **CrewAI Flows**: The **enterprise and production architecture** for building and deploying multi-agent systems. Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively
 With over 100,000 developers certified through our community courses at [learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
 standard for enterprise-ready AI automation.
@@ -124,7 +124,8 @@ Setup and run your first CrewAI agents by following this tutorial.
 [![CrewAI Getting Started Tutorial](https://img.youtube.com/vi/-kSOTtYzgEw/hqdefault.jpg)](https://www.youtube.com/watch?v=-kSOTtYzgEw "CrewAI Getting Started Tutorial")
 ###
 Learning Resources
 Learn CrewAI through our comprehensive courses:
@@ -141,6 +142,7 @@ CrewAI offers two powerful, complementary approaches that work seamlessly togeth
 - Dynamic task delegation and collaboration
 - Specialized roles with defined goals and expertise
 - Flexible problem-solving approaches
 2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:
 - Fine-grained control over execution paths for real-world scenarios
@@ -166,13 +168,13 @@ Ensure you have Python >=3.10 <3.14 installed on your system. CrewAI uses [UV](h
 First, install CrewAI:
 ```shell
-pip install crewai
+uv pip install crewai
 ```
 If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command:
 ```shell
-pip install 'crewai[tools]'
+uv pip install 'crewai[tools]'
 ```
 The command above installs the basic package and also adds extra components which require more dependencies to function.
@@ -185,14 +187,15 @@ If you encounter issues during installation or usage, here are some common solut
 1. **ModuleNotFoundError: No module named 'tiktoken'**
-   - Install tiktoken explicitly: `pip install 'crewai[embeddings]'`
-   - If using embedchain or other tools: `pip install 'crewai[tools]'`
+   - Install tiktoken explicitly: `uv pip install 'crewai[embeddings]'`
+   - If using embedchain or other tools: `uv pip install 'crewai[tools]'`
 2. **Failed building wheel for tiktoken**
    - Ensure Rust compiler is installed (see installation steps above)
    - For Windows: Verify Visual C++ Build Tools are installed
-   - Try upgrading pip: `pip install --upgrade pip`
-   - If issues persist, use a pre-built wheel: `pip install tiktoken --prefer-binary`
+   - Try upgrading pip: `uv pip install --upgrade pip`
+   - If issues persist, use a pre-built wheel: `uv pip install tiktoken --prefer-binary`
 ### 2. Setting Up Your Crew with the YAML Configuration
@@ -270,7 +273,7 @@ reporting_analyst:
 **tasks.yaml**
-```yaml
+````yaml
 # src/my_project/config/tasks.yaml
 research_task:
   description: >
@@ -290,7 +293,7 @@ reporting_task:
     Formatted as markdown without '```'
   agent: reporting_analyst
   output_file: report.md
-```
+````
 **crew.py**
@@ -556,7 +559,7 @@ Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-
 - **LangGraph**: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.
-*P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb)).*
+_P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb))._
 - **Autogen**: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
 - **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
@@ -611,7 +614,7 @@ uv build
 ### Installing Locally
 ```bash
-pip install dist/*.tar.gz
+uv pip install dist/*.tar.gz
 ```
 ## Telemetry
@@ -687,13 +690,13 @@ A: CrewAI is a standalone, lean, and fast Python framework built specifically fo
 A: Install CrewAI using pip:
 ```shell
-pip install crewai
+uv pip install crewai
 ```
 For additional tools, use:
 ```shell
-pip install 'crewai[tools]'
+uv pip install 'crewai[tools]'
 ```
 ### Q: Does CrewAI depend on LangChain?

conftest.py (new file, 197 lines)

@@ -0,0 +1,197 @@
"""Pytest configuration for crewAI workspace."""
from collections.abc import Generator
import os
from pathlib import Path
import tempfile
from typing import Any
from dotenv import load_dotenv
import pytest
from vcr.request import Request # type: ignore[import-untyped]
env_test_path = Path(__file__).parent / ".env.test"
load_dotenv(env_test_path, override=True)
load_dotenv(override=True)
@pytest.fixture(autouse=True, scope="function")
def cleanup_event_handlers() -> Generator[None, Any, None]:
"""Clean up event bus handlers after each test to prevent test pollution."""
yield
try:
from crewai.events.event_bus import crewai_event_bus
with crewai_event_bus._rwlock.w_locked():
crewai_event_bus._sync_handlers.clear()
crewai_event_bus._async_handlers.clear()
except Exception: # noqa: S110
pass
@pytest.fixture(autouse=True, scope="function")
def setup_test_environment() -> Generator[None, Any, None]:
"""Setup test environment for crewAI workspace."""
with tempfile.TemporaryDirectory() as temp_dir:
storage_dir = Path(temp_dir) / "crewai_test_storage"
storage_dir.mkdir(parents=True, exist_ok=True)
if not storage_dir.exists() or not storage_dir.is_dir():
raise RuntimeError(
f"Failed to create test storage directory: {storage_dir}"
)
try:
test_file = storage_dir / ".permissions_test"
test_file.touch()
test_file.unlink()
except (OSError, IOError) as e:
raise RuntimeError(
f"Test storage directory {storage_dir} is not writable: {e}"
) from e
os.environ["CREWAI_STORAGE_DIR"] = str(storage_dir)
os.environ["CREWAI_TESTING"] = "true"
try:
yield
finally:
os.environ.pop("CREWAI_TESTING", "true")
os.environ.pop("CREWAI_STORAGE_DIR", None)
os.environ.pop("CREWAI_DISABLE_TELEMETRY", "true")
os.environ.pop("OTEL_SDK_DISABLED", "true")
os.environ.pop("OPENAI_BASE_URL", "https://api.openai.com/v1")
os.environ.pop("OPENAI_API_BASE", "https://api.openai.com/v1")
HEADERS_TO_FILTER = {
"authorization": "AUTHORIZATION-XXX",
"content-security-policy": "CSP-FILTERED",
"cookie": "COOKIE-XXX",
"set-cookie": "SET-COOKIE-XXX",
"permissions-policy": "PERMISSIONS-POLICY-XXX",
"referrer-policy": "REFERRER-POLICY-XXX",
"strict-transport-security": "STS-XXX",
"x-content-type-options": "X-CONTENT-TYPE-XXX",
"x-frame-options": "X-FRAME-OPTIONS-XXX",
"x-permitted-cross-domain-policies": "X-PERMITTED-XXX",
"x-request-id": "X-REQUEST-ID-XXX",
"x-runtime": "X-RUNTIME-XXX",
"x-xss-protection": "X-XSS-PROTECTION-XXX",
"x-stainless-arch": "X-STAINLESS-ARCH-XXX",
"x-stainless-os": "X-STAINLESS-OS-XXX",
"x-stainless-read-timeout": "X-STAINLESS-READ-TIMEOUT-XXX",
"cf-ray": "CF-RAY-XXX",
"etag": "ETAG-XXX",
"Strict-Transport-Security": "STS-XXX",
"access-control-expose-headers": "ACCESS-CONTROL-XXX",
"openai-organization": "OPENAI-ORG-XXX",
"openai-project": "OPENAI-PROJECT-XXX",
"x-ratelimit-limit-requests": "X-RATELIMIT-LIMIT-REQUESTS-XXX",
"x-ratelimit-limit-tokens": "X-RATELIMIT-LIMIT-TOKENS-XXX",
"x-ratelimit-remaining-requests": "X-RATELIMIT-REMAINING-REQUESTS-XXX",
"x-ratelimit-remaining-tokens": "X-RATELIMIT-REMAINING-TOKENS-XXX",
"x-ratelimit-reset-requests": "X-RATELIMIT-RESET-REQUESTS-XXX",
"x-ratelimit-reset-tokens": "X-RATELIMIT-RESET-TOKENS-XXX",
"x-goog-api-key": "X-GOOG-API-KEY-XXX",
"api-key": "X-API-KEY-XXX",
"User-Agent": "X-USER-AGENT-XXX",
"apim-request-id:": "X-API-CLIENT-REQUEST-ID-XXX",
"azureml-model-session": "AZUREML-MODEL-SESSION-XXX",
"x-ms-client-request-id": "X-MS-CLIENT-REQUEST-ID-XXX",
"x-ms-region": "X-MS-REGION-XXX",
"apim-request-id": "APIM-REQUEST-ID-XXX",
"x-api-key": "X-API-KEY-XXX",
"anthropic-organization-id": "ANTHROPIC-ORGANIZATION-ID-XXX",
"request-id": "REQUEST-ID-XXX",
"anthropic-ratelimit-input-tokens-limit": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-LIMIT-XXX",
"anthropic-ratelimit-input-tokens-remaining": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-REMAINING-XXX",
"anthropic-ratelimit-input-tokens-reset": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-RESET-XXX",
"anthropic-ratelimit-output-tokens-limit": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-LIMIT-XXX",
"anthropic-ratelimit-output-tokens-remaining": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-REMAINING-XXX",
"anthropic-ratelimit-output-tokens-reset": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-RESET-XXX",
"anthropic-ratelimit-tokens-limit": "ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX",
"anthropic-ratelimit-tokens-remaining": "ANTHROPIC-RATELIMIT-TOKENS-REMAINING-XXX",
"anthropic-ratelimit-tokens-reset": "ANTHROPIC-RATELIMIT-TOKENS-RESET-XXX",
"x-amz-date": "X-AMZ-DATE-XXX",
"amz-sdk-invocation-id": "AMZ-SDK-INVOCATION-ID-XXX",
"accept-encoding": "ACCEPT-ENCODING-XXX",
"x-amzn-requestid": "X-AMZN-REQUESTID-XXX",
"x-amzn-RequestId": "X-AMZN-REQUESTID-XXX",
}
def _filter_request_headers(request: Request) -> Request: # type: ignore[no-any-unimported]
"""Filter sensitive headers from request before recording."""
for header_name, replacement in HEADERS_TO_FILTER.items():
for variant in [header_name, header_name.upper(), header_name.title()]:
if variant in request.headers:
request.headers[variant] = [replacement]
request.method = request.method.upper()
return request
def _filter_response_headers(response: dict[str, Any]) -> dict[str, Any]:
"""Filter sensitive headers from response before recording."""
# Remove Content-Encoding to prevent decompression issues on replay
for encoding_header in ["Content-Encoding", "content-encoding"]:
response["headers"].pop(encoding_header, None)
for header_name, replacement in HEADERS_TO_FILTER.items():
for variant in [header_name, header_name.upper(), header_name.title()]:
if variant in response["headers"]:
response["headers"][variant] = [replacement]
return response
@pytest.fixture(scope="module")
def vcr_cassette_dir(request: Any) -> str:
"""Generate cassette directory path based on test module location.
Organizes cassettes to mirror test directory structure within each package:
lib/crewai/tests/llms/google/test_google.py -> lib/crewai/tests/cassettes/llms/google/
lib/crewai-tools/tests/tools/test_search.py -> lib/crewai-tools/tests/cassettes/tools/
"""
test_file = Path(request.fspath)
for parent in test_file.parents:
if parent.name in ("crewai", "crewai-tools") and parent.parent.name == "lib":
package_root = parent
break
else:
package_root = test_file.parent
tests_root = package_root / "tests"
test_dir = test_file.parent
if test_dir != tests_root:
relative_path = test_dir.relative_to(tests_root)
cassette_dir = tests_root / "cassettes" / relative_path
else:
cassette_dir = tests_root / "cassettes"
cassette_dir.mkdir(parents=True, exist_ok=True)
return str(cassette_dir)
@pytest.fixture(scope="module")
def vcr_config(vcr_cassette_dir: str) -> dict[str, Any]:
"""Configure VCR with organized cassette storage."""
config = {
"cassette_library_dir": vcr_cassette_dir,
"record_mode": os.getenv("PYTEST_VCR_RECORD_MODE", "once"),
"filter_headers": [(k, v) for k, v in HEADERS_TO_FILTER.items()],
"before_record_request": _filter_request_headers,
"before_record_response": _filter_response_headers,
"filter_query_parameters": ["key"],
"match_on": ["method", "scheme", "host", "port", "path"],
}
if os.getenv("GITHUB_ACTIONS") == "true":
config["record_mode"] = "none"
return config
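The request/response hooks above reduce to one idea: replace sensitive header values while matching header names case-insensitively. A minimal stdlib sketch of that idea on a plain headers dict (no vcrpy dependency; `sanitize_headers` and its small `SENSITIVE` map are illustrative, not part of the repo):

```python
# Illustrative subset of the header map used in conftest.py.
SENSITIVE = {
    "authorization": "AUTHORIZATION-XXX",
    "x-api-key": "X-API-KEY-XXX",
    "cookie": "COOKIE-XXX",
}


def sanitize_headers(headers: dict[str, list[str]]) -> dict[str, list[str]]:
    """Replace sensitive header values, matching names case-insensitively
    while preserving the original casing of header names."""
    out: dict[str, list[str]] = {}
    for name, values in headers.items():
        replacement = SENSITIVE.get(name.lower())
        out[name] = [replacement] if replacement is not None else values
    return out
```

Normalizing via `name.lower()` covers every casing in one lookup, which is the same goal the conftest hooks pursue by probing the `upper()` and `title()` variants of each filtered name.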


@@ -116,6 +116,7 @@
         "en/concepts/tasks",
         "en/concepts/crews",
         "en/concepts/flows",
+        "en/concepts/production-architecture",
         "en/concepts/knowledge",
         "en/concepts/llms",
         "en/concepts/processes",
@@ -253,7 +254,8 @@
       "pages": [
         "en/tools/integration/overview",
         "en/tools/integration/bedrockinvokeagenttool",
-        "en/tools/integration/crewaiautomationtool"
+        "en/tools/integration/crewaiautomationtool",
+        "en/tools/integration/mergeagenthandlertool"
       ]
     },
     {
@@ -276,6 +278,7 @@
         "en/observability/overview",
         "en/observability/arize-phoenix",
         "en/observability/braintrust",
+        "en/observability/datadog",
         "en/observability/langdb",
         "en/observability/langfuse",
         "en/observability/langtrace",
@@ -306,13 +309,17 @@
         "en/learn/hierarchical-process",
         "en/learn/human-input-on-execution",
         "en/learn/human-in-the-loop",
+        "en/learn/human-feedback-in-flows",
         "en/learn/kickoff-async",
         "en/learn/kickoff-for-each",
         "en/learn/llm-connections",
         "en/learn/multimodal-agents",
         "en/learn/replay-tasks-from-latest-crew-kickoff",
         "en/learn/sequential-process",
-        "en/learn/using-annotations"
+        "en/learn/using-annotations",
+        "en/learn/execution-hooks",
+        "en/learn/llm-hooks",
+        "en/learn/tool-hooks"
       ]
     },
     {
@@ -553,6 +560,7 @@
         "pt-BR/concepts/tasks",
         "pt-BR/concepts/crews",
         "pt-BR/concepts/flows",
+        "pt-BR/concepts/production-architecture",
         "pt-BR/concepts/knowledge",
         "pt-BR/concepts/llms",
         "pt-BR/concepts/processes",
@@ -697,9 +705,11 @@
     {
       "group": "Observabilidade",
       "pages": [
+        "pt-BR/observability/tracing",
         "pt-BR/observability/overview",
         "pt-BR/observability/arize-phoenix",
         "pt-BR/observability/braintrust",
+        "pt-BR/observability/datadog",
         "pt-BR/observability/langdb",
         "pt-BR/observability/langfuse",
         "pt-BR/observability/langtrace",
@@ -729,13 +739,17 @@
         "pt-BR/learn/hierarchical-process",
         "pt-BR/learn/human-input-on-execution",
         "pt-BR/learn/human-in-the-loop",
+        "pt-BR/learn/human-feedback-in-flows",
         "pt-BR/learn/kickoff-async",
         "pt-BR/learn/kickoff-for-each",
         "pt-BR/learn/llm-connections",
         "pt-BR/learn/multimodal-agents",
         "pt-BR/learn/replay-tasks-from-latest-crew-kickoff",
         "pt-BR/learn/sequential-process",
-        "pt-BR/learn/using-annotations"
+        "pt-BR/learn/using-annotations",
+        "pt-BR/learn/execution-hooks",
+        "pt-BR/learn/llm-hooks",
+        "pt-BR/learn/tool-hooks"
       ]
     },
     {
@@ -973,6 +987,7 @@
         "ko/concepts/tasks",
         "ko/concepts/crews",
         "ko/concepts/flows",
+        "ko/concepts/production-architecture",
         "ko/concepts/knowledge",
         "ko/concepts/llms",
         "ko/concepts/processes",
@@ -1129,9 +1144,11 @@
     {
       "group": "Observability",
      "pages": [
+        "ko/observability/tracing",
         "ko/observability/overview",
         "ko/observability/arize-phoenix",
         "ko/observability/braintrust",
+        "ko/observability/datadog",
         "ko/observability/langdb",
         "ko/observability/langfuse",
         "ko/observability/langtrace",
@@ -1161,13 +1178,17 @@
         "ko/learn/hierarchical-process",
         "ko/learn/human-input-on-execution",
         "ko/learn/human-in-the-loop",
+        "ko/learn/human-feedback-in-flows",
         "ko/learn/kickoff-async",
         "ko/learn/kickoff-for-each",
         "ko/learn/llm-connections",
         "ko/learn/multimodal-agents",
         "ko/learn/replay-tasks-from-latest-crew-kickoff",
         "ko/learn/sequential-process",
-        "ko/learn/using-annotations"
+        "ko/learn/using-annotations",
+        "ko/learn/execution-hooks",
+        "ko/learn/llm-hooks",
+        "ko/learn/tool-hooks"
       ]
     },
     {


@@ -16,16 +16,17 @@ Welcome to the CrewAI AMP API reference. This API allows you to programmatically
Navigate to your crew's detail page in the CrewAI AMP dashboard and copy your Bearer Token from the Status tab.
</Step>
<Step title="Discover Required Inputs">
Use the `GET /inputs` endpoint to see what parameters your crew expects.
</Step>
<Step title="Start a Crew Execution">
Call `POST /kickoff` with your inputs to start the crew execution and receive a `kickoff_id`.
</Step>
<Step title="Monitor Progress">
Use `GET /{kickoff_id}/status` to check execution status and retrieve results.
</Step>
</Steps>
@@ -40,13 +41,14 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
### Token Types

| Token Type | Scope | Use Case |
| :-------------------- | :------------------------ | :----------------------------------------------------------- |
| **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integration |
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |

<Tip>
You can find both token types in the Status tab of your crew's detail page in the CrewAI AMP dashboard.
</Tip>

## Base URL
@@ -63,29 +65,33 @@ Replace `your-crew-name` with your actual crew's URL from the dashboard.
1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /{kickoff_id}/status` until completion
4. **Results**: Extract the final output from the completed response
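The four-step lifecycle can be sketched end to end. This is a minimal illustration using only the Python standard library; the crew URL, token, example inputs, and the `abc123` kickoff ID are placeholders, and `build_request` is a hypothetical helper, not part of any CrewAI SDK:

```python
import json
import urllib.request

BASE_URL = "https://your-actual-crew-name.crewai.com"  # placeholder crew URL
TOKEN = "YOUR_CREW_TOKEN"  # placeholder Bearer Token from the dashboard


def build_request(path, payload=None):
    """Build an authenticated request for a crew endpoint (GET, or POST with a JSON body)."""
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        BASE_URL + path,
        data=data,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST" if data is not None else "GET",
    )


# Lifecycle, mirroring the four steps above:
inputs_req = build_request("/inputs")                          # 1. Discovery
kickoff_req = build_request("/kickoff", {"inputs": {"topic": "AI"}})  # 2. Execution
status_req = build_request("/abc123/status")                   # 3/4. Monitoring, results
```

In practice you would send each request with `urllib.request.urlopen(...)` and poll the status endpoint (using the `kickoff_id` returned by `/kickoff`) until the execution completes.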
## Error Handling

The API uses standard HTTP status codes:

| Code | Meaning |
| ----- | :----------------------------------------- |
| `200` | Success |
| `400` | Bad Request - Invalid input format |
| `401` | Unauthorized - Invalid bearer token |
| `404` | Not Found - Resource doesn't exist |
| `422` | Validation Error - Missing required inputs |
| `500` | Server Error - Contact support |
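A client can branch on these codes mechanically. The grouping below simply mirrors the table; the function name and categories are illustrative, not part of the API:

```python
RETRYABLE = {500}                      # server errors: retry, then contact support
CLIENT_ERRORS = {400, 401, 404, 422}  # fix the request before retrying


def classify(status: int) -> str:
    """Return a coarse client-side action for an HTTP status from the crew API."""
    if status == 200:
        return "ok"
    if status in RETRYABLE:
        return "retry"
    if status in CLIENT_ERRORS:
        return "fix-request"
    return "unexpected"
```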
## Interactive Testing

<Info>
**Why no "Send" button?** Since each CrewAI AMP user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.
</Info>

Each endpoint page shows you:

- ✅ **Exact request format** with all parameters
- ✅ **Response examples** for success and error cases
- ✅ **Code samples** in multiple languages (cURL, Python, JavaScript, etc.)
@@ -103,6 +109,7 @@ Each endpoint page shows you:
</CardGroup>

**Example workflow:**

1. **Copy this cURL example** from any endpoint page
2. **Replace `your-actual-crew-name.crewai.com`** with your real crew URL
3. **Replace the Bearer token** with your real token from the dashboard
@@ -111,10 +118,18 @@ Each endpoint page shows you:
## Need Help?

<CardGroup cols={2}>
  <Card
    title="Enterprise Support"
    icon="headset"
    href="mailto:support@crewai.com"
  >
    Get help with API integration and troubleshooting
  </Card>
  <Card
    title="Enterprise Dashboard"
    icon="chart-line"
    href="https://app.crewai.com"
  >
    Manage your crews and view execution logs
  </Card>
</CardGroup>


@@ -1,8 +1,6 @@
---
title: "GET /{kickoff_id}/status"
description: "Get execution status"
openapi: "/enterprise-api.en.yaml GET /{kickoff_id}/status"
mode: "wide"
---


@@ -8,6 +8,7 @@ mode: "wide"
## Overview of an Agent

In the CrewAI framework, an `Agent` is an autonomous unit that can:

- Perform specific tasks
- Make decisions based on its role and goal
- Use tools to accomplish objectives
@@ -16,7 +17,10 @@ In the CrewAI framework, an `Agent` is an autonomous unit that can:
- Delegate tasks when allowed

<Tip>
Think of an agent as a specialized team member with specific skills, expertise, and responsibilities. For example, a `Researcher` agent might excel at gathering and analyzing information, while a `Writer` agent might be better at creating content.
</Tip>

<Note type="info" title="Enterprise Enhancement: Visual Agent Builder">
@@ -25,6 +29,7 @@ CrewAI AMP includes a Visual Agent Builder that simplifies agent creation and co
![Visual Agent Builder Screenshot](/images/enterprise/crew-studio-interface.png)

The Visual Agent Builder enables:

- Intuitive agent configuration with form-based interfaces
- Real-time testing and validation
- Template library with pre-configured agent types
@@ -33,36 +38,36 @@ The Visual Agent Builder enables:
## Agent Attributes

| Attribute | Parameter | Type | Description |
| :--- | :--- | :--- | :--- |
| **Role** | `role` | `str` | Defines the agent's function and expertise within the crew. |
| **Goal** | `goal` | `str` | The individual objective that guides the agent's decision-making. |
| **Backstory** | `backstory` | `str` | Provides context and personality to the agent, enriching interactions. |
| **LLM** _(optional)_ | `llm` | `Union[str, LLM, Any]` | Language model that powers the agent. Defaults to the model specified in `OPENAI_MODEL_NAME` or "gpt-4". |
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | Capabilities or functions available to the agent. Defaults to an empty list. |
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | `Optional[Any]` | Language model for tool calling, overrides crew's LLM if specified. |
| **Max Iterations** _(optional)_ | `max_iter` | `int` | Maximum iterations before the agent must provide its best answer. Default is 20. |
| **Max RPM** _(optional)_ | `max_rpm` | `Optional[int]` | Maximum requests per minute to avoid rate limits. |
| **Max Execution Time** _(optional)_ | `max_execution_time` | `Optional[int]` | Maximum time (in seconds) for task execution. |
| **Verbose** _(optional)_ | `verbose` | `bool` | Enable detailed execution logs for debugging. Default is False. |
| **Allow Delegation** _(optional)_ | `allow_delegation` | `bool` | Allow the agent to delegate tasks to other agents. Default is False. |
| **Step Callback** _(optional)_ | `step_callback` | `Optional[Any]` | Function called after each agent step, overrides crew callback. |
| **Cache** _(optional)_ | `cache` | `bool` | Enable caching for tool usage. Default is True. |
| **System Template** _(optional)_ | `system_template` | `Optional[str]` | Custom system prompt template for the agent. |
| **Prompt Template** _(optional)_ | `prompt_template` | `Optional[str]` | Custom prompt template for the agent. |
| **Response Template** _(optional)_ | `response_template` | `Optional[str]` | Custom response template for the agent. |
| **Allow Code Execution** _(optional)_ | `allow_code_execution` | `Optional[bool]` | Enable code execution for the agent. Default is False. |
| **Max Retry Limit** _(optional)_ | `max_retry_limit` | `int` | Maximum number of retries when an error occurs. Default is 2. |
| **Respect Context Window** _(optional)_ | `respect_context_window` | `bool` | Keep messages under context window size by summarizing. Default is True. |
| **Code Execution Mode** _(optional)_ | `code_execution_mode` | `Literal["safe", "unsafe"]` | Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct). Default is 'safe'. |
| **Multimodal** _(optional)_ | `multimodal` | `bool` | Whether the agent supports multimodal capabilities. Default is False. |
| **Inject Date** _(optional)_ | `inject_date` | `bool` | Whether to automatically inject the current date into tasks. Default is False. |
| **Date Format** _(optional)_ | `date_format` | `str` | Format string for date when inject_date is enabled. Default is "%Y-%m-%d" (ISO format). |
| **Reasoning** _(optional)_ | `reasoning` | `bool` | Whether the agent should reflect and create a plan before executing a task. Default is False. |
| **Max Reasoning Attempts** _(optional)_ | `max_reasoning_attempts` | `Optional[int]` | Maximum number of reasoning attempts before executing the task. If None, will try until ready. |
| **Embedder** _(optional)_ | `embedder` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | `Optional[List[BaseKnowledgeSource]]` | Knowledge sources available to the agent. |
| **Use System Prompt** _(optional)_ | `use_system_prompt` | `Optional[bool]` | Whether to use system prompt (for o1 model support). Default is True. |
## Creating Agents
@@ -137,7 +142,8 @@ class LatestAiDevelopmentCrew():
```

<Note>
The names you use in your YAML files (`agents.yaml`) should match the method names in your Python code.
</Note>

### Direct Code Definition
@@ -184,6 +190,7 @@ agent = Agent(
Let's break down some key parameter combinations for common use cases:

#### Basic Research Agent

```python Code
research_agent = Agent(
    role="Research Analyst",
@@ -195,6 +202,7 @@ research_agent = Agent(
```

#### Code Development Agent

```python Code
dev_agent = Agent(
    role="Senior Python Developer",
@@ -208,6 +216,7 @@ dev_agent = Agent(
```

#### Long-Running Analysis Agent

```python Code
analysis_agent = Agent(
    role="Data Analyst",
@@ -221,6 +230,7 @@ analysis_agent = Agent(
```

#### Custom Template Agent

```python Code
custom_agent = Agent(
    role="Customer Service Representative",
@@ -236,6 +246,7 @@ custom_agent = Agent(
```

#### Date-Aware Agent with Reasoning

```python Code
strategic_agent = Agent(
    role="Market Analyst",
@@ -250,6 +261,7 @@ strategic_agent = Agent(
```

#### Reasoning Agent

```python Code
reasoning_agent = Agent(
    role="Strategic Planner",
@@ -263,6 +275,7 @@ reasoning_agent = Agent(
```

#### Multimodal Agent

```python Code
multimodal_agent = Agent(
    role="Visual Content Analyst",
@@ -276,52 +289,64 @@ multimodal_agent = Agent(
### Parameter Details

#### Critical Parameters

- `role`, `goal`, and `backstory` are required and shape the agent's behavior
- `llm` determines the language model used (default: OpenAI's GPT-4)

#### Memory and Context

- `memory`: Enable to maintain conversation history
- `respect_context_window`: Prevents token limit issues
- `knowledge_sources`: Add domain-specific knowledge bases

#### Execution Control

- `max_iter`: Maximum attempts before giving best answer
- `max_execution_time`: Timeout in seconds
- `max_rpm`: Rate limiting for API calls
- `max_retry_limit`: Retries on error
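As a rough illustration of how a bounded retry limit like `max_retry_limit` behaves (this is generic retry logic, not CrewAI's internal implementation): a limit of 2 means up to three attempts in total before the error propagates.

```python
import time


def run_with_retries(fn, max_retry_limit=2, delay=0.0):
    """Call fn, retrying up to max_retry_limit times on error.

    max_retry_limit=2 allows three attempts in total; the last
    exception is re-raised once the limit is exhausted.
    """
    last_exc = None
    for _attempt in range(max_retry_limit + 1):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```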
#### Code Execution

- `allow_code_execution`: Must be True to run code
- `code_execution_mode`:
  - `"safe"`: Uses Docker (recommended for production)
  - `"unsafe"`: Direct execution (use only in trusted environments)

<Note>
This runs a default Docker image. If you want to configure the Docker image, check out the Code Interpreter Tool in the tools section and pass it to the agent via the `tools` parameter.
</Note>
#### Advanced Features

- `multimodal`: Enable multimodal capabilities for processing text and visual content
- `reasoning`: Enable agent to reflect and create plans before executing tasks
- `inject_date`: Automatically inject current date into task descriptions

#### Templates

- `system_template`: Defines agent's core behavior
- `prompt_template`: Structures input format
- `response_template`: Formats agent responses

<Note>
When using custom templates, ensure that both `system_template` and `prompt_template` are defined. The `response_template` is optional but recommended for consistent output formatting.
</Note>

<Note>
When using custom templates, you can use variables like `{role}`, `{goal}`, and `{backstory}` in your templates. These will be automatically populated during execution.
</Note>
## Agent Tools

Agents can be equipped with various tools to enhance their capabilities. CrewAI supports tools from:

- [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools)
- [LangChain Tools](https://python.langchain.com/docs/integrations/tools)
@@ -360,7 +385,8 @@ analyst = Agent(
```

<Note>
When `memory` is enabled, the agent will maintain context across multiple interactions, improving its ability to handle complex, multi-step tasks.
</Note>

## Context Window Management
@@ -390,6 +416,7 @@ smart_agent = Agent(
```

**What happens when context limits are exceeded:**

- ⚠️ **Warning message**: `"Context length exceeded. Summarizing content to fit the model context window."`
- 🔄 **Automatic summarization**: CrewAI intelligently summarizes the conversation history
- ✅ **Continued execution**: Task execution continues seamlessly with the summarized context
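The summarization fallback can be pictured with a toy model. The sketch below is not CrewAI's actual algorithm - it uses a crude character budget and a placeholder summarizer - but it shows the shape of the behavior: the oldest messages are collapsed into a summary so execution can continue within the budget.

```python
SUMMARY_RESERVE = 50  # rough room reserved for the summary message


def fit_to_budget(messages, budget,
                  summarize=lambda text: ("[summary] " + text)[:SUMMARY_RESERVE]):
    """Toy context-window handling: if the transcript exceeds the budget,
    collapse the oldest messages into one summary and keep the rest."""
    if sum(len(m) for m in messages) <= budget:
        return list(messages)  # everything fits; nothing to do
    kept = list(messages)
    dropped = []
    # Drop oldest messages until what remains (plus the summary) fits.
    while kept and sum(len(m) for m in kept) > budget - SUMMARY_RESERVE:
        dropped.append(kept.pop(0))
    return [summarize(" ".join(dropped))] + kept
```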
@@ -411,6 +438,7 @@ strict_agent = Agent(
```

**What happens when context limits are exceeded:**

- ❌ **Error message**: `"Context length exceeded. Consider using smaller text or RAG tools from crewai_tools."`
- 🛑 **Execution stops**: Task execution halts immediately
- 🔧 **Manual intervention required**: You need to modify your approach
@@ -418,6 +446,7 @@ strict_agent = Agent(
### Choosing the Right Setting

#### Use `respect_context_window=True` (Default) when:

- **Processing large documents** that might exceed context limits
- **Long-running conversations** where some summarization is acceptable
- **Research tasks** where general context is more important than exact details
@@ -436,6 +465,7 @@ document_processor = Agent(
```

#### Use `respect_context_window=False` when:

- **Precision is critical** and information loss is unacceptable
- **Legal or medical tasks** requiring complete context
- **Code review** where missing details could introduce bugs
@@ -458,6 +488,7 @@ precision_agent = Agent(
When dealing with very large datasets, consider these strategies:

#### 1. Use RAG Tools

```python Code
from crewai_tools import RagTool
@@ -475,6 +506,7 @@ rag_agent = Agent(
```

#### 2. Use Knowledge Sources

```python Code
# Use knowledge sources instead of large prompts
knowledge_agent = Agent(
@@ -498,6 +530,7 @@ knowledge_agent = Agent(
### Troubleshooting Context Issues

**If you're getting context limit errors:**

```python Code
# Quick fix: Enable automatic handling
agent.respect_context_window = True
@@ -511,6 +544,7 @@ agent.tools = [RagTool()]
```

**If automatic summarization loses important information:**

```python Code
# Disable auto-summarization and use RAG instead
agent = Agent(
@@ -524,7 +558,10 @@ agent = Agent(
```

<Note>
The context window management feature works automatically in the background. You don't need to call any special functions - just set `respect_context_window` to your preferred behavior and CrewAI handles the rest!
</Note>

## Direct Agent Interaction with `kickoff()`
@@ -556,10 +593,10 @@ print(result.raw)
### Parameters and Return Values

| Parameter | Type | Description |
| :---------------- | :--------------------------------- | :------------------------------------------------------------------------ |
| `messages` | `Union[str, List[Dict[str, str]]]` | Either a string query or a list of message dictionaries with role/content |
| `response_format` | `Optional[Type[Any]]` | Optional Pydantic model for structured output |
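Because `messages` accepts either form, normalization looks roughly like this - a hypothetical helper illustrating the two accepted shapes, not CrewAI's internal code:

```python
def normalize_messages(messages):
    """Accept either a plain string or a list of role/content dicts,
    as kickoff()'s `messages` parameter does, and return the list form."""
    if isinstance(messages, str):
        # A bare string becomes a single user message.
        return [{"role": "user", "content": messages}]
    for m in messages:
        if not {"role", "content"} <= m.keys():
            raise ValueError(f"message missing role/content: {m!r}")
    return list(messages)
```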
The method returns a `LiteAgentOutput` object with the following properties:
@@ -621,28 +658,34 @@ asyncio.run(main())
```

<Note>
The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler execution flow while preserving all of the agent's configuration (role, goal, backstory, tools, etc.).
</Note>
## Important Considerations and Best Practices

### Security and Code Execution

- When using `allow_code_execution`, be cautious with user input and always validate it
- Use `code_execution_mode: "safe"` (Docker) in production environments
- Consider setting appropriate `max_execution_time` limits to prevent infinite loops
### Performance Optimization

- Use `respect_context_window: true` to prevent token limit issues
- Set appropriate `max_rpm` to avoid rate limiting
- Enable `cache: true` to improve performance for repetitive tasks
- Adjust `max_iter` and `max_retry_limit` based on task complexity
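Conceptually, a `max_rpm`-style throttle tracks recent call timestamps and sleeps when a rolling one-minute window is full. The sketch below is generic rate limiting, not CrewAI's implementation; the injectable `clock` and `sleep` keep the example deterministic.

```python
from collections import deque


class RpmLimiter:
    """Throttle that blocks when more than max_rpm calls would land
    inside a rolling 60-second window."""

    def __init__(self, max_rpm, clock, sleep):
        self.max_rpm = max_rpm
        self.clock = clock    # callable returning current time in seconds
        self.sleep = sleep    # callable that sleeps for a duration
        self.calls = deque()  # timestamps of recent calls

    def acquire(self):
        now = self.clock()
        # Evict timestamps that fell out of the 60-second window.
        while self.calls and now - self.calls[0] >= 60:
            self.calls.popleft()
        if len(self.calls) >= self.max_rpm:
            # Wait until the oldest call ages out of the window.
            self.sleep(60 - (now - self.calls[0]))
            now = self.clock()
            while self.calls and now - self.calls[0] >= 60:
                self.calls.popleft()
        self.calls.append(now)
```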
### Memory and Context Management

- Leverage `knowledge_sources` for domain-specific information
- Configure `embedder` when using custom embedding models
- Use custom templates (`system_template`, `prompt_template`, `response_template`) for fine-grained control over agent behavior

### Advanced Features

- Enable `reasoning: true` for agents that need to plan and reflect before executing complex tasks
- Set appropriate `max_reasoning_attempts` to control planning iterations (None for unlimited attempts)
- Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks
@@ -650,6 +693,7 @@ The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler e
- Enable `multimodal: true` for agents that need to process both text and visual content

### Agent Collaboration

- Enable `allow_delegation: true` when agents need to work together
- Use `step_callback` to monitor and log agent interactions
- Consider using different LLMs for different purposes:
@@ -657,6 +701,7 @@ The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler e
  - `function_calling_llm` for efficient tool usage

### Date Awareness and Reasoning

- Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks
- Customize the date format with `date_format` using standard Python datetime format codes
- Valid format codes include: `%Y` (year), `%m` (month), `%d` (day), `%B` (full month name), etc.
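These are standard Python `strftime` directives, so a candidate `date_format` string can be sanity-checked with the standard library alone before wiring it into an agent (an illustrative check, not part of the CrewAI API):

```python
from datetime import datetime

# Try a few candidate `date_format` strings against a fixed date
d = datetime(2025, 7, 3)
print(d.strftime("%Y-%m-%d"))   # 2025-07-03
print(d.strftime("%B %d, %Y"))  # July 03, 2025
```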
@@ -664,22 +709,26 @@ The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler e
- Enable `reasoning: true` for complex tasks that benefit from upfront planning and reflection

### Model Compatibility

- Set `use_system_prompt: false` for older models that don't support system messages
- Ensure your chosen `llm` supports the features you need (like function calling)

## Troubleshooting Common Issues

1. **Rate Limiting**: If you're hitting API rate limits:
   - Implement appropriate `max_rpm`
   - Use caching for repetitive operations
   - Consider batching requests
2. **Context Window Errors**: If you're exceeding context limits:
   - Enable `respect_context_window`
   - Use more efficient prompts
   - Clear agent memory periodically
3. **Code Execution Issues**: If code execution fails:
   - Verify Docker is installed for safe mode
   - Check execution permissions
   - Review code sandbox settings


@@ -5,7 +5,12 @@ icon: terminal
mode: "wide"
---

<Warning>
  Since release 0.140.0, CrewAI AMP started a process of migrating their login
  provider. As such, the authentication flow via CLI was updated. Users that use
  Google to log in, or that created their account after July 3rd, 2025, will be
  unable to log in with older versions of the `crewai` library.
</Warning>
## Overview
@@ -41,6 +46,7 @@ crewai create [OPTIONS] TYPE NAME
- `NAME`: Name of the crew or flow

Example:

```shell Terminal
crewai create crew my_new_crew
crewai create flow my_new_flow
```
@@ -57,6 +63,7 @@ crewai version [OPTIONS]
- `--tools`: (Optional) Show the installed version of CrewAI tools

Example:

```shell Terminal
crewai version
crewai version --tools
```
@@ -74,6 +81,7 @@ crewai train [OPTIONS]
- `-f, --filename TEXT`: Path to a custom file for training (default: "trained_agents_data.pkl")

Example:

```shell Terminal
crewai train -n 10 -f my_training_data.pkl
```
@@ -89,6 +97,7 @@ crewai replay [OPTIONS]
- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks

Example:

```shell Terminal
crewai replay -t task_123456
```
@@ -118,6 +127,7 @@ crewai reset-memories [OPTIONS]
- `-a, --all`: Reset ALL memories

Example:

```shell Terminal
crewai reset-memories --long --short
crewai reset-memories --all
```
@@ -135,6 +145,7 @@ crewai test [OPTIONS]
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")

Example:

```shell Terminal
crewai test -n 5 -m gpt-3.5-turbo
```
@@ -148,12 +159,16 @@ crewai run
```

<Note>
  Starting from version 0.103.0, the `crewai run` command can be used to run
  both standard crews and flows. For flows, it automatically detects the type
  from pyproject.toml and runs the appropriate command. This is now the
  recommended way to run both crews and flows.
</Note>

<Note>
  Make sure to run these commands from the directory where your CrewAI project
  is set up. Some commands may require additional configuration or setup within
  your project structure.
</Note>
### 9. Chat
@@ -165,6 +180,7 @@ After receiving the results, you can continue interacting with the assistant for
```shell Terminal
crewai chat
```

<Note>
  Ensure you execute these commands from your CrewAI project's root directory.
</Note>
@@ -182,6 +198,7 @@ def crew(self) -> Crew:
        chat_llm="gpt-4o",  # LLM for chat orchestration
    )
```
</Note>
### 10. Deploy
@@ -189,17 +206,18 @@ def crew(self) -> Crew:
Deploy the crew or flow to [CrewAI AMP](https://app.crewai.com).

- **Authentication**: You need to be authenticated to deploy to CrewAI AMP.
  You can log in or create an account with:

  ```shell Terminal
  crewai login
  ```

- **Create a deployment**: Once you are authenticated, you can create a deployment for your crew or flow from the root of your local project.

  ```shell Terminal
  crewai deploy create
  ```

  - Reads your local project configuration.
  - Prompts you to confirm the environment variables (like `OPENAI_API_KEY`, `SERPER_API_KEY`) found locally. These will be securely stored with the deployment on the Enterprise platform. Ensure your sensitive keys are correctly configured locally (e.g., in a `.env` file) before running this.
### 11. Organization Management
@@ -212,63 +230,78 @@ crewai org [COMMAND] [OPTIONS]
#### Commands:

- `list`: List all organizations you belong to
  ```shell Terminal
  crewai org list
  ```
- `current`: Display your currently active organization
  ```shell Terminal
  crewai org current
  ```
- `switch`: Switch to a specific organization
  ```shell Terminal
  crewai org switch <organization_id>
  ```

<Note>
  You must be authenticated to CrewAI AMP to use these organization management
  commands.
</Note>
- **Create a deployment** (continued):
  - Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI AMP.

  ```shell Terminal
  crewai deploy push
  ```

  - Initiates the deployment process on the CrewAI AMP platform.
  - Upon successful initiation, it will output the "Deployment created successfully!" message along with the Deployment Name and a unique Deployment ID (UUID).
- **Deployment Status**: You can check the status of your deployment with:

  ```shell Terminal
  crewai deploy status
  ```

  This fetches the latest deployment status of your most recent deployment attempt (e.g., `Building Images for Crew`, `Deploy Enqueued`, `Online`).
- **Deployment Logs**: You can check the logs of your deployment with:

  ```shell Terminal
  crewai deploy logs
  ```

  This streams the deployment logs to your terminal.
- **List deployments**: You can list all your deployments with:

  ```shell Terminal
  crewai deploy list
  ```

  This lists all your deployments.
- **Delete a deployment**: You can delete a deployment with:

  ```shell Terminal
  crewai deploy remove
  ```

  This deletes the deployment from the CrewAI AMP platform.
- **Help Command**: You can get help with the CLI with:

  ```shell Terminal
  crewai deploy --help
  ```

  This shows the help message for the CrewAI Deploy CLI.

Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI AMP](http://app.crewai.com) using the CLI.
@@ -290,18 +323,20 @@ crewai login
```

What happens:

- A verification URL and short code are displayed in your terminal
- Your browser opens to the verification URL
- Enter/confirm the code to complete authentication

Notes:

- The OAuth2 provider and domain are configured via `crewai config` (defaults use `login.crewai.com`)
- After successful login, the CLI also attempts to authenticate to the Tool Repository automatically
- If you reset your configuration, run `crewai login` again to re-authenticate
### 12. API Keys
When running the `crewai create crew` command, the CLI will show you a list of available LLM providers to choose from, followed by model selection for your chosen provider.
Once you've selected an LLM provider and model, you will be prompted for API keys.
@@ -309,11 +344,11 @@ Once you've selected an LLM provider and model, you will be prompted for API key
Here's a list of the most popular LLM providers suggested by the CLI:

- OpenAI
- Groq
- Anthropic
- Google Gemini
- SambaNova

When you select a provider, the CLI will then show you available models for that provider and prompt you to enter your API key.
@@ -325,7 +360,7 @@ When you select a provider, the CLI will prompt you to enter the Key name and th
See the following link for each provider's key name:

- [LiteLLM Providers](https://docs.litellm.ai/docs/providers)
### 13. Configuration Management
@@ -338,16 +373,19 @@ crewai config [COMMAND] [OPTIONS]
#### Commands:

- `list`: Display all CLI configuration parameters
  ```shell Terminal
  crewai config list
  ```
- `set`: Set a CLI configuration parameter
  ```shell Terminal
  crewai config set <key> <value>
  ```
- `reset`: Reset all CLI configuration parameters to default values
  ```shell Terminal
  crewai config reset
  ```
@@ -363,49 +401,141 @@ crewai config reset
#### Examples

Display current configuration:

```shell Terminal
crewai config list
```

Example output:

| Setting             | Value                  | Description                                  |
| :------------------ | :--------------------- | :------------------------------------------- |
| enterprise_base_url | https://app.crewai.com | Base URL of the CrewAI AMP instance          |
| org_name            | Not set                | Name of the currently active organization    |
| org_uuid            | Not set                | UUID of the currently active organization    |
| oauth2_provider     | workos                 | OAuth2 provider (e.g., workos, okta, auth0)  |
| oauth2_audience     | client_01YYY           | Audience identifying the target API/resource |
| oauth2_client_id    | client_01XXX           | OAuth2 client ID issued by the provider      |
| oauth2_domain       | login.crewai.com       | Provider domain (e.g., your-org.auth0.com)   |

Set the enterprise base URL:

```shell Terminal
crewai config set enterprise_base_url https://my-enterprise.crewai.com
```

Set OAuth2 provider:

```shell Terminal
crewai config set oauth2_provider auth0
```

Set OAuth2 domain:

```shell Terminal
crewai config set oauth2_domain my-company.auth0.com
```

Reset all configuration to defaults:

```shell Terminal
crewai config reset
```

<Tip>
  After resetting configuration, re-run `crewai login` to authenticate again.
</Tip>
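To make the persistence model concrete, here is a hypothetical helper mirroring how a `crewai config set`-style command might store a value. The docs state that settings live in `~/.config/crewai/settings.json`; the function name and JSON layout below are our illustrative assumptions, not the CLI's actual code:

```python
import json
import tempfile
from pathlib import Path


def set_config(path: Path, key: str, value: str) -> dict:
    """Hypothetical sketch of persisting one setting to a JSON settings file."""
    settings = json.loads(path.read_text()) if path.exists() else {}
    settings[key] = value
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(settings, indent=2))
    return settings


# Demonstrate against a temporary file rather than the real settings path
cfg = Path(tempfile.mkdtemp()) / "settings.json"
set_config(cfg, "oauth2_provider", "auth0")
set_config(cfg, "oauth2_domain", "my-company.auth0.com")
print(json.loads(cfg.read_text())["oauth2_provider"])  # auth0
```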
### 14. Trace Management
Manage trace collection preferences for your Crew and Flow executions.
```shell Terminal
crewai traces [COMMAND]
```
#### Commands:
- `enable`: Enable trace collection for crew/flow executions
```shell Terminal
crewai traces enable
```
- `disable`: Disable trace collection for crew/flow executions
```shell Terminal
crewai traces disable
```
- `status`: Show current trace collection status
```shell Terminal
crewai traces status
```
#### How Tracing Works
Trace collection is controlled by checking three settings in priority order:
1. **Explicit flag in code** (highest priority - can enable OR disable):
```python
crew = Crew(agents=[...], tasks=[...], tracing=True) # Always enable
crew = Crew(agents=[...], tasks=[...], tracing=False) # Always disable
crew = Crew(agents=[...], tasks=[...]) # Check lower priorities (default)
```
- `tracing=True` will **always enable** tracing (overrides everything)
- `tracing=False` will **always disable** tracing (overrides everything)
- `tracing=None` or omitted will check lower priority settings
2. **Environment variable** (second priority):
```env
CREWAI_TRACING_ENABLED=true
```
- Checked only if `tracing` is not explicitly set to `True` or `False` in code
- Set to `true` or `1` to enable tracing
3. **User preference** (lowest priority):
```shell Terminal
crewai traces enable
```
- Checked only if `tracing` is not set in code and `CREWAI_TRACING_ENABLED` is not set to `true`
- Running `crewai traces enable` is sufficient to enable tracing by itself
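The three-level precedence above can be sketched as a small resolver. This is an illustrative sketch only — the function and its argument names are ours, not CrewAI's API; only the `CREWAI_TRACING_ENABLED` variable comes from the docs:

```python
import os
from typing import Optional


def tracing_enabled(tracing: Optional[bool], user_pref: bool) -> bool:
    """Resolve trace collection using the documented priority order."""
    # 1. An explicit flag in code wins outright
    if tracing is not None:
        return tracing
    # 2. Otherwise the environment variable is checked ("true" or "1" enables)
    if os.environ.get("CREWAI_TRACING_ENABLED", "").lower() in ("true", "1"):
        return True
    # 3. Finally, fall back to the preference saved by `crewai traces enable`
    return user_pref


os.environ.pop("CREWAI_TRACING_ENABLED", None)
print(tracing_enabled(False, user_pref=True))  # False: explicit flag wins
print(tracing_enabled(None, user_pref=True))   # True: user preference applies
```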
<Note>
**To enable tracing**, use any one of these methods:
- Set `tracing=True` in your Crew/Flow code, OR
- Add `CREWAI_TRACING_ENABLED=true` to your `.env` file, OR
- Run `crewai traces enable`
**To disable tracing**, use any ONE of these methods:
- Set `tracing=False` in your Crew/Flow code (overrides everything), OR
- Remove or set to `false` the `CREWAI_TRACING_ENABLED` env var, OR
- Run `crewai traces disable`
Higher priority settings override lower ones.
</Note>
<Tip>
For more information about tracing, see the [Tracing
documentation](/observability/tracing).
</Tip>
<Tip>
  CrewAI CLI handles authentication to the Tool Repository automatically when
  adding packages to your project. Just prepend `crewai` to any `uv` command
  to use it, e.g. `crewai uv add requests`. For more information, see the [Tool
  Repository](https://docs.crewai.com/enterprise/features/tool-repository) docs.
</Tip>
<Note>
  Configuration settings are stored in `~/.config/crewai/settings.json`. Some
  settings like organization name and UUID are read-only and managed through
  authentication and organization commands. Tool repository related settings are
  hidden and cannot be set directly by users.
</Note>


@@ -33,6 +33,7 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated, before each Crew iteration all Crew data is sent to an AgentPlanner that will plan the tasks, and this plan will be added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | Knowledge sources available at the crew level, accessible to all the agents. |
| **Stream** _(optional)_ | `stream` | Enable streaming output to receive real-time updates during crew execution. Returns a `CrewStreamingOutput` object that can be iterated for chunks. Defaults to `False`. |
<Tip>
  **Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
@@ -306,12 +307,27 @@ print(result)
### Different Ways to Kick Off a Crew

Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process.

#### Synchronous Methods

- `kickoff()`: Starts the execution process according to the defined process flow.
- `kickoff_for_each()`: Executes tasks sequentially for each provided input in the collection.

#### Asynchronous Methods

CrewAI offers two approaches for async execution:

| Method | Type | Description |
|--------|------|-------------|
| `akickoff()` | Native async | True async/await throughout the entire execution chain |
| `akickoff_for_each()` | Native async | Native async execution for each input in a list |
| `kickoff_async()` | Thread-based | Wraps synchronous execution in `asyncio.to_thread` |
| `kickoff_for_each_async()` | Thread-based | Thread-based async for each input in a list |

<Note>
  For high-concurrency workloads, `akickoff()` and `akickoff_for_each()` are recommended as they use native async for task execution, memory operations, and knowledge retrieval.
</Note>
```python Code
# Start the crew's task execution
@@ -324,19 +340,53 @@ results = my_crew.kickoff_for_each(inputs=inputs_array)
for result in results:
    print(result)

# Example of using native async with akickoff
inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.akickoff(inputs=inputs)
print(async_result)

# Example of using native async with akickoff_for_each
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.akickoff_for_each(inputs=inputs_array)
for async_result in async_results:
    print(async_result)

# Example of using thread-based kickoff_async
inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.kickoff_async(inputs=inputs)
print(async_result)

# Example of using thread-based kickoff_for_each_async
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
for async_result in async_results:
    print(async_result)
```
These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs. For detailed async examples, see the [Kickoff Crew Asynchronously](/en/learn/kickoff-async) guide.
### Streaming Crew Execution
For real-time visibility into crew execution, you can enable streaming to receive output as it's generated:
```python Code
# Enable streaming
crew = Crew(
agents=[researcher],
tasks=[task],
stream=True
)
# Iterate over streaming output
streaming = crew.kickoff(inputs={"topic": "AI"})
for chunk in streaming:
print(chunk.content, end="", flush=True)
# Access final result
result = streaming.result
```
Learn more about streaming in the [Streaming Crew Execution](/en/learn/streaming-crew-execution) guide.
### Replaying from a Specific Task


@@ -1,6 +1,6 @@
---
title: "Event Listeners"
description: "Tap into CrewAI events to build custom integrations and monitoring"
icon: spinner
mode: "wide"
---
@@ -25,6 +25,7 @@ CrewAI AMP provides a built-in Prompt Tracing feature that leverages the event s
![Prompt Tracing Dashboard](/images/enterprise/traces-overview.png)

With Prompt Tracing you can:

- View the complete history of all prompts sent to your LLM
- Track token usage and costs
- Debug agent reasoning failures
@@ -274,7 +275,6 @@ The structure of the event object depends on the event type, but all events inhe
Additional fields vary by event type. For example, `CrewKickoffCompletedEvent` includes `crew_name` and `output` fields.

## Advanced Usage: Scoped Handlers

For temporary event handling (useful for testing or specific operations), you can use the `scoped_handlers` context manager:


@@ -572,6 +572,55 @@ The `third_method` and `fourth_method` listen to the output of the `second_metho
When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.
### Human in the Loop (human feedback)
The `@human_feedback` decorator enables human-in-the-loop workflows by pausing flow execution to collect feedback from a human. This is useful for approval gates, quality review, and decision points that require human judgment.
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult
class ReviewFlow(Flow):
@start()
@human_feedback(
message="Do you approve this content?",
emit=["approved", "rejected", "needs_revision"],
llm="gpt-4o-mini",
default_outcome="needs_revision",
)
def generate_content(self):
return "Content to be reviewed..."
@listen("approved")
def on_approval(self, result: HumanFeedbackResult):
print(f"Approved! Feedback: {result.feedback}")
@listen("rejected")
def on_rejection(self, result: HumanFeedbackResult):
print(f"Rejected. Reason: {result.feedback}")
```
When `emit` is specified, the human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes, which then triggers the corresponding `@listen` decorator.
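The collapsing step can be pictured with a toy stand-in for the LLM call. The keyword matcher below is our illustration only — the real routing uses the configured `llm`, and this function is not part of CrewAI's API:

```python
def collapse_feedback(feedback: str, emit: list[str], default_outcome: str) -> str:
    """Toy stand-in for the LLM: map free-form feedback to one emit outcome."""
    text = feedback.lower()
    for outcome in emit:
        # Match the outcome token or its space-separated form in the feedback
        if outcome in text or outcome.replace("_", " ") in text:
            return outcome
    # No match: fall back, mirroring the `default_outcome` parameter
    return default_outcome


outcomes = ["approved", "rejected", "needs_revision"]
print(collapse_feedback("Looks great, approved!", outcomes, "needs_revision"))       # approved
print(collapse_feedback("Hmm, not sure about the tone", outcomes, "needs_revision"))  # needs_revision
```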
You can also use `@human_feedback` without routing to simply collect feedback:
```python Code
@start()
@human_feedback(message="Any comments on this output?")
def my_method(self):
return "Output for review"
@listen(my_method)
def next_step(self, result: HumanFeedbackResult):
# Access feedback via result.feedback
# Access original output via result.output
pass
```
Access all feedback collected during a flow via `self.last_human_feedback` (most recent) or `self.human_feedback_history` (all feedback as a list).
For a complete guide on human feedback in flows, including **async/non-blocking feedback** with custom providers (Slack, webhooks, etc.), see [Human Feedback in Flows](/en/learn/human-feedback-in-flows).
## Adding Agents to Flows
Agents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use an Agent within a flow to perform market research:
@@ -897,6 +946,31 @@ flow = ExampleFlow()
result = flow.kickoff()
```
### Streaming Flow Execution
For real-time visibility into flow execution, you can enable streaming to receive output as it's generated:
```python
class StreamingFlow(Flow):
    stream = True  # Enable streaming

    @start()
    def research(self):
        # Your flow implementation
        pass

# Iterate over streaming output
flow = StreamingFlow()
streaming = flow.kickoff()
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# Access final result
result = streaming.result
```
Learn more about streaming in the [Streaming Flow Execution](/en/learn/streaming-flow-execution) guide.
### Using the CLI

Starting from version 0.103.0, you can run flows using the `crewai run` command:
@@ -388,8 +388,8 @@ crew = Crew(
    agents=[sales_agent, tech_agent, support_agent],
    tasks=[...],
    embedder={  # Fallback embedder for agents without their own
        "provider": "google-generativeai",
        "config": {"model_name": "gemini-embedding-001"}
    }
)
@@ -629,9 +629,9 @@ agent = Agent(
    backstory="Expert researcher",
    knowledge_sources=[knowledge_source],
    embedder={
        "provider": "google-generativeai",
        "config": {
            "model_name": "gemini-embedding-001",
            "api_key": "your-google-key"
        }
    }
@@ -739,7 +739,7 @@ class KnowledgeMonitorListener(BaseEventListener):
knowledge_monitor = KnowledgeMonitorListener()
```

For more information on using events, see the [Event Listeners](/en/concepts/event-listener) documentation.

### Custom Knowledge Sources
@@ -283,11 +283,54 @@ In this section, you'll find detailed examples that help you select, configure,
)
```
**Extended Thinking (Claude Sonnet 4 and Beyond):**
CrewAI supports Anthropic's Extended Thinking feature, which allows Claude to think through problems in a more human-like way before responding. This is particularly useful for complex reasoning, analysis, and problem-solving tasks.
```python Code
from crewai import LLM

# Enable extended thinking with default settings
llm = LLM(
    model="anthropic/claude-sonnet-4",
    thinking={"type": "enabled"},
    max_tokens=10000
)

# Configure thinking with budget control
llm = LLM(
    model="anthropic/claude-sonnet-4",
    thinking={
        "type": "enabled",
        "budget_tokens": 5000  # Limit thinking tokens
    },
    max_tokens=10000
)
```
**Thinking Configuration Options:**
- `type`: Set to `"enabled"` to activate extended thinking mode
- `budget_tokens` (optional): Maximum tokens to use for thinking (helps control costs)
**Models Supporting Extended Thinking:**
- `claude-sonnet-4` and newer models
- `claude-3-7-sonnet` (with extended thinking capabilities)
**When to Use Extended Thinking:**
- Complex reasoning and multi-step problem solving
- Mathematical calculations and proofs
- Code analysis and debugging
- Strategic planning and decision making
- Research and analytical tasks
**Note:** Extended thinking consumes additional tokens but can significantly improve response quality for complex tasks.
**Supported Environment Variables:**

- `ANTHROPIC_API_KEY`: Your Anthropic API key (required)

**Features:**

- Native tool use support for Claude 3+ models
- Extended Thinking support for Claude Sonnet 4+
- Streaming support for real-time responses
- Automatic system message handling
- Stop sequences for controlled output
@@ -305,6 +348,7 @@ In this section, you'll find detailed examples that help you select, configure,
| Model | Context Window | Best For |
|------------------------------|----------------|-----------------------------------------------|
| claude-sonnet-4 | 200,000 tokens | Latest with extended thinking capabilities |
| claude-3-7-sonnet | 200,000 tokens | Advanced reasoning and agentic tasks |
| claude-3-5-sonnet-20241022 | 200,000 tokens | Latest Sonnet with best performance |
| claude-3-5-haiku | 200,000 tokens | Fast, compact model for quick responses |
@@ -1035,7 +1079,7 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
```

<Tip>
  [Click here](/en/concepts/event-listener#event-listeners) for more details
</Tip>
</Tab>
@@ -1089,6 +1133,50 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
</Tab>
</Tabs>
## Async LLM Calls
CrewAI supports asynchronous LLM calls for improved performance and concurrency in your AI workflows. Async calls allow you to run multiple LLM requests concurrently without blocking, making them ideal for high-throughput applications and parallel agent operations.
<Tabs>
<Tab title="Basic Usage">
Use the `acall` method for asynchronous LLM requests:
```python
import asyncio
from crewai import LLM
async def main():
    llm = LLM(model="openai/gpt-4o")

    # Single async call
    response = await llm.acall("What is the capital of France?")
    print(response)

asyncio.run(main())
```
The `acall` method supports all the same parameters as the synchronous `call` method, including messages, tools, and callbacks.
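Fanning out multiple requests with `asyncio.gather` is where async calls pay off. The concurrency pattern itself is plain asyncio; the sketch below uses a stand-in coroutine in place of a real `llm.acall` so it runs without API access:

```python
import asyncio

async def fake_acall(prompt: str) -> str:
    """Stand-in for llm.acall: pretend each request takes 0.1s."""
    await asyncio.sleep(0.1)
    return f"answer to: {prompt}"

async def main() -> list[str]:
    prompts = ["What is 2+2?", "Capital of France?", "Name a prime number"]
    # All three "calls" run concurrently instead of back-to-back,
    # so total wall time is ~0.1s rather than ~0.3s
    return await asyncio.gather(*(fake_acall(p) for p in prompts))

results = asyncio.run(main())
print(results)
```

With a real `LLM` instance, replace `fake_acall(p)` with `llm.acall(p)`; `gather` preserves the order of the prompts in the returned list.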
</Tab>
<Tab title="With Streaming">
Combine async calls with streaming for real-time concurrent responses:
```python
import asyncio
from crewai import LLM
async def stream_async():
    llm = LLM(model="openai/gpt-4o", stream=True)
    response = await llm.acall("Write a short story about AI")
    print(response)

asyncio.run(stream_async())
```
</Tab>
</Tabs>
## Structured LLM Calls

CrewAI supports structured responses from LLM calls by allowing you to define a `response_format` using a Pydantic model. This enables the framework to automatically parse and validate the output, making it easier to integrate the response into your application without manual post-processing.
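Under the hood this is ordinary Pydantic validation: the model's JSON text is parsed into the `response_format` class. A sketch of just the validation step, in isolation — the model class and JSON payload here are illustrative, not CrewAI internals:

```python
from pydantic import BaseModel

class CityInfo(BaseModel):
    name: str
    population: int

# Raw JSON text as an LLM might return it
raw = '{"name": "Paris", "population": 2102650}'

# Parse and validate in one step; raises ValidationError on malformed
# or type-mismatched data instead of silently passing bad values along
info = CityInfo.model_validate_json(raw)
print(info.name, info.population)
```

Because validation fails loudly, downstream code can rely on `info` having the declared fields and types.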
@@ -1200,6 +1288,52 @@ Learn how to get the most out of your LLM configuration:
)
```
</Accordion>
<Accordion title="Transport Interceptors">
CrewAI provides message interceptors for several providers, allowing you to hook into request/response cycles at the transport layer.
**Supported Providers:**
- ✅ OpenAI
- ✅ Anthropic
**Basic Usage:**
```python
import httpx
from crewai import LLM
from crewai.llms.hooks import BaseInterceptor
class CustomInterceptor(BaseInterceptor[httpx.Request, httpx.Response]):
    """Custom interceptor to modify requests and responses."""

    def on_outbound(self, request: httpx.Request) -> httpx.Request:
        """Print request before sending to the LLM provider."""
        print(request)
        return request

    def on_inbound(self, response: httpx.Response) -> httpx.Response:
        """Process response after receiving from the LLM provider."""
        print(f"Status: {response.status_code}")
        print(f"Response time: {response.elapsed}")
        return response

# Use the interceptor with an LLM
llm = LLM(
    model="openai/gpt-4o",
    interceptor=CustomInterceptor()
)
```
**Important Notes:**

- Both methods must return an object of the same type as the one they received.
- Mutating the received objects may cause unexpected behavior or application crashes.
- Not all providers support interceptors; check the supported providers list above.
<Info>
Interceptors operate at the transport layer. This is particularly useful for:
- Message transformation and filtering
- Debugging API interactions
</Info>
</Accordion>
</AccordionGroup>

## Common Issues and Solutions
@@ -341,7 +341,7 @@ crew = Crew(
    embedder={
        "provider": "openai",
        "config": {
            "model_name": "text-embedding-3-small"  # or "text-embedding-3-large"
        }
    }
)
@@ -353,7 +353,7 @@ crew = Crew(
        "provider": "openai",
        "config": {
            "api_key": "your-openai-api-key",  # Optional: override env var
            "model_name": "text-embedding-3-large",
            "dimensions": 1536,  # Optional: reduce dimensions for smaller storage
            "organization_id": "your-org-id"  # Optional: for organization accounts
        }
@@ -375,7 +375,7 @@ crew = Crew(
            "api_base": "https://your-resource.openai.azure.com/",
            "api_type": "azure",
            "api_version": "2023-05-15",
            "model_name": "text-embedding-3-small",
            "deployment_id": "your-deployment-name"  # Azure deployment name
        }
    }
@@ -390,10 +390,10 @@ Use Google's text embedding models for integration with Google Cloud services.
crew = Crew(
    memory=True,
    embedder={
        "provider": "google-generativeai",
        "config": {
            "api_key": "your-google-api-key",
            "model_name": "gemini-embedding-001"  # or "text-embedding-005", "text-multilingual-embedding-002"
        }
    }
)
@@ -461,7 +461,7 @@ crew = Crew(
        "provider": "cohere",
        "config": {
            "api_key": "your-cohere-api-key",
            "model_name": "embed-english-v3.0"  # or "embed-multilingual-v3.0"
        }
    }
)
@@ -478,7 +478,7 @@ crew = Crew(
        "provider": "voyageai",
        "config": {
            "api_key": "your-voyage-api-key",
            "model": "voyage-3",  # or "voyage-3-lite", "voyage-code-3"
            "input_type": "document"  # or "query"
        }
    }
@@ -515,8 +515,7 @@ crew = Crew(
        "provider": "huggingface",
        "config": {
            "api_key": "your-hf-token",  # Optional for public models
            "model": "sentence-transformers/all-MiniLM-L6-v2"
        }
    }
)
@@ -912,10 +911,10 @@ crew = Crew(
crew = Crew(
    memory=True,
    embedder={
        "provider": "google-generativeai",
        "config": {
            "api_key": "your-api-key",
            "model_name": "gemini-embedding-001"
        }
    }
)
@@ -0,0 +1,154 @@
---
title: Production Architecture
description: Best practices for building production-ready AI applications with CrewAI
icon: server
mode: "wide"
---
# The Flow-First Mindset
When building production AI applications with CrewAI, **we recommend starting with a Flow**.
While it's possible to run individual Crews or Agents, wrapping them in a Flow provides the necessary structure for a robust, scalable application.
## Why Flows?
1. **State Management**: Flows provide a built-in way to manage state across different steps of your application. This is crucial for passing data between Crews, maintaining context, and handling user inputs.
2. **Control**: Flows allow you to define precise execution paths, including loops, conditionals, and branching logic. This is essential for handling edge cases and ensuring your application behaves predictably.
3. **Observability**: Flows provide a clear structure that makes it easier to trace execution, debug issues, and monitor performance. We recommend using [CrewAI Tracing](/en/observability/tracing) for detailed insights. Simply run `crewai login` to enable free observability features.
## The Architecture
A typical production CrewAI application looks like this:
```mermaid
graph TD
Start((Start)) --> Flow[Flow Orchestrator]
Flow --> State{State Management}
State --> Step1[Step 1: Data Gathering]
Step1 --> Crew1[Research Crew]
Crew1 --> State
State --> Step2{Condition Check}
Step2 -- "Valid" --> Step3[Step 3: Execution]
Step3 --> Crew2[Action Crew]
Step2 -- "Invalid" --> End((End))
Crew2 --> End
```
### 1. The Flow Class
Your `Flow` class is the entry point. It defines the state schema and the methods that execute your logic.
```python
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
class AppState(BaseModel):
    user_input: str = ""
    research_results: str = ""
    final_report: str = ""

class ProductionFlow(Flow[AppState]):
    @start()
    def gather_input(self):
        # ... logic to get input ...
        pass

    @listen(gather_input)
    def run_research_crew(self):
        # ... trigger a Crew ...
        pass
```
### 2. State Management
Use Pydantic models to define your state. This ensures type safety and makes it clear what data is available at each step.
- **Keep it minimal**: Store only what you need to persist between steps.
- **Use structured data**: Avoid unstructured dictionaries when possible.
### 3. Crews as Units of Work
Delegate complex tasks to Crews. A Crew should be focused on a specific goal (e.g., "Research a topic", "Write a blog post").
- **Don't over-engineer Crews**: Keep them focused.
- **Pass state explicitly**: Pass the necessary data from the Flow state to the Crew inputs.
```python
@listen(gather_input)
def run_research_crew(self):
    crew = ResearchCrew()
    result = crew.kickoff(inputs={"topic": self.state.user_input})
    self.state.research_results = result.raw
```
## Control Primitives
Leverage CrewAI's control primitives to make your Crews more robust and predictable.
### 1. Task Guardrails
Use [Task Guardrails](/en/concepts/tasks#task-guardrails) to validate task outputs before they are accepted. This ensures that your agents produce high-quality results.
```python
from typing import Any, Tuple

from crewai import Task, TaskOutput

def validate_content(result: TaskOutput) -> Tuple[bool, Any]:
    if len(result.raw) < 100:
        return (False, "Content is too short. Please expand.")
    return (True, result.raw)

task = Task(
    ...,
    guardrail=validate_content
)
```
### 2. Structured Outputs
Always use structured outputs (`output_pydantic` or `output_json`) when passing data between tasks or to your application. This prevents parsing errors and ensures type safety.
```python
from typing import List

from pydantic import BaseModel

class ResearchResult(BaseModel):
    summary: str
    sources: List[str]

task = Task(
    ...,
    output_pydantic=ResearchResult
)
```
### 3. LLM Hooks
Use [LLM Hooks](/en/learn/llm-hooks) to inspect or modify messages before they are sent to the LLM, or to sanitize responses.
```python
@before_llm_call
def log_request(context):
    print(f"Agent {context.agent.role} is calling the LLM...")
```
## Deployment Patterns
When deploying your Flow, consider the following:
### CrewAI Enterprise
The easiest way to deploy your Flow is using CrewAI Enterprise. It handles the infrastructure, authentication, and monitoring for you.
Check out the [Deployment Guide](/en/enterprise/guides/deploy-crew) to get started.
```bash
crewai deploy create
```
### Async Execution
For long-running tasks, use `kickoff_async` to avoid blocking your API.
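The non-blocking behavior is standard asyncio: schedule the long-running work as a task and keep serving other requests while it runs. A sketch of the pattern with a stand-in coroutine in place of a real `kickoff_async` call (the names here are illustrative):

```python
import asyncio

async def fake_kickoff_async() -> str:
    """Stand-in for a long-running flow kickoff."""
    await asyncio.sleep(0.2)
    return "flow finished"

async def handle_request() -> str:
    # Schedule the flow without blocking this coroutine
    task = asyncio.create_task(fake_kickoff_async())

    # The API handler stays responsive while the flow runs
    quick_reply = "request accepted"

    # Later (or in another handler), await the result
    result = await task
    return f"{quick_reply}; {result}"

print(asyncio.run(handle_request()))
```

In a real service, the handler would typically return `quick_reply` immediately and surface `result` later via polling or a webhook.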
### Persistence
Use the `@persist` decorator to save the state of your Flow to a database. This allows you to resume execution if the process crashes or if you need to wait for human input.
```python
@persist
class ProductionFlow(Flow[AppState]):
    # ...
```
## Summary
- **Start with a Flow.**
- **Define a clear State.**
- **Use Crews for complex tasks.**
- **Deploy with an API and persistence.**
@@ -19,6 +19,7 @@ CrewAI AMP includes a Visual Task Builder in Crew Studio that simplifies complex
![Task Builder Screenshot](/images/enterprise/crew-studio-interface.png)

The Visual Task Builder enables:

- Drag-and-drop task creation
- Visual task dependencies and flow
- Real-time testing and validation
@@ -28,10 +29,12 @@ The Visual Task Builder enables:
### Task Execution Flow

Tasks can be executed in two ways:

- **Sequential**: Tasks are executed in the order they are defined
- **Hierarchical**: Tasks are assigned to agents based on their roles and expertise

The execution flow is defined when creating the crew:

```python Code
crew = Crew(
    agents=[agent1, agent2],
@@ -42,29 +45,31 @@ crew = Crew(
## Task Attributes
| Attribute | Parameters | Type | Description |
| :------------------------------------- | :---------------------- | :-------------------------------------- | :-------------------------------------------------------------------------------------------------------------- |
| **Description** | `description` | `str` | A clear, concise statement of what the task entails. |
| **Expected Output** | `expected_output` | `str` | A detailed description of what the task's completion looks like. |
| **Name** _(optional)_ | `name` | `Optional[str]` | A name identifier for the task. |
| **Agent** _(optional)_ | `agent` | `Optional[BaseAgent]` | The agent responsible for executing the task. |
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | The tools/resources the agent is limited to use for this task. |
| **Context** _(optional)_ | `context` | `Optional[List["Task"]]` | Other tasks whose outputs will be used as context for this task. |
| **Async Execution** _(optional)_ | `async_execution` | `Optional[bool]` | Whether the task should be executed asynchronously. Defaults to False. |
| **Human Input** _(optional)_ | `human_input` | `Optional[bool]` | Whether the task should have a human review the final answer of the agent. Defaults to False. |
| **Markdown** _(optional)_ | `markdown` | `Optional[bool]` | Whether the task should instruct the agent to return the final answer formatted in Markdown. Defaults to False. |
| **Config** _(optional)_ | `config` | `Optional[Dict[str, Any]]` | Task-specific configuration parameters. |
| **Output File** _(optional)_ | `output_file` | `Optional[str]` | File path for storing the task output. |
| **Create Directory** _(optional)_ | `create_directory` | `Optional[bool]` | Whether to create the directory for output_file if it doesn't exist. Defaults to True. |
| **Output JSON** _(optional)_ | `output_json` | `Optional[Type[BaseModel]]` | A Pydantic model to structure the JSON output. |
| **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | A Pydantic model for task output. |
| **Callback** _(optional)_ | `callback` | `Optional[Any]` | Function/object to be executed after task completion. |
| **Guardrail** _(optional)_ | `guardrail` | `Optional[Callable]` | Function to validate task output before proceeding to next task. |
| **Guardrails** _(optional)_ | `guardrails` | `Optional[List[Callable] \| List[str]]` | List of guardrails to validate task output before proceeding to next task. |
| **Guardrail Max Retries** _(optional)_ | `guardrail_max_retries` | `Optional[int]` | Maximum number of retries when guardrail validation fails. Defaults to 3. |
<Note type="warning" title="Deprecated: max_retries">
  The task attribute `max_retries` is deprecated and will be removed in v1.0.0.
  Use `guardrail_max_retries` instead to control retry attempts when a guardrail fails.
</Note>
## Creating Tasks
@@ -86,7 +91,7 @@ crew.kickoff(inputs={'topic': 'AI Agents'})
Here's an example of how to configure tasks using YAML:

````yaml tasks.yaml
research_task:
  description: >
    Conduct a thorough research about {topic}
@@ -106,7 +111,7 @@ reporting_task:
  agent: reporting_analyst
  markdown: true
  output_file: report.md
````

To use this YAML configuration in your code, create a crew class that inherits from `CrewBase`:
@@ -164,7 +169,8 @@ class LatestAiDevelopmentCrew():
```
<Note>
  The names you use in your YAML files (`agents.yaml` and `tasks.yaml`) should match the method names in your Python code.
</Note>
### Direct Code Definition (Alternative)
@@ -201,7 +207,8 @@ reporting_task = Task(
```

<Tip>
  Directly specify an `agent` for assignment, or let CrewAI's `hierarchical` process decide based on roles, availability, etc.
</Tip>
## Task Output
@@ -223,6 +230,7 @@ By default, the `TaskOutput` will only include the `raw` output. A `TaskOutput`
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the task. |
| **Agent** | `agent` | `str` | The agent that executed the task. |
| **Output Format** | `output_format` | `OutputFormat` | The format of the task output, with options including RAW, JSON, and Pydantic. The default is RAW. |
| **Messages** | `messages` | `list[LLMMessage]` | The messages from the last task execution. |

### Task Methods and Properties
@@ -285,12 +293,13 @@ formatted_task = Task(
```

When `markdown=True`, the agent will receive additional instructions to format the output using:

- `#` for headers
- `**text**` for bold text
- `*text*` for italic text
- `-` or `*` for bullet points
- `` `code` `` for inline code
- ```` ```language ``` ```` for code blocks
### YAML Configuration with Markdown
@@ -301,7 +310,7 @@ analysis_task:
  expected_output: >
    A comprehensive analysis with charts and key findings
  agent: analyst
  markdown: true # Enable markdown formatting
  output_file: analysis.md
```
@@ -313,7 +322,9 @@ analysis_task:
- **Cross-Platform Compatibility**: Markdown is universally supported
<Note>
  The markdown formatting instructions are automatically added to the task prompt when `markdown=True`, so you don't need to specify formatting requirements in your task description.
</Note>
## Task Dependencies and Context
@@ -341,7 +352,11 @@ Task guardrails provide a way to validate and transform task outputs before they
are passed to the next task. This feature helps ensure data quality and provides
feedback to agents when their output doesn't meet specific criteria.

CrewAI supports two types of guardrails:

1. **Function-based guardrails**: Python functions with custom validation logic, giving you complete control over the validation process and ensuring reliable, deterministic results.
2. **LLM-based guardrails**: String descriptions that use the agent's LLM to validate outputs based on natural language criteria. These are ideal for complex or subjective validation requirements.
### Function-Based Guardrails
@@ -355,12 +370,12 @@ def validate_blog_content(result: TaskOutput) -> Tuple[bool, Any]:
    """Validate blog content meets requirements."""
    try:
        # Check word count
        word_count = len(result.raw.split())
        if word_count > 200:
            return (False, "Blog content exceeds 200 words")

        # Additional validation logic here
        return (True, result.raw.strip())
    except Exception:
        return (False, "Unexpected error during validation")
@@ -372,9 +387,156 @@ blog_task = Task(
)
```
### LLM-Based Guardrails (String Descriptions)
Instead of writing custom validation functions, you can use string descriptions that leverage LLM-based validation. When you provide a string to the `guardrail` or `guardrails` parameter, CrewAI automatically creates an `LLMGuardrail` that uses the agent's LLM to validate the output based on your description.
**Requirements**:
- The task must have an `agent` assigned (the guardrail uses the agent's LLM)
- Provide a clear, descriptive string explaining the validation criteria
```python Code
from crewai import Task
# Single LLM-based guardrail
blog_task = Task(
    description="Write a blog post about AI",
    expected_output="A blog post under 200 words",
    agent=blog_agent,
    guardrail="The blog post must be under 200 words and contain no technical jargon"
)
```
LLM-based guardrails are particularly useful for:
- **Complex validation logic** that's difficult to express programmatically
- **Subjective criteria** like tone, style, or quality assessments
- **Natural language requirements** that are easier to describe than code
The LLM guardrail will:
1. Analyze the task output against your description
2. Return `(True, output)` if the output complies with the criteria
3. Return `(False, feedback)` with specific feedback if validation fails
**Example with detailed validation criteria**:
```python Code
research_task = Task(
    description="Research the latest developments in quantum computing",
    expected_output="A comprehensive research report",
    agent=researcher_agent,
    guardrail="""
    The research report must:
    - Be at least 1000 words long
    - Include at least 5 credible sources
    - Cover both technical and practical applications
    - Be written in a professional, academic tone
    - Avoid speculation or unverified claims
    """
)
```
### Multiple Guardrails
You can apply multiple guardrails to a task using the `guardrails` parameter. Multiple guardrails are executed sequentially, with each guardrail receiving the output from the previous one. This allows you to chain validation and transformation steps.
The `guardrails` parameter accepts:
- A list of guardrail functions or string descriptions
- A single guardrail function or string (same as `guardrail`)
**Note**: If `guardrails` is provided, it takes precedence over `guardrail`. The `guardrail` parameter will be ignored when `guardrails` is set.
```python Code
from typing import Tuple, Any
from crewai import TaskOutput, Task

def validate_word_count(result: TaskOutput) -> Tuple[bool, Any]:
    """Validate word count is within limits."""
    word_count = len(result.raw.split())
    if word_count < 100:
        return (False, f"Content too short: {word_count} words. Need at least 100 words.")
    if word_count > 500:
        return (False, f"Content too long: {word_count} words. Maximum is 500 words.")
    return (True, result.raw)

def validate_no_profanity(result: TaskOutput) -> Tuple[bool, Any]:
    """Check for inappropriate language."""
    profanity_words = ["badword1", "badword2"]  # Example list
    content_lower = result.raw.lower()
    for word in profanity_words:
        if word in content_lower:
            return (False, f"Inappropriate language detected: {word}")
    return (True, result.raw)

def format_output(result: TaskOutput) -> Tuple[bool, Any]:
    """Format and clean the output."""
    formatted = result.raw.strip()
    # Capitalize the first letter
    formatted = formatted[0].upper() + formatted[1:] if formatted else formatted
    return (True, formatted)

# Apply multiple guardrails sequentially
blog_task = Task(
    description="Write a blog post about AI",
    expected_output="A well-formatted blog post between 100-500 words",
    agent=blog_agent,
    guardrails=[
        validate_word_count,    # First: validate length
        validate_no_profanity,  # Second: check content
        format_output           # Third: format the result
    ],
    guardrail_max_retries=3
)
```
In this example, the guardrails execute in order:
1. `validate_word_count` checks the word count
2. `validate_no_profanity` checks for inappropriate language (using the output from step 1)
3. `format_output` formats the final result (using the output from step 2)
If any guardrail fails, the error is sent back to the agent, and the task is retried up to `guardrail_max_retries` times.
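The sequential semantics described above can be sketched in plain Python (illustrative only; `FakeOutput` and `run_chain` are hypothetical, not CrewAI internals):

```python
from dataclasses import dataclass

@dataclass
class FakeOutput:
    """Hypothetical stand-in for CrewAI's TaskOutput, for illustration."""
    raw: str

def run_chain(output, guardrails):
    """Each guardrail sees the previous guardrail's (possibly transformed)
    output; the first failure short-circuits and its feedback is returned."""
    current = output
    for guard in guardrails:
        ok, value = guard(current)
        if not ok:
            return (False, value)  # feedback goes back to the agent
        current = FakeOutput(raw=value)
    return (True, current.raw)

def min_length(result):
    # Validation step: require at least two words
    if len(result.raw.split()) < 2:
        return (False, "Too short")
    return (True, result.raw)

def capitalize(result):
    # Transformation step: capitalize the first letter
    return (True, result.raw.capitalize())

ok, value = run_chain(FakeOutput(raw="hello world"), [min_length, capitalize])
```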
**Mixing function-based and LLM-based guardrails**:
You can combine both function-based and string-based guardrails in the same list:
```python Code
from typing import Tuple, Any
from crewai import TaskOutput, Task

def validate_word_count(result: TaskOutput) -> Tuple[bool, Any]:
    """Validate word count is within limits."""
    word_count = len(result.raw.split())
    if word_count < 100:
        return (False, f"Content too short: {word_count} words. Need at least 100 words.")
    if word_count > 500:
        return (False, f"Content too long: {word_count} words. Maximum is 500 words.")
    return (True, result.raw)

# Mix function-based and LLM-based guardrails
blog_task = Task(
    description="Write a blog post about AI",
    expected_output="A well-formatted blog post between 100-500 words",
    agent=blog_agent,
    guardrails=[
        validate_word_count,  # Function-based: precise word count check
        "The content must be engaging and suitable for a general audience",  # LLM-based: subjective quality check
        "The writing style should be clear, concise, and free of technical jargon"  # LLM-based: style validation
    ],
    guardrail_max_retries=3
)
```
This approach combines the precision of programmatic validation with the flexibility of LLM-based assessment for subjective criteria.
### Guardrail Function Requirements
1. **Function Signature**:
   - Must accept exactly one parameter (the task output)
   - Should return a tuple of `(bool, Any)`
   - Type hints are recommended but optional
@@ -383,11 +545,10 @@ blog_task = Task(
   - On success: it returns a tuple of `(bool, Any)`. For example: `(True, validated_result)`
   - On failure: it returns a tuple of `(bool, str)`. For example: `(False, "Error message explaining the failure")`
### Error Handling Best Practices
1. **Structured Error Responses**:
```python Code
from crewai import TaskOutput, LLMGuardrail
@@ -403,11 +564,13 @@ def validate_with_context(result: TaskOutput) -> Tuple[bool, Any]:
```
2. **Error Categories**:
   - Use specific error codes
   - Include relevant context
   - Provide actionable feedback
3. **Validation Chain**:
```python Code
from typing import Any, Dict, List, Tuple, Union
from crewai import TaskOutput
@@ -434,6 +597,7 @@ def complex_validation(result: TaskOutput) -> Tuple[bool, Any]:
### Handling Guardrail Results
When a guardrail returns `(False, error)`:
1. The error is sent back to the agent
2. The agent attempts to fix the issue
3. The process repeats until:
@@ -441,6 +605,7 @@ When a guardrail returns `(False, error)`:
   - Maximum retries are reached (`guardrail_max_retries`)
Example with retry handling:
```python Code
from typing import Optional, Tuple, Union
from crewai import TaskOutput, Task
@@ -466,10 +631,12 @@ task = Task(
## Getting Structured Consistent Outputs from Tasks
<Note>
It's also important to note that the output of the final task of a crew becomes the final output of the actual crew itself.
</Note>
### Using `output_pydantic`
The `output_pydantic` property allows you to define a Pydantic model that the task output should conform to. This ensures that the output is not only structured but also validated according to the Pydantic model.
Here's an example demonstrating how to use output_pydantic:
@@ -539,18 +706,22 @@ print("Accessing Properties - Option 5")
print("Blog:", result)
```
In this example:
- A Pydantic model Blog is defined with title and content fields.
- The task task1 uses the output_pydantic property to specify that its output should conform to the Blog model.
- After executing the crew, you can access the structured output in multiple ways as shown.
#### Explanation of Accessing the Output
1. Dictionary-Style Indexing: You can directly access the fields using result["field_name"]. This works because the CrewOutput class implements the `__getitem__` method.
2. Directly from Pydantic Model: Access the attributes directly from the result.pydantic object.
3. Using to_dict() Method: Convert the output to a dictionary and access the fields.
4. Printing the Entire Object: Simply print the result object to see the structured output.
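The dictionary-style access described in point 1 can be illustrated with a minimal sketch (`MiniOutput` is hypothetical, not CrewAI's actual `CrewOutput` implementation):

```python
import json

class MiniOutput:
    """Hypothetical sketch of why result["field"] and print(result) behave as described."""

    def __init__(self, data: dict):
        self._data = data

    def __getitem__(self, key):
        # Enables dictionary-style indexing: result["field_name"]
        return self._data[key]

    def to_dict(self):
        # Convert the output to a plain dictionary
        return dict(self._data)

    def __str__(self):
        # Printing the object yields its JSON representation
        return json.dumps(self._data)

result = MiniOutput({"title": "AI Trends", "content": "..."})
```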
### Using `output_json`
The `output_json` property allows you to define the expected output in JSON format. This ensures that the task's output is a valid JSON structure that can be easily parsed and used in your application.
Here's an example demonstrating how to use `output_json`:
@@ -610,14 +781,15 @@ print("Blog:", result)
```
In this example:
- A Pydantic model Blog is defined with title and content fields, which is used to specify the structure of the JSON output.
- The task task1 uses the output_json property to indicate that it expects a JSON output conforming to the Blog model.
- After executing the crew, you can access the structured JSON output in two ways as shown.
#### Explanation of Accessing the Output
1. Accessing Properties Using Dictionary-Style Indexing: You can access the fields directly using result["field_name"]. This is possible because the CrewOutput class implements the `__getitem__` method, allowing you to treat the output like a dictionary. In this option, we're retrieving the title and content from the result.
2. Printing the Entire Blog Object: By printing result, you get the string representation of the CrewOutput object. Since the `__str__` method is implemented to return the JSON output, this will display the entire output as a formatted string representing the Blog object.
---
@@ -807,8 +979,6 @@ While creating and executing tasks, certain validation mechanisms are in place t
These validations help in maintaining the consistency and reliability of task executions within the crewAI framework.
## Creating Directories when Saving Files
The `create_directory` parameter controls whether CrewAI should automatically create directories when saving task outputs to files. This feature is particularly useful for organizing outputs and ensuring that file paths are correctly structured, especially when working with complex project hierarchies.
@@ -855,7 +1025,7 @@ analysis_task:
    A comprehensive financial report with quarterly insights
  agent: financial_analyst
  output_file: reports/quarterly/q4_2024_analysis.pdf
  create_directory: true # Automatically create 'reports/quarterly/' directory

audit_task:
  description: >
@@ -864,18 +1034,20 @@ audit_task:
    A compliance audit report
  agent: auditor
  output_file: audit/compliance_report.md
  create_directory: false # Directory must already exist
```
### Use Cases
**Automatic Directory Creation (`create_directory=True`):**
- Development and prototyping environments
- Dynamic report generation with date-based folders
- Automated workflows where directory structure may vary
- Multi-tenant applications with user-specific folders
**Manual Directory Management (`create_directory=False`):**
- Production environments with strict file system controls
- Security-sensitive applications where directories must be pre-configured
- Systems with specific permission requirements
@@ -20,6 +20,7 @@ enabling everything from simple searches to complex interactions and effective t
CrewAI AMP provides a comprehensive Tools Repository with pre-built integrations for common business systems and APIs. Deploy agents with enterprise tools in minutes instead of days.
The Enterprise Tools Repository includes:
- Pre-built connectors for popular enterprise systems
- Custom tool creation interface
- Version control and sharing capabilities
@@ -37,7 +37,7 @@ you can use them locally or refine them to your needs.
<Card title="Tools & Integrations" href="/en/enterprise/features/tools-and-integrations" icon="wrench">
Connect external apps and manage internal tools your agents can use.
</Card>
<Card title="Tool Repository" href="/en/enterprise/guides/tool-repository#tool-repository" icon="toolbox">
Publish and install tools to enhance your crews' capabilities.
</Card>
<Card title="Agents Repository" href="/en/enterprise/features/agent-repositories" icon="people-group">
@@ -31,7 +31,8 @@ You can configure users and roles in Settings → Roles.
Go to <b>Settings → Roles</b> in CrewAI AMP.
</Step>
<Step title="Choose a role type">
Use a predefined role (<b>Owner</b>, <b>Member</b>) or click <b>Create role</b> to define a custom one.
</Step>
<Step title="Assign to members">
Select users and assign the role. You can change this anytime.
@@ -40,10 +41,10 @@ You can configure users and roles in Settings → Roles.
### Configuration summary
| Area | Where to configure | Options |
|:---|:---|:---|
| Users & Roles | Settings → Roles | Predefined: Owner, Member; Custom roles |
| Automation visibility | Automation → Settings → Visibility | Private; Whitelist users/roles |
## Automation-level Access Control
@@ -70,26 +71,30 @@ You can configure automation-level access control in Automation → Settings
Navigate to <b>Automation → Settings → Visibility</b>.
</Step>
<Step title="Set visibility">
Choose <b>Private</b> to restrict access. The organization owner always retains access.
</Step>
<Step title="Whitelist access">
Add specific users and roles allowed to view, run, and access logs/metrics/settings.
</Step>
<Step title="Save and verify">
Save changes, then confirm that non-whitelisted users cannot view or run the automation.
</Step>
</Steps>
### Private visibility: access outcomes
| Action | Owner | Whitelisted user/role | Not whitelisted |
|:---|:---|:---|:---|
| View automation | ✓ | ✓ | ✗ |
| Run automation/API | ✓ | ✓ | ✗ |
| Access logs/metrics/settings | ✓ | ✓ | ✗ |
<Tip>
The organization owner always has access. In private mode, only whitelisted users and roles can view, run, and access logs/metrics/settings.
</Tip>
<Frame>
@@ -18,222 +18,226 @@ Tools & Integrations is the central hub for connecting third-party apps and ma
<Tabs>
<Tab title="Integrations" icon="plug">
## Agent Apps (Integrations)
Connect enterprise-grade applications (e.g., Gmail, Google Drive, HubSpot, Slack) via OAuth to enable agent actions.
<Steps>
<Step title="Connect">
Click <b>Connect</b> on an app and complete OAuth.
</Step>
<Step title="Configure">
Optionally adjust scopes, triggers, and action availability.
</Step>
<Step title="Use in Agents">
Connected services become available as tools for your agents.
</Step>
</Steps>
<Frame>
![Integrations Grid](/images/enterprise/agent-apps.png)
</Frame>
### Connect your Account
1. Go to <Link href="https://app.crewai.com/crewai_plus/connectors">Integrations</Link>
2. Click <b>Connect</b> on the desired service
3. Complete the OAuth flow and grant scopes
4. Copy your Enterprise Token from <Link href="https://app.crewai.com/crewai_plus/settings/integrations">Integration Settings</Link>
<Frame>
![Enterprise Token](/images/enterprise/enterprise_action_auth_token.png)
</Frame>
### Install Integration Tools
To use the integrations locally, you need to install the latest `crewai-tools` package.
```bash
uv add crewai-tools
```
### Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
### Usage Example
<Tip>
Use the new streamlined approach to integrate enterprise apps. Simply specify the app and its actions directly in the Agent configuration.
</Tip>
```python
from crewai import Agent, Task, Crew

# Create an agent with Gmail capabilities
email_agent = Agent(
    role="Email Manager",
    goal="Manage and organize email communications",
    backstory="An AI assistant specialized in email management and communication.",
    apps=['gmail', 'gmail/send_email']  # Using canonical name 'gmail'
)

# Task to send an email
email_task = Task(
    description="Draft and send a follow-up email to john@example.com about the project update",
    agent=email_agent,
    expected_output="Confirmation that email was sent successfully"
)

# Assemble the crew
crew = Crew(
    agents=[email_agent],
    tasks=[email_task]
)

# Run the crew
crew.kickoff()
```
### Filtering Tools
```python
from crewai import Agent, Task, Crew

# Create agent with specific Gmail actions only
gmail_agent = Agent(
    role="Gmail Manager",
    goal="Manage gmail communications and notifications",
    backstory="An AI assistant that helps coordinate gmail communications.",
    apps=['gmail/fetch_emails']  # Using canonical name with specific action
)

notification_task = Task(
    description="Find the email from john@example.com",
    agent=gmail_agent,
    expected_output="Email found from john@example.com"
)

crew = Crew(
    agents=[gmail_agent],
    tasks=[notification_task]
)
```
On a deployed crew, you can specify which actions are available for each integration from the service settings page.
<Frame>
![Filter Actions](/images/enterprise/filtering_enterprise_action_tools.png)
</Frame>
### Scoped Deployments (multi-user orgs)
You can scope each integration to a specific user. For example, a crew that connects to Google can use a specific user's Gmail account.
<Tip>
Useful when different teams/users must keep data access separated.
</Tip>
Use the `user_bearer_token` to scope authentication to the requesting user. If the user isn't logged in, the crew won't use connected integrations. Otherwise it falls back to the default bearer token configured for the deployment.
<Frame>
![User Bearer Token](/images/enterprise/user_bearer_token.png)
</Frame>
<div id="catalog"></div>
### Catalog
#### Communication & Collaboration
- Gmail — Manage emails and drafts
- Slack — Workspace notifications and alerts
- Microsoft — Office 365 and Teams integration
#### Project Management
- Jira — Issue tracking and project management
- ClickUp — Task and productivity management
- Asana — Team task and project coordination
- Notion — Page and database management
- Linear — Software project and bug tracking
- GitHub — Repository and issue management
#### Customer Relationship Management
- Salesforce — CRM account and opportunity management
- HubSpot — Sales pipeline and contact management
- Zendesk — Customer support ticket management
#### Business & Finance
- Stripe — Payment processing and customer management
- Shopify — E-commerce store and product management
#### Productivity & Storage
- Google Sheets — Spreadsheet data synchronization
- Google Calendar — Event and schedule management
- Box — File storage and document management

…and more to come!
</Tab>
<Tab title="Internal Tools" icon="toolbox">
## Internal Tools
Create custom tools locally, publish them to the CrewAI AMP Tool Repository, and use them in your agents.
<Tip>
Before running the commands below, make sure you log in to your CrewAI AMP account by running this command:
```bash
crewai login
```
</Tip>
<Frame>
![Internal Tool Detail](/images/enterprise/tools-integrations-internal.png)
</Frame>
<Steps>
<Step title="Create">
Create a new tool locally.
```bash
crewai tool create your-tool
```
</Step>
<Step title="Publish">
Publish the tool to the CrewAI AMP Tool Repository.
```bash
crewai tool publish
```
</Step>
<Step title="Install">
Install the tool from the CrewAI AMP Tool Repository.
```bash
crewai tool install your-tool
```
</Step>
</Steps>
Manage:
- Name and description
- Visibility (Private / Public)
- Required environment variables
- Version history and downloads
- Team and role access
<Frame>
![Internal Tool Detail](/images/enterprise/tool-configs.png)
</Frame>
</Tab>
</Tabs>
@@ -241,10 +245,18 @@ Tools & Integrations is the central hub for connecting third-party apps and ma
## Related
<CardGroup cols={2}>
<Card title="Tool Repository" href="/en/enterprise/guides/tool-repository#tool-repository" icon="toolbox">
Create, publish, and version custom tools for your organization.
</Card>
<Card title="Webhook Automation" href="/en/enterprise/guides/webhook-automation" icon="bolt">
Automate workflows and integrate with external platforms and services.
</Card>
</CardGroup>
@@ -20,9 +20,7 @@ Traces in CrewAI AMP are detailed execution records that capture every aspect of
- Execution times
- Cost estimates
<Frame>![Traces Overview](/images/enterprise/traces-overview.png)</Frame>
## Accessing Traces
@@ -51,9 +49,7 @@ The top section displays high-level metrics about the execution:
- **Execution Time**: Total duration of the crew run
- **Estimated Cost**: Approximate cost based on token usage
<Frame>![Execution Summary](/images/enterprise/trace-summary.png)</Frame>
### 2. Tasks & Agents ### 2. Tasks & Agents
@@ -64,33 +60,25 @@ This section shows all tasks and agents that were part of the crew execution:
- Status (completed/failed)
- Individual execution time of the task

<Frame>![Task List](/images/enterprise/trace-tasks.png)</Frame>

### 3. Final Output

Displays the final result produced by the crew after all tasks are completed.

<Frame>![Final Output](/images/enterprise/final-output.png)</Frame>

### 4. Execution Timeline

A visual representation of when each task started and ended, helping you identify bottlenecks or parallel execution patterns.
<Frame>![Execution Timeline](/images/enterprise/trace-timeline.png)</Frame>

### 5. Detailed Task View

When you click on a specific task in the timeline or task list, you'll see:

<Frame>![Detailed Task View](/images/enterprise/trace-detailed-task.png)</Frame>

- **Task Key**: Unique identifier for the task
- **Task ID**: Technical identifier in the system
@@ -104,7 +92,6 @@ When you click on a specific task in the timeline or task list, you'll see:
- **Input**: Any input provided to this task from previous tasks
- **Output**: The actual result produced by the agent

## Using Traces for Debugging

Traces are invaluable for troubleshooting issues with your crews:
@@ -121,6 +108,7 @@ Traces are invaluable for troubleshooting issues with your crews:
<Frame>
  ![Failure Points](/images/enterprise/failure.png)
</Frame>
</Step>

<Step title="Optimize Performance">
@@ -130,6 +118,7 @@ Traces are invaluable for troubleshooting issues with your crews:
- Excessive token usage
- Redundant tool operations
- Unnecessary API calls
</Step>

<Step title="Improve Cost Efficiency">
@@ -139,6 +128,7 @@ Traces are invaluable for troubleshooting issues with your crews:
- Refine prompts to be more concise
- Cache frequently accessed information
- Structure tasks to minimize redundant operations
</Step>
</Steps>
@@ -153,5 +143,6 @@ CrewAI batches trace uploads to reduce overhead on high-volume runs:
This yields more stable tracing under load while preserving detailed task/agent telemetry.

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
  Contact our support team for assistance with trace analysis or any other CrewAI AMP features.
</Card>

View File

@@ -16,7 +16,7 @@ When using the Kickoff API, include a `webhooks` object to your request, for exa
```json
{
  "inputs": { "foo": "bar" },
  "webhooks": {
    "events": ["crew_kickoff_started", "llm_call_started"],
    "url": "https://your.endpoint/webhook",
@@ -46,8 +46,8 @@ Each webhook sends a list of events:
      "data": {
        "model": "gpt-4",
        "messages": [
          { "role": "system", "content": "You are an assistant." },
          { "role": "user", "content": "Summarize this article." }
        ]
      }
    }
@@ -55,7 +55,7 @@ Each webhook sends a list of events:
}
```

The `data` object structure varies by event type. Refer to the [event list](https://github.com/crewAIInc/crewAI/tree/main/lib/crewai/src/crewai/events/types) on GitHub.

As requests are sent over HTTP, the order of events can't be guaranteed. If you need ordering, use the `timestamp` field.
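Since delivery order isn't guaranteed, a receiver can buffer a batch and sort on the `timestamp` field before processing. A minimal sketch — the `order_events` helper and the sample timestamps are illustrative, not part of any CrewAI SDK:

```python
from datetime import datetime

def order_events(batch: list[dict]) -> list[dict]:
    # Sort a received webhook batch by its ISO-8601 timestamp field
    return sorted(batch, key=lambda e: datetime.fromisoformat(e["timestamp"]))

batch = [
    {"event": "llm_call_completed", "timestamp": "2024-01-01T00:00:05+00:00"},
    {"event": "crew_kickoff_started", "timestamp": "2024-01-01T00:00:01+00:00"},
]
ordered = order_events(batch)
print([e["event"] for e in ordered])
```

Note that ordering within a batch still says nothing about events that arrive in a later HTTP request, so stateful consumers should key off `timestamp` globally.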
@@ -65,104 +65,109 @@ CrewAI supports both system events and custom events in Enterprise Event Streami
### Flow Events:

- `flow_created`
- `flow_started`
- `flow_finished`
- `flow_plot`
- `method_execution_started`
- `method_execution_finished`
- `method_execution_failed`

### Agent Events:

- `agent_execution_started`
- `agent_execution_completed`
- `agent_execution_error`
- `lite_agent_execution_started`
- `lite_agent_execution_completed`
- `lite_agent_execution_error`
- `agent_logs_started`
- `agent_logs_execution`
- `agent_evaluation_started`
- `agent_evaluation_completed`
- `agent_evaluation_failed`

### Crew Events:

- `crew_kickoff_started`
- `crew_kickoff_completed`
- `crew_kickoff_failed`
- `crew_train_started`
- `crew_train_completed`
- `crew_train_failed`
- `crew_test_started`
- `crew_test_completed`
- `crew_test_failed`
- `crew_test_result`

### Task Events:

- `task_started`
- `task_completed`
- `task_failed`
- `task_evaluation`

### Tool Usage Events:

- `tool_usage_started`
- `tool_usage_finished`
- `tool_usage_error`
- `tool_validate_input_error`
- `tool_selection_error`
- `tool_execution_error`

### LLM Events:

- `llm_call_started`
- `llm_call_completed`
- `llm_call_failed`
- `llm_stream_chunk`

### LLM Guardrail Events:

- `llm_guardrail_started`
- `llm_guardrail_completed`

### Memory Events:

- `memory_query_started`
- `memory_query_completed`
- `memory_query_failed`
- `memory_save_started`
- `memory_save_completed`
- `memory_save_failed`
- `memory_retrieval_started`
- `memory_retrieval_completed`

### Knowledge Events:

- `knowledge_search_query_started`
- `knowledge_search_query_completed`
- `knowledge_search_query_failed`
- `knowledge_query_started`
- `knowledge_query_completed`
- `knowledge_query_failed`

### Reasoning Events:

- `agent_reasoning_started`
- `agent_reasoning_completed`
- `agent_reasoning_failed`

Event names match the internal event bus. See GitHub for the full list of events.

You can emit your own custom events, and they will be delivered through the webhook stream alongside system events.
<CardGroup>
  <Card
    title="GitHub"
    icon="github"
    href="https://github.com/crewAIInc/crewAI/tree/main/src/crewai/utilities/events"
  >
    Full list of events
  </Card>
  <Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
    Contact our support team for assistance with webhook integration or troubleshooting.
  </Card>
</CardGroup>

View File

@@ -20,37 +20,61 @@ Deep-dive guides walk through setup and sample workflows for each integration:
  <a href="/en/enterprise/guides/gmail-trigger">Enable crews when emails arrive or threads update.</a>
</Card>
{" "}
<Card title="Google Calendar Trigger" icon="calendar-days">
  <a href="/en/enterprise/guides/google-calendar-trigger">
    React to calendar events as they are created, updated, or cancelled.
  </a>
</Card>
{" "}
<Card title="Google Drive Trigger" icon="folder-open">
  <a href="/en/enterprise/guides/google-drive-trigger">
    Handle Drive file uploads, edits, and deletions.
  </a>
</Card>
{" "}
<Card title="Outlook Trigger" icon="envelope-open">
  <a href="/en/enterprise/guides/outlook-trigger">
    Automate responses to new Outlook messages and calendar updates.
  </a>
</Card>
{" "}
<Card title="OneDrive Trigger" icon="cloud">
  <a href="/en/enterprise/guides/onedrive-trigger">
    Audit file activity and sharing changes in OneDrive.
  </a>
</Card>
{" "}
<Card title="Microsoft Teams Trigger" icon="comments">
  <a href="/en/enterprise/guides/microsoft-teams-trigger">
    Kick off workflows when new Teams chats start.
  </a>
</Card>
{" "}
<Card title="HubSpot Trigger" icon="hubspot">
  <a href="/en/enterprise/guides/hubspot-trigger">
    Launch automations from HubSpot workflows and lifecycle events.
  </a>
</Card>
{" "}
<Card title="Salesforce Trigger" icon="salesforce">
  <a href="/en/enterprise/guides/salesforce-trigger">
    Connect Salesforce processes to CrewAI for CRM automation.
  </a>
</Card>
{" "}
<Card title="Slack Trigger" icon="slack">
  <a href="/en/enterprise/guides/slack-trigger">
    Start crews directly from Slack slash commands.
  </a>
</Card>
<Card title="Zapier Trigger" icon="bolt">
  <a href="/en/enterprise/guides/zapier-trigger">Bridge CrewAI with thousands of Zapier-supported apps.</a>
@@ -76,7 +100,10 @@ To access and manage your automation triggers:
2. Click on the **Triggers** tab to view all available trigger integrations

<Frame caption="Example of available automation triggers for a Gmail deployment">
  <img
    src="/images/enterprise/list-available-triggers.png"
    alt="List of available automation triggers"
  />
</Frame>

This view shows all the trigger integrations available for your deployment, along with their current connection status.
@@ -86,7 +113,10 @@ This view shows all the trigger integrations available for your deployment, alon
Each trigger can be easily enabled or disabled using the toggle switch:

<Frame caption="Enable or disable triggers with toggle">
  <img
    src="/images/enterprise/trigger-selected.png"
    alt="Enable or disable triggers with toggle"
  />
</Frame>

- **Enabled (blue toggle)**: The trigger is active and will automatically execute your deployment when the specified events occur
@@ -99,7 +129,10 @@ Simply click the toggle to change the trigger state. Changes take effect immedia
Track the performance and history of your triggered executions:

<Frame caption="List of executions triggered by automation">
  <img
    src="/images/enterprise/list-executions.png"
    alt="List of executions triggered by automation"
  />
</Frame>

## Building Trigger-Driven Automations
@@ -130,6 +163,7 @@ crewai triggers list
```

This command displays all triggers available based on your connected integrations, showing:

- Integration name and connection status
- Available trigger types
- Trigger names and descriptions
@@ -149,6 +183,7 @@ crewai triggers run microsoft_onedrive/file_changed
```

This command:

- Executes your crew locally
- Passes a complete, realistic trigger payload
- Simulates exactly how your crew will be called in production
@@ -161,7 +196,6 @@ This command:
- If your crew expects parameters that aren't in the trigger payload, execution may fail
</Warning>

### Triggers with Crew

Your existing crew definitions work seamlessly with triggers; you just need a task that parses the received payload:
@@ -193,10 +227,12 @@ class MyAutomatedCrew:
The crew will automatically receive and can access the trigger payload through the standard CrewAI context mechanisms.

<Note>
  Crew and Flow inputs can include `crewai_trigger_payload`. CrewAI automatically injects this payload:

  - Tasks: appended to the first task's description by default ("Trigger Payload: {crewai_trigger_payload}")
  - Control via `allow_crewai_trigger_context`: set `True` to always inject, `False` to never inject
  - Flows: any `@start()` method that accepts a `crewai_trigger_payload` parameter will receive it
</Note>
### Integration with Flows
@@ -264,17 +300,20 @@ def delegate_to_crew(self, crewai_trigger_payload: dict = None):
## Troubleshooting

**Trigger not firing:**

- Verify the trigger is enabled in your deployment's Triggers tab
- Check integration connection status under Tools & Integrations
- Ensure all required environment variables are properly configured

**Execution failures:**

- Check the execution logs for error details
- Use `crewai triggers run <trigger_name>` to test locally and see the exact payload structure
- Verify your crew can handle the `crewai_trigger_payload` parameter
- Ensure your crew doesn't expect parameters that aren't included in the trigger payload

**Development issues:**

- Always test with `crewai triggers run <trigger>` before deploying to see the complete payload
- Remember that `crewai run` does NOT simulate trigger calls—use `crewai triggers run` instead
- Use `crewai triggers list` to verify which triggers are available for your connected integrations

View File

@@ -37,6 +37,7 @@ This guide walks you through connecting Azure OpenAI with Crew Studio for seamle
- Navigate to `Resource Management > Networking`.
- Ensure that `Allow access from all networks` is enabled. If this setting is restricted, CrewAI may be blocked from accessing your Azure OpenAI endpoint.
</Step>
</Steps>

## Verification
@@ -46,6 +47,7 @@ You're all set! Crew Studio will now use your Azure OpenAI connection. Test the
## Troubleshooting

If you encounter issues:

- Verify the Target URI format matches the expected pattern
- Check that the API key is correct and has proper permissions
- Ensure network access is configured to allow CrewAI connections

View File

@@ -22,21 +22,27 @@ mode: "wide"
### Installation and Setup
<Card
  title="Follow Standard Installation"
  icon="wrench"
  href="/en/installation"
>
  Follow our standard installation guide to set up CrewAI CLI and create your first project.
</Card>
### Building Your Crew

<Card title="Quickstart Tutorial" icon="rocket" href="/en/quickstart">
  Follow our quickstart guide to create your first agent crew using YAML configuration.
</Card>

## Support and Resources

For Enterprise-specific support or questions, contact our dedicated support team at [support@crewai.com](mailto:support@crewai.com).

<Card title="Schedule a Demo" icon="calendar" href="mailto:support@crewai.com">
  Book time with our team to learn more about Enterprise features and how they can benefit your organization.
</Card>

View File

@@ -14,22 +14,17 @@ CrewAI AMP provides a powerful way to capture telemetry logs from your deploymen
  Your organization should have ENTERPRISE OTEL SETUP enabled
</Card>
<Card title="OTEL collector setup" icon="server">
  Your organization should have an OTEL collector set up, or a provider such as Datadog configured for log intake
</Card>
</CardGroup>
## How to capture telemetry logs

1. Go to the settings/organization tab
2. Configure your OTEL collector setup
3. Save

Example: setting up OTEL log collection export to Datadog.

<Frame>![Capture Telemetry Logs](/images/crewai-otel-export.png)</Frame>

View File

@@ -6,17 +6,21 @@ mode: "wide"
---

<Note>
  After creating a crew locally or through Crew Studio, the next step is deploying it to the CrewAI AMP platform. This guide covers multiple deployment methods to help you choose the best approach for your workflow.
</Note>
## Prerequisites

<CardGroup cols={2}>
  <Card title="Crew Ready for Deployment" icon="users">
    You should have a working crew either built locally or created through Crew Studio
  </Card>
  <Card title="GitHub Repository" icon="github">
    Your crew code should be in a GitHub repository (for GitHub integration method)
  </Card>
</CardGroup>
@@ -105,22 +109,22 @@ The CLI provides the fastest way to deploy locally developed crews to the Enterp
The CrewAI CLI offers several commands to manage your deployments:

```bash
# List all your deployments
crewai deploy list

# Get the status of your deployment
crewai deploy status

# View the logs of your deployment
crewai deploy logs

# Push updates after code changes
crewai deploy push

# Remove a deployment
crewai deploy remove <deployment_id>
```

## Option 2: Deploy Directly via Web Interface
@@ -130,7 +134,7 @@ You can also deploy your crews directly through the CrewAI AMP web interface by
<Step title="Pushing to GitHub">
  You need to push your crew to a GitHub repository. If you haven't created a crew yet, you can [follow this tutorial](/en/quickstart).
</Step>
@@ -187,10 +191,102 @@ You can also deploy your crews directly through the CrewAI AMP web interface by
</Steps>
## Option 3: Redeploy Using API (CI/CD Integration)
For automated deployments in CI/CD pipelines, you can use the CrewAI API to trigger redeployments of existing crews. This is particularly useful for GitHub Actions, Jenkins, or other automation workflows.
<Steps>
<Step title="Get Your Personal Access Token">
Navigate to your CrewAI AMP account settings to generate an API token:
1. Go to [app.crewai.com](https://app.crewai.com)
2. Click on **Settings** → **Account** → **Personal Access Token**
3. Generate a new token and copy it securely
4. Store this token as a secret in your CI/CD system
</Step>
<Step title="Find Your Automation UUID">
Locate the unique identifier for your deployed crew:
1. Go to **Automations** in your CrewAI AMP dashboard
2. Select your existing automation/crew
3. Click on **Additional Details**
4. Copy the **UUID** - this identifies your specific crew deployment
</Step>
<Step title="Trigger Redeployment via API">
Use the Deploy API endpoint to trigger a redeployment:
```bash
curl -i -X POST \
  -H "Authorization: Bearer YOUR_PERSONAL_ACCESS_TOKEN" \
  https://app.crewai.com/crewai_plus/api/v1/crews/YOUR-AUTOMATION-UUID/deploy
# HTTP/2 200
# content-type: application/json
#
# {
# "uuid": "your-automation-uuid",
# "status": "Deploy Enqueued",
# "public_url": "https://your-crew-deployment.crewai.com",
# "token": "your-bearer-token"
# }
```
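The same call can be scripted without shelling out to `curl` — a minimal stdlib-only sketch; the `redeploy` helper and its injectable `opener` parameter are illustrative conveniences, not part of any CrewAI SDK:

```python
import json
import urllib.request

def redeploy(uuid: str, token: str, opener=urllib.request.urlopen) -> dict:
    # POST to the Deploy API endpoint shown above; `opener` is injectable for testing
    req = urllib.request.Request(
        f"https://app.crewai.com/crewai_plus/api/v1/crews/{uuid}/deploy",
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )
    with opener(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

On success the response echoes the automation `uuid` and a `"Deploy Enqueued"` status, as in the curl example above.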
<Info>
If your automation was first created connected to Git, the API will automatically pull the latest changes from your repository before redeploying.
</Info>
</Step>
<Step title="GitHub Actions Integration Example">
Here's a GitHub Actions workflow with more complex deployment triggers:
```yaml
name: Deploy CrewAI Automation

on:
  push:
    branches: [ main ]
  pull_request:
    types: [ labeled ]
  release:
    types: [ published ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    if: |
      (github.event_name == 'push' && github.ref == 'refs/heads/main') ||
      (github.event_name == 'pull_request' && contains(github.event.pull_request.labels.*.name, 'deploy')) ||
      (github.event_name == 'release')
    steps:
      - name: Trigger CrewAI Redeployment
        run: |
          curl -X POST \
            -H "Authorization: Bearer ${{ secrets.CREWAI_PAT }}" \
            https://app.crewai.com/crewai_plus/api/v1/crews/${{ secrets.CREWAI_AUTOMATION_UUID }}/deploy
```
<Tip>
Add `CREWAI_PAT` and `CREWAI_AUTOMATION_UUID` as repository secrets. For PR deployments, add a "deploy" label to trigger the workflow.
</Tip>
</Step>
</Steps>
## ⚠️ Environment Variable Security Requirements

<Warning>
  **Important**: CrewAI AMP has security restrictions on environment variable names that can cause deployment failures if not followed.
</Warning>
### Blocked Environment Variable Patterns
@@ -198,12 +294,14 @@ You can also deploy your crews directly through the CrewAI AMP web interface by
For security reasons, the following environment variable naming patterns are **automatically filtered** and will cause deployment issues:

**Blocked Patterns:**

- Variables ending with `_TOKEN` (e.g., `MY_API_TOKEN`)
- Variables ending with `_PASSWORD` (e.g., `DB_PASSWORD`)
- Variables ending with `_SECRET` (e.g., `API_SECRET`)
- Variables ending with `_KEY` in certain contexts

**Specific Blocked Variables:**

- `GITHUB_USER`, `GITHUB_TOKEN`
- `AWS_REGION`, `AWS_DEFAULT_REGION`
- Various internal CrewAI system variables
@@ -211,6 +309,7 @@ For security reasons, the following environment variable naming patterns are **a
### Allowed Exceptions

Some variables are explicitly allowed despite matching blocked patterns:

- `AZURE_AD_TOKEN`
- `AZURE_OPENAI_AD_TOKEN`
- `ENTERPRISE_ACTION_TOKEN`
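A quick pre-deployment check against the documented patterns can catch these failures before they reach the platform. An illustrative helper — the `is_blocked` function is hypothetical, and the `_KEY` rule is omitted since it only applies "in certain contexts":

```python
BLOCKED_SUFFIXES = ("_TOKEN", "_PASSWORD", "_SECRET")
BLOCKED_NAMES = {"GITHUB_USER", "GITHUB_TOKEN", "AWS_REGION", "AWS_DEFAULT_REGION"}
ALLOWED_EXCEPTIONS = {"AZURE_AD_TOKEN", "AZURE_OPENAI_AD_TOKEN", "ENTERPRISE_ACTION_TOKEN"}

def is_blocked(name: str) -> bool:
    # The explicit allow-list wins over the suffix patterns
    if name in ALLOWED_EXCEPTIONS:
        return False
    return name in BLOCKED_NAMES or name.endswith(BLOCKED_SUFFIXES)

for var in ("MY_API_TOKEN", "AZURE_AD_TOKEN", "DB_PASSWORD", "API_CONFIG"):
    print(var, "blocked" if is_blocked(var) else "ok")
```

Running this over your `.env` keys before `crewai deploy push` surfaces names that would be silently filtered.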
@@ -240,7 +339,8 @@ API_CONFIG=secret123
4. **Document changes**: Keep track of renamed variables for your team

<Tip>
  If you encounter deployment failures with cryptic environment variable errors, check your variable names against these patterns first.
</Tip>
### Interact with Your Deployed Crew ### Interact with Your Deployed Crew
@@ -248,6 +348,7 @@ If you encounter deployment failures with cryptic environment variable errors, c
Once deployment is complete, you can access your crew through:

1. **REST API**: The platform generates a unique HTTPS endpoint with these key routes:
   - `/inputs`: Lists the required input parameters
   - `/kickoff`: Initiates an execution with provided inputs
   - `/status/{kickoff_id}`: Checks the execution status
@@ -287,5 +388,6 @@ The Enterprise platform also offers:
- **Crew Studio**: Build crews through a chat interface without writing code

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with deployment issues or questions about the Enterprise platform.
</Card>

View File

@@ -6,7 +6,8 @@ mode: "wide"
---

<Tip>
Crew Studio is a powerful **no-code/low-code** tool that allows you to quickly scaffold or build Crews through a conversational interface.
</Tip>
## What is Crew Studio?
@@ -52,6 +53,7 @@ Before you can start using Crew Studio, you need to configure your LLM connectio
<Frame>
  ![LLM Connection Configuration](/images/enterprise/llm-connection-config.png)
</Frame>
</Step>
<Step title="Verify Connection Added">
@@ -60,6 +62,7 @@ Before you can start using Crew Studio, you need to configure your LLM connectio
<Frame>
  ![Connection Added](/images/enterprise/connection-added.png)
</Frame>
</Step>
<Step title="Configure LLM Defaults">
@@ -73,6 +76,7 @@ Before you can start using Crew Studio, you need to configure your LLM connectio
<Frame>
  ![LLM Defaults Configuration](/images/enterprise/llm-defaults.png)
</Frame>
</Step>
</Steps>
@@ -93,6 +97,7 @@ Now that you've configured your LLM connection and default settings, you're read
```

The Crew Assistant will ask clarifying questions to better understand your requirements.
</Step>
<Step title="Review Generated Crew">
@@ -104,6 +109,7 @@ Now that you've configured your LLM connection and default settings, you're read
- Tools to be used

This is your opportunity to refine the configuration before proceeding.
</Step>
<Step title="Deploy or Download">
@@ -112,6 +118,7 @@ Now that you've configured your LLM connection and default settings, you're read
- Download the generated code for local customization
- Deploy the crew directly to the CrewAI AMP platform
- Modify the configuration and regenerate the crew
</Step>
<Step title="Test Your Crew">
@@ -120,7 +127,9 @@ Now that you've configured your LLM connection and default settings, you're read
</Steps>

<Tip>
For best results, provide clear, detailed descriptions of what you want your crew to accomplish. Include specific inputs and expected outputs in your description.
</Tip>
## Example Workflow
@@ -134,11 +143,14 @@ Here's a typical workflow for creating a crew with Crew Studio:
```md
I need a crew that can analyze financial news and provide investment recommendations
```
</Step>
<Step title="Answer Questions">
Respond to clarifying questions from the Crew Assistant to refine your requirements.
</Step>
<Step title="Review the Plan">
Review the generated crew plan, which might include:
@@ -146,15 +158,18 @@ Here's a typical workflow for creating a crew with Crew Studio:
- A Research Agent to gather financial news
- An Analysis Agent to interpret the data
- A Recommendations Agent to provide investment advice
</Step>
<Step title="Approve or Modify">
Approve the plan or request changes if necessary.
</Step>
<Step title="Download or Deploy">
Download the code for customization or deploy directly to the platform.
</Step>
<Step title="Test and Refine">
Test your crew with sample inputs and refine as needed.
@@ -162,5 +177,6 @@ Here's a typical workflow for creating a crew with Crew Studio:
</Steps>

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Crew Studio or any other CrewAI AMP features.
</Card>

View File

@@ -10,7 +10,8 @@ mode: "wide"
Use the Gmail Trigger to kick off your deployed crews when Gmail events happen in connected accounts, such as receiving a new email or messages matching a label/filter.

<Tip>
Make sure Gmail is connected in Tools & Integrations and the trigger is enabled for your deployment.
</Tip>
## Enabling the Gmail Trigger
@@ -20,7 +21,10 @@ Use the Gmail Trigger to kick off your deployed crews when Gmail events happen i
3. Locate **Gmail** and switch the toggle to enable

<Frame>
  <img src="/images/enterprise/trigger-selected.png" alt="Enable or disable triggers with toggle" />
</Frame>

## Example: Process new emails
@@ -62,13 +66,15 @@ Test your Gmail trigger integration locally using the CrewAI CLI:
crewai triggers list

# Simulate a Gmail trigger with realistic payload
crewai triggers run gmail/new_email_received
```

The `crewai triggers run` command will execute your crew with a complete Gmail payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run gmail/new_email_received` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
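Inside your crew, the trigger data arrives as the `crewai_trigger_payload` input. A minimal sketch of mapping a few fields onto your task inputs follows; the field names here are purely illustrative assumptions, so inspect the output of `crewai triggers run gmail/new_email_received` for the real schema before relying on any of them:

```python
# Hypothetical helper: map a Gmail trigger payload onto task inputs.
# Field names ("from", "subject", "body") are assumptions for illustration;
# check the payload printed by `crewai triggers run` for the actual keys.
def parse_gmail_payload(crewai_trigger_payload: dict) -> dict:
    """Extract the fields your tasks expect, with safe defaults."""
    return {
        "sender": crewai_trigger_payload.get("from", "unknown"),
        "subject": crewai_trigger_payload.get("subject", ""),
        "body": crewai_trigger_payload.get("body", ""),
    }

inputs = parse_gmail_payload({"from": "a@b.com", "subject": "Hi", "body": "Hello"})
```

A helper like this keeps the payload-shape assumptions in one place, so a schema change only touches one function.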
## Monitoring Executions
@@ -76,13 +82,16 @@ The `crewai triggers run` command will execute your crew with a complete Gmail p
Track history and performance of triggered runs:

<Frame>
  <img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
</Frame>

## Troubleshooting
- Ensure Gmail is connected in Tools & Integrations
- Verify the Gmail Trigger is enabled on the Triggers tab
- Test locally with `crewai triggers run gmail/new_email_received` to see the exact payload structure
- Check the execution logs and confirm the payload is passed as `crewai_trigger_payload`
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -10,7 +10,8 @@ mode: "wide"
Use the Google Calendar trigger to launch automations whenever calendar events change. Common use cases include briefing a team before a meeting, notifying stakeholders when a critical event is cancelled, or summarizing daily schedules.

<Tip>
Make sure Google Calendar is connected in **Tools & Integrations** and enabled for the deployment you want to automate.
</Tip>
## Enabling the Google Calendar Trigger
@@ -20,7 +21,10 @@ Use the Google Calendar trigger to launch automations whenever calendar events c
3. Locate **Google Calendar** and switch the toggle to enable

<Frame>
  <img src="/images/enterprise/calendar-trigger.png" alt="Enable or disable triggers with toggle" />
</Frame>

## Example: Summarize meeting details
@@ -54,7 +58,9 @@ crewai triggers run google_calendar/event_changed
The `crewai triggers run` command will execute your crew with a complete Calendar payload, allowing you to test your parsing logic before deployment.

<Warning>
Use `crewai triggers run google_calendar/event_changed` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
## Monitoring Executions
@@ -62,7 +68,10 @@ The `crewai triggers run` command will execute your crew with a complete Calenda
The **Executions** list in the deployment dashboard tracks every triggered run and surfaces payload metadata, output summaries, and errors.

<Frame>
  <img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
</Frame>

## Troubleshooting

View File

@@ -10,7 +10,8 @@ mode: "wide"
Trigger your automations when files are created, updated, or removed in Google Drive. Typical workflows include summarizing newly uploaded content, enforcing sharing policies, or notifying owners when critical files change.

<Tip>
Connect Google Drive in **Tools & Integrations** and confirm the trigger is enabled for the automation you want to monitor.
</Tip>
## Enabling the Google Drive Trigger
@@ -20,7 +21,10 @@ Trigger your automations when files are created, updated, or removed in Google D
3. Locate **Google Drive** and switch the toggle to enable

<Frame>
  <img src="/images/enterprise/gdrive-trigger.png" alt="Enable or disable triggers with toggle" />
</Frame>

## Example: Summarize file activity
@@ -51,7 +55,9 @@ crewai triggers run google_drive/file_changed
The `crewai triggers run` command will execute your crew with a complete Drive payload, allowing you to test your parsing logic before deployment.

<Warning>
Use `crewai triggers run google_drive/file_changed` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
## Monitoring Executions
@@ -59,7 +65,10 @@ The `crewai triggers run` command will execute your crew with a complete Drive p
Track history and performance of triggered runs with the **Executions** list in the deployment dashboard.

<Frame>
  <img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
</Frame>

## Troubleshooting

View File

@@ -15,38 +15,47 @@ This guide provides a step-by-step process to set up HubSpot triggers for CrewAI
## Setup Steps

<Steps>
<Step title="Connect your HubSpot account with CrewAI AMP">
- Log in to your `CrewAI AMP account > Triggers`
- Select `HubSpot` from the list of available triggers
- Choose the HubSpot account you want to connect with CrewAI AMP
- Follow the on-screen prompts to authorize CrewAI AMP access to your HubSpot account
- A confirmation message will appear once HubSpot is successfully connected with CrewAI AMP
</Step>
<Step title="Create a HubSpot Workflow">
- Log in to your `HubSpot account > Automations > Workflows > New workflow`
- Select the workflow type that fits your needs (e.g., Start from scratch)
- In the workflow builder, click the Plus (+) icon to add a new action.
- Choose `Integrated apps > CrewAI > Kickoff a Crew`.
- Select the Crew you want to initiate.
- Click `Save` to add the action to your workflow
<Frame>
  <img src="/images/enterprise/hubspot-workflow-1.png" alt="HubSpot Workflow 1" />
</Frame>
</Step>
<Step title="Use Crew results with other actions">
- After the Kickoff a Crew step, click the Plus (+) icon to add a new action.
- For example, to send an internal email notification, choose `Communications > Send internal email notification`
- In the Body field, click `Insert data`, select `View properties or action outputs from > Action outputs > Crew Result` to include Crew data in the email
<Frame>
  <img src="/images/enterprise/hubspot-workflow-2.png" alt="HubSpot Workflow 2" />
</Frame>
- Configure any additional actions as needed
- Review your workflow steps to ensure everything is set up correctly
- Activate the workflow
<Frame>
  <img src="/images/enterprise/hubspot-workflow-3.png" alt="HubSpot Workflow 3" />
</Frame>
</Step>
</Steps>
For more detailed information on available actions and customization options, refer to the [HubSpot Workflows Documentation](https://knowledge.hubspot.com/workflows/create-workflows).

View File

@@ -17,9 +17,7 @@ Once you've deployed your crew to the CrewAI AMP platform, you can kickoff execu
2. Click on the crew name from your projects list
3. You'll be taken to the crew's detail page

<Frame>![Crew Dashboard](/images/enterprise/crew-dashboard.png)</Frame>

### Step 2: Initiate Execution
@@ -31,9 +29,7 @@ From your crew's detail page, you have two options to kickoff an execution:
2. Enter the required input parameters for your crew in the JSON editor
3. Click the `Send Request` button

<Frame>![Kickoff Endpoint](/images/enterprise/kickoff-endpoint.png)</Frame>

#### Option B: Using the Visual Interface
@@ -41,9 +37,7 @@ From your crew's detail page, you have two options to kickoff an execution:
2. Enter the required inputs in the form fields
3. Click the `Run Crew` button

<Frame>![Run Crew](/images/enterprise/run-crew.png)</Frame>

### Step 3: Monitor Execution Progress
@@ -52,9 +46,7 @@ After initiating the execution:
1. You'll receive a response containing a `kickoff_id` - **copy this ID**
2. This ID is essential for tracking your execution

<Frame>![Copy Task ID](/images/enterprise/copy-task-id.png)</Frame>

### Step 4: Check Execution Status
@@ -64,11 +56,10 @@ To monitor the progress of your execution:
2. Paste the `kickoff_id` into the designated field
3. Click the "Get Status" button

<Frame>![Get Status](/images/enterprise/get-status.png)</Frame>

The status response will show:

- Current execution state (`running`, `completed`, etc.)
- Details about which tasks are in progress
- Any outputs produced so far
@@ -122,7 +113,7 @@ curl -X GET \
The response will be a JSON object containing an array of required input parameters, for example:

```json
{ "inputs": ["topic", "current_year"] }
```

This example shows that this particular crew requires two inputs: `topic` and `current_year`.
@@ -142,7 +133,7 @@ curl -X POST \
The response will include a `kickoff_id` that you'll need for tracking:

```json
{ "kickoff_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv" }
```
### Step 3: Check Execution Status
@@ -182,5 +173,6 @@ If an execution fails:
3. Look for LLM responses and tool usage in the trace details

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with execution issues or questions about the Enterprise platform.
</Card>

View File

@@ -10,7 +10,8 @@ mode: "wide"
Use the Microsoft Teams trigger to start automations whenever a new chat is created. Common patterns include summarizing inbound requests, routing urgent messages to support teams, or creating follow-up tasks in other systems.

<Tip>
Confirm Microsoft Teams is connected under **Tools & Integrations** and enabled in the **Triggers** tab for your deployment.
</Tip>
## Enabling the Microsoft Teams Trigger
@@ -20,7 +21,10 @@ Use the Microsoft Teams trigger to start automations whenever a new chat is crea
3. Locate **Microsoft Teams** and switch the toggle to enable

<Frame caption="Microsoft Teams trigger connection">
  <img src="/images/enterprise/msteams-trigger.png" alt="Enable or disable triggers with toggle" />
</Frame>

## Example: Summarize a new chat thread
@@ -52,7 +56,9 @@ crewai triggers run microsoft_teams/teams_message_created
The `crewai triggers run` command will execute your crew with a complete Teams payload, allowing you to test your parsing logic before deployment.

<Warning>
Use `crewai triggers run microsoft_teams/teams_message_created` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>

## Troubleshooting

View File

@@ -10,7 +10,8 @@ mode: "wide"
Start automations when files change inside OneDrive. You can generate audit summaries, notify security teams about external sharing, or update downstream line-of-business systems with new document metadata.

<Tip>
Connect OneDrive in **Tools & Integrations** and toggle the trigger on for your deployment.
</Tip>
## Enabling the OneDrive Trigger
@@ -20,7 +21,10 @@ Start automations when files change inside OneDrive. You can generate audit summ
3. Locate **OneDrive** and switch the toggle to enable

<Frame caption="Microsoft OneDrive trigger connection">
  <img src="/images/enterprise/onedrive-trigger.png" alt="Enable or disable triggers with toggle" />
</Frame>

## Example: Audit file permissions
@@ -51,7 +55,9 @@ crewai triggers run microsoft_onedrive/file_changed
The `crewai triggers run` command will execute your crew with a complete OneDrive payload, allowing you to test your parsing logic before deployment.

<Warning>
Use `crewai triggers run microsoft_onedrive/file_changed` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>

## Troubleshooting

View File

@@ -10,7 +10,8 @@ mode: "wide"
Automate responses when Outlook delivers a new message or when an event is removed from the calendar. Teams commonly route escalations, file tickets, or alert attendees of cancellations.

<Tip>
Connect Outlook in **Tools & Integrations** and ensure the trigger is enabled for your deployment.
</Tip>
## Enabling the Outlook Trigger
@@ -20,7 +21,10 @@ Automate responses when Outlook delivers a new message or when an event is remov
3. Locate **Outlook** and switch the toggle to enable

<Frame caption="Microsoft Outlook trigger connection">
  <img src="/images/enterprise/outlook-trigger.png" alt="Enable or disable triggers with toggle" />
</Frame>

## Example: Summarize a new email
@@ -51,7 +55,9 @@ crewai triggers run microsoft_outlook/email_received
The `crewai triggers run` command will execute your crew with a complete Outlook payload, allowing you to test your parsing logic before deployment.

<Warning>
Use `crewai triggers run microsoft_outlook/email_received` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>

## Troubleshooting

View File

@@ -17,6 +17,7 @@ This guide explains how to export CrewAI AMP crews as React components and integ
  <img src="/images/enterprise/export-react-component.png" alt="Export React Component" />
</Frame>
</Step>
</Steps>

## Setting Up Your React Environment
@@ -83,6 +84,7 @@ To run this React component locally, you'll need to set up a React development e
```
- This will start the development server, and your default web browser should open automatically to http://localhost:3000, where you'll see your React app running.
</Step>
</Steps>

## Customization
@@ -90,10 +92,16 @@ To run this React component locally, you'll need to set up a React development e
You can then customise `CrewLead.jsx` to add a color, title, etc.

<Frame>
  <img src="/images/enterprise/customise-react-component.png" alt="Customise React Component" />
</Frame>
<Frame>
  <img src="/images/enterprise/customise-react-component-2.png" alt="Customise React Component" />
</Frame>

## Next Steps

View File

@@ -10,31 +10,30 @@ As an administrator of a CrewAI AMP account, you can easily invite new team memb
## Inviting Team Members
<Steps>
<Step title="Access the Settings Page">
- Log in to your CrewAI AMP account
- Look for the gear icon (⚙️) in the top right corner of the dashboard
- Click on the gear icon to access the **Settings** page:
<Frame caption="Settings page">
<img src="/images/enterprise/settings-page.png" alt="Settings Page" />
</Frame>
</Step>
<Step title="Navigate to the Members Section">
- On the Settings page, you'll see a `Members` tab
- Click on the `Members` tab to access the **Members** page:
<Frame caption="Members tab">
<img src="/images/enterprise/members-tab.png" alt="Members Tab" />
</Frame>
</Step>
<Step title="Invite New Members">
- In the Members section, you'll see a list of current members (including yourself)
- Locate the `Email` input field
- Enter the email address of the person you want to invite
- Click the `Invite` button to send the invitation
</Step>
<Step title="Repeat as Needed">
- You can repeat this process to invite multiple team members
- Each invited member will receive an email invitation to join your organization
</Step>
</Steps>
## Adding Roles
@@ -42,40 +41,44 @@ As an administrator of a CrewAI AMP account, you can easily invite new team memb
You can add roles to your team members to control their access to different parts of the platform.
<Steps>
<Step title="Access the Settings Page">
- Log in to your CrewAI AMP account
- Look for the gear icon (⚙️) in the top right corner of the dashboard
- Click on the gear icon to access the **Settings** page:
<Frame>
<img src="/images/enterprise/settings-page.png" alt="Settings Page" />
</Frame>
</Step>
<Step title="Navigate to the Members Section">
- On the Settings page, you'll see a `Roles` tab
- Click on the `Roles` tab to access the **Roles** page.
<Frame>
<img src="/images/enterprise/roles-tab.png" alt="Roles Tab" />
</Frame>
- Click on the `Add Role` button to add a new role.
- Enter the details and permissions of the role and click the `Create Role` button to create the role.
<Frame>
<img src="/images/enterprise/add-role-modal.png" alt="Add Role Modal" />
</Frame>
</Step>
<Step title="Add Roles to Members">
- In the Members section, you'll see a list of current members (including yourself)
<Frame>
<img src="/images/enterprise/member-accepted-invitation.png" alt="Member Accepted Invitation" />
</Frame>
- Once the member has accepted the invitation, you can add a role to them.
- Navigate back to the `Roles` tab
- Go to the member you want to add a role to and under the `Role` column, click on the dropdown
- Select the role you want to add to the member
- Click the `Update` button to save the role
<Frame>
<img src="/images/enterprise/assign-role.png" alt="Add Role to Member" />
</Frame>
</Step>
</Steps>
## Important Notes


@@ -21,7 +21,7 @@ The repository is not a version control system. Use Git to track code changes an
Before using the Tool Repository, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account
- [CrewAI CLI](/en/concepts/cli#cli) installed
- uv>=0.5.0 installed. Check out [how to upgrade](https://docs.astral.sh/uv/getting-started/installation/#upgrading-uv)
- [Git](https://git-scm.com) installed and configured
- Access permissions to publish or install tools in your CrewAI AMP organization
@@ -112,7 +112,7 @@ By default, tools are published as private. To make a tool public:
crewai tool publish --public
```
For more details on how to build tools, see [Creating your own tools](/en/concepts/tools#creating-your-own-tools).
## Updating Tools
@@ -137,7 +137,7 @@ To delete a tool:
4. Click **Delete**
<Warning>
Deletion is permanent. Deleted tools cannot be restored or re-installed.
</Warning>
## Security Checks
@@ -149,7 +149,6 @@ You can check the security check status of a tool at:
`CrewAI AMP > Tools > Your Tool > Versions`
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with API integration or troubleshooting.
</Card>


@@ -6,8 +6,9 @@ mode: "wide"
---
<Note>
After deploying your crew to CrewAI AMP, you may need to make updates to the code, security settings, or configuration. This guide explains how to perform these common update operations.
</Note>
## Why Update Your Crew?
@@ -15,6 +16,7 @@ This guide explains how to perform these common update operations.
CrewAI won't automatically pick up GitHub updates unless you checked the `Auto-update` option when deploying your crew, so you'll need to trigger updates manually.
There are several reasons you might want to update your crew deployment:
- You want to update the code with the latest commit you pushed to GitHub
- You want to reset the bearer token for security reasons
- You want to update environment variables
@@ -26,9 +28,7 @@ When you've pushed new commits to your GitHub repository and want to update your
1. Navigate to your crew in the CrewAI AMP platform
2. Click on the `Re-deploy` button on your crew details page
<Frame>![Re-deploy Button](/images/enterprise/redeploy-button.png)</Frame>
This will trigger an update that you can track using the progress bar. The system will pull the latest code from your repository and rebuild your deployment.
@@ -40,12 +40,11 @@ If you need to generate a new bearer token (for example, if you suspect the curr
2. Find the `Bearer Token` section
3. Click the `Reset` button next to your current token
<Frame>![Reset Token](/images/enterprise/reset-token.png)</Frame>
<Warning>
Resetting your bearer token will invalidate the previous token immediately. Make sure to update any applications or scripts that are using the old token.
</Warning>
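Any script that stores the token needs the new value after a reset. As a minimal sketch of what updating a stored token might look like (the `.env` filename and `CREWAI_BEARER_TOKEN` key name here are illustrative assumptions, not a prescribed convention):

```python
import tempfile
from pathlib import Path


def rotate_token(env_path: Path, key: str, new_token: str) -> None:
    """Replace (or append) a KEY=value line in a .env-style file."""
    lines = env_path.read_text().splitlines() if env_path.exists() else []
    updated, found = [], False
    for line in lines:
        if line.startswith(f"{key}="):
            updated.append(f"{key}={new_token}")  # overwrite the stale token
            found = True
        else:
            updated.append(line)
    if not found:
        updated.append(f"{key}={new_token}")
    env_path.write_text("\n".join(updated) + "\n")


# Demo against a throwaway file; in practice, point this at your project's .env.
demo = Path(tempfile.mkdtemp()) / ".env"
demo.write_text("CREWAI_BEARER_TOKEN=old-token\n")
rotate_token(demo, "CREWAI_BEARER_TOKEN", "new-token-value")
```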
## 3. Updating Environment Variables
@@ -69,7 +68,8 @@ To update the environment variables for your crew:
5. Finally, click the `Update Deployment` button at the bottom of the page to apply the changes
<Note>
Updating environment variables will trigger a new deployment, but this will only update the environment configuration and not the code itself.
</Note>
## After Updating
@@ -81,9 +81,11 @@ After performing any update:
3. Once complete, test your crew to ensure the changes are working as expected
<Tip>
If you encounter any issues after updating, you can view deployment logs in the platform or contact support for assistance.
</Tip>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with updating your crew or troubleshooting deployment issues.
</Card>


@@ -76,6 +76,7 @@ CrewAI AMP allows you to automate your workflow using webhooks. This article wil
<img src="/images/enterprise/activepieces-email.png" alt="ActivePieces Email" />
</Frame>
</Step>
</Steps>
## Webhook Output Examples
@@ -152,4 +153,5 @@ CrewAI AMP allows you to automate your workflow using webhooks. This article wil
}
```
</Tab>
</Tabs>


@@ -93,6 +93,7 @@ This guide will walk you through the process of setting up Zapier triggers for C
<img src="/images/enterprise/zapier-9.png" alt="Zapier 12" />
</Frame>
</Step>
</Steps>
## Tips for Success


@@ -33,6 +33,24 @@ Before using the Asana integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
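Before wiring the token into `Agent(apps=[...])`, it can help to sanity-check that the variable is actually visible to your process. A minimal stdlib sketch (the token value is a placeholder, and the commented `Agent` call is only an illustration of the interface described above):

```python
import os

# Placeholder value for illustration; use your real Enterprise Token.
os.environ.setdefault("CREWAI_PLATFORM_INTEGRATION_TOKEN", "your_enterprise_token")

token = os.environ.get("CREWAI_PLATFORM_INTEGRATION_TOKEN")
if not token:
    raise RuntimeError("CREWAI_PLATFORM_INTEGRATION_TOKEN is not set")

# With the token in place, an agent could then be given Asana actions, e.g.:
# from crewai import Agent
# agent = Agent(role="Project assistant", goal="...", backstory="...", apps=["asana"])
```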
## Available Actions
<AccordionGroup>
@@ -42,6 +60,7 @@ uv add crewai-tools
**Parameters:**
- `task` (string, required): Task ID - The ID of the Task the comment will be added to. The comment will be authored by the currently authenticated user.
- `text` (string, required): Text (example: "This is a comment.").
</Accordion>
<Accordion title="asana/create_project">
@@ -52,6 +71,7 @@ uv add crewai-tools
- `workspace` (string, required): Workspace - Use Connect Portal Workflow Settings to allow users to select which Workspace to create Projects in. Defaults to the user's first Workspace if left blank.
- `team` (string, optional): Team - Use Connect Portal Workflow Settings to allow users to select which Team to share this Project with. Defaults to the user's first Team if left blank.
- `notes` (string, optional): Notes (example: "These are things we need to purchase.").
</Accordion>
<Accordion title="asana/get_projects">
@@ -60,6 +80,7 @@ uv add crewai-tools
**Parameters:**
- `archived` (string, optional): Archived - Choose "true" to show archived projects, "false" to display only active projects, or "default" to show both archived and active projects.
- Options: `default`, `true`, `false`
</Accordion>
<Accordion title="asana/get_project_by_id">
@@ -67,6 +88,7 @@ uv add crewai-tools
**Parameters:**
- `projectFilterId` (string, required): Project ID.
</Accordion>
<Accordion title="asana/create_task">
@@ -81,6 +103,7 @@ uv add crewai-tools
- `dueAtDate` (string, optional): Due At - The date and time (ISO timestamp) at which this task is due. Cannot be used together with Due On. (example: "2019-09-15T02:06:58.147Z").
- `assignee` (string, optional): Assignee - The ID of the Asana user this task will be assigned to. Use Connect Portal Workflow Settings to allow users to select an Assignee.
- `gid` (string, optional): External ID - An ID from your application to associate this task with. You can use this ID to sync updates to this task later.
</Accordion>
<Accordion title="asana/update_task">
@@ -96,6 +119,7 @@ uv add crewai-tools
- `dueAtDate` (string, optional): Due At - The date and time (ISO timestamp) at which this task is due. Cannot be used together with Due On. (example: "2019-09-15T02:06:58.147Z").
- `assignee` (string, optional): Assignee - The ID of the Asana user this task will be assigned to. Use Connect Portal Workflow Settings to allow users to select an Assignee.
- `gid` (string, optional): External ID - An ID from your application to associate this task with. You can use this ID to sync updates to this task later.
</Accordion>
<Accordion title="asana/get_tasks">
@@ -106,6 +130,7 @@ uv add crewai-tools
- `project` (string, optional): Project - The ID of the Project to filter tasks on. Use Connect Portal Workflow Settings to allow users to select a Project.
- `assignee` (string, optional): Assignee - The ID of the assignee to filter tasks on. Use Connect Portal Workflow Settings to allow users to select an Assignee.
- `completedSince` (string, optional): Completed since - Only return tasks that are either incomplete or that have been completed since this time (ISO or Unix timestamp). (example: "2014-04-25T16:15:47-04:00").
</Accordion>
<Accordion title="asana/get_tasks_by_id">
@@ -113,6 +138,7 @@ uv add crewai-tools
**Parameters:**
- `taskId` (string, required): Task ID.
</Accordion>
<Accordion title="asana/get_task_by_external_id">
@@ -120,6 +146,7 @@ uv add crewai-tools
**Parameters:**
- `gid` (string, required): External ID - The ID that this task is associated or synced with, from your application.
</Accordion>
<Accordion title="asana/add_task_to_section">
@@ -130,6 +157,7 @@ uv add crewai-tools
- `taskId` (string, required): Task ID - The ID of the task. (example: "1204619611402340").
- `beforeTaskId` (string, optional): Before Task ID - The ID of a task in this section that this task will be inserted before. Cannot be used with After Task ID. (example: "1204619611402340").
- `afterTaskId` (string, optional): After Task ID - The ID of a task in this section that this task will be inserted after. Cannot be used with Before Task ID. (example: "1204619611402340").
</Accordion>
<Accordion title="asana/get_teams">
@@ -137,12 +165,14 @@ uv add crewai-tools
**Parameters:**
- `workspace` (string, required): Workspace - Returns the teams in this workspace visible to the authorized user.
</Accordion>
<Accordion title="asana/get_workspaces">
**Description:** Get a list of workspaces in Asana.
**Parameters:** None required.
</Accordion>
</AccordionGroup>


@@ -33,6 +33,24 @@ Before using the Box integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
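Several Box actions below note that files must be smaller than 50MB. A small pre-flight check along those lines (the helper name and the binary interpretation of "50MB" are illustrative assumptions, not part of the API):

```python
# 50MB limit mentioned for Box upload actions; interpreted here as 50 * 2**20 bytes.
MAX_BOX_UPLOAD_BYTES = 50 * 1024 * 1024


def fits_box_upload_limit(size_bytes: int) -> bool:
    """Return True if a payload of this size is under the stated 50MB upload cap."""
    return size_bytes < MAX_BOX_UPLOAD_BYTES


# A 10MB file fits; a 60MB file does not.
small_ok = fits_box_upload_limit(10 * 1024 * 1024)
large_ok = fits_box_upload_limit(60 * 1024 * 1024)
```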
## Available Actions
<AccordionGroup>
@@ -50,6 +68,7 @@ uv add crewai-tools
}
```
- `file` (string, required): File URL - Files must be smaller than 50MB in size. (example: "https://picsum.photos/200/300").
</Accordion>
<Accordion title="box/save_file_from_object">
@@ -59,6 +78,7 @@ uv add crewai-tools
- `file` (string, required): File - Accepts a File Object containing file data. Files must be smaller than 50MB in size.
- `fileName` (string, required): File Name (example: "qwerty.png").
- `folder` (string, optional): Folder - Use Connect Portal Workflow Settings to allow users to select the File's Folder destination. Defaults to the user's root folder if left blank.
</Accordion>
<Accordion title="box/get_file_by_id">
@@ -66,6 +86,7 @@ uv add crewai-tools
**Parameters:**
- `fileId` (string, required): File ID - The unique identifier that represents a file. (example: "12345").
</Accordion>
<Accordion title="box/list_files">
@@ -91,6 +112,7 @@ uv add crewai-tools
]
}
```
</Accordion>
<Accordion title="box/create_folder">
@@ -104,6 +126,7 @@ uv add crewai-tools
"id": "123456"
}
```
</Accordion>
<Accordion title="box/move_folder">
@@ -118,6 +141,7 @@ uv add crewai-tools
"id": "123456"
}
```
</Accordion>
<Accordion title="box/get_folder_by_id">
@@ -125,6 +149,7 @@ uv add crewai-tools
**Parameters:**
- `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0").
</Accordion>
<Accordion title="box/search_folders">
@@ -150,6 +175,7 @@ uv add crewai-tools
]
}
```
</Accordion>
<Accordion title="box/delete_folder">
@@ -158,6 +184,7 @@ uv add crewai-tools
**Parameters:**
- `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0").
- `recursive` (boolean, optional): Recursive - Delete a folder that is not empty by recursively deleting the folder and all of its content.
</Accordion>
</AccordionGroup>


@@ -33,6 +33,24 @@ Before using the ClickUp integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
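The `clickup/search_tasks` action below lists its filter fields in URL-encoded form (e.g. `space_ids%5B%5D`). If it helps to see what those encode, a quick stdlib check (purely illustrative):

```python
from urllib.parse import unquote

encoded_fields = ["space_ids%5B%5D", "list_ids%5B%5D", "assignees%5B%5D", "include_closed"]

# %5B%5D is the percent-encoding of "[]", i.e. array-style query parameters.
decoded = [unquote(field) for field in encoded_fields]
print(decoded)  # ['space_ids[]', 'list_ids[]', 'assignees[]', 'include_closed']
```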
## Available Actions
<AccordionGroup>
@@ -59,6 +77,7 @@ uv add crewai-tools
}
```
Available fields: `space_ids%5B%5D`, `project_ids%5B%5D`, `list_ids%5B%5D`, `statuses%5B%5D`, `include_closed`, `assignees%5B%5D`, `tags%5B%5D`, `due_date_gt`, `due_date_lt`, `date_created_gt`, `date_created_lt`, `date_updated_gt`, `date_updated_lt`
</Accordion>
<Accordion title="clickup/get_task_in_list">
@@ -67,6 +86,7 @@ uv add crewai-tools
**Parameters:**
- `listId` (string, required): List - Select a List to get tasks from. Use Connect Portal User Settings to allow users to select a ClickUp List.
- `taskFilterFormula` (string, optional): Search for tasks that match specified filters. For example: name=task1.
</Accordion>
<Accordion title="clickup/create_task">
@@ -80,6 +100,7 @@ uv add crewai-tools
- `assignees` (string, optional): Assignees - Select a Member (or an array of member IDs) to be assigned to this task. Use Connect Portal User Settings to allow users to select a ClickUp Member.
- `dueDate` (string, optional): Due Date - Specify a date for this task to be due on.
- `additionalFields` (string, optional): Additional Fields - Specify additional fields to include on this task as JSON.
</Accordion>
<Accordion title="clickup/update_task">
@@ -94,6 +115,7 @@ uv add crewai-tools
- `assignees` (string, optional): Assignees - Select a Member (or an array of member IDs) to be assigned to this task. Use Connect Portal User Settings to allow users to select a ClickUp Member.
- `dueDate` (string, optional): Due Date - Specify a date for this task to be due on.
- `additionalFields` (string, optional): Additional Fields - Specify additional fields to include on this task as JSON.
</Accordion>
<Accordion title="clickup/delete_task">
@@ -101,6 +123,7 @@ uv add crewai-tools
**Parameters:**
- `taskId` (string, required): Task ID - The ID of the task to delete.
</Accordion>
<Accordion title="clickup/get_list">
@@ -108,6 +131,7 @@ uv add crewai-tools
**Parameters:**
- `spaceId` (string, required): Space ID - The ID of the space containing the lists.
</Accordion>
<Accordion title="clickup/get_custom_fields_in_list">
@@ -115,6 +139,7 @@ uv add crewai-tools
**Parameters:**
- `listId` (string, required): List ID - The ID of the list to get custom fields from.
</Accordion>
<Accordion title="clickup/get_all_fields_in_list">
@@ -122,6 +147,7 @@ uv add crewai-tools
**Parameters:**
- `listId` (string, required): List ID - The ID of the list to get all fields from.
</Accordion>
<Accordion title="clickup/get_space">
@@ -129,6 +155,7 @@ uv add crewai-tools
**Parameters:**
- `spaceId` (string, optional): Space ID - The ID of the space to retrieve.
</Accordion>
<Accordion title="clickup/get_folders">
@@ -136,12 +163,14 @@ uv add crewai-tools
**Parameters:**
- `spaceId` (string, required): Space ID - The ID of the space containing the folders.
</Accordion>
<Accordion title="clickup/get_member">
**Description:** Get Member information in ClickUp.
**Parameters:** None required.
</Accordion>
</AccordionGroup>
@@ -268,5 +297,6 @@ crew.kickoff()
### Getting Help ### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com"> <Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with ClickUp integration setup or troubleshooting. Contact our support team for assistance with ClickUp integration setup or
troubleshooting.
</Card> </Card>


@@ -33,6 +33,24 @@ Before using the GitHub integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
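A missing token usually surfaces as an opaque authentication failure at kickoff time. A minimal sketch of a fail-fast check before constructing agents (the helper name is illustrative, not part of `crewai`):

```python
import os

def require_platform_token() -> str:
    """Fail fast if the Enterprise Token needed by Agent(apps=[...]) is missing."""
    token = os.environ.get("CREWAI_PLATFORM_INTEGRATION_TOKEN", "").strip()
    if not token:
        raise RuntimeError(
            "CREWAI_PLATFORM_INTEGRATION_TOKEN is not set; "
            "export it or add it to your .env file."
        )
    return token
```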
## Available Actions
<AccordionGroup>
@@ -45,6 +63,7 @@ uv add crewai-tools
- `title` (string, required): Issue Title - Specify the title of the issue to create.
- `body` (string, optional): Issue Body - Specify the body contents of the issue to create.
- `assignees` (string, optional): Assignees - Specify the assignee(s)' GitHub login as an array of strings for this issue. (example: `["octocat"]`).
</Accordion>
<Accordion title="github/update_issue">
@@ -59,6 +78,7 @@ uv add crewai-tools
- `assignees` (string, optional): Assignees - Specify the assignee(s)' GitHub login as an array of strings for this issue. (example: `["octocat"]`).
- `state` (string, optional): State - Specify the updated state of the issue.
- Options: `open`, `closed`
</Accordion>
<Accordion title="github/get_issue_by_number">
@@ -68,6 +88,7 @@ uv add crewai-tools
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Issue. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Issue.
- `issue_number` (string, required): Issue Number - Specify the number of the issue to fetch.
</Accordion>
<Accordion title="github/lock_issue">
@@ -79,6 +100,7 @@ uv add crewai-tools
- `issue_number` (string, required): Issue Number - Specify the number of the issue to lock.
- `lock_reason` (string, required): Lock Reason - Specify a reason for locking the issue or pull request conversation.
- Options: `off-topic`, `too heated`, `resolved`, `spam`
</Accordion>
<Accordion title="github/search_issue">
@@ -106,6 +128,7 @@ uv add crewai-tools
}
```
Available fields: `assignee`, `creator`, `mentioned`, `labels`
</Accordion>
<Accordion title="github/create_release">
@@ -124,6 +147,7 @@ uv add crewai-tools
- `discussion_category_name` (string, optional): Discussion Category Name - If specified, a discussion of the specified category is created and linked to the release. The value must be a category that already exists in the repository.
- `generate_release_notes` (string, optional): Release Notes - Specify whether the created release should automatically create release notes using the provided name and body specified.
- Options: `true`, `false`
</Accordion>
<Accordion title="github/update_release">
@@ -143,6 +167,7 @@ uv add crewai-tools
- `discussion_category_name` (string, optional): Discussion Category Name - If specified, a discussion of the specified category is created and linked to the release. The value must be a category that already exists in the repository.
- `generate_release_notes` (string, optional): Release Notes - Specify whether the created release should automatically create release notes using the provided name and body specified.
- Options: `true`, `false`
</Accordion>
<Accordion title="github/get_release_by_id">
@@ -152,6 +177,7 @@ uv add crewai-tools
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Release. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Release.
- `id` (string, required): Release ID - Specify the release ID of the release to fetch.
</Accordion>
<Accordion title="github/get_release_by_tag_name">
@@ -161,6 +187,7 @@ uv add crewai-tools
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Release. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Release.
- `tag_name` (string, required): Name - Specify the tag of the release to fetch. (example: "v1.0.0").
</Accordion>
<Accordion title="github/delete_release">
@@ -170,6 +197,7 @@ uv add crewai-tools
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Release. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Release.
- `id` (string, required): Release ID - Specify the ID of the release to delete.
</Accordion>
</AccordionGroup>
@@ -298,5 +326,6 @@ crew.kickoff()
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with GitHub integration setup or troubleshooting.
</Card>


@@ -33,6 +33,24 @@ Before using the Gmail integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -46,6 +64,7 @@ uv add crewai-tools
- `pageToken` (string, optional): Page token to retrieve a specific page of results.
- `labelIds` (array, optional): Only return messages with labels that match all of the specified label IDs.
- `includeSpamTrash` (boolean, optional): Include messages from SPAM and TRASH in the results. (default: false)
</Accordion>
<Accordion title="gmail/send_email">
@@ -61,6 +80,7 @@ uv add crewai-tools
- `from` (string, optional): Sender email address (if different from authenticated user).
- `replyTo` (string, optional): Reply-to email address.
- `threadId` (string, optional): Thread ID if replying to an existing conversation.
</Accordion>
<Accordion title="gmail/delete_email">
@@ -69,6 +89,7 @@ uv add crewai-tools
**Parameters:**
- `userId` (string, required): The user's email address or 'me' for the authenticated user.
- `id` (string, required): The ID of the message to delete.
</Accordion>
<Accordion title="gmail/create_draft">
@@ -78,6 +99,7 @@ uv add crewai-tools
- `userId` (string, required): The user's email address or 'me' for the authenticated user.
- `message` (object, required): Message object containing the draft content.
- `raw` (string, required): Base64url encoded email message.
</Accordion>
<Accordion title="gmail/get_message">
@@ -88,6 +110,7 @@ uv add crewai-tools
- `id` (string, required): The ID of the message to retrieve.
- `format` (string, optional): The format to return the message in. Options: "full", "metadata", "minimal", "raw". (default: "full")
- `metadataHeaders` (array, optional): When given and format is METADATA, only include headers specified.
</Accordion>
<Accordion title="gmail/get_attachment">
@@ -97,6 +120,7 @@ uv add crewai-tools
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
- `messageId` (string, required): The ID of the message containing the attachment.
- `id` (string, required): The ID of the attachment to retrieve.
</Accordion>
<Accordion title="gmail/fetch_thread">
@@ -107,6 +131,7 @@ uv add crewai-tools
- `id` (string, required): The ID of the thread to retrieve.
- `format` (string, optional): The format to return the messages in. Options: "full", "metadata", "minimal". (default: "full")
- `metadataHeaders` (array, optional): When given and format is METADATA, only include headers specified.
</Accordion>
<Accordion title="gmail/modify_thread">
@@ -117,6 +142,7 @@ uv add crewai-tools
- `id` (string, required): The ID of the thread to modify.
- `addLabelIds` (array, optional): A list of IDs of labels to add to this thread.
- `removeLabelIds` (array, optional): A list of IDs of labels to remove from this thread.
</Accordion>
<Accordion title="gmail/trash_thread">
@@ -125,6 +151,7 @@ uv add crewai-tools
**Parameters:**
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
- `id` (string, required): The ID of the thread to trash.
</Accordion>
<Accordion title="gmail/untrash_thread">
@@ -133,6 +160,7 @@ uv add crewai-tools
**Parameters:**
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
- `id` (string, required): The ID of the thread to untrash.
</Accordion>
</AccordionGroup>
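The `raw` field used by `gmail/send_email` and `gmail/create_draft` is a base64url-encoded RFC 2822 message. A minimal sketch of building it with the Python standard library (the helper name is ours, not part of the tool API):

```python
import base64
from email.mime.text import MIMEText

def build_raw_message(to: str, subject: str, body: str) -> str:
    """Build the base64url-encoded `raw` payload for a plain-text email."""
    msg = MIMEText(body)
    msg["to"] = to
    msg["subject"] = subject
    # URL-safe base64 of the serialized RFC 2822 message bytes.
    return base64.urlsafe_b64encode(msg.as_bytes()).decode("ascii")
```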
@@ -270,5 +298,6 @@ crew.kickoff()
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Gmail integration setup or troubleshooting.
</Card>


@@ -33,6 +33,24 @@ Before using the Google Calendar integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -53,6 +71,7 @@ uv add crewai-tools
- `timeZone` (string, optional): Time zone used in the response. The default is UTC.
- `groupExpansionMax` (integer, optional): Maximal number of calendar identifiers to be provided for a single group. Maximum: 100
- `calendarExpansionMax` (integer, optional): Maximal number of calendars for which FreeBusy information is to be provided. Maximum: 50
</Accordion>
<Accordion title="google_calendar/create_event">
@@ -101,6 +120,7 @@ uv add crewai-tools
```
- `visibility` (string, optional): Visibility of the event. Options: default, public, private, confidential. Default: default
- `transparency` (string, optional): Whether the event blocks time on the calendar. Options: opaque, transparent. Default: opaque
</Accordion>
<Accordion title="google_calendar/view_events">
@@ -120,6 +140,7 @@ uv add crewai-tools
- `timeZone` (string, optional): Time zone used in the response.
- `updatedMin` (string, optional): Lower bound for an event's last modification time (RFC3339) to filter by.
- `iCalUID` (string, optional): Specifies an event ID in the iCalendar format to be provided in the response.
</Accordion>
<Accordion title="google_calendar/update_event">
@@ -132,6 +153,7 @@ uv add crewai-tools
- `description` (string, optional): Updated event description
- `start_dateTime` (string, optional): Updated start time
- `end_dateTime` (string, optional): Updated end time
</Accordion>
<Accordion title="google_calendar/delete_event">
@@ -140,6 +162,7 @@ uv add crewai-tools
**Parameters:**
- `calendarId` (string, required): Calendar ID
- `eventId` (string, required): Event ID to delete
</Accordion>
<Accordion title="google_calendar/view_calendar_list">
@@ -151,6 +174,7 @@ uv add crewai-tools
- `showDeleted` (boolean, optional): Whether to include deleted calendar list entries in the result. Default: false
- `showHidden` (boolean, optional): Whether to show hidden entries. Default: false
- `minAccessRole` (string, optional): The minimum access role for the user in the returned entries. Options: freeBusyReader, owner, reader, writer
</Accordion>
</AccordionGroup>
@@ -311,22 +335,26 @@ crew.kickoff()
### Common Issues
**Authentication Errors**
- Ensure your Google account has the necessary permissions for calendar access
- Verify that the OAuth connection includes all required scopes for Google Calendar API
- Check if calendar sharing settings allow the required access level
**Event Creation Issues**
- Verify that time formats are correct (RFC3339 format)
- Ensure attendee email addresses are properly formatted
- Check that the target calendar exists and is accessible
- Verify time zones are correctly specified
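RFC3339 timestamps are most often broken by omitting the UTC offset. A minimal sketch of producing correctly formatted values with Python's standard library (naive datetimes are assumed UTC here, which is our convention, not the API's):

```python
from datetime import datetime, timezone

def rfc3339(dt: datetime) -> str:
    """Format a datetime as an RFC3339 timestamp with an explicit offset."""
    if dt.tzinfo is None:
        # Assumption for this sketch: treat naive datetimes as UTC.
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.isoformat()

# Example: an event start time, e.g. "2026-01-06T09:00:00+00:00".
start = rfc3339(datetime(2026, 1, 6, 9, 0, tzinfo=timezone.utc))
```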
**Availability and Time Conflicts**
- Use proper RFC3339 format for time ranges when checking availability
- Ensure time zones are consistent across all operations
- Verify that calendar IDs are correct when checking multiple calendars
**Event Updates and Deletions**
- Verify that event IDs are correct and events exist
- Ensure you have edit permissions for the events
- Check that calendar ownership allows modifications
@@ -334,5 +362,6 @@ crew.kickoff()
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Google Calendar integration setup or troubleshooting.
</Card>


@@ -33,6 +33,24 @@ Before using the Google Contacts integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -45,6 +63,7 @@ uv add crewai-tools
- `personFields` (string, optional): Fields to include (e.g., 'names,emailAddresses,phoneNumbers'). Default: names,emailAddresses,phoneNumbers
- `requestSyncToken` (boolean, optional): Whether the response should include a sync token. Default: false
- `sortOrder` (string, optional): The order in which the connections should be sorted. Options: LAST_MODIFIED_ASCENDING, LAST_MODIFIED_DESCENDING, FIRST_NAME_ASCENDING, LAST_NAME_ASCENDING
</Accordion>
<Accordion title="google_contacts/search_contacts">
@@ -56,6 +75,7 @@ uv add crewai-tools
- `pageSize` (integer, optional): Number of results to return. Minimum: 1, Maximum: 30
- `pageToken` (string, optional): Token specifying which result page to return.
- `sources` (array, optional): The sources to search in. Options: READ_SOURCE_TYPE_CONTACT, READ_SOURCE_TYPE_PROFILE. Default: READ_SOURCE_TYPE_CONTACT
</Accordion>
<Accordion title="google_contacts/list_directory_people">
@@ -68,6 +88,7 @@ uv add crewai-tools
- `readMask` (string, optional): Fields to read (e.g., 'names,emailAddresses')
- `requestSyncToken` (boolean, optional): Whether the response should include a sync token. Default: false
- `mergeSources` (array, optional): Additional data to merge into the directory people responses. Options: CONTACT
</Accordion>
<Accordion title="google_contacts/search_directory_people">
@@ -78,6 +99,7 @@ uv add crewai-tools
- `sources` (string, required): Directory sources (use 'DIRECTORY_SOURCE_TYPE_DOMAIN_PROFILE')
- `pageSize` (integer, optional): Number of results to return
- `readMask` (string, optional): Fields to read
</Accordion>
<Accordion title="google_contacts/list_other_contacts">
@@ -88,6 +110,7 @@ uv add crewai-tools
- `pageToken` (string, optional): Token specifying which result page to return.
- `readMask` (string, optional): Fields to read
- `requestSyncToken` (boolean, optional): Whether the response should include a sync token. Default: false
</Accordion>
<Accordion title="google_contacts/search_other_contacts">
@@ -97,6 +120,7 @@ uv add crewai-tools
- `query` (string, required): Search query
- `readMask` (string, required): Fields to read (e.g., 'names,emailAddresses')
- `pageSize` (integer, optional): Number of results
</Accordion>
<Accordion title="google_contacts/get_person">
@@ -105,6 +129,7 @@ uv add crewai-tools
**Parameters:**
- `resourceName` (string, required): The resource name of the person to get (e.g., 'people/c123456789')
- `personFields` (string, optional): Fields to include (e.g., 'names,emailAddresses,phoneNumbers'). Default: names,emailAddresses,phoneNumbers
</Accordion>
<Accordion title="google_contacts/create_contact">
@@ -158,6 +183,7 @@ uv add crewai-tools
}
]
```
</Accordion>
<Accordion title="google_contacts/update_contact">
@@ -169,6 +195,7 @@ uv add crewai-tools
- `names` (array, optional): Person's names
- `emailAddresses` (array, optional): Email addresses
- `phoneNumbers` (array, optional): Phone numbers
</Accordion>
<Accordion title="google_contacts/delete_contact">
@@ -176,6 +203,7 @@ uv add crewai-tools
**Parameters:**
- `resourceName` (string, required): The resource name of the person to delete (e.g., 'people/c123456789')
</Accordion>
<Accordion title="google_contacts/batch_get_people">
@@ -184,6 +212,7 @@ uv add crewai-tools
**Parameters:**
- `resourceNames` (array, required): Resource names of people to get. Maximum: 200 items
- `personFields` (string, optional): Fields to include (e.g., 'names,emailAddresses,phoneNumbers'). Default: names,emailAddresses,phoneNumbers
</Accordion>
<Accordion title="google_contacts/list_contact_groups">
@@ -193,6 +222,7 @@ uv add crewai-tools
- `pageSize` (integer, optional): Number of contact groups to return. Minimum: 1, Maximum: 1000
- `pageToken` (string, optional): Token specifying which result page to return.
- `groupFields` (string, optional): Fields to include (e.g., 'name,memberCount,clientData'). Default: name,memberCount
</Accordion>
</AccordionGroup>
@@ -361,36 +391,43 @@ crew.kickoff()
### Common Issues
**Permission Errors**
- Ensure your Google account has appropriate permissions for contacts access
- Verify that the OAuth connection includes required scopes for Google Contacts API
- Check that directory access permissions are granted for organization contacts
**Resource Name Format Issues**
- Ensure resource names follow the correct format (e.g., 'people/c123456789' for contacts)
- Verify that contact group resource names use the format 'contactGroups/groupId'
- Check that resource names exist and are accessible
**Search and Query Issues** **Search and Query Issues**
- Ensure search queries are properly formatted and not empty - Ensure search queries are properly formatted and not empty
- Use appropriate readMask fields for the data you need - Use appropriate readMask fields for the data you need
- Verify that search sources are correctly specified (contacts vs profiles) - Verify that search sources are correctly specified (contacts vs profiles)
**Contact Creation and Updates** **Contact Creation and Updates**
- Ensure required fields are provided when creating contacts - Ensure required fields are provided when creating contacts
- Verify that email addresses and phone numbers are properly formatted - Verify that email addresses and phone numbers are properly formatted
- Check that updatePersonFields parameter includes all fields being updated - Check that updatePersonFields parameter includes all fields being updated
**Directory Access Issues** **Directory Access Issues**
- Ensure you have appropriate permissions to access organization directory - Ensure you have appropriate permissions to access organization directory
- Verify that directory sources are correctly specified - Verify that directory sources are correctly specified
- Check that your organization allows API access to directory information - Check that your organization allows API access to directory information
**Pagination and Limits** **Pagination and Limits**
- Be mindful of page size limits (varies by endpoint) - Be mindful of page size limits (varies by endpoint)
- Use pageToken for pagination through large result sets - Use pageToken for pagination through large result sets
- Respect API rate limits and implement appropriate delays - Respect API rate limits and implement appropriate delays
**Contact Groups and Organization** **Contact Groups and Organization**
- Ensure contact group names are unique when creating new groups - Ensure contact group names are unique when creating new groups
- Verify that contacts exist before adding them to groups - Verify that contacts exist before adding them to groups
- Check that you have permissions to modify contact groups - Check that you have permissions to modify contact groups
@@ -398,5 +435,6 @@ crew.kickoff()
### Getting Help ### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com"> <Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Google Contacts integration setup or troubleshooting. Contact our support team for assistance with Google Contacts integration setup
or troubleshooting.
</Card> </Card>


@@ -33,6 +33,24 @@ Before using the Google Docs integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
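Checking for the token up front lets an agent run fail fast with a setup hint instead of erroring on the first tool call. A minimal sketch in plain Python (the helper name is ours, not part of crewai-tools):

```python
import os

def require_integration_token() -> str:
    """Return the Enterprise Token, or raise with a setup hint."""
    token = os.environ.get("CREWAI_PLATFORM_INTEGRATION_TOKEN")
    if not token:
        raise RuntimeError(
            "CREWAI_PLATFORM_INTEGRATION_TOKEN is not set; export it or "
            "add it to your .env file before using Agent(apps=[...])."
        )
    return token

# Demo only: in practice the variable comes from your shell or .env file.
os.environ["CREWAI_PLATFORM_INTEGRATION_TOKEN"] = "your_enterprise_token"
token = require_integration_token()
```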
## Available Actions
<AccordionGroup>
@@ -41,6 +59,7 @@ uv add crewai-tools
**Parameters:**
- `title` (string, optional): The title for the new document.
</Accordion>
<Accordion title="google_docs/get_document">
@@ -50,6 +69,7 @@ uv add crewai-tools
- `documentId` (string, required): The ID of the document to retrieve.
- `includeTabsContent` (boolean, optional): Whether to include tab content. Default is `false`.
- `suggestionsViewMode` (string, optional): The suggestions view mode to apply to the document. Enum: `DEFAULT_FOR_CURRENT_ACCESS`, `PREVIEW_SUGGESTIONS_ACCEPTED`, `PREVIEW_WITHOUT_SUGGESTIONS`. Default is `DEFAULT_FOR_CURRENT_ACCESS`.
</Accordion>
<Accordion title="google_docs/batch_update">
@@ -59,6 +79,7 @@ uv add crewai-tools
- `documentId` (string, required): The ID of the document to update.
- `requests` (array, required): A list of updates to apply to the document. Each item is an object representing a request.
- `writeControl` (object, optional): Provides control over how write requests are executed. Contains `requiredRevisionId` (string) and `targetRevisionId` (string).
</Accordion>
<Accordion title="google_docs/insert_text">
@@ -68,6 +89,7 @@ uv add crewai-tools
- `documentId` (string, required): The ID of the document to update.
- `text` (string, required): The text to insert.
- `index` (integer, optional): The zero-based index at which to insert the text. Default is `1`.
</Accordion>
<Accordion title="google_docs/replace_text">
@@ -78,6 +100,7 @@ uv add crewai-tools
- `containsText` (string, required): The text to find and replace.
- `replaceText` (string, required): The text to replace it with.
- `matchCase` (boolean, optional): Whether the search should respect case. Default is `false`.
</Accordion>
<Accordion title="google_docs/delete_content_range">
@@ -87,6 +110,7 @@ uv add crewai-tools
- `documentId` (string, required): The ID of the document to update.
- `startIndex` (integer, required): The start index of the range to delete.
- `endIndex` (integer, required): The end index of the range to delete.
</Accordion>
<Accordion title="google_docs/insert_page_break">
@@ -95,6 +119,7 @@ uv add crewai-tools
**Parameters:**
- `documentId` (string, required): The ID of the document to update.
- `index` (integer, optional): The zero-based index at which to insert the page break. Default is `1`.
</Accordion>
<Accordion title="google_docs/create_named_range">
@@ -105,6 +130,7 @@ uv add crewai-tools
- `name` (string, required): The name for the named range.
- `startIndex` (integer, required): The start index of the range.
- `endIndex` (integer, required): The end index of the range.
</Accordion>
</AccordionGroup>
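For `batch_update`, each entry in `requests` is a single-key object naming the request type, per the Google Docs API. A hedged sketch of the payload shape, building a two-step update (insert a line, then replace a placeholder); the document ID is a placeholder:

```python
# Shape of a google_docs/batch_update payload (sketch, not a live call).
payload = {
    "documentId": "your-document-id",
    "requests": [
        {
            # Insert a line at index 1 (Docs body content starts at 1).
            "insertText": {
                "location": {"index": 1},
                "text": "Status: {{STATE}}\n",
            }
        },
        {
            # Then replace the placeholder everywhere, case-sensitively.
            "replaceAllText": {
                "containsText": {"text": "{{STATE}}", "matchCase": True},
                "replaceText": "DRAFT",
            }
        },
    ],
}
```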
@@ -200,29 +226,35 @@ crew.kickoff()
### Common Issues
**Authentication Errors**
- Ensure your Google account has the necessary permissions for Google Docs access.
- Verify that the OAuth connection includes all required scopes (`https://www.googleapis.com/auth/documents`).
**Document ID Issues**
- Double-check document IDs for correctness.
- Ensure the document exists and is accessible to your account.
- Document IDs can be found in the Google Docs URL.
**Text Insertion and Range Operations**
- When using `insert_text` or `delete_content_range`, ensure index positions are valid.
- Remember that Google Docs uses zero-based indexing.
- The document must have content at the specified index positions.
**Batch Update Request Formatting**
- When using `batch_update`, ensure the `requests` array is correctly formatted according to the Google Docs API documentation.
- Complex updates require specific JSON structures for each request type.
**Replace Text Operations**
- For `replace_text`, ensure the `containsText` parameter exactly matches the text you want to replace.
- Use `matchCase` parameter to control case sensitivity.
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Google Docs integration setup or troubleshooting.
</Card>


@@ -33,6 +33,24 @@ Before using the Google Drive integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -41,6 +59,7 @@ uv add crewai-tools
**Parameters:**
- `file_id` (string, required): The ID of the file to retrieve.
</Accordion>
<Accordion title="google_drive/list_files">
@@ -52,6 +71,7 @@ uv add crewai-tools
- `page_token` (string, optional): Token for retrieving the next page of results.
- `order_by` (string, optional): Sort order (example: "name", "createdTime desc", "modifiedTime").
- `spaces` (string, optional): Comma-separated list of spaces to query (drive, appDataFolder, photos).
</Accordion>
<Accordion title="google_drive/upload_file">
@@ -63,6 +83,7 @@ uv add crewai-tools
- `mime_type` (string, optional): MIME type of the file (example: "text/plain", "application/pdf").
- `parent_folder_id` (string, optional): ID of the parent folder where the file should be created.
- `description` (string, optional): Description of the file.
</Accordion>
<Accordion title="google_drive/download_file">
@@ -71,6 +92,7 @@ uv add crewai-tools
**Parameters:**
- `file_id` (string, required): The ID of the file to download.
- `mime_type` (string, optional): MIME type for export (required for Google Workspace documents).
</Accordion>
<Accordion title="google_drive/create_folder">
@@ -80,6 +102,7 @@ uv add crewai-tools
- `name` (string, required): Name of the folder to create.
- `parent_folder_id` (string, optional): ID of the parent folder where the new folder should be created.
- `description` (string, optional): Description of the folder.
</Accordion>
<Accordion title="google_drive/delete_file">
@@ -87,6 +110,7 @@ uv add crewai-tools
**Parameters:**
- `file_id` (string, required): The ID of the file to delete.
</Accordion>
<Accordion title="google_drive/share_file">
@@ -100,6 +124,7 @@ uv add crewai-tools
- `domain` (string, optional): The domain to share with (required for domain type).
- `send_notification_email` (boolean, optional): Whether to send a notification email (default: true).
- `email_message` (string, optional): A plain text custom message to include in the notification email.
</Accordion>
<Accordion title="google_drive/update_file">
@@ -113,6 +138,7 @@ uv add crewai-tools
- `description` (string, optional): New description for the file.
- `add_parents` (string, optional): Comma-separated list of parent folder IDs to add.
- `remove_parents` (string, optional): Comma-separated list of parent folder IDs to remove.
</Accordion>
</AccordionGroup>
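`list_files` is paged: feed the token from one response back as `page_token` on the next call until none is returned. A sketch of that loop; `list_files` below is a stand-in that simulates three pages (the real call goes through the tool, and the response field names are assumptions):

```python
def list_files(page_size=2, page_token=None):
    """Stand-in for google_drive/list_files: serves pages of fake file IDs."""
    files = [f"file-{i}" for i in range(5)]
    start = int(page_token or 0)
    page = files[start:start + page_size]
    next_token = str(start + page_size) if start + page_size < len(files) else None
    return {"files": page, "next_page_token": next_token}

def iter_all_files():
    """Yield every file across pages by chaining page tokens."""
    token = None
    while True:
        resp = list_files(page_token=token)
        yield from resp["files"]
        token = resp.get("next_page_token")
        if not token:
            break

all_files = list(iter_all_files())
```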


@@ -34,6 +34,24 @@ Before using the Google Sheets integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -45,6 +63,7 @@ uv add crewai-tools
- `ranges` (array, optional): The ranges to retrieve from the spreadsheet.
- `includeGridData` (boolean, optional): True if grid data should be returned. Default: false
- `fields` (string, optional): The fields to include in the response. Use this to improve performance by only returning needed data.
</Accordion>
<Accordion title="google_sheets/get_values">
@@ -56,6 +75,7 @@ uv add crewai-tools
- `valueRenderOption` (string, optional): How values should be represented in the output. Options: FORMATTED_VALUE, UNFORMATTED_VALUE, FORMULA. Default: FORMATTED_VALUE
- `dateTimeRenderOption` (string, optional): How dates, times, and durations should be represented in the output. Options: SERIAL_NUMBER, FORMATTED_STRING. Default: SERIAL_NUMBER
- `majorDimension` (string, optional): The major dimension that results should use. Options: ROWS, COLUMNS. Default: ROWS
</Accordion>
<Accordion title="google_sheets/update_values">
@@ -72,6 +92,7 @@ uv add crewai-tools
]
```
- `valueInputOption` (string, optional): How the input data should be interpreted. Options: RAW, USER_ENTERED. Default: USER_ENTERED
</Accordion>
<Accordion title="google_sheets/append_values">
@@ -89,6 +110,7 @@ uv add crewai-tools
```
- `valueInputOption` (string, optional): How the input data should be interpreted. Options: RAW, USER_ENTERED. Default: USER_ENTERED
- `insertDataOption` (string, optional): How the input data should be inserted. Options: OVERWRITE, INSERT_ROWS. Default: INSERT_ROWS
</Accordion>
<Accordion title="google_sheets/create_spreadsheet">
@@ -106,6 +128,7 @@ uv add crewai-tools
}
]
```
</Accordion>
</AccordionGroup>
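An `update_values` call pairs an A1 range with a row-major 2-D `values` array; `USER_ENTERED` makes Sheets parse formulas and dates as if typed, while `RAW` stores them verbatim. A hedged sketch of the payload shape (the spreadsheet ID is a placeholder):

```python
# Shape of a google_sheets/update_values payload (sketch, not a live call).
payload = {
    "spreadsheetId": "your-spreadsheet-id",
    "range": "Sheet1!A1:B2",          # A1 notation, optionally sheet-qualified
    "valueInputOption": "USER_ENTERED",  # RAW would store "=SUM(...)" as text
    "values": [                        # ROWS-major: one inner list per row
        ["Item", "Total"],
        ["Widgets", "=SUM(C2:C10)"],
    ],
}
```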
@@ -303,31 +326,37 @@ crew.kickoff()
### Common Issues
**Permission Errors**
- Ensure your Google account has edit access to the target spreadsheets
- Verify that the OAuth connection includes required scopes for Google Sheets API
- Check that spreadsheets are shared with the authenticated account
**Spreadsheet Structure Issues**
- Ensure worksheets have proper column headers before creating or updating rows
- Verify that range notation (A1 format) is correct for the target cells
- Check that the specified spreadsheet ID exists and is accessible
**Data Type and Format Issues**
- Ensure data values match the expected format for each column
- Use proper date formats for date columns (ISO format recommended)
- Verify that numeric values are properly formatted for number columns
**Range and Cell Reference Issues**
- Use proper A1 notation for ranges (e.g., "A1:C10", "Sheet1!A1:B5")
- Ensure range references don't exceed the actual spreadsheet dimensions
- Verify that sheet names in range references match actual sheet names
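The A1 checks above are easy to automate. A small helper (ours, not part of the API) that converts zero-based row/column indices to A1 cell references, including the base-26 carry past column Z:

```python
def to_a1(row: int, col: int) -> str:
    """Convert zero-based (row, col) to an A1 reference, e.g. (0, 0) -> 'A1'."""
    letters = ""
    col += 1  # work in 1-based column numbers
    while col:
        col, rem = divmod(col - 1, 26)  # bijective base-26: Z is 26, AA is 27
        letters = chr(ord("A") + rem) + letters
    return f"{letters}{row + 1}"

# A range like "Sheet1!A1:C10" is then f"Sheet1!{to_a1(0, 0)}:{to_a1(9, 2)}".
```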
**Value Input and Rendering Options**
- Choose appropriate `valueInputOption` (RAW vs USER_ENTERED) for your data
- Select proper `valueRenderOption` based on how you want data formatted
- Consider `dateTimeRenderOption` for consistent date/time handling
**Spreadsheet Creation Issues**
- Ensure spreadsheet titles are unique and follow naming conventions
- Verify that sheet properties are properly structured when creating sheets
- Check that you have permissions to create new spreadsheets in your account
@@ -335,5 +364,6 @@ crew.kickoff()
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Google Sheets integration setup or troubleshooting.
</Card>


@@ -33,6 +33,24 @@ Before using the Google Slides integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -41,6 +59,7 @@ uv add crewai-tools
**Parameters:**
- `title` (string, required): The title of the presentation.
</Accordion>
<Accordion title="google_slides/get_presentation">
@@ -49,6 +68,7 @@ uv add crewai-tools
**Parameters:**
- `presentationId` (string, required): The ID of the presentation to retrieve.
- `fields` (string, optional): The fields to include in the response. Use this to improve performance by only returning needed data.
</Accordion>
<Accordion title="google_slides/batch_update_presentation">
@@ -73,6 +93,7 @@ uv add crewai-tools
"requiredRevisionId": "revision_id_string"
}
```
</Accordion>
<Accordion title="google_slides/get_page">
@@ -81,6 +102,7 @@ uv add crewai-tools
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `pageObjectId` (string, required): The ID of the page to retrieve.
</Accordion>
<Accordion title="google_slides/get_thumbnail">
@@ -89,6 +111,7 @@ uv add crewai-tools
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `pageObjectId` (string, required): The ID of the page for thumbnail generation.
</Accordion>
<Accordion title="google_slides/import_data_from_sheet">
@@ -98,6 +121,7 @@ uv add crewai-tools
- `presentationId` (string, required): The ID of the presentation.
- `sheetId` (string, required): The ID of the Google Sheet to import from.
- `dataRange` (string, required): The range of data to import from the sheet.
</Accordion>
<Accordion title="google_slides/upload_file_to_drive">
@@ -106,6 +130,7 @@ uv add crewai-tools
**Parameters:**
- `file` (string, required): The file data to upload.
- `presentationId` (string, required): The ID of the presentation to link the uploaded file to.
</Accordion>
<Accordion title="google_slides/link_file_to_presentation">
@@ -114,6 +139,7 @@ uv add crewai-tools
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `fileId` (string, required): The ID of the file to link.
</Accordion>
<Accordion title="google_slides/get_all_presentations">
@@ -122,6 +148,7 @@ uv add crewai-tools
**Parameters:**
- `pageSize` (integer, optional): The number of presentations to return per page.
- `pageToken` (string, optional): A token for pagination.
</Accordion>
<Accordion title="google_slides/delete_presentation">
@@ -129,6 +156,7 @@ uv add crewai-tools
**Parameters:**
- `presentationId` (string, required): The ID of the presentation to delete.
</Accordion>
</AccordionGroup>
@@ -330,36 +358,43 @@ crew.kickoff()
### Common Issues
**Permission Errors**
- Ensure your Google account has appropriate permissions for Google Slides
- Verify that the OAuth connection includes required scopes for presentations, spreadsheets, and drive access
- Check that presentations are shared with the authenticated account
**Presentation ID Issues**
- Verify that presentation IDs are correct and presentations exist
- Ensure you have access permissions to the presentations you're trying to modify
- Check that presentation IDs are properly formatted
**Content Update Issues**
- Ensure batch update requests are properly formatted according to Google Slides API specifications
- Verify that object IDs for slides and elements exist in the presentation
- Check that write control revision IDs are current if using optimistic concurrency
**Data Import Issues**
- Verify that Google Sheet IDs are correct and accessible
- Ensure data ranges are properly specified using A1 notation
- Check that you have read permissions for the source spreadsheets
**File Upload and Linking Issues**
- Ensure file data is properly encoded for upload
- Verify that Drive file IDs are correct when linking files
- Check that you have appropriate Drive permissions for file operations
**Page and Thumbnail Operations**
- Verify that page object IDs exist in the specified presentation
- Ensure presentations have content before attempting to generate thumbnails
- Check that page structure is valid for thumbnail generation
**Pagination and Listing Issues**
- Use appropriate page sizes for listing presentations
- Implement proper pagination using page tokens for large result sets
- Handle empty result sets gracefully
@@ -367,5 +402,6 @@ crew.kickoff()
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Google Slides integration setup or troubleshooting.
</Card>


@@ -33,6 +33,24 @@ Before using the HubSpot integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
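Since every action below fails without the token, it can help to verify the configuration up front. A minimal sketch using only the standard library; `require_integration_token` is an illustrative helper, not part of `crewai-tools`:

```python
import os

def require_integration_token() -> str:
    """Return the Enterprise Token, failing fast if it is not configured."""
    token = os.environ.get("CREWAI_PLATFORM_INTEGRATION_TOKEN", "")
    if not token:
        raise RuntimeError(
            "CREWAI_PLATFORM_INTEGRATION_TOKEN is not set; "
            "Agent(apps=[...]) integrations require it."
        )
    return token

# Simulate the exported variable, then confirm it is picked up.
os.environ.setdefault("CREWAI_PLATFORM_INTEGRATION_TOKEN", "your_enterprise_token")
token = require_integration_token()
```

Checking once at startup gives a clearer error than a failed integration call deep inside a crew run.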
## Available Actions
<AccordionGroup>
@@ -99,6 +117,7 @@ uv add crewai-tools
- `web_technologies` (string, optional): Web Technologies used. Must be one of the predefined values.
- `website` (string, optional): Website URL.
- `founded_year` (string, optional): Year Founded.
</Accordion>
<Accordion title="hubspot/create_contact">
@@ -198,6 +217,7 @@ uv add crewai-tools
- `hs_whatsapp_phone_number` (string, optional): WhatsApp Phone Number.
- `work_email` (string, optional): Work email.
- `hs_googleplusid` (string, optional): googleplus ID.
</Accordion>
<Accordion title="hubspot/create_deal">
@@ -213,6 +233,7 @@ uv add crewai-tools
- `dealtype` (string, optional): The type of deal. Available values: `newbusiness`, `existingbusiness`.
- `description` (string, optional): A description of the deal.
- `hs_priority` (string, optional): The priority of the deal. Available values: `low`, `medium`, `high`.
</Accordion>
<Accordion title="hubspot/create_record_engagements">
@@ -230,6 +251,7 @@ uv add crewai-tools
- `hs_meeting_body` (string, optional): The description for the meeting. (Used for `MEETING`)
- `hs_meeting_start_time` (string, optional): The start time of the meeting. (Used for `MEETING`)
- `hs_meeting_end_time` (string, optional): The end time of the meeting. (Used for `MEETING`)
</Accordion>
<Accordion title="hubspot/update_company">
@@ -247,6 +269,7 @@ uv add crewai-tools
- `numberofemployees` (number, optional): Number of Employees.
- `annualrevenue` (number, optional): Annual Revenue.
- `description` (string, optional): Description.
</Accordion>
<Accordion title="hubspot/create_record_any">
@@ -255,6 +278,7 @@ uv add crewai-tools
**Parameters:**
- `recordType` (string, required): The object type ID of the custom object.
- Additional parameters depend on the custom object's schema.
</Accordion>
<Accordion title="hubspot/update_contact">
@@ -269,6 +293,7 @@ uv add crewai-tools
- `company` (string, optional): Company Name.
- `jobtitle` (string, optional): Job Title.
- `lifecyclestage` (string, optional): Lifecycle Stage.
</Accordion>
<Accordion title="hubspot/update_deal">
@@ -282,6 +307,7 @@ uv add crewai-tools
- `pipeline` (string, optional): The pipeline the deal belongs to.
- `closedate` (string, optional): The date the deal is expected to close.
- `dealtype` (string, optional): The type of deal.
</Accordion>
<Accordion title="hubspot/update_record_engagements">
@@ -293,6 +319,7 @@ uv add crewai-tools
- `hs_task_subject` (string, optional): The title of the task.
- `hs_task_body` (string, optional): The notes for the task.
- `hs_task_status` (string, optional): The status of the task.
</Accordion>
<Accordion title="hubspot/update_record_any">
@@ -302,6 +329,7 @@ uv add crewai-tools
- `recordId` (string, required): The ID of the record to update.
- `recordType` (string, required): The object type ID of the custom object.
- Additional parameters depend on the custom object's schema.
</Accordion>
<Accordion title="hubspot/list_companies">
@@ -309,6 +337,7 @@ uv add crewai-tools
**Parameters:**
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/list_contacts">
@@ -316,6 +345,7 @@ uv add crewai-tools
**Parameters:**
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/list_deals">
@@ -323,6 +353,7 @@ uv add crewai-tools
**Parameters:**
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/get_records_engagements">
@@ -331,6 +362,7 @@ uv add crewai-tools
**Parameters:**
- `objectName` (string, required): The type of engagement to fetch (e.g., "notes").
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/get_records_any">
@@ -339,6 +371,7 @@ uv add crewai-tools
**Parameters:**
- `recordType` (string, required): The object type ID of the custom object.
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/get_company">
@@ -346,6 +379,7 @@ uv add crewai-tools
**Parameters:**
- `recordId` (string, required): The ID of the company to retrieve.
</Accordion>
<Accordion title="hubspot/get_contact">
@@ -353,6 +387,7 @@ uv add crewai-tools
**Parameters:**
- `recordId` (string, required): The ID of the contact to retrieve.
</Accordion>
<Accordion title="hubspot/get_deal">
@@ -360,6 +395,7 @@ uv add crewai-tools
**Parameters:**
- `recordId` (string, required): The ID of the deal to retrieve.
</Accordion>
<Accordion title="hubspot/get_record_by_id_engagements">
@@ -367,6 +403,7 @@ uv add crewai-tools
**Parameters:**
- `recordId` (string, required): The ID of the engagement to retrieve.
</Accordion>
<Accordion title="hubspot/get_record_by_id_any">
@@ -375,6 +412,7 @@ uv add crewai-tools
**Parameters:**
- `recordType` (string, required): The object type ID of the custom object.
- `recordId` (string, required): The ID of the record to retrieve.
</Accordion>
<Accordion title="hubspot/search_companies">
@@ -383,6 +421,7 @@ uv add crewai-tools
**Parameters:**
- `filterFormula` (object, optional): A filter in disjunctive normal form (OR of ANDs).
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
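The search actions take a `filterFormula` in disjunctive normal form, meaning the top-level groups are ORed together and the conditions inside each group are ANDed. As a rough sketch of that nesting — the field names, operator names, and JSON keys here are assumptions for illustration, not a confirmed HubSpot schema:

```python
# Assumed shape of a DNF filter: OR of AND-groups. Operator names borrowed from
# the $string... convention used elsewhere in these docs; verify against the
# actual action schema before relying on them.
filter_formula = {
    "operator": "OR",
    "conditions": [
        {
            "operator": "AND",
            "conditions": [
                {"field": "name", "operator": "$stringContains", "value": "Acme"},
                {"field": "website", "operator": "$stringContains", "value": ".com"},
            ],
        },
    ],
}

payload = {
    "filterFormula": filter_formula,
    # Omit pageCursor on the first request; pass the cursor returned by the
    # previous page to fetch the next one.
    "paginationParameters": {},
}
```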
<Accordion title="hubspot/search_contacts">
@@ -391,6 +430,7 @@ uv add crewai-tools
**Parameters:**
- `filterFormula` (object, optional): A filter in disjunctive normal form (OR of ANDs).
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/search_deals">
@@ -399,6 +439,7 @@ uv add crewai-tools
**Parameters:**
- `filterFormula` (object, optional): A filter in disjunctive normal form (OR of ANDs).
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/search_records_engagements">
@@ -407,6 +448,7 @@ uv add crewai-tools
**Parameters:**
- `engagementFilterFormula` (object, optional): A filter for engagements.
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/search_records_any">
@@ -416,6 +458,7 @@ uv add crewai-tools
- `recordType` (string, required): The object type ID to search.
- `filterFormula` (string, optional): The filter formula to apply.
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="hubspot/delete_record_companies">
@@ -423,6 +466,7 @@ uv add crewai-tools
**Parameters:**
- `recordId` (string, required): The ID of the company to delete.
</Accordion>
<Accordion title="hubspot/delete_record_contacts">
@@ -430,6 +474,7 @@ uv add crewai-tools
**Parameters:**
- `recordId` (string, required): The ID of the contact to delete.
</Accordion>
<Accordion title="hubspot/delete_record_deals">
@@ -437,6 +482,7 @@ uv add crewai-tools
**Parameters:**
- `recordId` (string, required): The ID of the deal to delete.
</Accordion>
<Accordion title="hubspot/delete_record_engagements">
@@ -444,6 +490,7 @@ uv add crewai-tools
**Parameters:**
- `recordId` (string, required): The ID of the engagement to delete.
</Accordion>
<Accordion title="hubspot/delete_record_any">
@@ -452,6 +499,7 @@ uv add crewai-tools
**Parameters:**
- `recordType` (string, required): The object type ID of the custom object.
- `recordId` (string, required): The ID of the record to delete.
</Accordion>
<Accordion title="hubspot/get_contacts_by_list_id">
@@ -460,6 +508,7 @@ uv add crewai-tools
**Parameters:**
- `listId` (string, required): The ID of the list to get contacts from.
- `paginationParameters` (object, optional): Use `pageCursor` for subsequent pages.
</Accordion>
<Accordion title="hubspot/describe_action_schema">
@@ -468,6 +517,7 @@ uv add crewai-tools
**Parameters:**
- `recordType` (string, required): The object type ID (e.g., 'companies').
- `operation` (string, required): The operation type (e.g., 'CREATE_RECORD').
</Accordion>
</AccordionGroup>
@@ -561,5 +611,6 @@ crew.kickoff()
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with HubSpot integration setup or troubleshooting.
</Card>


@@ -33,6 +33,24 @@ Before using the Jira integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -54,6 +72,7 @@ uv add crewai-tools
"customfield_10001": "value"
}
```
</Accordion>
<Accordion title="jira/update_issue">
@@ -69,6 +88,7 @@ uv add crewai-tools
- Options: `description`, `descriptionJSON`
- `description` (string, optional): Description - A detailed description of the issue. This field appears only when 'descriptionType' = 'description'.
- `additionalFields` (string, optional): Additional Fields - Specify any other fields that should be included in JSON format.
</Accordion>
<Accordion title="jira/get_issue_by_key">
@@ -76,6 +96,7 @@ uv add crewai-tools
**Parameters:**
- `issueKey` (string, required): Issue Key (example: "TEST-1234").
</Accordion>
<Accordion title="jira/filter_issues">
@@ -102,6 +123,7 @@ uv add crewai-tools
```
Available operators: `$stringExactlyMatches`, `$stringDoesNotExactlyMatch`, `$stringIsIn`, `$stringIsNotIn`, `$stringContains`, `$stringDoesNotContain`, `$stringGreaterThan`, `$stringLessThan`
- `limit` (string, optional): Limit results - Limit the maximum number of issues to return. Defaults to 10 if left blank.
</Accordion>
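Combining the documented operators into a `filterFormula` for `jira/filter_issues` might look like the sketch below. The operators are the ones listed above; the surrounding JSON key names are assumed for illustration:

```python
# Hedged sketch of a DNF filter (OR of ANDs) for jira/filter_issues.
# Operators come from the documented list; field names and the wrapper
# keys ("operator"/"conditions") are illustrative assumptions.
filter_formula = {
    "operator": "OR",
    "conditions": [
        {
            "operator": "AND",
            "conditions": [
                {"field": "project", "operator": "$stringExactlyMatches", "value": "TEST"},
                {"field": "status", "operator": "$stringIsIn", "value": ["To Do", "In Progress"]},
            ],
        },
    ],
}

# limit is documented as a string; it defaults to 10 when left blank.
arguments = {"filterFormula": filter_formula, "limit": "25"}
```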
<Accordion title="jira/search_by_jql">
@@ -115,12 +137,14 @@ uv add crewai-tools
"pageCursor": "cursor_string"
}
```
</Accordion>
<Accordion title="jira/update_issue_any">
**Description:** Update any issue in Jira. Use DESCRIBE_ACTION_SCHEMA to get the properties schema for this function.
**Parameters:** No specific parameters - use JIRA_DESCRIBE_ACTION_SCHEMA first to get the expected schema.
</Accordion>
<Accordion title="jira/describe_action_schema">
@@ -130,6 +154,7 @@ uv add crewai-tools
- `issueTypeId` (string, required): Issue Type ID.
- `projectKey` (string, required): Project key.
- `operation` (string, required): Operation Type value, for example CREATE_ISSUE or UPDATE_ISSUE.
</Accordion>
<Accordion title="jira/get_projects">
@@ -142,6 +167,7 @@ uv add crewai-tools
"pageCursor": "cursor_string"
}
```
</Accordion>
<Accordion title="jira/get_issue_types_by_project">
@@ -149,12 +175,14 @@ uv add crewai-tools
**Parameters:**
- `project` (string, required): Project key.
</Accordion>
<Accordion title="jira/get_issue_types">
**Description:** Get all Issue Types in Jira.
**Parameters:** None required.
</Accordion>
<Accordion title="jira/get_issue_status_by_project">
@@ -162,6 +190,7 @@ uv add crewai-tools
**Parameters:**
- `project` (string, required): Project key.
</Accordion>
<Accordion title="jira/get_all_assignees_by_project">
@@ -169,6 +198,7 @@ uv add crewai-tools
**Parameters:**
- `project` (string, required): Project key.
</Accordion>
</AccordionGroup>
@@ -332,31 +362,37 @@ crew.kickoff()
### Common Issues
**Permission Errors**
- Ensure your Jira account has the necessary permissions for the target projects
- Verify that the OAuth connection includes the required scopes for the Jira API
- Check if you have create/edit permissions for issues in the specified projects
**Invalid Project or Issue Keys**
- Double-check project keys and issue keys for correct format (e.g., "PROJ-123")
- Ensure projects exist and are accessible to your account
- Verify that issue keys reference existing issues
**Issue Type and Status Issues**
- Use JIRA_GET_ISSUE_TYPES_BY_PROJECT to get valid issue types for a project
- Use JIRA_GET_ISSUE_STATUS_BY_PROJECT to get valid statuses
- Ensure issue types and statuses are available in the target project
**JQL Query Problems**
- Test JQL queries in Jira's issue search before using them in API calls
- Ensure field names in JQL are spelled correctly and exist in your Jira instance
- Use proper JQL syntax for complex queries
**Custom Fields and Schema Issues**
- Use JIRA_DESCRIBE_ACTION_SCHEMA to get the correct schema for complex issue types
- Ensure custom field IDs are correct (e.g., "customfield_10001")
- Verify that custom fields are available in the target project and issue type
**Filter Formula Issues**
- Ensure filter formulas follow the correct JSON structure for disjunctive normal form
- Use valid field names that exist in your Jira configuration
- Test simple filters before building complex multi-condition queries
@@ -364,5 +400,6 @@ crew.kickoff()
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Jira integration setup or troubleshooting.
</Card>


@@ -33,6 +33,24 @@ Before using the Linear integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -54,6 +72,7 @@ uv add crewai-tools
"labelIds": ["a70bdf0f-530a-4887-857d-46151b52b47c"]
}
```
</Accordion>
<Accordion title="linear/update_issue">
@@ -74,6 +93,7 @@ uv add crewai-tools
"labelIds": ["a70bdf0f-530a-4887-857d-46151b52b47c"]
}
```
</Accordion>
<Accordion title="linear/get_issue_by_id">
@@ -81,6 +101,7 @@ uv add crewai-tools
**Parameters:**
- `issueId` (string, required): Issue ID - Specify the record ID of the issue to fetch (example: "90fbc706-18cd-42c9-ae66-6bd344cc8977").
</Accordion>
<Accordion title="linear/get_issue_by_issue_identifier">
@@ -88,6 +109,7 @@ uv add crewai-tools
**Parameters:**
- `externalId` (string, required): External ID - Specify the human-readable issue identifier of the issue to fetch (example: "ABC-1").
</Accordion>
<Accordion title="linear/search_issue">
@@ -115,6 +137,7 @@ uv add crewai-tools
```
Available fields: `title`, `number`, `project`, `createdAt`
Available operators: `$stringExactlyMatches`, `$stringDoesNotExactlyMatch`, `$stringIsIn`, `$stringIsNotIn`, `$stringStartsWith`, `$stringDoesNotStartWith`, `$stringEndsWith`, `$stringDoesNotEndWith`, `$stringContains`, `$stringDoesNotContain`, `$stringGreaterThan`, `$stringLessThan`, `$numberGreaterThanOrEqualTo`, `$numberLessThanOrEqualTo`, `$numberGreaterThan`, `$numberLessThan`, `$dateTimeAfter`, `$dateTimeBefore`
</Accordion>
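A `linear/search_issue` filter can mix string and date-time operators from the lists above. A sketch using only the documented fields and operators — the wrapper JSON keys are assumed for illustration:

```python
# Hedged sketch: fields (title, createdAt) and operators ($stringContains,
# $dateTimeAfter) are from the documented lists; the "operator"/"conditions"
# nesting is an illustrative assumption.
filter_formula = {
    "operator": "OR",
    "conditions": [
        {
            "operator": "AND",
            "conditions": [
                {"field": "title", "operator": "$stringContains", "value": "login bug"},
                # Date-time comparisons expect ISO 8601 timestamps.
                {"field": "createdAt", "operator": "$dateTimeAfter", "value": "2026-01-01T00:00:00Z"},
            ],
        },
    ],
}
```

Matching the operator to the field's data type (string vs. date-time here) avoids the type-mismatch failures described in the troubleshooting section below.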
<Accordion title="linear/delete_issue">
@@ -122,6 +145,7 @@ uv add crewai-tools
**Parameters:**
- `issueId` (string, required): Issue ID - Specify the record ID of the issue to delete (example: "90fbc706-18cd-42c9-ae66-6bd344cc8977").
</Accordion>
<Accordion title="linear/archive_issue">
@@ -129,6 +153,7 @@ uv add crewai-tools
**Parameters:**
- `issueId` (string, required): Issue ID - Specify the record ID of the issue to archive (example: "90fbc706-18cd-42c9-ae66-6bd344cc8977").
</Accordion>
<Accordion title="linear/create_sub_issue">
@@ -145,6 +170,7 @@ uv add crewai-tools
"lead": "linear_user_id"
}
```
</Accordion>
<Accordion title="linear/create_project">
@@ -167,6 +193,7 @@ uv add crewai-tools
"description": ""
}
```
</Accordion>
<Accordion title="linear/update_project">
@@ -183,6 +210,7 @@ uv add crewai-tools
"description": ""
}
```
</Accordion>
<Accordion title="linear/get_project_by_id">
@@ -190,6 +218,7 @@ uv add crewai-tools
**Parameters:**
- `projectId` (string, required): Project ID - Specify the Project ID of the project to fetch (example: "a6634484-6061-4ac7-9739-7dc5e52c796b").
</Accordion>
<Accordion title="linear/delete_project">
@@ -197,6 +226,7 @@ uv add crewai-tools
**Parameters:**
- `projectId` (string, required): Project ID - Specify the Project ID of the project to delete (example: "a6634484-6061-4ac7-9739-7dc5e52c796b").
</Accordion>
<Accordion title="linear/search_teams">
@@ -222,6 +252,7 @@ uv add crewai-tools
}
```
Available fields: `id`, `name`
</Accordion>
</AccordionGroup>
@@ -385,37 +416,44 @@ crew.kickoff()
### Common Issues
**Permission Errors**
- Ensure your Linear account has the necessary permissions for the target workspace
- Verify that the OAuth connection includes the required scopes for the Linear API
- Check if you have create/edit permissions for issues and projects in the workspace
**Invalid IDs and References**
- Double-check team IDs, issue IDs, and project IDs for correct UUID format
- Ensure referenced entities (teams, projects, cycles) exist and are accessible
- Verify that issue identifiers follow the correct format (e.g., "ABC-1")
**Team and Project Association Issues**
- Use LINEAR_SEARCH_TEAMS to get valid team IDs before creating issues or projects
- Ensure teams exist and are active in your workspace
- Verify that team IDs are properly formatted as UUIDs
**Issue Status and Priority Problems**
- Check that status IDs reference valid workflow states for the team
- Ensure priority values are within the valid range for your Linear configuration
- Verify that custom fields and labels exist before referencing them
**Date and Time Format Issues**
- Use ISO 8601 format for due dates and timestamps
- Ensure time zones are handled correctly for due date calculations
- Verify that date values are valid and in the future for due dates
**Search and Filter Issues** **Search and Filter Issues**
- Ensure search queries are properly formatted and not empty - Ensure search queries are properly formatted and not empty
- Use valid field names in filter formulas: `title`, `number`, `project`, `createdAt` - Use valid field names in filter formulas: `title`, `number`, `project`, `createdAt`
- Test simple filters before building complex multi-condition queries - Test simple filters before building complex multi-condition queries
- Verify that operator types match the data types of the fields being filtered - Verify that operator types match the data types of the fields being filtered
**Sub-issue Creation Problems** **Sub-issue Creation Problems**
- Ensure parent issue IDs are valid and accessible - Ensure parent issue IDs are valid and accessible
- Verify that the team ID for sub-issues matches or is compatible with the parent issue's team - Verify that the team ID for sub-issues matches or is compatible with the parent issue's team
- Check that parent issues are not already archived or deleted - Check that parent issues are not already archived or deleted
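The date guidance above can be sketched in a few lines. This is a minimal illustration of producing ISO 8601 values for due-date fields, not Linear's API itself; the seven-day offset is an arbitrary example.

```python
from datetime import datetime, timedelta, timezone

# Build a due date one week out, in the ISO 8601 shapes Linear expects.
due = datetime.now(timezone.utc) + timedelta(days=7)
due_date = due.strftime("%Y-%m-%d")            # date-only form, e.g. "2026-01-13"
timestamp = due.isoformat(timespec="seconds")  # full timestamp with offset
print(due_date)
print(timestamp)
```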
@@ -423,5 +461,6 @@ crew.kickoff()
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Linear integration setup or troubleshooting.
</Card>


@@ -33,6 +33,24 @@ Before using the Microsoft Excel integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
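As a minimal sketch (not part of crewai-tools), you can fail fast at startup when the variable is missing; the helper name and placeholder value here are illustrative.

```python
import os

def get_platform_token() -> str:
    # Hypothetical helper: read the Enterprise Token described above,
    # raising early with a clear message instead of failing mid-run.
    token = os.environ.get("CREWAI_PLATFORM_INTEGRATION_TOKEN")
    if not token:
        raise RuntimeError("CREWAI_PLATFORM_INTEGRATION_TOKEN is not set")
    return token

# Demo only: a real deployment would export the variable or use a .env file.
os.environ.setdefault("CREWAI_PLATFORM_INTEGRATION_TOKEN", "your_enterprise_token")
print(get_platform_token())
```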
## Available Actions
<AccordionGroup>
@@ -52,6 +70,7 @@ uv add crewai-tools
}
]
```
</Accordion>
<Accordion title="microsoft_excel/get_workbooks">
@@ -63,6 +82,7 @@ uv add crewai-tools
- `expand` (string, optional): Expand related resources inline
- `top` (integer, optional): Number of items to return. Minimum: 1, Maximum: 999
- `orderby` (string, optional): Order results by specified properties
</Accordion>
<Accordion title="microsoft_excel/get_worksheets">
@@ -75,6 +95,7 @@ uv add crewai-tools
- `expand` (string, optional): Expand related resources inline
- `top` (integer, optional): Number of items to return. Minimum: 1, Maximum: 999
- `orderby` (string, optional): Order results by specified properties
</Accordion>
<Accordion title="microsoft_excel/create_worksheet">
@@ -83,6 +104,7 @@ uv add crewai-tools
**Parameters:**
- `file_id` (string, required): The ID of the Excel file
- `name` (string, required): Name of the new worksheet
</Accordion>
<Accordion title="microsoft_excel/get_range_data">
@@ -92,6 +114,7 @@ uv add crewai-tools
- `file_id` (string, required): The ID of the Excel file
- `worksheet_name` (string, required): Name of the worksheet
- `range` (string, required): Range address (e.g., 'A1:C10')
</Accordion>
<Accordion title="microsoft_excel/update_range_data">
@@ -109,6 +132,7 @@ uv add crewai-tools
["Jane", 25, "Los Angeles"]
]
```
</Accordion>
<Accordion title="microsoft_excel/add_table">
@@ -119,6 +143,7 @@ uv add crewai-tools
- `worksheet_name` (string, required): Name of the worksheet
- `range` (string, required): Range for the table (e.g., 'A1:D10')
- `has_headers` (boolean, optional): Whether the first row contains headers. Default: true
</Accordion>
<Accordion title="microsoft_excel/get_tables">
@@ -127,6 +152,7 @@ uv add crewai-tools
**Parameters:**
- `file_id` (string, required): The ID of the Excel file
- `worksheet_name` (string, required): Name of the worksheet
</Accordion>
<Accordion title="microsoft_excel/add_table_row">
@@ -140,6 +166,7 @@ uv add crewai-tools
```json
["John Doe", 35, "Manager", "Sales"]
```
</Accordion>
<Accordion title="microsoft_excel/create_chart">
@@ -151,6 +178,7 @@ uv add crewai-tools
- `chart_type` (string, required): Type of chart (e.g., 'ColumnClustered', 'Line', 'Pie')
- `source_data` (string, required): Range of data for the chart (e.g., 'A1:B10')
- `series_by` (string, optional): How to interpret the data ('Auto', 'Columns', or 'Rows'). Default: Auto
</Accordion>
<Accordion title="microsoft_excel/get_cell">
@@ -161,6 +189,7 @@ uv add crewai-tools
- `worksheet_name` (string, required): Name of the worksheet
- `row` (integer, required): Row number (0-based)
- `column` (integer, required): Column number (0-based)
</Accordion>
<Accordion title="microsoft_excel/get_used_range">
@@ -169,6 +198,7 @@ uv add crewai-tools
**Parameters:**
- `file_id` (string, required): The ID of the Excel file
- `worksheet_name` (string, required): Name of the worksheet
</Accordion>
<Accordion title="microsoft_excel/list_charts">
@@ -177,6 +207,7 @@ uv add crewai-tools
**Parameters:**
- `file_id` (string, required): The ID of the Excel file
- `worksheet_name` (string, required): Name of the worksheet
</Accordion>
<Accordion title="microsoft_excel/delete_worksheet">
@@ -185,6 +216,7 @@ uv add crewai-tools
**Parameters:**
- `file_id` (string, required): The ID of the Excel file
- `worksheet_name` (string, required): Name of the worksheet to delete
</Accordion>
<Accordion title="microsoft_excel/delete_table">
@@ -194,6 +226,7 @@ uv add crewai-tools
- `file_id` (string, required): The ID of the Excel file
- `worksheet_name` (string, required): Name of the worksheet
- `table_name` (string, required): Name of the table to delete
</Accordion>
<Accordion title="microsoft_excel/list_names">
@@ -201,6 +234,7 @@ uv add crewai-tools
**Parameters:**
- `file_id` (string, required): The ID of the Excel file
</Accordion>
</AccordionGroup>
@@ -405,36 +439,43 @@ crew.kickoff()
### Common Issues
**Permission Errors**
- Ensure your Microsoft account has appropriate permissions for Excel and OneDrive/SharePoint
- Verify that the OAuth connection includes required scopes (Files.Read.All, Files.ReadWrite.All)
- Check that you have access to the specific workbooks you're trying to modify
**File ID and Path Issues**
- Verify that file IDs are correct and files exist in your OneDrive or SharePoint
- Ensure file paths are properly formatted when creating new workbooks
- Check that workbook files have the correct .xlsx extension
**Worksheet and Range Issues**
- Verify that worksheet names exist in the specified workbook
- Ensure range addresses are properly formatted (e.g., 'A1:C10')
- Check that ranges don't exceed worksheet boundaries
**Data Format Issues**
- Ensure data values are properly formatted for Excel (strings, numbers, integers)
- Verify that 2D arrays for ranges have consistent row and column counts
- Check that table data includes proper headers when has_headers is true
**Chart Creation Issues**
- Verify that chart types are supported (ColumnClustered, Line, Pie, etc.)
- Ensure source data ranges contain appropriate data for the chart type
- Check that the source data range exists and contains data
**Table Management Issues**
- Ensure table names are unique within worksheets
- Verify that table ranges don't overlap with existing tables
- Check that new row data matches the table's column structure
**Cell and Range Operations**
- Verify that row and column indices are 0-based for cell operations
- Ensure ranges contain data when using get_used_range
- Check that named ranges exist before referencing them
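The 2D-array consistency check above can be automated before calling update_range_data. This is a hypothetical helper, not part of crewai-tools; the sample rows mirror the earlier example payload.

```python
def validate_range_values(values):
    # Every row must have the same column count, or the range update
    # will not match a rectangular range like 'A1:C2'.
    if not values:
        raise ValueError("values must contain at least one row")
    width = len(values[0])
    for i, row in enumerate(values):
        if len(row) != width:
            raise ValueError(f"row {i} has {len(row)} columns, expected {width}")
    return len(values), width

rows, cols = validate_range_values(
    [["John", 30, "New York"], ["Jane", 25, "Los Angeles"]]
)
print(rows, cols)  # 2 3
```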
@@ -442,5 +483,6 @@ crew.kickoff()
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Microsoft Excel integration setup or troubleshooting.
</Card>


@@ -33,6 +33,24 @@ Before using the Microsoft OneDrive integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -43,6 +61,7 @@ uv add crewai-tools
- `top` (integer, optional): Number of items to retrieve (max 1000). Default is `50`.
- `orderby` (string, optional): Order by field (e.g., "name asc", "lastModifiedDateTime desc"). Default is "name asc".
- `filter` (string, optional): OData filter expression.
</Accordion>
<Accordion title="microsoft_onedrive/get_file_info">
@@ -50,6 +69,7 @@ uv add crewai-tools
**Parameters:**
- `item_id` (string, required): The ID of the file or folder.
</Accordion>
<Accordion title="microsoft_onedrive/download_file">
@@ -57,6 +77,7 @@ uv add crewai-tools
**Parameters:**
- `item_id` (string, required): The ID of the file to download.
</Accordion>
<Accordion title="microsoft_onedrive/upload_file">
@@ -65,6 +86,7 @@ uv add crewai-tools
**Parameters:**
- `file_name` (string, required): Name of the file to upload.
- `content` (string, required): Base64 encoded file content.
</Accordion>
<Accordion title="microsoft_onedrive/create_folder">
@@ -72,6 +94,7 @@ uv add crewai-tools
**Parameters:**
- `folder_name` (string, required): Name of the folder to create.
</Accordion>
<Accordion title="microsoft_onedrive/delete_item">
@@ -79,6 +102,7 @@ uv add crewai-tools
**Parameters:**
- `item_id` (string, required): The ID of the file or folder to delete.
</Accordion>
<Accordion title="microsoft_onedrive/copy_item">
@@ -88,6 +112,7 @@ uv add crewai-tools
- `item_id` (string, required): The ID of the file or folder to copy.
- `parent_id` (string, optional): The ID of the destination folder (optional, defaults to root).
- `new_name` (string, optional): New name for the copied item (optional).
</Accordion>
<Accordion title="microsoft_onedrive/move_item">
@@ -97,6 +122,7 @@ uv add crewai-tools
- `item_id` (string, required): The ID of the file or folder to move.
- `parent_id` (string, required): The ID of the destination folder.
- `new_name` (string, optional): New name for the item (optional).
</Accordion>
<Accordion title="microsoft_onedrive/search_files">
@@ -105,6 +131,7 @@ uv add crewai-tools
**Parameters:**
- `query` (string, required): Search query string.
- `top` (integer, optional): Number of results to return (max 1000). Default is `50`.
</Accordion>
<Accordion title="microsoft_onedrive/share_item">
@@ -114,6 +141,7 @@ uv add crewai-tools
- `item_id` (string, required): The ID of the file or folder to share.
- `type` (string, optional): Type of sharing link. Enum: `view`, `edit`, `embed`. Default is `view`.
- `scope` (string, optional): Scope of the sharing link. Enum: `anonymous`, `organization`. Default is `anonymous`.
</Accordion>
<Accordion title="microsoft_onedrive/get_thumbnails">
@@ -121,6 +149,7 @@ uv add crewai-tools
**Parameters:**
- `item_id` (string, required): The ID of the file.
</Accordion>
</AccordionGroup>
@@ -216,29 +245,35 @@ crew.kickoff()
### Common Issues
**Authentication Errors**
- Ensure your Microsoft account has the necessary permissions for file access (e.g., `Files.Read`, `Files.ReadWrite`).
- Verify that the OAuth connection includes all required scopes.
**File Upload Issues**
- Ensure `file_name` and `content` are provided for file uploads.
- Content must be Base64 encoded for binary files.
- Check that you have write permissions to OneDrive.
**File/Folder ID Issues**
- Double-check item IDs for correctness when accessing specific files or folders.
- Item IDs are returned by other operations like `list_files` or `search_files`.
- Ensure the referenced items exist and are accessible.
**Search and Filter Operations**
- Use appropriate search terms for `search_files` operations.
- For `filter` parameters, use proper OData syntax.
**File Operations (Copy/Move)**
- For `move_item`, ensure both `item_id` and `parent_id` are provided.
- For `copy_item`, only `item_id` is required; `parent_id` defaults to root if not specified.
- Verify that destination folders exist and are accessible.
**Sharing Link Creation**
- Ensure the item exists before creating sharing links.
- Choose appropriate `type` and `scope` based on your sharing requirements.
- `anonymous` scope allows access without sign-in; `organization` requires organizational account.
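The Base64 requirement for `upload_file` noted above is a one-liner with the standard library; this sketch uses made-up sample bytes.

```python
import base64

# upload_file takes `content` as Base64 text, so encode raw bytes first.
raw = b"illustrative file contents"
content = base64.b64encode(raw).decode("ascii")
assert base64.b64decode(content) == raw  # encoding round-trips losslessly
print(content)
```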
@@ -246,5 +281,6 @@ crew.kickoff()
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Microsoft OneDrive integration setup or troubleshooting.
</Card>


@@ -33,6 +33,24 @@ Before using the Microsoft Outlook integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -46,6 +64,7 @@ uv add crewai-tools
- `orderby` (string, optional): Order by field (e.g., "receivedDateTime desc"). Default is "receivedDateTime desc".
- `select` (string, optional): Select specific properties to return.
- `expand` (string, optional): Expand related resources inline.
</Accordion>
<Accordion title="microsoft_outlook/send_email">
@@ -61,6 +80,7 @@ uv add crewai-tools
- `importance` (string, optional): Message importance level. Enum: `low`, `normal`, `high`. Default is `normal`.
- `reply_to` (array, optional): Array of reply-to email addresses.
- `save_to_sent_items` (boolean, optional): Whether to save the message to Sent Items folder. Default is `true`.
</Accordion>
<Accordion title="microsoft_outlook/get_calendar_events">
@@ -71,6 +91,7 @@ uv add crewai-tools
- `skip` (integer, optional): Number of events to skip. Default is `0`.
- `filter` (string, optional): OData filter expression (e.g., "start/dateTime ge '2024-01-01T00:00:00Z'").
- `orderby` (string, optional): Order by field (e.g., "start/dateTime asc"). Default is "start/dateTime asc".
</Accordion>
<Accordion title="microsoft_outlook/create_calendar_event">
@@ -84,6 +105,7 @@ uv add crewai-tools
- `timezone` (string, optional): Time zone (e.g., 'Pacific Standard Time'). Default is `UTC`.
- `location` (string, optional): Event location.
- `attendees` (array, optional): Array of attendee email addresses.
</Accordion>
<Accordion title="microsoft_outlook/get_contacts">
@@ -94,6 +116,7 @@ uv add crewai-tools
- `skip` (integer, optional): Number of contacts to skip. Default is `0`.
- `filter` (string, optional): OData filter expression.
- `orderby` (string, optional): Order by field (e.g., "displayName asc"). Default is "displayName asc".
</Accordion>
<Accordion title="microsoft_outlook/create_contact">
@@ -108,6 +131,7 @@ uv add crewai-tools
- `homePhones` (array, optional): Array of home phone numbers.
- `jobTitle` (string, optional): Contact's job title.
- `companyName` (string, optional): Contact's company name.
</Accordion>
</AccordionGroup>
@@ -203,30 +227,36 @@ crew.kickoff()
### Common Issues
**Authentication Errors**
- Ensure your Microsoft account has the necessary permissions for mail, calendar, and contact access.
- Required scopes include: `Mail.Read`, `Mail.Send`, `Calendars.Read`, `Calendars.ReadWrite`, `Contacts.Read`, `Contacts.ReadWrite`.
- Verify that the OAuth connection includes all required scopes.
**Email Sending Issues**
- Ensure `to_recipients`, `subject`, and `body` are provided for `send_email`.
- Check that email addresses are properly formatted.
- Verify that the account has `Mail.Send` permissions.
**Calendar Event Creation**
- Ensure `subject`, `start_datetime`, and `end_datetime` are provided.
- Use proper ISO 8601 format for datetime fields (e.g., '2024-01-20T10:00:00').
- Verify timezone settings if events appear at incorrect times.
**Contact Management**
- For `create_contact`, ensure `displayName` is provided as it's required.
- When providing `emailAddresses`, use the proper object format with `address` and `name` properties.
**Search and Filter Issues**
- Use proper OData syntax for `filter` parameters.
- For date filters, use ISO 8601 format (e.g., "receivedDateTime ge '2024-01-01T00:00:00Z'").
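The calendar-event requirements above can be sketched as a payload builder. The key names mirror the documented `create_calendar_event` parameters; the meeting itself is invented for illustration.

```python
from datetime import datetime, timedelta

# Required fields in the ISO 8601 shape the troubleshooting notes call for.
start = datetime(2024, 1, 20, 10, 0)
end = start + timedelta(hours=1)
event = {
    "subject": "Project sync",
    "start_datetime": start.isoformat(),  # '2024-01-20T10:00:00'
    "end_datetime": end.isoformat(),      # '2024-01-20T11:00:00'
    "timezone": "UTC",
}
print(event["start_datetime"], event["end_datetime"])
```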
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Microsoft Outlook integration setup or troubleshooting.
</Card>


@@ -33,6 +33,24 @@ Before using the Microsoft SharePoint integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -47,6 +65,7 @@ uv add crewai-tools
- `top` (integer, optional): Number of items to return. Minimum: 1, Maximum: 999
- `skip` (integer, optional): Number of items to skip. Minimum: 0
- `orderby` (string, optional): Order results by specified properties (e.g., 'displayName desc')
</Accordion>
<Accordion title="microsoft_sharepoint/get_site">
@@ -56,6 +75,7 @@ uv add crewai-tools
- `site_id` (string, required): The ID of the SharePoint site
- `select` (string, optional): Select specific properties to return (e.g., 'displayName,id,webUrl,drives')
- `expand` (string, optional): Expand related resources inline (e.g., 'drives,lists')
</Accordion>
<Accordion title="microsoft_sharepoint/get_site_lists">
@@ -63,6 +83,7 @@ uv add crewai-tools
**Parameters:**
- `site_id` (string, required): The ID of the SharePoint site
</Accordion>

<Accordion title="microsoft_sharepoint/get_list">
@@ -71,6 +92,7 @@ uv add crewai-tools
**Parameters:**
- `site_id` (string, required): The ID of the SharePoint site
- `list_id` (string, required): The ID of the list
</Accordion>

<Accordion title="microsoft_sharepoint/get_list_items">
@@ -80,6 +102,7 @@ uv add crewai-tools
- `site_id` (string, required): The ID of the SharePoint site
- `list_id` (string, required): The ID of the list
- `expand` (string, optional): Expand related data (e.g., 'fields')
</Accordion>

<Accordion title="microsoft_sharepoint/create_list_item">
@@ -96,6 +119,7 @@ uv add crewai-tools
  "Status": "Active"
}
```
</Accordion>

<Accordion title="microsoft_sharepoint/update_list_item">
@@ -112,6 +136,7 @@ uv add crewai-tools
  "Status": "Completed"
}
```
</Accordion>

<Accordion title="microsoft_sharepoint/delete_list_item">
@@ -121,6 +146,7 @@ uv add crewai-tools
- `site_id` (string, required): The ID of the SharePoint site
- `list_id` (string, required): The ID of the list
- `item_id` (string, required): The ID of the item to delete
</Accordion>

<Accordion title="microsoft_sharepoint/upload_file_to_library">
@@ -130,6 +156,7 @@ uv add crewai-tools
- `site_id` (string, required): The ID of the SharePoint site
- `file_path` (string, required): The destination path for the uploaded file (e.g., 'folder/filename.txt')
- `content` (string, required): The file content to upload
</Accordion>

<Accordion title="microsoft_sharepoint/get_drive_items">
@@ -137,6 +164,7 @@ uv add crewai-tools
**Parameters:**
- `site_id` (string, required): The ID of the SharePoint site
</Accordion>

<Accordion title="microsoft_sharepoint/delete_drive_item">
@@ -145,6 +173,7 @@ uv add crewai-tools
**Parameters:**
- `site_id` (string, required): The ID of the SharePoint site
- `item_id` (string, required): The ID of the file or folder to delete
</Accordion>
</AccordionGroup>
@@ -347,36 +376,43 @@ crew.kickoff()
### Common Issues

**Permission Errors**
- Ensure your Microsoft account has appropriate permissions for SharePoint sites
- Verify that the OAuth connection includes required scopes (Sites.Read.All, Sites.ReadWrite.All)
- Check that you have access to the specific sites and lists you're trying to access

**Site and List ID Issues**
- Verify that site IDs and list IDs are correct and properly formatted
- Ensure that sites and lists exist and are accessible to your account
- Use the get_sites and get_site_lists actions to discover valid IDs

**Field and Schema Issues**
- Ensure field names match the SharePoint list schema exactly
- Verify that required fields are included when creating or updating list items
- Check that field types and values are compatible with the list column definitions

**File Upload Issues**
- Ensure file paths are properly formatted and don't contain invalid characters
- Verify that you have write permissions to the target document library
- Check that file content is properly encoded for upload
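For plain text uploads, reading the file as UTF-8 before passing it as `content` is usually enough. A sketch with a hypothetical helper (binary files may need a different strategy, such as base64, depending on the action's expectations):

```python
from pathlib import Path

def read_upload_content(path: str) -> str:
    """Read a local text file for use as the `content` parameter.

    Illustrative helper; for non-text files the encoding strategy
    would need to match what the upload action expects.
    """
    return Path(path).read_text(encoding="utf-8")
```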
**OData Query Issues**
- Use proper OData syntax for filter, select, expand, and orderby parameters
- Verify that property names used in queries exist in the target resources
- Test simple queries before building complex filter expressions
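Building the query incrementally, starting from a bare `select`/`orderby` and only then layering on a `filter`, makes it easier to spot which clause breaks. A sketch using standard URL encoding (the parameter values are illustrative):

```python
from urllib.parse import urlencode

# Start simple: select a few properties and an ordering.
params = {
    "select": "displayName,id,webUrl",
    "orderby": "displayName desc",
    "top": 10,
}
query = urlencode(params)

# Only once the simple query works, add a filter expression.
params["filter"] = "startswith(displayName,'Eng')"
full_query = urlencode(params)
```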
**Pagination and Performance**
- Use top and skip parameters appropriately for large result sets
- Implement proper pagination for lists with many items
- Consider using select parameters to return only needed properties
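A `top`/`skip` pagination loop can be sketched like this; `fetch_page` is a hypothetical stand-in for a paged action such as `get_list_items`:

```python
def fetch_page(top: int, skip: int) -> list[str]:
    """Hypothetical stand-in for a paged action like get_list_items."""
    data = [f"item-{i}" for i in range(25)]  # simulated list contents
    return data[skip : skip + top]

def fetch_all_items(page_size: int = 10) -> list[str]:
    """Collect every item by advancing skip until a short page arrives."""
    items: list[str] = []
    skip = 0
    while True:
        page = fetch_page(top=page_size, skip=skip)
        items.extend(page)
        if len(page) < page_size:  # short (or empty) page: no more data
            break
        skip += page_size
    return items
```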
**Document Library Operations**
- Ensure you have proper permissions for document library operations
- Verify that drive item IDs are correct when deleting files or folders
- Check that file paths don't conflict with existing content
@@ -384,5 +420,6 @@ crew.kickoff()
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
  Contact our support team for assistance with Microsoft SharePoint integration setup or troubleshooting.
</Card>


@@ -33,6 +33,24 @@ Before using the Microsoft Teams integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions

<AccordionGroup>
@@ -41,6 +59,7 @@ uv add crewai-tools
**Parameters:**
- No parameters required.
</Accordion>

<Accordion title="microsoft_teams/get_channels">
@@ -48,6 +67,7 @@ uv add crewai-tools
**Parameters:**
- `team_id` (string, required): The ID of the team.
</Accordion>

<Accordion title="microsoft_teams/send_message">
@@ -58,6 +78,7 @@ uv add crewai-tools
- `channel_id` (string, required): The ID of the channel.
- `message` (string, required): The message content.
- `content_type` (string, optional): Content type (html or text). Enum: `html`, `text`. Default is `text`.
</Accordion>

<Accordion title="microsoft_teams/get_messages">
@@ -67,6 +88,7 @@ uv add crewai-tools
- `team_id` (string, required): The ID of the team.
- `channel_id` (string, required): The ID of the channel.
- `top` (integer, optional): Number of messages to retrieve (max 50). Default is `20`.
</Accordion>

<Accordion title="microsoft_teams/create_meeting">
@@ -76,6 +98,7 @@ uv add crewai-tools
- `subject` (string, required): Meeting subject/title.
- `startDateTime` (string, required): Meeting start time (ISO 8601 format with timezone).
- `endDateTime` (string, required): Meeting end time (ISO 8601 format with timezone).
</Accordion>

<Accordion title="microsoft_teams/search_online_meetings_by_join_url">
@@ -83,6 +106,7 @@ uv add crewai-tools
**Parameters:**
- `join_web_url` (string, required): The join web URL of the meeting to search for.
</Accordion>
</AccordionGroup>
@@ -178,35 +202,42 @@ crew.kickoff()
### Common Issues

**Authentication Errors**
- Ensure your Microsoft account has the necessary permissions for Teams access.
- Required scopes include: `Team.ReadBasic.All`, `Channel.ReadBasic.All`, `ChannelMessage.Send`, `ChannelMessage.Read.All`, `OnlineMeetings.ReadWrite`, `OnlineMeetings.Read`.
- Verify that the OAuth connection includes all required scopes.

**Team and Channel Access**
- Ensure you are a member of the teams you're trying to access.
- Double-check team IDs and channel IDs for correctness.
- Team and channel IDs can be obtained using the `get_teams` and `get_channels` actions.

**Message Sending Issues**
- Ensure `team_id`, `channel_id`, and `message` are provided for `send_message`.
- Verify that you have permissions to send messages to the specified channel.
- Choose the appropriate `content_type` (text or html) based on your message format.

**Meeting Creation**
- Ensure `subject`, `startDateTime`, and `endDateTime` are provided.
- Use proper ISO 8601 format with timezone for datetime fields (e.g., '2024-01-20T10:00:00-08:00').
- Verify that the meeting times are in the future.
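Timezone-aware ISO 8601 strings can be produced directly with Python's standard library; the offset and times below are just examples matching the format above:

```python
from datetime import datetime, timedelta, timezone

tz = timezone(timedelta(hours=-8))  # example fixed offset (e.g., PST)
start = datetime(2024, 1, 20, 10, 0, tzinfo=tz)
end = start + timedelta(minutes=30)

start_datetime = start.isoformat()  # ISO 8601 with timezone offset
end_datetime = end.isoformat()
```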
**Message Retrieval Limitations**
- The `get_messages` action can retrieve a maximum of 50 messages per request.
- Messages are returned in reverse chronological order (newest first).

**Meeting Search**
- For `search_online_meetings_by_join_url`, ensure the join URL is exact and properly formatted.
- The URL should be the complete Teams meeting join URL.
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
  Contact our support team for assistance with Microsoft Teams integration setup or troubleshooting.
</Card>


@@ -33,6 +33,24 @@ Before using the Microsoft Word integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions

<AccordionGroup>
@@ -45,6 +63,7 @@ uv add crewai-tools
- `expand` (string, optional): Expand related resources inline.
- `top` (integer, optional): Number of items to return (min 1, max 999).
- `orderby` (string, optional): Order results by specified properties.
</Accordion>

<Accordion title="microsoft_word/create_text_document">
@@ -53,6 +72,7 @@ uv add crewai-tools
**Parameters:**
- `file_name` (string, required): Name of the text document (should end with .txt).
- `content` (string, optional): Text content for the document. Default is "This is a new text document created via API."
</Accordion>

<Accordion title="microsoft_word/get_document_content">
@@ -60,6 +80,7 @@ uv add crewai-tools
**Parameters:**
- `file_id` (string, required): The ID of the document.
</Accordion>

<Accordion title="microsoft_word/get_document_properties">
@@ -67,6 +88,7 @@ uv add crewai-tools
**Parameters:**
- `file_id` (string, required): The ID of the document.
</Accordion>

<Accordion title="microsoft_word/delete_document">
@@ -74,6 +96,7 @@ uv add crewai-tools
**Parameters:**
- `file_id` (string, required): The ID of the document to delete.
</Accordion>
</AccordionGroup>
@@ -169,24 +192,29 @@ crew.kickoff()
### Common Issues

**Authentication Errors**
- Ensure your Microsoft account has the necessary permissions for file access (e.g., `Files.Read.All`, `Files.ReadWrite.All`).
- Verify that the OAuth connection includes all required scopes.

**File Creation Issues**
- When creating text documents, ensure the `file_name` ends with the `.txt` extension.
- Verify that you have write permissions to the target location (OneDrive/SharePoint).

**Document Access Issues**
- Double-check document IDs for correctness when accessing specific documents.
- Ensure the referenced documents exist and are accessible.
- Note that this integration works best with text files (.txt) for content operations.

**Content Retrieval Limitations**
- The `get_document_content` action works best with text files (.txt).
- For complex Word documents (.docx), consider using the document properties action to get metadata.
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
  Contact our support team for assistance with Microsoft Word integration setup or troubleshooting.
</Card>


@@ -33,6 +33,24 @@ Before using the Notion integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions

<AccordionGroup>
@@ -42,6 +60,7 @@ uv add crewai-tools
**Parameters:**
- `page_size` (integer, optional): Number of items returned in the response. Minimum: 1, Maximum: 100, Default: 100
- `start_cursor` (string, optional): Cursor for pagination. Return results after this cursor.
</Accordion>

<Accordion title="notion/get_user">
@@ -49,6 +68,7 @@ uv add crewai-tools
**Parameters:**
- `user_id` (string, required): The ID of the user to retrieve.
</Accordion>

<Accordion title="notion/create_comment">
@@ -80,6 +100,7 @@ uv add crewai-tools
  }
]
```
</Accordion>
</AccordionGroup>
@@ -238,26 +259,31 @@ crew.kickoff()
### Common Issues

**Permission Errors**
- Ensure your Notion account has appropriate permissions to read user information
- Verify that the OAuth connection includes required scopes for user access and comment creation
- Check that you have permissions to comment on the target pages or discussions

**User Access Issues**
- Ensure you have workspace admin permissions to list all users
- Verify that user IDs are correct and users exist in the workspace
- Check that the workspace allows API access to user information

**Comment Creation Issues**
- Verify that page IDs or discussion IDs are correct and accessible
- Ensure that rich text content follows Notion's API format specifications
- Check that you have comment permissions on the target pages or discussions
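For reference, rich text in Notion's public API takes roughly this shape; a minimal sketch only, so consult Notion's API documentation for the full schema:

```python
import json

# Minimal rich text array as used for comment content; field names
# follow the public Notion API format, content is illustrative.
rich_text = [
    {
        "type": "text",
        "text": {"content": "Thanks for the update - reviewing now."},
    }
]
payload = json.dumps(rich_text)
```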
**API Rate Limits**
- Be mindful of Notion's API rate limits when making multiple requests
- Implement appropriate delays between requests if needed
- Consider pagination for large user lists

**Parent Object Specification**
- Ensure the parent object type is correctly specified (page_id or discussion_id)
- Verify that the parent page or discussion exists and is accessible
- Check that the parent object ID format is correct
@@ -265,5 +291,6 @@ crew.kickoff()
### Getting Help

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
  Contact our support team for assistance with Notion integration setup or troubleshooting.
</Card>


@@ -17,6 +17,40 @@ Before using the Salesforce integration, ensure you have:
- A Salesforce account with appropriate permissions
- Connected your Salesforce account through the [Integrations page](https://app.crewai.com/integrations)
## Setting Up Salesforce Integration
### 1. Connect Your Salesforce Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Salesforce** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for CRM and sales management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Tools

### **Record Management**
@@ -33,6 +67,7 @@ Before using the Salesforce integration, ensure you have:
- `Title` (string, optional): Title of the contact, such as CEO or Vice President
- `Description` (string, optional): A description of the Contact
- `additionalFields` (object, optional): Additional fields in JSON format for custom Contact fields
</Accordion>

<Accordion title="salesforce/create_record_lead">
@@ -49,6 +84,7 @@ Before using the Salesforce integration, ensure you have:
- `Status` (string, optional): Lead Status - Use Connect Portal Workflow Settings to select Lead Status
- `Description` (string, optional): A description of the Lead
- `additionalFields` (object, optional): Additional fields in JSON format for custom Lead fields
</Accordion>

<Accordion title="salesforce/create_record_opportunity">
@@ -64,6 +100,7 @@ Before using the Salesforce integration, ensure you have:
- `OwnerId` (string, optional): The Salesforce user assigned to work on this Opportunity
- `NextStep` (string, optional): Description of next task in closing Opportunity
- `additionalFields` (object, optional): Additional fields in JSON format for custom Opportunity fields
</Accordion>

<Accordion title="salesforce/create_record_task">
@@ -82,6 +119,7 @@ Before using the Salesforce integration, ensure you have:
- `isReminderSet` (boolean, optional): Whether reminder is set
- `reminderDateTime` (string, optional): Reminder Date/Time in ISO format
- `additionalFields` (object, optional): Additional fields in JSON format for custom Task fields
</Accordion>

<Accordion title="salesforce/create_record_account">
@@ -94,12 +132,14 @@ Before using the Salesforce integration, ensure you have:
- `Phone` (string, optional): Phone number
- `Description` (string, optional): Account description
- `additionalFields` (object, optional): Additional fields in JSON format for custom Account fields
</Accordion>

<Accordion title="salesforce/create_record_any">
**Description:** Create a record of any object type in Salesforce.

**Note:** This is a flexible tool for creating records of custom or unknown object types.
</Accordion>
</AccordionGroup>
@@ -118,6 +158,7 @@ Before using the Salesforce integration, ensure you have:
- `Title` (string, optional): Title of the contact
- `Description` (string, optional): A description of the Contact
- `additionalFields` (object, optional): Additional fields in JSON format for custom Contact fields
</Accordion>

<Accordion title="salesforce/update_record_lead">
@@ -135,6 +176,7 @@ Before using the Salesforce integration, ensure you have:
- `Status` (string, optional): Lead Status
- `Description` (string, optional): A description of the Lead
- `additionalFields` (object, optional): Additional fields in JSON format for custom Lead fields
</Accordion>

<Accordion title="salesforce/update_record_opportunity">
@@ -151,6 +193,7 @@ Before using the Salesforce integration, ensure you have:
- `OwnerId` (string, optional): The Salesforce user assigned to work on this Opportunity
- `NextStep` (string, optional): Description of next task in closing Opportunity
- `additionalFields` (object, optional): Additional fields in JSON format for custom Opportunity fields
</Accordion>

<Accordion title="salesforce/update_record_task">
@@ -169,6 +212,7 @@ Before using the Salesforce integration, ensure you have:
- `isReminderSet` (boolean, optional): Whether reminder is set
- `reminderDateTime` (string, optional): Reminder Date/Time in ISO format
- `additionalFields` (object, optional): Additional fields in JSON format for custom Task fields
</Accordion>

<Accordion title="salesforce/update_record_account">
@@ -182,12 +226,14 @@ Before using the Salesforce integration, ensure you have:
- `Phone` (string, optional): Phone number
- `Description` (string, optional): Account description
- `additionalFields` (object, optional): Additional fields in JSON format for custom Account fields
</Accordion>

<Accordion title="salesforce/update_record_any">
**Description:** Update a record of any object type in Salesforce.

**Note:** This is a flexible tool for updating records of custom or unknown object types.
</Accordion>
</AccordionGroup>
@@ -199,6 +245,7 @@ Before using the Salesforce integration, ensure you have:
**Parameters:**
- `recordId` (string, required): Record ID of the Contact
</Accordion>
<Accordion title="salesforce/get_record_by_id_lead">
@@ -206,6 +253,7 @@ Before using the Salesforce integration, ensure you have:
**Parameters:**
- `recordId` (string, required): Record ID of the Lead
</Accordion>
<Accordion title="salesforce/get_record_by_id_opportunity">
@@ -213,6 +261,7 @@ Before using the Salesforce integration, ensure you have:
**Parameters:**
- `recordId` (string, required): Record ID of the Opportunity
</Accordion>
<Accordion title="salesforce/get_record_by_id_task">
@@ -220,6 +269,7 @@ Before using the Salesforce integration, ensure you have:
**Parameters:**
- `recordId` (string, required): Record ID of the Task
</Accordion>
<Accordion title="salesforce/get_record_by_id_account">
@@ -227,6 +277,7 @@ Before using the Salesforce integration, ensure you have:
**Parameters:**
- `recordId` (string, required): Record ID of the Account
</Accordion>
<Accordion title="salesforce/get_record_by_id_any">
@@ -235,6 +286,7 @@ Before using the Salesforce integration, ensure you have:
**Parameters:**
- `recordType` (string, required): Record Type (e.g., "CustomObject__c")
- `recordId` (string, required): Record ID
</Accordion>
</AccordionGroup>
@@ -250,6 +302,7 @@ Before using the Salesforce integration, ensure you have:
- `sortDirection` (string, optional): Sort direction - Options: ASC, DESC
- `includeAllFields` (boolean, optional): Include all fields in results
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="salesforce/search_records_lead">
@@ -261,6 +314,7 @@ Before using the Salesforce integration, ensure you have:
- `sortDirection` (string, optional): Sort direction - Options: ASC, DESC
- `includeAllFields` (boolean, optional): Include all fields in results
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="salesforce/search_records_opportunity">
@@ -272,6 +326,7 @@ Before using the Salesforce integration, ensure you have:
- `sortDirection` (string, optional): Sort direction - Options: ASC, DESC
- `includeAllFields` (boolean, optional): Include all fields in results
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="salesforce/search_records_task">
@@ -283,6 +338,7 @@ Before using the Salesforce integration, ensure you have:
- `sortDirection` (string, optional): Sort direction - Options: ASC, DESC
- `includeAllFields` (boolean, optional): Include all fields in results
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="salesforce/search_records_account">
@@ -294,6 +350,7 @@ Before using the Salesforce integration, ensure you have:
- `sortDirection` (string, optional): Sort direction - Options: ASC, DESC
- `includeAllFields` (boolean, optional): Include all fields in results
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="salesforce/search_records_any">
@@ -304,6 +361,7 @@ Before using the Salesforce integration, ensure you have:
- `filterFormula` (string, optional): Filter search criteria
- `includeAllFields` (boolean, optional): Include all fields in results
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
</AccordionGroup>
@@ -316,6 +374,7 @@ Before using the Salesforce integration, ensure you have:
**Parameters:**
- `listViewId` (string, required): List View ID
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="salesforce/get_record_by_view_id_lead">
@@ -324,6 +383,7 @@ Before using the Salesforce integration, ensure you have:
**Parameters:**
- `listViewId` (string, required): List View ID
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="salesforce/get_record_by_view_id_opportunity">
@@ -332,6 +392,7 @@ Before using the Salesforce integration, ensure you have:
**Parameters:**
- `listViewId` (string, required): List View ID
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="salesforce/get_record_by_view_id_task">
@@ -340,6 +401,7 @@ Before using the Salesforce integration, ensure you have:
**Parameters:**
- `listViewId` (string, required): List View ID
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="salesforce/get_record_by_view_id_account">
@@ -348,6 +410,7 @@ Before using the Salesforce integration, ensure you have:
**Parameters:**
- `listViewId` (string, required): List View ID
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="salesforce/get_record_by_view_id_any">
@@ -357,6 +420,7 @@ Before using the Salesforce integration, ensure you have:
- `recordType` (string, required): Record Type
- `listViewId` (string, required): List View ID
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
</AccordionGroup>
@@ -377,6 +441,7 @@ Before using the Salesforce integration, ensure you have:
- `description` (string, optional): Field description
- `helperText` (string, optional): Helper text shown on hover
- `defaultFieldValue` (string, optional): Default field value
</Accordion>
<Accordion title="salesforce/create_custom_field_lead">
@@ -393,6 +458,7 @@ Before using the Salesforce integration, ensure you have:
- `description` (string, optional): Field description
- `helperText` (string, optional): Helper text shown on hover
- `defaultFieldValue` (string, optional): Default field value
</Accordion>
<Accordion title="salesforce/create_custom_field_opportunity">
@@ -409,6 +475,7 @@ Before using the Salesforce integration, ensure you have:
- `description` (string, optional): Field description
- `helperText` (string, optional): Helper text shown on hover
- `defaultFieldValue` (string, optional): Default field value
</Accordion>
<Accordion title="salesforce/create_custom_field_task">
@@ -425,6 +492,7 @@ Before using the Salesforce integration, ensure you have:
- `description` (string, optional): Field description
- `helperText` (string, optional): Helper text shown on hover
- `defaultFieldValue` (string, optional): Default field value
</Accordion>
<Accordion title="salesforce/create_custom_field_account">
@@ -441,12 +509,14 @@ Before using the Salesforce integration, ensure you have:
- `description` (string, optional): Field description
- `helperText` (string, optional): Helper text shown on hover
- `defaultFieldValue` (string, optional): Default field value
</Accordion>
<Accordion title="salesforce/create_custom_field_any">
**Description:** Deploy custom fields for any object type.
**Note:** This is a flexible tool for creating custom fields on custom or unknown object types.
</Accordion>
</AccordionGroup>
@@ -458,6 +528,7 @@ Before using the Salesforce integration, ensure you have:
**Parameters:**
- `query` (string, required): SOQL Query (e.g., "SELECT Id, Name FROM Account WHERE Name = 'Example'")
</Accordion>
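Because the `query` parameter is a raw SOQL string, any value interpolated into it should have its single quotes escaped before the tool is called. A minimal sketch of building such a query, assuming SOQL's `\'` escape for string literals (this helper is illustrative, not part of crewai-tools):

```python
def build_account_query(name: str) -> str:
    """Build a SOQL query string for the salesforce/execute_soql_query tool.

    Illustrative helper: SOQL string literals use single quotes, which are
    escaped with a backslash so user input cannot break out of the literal.
    """
    safe_name = name.replace("'", "\\'")
    return f"SELECT Id, Name FROM Account WHERE Name = '{safe_name}'"
```

The resulting string can then be passed as the `query` argument of `salesforce/execute_soql_query`.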
<Accordion title="salesforce/create_custom_object">
@@ -468,6 +539,7 @@ Before using the Salesforce integration, ensure you have:
- `pluralLabel` (string, required): Plural Label (e.g., "Accounts")
- `description` (string, optional): A description of the Custom Object
- `recordName` (string, required): Record Name that appears in layouts and searches (e.g., "Account Name")
</Accordion>
<Accordion title="salesforce/describe_action_schema">
@@ -478,6 +550,7 @@ Before using the Salesforce integration, ensure you have:
- `operation` (string, required): Operation Type (e.g., "CREATE_RECORD" or "UPDATE_RECORD")
**Note:** Use this function first when working with custom objects to understand their schema before performing operations.
</Accordion>
</AccordionGroup>
@@ -607,5 +680,6 @@ This comprehensive documentation covers all the Salesforce tools organized by fu
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
  Contact our support team for assistance with Salesforce integration setup or troubleshooting.
</Card>


@@ -17,6 +17,40 @@ Before using the Shopify integration, ensure you have:
- A Shopify store with appropriate admin permissions
- Connected your Shopify store through the [Integrations page](https://app.crewai.com/integrations)
## Setting Up Shopify Integration
### 1. Connect Your Shopify Store
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Shopify** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for store and product management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
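Since the integration tools read this token from the environment, it can be useful to fail fast before kicking off a crew when the token is missing. A small standard-library sketch (the helper name is ours, not a crewai-tools API):

```python
import os


def get_platform_token() -> str:
    """Return the Enterprise Token the integration tools expect.

    CREWAI_PLATFORM_INTEGRATION_TOKEN is the variable named in these docs;
    this guard function itself is an illustrative sketch.
    """
    token = os.environ.get("CREWAI_PLATFORM_INTEGRATION_TOKEN")
    if not token:
        raise RuntimeError(
            "CREWAI_PLATFORM_INTEGRATION_TOKEN is not set; "
            "export it or add it to your .env file"
        )
    return token
```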
## Available Tools
### **Customer Management**
@@ -32,6 +66,7 @@ Before using the Shopify integration, ensure you have:
- `updatedAtMin` (string, optional): Only return customers updated after this date (ISO or Unix timestamp)
- `updatedAtMax` (string, optional): Only return customers updated before this date (ISO or Unix timestamp)
- `limit` (string, optional): Maximum number of customers to return (defaults to 250)
</Accordion>
<Accordion title="shopify/search_customers">
@@ -40,6 +75,7 @@ Before using the Shopify integration, ensure you have:
**Parameters:**
- `filterFormula` (object, optional): Advanced filter in disjunctive normal form with field-specific operators
- `limit` (string, optional): Maximum number of customers to return (defaults to 250)
</Accordion>
<Accordion title="shopify/create_customer">
@@ -61,6 +97,7 @@ Before using the Shopify integration, ensure you have:
- `note` (string, optional): Customer note
- `sendEmailInvite` (boolean, optional): Whether to send email invitation
- `metafields` (object, optional): Additional metafields in JSON format
</Accordion>
<Accordion title="shopify/update_customer">
@@ -83,6 +120,7 @@ Before using the Shopify integration, ensure you have:
- `note` (string, optional): Customer note
- `sendEmailInvite` (boolean, optional): Whether to send email invitation
- `metafields` (object, optional): Additional metafields in JSON format
</Accordion>
</AccordionGroup>
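The `filterFormula` parameter expects a filter in disjunctive normal form: an OR of AND groups. The exact JSON schema is not reproduced here, so the sketch below only illustrates the DNF concept, with hypothetical field names and operators:

```python
# Disjunctive normal form: the outer list is OR'd, each inner list is AND'd.
# Field names and operators here are hypothetical; consult the tool schema.
dnf_filter = [
    [("email", "contains", "@example.com"), ("state", "==", "enabled")],  # AND group
    [("orders_count", ">", 5)],                                           # OR'd with the group above
]


def matches(record: dict, dnf) -> bool:
    """Evaluate a DNF filter against one record (concept demo only)."""
    ops = {
        "contains": lambda a, b: b in a,
        "==": lambda a, b: a == b,
        ">": lambda a, b: a > b,
    }
    return any(
        all(ops[op](record.get(field), value) for field, op, value in group)
        for group in dnf
    )
```

A record matches when every condition in at least one inner group holds.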
@@ -99,6 +137,7 @@ Before using the Shopify integration, ensure you have:
- `updatedAtMin` (string, optional): Only return orders updated after this date (ISO or Unix timestamp)
- `updatedAtMax` (string, optional): Only return orders updated before this date (ISO or Unix timestamp)
- `limit` (string, optional): Maximum number of orders to return (defaults to 250)
</Accordion>
<Accordion title="shopify/create_order">
@@ -112,6 +151,7 @@ Before using the Shopify integration, ensure you have:
- `financialStatus` (string, optional): Financial status - Options: pending, authorized, partially_paid, paid, partially_refunded, refunded, voided
- `inventoryBehaviour` (string, optional): Inventory behavior - Options: bypass, decrement_ignoring_policy, decrement_obeying_policy
- `note` (string, optional): Order note
</Accordion>
<Accordion title="shopify/update_order">
@@ -126,6 +166,7 @@ Before using the Shopify integration, ensure you have:
- `financialStatus` (string, optional): Financial status - Options: pending, authorized, partially_paid, paid, partially_refunded, refunded, voided
- `inventoryBehaviour` (string, optional): Inventory behavior - Options: bypass, decrement_ignoring_policy, decrement_obeying_policy
- `note` (string, optional): Order note
</Accordion>
<Accordion title="shopify/get_abandoned_carts">
@@ -138,6 +179,7 @@ Before using the Shopify integration, ensure you have:
- `createdAtMin` (string, optional): Only return carts created after this date (ISO or Unix timestamp)
- `createdAtMax` (string, optional): Only return carts created before this date (ISO or Unix timestamp)
- `limit` (string, optional): Maximum number of carts to return (defaults to 250)
</Accordion>
</AccordionGroup>
@@ -158,6 +200,7 @@ Before using the Shopify integration, ensure you have:
- `updatedAtMin` (string, optional): Only return products updated after this date (ISO or Unix timestamp)
- `updatedAtMax` (string, optional): Only return products updated before this date (ISO or Unix timestamp)
- `limit` (string, optional): Maximum number of products to return (defaults to 250)
</Accordion>
<Accordion title="shopify/create_product">
@@ -174,6 +217,7 @@ Before using the Shopify integration, ensure you have:
- `imageUrl` (string, optional): Product image URL
- `isPublished` (boolean, optional): Whether product is published
- `publishToPointToSale` (boolean, optional): Whether to publish to point of sale
</Accordion>
<Accordion title="shopify/update_product">
@@ -191,6 +235,7 @@ Before using the Shopify integration, ensure you have:
- `imageUrl` (string, optional): Product image URL
- `isPublished` (boolean, optional): Whether product is published
- `publishToPointToSale` (boolean, optional): Whether to publish to point of sale
</Accordion>
</AccordionGroup>
@@ -202,6 +247,7 @@ Before using the Shopify integration, ensure you have:
**Parameters:**
- `productFilterFormula` (object, optional): Advanced filter in disjunctive normal form with support for fields like id, title, vendor, status, handle, tag, created_at, updated_at, published_at
</Accordion>
<Accordion title="shopify/create_product_graphql">
@@ -215,6 +261,7 @@ Before using the Shopify integration, ensure you have:
- `tags` (string, optional): Product tags as array or comma-separated list
- `media` (object, optional): Media objects with alt text, content type, and source URL
- `additionalFields` (object, optional): Additional product fields like status, requiresSellingPlan, giftCard
</Accordion>
<Accordion title="shopify/update_product_graphql">
@@ -229,6 +276,7 @@ Before using the Shopify integration, ensure you have:
- `tags` (string, optional): Product tags as array or comma-separated list
- `media` (object, optional): Updated media objects with alt text, content type, and source URL
- `additionalFields` (object, optional): Additional product fields like status, requiresSellingPlan, giftCard
</Accordion>
</AccordionGroup>
@@ -357,5 +405,6 @@ crew.kickoff()
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
  Contact our support team for assistance with Shopify integration setup or troubleshooting.
</Card>


@@ -17,6 +17,40 @@ Before using the Slack integration, ensure you have:
- A Slack workspace with appropriate permissions
- Connected your Slack workspace through the [Integrations page](https://app.crewai.com/integrations)
## Setting Up Slack Integration
### 1. Connect Your Slack Workspace
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Slack** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for team communication
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Tools
### **User Management**
@@ -27,6 +61,7 @@ Before using the Slack integration, ensure you have:
**Parameters:**
- No parameters required - retrieves all channel members
</Accordion>
<Accordion title="slack/get_user_by_email">
@@ -34,6 +69,7 @@ Before using the Slack integration, ensure you have:
**Parameters:**
- `email` (string, required): The email address of a user in the workspace
</Accordion>
<Accordion title="slack/get_users_by_name">
@@ -44,6 +80,7 @@ Before using the Slack integration, ensure you have:
- `displayName` (string, required): User's display name to search for
- `paginationParameters` (object, optional): Pagination settings
- `pageCursor` (string, optional): Page cursor for pagination
</Accordion>
</AccordionGroup>
@@ -55,6 +92,7 @@ Before using the Slack integration, ensure you have:
**Parameters:**
- No parameters required - retrieves all accessible channels
</Accordion>
</AccordionGroup>
@@ -71,6 +109,7 @@ Before using the Slack integration, ensure you have:
- `botIcon` (string, required): Bot icon - Can be either an image URL or an emoji (e.g., ":dog:")
- `blocks` (object, optional): Slack Block Kit JSON for rich message formatting with attachments and interactive elements
- `authenticatedUser` (boolean, optional): If true, message appears to come from your authenticated Slack user instead of the application (defaults to false)
</Accordion>
<Accordion title="slack/send_direct_message">
@@ -83,6 +122,7 @@ Before using the Slack integration, ensure you have:
- `botIcon` (string, required): Bot icon - Can be either an image URL or an emoji (e.g., ":dog:")
- `blocks` (object, optional): Slack Block Kit JSON for rich message formatting with attachments and interactive elements
- `authenticatedUser` (boolean, optional): If true, message appears to come from your authenticated Slack user instead of the application (defaults to false)
</Accordion>
</AccordionGroup>
@@ -100,6 +140,7 @@ Before using the Slack integration, ensure you have:
- `from:@john in:#general` - Search for messages from John in the #general channel
- `has:link after:2023-01-01` - Search for messages with links after January 1, 2023
- `in:@channel before:yesterday` - Search for messages in a specific channel before yesterday
</Accordion>
</AccordionGroup>
@@ -108,6 +149,7 @@ Before using the Slack integration, ensure you have:
Slack's Block Kit allows you to create rich, interactive messages. Here are some examples of how to use the `blocks` parameter: Slack's Block Kit allows you to create rich, interactive messages. Here are some examples of how to use the `blocks` parameter:
### Simple Text with Attachment ### Simple Text with Attachment
```json ```json
[ [
{ {
@@ -122,6 +164,7 @@ Slack's Block Kit allows you to create rich, interactive messages. Here are some
``` ```
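The Block Kit snippets in this diff are truncated by the hunk boundaries; for orientation, a complete minimal `blocks` payload might look like this (values are illustrative):

```json
[
  {
    "type": "section",
    "text": {
      "type": "mrkdwn",
      "text": "*Deploy finished* :tada: View the logs for details."
    }
  }
]
```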
### Rich Formatting with Sections
```json
[
{
```
@@ -279,5 +322,6 @@ crew.kickoff()
## Contact Support
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Slack integration setup or troubleshooting.
</Card>


@@ -17,6 +17,40 @@ Before using the Stripe integration, ensure you have:
- A Stripe account with appropriate API permissions
- Connected your Stripe account through the [Integrations page](https://app.crewai.com/integrations)
## Setting Up Stripe Integration
### 1. Connect Your Stripe Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Stripe** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for payment processing
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Tools
### **Customer Management**
@@ -30,6 +64,7 @@ Before using the Stripe integration, ensure you have:
- `name` (string, optional): Customer's full name
- `description` (string, optional): Customer description for internal reference
- `metadataCreateCustomer` (object, optional): Additional metadata as key-value pairs (e.g., `{"field1": 1, "field2": 2}`)
</Accordion>
<Accordion title="stripe/get_customer_by_id">
@@ -37,6 +72,7 @@ Before using the Stripe integration, ensure you have:
**Parameters:**
- `idGetCustomer` (string, required): The Stripe customer ID to retrieve
</Accordion>
<Accordion title="stripe/get_customers">
@@ -47,6 +83,7 @@ Before using the Stripe integration, ensure you have:
- `createdAfter` (string, optional): Filter customers created after this date (Unix timestamp)
- `createdBefore` (string, optional): Filter customers created before this date (Unix timestamp)
- `limitGetCustomers` (string, optional): Maximum number of customers to return (defaults to 10)
</Accordion>
<Accordion title="stripe/update_customer">
@@ -58,6 +95,7 @@ Before using the Stripe integration, ensure you have:
- `name` (string, optional): Updated customer name
- `description` (string, optional): Updated customer description
- `metadataUpdateCustomer` (object, optional): Updated metadata as key-value pairs
</Accordion>
</AccordionGroup>
@@ -71,6 +109,7 @@ Before using the Stripe integration, ensure you have:
- `customerIdCreateSubscription` (string, required): The customer ID for whom the subscription will be created
- `plan` (string, required): The plan ID for the subscription - Use Connect Portal Workflow Settings to allow users to select a plan
- `metadataCreateSubscription` (object, optional): Additional metadata for the subscription
</Accordion>
<Accordion title="stripe/get_subscriptions">
@@ -80,6 +119,7 @@ Before using the Stripe integration, ensure you have:
- `customerIdGetSubscriptions` (string, optional): Filter subscriptions by customer ID
- `subscriptionStatus` (string, optional): Filter by subscription status - Options: incomplete, incomplete_expired, trialing, active, past_due, canceled, unpaid
- `limitGetSubscriptions` (string, optional): Maximum number of subscriptions to return (defaults to 10)
</Accordion>
</AccordionGroup>
@@ -93,6 +133,7 @@ Before using the Stripe integration, ensure you have:
- `productName` (string, required): The product name
- `description` (string, optional): Product description
- `metadataProduct` (object, optional): Additional product metadata as key-value pairs
</Accordion>
<Accordion title="stripe/get_product_by_id">
@@ -100,6 +141,7 @@ Before using the Stripe integration, ensure you have:
**Parameters:**
- `productId` (string, required): The Stripe product ID to retrieve
</Accordion>
<Accordion title="stripe/get_products">
@@ -109,6 +151,7 @@ Before using the Stripe integration, ensure you have:
- `createdAfter` (string, optional): Filter products created after this date (Unix timestamp)
- `createdBefore` (string, optional): Filter products created before this date (Unix timestamp)
- `limitGetProducts` (string, optional): Maximum number of products to return (defaults to 10)
</Accordion>
</AccordionGroup>
@@ -122,6 +165,7 @@ Before using the Stripe integration, ensure you have:
- `balanceTransactionType` (string, optional): Filter by transaction type - Options: charge, refund, payment, payment_refund
- `paginationParameters` (object, optional): Pagination settings
- `pageCursor` (string, optional): Page cursor for pagination
</Accordion>
<Accordion title="stripe/get_plans">
@@ -131,6 +175,7 @@ Before using the Stripe integration, ensure you have:
- `isPlanActive` (boolean, optional): Filter by plan status - true for active plans, false for inactive plans
- `paginationParameters` (object, optional): Pagination settings
- `pageCursor` (string, optional): Page cursor for pagination
</Accordion>
</AccordionGroup>


@@ -17,6 +17,40 @@ Before using the Zendesk integration, ensure you have:
- A Zendesk account with appropriate API permissions
- Connected your Zendesk account through the [Integrations page](https://app.crewai.com/integrations)
## Setting Up Zendesk Integration
### 1. Connect Your Zendesk Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Zendesk** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for ticket and user management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Tools
### **Ticket Management**
@@ -38,6 +72,7 @@ Before using the Zendesk integration, ensure you have:
- `ticketTags` (string, optional): Array of tags to apply (e.g., `["enterprise", "other_tag"]`)
- `ticketExternalId` (string, optional): External ID to link tickets to local records
- `ticketCustomFields` (object, optional): Custom field values in JSON format
</Accordion>
<Accordion title="zendesk/update_ticket">
@@ -56,6 +91,7 @@ Before using the Zendesk integration, ensure you have:
- `ticketTags` (string, optional): Updated tags array
- `ticketExternalId` (string, optional): Updated external ID
- `ticketCustomFields` (object, optional): Updated custom field values
</Accordion>
<Accordion title="zendesk/get_ticket_by_id">
@@ -63,6 +99,7 @@ Before using the Zendesk integration, ensure you have:
**Parameters:**
- `ticketId` (string, required): The ticket ID to retrieve (e.g., "35436")
</Accordion>
<Accordion title="zendesk/add_comment_to_ticket">
@@ -73,6 +110,7 @@ Before using the Zendesk integration, ensure you have:
- `commentBody` (string, required): Comment message (accepts plain text or HTML, e.g., "Thanks for your help!")
- `isInternalNote` (boolean, optional): Set to true for internal notes instead of public replies (defaults to false)
- `isPublic` (boolean, optional): True for public comments, false for internal notes
</Accordion>
<Accordion title="zendesk/search_tickets">
@@ -94,6 +132,7 @@ Before using the Zendesk integration, ensure you have:
- `dueDate` (object, optional): Filter by due date with operator and value
- `sort_by` (string, optional): Sort field - Options: created_at, updated_at, priority, status, ticket_type
- `sort_order` (string, optional): Sort direction - Options: asc, desc
</Accordion>
</AccordionGroup>
@@ -111,6 +150,7 @@ Before using the Zendesk integration, ensure you have:
- `externalId` (string, optional): Unique identifier from another system
- `details` (string, optional): Additional user details
- `notes` (string, optional): Internal notes about the user
</Accordion>
<Accordion title="zendesk/update_user">
@@ -125,6 +165,7 @@ Before using the Zendesk integration, ensure you have:
- `externalId` (string, optional): Updated external ID
- `details` (string, optional): Updated user details
- `notes` (string, optional): Updated internal notes
</Accordion>
<Accordion title="zendesk/get_user_by_id">
@@ -132,6 +173,7 @@ Before using the Zendesk integration, ensure you have:
**Parameters:**
- `userId` (string, required): The user ID to retrieve
</Accordion>
<Accordion title="zendesk/search_users">
@@ -144,6 +186,7 @@ Before using the Zendesk integration, ensure you have:
- `externalId` (string, optional): Filter by external ID
- `sort_by` (string, optional): Sort field - Options: created_at, updated_at
- `sort_order` (string, optional): Sort direction - Options: asc, desc
</Accordion>
</AccordionGroup>
@@ -156,6 +199,7 @@ Before using the Zendesk integration, ensure you have:
**Parameters:**
- `paginationParameters` (object, optional): Pagination settings
- `pageCursor` (string, optional): Page cursor for pagination
</Accordion>
<Accordion title="zendesk/get_ticket_audits">
@@ -165,6 +209,7 @@ Before using the Zendesk integration, ensure you have:
- `ticketId` (string, optional): Get audits for specific ticket (if empty, retrieves audits for all non-archived tickets, e.g., "1234")
- `paginationParameters` (object, optional): Pagination settings
- `pageCursor` (string, optional): Page cursor for pagination
</Accordion>
</AccordionGroup>


@@ -10,7 +10,10 @@ mode: "wide"
CrewAI AMP (Agent Management Platform) provides a platform for deploying, monitoring, and scaling your crews and agents in a production environment.
<Frame>
<img src="/images/enterprise/crewai-enterprise-dashboard.png" alt="CrewAI AMP Dashboard" />
</Frame>
CrewAI AMP extends the power of the open-source framework with features designed for production deployments, collaboration, and scalability. Deploy your crews to a managed infrastructure and monitor their execution in real-time.
@@ -22,7 +25,8 @@ CrewAI AMP extends the power of the open-source framework with features designed
Deploy your crews to a managed infrastructure with a few clicks
</Card>
<Card title="API Access" icon="code">
Access your deployed crews via REST API for integration with existing systems
</Card>
<Card title="Observability" icon="chart-line">
Monitor your crews with detailed execution traces and logs
@@ -57,11 +61,7 @@ CrewAI AMP extends the power of the open-source framework with features designed
<Steps>
<Step title="Sign up for an account">
Create your account at [app.crewai.com](https://app.crewai.com)
<Card title="Sign Up" icon="user" href="https://app.crewai.com/signup">
Sign Up
</Card>
</Step>


@@ -49,7 +49,7 @@ mode: "wide"
To integrate human input into agent execution, set the `human_input` flag in the task definition. When enabled, the agent prompts the user for input before delivering its final answer. This input can provide extra context, clarify ambiguities, or validate the agent's output.
For detailed implementation guidance, see our [Human-in-the-Loop guide](/en/enterprise/guides/human-in-the-loop).
</Accordion>
<Accordion title="What advanced customization options are available for tailoring and enhancing agent behavior and capabilities in CrewAI?">
@@ -142,10 +142,11 @@ mode: "wide"
<Accordion title="How can I create custom tools for my CrewAI agents?">
You can create custom tools by subclassing the `BaseTool` class provided by CrewAI or by using the tool decorator. Subclassing involves defining a new class that inherits from `BaseTool`, specifying the name, description, and the `_run` method for operational logic. The tool decorator allows you to create a `Tool` object directly with the required attributes and functional logic.
<Card href="/en/learn/create-custom-tools" icon="code">CrewAI Tools Guide</Card>
</Accordion>
<Accordion title="How can you control the maximum number of requests per minute that the entire crew can perform?">
The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
</Accordion>
</AccordionGroup>


@@ -6,6 +6,7 @@ mode: "wide"
---
## Video Tutorial
Watch this video tutorial for a step-by-step demonstration of the installation process:
<iframe
@@ -18,21 +19,25 @@ Watch this video tutorial for a step-by-step demonstration of the installation p
></iframe>
## Text Tutorial
<Note>
**Python Version Requirements**
CrewAI requires `Python >=3.10 and <3.14`. Here's how to check your version:
```bash
python3 --version
```
If you need to update Python, visit [python.org/downloads](https://python.org/downloads)
</Note>
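The same constraint can also be checked programmatically, for example in a setup script (the helper name is illustrative):

```python
import sys

def crewai_python_ok(version: tuple = tuple(sys.version_info)) -> bool:
    """Return True if the interpreter satisfies CrewAI's constraint: >=3.10, <3.14."""
    return (3, 10) <= tuple(version[:2]) < (3, 14)
```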
<Note>
**OpenAI SDK Requirement**
CrewAI 0.175.0 requires `openai >= 1.13.3`. If you manage dependencies yourself, ensure your environment satisfies this constraint to avoid import/runtime issues.
</Note>
CrewAI uses `uv` as its dependency management and package handling tool. It simplifies project setup and execution, offering a seamless experience.
@@ -95,6 +100,7 @@ If you haven't installed `uv` yet, follow **step 1** to quickly get it set up on
```
<Check>Installation successful! You're ready to create your first crew! 🎉</Check>
</Step>
</Steps>
# Creating a CrewAI Project
@@ -128,6 +134,7 @@ We recommend using the `YAML` template scaffolding for a structured approach to
├── agents.yaml
└── tasks.yaml
```
</Step>
<Step title="Customize Your Project">
@@ -144,6 +151,7 @@ We recommend using the `YAML` template scaffolding for a structured approach to
- Start by editing `agents.yaml` and `tasks.yaml` to define your crew's behavior.
- Keep sensitive information like API keys in `.env`.
</Step>
<Step title="Run your Crew">
@@ -168,12 +176,14 @@ We recommend using the `YAML` template scaffolding for a structured approach to
For teams and organizations, CrewAI offers enterprise deployment options that eliminate setup complexity:
### CrewAI AMP (SaaS)
- Zero installation required - just sign up for free at [app.crewai.com](https://app.crewai.com)
- Automatic updates and maintenance
- Managed infrastructure and scaling
- Build Crews with no code
### CrewAI Factory (Self-hosted)
- Containerized deployment for your infrastructure
- Supports any hyperscaler, including on-prem deployments
- Integration with your existing security systems
@@ -186,12 +196,9 @@ For teams and organizations, CrewAI offers enterprise deployment options that el
## Next Steps
<CardGroup cols={2}>
<Card title="Build Your First Agent" icon="code" href="/en/quickstart">
Follow our quickstart guide to create your first CrewAI agent and get hands-on experience.
</Card>
<Card
title="Join the Community"


@@ -7,110 +7,89 @@ mode: "wide"
# What is CrewAI? # What is CrewAI?
**CrewAI is a lean, lightning-fast Python framework built entirely from scratch—completely independent of LangChain or other agent frameworks.** **CrewAI is the leading open-source framework for orchestrating autonomous AI agents and building complex workflows.**
CrewAI empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario: It empowers developers to build production-ready multi-agent systems by combining the collaborative intelligence of **Crews** with the precise control of **Flows**.
- **[CrewAI Crews](/en/guides/crews/first-crew)**: Optimize for autonomy and collaborative intelligence, enabling you to create AI teams where each agent has specific roles, tools, and goals. - **[CrewAI Flows](/en/guides/flows/first-flow)**: The backbone of your AI application. Flows allow you to create structured, event-driven workflows that manage state and control execution. They provide the scaffolding for your AI agents to work within.
- **[CrewAI Flows](/en/guides/flows/first-flow)**: Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively. - **[CrewAI Crews](/en/guides/crews/first-crew)**: The units of work within your Flow. Crews are teams of autonomous agents that collaborate to solve specific tasks delegated to them by the Flow.
With over 100,000 developers certified through our community courses, CrewAI is rapidly becoming the standard for enterprise-ready AI automation. With over 100,000 developers certified through our community courses, CrewAI is the standard for enterprise-ready AI automation.
## The CrewAI Architecture
## How Crews Work CrewAI's architecture is designed to balance autonomy with control.
### 1. Flows: The Backbone
<Note> <Note>
Just like a company has departments (Sales, Engineering, Marketing) working together under leadership to achieve business goals, CrewAI helps you create an organization of AI agents with specialized roles collaborating to accomplish complex tasks. Think of a Flow as the "manager" or the "process definition" of your application. It defines the steps, the logic, and how data moves through your system.
</Note>
<Frame caption="CrewAI Framework Overview">
<img src="/images/crews.png" alt="CrewAI Framework Overview" />
</Frame>
| Component | Description | Key Features |
|:----------|:-----------:|:------------|
| **Crew** | The top-level organization | • Manages AI agent teams<br/>• Oversees workflows<br/>• Ensures collaboration<br/>• Delivers outcomes |
| **AI Agents** | Specialized team members | • Have specific roles (researcher, writer)<br/>• Use designated tools<br/>• Can delegate tasks<br/>• Make autonomous decisions |
| **Process** | Workflow management system | • Defines collaboration patterns<br/>• Controls task assignments<br/>• Manages interactions<br/>• Ensures efficient execution |
| **Tasks** | Individual assignments | • Have clear objectives<br/>• Use specific tools<br/>• Feed into larger process<br/>• Produce actionable results |
### How It All Works Together
1. The **Crew** organizes the overall operation
2. **AI Agents** work on their specialized tasks
3. The **Process** ensures smooth collaboration
4. **Tasks** get completed to achieve the goal
## Key Features
<CardGroup cols={2}>
<Card title="Role-Based Agents" icon="users">
Create specialized agents with defined roles, expertise, and goals - from researchers to analysts to writers
</Card>
<Card title="Flexible Tools" icon="screwdriver-wrench">
Equip agents with custom tools and APIs to interact with external services and data sources
</Card>
<Card title="Intelligent Collaboration" icon="people-arrows">
Agents work together, sharing insights and coordinating tasks to achieve complex objectives
</Card>
<Card title="Task Management" icon="list-check">
Define sequential or parallel workflows, with agents automatically handling task dependencies
</Card>
</CardGroup>
## How Flows Work
<Note>
While Crews excel at autonomous collaboration, Flows provide structured automations, offering granular control over workflow execution. Flows ensure tasks are executed reliably, securely, and efficiently, handling conditional logic, loops, and dynamic state management with precision. Flows integrate seamlessly with Crews, enabling you to balance high autonomy with exacting control.
</Note> </Note>
<Frame caption="CrewAI Framework Overview"> <Frame caption="CrewAI Framework Overview">
<img src="/images/flows.png" alt="CrewAI Framework Overview" /> <img src="/images/flows.png" alt="CrewAI Framework Overview" />
</Frame> </Frame>
Flows provide:
- **State Management**: Persist data across steps and executions.
- **Event-Driven Execution**: Trigger actions based on events or external inputs.
- **Control Flow**: Use conditional logic, loops, and branching.
### 2. Crews: The Intelligence
<Note>
Crews are the "teams" that do the heavy lifting. Within a Flow, you can trigger a Crew to tackle a complex problem requiring creativity and collaboration.
</Note>
<Frame caption="CrewAI Framework Overview">
<img src="/images/crews.png" alt="CrewAI Framework Overview" />
</Frame>
Crews provide:
- **Role-Playing Agents**: Specialized agents with specific goals and tools.
- **Autonomous Collaboration**: Agents work together to solve tasks.
- **Task Delegation**: Tasks are assigned and executed based on agent capabilities.
## How It All Works Together
1. **The Flow** triggers an event or starts a process.
2. **The Flow** manages the state and decides what to do next.
3. **The Flow** delegates a complex task to a **Crew**.
4. **The Crew**'s agents collaborate to complete the task.
5. **The Crew** returns the result to the **Flow**.
6. **The Flow** continues execution based on the result.
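The six steps above can be sketched framework-free. This is a minimal stand-in for the handoff pattern, not the CrewAI API: the class and method names below are illustrative only.

```python
# Minimal stand-ins for the Flow/Crew handoff described above.
class StubCrew:
    """Stand-in for a Crew: agents would collaborate here."""
    def run(self, task: str) -> str:
        return f"result for: {task}"


class StubFlow:
    """Stand-in for a Flow: owns state and delegates complex work."""
    def __init__(self, crew: StubCrew) -> None:
        self.crew = crew
        self.state: dict = {}

    def kickoff(self) -> str:
        self.state["status"] = "started"            # 1-2. Flow starts and manages state
        result = self.crew.run("complex research")  # 3-4. Flow delegates; Crew executes
        self.state["result"] = result               # 5. Crew returns the result
        self.state["status"] = "done"               # 6. Flow continues based on it
        return result


print(StubFlow(StubCrew()).kickoff())  # -> result for: complex research
```

The real framework adds event wiring, persistence, and agent collaboration on top of this basic control shape.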
## Key Features
<CardGroup cols={2}>
  <Card title="Production-Grade Flows" icon="arrow-progress">
    Build reliable, stateful workflows that can handle long-running processes and complex logic.
  </Card>
  <Card title="Autonomous Crews" icon="users">
    Deploy teams of agents that can plan, execute, and collaborate to achieve high-level goals.
  </Card>
  <Card title="Flexible Tools" icon="screwdriver-wrench">
    Connect your agents to any API, database, or local tool.
  </Card>
  <Card title="Enterprise Security" icon="lock">
    Designed with security and compliance in mind for enterprise deployments.
  </Card>
</CardGroup>
## When to Use Crews vs. Flows
**The short answer: Use both.**

For any production-ready application, **start with a Flow**.
- **Use a Flow** to define the overall structure, state, and logic of your application.
- **Use a Crew** within a Flow step when you need a team of agents to perform a specific, complex task that requires autonomy.

| Use Case | Architecture |
| :--- | :--- |
| **Simple Automation** | Single Flow with Python tasks |
| **Complex Research** | Flow managing state -> Crew performing research |
| **Application Backend** | Flow handling API requests -> Crew generating content -> Flow saving to DB |
## Why Choose CrewAI?
## Ready to Start Building?
<CardGroup cols={2}>
  <Card
    title="Build Your First Flow"
    icon="diagram-project"
    href="/en/guides/flows/first-flow"
  >
    Learn how to create structured, event-driven workflows with precise control over execution.
  </Card>
  <Card
    title="Build Your First Crew"
    icon="users-gear"
    href="/en/guides/crews/first-crew"
  >
    Step-by-step tutorial to create a collaborative AI team that works together to solve complex problems.
  </Card>
</CardGroup>
<CardGroup cols={3}>
---
title: Agent-to-Agent (A2A) Protocol
description: Enable CrewAI agents to delegate tasks to remote A2A-compliant agents for specialized handling
icon: network-wired
mode: "wide"
---
## A2A Agent Delegation
CrewAI supports the Agent-to-Agent (A2A) protocol, allowing agents to delegate tasks to remote specialized agents. The agent's LLM automatically decides whether to handle a task directly or delegate to an A2A agent based on the task requirements.
<Note>
A2A delegation requires the `a2a-sdk` package. Install with: `uv add 'crewai[a2a]'` or `pip install 'crewai[a2a]'`
</Note>
## How It Works
When an agent is configured with A2A capabilities:
1. The LLM analyzes each task
2. It decides to either:
- Handle the task directly using its own capabilities
- Delegate to a remote A2A agent for specialized handling
3. If delegating, the agent communicates with the remote A2A agent through the protocol
4. Results are returned to the CrewAI workflow
## Basic Configuration
Configure an agent for A2A delegation by setting the `a2a` parameter:
```python Code
from crewai import Agent, Crew, Task
from crewai.a2a import A2AConfig
agent = Agent(
role="Research Coordinator",
goal="Coordinate research tasks efficiently",
backstory="Expert at delegating to specialized research agents",
llm="gpt-4o",
a2a=A2AConfig(
endpoint="https://example.com/.well-known/agent-card.json",
timeout=120,
max_turns=10
)
)
task = Task(
description="Research the latest developments in quantum computing",
expected_output="A comprehensive research report",
agent=agent
)
crew = Crew(agents=[agent], tasks=[task], verbose=True)
result = crew.kickoff()
```
## Configuration Options
The `A2AConfig` class accepts the following parameters:
<ParamField path="endpoint" type="str" required>
The A2A agent endpoint URL (typically points to `.well-known/agent-card.json`)
</ParamField>
<ParamField path="auth" type="AuthScheme" default="None">
Authentication scheme for the A2A agent. Supports Bearer tokens, OAuth2, API keys, and HTTP authentication.
</ParamField>
<ParamField path="timeout" type="int" default="120">
Request timeout in seconds
</ParamField>
<ParamField path="max_turns" type="int" default="10">
Maximum number of conversation turns with the A2A agent
</ParamField>
<ParamField path="response_model" type="type[BaseModel]" default="None">
Optional Pydantic model for requesting structured output from the A2A agent. The A2A protocol does not enforce structured output, so the remote agent may not honor this request.
</ParamField>
<ParamField path="fail_fast" type="bool" default="True">
Whether to raise an error immediately if agent connection fails. When `False`, the agent continues with available agents and informs the LLM about unavailable ones.
</ParamField>
<ParamField path="trust_remote_completion_status" type="bool" default="False">
When `True`, returns the A2A agent's result directly when it signals completion. When `False`, allows the server agent to review the result and potentially continue the conversation.
</ParamField>
## Authentication
For A2A agents that require authentication, use one of the provided auth schemes:
<Tabs>
<Tab title="Bearer Token">
```python Code
from crewai.a2a import A2AConfig
from crewai.a2a.auth import BearerTokenAuth
agent = Agent(
role="Secure Coordinator",
goal="Coordinate tasks with secured agents",
backstory="Manages secure agent communications",
llm="gpt-4o",
a2a=A2AConfig(
endpoint="https://secure-agent.example.com/.well-known/agent-card.json",
auth=BearerTokenAuth(token="your-bearer-token"),
timeout=120
)
)
```
</Tab>
<Tab title="API Key">
```python Code
from crewai.a2a import A2AConfig
from crewai.a2a.auth import APIKeyAuth
agent = Agent(
role="API Coordinator",
goal="Coordinate with API-based agents",
backstory="Manages API-authenticated communications",
llm="gpt-4o",
a2a=A2AConfig(
endpoint="https://api-agent.example.com/.well-known/agent-card.json",
auth=APIKeyAuth(
api_key="your-api-key",
location="header", # or "query" or "cookie"
name="X-API-Key"
),
timeout=120
)
)
```
</Tab>
<Tab title="OAuth2">
```python Code
from crewai.a2a import A2AConfig
from crewai.a2a.auth import OAuth2ClientCredentials
agent = Agent(
role="OAuth Coordinator",
goal="Coordinate with OAuth-secured agents",
backstory="Manages OAuth-authenticated communications",
llm="gpt-4o",
a2a=A2AConfig(
endpoint="https://oauth-agent.example.com/.well-known/agent-card.json",
auth=OAuth2ClientCredentials(
token_url="https://auth.example.com/oauth/token",
client_id="your-client-id",
client_secret="your-client-secret",
scopes=["read", "write"]
),
timeout=120
)
)
```
</Tab>
<Tab title="HTTP Basic">
```python Code
from crewai.a2a import A2AConfig
from crewai.a2a.auth import HTTPBasicAuth
agent = Agent(
role="Basic Auth Coordinator",
goal="Coordinate with basic auth agents",
backstory="Manages basic authentication communications",
llm="gpt-4o",
a2a=A2AConfig(
endpoint="https://basic-agent.example.com/.well-known/agent-card.json",
auth=HTTPBasicAuth(
username="your-username",
password="your-password"
),
timeout=120
)
)
```
</Tab>
</Tabs>
## Multiple A2A Agents
Configure multiple A2A agents for delegation by passing a list:
```python Code
from crewai.a2a import A2AConfig
from crewai.a2a.auth import BearerTokenAuth
agent = Agent(
role="Multi-Agent Coordinator",
goal="Coordinate with multiple specialized agents",
backstory="Expert at delegating to the right specialist",
llm="gpt-4o",
a2a=[
A2AConfig(
endpoint="https://research.example.com/.well-known/agent-card.json",
timeout=120
),
A2AConfig(
endpoint="https://data.example.com/.well-known/agent-card.json",
auth=BearerTokenAuth(token="data-token"),
timeout=90
)
]
)
```
The LLM will automatically choose which A2A agent to delegate to based on the task requirements.
## Error Handling
Control how agent connection failures are handled using the `fail_fast` parameter:
```python Code
from crewai.a2a import A2AConfig
# Fail immediately on connection errors (default)
agent = Agent(
role="Research Coordinator",
goal="Coordinate research tasks",
backstory="Expert at delegation",
llm="gpt-4o",
a2a=A2AConfig(
endpoint="https://research.example.com/.well-known/agent-card.json",
fail_fast=True
)
)
# Continue with available agents
agent = Agent(
role="Multi-Agent Coordinator",
goal="Coordinate with multiple agents",
backstory="Expert at working with available resources",
llm="gpt-4o",
a2a=[
A2AConfig(
endpoint="https://primary.example.com/.well-known/agent-card.json",
fail_fast=False
),
A2AConfig(
endpoint="https://backup.example.com/.well-known/agent-card.json",
fail_fast=False
)
]
)
```
When `fail_fast=False`:
- If some agents fail, the LLM is informed which agents are unavailable and can delegate to working agents
- If all agents fail, the LLM receives a notice about unavailable agents and handles the task directly
- Connection errors are captured and included in the context for better decision-making
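That behavior can be sketched framework-free. Connection probing is simulated here and the function name is illustrative, not the CrewAI internals:

```python
# Illustrative fail_fast semantics: raise on the first failure, or collect
# unavailable endpoints so the LLM can be told about them.
def connect_agents(endpoints: list[str], fail_fast: bool) -> tuple[list[str], list[str]]:
    available, unavailable = [], []
    for url in endpoints:
        reachable = not url.startswith("https://down.")  # stand-in for a real probe
        if reachable:
            available.append(url)
        elif fail_fast:
            raise ConnectionError(f"could not reach {url}")
        else:
            unavailable.append(url)
    return available, unavailable


ok, failed = connect_agents(
    ["https://primary.example.com", "https://down.example.com"], fail_fast=False
)
print(ok, failed)  # -> ['https://primary.example.com'] ['https://down.example.com']
```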
## Best Practices
<CardGroup cols={2}>
<Card title="Set Appropriate Timeouts" icon="clock">
Configure timeouts based on expected A2A agent response times. Longer-running tasks may need higher timeout values.
</Card>
<Card title="Limit Conversation Turns" icon="comments">
Use `max_turns` to prevent excessive back-and-forth. The agent will automatically conclude conversations before hitting the limit.
</Card>
<Card title="Use Resilient Error Handling" icon="shield-check">
Set `fail_fast=False` for production environments with multiple agents to gracefully handle connection failures and maintain workflow continuity.
</Card>
<Card title="Secure Your Credentials" icon="lock">
Store authentication tokens and credentials as environment variables, not in code.
</Card>
<Card title="Monitor Delegation Decisions" icon="eye">
Use verbose mode to observe when the LLM chooses to delegate versus handle tasks directly.
</Card>
</CardGroup>
## Supported Authentication Methods
- **Bearer Token** - Simple token-based authentication
- **OAuth2 Client Credentials** - OAuth2 flow for machine-to-machine communication
- **OAuth2 Authorization Code** - OAuth2 flow requiring user authorization
- **API Key** - Key-based authentication (header, query param, or cookie)
- **HTTP Basic** - Username/password authentication
- **HTTP Digest** - Digest authentication (requires `httpx-auth` package)
## Learn More
For more information about the A2A protocol and reference implementations:
- [A2A Protocol Documentation](https://a2a-protocol.org)
- [A2A Sample Implementations](https://github.com/a2aproject/a2a-samples)
- [A2A Python SDK](https://github.com/a2aproject/a2a-python)
cached_tool.cache_function = my_cache_strategy
```
### Creating Async Tools
CrewAI supports async tools for non-blocking I/O operations. This is useful when your tool needs to make HTTP requests, database queries, or other I/O-bound operations.
#### Using the `@tool` Decorator with Async Functions
The simplest way to create an async tool is using the `@tool` decorator with an async function:
```python Code
import aiohttp
from crewai.tools import tool
@tool("Async Web Fetcher")
async def fetch_webpage(url: str) -> str:
"""Fetch content from a webpage asynchronously."""
async with aiohttp.ClientSession() as session:
async with session.get(url) as response:
return await response.text()
```
#### Subclassing `BaseTool` with Async Support
For more control, subclass `BaseTool` and implement both `_run` (sync) and `_arun` (async) methods:
```python Code
import requests
import aiohttp
from crewai.tools import BaseTool
from pydantic import BaseModel, Field
class WebFetcherInput(BaseModel):
"""Input schema for WebFetcher."""
url: str = Field(..., description="The URL to fetch")
class WebFetcherTool(BaseTool):
name: str = "Web Fetcher"
description: str = "Fetches content from a URL"
args_schema: type[BaseModel] = WebFetcherInput
def _run(self, url: str) -> str:
"""Synchronous implementation."""
return requests.get(url).text
async def _arun(self, url: str) -> str:
"""Asynchronous implementation for non-blocking I/O."""
async with aiohttp.ClientSession() as session:
async with session.get(url) as response:
return await response.text()
```
By adhering to these guidelines and incorporating new functionalities and collaboration tools into your tool creation and management processes,
you can leverage the full capabilities of the CrewAI framework, enhancing both the development experience and the efficiency of your AI agents.
---
title: Execution Hooks Overview
description: Understanding and using execution hooks in CrewAI for fine-grained control over agent operations
mode: "wide"
---
Execution Hooks provide fine-grained control over the runtime behavior of your CrewAI agents. Unlike kickoff hooks that run before and after crew execution, execution hooks intercept specific operations during agent execution, allowing you to modify behavior, implement safety checks, and add comprehensive monitoring.
## Types of Execution Hooks
CrewAI provides two main categories of execution hooks:
### 1. [LLM Call Hooks](/learn/llm-hooks)
Control and monitor language model interactions:
- **Before LLM Call**: Modify prompts, validate inputs, implement approval gates
- **After LLM Call**: Transform responses, sanitize outputs, update conversation history
**Use Cases:**
- Iteration limiting
- Cost tracking and token usage monitoring
- Response sanitization and content filtering
- Human-in-the-loop approval for LLM calls
- Adding safety guidelines or context
- Debug logging and request/response inspection
[View LLM Hooks Documentation →](/learn/llm-hooks)
### 2. [Tool Call Hooks](/learn/tool-hooks)
Control and monitor tool execution:
- **Before Tool Call**: Modify inputs, validate parameters, block dangerous operations
- **After Tool Call**: Transform results, sanitize outputs, log execution details
**Use Cases:**
- Safety guardrails for destructive operations
- Human approval for sensitive actions
- Input validation and sanitization
- Result caching and rate limiting
- Tool usage analytics
- Debug logging and monitoring
[View Tool Hooks Documentation →](/learn/tool-hooks)
## Hook Registration Methods
### 1. Decorator-Based Hooks (Recommended)
The cleanest and most Pythonic way to register hooks:
```python
from crewai.hooks import before_llm_call, after_llm_call, before_tool_call, after_tool_call
@before_llm_call
def limit_iterations(context):
"""Prevent infinite loops by limiting iterations."""
if context.iterations > 10:
return False # Block execution
return None
@after_llm_call
def sanitize_response(context):
"""Remove sensitive data from LLM responses."""
if "API_KEY" in context.response:
return context.response.replace("API_KEY", "[REDACTED]")
return None
@before_tool_call
def block_dangerous_tools(context):
"""Block destructive operations."""
if context.tool_name == "delete_database":
return False # Block execution
return None
@after_tool_call
def log_tool_result(context):
"""Log tool execution."""
print(f"Tool {context.tool_name} completed")
return None
```
### 2. Crew-Scoped Hooks
Apply hooks only to specific crew instances:
```python
from crewai import CrewBase
from crewai.project import crew
from crewai.hooks import before_llm_call_crew, after_tool_call_crew
@CrewBase
class MyProjCrew:
@before_llm_call_crew
def validate_inputs(self, context):
# Only applies to this crew
print(f"LLM call in {self.__class__.__name__}")
return None
@after_tool_call_crew
def log_results(self, context):
# Crew-specific logging
print(f"Tool result: {context.tool_result[:50]}...")
return None
@crew
def crew(self) -> Crew:
return Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential
)
```
## Hook Execution Flow
### LLM Call Flow
```
Agent needs to call LLM
[Before LLM Call Hooks Execute]
├→ Hook 1: Validate iteration count
├→ Hook 2: Add safety context
└→ Hook 3: Log request
If any hook returns False:
├→ Block LLM call
└→ Raise ValueError
If all hooks return True/None:
├→ LLM call proceeds
└→ Response generated
[After LLM Call Hooks Execute]
├→ Hook 1: Sanitize response
├→ Hook 2: Log response
└→ Hook 3: Update metrics
Final response returned
```
### Tool Call Flow
```
Agent needs to execute tool
[Before Tool Call Hooks Execute]
├→ Hook 1: Check if tool is allowed
├→ Hook 2: Validate inputs
└→ Hook 3: Request approval if needed
If any hook returns False:
├→ Block tool execution
└→ Return error message
If all hooks return True/None:
├→ Tool execution proceeds
└→ Result generated
[After Tool Call Hooks Execute]
├→ Hook 1: Sanitize result
├→ Hook 2: Cache result
└→ Hook 3: Log metrics
Final result returned
```
## Hook Context Objects
### LLMCallHookContext
Provides access to LLM execution state:
```python
class LLMCallHookContext:
executor: CrewAgentExecutor # Full executor access
messages: list # Mutable message list
agent: Agent # Current agent
task: Task # Current task
crew: Crew # Crew instance
llm: BaseLLM # LLM instance
iterations: int # Current iteration
response: str | None # LLM response (after hooks)
```
### ToolCallHookContext
Provides access to tool execution state:
```python
class ToolCallHookContext:
tool_name: str # Tool being called
tool_input: dict # Mutable input parameters
tool: CrewStructuredTool # Tool instance
agent: Agent | None # Agent executing
task: Task | None # Current task
crew: Crew | None # Crew instance
tool_result: str | None # Tool result (after hooks)
```
## Common Patterns
### Safety and Validation
```python
@before_tool_call
def safety_check(context):
"""Block destructive operations."""
dangerous = ['delete_file', 'drop_table', 'system_shutdown']
if context.tool_name in dangerous:
print(f"🛑 Blocked: {context.tool_name}")
return False
return None
@before_llm_call
def iteration_limit(context):
"""Prevent infinite loops."""
if context.iterations > 15:
print("⛔ Maximum iterations exceeded")
return False
return None
```
### Human-in-the-Loop
```python
@before_tool_call
def require_approval(context):
"""Require approval for sensitive operations."""
sensitive = ['send_email', 'make_payment', 'post_message']
if context.tool_name in sensitive:
response = context.request_human_input(
prompt=f"Approve {context.tool_name}?",
default_message="Type 'yes' to approve:"
)
if response.lower() != 'yes':
return False
return None
```
### Monitoring and Analytics
```python
from collections import defaultdict
import time
metrics = defaultdict(lambda: {'count': 0, 'total_time': 0})
@before_tool_call
def start_timer(context):
context.tool_input['_start'] = time.time()
return None
@after_tool_call
def track_metrics(context):
start = context.tool_input.get('_start', time.time())
duration = time.time() - start
metrics[context.tool_name]['count'] += 1
metrics[context.tool_name]['total_time'] += duration
return None
# View metrics
def print_metrics():
for tool, data in metrics.items():
avg = data['total_time'] / data['count']
print(f"{tool}: {data['count']} calls, {avg:.2f}s avg")
```
### Response Sanitization
```python
import re
@after_llm_call
def sanitize_llm_response(context):
"""Remove sensitive data from LLM responses."""
if not context.response:
return None
result = context.response
result = re.sub(r'(api[_-]?key)["\']?\s*[:=]\s*["\']?[\w-]+',
r'\1: [REDACTED]', result, flags=re.IGNORECASE)
return result
@after_tool_call
def sanitize_tool_result(context):
"""Remove sensitive data from tool results."""
if not context.tool_result:
return None
result = context.tool_result
result = re.sub(r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b',
'[EMAIL-REDACTED]', result)
return result
```
## Hook Management
### Clearing All Hooks
```python
from crewai.hooks import clear_all_global_hooks
# Clear all hooks at once
result = clear_all_global_hooks()
print(f"Cleared {result['total']} hooks")
# result: {'llm_hooks': (2, 1), 'tool_hooks': (1, 2), 'total': (3, 3)}
```
### Clearing Specific Hook Types
```python
from crewai.hooks import (
clear_before_llm_call_hooks,
clear_after_llm_call_hooks,
clear_before_tool_call_hooks,
clear_after_tool_call_hooks
)
# Clear specific types
llm_before_count = clear_before_llm_call_hooks()
tool_after_count = clear_after_tool_call_hooks()
```
### Unregistering Individual Hooks
```python
from crewai.hooks import (
register_before_llm_call_hook,
unregister_before_llm_call_hook,
unregister_after_tool_call_hook
)
def my_hook(context):
...
# Register
register_before_llm_call_hook(my_hook)
# Later, unregister
success = unregister_before_llm_call_hook(my_hook)
print(f"Unregistered: {success}")
```
## Best Practices
### 1. Keep Hooks Focused
Each hook should have a single, clear responsibility:
```python
# ✅ Good - focused responsibility
@before_tool_call
def validate_file_path(context):
if context.tool_name == 'read_file':
if '..' in context.tool_input.get('path', ''):
return False
return None
# ❌ Bad - too many responsibilities
@before_tool_call
def do_everything(context):
# Validation + logging + metrics + approval...
...
```
### 2. Handle Errors Gracefully
```python
@before_llm_call
def safe_hook(context):
try:
# Your logic
if some_condition:
return False
except Exception as e:
print(f"Hook error: {e}")
return None # Allow execution despite error
```
### 3. Modify Context In-Place
```python
# ✅ Correct - modify in-place
@before_llm_call
def add_context(context):
context.messages.append({"role": "system", "content": "Be concise"})
# ❌ Wrong - replaces reference
@before_llm_call
def wrong_approach(context):
context.messages = [{"role": "system", "content": "Be concise"}]
```
### 4. Use Type Hints
```python
from crewai.hooks import LLMCallHookContext, ToolCallHookContext
def my_llm_hook(context: LLMCallHookContext) -> bool | None:
# IDE autocomplete and type checking
return None
def my_tool_hook(context: ToolCallHookContext) -> str | None:
return None
```
### 5. Clean Up in Tests
```python
import pytest
from crewai.hooks import clear_all_global_hooks
@pytest.fixture(autouse=True)
def clean_hooks():
"""Reset hooks before each test."""
yield
clear_all_global_hooks()
```
## When to Use Which Hook
### Use LLM Hooks When:
- Implementing iteration limits
- Adding context or safety guidelines to prompts
- Tracking token usage and costs
- Sanitizing or transforming responses
- Implementing approval gates for LLM calls
- Debugging prompt/response interactions
### Use Tool Hooks When:
- Blocking dangerous or destructive operations
- Validating tool inputs before execution
- Implementing approval gates for sensitive actions
- Caching tool results
- Tracking tool usage and performance
- Sanitizing tool outputs
- Rate limiting tool calls
### Use Both When:
Building comprehensive observability, safety, or approval systems that need to monitor all agent operations.
## Alternative Registration Methods
### Programmatic Registration (Advanced)
For dynamic hook registration or when you need to register hooks programmatically:
```python
from crewai.hooks import (
register_before_llm_call_hook,
register_after_tool_call_hook
)
def my_hook(context):
return None
# Register programmatically
register_before_llm_call_hook(my_hook)
# Useful for:
# - Loading hooks from configuration
# - Conditional hook registration
# - Plugin systems
```
**Note:** For most use cases, decorators are cleaner and more maintainable.
## Performance Considerations
1. **Keep Hooks Fast**: Hooks execute on every call - avoid heavy computation
2. **Cache When Possible**: Store expensive validations or lookups
3. **Be Selective**: Use crew-scoped hooks when global hooks aren't needed
4. **Monitor Hook Overhead**: Profile hook execution time in production
5. **Lazy Import**: Import heavy dependencies only when needed
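For example, point 2 ("cache when possible") can be as simple as memoizing the expensive check so the hook itself stays cheap. The domain rule and hook shape below are illustrative, assuming a plain dict stands in for the hook context:

```python
from functools import lru_cache


@lru_cache(maxsize=1024)
def is_allowed_domain(domain: str) -> bool:
    # Stand-in for a slow lookup (DNS check, policy service, database, ...).
    return domain.endswith(".example.com")


def validate_url_hook(context: dict):
    """Before-tool-call style hook: block disallowed domains, else allow."""
    url = context.get("url", "")
    domain = url.split("/")[2] if "://" in url else ""
    if domain and not is_allowed_domain(domain):
        return False  # block the tool call
    return None  # allow


print(validate_url_hook({"url": "https://api.example.com/v1"}))  # None (allowed)
print(validate_url_hook({"url": "https://evil.test/steal"}))     # False (blocked)
```

Repeated calls for the same domain hit the cache instead of the slow lookup.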
## Debugging Hooks
### Enable Debug Logging
```python
import logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
@before_llm_call
def debug_hook(context):
logger.debug(f"LLM call: {context.agent.role}, iteration {context.iterations}")
return None
```
### Hook Execution Order
Hooks execute in registration order. If a before hook returns `False`, subsequent hooks don't execute:
```python
# Register order matters!
register_before_tool_call_hook(hook1) # Executes first
register_before_tool_call_hook(hook2) # Executes second
register_before_tool_call_hook(hook3) # Executes third
# If hook2 returns False:
# - hook1 executed
# - hook2 executed and returned False
# - hook3 NOT executed
# - Tool call blocked
```
## Related Documentation
- [LLM Call Hooks →](/learn/llm-hooks) - Detailed LLM hook documentation
- [Tool Call Hooks →](/learn/tool-hooks) - Detailed tool hook documentation
- [Before and After Kickoff Hooks →](/learn/before-and-after-kickoff-hooks) - Crew lifecycle hooks
- [Human-in-the-Loop →](/learn/human-in-the-loop) - Human input patterns
## Conclusion
Execution hooks provide powerful control over agent runtime behavior. Use them to implement safety guardrails, approval workflows, comprehensive monitoring, and custom business logic. Combined with proper error handling, type safety, and performance considerations, hooks enable production-ready, secure, and observable agent systems.
```
<Tip>
For more details on creating and customizing a manager agent, check out the [Custom Manager Agent documentation](/en/learn/custom-manager-agent).
</Tip>
---
title: Human Feedback in Flows
description: Learn how to integrate human feedback directly into your CrewAI Flows using the @human_feedback decorator
icon: user-check
mode: "wide"
---
## Overview
The `@human_feedback` decorator enables human-in-the-loop (HITL) workflows directly within CrewAI Flows. It allows you to pause flow execution, present output to a human for review, collect their feedback, and optionally route to different listeners based on the feedback outcome.
This is particularly valuable for:
- **Quality assurance**: Review AI-generated content before it's used downstream
- **Decision gates**: Let humans make critical decisions in automated workflows
- **Approval workflows**: Implement approve/reject/revise patterns
- **Interactive refinement**: Collect feedback to improve outputs iteratively
```mermaid
flowchart LR
A[Flow Method] --> B[Output Generated]
B --> C[Human Reviews]
C --> D{Feedback}
D -->|emit specified| E[LLM Collapses to Outcome]
D -->|no emit| F[HumanFeedbackResult]
E --> G["@listen('approved')"]
E --> H["@listen('rejected')"]
F --> I[Next Listener]
```
## Quick Start
Here's the simplest way to add human feedback to a flow:
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback
class SimpleReviewFlow(Flow):
@start()
@human_feedback(message="Please review this content:")
def generate_content(self):
return "This is AI-generated content that needs review."
@listen(generate_content)
def process_feedback(self, result):
print(f"Content: {result.output}")
print(f"Human said: {result.feedback}")
flow = SimpleReviewFlow()
flow.kickoff()
```
When this flow runs, it will:
1. Execute `generate_content` and return the string
2. Display the output to the user with the request message
3. Wait for the user to type feedback (or press Enter to skip)
4. Pass a `HumanFeedbackResult` object to `process_feedback`
## The @human_feedback Decorator
### Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `message` | `str` | Yes | The message shown to the human alongside the method output |
| `emit` | `Sequence[str]` | No | List of possible outcomes. Feedback is collapsed to one of these, which triggers `@listen` decorators |
| `llm` | `str \| BaseLLM` | When `emit` specified | LLM used to interpret feedback and map to an outcome |
| `default_outcome` | `str` | No | Outcome to use if no feedback provided. Must be in `emit` |
| `metadata` | `dict` | No | Additional data for enterprise integrations |
| `provider` | `HumanFeedbackProvider` | No | Custom provider for async/non-blocking feedback. See [Async Human Feedback](#async-human-feedback-non-blocking) |
### Basic Usage (No Routing)
When you don't specify `emit`, the decorator simply collects feedback and passes a `HumanFeedbackResult` to the next listener:
```python Code
@start()
@human_feedback(message="What do you think of this analysis?")
def analyze_data(self):
    return "Analysis results: Revenue up 15%, costs down 8%"

@listen(analyze_data)
def handle_feedback(self, result):
    # result is a HumanFeedbackResult
    print(f"Analysis: {result.output}")
    print(f"Feedback: {result.feedback}")
```
### Routing with emit
When you specify `emit`, the decorator becomes a router. The human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes:
```python Code
@start()
@human_feedback(
    message="Do you approve this content for publication?",
    emit=["approved", "rejected", "needs_revision"],
    llm="gpt-4o-mini",
    default_outcome="needs_revision",
)
def review_content(self):
    return "Draft blog post content here..."

@listen("approved")
def publish(self, result):
    print(f"Publishing! User said: {result.feedback}")

@listen("rejected")
def discard(self, result):
    print(f"Discarding. Reason: {result.feedback}")

@listen("needs_revision")
def revise(self, result):
    print(f"Revising based on: {result.feedback}")
```
<Tip>
The LLM uses structured outputs (function calling) when available to guarantee the response is one of your specified outcomes. This makes routing reliable and predictable.
</Tip>
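Conceptually, the collapsing step maps free-form feedback text onto one of your `emit` values, falling back to `default_outcome` when nothing matches. A naive, LLM-free sketch of that mapping, purely for illustration (`collapse_feedback` is a hypothetical helper; the real implementation delegates to the configured LLM):

```python
def collapse_feedback(feedback: str, outcomes: list[str], default: str) -> str:
    # Illustrative keyword matching; CrewAI routes via the configured LLM instead
    text = feedback.lower()
    for outcome in outcomes:
        # Treat "needs_revision" and "needs revision" alike
        if outcome.replace("_", " ") in text:
            return outcome
    return default  # e.g. when the user just presses Enter

print(collapse_feedback("Approved, ship it!", ["approved", "rejected"], "rejected"))  # approved
```

The LLM-backed version handles paraphrases ("looks great, go ahead") that keyword matching cannot, which is why `llm` is required whenever `emit` is specified.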
## HumanFeedbackResult
The `HumanFeedbackResult` dataclass contains all information about a human feedback interaction:
```python Code
from dataclasses import dataclass
from datetime import datetime
from typing import Any

from crewai.flow.human_feedback import HumanFeedbackResult

# The shape of the dataclass, for reference:
@dataclass
class HumanFeedbackResult:
    output: Any          # The original method output shown to the human
    feedback: str        # The raw feedback text from the human
    outcome: str | None  # The collapsed outcome (if emit was specified)
    timestamp: datetime  # When the feedback was received
    method_name: str     # Name of the decorated method
    metadata: dict       # Any metadata passed to the decorator
```
### Accessing in Listeners
When a listener is triggered by a `@human_feedback` method with `emit`, it receives the `HumanFeedbackResult`:
```python Code
@listen("approved")
def on_approval(self, result: HumanFeedbackResult):
    print(f"Original output: {result.output}")
    print(f"User feedback: {result.feedback}")
    print(f"Outcome: {result.outcome}")  # "approved"
    print(f"Received at: {result.timestamp}")
```
## Accessing Feedback History
The `Flow` class provides two attributes for accessing human feedback:
### last_human_feedback
Returns the most recent `HumanFeedbackResult`:
```python Code
@listen(some_method)
def check_feedback(self):
    if self.last_human_feedback:
        print(f"Last feedback: {self.last_human_feedback.feedback}")
```
### human_feedback_history
A list of all `HumanFeedbackResult` objects collected during the flow:
```python Code
@listen(final_step)
def summarize(self):
    print(f"Total feedback collected: {len(self.human_feedback_history)}")
    for i, fb in enumerate(self.human_feedback_history):
        print(f"{i+1}. {fb.method_name}: {fb.outcome or 'no routing'}")
```
<Warning>
Each `HumanFeedbackResult` is appended to `human_feedback_history`, so multiple feedback steps won't overwrite each other. Use this list to access all feedback collected during the flow.
</Warning>
## Complete Example: Content Approval Workflow
Here's a full example implementing a content review and approval workflow:
<CodeGroup>
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult
from pydantic import BaseModel


class ContentState(BaseModel):
    topic: str = ""
    draft: str = ""
    final_content: str = ""
    revision_count: int = 0


class ContentApprovalFlow(Flow[ContentState]):
    """A flow that generates content and gets human approval."""

    @start()
    def get_topic(self):
        self.state.topic = input("What topic should I write about? ")
        return self.state.topic

    @listen(get_topic)
    def generate_draft(self, topic):
        # In real use, this would call an LLM
        self.state.draft = f"# {topic}\n\nThis is a draft about {topic}..."
        return self.state.draft

    @listen(generate_draft)
    @human_feedback(
        message="Please review this draft. Reply 'approved', 'rejected', or provide revision feedback:",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def review_draft(self, draft):
        return draft

    @listen("approved")
    def publish_content(self, result: HumanFeedbackResult):
        self.state.final_content = result.output
        print("\n✅ Content approved and published!")
        print(f"Reviewer comment: {result.feedback}")
        return "published"

    @listen("rejected")
    def handle_rejection(self, result: HumanFeedbackResult):
        print("\n❌ Content rejected")
        print(f"Reason: {result.feedback}")
        return "rejected"

    @listen("needs_revision")
    def revise_content(self, result: HumanFeedbackResult):
        self.state.revision_count += 1
        print(f"\n📝 Revision #{self.state.revision_count} requested")
        print(f"Feedback: {result.feedback}")
        # In a real flow, you might loop back to generate_draft
        # For this example, we just acknowledge
        return "revision_requested"


# Run the flow
flow = ContentApprovalFlow()
result = flow.kickoff()
print(f"\nFlow completed. Revisions requested: {flow.state.revision_count}")
```
```text Output
What topic should I write about? AI Safety
==================================================
OUTPUT FOR REVIEW:
==================================================
# AI Safety
This is a draft about AI Safety...
==================================================
Please review this draft. Reply 'approved', 'rejected', or provide revision feedback:
(Press Enter to skip, or type your feedback)
Your feedback: Looks good, approved!
✅ Content approved and published!
Reviewer comment: Looks good, approved!
Flow completed. Revisions requested: 0
```
</CodeGroup>
## Combining with Other Decorators
The `@human_feedback` decorator works with other flow decorators. Place it as the innermost decorator (closest to the function):
```python Code
# Correct: @human_feedback is innermost (closest to the function)
@start()
@human_feedback(message="Review this:")
def my_start_method(self):
    return "content"

@listen(other_method)
@human_feedback(message="Review this too:")
def my_listener(self, data):
    return f"processed: {data}"
```
<Tip>
Place `@human_feedback` as the innermost decorator (last/closest to the function) so it wraps the method directly and can capture the return value before passing to the flow system.
</Tip>
## Best Practices
### 1. Write Clear Request Messages
The `message` parameter is what the human sees. Make it actionable:
```python Code
# ✅ Good - clear and actionable
@human_feedback(message="Does this summary accurately capture the key points? Reply 'yes' or explain what's missing:")
# ❌ Bad - vague
@human_feedback(message="Review this:")
```
### 2. Choose Meaningful Outcomes
When using `emit`, pick outcomes that map naturally to human responses:
```python Code
# ✅ Good - natural language outcomes
emit=["approved", "rejected", "needs_more_detail"]
# ❌ Bad - technical or unclear
emit=["state_1", "state_2", "state_3"]
```
### 3. Always Provide a Default Outcome
Use `default_outcome` to handle cases where users press Enter without typing:
```python Code
@human_feedback(
    message="Approve? (press Enter to request revision)",
    emit=["approved", "needs_revision"],
    llm="gpt-4o-mini",
    default_outcome="needs_revision",  # Safe default
)
```
### 4. Use Feedback History for Audit Trails
Access `human_feedback_history` to create audit logs:
```python Code
@listen(final_step)
def create_audit_log(self):
    log = []
    for fb in self.human_feedback_history:
        log.append({
            "step": fb.method_name,
            "outcome": fb.outcome,
            "feedback": fb.feedback,
            "timestamp": fb.timestamp.isoformat(),
        })
    return log
```
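If you persist that log, one JSON object per line (JSON Lines) keeps it append-friendly and easy to grep. A minimal sketch, with hypothetical entries shaped like the dicts built in `create_audit_log` and a hypothetical `dump_audit_log` helper:

```python
import json

# Hypothetical entries shaped like the dicts returned by create_audit_log
audit_log = [
    {"step": "review_draft", "outcome": "approved",
     "feedback": "Looks good", "timestamp": "2026-01-06T16:12:34+00:00"},
]

def dump_audit_log(entries):
    # One JSON object per line; append each new entry as its own line
    return "\n".join(json.dumps(entry) for entry in entries)

print(dump_audit_log(audit_log))
```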
### 5. Handle Both Routed and Non-Routed Feedback
When designing flows, consider whether you need routing:
| Scenario | Use |
|----------|-----|
| Simple review, just need the feedback text | No `emit` |
| Need to branch to different paths based on response | Use `emit` |
| Approval gates with approve/reject/revise | Use `emit` |
| Collecting comments for logging only | No `emit` |
## Async Human Feedback (Non-Blocking)
By default, `@human_feedback` blocks execution waiting for console input. For production applications, you may need **async/non-blocking** feedback that integrates with external systems like Slack, email, webhooks, or APIs.
### The Provider Abstraction
Use the `provider` parameter to specify a custom feedback collection strategy:
```python Code
from crewai.flow import Flow, start, listen, human_feedback, HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext


class WebhookProvider(HumanFeedbackProvider):
    """Provider that pauses flow and waits for webhook callback."""

    def __init__(self, webhook_url: str):
        self.webhook_url = webhook_url

    def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
        # Notify external system (e.g., send Slack message, create ticket)
        self.send_notification(context)

        # Pause execution - framework handles persistence automatically
        raise HumanFeedbackPending(
            context=context,
            callback_info={"webhook_url": f"{self.webhook_url}/{context.flow_id}"}
        )


class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Review this content:",
        emit=["approved", "rejected"],
        llm="gpt-4o-mini",
        provider=WebhookProvider("https://myapp.com/api"),
    )
    def generate_content(self):
        return "AI-generated content..."

    @listen("approved")
    def publish(self, result):
        return "Published!"
```
<Tip>
The flow framework **automatically persists state** when `HumanFeedbackPending` is raised. Your provider only needs to notify the external system and raise the exception—no manual persistence calls required.
</Tip>
### Handling Paused Flows
When using an async provider, `kickoff()` returns a `HumanFeedbackPending` object instead of raising an exception:
```python Code
flow = ReviewFlow()
result = flow.kickoff()

if isinstance(result, HumanFeedbackPending):
    # Flow is paused, state is automatically persisted
    print(f"Waiting for feedback at: {result.callback_info['webhook_url']}")
    print(f"Flow ID: {result.context.flow_id}")
else:
    # Normal completion
    print(f"Flow completed: {result}")
```
### Resuming a Paused Flow
When feedback arrives (e.g., via webhook), resume the flow:
```python Code
# Sync handler:
def handle_feedback_webhook(flow_id: str, feedback: str):
    flow = ReviewFlow.from_pending(flow_id)
    result = flow.resume(feedback)
    return result

# Async handler (FastAPI, aiohttp, etc.):
async def handle_feedback_webhook(flow_id: str, feedback: str):
    flow = ReviewFlow.from_pending(flow_id)
    result = await flow.resume_async(feedback)
    return result
```
### Key Types
| Type | Description |
|------|-------------|
| `HumanFeedbackProvider` | Protocol for custom feedback providers |
| `PendingFeedbackContext` | Contains all info needed to resume a paused flow |
| `HumanFeedbackPending` | Returned by `kickoff()` when flow is paused for feedback |
| `ConsoleProvider` | Default blocking console input provider |
### PendingFeedbackContext
The context contains everything needed to resume:
```python Code
@dataclass
class PendingFeedbackContext:
    flow_id: str            # Unique identifier for this flow execution
    flow_class: str         # Fully qualified class name
    method_name: str        # Method that triggered feedback
    method_output: Any      # Output shown to the human
    message: str            # The request message
    emit: list[str] | None  # Possible outcomes for routing
    default_outcome: str | None
    metadata: dict          # Custom metadata
    llm: str | None         # LLM for outcome collapsing
    requested_at: datetime
```
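Providers typically need to stash enough of this context to route the eventual reply back to the right flow. A sketch of the dataclass-to-dict round trip you might store keyed by `flow_id`, using a simplified stand-in class (`PendingCtx` is hypothetical, not the real `PendingFeedbackContext`):

```python
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class PendingCtx:  # simplified stand-in for PendingFeedbackContext
    flow_id: str
    method_name: str
    method_output: Any
    message: str
    requested_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

ctx = PendingCtx("flow-123", "review_draft", "Draft...", "Approve this?")
record = asdict(ctx)  # store this in your DB, keyed by record["flow_id"]
```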
### Complete Async Flow Example
```python Code
from crewai.flow import (
    Flow, start, listen, human_feedback,
    HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext
)


class SlackNotificationProvider(HumanFeedbackProvider):
    """Provider that sends Slack notifications and pauses for async feedback."""

    def __init__(self, channel: str):
        self.channel = channel

    def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
        # Send Slack notification (implement your own)
        slack_thread_id = self.post_to_slack(
            channel=self.channel,
            message=f"Review needed:\n\n{context.method_output}\n\n{context.message}",
        )

        # Pause execution - framework handles persistence automatically
        raise HumanFeedbackPending(
            context=context,
            callback_info={
                "slack_channel": self.channel,
                "thread_id": slack_thread_id,
            }
        )


class ContentPipeline(Flow):
    @start()
    @human_feedback(
        message="Approve this content for publication?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
        provider=SlackNotificationProvider("#content-reviews"),
    )
    def generate_content(self):
        return "AI-generated blog post content..."

    @listen("approved")
    def publish(self, result):
        print(f"Publishing! Reviewer said: {result.feedback}")
        return {"status": "published"}

    @listen("rejected")
    def archive(self, result):
        print(f"Archived. Reason: {result.feedback}")
        return {"status": "archived"}

    @listen("needs_revision")
    def queue_revision(self, result):
        print(f"Queued for revision: {result.feedback}")
        return {"status": "revision_needed"}


# Starting the flow (will pause and wait for Slack response)
def start_content_pipeline():
    flow = ContentPipeline()
    result = flow.kickoff()
    if isinstance(result, HumanFeedbackPending):
        return {"status": "pending", "flow_id": result.context.flow_id}
    return result


# Resuming when Slack webhook fires (sync handler)
def on_slack_feedback(flow_id: str, slack_message: str):
    flow = ContentPipeline.from_pending(flow_id)
    result = flow.resume(slack_message)
    return result


# If your handler is async (FastAPI, aiohttp, Slack Bolt async, etc.)
async def on_slack_feedback_async(flow_id: str, slack_message: str):
    flow = ContentPipeline.from_pending(flow_id)
    result = await flow.resume_async(slack_message)
    return result
```
<Warning>
If you're using an async web framework (FastAPI, aiohttp, Slack Bolt async mode), use `await flow.resume_async()` instead of `flow.resume()`. Calling `resume()` from within a running event loop will raise a `RuntimeError`.
</Warning>
### Best Practices for Async Feedback
1. **Check the return type**: `kickoff()` returns `HumanFeedbackPending` when paused—no try/except needed
2. **Use the right resume method**: Use `resume()` in sync code, `await resume_async()` in async code
3. **Store callback info**: Use `callback_info` to store webhook URLs, ticket IDs, etc.
4. **Implement idempotency**: Your resume handler should be idempotent for safety
5. **Automatic persistence**: State is automatically saved when `HumanFeedbackPending` is raised and uses `SQLiteFlowPersistence` by default
6. **Custom persistence**: Pass a custom persistence instance to `from_pending()` if needed
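Point 4 (idempotency) can be as simple as remembering which flows have already been resumed, so duplicate webhook deliveries are ignored. A sketch with a hypothetical in-memory guard (`_resumed` and `resume_once` are illustrative; use your database or a distributed lock in production):

```python
_resumed: set[str] = set()

def resume_once(flow_id: str, feedback: str, resume_fn):
    # Skip duplicate webhook deliveries for the same flow
    if flow_id in _resumed:
        return {"status": "already_resumed", "flow_id": flow_id}
    _resumed.add(flow_id)
    return resume_fn(flow_id, feedback)

# resume_fn would wrap ReviewFlow.from_pending(...).resume(...) in real use
result = resume_once("flow-123", "approved", lambda fid, fb: {"status": "resumed"})
dup = resume_once("flow-123", "approved", lambda fid, fb: {"status": "resumed"})
```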
## Related Documentation
- [Flows Overview](/en/concepts/flows) - Learn about CrewAI Flows
- [Flow State Management](/en/guides/flows/mastering-flow-state) - Managing state in flows
- [Flow Persistence](/en/concepts/flows#persistence) - Persisting flow state
- [Routing with @router](/en/concepts/flows#router) - More about conditional routing
- [Human Input on Execution](/en/learn/human-input-on-execution) - Task-level human input

---
Human-in-the-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. CrewAI provides multiple ways to implement HITL depending on your needs.
## Choosing Your HITL Approach
CrewAI offers two main approaches for implementing human-in-the-loop workflows:
| Approach | Best For | Integration |
|----------|----------|-------------|
| **Flow-based** (`@human_feedback` decorator) | Local development, console-based review, synchronous workflows | [Human Feedback in Flows](/en/learn/human-feedback-in-flows) |
| **Webhook-based** (Enterprise) | Production deployments, async workflows, external integrations (Slack, Teams, etc.) | This guide |
<Tip>
If you're building flows and want to add human review steps with routing based on feedback, check out the [Human Feedback in Flows](/en/learn/human-feedback-in-flows) guide for the `@human_feedback` decorator.
</Tip>
## Setting Up Webhook-Based HITL Workflows
<Steps>
<Step title="Configure Your Task">

---
CrewAI provides the ability to kickoff a crew asynchronously, allowing you to start the crew execution in a non-blocking manner.
This feature is particularly useful when you want to run multiple crews concurrently or when you need to perform other tasks while the crew is executing.
CrewAI offers two approaches for async execution:
| Method | Type | Description |
|--------|------|-------------|
| `akickoff()` | Native async | True async/await throughout the entire execution chain |
| `kickoff_async()` | Thread-based | Wraps synchronous execution in `asyncio.to_thread` |
<Note>
For high-concurrency workloads, `akickoff()` is recommended as it uses native async for task execution, memory operations, and knowledge retrieval.
</Note>
## Native Async Execution with `akickoff()`
The `akickoff()` method provides true native async execution, using async/await throughout the entire execution chain including task execution, memory operations, and knowledge queries.
### Method Signature
```python Code
async def akickoff(self, inputs: dict) -> CrewOutput:
```
### Parameters
- `inputs` (dict): A dictionary containing the input data required for the tasks.
### Returns
- `CrewOutput`: An object representing the result of the crew execution.
### Example: Native Async Crew Execution
```python Code
import asyncio
from crewai import Crew, Agent, Task

# Create an agent
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

# Create a task
data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

# Create a crew
analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task]
)

# Native async execution
async def main():
    result = await analysis_crew.akickoff(inputs={"ages": [25, 30, 35, 40, 45]})
    print("Crew Result:", result)

asyncio.run(main())
```
### Example: Multiple Native Async Crews
Run multiple crews concurrently using `asyncio.gather()` with native async:
```python Code
import asyncio
from crewai import Crew, Agent, Task

coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

task_1 = Task(
    description="Analyze the first dataset and calculate the average age. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

task_2 = Task(
    description="Analyze the second dataset and calculate the average age. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

crew_1 = Crew(agents=[coding_agent], tasks=[task_1])
crew_2 = Crew(agents=[coding_agent], tasks=[task_2])

async def main():
    results = await asyncio.gather(
        crew_1.akickoff(inputs={"ages": [25, 30, 35, 40, 45]}),
        crew_2.akickoff(inputs={"ages": [20, 22, 24, 28, 30]})
    )
    for i, result in enumerate(results, 1):
        print(f"Crew {i} Result:", result)

asyncio.run(main())
```
### Example: Native Async for Multiple Inputs
Use `akickoff_for_each()` to execute your crew against multiple inputs concurrently with native async:
```python Code
import asyncio
from crewai import Crew, Agent, Task

coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

data_analysis_task = Task(
    description="Analyze the dataset and calculate the average age. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task]
)

async def main():
    datasets = [
        {"ages": [25, 30, 35, 40, 45]},
        {"ages": [20, 22, 24, 28, 30]},
        {"ages": [30, 35, 40, 45, 50]}
    ]
    results = await analysis_crew.akickoff_for_each(datasets)
    for i, result in enumerate(results, 1):
        print(f"Dataset {i} Result:", result)

asyncio.run(main())
```
## Thread-Based Async with `kickoff_async()`
The `kickoff_async()` method provides async execution by wrapping the synchronous `kickoff()` in a thread. This is useful for simpler async integration or backward compatibility.
### Method Signature
```python Code
async def kickoff_async(self, inputs: dict) -> CrewOutput:
```
### Parameters
- `inputs` (dict): A dictionary containing the input data required for the tasks.
### Returns
- `CrewOutput`: An object representing the result of the crew execution.
### Example: Thread-Based Async Execution
```python Code
import asyncio
from crewai import Crew, Agent, Task

coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task]
)

async def async_crew_execution():
    result = await analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    print("Crew Result:", result)

asyncio.run(async_crew_execution())
```
### Example: Multiple Thread-Based Async Crews
```python Code
import asyncio
from crewai import Crew, Agent, Task

coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

task_1 = Task(
    description="Analyze the first dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

task_2 = Task(
    description="Analyze the second dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

crew_1 = Crew(agents=[coding_agent], tasks=[task_1])
crew_2 = Crew(agents=[coding_agent], tasks=[task_2])

async def async_multiple_crews():
    result_1 = crew_1.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    result_2 = crew_2.kickoff_async(inputs={"ages": [20, 22, 24, 28, 30]})
    results = await asyncio.gather(result_1, result_2)
    for i, result in enumerate(results, 1):
        print(f"Crew {i} Result:", result)

asyncio.run(async_multiple_crews())
```
## Async Streaming
Both async methods support streaming when `stream=True` is set on the crew:
```python Code
import asyncio
from crewai import Crew, Agent, Task

agent = Agent(
    role="Researcher",
    goal="Research and summarize topics",
    backstory="You are an expert researcher."
)

task = Task(
    description="Research the topic: {topic}",
    agent=agent,
    expected_output="A comprehensive summary of the topic."
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    stream=True  # Enable streaming
)

async def main():
    streaming_output = await crew.akickoff(inputs={"topic": "AI trends in 2024"})

    # Async iteration over streaming chunks
    async for chunk in streaming_output:
        print(f"Chunk: {chunk.content}")

    # Access final result after streaming completes
    result = streaming_output.result
    print(f"Final result: {result.raw}")

asyncio.run(main())
```
## Potential Use Cases
- **Parallel Content Generation**: Kickoff multiple independent crews asynchronously, each responsible for generating content on different topics. For example, one crew might research and draft an article on AI trends, while another crew generates social media posts about a new product launch.
- **Concurrent Market Research Tasks**: Launch multiple crews asynchronously to conduct market research in parallel. One crew might analyze industry trends, while another examines competitor strategies, and yet another evaluates consumer sentiment.
- **Independent Travel Planning Modules**: Execute separate crews to independently plan different aspects of a trip. One crew might handle flight options, another handles accommodation, and a third plans activities.
## Choosing Between `akickoff()` and `kickoff_async()`
| Feature | `akickoff()` | `kickoff_async()` |
|---------|--------------|-------------------|
| Execution model | Native async/await | Thread-based wrapper |
| Task execution | Async with `aexecute_sync()` | Sync in thread pool |
| Memory operations | Async | Sync in thread pool |
| Knowledge retrieval | Async | Sync in thread pool |
| Best for | High-concurrency, I/O-bound workloads | Simple async integration |
| Streaming support | Yes | Yes |
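The thread-based column boils down to `asyncio.to_thread`: the event loop stays responsive while the synchronous work runs in a worker thread. A small standalone sketch of that pattern (no CrewAI required; `slow_sync_work` is a stand-in for a synchronous `kickoff()`):

```python
import asyncio

def slow_sync_work(x: int) -> int:
    # Stand-in for blocking, synchronous work such as kickoff()
    return x * 2

async def main() -> int:
    # Run the sync function in a worker thread so the event loop is not blocked;
    # the docs above describe kickoff_async() as wrapping kickoff() this way
    return await asyncio.to_thread(slow_sync_work, 21)

print(asyncio.run(main()))  # 42
```

Native async (`akickoff()`) avoids the thread hop entirely, which is why it scales better for I/O-bound, high-concurrency workloads.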

---
New file: `docs/en/learn/llm-hooks.mdx`
---
title: LLM Call Hooks
description: Learn how to use LLM call hooks to intercept, modify, and control language model interactions in CrewAI
mode: "wide"
---
LLM Call Hooks provide fine-grained control over language model interactions during agent execution. These hooks allow you to intercept LLM calls, modify prompts, transform responses, implement approval gates, and add custom logging or monitoring.
## Overview
LLM hooks are executed at two critical points:
- **Before LLM Call**: Modify messages, validate inputs, or block execution
- **After LLM Call**: Transform responses, sanitize outputs, or modify conversation history
## Hook Types
### Before LLM Call Hooks
Executed before every LLM call, these hooks can:
- Inspect and modify messages sent to the LLM
- Block LLM execution based on conditions
- Implement rate limiting or approval gates
- Add context or system messages
- Log request details
**Signature:**
```python
def before_hook(context: LLMCallHookContext) -> bool | None:
    # Return False to block execution
    # Return True or None to allow execution
    ...
```
### After LLM Call Hooks
Executed after every LLM call, these hooks can:
- Modify or sanitize LLM responses
- Add metadata or formatting
- Log response details
- Update conversation history
- Implement content filtering
**Signature:**
```python
def after_hook(context: LLMCallHookContext) -> str | None:
    # Return modified response string
    # Return None to keep original response
    ...
```
## LLM Hook Context
The `LLMCallHookContext` object provides comprehensive access to execution state:
```python
class LLMCallHookContext:
    executor: CrewAgentExecutor  # Full executor reference
    messages: list               # Mutable message list
    agent: Agent                 # Current agent
    task: Task                   # Current task
    crew: Crew                   # Crew instance
    llm: BaseLLM                 # LLM instance
    iterations: int              # Current iteration count
    response: str | None         # LLM response (after hooks only)
```
### Modifying Messages
**Important:** Always modify messages in-place:
```python
# ✅ Correct - modify in-place
def add_context(context: LLMCallHookContext) -> None:
    context.messages.append({"role": "system", "content": "Be concise"})

# ❌ Wrong - replaces list reference
def wrong_approach(context: LLMCallHookContext) -> None:
    context.messages = [{"role": "system", "content": "Be concise"}]
```
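The distinction matters because the hook receives a reference to the executor's own list; rebinding the parameter name only changes the hook's local variable. A plain-Python demonstration of the difference:

```python
def mutate_in_place(messages: list) -> None:
    # Mutation through the shared reference: the caller sees this
    messages.append({"role": "system", "content": "Be concise"})

def rebind(messages: list) -> None:
    # Rebinding the local name: the caller's list is untouched
    messages = [{"role": "system", "content": "Be concise"}]

msgs = [{"role": "user", "content": "hi"}]
mutate_in_place(msgs)
rebind(msgs)
print(len(msgs))  # 2: the append is visible to the caller, the rebind is not
```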
## Registration Methods
### 1. Global Hook Registration
Register hooks that apply to all LLM calls across all crews:
```python
from crewai.hooks import register_before_llm_call_hook, register_after_llm_call_hook

def log_llm_call(context):
    print(f"LLM call by {context.agent.role} at iteration {context.iterations}")
    return None  # Allow execution

register_before_llm_call_hook(log_llm_call)
```
### 2. Decorator-Based Registration
Use decorators for cleaner syntax:
```python
from crewai.hooks import before_llm_call, after_llm_call

@before_llm_call
def validate_iteration_count(context):
    if context.iterations > 10:
        print("⚠️ Exceeded maximum iterations")
        return False  # Block execution
    return None

@after_llm_call
def sanitize_response(context):
    if context.response and "API_KEY" in context.response:
        return context.response.replace("API_KEY", "[REDACTED]")
    return None
```
### 3. Crew-Scoped Hooks
Register hooks for a specific crew instance:
```python
from crewai import Crew, Process
from crewai.hooks import after_llm_call_crew, before_llm_call_crew
from crewai.project import CrewBase, crew

@CrewBase
class MyProjCrew:
    @before_llm_call_crew
    def validate_inputs(self, context):
        # Only applies to this crew
        if context.iterations == 0:
            print(f"Starting task: {context.task.description}")
        return None

    @after_llm_call_crew
    def log_responses(self, context):
        # Crew-specific response logging
        if context.response:
            print(f"Response length: {len(context.response)}")
        return None

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True
        )
```
## Common Use Cases
### 1. Iteration Limiting
```python
@before_llm_call
def limit_iterations(context: LLMCallHookContext) -> bool | None:
    max_iterations = 15
    if context.iterations > max_iterations:
        print(f"⛔ Blocked: Exceeded {max_iterations} iterations")
        return False  # Block execution
    return None
```
### 2. Human Approval Gate
```python
@before_llm_call
def require_approval(context: LLMCallHookContext) -> bool | None:
    if context.iterations > 5:
        response = context.request_human_input(
            prompt=f"Iteration {context.iterations}: Approve LLM call?",
            default_message="Press Enter to approve, or type 'no' to block:"
        )
        if response.lower() == "no":
            print("🚫 LLM call blocked by user")
            return False
    return None
```
### 3. Adding System Context
```python
@before_llm_call
def add_guardrails(context: LLMCallHookContext) -> None:
    # Add safety guidelines to every LLM call
    context.messages.append({
        "role": "system",
        "content": "Ensure responses are factual and cite sources when possible."
    })
    return None
```
### 4. Response Sanitization
```python
import re

@after_llm_call
def sanitize_sensitive_data(context: LLMCallHookContext) -> str | None:
    if not context.response:
        return None
    # Remove sensitive patterns
    sanitized = context.response
    sanitized = re.sub(r'\b\d{3}-\d{2}-\d{4}\b', '[SSN-REDACTED]', sanitized)
    sanitized = re.sub(r'\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b', '[CARD-REDACTED]', sanitized)
    return sanitized
```
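The regex patterns above can be exercised on their own, independent of any hook wiring. `redact` below is a hypothetical helper introduced only for this illustration:

```python
import re

def redact(text: str) -> str:
    # Same substitution patterns as the hook above, applied to a plain string
    text = re.sub(r'\b\d{3}-\d{2}-\d{4}\b', '[SSN-REDACTED]', text)
    text = re.sub(r'\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b', '[CARD-REDACTED]', text)
    return text

print(redact("SSN 123-45-6789, card 4111 1111 1111 1111"))
# → SSN [SSN-REDACTED], card [CARD-REDACTED]
```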
### 5. Cost Tracking
```python
import tiktoken

@before_llm_call
def track_token_usage(context: LLMCallHookContext) -> None:
    encoding = tiktoken.get_encoding("cl100k_base")
    total_tokens = sum(
        len(encoding.encode(msg.get("content", "")))
        for msg in context.messages
    )
    print(f"📊 Input tokens: ~{total_tokens}")
    return None

@after_llm_call
def track_response_tokens(context: LLMCallHookContext) -> None:
    if context.response:
        encoding = tiktoken.get_encoding("cl100k_base")
        tokens = len(encoding.encode(context.response))
        print(f"📊 Response tokens: ~{tokens}")
    return None
```
### 6. Debug Logging
```python
@before_llm_call
def debug_request(context: LLMCallHookContext) -> None:
    print(f"""
🔍 LLM Call Debug:
- Agent: {context.agent.role}
- Task: {context.task.description[:50]}...
- Iteration: {context.iterations}
- Message Count: {len(context.messages)}
- Last Message: {context.messages[-1] if context.messages else 'None'}
""")
    return None

@after_llm_call
def debug_response(context: LLMCallHookContext) -> None:
    if context.response:
        print(f"✅ Response Preview: {context.response[:100]}...")
    return None
```
## Hook Management
### Unregistering Hooks
```python
from crewai.hooks import (
    register_before_llm_call_hook,
    unregister_before_llm_call_hook,
    unregister_after_llm_call_hook,
)

# Unregister specific hook
def my_hook(context):
    ...

register_before_llm_call_hook(my_hook)

# Later...
unregister_before_llm_call_hook(my_hook)  # Returns True if found
```
### Clearing Hooks
```python
from crewai.hooks import (
    clear_before_llm_call_hooks,
    clear_after_llm_call_hooks,
    clear_all_llm_call_hooks,
)

# Clear specific hook type
count = clear_before_llm_call_hooks()
print(f"Cleared {count} before hooks")

# Clear all LLM hooks
before_count, after_count = clear_all_llm_call_hooks()
print(f"Cleared {before_count} before and {after_count} after hooks")
```
### Listing Registered Hooks
```python
from crewai.hooks import (
    get_before_llm_call_hooks,
    get_after_llm_call_hooks,
)

# Get current hooks
before_hooks = get_before_llm_call_hooks()
after_hooks = get_after_llm_call_hooks()
print(f"Registered: {len(before_hooks)} before, {len(after_hooks)} after")
```
## Advanced Patterns
### Conditional Hook Execution
```python
@before_llm_call
def conditional_blocking(context: LLMCallHookContext) -> bool | None:
    # Only block for specific agents
    if context.agent.role == "researcher" and context.iterations > 10:
        return False

    # Only block for specific tasks
    if "sensitive" in context.task.description.lower() and context.iterations > 5:
        return False

    return None
```
### Context-Aware Modifications
```python
@before_llm_call
def adaptive_prompting(context: LLMCallHookContext) -> None:
    # Add different context based on iteration
    if context.iterations == 0:
        context.messages.append({
            "role": "system",
            "content": "Start with a high-level overview."
        })
    elif context.iterations > 3:
        context.messages.append({
            "role": "system",
            "content": "Focus on specific details and provide examples."
        })
    return None
```
### Chaining Hooks
```python
# Multiple hooks execute in registration order
@before_llm_call
def first_hook(context):
    print("1. First hook executed")
    return None

@before_llm_call
def second_hook(context):
    print("2. Second hook executed")
    return None

@before_llm_call
def blocking_hook(context):
    if context.iterations > 10:
        print("3. Blocking hook - execution stopped")
        return False  # Subsequent hooks won't execute
    print("3. Blocking hook - execution allowed")
    return None
```
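The short-circuit behavior above can be sketched in plain Python. This is only an illustration of the documented semantics, not CrewAI's actual executor code:

```python
def run_before_hooks(hooks, context) -> bool:
    """Run before-hooks in registration order; an explicit False blocks."""
    for hook in hooks:
        if hook(context) is False:
            return False  # subsequent hooks never run
    return True

calls = []
hooks = [
    lambda ctx: calls.append("first"),          # list.append returns None -> continue
    lambda ctx: False,                          # blocks execution
    lambda ctx: calls.append("never reached"),  # skipped
]
result = run_before_hooks(hooks, context={})
print(result, calls)
# → False ['first']
```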
## Best Practices
1. **Keep Hooks Focused**: Each hook should have a single responsibility
2. **Avoid Heavy Computation**: Hooks execute on every LLM call
3. **Handle Errors Gracefully**: Use try-except to prevent hook failures from breaking execution
4. **Use Type Hints**: Leverage `LLMCallHookContext` for better IDE support
5. **Document Hook Behavior**: Especially for blocking conditions
6. **Test Hooks Independently**: Unit test hooks before using in production
7. **Clear Hooks in Tests**: Use `clear_all_llm_call_hooks()` between test runs
8. **Modify In-Place**: Always modify `context.messages` in-place, never replace
## Error Handling
```python
@before_llm_call
def safe_hook(context: LLMCallHookContext) -> bool | None:
    try:
        # Your hook logic; `some_condition` is a placeholder for your own check
        some_condition = context.iterations > 20
        if some_condition:
            return False
    except Exception as e:
        print(f"⚠️ Hook error: {e}")
        # Decide: allow or block on error
    return None  # Allow execution despite error
```
## Type Safety
```python
from crewai.hooks import (
    AfterLLMCallHookType,
    BeforeLLMCallHookType,
    LLMCallHookContext,
    register_after_llm_call_hook,
    register_before_llm_call_hook,
)

# Explicit type annotations
def my_before_hook(context: LLMCallHookContext) -> bool | None:
    return None

def my_after_hook(context: LLMCallHookContext) -> str | None:
    return None

# Type-safe registration
register_before_llm_call_hook(my_before_hook)
register_after_llm_call_hook(my_after_hook)
```
## Troubleshooting
### Hook Not Executing
- Verify hook is registered before crew execution
- Check if previous hook returned `False` (blocks subsequent hooks)
- Ensure hook signature matches expected type
### Message Modifications Not Persisting
- Use in-place modifications: `context.messages.append()`
- Don't replace the list: `context.messages = []`
### Response Modifications Not Working
- Return the modified string from after hooks
- Returning `None` keeps the original response
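The return-value semantics for after hooks can be sketched the same way. This is illustrative only; `apply_after_hooks` is a hypothetical helper, not a CrewAI API:

```python
def apply_after_hooks(response: str, hooks) -> str:
    # A returned string replaces the response; None leaves it unchanged
    for hook in hooks:
        result = hook(response)
        if result is not None:
            response = result
    return response

hooks = [
    lambda r: None,                               # keeps original
    lambda r: r.replace("secret", "[REDACTED]"),  # returns modified string
]
final = apply_after_hooks("a secret value", hooks)
print(final)
# → a [REDACTED] value
```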
## Conclusion
LLM Call Hooks provide powerful capabilities for controlling and monitoring language model interactions in CrewAI. Use them to implement safety guardrails, approval gates, logging, cost tracking, and response sanitization. Combined with proper error handling and type safety, hooks enable robust and production-ready agent systems.

View File

@@ -1,7 +1,7 @@
---
title: "Strategic LLM Selection Guide"
description: "Strategic framework for choosing the right LLM for your CrewAI AI agents and writing effective task and agent definitions"
icon: "brain-circuit"
mode: "wide"
---
@@ -10,23 +10,35 @@ mode: "wide"
Rather than prescriptive model recommendations, we advocate for a **thinking framework** that helps you make informed decisions based on your specific use case, constraints, and requirements. The LLM landscape evolves rapidly, with new models emerging regularly and existing ones being updated frequently. What matters most is developing a systematic approach to evaluation that remains relevant regardless of which specific models are available.
<Note>
This guide focuses on strategic thinking rather than specific model recommendations, as the LLM landscape evolves rapidly.
</Note>
## Quick Decision Framework
<Steps>
<Step title="Analyze Your Tasks">
Begin by deeply understanding what your tasks actually require. Consider the cognitive complexity involved, the depth of reasoning needed, the format of expected outputs, and the amount of context the model will need to process. This foundational analysis will guide every subsequent decision.
</Step>
<Step title="Map Model Capabilities">
Once you understand your requirements, map them to model strengths. Different model families excel at different types of work; some are optimized for reasoning and analysis, others for creativity and content generation, and others for speed and efficiency.
</Step>
<Step title="Consider Constraints">
Factor in your real-world operational constraints including budget limitations, latency requirements, data privacy needs, and infrastructure capabilities. The theoretically best model may not be the practically best choice for your situation.
</Step>
<Step title="Test and Iterate">
Start with reliable, well-understood models and optimize based on actual performance in your specific use case. Real-world results often differ from theoretical benchmarks, so empirical testing is crucial.
</Step>
</Steps>
@@ -43,6 +55,7 @@ The most critical step in LLM selection is understanding what your task actually
- **Complex Tasks** require multi-step reasoning, strategic thinking, and the ability to handle ambiguous or incomplete information. These might involve analyzing multiple data sources, developing comprehensive strategies, or solving problems that require breaking down into smaller components. The model needs to maintain context across multiple reasoning steps and often must make inferences that aren't explicitly stated.
- **Creative Tasks** demand a different type of cognitive capability focused on generating novel, engaging, and contextually appropriate content. This includes storytelling, marketing copy creation, and creative problem-solving. The model needs to understand nuance, tone, and audience while producing content that feels authentic and engaging rather than formulaic.
</Tab>
<Tab title="Output Requirements">
@@ -51,6 +64,7 @@ The most critical step in LLM selection is understanding what your task actually
- **Creative Content** outputs demand a balance of technical competence and creative flair. The model needs to understand audience, tone, and brand voice while producing content that engages readers and achieves specific communication goals. Quality here is often subjective and requires models that can adapt their writing style to different contexts and purposes.
- **Technical Content** sits between structured data and creative content, requiring both precision and clarity. Documentation, code generation, and technical analysis need to be accurate and comprehensive while remaining accessible to the intended audience. The model must understand complex technical concepts and communicate them effectively.
</Tab>
<Tab title="Context Needs">
@@ -59,6 +73,7 @@ The most critical step in LLM selection is understanding what your task actually
- **Long Context** requirements emerge when working with substantial documents, extended conversations, or complex multi-part tasks. The model needs to maintain coherence across thousands of tokens while referencing earlier information accurately. This capability becomes crucial for document analysis, comprehensive research, and sophisticated dialogue systems.
- **Very Long Context** scenarios push the boundaries of what's currently possible, involving massive document processing, extensive research synthesis, or complex multi-session interactions. These use cases require models specifically designed for extended context handling and often involve trade-offs between context length and processing speed.
</Tab>
</Tabs>
@@ -73,6 +88,7 @@ Understanding model capabilities requires looking beyond marketing claims and be
The strength of reasoning models lies in their ability to maintain logical consistency across extended reasoning chains and to break down complex problems into manageable components. They're particularly valuable for strategic planning, complex analysis, and situations where the quality of reasoning matters more than speed of response.
However, reasoning models often come with trade-offs in terms of speed and cost. They may also be less suitable for creative tasks or simple operations where their sophisticated reasoning capabilities aren't needed. Consider these models when your tasks involve genuine complexity that benefits from systematic, step-by-step analysis.
</Accordion>
<Accordion title="General Purpose Models" icon="microchip">
@@ -81,6 +97,7 @@ Understanding model capabilities requires looking beyond marketing claims and be
The primary advantage of general purpose models is their reliability and predictability across different types of work. They handle most standard business tasks competently, from research and analysis to content creation and data processing. This makes them excellent choices for teams that need consistent performance across varied workflows.
While general purpose models may not achieve the peak performance of specialized alternatives in specific domains, they offer operational simplicity and reduced complexity in model management. They're often the best starting point for new projects, allowing teams to understand their specific needs before potentially optimizing with more specialized models.
</Accordion>
<Accordion title="Fast & Efficient Models" icon="bolt">
@@ -89,6 +106,7 @@ Understanding model capabilities requires looking beyond marketing claims and be
These models excel in scenarios involving routine operations, simple data processing, function calling, and high-volume tasks where the cognitive requirements are relatively straightforward. They're particularly valuable for applications that need to process many requests quickly or operate within tight budget constraints.
The key consideration with efficient models is ensuring that their capabilities align with your task requirements. While they can handle many routine operations effectively, they may struggle with tasks requiring nuanced understanding, complex reasoning, or sophisticated content generation. They're best used for well-defined, routine operations where speed and cost matter more than sophistication.
</Accordion>
<Accordion title="Creative Models" icon="pen">
@@ -97,6 +115,7 @@ Understanding model capabilities requires looking beyond marketing claims and be
The strength of creative models lies in their ability to adapt writing style to different audiences, maintain consistent voice and tone, and generate content that engages readers effectively. They often perform better on tasks involving storytelling, marketing copy, brand communications, and other content where creativity and engagement are primary goals.
When selecting creative models, consider not just their ability to generate text, but their understanding of audience, context, and purpose. The best creative models can adapt their output to match specific brand voices, target different audience segments, and maintain consistency across extended content pieces.
</Accordion>
<Accordion title="Open Source Models" icon="code">
@@ -105,6 +124,7 @@ Understanding model capabilities requires looking beyond marketing claims and be
The primary benefits of open source models include elimination of per-token costs, ability to fine-tune for specific use cases, complete data privacy, and independence from external API providers. They're particularly valuable for organizations with strict data privacy requirements, budget constraints, or specific customization needs.
However, open source models require more technical expertise to deploy and maintain effectively. Teams need to consider infrastructure costs, model management complexity, and the ongoing effort required to keep models updated and optimized. The total cost of ownership may be higher than cloud-based alternatives when factoring in technical overhead.
</Accordion>
</AccordionGroup>
@@ -113,7 +133,8 @@ Understanding model capabilities requires looking beyond marketing claims and be
### a. Multi-Model Approach
<Tip>
Use different models for different purposes within the same crew to optimize both performance and cost.
</Tip>
The most sophisticated CrewAI implementations often employ multiple models strategically, assigning different models to different agents based on their specific roles and requirements. This approach allows teams to optimize for both performance and cost by using the most appropriate model for each type of work.
@@ -177,6 +198,7 @@ The key to successful multi-model implementation is understanding how different
Effective manager LLMs require strong reasoning capabilities to make good delegation decisions, consistent performance to ensure predictable coordination, and excellent context management to track the state of multiple agents simultaneously. The model needs to understand the capabilities and limitations of different agents while optimizing task allocation for efficiency and quality.
Cost considerations are particularly important for manager LLMs since they're involved in every operation. The model needs to provide sufficient capability for effective coordination while remaining cost-effective for frequent use. This often means finding models that offer good reasoning capabilities without the premium pricing of the most sophisticated options.
</Tab>
<Tab title="Function Calling LLM">
@@ -185,6 +207,7 @@ The key to successful multi-model implementation is understanding how different
The most important characteristics for function calling LLMs are precision and reliability rather than creativity or sophisticated reasoning. The model needs to consistently extract the correct parameters from natural language requests and handle tool responses appropriately. Speed is also important since tool usage often involves multiple round trips that can impact overall performance.
Many teams find that specialized function calling models or general purpose models with strong tool support work better than creative or reasoning-focused models for this role. The key is ensuring that the model can reliably bridge the gap between natural language instructions and structured tool calls.
</Tab>
<Tab title="Agent-Specific Overrides">
@@ -193,6 +216,7 @@ The key to successful multi-model implementation is understanding how different
Consider agent-specific overrides when an agent's role requires capabilities that differ substantially from other crew members. For example, a creative writing agent might benefit from a model optimized for content generation, while a data analysis agent might perform better with a reasoning-focused model.
The challenge with agent-specific overrides is balancing optimization with operational complexity. Each additional model adds complexity to deployment, monitoring, and cost management. Teams should focus overrides on agents where the performance improvement justifies the additional complexity.
</Tab>
</Tabs>
@@ -209,6 +233,7 @@ Effective task definition is often more important than model selection in determ
Effective task descriptions include relevant context and constraints that help the agent understand the broader purpose and any limitations they need to work within. They break complex work into focused steps that can be executed systematically, rather than presenting overwhelming, multi-faceted objectives that are difficult to approach systematically. Effective task descriptions include relevant context and constraints that help the agent understand the broader purpose and any limitations they need to work within. They break complex work into focused steps that can be executed systematically, rather than presenting overwhelming, multi-faceted objectives that are difficult to approach systematically.
Common mistakes include being too vague about objectives, failing to provide necessary context, setting unclear success criteria, or combining multiple unrelated tasks into a single description. The goal is to provide enough information for the agent to succeed while maintaining focus on a single, clear objective. Common mistakes include being too vague about objectives, failing to provide necessary context, setting unclear success criteria, or combining multiple unrelated tasks into a single description. The goal is to provide enough information for the agent to succeed while maintaining focus on a single, clear objective.
</Accordion>
<Accordion title="Expected Output Guidelines" icon="bullseye">
The best output guidelines provide concrete examples of quality indicators and define completion criteria clearly enough that both the agent and human reviewers can assess whether the task has been completed successfully. This reduces ambiguity and helps ensure consistent results across multiple task executions.
Avoid generic output descriptions that could apply to any task, missing format specifications that leave agents guessing about structure, and unclear quality standards that make evaluation difficult. Provide examples or templates that help agents understand expectations.
</Accordion>
</AccordionGroup>
Implementing sequential dependencies effectively requires using the context parameter to chain related tasks, building complexity gradually through task progression, and ensuring that each task produces outputs that serve as meaningful inputs for subsequent tasks. The goal is to maintain logical flow between dependent tasks while avoiding unnecessary bottlenecks.
Sequential dependencies work best when there's a clear logical progression from one task to another and when the output of one task genuinely improves the quality or feasibility of subsequent tasks. However, they can create bottlenecks if not managed carefully, so it's important to identify which dependencies are truly necessary versus those that are merely convenient.
</Tab>
<Tab title="Parallel Execution">
Successful parallel execution requires identifying tasks that can truly run independently, grouping related but separate work streams effectively, and planning for result integration when parallel tasks need to be combined into a final deliverable. The key is ensuring that parallel tasks don't create conflicts or redundancies that reduce overall quality.
Consider parallel execution when you have multiple independent research streams, different types of analysis that don't depend on each other, or content creation tasks that can be developed simultaneously. However, be mindful of resource allocation and ensure that parallel execution doesn't overwhelm your available model capacity or budget.
</Tab>
</Tabs>
### a. Role-Driven LLM Selection
<Warning>
Generic agent roles make it impossible to select the right LLM. Specific roles enable targeted model optimization.
</Warning>
The specificity of your agent roles directly determines which LLM capabilities matter most for optimal performance. This creates a strategic opportunity to match precise model strengths with agent responsibilities.
**Generic vs. Specific Role Impact on LLM Choice:**
When defining roles, think about the specific domain knowledge, working style, and decision-making frameworks that would be most valuable for the tasks the agent will handle. The more specific and contextual the role definition, the better the model can embody that role effectively.
```python
# ✅ Specific role - clear LLM requirements
specific_agent = Agent(
    # ...
)
```
### b. Backstory as Model Context Amplifier
<Info>
Strategic backstories multiply your chosen LLM's effectiveness by providing domain-specific context that generic prompting cannot achieve.
</Info>
A well-crafted backstory transforms your LLM choice from generic capability to specialized expertise. This is especially crucial for cost optimization: a well-contextualized efficient model can outperform a premium model that lacks proper context.
```python
# Illustrative sketch - the backstory elements listed below supply domain context
domain_expert = Agent(
    role="Enterprise SaaS Sales Strategist",
    goal="Assess deals with rigorous, well-documented analysis",
    backstory=(
        "10+ years in enterprise SaaS sales. Specializes in technical "
        "due diligence for Series B+ rounds. Prefers data-driven decisions "
        "with clear documentation."
    ),
    llm=LLM(model="gpt-4o-mini"),
)
```
**Backstory Elements That Enhance LLM Performance:**
- **Domain Experience**: "10+ years in enterprise SaaS sales"
- **Specific Expertise**: "Specializes in technical due diligence for Series B+ rounds"
- **Working Style**: "Prefers data-driven decisions with clear documentation"

**Alignment Checklist:**
- ✅ **Role Specificity**: Clear domain and responsibilities
- ✅ **LLM Match**: Model strengths align with role requirements
- ✅ **Backstory Depth**: Provides domain context the LLM can leverage
- Are any agents heavily tool-dependent?
**Action**: Document current agent roles and identify optimization opportunities.
</Step>
<Step title="Implement Crew-Level Strategy" icon="users-gear">
**Action**: Establish your crew's default LLM before optimizing individual agents.
</Step>
<Step title="Optimize High-Impact Agents" icon="star">
**Action**: Upgrade the 20% of your agents that handle 80% of the complexity.
</Step>
<Step title="Validate with Enterprise Testing" icon="test-tube">
- Share results with your team for collaborative decision-making
**Action**: Replace guesswork with data-driven validation using the testing platform.
</Step>
</Steps>
Consider reasoning models for business strategy development, complex data analysis that requires drawing insights from multiple sources, multi-step problem solving where each step depends on previous analysis, and strategic planning tasks that require considering multiple variables and their interactions.
However, reasoning models often come with higher costs and slower response times, so they're best reserved for tasks where their sophisticated capabilities provide genuine value rather than being used for simple operations that don't require complex reasoning.
</Tab>
<Tab title="Creative Models">
Use creative models for blog post writing and article creation, marketing copy that needs to engage and persuade, creative storytelling and narrative development, and brand communications where voice and tone are crucial. These models often understand nuance and context better than general-purpose alternatives.
Creative models may be less suitable for technical or analytical tasks where precision and factual accuracy are more important than engagement and style. They're best used when the creative and communicative aspects of the output are primary success factors.
</Tab>
<Tab title="Efficient Models">
Consider efficient models for data processing and transformation tasks, simple formatting and organization operations, function calling and tool usage where precision matters more than sophistication, and high-volume operations where cost per operation is a significant factor.
The key with efficient models is ensuring that their capabilities align with task requirements. They can handle many routine operations effectively but may struggle with tasks requiring nuanced understanding, complex reasoning, or sophisticated content generation.
</Tab>
<Tab title="Open Source Models">
Consider open source models for internal company tools where data privacy is paramount, privacy-sensitive applications that can't use external APIs, cost-optimized deployments where per-token pricing is prohibitive, and situations requiring custom model modifications or fine-tuning.
However, open source models require more technical expertise to deploy and maintain effectively. Consider the total cost of ownership, including infrastructure, technical overhead, and ongoing maintenance, when evaluating open source options.
</Tab>
</Tabs>
```python
# Processing agent gets efficient model
processor = Agent(role="Data Processor", llm=LLM(model="gpt-4o-mini"))
```
</Accordion>
<Accordion title="Ignoring Crew-Level vs Agent-Level LLM Hierarchy" icon="shuffle">
```python
# Agents inherit crew LLM unless specifically overridden
agent1 = Agent(llm=LLM(model="claude-3-5-sonnet"))  # Override for specific needs
```
</Accordion>
<Accordion title="Function Calling Model Mismatch" icon="screwdriver-wrench">
```python
agent = Agent(
    # ...
    llm=LLM(model="claude-3-5-sonnet")  # Also strong with tools
)
```
</Accordion>
<Accordion title="Premature Optimization Without Testing" icon="gear">
```python
# Test performance, then optimize specific agents as needed
# Use Enterprise platform testing to validate improvements
```
</Accordion>
<Accordion title="Overlooking Context and Memory Limitations" icon="brain">
**Real Example**: Using a short-context model for agents that need to maintain conversation history across multiple task iterations, or in crews with extensive agent-to-agent communication.
**CrewAI Solution**: Match context capabilities to crew communication patterns.
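A rough way to sanity-check this match is to estimate whether an agent's accumulated history fits the model's context window. The ~4-characters-per-token ratio below is a common rule of thumb, not an exact tokenizer:

```python
def fits_context(history: list[str], context_window_tokens: int) -> bool:
    """Rough check that conversation history fits a model's context window.

    Assumes ~4 characters per token (varies by tokenizer and language)
    and reserves 20% of the window as headroom for the model's output.
    """
    estimated_tokens = sum(len(message) for message in history) // 4
    return estimated_tokens < int(context_window_tokens * 0.8)

# A short history easily fits an 8k-token window...
short_history = ["Summarize Q3 results."] * 10
# ...while a long multi-iteration transcript may not (~20k estimated tokens).
long_history = ["x" * 4_000] * 20
```

For crews with heavy agent-to-agent communication, run a check like this against the chattiest agent's expected history before committing to a short-context model.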
</Accordion>
</AccordionGroup>
<Steps>
<Step title="Start Simple" icon="play">
Begin with reliable, general-purpose models that are well-understood and widely supported. This provides a stable foundation for understanding your specific requirements and performance expectations before optimizing for specialized needs.
</Step>
<Step title="Measure What Matters" icon="chart-line">
Develop metrics that align with your specific use case and business requirements rather than relying solely on general benchmarks. Focus on measuring outcomes that directly impact your success rather than theoretical performance indicators.
</Step>
<Step title="Iterate Based on Results" icon="arrows-rotate">
Make model changes based on observed performance in your specific context rather than theoretical considerations or general recommendations. Real-world performance often differs significantly from benchmark results or general reputation.
</Step>
<Step title="Consider Total Cost" icon="calculator">
Evaluate the complete cost of ownership including model costs, development time, maintenance overhead, and operational complexity. The cheapest model per token may not be the most cost-effective choice when considering all factors.
</Step>
</Steps>
<Tip>
Focus on understanding your requirements first, then select models that best match those needs. The best LLM choice is the one that consistently delivers the results you need within your operational constraints.
</Tip>
### Enterprise-Grade Model Validation
Go to [app.crewai.com](https://app.crewai.com) to get started!
<Info>
The Enterprise platform transforms model selection from guesswork into a data-driven process, enabling you to validate the principles in this guide with your actual use cases and requirements.
</Info>
## Key Principles Summary
Choose models based on what the task actually requires, not theoretical capabilities or general reputation.
</Card>
<Card title="Capability Matching" icon="puzzle-piece">
Align model strengths with agent roles and responsibilities for optimal performance.
</Card>
<Card title="Strategic Consistency" icon="link">
Maintain coherent model selection strategy across related components and workflows.
</Card>
<Card title="Practical Testing" icon="flask">
Validate choices through real-world usage rather than benchmarks alone.
</Card>
<Card title="Iterative Improvement" icon="arrow-up">
Start simple and optimize based on actual performance and needs.
</Card>
<Card title="Operational Balance" icon="scale-balanced">
Balance performance requirements with cost and complexity constraints.
</Card>
</CardGroup>
<Check>
Remember: The best LLM choice is the one that consistently delivers the results you need within your operational constraints. Focus on understanding your requirements first, then select models that best match those needs.
</Check>
## Current Model Landscape (June 2025)
<Warning>
**Snapshot in Time**: The following model rankings represent current leaderboard standings as of June 2025, compiled from [LMSys Arena](https://arena.lmsys.org/), [Artificial Analysis](https://artificialanalysis.ai/), and other leading benchmarks. LLM performance, availability, and pricing change rapidly. Always conduct your own evaluations with your specific use cases and data.
</Warning>
### Leading Models by Category
@@ -608,7 +683,10 @@ Remember: The best LLM choice is the one that consistently delivers the results
The tables below show a representative sample of current top-performing models across different categories, with guidance on their suitability for CrewAI agents: The tables below show a representative sample of current top-performing models across different categories, with guidance on their suitability for CrewAI agents:
<Note> <Note>
These tables/metrics showcase selected leading models in each category and are not exhaustive. Many excellent models exist beyond those listed here. The goal is to illustrate the types of capabilities to look for rather than provide a complete catalog. These tables/metrics showcase selected leading models in each category and are
not exhaustive. Many excellent models exist beyond those listed here. The goal
is to illustrate the types of capabilities to look for rather than provide a
complete catalog.
</Note> </Note>
<Tabs> <Tabs>
| **Qwen3 235B (Reasoning)** | 62 | $2.63 | Moderate | Open-source alternative for reasoning tasks |
These models excel at multi-step reasoning and are ideal for agents that need to develop strategies, coordinate other agents, or analyze complex information.
</Tab>
<Tab title="Coding & Technical">
| **Llama 3.1 405B** | Good | 81.1% | $3.50 | Function calling LLM for tool-heavy workflows |
These models are optimized for code generation, debugging, and technical problem-solving, making them ideal for development-focused crews.
</Tab>
<Tab title="Speed & Efficiency">
| **Nova Micro** | High | 0.30s | $0.04 | Simple, fast task execution |
These models prioritize speed and efficiency, perfect for agents handling routine operations or requiring quick responses. **Pro tip**: Pairing these models with fast inference providers like Groq can achieve even better performance, especially for open-source models like Llama.
</Tab>
<Tab title="Balanced Performance">
| **Qwen3 32B** | 44 | Good | $1.23 | Budget-friendly versatility |
These models offer good performance across multiple dimensions, suitable for crews with diverse task requirements.
</Tab>
</Tabs>
**When performance is the priority**: Use top-tier models like **o3**, **Gemini 2.5 Pro**, or **Claude 4 Sonnet** for manager LLMs and critical agents. These models excel at complex reasoning and coordination but come with higher costs.
**Strategy**: Implement a multi-model approach where premium models handle strategic thinking while efficient models handle routine operations.
</Accordion>
<Accordion title="Cost-Conscious Crews" icon="dollar-sign">
**When budget is a primary constraint**: Focus on models like **DeepSeek R1**, **Llama 4 Scout**, or **Gemini 2.0 Flash**. These provide strong performance at significantly lower costs.
**Strategy**: Use cost-effective models for most agents, reserving premium models only for the most critical decision-making roles.
</Accordion>
<Accordion title="Specialized Workflows" icon="screwdriver-wrench">
**For specific domain expertise**: Choose models optimized for your primary use case: the **Claude 4** series for coding, **Gemini 2.5 Pro** for research, **Llama 405B** for function calling.
**Strategy**: Select models based on your crew's primary function, ensuring the core capability aligns with model strengths.
</Accordion>
<Accordion title="Enterprise & Privacy" icon="shield">
**For data-sensitive operations**: Consider open-source models like **Llama 4** series, **DeepSeek V3**, or **Qwen3** that can be deployed locally while maintaining competitive performance.
**Strategy**: Deploy open-source models on private infrastructure, accepting potential performance trade-offs for data control.
</Accordion>
</AccordionGroup>
@@ -706,7 +792,10 @@ These tables/metrics showcase selected leading models in each category and are n
- **Open Source Viability**: The gap between open-source and proprietary models continues to narrow, with models like Llama 4 Maverick and DeepSeek V3 offering competitive performance at attractive price points. Fast inference providers particularly shine with open-source models, often delivering better speed-to-cost ratios than proprietary alternatives. - **Open Source Viability**: The gap between open-source and proprietary models continues to narrow, with models like Llama 4 Maverick and DeepSeek V3 offering competitive performance at attractive price points. Fast inference providers particularly shine with open-source models, often delivering better speed-to-cost ratios than proprietary alternatives.
<Info> <Info>
**Testing is Essential**: Leaderboard rankings provide general guidance, but your specific use case, prompting style, and evaluation criteria may produce different results. Always test candidate models with your actual tasks and data before making final decisions. **Testing is Essential**: Leaderboard rankings provide general guidance, but
your specific use case, prompting style, and evaluation criteria may produce
different results. Always test candidate models with your actual tasks and
data before making final decisions.
</Info> </Info>
### Practical Implementation Strategy
Begin with well-established models like **GPT-4.1**, **Claude 3.7 Sonnet**, or **Gemini 2.0 Flash** that offer good performance across multiple dimensions and have extensive real-world validation.
</Step>
<Step title="Identify Specialized Needs">
Determine if your crew has specific requirements (coding, reasoning, speed) that would benefit from specialized models like **Claude 4 Sonnet** for development or **o3** for complex analysis. For speed-critical applications, consider fast inference providers like **Groq** alongside model selection.
</Step>
<Step title="Implement Multi-Model Strategy">
Use different models for different agents based on their roles. High-capability models for managers and complex tasks, efficient models for routine operations.
</Step>
<Step title="Monitor and Optimize">
Track performance metrics relevant to your use case and be prepared to adjust model selections as new models are released or pricing changes.

---
title: Streaming Crew Execution
description: Stream real-time output from your CrewAI crew execution
icon: wave-pulse
mode: "wide"
---
## Introduction
CrewAI provides the ability to stream real-time output during crew execution, allowing you to display results as they're generated rather than waiting for the entire process to complete. This feature is particularly useful for building interactive applications, providing user feedback, and monitoring long-running processes.
## How Streaming Works
When streaming is enabled, CrewAI captures LLM responses and tool calls as they happen, packaging them into structured chunks that include context about which task and agent is executing. You can iterate over these chunks in real-time and access the final result once execution completes.
## Enabling Streaming
To enable streaming, set the `stream` parameter to `True` when creating your crew:
```python Code
from crewai import Agent, Crew, Task

# Create your agents and tasks
researcher = Agent(
    role="Research Analyst",
    goal="Gather comprehensive information on topics",
    backstory="You are an experienced researcher with excellent analytical skills.",
)

task = Task(
    description="Research the latest developments in AI",
    expected_output="A detailed report on recent AI advancements",
    agent=researcher,
)

# Enable streaming
crew = Crew(
    agents=[researcher],
    tasks=[task],
    stream=True  # Enable streaming output
)
```
## Synchronous Streaming
When you call `kickoff()` on a crew with streaming enabled, it returns a `CrewStreamingOutput` object that you can iterate over to receive chunks as they arrive:
```python Code
# Start streaming execution
streaming = crew.kickoff(inputs={"topic": "artificial intelligence"})

# Iterate over chunks as they arrive
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# Access the final result after streaming completes
result = streaming.result
print(f"\n\nFinal output: {result.raw}")
```
### Stream Chunk Information
Each chunk provides rich context about the execution:
```python Code
streaming = crew.kickoff(inputs={"topic": "AI"})

for chunk in streaming:
    print(f"Task: {chunk.task_name} (index {chunk.task_index})")
    print(f"Agent: {chunk.agent_role}")
    print(f"Content: {chunk.content}")
    print(f"Type: {chunk.chunk_type}")  # TEXT or TOOL_CALL
    if chunk.tool_call:
        print(f"Tool: {chunk.tool_call.tool_name}")
        print(f"Arguments: {chunk.tool_call.arguments}")
```
### Accessing Streaming Results
The `CrewStreamingOutput` object provides several useful properties:
```python Code
streaming = crew.kickoff(inputs={"topic": "AI"})

# Iterate and collect chunks
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# After iteration completes
print(f"\nCompleted: {streaming.is_completed}")
print(f"Full text: {streaming.get_full_text()}")
print(f"All chunks: {len(streaming.chunks)}")
print(f"Final result: {streaming.result.raw}")
```
## Asynchronous Streaming
For async applications, you can use either `akickoff()` (native async) or `kickoff_async()` (thread-based) with async iteration:
### Native Async with `akickoff()`
The `akickoff()` method provides true native async execution throughout the entire chain:
```python Code
import asyncio

async def stream_crew():
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True
    )

    # Start native async streaming
    streaming = await crew.akickoff(inputs={"topic": "AI"})

    # Async iteration over chunks
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)

    # Access final result
    result = streaming.result
    print(f"\n\nFinal output: {result.raw}")

asyncio.run(stream_crew())
```
### Thread-Based Async with `kickoff_async()`
For simpler async integration or backward compatibility:
```python Code
import asyncio

async def stream_crew():
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True
    )

    # Start thread-based async streaming
    streaming = await crew.kickoff_async(inputs={"topic": "AI"})

    # Async iteration over chunks
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)

    # Access final result
    result = streaming.result
    print(f"\n\nFinal output: {result.raw}")

asyncio.run(stream_crew())
```
<Note>
For high-concurrency workloads, `akickoff()` is recommended as it uses native async for task execution, memory operations, and knowledge retrieval. See the [Kickoff Crew Asynchronously](/en/learn/kickoff-async) guide for more details.
</Note>
## Streaming with kickoff_for_each
When executing a crew for multiple inputs with `kickoff_for_each()`, streaming works differently depending on whether you use sync or async:
### Synchronous kickoff_for_each
With synchronous `kickoff_for_each()`, you get a list of `CrewStreamingOutput` objects, one for each input:
```python Code
crew = Crew(
    agents=[researcher],
    tasks=[task],
    stream=True
)

inputs_list = [
    {"topic": "AI in healthcare"},
    {"topic": "AI in finance"}
]

# Returns list of streaming outputs
streaming_outputs = crew.kickoff_for_each(inputs=inputs_list)

# Iterate over each streaming output
for i, streaming in enumerate(streaming_outputs):
    print(f"\n=== Input {i + 1} ===")
    for chunk in streaming:
        print(chunk.content, end="", flush=True)
    result = streaming.result
    print(f"\n\nResult {i + 1}: {result.raw}")
```
### Asynchronous kickoff_for_each_async
With async `kickoff_for_each_async()`, you get a single `CrewStreamingOutput` that yields chunks from all crews as they arrive concurrently:
```python Code
import asyncio

async def stream_multiple_crews():
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True
    )

    inputs_list = [
        {"topic": "AI in healthcare"},
        {"topic": "AI in finance"}
    ]

    # Returns single streaming output for all crews
    streaming = await crew.kickoff_for_each_async(inputs=inputs_list)

    # Chunks from all crews arrive as they're generated
    async for chunk in streaming:
        print(f"[{chunk.task_name}] {chunk.content}", end="", flush=True)

    # Access all results
    results = streaming.results  # List of CrewOutput objects
    for i, result in enumerate(results):
        print(f"\n\nResult {i + 1}: {result.raw}")

asyncio.run(stream_multiple_crews())
```
## Stream Chunk Types
Chunks can be of different types, indicated by the `chunk_type` field:
### TEXT Chunks
Standard text content from LLM responses:
```python Code
from crewai.types.streaming import StreamChunkType

for chunk in streaming:
    if chunk.chunk_type == StreamChunkType.TEXT:
        print(chunk.content, end="", flush=True)
```
### TOOL_CALL Chunks
Information about tool calls being made:
```python Code
for chunk in streaming:
    if chunk.chunk_type == StreamChunkType.TOOL_CALL:
        print(f"\nCalling tool: {chunk.tool_call.tool_name}")
        print(f"Arguments: {chunk.tool_call.arguments}")
```
## Practical Example: Building a UI with Streaming
Here's a complete example showing how to build an interactive application with streaming:
```python Code
import asyncio

from crewai import Agent, Crew, Task
from crewai.types.streaming import StreamChunkType

async def interactive_research():
    # Create crew with streaming enabled
    researcher = Agent(
        role="Research Analyst",
        goal="Provide detailed analysis on any topic",
        backstory="You are an expert researcher with broad knowledge.",
    )

    task = Task(
        description="Research and analyze: {topic}",
        expected_output="A comprehensive analysis with key insights",
        agent=researcher,
    )

    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True,
        verbose=False
    )

    # Get user input
    topic = input("Enter a topic to research: ")
    print(f"\n{'='*60}")
    print(f"Researching: {topic}")
    print(f"{'='*60}\n")

    # Start streaming execution
    streaming = await crew.kickoff_async(inputs={"topic": topic})

    current_task = ""
    async for chunk in streaming:
        # Show task transitions
        if chunk.task_name != current_task:
            current_task = chunk.task_name
            print(f"\n[{chunk.agent_role}] Working on: {chunk.task_name}")
            print("-" * 60)

        # Display text chunks
        if chunk.chunk_type == StreamChunkType.TEXT:
            print(chunk.content, end="", flush=True)

        # Display tool calls
        elif chunk.chunk_type == StreamChunkType.TOOL_CALL and chunk.tool_call:
            print(f"\n🔧 Using tool: {chunk.tool_call.tool_name}")

    # Show final result
    result = streaming.result
    print(f"\n\n{'='*60}")
    print("Analysis Complete!")
    print(f"{'='*60}")
    print(f"\nToken Usage: {result.token_usage}")

asyncio.run(interactive_research())
```
## Use Cases
Streaming is particularly valuable for:
- **Interactive Applications**: Provide real-time feedback to users as agents work
- **Long-Running Tasks**: Show progress for research, analysis, or content generation
- **Debugging and Monitoring**: Observe agent behavior and decision-making in real-time
- **User Experience**: Reduce perceived latency by showing incremental results
- **Live Dashboards**: Build monitoring interfaces that display crew execution status
## Important Notes
- Streaming automatically enables LLM streaming for all agents in the crew
- You must iterate through all chunks before accessing the `.result` property
- For `kickoff_for_each_async()` with streaming, use `.results` (plural) to get all outputs
- Streaming adds minimal overhead and can actually improve perceived performance
- Each chunk includes full context (task, agent, chunk type) for rich UIs
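The iterate-before-result contract above can be illustrated with a plain Python stand-in. This is a hedged sketch, not the real `CrewStreamingOutput` class: `FakeStreamingOutput` is a hypothetical stand-in showing only the consumption pattern (iterate fully, then read `.result`).

```python Code
# Illustrative stand-in for the iterate-before-result contract.
# NOT the real CrewStreamingOutput - just the consumption pattern.
class FakeStreamingOutput:
    def __init__(self, chunks, final):
        self.chunks = list(chunks)
        self._final = final
        self.is_completed = False

    def __iter__(self):
        for chunk in self.chunks:
            yield chunk
        self.is_completed = True  # result only safe to read after this

    @property
    def result(self):
        if not self.is_completed:
            raise RuntimeError("Iterate through all chunks before reading .result")
        return self._final

streaming = FakeStreamingOutput(["Hel", "lo"], final="Hello")
text = "".join(streaming)  # consume every chunk first
print(text, streaming.result)
```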
## Error Handling
Handle errors during streaming execution:
```python Code
streaming = crew.kickoff(inputs={"topic": "AI"})

try:
    for chunk in streaming:
        print(chunk.content, end="", flush=True)
    result = streaming.result
    print(f"\nSuccess: {result.raw}")
except Exception as e:
    print(f"\nError during streaming: {e}")
    if streaming.is_completed:
        print("Streaming completed but an error occurred")
```
By leveraging streaming, you can build more responsive and interactive applications with CrewAI, providing users with real-time visibility into agent execution and results.

---
title: Streaming Flow Execution
description: Stream real-time output from your CrewAI flow execution
icon: wave-pulse
mode: "wide"
---
## Introduction
CrewAI Flows support streaming output, allowing you to receive real-time updates as your flow executes. This feature enables you to build responsive applications that display results incrementally, provide live progress updates, and create better user experiences for long-running workflows.
## How Flow Streaming Works
When streaming is enabled on a Flow, CrewAI captures and streams output from any crews or LLM calls within the flow. The stream delivers structured chunks containing the content, task context, and agent information as execution progresses.
## Enabling Streaming
To enable streaming, set the `stream` attribute to `True` on your Flow class:
```python Code
from crewai.flow.flow import Flow, listen, start
from crewai import Agent, Crew, Task

class ResearchFlow(Flow):
    stream = True  # Enable streaming for the entire flow

    @start()
    def initialize(self):
        return {"topic": "AI trends"}

    @listen(initialize)
    def research_topic(self, data):
        researcher = Agent(
            role="Research Analyst",
            goal="Research topics thoroughly",
            backstory="Expert researcher with analytical skills",
        )

        task = Task(
            description="Research {topic} and provide insights",
            expected_output="Detailed research findings",
            agent=researcher,
        )

        crew = Crew(
            agents=[researcher],
            tasks=[task],
        )

        return crew.kickoff(inputs=data)
```
## Synchronous Streaming
When you call `kickoff()` on a flow with streaming enabled, it returns a `FlowStreamingOutput` object that you can iterate over:
```python Code
flow = ResearchFlow()

# Start streaming execution
streaming = flow.kickoff()

# Iterate over chunks as they arrive
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# Access the final result after streaming completes
result = streaming.result
print(f"\n\nFinal output: {result}")
```
### Stream Chunk Information
Each chunk provides context about where it originated in the flow:
```python Code
streaming = flow.kickoff()

for chunk in streaming:
    print(f"Agent: {chunk.agent_role}")
    print(f"Task: {chunk.task_name}")
    print(f"Content: {chunk.content}")
    print(f"Type: {chunk.chunk_type}")  # TEXT or TOOL_CALL
```
### Accessing Streaming Properties
The `FlowStreamingOutput` object provides useful properties and methods:
```python Code
streaming = flow.kickoff()

# Iterate and collect chunks
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# After iteration completes
print(f"\nCompleted: {streaming.is_completed}")
print(f"Full text: {streaming.get_full_text()}")
print(f"Total chunks: {len(streaming.chunks)}")
print(f"Final result: {streaming.result}")
```
## Asynchronous Streaming
For async applications, use `kickoff_async()` with async iteration:
```python Code
import asyncio

async def stream_flow():
    flow = ResearchFlow()

    # Start async streaming
    streaming = await flow.kickoff_async()

    # Async iteration over chunks
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)

    # Access final result
    result = streaming.result
    print(f"\n\nFinal output: {result}")

asyncio.run(stream_flow())
```
## Streaming with Multi-Step Flows
Streaming works seamlessly across multiple flow steps, including flows that execute multiple crews:
```python Code
from crewai.flow.flow import Flow, listen, start
from crewai import Agent, Crew, Task

class MultiStepFlow(Flow):
    stream = True

    @start()
    def research_phase(self):
        """First crew: Research the topic."""
        researcher = Agent(
            role="Research Analyst",
            goal="Gather comprehensive information",
            backstory="Expert at finding relevant information",
        )
        task = Task(
            description="Research AI developments in healthcare",
            expected_output="Research findings on AI in healthcare",
            agent=researcher,
        )
        crew = Crew(agents=[researcher], tasks=[task])
        result = crew.kickoff()
        self.state["research"] = result.raw
        return result.raw

    @listen(research_phase)
    def analysis_phase(self, research_data):
        """Second crew: Analyze the research."""
        analyst = Agent(
            role="Data Analyst",
            goal="Analyze information and extract insights",
            backstory="Expert at identifying patterns and trends",
        )
        task = Task(
            description="Analyze this research: {research}",
            expected_output="Key insights and trends",
            agent=analyst,
        )
        crew = Crew(agents=[analyst], tasks=[task])
        return crew.kickoff(inputs={"research": research_data})

# Stream across both phases
flow = MultiStepFlow()
streaming = flow.kickoff()

current_step = ""
for chunk in streaming:
    # Track which flow step is executing
    if chunk.task_name != current_step:
        current_step = chunk.task_name
        print(f"\n\n=== {chunk.task_name} ===\n")
    print(chunk.content, end="", flush=True)

result = streaming.result
print(f"\n\nFinal analysis: {result}")
```
## Practical Example: Progress Dashboard
Here's a complete example showing how to build a progress dashboard with streaming:
```python Code
import asyncio

from crewai.flow.flow import Flow, listen, start
from crewai import Agent, Crew, Task
from crewai.types.streaming import StreamChunkType

class ResearchPipeline(Flow):
    stream = True

    @start()
    def gather_data(self):
        researcher = Agent(
            role="Data Gatherer",
            goal="Collect relevant information",
            backstory="Skilled at finding quality sources",
        )
        task = Task(
            description="Gather data on renewable energy trends",
            expected_output="Collection of relevant data points",
            agent=researcher,
        )
        crew = Crew(agents=[researcher], tasks=[task])
        result = crew.kickoff()
        self.state["data"] = result.raw
        return result.raw

    @listen(gather_data)
    def analyze_data(self, data):
        analyst = Agent(
            role="Data Analyst",
            goal="Extract meaningful insights",
            backstory="Expert at data analysis",
        )
        task = Task(
            description="Analyze: {data}",
            expected_output="Key insights and trends",
            agent=analyst,
        )
        crew = Crew(agents=[analyst], tasks=[task])
        return crew.kickoff(inputs={"data": data})

async def run_with_dashboard():
    flow = ResearchPipeline()

    print("=" * 60)
    print("RESEARCH PIPELINE DASHBOARD")
    print("=" * 60)

    streaming = await flow.kickoff_async()

    current_agent = ""
    current_task = ""
    chunk_count = 0

    async for chunk in streaming:
        chunk_count += 1

        # Display phase transitions
        if chunk.task_name != current_task:
            current_task = chunk.task_name
            current_agent = chunk.agent_role
            print(f"\n\n📋 Phase: {current_task}")
            print(f"👤 Agent: {current_agent}")
            print("-" * 60)

        # Display text output
        if chunk.chunk_type == StreamChunkType.TEXT:
            print(chunk.content, end="", flush=True)

        # Display tool usage
        elif chunk.chunk_type == StreamChunkType.TOOL_CALL and chunk.tool_call:
            print(f"\n🔧 Tool: {chunk.tool_call.tool_name}")

    # Show completion summary
    result = streaming.result
    print(f"\n\n{'='*60}")
    print("PIPELINE COMPLETE")
    print(f"{'='*60}")
    print(f"Total chunks: {chunk_count}")
    print(f"Final output length: {len(str(result))} characters")

asyncio.run(run_with_dashboard())
```
## Streaming with State Management
Streaming works naturally with Flow state management:
```python Code
from pydantic import BaseModel

from crewai.flow.flow import Flow, listen, start
from crewai import Agent, Crew, Task

class AnalysisState(BaseModel):
    topic: str = ""
    research: str = ""
    insights: str = ""

class StatefulStreamingFlow(Flow[AnalysisState]):
    stream = True

    @start()
    def research(self):
        # State is available during streaming
        topic = self.state.topic
        print(f"Researching: {topic}")

        researcher = Agent(
            role="Researcher",
            goal="Research topics thoroughly",
            backstory="Expert researcher",
        )
        task = Task(
            description=f"Research {topic}",
            expected_output="Research findings",
            agent=researcher,
        )
        crew = Crew(agents=[researcher], tasks=[task])
        result = crew.kickoff()
        self.state.research = result.raw
        return result.raw

    @listen(research)
    def analyze(self, research):
        # Access updated state
        print(f"Analyzing {len(self.state.research)} chars of research")

        analyst = Agent(
            role="Analyst",
            goal="Extract insights",
            backstory="Expert analyst",
        )
        task = Task(
            description="Analyze: {research}",
            expected_output="Key insights",
            agent=analyst,
        )
        crew = Crew(agents=[analyst], tasks=[task])
        result = crew.kickoff(inputs={"research": research})
        self.state.insights = result.raw
        return result.raw

# Run with streaming
flow = StatefulStreamingFlow()
streaming = flow.kickoff(inputs={"topic": "quantum computing"})

for chunk in streaming:
    print(chunk.content, end="", flush=True)

result = streaming.result
print("\n\nFinal state:")
print(f"Topic: {flow.state.topic}")
print(f"Research length: {len(flow.state.research)}")
print(f"Insights length: {len(flow.state.insights)}")
```
## Use Cases
Flow streaming is particularly valuable for:
- **Multi-Stage Workflows**: Show progress across research, analysis, and synthesis phases
- **Complex Pipelines**: Provide visibility into long-running data processing flows
- **Interactive Applications**: Build responsive UIs that display intermediate results
- **Monitoring and Debugging**: Observe flow execution and crew interactions in real-time
- **Progress Tracking**: Show users which stage of the workflow is currently executing
- **Live Dashboards**: Create monitoring interfaces for production flows
## Stream Chunk Types
Like crew streaming, flow chunks can be of different types:
### TEXT Chunks
Standard text content from LLM responses:
```python Code
from crewai.types.streaming import StreamChunkType

for chunk in streaming:
    if chunk.chunk_type == StreamChunkType.TEXT:
        print(chunk.content, end="", flush=True)
```
### TOOL_CALL Chunks
Information about tool calls within the flow:
```python Code
for chunk in streaming:
    if chunk.chunk_type == StreamChunkType.TOOL_CALL and chunk.tool_call:
        print(f"\nTool: {chunk.tool_call.tool_name}")
        print(f"Args: {chunk.tool_call.arguments}")
```
## Error Handling
Handle errors gracefully during streaming:
```python Code
flow = ResearchFlow()
streaming = flow.kickoff()

try:
    for chunk in streaming:
        print(chunk.content, end="", flush=True)
    result = streaming.result
    print(f"\nSuccess! Result: {result}")
except Exception as e:
    print(f"\nError during flow execution: {e}")
    if streaming.is_completed:
        print("Streaming completed but flow encountered an error")
```
## Important Notes
- Streaming automatically enables LLM streaming for any crews used within the flow
- You must iterate through all chunks before accessing the `.result` property
- Streaming works with both structured and unstructured flow state
- Flow streaming captures output from all crews and LLM calls in the flow
- Each chunk includes context about which agent and task generated it
- Streaming adds minimal overhead to flow execution
## Combining with Flow Visualization
You can combine streaming with flow visualization to provide a complete picture:
```python Code
# Generate flow visualization
flow = ResearchFlow()
flow.plot("research_flow")  # Creates HTML visualization

# Run with streaming
streaming = flow.kickoff()

for chunk in streaming:
    print(chunk.content, end="", flush=True)

result = streaming.result
print("\nFlow complete! View structure at: research_flow.html")
```
By leveraging flow streaming, you can build sophisticated, responsive applications that provide users with real-time visibility into complex multi-stage workflows, making your AI automations more transparent and engaging.

---
title: Tool Call Hooks
description: Learn how to use tool call hooks to intercept, modify, and control tool execution in CrewAI
mode: "wide"
---
Tool Call Hooks provide fine-grained control over tool execution during agent operations. These hooks allow you to intercept tool calls, modify inputs, transform outputs, implement safety checks, and add comprehensive logging or monitoring.
## Overview
Tool hooks are executed at two critical points:
- **Before Tool Call**: Modify inputs, validate parameters, or block execution
- **After Tool Call**: Transform results, sanitize outputs, or log execution details
## Hook Types
### Before Tool Call Hooks
Executed before every tool execution, these hooks can:
- Inspect and modify tool inputs
- Block tool execution based on conditions
- Implement approval gates for dangerous operations
- Validate parameters
- Log tool invocations
**Signature:**
```python
def before_hook(context: ToolCallHookContext) -> bool | None:
    # Return False to block execution
    # Return True or None to allow execution
    ...
```
### After Tool Call Hooks
Executed after every tool execution, these hooks can:
- Modify or sanitize tool results
- Add metadata or formatting
- Log execution results
- Implement result validation
- Transform output formats
**Signature:**
```python
def after_hook(context: ToolCallHookContext) -> str | None:
    # Return modified result string
    # Return None to keep original result
    ...
```
## Tool Hook Context
The `ToolCallHookContext` object provides comprehensive access to tool execution state:
```python
class ToolCallHookContext:
    tool_name: str                    # Name of the tool being called
    tool_input: dict[str, Any]        # Mutable tool input parameters
    tool: CrewStructuredTool          # Tool instance reference
    agent: Agent | BaseAgent | None   # Agent executing the tool
    task: Task | None                 # Current task
    crew: Crew | None                 # Crew instance
    tool_result: str | None           # Tool result (after hooks only)
```
### Modifying Tool Inputs
**Important:** Always modify tool inputs in-place:
```python
# ✅ Correct - modify in-place
def sanitize_input(context: ToolCallHookContext) -> None:
    context.tool_input['query'] = context.tool_input['query'].lower()

# ❌ Wrong - replaces dict reference
def wrong_approach(context: ToolCallHookContext) -> None:
    context.tool_input = {'query': 'new query'}
```
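The reason the second approach fails comes down to ordinary Python aliasing: the executor keeps its own reference to the original input dict, so mutating the shared dict is visible to it, while rebinding a name (or attribute) to a brand-new dict is not. A minimal sketch with plain dicts (no CrewAI required):

```python
# The executor keeps its own reference to the original input dict.
executor_view = {'query': 'Hello World'}

def in_place(tool_input):
    tool_input['query'] = tool_input['query'].lower()  # mutates the shared dict

def rebinding(tool_input):
    tool_input = {'query': 'replaced'}  # only rebinds the local name - lost

in_place(executor_view)
print(executor_view['query'])  # 'hello world' - the change is visible

rebinding(executor_view)
print(executor_view['query'])  # still 'hello world' - the change was lost
```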
## Registration Methods
### 1. Global Hook Registration
Register hooks that apply to all tool calls across all crews:
```python
from crewai.hooks import register_before_tool_call_hook, register_after_tool_call_hook

def log_tool_call(context):
    print(f"Tool: {context.tool_name}")
    print(f"Input: {context.tool_input}")
    return None  # Allow execution

register_before_tool_call_hook(log_tool_call)
```
### 2. Decorator-Based Registration
Use decorators for cleaner syntax:
```python
from crewai.hooks import before_tool_call, after_tool_call

@before_tool_call
def block_dangerous_tools(context):
    dangerous_tools = ['delete_database', 'drop_table', 'rm_rf']
    if context.tool_name in dangerous_tools:
        print(f"⛔ Blocked dangerous tool: {context.tool_name}")
        return False  # Block execution
    return None

@after_tool_call
def sanitize_results(context):
    if context.tool_result and "password" in context.tool_result.lower():
        return context.tool_result.replace("password", "[REDACTED]")
    return None
```
### 3. Crew-Scoped Hooks
Register hooks for a specific crew instance:
```python
@CrewBase
class MyProjCrew:
    @before_tool_call_crew
    def validate_tool_inputs(self, context):
        # Only applies to this crew
        if context.tool_name == "web_search":
            if not context.tool_input.get('query'):
                print("❌ Invalid search query")
                return False
        return None

    @after_tool_call_crew
    def log_tool_results(self, context):
        # Crew-specific tool logging
        print(f"✅ {context.tool_name} completed")
        return None

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True
        )
```
## Common Use Cases
### 1. Safety Guardrails
```python
@before_tool_call
def safety_check(context: ToolCallHookContext) -> bool | None:
    # Block tools that could cause harm
    destructive_tools = [
        'delete_file',
        'drop_table',
        'remove_user',
        'system_shutdown'
    ]

    if context.tool_name in destructive_tools:
        print(f"🛑 Blocked destructive tool: {context.tool_name}")
        return False

    # Warn on sensitive operations
    sensitive_tools = ['send_email', 'post_to_social_media', 'charge_payment']
    if context.tool_name in sensitive_tools:
        print(f"⚠️ Executing sensitive tool: {context.tool_name}")

    return None
```
### 2. Human Approval Gate
```python
@before_tool_call
def require_approval_for_actions(context: ToolCallHookContext) -> bool | None:
    approval_required = [
        'send_email',
        'make_purchase',
        'delete_file',
        'post_message'
    ]

    if context.tool_name in approval_required:
        response = context.request_human_input(
            prompt=f"Approve {context.tool_name}?",
            default_message=f"Input: {context.tool_input}\nType 'yes' to approve:"
        )
        if response.lower() != 'yes':
            print(f"❌ Tool execution denied: {context.tool_name}")
            return False

    return None
```
### 3. Input Validation and Sanitization
```python
@before_tool_call
def validate_and_sanitize_inputs(context: ToolCallHookContext) -> bool | None:
    # Validate search queries
    if context.tool_name == 'web_search':
        query = context.tool_input.get('query', '')
        if len(query) < 3:
            print("❌ Search query too short")
            return False
        # Sanitize query
        context.tool_input['query'] = query.strip().lower()

    # Validate file paths
    if context.tool_name == 'read_file':
        path = context.tool_input.get('path', '')
        if '..' in path or path.startswith('/'):
            print("❌ Invalid file path")
            return False

    return None
```
### 4. Result Sanitization
```python
import re

@after_tool_call
def sanitize_sensitive_data(context: ToolCallHookContext) -> str | None:
    if not context.tool_result:
        return None

    result = context.tool_result

    # Remove API keys
    result = re.sub(
        r'(api[_-]?key|token)["\']?\s*[:=]\s*["\']?[\w-]+',
        r'\1: [REDACTED]',
        result,
        flags=re.IGNORECASE
    )

    # Remove email addresses
    result = re.sub(
        r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b',
        '[EMAIL-REDACTED]',
        result
    )

    # Remove credit card numbers
    result = re.sub(
        r'\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b',
        '[CARD-REDACTED]',
        result
    )

    return result
```
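Because the redaction patterns are plain `re` substitutions, they can be exercised standalone before wiring them into a hook. A quick check with a made-up sample string (the values shown are illustrative, not real credentials):

```python
import re

# Standalone check of the redaction patterns, on a fabricated sample.
sample = "api_key: abc123 contact admin@example.com card 4111-1111-1111-1111"

out = re.sub(r'(api[_-]?key|token)["\']?\s*[:=]\s*["\']?[\w-]+',
             r'\1: [REDACTED]', sample, flags=re.IGNORECASE)
out = re.sub(r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b',
             '[EMAIL-REDACTED]', out)
out = re.sub(r'\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b',
             '[CARD-REDACTED]', out)

print(out)
```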
### 5. Tool Usage Analytics
```python
import time
from collections import defaultdict

tool_stats = defaultdict(lambda: {'count': 0, 'total_time': 0, 'failures': 0})

@before_tool_call
def start_timer(context: ToolCallHookContext) -> None:
    context.tool_input['_start_time'] = time.time()
    return None

@after_tool_call
def track_tool_usage(context: ToolCallHookContext) -> None:
    start_time = context.tool_input.get('_start_time', time.time())
    duration = time.time() - start_time

    tool_stats[context.tool_name]['count'] += 1
    tool_stats[context.tool_name]['total_time'] += duration

    if not context.tool_result or 'error' in context.tool_result.lower():
        tool_stats[context.tool_name]['failures'] += 1

    print(f"""
📊 Tool Stats for {context.tool_name}:
- Executions: {tool_stats[context.tool_name]['count']}
- Avg Time: {tool_stats[context.tool_name]['total_time'] / tool_stats[context.tool_name]['count']:.2f}s
- Failures: {tool_stats[context.tool_name]['failures']}
""")
    return None
```
### 6. Rate Limiting
```python
from collections import defaultdict
from datetime import datetime, timedelta

tool_call_history = defaultdict(list)

@before_tool_call
def rate_limit_tools(context: ToolCallHookContext) -> bool | None:
    tool_name = context.tool_name
    now = datetime.now()

    # Clean old entries (older than 1 minute)
    tool_call_history[tool_name] = [
        call_time for call_time in tool_call_history[tool_name]
        if now - call_time < timedelta(minutes=1)
    ]

    # Check rate limit (max 10 calls per minute)
    if len(tool_call_history[tool_name]) >= 10:
        print(f"🚫 Rate limit exceeded for {tool_name}")
        return False

    # Record this call
    tool_call_history[tool_name].append(now)
    return None
```
### 7. Caching Tool Results
```python
import hashlib
import json

tool_cache = {}

def cache_key(tool_name: str, tool_input: dict) -> str:
    """Generate cache key from tool name and input."""
    input_str = json.dumps(tool_input, sort_keys=True)
    return hashlib.md5(f"{tool_name}:{input_str}".encode()).hexdigest()

@before_tool_call
def check_cache(context: ToolCallHookContext) -> bool | None:
    key = cache_key(context.tool_name, context.tool_input)
    if key in tool_cache:
        print(f"💾 Cache hit for {context.tool_name}")
        # Note: Can't return cached result from before hook
        # Would need to implement this differently
    return None

@after_tool_call
def cache_result(context: ToolCallHookContext) -> None:
    if context.tool_result:
        key = cache_key(context.tool_name, context.tool_input)
        tool_cache[key] = context.tool_result
        print(f"💾 Cached result for {context.tool_name}")
    return None
```
### 8. Debug Logging
```python
@before_tool_call
def debug_tool_call(context: ToolCallHookContext) -> None:
    print(f"""
🔍 Tool Call Debug:
- Tool: {context.tool_name}
- Agent: {context.agent.role if context.agent else 'Unknown'}
- Task: {context.task.description[:50] if context.task else 'Unknown'}...
- Input: {context.tool_input}
""")
    return None

@after_tool_call
def debug_tool_result(context: ToolCallHookContext) -> None:
    if context.tool_result:
        result_preview = context.tool_result[:200]
        print(f"✅ Result Preview: {result_preview}...")
    else:
        print("⚠️ No result returned")
    return None
```
## Hook Management
### Unregistering Hooks
```python
from crewai.hooks import (
    register_before_tool_call_hook,
    unregister_before_tool_call_hook,
    unregister_after_tool_call_hook,
)

# Unregister a specific hook
def my_hook(context):
    ...

register_before_tool_call_hook(my_hook)

# Later...
success = unregister_before_tool_call_hook(my_hook)
print(f"Unregistered: {success}")
```
### Clearing Hooks
```python
from crewai.hooks import (
    clear_before_tool_call_hooks,
    clear_after_tool_call_hooks,
    clear_all_tool_call_hooks,
)

# Clear specific hook type
count = clear_before_tool_call_hooks()
print(f"Cleared {count} before hooks")

# Clear all tool hooks
before_count, after_count = clear_all_tool_call_hooks()
print(f"Cleared {before_count} before and {after_count} after hooks")
```
### Listing Registered Hooks
```python
from crewai.hooks import (
    get_before_tool_call_hooks,
    get_after_tool_call_hooks,
)

# Get current hooks
before_hooks = get_before_tool_call_hooks()
after_hooks = get_after_tool_call_hooks()
print(f"Registered: {len(before_hooks)} before, {len(after_hooks)} after")
```
## Advanced Patterns
### Conditional Hook Execution
```python
@before_tool_call
def conditional_blocking(context: ToolCallHookContext) -> bool | None:
    # Only block for specific agents
    if context.agent and context.agent.role == "junior_agent":
        if context.tool_name in ['delete_file', 'send_email']:
            print(f"❌ Junior agents cannot use {context.tool_name}")
            return False
    # Only block during specific tasks
    if context.task and "sensitive" in context.task.description.lower():
        if context.tool_name == 'web_search':
            print("❌ Web search blocked for sensitive tasks")
            return False
    return None
```
### Context-Aware Input Modification
```python
@before_tool_call
def enhance_tool_inputs(context: ToolCallHookContext) -> None:
    # Add context based on agent role
    if context.agent and context.agent.role == "researcher":
        if context.tool_name == 'web_search':
            # Add domain restrictions for researchers
            context.tool_input['domains'] = ['edu', 'gov', 'org']
    # Add context based on task
    if context.task and "urgent" in context.task.description.lower():
        if context.tool_name == 'send_email':
            context.tool_input['priority'] = 'high'
    return None
```
### Tool Chain Monitoring
```python
import time

tool_call_chain = []

@before_tool_call
def track_tool_chain(context: ToolCallHookContext) -> None:
    tool_call_chain.append({
        'tool': context.tool_name,
        'timestamp': time.time(),
        'agent': context.agent.role if context.agent else 'Unknown'
    })
    # Detect potential infinite loops
    recent_calls = tool_call_chain[-5:]
    if len(recent_calls) == 5 and all(c['tool'] == context.tool_name for c in recent_calls):
        print(f"⚠️ Warning: {context.tool_name} called 5 times in a row")
    return None
```
## Best Practices
1. **Keep Hooks Focused**: Each hook should have a single responsibility
2. **Avoid Heavy Computation**: Hooks execute on every tool call
3. **Handle Errors Gracefully**: Use try-except to prevent hook failures
4. **Use Type Hints**: Leverage `ToolCallHookContext` for better IDE support
5. **Document Blocking Conditions**: Make it clear when/why tools are blocked
6. **Test Hooks Independently**: Unit test hooks before using in production
7. **Clear Hooks in Tests**: Use `clear_all_tool_call_hooks()` between test runs
8. **Modify In-Place**: Always modify `context.tool_input` in-place, never replace
9. **Log Important Decisions**: Especially when blocking tool execution
10. **Consider Performance**: Cache expensive validations when possible
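Practice 10 can be sketched with `functools.lru_cache`: the hook stays cheap because repeated inputs skip the expensive check. The domain policy below is a hypothetical stand-in for a real validation.

```python
from functools import lru_cache

@lru_cache(maxsize=256)
def is_domain_allowed(domain: str) -> bool:
    # Hypothetical stand-in for an expensive validation
    # (e.g., a policy-service lookup); results are cached per domain.
    return domain.endswith((".edu", ".gov", ".org"))

# Hooks run on every tool call; repeated domains hit the cache.
assert is_domain_allowed("stanford.edu")
assert not is_domain_allowed("example.xyz")
assert is_domain_allowed("stanford.edu")  # served from cache
```

A hook that calls `is_domain_allowed` then pays the expensive check at most once per distinct domain.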
## Error Handling
```python
@before_tool_call
def safe_validation(context: ToolCallHookContext) -> bool | None:
    try:
        # Your validation logic
        if not validate_input(context.tool_input):
            return False
    except Exception as e:
        print(f"⚠️ Hook error: {e}")
        # Decide: allow or block on error
    return None  # Allow execution despite error
```
## Type Safety
```python
from crewai.hooks import (
    ToolCallHookContext,
    BeforeToolCallHookType,
    AfterToolCallHookType,
    register_before_tool_call_hook,
    register_after_tool_call_hook,
)

# Explicit type annotations
def my_before_hook(context: ToolCallHookContext) -> bool | None:
    return None

def my_after_hook(context: ToolCallHookContext) -> str | None:
    return None

# Type-safe registration
register_before_tool_call_hook(my_before_hook)
register_after_tool_call_hook(my_after_hook)
```
## Integration with Existing Tools
### Wrapping Existing Validation
```python
def existing_validator(tool_name: str, inputs: dict) -> bool:
    """Your existing validation function."""
    # Your validation logic
    return True

@before_tool_call
def integrate_validator(context: ToolCallHookContext) -> bool | None:
    if not existing_validator(context.tool_name, context.tool_input):
        print(f"❌ Validation failed for {context.tool_name}")
        return False
    return None
```
### Logging to External Systems
```python
import logging

logger = logging.getLogger(__name__)

@before_tool_call
def log_to_external_system(context: ToolCallHookContext) -> None:
    logger.info(f"Tool call: {context.tool_name}", extra={
        'tool_name': context.tool_name,
        'tool_input': context.tool_input,
        'agent': context.agent.role if context.agent else None
    })
    return None
```
## Troubleshooting
### Hook Not Executing
- Verify hook is registered before crew execution
- Check if previous hook returned `False` (blocks execution and subsequent hooks)
- Ensure hook signature matches expected type
### Input Modifications Not Working
- Use in-place modifications: `context.tool_input['key'] = value`
- Don't replace the dict: `context.tool_input = {}`
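The distinction matters because the hook only receives a reference to the framework's dict; rebinding the local name has no effect outside the hook. A plain-Python illustration (no CrewAI imports needed):

```python
def wrong_hook(tool_input: dict) -> None:
    # Rebinds the local name to a new dict; the caller's dict is untouched.
    tool_input = {"query": "modified"}

def right_hook(tool_input: dict) -> None:
    # Mutates the shared dict in place; the caller sees the change.
    tool_input["query"] = "modified"

framework_input = {"query": "original"}
wrong_hook(framework_input)
assert framework_input["query"] == "original"

right_hook(framework_input)
assert framework_input["query"] == "modified"
```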
### Result Modifications Not Working
- Return the modified string from after hooks
- Returning `None` keeps the original result
- Ensure the tool actually returned a result
### Tool Blocked Unexpectedly
- Check all before hooks for blocking conditions
- Verify hook execution order
- Add debug logging to identify which hook is blocking
## Conclusion
Tool Call Hooks provide powerful capabilities for controlling and monitoring tool execution in CrewAI. Use them to implement safety guardrails, approval gates, input validation, result sanitization, logging, and analytics. Combined with proper error handling and type safety, hooks enable secure and production-ready agent systems with comprehensive observability.

CrewAI's MCP DSL (Domain Specific Language) integration provides the **simplest way** to connect your agents to MCP (Model Context Protocol) servers. Just add an `mcps` field to your agent and CrewAI handles all the complexity automatically.
<Info>
  This is the **recommended approach** for most MCP use cases. For advanced
  scenarios requiring manual connection management, see
  [MCPServerAdapter](/en/mcp/overview#advanced-mcpserveradapter).
</Info>
## Basic Usage
### Common Issues
**No tools discovered:**
```python
# Check your MCP server URL and authentication
# Verify the server is running and accessible
mcps=["https://mcp.example.com/mcp?api_key=valid_key"]
```
**Connection timeouts:**
```python
# Server may be slow or overloaded
# CrewAI will log warnings and continue with other servers
mcps=["https://mcp.example.com/mcp?api_key=valid_key"]
```
**Authentication failures:**
```python
# Verify API keys and credentials
# Check server documentation for required parameters
mcps=["https://mcp.example.com/mcp?api_key=valid_key"]
```
---
title: "MCP Servers as Tools in CrewAI"
description: "Learn how to integrate MCP servers as tools in your CrewAI agents using the `crewai-tools` library."
icon: plug
mode: "wide"
---
CrewAI offers **two approaches** for MCP integration:
### 🚀 **Simple DSL Integration** (Recommended)
Use the `mcps` field directly on agents for seamless MCP tool integration. The DSL supports both **string references** (for quick setup) and **structured configurations** (for full control).
#### String-Based References (Quick Setup)
Perfect for remote HTTPS servers and the CrewAI AMP marketplace:
```python
from crewai import Agent

agent = Agent(
    role="Research Analyst",
    goal="Find and analyze information using advanced search tools",
    backstory="Expert researcher with access to multiple data sources",
    mcps=["https://mcp.example.com/mcp"],
)
# MCP tools are now automatically available to your agent!
```
#### Structured Configurations (Full Control)
For complete control over connection settings, tool filtering, and all transport types:
```python
from crewai import Agent
from crewai.mcp import MCPServerStdio, MCPServerHTTP, MCPServerSSE
from crewai.mcp.filters import create_static_tool_filter

agent = Agent(
    role="Advanced Research Analyst",
    goal="Research with full control over MCP connections",
    backstory="Expert researcher with advanced tool access",
    mcps=[
        # Stdio transport for local servers
        MCPServerStdio(
            command="npx",
            args=["-y", "@modelcontextprotocol/server-filesystem"],
            env={"API_KEY": "your_key"},
            tool_filter=create_static_tool_filter(
                allowed_tool_names=["read_file", "list_directory"]
            ),
            cache_tools_list=True,
        ),
        # HTTP/Streamable HTTP transport for remote servers
        MCPServerHTTP(
            url="https://api.example.com/mcp",
            headers={"Authorization": "Bearer your_token"},
            streamable=True,
            cache_tools_list=True,
        ),
        # SSE transport for real-time streaming
        MCPServerSSE(
            url="https://stream.example.com/mcp/sse",
            headers={"Authorization": "Bearer your_token"},
        ),
    ]
)
```
### 🔧 **Advanced: MCPServerAdapter** (For Complex Scenarios)
For advanced use cases requiring manual connection management, the `crewai-tools` library provides the `MCPServerAdapter` class.
We currently support the following transport mechanisms:
- **Streamable HTTPS**: for remote servers (flexible, potentially bi-directional communication over HTTPS, often utilizing SSE for server-to-client streams)
## Video Tutorial
Watch this video tutorial for a comprehensive guide on MCP integration with CrewAI:
## Quick Start: Simple DSL Integration
The easiest way to integrate MCP servers is using the `mcps` field on your agents. You can use either string references or structured configurations.
### Quick Start with String References
```python
from crewai import Agent, Task, Crew

# Create agent with MCP tools using string references
research_agent = Agent(
    role="Research Analyst",
    goal="Find and analyze information using advanced search tools",
    backstory="Expert researcher with access to multiple data sources",
    mcps=[
        "https://external-api.com/mcp",
        "crewai-amp:financial-insights",
    ],
)

# Create task
research_task = Task(
    description="Research the latest developments in AI agent frameworks",
    expected_output="Comprehensive research report with citations",
    agent=research_agent,
)

# Create and run crew
crew = Crew(agents=[research_agent], tasks=[research_task])
result = crew.kickoff()
```
### Quick Start with Structured Configurations
```python
from crewai import Agent, Task, Crew
from crewai.mcp import MCPServerStdio, MCPServerHTTP, MCPServerSSE

# Create agent with structured MCP configurations
research_agent = Agent(
    role="Research Analyst",
    goal="Find and analyze information using advanced search tools",
    backstory="Expert researcher with access to multiple data sources",
    mcps=[
        # Local stdio server
        MCPServerStdio(
            command="python",
            args=["local_server.py"],
            env={"API_KEY": "your_key"},
        ),
        # Remote HTTP server
        MCPServerHTTP(
            url="https://api.research.com/mcp",
            headers={"Authorization": "Bearer your_token"},
        ),
    ]
)

# Create task
research_task = Task(
    description="Research the latest developments in AI agent frameworks",
    expected_output="Comprehensive research report with citations",
    agent=research_agent
)

# Create and run crew
crew = Crew(agents=[research_agent], tasks=[research_task])
result = crew.kickoff()
```
That's it! The MCP tools are automatically discovered and available to your agent.
## MCP Reference Formats
The `mcps` field supports both **string references** (for quick setup) and **structured configurations** (for full control). You can mix both formats in the same list.
### String-Based References
#### External MCP Servers
```python
mcps=[
    "https://external-api.com/mcp",              # Full server
    "https://weather.service.com/mcp#forecast",  # Specific tool
]
```
#### CrewAI AMP Marketplace
```python
mcps=[
    "crewai-amp:financial-insights",            # AMP service
    "crewai-amp:data-analysis#sentiment_tool",  # Specific AMP tool
]
```
### Structured Configurations
#### Stdio Transport (Local Servers)
Perfect for local MCP servers that run as processes:
```python
from crewai.mcp import MCPServerStdio
from crewai.mcp.filters import create_static_tool_filter

mcps=[
    MCPServerStdio(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem"],
        env={"API_KEY": "your_key"},
        tool_filter=create_static_tool_filter(
            allowed_tool_names=["read_file", "write_file"]
        ),
        cache_tools_list=True,
    ),
    # Python-based server
    MCPServerStdio(
        command="python",
        args=["path/to/server.py"],
        env={"UV_PYTHON": "3.12", "API_KEY": "your_key"},
    ),
]
```
#### HTTP/Streamable HTTP Transport (Remote Servers)
For remote MCP servers over HTTP/HTTPS:
```python
from crewai.mcp import MCPServerHTTP

mcps=[
    # Streamable HTTP (default)
    MCPServerHTTP(
        url="https://api.example.com/mcp",
        headers={"Authorization": "Bearer your_token"},
        streamable=True,
        cache_tools_list=True,
    ),
    # Standard HTTP
    MCPServerHTTP(
        url="https://api.example.com/mcp",
        headers={"Authorization": "Bearer your_token"},
        streamable=False,
    ),
]
```
#### SSE Transport (Real-Time Streaming)
For remote servers using Server-Sent Events:
```python
from crewai.mcp import MCPServerSSE

mcps=[
    MCPServerSSE(
        url="https://stream.example.com/mcp/sse",
        headers={"Authorization": "Bearer your_token"},
        cache_tools_list=True,
    ),
]
```
### Mixed References
You can combine string references and structured configurations:
```python
from crewai.mcp import MCPServerStdio, MCPServerHTTP

mcps=[
    # String references
    "https://external-api.com/mcp",   # External server
    "crewai-amp:financial-insights",  # AMP service
    # Structured configurations
    MCPServerStdio(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem"],
    ),
    MCPServerHTTP(
        url="https://api.example.com/mcp",
        headers={"Authorization": "Bearer token"},
    ),
]
```
### Tool Filtering
Structured configurations support advanced tool filtering:
```python
from crewai.mcp import MCPServerStdio
from crewai.mcp.filters import create_static_tool_filter, create_dynamic_tool_filter, ToolFilterContext

# Static filtering (allow/block lists)
static_filter = create_static_tool_filter(
    allowed_tool_names=["read_file", "write_file"],
    blocked_tool_names=["delete_file"],
)

# Dynamic filtering (context-aware)
def dynamic_filter(context: ToolFilterContext, tool: dict) -> bool:
    # Block dangerous tools for certain agent roles
    if context.agent.role == "Code Reviewer":
        if "delete" in tool.get("name", "").lower():
            return False
    return True

mcps=[
    MCPServerStdio(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem"],
        tool_filter=static_filter,  # or dynamic_filter
    ),
]
```
## Configuration Parameters
Each transport type supports specific configuration options:
### MCPServerStdio Parameters
- **`command`** (required): Command to execute (e.g., `"python"`, `"node"`, `"npx"`, `"uvx"`)
- **`args`** (optional): List of command arguments (e.g., `["server.py"]` or `["-y", "@mcp/server"]`)
- **`env`** (optional): Dictionary of environment variables to pass to the process
- **`tool_filter`** (optional): Tool filter function for filtering available tools
- **`cache_tools_list`** (optional): Whether to cache the tool list for faster subsequent access (default: `False`)
### MCPServerHTTP Parameters
- **`url`** (required): Server URL (e.g., `"https://api.example.com/mcp"`)
- **`headers`** (optional): Dictionary of HTTP headers for authentication or other purposes
- **`streamable`** (optional): Whether to use streamable HTTP transport (default: `True`)
- **`tool_filter`** (optional): Tool filter function for filtering available tools
- **`cache_tools_list`** (optional): Whether to cache the tool list for faster subsequent access (default: `False`)
### MCPServerSSE Parameters
- **`url`** (required): Server URL (e.g., `"https://api.example.com/mcp/sse"`)
- **`headers`** (optional): Dictionary of HTTP headers for authentication or other purposes
- **`tool_filter`** (optional): Tool filter function for filtering available tools
- **`cache_tools_list`** (optional): Whether to cache the tool list for faster subsequent access (default: `False`)
### Common Parameters
All transport types support:
- **`tool_filter`**: Filter function to control which tools are available. Can be:
- `None` (default): All tools are available
- Static filter: Created with `create_static_tool_filter()` for allow/block lists
- Dynamic filter: Created with `create_dynamic_tool_filter()` for context-aware filtering
- **`cache_tools_list`**: When `True`, caches the tool list after first discovery to improve performance on subsequent connections
## Key Features
- 🔄 **Automatic Tool Discovery**: Tools are automatically discovered and integrated
- 🛡️ **Error Resilience**: Graceful handling of unavailable servers
- ⏱️ **Timeout Protection**: Built-in timeouts prevent hanging connections
- 📊 **Transparent Integration**: Works seamlessly with existing CrewAI features
- 🔧 **Full Transport Support**: Stdio, HTTP/Streamable HTTP, and SSE transports
- 🎯 **Advanced Filtering**: Static and dynamic tool filtering capabilities
- 🔐 **Flexible Authentication**: Support for headers, environment variables, and query parameters
## Error Handling
The MCP DSL integration is designed to be resilient and handles failures gracefully:
```python
from crewai import Agent
from crewai.mcp import MCPServerStdio, MCPServerHTTP

agent = Agent(
    role="Resilient Agent",
    goal="Continue working despite server issues",
    backstory="Agent that handles failures gracefully",
    mcps=[
        # String references
        "https://reliable-server.com/mcp",      # Will work
        "https://unreachable-server.com/mcp",   # Will be skipped gracefully
        "crewai-amp:working-service",           # Will work
        # Structured configs
        MCPServerStdio(
            command="python",
            args=["reliable_server.py"],        # Will work
        ),
        MCPServerHTTP(
            url="https://slow-server.com/mcp",  # Will timeout gracefully
        ),
    ]
)
# Agent will use tools from working servers and log warnings for failing ones
```
All connection errors are handled gracefully:
- **Connection failures**: Logged as warnings, agent continues with available tools
- **Timeout errors**: Connections timeout after 30 seconds (configurable)
- **Authentication errors**: Logged clearly for debugging
- **Invalid configurations**: Validation errors are raised at agent creation time
## Advanced: MCPServerAdapter
For complex scenarios requiring manual connection management, use the `MCPServerAdapter` class from `crewai-tools`. Using a Python context manager (`with` statement) is the recommended approach as it automatically handles starting and stopping the connection to the MCP server.
```python
with MCPServerAdapter(server_params, connect_timeout=60) as mcp_tools:
    agent = Agent(
        ...,
        tools=mcp_tools,
    )
    # ... rest of your crew setup ...
```
This general pattern shows how to integrate tools. For specific examples tailored to each transport, refer to the detailed guides below.
## Filtering Tools
<Tip>
- If `mcp_server_params` is not defined, `get_mcp_tools()` simply returns an empty list, allowing the same code paths to run with or without MCP configured.
This makes it safe to call `get_mcp_tools()` from multiple agent methods or selectively enable MCP per environment.
</Tip>
### Connection Timeout Configuration
<CardGroup>
  <Card href="/en/mcp/dsl-integration" color="#3B82F6">
    **Recommended**: Use the simple `mcps=[]` field syntax for effortless MCP
    integration.
  </Card>
  <Card title="Stdio Transport" href="/en/mcp/stdio" color="#10B981">
    Connect to local MCP servers via standard input/output. Ideal for scripts
    and local executables.
  </Card>
  <Card title="SSE Transport" icon="wifi" href="/en/mcp/sse" color="#F59E0B">
    Integrate with remote MCP servers using Server-Sent Events for real-time
    data streaming.
  </Card>
  <Card title="Streamable HTTP Transport" href="/en/mcp/streamable-http" color="#8B5CF6">
    Utilize flexible Streamable HTTP for robust communication with remote MCP
    servers.
  </Card>
  <Card title="Connecting to Multiple Servers" href="/en/mcp/multiple-servers" color="#EF4444">
    Aggregate tools from several MCP servers simultaneously using a single
    adapter.
  </Card>
  <Card title="Security Considerations" href="/en/mcp/security" color="#DC2626">
    Review important security best practices for MCP integration to keep your
    agents safe.
  </Card>
</CardGroup>
Check out this repository for full demos and examples of MCP integration with CrewAI! 👇
<Card
  title="GitHub Repository"
  icon="github"
  href="https://github.com/tonykipkemboi/crewai-mcp-demo"
  target="_blank"
>
  CrewAI MCP Demo
</Card>
## Staying Safe with MCP
<Warning>Always ensure that you trust an MCP Server before using it.</Warning>
#### Security Warning: DNS Rebinding Attacks
SSE transports can be vulnerable to DNS rebinding attacks if not properly secured.
To prevent this:
- Always validate `Origin` headers on incoming SSE connections
- Avoid binding local servers to all interfaces (`0.0.0.0`); bind to `localhost` (`127.0.0.1`) instead
- Implement proper authentication for all SSE connections
Without these protections, attackers could use DNS rebinding to interact with local MCP servers.
For more details, see [Anthropic's MCP Transport Security docs](https://modelcontextprotocol.io/docs/concepts/transports#security-considerations).
### Limitations
- **Supported Primitives**: Currently, `MCPServerAdapter` primarily supports adapting MCP `tools`. Other MCP primitives like `prompts` or `resources` are not directly integrated as CrewAI components through this adapter at this time.
- **Output Handling**: The adapter typically processes the primary text output from an MCP tool (e.g., `.content[0].text`). Complex or multi-modal outputs might require custom handling if not fitting this pattern.

---
title: Datadog Integration
description: Learn how to integrate Datadog with CrewAI to submit LLM Observability traces to Datadog.
icon: dog
mode: "wide"
---
# Integrate Datadog with CrewAI
This guide will demonstrate how to integrate **[Datadog LLM Observability](https://docs.datadoghq.com/llm_observability/)** with **CrewAI** using [Datadog auto-instrumentation](https://docs.datadoghq.com/llm_observability/instrumentation/auto_instrumentation?tab=python). By the end of this guide, you will be able to submit LLM Observability traces to Datadog and view your CrewAI agent runs in Datadog LLM Observability's [Agentic Execution View](https://docs.datadoghq.com/llm_observability/monitoring/agent_monitoring).
## What is Datadog LLM Observability?
[Datadog LLM Observability](https://www.datadoghq.com/product/llm-observability/) helps AI engineers, data scientists, and application developers quickly develop, evaluate, and monitor LLM applications. Confidently improve output quality, performance, and cost, and manage overall risk with structured experiments, end-to-end tracing across AI agents, and evaluations.
## Getting Started
### Install Dependencies
```shell
pip install ddtrace crewai crewai-tools
```
### Set Environment Variables
If you do not have a Datadog API key, you can [create an account](https://www.datadoghq.com/) and [get your API key](https://docs.datadoghq.com/account_management/api-app-keys/#api-keys).
You will also need to specify an ML Application name in the following environment variables. An ML Application is a grouping of LLM Observability traces associated with a specific LLM-based application. See [ML Application Naming Guidelines](https://docs.datadoghq.com/llm_observability/instrumentation/sdk?tab=python#application-naming-guidelines) for more information on limitations with ML Application names.
```shell
export DD_API_KEY=<YOUR_DD_API_KEY>
export DD_SITE=<YOUR_DD_SITE>
export DD_LLMOBS_ENABLED=true
export DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME>
export DD_LLMOBS_AGENTLESS_ENABLED=true
export DD_APM_TRACING_ENABLED=false
```
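If you prefer configuring from Python rather than the shell, the same variables can be set before ddtrace initializes. The ML Application name below is a hypothetical placeholder; `DD_API_KEY` and `DD_SITE` should still come from your environment:

```python
import os

# Must run before ddtrace / LLM Observability is initialized.
os.environ.setdefault("DD_LLMOBS_ENABLED", "true")
os.environ.setdefault("DD_LLMOBS_ML_APP", "crewai-haiku-demo")  # hypothetical name
os.environ.setdefault("DD_LLMOBS_AGENTLESS_ENABLED", "true")
os.environ.setdefault("DD_APM_TRACING_ENABLED", "false")
```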
Additionally, configure API keys for any LLM providers you use:
```shell
export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
export ANTHROPIC_API_KEY=<YOUR_ANTHROPIC_API_KEY>
export GEMINI_API_KEY=<YOUR_GEMINI_API_KEY>
...
```
### Create a CrewAI Agent Application
```python
# crewai_agent.py
from crewai import Agent, Task, Crew
from crewai_tools import WebsiteSearchTool

web_rag_tool = WebsiteSearchTool()

writer = Agent(
    role="Writer",
    goal="You make math engaging and understandable for young children through poetry",
    backstory="You're an expert in writing haikus but you know nothing of math.",
    tools=[web_rag_tool],
)

task = Task(
    description="What is {multiplication}?",
    expected_output="Compose a haiku that includes the answer.",
    agent=writer,
)

crew = Crew(
    agents=[writer],
    tasks=[task],
    share_crew=False,
)

output = crew.kickoff(dict(multiplication="2 * 2"))
```
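Note that `{multiplication}` in the task description is filled in from the `kickoff` inputs. Conceptually (CrewAI performs the equivalent substitution internally), this behaves like Python's `str.format`:

```python
# Illustrative sketch of input interpolation, not CrewAI's actual code.
description = "What is {multiplication}?"
inputs = {"multiplication": "2 * 2"}

rendered = description.format(**inputs)
print(rendered)  # → What is 2 * 2?
```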
### Run the Application with Datadog Auto-Instrumentation
With the [environment variables](#set-environment-variables) set, you can now run the application with Datadog auto-instrumentation.
```shell
ddtrace-run python crewai_agent.py
```
### View the Traces in Datadog
After running the application, you can view the traces in [Datadog LLM Observability's Traces View](https://app.datadoghq.com/llm/traces), selecting the ML Application name you chose from the top-left dropdown.
Clicking on a trace will show you the details of the trace, including total tokens used, number of LLM calls, models used, and estimated cost. Clicking into a specific span will narrow down these details, and show related input, output, and metadata.
<Frame>
<img src="/images/datadog-llm-observability-1.png" alt="Datadog LLM Observability Trace View" />
</Frame>
Additionally, you can view the execution graph of the trace, which shows its control and data flow. This view scales with larger agents, showing handoffs and relationships between LLM calls, tool calls, and agent interactions.
<Frame>
<img src="/images/datadog-llm-observability-2.png" alt="Datadog LLM Observability Agent Execution Flow View" />
</Frame>
## References
- [Datadog LLM Observability](https://www.datadoghq.com/product/llm-observability/)
- [Datadog LLM Observability CrewAI Auto-Instrumentation](https://docs.datadoghq.com/llm_observability/instrumentation/auto_instrumentation?tab=python#crew-ai)


@@ -733,9 +733,7 @@ Here's a basic configuration to route requests to OpenAI, specifically using GPT
- Collect relevant metadata to filter logs
- Enforce access permissions
Create API keys through the [Portkey App](https://app.portkey.ai/)
Example using Python SDK:
```python
@@ -758,7 +756,7 @@ Here's a basic configuration to route requests to OpenAI, specifically using GPT
)
```
For detailed key management instructions, see the [Portkey documentation](https://portkey.ai/docs).
</Accordion>
<Accordion title="Step 4: Deploy & Monitor">


@@ -45,6 +45,7 @@ crewai login
```
This command will:
1. Open your browser to the authentication page
2. Prompt you to enter a device code
3. Authenticate your local environment with your CrewAI AMP account
@@ -153,7 +154,6 @@ After running the crew or flow, you can view the traces generated by your CrewAI
Just click on the link below to view the traces or head over to the traces tab in the dashboard [here](https://app.crewai.com/crewai_plus/trace_batches)
![CrewAI Tracing Interface](/images/view-traces.png)
### Alternative: Environment Variable Configuration
You can also enable tracing globally by setting an environment variable:
@@ -190,6 +190,7 @@ CrewAI tracing provides comprehensive visibility into:
- **Error Tracking**: Detailed error information and stack traces
### Trace Features
- **Execution Timeline**: Click through different stages of execution
- **Detailed Logs**: Access comprehensive logs for debugging
- **Performance Analytics**: Analyze execution patterns and optimize performance


@@ -58,6 +58,7 @@ Follow the steps below to get Crewing! 🚣‍♂️
your ability to turn complex data into clear and concise reports, making
it easy for others to understand and act on the information you provide.
```
</Step>
<Step title="Modify your `tasks.yaml` file">
```yaml tasks.yaml
@@ -81,6 +82,7 @@ Follow the steps below to get Crewing! 🚣‍♂️
  agent: reporting_analyst
  output_file: report.md
```
</Step>
<Step title="Modify your `crew.py` file">
```python crew.py
@@ -136,6 +138,7 @@ Follow the steps below to get Crewing! 🚣‍♂️
verbose=True,
)
```
</Step>
<Step title="[Optional] Add before and after crew functions">
```python crew.py
@@ -160,6 +163,7 @@ Follow the steps below to get Crewing! 🚣‍♂️
# ... remaining code
```
</Step>
<Step title="Feel free to pass custom inputs to your crew">
For example, you can pass the `topic` input to your crew to customize the research and reporting.
@@ -178,6 +182,7 @@ Follow the steps below to get Crewing! 🚣‍♂️
}
LatestAiDevelopmentCrew().crew().kickoff(inputs=inputs)
```
</Step>
<Step title="Set your environment variables">
Before running your crew, make sure you have the following keys set as environment variables in your `.env` file:
@@ -211,13 +216,13 @@ Follow the steps below to get Crewing! 🚣‍♂️
<Step title="Enterprise Alternative: Create in Crew Studio">
For CrewAI AMP users, you can create the same crew without writing code:
1. Log in to your CrewAI AMP account (create a free account at [app.crewai.com](https://app.crewai.com))
2. Open Crew Studio
3. Describe the automation you're trying to build
4. Create your tasks visually and connect them in sequence
5. Configure your inputs and click "Download Code" or "Deploy"
![Crew Studio Quickstart](/images/enterprise/crew-studio-interface.png)
<Card title="Try CrewAI AMP" icon="rocket" href="https://app.crewai.com">
Start your free account at CrewAI AMP
@@ -226,7 +231,7 @@ Follow the steps below to get Crewing! 🚣‍♂️
<Step title="View your final report">
You should see the output in the console, and a `report.md` file with the final report should be created in the root of your project.
Here's an example of what the report should look like:
<CodeGroup>
```markdown output/report.md
@@ -289,6 +294,7 @@ Follow the steps below to get Crewing! 🚣‍♂️
## 8. Conclusion
The emergence of AI agents is undeniably reshaping the workplace landscape in 5. With their ability to automate tasks, enhance efficiency, and improve decision-making, AI agents are critical in driving operational success. Organizations must embrace and adapt to AI developments to thrive in an increasingly digital business environment.
```
</CodeGroup>
</Step>
</Steps>
@@ -297,6 +303,7 @@ Follow the steps below to get Crewing! 🚣‍♂️
Congratulations!
You have successfully set up your crew project and are ready to start building your own agentic workflows!
</Check>
### Note on Consistency in Naming
@@ -308,34 +315,38 @@ This naming consistency allows CrewAI to automatically link your configurations
#### Example References
<Tip>
Note how we use the same name for the agent in the `agents.yaml` file (`email_summarizer`) as for the method in the `crew.py` file (`email_summarizer`).
</Tip>
```yaml agents.yaml
email_summarizer:
  role: >
    Email Summarizer
  goal: >
    Summarize emails into a concise and clear summary
  backstory: >
    You will create a 5 bullet point summary of the report
  llm: provider/model-id # Add your choice of model here
```
<Tip>
Note how we use the same name for the task in the `tasks.yaml` file (`email_summarizer_task`) as for the method in the `crew.py` file (`email_summarizer_task`).
</Tip>
```yaml tasks.yaml
email_summarizer_task:
  description: >
    Summarize the email into a 5 bullet point summary
  expected_output: >
    A 5 bullet point summary of the email
  agent: email_summarizer
  context:
    - reporting_task
    - research_task
```
## Deploying Your Crew
@@ -354,18 +365,16 @@ Watch this video tutorial for a step-by-step demonstration of deploying your cre
></iframe>
<CardGroup cols={2}>
  <Card title="Deploy on Enterprise" icon="rocket" href="http://app.crewai.com">
    Get started with CrewAI AMP and deploy your crew in a production environment with just a few clicks.
  </Card>
  <Card
    title="Join the Community"
    icon="comments"
    href="https://community.crewai.com"
  >
    Join our open source community to discuss ideas, share your projects, and connect with other CrewAI developers.
  </Card>
</CardGroup>


@@ -77,7 +77,7 @@ The `RagTool` accepts the following parameters:
- **summarize**: Optional. Whether to summarize the retrieved content. Default is `False`.
- **adapter**: Optional. A custom adapter for the knowledge base. If not provided, a CrewAIRagAdapter will be used.
- **config**: Optional. Configuration for the underlying CrewAI RAG system. Accepts a `RagToolConfig` TypedDict with optional `embedding_model` (ProviderSpec) and `vectordb` (VectorDbConfig) keys. All configuration values provided programmatically take precedence over environment variables.
## Adding Content
@@ -127,26 +127,528 @@ You can customize the behavior of the `RagTool` by providing a configuration dic
```python Code
from crewai_tools import RagTool
from crewai_tools.tools.rag import RagToolConfig, VectorDbConfig, ProviderSpec

# Create a RAG tool with custom configuration
vectordb: VectorDbConfig = {
    "provider": "qdrant",
    "config": {
        "collection_name": "my-collection"
    }
}

embedding_model: ProviderSpec = {
    "provider": "openai",
    "config": {
        "model_name": "text-embedding-3-small"
    }
}

config: RagToolConfig = {
    "vectordb": vectordb,
    "embedding_model": embedding_model
}

rag_tool = RagTool(config=config, summarize=True)
```
## Embedding Model Configuration
The `embedding_model` parameter accepts a `crewai.rag.embeddings.types.ProviderSpec` dictionary with the structure:
```python
{
    "provider": "provider-name",  # Required
    "config": {  # Optional
        # Provider-specific configuration
    }
}
```
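Expressed as a Python type, the shape is roughly the following (an illustrative sketch; the real `ProviderSpec` in `crewai.rag.embeddings.types` is a more precise union of per-provider types):

```python
from typing import Any, TypedDict

class ProviderSpecSketch(TypedDict, total=False):
    """Hypothetical approximation of the ProviderSpec shape."""
    provider: str           # required in practice
    config: dict[str, Any]  # optional, provider-specific keys

spec: ProviderSpecSketch = {
    "provider": "openai",
    "config": {"model_name": "text-embedding-3-small"},
}
```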
### Supported Providers
<AccordionGroup>
<Accordion title="OpenAI">
```python main.py
from crewai.rag.embeddings.providers.openai.types import OpenAIProviderSpec

embedding_model: OpenAIProviderSpec = {
    "provider": "openai",
    "config": {
        "api_key": "your-api-key",
        "model_name": "text-embedding-ada-002",
        "dimensions": 1536,
        "organization_id": "your-org-id",
        "api_base": "https://api.openai.com/v1",
        "api_version": "v1",
        "default_headers": {"Custom-Header": "value"}
    }
}
```
**Config Options:**
- `api_key` (str): OpenAI API key
- `model_name` (str): Model to use. Default: `text-embedding-ada-002`. Options: `text-embedding-3-small`, `text-embedding-3-large`, `text-embedding-ada-002`
- `dimensions` (int): Number of dimensions for the embedding
- `organization_id` (str): OpenAI organization ID
- `api_base` (str): Custom API base URL
- `api_version` (str): API version
- `default_headers` (dict): Custom headers for API requests
**Environment Variables:**
- `OPENAI_API_KEY` or `EMBEDDINGS_OPENAI_API_KEY`: `api_key`
- `OPENAI_ORGANIZATION_ID` or `EMBEDDINGS_OPENAI_ORGANIZATION_ID`: `organization_id`
- `OPENAI_MODEL_NAME` or `EMBEDDINGS_OPENAI_MODEL_NAME`: `model_name`
- `OPENAI_API_BASE` or `EMBEDDINGS_OPENAI_API_BASE`: `api_base`
- `OPENAI_API_VERSION` or `EMBEDDINGS_OPENAI_API_VERSION`: `api_version`
- `OPENAI_DIMENSIONS` or `EMBEDDINGS_OPENAI_DIMENSIONS`: `dimensions`
</Accordion>
<Accordion title="Cohere">
```python main.py
from crewai.rag.embeddings.providers.cohere.types import CohereProviderSpec

embedding_model: CohereProviderSpec = {
    "provider": "cohere",
    "config": {
        "api_key": "your-api-key",
        "model_name": "embed-english-v3.0"
    }
}
```
**Config Options:**
- `api_key` (str): Cohere API key
- `model_name` (str): Model to use. Default: `large`. Options: `embed-english-v3.0`, `embed-multilingual-v3.0`, `large`, `small`
**Environment Variables:**
- `COHERE_API_KEY` or `EMBEDDINGS_COHERE_API_KEY`: `api_key`
- `EMBEDDINGS_COHERE_MODEL_NAME`: `model_name`
</Accordion>
<Accordion title="VoyageAI">
```python main.py
from crewai.rag.embeddings.providers.voyageai.types import VoyageAIProviderSpec

embedding_model: VoyageAIProviderSpec = {
    "provider": "voyageai",
    "config": {
        "api_key": "your-api-key",
        "model": "voyage-3",
        "input_type": "document",
        "truncation": True,
        "output_dtype": "float32",
        "output_dimension": 1024,
        "max_retries": 3,
        "timeout": 60.0
    }
}
```
**Config Options:**
- `api_key` (str): VoyageAI API key
- `model` (str): Model to use. Default: `voyage-2`. Options: `voyage-3`, `voyage-3-lite`, `voyage-code-3`, `voyage-large-2`
- `input_type` (str): Type of input. Options: `document` (for storage), `query` (for search)
- `truncation` (bool): Whether to truncate inputs that exceed max length. Default: `True`
- `output_dtype` (str): Output data type
- `output_dimension` (int): Dimension of output embeddings
- `max_retries` (int): Maximum number of retry attempts. Default: `0`
- `timeout` (float): Request timeout in seconds
**Environment Variables:**
- `VOYAGEAI_API_KEY` or `EMBEDDINGS_VOYAGEAI_API_KEY`: `api_key`
- `VOYAGEAI_MODEL` or `EMBEDDINGS_VOYAGEAI_MODEL`: `model`
- `VOYAGEAI_INPUT_TYPE` or `EMBEDDINGS_VOYAGEAI_INPUT_TYPE`: `input_type`
- `VOYAGEAI_TRUNCATION` or `EMBEDDINGS_VOYAGEAI_TRUNCATION`: `truncation`
- `VOYAGEAI_OUTPUT_DTYPE` or `EMBEDDINGS_VOYAGEAI_OUTPUT_DTYPE`: `output_dtype`
- `VOYAGEAI_OUTPUT_DIMENSION` or `EMBEDDINGS_VOYAGEAI_OUTPUT_DIMENSION`: `output_dimension`
- `VOYAGEAI_MAX_RETRIES` or `EMBEDDINGS_VOYAGEAI_MAX_RETRIES`: `max_retries`
- `VOYAGEAI_TIMEOUT` or `EMBEDDINGS_VOYAGEAI_TIMEOUT`: `timeout`
</Accordion>
<Accordion title="Ollama">
```python main.py
from crewai.rag.embeddings.providers.ollama.types import OllamaProviderSpec

embedding_model: OllamaProviderSpec = {
    "provider": "ollama",
    "config": {
        "model_name": "llama2",
        "url": "http://localhost:11434/api/embeddings"
    }
}
```
**Config Options:**
- `model_name` (str): Ollama model name (e.g., `llama2`, `mistral`, `nomic-embed-text`)
- `url` (str): Ollama API endpoint URL. Default: `http://localhost:11434/api/embeddings`
**Environment Variables:**
- `OLLAMA_MODEL` or `EMBEDDINGS_OLLAMA_MODEL`: `model_name`
- `OLLAMA_URL` or `EMBEDDINGS_OLLAMA_URL`: `url`
</Accordion>
<Accordion title="Amazon Bedrock">
```python main.py
import boto3
from crewai.rag.embeddings.providers.aws.types import BedrockProviderSpec

boto3_session = boto3.Session()  # your configured AWS session

embedding_model: BedrockProviderSpec = {
    "provider": "amazon-bedrock",
    "config": {
        "model_name": "amazon.titan-embed-text-v2:0",
        "session": boto3_session
    }
}
```
**Config Options:**
- `model_name` (str): Bedrock model ID. Default: `amazon.titan-embed-text-v1`. Options: `amazon.titan-embed-text-v1`, `amazon.titan-embed-text-v2:0`, `cohere.embed-english-v3`, `cohere.embed-multilingual-v3`
- `session` (Any): Boto3 session object for AWS authentication
**Environment Variables:**
- `AWS_ACCESS_KEY_ID`: AWS access key
- `AWS_SECRET_ACCESS_KEY`: AWS secret key
- `AWS_REGION`: AWS region (e.g., `us-east-1`)
</Accordion>
<Accordion title="Azure OpenAI">
```python main.py
from crewai.rag.embeddings.providers.microsoft.types import AzureProviderSpec

embedding_model: AzureProviderSpec = {
    "provider": "azure",
    "config": {
        "deployment_id": "your-deployment-id",
        "api_key": "your-api-key",
        "api_base": "https://your-resource.openai.azure.com",
        "api_version": "2024-02-01",
        "model_name": "text-embedding-ada-002",
        "api_type": "azure"
    }
}
```
**Config Options:**
- `deployment_id` (str): **Required** - Azure OpenAI deployment ID
- `api_key` (str): Azure OpenAI API key
- `api_base` (str): Azure OpenAI resource endpoint
- `api_version` (str): API version. Example: `2024-02-01`
- `model_name` (str): Model name. Default: `text-embedding-ada-002`
- `api_type` (str): API type. Default: `azure`
- `dimensions` (int): Output dimensions
- `default_headers` (dict): Custom headers
**Environment Variables:**
- `AZURE_OPENAI_API_KEY` or `EMBEDDINGS_AZURE_API_KEY`: `api_key`
- `AZURE_OPENAI_ENDPOINT` or `EMBEDDINGS_AZURE_API_BASE`: `api_base`
- `EMBEDDINGS_AZURE_DEPLOYMENT_ID`: `deployment_id`
- `EMBEDDINGS_AZURE_API_VERSION`: `api_version`
- `EMBEDDINGS_AZURE_MODEL_NAME`: `model_name`
- `EMBEDDINGS_AZURE_API_TYPE`: `api_type`
- `EMBEDDINGS_AZURE_DIMENSIONS`: `dimensions`
</Accordion>
<Accordion title="Google Generative AI">
```python main.py
from crewai.rag.embeddings.providers.google.types import GenerativeAiProviderSpec

embedding_model: GenerativeAiProviderSpec = {
    "provider": "google-generativeai",
    "config": {
        "api_key": "your-api-key",
        "model_name": "gemini-embedding-001",
        "task_type": "RETRIEVAL_DOCUMENT"
    }
}
```
**Config Options:**
- `api_key` (str): Google AI API key
- `model_name` (str): Model name. Default: `gemini-embedding-001`. Options: `gemini-embedding-001`, `text-embedding-005`, `text-multilingual-embedding-002`
- `task_type` (str): Task type for embeddings. Default: `RETRIEVAL_DOCUMENT`. Options: `RETRIEVAL_DOCUMENT`, `RETRIEVAL_QUERY`
**Environment Variables:**
- `GOOGLE_API_KEY`, `GEMINI_API_KEY`, or `EMBEDDINGS_GOOGLE_API_KEY`: `api_key`
- `EMBEDDINGS_GOOGLE_GENERATIVE_AI_MODEL_NAME`: `model_name`
- `EMBEDDINGS_GOOGLE_GENERATIVE_AI_TASK_TYPE`: `task_type`
</Accordion>
<Accordion title="Google Vertex AI">
```python main.py
from crewai.rag.embeddings.providers.google.types import VertexAIProviderSpec

embedding_model: VertexAIProviderSpec = {
    "provider": "google-vertex",
    "config": {
        "model_name": "text-embedding-004",
        "project_id": "your-project-id",
        "region": "us-central1",
        "api_key": "your-api-key"
    }
}
```
**Config Options:**
- `model_name` (str): Model name. Default: `textembedding-gecko`. Options: `text-embedding-004`, `textembedding-gecko`, `textembedding-gecko-multilingual`
- `project_id` (str): Google Cloud project ID. Default: `cloud-large-language-models`
- `region` (str): Google Cloud region. Default: `us-central1`
- `api_key` (str): API key for authentication
**Environment Variables:**
- `GOOGLE_APPLICATION_CREDENTIALS`: Path to service account JSON file
- `GOOGLE_CLOUD_PROJECT` or `EMBEDDINGS_GOOGLE_VERTEX_PROJECT_ID`: `project_id`
- `EMBEDDINGS_GOOGLE_VERTEX_MODEL_NAME`: `model_name`
- `EMBEDDINGS_GOOGLE_VERTEX_REGION`: `region`
- `EMBEDDINGS_GOOGLE_VERTEX_API_KEY`: `api_key`
</Accordion>
<Accordion title="Jina AI">
```python main.py
from crewai.rag.embeddings.providers.jina.types import JinaProviderSpec

embedding_model: JinaProviderSpec = {
    "provider": "jina",
    "config": {
        "api_key": "your-api-key",
        "model_name": "jina-embeddings-v3"
    }
}
```
**Config Options:**
- `api_key` (str): Jina AI API key
- `model_name` (str): Model name. Default: `jina-embeddings-v2-base-en`. Options: `jina-embeddings-v3`, `jina-embeddings-v2-base-en`, `jina-embeddings-v2-small-en`
**Environment Variables:**
- `JINA_API_KEY` or `EMBEDDINGS_JINA_API_KEY`: `api_key`
- `EMBEDDINGS_JINA_MODEL_NAME`: `model_name`
</Accordion>
<Accordion title="HuggingFace">
```python main.py
from crewai.rag.embeddings.providers.huggingface.types import HuggingFaceProviderSpec

embedding_model: HuggingFaceProviderSpec = {
    "provider": "huggingface",
    "config": {
        "url": "https://api-inference.huggingface.co/models/sentence-transformers/all-MiniLM-L6-v2"
    }
}
```
**Config Options:**
- `url` (str): Full URL to HuggingFace inference API endpoint
**Environment Variables:**
- `HUGGINGFACE_URL` or `EMBEDDINGS_HUGGINGFACE_URL`: `url`
</Accordion>
<Accordion title="Instructor">
```python main.py
from crewai.rag.embeddings.providers.instructor.types import InstructorProviderSpec

embedding_model: InstructorProviderSpec = {
    "provider": "instructor",
    "config": {
        "model_name": "hkunlp/instructor-xl",
        "device": "cuda",
        "instruction": "Represent the document"
    }
}
```
**Config Options:**
- `model_name` (str): HuggingFace model ID. Default: `hkunlp/instructor-base`. Options: `hkunlp/instructor-xl`, `hkunlp/instructor-large`, `hkunlp/instructor-base`
- `device` (str): Device to run on. Default: `cpu`. Options: `cpu`, `cuda`, `mps`
- `instruction` (str): Instruction prefix for embeddings
**Environment Variables:**
- `EMBEDDINGS_INSTRUCTOR_MODEL_NAME`: `model_name`
- `EMBEDDINGS_INSTRUCTOR_DEVICE`: `device`
- `EMBEDDINGS_INSTRUCTOR_INSTRUCTION`: `instruction`
</Accordion>
<Accordion title="Sentence Transformer">
```python main.py
from crewai.rag.embeddings.providers.sentence_transformer.types import SentenceTransformerProviderSpec

embedding_model: SentenceTransformerProviderSpec = {
    "provider": "sentence-transformer",
    "config": {
        "model_name": "all-mpnet-base-v2",
        "device": "cuda",
        "normalize_embeddings": True
    }
}
```
**Config Options:**
- `model_name` (str): Sentence Transformers model name. Default: `all-MiniLM-L6-v2`. Options: `all-mpnet-base-v2`, `all-MiniLM-L6-v2`, `paraphrase-multilingual-MiniLM-L12-v2`
- `device` (str): Device to run on. Default: `cpu`. Options: `cpu`, `cuda`, `mps`
- `normalize_embeddings` (bool): Whether to normalize embeddings. Default: `False`
**Environment Variables:**
- `EMBEDDINGS_SENTENCE_TRANSFORMER_MODEL_NAME`: `model_name`
- `EMBEDDINGS_SENTENCE_TRANSFORMER_DEVICE`: `device`
- `EMBEDDINGS_SENTENCE_TRANSFORMER_NORMALIZE_EMBEDDINGS`: `normalize_embeddings`
</Accordion>
<Accordion title="ONNX">
```python main.py
from crewai.rag.embeddings.providers.onnx.types import ONNXProviderSpec

embedding_model: ONNXProviderSpec = {
    "provider": "onnx",
    "config": {
        "preferred_providers": ["CUDAExecutionProvider", "CPUExecutionProvider"]
    }
}
```
**Config Options:**
- `preferred_providers` (list[str]): List of ONNX execution providers in order of preference
**Environment Variables:**
- `EMBEDDINGS_ONNX_PREFERRED_PROVIDERS`: `preferred_providers` (comma-separated list)
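Since `EMBEDDINGS_ONNX_PREFERRED_PROVIDERS` holds a comma-separated string, it is presumably split along these lines (an illustrative sketch, not the exact CrewAI implementation):

```python
def parse_providers(env_value: str) -> list[str]:
    # Split on commas, trim whitespace, and drop empty entries.
    return [p.strip() for p in env_value.split(",") if p.strip()]

print(parse_providers("CUDAExecutionProvider, CPUExecutionProvider"))
# → ['CUDAExecutionProvider', 'CPUExecutionProvider']
```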
</Accordion>
<Accordion title="OpenCLIP">
```python main.py
from crewai.rag.embeddings.providers.openclip.types import OpenCLIPProviderSpec

embedding_model: OpenCLIPProviderSpec = {
    "provider": "openclip",
    "config": {
        "model_name": "ViT-B-32",
        "checkpoint": "laion2b_s34b_b79k",
        "device": "cuda"
    }
}
```
**Config Options:**
- `model_name` (str): OpenCLIP model architecture. Default: `ViT-B-32`. Options: `ViT-B-32`, `ViT-B-16`, `ViT-L-14`
- `checkpoint` (str): Pretrained checkpoint name. Default: `laion2b_s34b_b79k`. Options: `laion2b_s34b_b79k`, `laion400m_e32`, `openai`
- `device` (str): Device to run on. Default: `cpu`. Options: `cpu`, `cuda`
**Environment Variables:**
- `EMBEDDINGS_OPENCLIP_MODEL_NAME`: `model_name`
- `EMBEDDINGS_OPENCLIP_CHECKPOINT`: `checkpoint`
- `EMBEDDINGS_OPENCLIP_DEVICE`: `device`
</Accordion>
<Accordion title="Text2Vec">
```python main.py
from crewai.rag.embeddings.providers.text2vec.types import Text2VecProviderSpec

embedding_model: Text2VecProviderSpec = {
    "provider": "text2vec",
    "config": {
        "model_name": "shibing624/text2vec-base-multilingual"
    }
}
```
**Config Options:**
- `model_name` (str): Text2Vec model name from HuggingFace. Default: `shibing624/text2vec-base-chinese`. Options: `shibing624/text2vec-base-multilingual`, `shibing624/text2vec-base-chinese`
**Environment Variables:**
- `EMBEDDINGS_TEXT2VEC_MODEL_NAME`: `model_name`
</Accordion>
<Accordion title="Roboflow">
```python main.py
from crewai.rag.embeddings.providers.roboflow.types import RoboflowProviderSpec

embedding_model: RoboflowProviderSpec = {
    "provider": "roboflow",
    "config": {
        "api_key": "your-api-key",
        "api_url": "https://infer.roboflow.com"
    }
}
```
**Config Options:**
- `api_key` (str): Roboflow API key. Default: `""` (empty string)
- `api_url` (str): Roboflow inference API URL. Default: `https://infer.roboflow.com`
**Environment Variables:**
- `ROBOFLOW_API_KEY` or `EMBEDDINGS_ROBOFLOW_API_KEY`: `api_key`
- `ROBOFLOW_API_URL` or `EMBEDDINGS_ROBOFLOW_API_URL`: `api_url`
</Accordion>
<Accordion title="WatsonX (IBM)">
```python main.py
from crewai.rag.embeddings.providers.ibm.types import WatsonXProviderSpec

embedding_model: WatsonXProviderSpec = {
    "provider": "watsonx",
    "config": {
        "model_id": "ibm/slate-125m-english-rtrvr",
        "url": "https://us-south.ml.cloud.ibm.com",
        "api_key": "your-api-key",
        "project_id": "your-project-id",
        "batch_size": 100,
        "concurrency_limit": 10,
        "persistent_connection": True
    }
}
```
**Config Options:**
- `model_id` (str): WatsonX model identifier
- `url` (str): WatsonX API endpoint
- `api_key` (str): IBM Cloud API key
- `project_id` (str): WatsonX project ID
- `space_id` (str): WatsonX space ID (alternative to project_id)
- `batch_size` (int): Batch size for embeddings. Default: `100`
- `concurrency_limit` (int): Maximum concurrent requests. Default: `10`
- `persistent_connection` (bool): Use persistent connections. Default: `True`
- Plus 20+ additional authentication and configuration options
**Environment Variables:**
- `WATSONX_API_KEY` or `EMBEDDINGS_WATSONX_API_KEY`: `api_key`
- `WATSONX_URL` or `EMBEDDINGS_WATSONX_URL`: `url`
- `WATSONX_PROJECT_ID` or `EMBEDDINGS_WATSONX_PROJECT_ID`: `project_id`
- `EMBEDDINGS_WATSONX_MODEL_ID`: `model_id`
- `EMBEDDINGS_WATSONX_SPACE_ID`: `space_id`
- `EMBEDDINGS_WATSONX_BATCH_SIZE`: `batch_size`
- `EMBEDDINGS_WATSONX_CONCURRENCY_LIMIT`: `concurrency_limit`
- `EMBEDDINGS_WATSONX_PERSISTENT_CONNECTION`: `persistent_connection`
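For boolean-valued variables such as `EMBEDDINGS_WATSONX_PERSISTENT_CONNECTION`, environment values are strings, so they are typically coerced along these lines (an illustrative sketch; check the CrewAI source for the exact rule):

```python
def parse_bool(value: str) -> bool:
    # Common truthy spellings for environment variables.
    return value.strip().lower() in {"1", "true", "yes", "on"}

print(parse_bool("True"))  # → True
print(parse_bool("0"))     # → False
```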
</Accordion>
<Accordion title="Custom">
```python main.py
from crewai.rag.core.base_embeddings_callable import EmbeddingFunction
from crewai.rag.embeddings.providers.custom.types import CustomProviderSpec

class MyEmbeddingFunction(EmbeddingFunction):
    def __call__(self, input):
        # Your custom embedding logic: compute one vector per item in `input`
        embeddings = []
        return embeddings

embedding_model: CustomProviderSpec = {
    "provider": "custom",
    "config": {
        "embedding_callable": MyEmbeddingFunction
    }
}
```
**Config Options:**
- `embedding_callable` (type[EmbeddingFunction]): Custom embedding function class
**Note:** Custom embedding functions must implement the `EmbeddingFunction` protocol defined in `crewai.rag.core.base_embeddings_callable`. The `__call__` method should accept input data and return embeddings as a list of numpy arrays (or compatible format that will be normalized). The returned embeddings are automatically normalized and validated.
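As a concrete sketch, here is a toy deterministic embedding callable that satisfies the `__call__` contract described above. It is purely illustrative: it hashes text rather than embedding it semantically, and it does not subclass the real `EmbeddingFunction` base class.

```python
import hashlib
import math

class ToyHashEmbedding:
    """Toy deterministic embeddings: one normalized vector per input text."""

    def __init__(self, dim: int = 8) -> None:
        self.dim = dim

    def __call__(self, input: list[str]) -> list[list[float]]:
        out = []
        for text in input:
            digest = hashlib.sha256(text.encode("utf-8")).digest()
            vec = [b / 255.0 for b in digest[: self.dim]]
            norm = math.sqrt(sum(v * v for v in vec)) or 1.0
            out.append([v / norm for v in vec])
        return out
```

In real use you would subclass `EmbeddingFunction` and return vectors from an actual embedding model.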
</Accordion>
</AccordionGroup>
### Notes
- All config fields are optional unless marked as **Required**
- API keys can typically be provided via environment variables instead of config
- Default values are shown where applicable
## Conclusion ## Conclusion
The `RagTool` provides a powerful way to create and query knowledge bases from various data sources. By leveraging Retrieval-Augmented Generation, it enables agents to access and retrieve relevant information efficiently, enhancing their ability to provide accurate and contextually appropriate responses. The `RagTool` provides a powerful way to create and query knowledge bases from various data sources. By leveraging Retrieval-Augmented Generation, it enables agents to access and retrieve relevant information efficiently, enhancing their ability to provide accurate and contextually appropriate responses.


@@ -18,7 +18,7 @@ These tools enable your agents to interact with cloud services, access cloud sto
Write and upload files to Amazon S3 storage.
</Card>
<Card title="Bedrock Invoke Agent" icon="aws" href="/en/tools/integration/bedrockinvokeagenttool">
Invoke Amazon Bedrock agents for AI-powered tasks.
</Card>


@@ -58,10 +58,10 @@ tool = MySQLSearchTool(
        ),
    ),
    embedder=dict(
        provider="google-generativeai",
        config=dict(
            model_name="gemini-embedding-001",
            task_type="RETRIEVAL_DOCUMENT",
            # title="Embeddings",
        ),
    ),


@@ -71,10 +71,10 @@ tool = PGSearchTool(
        ),
    ),
    embedder=dict(
        provider="google-generativeai",  # or openai, ollama, ...
        config=dict(
            model_name="gemini-embedding-001",
            task_type="RETRIEVAL_DOCUMENT",
            # title="Embeddings",
        ),
), ),

Some files were not shown because too many files have changed in this diff.