Compare commits

...

58 Commits

Author SHA1 Message Date
Devin AI
7a0feb8c43 Fix tool output token overflow issue
- Add max_tool_output_tokens parameter to Agent (default: 4096)
- Implement token counting and truncation utilities in agent_utils
- Truncate large tool outputs before appending to messages
- Update CONTEXT_LIMIT_ERRORS to recognize negative max_tokens error
- Add comprehensive tests for tool output truncation

Fixes #3843

Co-Authored-By: João <joao@crewai.com>
2025-11-06 13:01:52 +00:00
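A minimal sketch of the kind of truncation described in the commit above, assuming a hypothetical `count_tokens` helper and the `max_tool_output_tokens` default named in the message (the actual crewai utilities may differ):

```python
def count_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token. A real implementation would
    # use a tokenizer such as tiktoken for the target model.
    return max(1, len(text) // 4)


def truncate_tool_output(output: str, max_tool_output_tokens: int = 4096) -> str:
    """Trim a tool result so it fits the token budget before it is appended
    to the message history."""
    if count_tokens(output) <= max_tool_output_tokens:
        return output
    # Keep roughly the first max_tool_output_tokens worth of characters and
    # flag the truncation so the agent knows the output was cut short.
    keep_chars = max_tool_output_tokens * 4
    return output[:keep_chars] + "\n[... tool output truncated ...]"
```
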
Greyson LaLonde
e4cc9a664c fix: handle unpickleable values in flow state
2025-11-06 01:29:21 -05:00
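One common way to guard flow-state persistence against unpickleable values is to probe each entry with `pickle.dumps`; a sketch of that idea (not necessarily the approach taken in this commit):

```python
import pickle
from typing import Any


def drop_unpickleable(state: dict[str, Any]) -> dict[str, Any]:
    """Return a copy of the state containing only values that survive pickling."""
    safe: dict[str, Any] = {}
    for key, value in state.items():
        try:
            pickle.dumps(value)
        except Exception:
            # Skip things like open sockets, locks, or client handles.
            continue
        safe[key] = value
    return safe
```
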
Greyson LaLonde
7e6171d5bc fix: ensure lite agents course-correct on validation errors
* fix: ensure lite agents course-correct on validation errors

* chore: update cassettes and test expectations

* fix: ensure multiple guardrails propagate
2025-11-05 19:02:11 -05:00
Greyson LaLonde
61ad1fb112 feat: add support for llm message interceptor hooks 2025-11-05 11:38:44 -05:00
Greyson LaLonde
54710a8711 fix: hash callback args correctly to ensure caching works
2025-11-05 07:19:09 -05:00
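Caching keyed on callback arguments only works if the key is built deterministically from those arguments; a hedged illustration of the general idea (names are hypothetical):

```python
import hashlib
import json
from typing import Any


def cache_key(tool_name: str, args: dict[str, Any]) -> str:
    """Build a stable cache key from a tool name and its arguments.

    Sorting keys makes the hash independent of argument order; falling back to
    repr for non-JSON-serializable values is a simplification.
    """
    payload = json.dumps(args, sort_keys=True, default=repr)
    return hashlib.sha256(f"{tool_name}:{payload}".encode()).hexdigest()
```
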
Lucas Gomide
5abf976373 fix: allow adding RAG source content from valid URLs (#3831)
2025-11-04 07:58:40 -05:00
Greyson LaLonde
329567153b fix: make plot node selection smoother
2025-11-03 07:49:31 -05:00
Greyson LaLonde
60332e0b19 feat: cache i18n prompts for efficient use 2025-11-03 07:39:05 -05:00
Lorenze Jay
40932af3fa feat: bump versions to 1.3.0 (#3820)
* feat: bump versions to 1.3.0

* chore: update crew and flow templates to use crewai[tools] version 1.3.0
2025-10-31 18:54:02 -07:00
Greyson LaLonde
e134e5305b Gl/feat/a2a refactor (#3793)
* feat: agent metaclass, refactor a2a to wrappers

* feat: a2a schemas and utils

* chore: move agent class, update imports

* refactor: organize imports to avoid circularity, add a2a to console

* feat: pass response_model through call chain

* feat: add standard openapi spec serialization to tools and structured output

* feat: a2a events

* chore: add a2a to pyproject

* docs: minimal base for learn docs

* fix: adjust a2a conversation flow, allow llm to decide exit until max_retries

* fix: inject agent skills into initial prompt

* fix: format agent card as json in prompt

* refactor: simplify A2A agent prompt formatting and improve skill display

* chore: wide cleanup

* chore: cleanup logic, add auth cache, use json for messages in prompt

* chore: update docs

* fix: doc snippets formatting

* feat: optimize A2A agent card fetching and improve error reporting

* chore: move imports to top of file

* chore: refactor hasattr check

* chore: add httpx-auth, update lockfile

* feat: create base public api

* chore: cleanup modules, add docstrings, types

* fix: exclude extra fields in prompt

* chore: update docs

* tests: update to correct import

* chore: lint for ruff, add missing import

* fix: tweak openai streaming logic for response model

* tests: add reimport for test

* tests: add reimport for test

* fix: don't set a2a attr if not set

* fix: don't set a2a attr if not set

* chore: update cassettes

* tests: fix tests

* fix: use instructor and dont pass response_format for litellm

* chore: consolidate event listeners, add typing

* fix: address race condition in test, update cassettes

* tests: add correct mocks, rerun cassette for json

* tests: update cassette

* chore: regenerate cassette after new run

* fix: make token manager access-safe

* fix: make token manager access-safe

* merge

* chore: update test and cassette for output pydantic

* fix: tweak to disallow deadlock

* chore: linter

* fix: adjust event ordering for threading

* fix: use conditional for batch check

* tests: tweak for emission

* tests: simplify api + event check

* fix: ensure non-function calling llms see json formatted string

* tests: tweak message comparison

* fix: use internal instructor for litellm structure responses

---------

Co-authored-by: Mike Plachta <mike@crewai.com>
2025-10-31 18:42:03 -07:00
Greyson LaLonde
e229ef4e19 refactor: improve flow handling, typing, and logging; update UI and tests
fix: refine nested flow conditionals and ensure router methods and routes are fully parsed
fix: improve docstrings, typing, and logging coverage across all events
feat: update flow.plot feature with new UI enhancements
chore: apply Ruff linting, reorganize imports, and remove deprecated utilities/files
chore: split constants and utils, clean JS comments, and add typing for linters
tests: strengthen test coverage for flow execution paths and router logic
2025-10-31 21:15:06 -04:00
Greyson LaLonde
2e9eb8c32d fix: refactor use_stop_words to property, add check for stop words
2025-10-29 19:14:01 +01:00
Lucas Gomide
4ebb5114ed Fix Firecrawl tools & adding tests (#3810)
* fix: fix Firecrawl Scrape tool

* fix: fix Firecrawl Search tool

* fix: fix Firecrawl Website tool

* tests: adding tests for Firecrawl
2025-10-29 13:37:57 -04:00
Daniel Barreto
70b083945f Enhance QdrantVectorSearchTool (#3806)
2025-10-28 13:42:40 -04:00
Tony Kipkemboi
410db1ff39 docs: migrate embedder→embedding_model and require vectordb across tool docs; add provider examples (en/ko/pt-BR) (#3804)
* docs(tools): migrate embedder->embedding_model, require vectordb; add Chroma/Qdrant examples across en/ko/pt-BR PDF/TXT/XML/MDX/DOCX/CSV/Directory docs

* docs(observability): apply latest Datadog tweaks in ko and pt-BR
2025-10-27 13:29:21 -04:00
Lorenze Jay
5d6b4c922b feat: bump versions to 1.2.1 (#3800)
* feat: bump versions to 1.2.1

* updated templates too
2025-10-27 09:12:04 -07:00
Lucas Gomide
b07c0fc45c docs: describe mandatory env-var to call Platform tools for each integration (#3803) 2025-10-27 10:01:41 -04:00
Sam Brenner
97853199c7 Add Datadog Integration Documentation (#3642)
* add datadog llm observability integration guide

* spacing fix

* wording changes

* alphabetize docs listing

* Update docs/en/observability/datadog.mdx

Co-authored-by: Barry Eom <31739208+barieom@users.noreply.github.com>

* add translations

* fix korean code block

---------

Co-authored-by: Barry Eom <31739208+barieom@users.noreply.github.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-27 09:48:38 -04:00
Lorenze Jay
494ed7e671 liteagent supports apps and mcps (#3794)
* liteagent supports apps and mcps

* generated cassettes for these
2025-10-24 18:42:08 -07:00
Lorenze Jay
a83c57a2f2 feat: bump versions to 1.2.0 (#3787)
* feat: bump versions to 1.2.0

* also include projects
2025-10-23 18:04:34 -07:00
Lorenze Jay
08e15ab267 fix: update default LLM model and improve error logging in LLM utilities (#3785)
* fix: update default LLM model and improve error logging in LLM utilities

* Updated the default LLM model from "gpt-4o-mini" to "gpt-4.1-mini" for better performance.
* Enhanced error logging in the LLM utilities to use logger.error instead of logger.debug, ensuring that errors are properly reported and raised.
* Added tests to verify behavior when OpenAI API key is missing and when Anthropic dependency is not available, improving robustness and error handling in LLM creation.

* fix: update test for default LLM model usage

* Refactored the test_create_llm_with_none_uses_default_model to use the imported DEFAULT_LLM_MODEL constant instead of a hardcoded string.
* Ensured that the test correctly asserts the model used is the current default, improving maintainability and consistency across tests.

* change default model to gpt-4.1-mini

* change default model to use the default constant
2025-10-23 17:54:11 -07:00
Greyson LaLonde
9728388ea7 fix: change flow viz del dir; method inspection
* chore: update flow viz deletion dir, add typing
* tests: add flow viz tests to ensure lib dir is not deleted
2025-10-22 19:32:38 -04:00
Greyson LaLonde
4371cf5690 chore: remove aisuite
Little usage + blocking some features
2025-10-21 23:18:06 -04:00
Lorenze Jay
d28daa26cd feat: bump versions to 1.1.0 (#3770)
* feat: bump versions to 1.1.0

* chore: bump template versions

---------

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
2025-10-21 15:52:44 -07:00
Lorenze Jay
a850813f2b feat: enhance InternalInstructor to support multiple LLM providers (#3767)
* feat: enhance InternalInstructor to support multiple LLM providers

- Updated InternalInstructor to conditionally create an instructor client based on the LLM provider.
- Introduced a new method _create_instructor_client to handle client creation using the modern from_provider pattern.
- Added functionality to extract the provider from the LLM model name.
- Implemented tests for InternalInstructor with various LLM providers including OpenAI, Anthropic, Gemini, and Azure, ensuring robust integration and error handling.

This update improves flexibility and extensibility for different LLM integrations.

* fix test
2025-10-21 15:24:59 -07:00
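The commit refers to instructor's `from_provider` pattern, where the provider prefix of the model name selects the underlying client. A hedged sketch, assuming a recent `instructor` release that exposes `from_provider` and a provider-prefixed model string such as "anthropic/claude-3-5-haiku":

```python
import instructor


def create_instructor_client(model: str):
    """Create an instructor client using the from_provider pattern."""
    # from_provider expects a "provider/model" identifier; assume OpenAI
    # when the model name carries no provider prefix.
    qualified = model if "/" in model else f"openai/{model}"
    return instructor.from_provider(qualified)
```
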
Cameron Warren
5944a39629 fix: correct broken integration documentation links
Fix navigation paths for two integration tool cards that were redirecting to the
introduction page instead of their intended documentation pages.

Fixes #3516

Co-authored-by: Cwarre33 <cwarre33@charlotte.edu>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-21 18:12:08 -04:00
Greyson LaLonde
c594859ed0 feat: mypy plugin base
* feat: base mypy plugin with CrewBase

* fix: add crew method to protocol
2025-10-21 17:36:08 -04:00
Daniel Barreto
2ee27efca7 feat: improvements on QdrantVectorSearchTool
* Implement improvements on QdrantVectorSearchTool

- Allow search filters to be set at the constructor level
- Fix issue that prevented multiple records from being returned

* Implement improvements on QdrantVectorSearchTool

- Allow search filters to be set at the constructor level
- Fix issue that prevented multiple records from being returned

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-21 16:50:08 -04:00
Greyson LaLonde
f6e13eb890 chore: update codeql config paths to new folders
* chore: update codeql config paths to new folders

* tests: use threading.Condition for event check

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-10-21 14:43:25 -04:00
Lorenze Jay
e7b3ce27ca docs: update LLM integration details and examples
* docs: update LLM integration details and examples

- Changed references from LiteLLM to native SDKs for LLM providers.
- Enhanced OpenAI and AWS Bedrock sections with new usage examples and advanced configuration options.
- Added structured output examples and supported environment variables for better clarity.
- Improved documentation on additional parameters and features for LLM configurations.

* drop this example - should use structured output from task instead

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-21 14:39:50 -04:00
Greyson LaLonde
dba27cf8b5 fix: fix double trace call; add types
* fix: fix double trace call; add types

* tests: skip long running uv install test, refactor in future

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-10-21 14:15:39 -04:00
Greyson LaLonde
6469f224f6 chore: improve CrewBase typing 2025-10-21 13:58:35 -04:00
Greyson LaLonde
f3a63be215 tests: cassettes, threading for flow tests 2025-10-21 13:48:21 -04:00
Greyson LaLonde
01d8c189f0 fix: pin template versions to latest
2025-10-21 10:56:41 -04:00
Lorenze Jay
cc83c1ead5 feat: bump versions to 1.0.0
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-20 17:34:38 -04:00
Greyson LaLonde
7578901f6d chore: use main for devtools branch 2025-10-20 17:29:09 -04:00
Lorenze Jay
d1343b96ed Release/v1.0.0 (#3618)
* feat: add `apps` & `actions` attributes to Agent (#3504)

* feat: add app attributes to Agent

* feat: add actions attribute to Agent

* chore: resolve linter issues

* refactor: merge the apps and actions parameters into a single one

* fix: remove unnecessary print

* feat: logging error when CrewaiPlatformTools fails

* chore: export CrewaiPlatformTools directly from crewai_tools

* style: resolve linter issues

* test: fix broken tests

* style: solve linter issues

* fix: fix broken test

* feat: monorepo restructure and test/ci updates

- Add crewai workspace member
- Fix vcr cassette paths and restore test dirs
- Resolve ci failures and update linter/pytest rules

* chore: update python version to 3.13 and package metadata

* feat: add crewai-tools workspace and fix tests/dependencies

* feat: add crewai-tools workspace structure

* Squashed 'temp-crewai-tools/' content from commit 9bae5633

git-subtree-dir: temp-crewai-tools
git-subtree-split: 9bae56339096cb70f03873e600192bd2cd207ac9

* feat: configure crewai-tools workspace package with dependencies

* fix: apply ruff auto-formatting to crewai-tools code

* chore: update lockfile

* fix: don't allow tool tests yet

* fix: comment out extra pytest flags for now

* fix: remove conflicting conftest.py from crewai-tools tests

* fix: resolve dependency conflicts and test issues

- Pin vcrpy to 7.0.0 to fix pytest-recording compatibility
- Comment out types-requests to resolve urllib3 conflict
- Update requests requirement in crewai-tools to >=2.32.0

* chore: update CI workflows and docs for monorepo structure

* chore: update CI workflows and docs for monorepo structure

* fix: actions syntax

* chore: ci publish and pin versions

* fix: add permission to action

* chore: bump version to 1.0.0a1 across all packages

- Updated version to 1.0.0a1 in pyproject.toml for crewai and crewai-tools
- Adjusted version in __init__.py files for consistency

* WIP: v1 docs (#3626)

(cherry picked from commit d46e20fa09bcd2f5916282f5553ddeb7183bd92c)

* docs: parity for all translations

* docs: full name of acronym AMP

* docs: fix lingering unused code

* docs: expand contextual options in docs.json

* docs: add contextual action to request feature on GitHub (#3635)

* chore: apply linting fixes to crewai-tools

* feat: add required env var validation for brightdata

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* fix: properly handle anyOf/oneOf/allOf schema props

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* feat: bump version to 1.0.0a2

* Lorenze/native inference sdks (#3619)

* ruff linted

* using native sdks with litellm fallback

* drop exa

* drop print on completion

* Refactor LLM and utility functions for type consistency

- Updated `max_tokens` parameter in `LLM` class to accept `float` in addition to `int`.
- Modified `create_llm` function to ensure consistent type hints and return types, now returning `LLM | BaseLLM | None`.
- Adjusted type hints for various parameters in `create_llm` and `_llm_via_environment_or_fallback` functions for improved clarity and type safety.
- Enhanced test cases to reflect changes in type handling and ensure proper instantiation of LLM instances.

* fix agent_tests

* fix litellm tests and usagemetrics fix

* drop print

* Refactor LLM event handling and improve test coverage

- Removed commented-out event emission for LLM call failures in `llm.py`.
- Added `from_agent` parameter to `CrewAgentExecutor` for better context in LLM responses.
- Enhanced test for LLM call failure to simulate OpenAI API failure and updated assertions for clarity.
- Updated agent and task ID assertions in tests to ensure they are consistently treated as strings.

* fix test_converter

* fixed tests/agents/test_agent.py

* Refactor LLM context length exception handling and improve provider integration

- Renamed `LLMContextLengthExceededException` to `LLMContextLengthExceededExceptionError` for clarity and consistency.
- Updated LLM class to pass the provider parameter correctly during initialization.
- Enhanced error handling in various LLM provider implementations to raise the new exception type.
- Adjusted tests to reflect the updated exception name and ensure proper error handling in context length scenarios.

* Enhance LLM context window handling across providers

- Introduced CONTEXT_WINDOW_USAGE_RATIO to adjust context window sizes dynamically for Anthropic, Azure, Gemini, and OpenAI LLMs.
- Added validation for context window sizes in Azure and Gemini providers to ensure they fall within acceptable limits.
- Updated context window size calculations to use the new ratio, improving consistency and adaptability across different models.
- Removed hardcoded context window sizes in favor of ratio-based calculations for better flexibility.
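The ratio-based approach described here amounts to reserving a fraction of each model's advertised window instead of hardcoding per-model limits; a simple sketch with assumed values (the actual constant and window sizes may differ):

```python
# Assumed value for illustration only.
CONTEXT_WINDOW_USAGE_RATIO = 0.85

# Illustrative advertised window sizes.
MODEL_CONTEXT_WINDOWS = {
    "gpt-4.1-mini": 1_047_576,
    "claude-3-7-sonnet": 200_000,
}


def usable_context_window(model: str, default: int = 128_000) -> int:
    """Return the portion of the context window the agent should actually use."""
    return int(MODEL_CONTEXT_WINDOWS.get(model, default) * CONTEXT_WINDOW_USAGE_RATIO)
```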

* fix test agent again

* fix test agent

* feat: add native LLM providers for Anthropic, Azure, and Gemini

- Introduced new completion implementations for Anthropic, Azure, and Gemini, integrating their respective SDKs.
- Added utility functions for tool validation and extraction to support function calling across LLM providers.
- Enhanced context window management and token usage extraction for each provider.
- Created a common utility module for shared functionality among LLM providers.

* chore: update dependencies and improve context management

- Removed direct dependency on `litellm` from the main dependencies and added it under extras for better modularity.
- Updated the `litellm` dependency specification to allow for greater flexibility in versioning.
- Refactored context length exception handling across various LLM providers to use a consistent error class.
- Enhanced platform-specific dependency markers for NVIDIA packages to ensure compatibility across different systems.

* refactor(tests): update LLM instantiation to include is_litellm flag in test cases

- Modified multiple test cases in test_llm.py to set the is_litellm parameter to True when instantiating the LLM class.
- This change ensures that the tests are aligned with the latest LLM configuration requirements and improves consistency across test scenarios.
- Adjusted relevant assertions and comments to reflect the updated LLM behavior.

* linter

* linted

* revert constants

* fix(tests): correct type hint in expected model description

- Updated the expected description in the test_generate_model_description_dict_field function to use 'Dict' instead of 'dict' for consistency with type hinting conventions.
- This change ensures that the test accurately reflects the expected output format for model descriptions.

* refactor(llm): enhance LLM instantiation and error handling

- Updated the LLM class to include validation for the model parameter, ensuring it is a non-empty string.
- Improved error handling by logging warnings when the native SDK fails, allowing for a fallback to LiteLLM.
- Adjusted the instantiation of LLM in test cases to consistently include the is_litellm flag, aligning with recent changes in LLM configuration.
- Modified relevant tests to reflect these updates, ensuring better coverage and accuracy in testing scenarios.

* fixed test

* refactor(llm): enhance token usage tracking and add copy methods

- Updated the LLM class to track token usage and log callbacks in streaming mode, improving monitoring capabilities.
- Introduced shallow and deep copy methods for the LLM instance, allowing for better management of LLM configurations and parameters.
- Adjusted test cases to instantiate LLM with the is_litellm flag, ensuring alignment with recent changes in LLM configuration.

* refactor(tests): reorganize imports and enhance error messages in test cases

- Cleaned up import statements in test_crew.py for better organization and readability.
- Enhanced error messages in test cases to use `re.escape` for improved regex matching, ensuring more robust error handling.
- Adjusted comments for clarity and consistency across test scenarios.
- Ensured that all necessary modules are imported correctly to avoid potential runtime issues.

* feat: add base devtooling

* fix: ensure dep refs are updated for devtools

* fix: allow pre-release

* feat: allow release after tag

* feat: bump versions to 1.0.0a3 

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>

* fix: match tag and release title, ignore devtools build for pypi

* fix: allow failed pypi publish

* feat: introduce trigger listing and execution commands for local development (#3643)

* chore: exclude tests from ruff linting

* chore: exclude tests from GitHub Actions linter

* fix: replace print statements with logger in agent and memory handling

* chore: add noqa for intentional print in printer utility

* fix: resolve linting errors across codebase

* feat: update docs with new approach to consume Platform Actions (#3675)

* fix: remove duplicate line and add explicit env var

* feat: bump versions to 1.0.0a4 (#3686)

* Update triggers docs (#3678)

* docs: introduce triggers list & triggers run command

* docs: add KO triggers docs

* docs: ensure CREWAI_PLATFORM_INTEGRATION_TOKEN is mentioned on docs (#3687)

* Lorenze/bedrock llm (#3693)

* feat: add AWS Bedrock support and update dependencies

- Introduced BedrockCompletion class for AWS Bedrock integration in LLM.
- Added boto3 as a new dependency in both pyproject.toml and uv.lock.
- Updated LLM class to support Bedrock provider.
- Created new files for Bedrock provider implementation.

* using converse api

* converse

* linted

* refactor: update BedrockCompletion class to improve parameter handling

- Changed max_tokens from a fixed integer to an optional integer.
- Simplified model ID assignment by removing the inference profile mapping method.
- Cleaned up comments and unnecessary code related to tool specifications and model-specific parameters.

* feat: improve event bus thread safety and async support

Add thread-safe, async-compatible event bus with read–write locking and
handler dependency ordering. Remove blinker dependency and implement
direct dispatch. Improve type safety, error handling, and deterministic
event synchronization.

Refactor tests to auto-wait for async handlers, ensure clean teardown,
and add comprehensive concurrency coverage. Replace thread-local state
in AgentEvaluator with instance-based locking for correct cross-thread
access. Enhance tracing reliability and event finalization.
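A minimal sketch of direct dispatch with the registration/dispatch split the message describes: a lock protects handler registration, while emitters dispatch against a snapshot taken under the lock. This is an illustration of the approach, not the crewai implementation:

```python
import threading
from collections import defaultdict
from typing import Any, Callable


class EventBus:
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._handlers: dict[type, list[Callable[[Any], None]]] = defaultdict(list)

    def on(self, event_type: type, handler: Callable[[Any], None]) -> None:
        # Writers hold the lock while mutating the handler registry.
        with self._lock:
            self._handlers[event_type].append(handler)

    def emit(self, event: Any) -> None:
        # Readers copy the handler list under the lock, then dispatch outside it,
        # so slow handlers never block registration or other emitters.
        with self._lock:
            handlers = list(self._handlers[type(event)])
        for handler in handlers:
            handler(event)
```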

* feat: enhance OpenAICompletion class with additional client parameters (#3701)

* feat: enhance OpenAICompletion class with additional client parameters

- Added support for default_headers, default_query, and client_params in the OpenAICompletion class.
- Refactored client initialization to use a dedicated method for client parameter retrieval.
- Introduced new test cases to validate the correct usage of OpenAICompletion with various parameters.

* fix: correct test case for unsupported OpenAI model

- Updated the test_openai.py to ensure that the LLM instance is created before calling the method, maintaining proper error handling for unsupported models.
- This change ensures that the test accurately checks for the NotFoundError when an invalid model is specified.

* fix: enhance error handling in OpenAICompletion class

- Added specific exception handling for NotFoundError and APIConnectionError in the OpenAICompletion class to provide clearer error messages and improve logging.
- Updated the test case for unsupported models to ensure it raises a ValueError with the appropriate message when a non-existent model is specified.
- This change improves the robustness of the OpenAI API integration and enhances the clarity of error reporting.

* fix: improve test for unsupported OpenAI model handling

- Refactored the test case in test_openai.py to create the LLM instance after mocking the OpenAI client, ensuring proper error handling for unsupported models.
- This change enhances the clarity of the test by accurately checking for ValueError when a non-existent model is specified, aligning with recent improvements in error handling for the OpenAICompletion class.

* feat: bump versions to 1.0.0b1 (#3706)

* Lorenze/tools drop litellm (#3710)

* completely drop litellm and correctly pass config for qdrant

* feat: add support for additional embedding models in EmbeddingService

- Expanded the list of supported embedding models to include Google Vertex, Hugging Face, Jina, Ollama, OpenAI, Roboflow, Watson X, custom embeddings, Sentence Transformers, Text2Vec, OpenClip, and Instructor.
- This enhancement improves the versatility of the EmbeddingService by allowing integration with a wider range of embedding providers.

* fix: update collection parameter handling in CrewAIRagAdapter

- Changed the condition for setting vectors_config in the CrewAIRagAdapter to check for QdrantConfig instance instead of using hasattr. This improves type safety and ensures proper configuration handling for Qdrant integration.

* moved stagehand as optional dep (#3712)

* feat: bump versions to 1.0.0b2 (#3713)

* feat: enhance AnthropicCompletion class with additional client parame… (#3707)

* feat: enhance AnthropicCompletion class with additional client parameters and tool handling

- Added support for client_params in the AnthropicCompletion class to allow for additional client configuration.
- Refactored client initialization to use a dedicated method for retrieving client parameters.
- Implemented a new method to handle tool use conversation flow, ensuring proper execution and response handling.
- Introduced comprehensive test cases to validate the functionality of the AnthropicCompletion class, including tool use scenarios and parameter handling.

* drop print statements

* test: add fixture to mock ANTHROPIC_API_KEY for tests

- Introduced a pytest fixture to automatically mock the ANTHROPIC_API_KEY environment variable for all tests in the test_anthropic.py module.
- This change ensures that tests can run without requiring a real API key, improving test isolation and reliability.

* refactor: streamline streaming message handling in AnthropicCompletion class

- Removed the 'stream' parameter from the API call as it is set internally by the SDK.
- Simplified the handling of tool use events and response construction by extracting token usage from the final message.
- Enhanced the flow for managing tool use conversation, ensuring proper integration with the streaming API response.

* fix streaming here too

* fix: improve error handling in tool conversion for AnthropicCompletion class

- Enhanced exception handling during tool conversion by catching KeyError and ValueError.
- Added logging for conversion errors to aid in debugging and maintain robustness in tool integration.

* feat: enhance GeminiCompletion class with client parameter support (#3717)

* feat: enhance GeminiCompletion class with client parameter support

- Added support for client_params in the GeminiCompletion class to allow for additional client configuration.
- Refactored client initialization into a dedicated method for improved parameter handling.
- Introduced a new method to retrieve client parameters, ensuring compatibility with the base class.
- Enhanced error handling during client initialization to provide clearer messages for missing configuration.
- Updated documentation to reflect the changes in client parameter usage.

* add optional dependencies

* refactor: update test fixture to mock GOOGLE_API_KEY

- Renamed the fixture from `mock_anthropic_api_key` to `mock_google_api_key` to reflect the change in the environment variable being mocked.
- This update ensures that all tests in the module can run with a mocked GOOGLE_API_KEY, improving test isolation and reliability.

* fix tests

* feat: enhance BedrockCompletion class with advanced features

* feat: enhance BedrockCompletion class with advanced features and error handling

- Added support for guardrail configuration, additional model request fields, and custom response field paths in the BedrockCompletion class.
- Improved error handling for AWS exceptions and added token usage tracking with stop reason logging.
- Enhanced streaming response handling with comprehensive event management, including tool use and content block processing.
- Updated documentation to reflect new features and initialization parameters.
- Introduced a new test suite for BedrockCompletion to validate functionality and ensure robust integration with AWS Bedrock APIs.

* chore: add boto typing

* fix: use typing_extensions.Required for Python 3.10 compatibility

---------

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* feat: azure native tests

* feat: add Azure AI Inference support and related tests

- Introduced the `azure-ai-inference` package with version `1.0.0b9` and its dependencies in `uv.lock` and `pyproject.toml`.
- Added new test files for Azure LLM functionality, including tests for Azure completion and tool handling.
- Implemented comprehensive test cases to validate Azure-specific behavior and integration with the CrewAI framework.
- Enhanced the testing framework to mock Azure credentials and ensure proper isolation during tests.

* feat: enhance AzureCompletion class with Azure OpenAI support

- Added support for the Azure OpenAI endpoint in the AzureCompletion class, allowing for flexible endpoint configurations.
- Implemented endpoint validation and correction to ensure proper URL formats for Azure OpenAI deployments.
- Enhanced error handling to provide clearer messages for common HTTP errors, including authentication and rate limit issues.
- Updated tests to validate the new endpoint handling and error messaging, ensuring robust integration with Azure AI Inference.
- Refactored parameter preparation to conditionally include the model parameter based on the endpoint type.

* refactor: convert project module to metaclass with full typing

* Lorenze/OpenAI base url backwards support (#3723)

* fix: enhance OpenAICompletion class base URL handling

- Updated the base URL assignment in the OpenAICompletion class to prioritize the new `api_base` attribute and fallback to the environment variable `OPENAI_BASE_URL` if both are not set.
- Added `api_base` to the list of parameters in the OpenAICompletion class to ensure proper configuration and flexibility in API endpoint management.

* feat: enhance OpenAICompletion class with api_base support

- Added the `api_base` parameter to the OpenAICompletion class to allow for flexible API endpoint configuration.
- Updated the `_get_client_params` method to prioritize `base_url` over `api_base`, ensuring correct URL handling.
- Introduced comprehensive tests to validate the behavior of `api_base` and `base_url` in various scenarios, including environment variable fallback.
- Enhanced test coverage for client parameter retrieval, ensuring robust integration with the OpenAI API.

* fix: improve OpenAICompletion class configuration handling

- Added a debug print statement to log the client configuration parameters during initialization for better traceability.
- Updated the base URL assignment logic to ensure it defaults to None if no valid base URL is provided, enhancing robustness in API endpoint configuration.
- Refined the retrieval of the `api_base` environment variable to streamline the configuration process.

* drop print
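The backwards-compatible resolution described above reduces to an ordered fallback; a sketch assuming `base_url`, `api_base`, and the `OPENAI_BASE_URL` environment variable are the three sources:

```python
import os


def resolve_base_url(base_url: str | None = None, api_base: str | None = None) -> str | None:
    """Pick the OpenAI endpoint: explicit base_url wins, then api_base,
    then the OPENAI_BASE_URL environment variable, else None (SDK default)."""
    return base_url or api_base or os.getenv("OPENAI_BASE_URL") or None
```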

* feat: improvements on import native sdk support (#3725)

* feat: add support for Anthropic provider and enhance logging

- Introduced the `anthropic` package with version `0.69.0` in `pyproject.toml` and `uv.lock`, allowing for integration with the Anthropic API.
- Updated logging in the LLM class to provide clearer error messages when importing native providers, enhancing debugging capabilities.
- Improved error handling in the AnthropicCompletion class to guide users on installation via the updated error message format.
- Refactored import error handling in other provider classes to maintain consistency in error messaging and installation instructions.

* feat: enhance LLM support with Bedrock provider and update dependencies

- Added support for the `bedrock` provider in the LLM class, allowing integration with AWS Bedrock APIs.
- Updated `uv.lock` to replace `boto3` with `bedrock` in the dependencies, reflecting the new provider structure.
- Introduced `SUPPORTED_NATIVE_PROVIDERS` to include `bedrock` and ensure proper error handling when instantiating native providers.
- Enhanced error handling in the LLM class to raise informative errors when native provider instantiation fails.
- Added tests to validate the behavior of the new Bedrock provider and ensure fallback mechanisms work correctly for unsupported providers.

* test: update native provider fallback tests to expect ImportError

* adjust the test with the expected behavior - raising ImportError

* this is expecting the litellm format, all gemini native tests are in test_google.py

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>

* fix: remove stdout prints, improve test determinism, and update trace handling

Removed `print` statements from the `LLMStreamChunkEvent` handler to prevent
LLM response chunks from being written directly to stdout. The listener now
only tracks chunks internally.

Fixes #3715

Added explicit return statements for trace-related tests.

Updated cassette for `test_failed_evaluation` to reflect new behavior where
an empty trace dict is used instead of returning early.

Ensured deterministic cleanup order in test fixtures by making
`clear_event_bus_handlers` depend on `setup_test_environment`. This guarantees
event bus shutdown and file handle cleanup occur before temporary directory
deletion, resolving intermittent “Directory not empty” errors in CI.

* chore: remove lib/crewai exclusion from pre-commit hooks

* feat: enhance task guardrail functionality and validation

* feat: enhance task guardrail functionality and validation

- Introduced support for multiple guardrails in the Task class, allowing for sequential processing of guardrails.
- Added a new `guardrails` field to the Task model to accept a list of callable guardrails or string descriptions.
- Implemented validation to ensure guardrails are processed correctly, including handling of retries and error messages.
- Enhanced the `_invoke_guardrail_function` method to manage guardrail execution and integrate with existing task output processing.
- Updated tests to cover various scenarios involving multiple guardrails, including success, failure, and retry mechanisms.

This update improves the flexibility and robustness of task execution by allowing for more complex validation scenarios.

* refactor: enhance guardrail type handling in Task model

- Updated the Task class to improve guardrail type definitions, introducing GuardrailType and GuardrailsType for better clarity and type safety.
- Simplified the validation logic for guardrails, ensuring that both single and multiple guardrails are processed correctly.
- Enhanced error messages for guardrail validation to provide clearer feedback when incorrect types are provided.
- This refactor improves the maintainability and robustness of task execution by standardizing guardrail handling.

* feat: implement per-guardrail retry tracking in Task model

- Introduced a new private attribute `_guardrail_retry_counts` to the Task class for tracking retry attempts on a per-guardrail basis.
- Updated the guardrail processing logic to utilize the new retry tracking, allowing for independent retry counts for each guardrail.
- Enhanced error handling to provide clearer feedback when guardrails fail validation after exceeding retry limits.
- Modified existing tests to validate the new retry tracking behavior, ensuring accurate assertions on guardrail retries.

This update improves the robustness and flexibility of task execution by allowing for more granular control over guardrail validation and retry mechanisms.
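Sequential guardrail processing with per-guardrail retry counts can be sketched as below; names such as `guardrails` and `max_retries` mirror the commit description, and the error handling is an assumption:

```python
from typing import Callable

GuardrailResult = tuple[bool, str]  # (passed, feedback or transformed output)


def run_guardrails(
    output: str,
    guardrails: list[Callable[[str], GuardrailResult]],
    max_retries: int = 3,
) -> str:
    """Apply each guardrail in order, tracking retries per guardrail."""
    for index, guardrail in enumerate(guardrails):
        attempts = 0
        while True:
            passed, result = guardrail(output)
            if passed:
                output = result or output
                break
            attempts += 1
            if attempts > max_retries:
                raise ValueError(
                    f"Guardrail {index} failed after {max_retries} retries: {result}"
                )
            # In the real task flow the agent would regenerate `output` from the
            # guardrail feedback before retrying; this sketch simply retries.
    return output
```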

* chore: 1.0.0b3 bump (#3734)

* chore: full ruff and mypy

improved linting, pre-commit setup, and internal architecture. Configured Ruff to respect .gitignore, added stricter rules, and introduced a lock pre-commit hook with virtualenv activation. Fixed type shadowing in EXASearchTool using a type_ alias to avoid PEP 563 conflicts and resolved circular imports in agent executor and guardrail modules. Removed agent-ops attributes, deprecated watson alias, and dropped crewai-enterprise tools with corresponding test updates. Refactored cache and memoization for thread safety and cleaned up structured output adapters and related logic.

* New MCL DSL (#3738)

* Adding MCP implementation

* New tests for MCP implementation

* fix tests

* update docs

* Revert "New tests for MCP implementation"

This reverts commit 0bbe6dee90.

* linter

* linter

* fix

* verify mcp package exists

* adjust docs to be clear only remote servers are supported

* reverted

* ensure args schema generated properly

* properly close out

---------

Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* feat: a2a experimental

experimental a2a support

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
Co-authored-by: Mike Plachta <mplachta@users.noreply.github.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2025-10-20 14:10:19 -07:00
Greyson LaLonde
42f2b4d551 fix: preserve nested condition structure in Flow decorators
Fixes nested boolean conditions being flattened in @listen, @start, and @router decorators. The or_() and and_() combinators now preserve their nested structure using a "conditions" key instead of flattening to a list. Added recursive evaluation logic to properly handle complex patterns like or_(and_(A, B), and_(C, D)).
2025-10-17 17:06:19 -04:00
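The description implies conditions are kept as nested dictionaries under a "conditions" key and evaluated recursively; a hedged sketch of that shape (not the actual crewai decorator internals):

```python
from typing import Any


def or_(*conditions: Any) -> dict:
    return {"type": "OR", "conditions": list(conditions)}


def and_(*conditions: Any) -> dict:
    return {"type": "AND", "conditions": list(conditions)}


def evaluate(condition: Any, fired: set[str]) -> bool:
    """Recursively evaluate a (possibly nested) condition against fired method names."""
    if isinstance(condition, str):
        return condition in fired
    results = [evaluate(child, fired) for child in condition["conditions"]]
    return any(results) if condition["type"] == "OR" else all(results)


# or_(and_(A, B), and_(C, D)) fires once both A and B (or both C and D) have run.
assert evaluate(or_(and_("A", "B"), and_("C", "D")), fired={"C", "D"})
```
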
Greyson LaLonde
0229390ad1 fix: add standard print parameters to Printer.print method
- Adds sep, end, file, and flush parameters to match Python's built-in print function signature.
2025-10-17 15:27:22 -04:00
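Matching the built-in print signature presumably looks something like the following sketch (the real Printer class carries additional formatting logic):

```python
import sys
from typing import Any, TextIO


class Printer:
    def print(
        self,
        *values: Any,
        sep: str = " ",
        end: str = "\n",
        file: TextIO | None = None,
        flush: bool = False,
    ) -> None:
        # Delegate to the built-in so callers can use Printer.print as a
        # drop-in replacement for print().
        print(*values, sep=sep, end=end, file=file or sys.stdout, flush=flush)
```
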
Vidit Ostwal
f0fb349ddf Fixing copy and adding NOT_SPECIFIED check in task.py (#3690)
* Fixing copy and adding NOT_SPECIFIED check:

* Fixed mypy issues

* Added test Cases

* added linting checks

* Removed the docs bot folder

* Fixed ruff checks

* Remove secret_folder from tracking

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-10-14 09:52:39 -07:00
João Moura
bf2e2a42da fix: don't error out if there is no input() available
- Specific to Jupyter notebooks
2025-10-13 22:36:19 -04:00
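In environments without an interactive stdin (some Jupyter kernels, CI), `input()` raises; a defensive wrapper might look like this sketch:

```python
def safe_input(prompt: str, default: str = "") -> str:
    """Ask the user for input, falling back to a default when stdin is unavailable."""
    try:
        return input(prompt)
    except (EOFError, OSError):
        # No interactive stdin (e.g. some notebook or CI environments); don't crash.
        return default
```
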
Lorenze Jay
814c962196 chore: update crewAI version to 0.203.1 in multiple templates (#3699)
- Bumped the `crewai` version in `__init__.py` to 0.203.1.
- Updated the dependency versions in the crew, flow, and tool templates' `pyproject.toml` files to reflect the new `crewai` version.
2025-10-13 11:46:22 -07:00
Heitor Carvalho
2ebb2e845f fix: add a leeway of 10s when decoding jwt (#3698) 2025-10-13 12:42:03 -03:00
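A leeway tolerates small clock skew between the token issuer and the verifying machine; a sketch assuming PyJWT is the library in use:

```python
import jwt  # PyJWT


def decode_token(token: str, key: str) -> dict:
    # A 10-second leeway relaxes exp/nbf/iat checks just enough to absorb
    # minor clock drift without materially weakening validation.
    return jwt.decode(token, key, algorithms=["HS256"], leeway=10)
```
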
Greyson LaLonde
7b550ebfe8 fix: inject tool repository credentials in crewai run command
2025-10-10 15:00:04 -04:00
Greyson LaLonde
29919c2d81 fix: revert bad cron sched
This reverts commit b71c88814f.
2025-10-09 13:52:25 -04:00
Greyson LaLonde
b71c88814f fix: correct cron schedule to run every 5 days at specific dates 2025-10-09 13:10:45 -04:00
Rip&Tear
cb8bcfe214 docs: update security policy for vulnerability reporting
- Revised the security policy to clarify the reporting process for vulnerabilities.
- Added detailed sections on scope, reporting requirements, and our commitment to addressing reported issues.
- Emphasized the importance of not disclosing vulnerabilities publicly and provided guidance on how to report them securely.
- Included a new section on coordinated disclosure and safe harbor provisions for ethical reporting.

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-09 00:57:57 -04:00
Lorenze Jay
13a514f8be chore: update crewAI and crewAI-tools dependencies to version 0.203.0 and 0.76.0 respectively (#3674)
- Updated the `crewai-tools` dependency in `pyproject.toml` and `uv.lock` to version 0.76.0.
- Updated the `crewai` version in `__init__.py` to 0.203.0.
- Updated the dependency versions in the crew, flow, and tool templates to reflect the new `crewai` version.
2025-10-08 14:34:51 -07:00
Lorenze Jay
316b1cea69 docs: add guide for capturing telemetry logs in CrewAI AMP (#3673)
- Introduced a new documentation page detailing how to capture telemetry logs from CrewAI AMP deployments.
- Updated the main documentation to include the new guide in the enterprise section.
- Added prerequisites and step-by-step instructions for configuring OTEL collector setup.
- Included an example image for OTEL log collection capture to Datadog.
2025-10-08 14:06:10 -07:00
Lorenze Jay
6f2e39c0dd feat: enhance knowledge and guardrail event handling in Agent class (#3672)
* feat: enhance knowledge event handling in Agent class

- Updated the Agent class to include task context in knowledge retrieval events.
- Emitted new events for knowledge retrieval and query processes, capturing task and agent details.
- Refactored knowledge event classes to inherit from a base class for better structure and maintainability.
- Added tracing for knowledge events in the TraceCollectionListener to improve observability.

This change improves the tracking and management of knowledge queries and retrievals, facilitating better debugging and performance monitoring.

* refactor: remove task_id from knowledge event emissions in Agent class

- Removed the task_id parameter from various knowledge event emissions in the Agent class to streamline event handling.
- This change simplifies the event structure and focuses on the essential context of knowledge retrieval and query processes.

This refactor enhances the clarity of knowledge events and aligns with the recent improvements in event handling.

* surface association for guardrail events

* fix: improve LLM selection logic in converter

- Updated the logic for selecting the LLM in the convert_with_instructions function to handle cases where the agent may not have a function_calling_llm attribute.
- This change ensures that the converter can still function correctly by falling back to the standard LLM if necessary, enhancing robustness and preventing potential errors.

This fix improves the reliability of the conversion process when working with different agent configurations.

* fix test

* fix: enforce valid LLM instance requirement in converter

- Updated the convert_with_instructions function to ensure that a valid LLM instance is provided by the agent.
- If neither function_calling_llm nor the standard llm is available, a ValueError is raised, enhancing error handling and robustness.
- Improved error messaging for conversion failures to provide clearer feedback on issues encountered during the conversion process.

This change strengthens the reliability of the conversion process by ensuring that agents are properly configured with a valid LLM.
2025-10-08 11:53:13 -07:00
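The fallback logic described for convert_with_instructions reduces to choosing the first available LLM and failing loudly when neither exists; a sketch using the attribute names mentioned in the commit text:

```python
def pick_converter_llm(agent) -> object:
    """Prefer the agent's function-calling LLM, fall back to its standard LLM."""
    llm = getattr(agent, "function_calling_llm", None) or getattr(agent, "llm", None)
    if llm is None:
        raise ValueError(
            "Agent must provide a valid LLM (function_calling_llm or llm) "
            "to convert output with instructions."
        )
    return llm
```
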
Lucas Gomide
8d93361cb3 docs: add missing /resume files (#3661)
2025-10-07 12:21:27 -04:00
Lucas Gomide
54ec245d84 docs: clarify webhook URL parameter in HITL workflows (#3660) 2025-10-07 12:06:11 -04:00
Vidit Ostwal
f589ab9b80 chore: load json tool input before console output
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-07 10:18:28 -04:00
Greyson LaLonde
fadb59e0f0 chore: add scheduled cache rebuild to prevent expiration
2025-10-06 11:45:28 -04:00
Greyson LaLonde
1a60848425 chore: remove crewAI.excalidraw file 2025-10-06 11:03:55 -04:00
Greyson LaLonde
0135163040 chore: remove mkdocs cache directory
Remove obsolete .cache directory from mkdocs-material social plugin as the project no longer uses mkdocs for documentation.
2025-10-05 21:41:09 -04:00
Greyson LaLonde
dac5d6d664 fix: use system PATH for Docker binary instead of hardcoded path 2025-10-05 21:36:05 -04:00
Rip&Tear
f0f94f2540 fix: add CodeQL configuration to properly exclude template directories (#3641) 2025-10-06 08:21:51 +08:00
1535 changed files with 157991 additions and 39344 deletions

(Binary image files changed; previews not shown.)

28
.github/codeql/codeql-config.yml vendored Normal file
View File

@@ -0,0 +1,28 @@
name: "CodeQL Config"
paths-ignore:
# Ignore template files - these are boilerplate code that shouldn't be analyzed
- "lib/crewai/src/crewai/cli/templates/**"
# Ignore test cassettes - these are test fixtures/recordings
- "lib/crewai/tests/cassettes/**"
- "lib/crewai-tools/tests/cassettes/**"
# Ignore cache and build artifacts
- ".cache/**"
# Ignore documentation build artifacts
- "docs/.cache/**"
# Ignore experimental code
- "lib/crewai/src/crewai/experimental/a2a/**"
paths:
# Include all Python source code from workspace packages
- "lib/crewai/src/**"
- "lib/crewai-tools/src/**"
- "lib/devtools/src/**"
# Include tests (but exclude cassettes via paths-ignore)
- "lib/crewai/tests/**"
- "lib/crewai-tools/tests/**"
- "lib/devtools/tests/**"
# Configure specific queries or packs if needed
# queries:
# - uses: security-and-quality

63
.github/security.md vendored
View File

@@ -1,27 +1,50 @@
## CrewAI Security Vulnerability Reporting Policy
## CrewAI Security Policy
CrewAI prioritizes the security of our software products, services, and GitHub repositories. To promptly address vulnerabilities, follow these steps for reporting security issues:
We are committed to protecting the confidentiality, integrity, and availability of the CrewAI ecosystem. This policy explains how to report potential vulnerabilities and what you can expect from us when you do.
### Reporting Process
Do **not** report vulnerabilities via public GitHub issues.
### Scope
Email all vulnerability reports directly to:
**security@crewai.com**
We welcome reports for vulnerabilities that could impact:
### Required Information
To help us quickly validate and remediate the issue, your report must include:
- CrewAI-maintained source code and repositories
- CrewAI-operated infrastructure and services
- Official CrewAI releases, packages, and distributions
- **Vulnerability Type:** Clearly state the vulnerability type (e.g., SQL injection, XSS, privilege escalation).
- **Affected Source Code:** Provide full file paths and direct URLs (branch, tag, or commit).
- **Reproduction Steps:** Include detailed, step-by-step instructions. Screenshots are recommended.
- **Special Configuration:** Document any special settings or configurations required to reproduce.
- **Proof-of-Concept (PoC):** Provide exploit or PoC code (if available).
- **Impact Assessment:** Clearly explain the severity and potential exploitation scenarios.
Issues affecting clearly unaffiliated third-party services or user-generated content are out of scope, unless you can demonstrate a direct impact on CrewAI systems or customers.
### Our Response
- We will acknowledge receipt of your report promptly via your provided email.
- Confirmed vulnerabilities will receive priority remediation based on severity.
- Patches will be released as swiftly as possible following verification.
### How to Report
### Reward Notice
Currently, we do not offer a bug bounty program. Rewards, if issued, are discretionary.
- **Please do not** disclose vulnerabilities via public GitHub issues, pull requests, or social media.
- Email detailed reports to **security@crewai.com** with the subject line `Security Report`.
- If you need to share large files or sensitive artifacts, mention it in your email and we will coordinate a secure transfer method.
### What to Include
Providing comprehensive information enables us to validate the issue quickly:
- **Vulnerability overview** — a concise description and classification (e.g., RCE, privilege escalation)
- **Affected components** — repository, branch, tag, or deployed service along with relevant file paths or endpoints
- **Reproduction steps** — detailed, step-by-step instructions; include logs, screenshots, or screen recordings when helpful
- **Proof-of-concept** — exploit details or code that demonstrates the impact (if available)
- **Impact analysis** — severity assessment, potential exploitation scenarios, and any prerequisites or special configurations
### Our Commitment
- **Acknowledgement:** We aim to acknowledge your report within two business days.
- **Communication:** We will keep you informed about triage results, remediation progress, and planned release timelines.
- **Resolution:** Confirmed vulnerabilities will be prioritized based on severity and fixed as quickly as possible.
- **Recognition:** We currently do not run a bug bounty program; any rewards or recognition are issued at CrewAI's discretion.
### Coordinated Disclosure
We ask that you allow us a reasonable window to investigate and remediate confirmed issues before any public disclosure. We will coordinate publication timelines with you whenever possible.
### Safe Harbor
We will not pursue or support legal action against individuals who, in good faith:
- Follow this policy and refrain from violating any applicable laws
- Avoid privacy violations, data destruction, or service disruption
- Limit testing to systems in scope and respect rate limits and terms of service
If you are unsure whether your testing is covered, please contact us at **security@crewai.com** before proceeding.

View File

@@ -7,6 +7,8 @@ on:
paths:
- "uv.lock"
- "pyproject.toml"
schedule:
- cron: "0 0 */5 * *" # Run every 5 days at midnight UTC to prevent cache expiration
workflow_dispatch:
permissions:

View File

@@ -15,11 +15,11 @@ on:
push:
branches: [ "main" ]
paths-ignore:
- "src/crewai/cli/templates/**"
- "lib/crewai/src/crewai/cli/templates/**"
pull_request:
branches: [ "main" ]
paths-ignore:
- "src/crewai/cli/templates/**"
- "lib/crewai/src/crewai/cli/templates/**"
jobs:
analyze:
@@ -73,6 +73,7 @@ jobs:
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
config-file: ./.github/codeql/codeql-config.yml
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.

View File

@@ -52,10 +52,11 @@ jobs:
- name: Run Ruff on Changed Files
if: ${{ steps.changed-files.outputs.files != '' }}
run: |
echo "${{ steps.changed-files.outputs.files }}" \
| tr ' ' '\n' \
| grep -v 'src/crewai/cli/templates/' \
| xargs -I{} uv run ruff check "{}"
echo "${{ steps.changed-files.outputs.files }}" \
| tr ' ' '\n' \
| grep -v 'src/crewai/cli/templates/' \
| grep -v '/tests/' \
| xargs -I{} uv run ruff check "{}"
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'

81
.github/workflows/publish.yml vendored Normal file
View File

@@ -0,0 +1,81 @@
name: Publish to PyPI
on:
release:
types: [ published ]
workflow_dispatch:
jobs:
build:
name: Build packages
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install uv
uses: astral-sh/setup-uv@v4
- name: Build packages
run: |
uv build --all-packages
rm dist/.gitignore
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: dist
path: dist/
publish:
name: Publish to PyPI
needs: build
runs-on: ubuntu-latest
environment:
name: pypi
url: https://pypi.org/p/crewai
permissions:
id-token: write
contents: read
steps:
- uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: "3.12"
enable-cache: false
- name: Download artifacts
uses: actions/download-artifact@v4
with:
name: dist
path: dist
- name: Publish to PyPI
env:
UV_PUBLISH_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
run: |
failed=0
for package in dist/*; do
if [[ "$package" == *"crewai_devtools"* ]]; then
echo "Skipping private package: $package"
continue
fi
echo "Publishing $package"
if ! uv publish "$package"; then
echo "Failed to publish $package"
failed=1
fi
done
if [ $failed -eq 1 ]; then
echo "Some packages failed to publish"
exit 1
fi

View File

@@ -8,6 +8,14 @@ permissions:
env:
OPENAI_API_KEY: fake-api-key
PYTHONUNBUFFERED: 1
BRAVE_API_KEY: fake-brave-key
SNOWFLAKE_USER: fake-snowflake-user
SNOWFLAKE_PASSWORD: fake-snowflake-password
SNOWFLAKE_ACCOUNT: fake-snowflake-account
SNOWFLAKE_WAREHOUSE: fake-snowflake-warehouse
SNOWFLAKE_DATABASE: fake-snowflake-database
SNOWFLAKE_SCHEMA: fake-snowflake-schema
EMBEDCHAIN_DB_URI: sqlite:///test.db
jobs:
tests:
@@ -56,13 +64,13 @@ jobs:
- name: Run tests (group ${{ matrix.group }} of 8)
run: |
PYTHON_VERSION_SAFE=$(echo "${{ matrix.python-version }}" | tr '.' '_')
DURATION_FILE=".test_durations_py${PYTHON_VERSION_SAFE}"
DURATION_FILE="../../.test_durations_py${PYTHON_VERSION_SAFE}"
# Temporarily always skip cached durations to fix test splitting
# When durations don't match, pytest-split runs duplicate tests instead of splitting
echo "Using even test splitting (duration cache disabled until fix merged)"
DURATIONS_ARG=""
# Original logic (disabled temporarily):
# if [ ! -f "$DURATION_FILE" ]; then
# echo "No cached durations found, tests will be split evenly"
@@ -74,8 +82,8 @@ jobs:
# echo "No test changes detected, using cached test durations for optimal splitting"
# DURATIONS_ARG="--durations-path=${DURATION_FILE}"
# fi
uv run pytest \
cd lib/crewai && uv run pytest \
--block-network \
--timeout=30 \
-vv \
@@ -86,6 +94,19 @@ jobs:
-n auto \
--maxfail=3
- name: Run tool tests (group ${{ matrix.group }} of 8)
run: |
cd lib/crewai-tools && uv run pytest \
--block-network \
--timeout=30 \
-vv \
--splits 8 \
--group ${{ matrix.group }} \
--durations=10 \
-n auto \
--maxfail=3
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4

1
.gitignore vendored
View File

@@ -2,7 +2,6 @@
.pytest_cache
__pycache__
dist/
lib/
.env
assets/*
.idea

View File

@@ -3,17 +3,25 @@ repos:
hooks:
- id: ruff
name: ruff
entry: uv run ruff check
entry: bash -c 'source .venv/bin/activate && uv run ruff check --config pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
- id: ruff-format
name: ruff-format
entry: uv run ruff format
entry: bash -c 'source .venv/bin/activate && uv run ruff format --config pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
- id: mypy
name: mypy
entry: uv run mypy
entry: bash -c 'source .venv/bin/activate && uv run mypy --config-file pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
exclude: ^tests/
exclude: ^(lib/crewai/src/crewai/cli/templates/|lib/crewai/tests/|lib/crewai-tools/tests/)
- repo: https://github.com/astral-sh/uv-pre-commit
rev: 0.9.3
hooks:
- id: uv-lock

File diff suppressed because it is too large.

View File

@@ -134,6 +134,7 @@
"group": "MCP Integration",
"pages": [
"en/mcp/overview",
"en/mcp/dsl-integration",
"en/mcp/stdio",
"en/mcp/sse",
"en/mcp/streamable-http",
@@ -275,6 +276,7 @@
"en/observability/overview",
"en/observability/arize-phoenix",
"en/observability/braintrust",
"en/observability/datadog",
"en/observability/langdb",
"en/observability/langfuse",
"en/observability/langtrace",
@@ -361,10 +363,20 @@
"en/enterprise/integrations/github",
"en/enterprise/integrations/gmail",
"en/enterprise/integrations/google_calendar",
"en/enterprise/integrations/google_contacts",
"en/enterprise/integrations/google_docs",
"en/enterprise/integrations/google_drive",
"en/enterprise/integrations/google_sheets",
"en/enterprise/integrations/google_slides",
"en/enterprise/integrations/hubspot",
"en/enterprise/integrations/jira",
"en/enterprise/integrations/linear",
"en/enterprise/integrations/microsoft_excel",
"en/enterprise/integrations/microsoft_onedrive",
"en/enterprise/integrations/microsoft_outlook",
"en/enterprise/integrations/microsoft_sharepoint",
"en/enterprise/integrations/microsoft_teams",
"en/enterprise/integrations/microsoft_word",
"en/enterprise/integrations/notion",
"en/enterprise/integrations/salesforce",
"en/enterprise/integrations/shopify",
@@ -397,6 +409,7 @@
"en/enterprise/guides/kickoff-crew",
"en/enterprise/guides/update-crew",
"en/enterprise/guides/enable-crew-studio",
"en/enterprise/guides/capture_telemetry_logs",
"en/enterprise/guides/azure-openai-setup",
"en/enterprise/guides/tool-repository",
"en/enterprise/guides/react-component-export",
@@ -421,6 +434,7 @@
"en/api-reference/introduction",
"en/api-reference/inputs",
"en/api-reference/kickoff",
"en/api-reference/resume",
"en/api-reference/status"
]
}
@@ -558,6 +572,7 @@
"group": "Integração MCP",
"pages": [
"pt-BR/mcp/overview",
"pt-BR/mcp/dsl-integration",
"pt-BR/mcp/stdio",
"pt-BR/mcp/sse",
"pt-BR/mcp/streamable-http",
@@ -686,6 +701,7 @@
"pt-BR/observability/overview",
"pt-BR/observability/arize-phoenix",
"pt-BR/observability/braintrust",
"pt-BR/observability/datadog",
"pt-BR/observability/langdb",
"pt-BR/observability/langfuse",
"pt-BR/observability/langtrace",
@@ -771,10 +787,20 @@
"pt-BR/enterprise/integrations/github",
"pt-BR/enterprise/integrations/gmail",
"pt-BR/enterprise/integrations/google_calendar",
"pt-BR/enterprise/integrations/google_contacts",
"pt-BR/enterprise/integrations/google_docs",
"pt-BR/enterprise/integrations/google_drive",
"pt-BR/enterprise/integrations/google_sheets",
"pt-BR/enterprise/integrations/google_slides",
"pt-BR/enterprise/integrations/hubspot",
"pt-BR/enterprise/integrations/jira",
"pt-BR/enterprise/integrations/linear",
"pt-BR/enterprise/integrations/microsoft_excel",
"pt-BR/enterprise/integrations/microsoft_onedrive",
"pt-BR/enterprise/integrations/microsoft_outlook",
"pt-BR/enterprise/integrations/microsoft_sharepoint",
"pt-BR/enterprise/integrations/microsoft_teams",
"pt-BR/enterprise/integrations/microsoft_word",
"pt-BR/enterprise/integrations/notion",
"pt-BR/enterprise/integrations/salesforce",
"pt-BR/enterprise/integrations/shopify",
@@ -803,6 +829,12 @@
"group": "Triggers",
"pages": [
"pt-BR/enterprise/guides/automation-triggers",
"pt-BR/enterprise/guides/gmail-trigger",
"pt-BR/enterprise/guides/google-calendar-trigger",
"pt-BR/enterprise/guides/google-drive-trigger",
"pt-BR/enterprise/guides/outlook-trigger",
"pt-BR/enterprise/guides/onedrive-trigger",
"pt-BR/enterprise/guides/microsoft-teams-trigger",
"pt-BR/enterprise/guides/slack-trigger",
"pt-BR/enterprise/guides/hubspot-trigger",
"pt-BR/enterprise/guides/salesforce-trigger",
@@ -827,6 +859,7 @@
"pt-BR/api-reference/introduction",
"pt-BR/api-reference/inputs",
"pt-BR/api-reference/kickoff",
"pt-BR/api-reference/resume",
"pt-BR/api-reference/status"
]
}
@@ -960,6 +993,7 @@
"group": "MCP 통합",
"pages": [
"ko/mcp/overview",
"ko/mcp/dsl-integration",
"ko/mcp/stdio",
"ko/mcp/sse",
"ko/mcp/streamable-http",
@@ -1100,6 +1134,7 @@
"ko/observability/overview",
"ko/observability/arize-phoenix",
"ko/observability/braintrust",
"ko/observability/datadog",
"ko/observability/langdb",
"ko/observability/langfuse",
"ko/observability/langtrace",
@@ -1185,10 +1220,20 @@
"ko/enterprise/integrations/github",
"ko/enterprise/integrations/gmail",
"ko/enterprise/integrations/google_calendar",
"ko/enterprise/integrations/google_contacts",
"ko/enterprise/integrations/google_docs",
"ko/enterprise/integrations/google_drive",
"ko/enterprise/integrations/google_sheets",
"ko/enterprise/integrations/google_slides",
"ko/enterprise/integrations/hubspot",
"ko/enterprise/integrations/jira",
"ko/enterprise/integrations/linear",
"ko/enterprise/integrations/microsoft_excel",
"ko/enterprise/integrations/microsoft_onedrive",
"ko/enterprise/integrations/microsoft_outlook",
"ko/enterprise/integrations/microsoft_sharepoint",
"ko/enterprise/integrations/microsoft_teams",
"ko/enterprise/integrations/microsoft_word",
"ko/enterprise/integrations/notion",
"ko/enterprise/integrations/salesforce",
"ko/enterprise/integrations/shopify",
@@ -1217,6 +1262,12 @@
"group": "트리거",
"pages": [
"ko/enterprise/guides/automation-triggers",
"ko/enterprise/guides/gmail-trigger",
"ko/enterprise/guides/google-calendar-trigger",
"ko/enterprise/guides/google-drive-trigger",
"ko/enterprise/guides/outlook-trigger",
"ko/enterprise/guides/onedrive-trigger",
"ko/enterprise/guides/microsoft-teams-trigger",
"ko/enterprise/guides/slack-trigger",
"ko/enterprise/guides/hubspot-trigger",
"ko/enterprise/guides/salesforce-trigger",
@@ -1239,6 +1290,7 @@
"ko/api-reference/introduction",
"ko/api-reference/inputs",
"ko/api-reference/kickoff",
"ko/api-reference/resume",
"ko/api-reference/status"
]
}

View File

@@ -0,0 +1,6 @@
---
title: "POST /resume"
description: "Resume crew execution with human feedback"
openapi: "/enterprise-api.en.yaml POST /resume"
mode: "wide"
---

View File

@@ -7,7 +7,7 @@ mode: "wide"
## Overview
CrewAI integrates with multiple LLM providers through LiteLLM, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
CrewAI integrates with multiple LLM providers through the providers' native SDKs, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
## What are LLMs?
@@ -113,44 +113,104 @@ In this section, you'll find detailed examples that help you select, configure,
<AccordionGroup>
<Accordion title="OpenAI">
Set the following environment variables in your `.env` file:
CrewAI provides native integration with OpenAI through the OpenAI Python SDK.
```toml Code
# Required
OPENAI_API_KEY=sk-...
# Optional
OPENAI_API_BASE=<custom-base-url>
OPENAI_ORGANIZATION=<your-org-id>
OPENAI_BASE_URL=<custom-base-url>
```
Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM
llm = LLM(
model="openai/gpt-4", # call model by provider/model_name
temperature=0.8,
max_tokens=150,
model="openai/gpt-4o",
api_key="your-api-key", # Or set OPENAI_API_KEY
temperature=0.7,
max_tokens=4000
)
```
**Advanced Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="openai/gpt-4o",
api_key="your-api-key",
base_url="https://api.openai.com/v1", # Optional custom endpoint
organization="org-...", # Optional organization ID
project="proj_...", # Optional project ID
temperature=0.7,
max_tokens=4000,
max_completion_tokens=4000, # For newer models
top_p=0.9,
frequency_penalty=0.1,
presence_penalty=0.1,
stop=["END"],
seed=42
seed=42, # For reproducible outputs
stream=True, # Enable streaming
timeout=60.0, # Request timeout in seconds
max_retries=3, # Maximum retry attempts
logprobs=True, # Return log probabilities
top_logprobs=5, # Number of most likely tokens
reasoning_effort="medium" # For o1 models: low, medium, high
)
```
OpenAI is one of the leading providers of LLMs with a wide range of models and features.
**Structured Outputs:**
```python Code
from pydantic import BaseModel
from crewai import LLM

class ResponseFormat(BaseModel):
    name: str
    age: int
    summary: str

llm = LLM(
    model="openai/gpt-4o",
    response_format=ResponseFormat  # return output conforming to the ResponseFormat schema
)
```
**Supported Environment Variables:**
- `OPENAI_API_KEY`: Your OpenAI API key (required)
- `OPENAI_BASE_URL`: Custom base URL for OpenAI API (optional)
**Features:**
- Native function calling support (except o1 models)
- Structured outputs with JSON schema
- Streaming support for real-time responses
- Token usage tracking
- Stop sequences support (except o1 models)
- Log probabilities for token-level insights
- Reasoning effort control for o1 models
**Supported Models:**
| Model | Context Window | Best For |
|---------------------|------------------|-----------------------------------------------|
| GPT-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| GPT-4 Turbo | 128,000 tokens | Long-form content, document analysis |
| GPT-4o & GPT-4o-mini | 128,000 tokens | Cost-effective large context processing |
| o3-mini | 200,000 tokens | Fast reasoning, complex reasoning |
| o1-mini | 128,000 tokens | Fast reasoning, complex reasoning |
| o1-preview | 128,000 tokens | Fast reasoning, complex reasoning |
| o1 | 200,000 tokens | Fast reasoning, complex reasoning |
| gpt-4.1 | 1M tokens | Latest model with enhanced capabilities |
| gpt-4.1-mini | 1M tokens | Efficient version with large context |
| gpt-4.1-nano | 1M tokens | Ultra-efficient variant |
| gpt-4o | 128,000 tokens | Optimized for speed and intelligence |
| gpt-4o-mini | 200,000 tokens | Cost-effective with large context |
| gpt-4-turbo | 128,000 tokens | Long-form content, document analysis |
| gpt-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| o1 | 200,000 tokens | Advanced reasoning, complex problem-solving |
| o1-preview | 128,000 tokens | Preview of reasoning capabilities |
| o1-mini | 128,000 tokens | Efficient reasoning model |
| o3-mini | 200,000 tokens | Lightweight reasoning model |
| o4-mini | 200,000 tokens | Next-gen efficient reasoning |
**Note:** To use OpenAI, install the required dependencies:
```bash
uv add "crewai[openai]"
```
</Accordion>
<Accordion title="Meta-Llama">
@@ -187,69 +247,186 @@ In this section, you'll find detailed examples that help you select, configure,
</Accordion>
<Accordion title="Anthropic">
CrewAI provides native integration with Anthropic through the Anthropic Python SDK.
```toml Code
# Required
ANTHROPIC_API_KEY=sk-ant-...
# Optional
ANTHROPIC_API_BASE=<custom-base-url>
```
Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM
llm = LLM(
model="anthropic/claude-3-sonnet-20240229-v1:0",
temperature=0.7
model="anthropic/claude-3-5-sonnet-20241022",
api_key="your-api-key", # Or set ANTHROPIC_API_KEY
max_tokens=4096 # Required for Anthropic
)
```
**Advanced Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="anthropic/claude-3-5-sonnet-20241022",
api_key="your-api-key",
base_url="https://api.anthropic.com", # Optional custom endpoint
temperature=0.7,
max_tokens=4096, # Required parameter
top_p=0.9,
stop_sequences=["END", "STOP"], # Anthropic uses stop_sequences
stream=True, # Enable streaming
timeout=60.0, # Request timeout in seconds
max_retries=3 # Maximum retry attempts
)
```
**Supported Environment Variables:**
- `ANTHROPIC_API_KEY`: Your Anthropic API key (required)
**Features:**
- Native tool use support for Claude 3+ models
- Streaming support for real-time responses
- Automatic system message handling
- Stop sequences for controlled output
- Token usage tracking
- Multi-turn tool use conversations
**Important Notes:**
- `max_tokens` is a **required** parameter for all Anthropic models
- Claude uses `stop_sequences` instead of `stop`
- System messages are handled separately from conversation messages
- First message must be from the user (automatically handled)
- Messages must alternate between user and assistant
**Supported Models:**
| Model | Context Window | Best For |
|------------------------------|----------------|-----------------------------------------------|
| claude-3-7-sonnet | 200,000 tokens | Advanced reasoning and agentic tasks |
| claude-3-5-sonnet-20241022 | 200,000 tokens | Latest Sonnet with best performance |
| claude-3-5-haiku | 200,000 tokens | Fast, compact model for quick responses |
| claude-3-opus | 200,000 tokens | Most capable for complex tasks |
| claude-3-sonnet | 200,000 tokens | Balanced intelligence and speed |
| claude-3-haiku | 200,000 tokens | Fastest for simple tasks |
| claude-2.1 | 200,000 tokens | Extended context, reduced hallucinations |
| claude-2 | 100,000 tokens | Versatile model for various tasks |
| claude-instant | 100,000 tokens | Fast, cost-effective for everyday tasks |
**Note:** To use Anthropic, install the required dependencies:
```bash
uv add "crewai[anthropic]"
```
</Accordion>
<Accordion title="Google (Gemini API)">
Set your API key in your `.env` file. If you need a key, or need to find an
existing key, check [AI Studio](https://aistudio.google.com/apikey).
CrewAI provides native integration with Google Gemini through the Google Gen AI Python SDK.
Set your API key in your `.env` file. If you need a key, check [AI Studio](https://aistudio.google.com/apikey).
```toml .env
# https://ai.google.dev/gemini-api/docs/api-key
# Required (one of the following)
GOOGLE_API_KEY=<your-api-key>
GEMINI_API_KEY=<your-api-key>
# Optional - for Vertex AI
GOOGLE_CLOUD_PROJECT=<your-project-id>
GOOGLE_CLOUD_LOCATION=<location> # Defaults to us-central1
GOOGLE_GENAI_USE_VERTEXAI=true # Set to use Vertex AI
```
Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-2.0-flash",
temperature=0.7,
api_key="your-api-key", # Or set GOOGLE_API_KEY/GEMINI_API_KEY
temperature=0.7
)
```
### Gemini models
**Advanced Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-2.5-flash",
api_key="your-api-key",
temperature=0.7,
top_p=0.9,
top_k=40, # Top-k sampling parameter
max_output_tokens=8192,
stop_sequences=["END", "STOP"],
stream=True, # Enable streaming
safety_settings={
"HARM_CATEGORY_HARASSMENT": "BLOCK_NONE",
"HARM_CATEGORY_HATE_SPEECH": "BLOCK_NONE"
}
)
```
**Vertex AI Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-1.5-pro",
project="your-gcp-project-id",
location="us-central1" # GCP region
)
```
**Supported Environment Variables:**
- `GOOGLE_API_KEY` or `GEMINI_API_KEY`: Your Google API key (required for Gemini API)
- `GOOGLE_CLOUD_PROJECT`: Google Cloud project ID (for Vertex AI)
- `GOOGLE_CLOUD_LOCATION`: GCP location (defaults to `us-central1`)
- `GOOGLE_GENAI_USE_VERTEXAI`: Set to `true` to use Vertex AI
**Features:**
- Native function calling support for Gemini 1.5+ and 2.x models
- Streaming support for real-time responses
- Multimodal capabilities (text, images, video)
- Safety settings configuration
- Support for both Gemini API and Vertex AI
- Automatic system instruction handling
- Token usage tracking
**Gemini Models:**
Google offers a range of powerful models optimized for different use cases.
| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
| gemini-2.5-flash | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro | 1M tokens | Enhanced thinking and reasoning, multimodal understanding |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking |
| gemini-2.0-flash-thinking | 32,768 tokens | Advanced reasoning with thinking process |
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
| gemini-1.5-pro | 2M tokens | Best performing, logical reasoning, coding |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
| gemini-1.5-flash-8b | 1M tokens | Fastest, most cost-efficient |
| gemini-1.0-pro | 32,768 tokens | Earlier generation model |
**Gemma Models:**
The Gemini API also supports [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.
| Model | Context Window | Best For |
|----------------|----------------|------------------------------------|
| gemma-3-1b | 32,000 tokens | Ultra-lightweight tasks |
| gemma-3-4b | 128,000 tokens | Efficient general-purpose tasks |
| gemma-3-12b | 128,000 tokens | Balanced performance and efficiency|
| gemma-3-27b | 128,000 tokens | High-performance tasks |
**Note:** To use Google Gemini, install the required dependencies:
```bash
uv add "crewai[google-genai]"
```
The full list of models is available in the [Gemini model docs](https://ai.google.dev/gemini-api/docs/models).
### Gemma
The Gemini API also allows you to use your API key to access [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.
| Model | Context Window |
|----------------|----------------|
| gemma-3-1b-it | 32k tokens |
| gemma-3-4b-it | 32k tokens |
| gemma-3-12b-it | 32k tokens |
| gemma-3-27b-it | 128k tokens |
</Accordion>
<Accordion title="Google (Vertex AI)">
Get credentials from your Google Cloud Console and save it to a JSON file, then load it with the following code:
@@ -291,43 +468,146 @@ In this section, you'll find detailed examples that help you select, configure,
</Accordion>
<Accordion title="Azure">
CrewAI provides native integration with Azure AI Inference and Azure OpenAI through the Azure AI Inference Python SDK.
```toml Code
# Required
AZURE_API_KEY=<your-api-key>
AZURE_API_BASE=<your-resource-url>
AZURE_API_VERSION=<api-version>
AZURE_ENDPOINT=<your-endpoint-url>
# Optional
AZURE_AD_TOKEN=<your-azure-ad-token>
AZURE_API_TYPE=<your-azure-api-type>
AZURE_API_VERSION=<api-version> # Defaults to 2024-06-01
```
Example usage in your CrewAI project:
**Endpoint URL Formats:**
For Azure OpenAI deployments:
```
https://<resource-name>.openai.azure.com/openai/deployments/<deployment-name>
```
For Azure AI Inference endpoints:
```
https://<resource-name>.inference.azure.com
```
**Basic Usage:**
```python Code
llm = LLM(
model="azure/gpt-4",
api_version="2023-05-15"
api_key="<your-api-key>", # Or set AZURE_API_KEY
endpoint="<your-endpoint-url>",
api_version="2024-06-01"
)
```
**Advanced Configuration:**
```python Code
llm = LLM(
model="azure/gpt-4o",
temperature=0.7,
max_tokens=4000,
top_p=0.9,
frequency_penalty=0.0,
presence_penalty=0.0,
stop=["END"],
stream=True,
timeout=60.0,
max_retries=3
)
```
**Supported Environment Variables:**
- `AZURE_API_KEY`: Your Azure API key (required)
- `AZURE_ENDPOINT`: Your Azure endpoint URL (required, also checks `AZURE_OPENAI_ENDPOINT` and `AZURE_API_BASE`)
- `AZURE_API_VERSION`: API version (optional, defaults to `2024-06-01`)
**Features:**
- Native function calling support for Azure OpenAI models (gpt-4, gpt-4o, gpt-3.5-turbo, etc.)
- Streaming support for real-time responses
- Automatic endpoint URL validation and correction
- Comprehensive error handling with retry logic
- Token usage tracking
**Note:** To use Azure AI Inference, install the required dependencies:
```bash
uv add "crewai[azure-ai-inference]"
```
</Accordion>
<Accordion title="AWS Bedrock">
CrewAI provides native integration with AWS Bedrock through the boto3 SDK using the Converse API.
```toml Code
# Required
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>
# Optional
AWS_SESSION_TOKEN=<your-session-token> # For temporary credentials
AWS_DEFAULT_REGION=<your-region> # Defaults to us-east-1
```
Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM
llm = LLM(
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
region_name="us-east-1"
)
```
Before using Amazon Bedrock, make sure you have boto3 installed in your environment
**Advanced Configuration:**
```python Code
from crewai import LLM
[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API, enabling secure and responsible AI application development.
llm = LLM(
model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
aws_access_key_id="your-access-key", # Or set AWS_ACCESS_KEY_ID
aws_secret_access_key="your-secret-key", # Or set AWS_SECRET_ACCESS_KEY
aws_session_token="your-session-token", # For temporary credentials
region_name="us-east-1",
temperature=0.7,
max_tokens=4096,
top_p=0.9,
top_k=250, # For Claude models
stop_sequences=["END", "STOP"],
stream=True, # Enable streaming
guardrail_config={ # Optional content filtering
"guardrailIdentifier": "your-guardrail-id",
"guardrailVersion": "1"
},
additional_model_request_fields={ # Model-specific parameters
"top_k": 250
}
)
```
**Supported Environment Variables:**
- `AWS_ACCESS_KEY_ID`: AWS access key (required)
- `AWS_SECRET_ACCESS_KEY`: AWS secret key (required)
- `AWS_SESSION_TOKEN`: AWS session token for temporary credentials (optional)
- `AWS_DEFAULT_REGION`: AWS region (defaults to `us-east-1`)
**Features:**
- Native tool calling support via Converse API
- Streaming and non-streaming responses
- Comprehensive error handling with retry logic
- Guardrail configuration for content filtering
- Model-specific parameters via `additional_model_request_fields`
- Token usage tracking and stop reason logging
- Support for all Bedrock foundation models
- Automatic conversation format handling
**Important Notes:**
- Uses the modern Converse API for unified model access
- Automatic handling of model-specific conversation requirements
- System messages are handled separately from conversation
- First message must be from user (automatically handled)
- Some models (like Cohere) require conversation to end with user message
[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API.
| Model | Context Window | Best For |
|-------------------------|----------------------|-------------------------------------------------------------------|
@@ -357,7 +637,12 @@ In this section, you'll find detailed examples that help you select, configure,
| Jamba-Instruct | Up to 256k tokens | Model with extended context window optimized for cost-effective text generation, summarization, and Q&A. |
| Mistral 7B Instruct | Up to 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| Mistral 8x7B Instruct | Up to 32k tokens | An MOE LLM that follows instructions, completes requests, and generates creative text. |
| DeepSeek R1 | 32,768 tokens | Advanced reasoning model |
**Note:** To use AWS Bedrock, install the required dependencies:
```bash
uv add "crewai[bedrock]"
```
</Accordion>
<Accordion title="Amazon SageMaker">
@@ -899,7 +1184,7 @@ Learn how to get the most out of your LLM configuration:
</Accordion>
<Accordion title="Drop Additional Parameters">
CrewAI internally uses Litellm for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
CrewAI internally uses native SDKs for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
For example, if you don't need to send the <code>stop</code> parameter, you can simply omit it from your LLM call:
```python
@@ -915,6 +1200,52 @@ Learn how to get the most out of your LLM configuration:
)
```
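A minimal sketch of such a call, assuming the OpenAI provider (parameter values are illustrative): the `stop` parameter is simply never passed, so it is never sent to the provider.
```python
from crewai import LLM

# `stop` is omitted entirely, so no stop sequences are sent to the provider.
llm = LLM(
    model="openai/gpt-4o",
    temperature=0.7,
)
```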
</Accordion>
<Accordion title="Transport Interceptors">
CrewAI provides message interceptors for several providers, allowing you to hook into request/response cycles at the transport layer.
**Supported Providers:**
- ✅ OpenAI
- ✅ Anthropic
**Basic Usage:**
```python
import httpx
from crewai import LLM
from crewai.llms.hooks import BaseInterceptor

class CustomInterceptor(BaseInterceptor[httpx.Request, httpx.Response]):
    """Custom interceptor to modify requests and responses."""

    def on_outbound(self, request: httpx.Request) -> httpx.Request:
        """Print request before sending to the LLM provider."""
        print(request)
        return request

    def on_inbound(self, response: httpx.Response) -> httpx.Response:
        """Process response after receiving from the LLM provider."""
        print(f"Status: {response.status_code}")
        print(f"Response time: {response.elapsed}")
        return response

# Use the interceptor with an LLM
llm = LLM(
    model="openai/gpt-4o",
    interceptor=CustomInterceptor()
)
```
**Important Notes:**
- Both methods must return the received object (or an object of the same type).
- Modifying received objects may result in unexpected behavior or application crashes.
- Not all providers support interceptors - check the supported providers list above
<Info>
Interceptors operate at the transport layer. This is particularly useful for:
- Message transformation and filtering
- Debugging API interactions
</Info>
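As a further illustration, here is a minimal sketch of a transforming interceptor that attaches a correlation header to each outbound request. The header name and value are hypothetical; keep the caution above about modifying objects in mind.
```python
import httpx
from crewai import LLM
from crewai.llms.hooks import BaseInterceptor

class TracingInterceptor(BaseInterceptor[httpx.Request, httpx.Response]):
    """Attach a correlation header to every outbound LLM request."""

    def on_outbound(self, request: httpx.Request) -> httpx.Request:
        # Header name is illustrative; use whatever your gateway or observability stack expects.
        request.headers["x-crew-correlation-id"] = "run-1234"
        return request

    def on_inbound(self, response: httpx.Response) -> httpx.Response:
        return response

llm = LLM(
    model="openai/gpt-4o",
    interceptor=TracingInterceptor()
)
```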
</Accordion>
</AccordionGroup>
## Common Issues and Solutions

View File

@@ -43,7 +43,7 @@ Tools & Integrations is the central hub for connecting third-party apps and ma
1. Go to <Link href="https://app.crewai.com/crewai_plus/connectors">Integrations</Link>
2. Click <b>Connect</b> on the desired service
3. Complete the OAuth flow and grant scopes
4. Copy your Enterprise Token from the <b>Integration</b> tab
4. Copy your Enterprise Token from <Link href="https://app.crewai.com/crewai_plus/settings/integrations">Integration Settings</Link>
<Frame>
![Enterprise Token](/images/enterprise/enterprise_action_auth_token.png)
@@ -57,29 +57,37 @@ Tools & Integrations is the central hub for connecting third-party apps and ma
uv add crewai-tools
```
### Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
### Usage Example
<Tip>
All services you have authenticated will be available as tools. Add `CrewaiEnterpriseTools` to your agent and you're set.
Use the new streamlined approach to integrate enterprise apps. Simply specify the app and its actions directly in the Agent configuration.
</Tip>
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Gmail tool will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# print the tools
print(enterprise_tools)
# Create an agent with Gmail capabilities
email_agent = Agent(
role="Email Manager",
goal="Manage and organize email communications",
backstory="An AI assistant specialized in email management and communication.",
tools=enterprise_tools
apps=['gmail', 'gmail/send_email'] # Using canonical name 'gmail'
)
# Task to send an email
@@ -102,21 +110,14 @@ Tools & Integrations is the central hub for connecting thirdparty apps and ma
### Filtering Tools
```python
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
actions_list=["gmail_find_email"] # only gmail_find_email tool will be available
)
gmail_tool = enterprise_tools["gmail_find_email"]
from crewai import Agent, Task, Crew
# Create agent with specific Gmail actions only
gmail_agent = Agent(
role="Gmail Manager",
goal="Manage gmail communications and notifications",
backstory="An AI assistant that helps coordinate gmail communications.",
tools=[gmail_tool]
apps=['gmail/fetch_emails'] # Using canonical name with specific action
)
notification_task = Task(

View File

@@ -117,27 +117,50 @@ Before wiring a trigger into production, make sure you:
- Decide whether to pass trigger context automatically using `allow_crewai_trigger_context`
- Set up monitoring: webhook logs, CrewAI execution history, and optional external alerting
### Payload & Crew Examples Repository
### Testing Triggers Locally with CLI
We maintain a comprehensive repository with end-to-end trigger examples to help you build and test your automations:
The CrewAI CLI provides powerful commands to help you develop and test trigger-driven automations without deploying to production.
This repository contains:
#### List Available Triggers
- **Realistic payload samples** for every supported trigger integration
- **Ready-to-run crew implementations** that parse each payload and turn it into a business workflow
- **Multiple scenarios per integration** (e.g., new events, updates, deletions) so you can match the shape of your data
View all available triggers for your connected integrations:
| Integration | When it fires | Payload Samples | Crew Examples |
| :-- | :-- | :-- | :-- |
| Gmail | New messages, thread updates | [New alerts, thread updates](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/gmail) | [`new-email-crew.py`, `gmail-alert-crew.py`](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/gmail) |
| Google Calendar | Event created / updated / started / ended / cancelled | [Event lifecycle payloads](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/google_calendar) | [`calendar-event-crew.py`, `calendar-meeting-crew.py`, `calendar-working-location-crew.py`](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/google_calendar) |
| Google Drive | File created / updated / deleted | [File lifecycle payloads](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/google_drive) | [`drive-file-crew.py`, `drive-file-deletion-crew.py`](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/google_drive) |
| Outlook | New email, calendar event removed | [Outlook payloads](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/outlook) | [`outlook-message-crew.py`, `outlook-event-removal-crew.py`](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/outlook) |
| OneDrive | File operations (create, update, share, delete) | [OneDrive payloads](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/onedrive) | [`onedrive-file-crew.py`](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/onedrive) |
| HubSpot | Record created / updated (contacts, companies, deals) | [HubSpot payloads](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/hubspot) | [`hubspot-company-crew.py`, `hubspot-contact-crew.py`, `hubspot-record-crew.py`](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/hubspot) |
| Microsoft Teams | Chat thread created | [Teams chat payload](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/microsoft-teams) | [`teams-chat-created-crew.py`](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/microsoft-teams) |
```bash
crewai triggers list
```
This command displays all triggers available based on your connected integrations, showing:
- Integration name and connection status
- Available trigger types
- Trigger names and descriptions
#### Simulate Trigger Execution
Test your crew with realistic trigger payloads before deployment:
```bash
crewai triggers run <trigger_name>
```
For example:
```bash
crewai triggers run microsoft_onedrive/file_changed
```
This command:
- Executes your crew locally
- Passes a complete, realistic trigger payload
- Simulates exactly how your crew will be called in production
<Warning>
**Important Development Notes:**
- Use `crewai triggers run <trigger>` to simulate trigger execution during development
- Using `crewai run` will NOT simulate trigger calls and won't pass the trigger payload
- After deployment, your crew will be executed with the actual trigger payload
- If your crew expects parameters that aren't in the trigger payload, execution may fail
</Warning>
Use these samples to understand payload shape, copy the matching crew, and then replace the test payload with your live trigger data.
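For a sense of what the simulation does under the hood, here is a minimal sketch (the crew class and payload fields are hypothetical) of kicking off a crew with a `crewai_trigger_payload` input, which is what `crewai triggers run` automates with a realistic sample:
```python
from my_project.crew import MyProjectCrew  # hypothetical project crew class

# Hypothetical payload; run `crewai triggers run <trigger_name>` to see the real structure.
sample_payload = {
    "event": "file_changed",
    "file": {"id": "abc123", "name": "report.pdf"},
}

result = MyProjectCrew().crew().kickoff(
    inputs={"crewai_trigger_payload": sample_payload}
)
print(result.raw)
```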
### Triggers with Crew
@@ -241,15 +264,20 @@ def delegate_to_crew(self, crewai_trigger_payload: dict = None):
## Troubleshooting
**Trigger not firing:**
- Verify the trigger is enabled
- Check integration connection status
- Verify the trigger is enabled in your deployment's Triggers tab
- Check integration connection status under Tools & Integrations
- Ensure all required environment variables are properly configured
**Execution failures:**
- Check the execution logs for error details
- If you are developing, make sure the inputs include the `crewai_trigger_payload` parameter with the correct payload
- Use `crewai triggers run <trigger_name>` to test locally and see the exact payload structure
- Verify your crew can handle the `crewai_trigger_payload` parameter
- Ensure your crew doesn't expect parameters that aren't included in the trigger payload
**Development issues:**
- Always test with `crewai triggers run <trigger>` before deploying to see the complete payload
- Remember that `crewai run` does NOT simulate trigger calls; use `crewai triggers run` instead
- Use `crewai triggers list` to verify which triggers are available for your connected integrations
- After deployment, your crew will receive the actual trigger payload, so test thoroughly locally first
Automation triggers transform your CrewAI deployments into responsive, event-driven systems that can seamlessly integrate with your existing business processes and tools.
<Card title="CrewAI AMP Trigger Examples" href="https://github.com/crewAIInc/crewai-enterprise-trigger-examples" icon="github">
Check them out on GitHub!
</Card>

View File

@@ -0,0 +1,35 @@
---
title: "Open Telemetry Logs"
description: "Understand how to capture telemetry logs from your CrewAI AMP deployments"
icon: "magnifying-glass-chart"
mode: "wide"
---
CrewAI AMP provides a powerful way to capture telemetry logs from your deployments. This allows you to monitor the performance of your agents and workflows, and to debug issues that may arise.
## Prerequisites
<CardGroup cols={2}>
<Card title="ENTERPRISE OTEL SETUP enabled" icon="users">
Your organization should have ENTERPRISE OTEL SETUP enabled
</Card>
<Card title="OTEL collector setup" icon="server">
Your organization should have an OTEL collector set up, or a provider log intake such as Datadog
</Card>
</CardGroup>
## How to capture telemetry logs
1. Go to the Settings > Organization tab
2. Configure your OTEL collector settings
3. Save
Example: setting up OTEL log collection export to Datadog.
<Frame>
![Capture Telemetry Logs](/images/crewai-otel-export.png)
</Frame>

View File

@@ -51,16 +51,25 @@ class GmailProcessingCrew:
)
```
The Gmail payload will be available via the standard context mechanisms. See the payload samples repository for structure and fields.
The Gmail payload will be available via the standard context mechanisms.
### Sample payloads & crews
### Testing Locally
The [CrewAI AMP Trigger Examples repository](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/gmail) includes:
Test your Gmail trigger integration locally using the CrewAI CLI:
- `new-email-payload-1.json` / `new-email-payload-2.json` — production-style new message alerts with matching crews in `new-email-crew.py`
- `thread-updated-sample-1.json` — follow-up messages on an existing thread, processed by `gmail-alert-crew.py`
```bash
# View all available triggers
crewai triggers list
Use these samples to validate your parsing logic locally before wiring the trigger to your live Gmail accounts.
# Simulate a Gmail trigger with realistic payload
crewai triggers run gmail/new_email
```
The `crewai triggers run` command will execute your crew with a complete Gmail payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run gmail/new_email` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
## Monitoring Executions
@@ -70,16 +79,10 @@ Track history and performance of triggered runs:
<img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
</Frame>
## Payload Reference
See the sample payloads and field descriptions:
<Card title="Gmail samples in Trigger Examples Repo" href="https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/gmail" icon="envelopes-bulk">
Gmail samples in Trigger Examples Repo
</Card>
## Troubleshooting
- Ensure Gmail is connected in Tools & Integrations
- Verify the Gmail Trigger is enabled on the Triggers tab
- Test locally with `crewai triggers run gmail/new_email` to see the exact payload structure
- Check the execution logs and confirm the payload is passed as `crewai_trigger_payload`
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -39,16 +39,23 @@ print(result.raw)
Use `crewai_trigger_payload` exactly as it is delivered by the trigger so the crew can extract the proper fields.
## Sample payloads & crews
## Testing Locally
The [Google Calendar examples](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/google_calendar) show how to handle multiple event types:
Test your Google Calendar trigger integration locally using the CrewAI CLI:
- `new-event.json` → standard event creation handled by `calendar-event-crew.py`
- `event-updated.json` / `event-started.json` / `event-ended.json` → in-flight updates processed by `calendar-meeting-crew.py`
- `event-canceled.json` → cancellation workflow that alerts attendees via `calendar-meeting-crew.py`
- Working location events use `calendar-working-location-crew.py` to extract on-site schedules
```bash
# View all available triggers
crewai triggers list
Each crew transforms raw event metadata (attendees, rooms, working locations) into the summaries your teams need.
# Simulate a Google Calendar trigger with realistic payload
crewai triggers run google_calendar/event_changed
```
The `crewai triggers run` command will execute your crew with a complete Calendar payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run google_calendar/event_changed` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
## Monitoring Executions
@@ -61,5 +68,7 @@ The **Executions** list in the deployment dashboard tracks every triggered run a
## Troubleshooting
- Ensure the correct Google account is connected and the trigger is enabled
- Test locally with `crewai triggers run google_calendar/event_changed` to see the exact payload structure
- Confirm your workflow handles all-day events (payloads use `start.date` and `end.date` instead of timestamps)
- Check execution logs if reminders or attendee arrays are missing; calendar permissions can limit fields in the payload
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -36,15 +36,23 @@ crew.kickoff({
})
```
## Sample payloads & crews
## Testing Locally
Explore the [Google Drive examples](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/google_drive) to cover different operations:
Test your Google Drive trigger integration locally using the CrewAI CLI:
- `new-file.json` → new uploads processed by `drive-file-crew.py`
- `updated-file.json` → file edits and metadata changes handled by `drive-file-crew.py`
- `deleted-file.json` → deletion events routed through `drive-file-deletion-crew.py`
```bash
# View all available triggers
crewai triggers list
Each crew highlights the file name, operation type, owner, permissions, and security considerations so downstream systems can respond appropriately.
# Simulate a Google Drive trigger with realistic payload
crewai triggers run google_drive/file_changed
```
The `crewai triggers run` command will execute your crew with a complete Drive payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run google_drive/file_changed` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
## Monitoring Executions
@@ -57,5 +65,7 @@ Track history and performance of triggered runs with the **Executions** list in
## Troubleshooting
- Verify Google Drive is connected and the trigger toggle is enabled
- Test locally with `crewai triggers run google_drive/file_changed` to see the exact payload structure
- If a payload is missing permission data, ensure the connected account has access to the file or folder
- The trigger sends file IDs only; use the Drive API if you need to fetch binary content during the crew run
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -49,16 +49,4 @@ This guide provides a step-by-step process to set up HubSpot triggers for CrewAI
</Step>
</Steps>
## Additional Resources
### Sample payloads & crews
You can jump-start development with the [HubSpot examples in the trigger repository](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/hubspot):
- `record-created-contact.json`, `record-updated-contact.json` → contact lifecycle events handled by `hubspot-contact-crew.py`
- `record-created-company.json`, `record-updated-company.json` → company enrichment flows in `hubspot-company-crew.py`
- `record-created-deals.json`, `record-updated-deals.json` → deal pipeline automation in `hubspot-record-crew.py`
Each crew demonstrates how to parse HubSpot record fields, enrich context, and return structured insights.
For more detailed information on available actions and customization options, refer to the [HubSpot Workflows Documentation](https://knowledge.hubspot.com/workflows/create-workflows).

View File

@@ -40,6 +40,28 @@ Human-In-The-Loop (HITL) is a powerful approach that combines artificial intelli
<Frame>
<img src="/images/enterprise/crew-resume-endpoint.png" alt="Crew Resume Endpoint" />
</Frame>
<Warning>
**Critical: Webhook URLs Must Be Provided Again**:
You **must** provide the same webhook URLs (`taskWebhookUrl`, `stepWebhookUrl`, `crewWebhookUrl`) in the resume call that you used in the kickoff call. Webhook configurations are **NOT** automatically carried over from kickoff - they must be explicitly included in the resume request to continue receiving notifications for task completion, agent steps, and crew completion.
</Warning>
Example resume call with webhooks:
```bash
curl -X POST {BASE_URL}/resume \
-H "Authorization: Bearer YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"execution_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
"task_id": "research_task",
"human_feedback": "Great work! Please add more details.",
"is_approve": true,
"taskWebhookUrl": "https://your-server.com/webhooks/task",
"stepWebhookUrl": "https://your-server.com/webhooks/step",
"crewWebhookUrl": "https://your-server.com/webhooks/crew"
}'
```
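The same call from Python, as a minimal sketch using the `requests` library (an assumption; any HTTP client works). Note that the webhook URLs are repeated exactly as in the kickoff request, and `BASE_URL` is a placeholder for your deployment's base URL.
```python
import requests

BASE_URL = "https://your-crew-deployment.example.com"  # placeholder deployment URL

payload = {
    "execution_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
    "task_id": "research_task",
    "human_feedback": "Great work! Please add more details.",
    "is_approve": True,
    # Webhooks are NOT carried over from kickoff; repeat them here.
    "taskWebhookUrl": "https://your-server.com/webhooks/task",
    "stepWebhookUrl": "https://your-server.com/webhooks/step",
    "crewWebhookUrl": "https://your-server.com/webhooks/crew",
}

response = requests.post(
    f"{BASE_URL}/resume",
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())
```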
<Warning>
**Feedback Impact on Task Execution**:
It's crucial to exercise care when providing feedback, as the entire feedback content will be incorporated as additional context for further task executions.
@@ -76,4 +98,4 @@ HITL workflows are particularly valuable for:
- Complex decision-making scenarios
- Sensitive or high-stakes operations
- Creative tasks requiring human judgment
- Compliance and regulatory reviews
- Compliance and regulatory reviews

View File

@@ -37,16 +37,28 @@ print(result.raw)
The crew parses thread metadata (subject, created time, roster) and generates an action plan for the receiving team.
## Sample payloads & crews
## Testing Locally
The [Microsoft Teams examples](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/microsoft-teams) include:
Test your Microsoft Teams trigger integration locally using the CrewAI CLI:
- `chat-created.json` → chat creation payload processed by `teams-chat-created-crew.py`
```bash
# View all available triggers
crewai triggers list
The crew demonstrates how to extract participants, initial messages, tenant information, and compliance metadata from the Microsoft Graph webhook payload.
# Simulate a Microsoft Teams trigger with realistic payload
crewai triggers run microsoft_teams/teams_message_created
```
The `crewai triggers run` command will execute your crew with a complete Teams payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run microsoft_teams/teams_message_created` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
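To prototype the extraction logic locally, a small helper like the following can be useful. The keys are assumptions about the Graph notification shape; confirm them against the payload printed by `crewai triggers run microsoft_teams/teams_message_created`.
```python
# Illustrative only: key names are assumptions about the Graph notification shape.
def extract_chat_context(payload: dict) -> dict:
    return {
        "tenant_id": payload.get("tenantId"),
        "created": payload.get("createdDateTime"),
        "participants": [m.get("displayName") for m in payload.get("members", [])],
    }
```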
## Troubleshooting
- Ensure the Teams connection is active; it must be refreshed if the tenant revokes permissions
- Test locally with `crewai triggers run microsoft_teams/teams_message_created` to see the exact payload structure
- Confirm the webhook subscription in Microsoft 365 is still valid if payloads stop arriving
- Review execution logs for payload shape mismatches—Graph notifications may omit fields when a chat is private or restricted
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -36,18 +36,28 @@ crew.kickoff({
The crew inspects file metadata, user activity, and permission changes to produce a compliance-friendly summary.
## Sample payloads & crews
## Testing Locally
The [OneDrive examples](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/onedrive) showcase how to:
Test your OneDrive trigger integration locally using the CrewAI CLI:
- Parse file metadata, size, and folder paths
- Track who created and last modified the file
- Highlight permission and external sharing changes
```bash
# View all available triggers
crewai triggers list
`onedrive-file-crew.py` bundles the analysis and summarization tasks so you can add remediation steps as needed.
# Simulate a OneDrive trigger with realistic payload
crewai triggers run microsoft_onedrive/file_changed
```
The `crewai triggers run` command will execute your crew with a complete OneDrive payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run microsoft_onedrive/file_changed` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
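For local experiments with the sharing checks described above, a hypothetical helper might look like this; the keys are illustrative, so verify them against the payload shown by `crewai triggers run microsoft_onedrive/file_changed`.
```python
# Hypothetical helper: permission/link keys are assumptions, not a guaranteed schema.
def has_external_sharing(payload: dict) -> bool:
    permissions = payload.get("permissions", [])
    return any(p.get("link", {}).get("scope") == "anonymous" for p in permissions)
```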
## Troubleshooting
- Ensure the connected account has permission to read the file metadata included in the webhook
- Test locally with `crewai triggers run microsoft_onedrive/file_changed` to see the exact payload structure
- If the trigger fires but the payload is missing `permissions`, confirm the site-level sharing settings allow Graph to return this field
- For large tenants, filter notifications upstream so the crew only runs on relevant directories
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -36,17 +36,28 @@ crew.kickoff({
The crew extracts sender details, subject, body preview, and attachments before generating a structured response.
## Sample payloads & crews
## Testing Locally
Review the [Outlook examples](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/outlook) for two common scenarios:
Test your Outlook trigger integration locally using the CrewAI CLI:
- `new-message.json` → new mail notifications parsed by `outlook-message-crew.py`
- `event-removed.json` → calendar cleanup handled by `outlook-event-removal-crew.py`
```bash
# View all available triggers
crewai triggers list
Each crew demonstrates how to handle Microsoft Graph payloads, normalize headers, and keep humans in the loop with concise summaries.
# Simulate an Outlook trigger with realistic payload
crewai triggers run microsoft_outlook/email_received
```
The `crewai triggers run` command will execute your crew with a complete Outlook payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run microsoft_outlook/email_received` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
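A quick local sketch of that extraction step could look like the following; the keys are assumptions about the Graph message shape, so check them against the payload from `crewai triggers run microsoft_outlook/email_received`.
```python
# Illustrative only: key names are assumptions about the Graph message shape.
def summarize_message(payload: dict) -> str:
    sender = payload.get("from", {}).get("emailAddress", {}).get("address", "unknown sender")
    subject = payload.get("subject", "(no subject)")
    attachments = len(payload.get("attachments", []))
    return f"'{subject}' from {sender} with {attachments} attachment(s)"
```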
## Troubleshooting
- Verify the Outlook connector is still authorized; the subscription must be renewed periodically
- Test locally with `crewai triggers run microsoft_outlook/email_received` to see the exact payload structure
- If attachments are missing, confirm the webhook subscription includes the `includeResourceData` flag
- Review execution logs when events fail to match—cancellation payloads lack attendee lists by design and the crew should account for that
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -151,3 +151,5 @@ You can check the security check status of a tool at:
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with API integration or troubleshooting.
</Card>

View File

@@ -25,7 +25,7 @@ Before using the Asana integration, ensure you have:
2. Find **Asana** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for task and project management
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account)
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
@@ -33,10 +33,26 @@ Before using the Asana integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
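If you keep the token in a `.env` file, load it before constructing agents. This sketch assumes the optional `python-dotenv` package is installed.
```python
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

# Load CREWAI_PLATFORM_INTEGRATION_TOKEN (and any other variables) from .env
load_dotenv()

if not os.getenv("CREWAI_PLATFORM_INTEGRATION_TOKEN"):
    raise RuntimeError("CREWAI_PLATFORM_INTEGRATION_TOKEN is not set")
```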
## Available Actions
<AccordionGroup>
<Accordion title="ASANA_CREATE_COMMENT">
<Accordion title="asana/create_comment">
**Description:** Create a comment in Asana.
**Parameters:**
@@ -44,7 +60,7 @@ uv add crewai-tools
- `text` (string, required): Text (example: "This is a comment.").
</Accordion>
<Accordion title="ASANA_CREATE_PROJECT">
<Accordion title="asana/create_project">
**Description:** Create a project in Asana.
**Parameters:**
@@ -54,7 +70,7 @@ uv add crewai-tools
- `notes` (string, optional): Notes (example: "These are things we need to purchase.").
</Accordion>
<Accordion title="ASANA_GET_PROJECTS">
<Accordion title="asana/get_projects">
**Description:** Get a list of projects in Asana.
**Parameters:**
@@ -62,14 +78,14 @@ uv add crewai-tools
- Options: `default`, `true`, `false`
</Accordion>
<Accordion title="ASANA_GET_PROJECT_BY_ID">
<Accordion title="asana/get_project_by_id">
**Description:** Get a project by ID in Asana.
**Parameters:**
- `projectFilterId` (string, required): Project ID.
</Accordion>
<Accordion title="ASANA_CREATE_TASK">
<Accordion title="asana/create_task">
**Description:** Create a task in Asana.
**Parameters:**
@@ -83,7 +99,7 @@ uv add crewai-tools
- `gid` (string, optional): External ID - An ID from your application to associate this task with. You can use this ID to sync updates to this task later.
</Accordion>
<Accordion title="ASANA_UPDATE_TASK">
<Accordion title="asana/update_task">
**Description:** Update a task in Asana.
**Parameters:**
@@ -98,7 +114,7 @@ uv add crewai-tools
- `gid` (string, optional): External ID - An ID from your application to associate this task with. You can use this ID to sync updates to this task later.
</Accordion>
<Accordion title="ASANA_GET_TASKS">
<Accordion title="asana/get_tasks">
**Description:** Get a list of tasks in Asana.
**Parameters:**
@@ -108,21 +124,21 @@ uv add crewai-tools
- `completedSince` (string, optional): Completed since - Only return tasks that are either incomplete or that have been completed since this time (ISO or Unix timestamp). (example: "2014-04-25T16:15:47-04:00").
</Accordion>
<Accordion title="ASANA_GET_TASKS_BY_ID">
<Accordion title="asana/get_tasks_by_id">
**Description:** Get a list of tasks by ID in Asana.
**Parameters:**
- `taskId` (string, required): Task ID.
</Accordion>
<Accordion title="ASANA_GET_TASK_BY_EXTERNAL_ID">
<Accordion title="asana/get_task_by_external_id">
**Description:** Get a task by external ID in Asana.
**Parameters:**
- `gid` (string, required): External ID - The ID that this task is associated or synced with, from your application.
</Accordion>
<Accordion title="ASANA_ADD_TASK_TO_SECTION">
<Accordion title="asana/add_task_to_section">
**Description:** Add a task to a section in Asana.
**Parameters:**
@@ -132,14 +148,14 @@ uv add crewai-tools
- `afterTaskId` (string, optional): After Task ID - The ID of a task in this section that this task will be inserted after. Cannot be used with Before Task ID. (example: "1204619611402340").
</Accordion>
<Accordion title="ASANA_GET_TEAMS">
<Accordion title="asana/get_teams">
**Description:** Get a list of teams in Asana.
**Parameters:**
- `workspace` (string, required): Workspace - Returns the teams in this workspace visible to the authorized user.
</Accordion>
<Accordion title="ASANA_GET_WORKSPACES">
<Accordion title="asana/get_workspaces">
**Description:** Get a list of workspaces in Asana.
**Parameters:** None required.
@@ -152,19 +168,13 @@ uv add crewai-tools
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Asana tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with Asana capabilities
asana_agent = Agent(
role="Project Manager",
goal="Manage tasks and projects in Asana efficiently",
backstory="An AI assistant specialized in project management and task coordination.",
tools=[enterprise_tools]
apps=['asana'] # All Asana actions will be available
)
# Task to create a new project
@@ -186,19 +196,18 @@ crew.kickoff()
### Filtering Specific Asana Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific Asana tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["asana_create_task", "asana_update_task", "asana_get_tasks"]
)
from crewai import Agent, Task, Crew
# Create agent with specific Asana actions only
task_manager_agent = Agent(
role="Task Manager",
goal="Create and manage tasks efficiently",
backstory="An AI assistant that focuses on task creation and management.",
tools=enterprise_tools
apps=[
'asana/create_task',
'asana/update_task',
'asana/get_tasks'
] # Specific Asana actions
)
# Task to create and assign a task
@@ -220,17 +229,12 @@ crew.kickoff()
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
project_coordinator = Agent(
role="Project Coordinator",
goal="Coordinate project activities and track progress",
backstory="An experienced project coordinator who ensures projects run smoothly.",
tools=[enterprise_tools]
apps=['asana']
)
# Complex task involving multiple Asana operations

View File

@@ -25,7 +25,7 @@ Before using the Box integration, ensure you have:
2. Find **Box** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for file and folder management
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account)
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
@@ -33,10 +33,26 @@ Before using the Box integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
<Accordion title="BOX_SAVE_FILE">
<Accordion title="box/save_file">
**Description:** Save a file from URL in Box.
**Parameters:**
@@ -52,7 +68,7 @@ uv add crewai-tools
- `file` (string, required): File URL - Files must be smaller than 50MB in size. (example: "https://picsum.photos/200/300").
</Accordion>
<Accordion title="BOX_SAVE_FILE_FROM_OBJECT">
<Accordion title="box/save_file_from_object">
**Description:** Save a file in Box.
**Parameters:**
@@ -61,14 +77,14 @@ uv add crewai-tools
- `folder` (string, optional): Folder - Use Connect Portal Workflow Settings to allow users to select the File's Folder destination. Defaults to the user's root folder if left blank.
</Accordion>
<Accordion title="BOX_GET_FILE_BY_ID">
<Accordion title="box/get_file_by_id">
**Description:** Get a file by ID in Box.
**Parameters:**
- `fileId` (string, required): File ID - The unique identifier that represents a file. (example: "12345").
</Accordion>
<Accordion title="BOX_LIST_FILES">
<Accordion title="box/list_files">
**Description:** List files in Box.
**Parameters:**
@@ -93,7 +109,7 @@ uv add crewai-tools
```
</Accordion>
<Accordion title="BOX_CREATE_FOLDER">
<Accordion title="box/create_folder">
**Description:** Create a folder in Box.
**Parameters:**
@@ -106,7 +122,7 @@ uv add crewai-tools
```
</Accordion>
<Accordion title="BOX_MOVE_FOLDER">
<Accordion title="box/move_folder">
**Description:** Move a folder in Box.
**Parameters:**
@@ -120,14 +136,14 @@ uv add crewai-tools
```
</Accordion>
<Accordion title="BOX_GET_FOLDER_BY_ID">
<Accordion title="box/get_folder_by_id">
**Description:** Get a folder by ID in Box.
**Parameters:**
- `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0").
</Accordion>
<Accordion title="BOX_SEARCH_FOLDERS">
<Accordion title="box/search_folders">
**Description:** Search folders in Box.
**Parameters:**
@@ -152,7 +168,7 @@ uv add crewai-tools
```
</Accordion>
<Accordion title="BOX_DELETE_FOLDER">
<Accordion title="box/delete_folder">
**Description:** Delete a folder in Box.
**Parameters:**
@@ -167,19 +183,14 @@ uv add crewai-tools
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Box tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
from crewai import Agent, Task, Crew
# Create an agent with Box capabilities
box_agent = Agent(
role="Document Manager",
goal="Manage files and folders in Box efficiently",
backstory="An AI assistant specialized in document management and file organization.",
tools=[enterprise_tools]
apps=['box'] # All Box actions will be available
)
# Task to create a folder structure
@@ -201,19 +212,14 @@ crew.kickoff()
### Filtering Specific Box Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific Box tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["box_create_folder", "box_save_file", "box_list_files"]
)
from crewai import Agent, Task, Crew
# Create agent with specific Box actions only
file_organizer_agent = Agent(
role="File Organizer",
goal="Organize and manage file storage efficiently",
backstory="An AI assistant that focuses on file organization and storage management.",
tools=enterprise_tools
apps=['box/create_folder', 'box/save_file', 'box/list_files'] # Specific Box actions
)
# Task to organize files
@@ -235,17 +241,12 @@ crew.kickoff()
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
file_manager = Agent(
role="File Manager",
goal="Maintain organized file structure and manage document lifecycle",
backstory="An experienced file manager who ensures documents are properly organized and accessible.",
tools=[enterprise_tools]
apps=['box']
)
# Complex task involving multiple Box operations

View File

@@ -25,7 +25,7 @@ Before using the ClickUp integration, ensure you have:
2. Find **ClickUp** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for task and project management
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account)
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
@@ -33,10 +33,26 @@ Before using the ClickUp integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
<Accordion title="CLICKUP_SEARCH_TASKS">
<Accordion title="clickup/search_tasks">
**Description:** Search for tasks in ClickUp using advanced filters.
**Parameters:**
@@ -61,7 +77,7 @@ uv add crewai-tools
Available fields: `space_ids%5B%5D`, `project_ids%5B%5D`, `list_ids%5B%5D`, `statuses%5B%5D`, `include_closed`, `assignees%5B%5D`, `tags%5B%5D`, `due_date_gt`, `due_date_lt`, `date_created_gt`, `date_created_lt`, `date_updated_gt`, `date_updated_lt`
</Accordion>
<Accordion title="CLICKUP_GET_TASK_IN_LIST">
<Accordion title="clickup/get_task_in_list">
**Description:** Get tasks in a specific list in ClickUp.
**Parameters:**
@@ -69,7 +85,7 @@ uv add crewai-tools
- `taskFilterFormula` (string, optional): Search for tasks that match specified filters. For example: name=task1.
</Accordion>
<Accordion title="CLICKUP_CREATE_TASK">
<Accordion title="clickup/create_task">
**Description:** Create a task in ClickUp.
**Parameters:**
@@ -82,7 +98,7 @@ uv add crewai-tools
- `additionalFields` (string, optional): Additional Fields - Specify additional fields to include on this task as JSON.
</Accordion>
<Accordion title="CLICKUP_UPDATE_TASK">
<Accordion title="clickup/update_task">
**Description:** Update a task in ClickUp.
**Parameters:**
@@ -96,49 +112,49 @@ uv add crewai-tools
- `additionalFields` (string, optional): Additional Fields - Specify additional fields to include on this task as JSON.
</Accordion>
<Accordion title="CLICKUP_DELETE_TASK">
<Accordion title="clickup/delete_task">
**Description:** Delete a task in ClickUp.
**Parameters:**
- `taskId` (string, required): Task ID - The ID of the task to delete.
</Accordion>
<Accordion title="CLICKUP_GET_LIST">
<Accordion title="clickup/get_list">
**Description:** Get List information in ClickUp.
**Parameters:**
- `spaceId` (string, required): Space ID - The ID of the space containing the lists.
</Accordion>
<Accordion title="CLICKUP_GET_CUSTOM_FIELDS_IN_LIST">
<Accordion title="clickup/get_custom_fields_in_list">
**Description:** Get Custom Fields in a List in ClickUp.
**Parameters:**
- `listId` (string, required): List ID - The ID of the list to get custom fields from.
</Accordion>
<Accordion title="CLICKUP_GET_ALL_FIELDS_IN_LIST">
<Accordion title="clickup/get_all_fields_in_list">
**Description:** Get All Fields in a List in ClickUp.
**Parameters:**
- `listId` (string, required): List ID - The ID of the list to get all fields from.
</Accordion>
<Accordion title="CLICKUP_GET_SPACE">
<Accordion title="clickup/get_space">
**Description:** Get Space information in ClickUp.
**Parameters:**
- `spaceId` (string, optional): Space ID - The ID of the space to retrieve.
</Accordion>
<Accordion title="CLICKUP_GET_FOLDERS">
<Accordion title="clickup/get_folders">
**Description:** Get Folders in ClickUp.
**Parameters:**
- `spaceId` (string, required): Space ID - The ID of the space containing the folders.
</Accordion>
<Accordion title="CLICKUP_GET_MEMBER">
<Accordion title="clickup/get_member">
**Description:** Get Member information in ClickUp.
**Parameters:** None required.
@@ -151,19 +167,14 @@ uv add crewai-tools
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
from crewai import Agent, Task, Crew
# Get enterprise tools (ClickUp tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with ClickUp capabilities
# Create an agent with ClickUp capabilities
clickup_agent = Agent(
role="Task Manager",
goal="Manage tasks and projects in ClickUp efficiently",
backstory="An AI assistant specialized in task management and productivity coordination.",
tools=[enterprise_tools]
apps=['clickup'] # All ClickUp actions will be available
)
# Task to create a new task
@@ -185,19 +196,12 @@ crew.kickoff()
### Filtering Specific ClickUp Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific ClickUp tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["clickup_create_task", "clickup_update_task", "clickup_search_tasks"]
)
task_coordinator = Agent(
role="Task Coordinator",
goal="Create and manage tasks efficiently",
backstory="An AI assistant that focuses on task creation and status management.",
tools=enterprise_tools
apps=['clickup/create_task']
)
# Task to manage task workflow
@@ -219,17 +223,12 @@ crew.kickoff()
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
project_manager = Agent(
role="Project Manager",
goal="Coordinate project activities and track team productivity",
backstory="An experienced project manager who ensures projects are delivered on time.",
tools=[enterprise_tools]
apps=['clickup']
)
# Complex task involving multiple ClickUp operations
@@ -256,17 +255,12 @@ crew.kickoff()
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
task_analyst = Agent(
role="Task Analyst",
goal="Analyze task patterns and optimize team productivity",
backstory="An AI assistant that analyzes task data to improve team efficiency.",
tools=[enterprise_tools]
apps=['clickup']
)
# Task to analyze and optimize task distribution

View File

@@ -25,7 +25,7 @@ Before using the GitHub integration, ensure you have:
2. Find **GitHub** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for repository and issue management
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account)
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
@@ -33,10 +33,26 @@ Before using the GitHub integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
<Accordion title="GITHUB_CREATE_ISSUE">
<Accordion title="github/create_issue">
**Description:** Create an issue in GitHub.
**Parameters:**
@@ -47,7 +63,7 @@ uv add crewai-tools
- `assignees` (string, optional): Assignees - Specify the assignee(s)' GitHub login as an array of strings for this issue. (example: `["octocat"]`).
</Accordion>
<Accordion title="GITHUB_UPDATE_ISSUE">
<Accordion title="github/update_issue">
**Description:** Update an issue in GitHub.
**Parameters:**
@@ -61,7 +77,7 @@ uv add crewai-tools
- Options: `open`, `closed`
</Accordion>
<Accordion title="GITHUB_GET_ISSUE_BY_NUMBER">
<Accordion title="github/get_issue_by_number">
**Description:** Get an issue by number in GitHub.
**Parameters:**
@@ -70,7 +86,7 @@ uv add crewai-tools
- `issue_number` (string, required): Issue Number - Specify the number of the issue to fetch.
</Accordion>
<Accordion title="GITHUB_LOCK_ISSUE">
<Accordion title="github/lock_issue">
**Description:** Lock an issue in GitHub.
**Parameters:**
@@ -81,7 +97,7 @@ uv add crewai-tools
- Options: `off-topic`, `too heated`, `resolved`, `spam`
</Accordion>
<Accordion title="GITHUB_SEARCH_ISSUE">
<Accordion title="github/search_issue">
**Description:** Search for issues in GitHub.
**Parameters:**
@@ -108,7 +124,7 @@ uv add crewai-tools
Available fields: `assignee`, `creator`, `mentioned`, `labels`
</Accordion>
<Accordion title="GITHUB_CREATE_RELEASE">
<Accordion title="github/create_release">
**Description:** Create a release in GitHub.
**Parameters:**
@@ -126,7 +142,7 @@ uv add crewai-tools
- Options: `true`, `false`
</Accordion>
<Accordion title="GITHUB_UPDATE_RELEASE">
<Accordion title="github/update_release">
**Description:** Update a release in GitHub.
**Parameters:**
@@ -145,7 +161,7 @@ uv add crewai-tools
- Options: `true`, `false`
</Accordion>
<Accordion title="GITHUB_GET_RELEASE_BY_ID">
<Accordion title="github/get_release_by_id">
**Description:** Get a release by ID in GitHub.
**Parameters:**
@@ -154,7 +170,7 @@ uv add crewai-tools
- `id` (string, required): Release ID - Specify the release ID of the release to fetch.
</Accordion>
<Accordion title="GITHUB_GET_RELEASE_BY_TAG_NAME">
<Accordion title="github/get_release_by_tag_name">
**Description:** Get a release by tag name in GitHub.
**Parameters:**
@@ -163,7 +179,7 @@ uv add crewai-tools
- `tag_name` (string, required): Name - Specify the tag of the release to fetch. (example: "v1.0.0").
</Accordion>
<Accordion title="GITHUB_DELETE_RELEASE">
<Accordion title="github/delete_release">
**Description:** Delete a release in GitHub.
**Parameters:**
@@ -179,19 +195,14 @@ uv add crewai-tools
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
from crewai import Agent, Task, Crew
# Get enterprise tools (GitHub tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with GitHub capabilities
# Create an agent with GitHub capabilities
github_agent = Agent(
role="Repository Manager",
goal="Manage GitHub repositories, issues, and releases efficiently",
backstory="An AI assistant specialized in repository management and issue tracking.",
tools=[enterprise_tools]
apps=['github'] # All GitHub actions will be available
)
# Task to create a new issue
@@ -213,19 +224,12 @@ crew.kickoff()
### Filtering Specific GitHub Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific GitHub tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["github_create_issue", "github_update_issue", "github_search_issue"]
)
issue_manager = Agent(
role="Issue Manager",
goal="Create and manage GitHub issues efficiently",
backstory="An AI assistant that focuses on issue tracking and management.",
tools=enterprise_tools
apps=['github/create_issue']
)
# Task to manage issue workflow
@@ -247,17 +251,12 @@ crew.kickoff()
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
release_manager = Agent(
role="Release Manager",
goal="Manage software releases and versioning",
backstory="An experienced release manager who handles version control and release processes.",
tools=[enterprise_tools]
apps=['github']
)
# Task to create a new release
@@ -284,17 +283,12 @@ crew.kickoff()
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
project_coordinator = Agent(
role="Project Coordinator",
goal="Track and coordinate project issues and development progress",
backstory="An AI assistant that helps coordinate development work and track project progress.",
tools=[enterprise_tools]
apps=['github']
)
# Complex task involving multiple GitHub operations

View File

@@ -25,7 +25,7 @@ Before using the Gmail integration, ensure you have:
2. Find **Gmail** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for email and contact management
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account)
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
@@ -33,141 +33,122 @@ Before using the Gmail integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
<Accordion title="GMAIL_SEND_EMAIL">
**Description:** Send an email in Gmail.
<Accordion title="gmail/fetch_emails">
**Description:** Retrieve a list of messages.
**Parameters:**
- `toRecipients` (array, required): To - Specify the recipients as either a single string or a JSON array.
```json
[
"recipient1@domain.com",
"recipient2@domain.com"
]
```
- `from` (string, required): From - Specify the email of the sender.
- `subject` (string, required): Subject - Specify the subject of the message.
- `messageContent` (string, required): Message Content - Specify the content of the email message as plain text or HTML.
- `attachments` (string, optional): Attachments - Accepts either a single file object or a JSON array of file objects.
- `additionalHeaders` (object, optional): Additional Headers - Specify any additional header fields here.
```json
{
"reply-to": "Sender Name <sender@domain.com>"
}
```
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
- `q` (string, optional): Search query to filter messages (e.g., 'from:someone@example.com is:unread').
- `maxResults` (integer, optional): Maximum number of messages to return (1-500). (default: 100)
- `pageToken` (string, optional): Page token to retrieve a specific page of results.
- `labelIds` (array, optional): Only return messages with labels that match all of the specified label IDs.
- `includeSpamTrash` (boolean, optional): Include messages from SPAM and TRASH in the results. (default: false)
</Accordion>
<Accordion title="GMAIL_GET_EMAIL_BY_ID">
**Description:** Get an email by ID in Gmail.
<Accordion title="gmail/send_email">
**Description:** Send an email.
**Parameters:**
- `userId` (string, required): User ID - Specify the user's email address. (example: "user@domain.com").
- `messageId` (string, required): Message ID - Specify the ID of the message to retrieve.
- `to` (string, required): Recipient email address.
- `subject` (string, required): Email subject line.
- `body` (string, required): Email message content.
- `userId` (string, optional): The user's email address or 'me' for the authenticated user. (default: "me")
- `cc` (string, optional): CC email addresses (comma-separated).
- `bcc` (string, optional): BCC email addresses (comma-separated).
- `from` (string, optional): Sender email address (if different from authenticated user).
- `replyTo` (string, optional): Reply-to email address.
- `threadId` (string, optional): Thread ID if replying to an existing conversation.
</Accordion>
<Accordion title="GMAIL_SEARCH_FOR_EMAIL">
**Description:** Search for emails in Gmail using advanced filters.
<Accordion title="gmail/delete_email">
**Description:** Delete an email by ID.
**Parameters:**
- `emailFilterFormula` (object, optional): A filter in disjunctive normal form - OR of AND groups of single conditions.
```json
{
"operator": "OR",
"conditions": [
{
"operator": "AND",
"conditions": [
{
"field": "from",
"operator": "$stringContains",
"value": "example@domain.com"
}
]
}
]
}
```
Available fields: `from`, `to`, `date`, `label`, `subject`, `cc`, `bcc`, `category`, `deliveredto:`, `size`, `filename`, `older_than`, `newer_than`, `list`, `is:important`, `is:unread`, `is:snoozed`, `is:starred`, `is:read`, `has:drive`, `has:document`, `has:spreadsheet`, `has:presentation`, `has:attachment`, `has:youtube`, `has:userlabels`
- `paginationParameters` (object, optional): Pagination Parameters.
```json
{
"pageCursor": "page_cursor_string"
}
```
- `userId` (string, required): The user's email address or 'me' for the authenticated user.
- `id` (string, required): The ID of the message to delete.
</Accordion>
<Accordion title="GMAIL_DELETE_EMAIL">
**Description:** Delete an email in Gmail.
<Accordion title="gmail/create_draft">
**Description:** Create a new draft email.
**Parameters:**
- `userId` (string, required): User ID - Specify the user's email address. (example: "user@domain.com").
- `messageId` (string, required): Message ID - Specify the ID of the message to trash.
- `userId` (string, required): The user's email address or 'me' for the authenticated user.
- `message` (object, required): Message object containing the draft content.
- `raw` (string, required): Base64url encoded email message.
</Accordion>
<Accordion title="GMAIL_CREATE_A_CONTACT">
**Description:** Create a contact in Gmail.
<Accordion title="gmail/get_message">
**Description:** Retrieve a specific message by ID.
**Parameters:**
- `givenName` (string, required): Given Name - Specify the Given Name of the Contact to create. (example: "John").
- `familyName` (string, required): Family Name - Specify the Family Name of the Contact to create. (example: "Doe").
- `email` (string, required): Email - Specify the Email Address of the Contact to create.
- `additionalFields` (object, optional): Additional Fields - Additional contact information.
```json
{
"addresses": [
{
"streetAddress": "1000 North St.",
"city": "Los Angeles"
}
]
}
```
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
- `id` (string, required): The ID of the message to retrieve.
- `format` (string, optional): The format to return the message in. Options: "full", "metadata", "minimal", "raw". (default: "full")
- `metadataHeaders` (array, optional): When given and format is METADATA, only include headers specified.
</Accordion>
<Accordion title="GMAIL_GET_CONTACT_BY_RESOURCE_NAME">
**Description:** Get a contact by resource name in Gmail.
<Accordion title="gmail/get_attachment">
**Description:** Retrieve a message attachment.
**Parameters:**
- `resourceName` (string, required): Resource Name - Specify the resource name of the contact to fetch.
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
- `messageId` (string, required): The ID of the message containing the attachment.
- `id` (string, required): The ID of the attachment to retrieve.
</Accordion>
<Accordion title="GMAIL_SEARCH_FOR_CONTACT">
**Description:** Search for a contact in Gmail.
<Accordion title="gmail/fetch_thread">
**Description:** Retrieve a specific email thread by ID.
**Parameters:**
- `searchTerm` (string, required): Term - Specify a search term to search for near or exact matches on the names, nickNames, emailAddresses, phoneNumbers, or organizations Contact properties.
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
- `id` (string, required): The ID of the thread to retrieve.
- `format` (string, optional): The format to return the messages in. Options: "full", "metadata", "minimal". (default: "full")
- `metadataHeaders` (array, optional): When given and format is METADATA, only include headers specified.
</Accordion>
<Accordion title="GMAIL_DELETE_CONTACT">
**Description:** Delete a contact in Gmail.
<Accordion title="gmail/modify_thread">
**Description:** Modify the labels applied to a thread.
**Parameters:**
- `resourceName` (string, required): Resource Name - Specify the resource name of the contact to delete.
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
- `id` (string, required): The ID of the thread to modify.
- `addLabelIds` (array, optional): A list of IDs of labels to add to this thread.
- `removeLabelIds` (array, optional): A list of IDs of labels to remove from this thread.
</Accordion>
<Accordion title="GMAIL_CREATE_DRAFT">
**Description:** Create a draft in Gmail.
<Accordion title="gmail/trash_thread">
**Description:** Move a thread to the trash.
**Parameters:**
- `toRecipients` (array, optional): To - Specify the recipients as either a single string or a JSON array.
```json
[
"recipient1@domain.com",
"recipient2@domain.com"
]
```
- `from` (string, optional): From - Specify the email of the sender.
- `subject` (string, optional): Subject - Specify the subject of the message.
- `messageContent` (string, optional): Message Content - Specify the content of the email message as plain text or HTML.
- `attachments` (string, optional): Attachments - Accepts either a single file object or a JSON array of file objects.
- `additionalHeaders` (object, optional): Additional Headers - Specify any additional header fields here.
```json
{
"reply-to": "Sender Name <sender@domain.com>"
}
```
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
- `id` (string, required): The ID of the thread to trash.
</Accordion>
<Accordion title="gmail/untrash_thread">
**Description:** Remove a thread from the trash.
**Parameters:**
- `userId` (string, required): The user's email address or 'me' for the authenticated user. (default: "me")
- `id` (string, required): The ID of the thread to untrash.
</Accordion>
</AccordionGroup>
@@ -177,19 +158,13 @@ uv add crewai-tools
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Gmail tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with Gmail capabilities
gmail_agent = Agent(
role="Email Manager",
goal="Manage email communications and contacts efficiently",
goal="Manage email communications and messages efficiently",
backstory="An AI assistant specialized in email management and communication.",
tools=[enterprise_tools]
apps=['gmail'] # All Gmail actions will be available
)
# Task to send a follow-up email
@@ -211,19 +186,18 @@ crew.kickoff()
### Filtering Specific Gmail Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific Gmail tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["gmail_send_email", "gmail_search_for_email", "gmail_create_draft"]
)
from crewai import Agent, Task, Crew
# Create agent with specific Gmail actions only
email_coordinator = Agent(
role="Email Coordinator",
goal="Coordinate email communications and manage drafts",
backstory="An AI assistant that focuses on email coordination and draft management.",
tools=enterprise_tools
apps=[
'gmail/send_email',
'gmail/fetch_emails',
'gmail/create_draft'
]
)
# Task to prepare and send emails
@@ -241,57 +215,17 @@ crew = Crew(
crew.kickoff()
```
### Contact Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
contact_manager = Agent(
role="Contact Manager",
goal="Manage and organize email contacts efficiently",
backstory="An experienced contact manager who maintains organized contact databases.",
tools=[enterprise_tools]
)
# Task to manage contacts
contact_task = Task(
description="""
1. Search for contacts from the 'example.com' domain
2. Create new contacts for recent email senders not in the contact list
3. Update contact information with recent interaction data
""",
agent=contact_manager,
expected_output="Contact database updated with new contacts and recent interactions"
)
crew = Crew(
agents=[contact_manager],
tasks=[contact_task]
)
crew.kickoff()
```
### Email Search and Analysis
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create agent with Gmail search and analysis capabilities
email_analyst = Agent(
role="Email Analyst",
goal="Analyze email patterns and provide insights",
backstory="An AI assistant that analyzes email data to provide actionable insights.",
tools=[enterprise_tools]
apps=['gmail/fetch_emails', 'gmail/get_message'] # Specific actions for email analysis
)
# Task to analyze email patterns
@@ -313,38 +247,37 @@ crew = Crew(
crew.kickoff()
```
### Automated Email Workflows
### Thread Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
# Create agent with Gmail thread management capabilities
thread_manager = Agent(
role="Thread Manager",
goal="Organize and manage email threads efficiently",
backstory="An AI assistant that specializes in email thread organization and management.",
apps=[
'gmail/fetch_thread',
'gmail/modify_thread',
'gmail/trash_thread'
]
)
workflow_manager = Agent(
role="Email Workflow Manager",
goal="Automate email workflows and responses",
backstory="An AI assistant that manages automated email workflows and responses.",
tools=[enterprise_tools]
)
# Complex task involving multiple Gmail operations
workflow_task = Task(
# Task to organize email threads
thread_task = Task(
description="""
1. Search for emails with 'urgent' in the subject from the last 24 hours
2. Create draft responses for each urgent email
3. Send automated acknowledgment emails to senders
4. Create a summary report of urgent items requiring attention
1. Fetch all threads from the last month
2. Apply appropriate labels to organize threads by project
3. Archive or trash threads that are no longer relevant
""",
agent=workflow_manager,
expected_output="Urgent emails processed with automated responses and summary report"
agent=thread_manager,
expected_output="Email threads organized with appropriate labels and cleanup completed"
)
crew = Crew(
agents=[workflow_manager],
tasks=[workflow_task]
agents=[thread_manager],
tasks=[thread_task]
)
crew.kickoff()

View File

@@ -24,8 +24,8 @@ Before using the Google Calendar integration, ensure you have:
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Google Calendar** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for calendar and contact access
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account)
4. Grant the necessary permissions for calendar access
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
@@ -33,144 +33,140 @@ Before using the Google Calendar integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
<Accordion title="GOOGLE_CALENDAR_CREATE_EVENT">
**Description:** Create an event in Google Calendar.
<Accordion title="google_calendar/get_availability">
**Description:** Get calendar availability (free/busy information).
**Parameters:**
- `eventName` (string, required): Event name.
- `startTime` (string, required): Start time - Accepts Unix timestamp or ISO8601 date formats.
- `endTime` (string, optional): End time - Defaults to one hour after the start time if left blank.
- `calendar` (string, optional): Calendar - Use Connect Portal Workflow Settings to allow users to select which calendar the event will be added to. Defaults to the user's primary calendar if left blank.
- `attendees` (string, optional): Attendees - Accepts an array of email addresses or email addresses separated by commas.
- `eventLocation` (string, optional): Event location.
- `eventDescription` (string, optional): Event description.
- `eventId` (string, optional): Event ID - An ID from your application to associate this event with. You can use this ID to sync updates to this event later.
- `includeMeetLink` (boolean, optional): Include Google Meet link? - Automatically creates Google Meet conference link for this event.
</Accordion>
<Accordion title="GOOGLE_CALENDAR_UPDATE_EVENT">
**Description:** Update an existing event in Google Calendar.
**Parameters:**
- `eventId` (string, required): Event ID - The ID of the event to update.
- `eventName` (string, optional): Event name.
- `startTime` (string, optional): Start time - Accepts Unix timestamp or ISO8601 date formats.
- `endTime` (string, optional): End time - Defaults to one hour after the start time if left blank.
- `calendar` (string, optional): Calendar - Use Connect Portal Workflow Settings to allow users to select which calendar the event will be added to. Defaults to the user's primary calendar if left blank.
- `attendees` (string, optional): Attendees - Accepts an array of email addresses or email addresses separated by commas.
- `eventLocation` (string, optional): Event location.
- `eventDescription` (string, optional): Event description.
</Accordion>
<Accordion title="GOOGLE_CALENDAR_LIST_EVENTS">
**Description:** List events from Google Calendar.
**Parameters:**
- `calendar` (string, optional): Calendar - Use Connect Portal Workflow Settings to allow users to select which calendar the event will be added to. Defaults to the user's primary calendar if left blank.
- `after` (string, optional): After - Filters events that start after the provided date (Unix in milliseconds or ISO timestamp). (example: "2025-04-12T10:00:00Z or 1712908800000").
- `before` (string, optional): Before - Filters events that end before the provided date (Unix in milliseconds or ISO timestamp). (example: "2025-04-12T10:00:00Z or 1712908800000").
</Accordion>
<Accordion title="GOOGLE_CALENDAR_GET_EVENT_BY_ID">
**Description:** Get a specific event by ID from Google Calendar.
**Parameters:**
- `eventId` (string, required): Event ID.
- `calendar` (string, optional): Calendar - Use Connect Portal Workflow Settings to allow users to select which calendar the event will be added to. Defaults to the user's primary calendar if left blank.
</Accordion>
<Accordion title="GOOGLE_CALENDAR_DELETE_EVENT">
**Description:** Delete an event from Google Calendar.
**Parameters:**
- `eventId` (string, required): Event ID - The ID of the calendar event to be deleted.
- `calendar` (string, optional): Calendar - Use Connect Portal Workflow Settings to allow users to select which calendar the event will be added to. Defaults to the user's primary calendar if left blank.
</Accordion>
<Accordion title="GOOGLE_CALENDAR_GET_CONTACTS">
**Description:** Get contacts from Google Calendar.
**Parameters:**
- `paginationParameters` (object, optional): Pagination Parameters.
```json
{
"pageCursor": "page_cursor_string"
}
```
</Accordion>
<Accordion title="GOOGLE_CALENDAR_SEARCH_CONTACTS">
**Description:** Search for contacts in Google Calendar.
**Parameters:**
- `query` (string, optional): Search query to search contacts.
</Accordion>
<Accordion title="GOOGLE_CALENDAR_LIST_DIRECTORY_PEOPLE">
**Description:** List directory people.
**Parameters:**
- `paginationParameters` (object, optional): Pagination Parameters.
```json
{
"pageCursor": "page_cursor_string"
}
```
</Accordion>
<Accordion title="GOOGLE_CALENDAR_SEARCH_DIRECTORY_PEOPLE">
**Description:** Search directory people.
**Parameters:**
- `query` (string, required): Search query to search contacts.
- `paginationParameters` (object, optional): Pagination Parameters.
```json
{
"pageCursor": "page_cursor_string"
}
```
</Accordion>
<Accordion title="GOOGLE_CALENDAR_LIST_OTHER_CONTACTS">
**Description:** List other contacts.
**Parameters:**
- `paginationParameters` (object, optional): Pagination Parameters.
```json
{
"pageCursor": "page_cursor_string"
}
```
</Accordion>
<Accordion title="GOOGLE_CALENDAR_SEARCH_OTHER_CONTACTS">
**Description:** Search other contacts.
**Parameters:**
- `query` (string, optional): Search query to search contacts.
</Accordion>
<Accordion title="GOOGLE_CALENDAR_GET_AVAILABILITY">
**Description:** Get availability information for calendars.
**Parameters:**
- `timeMin` (string, required): The start of the interval. In ISO format.
- `timeMax` (string, required): The end of the interval. In ISO format.
- `timeZone` (string, optional): Time zone used in the response. Optional. The default is UTC.
- `items` (array, optional): List of calendars and/or groups to query. Defaults to the user default calendar.
- `timeMin` (string, required): Start time (RFC3339 format)
- `timeMax` (string, required): End time (RFC3339 format)
- `items` (array, required): Calendar IDs to check
```json
[
{
"id": "calendar_id_1"
},
{
"id": "calendar_id_2"
"id": "calendar_id"
}
]
```
- `timeZone` (string, optional): Time zone used in the response. The default is UTC.
- `groupExpansionMax` (integer, optional): Maximal number of calendar identifiers to be provided for a single group. Maximum: 100
- `calendarExpansionMax` (integer, optional): Maximal number of calendars for which FreeBusy information is to be provided. Maximum: 50
</Accordion>
<Accordion title="google_calendar/create_event">
**Description:** Create a new event in the specified calendar.
**Parameters:**
- `calendarId` (string, required): Calendar ID (use 'primary' for main calendar)
- `summary` (string, required): Event title/summary
- `start_dateTime` (string, required): Start time in RFC3339 format (e.g., 2024-01-20T10:00:00-07:00)
- `end_dateTime` (string, required): End time in RFC3339 format
- `description` (string, optional): Event description
- `timeZone` (string, optional): Time zone (e.g., America/Los_Angeles)
- `location` (string, optional): Geographic location of the event as free-form text.
- `attendees` (array, optional): List of attendees for the event.
```json
[
{
"email": "attendee@example.com",
"displayName": "Attendee Name",
"optional": false
}
]
```
- `reminders` (object, optional): Information about the event's reminders.
```json
{
"useDefault": true,
"overrides": [
{
"method": "email",
"minutes": 15
}
]
}
```
- `conferenceData` (object, optional): The conference-related information, such as details of a Google Meet conference.
```json
{
"createRequest": {
"requestId": "unique-request-id",
"conferenceSolutionKey": {
"type": "hangoutsMeet"
}
}
}
```
- `visibility` (string, optional): Visibility of the event. Options: default, public, private, confidential. Default: default
- `transparency` (string, optional): Whether the event blocks time on the calendar. Options: opaque, transparent. Default: opaque
</Accordion>
<Accordion title="google_calendar/view_events">
**Description:** Retrieve events for the specified calendar.
**Parameters:**
- `calendarId` (string, required): Calendar ID (use 'primary' for main calendar)
- `timeMin` (string, optional): Lower bound for events (RFC3339)
- `timeMax` (string, optional): Upper bound for events (RFC3339)
- `maxResults` (integer, optional): Maximum number of events (default 10). Minimum: 1, Maximum: 2500
- `orderBy` (string, optional): The order of the events returned in the result. Options: startTime, updated. Default: startTime
- `singleEvents` (boolean, optional): Whether to expand recurring events into instances and only return single one-off events and instances of recurring events. Default: true
- `showDeleted` (boolean, optional): Whether to include deleted events (with status equals cancelled) in the result. Default: false
- `showHiddenInvitations` (boolean, optional): Whether to include hidden invitations in the result. Default: false
- `q` (string, optional): Free text search terms to find events that match these terms in any field.
- `pageToken` (string, optional): Token specifying which result page to return.
- `timeZone` (string, optional): Time zone used in the response.
- `updatedMin` (string, optional): Lower bound for an event's last modification time (RFC3339) to filter by.
- `iCalUID` (string, optional): Specifies an event ID in the iCalendar format to be provided in the response.
</Accordion>
<Accordion title="google_calendar/update_event">
**Description:** Update an existing event.
**Parameters:**
- `calendarId` (string, required): Calendar ID
- `eventId` (string, required): Event ID to update
- `summary` (string, optional): Updated event title
- `description` (string, optional): Updated event description
- `start_dateTime` (string, optional): Updated start time
- `end_dateTime` (string, optional): Updated end time
</Accordion>
<Accordion title="google_calendar/delete_event">
**Description:** Delete a specified event.
**Parameters:**
- `calendarId` (string, required): Calendar ID
- `eventId` (string, required): Event ID to delete
</Accordion>
<Accordion title="google_calendar/view_calendar_list">
**Description:** Retrieve user's calendar list.
**Parameters:**
- `maxResults` (integer, optional): Maximum number of entries returned on one result page. Minimum: 1
- `pageToken` (string, optional): Token specifying which result page to return.
- `showDeleted` (boolean, optional): Whether to include deleted calendar list entries in the result. Default: false
- `showHidden` (boolean, optional): Whether to show hidden entries. Default: false
- `minAccessRole` (string, optional): The minimum access role for the user in the returned entries. Options: freeBusyReader, owner, reader, writer
</Accordion>
</AccordionGroup>
@@ -180,19 +176,13 @@ uv add crewai-tools
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Google Calendar tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with Google Calendar capabilities
calendar_agent = Agent(
role="Schedule Manager",
goal="Manage calendar events and scheduling efficiently",
backstory="An AI assistant specialized in calendar management and scheduling coordination.",
tools=[enterprise_tools]
apps=['google_calendar'] # All Google Calendar actions will be available
)
# Task to create a meeting
@@ -214,19 +204,11 @@ crew.kickoff()
### Filtering Specific Calendar Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific Google Calendar tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["google_calendar_create_event", "google_calendar_list_events", "google_calendar_get_availability"]
)
meeting_coordinator = Agent(
role="Meeting Coordinator",
goal="Coordinate meetings and check availability",
backstory="An AI assistant that focuses on meeting scheduling and availability management.",
tools=enterprise_tools
apps=['google_calendar/create_event', 'google_calendar/get_availability']
)
# Task to schedule a meeting with availability check
@@ -248,17 +230,12 @@ crew.kickoff()
```python
from crewai import Agent, Task, Crew
event_manager = Agent(
role="Event Manager",
goal="Manage and update calendar events efficiently",
backstory="An experienced event manager who handles event logistics and updates.",
apps=['google_calendar']
)
# Task to manage event updates
@@ -266,10 +243,10 @@ event_management = Task(
description="""
1. List all events for this week
2. Update any events that need location changes to include video conference links
3. Check availability for upcoming meetings
""",
agent=event_manager,
expected_output="Weekly events updated with proper locations and new attendees added"
expected_output="Weekly events updated with proper locations and availability checked"
)
crew = Crew(
@@ -280,33 +257,28 @@ crew = Crew(
crew.kickoff()
```
### Availability and Calendar Management
```python
from crewai import Agent, Task, Crew
availability_coordinator = Agent(
role="Availability Coordinator",
goal="Coordinate availability and manage contacts for scheduling",
backstory="An AI assistant that specializes in availability management and contact coordination.",
tools=[enterprise_tools]
goal="Coordinate availability and manage calendars for scheduling",
backstory="An AI assistant that specializes in availability management and calendar coordination.",
apps=['google_calendar']
)
# Task to coordinate availability
availability_task = Task(
description="""
1. Get the list of available calendars
2. Check availability for all calendars next Friday afternoon
3. Create a team meeting for the first available 2-hour slot
4. Include Google Meet link and send invitations
""",
agent=availability_coordinator,
expected_output="Team meeting scheduled based on availability with all engineers invited"
expected_output="Team meeting scheduled based on availability with all team members invited"
)
crew = Crew(
@@ -321,17 +293,12 @@ crew.kickoff()
```python
from crewai import Agent, Task, Crew
scheduling_automator = Agent(
role="Scheduling Automator",
goal="Automate scheduling workflows and calendar management",
backstory="An AI assistant that automates complex scheduling scenarios and calendar workflows.",
apps=['google_calendar']
)
# Complex scheduling automation task
@@ -365,21 +332,16 @@ crew.kickoff()
- Check if calendar sharing settings allow the required access level
**Event Creation Issues**
- Verify that time formats are correct (RFC3339 format; see the example below)
- Ensure attendee email addresses are properly formatted
- Check that the target calendar exists and is accessible
- Verify time zones are correctly specified
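
As a quick reference, an RFC3339 timestamp carries the date, time, and an explicit UTC offset. A minimal sketch for producing one with the Python standard library; it only illustrates the timestamp format, not any particular integration parameter:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# RFC3339-formatted start time, e.g. "2025-11-06T14:00:00-05:00"
start = datetime(2025, 11, 6, 14, 0, tzinfo=ZoneInfo("America/New_York"))
print(start.isoformat())
```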
**Availability and Time Conflicts**
- Use proper RFC3339 format for time ranges when checking availability
- Ensure time zones are consistent across all operations
- Verify that calendar IDs are correct when checking multiple calendars
**Event Updates and Deletions**
- Verify that event IDs are correct and events exist
- Ensure you have edit permissions for the events

View File

@@ -0,0 +1,418 @@
---
title: Google Contacts Integration
description: "Contact and directory management with Google Contacts integration for CrewAI."
icon: "address-book"
mode: "wide"
---
## Overview
Enable your agents to manage contacts and directory information through Google Contacts. Access personal contacts, search directory people, create and update contact information, and manage contact groups with AI-powered automation.
## Prerequisites
Before using the Google Contacts integration, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account with an active subscription
- A Google account with Google Contacts access
- Connected your Google account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Google Contacts Integration
### 1. Connect Your Google Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Google Contacts** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for contacts and directory access
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
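If you keep the token in a `.env` file, make sure it is actually loaded into the environment before your crew runs. A minimal sketch, assuming the `python-dotenv` package (any equivalent loader works):
```python
import os
from dotenv import load_dotenv  # assumes the python-dotenv package is installed

load_dotenv()  # reads CREWAI_PLATFORM_INTEGRATION_TOKEN from .env into the process environment
assert os.getenv("CREWAI_PLATFORM_INTEGRATION_TOKEN"), "Enterprise Token is not set"
```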
## Available Actions
<AccordionGroup>
<Accordion title="google_contacts/get_contacts">
**Description:** Retrieve user's contacts from Google Contacts.
**Parameters:**
- `pageSize` (integer, optional): Number of contacts to return (max 1000). Minimum: 1, Maximum: 1000
- `pageToken` (string, optional): The token of the page to retrieve.
- `personFields` (string, optional): Fields to include (e.g., 'names,emailAddresses,phoneNumbers'). Default: names,emailAddresses,phoneNumbers
- `requestSyncToken` (boolean, optional): Whether the response should include a sync token. Default: false
- `sortOrder` (string, optional): The order in which the connections should be sorted. Options: LAST_MODIFIED_ASCENDING, LAST_MODIFIED_DESCENDING, FIRST_NAME_ASCENDING, LAST_NAME_ASCENDING
</Accordion>
<Accordion title="google_contacts/search_contacts">
**Description:** Search for contacts using a query string.
**Parameters:**
- `query` (string, required): Search query string
- `readMask` (string, required): Fields to read (e.g., 'names,emailAddresses,phoneNumbers')
- `pageSize` (integer, optional): Number of results to return. Minimum: 1, Maximum: 30
- `pageToken` (string, optional): Token specifying which result page to return.
- `sources` (array, optional): The sources to search in. Options: READ_SOURCE_TYPE_CONTACT, READ_SOURCE_TYPE_PROFILE. Default: READ_SOURCE_TYPE_CONTACT
</Accordion>
<Accordion title="google_contacts/list_directory_people">
**Description:** List people in the authenticated user's directory.
**Parameters:**
- `sources` (array, required): Directory sources to search within. Options: DIRECTORY_SOURCE_TYPE_DOMAIN_PROFILE, DIRECTORY_SOURCE_TYPE_DOMAIN_CONTACT. Default: DIRECTORY_SOURCE_TYPE_DOMAIN_PROFILE
- `pageSize` (integer, optional): Number of people to return. Minimum: 1, Maximum: 1000
- `pageToken` (string, optional): Token specifying which result page to return.
- `readMask` (string, optional): Fields to read (e.g., 'names,emailAddresses')
- `requestSyncToken` (boolean, optional): Whether the response should include a sync token. Default: false
- `mergeSources` (array, optional): Additional data to merge into the directory people responses. Options: CONTACT
</Accordion>
<Accordion title="google_contacts/search_directory_people">
**Description:** Search for people in the directory.
**Parameters:**
- `query` (string, required): Search query
- `sources` (string, required): Directory sources (use 'DIRECTORY_SOURCE_TYPE_DOMAIN_PROFILE')
- `pageSize` (integer, optional): Number of results to return
- `readMask` (string, optional): Fields to read
</Accordion>
<Accordion title="google_contacts/list_other_contacts">
**Description:** List other contacts (not in user's personal contacts).
**Parameters:**
- `pageSize` (integer, optional): Number of contacts to return. Minimum: 1, Maximum: 1000
- `pageToken` (string, optional): Token specifying which result page to return.
- `readMask` (string, optional): Fields to read
- `requestSyncToken` (boolean, optional): Whether the response should include a sync token. Default: false
</Accordion>
<Accordion title="google_contacts/search_other_contacts">
**Description:** Search other contacts.
**Parameters:**
- `query` (string, required): Search query
- `readMask` (string, required): Fields to read (e.g., 'names,emailAddresses')
- `pageSize` (integer, optional): Number of results
</Accordion>
<Accordion title="google_contacts/get_person">
**Description:** Get a single person's contact information by resource name.
**Parameters:**
- `resourceName` (string, required): The resource name of the person to get (e.g., 'people/c123456789')
- `personFields` (string, optional): Fields to include (e.g., 'names,emailAddresses,phoneNumbers'). Default: names,emailAddresses,phoneNumbers
</Accordion>
<Accordion title="google_contacts/create_contact">
**Description:** Create a new contact in the user's address book.
**Parameters:**
- `names` (array, optional): Person's names
```json
[
{
"givenName": "John",
"familyName": "Doe",
"displayName": "John Doe"
}
]
```
- `emailAddresses` (array, optional): Email addresses
```json
[
{
"value": "john.doe@example.com",
"type": "work"
}
]
```
- `phoneNumbers` (array, optional): Phone numbers
```json
[
{
"value": "+1234567890",
"type": "mobile"
}
]
```
- `addresses` (array, optional): Postal addresses
```json
[
{
"formattedValue": "123 Main St, City, State 12345",
"type": "home"
}
]
```
- `organizations` (array, optional): Organizations/companies
```json
[
{
"name": "Company Name",
"title": "Job Title",
"type": "work"
}
]
```
</Accordion>
<Accordion title="google_contacts/update_contact">
**Description:** Update an existing contact's information.
**Parameters:**
- `resourceName` (string, required): The resource name of the person to update (e.g., 'people/c123456789')
- `updatePersonFields` (string, required): Fields to update (e.g., 'names,emailAddresses,phoneNumbers')
- `names` (array, optional): Person's names
- `emailAddresses` (array, optional): Email addresses
- `phoneNumbers` (array, optional): Phone numbers
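For illustration, a sketch of an update payload combining these parameters; the field shapes mirror the `create_contact` examples above, and the exact schema accepted by the integration is an assumption here:
```json
{
"resourceName": "people/c123456789",
"updatePersonFields": "names,emailAddresses",
"names": [
{ "givenName": "John", "familyName": "Doe" }
],
"emailAddresses": [
{ "value": "john.doe@example.com", "type": "work" }
]
}
```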
</Accordion>
<Accordion title="google_contacts/delete_contact">
**Description:** Delete a contact from the user's address book.
**Parameters:**
- `resourceName` (string, required): The resource name of the person to delete (e.g., 'people/c123456789')
</Accordion>
<Accordion title="google_contacts/batch_get_people">
**Description:** Get information about multiple people in a single request.
**Parameters:**
- `resourceNames` (array, required): Resource names of people to get. Maximum: 200 items
- `personFields` (string, optional): Fields to include (e.g., 'names,emailAddresses,phoneNumbers'). Default: names,emailAddresses,phoneNumbers
</Accordion>
<Accordion title="google_contacts/list_contact_groups">
**Description:** List the user's contact groups (labels).
**Parameters:**
- `pageSize` (integer, optional): Number of contact groups to return. Minimum: 1, Maximum: 1000
- `pageToken` (string, optional): Token specifying which result page to return.
- `groupFields` (string, optional): Fields to include (e.g., 'name,memberCount,clientData'). Default: name,memberCount
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Google Contacts Agent Setup
```python
from crewai import Agent, Task, Crew
# Create an agent with Google Contacts capabilities
contacts_agent = Agent(
role="Contact Manager",
goal="Manage contacts and directory information efficiently",
backstory="An AI assistant specialized in contact management and directory operations.",
apps=['google_contacts'] # All Google Contacts actions will be available
)
# Task to retrieve and organize contacts
contact_management_task = Task(
description="Retrieve all contacts and organize them by company affiliation",
agent=contacts_agent,
expected_output="Contacts retrieved and organized by company with summary report"
)
# Run the task
crew = Crew(
agents=[contacts_agent],
tasks=[contact_management_task]
)
crew.kickoff()
```
### Directory Search and Management
```python
from crewai import Agent, Task, Crew
directory_manager = Agent(
role="Directory Manager",
goal="Search and manage directory people and contacts",
backstory="An AI assistant that specializes in directory management and people search.",
apps=[
'google_contacts/search_directory_people',
'google_contacts/list_directory_people',
'google_contacts/search_contacts'
]
)
# Task to search and manage directory
directory_task = Task(
description="Search for team members in the company directory and create a team contact list",
agent=directory_manager,
expected_output="Team directory compiled with contact information"
)
crew = Crew(
agents=[directory_manager],
tasks=[directory_task]
)
crew.kickoff()
```
### Contact Creation and Updates
```python
from crewai import Agent, Task, Crew
contact_curator = Agent(
role="Contact Curator",
goal="Create and update contact information systematically",
backstory="An AI assistant that maintains accurate and up-to-date contact information.",
apps=['google_contacts']
)
# Task to create and update contacts
curation_task = Task(
description="""
1. Search for existing contacts related to new business partners
2. Create new contacts for partners not in the system
3. Update existing contact information with latest details
4. Organize contacts into appropriate groups
""",
agent=contact_curator,
expected_output="Contact database updated with new partners and organized groups"
)
crew = Crew(
agents=[contact_curator],
tasks=[curation_task]
)
crew.kickoff()
```
### Contact Group Management
```python
from crewai import Agent, Task, Crew
group_organizer = Agent(
role="Contact Group Organizer",
goal="Organize contacts into meaningful groups and categories",
backstory="An AI assistant that specializes in contact organization and group management.",
apps=['google_contacts']
)
# Task to organize contact groups
organization_task = Task(
description="""
1. List all existing contact groups
2. Analyze contact distribution across groups
3. Create new groups for better organization
4. Move contacts to appropriate groups based on their information
""",
agent=group_organizer,
expected_output="Contacts organized into logical groups with improved structure"
)
crew = Crew(
agents=[group_organizer],
tasks=[organization_task]
)
crew.kickoff()
```
### Comprehensive Contact Management
```python
from crewai import Agent, Task, Crew
contact_specialist = Agent(
role="Contact Management Specialist",
goal="Provide comprehensive contact management across all sources",
backstory="An AI assistant that handles all aspects of contact management including personal, directory, and other contacts.",
apps=['google_contacts']
)
# Complex contact management task
comprehensive_task = Task(
description="""
1. Retrieve contacts from all sources (personal, directory, other)
2. Search for duplicate contacts and merge information
3. Update outdated contact information
4. Create missing contacts for important stakeholders
5. Organize contacts into meaningful groups
6. Generate a comprehensive contact report
""",
agent=contact_specialist,
expected_output="Complete contact management performed with unified contact database and detailed report"
)
crew = Crew(
agents=[contact_specialist],
tasks=[comprehensive_task]
)
crew.kickoff()
```
## Troubleshooting
### Common Issues
**Permission Errors**
- Ensure your Google account has appropriate permissions for contacts access
- Verify that the OAuth connection includes required scopes for Google Contacts API
- Check that directory access permissions are granted for organization contacts
**Resource Name Format Issues**
- Ensure resource names follow the correct format (e.g., 'people/c123456789' for contacts)
- Verify that contact group resource names use the format 'contactGroups/groupId'
- Check that resource names exist and are accessible
**Search and Query Issues**
- Ensure search queries are properly formatted and not empty
- Use appropriate readMask fields for the data you need
- Verify that search sources are correctly specified (contacts vs profiles)
**Contact Creation and Updates**
- Ensure required fields are provided when creating contacts
- Verify that email addresses and phone numbers are properly formatted
- Check that updatePersonFields parameter includes all fields being updated
**Directory Access Issues**
- Ensure you have appropriate permissions to access organization directory
- Verify that directory sources are correctly specified
- Check that your organization allows API access to directory information
**Pagination and Limits**
- Be mindful of page size limits (varies by endpoint)
- Use pageToken for pagination through large result sets
- Respect API rate limits and implement appropriate delays
**Contact Groups and Organization**
- Ensure contact group names are unique when creating new groups
- Verify that contacts exist before adding them to groups
- Check that you have permissions to modify contact groups
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Google Contacts integration setup or troubleshooting.
</Card>

View File

@@ -0,0 +1,244 @@
---
title: Google Docs Integration
description: "Document creation and editing with Google Docs integration for CrewAI."
icon: "file-lines"
mode: "wide"
---
## Overview
Enable your agents to create, edit, and manage Google Docs documents with text manipulation and formatting. Automate document creation, insert and replace text, manage content ranges, and streamline your document workflows with AI-powered automation.
## Prerequisites
Before using the Google Docs integration, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account with an active subscription
- A Google account with Google Docs access
- Connected your Google account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Google Docs Integration
### 1. Connect Your Google Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Google Docs** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for document access
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
<Accordion title="google_docs/create_document">
**Description:** Create a new Google Document.
**Parameters:**
- `title` (string, optional): The title for the new document.
</Accordion>
<Accordion title="google_docs/get_document">
**Description:** Get the contents and metadata of a Google Document.
**Parameters:**
- `documentId` (string, required): The ID of the document to retrieve.
- `includeTabsContent` (boolean, optional): Whether to include tab content. Default is `false`.
- `suggestionsViewMode` (string, optional): The suggestions view mode to apply to the document. Enum: `DEFAULT_FOR_CURRENT_ACCESS`, `PREVIEW_SUGGESTIONS_ACCEPTED`, `PREVIEW_WITHOUT_SUGGESTIONS`. Default is `DEFAULT_FOR_CURRENT_ACCESS`.
</Accordion>
<Accordion title="google_docs/batch_update">
**Description:** Apply one or more updates to a Google Document.
**Parameters:**
- `documentId` (string, required): The ID of the document to update.
- `requests` (array, required): A list of updates to apply to the document. Each item is an object representing a request.
- `writeControl` (object, optional): Provides control over how write requests are executed. Contains `requiredRevisionId` (string) and `targetRevisionId` (string).
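For illustration, a minimal `requests` array might look like the sketch below. The request shapes follow the public Google Docs API; the exact payloads accepted by the integration are an assumption here.
```json
[
{
"insertText": {
"location": { "index": 1 },
"text": "Executive Summary: "
}
},
{
"replaceAllText": {
"containsText": { "text": "TODO", "matchCase": false },
"replaceText": "COMPLETED"
}
}
]
```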
</Accordion>
<Accordion title="google_docs/insert_text">
**Description:** Insert text into a Google Document at a specific location.
**Parameters:**
- `documentId` (string, required): The ID of the document to update.
- `text` (string, required): The text to insert.
- `index` (integer, optional): The zero-based index where to insert the text. Default is `1`.
</Accordion>
<Accordion title="google_docs/replace_text">
**Description:** Replace all instances of text in a Google Document.
**Parameters:**
- `documentId` (string, required): The ID of the document to update.
- `containsText` (string, required): The text to find and replace.
- `replaceText` (string, required): The text to replace it with.
- `matchCase` (boolean, optional): Whether the search should respect case. Default is `false`.
</Accordion>
<Accordion title="google_docs/delete_content_range">
**Description:** Delete content from a specific range in a Google Document.
**Parameters:**
- `documentId` (string, required): The ID of the document to update.
- `startIndex` (integer, required): The start index of the range to delete.
- `endIndex` (integer, required): The end index of the range to delete.
</Accordion>
<Accordion title="google_docs/insert_page_break">
**Description:** Insert a page break at a specific location in a Google Document.
**Parameters:**
- `documentId` (string, required): The ID of the document to update.
- `index` (integer, optional): The zero-based index where to insert the page break. Default is `1`.
</Accordion>
<Accordion title="google_docs/create_named_range">
**Description:** Create a named range in a Google Document.
**Parameters:**
- `documentId` (string, required): The ID of the document to update.
- `name` (string, required): The name for the named range.
- `startIndex` (integer, required): The start index of the range.
- `endIndex` (integer, required): The end index of the range.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Google Docs Agent Setup
```python
from crewai import Agent, Task, Crew
# Create an agent with Google Docs capabilities
docs_agent = Agent(
role="Document Creator",
goal="Create and manage Google Docs documents efficiently",
backstory="An AI assistant specialized in Google Docs document creation and editing.",
apps=['google_docs'] # All Google Docs actions will be available
)
# Task to create a new document
create_doc_task = Task(
description="Create a new Google Document titled 'Project Status Report'",
agent=docs_agent,
expected_output="New Google Document 'Project Status Report' created successfully"
)
# Run the task
crew = Crew(
agents=[docs_agent],
tasks=[create_doc_task]
)
crew.kickoff()
```
### Text Editing and Content Management
```python
from crewai import Agent, Task, Crew
# Create an agent focused on text editing
text_editor = Agent(
role="Document Editor",
goal="Edit and update content in Google Docs documents",
backstory="An AI assistant skilled in precise text editing and content management.",
apps=['google_docs/insert_text', 'google_docs/replace_text', 'google_docs/delete_content_range']
)
# Task to edit document content
edit_content_task = Task(
description="In document 'your_document_id', insert the text 'Executive Summary: ' at the beginning, then replace all instances of 'TODO' with 'COMPLETED'.",
agent=text_editor,
expected_output="Document updated with new text inserted and TODO items replaced."
)
crew = Crew(
agents=[text_editor],
tasks=[edit_content_task]
)
crew.kickoff()
```
### Advanced Document Operations
```python
from crewai import Agent, Task, Crew
# Create an agent for advanced document operations
document_formatter = Agent(
role="Document Formatter",
goal="Apply advanced formatting and structure to Google Docs",
backstory="An AI assistant that handles complex document formatting and organization.",
apps=['google_docs/batch_update', 'google_docs/insert_page_break', 'google_docs/create_named_range']
)
# Task to format document
format_doc_task = Task(
description="In document 'your_document_id', insert a page break at position 100, create a named range called 'Introduction' for characters 1-50, and apply batch formatting updates.",
agent=document_formatter,
expected_output="Document formatted with page break, named range, and styling applied."
)
crew = Crew(
agents=[document_formatter],
tasks=[format_doc_task]
)
crew.kickoff()
```
## Troubleshooting
### Common Issues
**Authentication Errors**
- Ensure your Google account has the necessary permissions for Google Docs access.
- Verify that the OAuth connection includes all required scopes (`https://www.googleapis.com/auth/documents`).
**Document ID Issues**
- Double-check document IDs for correctness.
- Ensure the document exists and is accessible to your account.
- Document IDs can be found in the Google Docs URL (the segment after `/d/`, e.g. `https://docs.google.com/document/d/<documentId>/edit`).
**Text Insertion and Range Operations**
- When using `insert_text` or `delete_content_range`, ensure index positions are valid.
- Remember that Google Docs uses zero-based indexing.
- The document must have content at the specified index positions.
**Batch Update Request Formatting**
- When using `batch_update`, ensure the `requests` array is correctly formatted according to the Google Docs API documentation.
- Complex updates require specific JSON structures for each request type.
**Replace Text Operations**
- For `replace_text`, ensure the `containsText` parameter exactly matches the text you want to replace.
- Use `matchCase` parameter to control case sensitivity.
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Google Docs integration setup or troubleshooting.
</Card>

View File

@@ -0,0 +1,229 @@
---
title: Google Drive Integration
description: "File storage and management with Google Drive integration for CrewAI."
icon: "google"
mode: "wide"
---
## Overview
Enable your agents to manage files and folders through Google Drive. Upload, download, organize, and share files, create folders, and streamline your document management workflows with AI-powered automation.
## Prerequisites
Before using the Google Drive integration, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account with an active subscription
- A Google account with Google Drive access
- Connected your Google account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Google Drive Integration
### 1. Connect Your Google Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Google Drive** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for file and folder management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
<Accordion title="google_drive/get_file">
**Description:** Get a file by ID from Google Drive.
**Parameters:**
- `file_id` (string, required): The ID of the file to retrieve.
</Accordion>
<Accordion title="google_drive/list_files">
**Description:** List files in Google Drive.
**Parameters:**
- `q` (string, optional): Query string to filter files (example: "name contains 'report'").
- `page_size` (integer, optional): Maximum number of files to return (default: 100, max: 1000).
- `page_token` (string, optional): Token for retrieving the next page of results.
- `order_by` (string, optional): Sort order (example: "name", "createdTime desc", "modifiedTime").
- `spaces` (string, optional): Comma-separated list of spaces to query (drive, appDataFolder, photos).
</Accordion>
<Accordion title="google_drive/upload_file">
**Description:** Upload a file to Google Drive.
**Parameters:**
- `name` (string, required): Name of the file to create.
- `content` (string, required): Content of the file to upload.
- `mime_type` (string, optional): MIME type of the file (example: "text/plain", "application/pdf").
- `parent_folder_id` (string, optional): ID of the parent folder where the file should be created.
- `description` (string, optional): Description of the file.
</Accordion>
<Accordion title="google_drive/download_file">
**Description:** Download a file from Google Drive.
**Parameters:**
- `file_id` (string, required): The ID of the file to download.
- `mime_type` (string, optional): MIME type for export (required for Google Workspace documents, e.g. `application/pdf` to export a Google Doc as a PDF).
</Accordion>
<Accordion title="google_drive/create_folder">
**Description:** Create a new folder in Google Drive.
**Parameters:**
- `name` (string, required): Name of the folder to create.
- `parent_folder_id` (string, optional): ID of the parent folder where the new folder should be created.
- `description` (string, optional): Description of the folder.
</Accordion>
<Accordion title="google_drive/delete_file">
**Description:** Delete a file from Google Drive.
**Parameters:**
- `file_id` (string, required): The ID of the file to delete.
</Accordion>
<Accordion title="google_drive/share_file">
**Description:** Share a file in Google Drive with specific users or make it public.
**Parameters:**
- `file_id` (string, required): The ID of the file to share.
- `role` (string, required): The role granted by this permission (reader, writer, commenter, owner).
- `type` (string, required): The type of the grantee (user, group, domain, anyone).
- `email_address` (string, optional): The email address of the user or group to share with (required for user/group types).
- `domain` (string, optional): The domain to share with (required for domain type).
- `send_notification_email` (boolean, optional): Whether to send a notification email (default: true).
- `email_message` (string, optional): A plain text custom message to include in the notification email.
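For example, `role` set to `reader` with `type` set to `anyone` makes the file viewable by anyone with the link, while `role` set to `writer` with `type` set to `user` and an `email_address` grants edit access to a single account.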
</Accordion>
<Accordion title="google_drive/update_file">
**Description:** Update an existing file in Google Drive.
**Parameters:**
- `file_id` (string, required): The ID of the file to update.
- `name` (string, optional): New name for the file.
- `content` (string, optional): New content for the file.
- `mime_type` (string, optional): New MIME type for the file.
- `description` (string, optional): New description for the file.
- `add_parents` (string, optional): Comma-separated list of parent folder IDs to add.
- `remove_parents` (string, optional): Comma-separated list of parent folder IDs to remove.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Google Drive Agent Setup
```python
from crewai import Agent, Task, Crew
# Create an agent with Google Drive capabilities
drive_agent = Agent(
role="File Manager",
goal="Manage files and folders in Google Drive efficiently",
backstory="An AI assistant specialized in document and file management.",
apps=['google_drive'] # All Google Drive actions will be available
)
# Task to organize files
organize_files_task = Task(
description="List all files in the root directory and organize them into appropriate folders",
agent=drive_agent,
expected_output="Summary of files organized with folder structure"
)
# Run the task
crew = Crew(
agents=[drive_agent],
tasks=[organize_files_task]
)
crew.kickoff()
```
### Filtering Specific Google Drive Tools
```python
from crewai import Agent, Task, Crew
# Create agent with specific Google Drive actions only
file_manager_agent = Agent(
role="Document Manager",
goal="Upload and manage documents efficiently",
backstory="An AI assistant that focuses on document upload and organization.",
apps=[
'google_drive/upload_file',
'google_drive/create_folder',
'google_drive/share_file'
] # Specific Google Drive actions
)
# Task to upload and share documents
document_task = Task(
description="Upload the quarterly report and share it with the finance team",
agent=file_manager_agent,
expected_output="Document uploaded and sharing permissions configured"
)
crew = Crew(
agents=[file_manager_agent],
tasks=[document_task]
)
crew.kickoff()
```
### Advanced File Management
```python
from crewai import Agent, Task, Crew
file_organizer = Agent(
role="File Organizer",
goal="Maintain organized file structure and manage permissions",
backstory="An experienced file manager who ensures proper organization and access control.",
apps=['google_drive']
)
# Complex task involving multiple Google Drive operations
organization_task = Task(
description="""
1. List all files in the shared folder
2. Create folders for different document types (Reports, Presentations, Spreadsheets)
3. Move files to appropriate folders based on their type
4. Set appropriate sharing permissions for each folder
5. Create a summary document of the organization changes
""",
agent=file_organizer,
expected_output="Files organized into categorized folders with proper permissions and summary report"
)
crew = Crew(
agents=[file_organizer],
tasks=[organization_task]
)
crew.kickoff()
```

View File

@@ -26,7 +26,7 @@ Before using the Google Sheets integration, ensure you have:
2. Find **Google Sheets** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for spreadsheet access
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
@@ -34,67 +34,93 @@ Before using the Google Sheets integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
<Accordion title="GOOGLE_SHEETS_GET_ROW">
**Description:** Get rows from a Google Sheets spreadsheet.
<Accordion title="google_sheets/get_spreadsheet">
**Description:** Retrieve properties and data of a spreadsheet.
**Parameters:**
- `spreadsheetId` (string, required): Spreadsheet - Use Connect Portal Workflow Settings to allow users to select a spreadsheet. Defaults to using the first worksheet in the selected spreadsheet.
- `limit` (string, optional): Limit rows - Limit the maximum number of rows to return.
- `spreadsheetId` (string, required): The ID of the spreadsheet to retrieve.
- `ranges` (array, optional): The ranges to retrieve from the spreadsheet.
- `includeGridData` (boolean, optional): True if grid data should be returned. Default: false
- `fields` (string, optional): The fields to include in the response. Use this to improve performance by only returning needed data.
</Accordion>
<Accordion title="GOOGLE_SHEETS_CREATE_ROW">
**Description:** Create a new row in a Google Sheets spreadsheet.
<Accordion title="google_sheets/get_values">
**Description:** Returns a range of values from a spreadsheet.
**Parameters:**
- `spreadsheetId` (string, required): Spreadsheet - Use Connect Portal Workflow Settings to allow users to select a spreadsheet. Defaults to using the first worksheet in the selected spreadsheet..
- `worksheet` (string, required): Worksheet - Your worksheet must have column headers.
- `additionalFields` (object, required): Fields - Include fields to create this row with, as an object with keys of Column Names. Use Connect Portal Workflow Settings to allow users to select a Column Mapping.
- `spreadsheetId` (string, required): The ID of the spreadsheet to retrieve data from.
- `range` (string, required): The A1 notation or R1C1 notation of the range to retrieve values from.
- `valueRenderOption` (string, optional): How values should be represented in the output. Options: FORMATTED_VALUE, UNFORMATTED_VALUE, FORMULA. Default: FORMATTED_VALUE
- `dateTimeRenderOption` (string, optional): How dates, times, and durations should be represented in the output. Options: SERIAL_NUMBER, FORMATTED_STRING. Default: SERIAL_NUMBER
- `majorDimension` (string, optional): The major dimension that results should use. Options: ROWS, COLUMNS. Default: ROWS
</Accordion>
<Accordion title="google_sheets/update_values">
**Description:** Sets values in a range of a spreadsheet.
**Parameters:**
- `spreadsheetId` (string, required): The ID of the spreadsheet to update.
- `range` (string, required): The A1 notation of the range to update.
- `values` (array, required): The data to be written. Each array represents a row.
```json
{
"columnName1": "columnValue1",
"columnName2": "columnValue2",
"columnName3": "columnValue3",
"columnName4": "columnValue4"
}
[
["Value1", "Value2", "Value3"],
["Value4", "Value5", "Value6"]
]
```
- `valueInputOption` (string, optional): How the input data should be interpreted. Options: RAW, USER_ENTERED. Default: USER_ENTERED
</Accordion>
<Accordion title="GOOGLE_SHEETS_UPDATE_ROW">
**Description:** Update existing rows in a Google Sheets spreadsheet.
<Accordion title="google_sheets/append_values">
**Description:** Appends values to a spreadsheet.
**Parameters:**
- `spreadsheetId` (string, required): Spreadsheet - Use Connect Portal Workflow Settings to allow users to select a spreadsheet. Defaults to using the first worksheet in the selected spreadsheet.
- `worksheet` (string, required): Worksheet - Your worksheet must have column headers.
- `filterFormula` (object, optional): A filter in disjunctive normal form - OR of AND groups of single conditions to identify which rows to update.
- `spreadsheetId` (string, required): The ID of the spreadsheet to update.
- `range` (string, required): The A1 notation of a range to search for a logical table of data.
- `values` (array, required): The data to append. Each array represents a row.
```json
{
"operator": "OR",
"conditions": [
{
"operator": "AND",
"conditions": [
{
"field": "status",
"operator": "$stringExactlyMatches",
"value": "pending"
}
]
[
["Value1", "Value2", "Value3"],
["Value4", "Value5", "Value6"]
]
```
- `valueInputOption` (string, optional): How the input data should be interpreted. Options: RAW, USER_ENTERED. Default: USER_ENTERED
- `insertDataOption` (string, optional): How the input data should be inserted. Options: OVERWRITE, INSERT_ROWS. Default: INSERT_ROWS
</Accordion>
<Accordion title="google_sheets/create_spreadsheet">
**Description:** Creates a new spreadsheet.
**Parameters:**
- `title` (string, required): The title of the new spreadsheet.
- `sheets` (array, optional): The sheets that are part of the spreadsheet.
```json
[
{
"properties": {
"title": "Sheet1"
}
]
}
```
Available operators: `$stringContains`, `$stringDoesNotContain`, `$stringExactlyMatches`, `$stringDoesNotExactlyMatch`, `$stringStartsWith`, `$stringDoesNotStartWith`, `$stringEndsWith`, `$stringDoesNotEndWith`, `$numberGreaterThan`, `$numberLessThan`, `$numberEquals`, `$numberDoesNotEqual`, `$dateTimeAfter`, `$dateTimeBefore`, `$dateTimeEquals`, `$booleanTrue`, `$booleanFalse`, `$exists`, `$doesNotExist`
- `additionalFields` (object, required): Fields - Include fields to update, as an object with keys of Column Names. Use Connect Portal Workflow Settings to allow users to select a Column Mapping.
```json
{
"columnName1": "newValue1",
"columnName2": "newValue2",
"columnName3": "newValue3",
"columnName4": "newValue4"
}
}
]
```
</Accordion>
</AccordionGroup>
@@ -105,19 +131,13 @@ uv add crewai-tools
```python
from crewai import Agent, Task, Crew
# Create an agent with Google Sheets capabilities
sheets_agent = Agent(
role="Data Manager",
goal="Manage spreadsheet data and track information efficiently",
backstory="An AI assistant specialized in data management and spreadsheet operations.",
apps=['google_sheets']
)
# Task to add new data to a spreadsheet
@@ -139,19 +159,17 @@ crew.kickoff()
### Filtering Specific Google Sheets Tools
```python
from crewai import Agent, Task, Crew
# Create agent with specific Google Sheets actions only
data_collector = Agent(
role="Data Collector",
goal="Collect and organize data in spreadsheets",
backstory="An AI assistant that focuses on data collection and organization.",
apps=[
'google_sheets/get_values',
'google_sheets/update_values'
]
)
# Task to collect and organize data
@@ -173,17 +191,12 @@ crew.kickoff()
```python
from crewai import Agent, Task, Crew
data_analyst = Agent(
role="Data Analyst",
goal="Analyze spreadsheet data and generate insights",
backstory="An experienced data analyst who extracts insights from spreadsheet data.",
apps=['google_sheets']
)
# Task to analyze data and create reports
@@ -205,33 +218,59 @@ crew = Crew(
crew.kickoff()
```
### Spreadsheet Creation and Management
```python
from crewai import Agent, Task, Crew
spreadsheet_manager = Agent(
role="Spreadsheet Manager",
goal="Create and manage spreadsheets efficiently",
backstory="An AI assistant that specializes in creating and organizing spreadsheets.",
apps=['google_sheets']
)
# Task to create and set up new spreadsheets
setup_task = Task(
description="""
1. Create a new spreadsheet for quarterly reports
2. Set up proper headers and structure
3. Add initial data and formatting
""",
agent=spreadsheet_manager,
expected_output="New quarterly report spreadsheet created and properly structured"
)
crew = Crew(
agents=[spreadsheet_manager],
tasks=[setup_task]
)
crew.kickoff()
```
### Automated Data Updates
```python
from crewai import Agent, Task, Crew
data_updater = Agent(
role="Data Updater",
goal="Automatically update and maintain spreadsheet data",
backstory="An AI assistant that maintains data accuracy and updates records automatically.",
apps=['google_sheets']
)
# Task to update data based on conditions
update_task = Task(
description="""
1. Get spreadsheet properties and structure
2. Read current data from specific ranges
3. Update values in target ranges with new data
4. Append new records to the bottom of the sheet
""",
agent=data_updater,
expected_output="All pending orders updated to processing status with timestamps logged"
expected_output="Spreadsheet data updated successfully with new values and records"
)
crew = Crew(
@@ -246,30 +285,25 @@ crew.kickoff()
```python
from crewai import Agent, Task, Crew
workflow_manager = Agent(
role="Data Workflow Manager",
goal="Manage complex data workflows across multiple spreadsheets",
backstory="An AI assistant that orchestrates complex data operations across multiple spreadsheets.",
apps=['google_sheets']
)
# Complex workflow task
workflow_task = Task(
description="""
1. Get all customer data from the main customer spreadsheet
2. Create a new monthly summary spreadsheet
3. Append summary data to the new spreadsheet
4. Update customer status based on activity metrics
5. Generate reports with proper formatting
""",
agent=workflow_manager,
expected_output="Monthly customer workflow completed with updated statuses and generated reports"
expected_output="Monthly customer workflow completed with new spreadsheet and updated data"
)
crew = Crew(
@@ -291,29 +325,28 @@ crew.kickoff()
**Spreadsheet Structure Issues**
- Verify that range notation (A1 format) is correct for the target cells
- Check that the specified spreadsheet ID exists and is accessible
**Data Type and Format Issues**
- Ensure data values match the expected format for each column
- Use proper date formats for date columns (ISO format recommended)
- Verify that numeric values are properly formatted for number columns
**Range and Cell Reference Issues**
- Use proper A1 notation for ranges (e.g., "A1:C10", "Sheet1!A1:B5")
- Ensure range references don't exceed the actual spreadsheet dimensions
- Verify that sheet names in range references match actual sheet names
**Value Input and Rendering Options**
- Choose appropriate `valueInputOption` (RAW vs USER_ENTERED) for your data
- Select proper `valueRenderOption` based on how you want data formatted
- Consider `dateTimeRenderOption` for consistent date/time handling
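- For example, with `USER_ENTERED` a value such as `=SUM(A1:A2)` is parsed as a formula and `3/15/2024` as a date, while `RAW` stores both as literal text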
**Spreadsheet Creation Issues**
- Ensure spreadsheet titles are unique and follow naming conventions
- Verify that sheet properties are properly structured when creating sheets
- Check that you have permissions to create new spreadsheets in your account
### Getting Help

View File

@@ -0,0 +1,387 @@
---
title: Google Slides Integration
description: "Presentation creation and management with Google Slides integration for CrewAI."
icon: "chart-bar"
mode: "wide"
---
## Overview
Enable your agents to create, edit, and manage Google Slides presentations. Create presentations, update content, import data from Google Sheets, manage pages and thumbnails, and streamline your presentation workflows with AI-powered automation.
## Prerequisites
Before using the Google Slides integration, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account with an active subscription
- A Google account with Google Slides access
- Connected your Google account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Google Slides Integration
### 1. Connect Your Google Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Google Slides** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for presentations, spreadsheets, and drive access
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
<Accordion title="google_slides/create_blank_presentation">
**Description:** Creates a blank presentation with no content.
**Parameters:**
- `title` (string, required): The title of the presentation.
</Accordion>
<Accordion title="google_slides/get_presentation">
**Description:** Retrieves a presentation by ID.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation to retrieve.
- `fields` (string, optional): The fields to include in the response. Use this to improve performance by only returning needed data.
</Accordion>
<Accordion title="google_slides/batch_update_presentation">
**Description:** Applies updates to add, change, or remove content in a presentation.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation to update.
- `requests` (array, required): A list of updates to apply to the presentation.
```json
[
{
"insertText": {
"objectId": "slide_id",
"text": "Your text content here"
}
}
]
```
- `writeControl` (object, optional): Provides control over how write requests are executed.
```json
{
"requiredRevisionId": "revision_id_string"
}
```
</Accordion>
<Accordion title="google_slides/get_page">
**Description:** Retrieves a specific page by its ID.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `pageObjectId` (string, required): The ID of the page to retrieve.
</Accordion>
<Accordion title="google_slides/get_thumbnail">
**Description:** Generates a page thumbnail.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `pageObjectId` (string, required): The ID of the page for thumbnail generation.
</Accordion>
<Accordion title="google_slides/import_data_from_sheet">
**Description:** Imports data from a Google Sheet into a presentation.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `sheetId` (string, required): The ID of the Google Sheet to import from.
- `dataRange` (string, required): The range of data to import from the sheet.
</Accordion>
<Accordion title="google_slides/upload_file_to_drive">
**Description:** Uploads a file to Google Drive associated with the presentation.
**Parameters:**
- `file` (string, required): The file data to upload.
- `presentationId` (string, required): The ID of the presentation to link the uploaded file.
</Accordion>
<Accordion title="google_slides/link_file_to_presentation">
**Description:** Links a file in Google Drive to a presentation.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `fileId` (string, required): The ID of the file to link.
</Accordion>
<Accordion title="google_slides/get_all_presentations">
**Description:** Lists all presentations accessible to the user.
**Parameters:**
- `pageSize` (integer, optional): The number of presentations to return per page.
- `pageToken` (string, optional): A token for pagination.
</Accordion>
<Accordion title="google_slides/delete_presentation">
**Description:** Deletes a presentation by ID.
**Parameters:**
- `presentationId` (string, required): The ID of the presentation to delete.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Google Slides Agent Setup
```python
from crewai import Agent, Task, Crew
# Create an agent with Google Slides capabilities
slides_agent = Agent(
role="Presentation Manager",
goal="Create and manage presentations efficiently",
backstory="An AI assistant specialized in presentation creation and content management.",
apps=['google_slides'] # All Google Slides actions will be available
)
# Task to create a presentation
create_presentation_task = Task(
description="Create a new presentation for the quarterly business review with key slides",
agent=slides_agent,
expected_output="Quarterly business review presentation created with structured content"
)
# Run the task
crew = Crew(
agents=[slides_agent],
tasks=[create_presentation_task]
)
crew.kickoff()
```
### Presentation Content Management
```python
from crewai import Agent, Task, Crew
content_manager = Agent(
role="Content Manager",
goal="Manage presentation content and updates",
backstory="An AI assistant that focuses on content creation and presentation updates.",
apps=[
'google_slides/create_blank_presentation',
'google_slides/batch_update_presentation',
'google_slides/get_presentation'
]
)
# Task to create and update presentations
content_task = Task(
description="Create a new presentation and add content slides with charts and text",
agent=content_manager,
expected_output="Presentation created with updated content and visual elements"
)
crew = Crew(
agents=[content_manager],
tasks=[content_task]
)
crew.kickoff()
```
### Data Integration and Visualization
```python
from crewai import Agent, Task, Crew
data_visualizer = Agent(
role="Data Visualizer",
goal="Create presentations with data imported from spreadsheets",
backstory="An AI assistant that specializes in data visualization and presentation integration.",
apps=['google_slides']
)
# Task to create data-driven presentations
visualization_task = Task(
description="""
1. Create a new presentation for monthly sales report
2. Import data from the sales spreadsheet
3. Create charts and visualizations from the imported data
4. Generate thumbnails for slide previews
""",
agent=data_visualizer,
expected_output="Data-driven presentation created with imported spreadsheet data and visualizations"
)
crew = Crew(
agents=[data_visualizer],
tasks=[visualization_task]
)
crew.kickoff()
```
### Presentation Library Management
```python
from crewai import Agent, Task, Crew
library_manager = Agent(
role="Presentation Library Manager",
goal="Manage and organize presentation libraries",
backstory="An AI assistant that manages presentation collections and file organization.",
apps=['google_slides']
)
# Task to manage presentation library
library_task = Task(
description="""
1. List all existing presentations
2. Generate thumbnails for presentation previews
3. Upload supporting files to Drive and link to presentations
4. Organize presentations by topic and date
""",
agent=library_manager,
expected_output="Presentation library organized with thumbnails and linked supporting files"
)
crew = Crew(
agents=[library_manager],
tasks=[library_task]
)
crew.kickoff()
```
### Automated Presentation Workflows
```python
from crewai import Agent, Task, Crew
presentation_automator = Agent(
role="Presentation Automator",
goal="Automate presentation creation and management workflows",
backstory="An AI assistant that automates complex presentation workflows and content generation.",
apps=['google_slides']
)
# Complex presentation automation task
automation_task = Task(
description="""
1. Create multiple presentations for different departments
2. Import relevant data from various spreadsheets
3. Update existing presentations with new content
4. Generate thumbnails for all presentations
5. Link supporting documents from Drive
6. Create a master index presentation with links to all others
""",
agent=presentation_automator,
expected_output="Automated presentation workflow completed with multiple presentations and organized structure"
)
crew = Crew(
agents=[presentation_automator],
tasks=[automation_task]
)
crew.kickoff()
```
### Template and Content Creation
```python
from crewai import Agent, Task, Crew

template_creator = Agent(
    role="Template Creator",
    goal="Create presentation templates and standardized content",
    backstory="An AI assistant that creates consistent presentation templates and content standards.",
    apps=['google_slides']
)

# Task to create templates
template_task = Task(
    description="""
    1. Create blank presentation templates for different use cases
    2. Add standard layouts and content placeholders
    3. Create sample presentations with best practices
    4. Generate thumbnails for template previews
    5. Upload template assets to Drive and link appropriately
    """,
    agent=template_creator,
    expected_output="Presentation templates created with standardized layouts and linked assets"
)

crew = Crew(
    agents=[template_creator],
    tasks=[template_task]
)

crew.kickoff()
```
## Troubleshooting
### Common Issues
**Permission Errors**
- Ensure your Google account has appropriate permissions for Google Slides
- Verify that the OAuth connection includes the required scopes for presentations, spreadsheets, and Drive access
- Check that presentations are shared with the authenticated account
**Presentation ID Issues**
- Verify that presentation IDs are correct and presentations exist
- Ensure you have access permissions to the presentations you're trying to modify
- Check that presentation IDs are properly formatted (see the URL check below)
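If you are unsure whether an ID is well formed, the quickest sanity check is to pull it straight out of the presentation's URL. A minimal, standalone sketch (not part of the integration itself):
```python
import re

def extract_presentation_id(url: str) -> str | None:
    """Extract the presentation ID from a Google Slides URL.

    Slides URLs follow the pattern:
    https://docs.google.com/presentation/d/<PRESENTATION_ID>/edit
    """
    match = re.search(r"/presentation/d/([a-zA-Z0-9_-]+)", url)
    return match.group(1) if match else None

print(extract_presentation_id(
    "https://docs.google.com/presentation/d/1AbC_dEf-123/edit#slide=id.p"
))  # -> 1AbC_dEf-123
```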
**Content Update Issues**
- Ensure batch update requests are properly formatted according to the Google Slides API specification (see the reference payload below)
- Verify that object IDs for slides and elements exist in the presentation
- Check that write control revision IDs are current if using optimistic concurrency
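Malformed `requests` arrays are the most common cause of batch update failures. The shape below follows the public Google Slides API `batchUpdate` format; the object IDs are placeholders, and exactly how the `google_slides/batch_update_presentation` action accepts this payload is an assumption, so treat it as a structural reference rather than a drop-in call:
```python
# Structural reference for a Google Slides batchUpdate payload.
# Object IDs are placeholders; "title_textbox_id" must already exist
# in the presentation before insertText can target it.
batch_update_body = {
    "requests": [
        {
            "createSlide": {
                "objectId": "sales_summary_slide",  # must be unique within the deck
                "insertionIndex": 1,
            }
        },
        {
            "insertText": {
                "objectId": "title_textbox_id",
                "insertionIndex": 0,
                "text": "Q3 Sales Summary",
            }
        },
    ]
}
```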
**Data Import Issues**
- Verify that Google Sheet IDs are correct and accessible
- Ensure data ranges are properly specified using A1 notation (examples below)
- Check that you have read permissions for the source spreadsheets
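A1 notation names the sheet first, then the rectangular range. A few valid examples:
```python
# Valid A1-notation ranges (sheet name, then range):
valid_ranges = [
    "Sheet1!A1:C10",           # columns A-C, rows 1-10 on Sheet1
    "'Monthly Sales'!B2:F50",  # sheet names containing spaces need single quotes
    "Sheet1!A:A",              # an entire column
]
```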
**File Upload and Linking Issues**
- Ensure file data is properly encoded before upload (see the encoding sketch below)
- Verify that Drive file IDs are correct when linking files
- Check that you have appropriate Drive permissions for file operations
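If an upload action rejects file data, the usual culprit is missing base64 encoding. A minimal sketch of the encoding step; which parameter the upload action expects the encoded contents in is an assumption:
```python
import base64
from pathlib import Path

# Read the raw bytes and base64-encode them as a UTF-8 string.
file_bytes = Path("chart.png").read_bytes()
encoded_data = base64.b64encode(file_bytes).decode("utf-8")
# Pass `encoded_data` wherever the upload action expects the file contents.
```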
**Page and Thumbnail Operations**
- Verify that page object IDs exist in the specified presentation
- Ensure presentations have content before attempting to generate thumbnails
- Check that page structure is valid for thumbnail generation
**Pagination and Listing Issues**
- Use appropriate page sizes for listing presentations
- Implement proper pagination using page tokens for large result sets (see the sketch below)
- Handle empty result sets gracefully
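The general cursor pattern looks like the sketch below. The action name `google_slides/list_presentations` and the `pageToken` / `nextPageToken` field names are illustrative assumptions, not the documented response schema:
```python
from typing import Any, Callable

def list_all_presentations(
    call_action: Callable[..., dict[str, Any]],
    page_size: int = 50,
) -> list[dict[str, Any]]:
    """Collect every page of results by following the page-token cursor."""
    presentations: list[dict[str, Any]] = []
    page_token: str | None = None
    while True:
        response = call_action(
            "google_slides/list_presentations",  # hypothetical action name
            pageSize=page_size,
            pageToken=page_token,
        )
        presentations.extend(response.get("presentations", []))
        page_token = response.get("nextPageToken")
        if not page_token:  # no token means the last page was reached
            break
    return presentations
```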
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Google Slides integration setup or troubleshooting.
</Card>

View File

@@ -25,7 +25,7 @@ Before using the HubSpot integration, ensure you have:
2. Find **HubSpot** in the Authentication Integrations section.
3. Click **Connect** and complete the OAuth flow.
4. Grant the necessary permissions for company and contact management.
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account).
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations).
### 2. Install Required Package
@@ -33,10 +33,26 @@ Before using the HubSpot integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
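If you keep the token in a `.env` file, something needs to load it before the crew starts. A minimal sketch using python-dotenv (an extra dependency, installed separately, e.g. `uv add python-dotenv`):
```python
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

if not os.getenv("CREWAI_PLATFORM_INTEGRATION_TOKEN"):
    raise RuntimeError("CREWAI_PLATFORM_INTEGRATION_TOKEN is not set")
```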
## Available Actions
<AccordionGroup>
<Accordion title="HUBSPOT_CREATE_RECORD_COMPANIES">
<Accordion title="hubspot/create_company">
**Description:** Create a new company record in HubSpot.
**Parameters:**
@@ -101,7 +117,7 @@ uv add crewai-tools
- `founded_year` (string, optional): Year Founded.
</Accordion>
<Accordion title="HUBSPOT_CREATE_RECORD_CONTACTS">
<Accordion title="hubspot/create_contact">
**Description:** Create a new contact record in HubSpot.
**Parameters:**
@@ -200,7 +216,7 @@ uv add crewai-tools
- `hs_googleplusid` (string, optional): googleplus ID.
</Accordion>
<Accordion title="HUBSPOT_CREATE_RECORD_DEALS">
<Accordion title="hubspot/create_deal">
**Description:** Create a new deal record in HubSpot.
**Parameters:**
@@ -215,7 +231,7 @@ uv add crewai-tools
- `hs_priority` (string, optional): The priority of the deal. Available values: `low`, `medium`, `high`.
</Accordion>
<Accordion title="HUBSPOT_CREATE_RECORD_ENGAGEMENTS">
<Accordion title="hubspot/create_record_engagements">
**Description:** Create a new engagement (e.g., note, email, call, meeting, task) in HubSpot.
**Parameters:**
@@ -232,7 +248,7 @@ uv add crewai-tools
- `hs_meeting_end_time` (string, optional): The end time of the meeting. (Used for `MEETING`)
</Accordion>
<Accordion title="HUBSPOT_UPDATE_RECORD_COMPANIES">
<Accordion title="hubspot/update_company">
**Description:** Update an existing company record in HubSpot.
**Parameters:**
@@ -249,7 +265,7 @@ uv add crewai-tools
- `description` (string, optional): Description.
</Accordion>
<Accordion title="HUBSPOT_CREATE_RECORD_ANY">
<Accordion title="hubspot/create_record_any">
**Description:** Create a record for a specified object type in HubSpot.
**Parameters:**
@@ -257,7 +273,7 @@ uv add crewai-tools
- Additional parameters depend on the custom object's schema.
</Accordion>
<Accordion title="HUBSPOT_UPDATE_RECORD_CONTACTS">
<Accordion title="hubspot/update_contact">
**Description:** Update an existing contact record in HubSpot.
**Parameters:**
@@ -271,7 +287,7 @@ uv add crewai-tools
- `lifecyclestage` (string, optional): Lifecycle Stage.
</Accordion>
<Accordion title="HUBSPOT_UPDATE_RECORD_DEALS">
<Accordion title="hubspot/update_deal">
**Description:** Update an existing deal record in HubSpot.
**Parameters:**
@@ -284,7 +300,7 @@ uv add crewai-tools
- `dealtype` (string, optional): The type of deal.
</Accordion>
<Accordion title="HUBSPOT_UPDATE_RECORD_ENGAGEMENTS">
<Accordion title="hubspot/update_record_engagements">
**Description:** Update an existing engagement in HubSpot.
**Parameters:**
@@ -295,7 +311,7 @@ uv add crewai-tools
- `hs_task_status` (string, optional): The status of the task.
</Accordion>
<Accordion title="HUBSPOT_UPDATE_RECORD_ANY">
<Accordion title="hubspot/update_record_any">
**Description:** Update a record for a specified object type in HubSpot.
**Parameters:**
@@ -304,28 +320,28 @@ uv add crewai-tools
- Additional parameters depend on the custom object's schema.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORDS_COMPANIES">
<Accordion title="hubspot/list_companies">
**Description:** Get a list of company records from HubSpot.
**Parameters:**
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORDS_CONTACTS">
<Accordion title="hubspot/list_contacts">
**Description:** Get a list of contact records from HubSpot.
**Parameters:**
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORDS_DEALS">
<Accordion title="hubspot/list_deals">
**Description:** Get a list of deal records from HubSpot.
**Parameters:**
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORDS_ENGAGEMENTS">
<Accordion title="hubspot/get_records_engagements">
**Description:** Get a list of engagement records from HubSpot.
**Parameters:**
@@ -333,7 +349,7 @@ uv add crewai-tools
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORDS_ANY">
<Accordion title="hubspot/get_records_any">
**Description:** Get a list of records for any specified object type in HubSpot.
**Parameters:**
@@ -341,35 +357,35 @@ uv add crewai-tools
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORD_BY_ID_COMPANIES">
<Accordion title="hubspot/get_company">
**Description:** Get a single company record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the company to retrieve.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORD_BY_ID_CONTACTS">
<Accordion title="hubspot/get_contact">
**Description:** Get a single contact record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the contact to retrieve.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORD_BY_ID_DEALS">
<Accordion title="hubspot/get_deal">
**Description:** Get a single deal record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the deal to retrieve.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORD_BY_ID_ENGAGEMENTS">
<Accordion title="hubspot/get_record_by_id_engagements">
**Description:** Get a single engagement record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the engagement to retrieve.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORD_BY_ID_ANY">
<Accordion title="hubspot/get_record_by_id_any">
**Description:** Get a single record of any specified object type by its ID.
**Parameters:**
@@ -377,7 +393,7 @@ uv add crewai-tools
- `recordId` (string, required): The ID of the record to retrieve.
</Accordion>
<Accordion title="HUBSPOT_SEARCH_RECORDS_COMPANIES">
<Accordion title="hubspot/search_companies">
**Description:** Search for company records in HubSpot using a filter formula.
**Parameters:**
@@ -385,7 +401,7 @@ uv add crewai-tools
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_SEARCH_RECORDS_CONTACTS">
<Accordion title="hubspot/search_contacts">
**Description:** Search for contact records in HubSpot using a filter formula.
**Parameters:**
@@ -393,7 +409,7 @@ uv add crewai-tools
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_SEARCH_RECORDS_DEALS">
<Accordion title="hubspot/search_deals">
**Description:** Search for deal records in HubSpot using a filter formula.
**Parameters:**
@@ -401,7 +417,7 @@ uv add crewai-tools
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_SEARCH_RECORDS_ENGAGEMENTS">
<Accordion title="hubspot/search_records_engagements">
**Description:** Search for engagement records in HubSpot using a filter formula.
**Parameters:**
@@ -409,7 +425,7 @@ uv add crewai-tools
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_SEARCH_RECORDS_ANY">
<Accordion title="hubspot/search_records_any">
**Description:** Search for records of any specified object type in HubSpot.
**Parameters:**
@@ -418,35 +434,35 @@ uv add crewai-tools
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_DELETE_RECORD_COMPANIES">
<Accordion title="hubspot/delete_record_companies">
**Description:** Delete a company record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the company to delete.
</Accordion>
<Accordion title="HUBSPOT_DELETE_RECORD_CONTACTS">
<Accordion title="hubspot/delete_record_contacts">
**Description:** Delete a contact record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the contact to delete.
</Accordion>
<Accordion title="HUBSPOT_DELETE_RECORD_DEALS">
<Accordion title="hubspot/delete_record_deals">
**Description:** Delete a deal record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the deal to delete.
</Accordion>
<Accordion title="HUBSPOT_DELETE_RECORD_ENGAGEMENTS">
<Accordion title="hubspot/delete_record_engagements">
**Description:** Delete an engagement record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the engagement to delete.
</Accordion>
<Accordion title="HUBSPOT_DELETE_RECORD_ANY">
<Accordion title="hubspot/delete_record_any">
**Description:** Delete a record of any specified object type by its ID.
**Parameters:**
@@ -454,7 +470,7 @@ uv add crewai-tools
- `recordId` (string, required): The ID of the record to delete.
</Accordion>
<Accordion title="HUBSPOT_GET_CONTACTS_BY_LIST_ID">
<Accordion title="hubspot/get_contacts_by_list_id">
**Description:** Get contacts from a specific list by its ID.
**Parameters:**
@@ -462,7 +478,7 @@ uv add crewai-tools
- `paginationParameters` (object, optional): Use `pageCursor` for subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_DESCRIBE_ACTION_SCHEMA">
<Accordion title="hubspot/describe_action_schema">
**Description:** Get the expected schema for a given object type and operation.
**Parameters:**
@@ -477,19 +493,13 @@ uv add crewai-tools
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (HubSpot tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with HubSpot capabilities
hubspot_agent = Agent(
role="CRM Manager",
goal="Manage company and contact records in HubSpot",
backstory="An AI assistant specialized in CRM management.",
tools=[enterprise_tools]
apps=['hubspot'] # All HubSpot actions will be available
)
# Task to create a new company
@@ -511,19 +521,14 @@ crew.kickoff()
### Filtering Specific HubSpot Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only the tool to create contacts
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["hubspot_create_record_contacts"]
)
from crewai import Agent, Task, Crew
# Create agent with specific HubSpot actions only
contact_creator = Agent(
role="Contact Creator",
goal="Create new contacts in HubSpot",
backstory="An AI assistant that focuses on creating new contact entries in the CRM.",
tools=[enterprise_tools]
apps=['hubspot/create_contact'] # Only contact creation action
)
# Task to create a contact
@@ -545,17 +550,13 @@ crew.kickoff()
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create agent with HubSpot contact management capabilities
crm_manager = Agent(
role="CRM Manager",
goal="Manage and organize HubSpot contacts efficiently.",
backstory="An experienced CRM manager who maintains an organized contact database.",
tools=[enterprise_tools]
apps=['hubspot'] # All HubSpot actions including contact management
)
# Task to manage contacts

View File

@@ -25,7 +25,7 @@ Before using the Jira integration, ensure you have:
2. Find **Jira** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for issue and project management
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account)
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
@@ -33,10 +33,26 @@ Before using the Jira integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
<Accordion title="JIRA_CREATE_ISSUE">
<Accordion title="jira/create_issue">
**Description:** Create an issue in Jira.
**Parameters:**
@@ -56,7 +72,7 @@ uv add crewai-tools
```
</Accordion>
<Accordion title="JIRA_UPDATE_ISSUE">
<Accordion title="jira/update_issue">
**Description:** Update an issue in Jira.
**Parameters:**
@@ -71,14 +87,14 @@ uv add crewai-tools
- `additionalFields` (string, optional): Additional Fields - Specify any other fields that should be included in JSON format.
</Accordion>
<Accordion title="JIRA_GET_ISSUE_BY_KEY">
<Accordion title="jira/get_issue_by_key">
**Description:** Get an issue by key in Jira.
**Parameters:**
- `issueKey` (string, required): Issue Key (example: "TEST-1234").
</Accordion>
<Accordion title="JIRA_FILTER_ISSUES">
<Accordion title="jira/filter_issues">
**Description:** Search issues in Jira using filters.
**Parameters:**
@@ -104,7 +120,7 @@ uv add crewai-tools
- `limit` (string, optional): Limit results - Limit the maximum number of issues to return. Defaults to 10 if left blank.
</Accordion>
<Accordion title="JIRA_SEARCH_BY_JQL">
<Accordion title="jira/search_by_jql">
**Description:** Search issues by JQL in Jira.
**Parameters:**
@@ -117,13 +133,13 @@ uv add crewai-tools
```
</Accordion>
<Accordion title="JIRA_UPDATE_ISSUE_ANY">
<Accordion title="jira/update_issue_any">
**Description:** Update any issue in Jira. Use DESCRIBE_ACTION_SCHEMA to get the properties schema for this function.
**Parameters:** No specific parameters - use JIRA_DESCRIBE_ACTION_SCHEMA first to get the expected schema.
</Accordion>
<Accordion title="JIRA_DESCRIBE_ACTION_SCHEMA">
<Accordion title="jira/describe_action_schema">
**Description:** Get the expected schema for an issue type. Use this function first if no other function matches the issue type you want to operate on.
**Parameters:**
@@ -132,7 +148,7 @@ uv add crewai-tools
- `operation` (string, required): Operation Type value, for example CREATE_ISSUE or UPDATE_ISSUE.
</Accordion>
<Accordion title="JIRA_GET_PROJECTS">
<Accordion title="jira/get_projects">
**Description:** Get Projects in Jira.
**Parameters:**
@@ -144,27 +160,27 @@ uv add crewai-tools
```
</Accordion>
<Accordion title="JIRA_GET_ISSUE_TYPES_BY_PROJECT">
<Accordion title="jira/get_issue_types_by_project">
**Description:** Get Issue Types by project in Jira.
**Parameters:**
- `project` (string, required): Project key.
</Accordion>
<Accordion title="JIRA_GET_ISSUE_TYPES">
<Accordion title="jira/get_issue_types">
**Description:** Get all Issue Types in Jira.
**Parameters:** None required.
</Accordion>
<Accordion title="JIRA_GET_ISSUE_STATUS_BY_PROJECT">
<Accordion title="jira/get_issue_status_by_project">
**Description:** Get issue statuses for a given project.
**Parameters:**
- `project` (string, required): Project key.
</Accordion>
<Accordion title="JIRA_GET_ALL_ASSIGNEES_BY_PROJECT">
<Accordion title="jira/get_all_assignees_by_project">
**Description:** Get assignees for a given project.
**Parameters:**
@@ -178,19 +194,14 @@ uv add crewai-tools
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Jira tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
from crewai import Agent, Task, Crew
# Create an agent with Jira capabilities
jira_agent = Agent(
role="Issue Manager",
goal="Manage Jira issues and track project progress efficiently",
backstory="An AI assistant specialized in issue tracking and project management.",
tools=[enterprise_tools]
apps=['jira'] # All Jira actions will be available
)
# Task to create a bug report
@@ -212,19 +223,12 @@ crew.kickoff()
### Filtering Specific Jira Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific Jira tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["jira_create_issue", "jira_update_issue", "jira_search_by_jql"]
)
issue_coordinator = Agent(
role="Issue Coordinator",
goal="Create and manage Jira issues efficiently",
backstory="An AI assistant that focuses on issue creation and management.",
tools=enterprise_tools
apps=['jira']
)
# Task to manage issue workflow
@@ -246,17 +250,12 @@ crew.kickoff()
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
project_analyst = Agent(
role="Project Analyst",
goal="Analyze project data and generate insights from Jira",
backstory="An experienced project analyst who extracts insights from project management data.",
tools=[enterprise_tools]
apps=['jira']
)
# Task to analyze project status
@@ -283,17 +282,12 @@ crew.kickoff()
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
automation_manager = Agent(
role="Automation Manager",
goal="Automate issue management and workflow processes",
backstory="An AI assistant that automates repetitive issue management tasks.",
tools=[enterprise_tools]
apps=['jira']
)
# Task to automate issue management
@@ -321,17 +315,12 @@ crew.kickoff()
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
schema_specialist = Agent(
role="Schema Specialist",
goal="Handle complex Jira operations using dynamic schemas",
backstory="An AI assistant that can work with dynamic Jira schemas and custom issue types.",
tools=[enterprise_tools]
apps=['jira']
)
# Task using schema-based operations

Some files were not shown because too many files have changed in this diff.