Compare commits


39 Commits

Author SHA1 Message Date
Devin AI
7a0feb8c43 Fix tool output token overflow issue
- Add max_tool_output_tokens parameter to Agent (default: 4096)
- Implement token counting and truncation utilities in agent_utils
- Truncate large tool outputs before appending to messages
- Update CONTEXT_LIMIT_ERRORS to recognize negative max_tokens error
- Add comprehensive tests for tool output truncation

Fixes #3843

Co-Authored-By: João <joao@crewai.com>
2025-11-06 13:01:52 +00:00
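The truncation this commit describes can be sketched roughly as follows. This is a minimal illustration, not the actual agent_utils code; it assumes a tiktoken encoding as the token counter, and the helper name is illustrative:

```python
import tiktoken

def truncate_tool_output(output: str, max_tool_output_tokens: int = 4096) -> str:
    """Clip a tool result to a token budget before appending it to messages."""
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(output)
    if len(tokens) <= max_tool_output_tokens:
        return output
    # Keep the head of the output and flag that it was cut.
    return enc.decode(tokens[:max_tool_output_tokens]) + "\n[... output truncated ...]"
```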
Greyson LaLonde
e4cc9a664c fix: handle unpickleable values in flow state
2025-11-06 01:29:21 -05:00
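One plausible shape for handling unpickleable state values, sketched under the assumption that the fix falls back to the original reference when deep-copying fails (the real implementation may differ):

```python
import copy
from typing import Any

def safe_copy_state(state: dict[str, Any]) -> dict[str, Any]:
    """Deep-copy flow state, keeping unpickleable values by reference."""
    copied: dict[str, Any] = {}
    for key, value in state.items():
        try:
            copied[key] = copy.deepcopy(value)
        except (TypeError, copy.Error):  # e.g. locks, sockets, open files
            copied[key] = value
    return copied
```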
Greyson LaLonde
7e6171d5bc fix: ensure lite agents course-correct on validation errors
* fix: ensure lite agents course-correct on validation errors

* chore: update cassettes and test expectations

* fix: ensure multiple guardrails propagate
2025-11-05 19:02:11 -05:00
Greyson LaLonde
61ad1fb112 feat: add support for llm message interceptor hooks 2025-11-05 11:38:44 -05:00
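The hook mechanism itself is not shown in this log. As a purely hypothetical illustration of what a message interceptor can do (the function signature below is an assumption, not the actual crewai hook interface):

```python
from typing import Any

def redact_api_keys(messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Hypothetical interceptor: scrub obvious secrets before the LLM call."""
    for message in messages:
        content = message.get("content")
        if isinstance(content, str) and "sk-" in content:
            message["content"] = content.replace("sk-", "sk-[REDACTED]")
    return messages
```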
Greyson LaLonde
54710a8711 fix: hash callback args correctly to ensure caching works
2025-11-05 07:19:09 -05:00
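A stable hash over callback arguments, which the fix above implies, might look like this minimal sketch (the helper name and normalization strategy are illustrative):

```python
import hashlib
import json
from typing import Any

def cache_key(func_name: str, args: tuple[Any, ...], kwargs: dict[str, Any]) -> str:
    """Build a deterministic cache key from a callback and its arguments."""
    payload = json.dumps(
        {"func": func_name, "args": args, "kwargs": kwargs},
        sort_keys=True,  # order-independent for kwargs
        default=repr,    # fallback for non-JSON-serializable values
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```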
Lucas Gomide
5abf976373 fix: allow adding RAG source content from valid URLs (#3831)
2025-11-04 07:58:40 -05:00
Greyson LaLonde
329567153b fix: make plot node selection smoother
2025-11-03 07:49:31 -05:00
Greyson LaLonde
60332e0b19 feat: cache i18n prompts for efficient use 2025-11-03 07:39:05 -05:00
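Prompt caching of this kind is commonly done with functools.lru_cache; a minimal sketch, assuming prompts live in per-language JSON files (the path and function name are illustrative):

```python
import json
from functools import lru_cache
from pathlib import Path

@lru_cache(maxsize=None)
def load_prompts(language: str) -> dict:
    """Load and memoize the prompt catalog for a language."""
    path = Path("translations") / f"{language}.json"
    return json.loads(path.read_text(encoding="utf-8"))
```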
Lorenze Jay
40932af3fa feat: bump versions to 1.3.0 (#3820)
* feat: bump versions to 1.3.0

* chore: update crew and flow templates to use crewai[tools] version 1.3.0
2025-10-31 18:54:02 -07:00
Greyson LaLonde
e134e5305b Gl/feat/a2a refactor (#3793)
* feat: agent metaclass, refactor a2a to wrappers

* feat: a2a schemas and utils

* chore: move agent class, update imports

* refactor: organize imports to avoid circularity, add a2a to console

* feat: pass response_model through call chain

* feat: add standard openapi spec serialization to tools and structured output

* feat: a2a events

* chore: add a2a to pyproject

* docs: minimal base for learn docs

* fix: adjust a2a conversation flow, allow llm to decide exit until max_retries

* fix: inject agent skills into initial prompt

* fix: format agent card as json in prompt

* refactor: simplify A2A agent prompt formatting and improve skill display

* chore: wide cleanup

* chore: cleanup logic, add auth cache, use json for messages in prompt

* chore: update docs

* fix: doc snippets formatting

* feat: optimize A2A agent card fetching and improve error reporting

* chore: move imports to top of file

* chore: refactor hasattr check

* chore: add httpx-auth, update lockfile

* feat: create base public api

* chore: cleanup modules, add docstrings, types

* fix: exclude extra fields in prompt

* chore: update docs

* tests: update to correct import

* chore: lint for ruff, add missing import

* fix: tweak openai streaming logic for response model

* tests: add reimport for test

* tests: add reimport for test

* fix: don't set a2a attr if not set

* fix: don't set a2a attr if not set

* chore: update cassettes

* tests: fix tests

* fix: use instructor and dont pass response_format for litellm

* chore: consolidate event listeners, add typing

* fix: address race condition in test, update cassettes

* tests: add correct mocks, rerun cassette for json

* tests: update cassette

* chore: regenerate cassette after new run

* fix: make token manager access-safe

* fix: make token manager access-safe

* merge

* chore: update test and cassette for output pydantic

* fix: tweak to disallow deadlock

* chore: linter

* fix: adjust event ordering for threading

* fix: use conditional for batch check

* tests: tweak for emission

* tests: simplify api + event check

* fix: ensure non-function calling llms see json formatted string

* tests: tweak message comparison

* fix: use internal instructor for litellm structure responses

---------

Co-authored-by: Mike Plachta <mike@crewai.com>
2025-10-31 18:42:03 -07:00
Greyson LaLonde
e229ef4e19 refactor: improve flow handling, typing, and logging; update UI and tests
fix: refine nested flow conditionals and ensure router methods and routes are fully parsed
fix: improve docstrings, typing, and logging coverage across all events
feat: update flow.plot feature with new UI enhancements
chore: apply Ruff linting, reorganize imports, and remove deprecated utilities/files
chore: split constants and utils, clean JS comments, and add typing for linters
tests: strengthen test coverage for flow execution paths and router logic
2025-10-31 21:15:06 -04:00
Greyson LaLonde
2e9eb8c32d fix: refactor use_stop_words to property, add check for stop words
2025-10-29 19:14:01 +01:00
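Refactoring use_stop_words from an attribute to a property could look like this sketch; the surrounding class and the support check are assumptions:

```python
class StopWordsMixin:
    stop: list[str] | None = None

    @property
    def use_stop_words(self) -> bool:
        """Only send stop words when some are configured and the model supports them."""
        return bool(self.stop) and self.supports_stop_words()

    def supports_stop_words(self) -> bool:  # illustrative stub
        return True
```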
Lucas Gomide
4ebb5114ed Fix Firecrawl tools & adding tests (#3810)
* fix: fix Firecrawl Scrape tool

* fix: fix Firecrawl Search tool

* fix: fix Firecrawl Website tool

* tests: adding tests for Firecrawl
2025-10-29 13:37:57 -04:00
Daniel Barreto
70b083945f Enhance QdrantVectorSearchTool (#3806)
2025-10-28 13:42:40 -04:00
Tony Kipkemboi
410db1ff39 docs: migrate embedder→embedding_model and require vectordb across tool docs; add provider examples (en/ko/pt-BR) (#3804)
* docs(tools): migrate embedder->embedding_model, require vectordb; add Chroma/Qdrant examples across en/ko/pt-BR PDF/TXT/XML/MDX/DOCX/CSV/Directory docs

* docs(observability): apply latest Datadog tweaks in ko and pt-BR
2025-10-27 13:29:21 -04:00
Lorenze Jay
5d6b4c922b feat: bump versions to 1.2.1 (#3800)
* feat: bump versions to 1.2.1

* updated templates too
2025-10-27 09:12:04 -07:00
Lucas Gomide
b07c0fc45c docs: describe mandatory env-var to call Platform tools for each integration (#3803) 2025-10-27 10:01:41 -04:00
Sam Brenner
97853199c7 Add Datadog Integration Documentation (#3642)
* add datadog llm observability integration guide

* spacing fix

* wording changes

* alphabetize docs listing

* Update docs/en/observability/datadog.mdx

Co-authored-by: Barry Eom <31739208+barieom@users.noreply.github.com>

* add translations

* fix korean code block

---------

Co-authored-by: Barry Eom <31739208+barieom@users.noreply.github.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-27 09:48:38 -04:00
Lorenze Jay
494ed7e671 liteagent supports apps and mcps (#3794)
* liteagent supports apps and mcps

* generated cassettes for these
2025-10-24 18:42:08 -07:00
Lorenze Jay
a83c57a2f2 feat: bump versions to 1.2.0 (#3787)
* feat: bump versions to 1.2.0

* also include projects
2025-10-23 18:04:34 -07:00
Lorenze Jay
08e15ab267 fix: update default LLM model and improve error logging in LLM utilities (#3785)
* fix: update default LLM model and improve error logging in LLM utilities

* Updated the default LLM model from "gpt-4o-mini" to "gpt-4.1-mini" for better performance.
* Enhanced error logging in the LLM utilities to use logger.error instead of logger.debug, ensuring that errors are properly reported and raised.
* Added tests to verify behavior when OpenAI API key is missing and when Anthropic dependency is not available, improving robustness and error handling in LLM creation.

* fix: update test for default LLM model usage

* Refactored the test_create_llm_with_none_uses_default_model to use the imported DEFAULT_LLM_MODEL constant instead of a hardcoded string.
* Ensured that the test correctly asserts the model used is the current default, improving maintainability and consistency across tests.

* change default model to gpt-4.1-mini

* change default model to use the default constant
2025-10-23 17:54:11 -07:00
Greyson LaLonde
9728388ea7 fix: change flow viz del dir; method inspection
* chore: update flow viz deletion dir, add typing
* tests: add flow viz tests to ensure lib dir is not deleted
2025-10-22 19:32:38 -04:00
Greyson LaLonde
4371cf5690 chore: remove aisuite
Little usage + blocking some features
2025-10-21 23:18:06 -04:00
Lorenze Jay
d28daa26cd feat: bump versions to 1.1.0 (#3770)
* feat: bump versions to 1.1.0

* chore: bump template versions

---------

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
2025-10-21 15:52:44 -07:00
Lorenze Jay
a850813f2b feat: enhance InternalInstructor to support multiple LLM providers (#3767)
* feat: enhance InternalInstructor to support multiple LLM providers

- Updated InternalInstructor to conditionally create an instructor client based on the LLM provider.
- Introduced a new method _create_instructor_client to handle client creation using the modern from_provider pattern.
- Added functionality to extract the provider from the LLM model name.
- Implemented tests for InternalInstructor with various LLM providers including OpenAI, Anthropic, Gemini, and Azure, ensuring robust integration and error handling.

This update improves flexibility and extensibility for different LLM integrations.

* fix test
2025-10-21 15:24:59 -07:00
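The from_provider pattern mentioned above can be sketched with the instructor library; the provider-extraction convention below is an assumption based on the "provider/model" naming used elsewhere in this log:

```python
import instructor

def create_instructor_client(model: str):
    """Create an instructor client from a 'provider/model' string."""
    # e.g. "anthropic/claude-3-5-sonnet-20241022" or "openai/gpt-4o"
    return instructor.from_provider(model)
```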
Cameron Warren
5944a39629 fix: correct broken integration documentation links
Fix navigation paths for two integration tool cards that were redirecting to the
introduction page instead of their intended documentation pages.

Fixes #3516

Co-authored-by: Cwarre33 <cwarre33@charlotte.edu>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-21 18:12:08 -04:00
Greyson LaLonde
c594859ed0 feat: mypy plugin base
* feat: base mypy plugin with CrewBase

* fix: add crew method to protocol
2025-10-21 17:36:08 -04:00
Daniel Barreto
2ee27efca7 feat: improvements on QdrantVectorSearchTool
* Implement improvements on QdrantVectorSearchTool

- Allow search filters to be set at the constructor level
- Fix issue that prevented multiple records from being returned

* Implement improvements on QdrantVectorSearchTool

- Allow search filters to be set at the constructor level
- Fix issue that prevented multiple records from being returned

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-21 16:50:08 -04:00
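Constructor-level filters as described above might be used like this hypothetical sketch; the qdrant-client filter types are real, but how the tool accepts them is an assumption:

```python
from qdrant_client.models import FieldCondition, Filter, MatchValue

# A default filter applied to every search from one tool instance.
# This Filter object would then be passed to QdrantVectorSearchTool's
# constructor (the exact parameter name is not shown in this log).
default_filter = Filter(
    must=[FieldCondition(key="category", match=MatchValue(value="docs"))]
)
```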
Greyson LaLonde
f6e13eb890 chore: update codeql config paths to new folders
* chore: update codeql config paths to new folders

* tests: use threading.Condition for event check

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-10-21 14:43:25 -04:00
Lorenze Jay
e7b3ce27ca docs: update LLM integration details and examples
* docs: update LLM integration details and examples

- Changed references from LiteLLM to native SDKs for LLM providers.
- Enhanced OpenAI and AWS Bedrock sections with new usage examples and advanced configuration options.
- Added structured output examples and supported environment variables for better clarity.
- Improved documentation on additional parameters and features for LLM configurations.

* drop this example - should use structured output from task instead

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-21 14:39:50 -04:00
Greyson LaLonde
dba27cf8b5 fix: fix double trace call; add types
* fix: fix double trace call; add types

* tests: skip long running uv install test, refactor in future

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-10-21 14:15:39 -04:00
Greyson LaLonde
6469f224f6 chore: improve CrewBase typing 2025-10-21 13:58:35 -04:00
Greyson LaLonde
f3a63be215 tests: cassettes, threading for flow tests 2025-10-21 13:48:21 -04:00
Greyson LaLonde
01d8c189f0 fix: pin template versions to latest
2025-10-21 10:56:41 -04:00
Lorenze Jay
cc83c1ead5 feat: bump versions to 1.0.0
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-20 17:34:38 -04:00
Greyson LaLonde
7578901f6d chore: use main for devtools branch 2025-10-20 17:29:09 -04:00
Lorenze Jay
d1343b96ed Release/v1.0.0 (#3618)
* feat: add `apps` & `actions` attributes to Agent (#3504)

* feat: add app attributes to Agent

* feat: add actions attribute to Agent

* chore: resolve linter issues

* refactor: merge the apps and actions parameters into a single one

* fix: remove unnecessary print

* feat: logging error when CrewaiPlatformTools fails

* chore: export CrewaiPlatformTools directly from crewai_tools

* style: resolve linter issues

* test: fix broken tests

* style: solve linter issues

* fix: fix broken test

* feat: monorepo restructure and test/ci updates

- Add crewai workspace member
- Fix vcr cassette paths and restore test dirs
- Resolve ci failures and update linter/pytest rules

* chore: update python version to 3.13 and package metadata

* feat: add crewai-tools workspace and fix tests/dependencies

* feat: add crewai-tools workspace structure

* Squashed 'temp-crewai-tools/' content from commit 9bae5633

git-subtree-dir: temp-crewai-tools
git-subtree-split: 9bae56339096cb70f03873e600192bd2cd207ac9

* feat: configure crewai-tools workspace package with dependencies

* fix: apply ruff auto-formatting to crewai-tools code

* chore: update lockfile

* fix: don't allow tool tests yet

* fix: comment out extra pytest flags for now

* fix: remove conflicting conftest.py from crewai-tools tests

* fix: resolve dependency conflicts and test issues

- Pin vcrpy to 7.0.0 to fix pytest-recording compatibility
- Comment out types-requests to resolve urllib3 conflict
- Update requests requirement in crewai-tools to >=2.32.0

* chore: update CI workflows and docs for monorepo structure

* chore: update CI workflows and docs for monorepo structure

* fix: actions syntax

* chore: ci publish and pin versions

* fix: add permission to action

* chore: bump version to 1.0.0a1 across all packages

- Updated version to 1.0.0a1 in pyproject.toml for crewai and crewai-tools
- Adjusted version in __init__.py files for consistency

* WIP: v1 docs (#3626)

(cherry picked from commit d46e20fa09bcd2f5916282f5553ddeb7183bd92c)

* docs: parity for all translations

* docs: full name of acronym AMP

* docs: fix lingering unused code

* docs: expand contextual options in docs.json

* docs: add contextual action to request feature on GitHub (#3635)

* chore: apply linting fixes to crewai-tools

* feat: add required env var validation for brightdata

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* fix: handle properly anyOf oneOf allOf schema's props

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* feat: bump version to 1.0.0a2

* Lorenze/native inference sdks (#3619)

* ruff linted

* using native sdks with litellm fallback

* drop exa

* drop print on completion

* Refactor LLM and utility functions for type consistency

- Updated `max_tokens` parameter in `LLM` class to accept `float` in addition to `int`.
- Modified `create_llm` function to ensure consistent type hints and return types, now returning `LLM | BaseLLM | None`.
- Adjusted type hints for various parameters in `create_llm` and `_llm_via_environment_or_fallback` functions for improved clarity and type safety.
- Enhanced test cases to reflect changes in type handling and ensure proper instantiation of LLM instances.

* fix agent_tests

* fix litellm tests and usagemetrics fix

* drop print

* Refactor LLM event handling and improve test coverage

- Removed commented-out event emission for LLM call failures in `llm.py`.
- Added `from_agent` parameter to `CrewAgentExecutor` for better context in LLM responses.
- Enhanced test for LLM call failure to simulate OpenAI API failure and updated assertions for clarity.
- Updated agent and task ID assertions in tests to ensure they are consistently treated as strings.

* fix test_converter

* fixed tests/agents/test_agent.py

* Refactor LLM context length exception handling and improve provider integration

- Renamed `LLMContextLengthExceededException` to `LLMContextLengthExceededExceptionError` for clarity and consistency.
- Updated LLM class to pass the provider parameter correctly during initialization.
- Enhanced error handling in various LLM provider implementations to raise the new exception type.
- Adjusted tests to reflect the updated exception name and ensure proper error handling in context length scenarios.

* Enhance LLM context window handling across providers

- Introduced CONTEXT_WINDOW_USAGE_RATIO to adjust context window sizes dynamically for Anthropic, Azure, Gemini, and OpenAI LLMs.
- Added validation for context window sizes in Azure and Gemini providers to ensure they fall within acceptable limits.
- Updated context window size calculations to use the new ratio, improving consistency and adaptability across different models.
- Removed hardcoded context window sizes in favor of ratio-based calculations for better flexibility.

* fix test agent again

* fix test agent

* feat: add native LLM providers for Anthropic, Azure, and Gemini

- Introduced new completion implementations for Anthropic, Azure, and Gemini, integrating their respective SDKs.
- Added utility functions for tool validation and extraction to support function calling across LLM providers.
- Enhanced context window management and token usage extraction for each provider.
- Created a common utility module for shared functionality among LLM providers.

* chore: update dependencies and improve context management

- Removed direct dependency on `litellm` from the main dependencies and added it under extras for better modularity.
- Updated the `litellm` dependency specification to allow for greater flexibility in versioning.
- Refactored context length exception handling across various LLM providers to use a consistent error class.
- Enhanced platform-specific dependency markers for NVIDIA packages to ensure compatibility across different systems.

* refactor(tests): update LLM instantiation to include is_litellm flag in test cases

- Modified multiple test cases in test_llm.py to set the is_litellm parameter to True when instantiating the LLM class.
- This change ensures that the tests are aligned with the latest LLM configuration requirements and improves consistency across test scenarios.
- Adjusted relevant assertions and comments to reflect the updated LLM behavior.

* linter

* linted

* revert constants

* fix(tests): correct type hint in expected model description

- Updated the expected description in the test_generate_model_description_dict_field function to use 'Dict' instead of 'dict' for consistency with type hinting conventions.
- This change ensures that the test accurately reflects the expected output format for model descriptions.

* refactor(llm): enhance LLM instantiation and error handling

- Updated the LLM class to include validation for the model parameter, ensuring it is a non-empty string.
- Improved error handling by logging warnings when the native SDK fails, allowing for a fallback to LiteLLM.
- Adjusted the instantiation of LLM in test cases to consistently include the is_litellm flag, aligning with recent changes in LLM configuration.
- Modified relevant tests to reflect these updates, ensuring better coverage and accuracy in testing scenarios.

* fixed test

* refactor(llm): enhance token usage tracking and add copy methods

- Updated the LLM class to track token usage and log callbacks in streaming mode, improving monitoring capabilities.
- Introduced shallow and deep copy methods for the LLM instance, allowing for better management of LLM configurations and parameters.
- Adjusted test cases to instantiate LLM with the is_litellm flag, ensuring alignment with recent changes in LLM configuration.

* refactor(tests): reorganize imports and enhance error messages in test cases

- Cleaned up import statements in test_crew.py for better organization and readability.
- Enhanced error messages in test cases to use `re.escape` for improved regex matching, ensuring more robust error handling.
- Adjusted comments for clarity and consistency across test scenarios.
- Ensured that all necessary modules are imported correctly to avoid potential runtime issues.

* feat: add base devtooling

* fix: ensure dep refs are updated for devtools

* fix: allow pre-release

* feat: allow release after tag

* feat: bump versions to 1.0.0a3 

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>

* fix: match tag and release title, ignore devtools build for pypi

* fix: allow failed pypi publish

* feat: introduce trigger listing and execution commands for local development (#3643)

* chore: exclude tests from ruff linting

* chore: exclude tests from GitHub Actions linter

* fix: replace print statements with logger in agent and memory handling

* chore: add noqa for intentional print in printer utility

* fix: resolve linting errors across codebase

* feat: update docs with new approach to consume Platform Actions (#3675)

* fix: remove duplicate line and add explicit env var

* feat: bump versions to 1.0.0a4 (#3686)

* Update triggers docs (#3678)

* docs: introduce triggers list & triggers run command

* docs: add KO triggers docs

* docs: ensure CREWAI_PLATFORM_INTEGRATION_TOKEN is mentioned on docs (#3687)

* Lorenze/bedrock llm (#3693)

* feat: add AWS Bedrock support and update dependencies

- Introduced BedrockCompletion class for AWS Bedrock integration in LLM.
- Added boto3 as a new dependency in both pyproject.toml and uv.lock.
- Updated LLM class to support Bedrock provider.
- Created new files for Bedrock provider implementation.

* using converse api

* converse

* linted

* refactor: update BedrockCompletion class to improve parameter handling

- Changed max_tokens from a fixed integer to an optional integer.
- Simplified model ID assignment by removing the inference profile mapping method.
- Cleaned up comments and unnecessary code related to tool specifications and model-specific parameters.

* feat: improve event bus thread safety and async support

Add thread-safe, async-compatible event bus with read–write locking and
handler dependency ordering. Remove blinker dependency and implement
direct dispatch. Improve type safety, error handling, and deterministic
event synchronization.

Refactor tests to auto-wait for async handlers, ensure clean teardown,
and add comprehensive concurrency coverage. Replace thread-local state
in AgentEvaluator with instance-based locking for correct cross-thread
access. Enhance tracing reliability and event finalization.

* feat: enhance OpenAICompletion class with additional client parameters (#3701)

* feat: enhance OpenAICompletion class with additional client parameters

- Added support for default_headers, default_query, and client_params in the OpenAICompletion class.
- Refactored client initialization to use a dedicated method for client parameter retrieval.
- Introduced new test cases to validate the correct usage of OpenAICompletion with various parameters.

* fix: correct test case for unsupported OpenAI model

- Updated the test_openai.py to ensure that the LLM instance is created before calling the method, maintaining proper error handling for unsupported models.
- This change ensures that the test accurately checks for the NotFoundError when an invalid model is specified.

* fix: enhance error handling in OpenAICompletion class

- Added specific exception handling for NotFoundError and APIConnectionError in the OpenAICompletion class to provide clearer error messages and improve logging.
- Updated the test case for unsupported models to ensure it raises a ValueError with the appropriate message when a non-existent model is specified.
- This change improves the robustness of the OpenAI API integration and enhances the clarity of error reporting.

* fix: improve test for unsupported OpenAI model handling

- Refactored the test case in test_openai.py to create the LLM instance after mocking the OpenAI client, ensuring proper error handling for unsupported models.
- This change enhances the clarity of the test by accurately checking for ValueError when a non-existent model is specified, aligning with recent improvements in error handling for the OpenAICompletion class.

* feat: bump versions to 1.0.0b1 (#3706)

* Lorenze/tools drop litellm (#3710)

* completely drop litellm and correctly pass config for qdrant

* feat: add support for additional embedding models in EmbeddingService

- Expanded the list of supported embedding models to include Google Vertex, Hugging Face, Jina, Ollama, OpenAI, Roboflow, Watson X, custom embeddings, Sentence Transformers, Text2Vec, OpenClip, and Instructor.
- This enhancement improves the versatility of the EmbeddingService by allowing integration with a wider range of embedding providers.

* fix: update collection parameter handling in CrewAIRagAdapter

- Changed the condition for setting vectors_config in the CrewAIRagAdapter to check for QdrantConfig instance instead of using hasattr. This improves type safety and ensures proper configuration handling for Qdrant integration.

* moved stagehand as optional dep (#3712)

* feat: bump versions to 1.0.0b2 (#3713)

* feat: enhance AnthropicCompletion class with additional client parame… (#3707)

* feat: enhance AnthropicCompletion class with additional client parameters and tool handling

- Added support for client_params in the AnthropicCompletion class to allow for additional client configuration.
- Refactored client initialization to use a dedicated method for retrieving client parameters.
- Implemented a new method to handle tool use conversation flow, ensuring proper execution and response handling.
- Introduced comprehensive test cases to validate the functionality of the AnthropicCompletion class, including tool use scenarios and parameter handling.

* drop print statements

* test: add fixture to mock ANTHROPIC_API_KEY for tests

- Introduced a pytest fixture to automatically mock the ANTHROPIC_API_KEY environment variable for all tests in the test_anthropic.py module.
- This change ensures that tests can run without requiring a real API key, improving test isolation and reliability.

* refactor: streamline streaming message handling in AnthropicCompletion class

- Removed the 'stream' parameter from the API call as it is set internally by the SDK.
- Simplified the handling of tool use events and response construction by extracting token usage from the final message.
- Enhanced the flow for managing tool use conversation, ensuring proper integration with the streaming API response.

* fix streaming here too

* fix: improve error handling in tool conversion for AnthropicCompletion class

- Enhanced exception handling during tool conversion by catching KeyError and ValueError.
- Added logging for conversion errors to aid in debugging and maintain robustness in tool integration.

* feat: enhance GeminiCompletion class with client parameter support (#3717)

* feat: enhance GeminiCompletion class with client parameter support

- Added support for client_params in the GeminiCompletion class to allow for additional client configuration.
- Refactored client initialization into a dedicated method for improved parameter handling.
- Introduced a new method to retrieve client parameters, ensuring compatibility with the base class.
- Enhanced error handling during client initialization to provide clearer messages for missing configuration.
- Updated documentation to reflect the changes in client parameter usage.

* add optional dependencies

* refactor: update test fixture to mock GOOGLE_API_KEY

- Renamed the fixture from `mock_anthropic_api_key` to `mock_google_api_key` to reflect the change in the environment variable being mocked.
- This update ensures that all tests in the module can run with a mocked GOOGLE_API_KEY, improving test isolation and reliability.

* fix tests

* feat: enhance BedrockCompletion class with advanced features

* feat: enhance BedrockCompletion class with advanced features and error handling

- Added support for guardrail configuration, additional model request fields, and custom response field paths in the BedrockCompletion class.
- Improved error handling for AWS exceptions and added token usage tracking with stop reason logging.
- Enhanced streaming response handling with comprehensive event management, including tool use and content block processing.
- Updated documentation to reflect new features and initialization parameters.
- Introduced a new test suite for BedrockCompletion to validate functionality and ensure robust integration with AWS Bedrock APIs.

* chore: add boto typing

* fix: use typing_extensions.Required for Python 3.10 compatibility

---------

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* feat: azure native tests

* feat: add Azure AI Inference support and related tests

- Introduced the `azure-ai-inference` package with version `1.0.0b9` and its dependencies in `uv.lock` and `pyproject.toml`.
- Added new test files for Azure LLM functionality, including tests for Azure completion and tool handling.
- Implemented comprehensive test cases to validate Azure-specific behavior and integration with the CrewAI framework.
- Enhanced the testing framework to mock Azure credentials and ensure proper isolation during tests.

* feat: enhance AzureCompletion class with Azure OpenAI support

- Added support for the Azure OpenAI endpoint in the AzureCompletion class, allowing for flexible endpoint configurations.
- Implemented endpoint validation and correction to ensure proper URL formats for Azure OpenAI deployments.
- Enhanced error handling to provide clearer messages for common HTTP errors, including authentication and rate limit issues.
- Updated tests to validate the new endpoint handling and error messaging, ensuring robust integration with Azure AI Inference.
- Refactored parameter preparation to conditionally include the model parameter based on the endpoint type.

* refactor: convert project module to metaclass with full typing

* Lorenze/OpenAI base url backwards support (#3723)

* fix: enhance OpenAICompletion class base URL handling

- Updated the base URL assignment in the OpenAICompletion class to prioritize the new `api_base` attribute and fallback to the environment variable `OPENAI_BASE_URL` if both are not set.
- Added `api_base` to the list of parameters in the OpenAICompletion class to ensure proper configuration and flexibility in API endpoint management.

* feat: enhance OpenAICompletion class with api_base support

- Added the `api_base` parameter to the OpenAICompletion class to allow for flexible API endpoint configuration.
- Updated the `_get_client_params` method to prioritize `base_url` over `api_base`, ensuring correct URL handling.
- Introduced comprehensive tests to validate the behavior of `api_base` and `base_url` in various scenarios, including environment variable fallback.
- Enhanced test coverage for client parameter retrieval, ensuring robust integration with the OpenAI API.

* fix: improve OpenAICompletion class configuration handling

- Added a debug print statement to log the client configuration parameters during initialization for better traceability.
- Updated the base URL assignment logic to ensure it defaults to None if no valid base URL is provided, enhancing robustness in API endpoint configuration.
- Refined the retrieval of the `api_base` environment variable to streamline the configuration process.

* drop print

* feat: improvements on import native sdk support (#3725)

* feat: add support for Anthropic provider and enhance logging

- Introduced the `anthropic` package with version `0.69.0` in `pyproject.toml` and `uv.lock`, allowing for integration with the Anthropic API.
- Updated logging in the LLM class to provide clearer error messages when importing native providers, enhancing debugging capabilities.
- Improved error handling in the AnthropicCompletion class to guide users on installation via the updated error message format.
- Refactored import error handling in other provider classes to maintain consistency in error messaging and installation instructions.

* feat: enhance LLM support with Bedrock provider and update dependencies

- Added support for the `bedrock` provider in the LLM class, allowing integration with AWS Bedrock APIs.
- Updated `uv.lock` to replace `boto3` with `bedrock` in the dependencies, reflecting the new provider structure.
- Introduced `SUPPORTED_NATIVE_PROVIDERS` to include `bedrock` and ensure proper error handling when instantiating native providers.
- Enhanced error handling in the LLM class to raise informative errors when native provider instantiation fails.
- Added tests to validate the behavior of the new Bedrock provider and ensure fallback mechanisms work correctly for unsupported providers.

* test: update native provider fallback tests to expect ImportError

* adjust the test with the expected behavior - raising ImportError

* this is expecting the litellm format, all gemini native tests are in test_google.py

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>

* fix: remove stdout prints, improve test determinism, and update trace handling

Removed `print` statements from the `LLMStreamChunkEvent` handler to prevent
LLM response chunks from being written directly to stdout. The listener now
only tracks chunks internally.

Fixes #3715

Added explicit return statements for trace-related tests.

Updated cassette for `test_failed_evaluation` to reflect new behavior where
an empty trace dict is used instead of returning early.

Ensured deterministic cleanup order in test fixtures by making
`clear_event_bus_handlers` depend on `setup_test_environment`. This guarantees
event bus shutdown and file handle cleanup occur before temporary directory
deletion, resolving intermittent “Directory not empty” errors in CI.

* chore: remove lib/crewai exclusion from pre-commit hooks

* feat: enhance task guardrail functionality and validation

* feat: enhance task guardrail functionality and validation

- Introduced support for multiple guardrails in the Task class, allowing for sequential processing of guardrails.
- Added a new `guardrails` field to the Task model to accept a list of callable guardrails or string descriptions.
- Implemented validation to ensure guardrails are processed correctly, including handling of retries and error messages.
- Enhanced the `_invoke_guardrail_function` method to manage guardrail execution and integrate with existing task output processing.
- Updated tests to cover various scenarios involving multiple guardrails, including success, failure, and retry mechanisms.

This update improves the flexibility and robustness of task execution by allowing for more complex validation scenarios.

* refactor: enhance guardrail type handling in Task model

- Updated the Task class to improve guardrail type definitions, introducing GuardrailType and GuardrailsType for better clarity and type safety.
- Simplified the validation logic for guardrails, ensuring that both single and multiple guardrails are processed correctly.
- Enhanced error messages for guardrail validation to provide clearer feedback when incorrect types are provided.
- This refactor improves the maintainability and robustness of task execution by standardizing guardrail handling.

* feat: implement per-guardrail retry tracking in Task model

- Introduced a new private attribute `_guardrail_retry_counts` to the Task class for tracking retry attempts on a per-guardrail basis.
- Updated the guardrail processing logic to utilize the new retry tracking, allowing for independent retry counts for each guardrail.
- Enhanced error handling to provide clearer feedback when guardrails fail validation after exceeding retry limits.
- Modified existing tests to validate the new retry tracking behavior, ensuring accurate assertions on guardrail retries.

This update improves the robustness and flexibility of task execution by allowing for more granular control over guardrail validation and retry mechanisms.

* chore: 1.0.0b3 bump (#3734)

* chore: full ruff and mypy

Improved linting, pre-commit setup, and internal architecture:
- Configured Ruff to respect .gitignore, added stricter rules, and introduced a lock pre-commit hook with virtualenv activation.
- Fixed type shadowing in EXASearchTool using a type_ alias to avoid PEP 563 conflicts, and resolved circular imports in the agent executor and guardrail modules.
- Removed agent-ops attributes, deprecated the watson alias, and dropped crewai-enterprise tools with corresponding test updates.
- Refactored cache and memoization for thread safety, and cleaned up structured output adapters and related logic.

New MCP DSL (#3738)

* Adding MCP implementation

* New tests for MCP implementation

* fix tests

* update docs

* Revert "New tests for MCP implementation"

This reverts commit 0bbe6dee90.

* linter

* linter

* fix

* verify mcp package exists

* adjust docs to be clear only remote servers are supported

* reverted

* ensure args schema generated properly

* properly close out

---------

Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* feat: a2a experimental

experimental a2a support

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
Co-authored-by: Mike Plachta <mplachta@users.noreply.github.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2025-10-20 14:10:19 -07:00
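The read-write locking mentioned in the event-bus work above can be sketched as a classic readers-writer lock. This is a minimal illustration, not the crewai implementation:

```python
import threading

class ReadWriteLock:
    """Reader-preferring read-write lock; a minimal illustration only."""

    def __init__(self) -> None:
        self._readers = 0
        self._readers_lock = threading.Lock()  # guards the reader count
        self._writer_lock = threading.Lock()   # held while anyone writes

    def acquire_read(self) -> None:
        with self._readers_lock:
            self._readers += 1
            if self._readers == 1:             # first reader blocks writers
                self._writer_lock.acquire()

    def release_read(self) -> None:
        with self._readers_lock:
            self._readers -= 1
            if self._readers == 0:             # last reader unblocks writers
                self._writer_lock.release()

    def acquire_write(self) -> None:
        self._writer_lock.acquire()

    def release_write(self) -> None:
        self._writer_lock.release()
```

Handlers that only read subscription tables would take the read side, while registration and teardown take the write side, so dispatch can proceed concurrently with reads.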
Greyson LaLonde
42f2b4d551 fix: preserve nested condition structure in Flow decorators
Fixes nested boolean conditions being flattened in @listen, @start, and @router decorators. The or_() and and_() combinators now preserve their nested structure using a "conditions" key instead of flattening to a list. Added recursive evaluation logic to properly handle complex patterns like or_(and_(A, B), and_(C, D)).
2025-10-17 17:06:19 -04:00
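The preserved nesting can be exercised with exactly the pattern the message cites; a small sketch using crewai's flow decorators, with placeholder method bodies:

```python
from crewai.flow.flow import Flow, and_, listen, or_, start

class ApprovalFlow(Flow):
    @start()
    def a(self): ...

    @start()
    def b(self): ...

    @start()
    def c(self): ...

    @start()
    def d(self): ...

    # Fires when (a AND b) OR (c AND d) have completed; the nested
    # structure is preserved instead of being flattened to [a, b, c, d].
    @listen(or_(and_(a, b), and_(c, d)))
    def proceed(self): ...
```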
Greyson LaLonde
0229390ad1 fix: add standard print parameters to Printer.print method
- Adds sep, end, file, and flush parameters to match Python's built-in print function signature.
2025-10-17 15:27:22 -04:00
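A rough sketch of the extended signature, omitting the printer's existing color handling and assuming the method simply forwards to the built-in print:

```python
import sys
from typing import Any, TextIO

class Printer:
    def print(
        self,
        *values: Any,
        sep: str = " ",
        end: str = "\n",
        file: TextIO | None = None,
        flush: bool = False,
    ) -> None:
        """Mirror the built-in print signature and forward to it."""
        print(*values, sep=sep, end=end, file=file or sys.stdout, flush=flush)
```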
590 changed files with 52958 additions and 19745 deletions

View File

@@ -2,20 +2,27 @@ name: "CodeQL Config"
paths-ignore:
# Ignore template files - these are boilerplate code that shouldn't be analyzed
- "src/crewai/cli/templates/**"
- "lib/crewai/src/crewai/cli/templates/**"
# Ignore test cassettes - these are test fixtures/recordings
- "tests/cassettes/**"
- "lib/crewai/tests/cassettes/**"
- "lib/crewai-tools/tests/cassettes/**"
# Ignore cache and build artifacts
- ".cache/**"
# Ignore documentation build artifacts
- "docs/.cache/**"
# Ignore experimental code
- "lib/crewai/src/crewai/experimental/a2a/**"
paths:
# Include all Python source code
- "src/**"
# Include tests (but exclude cassettes)
- "tests/**"
# Include all Python source code from workspace packages
- "lib/crewai/src/**"
- "lib/crewai-tools/src/**"
- "lib/devtools/src/**"
# Include tests (but exclude cassettes via paths-ignore)
- "lib/crewai/tests/**"
- "lib/crewai-tools/tests/**"
- "lib/devtools/tests/**"
# Configure specific queries or packs if needed
# queries:
# - uses: security-and-quality
# - uses: security-and-quality

View File

@@ -7,7 +7,6 @@ on:
jobs:
build:
if: github.event.release.prerelease == true
name: Build packages
runs-on: ubuntu-latest
permissions:
@@ -25,7 +24,7 @@ jobs:
- name: Build packages
run: |
uv build --prerelease="allow" --all-packages
uv build --all-packages
rm dist/.gitignore
- name: Upload artifacts
@@ -35,7 +34,6 @@ jobs:
path: dist/
publish:
if: github.event.release.prerelease == true
name: Publish to PyPI
needs: build
runs-on: ubuntu-latest

View File

@@ -3,19 +3,25 @@ repos:
hooks:
- id: ruff
name: ruff
entry: uv run ruff check
entry: bash -c 'source .venv/bin/activate && uv run ruff check --config pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
exclude: ^lib/crewai/
- id: ruff-format
name: ruff-format
entry: uv run ruff format
entry: bash -c 'source .venv/bin/activate && uv run ruff format --config pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
exclude: ^lib/crewai/
- id: mypy
name: mypy
entry: uv run mypy
entry: bash -c 'source .venv/bin/activate && uv run mypy --config-file pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
exclude: ^lib/crewai/
exclude: ^(lib/crewai/src/crewai/cli/templates/|lib/crewai/tests/|lib/crewai-tools/tests/)
- repo: https://github.com/astral-sh/uv-pre-commit
rev: 0.9.3
hooks:
- id: uv-lock

View File

@@ -134,6 +134,7 @@
"group": "MCP Integration",
"pages": [
"en/mcp/overview",
"en/mcp/dsl-integration",
"en/mcp/stdio",
"en/mcp/sse",
"en/mcp/streamable-http",
@@ -275,6 +276,7 @@
"en/observability/overview",
"en/observability/arize-phoenix",
"en/observability/braintrust",
"en/observability/datadog",
"en/observability/langdb",
"en/observability/langfuse",
"en/observability/langtrace",
@@ -570,6 +572,7 @@
"group": "Integração MCP",
"pages": [
"pt-BR/mcp/overview",
"pt-BR/mcp/dsl-integration",
"pt-BR/mcp/stdio",
"pt-BR/mcp/sse",
"pt-BR/mcp/streamable-http",
@@ -698,6 +701,7 @@
"pt-BR/observability/overview",
"pt-BR/observability/arize-phoenix",
"pt-BR/observability/braintrust",
"pt-BR/observability/datadog",
"pt-BR/observability/langdb",
"pt-BR/observability/langfuse",
"pt-BR/observability/langtrace",
@@ -989,6 +993,7 @@
"group": "MCP 통합",
"pages": [
"ko/mcp/overview",
"ko/mcp/dsl-integration",
"ko/mcp/stdio",
"ko/mcp/sse",
"ko/mcp/streamable-http",
@@ -1129,6 +1134,7 @@
"ko/observability/overview",
"ko/observability/arize-phoenix",
"ko/observability/braintrust",
"ko/observability/datadog",
"ko/observability/langdb",
"ko/observability/langfuse",
"ko/observability/langtrace",

View File

@@ -7,7 +7,7 @@ mode: "wide"
## Overview
CrewAI integrates with multiple LLM providers through LiteLLM, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
CrewAI integrates with multiple LLM providers through each provider's native SDK, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
## What are LLMs?
@@ -113,44 +113,104 @@ In this section, you'll find detailed examples that help you select, configure,
<AccordionGroup>
<Accordion title="OpenAI">
Set the following environment variables in your `.env` file:
CrewAI provides native integration with OpenAI through the OpenAI Python SDK.
```toml Code
# Required
OPENAI_API_KEY=sk-...
# Optional
OPENAI_API_BASE=<custom-base-url>
OPENAI_ORGANIZATION=<your-org-id>
OPENAI_BASE_URL=<custom-base-url>
```
Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM
llm = LLM(
model="openai/gpt-4", # call model by provider/model_name
temperature=0.8,
max_tokens=150,
model="openai/gpt-4o",
api_key="your-api-key", # Or set OPENAI_API_KEY
temperature=0.7,
max_tokens=4000
)
```
**Advanced Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="openai/gpt-4o",
api_key="your-api-key",
base_url="https://api.openai.com/v1", # Optional custom endpoint
organization="org-...", # Optional organization ID
project="proj_...", # Optional project ID
temperature=0.7,
max_tokens=4000,
max_completion_tokens=4000, # For newer models
top_p=0.9,
frequency_penalty=0.1,
presence_penalty=0.1,
stop=["END"],
seed=42
seed=42, # For reproducible outputs
stream=True, # Enable streaming
timeout=60.0, # Request timeout in seconds
max_retries=3, # Maximum retry attempts
logprobs=True, # Return log probabilities
top_logprobs=5, # Number of most likely tokens
reasoning_effort="medium" # For o1 models: low, medium, high
)
```
OpenAI is one of the leading providers of LLMs with a wide range of models and features.
**Structured Outputs:**
```python Code
from pydantic import BaseModel
from crewai import LLM
class ResponseFormat(BaseModel):
name: str
age: int
summary: str
llm = LLM(
model="openai/gpt-4o",
response_format=ResponseFormat
)
```
**Supported Environment Variables:**
- `OPENAI_API_KEY`: Your OpenAI API key (required)
- `OPENAI_BASE_URL`: Custom base URL for OpenAI API (optional)
**Features:**
- Native function calling support (except o1 models)
- Structured outputs with JSON schema
- Streaming support for real-time responses
- Token usage tracking
- Stop sequences support (except o1 models)
- Log probabilities for token-level insights
- Reasoning effort control for o1 models
**Supported Models:**
| Model | Context Window | Best For |
|---------------------|------------------|-----------------------------------------------|
| GPT-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| GPT-4 Turbo | 128,000 tokens | Long-form content, document analysis |
| GPT-4o & GPT-4o-mini | 128,000 tokens | Cost-effective large context processing |
| o3-mini | 200,000 tokens | Fast reasoning, complex reasoning |
| o1-mini | 128,000 tokens | Fast reasoning, complex reasoning |
| o1-preview | 128,000 tokens | Fast reasoning, complex reasoning |
| o1 | 200,000 tokens | Fast reasoning, complex reasoning |
| gpt-4.1 | 1M tokens | Latest model with enhanced capabilities |
| gpt-4.1-mini | 1M tokens | Efficient version with large context |
| gpt-4.1-nano | 1M tokens | Ultra-efficient variant |
| gpt-4o | 128,000 tokens | Optimized for speed and intelligence |
| gpt-4o-mini | 200,000 tokens | Cost-effective with large context |
| gpt-4-turbo | 128,000 tokens | Long-form content, document analysis |
| gpt-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| o1 | 200,000 tokens | Advanced reasoning, complex problem-solving |
| o1-preview | 128,000 tokens | Preview of reasoning capabilities |
| o1-mini | 128,000 tokens | Efficient reasoning model |
| o3-mini | 200,000 tokens | Lightweight reasoning model |
| o4-mini | 200,000 tokens | Next-gen efficient reasoning |
**Note:** To use OpenAI, install the required dependencies:
```bash
uv add "crewai[openai]"
```
</Accordion>
<Accordion title="Meta-Llama">
@@ -187,69 +247,186 @@ In this section, you'll find detailed examples that help you select, configure,
</Accordion>
<Accordion title="Anthropic">
CrewAI provides native integration with Anthropic through the Anthropic Python SDK.
```toml Code
# Required
ANTHROPIC_API_KEY=sk-ant-...
# Optional
ANTHROPIC_API_BASE=<custom-base-url>
```
Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM
llm = LLM(
model="anthropic/claude-3-sonnet-20240229-v1:0",
temperature=0.7
model="anthropic/claude-3-5-sonnet-20241022",
api_key="your-api-key", # Or set ANTHROPIC_API_KEY
max_tokens=4096 # Required for Anthropic
)
```
**Advanced Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="anthropic/claude-3-5-sonnet-20241022",
api_key="your-api-key",
base_url="https://api.anthropic.com", # Optional custom endpoint
temperature=0.7,
max_tokens=4096, # Required parameter
top_p=0.9,
stop_sequences=["END", "STOP"], # Anthropic uses stop_sequences
stream=True, # Enable streaming
timeout=60.0, # Request timeout in seconds
max_retries=3 # Maximum retry attempts
)
```
**Supported Environment Variables:**
- `ANTHROPIC_API_KEY`: Your Anthropic API key (required)
**Features:**
- Native tool use support for Claude 3+ models
- Streaming support for real-time responses
- Automatic system message handling
- Stop sequences for controlled output
- Token usage tracking
- Multi-turn tool use conversations
**Important Notes:**
- `max_tokens` is a **required** parameter for all Anthropic models
- Claude uses `stop_sequences` instead of `stop`
- System messages are handled separately from conversation messages
- First message must be from the user (automatically handled)
- Messages must alternate between user and assistant
**Supported Models:**
| Model | Context Window | Best For |
|------------------------------|----------------|-----------------------------------------------|
| claude-3-7-sonnet | 200,000 tokens | Advanced reasoning and agentic tasks |
| claude-3-5-sonnet-20241022 | 200,000 tokens | Latest Sonnet with best performance |
| claude-3-5-haiku | 200,000 tokens | Fast, compact model for quick responses |
| claude-3-opus | 200,000 tokens | Most capable for complex tasks |
| claude-3-sonnet | 200,000 tokens | Balanced intelligence and speed |
| claude-3-haiku | 200,000 tokens | Fastest for simple tasks |
| claude-2.1 | 200,000 tokens | Extended context, reduced hallucinations |
| claude-2 | 100,000 tokens | Versatile model for various tasks |
| claude-instant | 100,000 tokens | Fast, cost-effective for everyday tasks |
**Note:** To use Anthropic, install the required dependencies:
```bash
uv add "crewai[anthropic]"
```
</Accordion>
<Accordion title="Google (Gemini API)">
Set your API key in your `.env` file. If you need a key, or need to find an
existing key, check [AI Studio](https://aistudio.google.com/apikey).
CrewAI provides native integration with Google Gemini through the Google Gen AI Python SDK.
Set your API key in your `.env` file. If you need a key, check [AI Studio](https://aistudio.google.com/apikey).
```toml .env
# https://ai.google.dev/gemini-api/docs/api-key
# Required (one of the following)
GOOGLE_API_KEY=<your-api-key>
GEMINI_API_KEY=<your-api-key>
# Optional - for Vertex AI
GOOGLE_CLOUD_PROJECT=<your-project-id>
GOOGLE_CLOUD_LOCATION=<location> # Defaults to us-central1
GOOGLE_GENAI_USE_VERTEXAI=true # Set to use Vertex AI
```
Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-2.0-flash",
temperature=0.7,
api_key="your-api-key", # Or set GOOGLE_API_KEY/GEMINI_API_KEY
temperature=0.7
)
```
### Gemini models
**Advanced Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-2.5-flash",
api_key="your-api-key",
temperature=0.7,
top_p=0.9,
top_k=40, # Top-k sampling parameter
max_output_tokens=8192,
stop_sequences=["END", "STOP"],
stream=True, # Enable streaming
safety_settings={
"HARM_CATEGORY_HARASSMENT": "BLOCK_NONE",
"HARM_CATEGORY_HATE_SPEECH": "BLOCK_NONE"
}
)
```
**Vertex AI Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-1.5-pro",
project="your-gcp-project-id",
location="us-central1" # GCP region
)
```
**Supported Environment Variables:**
- `GOOGLE_API_KEY` or `GEMINI_API_KEY`: Your Google API key (required for Gemini API)
- `GOOGLE_CLOUD_PROJECT`: Google Cloud project ID (for Vertex AI)
- `GOOGLE_CLOUD_LOCATION`: GCP location (defaults to `us-central1`)
- `GOOGLE_GENAI_USE_VERTEXAI`: Set to `true` to use Vertex AI
**Features:**
- Native function calling support for Gemini 1.5+ and 2.x models
- Streaming support for real-time responses
- Multimodal capabilities (text, images, video)
- Safety settings configuration
- Support for both Gemini API and Vertex AI
- Automatic system instruction handling
- Token usage tracking
**Gemini Models:**
Google offers a range of powerful models optimized for different use cases.
| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash               | 1M tokens      | Adaptive thinking, cost efficiency                                |
| gemini-2.5-pro                 | 1M tokens      | Enhanced thinking and reasoning, multimodal understanding, advanced coding |
| gemini-2.0-flash               | 1M tokens      | Next-generation features, speed, thinking, and realtime streaming |
| gemini-2.0-flash-thinking      | 32,768 tokens  | Advanced reasoning with thinking process                          |
| gemini-2.0-flash-lite          | 1M tokens      | Cost efficiency and low latency                                   |
| gemini-1.5-pro                 | 2M tokens      | Best performing; wide range of reasoning tasks including logical reasoning, coding, and creative collaboration |
| gemini-1.5-flash               | 1M tokens      | Balanced multimodal model, good for most tasks                    |
| gemini-1.5-flash-8b            | 1M tokens      | Fastest, most cost-efficient, good for high-frequency tasks       |
| gemini-1.0-pro                 | 32,768 tokens  | Earlier generation model                                          |
**Gemma Models:**
The Gemini API also supports [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.
| Model | Context Window | Best For |
|----------------|----------------|------------------------------------|
| gemma-3-1b | 32,000 tokens | Ultra-lightweight tasks |
| gemma-3-4b | 128,000 tokens | Efficient general-purpose tasks |
| gemma-3-12b | 128,000 tokens | Balanced performance and efficiency|
| gemma-3-27b | 128,000 tokens | High-performance tasks |
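Gemma models use the same `gemini/` prefix as Gemini models. A minimal sketch (the exact identifier is an assumption; depending on the API version it may require the `-it` suffix, e.g. `gemma-3-27b-it`):
```python Code
from crewai import LLM

llm = LLM(
    model="gemini/gemma-3-27b-it",
    api_key="your-api-key",  # Or set GOOGLE_API_KEY/GEMINI_API_KEY
    temperature=0.7
)
```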
**Note:** To use Google Gemini, install the required dependencies:
```bash
uv add "crewai[google-genai]"
```
The full list of models is available in the [Gemini model docs](https://ai.google.dev/gemini-api/docs/models).
</Accordion>
<Accordion title="Google (Vertex AI)">
Get credentials from your Google Cloud Console and save it to a JSON file, then load it with the following code:
@@ -291,43 +468,146 @@ In this section, you'll find detailed examples that help you select, configure,
</Accordion>
<Accordion title="Azure">
CrewAI provides native integration with Azure AI Inference and Azure OpenAI through the Azure AI Inference Python SDK.
```toml Code
# Required
AZURE_API_KEY=<your-api-key>
AZURE_ENDPOINT=<your-endpoint-url>
# Optional
AZURE_API_VERSION=<api-version> # Defaults to 2024-06-01
```
Example usage in your CrewAI project:
**Endpoint URL Formats:**
For Azure OpenAI deployments:
```
https://<resource-name>.openai.azure.com/openai/deployments/<deployment-name>
```
For Azure AI Inference endpoints:
```
https://<resource-name>.inference.azure.com
```
**Basic Usage:**
```python Code
from crewai import LLM
llm = LLM(
    model="azure/gpt-4",
    api_key="<your-api-key>", # Or set AZURE_API_KEY
    endpoint="<your-endpoint-url>",
    api_version="2024-06-01"
)
```
**Advanced Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="azure/gpt-4o",
temperature=0.7,
max_tokens=4000,
top_p=0.9,
frequency_penalty=0.0,
presence_penalty=0.0,
stop=["END"],
stream=True,
timeout=60.0,
max_retries=3
)
```
**Supported Environment Variables:**
- `AZURE_API_KEY`: Your Azure API key (required)
- `AZURE_ENDPOINT`: Your Azure endpoint URL (required, also checks `AZURE_OPENAI_ENDPOINT` and `AZURE_API_BASE`)
- `AZURE_API_VERSION`: API version (optional, defaults to `2024-06-01`)
**Features:**
- Native function calling support for Azure OpenAI models (gpt-4, gpt-4o, gpt-3.5-turbo, etc.)
- Streaming support for real-time responses
- Automatic endpoint URL validation and correction
- Comprehensive error handling with retry logic
- Token usage tracking
**Note:** To use Azure AI Inference, install the required dependencies:
```bash
uv add "crewai[azure-ai-inference]"
```
</Accordion>
<Accordion title="AWS Bedrock">
CrewAI provides native integration with AWS Bedrock through the boto3 SDK using the Converse API.
```toml Code
# Required
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
# Optional
AWS_SESSION_TOKEN=<your-session-token> # For temporary credentials
AWS_DEFAULT_REGION=<your-region> # Defaults to us-east-1
```
Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM
llm = LLM(
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
region_name="us-east-1"
)
```
**Advanced Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
aws_access_key_id="your-access-key", # Or set AWS_ACCESS_KEY_ID
aws_secret_access_key="your-secret-key", # Or set AWS_SECRET_ACCESS_KEY
aws_session_token="your-session-token", # For temporary credentials
region_name="us-east-1",
temperature=0.7,
max_tokens=4096,
top_p=0.9,
top_k=250, # For Claude models
stop_sequences=["END", "STOP"],
stream=True, # Enable streaming
guardrail_config={ # Optional content filtering
"guardrailIdentifier": "your-guardrail-id",
"guardrailVersion": "1"
},
additional_model_request_fields={ # Model-specific parameters
"top_k": 250
}
)
```
**Supported Environment Variables:**
- `AWS_ACCESS_KEY_ID`: AWS access key (required)
- `AWS_SECRET_ACCESS_KEY`: AWS secret key (required)
- `AWS_SESSION_TOKEN`: AWS session token for temporary credentials (optional)
- `AWS_DEFAULT_REGION`: AWS region (defaults to `us-east-1`)
**Features:**
- Native tool calling support via Converse API
- Streaming and non-streaming responses
- Comprehensive error handling with retry logic
- Guardrail configuration for content filtering
- Model-specific parameters via `additional_model_request_fields`
- Token usage tracking and stop reason logging
- Support for all Bedrock foundation models
- Automatic conversation format handling
**Important Notes:**
- Uses the modern Converse API for unified model access
- Automatic handling of model-specific conversation requirements
- System messages are handled separately from conversation
- First message must be from user (automatically handled)
- Some models (like Cohere) require conversation to end with user message
[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API.
| Model | Context Window | Best For |
|-------------------------|----------------------|-------------------------------------------------------------------|
@@ -357,7 +637,12 @@ In this section, you'll find detailed examples that help you select, configure,
| Jamba-Instruct | Up to 256k tokens | Model with extended context window optimized for cost-effective text generation, summarization, and Q&A. |
| Mistral 7B Instruct | Up to 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| Mistral 8x7B Instruct | Up to 32k tokens | An MOE LLM that follows instructions, completes requests, and generates creative text. |
| DeepSeek R1 | 32,768 tokens | Advanced reasoning model |
**Note:** To use AWS Bedrock, install the required dependencies:
```bash
uv add "crewai[bedrock]"
```
</Accordion>
<Accordion title="Amazon SageMaker">
@@ -899,7 +1184,7 @@ Learn how to get the most out of your LLM configuration:
</Accordion>
<Accordion title="Drop Additional Parameters">
CrewAI internally uses native SDKs for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
For example, if you don't need to send the <code>stop</code> parameter, you can simply omit it from your LLM call:
```python
@@ -915,6 +1200,52 @@ Learn how to get the most out of your LLM configuration:
)
```
</Accordion>
<Accordion title="Transport Interceptors">
CrewAI provides message interceptors for several providers, allowing you to hook into request/response cycles at the transport layer.
**Supported Providers:**
- ✅ OpenAI
- ✅ Anthropic
**Basic Usage:**
```python
import httpx
from crewai import LLM
from crewai.llms.hooks import BaseInterceptor
class CustomInterceptor(BaseInterceptor[httpx.Request, httpx.Response]):
"""Custom interceptor to modify requests and responses."""
def on_outbound(self, request: httpx.Request) -> httpx.Request:
"""Print request before sending to the LLM provider."""
print(request)
return request
def on_inbound(self, response: httpx.Response) -> httpx.Response:
"""Process response after receiving from the LLM provider."""
print(f"Status: {response.status_code}")
print(f"Response time: {response.elapsed}")
return response
# Use the interceptor with an LLM
llm = LLM(
model="openai/gpt-4o",
interceptor=CustomInterceptor()
)
```
**Important Notes:**
- Both methods must return an object of the same type as the one they received.
- Modifying received objects may result in unexpected behavior or application crashes.
- Not all providers support interceptors; check the supported providers list above.
<Info>
Interceptors operate at the transport layer. This is particularly useful for:
- Message transformation and filtering
- Debugging API interactions
</Info>
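As a concrete example of outbound transformation, the sketch below tags every request with a correlation header. Whether a given provider tolerates mutated requests is an assumption, so test carefully:
```python
import httpx
from crewai import LLM
from crewai.llms.hooks import BaseInterceptor

class TracingInterceptor(BaseInterceptor[httpx.Request, httpx.Response]):
    """Adds a correlation header to every outgoing LLM request."""

    def on_outbound(self, request: httpx.Request) -> httpx.Request:
        # "x-trace-id" is an illustrative header name, not a provider requirement
        request.headers["x-trace-id"] = "crew-run-123"
        return request

    def on_inbound(self, response: httpx.Response) -> httpx.Response:
        return response

llm = LLM(model="openai/gpt-4o", interceptor=TracingInterceptor())
```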
</Accordion>
</AccordionGroup>
## Common Issues and Solutions
@@ -33,6 +33,22 @@ Before using the Asana integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
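With the token set, the integration can be attached to an agent. A minimal sketch (the `"asana"` app identifier is an assumption; check the actions below for the canonical name):
```python
from crewai import Agent

agent = Agent(
    role="Project Coordinator",
    goal="Keep Asana projects up to date",
    backstory="An organized coordinator who manages tasks in Asana",
    apps=["asana"]  # assumed identifier, for illustration only
)
```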
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the Box integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the ClickUp integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the GitHub integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the Gmail integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the Google Calendar integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the Google Contacts integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the Google Docs integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the Google Drive integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -34,6 +34,22 @@ Before using the Google Sheets integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the Google Slides integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the HubSpot integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the Jira integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the Linear integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the Microsoft Excel integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the Microsoft OneDrive integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the Microsoft Outlook integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the Microsoft SharePoint integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the Microsoft Teams integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the Microsoft Word integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -33,6 +33,22 @@ Before using the Notion integration, ensure you have:
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
@@ -17,6 +17,38 @@ Before using the Salesforce integration, ensure you have:
- A Salesforce account with appropriate permissions
- Connected your Salesforce account through the [Integrations page](https://app.crewai.com/integrations)
## Setting Up Salesforce Integration
### 1. Connect Your Salesforce Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Salesforce** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for CRM and sales management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Tools
### **Record Management**
@@ -17,6 +17,38 @@ Before using the Shopify integration, ensure you have:
- A Shopify store with appropriate admin permissions
- Connected your Shopify store through the [Integrations page](https://app.crewai.com/integrations)
## Setting Up Shopify Integration
### 1. Connect Your Shopify Store
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Shopify** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for store and product management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Tools
### **Customer Management**
@@ -17,6 +17,38 @@ Before using the Slack integration, ensure you have:
- A Slack workspace with appropriate permissions
- Connected your Slack workspace through the [Integrations page](https://app.crewai.com/integrations)
## Setting Up Slack Integration
### 1. Connect Your Slack Workspace
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Slack** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for team communication
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Tools
### **User Management**
@@ -17,6 +17,38 @@ Before using the Stripe integration, ensure you have:
- A Stripe account with appropriate API permissions
- Connected your Stripe account through the [Integrations page](https://app.crewai.com/integrations)
## Setting Up Stripe Integration
### 1. Connect Your Stripe Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Stripe** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for payment processing
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Tools
### **Customer Management**
@@ -17,6 +17,38 @@ Before using the Zendesk integration, ensure you have:
- A Zendesk account with appropriate API permissions
- Connected your Zendesk account through the [Integrations page](https://app.crewai.com/integrations)
## Setting Up Zendesk Integration
### 1. Connect Your Zendesk Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Zendesk** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for ticket and user management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Tools
### **Ticket Management**
@@ -0,0 +1,291 @@
---
title: Agent-to-Agent (A2A) Protocol
description: Enable CrewAI agents to delegate tasks to remote A2A-compliant agents for specialized handling
icon: network-wired
mode: "wide"
---
## A2A Agent Delegation
CrewAI supports the Agent-to-Agent (A2A) protocol, allowing agents to delegate tasks to remote specialized agents. The agent's LLM automatically decides whether to handle a task directly or delegate to an A2A agent based on the task requirements.
<Note>
A2A delegation requires the `a2a-sdk` package. Install with: `uv add 'crewai[a2a]'` or `pip install 'crewai[a2a]'`
</Note>
## How It Works
When an agent is configured with A2A capabilities:
1. The LLM analyzes each task
2. It decides to either:
- Handle the task directly using its own capabilities
- Delegate to a remote A2A agent for specialized handling
3. If delegating, the agent communicates with the remote A2A agent through the protocol
4. Results are returned to the CrewAI workflow
## Basic Configuration
Configure an agent for A2A delegation by setting the `a2a` parameter:
```python Code
from crewai import Agent, Crew, Task
from crewai.a2a import A2AConfig
agent = Agent(
role="Research Coordinator",
goal="Coordinate research tasks efficiently",
backstory="Expert at delegating to specialized research agents",
llm="gpt-4o",
a2a=A2AConfig(
endpoint="https://example.com/.well-known/agent-card.json",
timeout=120,
max_turns=10
)
)
task = Task(
description="Research the latest developments in quantum computing",
expected_output="A comprehensive research report",
agent=agent
)
crew = Crew(agents=[agent], tasks=[task], verbose=True)
result = crew.kickoff()
```
## Configuration Options
The `A2AConfig` class accepts the following parameters:
<ParamField path="endpoint" type="str" required>
The A2A agent endpoint URL (typically points to `.well-known/agent-card.json`)
</ParamField>
<ParamField path="auth" type="AuthScheme" default="None">
Authentication scheme for the A2A agent. Supports Bearer tokens, OAuth2, API keys, and HTTP authentication.
</ParamField>
<ParamField path="timeout" type="int" default="120">
Request timeout in seconds
</ParamField>
<ParamField path="max_turns" type="int" default="10">
Maximum number of conversation turns with the A2A agent
</ParamField>
<ParamField path="response_model" type="type[BaseModel]" default="None">
Optional Pydantic model for requesting structured output from an A2A agent. The A2A protocol does not
enforce this, so the remote agent is free to ignore the request.
</ParamField>
<ParamField path="fail_fast" type="bool" default="True">
Whether to raise an error immediately if agent connection fails. When `False`, the agent continues with available agents and informs the LLM about unavailable ones.
</ParamField>
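For example, `response_model` can request structured output from the remote agent. A sketch (the remote agent may ignore the schema, since the protocol does not enforce it):
```python Code
from pydantic import BaseModel
from crewai import Agent
from crewai.a2a import A2AConfig

class ResearchSummary(BaseModel):
    topic: str
    key_findings: list[str]

agent = Agent(
    role="Research Coordinator",
    goal="Collect structured research summaries",
    backstory="Prefers machine-readable results",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://example.com/.well-known/agent-card.json",
        response_model=ResearchSummary  # honored only if the remote agent complies
    )
)
```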
## Authentication
For A2A agents that require authentication, use one of the provided auth schemes:
<Tabs>
<Tab title="Bearer Token">
```python Code
from crewai.a2a import A2AConfig
from crewai.a2a.auth import BearerTokenAuth
agent = Agent(
role="Secure Coordinator",
goal="Coordinate tasks with secured agents",
backstory="Manages secure agent communications",
llm="gpt-4o",
a2a=A2AConfig(
endpoint="https://secure-agent.example.com/.well-known/agent-card.json",
auth=BearerTokenAuth(token="your-bearer-token"),
timeout=120
)
)
```
</Tab>
<Tab title="API Key">
```python Code
from crewai.a2a import A2AConfig
from crewai.a2a.auth import APIKeyAuth
agent = Agent(
role="API Coordinator",
goal="Coordinate with API-based agents",
backstory="Manages API-authenticated communications",
llm="gpt-4o",
a2a=A2AConfig(
endpoint="https://api-agent.example.com/.well-known/agent-card.json",
auth=APIKeyAuth(
api_key="your-api-key",
location="header", # or "query" or "cookie"
name="X-API-Key"
),
timeout=120
)
)
```
</Tab>
<Tab title="OAuth2">
```python Code
from crewai.a2a import A2AConfig
from crewai.a2a.auth import OAuth2ClientCredentials
agent = Agent(
role="OAuth Coordinator",
goal="Coordinate with OAuth-secured agents",
backstory="Manages OAuth-authenticated communications",
llm="gpt-4o",
a2a=A2AConfig(
endpoint="https://oauth-agent.example.com/.well-known/agent-card.json",
auth=OAuth2ClientCredentials(
token_url="https://auth.example.com/oauth/token",
client_id="your-client-id",
client_secret="your-client-secret",
scopes=["read", "write"]
),
timeout=120
)
)
```
</Tab>
<Tab title="HTTP Basic">
```python Code
from crewai.a2a import A2AConfig
from crewai.a2a.auth import HTTPBasicAuth
agent = Agent(
role="Basic Auth Coordinator",
goal="Coordinate with basic auth agents",
backstory="Manages basic authentication communications",
llm="gpt-4o",
a2a=A2AConfig(
endpoint="https://basic-agent.example.com/.well-known/agent-card.json",
auth=HTTPBasicAuth(
username="your-username",
password="your-password"
),
timeout=120
)
)
```
</Tab>
</Tabs>
## Multiple A2A Agents
Configure multiple A2A agents for delegation by passing a list:
```python Code
from crewai.a2a import A2AConfig
from crewai.a2a.auth import BearerTokenAuth
agent = Agent(
role="Multi-Agent Coordinator",
goal="Coordinate with multiple specialized agents",
backstory="Expert at delegating to the right specialist",
llm="gpt-4o",
a2a=[
A2AConfig(
endpoint="https://research.example.com/.well-known/agent-card.json",
timeout=120
),
A2AConfig(
endpoint="https://data.example.com/.well-known/agent-card.json",
auth=BearerTokenAuth(token="data-token"),
timeout=90
)
]
)
```
The LLM will automatically choose which A2A agent to delegate to based on the task requirements.
## Error Handling
Control how agent connection failures are handled using the `fail_fast` parameter:
```python Code
from crewai.a2a import A2AConfig
# Fail immediately on connection errors (default)
agent = Agent(
role="Research Coordinator",
goal="Coordinate research tasks",
backstory="Expert at delegation",
llm="gpt-4o",
a2a=A2AConfig(
endpoint="https://research.example.com/.well-known/agent-card.json",
fail_fast=True
)
)
# Continue with available agents
agent = Agent(
role="Multi-Agent Coordinator",
goal="Coordinate with multiple agents",
backstory="Expert at working with available resources",
llm="gpt-4o",
a2a=[
A2AConfig(
endpoint="https://primary.example.com/.well-known/agent-card.json",
fail_fast=False
),
A2AConfig(
endpoint="https://backup.example.com/.well-known/agent-card.json",
fail_fast=False
)
]
)
```
When `fail_fast=False`:
- If some agents fail, the LLM is informed which agents are unavailable and can delegate to working agents
- If all agents fail, the LLM receives a notice about unavailable agents and handles the task directly
- Connection errors are captured and included in the context for better decision-making
## Best Practices
<CardGroup cols={2}>
<Card title="Set Appropriate Timeouts" icon="clock">
Configure timeouts based on expected A2A agent response times. Longer-running tasks may need higher timeout values.
</Card>
<Card title="Limit Conversation Turns" icon="comments">
Use `max_turns` to prevent excessive back-and-forth. The agent will automatically conclude conversations before hitting the limit.
</Card>
<Card title="Use Resilient Error Handling" icon="shield-check">
Set `fail_fast=False` for production environments with multiple agents to gracefully handle connection failures and maintain workflow continuity.
</Card>
<Card title="Secure Your Credentials" icon="lock">
Store authentication tokens and credentials as environment variables, not in code.
</Card>
<Card title="Monitor Delegation Decisions" icon="eye">
Use verbose mode to observe when the LLM chooses to delegate versus handle tasks directly.
</Card>
</CardGroup>
## Supported Authentication Methods
- **Bearer Token** - Simple token-based authentication
- **OAuth2 Client Credentials** - OAuth2 flow for machine-to-machine communication
- **OAuth2 Authorization Code** - OAuth2 flow requiring user authorization
- **API Key** - Key-based authentication (header, query param, or cookie)
- **HTTP Basic** - Username/password authentication
- **HTTP Digest** - Digest authentication (requires `httpx-auth` package)
## Learn More
For more information about the A2A protocol and reference implementations:
- [A2A Protocol Documentation](https://a2a-protocol.org)
- [A2A Sample Implementations](https://github.com/a2aproject/a2a-samples)
- [A2A Python SDK](https://github.com/a2aproject/a2a-python)
@@ -0,0 +1,344 @@
---
title: MCP DSL Integration
description: Learn how to use CrewAI's simple DSL syntax to integrate MCP servers directly with your agents using the mcps field.
icon: code
mode: "wide"
---
## Overview
CrewAI's MCP DSL (Domain Specific Language) integration provides the **simplest way** to connect your agents to MCP (Model Context Protocol) servers. Just add an `mcps` field to your agent and CrewAI handles all the complexity automatically.
<Info>
This is the **recommended approach** for most MCP use cases. For advanced scenarios requiring manual connection management, see [MCPServerAdapter](/en/mcp/overview#advanced-mcpserveradapter).
</Info>
## Basic Usage
Add MCP servers to your agent using the `mcps` field:
```python
from crewai import Agent
agent = Agent(
role="Research Assistant",
goal="Help with research and analysis tasks",
backstory="Expert assistant with access to advanced research tools",
mcps=[
"https://mcp.exa.ai/mcp?api_key=your_key&profile=research"
]
)
# MCP tools are now automatically available!
# No need for manual connection management or tool configuration
```
## Supported Reference Formats
### External MCP Remote Servers
```python
# Basic HTTPS server
"https://api.example.com/mcp"
# Server with authentication
"https://mcp.exa.ai/mcp?api_key=your_key&profile=your_profile"
# Server with custom path
"https://services.company.com/api/v1/mcp"
```
### Specific Tool Selection
Use the `#` syntax to select specific tools from a server:
```python
# Get only the forecast tool from weather server
"https://weather.api.com/mcp#get_forecast"
# Get only the search tool from Exa
"https://mcp.exa.ai/mcp?api_key=your_key#web_search_exa"
```
### CrewAI AMP Marketplace
Access tools from the CrewAI AMP marketplace:
```python
# Full service with all tools
"crewai-amp:financial-data"
# Specific tool from AMP service
"crewai-amp:research-tools#pubmed_search"
# Multiple AMP services
mcps=[
"crewai-amp:weather-insights",
"crewai-amp:market-analysis",
"crewai-amp:social-media-monitoring"
]
```
## Complete Example
Here's a complete example using multiple MCP servers:
```python
from crewai import Agent, Task, Crew, Process
# Create agent with multiple MCP sources
multi_source_agent = Agent(
role="Multi-Source Research Analyst",
goal="Conduct comprehensive research using multiple data sources",
backstory="""Expert researcher with access to web search, weather data,
financial information, and academic research tools""",
mcps=[
# External MCP servers
"https://mcp.exa.ai/mcp?api_key=your_exa_key&profile=research",
"https://weather.api.com/mcp#get_current_conditions",
# CrewAI AMP marketplace
"crewai-amp:financial-insights",
"crewai-amp:academic-research#pubmed_search",
"crewai-amp:market-intelligence#competitor_analysis"
]
)
# Create comprehensive research task
research_task = Task(
description="""Research the impact of AI agents on business productivity.
Include current weather impacts on remote work, financial market trends,
and recent academic publications on AI agent frameworks.""",
expected_output="""Comprehensive report covering:
1. AI agent business impact analysis
2. Weather considerations for remote work
3. Financial market trends related to AI
4. Academic research citations and insights
5. Competitive landscape analysis""",
agent=multi_source_agent
)
# Create and execute crew
research_crew = Crew(
agents=[multi_source_agent],
tasks=[research_task],
process=Process.sequential,
verbose=True
)
result = research_crew.kickoff()
print(f"Research completed with {len(multi_source_agent.mcps)} MCP data sources")
```
## Tool Naming and Organization
CrewAI automatically handles tool naming to prevent conflicts:
```python
# Original MCP server has tools: "search", "analyze"
# CrewAI creates tools: "mcp_exa_ai_search", "mcp_exa_ai_analyze"
agent = Agent(
role="Tool Organization Demo",
goal="Show how tool naming works",
backstory="Demonstrates automatic tool organization",
mcps=[
"https://mcp.exa.ai/mcp?api_key=key", # Tools: mcp_exa_ai_*
"https://weather.service.com/mcp", # Tools: weather_service_com_*
"crewai-amp:financial-data" # Tools: financial_data_*
]
)
# Each server's tools get unique prefixes based on the server name
# This prevents naming conflicts between different MCP servers
```
## Error Handling and Resilience
The MCP DSL is designed to be robust and user-friendly:
### Graceful Server Failures
```python
agent = Agent(
role="Resilient Researcher",
goal="Research despite server issues",
backstory="Experienced researcher who adapts to available tools",
mcps=[
"https://primary-server.com/mcp", # Primary data source
"https://backup-server.com/mcp", # Backup if primary fails
"https://unreachable-server.com/mcp", # Will be skipped with warning
"crewai-amp:reliable-service" # Reliable AMP service
]
)
# Agent will:
# 1. Successfully connect to working servers
# 2. Log warnings for failing servers
# 3. Continue with available tools
# 4. Not crash or hang on server failures
```
### Timeout Protection
All MCP operations have built-in timeouts:
- **Connection timeout**: 10 seconds
- **Tool execution timeout**: 30 seconds
- **Discovery timeout**: 15 seconds
```python
# These servers will timeout gracefully if unresponsive
mcps=[
"https://slow-server.com/mcp", # Will timeout after 10s if unresponsive
"https://overloaded-api.com/mcp" # Will timeout if discovery takes > 15s
]
```
## Performance Features
### Automatic Caching
Tool schemas are cached for 5 minutes to improve performance:
```python
# First agent creation - discovers tools from server
agent1 = Agent(role="First", goal="Test", backstory="Test",
mcps=["https://api.example.com/mcp"])
# Second agent creation (within 5 minutes) - uses cached tool schemas
agent2 = Agent(role="Second", goal="Test", backstory="Test",
mcps=["https://api.example.com/mcp"]) # Much faster!
```
### On-Demand Connections
Tool connections are established only when tools are actually used:
```python
# Agent creation is fast - no MCP connections made yet
agent = Agent(
role="On-Demand Agent",
goal="Use tools efficiently",
backstory="Efficient agent that connects only when needed",
mcps=["https://api.example.com/mcp"]
)
# MCP connection is made only when a tool is actually executed
# This minimizes connection overhead and improves startup performance
```
## Integration with Existing Features
MCP tools work seamlessly with other CrewAI features:
```python
from crewai.tools import BaseTool
class CustomTool(BaseTool):
name: str = "custom_analysis"
description: str = "Custom analysis tool"
def _run(self, **kwargs):
return "Custom analysis result"
agent = Agent(
role="Full-Featured Agent",
goal="Use all available tool types",
backstory="Agent with comprehensive tool access",
# All tool types work together
tools=[CustomTool()], # Custom tools
apps=["gmail", "slack"], # Platform integrations
mcps=[ # MCP servers
"https://mcp.exa.ai/mcp?api_key=key",
"crewai-amp:research-tools"
],
verbose=True,
max_iter=15
)
```
## Best Practices
### 1. Use Specific Tools When Possible
```python
# Good - only get the tools you need
mcps=["https://weather.api.com/mcp#get_forecast"]
# Less efficient - gets all tools from server
mcps=["https://weather.api.com/mcp"]
```
### 2. Handle Authentication Securely
```python
import os
# Store API keys in environment variables
exa_key = os.getenv("EXA_API_KEY")
exa_profile = os.getenv("EXA_PROFILE")
agent = Agent(
role="Secure Agent",
goal="Use MCP tools securely",
backstory="Security-conscious agent",
mcps=[f"https://mcp.exa.ai/mcp?api_key={exa_key}&profile={exa_profile}"]
)
```
### 3. Plan for Server Failures
```python
# Always include backup options
mcps=[
"https://primary-api.com/mcp", # Primary choice
"https://backup-api.com/mcp", # Backup option
"crewai-amp:reliable-service" # AMP fallback
]
```
### 4. Use Descriptive Agent Roles
```python
agent = Agent(
role="Weather-Enhanced Market Analyst",
goal="Analyze markets considering weather impacts",
backstory="Financial analyst with access to weather data for agricultural market insights",
mcps=[
"https://weather.service.com/mcp#get_forecast",
"crewai-amp:financial-data#stock_analysis"
]
)
```
## Troubleshooting
### Common Issues
**No tools discovered:**
```python
# Check your MCP server URL and authentication
# Verify the server is running and accessible
mcps=["https://mcp.example.com/mcp?api_key=valid_key"]
```
**Connection timeouts:**
```python
# Server may be slow or overloaded
# CrewAI will log warnings and continue with other servers
# Check server status or try backup servers
```
**Authentication failures:**
```python
# Verify API keys and credentials
# Check server documentation for required parameters
# Ensure query parameters are properly URL encoded
```
## Advanced: MCPServerAdapter
For complex scenarios requiring manual connection management, use the `MCPServerAdapter` class from `crewai-tools`. Using a Python context manager (`with` statement) is the recommended approach as it automatically handles starting and stopping the connection to the MCP server.
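A brief sketch of that pattern, assuming a Streamable HTTP server (the URL is a placeholder):
```python
from crewai import Agent
from crewai_tools import MCPServerAdapter

server_params = {
    "url": "https://mcp.example.com/mcp",  # placeholder endpoint
    "transport": "streamable-http",
}

# The context manager opens the connection, discovers tools, and closes cleanly
with MCPServerAdapter(server_params) as mcp_tools:
    print([tool.name for tool in mcp_tools])
    agent = Agent(
        role="MCP Tool User",
        goal="Use tools from a managed MCP connection",
        backstory="Connects to external tools on demand",
        tools=mcp_tools
    )
```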
@@ -8,14 +8,39 @@ mode: "wide"
## Overview
The [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) provides a standardized way for AI agents to provide context to LLMs by communicating with external services, known as MCP Servers.
The `crewai-tools` library extends CrewAI's capabilities by allowing you to seamlessly integrate tools from these MCP servers into your agents.
This gives your crews access to a vast ecosystem of functionalities.
CrewAI offers **two approaches** for MCP integration:
### **Simple DSL Integration** (Recommended)
Use the `mcps` field directly on agents for seamless MCP tool integration:
```python
from crewai import Agent
agent = Agent(
role="Research Analyst",
goal="Research and analyze information",
backstory="Expert researcher with access to external tools",
mcps=[
"https://mcp.exa.ai/mcp?api_key=your_key", # External MCP server
"https://api.weather.com/mcp#get_forecast", # Specific tool from server
"crewai-amp:financial-data", # CrewAI AMP marketplace
"crewai-amp:research-tools#pubmed_search" # Specific AMP tool
]
)
# MCP tools are now automatically available to your agent!
```
### 🔧 **Advanced: MCPServerAdapter** (For Complex Scenarios)
For advanced use cases requiring manual connection management, the `crewai-tools` library provides the `MCPServerAdapter` class.
We currently support the following transport mechanisms:
- **Stdio**: for local servers (communication via standard input/output between processes on the same machine)
- **Server-Sent Events (SSE)**: for remote servers (unidirectional, real-time data streaming from server to client over HTTP)
- **Streamable HTTP**: for remote servers (flexible, potentially bi-directional communication over HTTP, often utilizing SSE for server-to-client streams)
- **Streamable HTTPS**: for remote servers (flexible, potentially bi-directional communication over HTTPS, often utilizing SSE for server-to-client streams)
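As a quick illustration of the Stdio transport with `MCPServerAdapter` (covered in detail below), here is a sketch assuming a local `server.py` implementing an MCP server; the filename is a placeholder:
```python
from mcp import StdioServerParameters
from crewai_tools import MCPServerAdapter

# Launch a local MCP server over standard input/output
server_params = StdioServerParameters(
    command="python",
    args=["server.py"]  # placeholder path to your MCP server script
)

with MCPServerAdapter(server_params) as mcp_tools:
    print(f"Discovered tools: {[tool.name for tool in mcp_tools]}")
```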
## Video Tutorial
Watch this video tutorial for a comprehensive guide on MCP integration with CrewAI:
@@ -31,17 +56,125 @@ Watch this video tutorial for a comprehensive guide on MCP integration with Crew
## Installation
CrewAI MCP integration requires the `mcp` library:
```shell
# For Simple DSL Integration (Recommended)
uv add mcp
# For Advanced MCPServerAdapter usage
uv pip install 'crewai-tools[mcp]'
```
## Key Concepts & Getting Started
## Quick Start: Simple DSL Integration
The `MCPServerAdapter` class from `crewai-tools` is the primary way to connect to an MCP server and make its tools available to your CrewAI agents. It supports different transport mechanisms and simplifies connection management.
The easiest way to integrate MCP servers is using the `mcps` field on your agents:
Using a Python context manager (`with` statement) is the **recommended approach** for `MCPServerAdapter`. It automatically handles starting and stopping the connection to the MCP server.
```python
from crewai import Agent, Task, Crew
# Create agent with MCP tools
research_agent = Agent(
role="Research Analyst",
goal="Find and analyze information using advanced search tools",
backstory="Expert researcher with access to multiple data sources",
mcps=[
"https://mcp.exa.ai/mcp?api_key=your_key&profile=your_profile",
"crewai-amp:weather-service#current_conditions"
]
)
# Create task
research_task = Task(
description="Research the latest developments in AI agent frameworks",
expected_output="Comprehensive research report with citations",
agent=research_agent
)
# Create and run crew
crew = Crew(agents=[research_agent], tasks=[research_task])
result = crew.kickoff()
```
That's it! The MCP tools are automatically discovered and available to your agent.
## MCP Reference Formats
The `mcps` field supports various reference formats for maximum flexibility:
### External MCP Servers
```python
mcps=[
# Full server - get all available tools
"https://mcp.example.com/api",
# Specific tool from server using # syntax
"https://api.weather.com/mcp#get_current_weather",
# Server with authentication parameters
"https://mcp.exa.ai/mcp?api_key=your_key&profile=your_profile"
]
```
### CrewAI AMP Marketplace
```python
mcps=[
# Full AMP MCP service - get all available tools
"crewai-amp:financial-data",
# Specific tool from AMP service using # syntax
"crewai-amp:research-tools#pubmed_search",
# Multiple AMP services
"crewai-amp:weather-service",
"crewai-amp:market-analysis"
]
```
### Mixed References
```python
mcps=[
"https://external-api.com/mcp", # External server
"https://weather.service.com/mcp#forecast", # Specific external tool
"crewai-amp:financial-insights", # AMP service
"crewai-amp:data-analysis#sentiment_tool" # Specific AMP tool
]
```
## Key Features
- 🔄 **Automatic Tool Discovery**: Tools are automatically discovered and integrated
- 🏷️ **Name Collision Prevention**: Server names are prefixed to tool names
- ⚡ **Performance Optimized**: On-demand connections with schema caching
- 🛡️ **Error Resilience**: Graceful handling of unavailable servers
- ⏱️ **Timeout Protection**: Built-in timeouts prevent hanging connections
- 📊 **Transparent Integration**: Works seamlessly with existing CrewAI features
## Error Handling
The MCP DSL integration is designed to be resilient:
```python
agent = Agent(
role="Resilient Agent",
goal="Continue working despite server issues",
backstory="Agent that handles failures gracefully",
mcps=[
"https://reliable-server.com/mcp", # Will work
"https://unreachable-server.com/mcp", # Will be skipped gracefully
"https://slow-server.com/mcp", # Will timeout gracefully
"crewai-amp:working-service" # Will work
]
)
# Agent will use tools from working servers and log warnings for failing ones
```
## Advanced: MCPServerAdapter
For complex scenarios requiring manual connection management, use the `MCPServerAdapter` class from `crewai-tools`. Using a Python context manager (`with` statement) is the recommended approach as it automatically handles starting and stopping the connection to the MCP server.
## Connection Configuration
@@ -241,11 +374,19 @@ class CrewWithCustomTimeout:
## Explore MCP Integrations
<CardGroup cols={2}>
<Card
title="Simple DSL Integration"
icon="code"
href="/en/mcp/dsl-integration"
color="#3B82F6"
>
**Recommended**: Use the simple `mcps=[]` field syntax for effortless MCP integration.
</Card>
<Card
title="Stdio Transport"
icon="server"
href="/en/mcp/stdio"
color="#3B82F6"
color="#10B981"
>
Connect to local MCP servers via standard input/output. Ideal for scripts and local executables.
</Card>
@@ -253,7 +394,7 @@ class CrewWithCustomTimeout:
title="SSE Transport"
icon="wifi"
href="/en/mcp/sse"
color="#10B981"
color="#F59E0B"
>
Integrate with remote MCP servers using Server-Sent Events for real-time data streaming.
</Card>
@@ -261,7 +402,7 @@ class CrewWithCustomTimeout:
title="Streamable HTTP Transport"
icon="globe"
href="/en/mcp/streamable-http"
color="#F59E0B"
color="#8B5CF6"
>
Utilize flexible Streamable HTTP for robust communication with remote MCP servers.
</Card>
@@ -269,7 +410,7 @@ class CrewWithCustomTimeout:
title="Connecting to Multiple Servers"
icon="layer-group"
href="/en/mcp/multiple-servers"
color="#8B5CF6"
color="#EF4444"
>
Aggregate tools from several MCP servers simultaneously using a single adapter.
</Card>
@@ -277,7 +418,7 @@ class CrewWithCustomTimeout:
title="Security Considerations"
icon="lock"
href="/en/mcp/security"
color="#EF4444"
color="#DC2626"
>
Review important security best practices for MCP integration to keep your agents safe.
</Card>
@@ -0,0 +1,109 @@
---
title: Datadog Integration
description: Learn how to integrate Datadog with CrewAI to submit LLM Observability traces to Datadog.
icon: dog
mode: "wide"
---
# Integrate Datadog with CrewAI
This guide will demonstrate how to integrate **[Datadog LLM Observability](https://docs.datadoghq.com/llm_observability/)** with **CrewAI** using [Datadog auto-instrumentation](https://docs.datadoghq.com/llm_observability/instrumentation/auto_instrumentation?tab=python). By the end of this guide, you will be able to submit LLM Observability traces to Datadog and view your CrewAI agent runs in Datadog LLM Observability's [Agentic Execution View](https://docs.datadoghq.com/llm_observability/monitoring/agent_monitoring).
## What is Datadog LLM Observability?
[Datadog LLM Observability](https://www.datadoghq.com/product/llm-observability/) helps AI engineers, data scientists, and application developers quickly develop, evaluate, and monitor LLM applications. Confidently improve output quality, performance, costs, and overall risk with structured experiments, end-to-end tracing across AI agents, and evaluations.
## Getting Started
### Install Dependencies
```shell
pip install ddtrace crewai crewai-tools
```
### Set Environment Variables
If you do not have a Datadog API key, you can [create an account](https://www.datadoghq.com/) and [get your API key](https://docs.datadoghq.com/account_management/api-app-keys/#api-keys).
You will also need to specify an ML Application name in the following environment variables. An ML Application is a grouping of LLM Observability traces associated with a specific LLM-based application. See [ML Application Naming Guidelines](https://docs.datadoghq.com/llm_observability/instrumentation/sdk?tab=python#application-naming-guidelines) for more information on limitations with ML Application names.
```shell
export DD_API_KEY=<YOUR_DD_API_KEY>
export DD_SITE=<YOUR_DD_SITE>
export DD_LLMOBS_ENABLED=true
export DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME>
export DD_LLMOBS_AGENTLESS_ENABLED=true
export DD_APM_TRACING_ENABLED=false
```
Additionally, configure any LLM provider API keys:
```shell
export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
export ANTHROPIC_API_KEY=<YOUR_ANTHROPIC_API_KEY>
export GEMINI_API_KEY=<YOUR_GEMINI_API_KEY>
...
```
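If you prefer enabling LLM Observability in code rather than through environment variables, `ddtrace` also exposes an in-code setup. A sketch mirroring the variables above (you can then run the script with plain `python`):
```python
from ddtrace.llmobs import LLMObs

LLMObs.enable(
    ml_app="<YOUR_ML_APP_NAME>",
    api_key="<YOUR_DD_API_KEY>",
    site="<YOUR_DD_SITE>",
    agentless_enabled=True
)
```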
### Create a CrewAI Agent Application
```python
# crewai_agent.py
from crewai import Agent, Task, Crew
from crewai_tools import (
WebsiteSearchTool
)
web_rag_tool = WebsiteSearchTool()
writer = Agent(
role="Writer",
goal="You make math engaging and understandable for young children through poetry",
backstory="You're an expert in writing haikus but you know nothing of math.",
tools=[web_rag_tool],
)
task = Task(
description=("What is {multiplication}?"),
expected_output=("Compose a haiku that includes the answer."),
agent=writer
)
crew = Crew(
agents=[writer],
tasks=[task],
share_crew=False
)
output = crew.kickoff(dict(multiplication="2 * 2"))
```
### Run the Application with Datadog Auto-Instrumentation
With the [environment variables](#set-environment-variables) set, you can now run the application with Datadog auto-instrumentation.
```shell
ddtrace-run python crewai_agent.py
```
### View the Traces in Datadog
After running the application, you can view the traces in [Datadog LLM Observability's Traces View](https://app.datadoghq.com/llm/traces), selecting the ML Application name you chose from the top-left dropdown.
Clicking on a trace will show you the details of the trace, including total tokens used, number of LLM calls, models used, and estimated cost. Clicking into a specific span will narrow down these details, and show related input, output, and metadata.
<Frame>
<img src="/images/datadog-llm-observability-1.png" alt="Datadog LLM Observability Trace View" />
</Frame>
Additionally, you can view the execution graph of the trace, which shows its control and data flow; for larger agents, this view scales to show handoffs and relationships between LLM calls, tool calls, and agent interactions.
<Frame>
<img src="/images/datadog-llm-observability-2.png" alt="Datadog LLM Observability Agent Execution Flow View" />
</Frame>
## References
- [Datadog LLM Observability](https://www.datadoghq.com/product/llm-observability/)
- [Datadog LLM Observability CrewAI Auto-Instrumentation](https://docs.datadoghq.com/llm_observability/instrumentation/auto_instrumentation?tab=python#crew-ai)

View File

@@ -23,13 +23,15 @@ Here's a minimal example of how to use the tool:
```python
from crewai import Agent
from crewai_tools import QdrantVectorSearchTool
from crewai_tools import QdrantVectorSearchTool, QdrantConfig
# Initialize the tool
# Initialize the tool with QdrantConfig
qdrant_tool = QdrantVectorSearchTool(
qdrant_url="your_qdrant_url",
qdrant_api_key="your_qdrant_api_key",
collection_name="your_collection"
qdrant_config=QdrantConfig(
qdrant_url="your_qdrant_url",
qdrant_api_key="your_qdrant_api_key",
collection_name="your_collection"
)
)
# Create an agent that uses the tool
@@ -82,7 +84,7 @@ def extract_text_from_pdf(pdf_path):
def get_openai_embedding(text):
response = client.embeddings.create(
input=text,
model="text-embedding-3-small"
model="text-embedding-3-large"
)
return response.data[0].embedding
@@ -90,13 +92,13 @@ def get_openai_embedding(text):
def load_pdf_to_qdrant(pdf_path, qdrant, collection_name):
# Extract text from PDF
text_chunks = extract_text_from_pdf(pdf_path)
# Create Qdrant collection
if qdrant.collection_exists(collection_name):
qdrant.delete_collection(collection_name)
qdrant.create_collection(
collection_name=collection_name,
vectors_config=VectorParams(size=1536, distance=Distance.COSINE)
vectors_config=VectorParams(size=3072, distance=Distance.COSINE)
)
# Store embeddings
@@ -120,19 +122,23 @@ pdf_path = "path/to/your/document.pdf"
load_pdf_to_qdrant(pdf_path, qdrant, collection_name)
# Initialize Qdrant search tool
from crewai_tools import QdrantConfig
qdrant_tool = QdrantVectorSearchTool(
qdrant_url=os.getenv("QDRANT_URL"),
qdrant_api_key=os.getenv("QDRANT_API_KEY"),
collection_name=collection_name,
limit=3,
score_threshold=0.35
qdrant_config=QdrantConfig(
qdrant_url=os.getenv("QDRANT_URL"),
qdrant_api_key=os.getenv("QDRANT_API_KEY"),
collection_name=collection_name,
limit=3,
score_threshold=0.35
)
)
# Create CrewAI agents
search_agent = Agent(
role="Senior Semantic Search Agent",
goal="Find and analyze documents based on semantic search",
backstory="""You are an expert research assistant who can find relevant
backstory="""You are an expert research assistant who can find relevant
information using semantic search in a Qdrant database.""",
tools=[qdrant_tool],
verbose=True
@@ -141,7 +147,7 @@ search_agent = Agent(
answer_agent = Agent(
role="Senior Answer Assistant",
goal="Generate answers to questions based on the context provided",
backstory="""You are an expert answer assistant who can generate
backstory="""You are an expert answer assistant who can generate
answers to questions based on the context provided.""",
tools=[qdrant_tool],
verbose=True
@@ -180,21 +186,82 @@ print(result)
## Tool Parameters
### Required Parameters
- `qdrant_url` (str): The URL of your Qdrant server
- `qdrant_api_key` (str): API key for authentication with Qdrant
- `collection_name` (str): Name of the Qdrant collection to search
- `qdrant_config` (QdrantConfig): Configuration object containing all Qdrant settings
### Optional Parameters
### QdrantConfig Parameters
- `qdrant_url` (str): The URL of your Qdrant server
- `qdrant_api_key` (str, optional): API key for authentication with Qdrant
- `collection_name` (str): Name of the Qdrant collection to search
- `limit` (int): Maximum number of results to return (default: 3)
- `score_threshold` (float): Minimum similarity score threshold (default: 0.35)
- `filter` (Any, optional): Qdrant Filter instance for advanced filtering (default: None)
### Optional Tool Parameters
- `custom_embedding_fn` (Callable[[str], list[float]]): Custom function for text vectorization
- `qdrant_package` (str): Base package path for Qdrant (default: "qdrant_client")
- `client` (Any): Pre-initialized Qdrant client (optional)
## Advanced Filtering
The QdrantVectorSearchTool supports powerful filtering capabilities to refine your search results:
### Dynamic Filtering
Use `filter_by` and `filter_value` parameters in your search to filter results on-the-fly:
```python
# The agent will use these parameters when calling the tool
# The tool schema accepts filter_by and filter_value
# Example: search with category filter
# Results will be filtered where category == "technology"
```
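For illustration, a direct call might look like the sketch below; this is hypothetical (inside a crew, the agent fills in these schema fields itself when it decides to use the tool):
```python
# Hypothetical direct invocation of the configured tool
results = qdrant_tool.run(
    query="recent breakthroughs in AI agents",  # semantic search query
    filter_by="category",                       # metadata field to filter on
    filter_value="technology",                  # keep points where category == "technology"
)
print(results)
```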
### Preset Filters with QdrantConfig
For complex filtering, use Qdrant Filter instances in your configuration:
```python
from qdrant_client.http import models as qmodels
from crewai_tools import QdrantVectorSearchTool, QdrantConfig
# Create a filter for specific conditions
preset_filter = qmodels.Filter(
must=[
qmodels.FieldCondition(
key="category",
match=qmodels.MatchValue(value="research")
),
qmodels.FieldCondition(
key="year",
match=qmodels.MatchValue(value=2024)
)
]
)
# Initialize tool with preset filter
qdrant_tool = QdrantVectorSearchTool(
qdrant_config=QdrantConfig(
qdrant_url="your_url",
qdrant_api_key="your_key",
collection_name="your_collection",
filter=preset_filter # Preset filter applied to all searches
)
)
```
### Combining Filters
The tool automatically combines preset filters from `QdrantConfig` with dynamic filters from `filter_by` and `filter_value`:
```python
# If QdrantConfig has a preset filter for category="research"
# And the search uses filter_by="year", filter_value=2024
# Both filters will be combined (AND logic)
```
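Conceptually, the merge produces a single Qdrant `Filter` whose `must` clauses come from both sources. The sketch below illustrates the resulting semantics with plain `qdrant_client` objects (an illustration of the AND logic, not the tool's internal code):
```python
from qdrant_client.http import models as qmodels

# Preset (category == "research") AND dynamic (year == 2024):
# every must-clause has to match for a point to be returned.
combined = qmodels.Filter(
    must=[
        qmodels.FieldCondition(key="category", match=qmodels.MatchValue(value="research")),
        qmodels.FieldCondition(key="year", match=qmodels.MatchValue(value=2024)),
    ]
)
```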
## Search Parameters
The tool accepts these parameters in its schema:
- `query` (str): The search query to find similar documents
- `filter_by` (str, optional): Metadata field to filter on
- `filter_value` (str, optional): Value to filter by
- `filter_value` (Any, optional): Value to filter by
## Return Format
@@ -214,7 +281,7 @@ The tool returns results in JSON format:
## Default Embedding
By default, the tool uses OpenAI's `text-embedding-3-small` model for vectorization. This requires:
By default, the tool uses OpenAI's `text-embedding-3-large` model for vectorization. This requires:
- OpenAI API key set in environment: `OPENAI_API_KEY`
## Custom Embeddings
@@ -240,18 +307,22 @@ def custom_embeddings(text: str) -> list[float]:
# Tokenize and get model outputs
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)
outputs = model(**inputs)
# Use mean pooling to get text embedding
embeddings = outputs.last_hidden_state.mean(dim=1)
# Convert to list of floats and return
return embeddings[0].tolist()
# Use custom embeddings with the tool
from crewai_tools import QdrantConfig
tool = QdrantVectorSearchTool(
qdrant_url="your_url",
qdrant_api_key="your_key",
collection_name="your_collection",
qdrant_config=QdrantConfig(
qdrant_url="your_url",
qdrant_api_key="your_key",
collection_name="your_collection"
),
custom_embedding_fn=custom_embeddings # Pass your custom function
)
```
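One practical caveat when swapping in custom embeddings: the Qdrant collection's vector size must match the dimensionality your function returns. Assuming, for example, a model that emits 384-dimensional vectors (check your own model's output size), and using the `qdrant` client created earlier, the collection would be created like this:
```python
from qdrant_client.models import VectorParams, Distance

# Hypothetical: size must equal len(custom_embeddings("any text"))
qdrant.create_collection(
    collection_name="your_collection",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)
```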
@@ -269,4 +340,4 @@ Required environment variables:
```bash
export QDRANT_URL="your_qdrant_url" # If not provided in constructor
export QDRANT_API_KEY="your_api_key" # If not provided in constructor
export OPENAI_API_KEY="your_openai_key" # If using default embeddings
export OPENAI_API_KEY="your_openai_key" # If using default embeddings

View File

@@ -54,25 +54,25 @@ The following parameters can be used to customize the `CSVSearchTool`'s behavior
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python Code
from chromadb.config import Settings
tool = CSVSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
config={
"embedding_model": {
"provider": "openai",
"config": {
"model": "text-embedding-3-small",
# "api_key": "sk-...",
},
},
"vectordb": {
"provider": "chromadb", # or "qdrant"
"config": {
# "settings": Settings(persist_directory="/content/chroma", allow_reset=True, is_persistent=True),
# from qdrant_client.models import VectorParams, Distance
# "vectors_config": VectorParams(size=384, distance=Distance.COSINE),
}
},
}
)
```

View File

@@ -46,23 +46,25 @@ tool = DirectorySearchTool(directory='/path/to/directory')
The DirectorySearchTool uses OpenAI for embeddings and summarization by default. Customization options for these settings include changing the model provider and configuration, enhancing flexibility for advanced users.
```python Code
from chromadb.config import Settings
tool = DirectorySearchTool(
config=dict(
llm=dict(
provider="ollama", # Options include ollama, google, anthropic, llama2, and more
config=dict(
model="llama2",
# Additional configurations here
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
config={
"embedding_model": {
"provider": "openai",
"config": {
"model": "text-embedding-3-small",
# "api_key": "sk-...",
},
},
"vectordb": {
"provider": "chromadb", # or "qdrant"
"config": {
# "settings": Settings(persist_directory="/content/chroma", allow_reset=True, is_persistent=True),
# from qdrant_client.models import VectorParams, Distance
# "vectors_config": VectorParams(size=384, distance=Distance.COSINE),
}
},
}
)
```

View File

@@ -56,25 +56,25 @@ The following parameters can be used to customize the `DOCXSearchTool`'s behavio
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python Code
from chromadb.config import Settings
tool = DOCXSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
config={
"embedding_model": {
"provider": "openai",
"config": {
"model": "text-embedding-3-small",
# "api_key": "sk-...",
},
},
"vectordb": {
"provider": "chromadb", # or "qdrant"
"config": {
# "settings": Settings(persist_directory="/content/chroma", allow_reset=True, is_persistent=True),
# from qdrant_client.models import VectorParams, Distance
# "vectors_config": VectorParams(size=384, distance=Distance.COSINE),
}
},
}
)
```

View File

@@ -48,27 +48,25 @@ tool = MDXSearchTool(mdx='path/to/your/document.mdx')
The tool defaults to using OpenAI for embeddings and summarization. For customization, utilize a configuration dictionary as shown below:
```python Code
from chromadb.config import Settings
tool = MDXSearchTool(
config=dict(
llm=dict(
provider="ollama", # Options include google, openai, anthropic, llama2, etc.
config=dict(
model="llama2",
# Optional parameters can be included here.
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# Optional title for the embeddings can be added here.
# title="Embeddings",
),
),
)
config={
"embedding_model": {
"provider": "openai",
"config": {
"model": "text-embedding-3-small",
# "api_key": "sk-...",
},
},
"vectordb": {
"provider": "chromadb", # or "qdrant"
"config": {
# "settings": Settings(persist_directory="/content/chroma", allow_reset=True, is_persistent=True),
# from qdrant_client.models import VectorParams, Distance
# "vectors_config": VectorParams(size=384, distance=Distance.COSINE),
}
},
}
)
```

View File

@@ -45,28 +45,64 @@ tool = PDFSearchTool(pdf='path/to/your/document.pdf')
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows. Note: a vector database is required because the generated embeddings must be stored in and queried from a vector DB.
```python Code
from crewai_tools import PDFSearchTool
# - embedding_model (required): choose provider + provider-specific config
# - vectordb (required): choose vector DB and pass its config
tool = PDFSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
config={
"embedding_model": {
# Supported providers: "openai", "azure", "google-generativeai", "google-vertex",
# "voyageai", "cohere", "huggingface", "jina", "sentence-transformer",
# "text2vec", "ollama", "openclip", "instructor", "onnx", "roboflow", "watsonx", "custom"
"provider": "openai", # or: "google-generativeai", "cohere", "ollama", ...
"config": {
# Model identifier for the chosen provider. "model" will be auto-mapped to "model_name" internally.
"model": "text-embedding-3-small",
# Optional: API key. If omitted, the tool will use provider-specific env vars when available
# (e.g., OPENAI_API_KEY for provider="openai").
# "api_key": "sk-...",
# Provider-specific examples:
# --- Google Generative AI ---
# (Set provider="google-generativeai" above)
# "model": "models/embedding-001",
# "task_type": "retrieval_document",
# "title": "Embeddings",
# --- Cohere ---
# (Set provider="cohere" above)
# "model": "embed-english-v3.0",
# --- Ollama (local) ---
# (Set provider="ollama" above)
# "model": "nomic-embed-text",
},
},
"vectordb": {
"provider": "chromadb", # or "qdrant"
"config": {
# For ChromaDB: pass "settings" (chromadb.config.Settings) or rely on defaults.
# Example (uncomment and import):
# from chromadb.config import Settings
# "settings": Settings(
# persist_directory="/content/chroma",
# allow_reset=True,
# is_persistent=True,
# ),
# For Qdrant: pass "vectors_config" (qdrant_client.models.VectorParams).
# Example (uncomment and import):
# from qdrant_client.models import VectorParams, Distance
# "vectors_config": VectorParams(size=384, distance=Distance.COSINE),
# Note: collection name is controlled by the tool (default: "rag_tool_collection"), not set here.
}
},
}
)
```
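Once configured, the tool attaches to an agent like any other CrewAI tool. A brief usage sketch (the agent and task names here are hypothetical):
```python
from crewai import Agent, Task, Crew

# Hypothetical agent that answers questions from the indexed PDF
pdf_analyst = Agent(
    role="PDF Analyst",
    goal="Answer questions using only the indexed PDF content",
    backstory="You ground every answer in the document's text.",
    tools=[tool],  # the PDFSearchTool configured above
)

qa_task = Task(
    description="Summarize the key findings of the document.",
    expected_output="A short summary grounded in the PDF.",
    agent=pdf_analyst,
)

Crew(agents=[pdf_analyst], tasks=[qa_task]).kickoff()
```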

View File

@@ -57,25 +57,41 @@ By default, the tool uses OpenAI for both embeddings and summarization.
To customize the model, you can use a config dictionary as follows:
```python Code
from chromadb.config import Settings
tool = TXTSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
config={
# Required: embeddings provider + config
"embedding_model": {
"provider": "openai", # or google-generativeai, cohere, ollama, ...
"config": {
"model": "text-embedding-3-small",
# "api_key": "sk-...", # optional if env var is set
# Provider examples:
# Google → model: "models/embedding-001", task_type: "retrieval_document"
# Cohere → model: "embed-english-v3.0"
# Ollama → model: "nomic-embed-text"
},
},
# Required: vector database config
"vectordb": {
"provider": "chromadb", # or "qdrant"
"config": {
# Chroma settings (optional persistence)
# "settings": Settings(
# persist_directory="/content/chroma",
# allow_reset=True,
# is_persistent=True,
# ),
# Qdrant vector params example:
# from qdrant_client.models import VectorParams, Distance
# "vectors_config": VectorParams(size=384, distance=Distance.COSINE),
# Note: collection name is controlled by the tool (default: "rag_tool_collection").
}
},
}
)
```

View File

@@ -54,25 +54,25 @@ It is an optional parameter during the tool's initialization but must be provide
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python Code
from chromadb.config import Settings
tool = XMLSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
config={
"embedding_model": {
"provider": "openai",
"config": {
"model": "text-embedding-3-small",
# "api_key": "sk-...",
},
},
"vectordb": {
"provider": "chromadb", # or "qdrant"
"config": {
# "settings": Settings(persist_directory="/content/chroma", allow_reset=True, is_persistent=True),
# from qdrant_client.models import VectorParams, Distance
# "vectors_config": VectorParams(size=384, distance=Distance.COSINE),
}
},
}
)
```

View File

@@ -11,7 +11,7 @@ mode: "wide"
<Card
title="Bedrock Invoke Agent Tool"
icon="cloud"
href="/en/tools/tool-integrations/bedrockinvokeagenttool"
href="/en/tools/integration/bedrockinvokeagenttool"
color="#0891B2"
>
Invoke Amazon Bedrock Agents from CrewAI to orchestrate actions across AWS services.
@@ -20,7 +20,7 @@ mode: "wide"
<Card
title="CrewAI Automation Tool"
icon="bolt"
href="/en/tools/tool-integrations/crewaiautomationtool"
href="/en/tools/integration/crewaiautomationtool"
color="#7C3AED"
>
Automate deployment and operations by integrating CrewAI with external platforms and workflows.

Binary file not shown (new image, 370 KiB)

Binary file not shown (new image, 738 KiB)

View File

@@ -33,6 +33,22 @@ Before using the Asana integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the Box integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the ClickUp integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the GitHub integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the Gmail integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the Google Calendar integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the Google Contacts integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the Google Docs integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -17,6 +17,38 @@ Before using the Google Drive integration, make sure you have:
- A Google account with access to Google Drive
- Your Google account connected via the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up the Google Drive Integration
### 1. Connect Your Google Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors).
2. Find **Google Drive** in the Authentication Integrations section.
3. Click **Connect** and complete the OAuth flow.
4. Grant the permissions required for file and folder management.
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations).
### 2. Install Required Packages
```bash
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
For detailed parameters and usage, see the [English documentation](../../../en/enterprise/integrations/google_drive).

View File

@@ -34,6 +34,22 @@ Before using the Google Sheets integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the Google Slides integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the HubSpot integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the Jira integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the Linear integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the Microsoft Excel integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the Microsoft OneDrive integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the Microsoft Outlook integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the Microsoft SharePoint integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the Microsoft Teams integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the Microsoft Word integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the Notion integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -17,6 +17,38 @@ Before using the Salesforce integration, make sure you have:
- A Salesforce account with appropriate permissions
- Your Salesforce account connected via the [Integrations page](https://app.crewai.com/integrations)
## Setting Up the Salesforce Integration
### 1. Connect Your Salesforce Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors).
2. Find **Salesforce** in the Authentication Integrations section.
3. Click **Connect** and complete the OAuth flow.
4. Grant the permissions required for CRM and sales management.
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations).
### 2. Install Required Packages
```bash
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Tools
### **Record Management**

View File

@@ -17,6 +17,38 @@ Before using the Shopify integration, make sure you have:
- A Shopify store with appropriate admin permissions
- Your Shopify store connected via the [Integrations page](https://app.crewai.com/integrations)
## Setting Up the Shopify Integration
### 1. Connect Your Shopify Store
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors).
2. Find **Shopify** in the Authentication Integrations section.
3. Click **Connect** and complete the OAuth flow.
4. Grant the permissions required for store and product management.
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations).
### 2. Install Required Packages
```bash
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Tools
### **Customer Management**

View File

@@ -17,6 +17,38 @@ Before using the Slack integration, make sure you have:
- A Slack workspace with appropriate permissions
- Your Slack workspace connected via the [Integrations page](https://app.crewai.com/integrations)
## Setting Up the Slack Integration
### 1. Connect Your Slack Workspace
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors).
2. Find **Slack** in the Authentication Integrations section.
3. Click **Connect** and complete the OAuth flow.
4. Grant the permissions required for team communication.
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations).
### 2. Install Required Packages
```bash
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Tools
### **User Management**

View File

@@ -17,6 +17,38 @@ Before using the Stripe integration, make sure you have:
- A Stripe account with appropriate API permissions
- Your Stripe account connected via the [Integrations page](https://app.crewai.com/integrations)
## Setting Up the Stripe Integration
### 1. Connect Your Stripe Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors).
2. Find **Stripe** in the Authentication Integrations section.
3. Click **Connect** and complete the OAuth flow.
4. Grant the permissions required for payment processing.
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations).
### 2. Install Required Packages
```bash
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Tools
### **Customer Management**

View File

@@ -17,6 +17,38 @@ Before using the Zendesk integration, make sure you have:
- A Zendesk account with appropriate API permissions
- Your Zendesk account connected via the [Integrations page](https://app.crewai.com/integrations)
## Setting Up the Zendesk Integration
### 1. Connect Your Zendesk Account
1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors).
2. Find **Zendesk** in the Authentication Integrations section.
3. Click **Connect** and complete the OAuth flow.
4. Grant the permissions required for ticket and user management.
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations).
### 2. Install Required Packages
```bash
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Tools
### **Ticket Management**

View File

@@ -0,0 +1,232 @@
---
title: MCP DSL Integration
description: Learn how to integrate MCP servers directly with your agents via the mcps field, using CrewAI's simple DSL syntax.
icon: code
mode: "wide"
---
## Overview
CrewAI's MCP DSL (Domain Specific Language) integration provides the **simplest way** to connect agents to MCP (Model Context Protocol) servers. Just add the `mcps` field to an agent and CrewAI handles all the complexity automatically.
<Info>
This is the **recommended approach** for most MCP use cases. For advanced scenarios that require manual connection management, see [MCPServerAdapter](/ko/mcp/overview#advanced-mcpserveradapter).
</Info>
## Basic Usage
Add MCP servers to an agent with the `mcps` field:
```python
from crewai import Agent

agent = Agent(
    role="Research Assistant",
    goal="Support research and analysis work",
    backstory="An expert assistant with access to advanced research tools",
    mcps=[
        "https://mcp.exa.ai/mcp?api_key=your_key&profile=research"
    ]
)
# MCP tools are now available automatically!
# No manual connection management or tool configuration required
```
## Supported Reference Formats
### External Remote MCP Servers
```python
# Basic HTTPS server
"https://api.example.com/mcp"
# Server with authentication
"https://mcp.exa.ai/mcp?api_key=your_key&profile=your_profile"
# Server with a custom path
"https://services.company.com/api/v1/mcp"
```
### Selecting Specific Tools
Use the `#` syntax to select specific tools from a server:
```python
# Only the forecast tool from the weather server
"https://weather.api.com/mcp#get_forecast"
# Only the search tool from Exa
"https://mcp.exa.ai/mcp?api_key=your_key#web_search_exa"
```
### CrewAI AMP Marketplace
Access tools from the CrewAI AMP marketplace:
```python
# A full service with all of its tools
"crewai-amp:financial-data"
# A specific tool from an AMP service
"crewai-amp:research-tools#pubmed_search"
# Multiple AMP services
mcps=[
    "crewai-amp:weather-insights",
    "crewai-amp:market-analysis",
    "crewai-amp:social-media-monitoring"
]
```
## Complete Example
Here is a complete example using multiple MCP servers:
```python
from crewai import Agent, Task, Crew, Process

# Create an agent with multiple MCP sources
multi_source_agent = Agent(
    role="Multi-Source Research Analyst",
    goal="Conduct comprehensive research using multiple data sources",
    backstory="""An expert researcher with access to web search, weather data,
    financial information, and academic research tools""",
    mcps=[
        # External MCP servers
        "https://mcp.exa.ai/mcp?api_key=your_exa_key&profile=research",
        "https://weather.api.com/mcp#get_current_conditions",
        # CrewAI AMP marketplace
        "crewai-amp:financial-insights",
        "crewai-amp:academic-research#pubmed_search",
        "crewai-amp:market-intelligence#competitor_analysis"
    ]
)

# Create a comprehensive research task
research_task = Task(
    description="""Research the impact of AI agents on business productivity.
    Include current weather impacts on remote work, financial market trends,
    and recent academic publications on AI agent frameworks.""",
    expected_output="""A comprehensive report covering:
    1. AI agent business impact analysis
    2. Weather considerations for remote work
    3. AI-related financial market trends
    4. Academic research citations and insights
    5. Competitive landscape analysis""",
    agent=multi_source_agent
)

# Create and run the crew
research_crew = Crew(
    agents=[multi_source_agent],
    tasks=[research_task],
    process=Process.sequential,
    verbose=True
)

result = research_crew.kickoff()
print(f"Research completed with {len(multi_source_agent.mcps)} MCP data sources")
```
## Key Features
- 🔄 **Automatic tool discovery**: tools are discovered and integrated automatically
- 🏷️ **Name-collision prevention**: server names are prefixed to tool names
- ⚡ **Performance optimizations**: schema caching and on-demand connections
- 🛡️ **Error resilience**: graceful handling of unavailable servers
- ⏱️ **Timeout protection**: built-in timeouts prevent hung connections
- 📊 **Transparent integration**: works seamlessly with existing CrewAI features
## Error Handling
The MCP DSL integration is designed to be resilient:
```python
agent = Agent(
    role="Resilient Agent",
    goal="Keep working despite server issues",
    backstory="An agent that handles failures gracefully",
    mcps=[
        "https://reliable-server.com/mcp",     # will work
        "https://unreachable-server.com/mcp",  # will be skipped gracefully
        "https://slow-server.com/mcp",         # will time out gracefully
        "crewai-amp:working-service"           # will work
    ]
)
# The agent uses tools from the working servers and logs warnings for the failed ones
```
## Performance Features
### Automatic Caching
Tool schemas are cached for 5 minutes to improve performance:
```python
# First agent creation - discovers tools from the server
agent1 = Agent(role="First", goal="Test", backstory="Test",
               mcps=["https://api.example.com/mcp"])
# Second agent creation (within 5 minutes) - uses the cached tool schemas
agent2 = Agent(role="Second", goal="Test", backstory="Test",
               mcps=["https://api.example.com/mcp"])  # Much faster!
```
### On-Demand Connections
Tool connections are only established when tools are actually used:
```python
# Agent creation is fast - no MCP connections are made yet
agent = Agent(
    role="On-Demand Agent",
    goal="Use tools efficiently",
    backstory="An efficient agent that connects only when needed",
    mcps=["https://api.example.com/mcp"]
)
# MCP connections are made only when a tool is actually executed
# This minimizes connection overhead and improves startup performance
```
## Best Practices
### 1. Use Specific Tools When Possible
```python
# Good - fetch only the tool you need
mcps=["https://weather.api.com/mcp#get_forecast"]
# Less efficient - fetch every tool on the server
mcps=["https://weather.api.com/mcp"]
```
### 2. Handle Authentication Securely
```python
import os

# Store API keys in environment variables
exa_key = os.getenv("EXA_API_KEY")
exa_profile = os.getenv("EXA_PROFILE")

agent = Agent(
    role="Secure Agent",
    goal="Use MCP tools securely",
    backstory="A security-conscious agent",
    mcps=[f"https://mcp.exa.ai/mcp?api_key={exa_key}&profile={exa_profile}"]
)
```
### 3. Plan for Server Failures
```python
# Always include backup options
mcps=[
    "https://primary-api.com/mcp",  # primary choice
    "https://backup-api.com/mcp",   # backup option
    "crewai-amp:reliable-service"   # AMP fallback
]
```

View File

@@ -8,12 +8,37 @@ mode: "wide"
## Overview
The [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) provides a standardized way for AI agents to supply context to LLMs by communicating with external services known as MCP servers.
The `crewai-tools` library extends CrewAI's capabilities, letting you seamlessly integrate tools offered by these MCP servers into your agents.
This gives your crews access to a vast ecosystem of functionality.
CrewAI offers **two approaches** to MCP integration:
### 🚀 **New: Simple DSL Integration** (Recommended)
Use the `mcps` field directly on your agents for complete MCP tool integration:
```python
from crewai import Agent

agent = Agent(
    role="Research Analyst",
    goal="Research and analyze information",
    backstory="An expert researcher with access to external tools",
    mcps=[
        "https://mcp.exa.ai/mcp?api_key=your_key",   # external MCP server
        "https://api.weather.com/mcp#get_forecast",  # a specific tool from a server
        "crewai-amp:financial-data",                 # CrewAI AMP marketplace
        "crewai-amp:research-tools#pubmed_search"    # a specific AMP tool
    ]
)
# MCP tools are now automatically available to the agent!
```
### 🔧 **Advanced: MCPServerAdapter** (for complex scenarios)
For advanced use cases that require manual connection management, the `crewai-tools` library provides the `MCPServerAdapter` class; a minimal sketch follows the transport list below.
It currently supports the following transport mechanisms:
- **Stdio**: for local servers (communication over standard input/output between processes on the same machine)
- **HTTPS**: for remote servers (secure communication over HTTPS)
- **Server-Sent Events (SSE)**: for remote servers (one-way, real-time data streaming from server to client, over HTTP)
- **Streamable HTTP**: for remote servers (flexible, potentially bidirectional communication, typically using SSE for server-to-client streams, over HTTP)
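A minimal `MCPServerAdapter` sketch for the Stdio transport, assuming a hypothetical local `server.py` that speaks MCP over stdio and that the `mcp` package is installed:
```python
from crewai import Agent
from crewai_tools import MCPServerAdapter
from mcp import StdioServerParameters

# Hypothetical local MCP server launched as a subprocess
server_params = StdioServerParameters(
    command="python",
    args=["server.py"],
)

# The adapter connects on entry and tears the connection down on exit
with MCPServerAdapter(server_params) as mcp_tools:
    agent = Agent(
        role="Research Analyst",
        goal="Research and analyze information",
        backstory="An expert researcher with access to external tools",
        tools=mcp_tools,  # MCP tools exposed as regular CrewAI tools
    )
```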

View File

@@ -0,0 +1,109 @@
---
title: Datadog Integration
description: Learn how to integrate Datadog with CrewAI to submit LLM Observability traces to Datadog.
icon: dog
mode: "wide"
---
# Integrate Datadog with CrewAI
This guide demonstrates how to integrate **Datadog** with **CrewAI** using Datadog auto-instrumentation. By the end of this guide, you will be able to submit LLM Observability traces to Datadog and view your CrewAI agent runs in Datadog LLM Observability's Agentic Execution View.
## What is Datadog LLM Observability?
[Datadog LLM Observability](https://www.datadoghq.com/product/llm-observability/) helps AI engineers, data scientists, and application developers quickly develop, evaluate, and monitor LLM applications. Confidently improve output quality, performance, and costs, and reduce overall risk with structured experiments, end-to-end tracing across AI agents, and evaluations.
## Getting Started
### Install Dependencies
```shell
pip install ddtrace crewai crewai-tools
```
### Set Environment Variables
If you do not have a Datadog API key, you can [create an account](https://www.datadoghq.com/) and [get your API key](https://docs.datadoghq.com/account_management/api-app-keys/#api-keys).
You will also need to specify an ML Application name in the following environment variables. An ML Application is a grouping of LLM Observability traces associated with a specific LLM-based application. See the [ML Application Naming Guidelines](https://docs.datadoghq.com/llm_observability/instrumentation/sdk?tab=python#application-naming-guidelines) for restrictions on ML Application names.
```shell
export DD_API_KEY=<YOUR_DD_API_KEY>
export DD_SITE=<YOUR_DD_SITE>
export DD_LLMOBS_ENABLED=true
export DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME>
export DD_LLMOBS_AGENTLESS_ENABLED=true
export DD_APM_TRACING_ENABLED=false
```
Also set API keys for any LLM providers you use:
```shell
export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
export ANTHROPIC_API_KEY=<YOUR_ANTHROPIC_API_KEY>
export GEMINI_API_KEY=<YOUR_GEMINI_API_KEY>
...
```
### Create a CrewAI Agent Application
```python
# crewai_agent.py
from crewai import Agent, Task, Crew
from crewai_tools import WebsiteSearchTool

web_rag_tool = WebsiteSearchTool()

writer = Agent(
    role="Writer",
    goal="You make math engaging and understandable for young children through poetry",
    backstory="You're an expert in writing haikus but you know nothing of math.",
    tools=[web_rag_tool],
)

task = Task(
    description=("What is {multiplication}?"),
    expected_output=("Compose a haiku that includes the answer."),
    agent=writer,
)

crew = Crew(
    agents=[writer],
    tasks=[task],
    share_crew=False,
)

output = crew.kickoff(dict(multiplication="2 * 2"))
```
### Run the Application with Datadog Auto-Instrumentation
With the [environment variables](#set-environment-variables) set, you can now run the application with Datadog auto-instrumentation.
```shell
ddtrace-run python crewai_agent.py
```
### View the Traces in Datadog
After running the application, you can view the traces in [Datadog LLM Observability's Traces View](https://app.datadoghq.com/llm/traces) by selecting the ML Application name you chose from the top-left dropdown.
Clicking on a trace shows its details, including total tokens used, number of LLM calls, models used, and estimated cost. Clicking into a specific span narrows these details down and shows the related input, output, and metadata.
<Frame>
<img src="/images/datadog-llm-observability-1.png" alt="Datadog LLM Observability Trace View" />
</Frame>
Additionally, you can view the execution graph of the trace, which shows its control and data flow; for larger agents, this view scales to show handoffs and relationships between LLM calls, tool calls, and agent interactions.
<Frame>
<img src="/images/datadog-llm-observability-2.png" alt="Datadog LLM Observability Agent Execution Flow View" />
</Frame>
## References
- [Datadog LLM Observability](https://www.datadoghq.com/product/llm-observability/)
- [Datadog LLM Observability CrewAI Auto-Instrumentation](https://docs.datadoghq.com/llm_observability/instrumentation/auto_instrumentation?tab=python#crew-ai)

View File

@@ -23,13 +23,15 @@ uv add qdrant-client
```python
from crewai import Agent
from crewai_tools import QdrantVectorSearchTool
from crewai_tools import QdrantVectorSearchTool, QdrantConfig
# Initialize the tool
# Initialize the tool with QdrantConfig
qdrant_tool = QdrantVectorSearchTool(
qdrant_url="your_qdrant_url",
qdrant_api_key="your_qdrant_api_key",
collection_name="your_collection"
qdrant_config=QdrantConfig(
qdrant_url="your_qdrant_url",
qdrant_api_key="your_qdrant_api_key",
collection_name="your_collection"
)
)
# Create an agent that uses the tool
@@ -82,7 +84,7 @@ def extract_text_from_pdf(pdf_path):
def get_openai_embedding(text):
response = client.embeddings.create(
input=text,
model="text-embedding-3-small"
model="text-embedding-3-large"
)
return response.data[0].embedding
@@ -90,13 +92,13 @@ def get_openai_embedding(text):
def load_pdf_to_qdrant(pdf_path, qdrant, collection_name):
# Extract text from PDF
text_chunks = extract_text_from_pdf(pdf_path)
# Create Qdrant collection
if qdrant.collection_exists(collection_name):
qdrant.delete_collection(collection_name)
qdrant.create_collection(
collection_name=collection_name,
vectors_config=VectorParams(size=1536, distance=Distance.COSINE)
vectors_config=VectorParams(size=3072, distance=Distance.COSINE)
)
# Store embeddings
@@ -120,19 +122,23 @@ pdf_path = "path/to/your/document.pdf"
load_pdf_to_qdrant(pdf_path, qdrant, collection_name)
# Initialize Qdrant search tool
from crewai_tools import QdrantConfig
qdrant_tool = QdrantVectorSearchTool(
qdrant_url=os.getenv("QDRANT_URL"),
qdrant_api_key=os.getenv("QDRANT_API_KEY"),
collection_name=collection_name,
limit=3,
score_threshold=0.35
qdrant_config=QdrantConfig(
qdrant_url=os.getenv("QDRANT_URL"),
qdrant_api_key=os.getenv("QDRANT_API_KEY"),
collection_name=collection_name,
limit=3,
score_threshold=0.35
)
)
# Create CrewAI agents
search_agent = Agent(
role="Senior Semantic Search Agent",
goal="Find and analyze documents based on semantic search",
backstory="""You are an expert research assistant who can find relevant
backstory="""You are an expert research assistant who can find relevant
information using semantic search in a Qdrant database.""",
tools=[qdrant_tool],
verbose=True
@@ -141,7 +147,7 @@ search_agent = Agent(
answer_agent = Agent(
role="Senior Answer Assistant",
goal="Generate answers to questions based on the context provided",
backstory="""You are an expert answer assistant who can generate
backstory="""You are an expert answer assistant who can generate
answers to questions based on the context provided.""",
tools=[qdrant_tool],
verbose=True
@@ -180,21 +186,82 @@ print(result)
## Tool Parameters
### Required Parameters
- `qdrant_url` (str): The URL of your Qdrant server
- `qdrant_api_key` (str): API key for authentication with Qdrant
- `collection_name` (str): Name of the Qdrant collection to search
- `qdrant_config` (QdrantConfig): Configuration object containing all Qdrant settings
### Optional Parameters
### QdrantConfig Parameters
- `qdrant_url` (str): The URL of your Qdrant server
- `qdrant_api_key` (str, optional): API key for authentication with Qdrant
- `collection_name` (str): Name of the Qdrant collection to search
- `limit` (int): Maximum number of results to return (default: 3)
- `score_threshold` (float): Minimum similarity score threshold (default: 0.35)
- `filter` (Any, optional): Qdrant Filter instance for advanced filtering (default: None)
### Optional Tool Parameters
- `custom_embedding_fn` (Callable[[str], list[float]]): Custom function for text vectorization
- `qdrant_package` (str): Base package path for Qdrant (default: "qdrant_client")
- `client` (Any): Pre-initialized Qdrant client (optional)
## Advanced Filtering
The QdrantVectorSearchTool supports powerful filtering capabilities to refine your search results:
### Dynamic Filtering
Use the `filter_by` and `filter_value` parameters in your search to filter results on the fly:
```python
# The agent will use these parameters when calling the tool
# The tool schema accepts filter_by and filter_value
# Example: search with a category filter
# Results will be filtered where category == "technology"
```
### Preset Filters with QdrantConfig
For complex filtering, use Qdrant Filter instances in your configuration:
```python
from qdrant_client.http import models as qmodels
from crewai_tools import QdrantVectorSearchTool, QdrantConfig
# Create a filter for specific conditions
preset_filter = qmodels.Filter(
must=[
qmodels.FieldCondition(
key="category",
match=qmodels.MatchValue(value="research")
),
qmodels.FieldCondition(
key="year",
match=qmodels.MatchValue(value=2024)
)
]
)
# Initialize the tool with a preset filter
qdrant_tool = QdrantVectorSearchTool(
qdrant_config=QdrantConfig(
qdrant_url="your_url",
qdrant_api_key="your_key",
collection_name="your_collection",
filter=preset_filter # Preset filter applied to all searches
)
)
```
### Combining Filters
The tool automatically combines preset filters from `QdrantConfig` with dynamic filters from `filter_by` and `filter_value`:
```python
# If QdrantConfig has a preset filter for category="research"
# and the search uses filter_by="year", filter_value=2024,
# both filters are combined (AND logic)
```
## Search Parameters
The tool accepts these parameters in its schema:
- `query` (str): The search query to find similar documents
- `filter_by` (str, optional): Metadata field to filter on
- `filter_value` (str, optional): Value to filter by
- `filter_value` (Any, optional): Value to filter by
## Return Format
@@ -214,7 +281,7 @@ print(result)
## Default Embedding
By default, the tool uses OpenAI's `text-embedding-3-small` model for vectorization. This requires:
By default, the tool uses OpenAI's `text-embedding-3-large` model for vectorization. This requires:
- An OpenAI API key set in the environment: `OPENAI_API_KEY`
## Custom Embeddings
@@ -240,18 +307,22 @@ def custom_embeddings(text: str) -> list[float]:
# Tokenize and get model outputs
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)
outputs = model(**inputs)
# Use mean pooling to get text embedding
embeddings = outputs.last_hidden_state.mean(dim=1)
# Convert to list of floats and return
return embeddings[0].tolist()
# Use custom embeddings with the tool
from crewai_tools import QdrantConfig
tool = QdrantVectorSearchTool(
qdrant_url="your_url",
qdrant_api_key="your_key",
collection_name="your_collection",
qdrant_config=QdrantConfig(
qdrant_url="your_url",
qdrant_api_key="your_key",
collection_name="your_collection"
),
custom_embedding_fn=custom_embeddings # Pass your custom function
)
```
@@ -270,4 +341,4 @@ tool = QdrantVectorSearchTool(
export QDRANT_URL="your_qdrant_url" # If not provided in constructor
export QDRANT_API_KEY="your_api_key" # If not provided in constructor
export OPENAI_API_KEY="your_openai_key" # If using default embeddings
```
```

View File

@@ -54,25 +54,25 @@ tool = CSVSearchTool()
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python Code
from chromadb.config import Settings
tool = CSVSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
config={
"embedding_model": {
"provider": "openai",
"config": {
"model": "text-embedding-3-small",
# "api_key": "sk-...",
},
},
"vectordb": {
"provider": "chromadb", # 또는 "qdrant"
"config": {
# "settings": Settings(persist_directory="/content/chroma", allow_reset=True, is_persistent=True),
# from qdrant_client.models import VectorParams, Distance
# "vectors_config": VectorParams(size=384, distance=Distance.COSINE),
}
},
}
)
```

View File

@@ -46,23 +46,25 @@ tool = DirectorySearchTool(directory='/path/to/directory')
The DirectorySearchTool uses OpenAI for embeddings and summarization by default. Customization options include changing the model provider and configuration, adding flexibility for advanced users.
```python Code
from chromadb.config import Settings
tool = DirectorySearchTool(
config=dict(
llm=dict(
provider="ollama", # Options include ollama, google, anthropic, llama2, and more
config=dict(
model="llama2",
# Additional configurations here
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
config={
"embedding_model": {
"provider": "openai",
"config": {
"model": "text-embedding-3-small",
# "api_key": "sk-...",
},
},
"vectordb": {
"provider": "chromadb", # 또는 "qdrant"
"config": {
# "settings": Settings(persist_directory="/content/chroma", allow_reset=True, is_persistent=True),
# from qdrant_client.models import VectorParams, Distance
# "vectors_config": VectorParams(size=384, distance=Distance.COSINE),
}
},
}
)
```

View File

@@ -56,25 +56,25 @@ tool = DOCXSearchTool(docx='path/to/your/document.docx')
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python Code
from chromadb.config import Settings
tool = DOCXSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
config={
"embedding_model": {
"provider": "openai",
"config": {
"model": "text-embedding-3-small",
# "api_key": "sk-...",
},
},
"vectordb": {
"provider": "chromadb", # 또는 "qdrant"
"config": {
# "settings": Settings(persist_directory="/content/chroma", allow_reset=True, is_persistent=True),
# from qdrant_client.models import VectorParams, Distance
# "vectors_config": VectorParams(size=384, distance=Distance.COSINE),
}
},
}
)
```

View File

@@ -48,27 +48,25 @@ tool = MDXSearchTool(mdx='path/to/your/document.mdx')
The tool defaults to using OpenAI for embeddings and summarization. For customization, use a configuration dictionary as shown below:
```python Code
from chromadb.config import Settings
tool = MDXSearchTool(
config=dict(
llm=dict(
provider="ollama", # 옵션에는 google, openai, anthropic, llama2 등이 있습니다.
config=dict(
model="llama2",
# Optional parameters can be included here.
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # 또는 openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# An optional title for the embeddings can be added here.
# title="Embeddings",
),
),
)
config={
"embedding_model": {
"provider": "openai",
"config": {
"model": "text-embedding-3-small",
# "api_key": "sk-...",
},
},
"vectordb": {
"provider": "chromadb", # 또는 "qdrant"
"config": {
# "settings": Settings(persist_directory="/content/chroma", allow_reset=True, is_persistent=True),
# from qdrant_client.models import VectorParams, Distance
# "vectors_config": VectorParams(size=384, distance=Distance.COSINE),
}
},
}
)
```

View File

@@ -45,28 +45,60 @@ tool = PDFSearchTool(pdf='path/to/your/document.pdf')
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows. Note: a vector database configuration is required because the generated embeddings must be stored in and queried from a vector DB.
```python Code
from crewai_tools import PDFSearchTool
from chromadb.config import Settings # Chroma persistence settings
tool = PDFSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
config={
# Required: embedding provider and settings
"embedding_model": {
# Supported providers: "openai", "azure", "google-generativeai", "google-vertex",
# "voyageai", "cohere", "huggingface", "jina", "sentence-transformer",
# "text2vec", "ollama", "openclip", "instructor", "onnx", "roboflow", "watsonx", "custom"
"provider": "openai",
"config": {
# The "model" key is mapped to "model_name" internally.
"model": "text-embedding-3-small",
# Optional: API key (environment variables are used if unset)
# "api_key": "sk-...",
# Provider-specific examples
# --- Google ---
# (set provider to "google-generativeai")
# "model": "models/embedding-001",
# "task_type": "retrieval_document",
# --- Cohere ---
# (set provider to "cohere")
# "model": "embed-english-v3.0",
# --- Ollama (local) ---
# (set provider to "ollama")
# "model": "nomic-embed-text",
},
},
# Required: vector DB settings
"vectordb": {
"provider": "chromadb", # or "qdrant"
"config": {
# Chroma settings example
# "settings": Settings(
#     persist_directory="/content/chroma",
#     allow_reset=True,
#     is_persistent=True,
# ),
# Qdrant settings example
# from qdrant_client.models import VectorParams, Distance
# "vectors_config": VectorParams(size=384, distance=Distance.COSINE),
# Note: the collection name is managed by the tool (default: "rag_tool_collection").
}
},
}
)
```

View File

@@ -57,25 +57,34 @@ tool = TXTSearchTool(txt='path/to/text/file.txt')
To customize the model, you can use a config dictionary as follows:
```python Code
from chromadb.config import Settings
tool = TXTSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
config={
# Required: embedding provider + config
"embedding_model": {
"provider": "openai", # or google-generativeai, cohere, ollama, ...
"config": {
"model": "text-embedding-3-small",
# "api_key": "sk-...", # optional if the env var is set
# Provider examples: Google → model: "models/embedding-001", task_type: "retrieval_document"
},
},
# Required: vector database config
"vectordb": {
"provider": "chromadb", # or "qdrant"
"config": {
# Chroma settings (persistence example)
# "settings": Settings(persist_directory="/content/chroma", allow_reset=True, is_persistent=True),
# Qdrant vector params example:
# from qdrant_client.models import VectorParams, Distance
# "vectors_config": VectorParams(size=384, distance=Distance.COSINE),
# Note: the collection name is managed by the tool (default: "rag_tool_collection").
}
},
}
)
```

View File

@@ -54,25 +54,25 @@ tool = XMLSearchTool(xml='path/to/your/xmlfile.xml')
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python Code
from chromadb.config import Settings
tool = XMLSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
config={
"embedding_model": {
"provider": "openai",
"config": {
"model": "text-embedding-3-small",
# "api_key": "sk-...",
},
},
"vectordb": {
"provider": "chromadb", # 또는 "qdrant"
"config": {
# "settings": Settings(persist_directory="/content/chroma", allow_reset=True, is_persistent=True),
# from qdrant_client.models import VectorParams, Distance
# "vectors_config": VectorParams(size=384, distance=Distance.COSINE),
}
},
}
)
```

View File

@@ -33,6 +33,22 @@ Before using the Asana integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the ClickUp integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the Gmail integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

View File

@@ -33,6 +33,22 @@ Before using the Google Calendar integration, make sure you have:
uv add crewai-tools
```
### 3. Set Environment Variables
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>


@@ -33,6 +33,22 @@ Before using the Google Contacts integration, make sure you have:
uv add crewai-tools
```
### 3. Environment variable setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>


@@ -33,6 +33,22 @@ Before using the Google Docs integration, make sure you have:
uv add crewai-tools
```
### 3. Environment variable setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>


@@ -33,6 +33,22 @@ Before using the Google Drive integration, make sure you have:
uv add crewai-tools
```
### 3. Environment variable setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
For detailed information on parameters and usage, see the [English documentation](../../../en/enterprise/integrations/google_drive).


@@ -34,6 +34,22 @@ Before using the Google Sheets integration, make sure you have:
uv add crewai-tools
```
### 3. Environment variable setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>


@@ -33,6 +33,22 @@ Before using the Google Slides integration, make sure you have:
uv add crewai-tools
```
### 3. Environment variable setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>


@@ -33,6 +33,22 @@ Before using the HubSpot integration, make sure you have:
uv add crewai-tools
```
### 3. Environment variable setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>


@@ -33,6 +33,22 @@ Before using the Linear integration, make sure you have:
uv add crewai-tools
```
### 3. Environment variable setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>


@@ -33,6 +33,22 @@ Before using the Microsoft Excel integration, make sure you have:
uv add crewai-tools
```
### 3. Environment variable setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>


@@ -33,6 +33,22 @@ Before using the Microsoft OneDrive integration, make sure you have:
uv add crewai-tools
```
### 3. Environment variable setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>


@@ -33,6 +33,22 @@ Before using the Microsoft Outlook integration, make sure you have:
uv add crewai-tools
```
### 3. Environment variable setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>


@@ -33,6 +33,22 @@ Before using the Microsoft SharePoint integration, make sure you have:
uv add crewai-tools
```
### 3. Environment variable setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>


@@ -33,6 +33,22 @@ Before using the Microsoft Teams integration, make sure you have:
uv add crewai-tools
```
### 3. Environment variable setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>


@@ -33,6 +33,22 @@ Before using the Microsoft Word integration, make sure you have:
uv add crewai-tools
```
### 3. Environment variable setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable to your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>

Some files were not shown because too many files have changed in this diff.