Compare commits

...

237 Commits

Author SHA1 Message Date
Joao Moura
c47ff15bf6 set tracing if user enables it 2025-09-18 17:01:52 -07:00
Joao Moura
270e0b6edd avoiding line breaking 2025-09-18 16:40:51 -07:00
Joao Moura
a0cbb5cfdb feat(tracing): enhance first-time trace display and auto-open browser 2025-09-18 16:37:48 -07:00
Lorenze Jay
2f682e1564 feat: update ChromaDB embedding function to use OpenAI API (#3538)
- Refactor the default embedding function to utilize OpenAI's embedding function with API key support.
- Import necessary OpenAI embedding function and configure it with the environment variable for the API key.
- Ensure compatibility with existing ChromaDB configuration model.
2025-09-18 14:50:35 -07:00
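A minimal sketch of the wiring described in #3538, using ChromaDB's built-in OpenAI embedding function keyed off an environment variable; the model name below is an assumption, not taken from the commit:

```python
import os

from chromadb.utils import embedding_functions

# Build ChromaDB's OpenAI embedding function from the OPENAI_API_KEY env var.
# The model name is an assumption; the crewai default may differ.
default_embedding_function = embedding_functions.OpenAIEmbeddingFunction(
    api_key=os.environ["OPENAI_API_KEY"],
    model_name="text-embedding-3-small",
)

# The function can then be handed to a ChromaDB collection at creation time.
```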
Greyson LaLonde
d4aa676195 feat: add configurable search parameters for RAG, knowledge, and memory (#3531)
- Add limit and score_threshold to BaseRagConfig, propagate to clients  
- Update default search params in RAG storage, knowledge, and memory (limit=5, threshold=0.6)  
- Fix linting (ruff, mypy, PERF203) and refactor save logic  
- Update tests for new defaults and ChromaDB behavior
2025-09-18 16:58:03 -04:00
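As a rough illustration of the search parameters named in #3531, a pydantic sketch with the defaults quoted in the commit message (`limit=5`, `score_threshold=0.6`); the real `BaseRagConfig` carries additional fields:

```python
from pydantic import BaseModel


class BaseRagConfig(BaseModel):
    """Sketch of the configurable search parameters; other fields omitted."""

    limit: int = 5                # maximum number of search results returned
    score_threshold: float = 0.6  # minimum similarity score for a hit to be kept


config = BaseRagConfig()
search_kwargs = {"limit": config.limit, "score_threshold": config.score_threshold}
```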
Lorenze Jay
578fa8c2e4 Lorenze/ephemeral trace ask (#3530)
* feat(tracing): implement first-time trace handling and improve event management

- Added FirstTimeTraceHandler for managing first-time user trace collection and display.
- Enhanced TraceBatchManager to support ephemeral trace URLs and improved event buffering.
- Updated TraceCollectionListener to utilize the new FirstTimeTraceHandler.
- Refactored type annotations across multiple files for consistency and clarity.
- Improved error handling and logging for trace-related operations.
- Introduced utility functions for trace viewing prompts and first execution checks.

* brought back crew finalize batch events

* refactor(trace): move instance variables to __init__ in TraceBatchManager

- Refactored TraceBatchManager to initialize instance variables in the constructor instead of as class variables.
- Improved clarity and encapsulation of the class state.

* fix(tracing): improve error handling in user data loading and saving

- Enhanced error handling in _load_user_data and _save_user_data functions to log warnings for JSON decoding and file access issues.
- Updated documentation for trace usage to clarify the addition of tracing parameters in Crew and Flow initialization.
- Refined state management in Flow class to ensure proper handling of state IDs when persistence is enabled.

* add some tests

* fix test

* fix tests

* refactor(tracing): enhance user input handling for trace viewing

- Replaced signal-based timeout handling with threading for user input in prompt_user_for_trace_viewing function.
- Improved user experience by allowing a configurable timeout for viewing execution traces.
- Updated tests to mock threading behavior and verify timeout handling correctly.

* fix(tracing): improve machine ID retrieval with error handling

- Added error handling to the _get_machine_id function to log warnings when retrieving the machine ID fails.
- Ensured that the function continues to provide a stable, privacy-preserving machine fingerprint even in case of errors.

* refactor(flow): streamline state ID assignment in Flow class

- Replaced direct attribute assignment with setattr for improved flexibility in handling state IDs.
- Enhanced code readability by simplifying the logic for setting the state ID when persistence is enabled.
2025-09-18 10:17:34 -07:00
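The trace-viewing prompt in #3530 replaces signal-based timeouts with a worker thread. A simplified, hypothetical stand-in for `prompt_user_for_trace_viewing` showing the threading pattern:

```python
import threading


def prompt_with_timeout(prompt: str, timeout_seconds: float = 20.0) -> str | None:
    """Read user input on a daemon thread and give up after timeout_seconds.

    Unlike signal.alarm-based timeouts, this also works off the main thread
    and on Windows. Returns None when the user does not answer in time.
    """
    answer: list[str] = []

    def _read() -> None:
        answer.append(input(prompt))

    reader = threading.Thread(target=_read, daemon=True)
    reader.start()
    reader.join(timeout_seconds)
    return answer[0].strip().lower() if answer else None


if prompt_with_timeout("Open the execution trace in your browser? [y/N] ") == "y":
    print("opening trace URL...")
```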
Rip&Tear
6f5af2b27c Update CodeQL workflow to ignore specific paths (#3534)
CodeQL, when configured through the GUI, does not allow for advanced configuration. This PR switches to an advanced file-based config, which allows us to exclude certain paths.
2025-09-18 23:26:15 +08:00
Greyson LaLonde
8ee3cf4874 test: fix flaky agent repeated tool usage test (#3533)
- Make assertion resilient to race condition with max iterations in CI  
- Add investigation notes and TODOs for deterministic executor flow
2025-09-17 22:00:32 -04:00
Greyson LaLonde
f2d3fd0c0f fix(events): add missing event exports to __init__.py (#3532) 2025-09-17 21:50:27 -04:00
Greyson LaLonde
f28e78c5ba refactor: unify rag storage with instance-specific client support (#3455)
- ignore line length errors globally
- migrate knowledge/memory and crew query_knowledge to `SearchResult`
- remove legacy chromadb utils; fix empty metadata handling
- restore openai as default embedding provider; support instance-specific clients
- update and fix tests for `SearchResult` migration and rag changes
2025-09-17 14:46:54 -04:00
Greyson LaLonde
81bd81e5f5 fix: handle model parameter in OpenAI adapter initialization (#3510)
2025-09-12 17:31:53 -04:00
Vidit Ostwal
1b00cc71ef Dropping messages from metadata in Mem0 Storage (#3390)
* Dropped messages from metadata and added user-assistant interaction directly

* Fixed test cases for this

* Fixed static type checking issue

* Changed logic to take latest user and assistant messages

* Added default value to be string

* Linting checks

* Removed duplication of tool calling

* Fixed Linting Changes

* Ruff check

* Removed console formatter file from commit

* Linting fixed

* Linting checks

* Ignoring missing imports error

* Added suggested changes

* Fixed import untyped error
2025-09-12 15:25:29 -04:00
Greyson LaLonde
45d0c9912c chore: add type annotations and docstrings to openai agent adapters (#3505) 2025-09-12 10:41:39 -04:00
Greyson LaLonde
1f1ab14b07 fix: resolve test duration cache issues in CI workflows (#3506)
2025-09-12 08:38:47 -04:00
Lucas Gomide
1a70f1698e feat: add thread-safe platform context management (#3502)
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-09-11 17:32:51 -04:00
Greyson LaLonde
8883fb656b feat(tests): add duration caching for pytest-split
- Cache test durations for optimized splitting
2025-09-11 15:16:05 -04:00
Greyson LaLonde
79d65e55a1 chore: add type annotations and docstrings to langgraph adapters (#3503) 2025-09-11 13:06:44 -04:00
Lorenze Jay
dde76bfec5 chore: bump CrewAI version to 0.186.1 and update dependencies in CLI templates (#3499)
- Updated CrewAI version from 0.186.0 to 0.186.1 in `__init__.py`.
- Updated `crewai[tools]` dependency version in `pyproject.toml` for crew, flow, and tool templates to reflect the new CrewAI version.
2025-09-10 17:01:19 -07:00
Lorenze Jay
f554123af6 fix (#3498) 2025-09-10 16:55:25 -07:00
Lorenze Jay
4336e945b8 chore: update dependencies and version for CrewAI (#3497)
- Updated `crewai-tools` dependency from version 0.69.0 to 0.71.0 in `pyproject.toml`.
- Bumped CrewAI version from 0.177.0 to 0.186.0 in `__init__.py`.
- Updated dependency versions in CLI templates for crew, flow, and tool to reflect the new CrewAI version.
2025-09-10 16:03:58 -07:00
Lorenze Jay
75b916c85a Lorenze/fix tool call twice (#3495)
* test: add test to ensure tool is called only once during crew execution

- Introduced a new test case to validate that the counting_tool is executed exactly once during crew execution.
- Created a CountingTool class to track execution counts and log call history.
- Enhanced the test suite with a YAML cassette for consistent tool behavior verification.

* ensure tool function called only once

* refactor: simplify error handling in CrewStructuredTool

- Removed unnecessary try-except block around the tool function call to streamline execution flow.
- Ensured that the tool function is called directly, improving readability and maintainability.

* linted

* need to ignore for now as we can't infer the complex generic type within pydantic create_model_func

* fix tests
2025-09-10 15:20:21 -07:00
Greyson LaLonde
01be26ce2a chore: add build-cache, update jobs, remove redundant security check
- Build and cache uv dependencies; update type-checker, tests, and linter to use cache  
- Remove separate security-checker
- Add explicit workflow permissions for compliance  
- Remove pull_request trigger from build-cache workflow
2025-09-10 13:02:24 -04:00
Greyson LaLonde
c3ad5887ef chore: add type annotations to utilities module (#3484)
- Update to Python 3.10+ typing across LLM, callbacks, storage, and errors
- Complete typing updates for crew_chat and hitl
- Add stop attr to mock LLM, suppress test warnings
- Add type-ignore for aisuite import
2025-09-10 10:56:17 -04:00
Lucas Gomide
260b49c10a fix: support defining MCP connection timeout on CrewBase instance (#3465)
* fix: support defining MCP connection timeout on CrewBase instance

* fix: resolve linter issues

* chore: ignore specific rule N802 on CrewBase class

* fix: ignore untyped import
2025-09-10 09:58:46 -04:00
Greyson LaLonde
1dc4f2e897 chore: add typing and docstrings to base_token_process module (#3486)
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-09-10 09:23:39 -04:00
Greyson LaLonde
b126ab22dd chore: refactor telemetry module with utility functions and modern typing (#3485)
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-09-10 09:18:21 -04:00
Greyson LaLonde
079cb72f6e chore: update typing in types module to Python 3.10+ syntax (#3482)
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-09-10 09:07:36 -04:00
Greyson LaLonde
83682d511f chore: modernize LLM interface typing and add constants (#3483)
* chore: update LLM interfaces to Python 3.10+ typing

* fix: add missing stop attribute to mock LLM and improve test infrastructure

* fix: correct type ignore comment for aisuite import
2025-09-10 08:30:49 -04:00
Samarth Rawat
6676d94ba1 Doc Fix: fixed number of memory types (#3288)
* Update memory.mdx

* Update memory.mdx

---------

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-09-09 14:11:56 -04:00
Greyson LaLonde
d5126d159b chore: improve typing and docs in agents leaf files (#3461)
- Add typing and Google-style docstrings to agents leaf files
- Add TODO notes
2025-09-08 11:57:34 -04:00
Greyson LaLonde
fa06aea8d5 chore: modernize security module typing (#3469)
- Disable E501, apply Ruff formatting
- Update typing (Self, BeforeValidator), remove dead code
- Convert Fingerprint to Pydantic dataclass and fix serialization/copy behavior
- Add TODO for dynamic namespace config
2025-09-08 11:52:59 -04:00
Greyson LaLonde
f936e0f69b chore: enhance typing and documentation in tasks module (#3467)
- Disable E501 line length linting rule
- Add Google-style docstrings to tasks leaf file
- Modernize typing and docs in task_output.py
- Improve typing and documentation in conditional_task.py
2025-09-08 11:42:23 -04:00
Greyson LaLonde
37c5e88d02 ci: configure pre-commit hooks and github actions to use uv run (#3479) 2025-09-08 11:30:28 -04:00
Kim
1a96ed7b00 fix: rebranding of Azure AI Studio (Azure OpenAI Studio) to Azure AI Foundry (#3424)
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-09-05 20:42:05 -04:00
Tony Kipkemboi
1a1bb0ca3d docs: Docs updates (#3459)
* docs(cli): document device-code login and config reset guidance; renumber sections

* docs(cli): fix duplicate numbering (renumber Login/API Keys/Configuration sections)

* docs: Fix webhook documentation to include meta dict in all webhook payloads

- Add note explaining that meta objects from kickoff requests are included in all webhook payloads
- Update webhook examples to show proper payload structure including meta field
- Fix webhook examples to match actual API implementation
- Apply changes to English, Korean, and Portuguese documentation

Resolves the documentation gap where meta dict passing to webhooks was not documented despite being implemented in the API.

* WIP: CrewAI docs theme, changelog, GEO, localization

* docs(cli): fix merge markers; ensure mode: "wide"; convert ASCII tables to Markdown (en/pt-BR/ko)

* docs: add group icons across locales; split Automation/Integrations; update tools overviews and links
2025-09-05 17:40:11 -04:00
Mike Plachta
99b79ab20d docs: move Bedrock tool docs to integration folder and add CrewAI automation tool docs (#3403)
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-09-05 15:12:35 -04:00
Mike Plachta
80974fec6c docs: expand webhook event types with detailed categorization and descriptions (#3369)
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-09-05 14:57:01 -04:00
Greyson LaLonde
30b9cdd944 chore: expand ruff rules with comprehensive linting (#3453) 2025-09-05 14:38:56 -04:00
Greyson LaLonde
610c1f70c0 chore: relax mypy configuration and exclude tests from CI (#3452) 2025-09-05 10:00:05 -04:00
Greyson LaLonde
ab82da02f9 refactor: cleanup crew agent executor (#3440)
refactor: cleanup crew agent executor & add docs

- Remove dead code, unused imports, and obsolete methods
- Modernize with updated type hints and static _format_prompt
- Add docstrings for clarity
2025-09-04 15:32:47 -04:00
Lorenze Jay
f0def350a4 chore: update crewAI and tools dependencies to latest versions (#3444)
- Updated `crewai-tools` dependency from version 0.65.0 to 0.69.0 in `pyproject.toml` and `uv.lock`.
- Bumped crewAI version from 0.175.0 to 0.177.0 in `__init__.py`.
- Updated dependency versions in CLI templates for crew, flow, and tool projects to reflect the new crewAI version.
2025-09-03 17:27:05 -07:00
Lorenze Jay
f4f32b5f7f fix: suppress Pydantic deprecation warnings in initialization (#3443)
* fix: suppress Pydantic deprecation warnings in initialization

- Implemented a function to filter out Pydantic deprecation warnings, enhancing the user experience by preventing unnecessary warning messages during execution.
- Removed the previous warning filter setup to streamline the warning suppression process.
- Updated the User-Agent header formatting for consistency.

* fix type check

* dropped

* fix: update type-checker workflow and suppress warnings

- Updated the Python version matrix in the type-checker workflow to use double quotes for consistency.
- Added the `# type: ignore[assignment]` comment to the warning suppression assignment in `__init__.py` to address type checking issues.
- Ensured that the mypy command in the workflow allows for untyped calls and generics, enhancing type checking flexibility.

* better
2025-09-03 16:36:50 -07:00
Tony Kipkemboi
49a5ae0e16 Docs/release 0.175.0 docs (#3441)
* docs(install): note OpenAI SDK requirement openai>=1.13.3 for 0.175.0

* docs(cli): document device-code login and config reset guidance; renumber sections

* docs(flows): document conditional @start and resumable execution semantics

* docs(tasks): move max_retries to deprecation note under attributes table

* docs: provider-neutral RAG client config; entity memory batching; trigger payload note; tracing batch manager

* docs(cli): fix duplicate numbering (renumber Login/API Keys/Configuration sections)
2025-09-03 17:27:11 -04:00
Lucas Gomide
d31ffdbb90 docs: update Enterprise Action Auth Token section docs (#3437)
2025-09-02 17:36:28 -04:00
Greyson LaLonde
4555ada91e fix(ruff): remove Python 3.12+ only rules for compatibility (#3436) 2025-09-02 14:15:25 -04:00
Greyson LaLonde
92d71f7f06 chore: migrate CI workflows to uv and update dev tooling (#3426)
chore(dev): update tooling & CI workflows

- Upgrade ruff, mypy (strict), pre-commit; add hooks, stubs, config consolidation
- Add bandit to dev deps and update uv.lock
- Enhance ruff rules (modern Python style, B006 for mutable defaults)
- Update workflows to use uv, matrix strategy, and changed-file type checking
- Include tests in type checking; fix job names and add summary job for branch protection
2025-09-02 12:35:02 -04:00
ZhangYier
dada9f140f fix: README.md example link 404 (#3432)
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-09-02 10:29:40 -04:00
Greyson LaLonde
878c1a649a refactor: Move events module to crewai.events (#3425)
refactor(events): relocate events module & update imports

- Move events from utilities/ to top-level events/ with types/, listeners/, utils/ structure
- Update all source/tests/docs to new import paths
- Add backwards compatibility stubs in crewai.utilities.events with deprecation warnings
- Restore test mocks and fix related test imports
2025-09-02 10:06:42 -04:00
Greyson LaLonde
1b1a8fdbf4 fix: replace mutable default arguments with None (#3429)
2025-08-31 18:57:45 -04:00
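For context on #3429, the standard Python fix being applied: a mutable default is created once at function definition time and shared across calls, so it is replaced with `None` plus per-call initialization. The function and argument names here are illustrative, not taken from the codebase.

```python
# Anti-pattern: one shared list is reused by every call that omits `tools`.
def register_tool_bad(tool: str, tools: list[str] = []) -> list[str]:
    tools.append(tool)
    return tools


# Fix applied by the commit above: default to None, create the list per call.
def register_tool(tool: str, tools: list[str] | None = None) -> list[str]:
    if tools is None:
        tools = []
    tools.append(tool)
    return tools
```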
Lorenze Jay
2633b33afc fix: enhance LLM event handling with task and agent metadata (#3422)
* fix: enhance LLM event handling with task and agent metadata

- Added `from_task` and `from_agent` parameters to LLM event emissions for improved traceability.
- Updated `_send_events_to_backend` method in TraceBatchManager to return status codes for better error handling.
- Modified `CREWAI_BASE_URL` to remove trailing slash for consistency.
- Improved logging and graceful failure handling in event sending process.

* drop print
2025-08-29 13:48:49 -07:00
Greyson LaLonde
e4c4b81e63 chore: refactor parser & constants, improve tools_handler, update tests
- Move parser constants to dedicated module with pre-compiled regex
- Refactor CrewAgentParser to module functions; remove unused params
- Improve tools_handler with instance attributes
- Update tests to use module-level parser functions
2025-08-29 14:35:08 -04:00
Greyson LaLonde
ec1eff02a8 fix: achieve parity between rag package and current impl (#3418)
- Sanitize ChromaDB collection names and use original dir naming
- Add persistent client with file locking to the ChromaDB factory
- Add upsert support to the ChromaDB client
- Suppress ChromaDB deprecation warnings for `model_fields`
- Extract `suppress_logging` into shared `logger_utils`
- Update tests to reflect upsert behavior
- Docs: add additional note
2025-08-28 11:22:36 -04:00
Lorenze Jay
0f1b764c3e chore: update crewAI version and dependencies to 0.175.0 and tools to 0.65.0 (#3417)
* Bump crewAI version from 0.165.1 to 0.175.0 in __init__.py.
* Update tools dependency from 0.62.1 to 0.65.0 in pyproject.toml and uv.lock files.
* Reflect changes in CLI templates for crew, flow, and tool configurations.
2025-08-27 19:33:32 -07:00
Lorenze Jay
6ee9db1d4a fix: enhance PlusAPI and TraceBatchManager with timeout handling and graceful failure logging (#3416)
* Added timeout parameters to PlusAPI trace event methods for improved reliability.
* Updated TraceBatchManager to handle None responses gracefully, logging warnings instead of errors.
* Improved logging messages to provide clearer context during trace batch initialization and event sending failures.
2025-08-27 18:43:03 -07:00
Greyson LaLonde
109de91d08 fix: batch entity memory items to reduce redundant operations (#3409)
* fix: batch save entity memory items to reduce redundant operations

* test: update memory event count after entity batch save implementation
2025-08-27 10:47:20 -04:00
Erika Shorten
92b70e652d Add hybrid search alpha parameter to the docs (#3397)
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-08-27 10:36:39 -04:00
Heitor Carvalho
fc3f2c49d2 chore: remove auth0 and the need to type the email on 'crewai login' (#3408)
* Remove the need to type the email on 'crewai login'

* Remove auth0 constants, update tests
2025-08-27 10:12:57 -04:00
Lucas Gomide
88d2968fd5 chore: add deprecation notices to Task.max_retries (#3379)
2025-08-26 17:24:58 -04:00
Lorenze Jay
7addda9398 Lorenze/better tracing events (#3382)
* feat: implement tool usage limit exception handling

- Introduced `ToolUsageLimitExceeded` exception to manage maximum usage limits for tools.
- Enhanced `CrewStructuredTool` to check and raise this exception when the usage limit is reached.
- Updated `_run` and `_execute` methods to include usage limit checks and handle exceptions appropriately, improving reliability and user feedback.

* feat: enhance PlusAPI and ToolUsage with task metadata

- Removed the `send_trace_batch` method from PlusAPI to streamline the API.
- Added timeout parameters to trace event methods in PlusAPI for improved reliability.
- Updated ToolUsage to include task metadata (task name and ID) in event emissions, enhancing traceability and context during tool usage.
- Refactored event handling in LLM and ToolUsage events to ensure task information is consistently captured.

* feat: enhance memory and event handling with task and agent metadata

- Added task and agent metadata to various memory and event classes, improving traceability and context during memory operations.
- Updated the `ContextualMemory` and `Memory` classes to associate tasks and agents, allowing for better context management.
- Enhanced event emissions in `LLM`, `ToolUsage`, and memory events to include task and agent information, facilitating improved debugging and monitoring.
- Refactored event handling to ensure consistent capture of task and agent details across the system.

* drop

* refactor: clean up unused imports in memory and event modules

- Removed unused TYPE_CHECKING imports from long_term_memory.py to streamline the code.
- Eliminated unnecessary import from memory_events.py, enhancing clarity and maintainability.

* fix memory tests

* fix task_completed payload

* fix: remove unused test agent variable in external memory tests

* refactor: remove unused agent parameter from Memory class save method

- Eliminated the agent parameter from the save method in the Memory class to streamline the code and improve clarity.
- Updated the TraceBatchManager class by moving initialization of attributes into the constructor for better organization and readability.

* refactor: enhance ExecutionState and ReasoningEvent classes with optional task and agent identifiers

- Added optional `current_agent_id` and `current_task_id` attributes to the `ExecutionState` class for better tracking of agent and task states.
- Updated the `from_task` attribute in the `ReasoningEvent` class to use `Optional[Any]` instead of a specific type, improving flexibility in event handling.

* refactor: update ExecutionState class by removing unused agent and task identifiers

- Removed the `current_agent_id` and `current_task_id` attributes from the `ExecutionState` class to simplify the code and enhance clarity.
- Adjusted the import statements to include `Optional` for better type handling.

* refactor: streamline LLM event handling in LiteAgent

- Removed unused LLM event emissions (LLMCallStartedEvent, LLMCallCompletedEvent, LLMCallFailedEvent) from the LiteAgent class to simplify the code and improve performance.
- Adjusted the flow of LLM response handling by eliminating unnecessary event bus interactions, enhancing clarity and maintainability.

* flow ownership and not emitting events when a crew is done

* refactor: remove unused agent parameter from ShortTermMemory save method

- Eliminated the agent parameter from the save method in the ShortTermMemory class to streamline the code and improve clarity.
- This change enhances the maintainability of the memory management system by reducing unnecessary complexity.

* runtype check fix

* fixing tests

* fix lints

* fix: update event assertions in test_llm_emits_event_with_lite_agent

- Adjusted the expected counts for completed and started events in the test to reflect the correct behavior of the LiteAgent.
- Updated assertions for agent roles and IDs to match the expected values after recent changes in event handling.

* fix: update task name assertions in event tests

- Modified assertions in `test_stream_llm_emits_event_with_task_and_agent_info` and `test_llm_emits_event_with_task_and_agent_info` to use `task.description` as a fallback for `task.name`. This ensures that the tests correctly validate the task name even when it is not explicitly set.

* fix: update test assertions for output values and improve readability

- Updated assertions in `test_output_json_dict_hierarchical` to reflect the correct expected score value.
- Enhanced readability of assertions in `test_output_pydantic_to_another_task` and `test_key` by formatting the error messages for clarity.
- These changes ensure that the tests accurately validate the expected outputs and improve overall code quality.

* test fixes

* fix crew_test

* added another fixture

* fix: ensure agent and task assignments in contextual memory are conditional

- Updated the ContextualMemory class to check for the existence of short-term, long-term, external, and extended memory before assigning agent and task attributes. This prevents potential attribute errors when memory types are not initialized.
2025-08-26 09:09:46 -07:00
Greyson LaLonde
4b4a119a9f refactor: simplify rag client initialization (#3401)
* Simplified Qdrant and ChromaDB client initialization
* Refactored factory structure and updated tests accordingly
2025-08-26 08:54:51 -04:00
Greyson LaLonde
869bb115c8 Qdrant RAG Provider Support (#3400)
* Added Qdrant provider support with factory, config, and protocols
* Improved default embeddings and type definitions
* Fixed ChromaDB factory embedding assignment
2025-08-26 08:44:02 -04:00
Greyson LaLonde
7ac482c7c9 feat: rag configuration with optional dependency support (#3394)
### RAG Config System

* Added ChromaDB client creation via config with sensible defaults
* Introduced optional imports and shared RAG config utilities/schema
* Enabled embedding function support with ChromaDB provider integration
* Refactored configs for immutability and stronger type safety
* Removed unused code and expanded test coverage
2025-08-26 00:00:22 -04:00
Greyson LaLonde
2e4bd3f49d feat: qdrant generic client (#3377)
### Qdrant Client

* Add core client with collection, search, and document APIs (sync + async)
* Refactor utilities, types, and vector params (default 384-dim)
* Improve error handling with `ClientMethodMismatchError`
* Add score normalization, async embeddings, and optional `qdrant-client` dep
* Expand tests and type safety throughout
2025-08-25 16:02:25 -04:00
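A minimal sketch of the Qdrant client setup described in #3377, assuming the optional `qdrant-client` dependency and the 384-dimension default from the commit message (the collection name is illustrative):

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

# In-memory instance for illustration; a real deployment would pass a URL.
client = QdrantClient(":memory:")

# 384 dimensions matches the default vector params mentioned above, e.g. the
# output size of sentence-transformers/all-MiniLM-L6-v2 embeddings.
client.create_collection(
    collection_name="crewai_knowledge",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)
```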
Greyson LaLonde
c02997d956 Add import utilities for optional dependencies (#3389)
2025-08-24 22:57:44 -04:00
Heitor Carvalho
f96b779df5 feat: reset tokens on crewai config reset (#3365)
2025-08-22 16:16:42 -04:00
Greyson LaLonde
842bed4e9c feat: chromadb generic client (#3374)
Add ChromaDB client implementation with async support

- Implement core collection operations (create, get_or_create, delete)
- Add search functionality with cosine similarity scoring
- Include both sync and async method variants
- Add type safety with NamedTuples and TypeGuards
- Extract utility functions to separate modules
- Default to cosine distance metric for text similarity
- Add comprehensive test coverage

TODO:
- l2, ip score calculations are not settled on
2025-08-21 18:18:46 -04:00
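To illustrate the cosine-distance default mentioned in #3374, a small snippet against the public `chromadb` API; converting distance to a similarity-style score with `1 - distance` is an assumption about how the client normalizes scores:

```python
import chromadb

client = chromadb.EphemeralClient()

# Cosine distance as the text-similarity metric, per the commit above.
collection = client.get_or_create_collection(
    name="docs",
    metadata={"hnsw:space": "cosine"},
)

collection.add(ids=["1"], documents=["CrewAI orchestrates role-playing agents."])
result = collection.query(query_texts=["What does CrewAI do?"], n_results=1)

# Chroma returns cosine *distance*; a similarity-style score is 1 - distance.
score = 1.0 - result["distances"][0][0]
```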
Lucas Gomide
1217935b31 feat: add docs about Automation triggers (#3375)
2025-08-20 22:02:47 -04:00
Greyson LaLonde
641c156c17 fix: address flaky tests (#3363)
fix: resolve flaky tests and race conditions in test suite

- Fix telemetry/event tests by patching class methods instead of instances
- Use unique temp files/directories to prevent CI race conditions
- Reset singleton state between tests
- Mock embedchain.Client.setup() to prevent JSON corruption
- Rename test files to test_*.py convention
- Move agent tests to tests/agents directory
- Fix repeated tool usage detection
- Remove database-dependent tools causing initialization errors
2025-08-20 13:34:09 -04:00
Tony Kipkemboi
7fdf9f9290 docs: fix API Reference OpenAPI sources and redirects (#3368)
* docs: fix API Reference OpenAPI sources and redirects; clarify training data usage; add Mermaid diagram; correct CLI usage and notes

* docs(mintlify): use explicit openapi {source, directory} with absolute paths to fix branch deployment routing

* docs(mintlify): add explicit endpoint MDX pages and include in nav; keep OpenAPI auto-gen as fallback

* docs(mintlify): remove OpenAPI Endpoints groups; add localized MDX endpoint pages for pt-BR and ko
2025-08-20 11:55:35 -04:00
Greyson LaLonde
c0d2bf4c12 fix: flow listener resumability for HITL and cyclic flows (#3322)
* fix: flow listener resumability for HITL and cyclic flows

- Add resumption context flag to distinguish HITL resumption from cyclic execution
- Skip method re-execution only during HITL resumption, not for cyclic flows
- Ensure cyclic flows like test_cyclic_flow continue to work correctly

* fix: prevent duplicate execution of conditional start methods in flows

* fix: resolve type error in flow.py line 1040 assignment
2025-08-20 10:06:18 -04:00
Greyson LaLonde
ed187b495b feat: centralize embedding types and create base client (#3246)
feat: add RAG system foundation with generic vector store support

- Add BaseClient protocol for vector stores
- Move BaseRAGStorage to rag/core
- Centralize embedding types in embeddings/types.py
- Remove unused storage models
2025-08-20 09:35:27 -04:00
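The `BaseClient` protocol added in #3246 is not shown in the log; a hypothetical sketch of what such a vector-store protocol can look like (method names and signatures are assumptions):

```python
from typing import Any, Protocol


class BaseClient(Protocol):
    """Illustrative protocol for a generic vector-store client."""

    def get_or_create_collection(self, collection_name: str) -> Any: ...

    def add_documents(
        self, collection_name: str, documents: list[dict[str, Any]]
    ) -> None: ...

    def search(
        self,
        collection_name: str,
        query: str,
        limit: int = 5,
        score_threshold: float = 0.6,
    ) -> list[dict[str, Any]]: ...
```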
Wajeeh ul Hassan
2773996b49 fix: revert pin openai<1.100.0 to openai>=1.13.3 (#3364) 2025-08-20 09:16:26 -04:00
Damian Silbergleith
95923b78c6 feat: display task name in verbose output (#3308)
* feat: display task name in verbose output

- Modified event_listener.py to pass task names to the formatter
- Updated console_formatter.py to display task names when available
- Maintains backward compatibility by showing UUID for tasks without names
- Makes verbose output more informative and readable

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove unnecessary f-string prefixes in console formatter

Remove extraneous f prefixes from string literals without placeholders
in console_formatter.py to resolve ruff F541 linting errors.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-20 08:43:05 -04:00
Lucas Gomide
7065ad4336 feat: adding additional parameter to Flow's start methods (#3356)
* feat: adding additional parameter to Flow's start methods

When the `crewai_trigger_payload` parameter exists in the Flow inputs, it is passed to the Flow's start methods as a parameter (see the sketch after this entry)

* fix: support crewai_trigger_payload in async Flow start methods
2025-08-19 17:32:19 -04:00
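The sketch referenced above, assuming the payload is declared as a parameter on the start method as the commit describes; the flow class and payload contents are illustrative:

```python
from crewai.flow.flow import Flow, start


class TriggeredFlow(Flow):
    # When the kickoff inputs contain `crewai_trigger_payload`, it is handed to
    # start methods that declare it as a parameter (per the commit above).
    @start()
    def begin(self, crewai_trigger_payload: dict | None = None) -> str:
        if crewai_trigger_payload:
            return f"triggered with {crewai_trigger_payload}"
        return "started without a trigger payload"


TriggeredFlow().kickoff(inputs={"crewai_trigger_payload": {"event": "ticket.created"}})
```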
Lorenze Jay
d6254918fd Lorenze/max retry defaults tools (#3362)
* feat: enhance BaseTool and CrewStructuredTool with usage tracking

This commit introduces a mechanism to track the usage count of tools within the CrewAI framework. The `BaseTool` class now includes a `_increment_usage_count` method that updates the current usage count, which is also reflected in the associated `CrewStructuredTool`. Additionally, a new test has been added to ensure that the maximum usage count is respected when invoking tools, enhancing the overall reliability and functionality of the tool system.

* feat: add max usage count feature to tools documentation

This commit introduces a new section in the tools overview documentation that explains the maximum usage count feature for tools within the CrewAI framework. Users can now set a limit on how many times a tool can be used, enhancing control over tool usage. An example of implementing the `FileReadTool` with a maximum usage count is also provided, improving the clarity and usability of the documentation.

* undo field string
2025-08-19 10:44:55 -07:00
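The docs change in #3362 describes capping how many times a tool can run, using `FileReadTool` as the example. A hedged sketch; the field name `max_usage_count` is inferred from the commit message and should be checked against the tools documentation:

```python
from crewai_tools import FileReadTool

# Assumed field name based on the commit message; verify against the docs.
read_tool = FileReadTool(max_usage_count=3)

# After the third invocation, the framework is expected to stop further use of
# the tool, surfacing a usage-limit error as described in #3382.
```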
Heitor Carvalho
95e3d6db7a fix: add 'tool' section migration when running crewai update (#3341)
2025-08-19 08:11:30 -04:00
Lorenze Jay
d7f8002baa chore: update crewAI version to 0.165.1 and tools dependency in templates (#3359)
2025-08-19 00:06:31 -03:00
Lorenze Jay
d743e12a06 refactor: streamline tracing condition checks and clean up deprecated warnings (#3358)
This commit simplifies the conditions for enabling tracing in both the Crew and Flow classes by removing the redundant call to `on_first_execution_tracing_confirmation()`. Additionally, it removes deprecated warning filters related to Pydantic in the KnowledgeStorage and RAGStorage classes, improving code clarity and maintainability.
2025-08-18 19:56:00 -07:00
Lorenze Jay
6068fe941f chore: update crewAI version to 0.165.0 and tools dependency to 0.62.1 (#3357) 2025-08-18 18:25:59 -07:00
Lucas Gomide
2a0cefc98b feat: pin openai<1.100.0 due to ResponseTextConfigParam import issue (#3355)
2025-08-18 18:31:18 -04:00
Lucas Gomide
a4f65e4870 chore: renaming inject_trigger_input to allow_crewai_trigger_context (#3353)
* chore: renaming inject_trigger_input to allow_crewai_trigger_context

* test: add missing cassettes
2025-08-18 17:57:21 -04:00
Lorenze Jay
a1b3edd79c Refactor tracing logic to consolidate conditions for enabling tracing… (#3347)
* Refactor tracing logic to consolidate conditions for enabling tracing in Crew class and update TraceBatchManager to handle ephemeral batches more effectively. Added tests for trace listener handling of both ephemeral and authenticated user batches.

* drop print

* linted

* refactor: streamline ephemeral handling in TraceBatchManager

This commit removes the ephemeral parameter from the _send_events_to_backend and _finalize_backend_batch methods, replacing it with internal logic that checks the current batch's ephemeral status. This change simplifies the method signatures and enhances the clarity of the code by directly using the is_current_batch_ephemeral attribute for conditional logic.
2025-08-18 14:16:51 -07:00
Lucas Gomide
80b3d9689a Auto inject crewai_trigger_payload (#3351)
* feat: add props to inject trigger payload

* feat: auto-inject trigger_input in the first crew task
2025-08-18 16:36:08 -04:00
Vini Brasil
ec03a53121 Add example to Tool Repository docs (#3352) 2025-08-18 13:19:35 -07:00
Vini Brasil
2fdf3f3a6a Move Chroma lockfile to db/ (#3342)
This commit fixes an issue where using Chroma would scatter lockfiles across
the root path of the crew.
2025-08-18 11:00:50 -07:00
Greyson LaLonde
1d3d7ebf5e fix: convert XMLSearchTool config values to strings for configparser compatibility (#3344) 2025-08-18 13:23:58 -04:00
Gabe Milani
2c2196f415 fix: flaky test with PytestUnraisableExceptionWarning (#3346) 2025-08-18 14:07:51 -03:00
Gabe Milani
c9f30b175c chore: ignore deprecation warning from chromadb (#3328)
* chore: ignore deprecation warning from chromadb

* adding TODO: in the comment
2025-08-18 13:24:11 -03:00
Greyson LaLonde
a17b93a7f8 Mock telemetry in pytest tests (#3340)
* Add telemetry mocking for pytest tests

- Mock telemetry by default for all tests except telemetry-specific tests
- Add @pytest.mark.telemetry marker for real telemetry tests
- Reduce test overhead and improve isolation

* Fix telemetry test isolation

- Properly isolate telemetry tests from mocking environment
- Preserve API keys and other necessary environment variables
- Ensure telemetry tests can run with real telemetry instances
2025-08-18 11:55:30 -04:00
namho kim
0d3e462791 fix: Revised Korean translation and sentence structure improvement (#3337)
2025-08-18 10:46:13 -04:00
Greyson LaLonde
947c9552f0 chore: remove AgentOps integration (#3334)
2025-08-17 23:07:41 -04:00
Lorenze Jay
04a03d332f Lorenze/ephemeral tracing (#3323)
* for ephemeral traces

* default false

* simpler and consolidated

* keep raising exception but catch it and continue if it's for trace batches

* cleanup

* more cleanup

* not using logger

* refactor: rename TEMP_TRACING_RESOURCE to EPHEMERAL_TRACING_RESOURCE for clarity and consistency in PlusAPI; update related method calls accordingly

* default true

* drop print
2025-08-15 13:37:16 -07:00
Vidit Ostwal
992e093610 Update Docs: Added Mem0 integration with Short Term and Entity Memory (#3293)
* Added Mem0 integration with Short Term and Entity Memory

* Flaky test case of telemetry
2025-08-14 22:50:24 -04:00
Lucas Gomide
07f8e73958 feat: include exchanged agent messages into ExternalMemory metadata (#3290)
2025-08-14 09:41:09 -04:00
Lorenze Jay
66c2fa1623 chore: update crewAI and tools dependencies to version 0.159.0 and 0.62.0 respectively (#3318)
- Bump crewAI version from 0.157.0 to 0.159.0
- Update tools dependency from 0.60.0 to 0.62.0 in pyproject.toml and uv.lock
- Ensure compatibility with the latest features and improvements in the tools package
2025-08-13 16:52:58 -07:00
Greyson LaLonde
7a52cc9667 fix: comment out listener resumability check (#3316) 2025-08-13 19:04:16 -04:00
Greyson LaLonde
8b686fb0c6 feat: add flow resumability support (#3312)
- Add reload() method to restore flow state from execution data
- Add FlowExecutionData type definitions
- Track completed methods for proper flow resumption
- Support OpenTelemetry baggage context for flow inputs
2025-08-13 13:45:08 -04:00
Tony Kipkemboi
dc6771ae95 docs: Fix API Reference, add RBAC, revamp Examples/Cookbooks (EN/PT-BR/KO) (#3314)
* docs: add RBAC docs and other chores

* docs: fix API Reference rendering; per-locale OpenAPI; add Enterprise RBAC; restructure Examples (EN/PT-BR/KO) + Cookbooks; update nav and links

* docs(i18n): add RBAC docs for pt-BR and ko; update Enterprise Features nav
2025-08-13 13:13:24 -04:00
Tony Kipkemboi
e9b1e5a8f6 docs: add RBAC docs and other chores (#3313) 2025-08-13 12:08:42 -04:00
rishiraj
57c787f919 Docs update/add truefoundry (#3245)
* Add TrueFoundry observability integration documentation

- Added comprehensive TrueFoundry integration guide for CrewAI
- Included AI Gateway overview with key features
- Added technical architecture details for Traceloop SDK integration
- Provided step-by-step setup instructions
- Added advanced configuration examples
- Included tracing dashboard screenshot
- Added support contact and documentation links

* Update TrueFoundry integration documentation

Major improvements and fixes:
- Fixed integration pattern to follow LLM provider approach (base_url + api_key)
- Added technical architecture details showing LLM provider and observability flows
- Updated model names to use correct TrueFoundry format (openai-main/gpt-4o, anthropic/claude-3.5-sonnet)
- Added unified-code-tfy.png image for visual code example
- Reorganized document structure with better section placement
- Moved Additional Tracing section to better position
- Added link to TrueFoundry quick start guide
- Added comprehensive observability details and dashboard explanation
- Removed complex tracing setup in favor of simpler LLM provider integration

* Finalize TrueFoundry integration documentation

Key improvements:
- Updated base_url references to use placeholder from code snippet
- Added gateway-metrics.png image for observability dashboard
- Formatted metrics description with proper bullet points and bold headers
- Added link to TrueFoundry tracing overview documentation
- Improved readability and consistency throughout the documentation
- Updated Portuguese translation (pt-BR) version

* added truefoundry.mdx

* updated tfy mdx

* Update docs/en/observability/truefoundry.mdx

Co-authored-by: Nikhil Popli <97437109+nikp1172@users.noreply.github.com>

* Update truefoundry.mdx

* Update truefoundry.mdx

-minor updates

* Update truefoundry.mdx

* updated truefoundry.mdx PT-BR

---------

Co-authored-by: Nikhil Popli <97437109+nikp1172@users.noreply.github.com>
2025-08-13 10:08:15 -04:00
Daniel Barreto
a0eadf783b Add Korean translations (#3307)
2025-08-12 15:58:12 -07:00
Lorenze Jay
251ae00b8b Lorenze/tracing-improvements-cleanup (#3291)
* feat: add tracing support to Crew and Flow classes

- Introduced a new `tracing` optional field in both the `Crew` and `Flow` classes to enable tracing functionality.
- Updated the initialization logic to conditionally set up the `TraceCollectionListener` based on the `tracing` flag or the `CREWAI_TRACING_ENABLED` environment variable.
- Removed the obsolete `interfaces.py` file related to tracing.
- Enhanced the `TraceCollectionListener` to accept a `tracing` parameter and adjusted its internal logic accordingly.
- Added tests to verify the correct setup of the trace listener when tracing is enabled.

This change improves the observability of the crew execution process and allows for better debugging and performance monitoring.

* fix flow name

* refactor: replace _send_batch method with finalize_batch calls in TraceCollectionListener

- Updated the TraceCollectionListener to use the batch_manager's finalize_batch method instead of the deprecated _send_batch method for handling trace events.
- This change improves the clarity of the code and ensures that batch finalization is consistently managed through the batch manager.
- Removed the obsolete _send_batch method to streamline the listener's functionality.

* removed comments

* refactor: enhance tracing functionality by introducing utility for tracing checks

- Added a new utility function `is_tracing_enabled` to streamline the logic for checking if tracing is enabled based on the `CREWAI_TRACING_ENABLED` environment variable.
- Updated the `Crew` and `Flow` classes to utilize this utility for improved readability and maintainability.
- Refactored the `TraceCollectionListener` to simplify tracing checks and ensure consistent behavior across components.
- Introduced a new module for tracing utilities to encapsulate related functions, enhancing code organization.

* refactor: remove unused imports from crew and flow modules

- Removed unnecessary `os` imports from both `crew.py` and `flow.py` files to enhance code cleanliness and maintainability.
2025-08-08 13:42:25 -07:00
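The `is_tracing_enabled` utility referenced in #3291 boils down to an environment-variable check. A sketch, assuming the accepted truthy spellings below (the real utility may parse the variable differently):

```python
import os


def is_tracing_enabled() -> bool:
    """Return True when CREWAI_TRACING_ENABLED is set to a truthy value."""
    return os.getenv("CREWAI_TRACING_ENABLED", "false").strip().lower() in {"true", "1", "yes"}
```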
Tony Kipkemboi
a221295394 WIP: docs updates (#3296) 2025-08-08 13:05:43 -07:00
Lucas Gomide
a92211f0ba fix: use correct endpoint to get auth/parameters on enterprise configuration (#3295)
2025-08-08 10:56:44 -04:00
Lucas Gomide
f9481cf10d feat: add enterprise configure command (#3289)
* feat: add enterprise configure command

* refactor: renaming EnterpriseCommand to EnterpriseConfigureCommand
2025-08-08 08:50:01 -04:00
633WHU
915857541e feat: improve LLM message formatting performance (#3251)
* optimize: improve LLM message formatting performance

Replace inefficient copy+append operations with list concatenation
in _format_messages_for_provider method. This optimization reduces
memory allocation and improves performance for large conversation
histories.

**Changes:**
- Mistral models: Use list concatenation instead of copy() + append()
- Ollama models: Use list concatenation instead of copy() + append()
- Add comprehensive performance tests to verify improvements

**Performance impact:**
- Reduces memory allocations for large message lists
- Improves processing speed by 2-25% depending on message list size
- Maintains exact same functionality with better efficiency

cliu_whu@yeah.net

* remove useless comment

---------

Co-authored-by: chiliu <chiliu@paypal.com>
2025-08-07 09:07:47 -04:00
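The optimization in #3251 swaps a copy-then-append for a single list concatenation when building provider-specific message lists. A before/after sketch with an illustrative extra message (the actual message appended for Mistral/Ollama is not shown in the log):

```python
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the report."},
]

extra = {"role": "assistant", "content": "I'll continue from here."}  # illustrative

# Before: copy the whole history, then mutate the copy.
formatted = messages.copy()
formatted.append(extra)

# After (per the commit above): one concatenation, no intermediate mutation.
formatted = messages + [extra]
```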
Lorenze Jay
7c162411b7 chore: update crewai to 0.157.0 and crewai-tools dependency to version 0.60.0 (#3281)
* chore: update crewai-tools dependency to version 0.60.0

- Updated the `pyproject.toml` and `uv.lock` files to reflect the new version of `crewai-tools`.
- This change ensures compatibility with the latest features and improvements in the tools package.

* chore: bump CrewAI version to 0.157.0

- Updated the version in `__init__.py` to reflect the new release.
- Adjusted dependency versions in `pyproject.toml` files for crew, flow, and tool templates to ensure compatibility with the latest features and improvements in CrewAI.
- This change maintains consistency across the project and prepares for upcoming enhancements.
2025-08-06 14:47:50 -07:00
Lorenze Jay
8f4a6cc61c Lorenze/tracing v1 (#3279)
* initial setup

* feat: enhance CrewKickoffCompletedEvent to include total token usage

- Added total_tokens attribute to CrewKickoffCompletedEvent for better tracking of token usage during crew execution.
- Updated Crew class to emit total token usage upon kickoff completion.
- Removed obsolete context handler and execution context tracker files to streamline event handling.

* cleanup

* remove print statements for loggers

* feat: add CrewAI base URL and improve logging in tracing

- Introduced `CREWAI_BASE_URL` constant for easy access to the CrewAI application URL.
- Replaced print statements with logging in the `TraceSender` class for better error tracking.
- Enhanced the `TraceBatchManager` to provide default values for flow names and removed unnecessary comments.
- Implemented singleton pattern in `TraceCollectionListener` to ensure a single instance is used.
- Added a new test case to verify that the trace listener correctly collects events during crew execution.

* clear

* fix: update datetime serialization in tracing interfaces

- Removed the 'Z' suffix from datetime serialization in TraceSender and TraceEvent to ensure consistent ISO format.
- Added new test cases to validate the functionality of the TraceBatchManager and event collection during crew execution.
- Introduced fixtures to clear event bus listeners before each test to maintain isolation.

* test: enhance tracing tests with mock authentication token

- Added a mock authentication token to the tracing tests to ensure proper setup and event collection.
- Updated test methods to include the mock token, improving isolation and reliability of tests related to the TraceListener and BatchManager.
- Ensured that the tests validate the correct behavior of event collection during crew execution.

* test: refactor tracing tests to improve mock usage

- Moved the mock authentication token patching inside the test class to enhance readability and maintainability.
- Updated test methods to remove unnecessary mock parameters, streamlining the test signatures.
- Ensured that the tests continue to validate the correct behavior of event collection during crew execution while improving isolation.

* test: refactor tracing tests for improved mock usage and consistency

- Moved mock authentication token patching into individual test methods for better clarity and maintainability.
- Corrected the backstory string in the `Agent` instantiation to fix a typo.
- Ensured that all tests validate the correct behavior of event collection during crew execution while enhancing isolation and readability.

* test: add new tracing test for disabled trace listener

- Introduced a new test case to verify that the trace listener does not make HTTP calls when tracing is disabled via environment variables.
- Enhanced existing tests by mocking PlusAPI HTTP calls to avoid authentication and network requests, improving test isolation and reliability.
- Updated the test setup to ensure proper initialization of the trace listener and its components during crew execution.

* refactor: update LLM class to utilize new completion function and improve cost calculation

- Replaced direct calls to `litellm.completion` with a new import for better clarity and maintainability.
- Introduced a new optional attribute `completion_cost` in the LLM class to track the cost of completions.
- Updated the handling of completion responses to ensure accurate cost calculations and improved error handling.
- Removed outdated test cassettes for gemini models to streamline test suite and avoid redundancy.
- Enhanced existing tests to reflect changes in the LLM class and ensure proper functionality.

* test: enhance tracing tests with additional request and response scenarios

- Added new test cases to validate the behavior of the trace listener and batch manager when handling 404 responses from the tracing API.
- Updated existing test cassettes to include detailed request and response structures, ensuring comprehensive coverage of edge cases.
- Improved mock setup to avoid unnecessary network calls and enhance test reliability.
- Ensured that the tests validate the correct behavior of event collection during crew execution, particularly in scenarios where the tracing service is unavailable.

* feat: enable conditional tracing based on environment variable

- Added support for enabling or disabling the trace listener based on the `CREWAI_TRACING_ENABLED` environment variable.
- Updated the `Crew` class to conditionally set up the trace listener only when tracing is enabled, improving performance and resource management.
- Refactored test cases to ensure proper cleanup of event bus listeners before and after each test, enhancing test reliability and isolation.
- Improved mock setup in tracing tests to validate the behavior of the trace listener when tracing is disabled.

* fix: downgrade litellm version from 1.74.9 to 1.74.3

- Updated the `pyproject.toml` and `uv.lock` files to reflect the change in the `litellm` dependency version.
- This downgrade addresses compatibility issues and ensures stability in the project environment.

* refactor: improve tracing test setup by moving mock authentication token patching

- Removed the module-level patch for the authentication token and implemented a fixture to mock the token for all tests in the class, enhancing test isolation and readability.
- Updated the event bus clearing logic to ensure original handlers are restored after tests, improving reliability of the test environment.
- This refactor streamlines the test setup and ensures consistent behavior across tracing tests.

* test: enhance tracing test setup with comprehensive mock authentication

- Expanded the mock authentication token patching to cover all instances where `get_auth_token` is used across different modules, ensuring consistent behavior in tests.
- Introduced a new fixture to reset tracing singleton instances between tests, improving test isolation and reliability.
- This update enhances the overall robustness of the tracing tests by ensuring that all necessary components are properly mocked and reset, leading to more reliable test outcomes.

* just drop the test for now

* refactor: comment out completion-related code in LLM and LLM event classes

- Commented out the `completion` and `completion_cost` imports and their usage in the `LLM` class to prevent potential issues during execution.
- Updated the `LLMCallCompletedEvent` class to comment out the `response_cost` attribute, ensuring consistency with the changes in the LLM class.
- This refactor aims to streamline the code and prepare for future updates without affecting current functionality.

* refactor: update LLM response handling in LiteAgent

- Commented out the `response_cost` attribute in the LLM response handling to align with recent refactoring in the LLM class.
- This change aims to maintain consistency in the codebase and prepare for future updates without affecting current functionality.

* refactor: remove commented-out response cost attributes in LLM and LiteAgent

- Commented out the `response_cost` attribute in both the `LiteAgent` and `LLM` classes to maintain consistency with recent refactoring efforts.
- This change aligns with previous updates aimed at streamlining the codebase and preparing for future enhancements without impacting current functionality.

* bring back litellm upgrade version
2025-08-06 14:05:14 -07:00
633WHU
7dc86dc79a perf: optimize string operations with partition() over split()[0] (#3255)
Replace inefficient split()[0] operations with partition()[0] for better performance
when extracting the first part of a string before a delimiter.

Key improvements:
• Agent role processing: 29% faster with partition()
• Model provider extraction: 16% faster
• Console formatting: Improved responsiveness
• Better readability and explicit intent

Changes:
- agent_utils.py: Use partition('\n')[0] for agent role extraction
- console_formatter.py: Optimize agent role processing in logging
- llm_utils.py: Improve model provider parsing
- llm.py: Optimize model name parsing

Performance impact: 15-30% improvement in string processing operations
that are frequently used in agent execution and console output.
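
A minimal illustration of the change (the example string is made up; the real call sites are the files listed above):

```python
# partition() stops at the first delimiter, while split()[0] builds a list of
# every segment first and then discards all but the head.
role = "Senior Researcher\nBackstory: optimizes RAG pipelines"

first_line_split = role.split("\n")[0]          # allocates all segments
first_line_partition = role.partition("\n")[0]  # returns (head, sep, tail); head only

assert first_line_split == first_line_partition == "Senior Researcher"
```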

cliu_whu@yeah.net

Co-authored-by: chiliu <chiliu@paypal.com>
2025-08-06 15:04:53 -04:00
Vidit Ostwal
7ce20cfcc6 Dropping User Memory (#3225)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* Dropping User Memory

* Dropping checks for user memory

* changed memory.mdx documentation removed user memory.

* Flaky Test Case Maybe

* Drop memory_config

* Fixed test cases

* Fixed some test cases

* Changed docs

* Changed BR docs

* Docs fixing

* Fix minor doc

* Fix minor doc

* Fix minor doc

* Added fallback mechanism in Mem0
2025-08-06 13:08:10 -04:00
Mrunmay Shelar
1d9523c98f docs: add LangDB integration documentation (#3228)
docs: update LangDB links in observability documentation

- Removed references to the AI Gateway features in both English and Portuguese documentation.
- Updated the Model Catalog links to point to the new app.langdb.ai domain.
- Ensured consistency across both language versions of the documentation.
2025-08-06 11:13:58 -04:00
Lucas Gomide
9f1d7d1aa9 fix: allow persist Flow state with BaseModel entries (#3276)
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-08-06 09:04:59 -04:00
Lucas Gomide
79b375f6fa build: bump LiteLLM to 1.74.9 (#3278)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-08-05 17:10:23 -04:00
Lucas Gomide
75752479c2 docs: add CLI config docs (#3275) 2025-08-05 15:24:34 -04:00
Lucas Gomide
477bc1f09e feat: add default value for crew.name (#3252)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-08-05 12:25:50 -04:00
Lucas Gomide
66567bdc2f Support Device authorization with Okta (#3271)
* feat: support oauth2 config for authentication

* refactor: improve OAuth2 settings management

The CLI now supports seamless integration with other authentication providers, since the client_id, issuer, and domain are now managed by the user

* feat: support okta Device Authorization flow

* chore: resolve linter issues

* test: fix tests

* test: adding tests for auth providers

* test: fix broken test

* refactor: adding WorkOS parameters as default auth settings

* chore: improve oauth2 attributes description

* refactor: simplify WorkOS getting values

* fix: ensure Auth0 parameters are set when overriding the default auth provider

* chore: remove TODO Auth0 no longer provides default values

---------

Co-authored-by: Heitor Carvalho <heitor.scz@gmail.com>
2025-08-05 12:16:21 -04:00
Lucas Gomide
0b31bbe957 fix: enable word wrapping for long input tool (#3274) 2025-08-05 11:05:38 -04:00
Lucas Gomide
246cf588cd docs: updating MCP docs with connect_timeout attribute (#3273) 2025-08-05 10:27:18 -04:00
Heitor Carvalho
88ed91561f feat: add crewai config command group and tests (#3206)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-07-31 10:38:51 -04:00
Lorenze Jay
9a347ad458 chore: update crewai-tools dependency to version 0.59.0 and bump CrewAI version to 0.152.0 (#3244)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Updated `crewai-tools` dependency from `0.58.0` to `0.59.0` in `pyproject.toml` and `uv.lock`.
- Bumped the version of the CrewAI library from `0.150.0` to `0.152.0` in `__init__.py`.
- Updated dependency versions in CLI templates for crew, flow, and tool projects to reflect the new CrewAI version.
2025-07-30 14:38:24 -07:00
Lucas Gomide
34c3075fdb fix: support to add memories to Mem0 with agent_id (#3217)
* fix: support to add memories to Mem0 with agent_id

* feat: removing memory_type checkings from Mem0Storage

* feat: ensure agent_id is always present while saving memory into Mem0

* fix: use OR operator when querying Mem0 memories with both user_id and agent_id
2025-07-30 11:56:46 -04:00
Vidit Ostwal
498e8dc6e8 Changed the import error to show missing module files (#2423)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* Fix issue #2421: Handle missing google.genai dependency gracefully

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix import sorting in test file

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix import sorting with ruff

Co-Authored-By: Joe Moura <joao@crewai.com>

* Removed unwanted test case

* Added dynamic catching for all the embedder function

* Dropped the comment

* Added test case

* Fixed Linting Issue

* Flaky test case in 3.13

* Test Case fixed

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-07-30 10:01:17 -04:00
Lorenze Jay
cb522cf500 Enhance Flow class to support custom flow names (#3234)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Added an optional `name` attribute to the Flow class for better identification.
- Updated event emissions to utilize the new `name` attribute, ensuring accurate flow naming in events.
- Added tests to verify the correct flow name is set and emitted during flow execution.
2025-07-29 15:41:30 -07:00
Vini Brasil
017acc74f5 Add timezone to event timestamps (#3231)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Events were lacking timezone information, making them naive datetimes,
which can be ambiguous.
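
For illustration, the difference between a naive and a timezone-aware timestamp:

```python
from datetime import datetime, timezone

naive = datetime.now()              # no tzinfo attached, ambiguous across machines
aware = datetime.now(timezone.utc)  # timezone-aware, as the commit describes

print(naive.tzinfo)  # None
print(aware.tzinfo)  # UTC
```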
2025-07-28 17:09:06 -03:00
Greyson LaLonde
fab86d197a Refactor: Move RAG components to dedicated top-level module (#3222)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Move RAG components to top-level module

- Create src/crewai/rag directory structure
- Move embeddings configurator from utilities to rag module
- Update imports across codebase and documentation
- Remove deprecated embedding files

* Remove empty knowledge/embedder directory
2025-07-25 10:55:31 -04:00
Vidit Ostwal
864e9bfb76 Changed the default value in Mem0 config (#3216)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Changed the default value in Mem0 config

* Added regression test for this

* Fixed Linting issues
2025-07-24 13:20:18 -04:00
Lucas Gomide
d3b45d197c fix: remove crewai signup references, replaced by crewai login (#3213) 2025-07-24 07:47:35 -04:00
Manuka Yasas
579153b070 docs: fix incorrect model naming in Google Vertex AI documentation (#3189)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Change model format from "gemini/gemini-1.5-pro-latest" to "gemini-1.5-pro-latest"
  in Vertex AI section examples
- Update both English and Portuguese documentation files
- Fixes incorrect provider prefix usage for Vertex AI models
- Ensures consistency with Vertex AI provider requirements

Files changed:
- docs/en/concepts/llms.mdx (line 272)
- docs/pt-BR/concepts/llms.mdx (line 270)
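
An illustrative snippet mirroring the corrected naming in the docs fix above, assuming the standard `crewai.LLM` wrapper and Vertex AI credentials configured separately:

```python
from crewai import LLM

# Per the doc fix above, the Vertex AI examples use the bare model id,
# without the "gemini/" provider prefix.
vertex_llm = LLM(model="gemini-1.5-pro-latest")
```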

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-07-23 16:58:57 -04:00
Lorenze Jay
b1fdcdfa6e chore: update dependencies and version in project files (#3212)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
- Updated `crewai-tools` dependency from `0.55.0` to `0.58.0` in `pyproject.toml` and `uv.lock`.
- Added new packages `anthropic`, `browserbase`, `playwright`, `pyee`, and `stagehand` with their respective versions in `uv.lock`.
- Bumped the version of the CrewAI library from `0.148.0` to `0.150.0` in `__init__.py`.
- Updated dependency versions in CLI templates for crew, flow, and tool projects to reflect the new CrewAI version.
2025-07-23 11:03:50 -07:00
Mike Plachta
18d76a270c docs: add SerperScrapeWebsiteTool documentation and reorganize SerperDevTool setup instructions (#3211) 2025-07-23 12:12:59 -04:00
Vidit Ostwal
30541239ad Changed Mem0 Storage v1.1 -> v2 (#2893)
* Changed v1.1 -> v2

* Fixed Test Cases:

* Fixed linting issues

* Changed docs

* Refactored the storage

* Fixed test cases

* Fixing run-time checks

* Fixed Test Case

* Updated docs and added test case for custom categories

* Add the TODO back

* Minor Changes

* Added output_format in search

* Minor changes

* Added output_format and version in both search and save

* Small change

* Minor bugs

* Fixed test cases

* Changed docs

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-07-23 08:30:52 -04:00
Tony Kipkemboi
9a65573955 Feature/update docs (#3205)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* docs: add create_directory parameter

* docs: remove string guardrails to focus on function guardrails

* docs: remove get help from docs.json

* docs: update pt-BR docs.json changes
2025-07-22 13:55:27 -04:00
Lucas Gomide
27623a1d01 feat: remove duplicate print on LLM call error (#3183)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
By improving litellm handler error / outputs

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-07-21 22:08:07 -04:00
João Moura
2593242234 Adding Support to adhoc tool calling using the internal LLM class (#3195)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* Adding Support to adhoc tool calling using the internal LLM class

* fix type
2025-07-21 19:36:48 -03:00
Greyson LaLonde
2ab6c31544 chore: add deprecation notices to UserMemory (#3201)
- Mark UserMemory and UserMemoryItem for removal in v0.156.0 or 2025-08-04
- Update all references with deprecation warnings
- Users should migrate to ExternalMemory
2025-07-21 15:26:34 -04:00
Lucas Gomide
3c55c8a22a fix: append user message when last message is from assistant when using Ollama models (#3200)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Ollama doesn't support the last message being from 'assistant'.
We can drop this commit after merging https://github.com/BerriAI/litellm/pull/10917
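
A sketch of the workaround described above (the function name is illustrative, not the actual one in the codebase):

```python
# If the conversation ends with an assistant turn, append a short user turn so
# Ollama-backed models never receive 'assistant' as the final message.
def ensure_last_message_is_user(messages: list[dict]) -> list[dict]:
    if messages and messages[-1].get("role") == "assistant":
        return messages + [{"role": "user", "content": "Please continue."}]
    return messages


history = [
    {"role": "user", "content": "Summarize the report."},
    {"role": "assistant", "content": "Here is the summary..."},
]
print(ensure_last_message_is_user(history)[-1]["role"])  # user
```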
2025-07-21 13:30:40 -04:00
Ranuga Disansa
424433ff58 docs: Add Tavily Search & Extractor tools to Search-Research suite (#3146)
* docs: Add Tavily Search and Extractor tools documentation

* docs: Add Tavily Search and Extractor tools to the documentation

---------

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-07-21 12:01:29 -04:00
Lucas Gomide
2fd99503ed build: upgrade LiteLLM to 1.74.3 (#3199) 2025-07-21 09:58:47 -04:00
Vidit Ostwal
942014962e fixed save method, changed the test cases (#3187)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* fixed save method, changed the test cases

* Linting fixed
2025-07-18 15:10:26 -04:00
Lucas Gomide
2ab79a7dd5 feat: drop unsupported stop parameter for LLM models automatically (#3184) 2025-07-18 13:54:28 -04:00
Lucas Gomide
27c449c9c4 test: remove workaround related to SQLite without FTS5 (#3179)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
For more details check out [here](actions/runner-images#12576)
2025-07-18 09:37:15 -04:00
Vini Brasil
9737333ffd Use file lock around Chroma client initialization (#3181)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
This commit fixes a bug with concurrent processes and Chroma where
`table collections already exists` (and similar) errors were raised.

https://cookbook.chromadb.dev/core/system_constraints/
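
A rough sketch of the approach, using the third-party `filelock` package to serialize client creation across processes; paths and constructor arguments are illustrative, not the actual implementation:

```python
import chromadb
from filelock import FileLock

_LOCK_PATH = "/tmp/crewai_chroma_init.lock"  # illustrative lock file


def get_chroma_client():
    # Only one process at a time initializes the client, so concurrent workers
    # cannot race on creating the same collections/tables.
    with FileLock(_LOCK_PATH):
        return chromadb.PersistentClient(path="/tmp/crewai_storage")  # illustrative path
```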
2025-07-17 11:50:45 -03:00
Lucas Gomide
bf248d5118 docs: fix neatlogs documentation (#3171)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-07-16 21:18:04 -04:00
Lorenze Jay
2490e8cd46 Update CrewAI version to 0.148.0 in project templates and dependencies (#3172)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* Update CrewAI version to 0.148.0 in project templates and dependencies

* Update crewai-tools dependency to version 0.55.0 in pyproject.toml and uv.lock for improved functionality and performance.
2025-07-16 12:36:43 -07:00
Lucas Gomide
9b67e5a15f Emit events about Agent eval (#3168)
* feat: emit events about Agent Eval

We are triggering events when an evaluation has started/completed/failed

* style: fix type checking issues
2025-07-16 13:18:59 -04:00
Lucas Gomide
6ebb6c9b63 Supporting eval single Agent/LiteAgent (#3167)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* refactor: rely on task completion event to evaluate agents

* feat: remove Crew dependency to evaluate agent

* feat: drop execution_context in AgentEvaluator

* chore: drop experimental Agent Eval feature from stable crew.test

* feat: support eval LiteAgent

* resolve linter issues
2025-07-15 09:22:41 -04:00
Lucas Gomide
53f674be60 chore: remove evaluation folder (#3159)
This folder was moved to the `experimental` folder
2025-07-15 08:30:20 -04:00
Paras Sakarwal
11717a5213 docs: added integration with neatlogs (#3138)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-07-14 11:08:24 -04:00
Lucas Gomide
b6d699f764 Implement thread-safe AgentEvaluator (#3157)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* refactor: implement thread-safe AgentEvaluator with hybrid state management

* chore: remove useless comments
2025-07-14 10:05:42 -04:00
Lucas Gomide
5b15061b87 test: add test helper to assert Agent Experiments (#3156) 2025-07-14 09:24:49 -04:00
Lucas Gomide
1b6b2b36d9 Introduce Evaluator Experiment (#3133)
* feat: add exchanged messages in LLMCallCompletedEvent

* feat: add GoalAlignment metric for Agent evaluation

* feat: add SemanticQuality metric for Agent evaluation

* feat: add Tool Metrics for Agent evaluation

* feat: add Reasoning Metrics for Agent evaluation, still in progress

* feat: add AgentEvaluator class

This class will evaluate Agent results and report them to the user

* fix: do not evaluate Agent by default

This is an experimental feature; we still need to refine it further

* test: add Agent eval tests

* fix: render all feedback per iteration

* style: resolve linter issues

* style: fix mypy issues

* fix: allow messages be empty on LLMCallCompletedEvent

* feat: add Experiment evaluation framework with baseline comparison

* fix: reset evaluator for each experiment iteration

* fix: fix track of new test cases

* chore: split Experimental evaluation classes

* refactor: remove unused method

* refactor: isolate Console print in a dedicated class

* fix: make crew required to run an experiment

* fix: use time-aware to define experiment result

* test: add tests for Evaluator Experiment

* style: fix linter issues

* fix: encode string before hashing

* style: resolve linter issues

* feat: add experimental folder for beta features (#3141)

* test: move tests to experimental folder
2025-07-14 09:06:45 -04:00
devin-ai-integration[bot]
3ada4053bd Fix #3149: Add missing create_directory parameter to Task class (#3150)
* Fix #3149: Add missing create_directory parameter to Task class

- Add create_directory field with default value True for backward compatibility
- Update _save_file method to respect create_directory parameter
- Add comprehensive tests covering all scenarios
- Maintain existing behavior when create_directory=True (default)

The create_directory parameter was documented but missing from implementation.
Users can now control directory creation behavior:
- create_directory=True (default): Creates directories if they don't exist
- create_directory=False: Raises RuntimeError if directory doesn't exist

Fixes issue where users got TypeError when trying to use the documented
create_directory parameter.
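
A usage sketch of the documented behavior (task fields are illustrative; agent and crew wiring are omitted):

```python
from crewai import Task

report_task = Task(
    description="Write the quarterly report",
    expected_output="A markdown report saved to disk",
    output_file="reports/q3/summary.md",
    create_directory=False,  # raise instead of silently creating reports/q3/
)
```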

Co-Authored-By: Jo\u00E3o <joao@crewai.com>

* Fix lint: Remove unused import os from test_create_directory_true

- Removes F401 lint error: 'os' imported but unused
- All lint checks should now pass

Co-Authored-By: Jo\u00E3o <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Jo\u00E3o <joao@crewai.com>
2025-07-14 08:15:41 -04:00
Vidit Ostwal
e7a5747c6b Comparing BaseLLM class instead of LLM (#3120)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Comparing BaseLLM class instead of LLM

* Fixed test cases

* Fixed Linting Issues

* removed last line

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-07-11 20:50:36 -04:00
Vidit Ostwal
eec1262d4f Fix agent knowledge (#2831)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* Added add_sources()

* Fixed the agent knowledge querying

* Added test cases

* Fixed linting issue

* Fixed logic

* Seems like a flaky test case

* Minor changes

* Added knowledge attribute to the crew documentation

* Flaky test

* fixed spaces

* Flaky Test Case

* Seems like a flaky test case

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-07-11 13:52:26 -04:00
Tony Kipkemboi
c6caa763d7 docs: Add guardrail attribute documentation and examples (#3139)
- Document string-based guardrails in tasks
- Add guardrail examples to YAML configuration
- Fix Python code formatting in PT-BR CLI docs
2025-07-11 13:32:59 -04:00
Lucas Gomide
08fa3797ca Introducing Agent evaluation (#3130)
* feat: add exchanged messages in LLMCallCompletedEvent

* feat: add GoalAlignment metric for Agent evaluation

* feat: add SemanticQuality metric for Agent evaluation

* feat: add Tool Metrics for Agent evaluation

* feat: add Reasoning Metrics for Agent evaluation, still in progress

* feat: add AgentEvaluator class

This class will evaluate Agent results and report them to the user

* fix: do not evaluate Agent by default

This is an experimental feature; we still need to refine it further

* test: add Agent eval tests

* fix: render all feedback per iteration

* style: resolve linter issues

* style: fix mypy issues

* fix: allow messages be empty on LLMCallCompletedEvent
2025-07-11 13:18:03 -04:00
Greyson LaLonde
bf8fa3232b Add SQLite FTS5 support to test workflow (#3140)
* Add SQLite FTS5 support to test workflow

* Add explanatory comment for SQLite FTS5 workaround
2025-07-11 12:01:25 -04:00
Heitor Carvalho
a6e60a5d42 fix: use production workos environment id (#3129)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-07-09 17:09:01 -04:00
Lorenze Jay
7b0f3aabd9 chore: update crewAI and dependencies to version 0.141.0 and 0.51.0 (#3128)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
- Bump crewAI version to 0.141.0 in __init__.py for alignment with updated dependencies.
- Update `crewai-tools` dependency version to 0.51.0 in pyproject.toml and related template files.
- Add new testing dependencies: pytest-split and pytest-xdist for improved test execution.
- Ensure compatibility with the latest package versions in uv.lock and template files.
2025-07-09 10:37:06 -07:00
Lucas Gomide
f071966951 docs: add docs about Agent.kickoff usage (#3121)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-07-08 16:15:40 -04:00
Lucas Gomide
318310bb7a docs: add docs about Agent repository (#3122) 2025-07-08 15:56:08 -04:00
Greyson LaLonde
34a03f882c feat: add crew context tracking for LLM guardrail events (#3111)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Add crew context tracking using OpenTelemetry baggage for thread-safe propagation. Context is set during kickoff and cleaned up in a finally block. Added thread safety tests with mocked agent execution.
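
A simplified sketch of the pattern (names and structure are illustrative, not the actual CrewAI internals):

```python
from opentelemetry import baggage, context


def run_with_crew_context(crew_name: str, fn):
    # Attach crew identity as baggage for the duration of the call, then clean up.
    token = context.attach(baggage.set_baggage("crew_name", crew_name))
    try:
        return fn()
    finally:
        context.detach(token)


def handler() -> None:
    # Any code running inside the attached context can read the baggage entry.
    print("current crew:", baggage.get_baggage("crew_name"))


run_with_crew_context("research_crew", handler)
```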
2025-07-07 16:33:07 -04:00
Greyson LaLonde
a0fcc0c8d1 Speed up GitHub Actions tests with parallelization (#3107)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Add pytest-xdist and pytest-split to dev dependencies for parallel test execution
- Split tests into 8 parallel groups per Python version for better distribution
- Enable CPU-level parallelization with -n auto to maximize resource usage
- Add fail-fast strategy and maxfail=3 to stop early on failures
- Add job name to match branch protection rules
- Reduce test timeout from default to 30s for faster failure detection
- Remove redundant cache configuration
2025-07-03 21:08:00 -04:00
Lorenze Jay
748c25451c Lorenze/new version 0.140.0 (#3106)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* fix: clean up whitespace and update dependencies

* Removed unnecessary whitespace in multiple files for consistency.
* Updated `crewai-tools` dependency version to `0.49.0` in `pyproject.toml` and related template files.
* Bumped CrewAI version to `0.140.0` in `__init__.py` for alignment with updated dependencies.

* chore: update pyproject.toml to exclude documentation from build targets

* Added exclusions for the `docs` directory in both wheel and sdist build targets to streamline the build process and reduce unnecessary file inclusion.

* chore: update uv.lock for dependency resolution and Python version compatibility

* Incremented revision to 2.
* Updated resolution markers to include support for Python 3.13 and adjusted platform checks for better compatibility.
* Added new wheel URLs for zstandard version 0.23.0 to ensure availability across various platforms.

* chore: pin json-repair dependency version in pyproject.toml and uv.lock

* Updated json-repair dependency from a range to a specific version (0.25.2) for consistency and to avoid potential compatibility issues.
* Adjusted related entries in uv.lock to reflect the pinned version, ensuring alignment across project files.

* chore: pin agentops dependency version in pyproject.toml and uv.lock

* Updated agentops dependency from a range to a specific version (0.3.18) for consistency and to avoid potential compatibility issues.
* Adjusted related entries in uv.lock to reflect the pinned version, ensuring alignment across project files.

* test: enhance cache call assertions in crew tests

* Improved the test for cache hitting between agents by filtering mock calls to ensure they include the expected 'tool' and 'input' keywords.
* Added assertions to verify the number of cache calls and their expected arguments, enhancing the reliability of the test.
* Cleaned up whitespace and improved readability in various test cases for better maintainability.
2025-07-02 15:22:18 -07:00
Heitor Carvalho
a77dcdd419 feat: add multiple provider support (#3089)
* Remove `crewai signup` command, update docs

* Add `Settings.clear()` and clear settings before each login

* Add pyjwt

* Remove print statement from ToolCommand.login()

* Remove auth0 dependency

* Update docs
2025-07-02 16:44:47 -04:00
Greyson LaLonde
68f5bdf0d9 feat: add console logging for LLM guardrail events (#3105)
* feat: add console logging for memory events

* fix: emit guardrail events in correct order and handle exceptions

* fix: remove unreachable elif conditions in guardrail event listener

* fix: resolve mypy type errors in guardrail event handler
2025-07-02 16:19:22 -04:00
Irineu Brito
7f83947020 fix: correct code example language inconsistency in pt-BR docs (#3088)
* fix: correct code example language inconsistency in pt-BR docs

* fix: fully standardize code example language and naming in pt-BR docs

* fix: fully standardize code example language and naming in pt-BR docs (fixed variables)

* fix: fully standardize code example language and naming in pt-BR docs (fixed params)

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-07-02 12:18:32 -04:00
Lucas Gomide
ceb310bcde docs: add docs about Memory Events (#3104)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
2025-07-02 11:10:45 -04:00
Lucas Gomide
ae57e5723c feat: add console logging for memory system usage (#3103) 2025-07-02 11:00:26 -04:00
Lucas Gomide
ab39753a75 Introduce MemoryEvents to monitor their usage (#3098)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* feat: emit events about memory usage

* test: add tests about memory events usage

* fixed linter issues

* test: use scoped_handlers while listener Memory events
2025-07-01 22:50:39 -04:00
Tony Kipkemboi
640e1a7bc2 Add docs redirects and development tools (#3096)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* Add Reo.dev tracking script to documentation

* Comprehensive docs improvements and development tools

- Add comprehensive .cursorrules with CrewAI and Flow development patterns
- Add redirect rules for old doc links without /en/ prefix
- Replace changelog pages with direct GitHub releases links
- Fix installation page directory tree rendering issue
- Fix broken Visual Studio Build Tools link formatting
- Remove obsolete changelog files to reduce maintenance overhead

These changes improve developer experience and ensure all old documentation links continue working.
2025-07-01 14:41:34 -04:00
Lorenze Jay
e544ff8ba3 refactor: streamline collection handling in RAGStorage (#3097)
Replaced the try-except block for collection retrieval with a single call to get_or_create_collection, simplifying the code and improving readability. Added logging to confirm whether the collection was found or created.
2025-07-01 10:14:39 -07:00
Lucas Gomide
49c0144154 feat: improve data training for models up to 7B parameters (#3085)
* feat: improve data training for models up to 7B parameters.

* docs: training considerations for small models to the documentation
2025-07-01 11:47:47 -04:00
Tony Kipkemboi
2ab002a5bf Add Reo.dev tracking script to documentation (#3094)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
2025-07-01 10:29:28 -04:00
Lucas Gomide
b7bf15681e feat: add capability to track LLM calls by task and agent (#3087)
* feat: add capability to track LLM calls by task and agent

This makes it possible to filter or scope LLM events by specific agents or tasks, which can be very useful for debugging or analytics in real-time applications.

* feat: add docs about LLM tracking by Agents and Tasks

* fix incompatible BaseLLM.call method signature

* feat: support to filter LLM Events from Lite Agent
2025-07-01 09:30:16 -04:00
Tony Kipkemboi
af9c01f5d3 Add Scarf analytics tracking (#3086)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Add Scarf analytics tracking

* Fix bandit security warning for urlopen

* Fix linting errors

* Refactor telemetry: reuse existing logic and simplify exceptions
2025-06-30 17:48:45 -04:00
Irineu Brito
5a12b51ba2 fix: Correct typo 'depployments' to 'deployments' in documentation 'instalation' (#3081) 2025-06-30 12:19:31 -04:00
Michael Juliano
576b8ff836 Updated LiteLLM dependency. (#3047)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Updated LiteLLM dependency.

This moves to the latest stable release. Critically, this includes a fix
from https://github.com/BerriAI/litellm/pull/11563 which is required to
use grok-3-mini with crewAI.

* Ran `uv sync` as requested.
2025-06-27 09:54:12 -04:00
Lucas Gomide
b35c3e8024 fix: ensure env-vars are written in upper case (#3072)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
When creating a Crew via the CLI and selecting the Azure provider, the generated .env file had environment variables in lowercase.
This commit ensures that all environment variables are written in uppercase.
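
A minimal sketch of the normalization (keys and values are made up):

```python
provider_vars = {"azure_api_key": "sk-example", "azure_api_base": "https://example.azure.com"}

# Upper-case every key before writing it into the generated .env file.
env_lines = [f"{key.upper()}={value}" for key, value in provider_vars.items()]
print("\n".join(env_lines))
# AZURE_API_KEY=sk-example
# AZURE_API_BASE=https://example.azure.com
```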
2025-06-26 12:29:06 -04:00
Mr. Ånand
b09796cd3f Adding Nebius to docs (#3070)
* Adding Nebius to docs

Submitting this PR on behalf of Nebius AI Studio to add Nebius models to the CrewAI documentation.

I tested with the latest CrewAI + Nebius setup to ensure compatibility.

cc @tonykipkemboi

* updated LiteLLM page

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-26 11:10:19 -04:00
devin-ai-integration[bot]
e0b46492fa Fix: Normalize project names by stripping trailing slashes in crew creation (#3060)
* fix: normalize project names by stripping trailing slashes in crew creation

- Strip trailing slashes from project names in create_folder_structure
- Add comprehensive tests for trailing slash scenarios
- Fixes #3059

The issue occurred because trailing slashes in project names like 'hello/'
were directly incorporated into pyproject.toml, creating invalid package
names and script entries. This fix silently normalizes project names by
stripping trailing slashes before processing, maintaining backward
compatibility while fixing the invalid template generation.

Co-Authored-By: João <joao@crewai.com>

* trigger CI re-run to check for flaky test issue

Co-Authored-By: João <joao@crewai.com>

* fix: resolve circular import in CLI authentication module

- Move ToolCommand import to be local inside _poll_for_token method
- Update test mock to patch ToolCommand at correct location
- Resolves Python 3.11 test collection failure in CI

Co-Authored-By: João <joao@crewai.com>

* feat: add comprehensive class name validation for Python identifiers

- Ensure generated class names are always valid Python identifiers
- Handle edge cases: names starting with numbers, special characters, keywords, built-ins
- Add sanitization logic to remove invalid characters and prefix with 'Crew' when needed
- Add comprehensive test coverage for class name validation edge cases
- Addresses GitHub PR comment from lucasgomide about class name validity

Fixes include:
- Names starting with numbers: '123project' -> 'Crew123Project'
- Python built-ins: 'True' -> 'TrueCrew', 'False' -> 'FalseCrew'
- Special characters: 'hello@world' -> 'HelloWorld'
- Empty/whitespace: '   ' -> 'DefaultCrew'
- All generated class names pass isidentifier() and keyword checks

Co-Authored-By: João <joao@crewai.com>

* refactor: change class name validation to raise errors instead of generating defaults

- Remove default value generation (Crew prefix/suffix, DefaultCrew fallback)
- Raise ValueError with descriptive messages for invalid class names
- Update tests to expect validation errors instead of default corrections
- Addresses GitHub comment feedback from lucasgomide about strict validation
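
A rough sketch of the strict validation described here (the regex and error message are illustrative):

```python
import keyword
import re


def validate_class_name(name: str) -> str:
    # Strip characters that can never appear in an identifier, then validate.
    candidate = re.sub(r"[^0-9a-zA-Z_]", "", name)
    if not candidate.isidentifier() or keyword.iskeyword(candidate):
        raise ValueError(f"{name!r} cannot be used to generate a valid Python class name")
    return candidate


print(validate_class_name("HelloWorld"))  # HelloWorld
# validate_class_name("123project")  -> raises ValueError instead of auto-correcting
```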

Co-Authored-By: João <joao@crewai.com>

* fix: add working directory safety checks to prevent test interference

Co-Authored-By: João <joao@crewai.com>

* fix: standardize working directory handling in tests to prevent corruption

Co-Authored-By: João <joao@crewai.com>

* fix: eliminate os.chdir() usage in tests to prevent working directory corruption

- Replace os.chdir() with parent_folder parameter for create_folder_structure tests
- Mock create_folder_structure directly for create_crew tests to avoid directory changes
- All 12 tests now pass locally without working directory corruption
- Should resolve the 103 failing tests in Python 3.12 CI

Co-Authored-By: João <joao@crewai.com>

* fix: remove unused os import to resolve lint failure

- Remove unused 'import os' statement from test_create_crew.py
- All tests still pass locally after removing unused import
- Should resolve F401 lint error in CI

Co-Authored-By: João <joao@crewai.com>

* feat: add folder name validation for Python module names

- Implement validation to ensure folder_name is valid Python identifier
- Check that folder names don't start with digits
- Validate folder names are not Python keywords
- Sanitize invalid characters from folder names
- Raise ValueError with descriptive messages for invalid cases
- Update tests to validate both folder and class name requirements
- Addresses GitHub comment requiring folder names to be valid Python module names

Co-Authored-By: João <joao@crewai.com>

* fix: correct folder name validation logic to match test expectations

- Fix validation regex to catch names starting with invalid characters like '@#/'
- Ensure validation properly raises ValueError for cases expected by tests
- Maintain support for valid cases like 'my.project/' -> 'myproject'
- Address lucasgomide's comment about valid Python module names

Co-Authored-By: João <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: João <joao@crewai.com>
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-26 10:11:16 -04:00
Greyson LaLonde
ece13fbda0 refactor: implement PEP 621 dynamic versioning (#3068) 2025-06-26 10:02:26 -04:00
kilavvy
94a62d84e1 Update test_lite_agent.py (#3040)
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-26 09:55:53 -04:00
Lucas Gomide
cdf8388b18 docs: update CLI LLM's documentation (#3071)
This change aims to be more generic, so we don’t have to constantly reflect all available LLM options suggested by the CLI when creating a crew.
2025-06-26 09:31:43 -04:00
Lorenze Jay
0f861338ef chore: update version to 0.134.0 across project files (#3067)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-06-25 16:06:43 -07:00
Lucas Gomide
4d1aabf620 feat: enhance CrewBase MCP tools support to allow selecting multiple tools per agent (#3065)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* feat: enhance CrewBase MCP tools support to allow selecting multiple tools per agent

* docs: clarify how to access MCP tools

* build: upgrade crewai-tools
2025-06-25 14:59:55 -04:00
Daniel Barreto
a50fae3a4b Add pt-BR docs translation (#3039)
* docs: add pt-br translations

Powered by a CrewAI Flow https://github.com/danielfsbarreto/docs_translator

* Update mcp/overview.mdx brazilian docs

Its en-US counterpart was updated after I did a pass,
so now it includes the new section about @CrewBase
2025-06-25 11:52:33 -04:00
Lucas Gomide
f6dfec61d6 feat: add official way to use MCP Tools within a CrewBase (#3058)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-06-24 15:14:59 -04:00
Akshit Madan
060c486948 Updated Docs for maxim observability (#3003)
* docs: added Maxim support for Agent Observability

* enhanced the maxim integration doc page as per the github PR reviewer bot suggestions

* Update maxim-observability.mdx

* Update maxim-observability.mdx

- Fixed Python version, >=3.10
- added expected_output field in Task
- Removed marketing links and added github link

* added maxim in observability

* updated the maxim docs page

* fixed image paths

* removed demo link

---------

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-24 14:36:51 -04:00
Lucas Gomide
8b176d0598 feat: improve Crew search while resetting their memories (#3057)
* test: add tests to test get_crews

* feat: improve Crew search while resetting their memories

Some memories couldn't be reset due to their reliance on relative external sources like `PDFKnowledge`. This was caused by the need to run the reset memories command from the `src` directory, which could break when external files weren't accessible from that path.

This commit allows the reset command to be executed from the root of the project — the same location typically used to run a crew — improving compatibility and reducing friction.

* feat: skip cli/templates folder while looking for Crew

* refactor: use console.print instead of print
2025-06-24 11:48:59 -04:00
Rostyslav Borovyk
c96d4a6823 Add Oxylabs Web Scraping tools (#2905)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Add Oxylabs tools

* Review updates

* Review updates

---------

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-23 13:58:16 -04:00
Lucas Gomide
59032817c7 docs: update recommendation filters for MCP and Enterprise tools (#3041)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-06-20 13:35:26 -04:00
Lucas Gomide
e9d8a853ea feat: support to initialize a tool from defined Tool attributes (#3023)
* feat: support to initialize a tool from defined Tool attributes

* fix: ensure Agent is able to load a list of Tools dynamically
2025-06-20 10:53:37 -04:00
Vidit Ostwal
463ea2b97f Fixed type annotation in task (#3021)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Added Union of List of Task, None, NotSpecified

* Seems like a flaky test

* Fixed run time issue

* Fixed Linting issues

* fix pydantic error

* aesthetic changes

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-19 14:37:46 -04:00
Jannik Maierhöfer
ec2903e5ee fix: upgrade langfuse code examples to langfuse python sdk v3 (#3030)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-19 12:18:33 -04:00
Daniel Barreto
4364585ebc Remove mkdocs from project dependencies (#3036)
CrewAI has been using https://mintlify.com/
to serve its docs
2025-06-19 11:21:08 -04:00
Lorenze Jay
0a6b7c655b docs: add comprehensive integration documentation for various services (#2999)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Introduced detailed documentation for integrations including Asana, Box, ClickUp, GitHub, Gmail, Google Calendar, Google Sheets, HubSpot, Jira, Linear, Notion, Salesforce, Shopify, Slack, Stripe, and Zendesk.
- Updated main docs.json to include a new "Integration Docs" section, organizing the documentation for easy access.
- Each integration includes setup instructions, available actions, and example tasks to streamline user onboarding and usage.
2025-06-18 10:21:18 -04:00
Lucas Gomide
db1e9e9b9a fix: fix pydantic support to 2.7.x (#3016)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Pydantic 2.7.x does not support a second parameter in model validators with mode="after"
2025-06-16 16:20:10 -04:00
Lucas Gomide
d92382b6cf fix: SSL error while getting LLM data from GH (#3014)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
When running behind cloud-based security, users are struggling to download LLM data from GitHub. Usually the following error is raised:

```
SSL certificate verification failed: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /BerriAI/litellm/main/model_prices_and_context_window.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1010)')))
Current CA bundle path: /usr/local/etc///.pem
```

This commit ensures the SSL config is being provided while requesting data.
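
A hedged sketch of the idea, using `requests` and `certifi` as stand-ins for however the project actually fetches the file:

```python
import certifi
import requests

URL = "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json"

# Pass an explicit CA bundle instead of relying on an unset/overridden system default.
response = requests.get(URL, timeout=10, verify=certifi.where())
response.raise_for_status()
model_data = response.json()
```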
2025-06-16 11:34:04 -04:00
Lucas Gomide
7c8f2a1325 docs: add missing docs about LLMGuardrail events (#3013) 2025-06-16 11:05:36 -04:00
Vidit Ostwal
a40447df29 updated docs (#2989)
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-16 10:49:27 -04:00
leopardracer
5d6b467042 Update quickstart.mdx (#2998)
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-16 10:35:52 -04:00
Greyson LaLonde
e0ff30c212 Fix tools parameter syntax 2025-06-16 10:25:34 -04:00
Lorenze Jay
a5b5c8ab37 Lorenze/console printer nice (#3004)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* fix: possible fix for Thinking stuck

* feat: add agent logging events for execution tracking

- Introduced AgentLogsStartedEvent and AgentLogsExecutionEvent to enhance logging capabilities during agent execution.
- Updated CrewAgentExecutor to emit these events at the start and during execution, respectively.
- Modified EventListener to handle the new logging events and format output accordingly in the console.
- Enhanced ConsoleFormatter to display agent logs in a structured format, improving visibility of agent actions and outputs.

* drop emoji

* refactor: improve code structure and logging in LiteAgent and ConsoleFormatter

- Refactored imports in lite_agent.py for better readability.
- Enhanced guardrail property initialization in LiteAgent.
- Updated logging functionality to emit AgentLogsExecutionEvent for better tracking.
- Modified ConsoleFormatter to include tool arguments and final output in status updates.
- Improved output formatting for long text in ConsoleFormatter.

* fix tests

---------

Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>
2025-06-14 12:21:46 -07:00
Vidit Ostwal
7f12e98de5 Added sanitize role feature in mem0 storage (#2988)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Added sanitize role feature in mem0 storage

* Used chroma db functionality
2025-06-12 13:14:34 -04:00
Lorenze Jay
99133104dd Update version to 0.130.0 and dependencies in pyproject.toml and uv.lock (#3002)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Bump CrewAI version from 0.126.0 to 0.130.0 in pyproject.toml and uv.lock.
- Update optional dependency 'crewai-tools' version from 0.46.0 to 0.47.1.
- Adjust dependency specifications in CLI templates to reflect the new version.
2025-06-11 17:01:11 -07:00
devin-ai-integration[bot]
970a63c13c Fix issue 2993: Prevent Flow status logs from hiding human input (#2994)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* Fix issue 2993: Prevent Flow status logs from hiding human input

- Add pause_live_updates() and resume_live_updates() methods to ConsoleFormatter
- Modify _ask_human_input() to pause Flow status updates during human input
- Add comprehensive tests for pause/resume functionality and integration
- Ensure Live session is properly managed during human input prompts
- Fix prevents Flow status logs from overwriting user input prompts

Fixes #2993

Co-Authored-By: João <joao@crewai.com>

* Fix lint: Remove unused pytest import

- Remove unused pytest import from test_console_formatter_pause_resume.py
- Fixes F401 lint error identified in CI

Co-Authored-By: João <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: João <joao@crewai.com>
2025-06-11 12:08:00 -04:00
devin-ai-integration[bot]
06c991d8c3 Fix telemetry singleton pattern to respect dynamic environment variables (#2946)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Fix telemetry singleton pattern to respect dynamic environment variables

- Modified Telemetry.__init__ to prevent re-initialization with _initialized flag
- Updated _safe_telemetry_operation to check _is_telemetry_disabled() dynamically
- Added comprehensive tests for environment variables set after singleton creation
- Fixed singleton contamination in existing tests by adding proper reset
- Resolves issue #2945 where CREWAI_DISABLE_TELEMETRY=true was ignored when set after import
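
A simplified sketch of the dynamic check (function names are adapted, not the exact ones in the codebase):

```python
import os


def telemetry_disabled() -> bool:
    # Read the flag on every call instead of caching it at import/singleton-creation time.
    return os.getenv("CREWAI_DISABLE_TELEMETRY", "false").lower() in {"1", "true"}


def safe_telemetry_operation(operation) -> None:
    if telemetry_disabled():
        return
    operation()


os.environ["CREWAI_DISABLE_TELEMETRY"] = "true"  # set after import time
safe_telemetry_operation(lambda: print("this will not run"))
```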

Co-Authored-By: João <joao@crewai.com>

* Implement code review improvements

- Move _initialized flag to __new__ method for better encapsulation
- Add type hints to _safe_telemetry_operation method
- Consolidate telemetry execution checks into _should_execute_telemetry helper
- Add pytest fixtures to reduce test setup redundancy
- Enhanced documentation for singleton behavior

Co-Authored-By: João <joao@crewai.com>

* Fix mypy type-checker errors

- Add explicit bool type annotation to _initialized field
- Fix return value in task_started method to not return _safe_telemetry_operation result
- Simplify initialization logic to set _initialized once in __init__

Co-Authored-By: João <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: João <joao@crewai.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-06-10 17:38:40 -07:00
Lucas Gomide
739eb72fd0 LiteAgent w/ Guardrail (#2982)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* feat: add guardrail support for Agents when using direct kickoff calls

* refactor: expose guardrail func in a proper utils file

* fix: resolve Self import on python 3.10
2025-06-10 13:32:32 -04:00
Lucas Gomide
b0d2e9fe31 docs: update Python version requirement from <=3.13 to <3.14 (#2987)
This correctly reflects support for all 3.13.x patch versions
2025-06-10 12:44:28 -04:00
Lucas Gomide
5c51349a85 Support async tool executions (#2983)
* test: fix structured tool tests

No tests were being executed from this file

* feat: support running async tools

Some tools require async execution. This commit allows us to collect tool results from coroutines
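
A minimal sketch of collecting a coroutine result from a tool call (the tool and executor here are illustrative):

```python
import asyncio
import inspect


async def fetch_data(query: str) -> str:
    # Illustrative async tool body.
    await asyncio.sleep(0)
    return f"results for {query}"


def execute_tool(func, *args, **kwargs):
    result = func(*args, **kwargs)
    if inspect.isawaitable(result):
        # Resolve coroutines before handing the result back to the agent.
        result = asyncio.run(result)
    return result


print(execute_tool(fetch_data, "crewai"))
```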

* docs: add docs about asynchronous tool support
2025-06-10 12:17:06 -04:00
Richard Luo
5b740467cb docs: fix the guide on persistence (#2849)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-09 14:09:56 -04:00
hegasz
e9d9dd2a79 Fix missing manager_agent tokens in usage_metrics from kickoff (#2848)
* fix(metrics): prevent usage_metrics from dropping manager_agent tokens

* Add test to verify hierarchical kickoff aggregates manager and agent usage metrics

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-09 13:16:05 -04:00
Lorenze Jay
3e74cb4832 docs: add integrations documentation and images for enterprise features (#2981)
- Introduced a new documentation file for Integrations, detailing supported services and setup instructions.
- Updated the main docs.json to include the new "integrations" feature in the contextual options.
- Added several images related to integrations to enhance the documentation.

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-09 12:46:09 -04:00
Lucas Gomide
db3c8a49bd feat: improve docs and logging for Multi-Org actions in CLI (#2980)
* docs: add organization management in our CLI docs

* feat: improve user feedback when user is not authenticated

* feat: improve logging about current organization while publishing/install a Tool

* feat: improve logging when Agent repository is not found during fetch

* fix linter offences

* test: fix auth token error
2025-06-09 12:21:12 -04:00
Lucas Gomide
8a37b535ed docs: improve docs about planning LLM usage (#2977) 2025-06-09 10:17:04 -04:00
Lucas Gomide
e6ac1311e7 build: upgrade LiteLLM to support latest Openai version (#2963)
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-09 08:55:12 -04:00
Akshit Madan
b0d89698fd docs: added Maxim support for Agent Observability (#2861)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* docs: added Maxim support for Agent Observability

* enhanced the maxim integration doc page as per the github PR reviewer bot suggestions

* Update maxim-observability.mdx

* Update maxim-observability.mdx

- Fixed Python version, >=3.10
- added expected_output field in Task
- Removed marketing links and added github link

* added maxim in observability

---------

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-08 13:39:01 -04:00
Lucas Gomide
21d063a46c Support multi org in CLI (#2969)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* feat: support to list, switch and see your current organization

* feat: store the current org after logged in

* feat: filtering agents, tools and their actions by organization_uuid if present

* fix linter offenses

* refactor: propagate the current org thought Header instead of params

* refactor: rename org column name to ID instead of Handle

---------

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-06 15:28:09 -04:00
Mike Plachta
02912a653e Increasing the default X-axis spacing for flow plotting (#2967)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* Increasing the default X-axis spacing for flow plotting

* removing unused imports
2025-06-06 09:43:38 -07:00
Greyson LaLonde
f1cfba7527 docs: update hallucination guardrail examples
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Add basic usage example showing guardrail uses task's expected_output as default context
- Add explicit context example for custom reference content
2025-06-05 12:34:52 -04:00
Lucas Gomide
3e075cd48d docs: add minimum UV version required to use the Tool repository (#2965)
* docs: add minimum UV version required to use the Tool repository

* docs: remove memory from Agent docs

The Agent does not support the `memory` attribute
2025-06-05 11:37:19 -04:00
Lucas Gomide
e03ec4d60f fix: remove duplicated message about Tool result (#2964)
We are currently inserting tool results into LLM messages twice, which may unnecessarily increase processing costs, especially for longer outputs.
2025-06-05 09:42:10 -04:00
Lorenze Jay
ba740c6157 Update version to 0.126.0 and dependencies in pyproject.toml and lock files (#2961)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-06-04 17:49:07 -07:00
Tony Kipkemboi
34c813ed79 Add enterprise testing image (#2960) 2025-06-04 15:05:35 -07:00
Tony Kipkemboi
545cc2ffe4 docs: Fix missing await keywords in async crew kickoff methods and add llm selection guide (#2959) 2025-06-04 14:12:52 -07:00
Mike Plachta
47b97d9b7f Azure embeddings documentation for knowledge (#2957)
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-04 13:27:50 -07:00
Lucas Gomide
bf8fbb0a44 Minor adjustments on Tool publish and docs (#2958)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* fix: fix tool publisher logger when available_exports is found

* docs: update docs and templates since we support Python 3.13
2025-06-04 11:58:26 -04:00
Lucas Gomide
552921cf83 feat: load Tool from Agent repository by their own module (#2940)
Previously, we only supported tools from the crewai-tools open-source repository. Now, we're introducing improved support for private tool repositories.
2025-06-04 09:53:25 -04:00
Lorenze Jay
372874fb3a agent add knowledge sources fix and test (#2948) 2025-06-04 06:47:15 -07:00
Lucas Gomide
2bd6b72aae Persist available tools from a Tool repository (#2851)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* feat: add capability to see and expose public Tool classes

* feat: persist available Tools from repository on publish

* ci: explicitly ignore templates from ruff check

Ruff only applies --exclude to files it discovers itself, so we have to manually skip the same files excluded in `ruff.toml`.

* style: fix linter issues

* refactor: renaming available_tools_classes by available_exports

* feat: provide more context about exportable tools

* feat: allow to install a Tool from pypi

* test: fix tests

* feat: add env_vars attribute to BaseTool

* remove TODO: security check since we handle that on the enterprise side
2025-06-03 10:09:02 -04:00
siddharth Sambharia
f02e0060fa feat/portkey-ai-docs-udpated (#2936) 2025-06-03 08:15:28 -04:00
Lucas Gomide
66b7628972 Support Python 3.13 (#2844)
* ci: support python 3.13 on CI

* docs: update docs about support python version

* build: adds requires python <3.14

* build: explicit tokenizers dependency

Added an explicit tokenizers dependency (tokenizers>=0.20.3) to ensure a version compatible with Python 3.13 is used.

* build: drop fastembed, as it is no longer used

* build: attempt to build PyTorch on Python 3.13

* feat: upgrade fastavro, pyarrow and lancedb

* build: ensure tiktoken is greater than 0.8.0 due to Python 3.13 compatibility
2025-06-02 18:12:24 -04:00
VirenG
c045399d6b Update README.md (#2923)
Added the 'Multi-AI Agent' phrase to clarify the key features section (clause 3) of README.md
2025-05-31 21:39:42 -07:00
Tony Kipkemboi
1da2fd2a5c Expand MCP Integration documentation structure (#2922)
2025-05-30 17:38:36 -04:00
Tony Kipkemboi
e07e11fbe7 docs(mcp): Comprehensive update to MCPServerAdapter documentation (#2921)
This commit includes several enhancements to the MCP integration guide:
- Adds a section on connecting to multiple MCP servers with a runnable example.
- Ensures consistent mention and examples for Streamable HTTP transport.
- Adds a manual lifecycle example for Streamable HTTP.
- Clarifies Stdio command examples.
- Refines definitions of Stdio, SSE, and Streamable HTTP transports.
- Simplifies comments in code examples for clarity.
2025-05-30 15:09:52 -04:00
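As a rough companion to the guide described above, here is a hedged sketch of connecting an agent to an MCP server over Stdio with `MCPServerAdapter`; the server script path and its parameters are hypothetical, not taken from the updated docs.
```python
# Hedged sketch: the MCP server script path below is a placeholder.
from crewai import Agent
from crewai_tools import MCPServerAdapter
from mcp import StdioServerParameters

server_params = StdioServerParameters(
    command="python3",
    args=["servers/example_mcp_server.py"],  # hypothetical local MCP server
)

# The adapter manages the server connection and exposes its tools to the agent.
with MCPServerAdapter(server_params) as mcp_tools:
    assistant = Agent(
        role="MCP-backed Assistant",
        goal="Use the tools exposed by the MCP server",
        backstory="Delegates specialized work to external MCP tools",
        tools=mcp_tools,
    )
```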
Lucas Gomide
55ed91e313 feat: log usage tools when called by LLM (#2916)
* feat: log usage tools when called by LLM

* feat: print llm tool usage in console
2025-05-29 14:34:34 -04:00
989 changed files with 143489 additions and 10652 deletions

1429
.cursorrules Normal file

File diff suppressed because it is too large.

46
.github/workflows/build-uv-cache.yml vendored Normal file
View File

@@ -0,0 +1,46 @@
name: Build uv cache
on:
push:
branches:
- main
paths:
- "uv.lock"
- "pyproject.toml"
workflow_dispatch:
permissions:
contents: read
jobs:
build-cache:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.10", "3.11", "3.12", "3.13"]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
- name: Install dependencies and populate cache
run: |
echo "Building global UV cache for Python ${{ matrix.python-version }}..."
uv sync --all-groups --all-extras --no-install-project
echo "Cache populated successfully"
- name: Save uv caches
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}

102
.github/workflows/codeql.yml vendored Normal file
View File

@@ -0,0 +1,102 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL Advanced"
on:
push:
branches: [ "main" ]
paths-ignore:
- "src/crewai/cli/templates/**"
pull_request:
branches: [ "main" ]
paths-ignore:
- "src/crewai/cli/templates/**"
jobs:
analyze:
name: Analyze (${{ matrix.language }})
# Runner size impacts CodeQL analysis time. To learn more, please see:
# - https://gh.io/recommended-hardware-resources-for-running-codeql
# - https://gh.io/supported-runners-and-hardware-resources
# - https://gh.io/using-larger-runners (GitHub.com only)
# Consider using larger runners or machines with greater resources for possible analysis time improvements.
runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
permissions:
# required for all workflows
security-events: write
# required to fetch internal or private CodeQL packs
packages: read
# only required for workflows in private repositories
actions: read
contents: read
strategy:
fail-fast: false
matrix:
include:
- language: actions
build-mode: none
- language: python
build-mode: none
# CodeQL supports the following values keywords for 'language': 'actions', 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'rust', 'swift'
# Use `c-cpp` to analyze code written in C, C++ or both
# Use 'java-kotlin' to analyze code written in Java, Kotlin or both
# Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
# To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
# see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
# If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
# your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
- name: Checkout repository
uses: actions/checkout@v4
# Add any setup steps before running the `github/codeql-action/init` action.
# This includes steps like installing compilers or runtimes (`actions/setup-node`
# or others). This is typically only required for manual builds.
# - name: Setup runtime (example)
# uses: actions/setup-example@v1
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# If the analyze step fails for one of the languages you are analyzing with
# "We were unable to automatically build your code", modify the matrix above
# to set the build mode to "manual" for that language. Then modify this step
# to build your code.
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
- if: matrix.build-mode == 'manual'
shell: bash
run: |
echo 'If you are using a "manual" build mode for one or more of the' \
'languages you are analyzing, replace this with the commands to build' \
'your code, for example:'
echo ' make bootstrap'
echo ' make release'
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:${{matrix.language}}"

View File

@@ -2,6 +2,9 @@ name: Lint
on: [pull_request]
permissions:
contents: read
jobs:
lint:
runs-on: ubuntu-latest
@@ -15,8 +18,27 @@ jobs:
- name: Fetch Target Branch
run: git fetch origin $TARGET_BRANCH --depth=1
- name: Install Ruff
run: pip install ruff
- name: Restore global uv cache
id: cache-restore
uses: actions/cache/restore@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py3.11-${{ hashFiles('uv.lock') }}
restore-keys: |
uv-main-py3.11-
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: "3.11"
enable-cache: false
- name: Install dependencies
run: uv sync --all-groups --all-extras --no-install-project
- name: Get Changed Python Files
id: changed-files
@@ -30,4 +52,17 @@ jobs:
- name: Run Ruff on Changed Files
if: ${{ steps.changed-files.outputs.files != '' }}
run: |
echo "${{ steps.changed-files.outputs.files }}" | tr " " "\n" | xargs -I{} ruff check "{}"
echo "${{ steps.changed-files.outputs.files }}" \
| tr ' ' '\n' \
| grep -v 'src/crewai/cli/templates/' \
| xargs -I{} uv run ruff check "{}"
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py3.11-${{ hashFiles('uv.lock') }}

View File

@@ -1,45 +0,0 @@
name: Deploy MkDocs
on:
release:
types: [published]
permissions:
contents: write
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v5
with:
python-version: '3.10'
- name: Calculate requirements hash
id: req-hash
run: echo "::set-output name=hash::$(sha256sum requirements-doc.txt | awk '{print $1}')"
- name: Setup cache
uses: actions/cache@v4
with:
key: mkdocs-material-${{ steps.req-hash.outputs.hash }}
path: .cache
restore-keys: |
mkdocs-material-
- name: Install Requirements
run: |
sudo apt-get update &&
sudo apt-get install pngquant &&
pip install mkdocs-material mkdocs-material-extensions pillow cairosvg
env:
GH_TOKEN: ${{ secrets.GH_TOKEN }}
- name: Build and deploy MkDocs
run: mkdocs gh-deploy --force

View File

@@ -1,23 +0,0 @@
name: Security Checker
on: [pull_request]
jobs:
security-check:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11.9"
- name: Install dependencies
run: pip install bandit
- name: Run Bandit
run: bandit -c pyproject.toml -r src/ -ll

View File

@@ -3,32 +3,95 @@ name: Run Tests
on: [pull_request]
permissions:
contents: write
contents: read
env:
OPENAI_API_KEY: fake-api-key
PYTHONUNBUFFERED: 1
jobs:
tests:
name: tests (${{ matrix.python-version }})
runs-on: ubuntu-latest
timeout-minutes: 15
strategy:
fail-fast: true
matrix:
python-version: ['3.10', '3.11', '3.12']
python-version: ['3.10', '3.11', '3.12', '3.13']
group: [1, 2, 3, 4, 5, 6, 7, 8]
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Fetch all history for proper diff
- name: Restore global uv cache
id: cache-restore
uses: actions/cache/restore@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
restore-keys: |
uv-main-py${{ matrix.python-version }}-
- name: Install uv
uses: astral-sh/setup-uv@v3
uses: astral-sh/setup-uv@v6
with:
enable-cache: true
- name: Set up Python ${{ matrix.python-version }}
run: uv python install ${{ matrix.python-version }}
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
- name: Install the project
run: uv sync --dev --all-extras
run: uv sync --all-groups --all-extras
- name: Run tests
run: uv run pytest --block-network --timeout=60 -vv
- name: Restore test durations
uses: actions/cache/restore@v4
with:
path: .test_durations_py*
key: test-durations-py${{ matrix.python-version }}
- name: Run tests (group ${{ matrix.group }} of 8)
run: |
PYTHON_VERSION_SAFE=$(echo "${{ matrix.python-version }}" | tr '.' '_')
DURATION_FILE=".test_durations_py${PYTHON_VERSION_SAFE}"
# Temporarily always skip cached durations to fix test splitting
# When durations don't match, pytest-split runs duplicate tests instead of splitting
echo "Using even test splitting (duration cache disabled until fix merged)"
DURATIONS_ARG=""
# Original logic (disabled temporarily):
# if [ ! -f "$DURATION_FILE" ]; then
# echo "No cached durations found, tests will be split evenly"
# DURATIONS_ARG=""
# elif git diff origin/${{ github.base_ref }}...HEAD --name-only 2>/dev/null | grep -q "^tests/.*\.py$"; then
# echo "Test files have changed, skipping cached durations to avoid mismatches"
# DURATIONS_ARG=""
# else
# echo "No test changes detected, using cached test durations for optimal splitting"
# DURATIONS_ARG="--durations-path=${DURATION_FILE}"
# fi
uv run pytest \
--block-network \
--timeout=30 \
-vv \
--splits 8 \
--group ${{ matrix.group }} \
$DURATIONS_ARG \
--durations=10 \
-n auto \
--maxfail=3
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}

View File

@@ -3,24 +3,99 @@ name: Run Type Checks
on: [pull_request]
permissions:
contents: write
contents: read
jobs:
type-checker:
type-checker-matrix:
name: type-checker (${{ matrix.python-version }})
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: ["3.10", "3.11", "3.12", "3.13"]
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v5
with:
python-version: "3.11.9"
fetch-depth: 0 # Fetch all history for proper diff
- name: Install Requirements
- name: Restore global uv cache
id: cache-restore
uses: actions/cache/restore@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
restore-keys: |
uv-main-py${{ matrix.python-version }}-
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
- name: Install dependencies
run: uv sync --all-groups --all-extras
- name: Get changed Python files
id: changed-files
run: |
pip install mypy
# Get the list of changed Python files compared to the base branch
echo "Fetching changed files..."
git diff --name-only --diff-filter=ACMRT origin/${{ github.base_ref }}...HEAD -- '*.py' > changed_files.txt
- name: Run type checks
run: mypy src
# Filter for files in src/ directory only (excluding tests/)
grep -E "^src/" changed_files.txt > filtered_changed_files.txt || true
# Check if there are any changed files
if [ -s filtered_changed_files.txt ]; then
echo "Changed Python files in src/:"
cat filtered_changed_files.txt
echo "has_changes=true" >> $GITHUB_OUTPUT
# Convert newlines to spaces for mypy command
echo "files=$(cat filtered_changed_files.txt | tr '\n' ' ')" >> $GITHUB_OUTPUT
else
echo "No Python files changed in src/"
echo "has_changes=false" >> $GITHUB_OUTPUT
fi
- name: Run type checks on changed files
if: steps.changed-files.outputs.has_changes == 'true'
run: |
echo "Running mypy on changed files with Python ${{ matrix.python-version }}..."
uv run mypy ${{ steps.changed-files.outputs.files }}
- name: No files to check
if: steps.changed-files.outputs.has_changes == 'false'
run: echo "No Python files in src/ were modified - skipping type checks"
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
# Summary job to provide single status for branch protection
type-checker:
name: type-checker
runs-on: ubuntu-latest
needs: type-checker-matrix
if: always()
steps:
- name: Check matrix results
run: |
if [ "${{ needs.type-checker-matrix.result }}" == "success" ] || [ "${{ needs.type-checker-matrix.result }}" == "skipped" ]; then
echo "✅ All type checks passed"
else
echo "❌ Type checks failed"
exit 1
fi

View File

@@ -0,0 +1,71 @@
name: Update Test Durations
on:
push:
branches:
- main
paths:
- 'tests/**/*.py'
workflow_dispatch:
permissions:
contents: read
jobs:
update-durations:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
env:
OPENAI_API_KEY: fake-api-key
PYTHONUNBUFFERED: 1
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Restore global uv cache
id: cache-restore
uses: actions/cache/restore@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
restore-keys: |
uv-main-py${{ matrix.python-version }}-
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
- name: Install the project
run: uv sync --all-groups --all-extras
- name: Run all tests and store durations
run: |
PYTHON_VERSION_SAFE=$(echo "${{ matrix.python-version }}" | tr '.' '_')
uv run pytest --store-durations --durations-path=.test_durations_py${PYTHON_VERSION_SAFE} -n auto
continue-on-error: true
- name: Save durations to cache
if: always()
uses: actions/cache/save@v4
with:
path: .test_durations_py*
key: test-durations-py${{ matrix.python-version }}
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}

4
.gitignore vendored
View File

@@ -21,9 +21,9 @@ crew_tasks_output.json
.mypy_cache
.ruff_cache
.venv
agentops.log
test_flow.html
crewairules.mdc
plan.md
conceptual_plan.md
build_image
build_image
chromadb-*.lock

View File

@@ -1,7 +1,19 @@
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.8.2
- repo: local
hooks:
- id: ruff
args: ["--fix"]
name: ruff
entry: uv run ruff check
language: system
types: [python]
- id: ruff-format
name: ruff-format
entry: uv run ruff format
language: system
types: [python]
- id: mypy
name: mypy
entry: uv run mypy
language: system
types: [python]
exclude: ^tests/

View File

@@ -1,4 +0,0 @@
exclude = [
"templates",
"__init__.py",
]

View File

@@ -161,7 +161,7 @@ To get started with CrewAI, follow these simple steps:
### 1. Installation
Ensure you have Python >=3.10 <3.13 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <3.14 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, install CrewAI:
@@ -403,7 +403,7 @@ In addition to the sequential process, you can use the hierarchical process, whi
## Key Features
CrewAI stands apart as a lean, standalone, high-performance framework delivering simplicity, flexibility, and precise control—free from the complexity and limitations found in other agent frameworks.
CrewAI stands apart as a lean, standalone, high-performance multi-AI Agent framework delivering simplicity, flexibility, and precise control—free from the complexity and limitations found in other agent frameworks.
- **Standalone & Lean**: Completely independent from other frameworks like LangChain, offering faster execution and lighter resource demands.
- **Flexible & Precise**: Easily orchestrate autonomous agents through intuitive [Crews](https://docs.crewai.com/concepts/crews) or precise [Flows](https://docs.crewai.com/concepts/flows), achieving perfect balance for your needs.
@@ -418,10 +418,10 @@ Choose CrewAI to easily build powerful, adaptable, and production-ready AI autom
You can test different real life examples of AI crews in the [CrewAI-examples repo](https://github.com/crewAIInc/crewAI-examples?tab=readme-ov-file):
- [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/landing_page_generator)
- [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/landing_page_generator)
- [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution)
- [Trip Planner](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner)
- [Stock Analysis](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis)
- [Trip Planner](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/trip_planner)
- [Stock Analysis](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/stock_analysis)
### Quick Tutorial
@@ -429,19 +429,19 @@ You can test different real life examples of AI crews in the [CrewAI-examples re
### Write Job Descriptions
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/job-posting) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/job-posting) or watch a video below:
[![Jobs postings](https://img.youtube.com/vi/u98wEMz-9to/maxresdefault.jpg)](https://www.youtube.com/watch?v=u98wEMz-9to "Jobs postings")
### Trip Planner
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/trip_planner) or watch a video below:
[![Trip Planner](https://img.youtube.com/vi/xis7rWp-hjs/maxresdefault.jpg)](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner")
### Stock Analysis
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/stock_analysis) or watch a video below:
[![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/maxresdefault.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")

View File

@@ -1,473 +0,0 @@
---
title: Changelog
description: View the latest updates and changes to CrewAI
icon: timeline
---
<Update label="2024-05-22" description="v0.121.0" tags={["Latest"]}>
## Release Highlights
<Frame>
<img src="/images/releases/v01210.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.121.0">View on GitHub</a>
</div>
**Core Improvements & Fixes**
- Fixed encoding error when creating tools
- Fixed failing llama test
- Updated logging configuration for consistency
- Enhanced telemetry initialization and event handling
**New Features & Enhancements**
- Added **markdown attribute** to the Task class
- Added **reasoning attribute** to the Agent class
- Added **inject_date flag** to Agent for automatic date injection
- Implemented **HallucinationGuardrail** (no-op with test coverage)
**Documentation & Guides**
- Added documentation for **StagehandTool** and improved MDX structure
- Added documentation for **MCP integration** and updated enterprise docs
- Documented knowledge events and updated reasoning docs
- Added stop parameter documentation
- Fixed import references in doc examples (before_kickoff, after_kickoff)
- General docs updates and restructuring for clarity
</Update>
<Update label="2024-05-15" description="v0.120.1">
## Release Highlights
<Frame>
<img src="/images/releases/v01201.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.120.1">View on GitHub</a>
</div>
**Core Improvements & Fixes**
- Fixed **interpolation with hyphens**
</Update>
<Update label="2024-05-14" description="v0.120.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01200.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.120.0">View on GitHub</a>
</div>
**Core Improvements & Fixes**
- Enabled **full Ruff rule set** by default for stricter linting
- Addressed race condition in FilteredStream using context managers
- Fixed agent knowledge reset issue
- Refactored agent fetching logic into utility module
**New Features & Enhancements**
- Added support for **loading an Agent directly from a repository**
- Enabled setting an empty context for Task
- Enhanced Agent repository feedback and fixed Tool auto-import behavior
- Introduced direct initialization of knowledge (bypassing knowledge_sources)
**Documentation & Guides**
- Updated security.md for current security practices
- Cleaned up Google setup section for clarity
- Added link to AI Studio when entering Gemini key
- Updated Arize Phoenix observability guide
- Refreshed flow documentation
</Update>
<Update label="2024-05-08" description="v0.119.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01190.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.119.0">View on GitHub</a>
</div>
**Core Improvements & Fixes**
- Improved test reliability by enhancing pytest handling for flaky tests
- Fixed memory reset crash when embedding dimensions mismatch
- Enabled parent flow identification for Crew and LiteAgent
- Prevented telemetry-related crashes when unavailable
- Upgraded **LiteLLM version** for better compatibility
- Fixed llama converter tests by removing skip_external_api
**New Features & Enhancements**
- Introduced **knowledge retrieval prompt re-writing** in Agent for improved tracking and debugging
- Made LLM setup and quickstart guides model-agnostic
**Documentation & Guides**
- Added advanced configuration docs for the RAG tool
- Updated Windows troubleshooting guide
- Refined documentation examples for better clarity
- Fixed typos across docs and config files
</Update>
<Update label="2024-04-28" description="v0.118.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01180.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.118.0">View on GitHub</a>
</div>
**Core Improvements & Fixes**
- Fixed issues with missing prompt or system templates
- Removed global logging configuration to avoid unintended overrides
- Renamed **TaskGuardrail to LLMGuardrail** for improved clarity
- Downgraded litellm to version 1.167.1 for compatibility
- Added missing init.py files to ensure proper module initialization
**New Features & Enhancements**
- Added support for **no-code Guardrail creation** to simplify AI behavior controls
**Documentation & Guides**
- Removed CrewStructuredTool from public documentation to reflect internal usage
- Updated enterprise documentation and YouTube embed for improved onboarding experience
</Update>
<Update label="2024-04-20" description="v0.117.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01170.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.117.0">View on GitHub</a>
</div>
**New Features & Enhancements**
- Added `result_as_answer` parameter support in `@tool` decorator.
- Introduced support for new language models: GPT-4.1, Gemini-2.0, and Gemini-2.5 Pro.
- Enhanced knowledge management capabilities.
- Added Huggingface provider option in CLI.
- Improved compatibility and CI support for Python 3.10+.
**Core Improvements & Fixes**
- Fixed issues with incorrect template parameters and missing inputs.
- Improved asynchronous flow handling with coroutine condition checks.
- Enhanced memory management with isolated configuration and correct memory object copying.
- Fixed initialization of lite agents with correct references.
- Addressed Python type hint issues and removed redundant imports.
- Updated event placement for improved tool usage tracking.
- Raised explicit exceptions when flows fail.
- Removed unused code and redundant comments from various modules.
- Updated GitHub App token action to v2.
**Documentation & Guides**
- Enhanced documentation structure, including enterprise deployment instructions.
- Automatically create output folders for documentation generation.
- Fixed broken link in WeaviateVectorSearchTool documentation.
- Fixed guardrail documentation usage and import paths for JSON search tools.
- Updated documentation for CodeInterpreterTool.
- Improved SEO, contextual navigation, and error handling for documentation pages.
</Update>
<Update label="2024-04-25" description="v0.117.1">
## Release Highlights
<Frame>
<img src="/images/releases/v01171.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.117.1">View on GitHub</a>
</div>
**Core Improvements & Fixes**
- Upgraded **crewai-tools** to latest version
- Upgraded **liteLLM** to latest version
- Fixed **Mem0 OSS**
</Update>
<Update label="2024-04-07" description="v0.114.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01140.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.114.0">View on GitHub</a>
</div>
**New Features & Enhancements**
- Agents as an atomic unit. (`Agent(...).kickoff()`)
- Support for [Custom LLM implementations](https://docs.crewai.com/guides/advanced/custom-llm).
- Integrated External Memory and [Opik observability](https://docs.crewai.com/how-to/opik-observability).
- Enhanced YAML extraction.
- Multimodal agent validation.
- Added Secure fingerprints for agents and crews.
**Core Improvements & Fixes**
- Improved serialization, agent copying, and Python compatibility.
- Added wildcard support to `emit()`
- Added support for additional router calls and context window adjustments.
- Fixed typing issues, validation, and import statements.
- Improved method performance.
- Enhanced agent task handling, event emissions, and memory management.
- Fixed CLI issues, conditional tasks, cloning behavior, and tool outputs.
**Documentation & Guides**
- Improved documentation structure, theme, and organization.
- Added guides for Local NVIDIA NIM with WSL2, W&B Weave, and Arize Phoenix.
- Updated tool configuration examples, prompts, and observability docs.
- Guide on using singular agents within Flows.
</Update>
<Update label="2024-03-17" description="v0.108.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01080.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.108.0">View on GitHub</a>
</div>
**New Features & Enhancements**
- Converted tabs to spaces in `crew.py` template
- Enhanced LLM Streaming Response Handling and Event System
- Included `model_name`
- Enhanced Event Listener with rich visualization and improved logging
- Added fingerprints
**Bug Fixes**
- Fixed Mistral issues
- Fixed a bug in documentation
- Fixed type check error in fingerprint property
**Documentation Updates**
- Improved tool documentation
- Updated installation guide for the `uv` tool package
- Added instructions for upgrading crewAI with the `uv` tool
- Added documentation for `ApifyActorsTool`
</Update>
<Update label="2024-03-10" description="v0.105.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01050.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.105.0">View on GitHub</a>
</div>
**Core Improvements & Fixes**
- Fixed issues with missing template variables and user memory configuration
- Improved async flow support and addressed agent response formatting
- Enhanced memory reset functionality and fixed CLI memory commands
- Fixed type issues, tool calling properties, and telemetry decoupling
**New Features & Enhancements**
- Added Flow state export and improved state utilities
- Enhanced agent knowledge setup with optional crew embedder
- Introduced event emitter for better observability and LLM call tracking
- Added support for Python 3.10 and ChatOllama from langchain_ollama
- Integrated context window size support for the o3-mini model
- Added support for multiple router calls
**Documentation & Guides**
- Improved documentation layout and hierarchical structure
- Added QdrantVectorSearchTool guide and clarified event listener usage
- Fixed typos in prompts and updated Amazon Bedrock model listings
</Update>
<Update label="2024-02-12" description="v0.102.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01020.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.102.0">View on GitHub</a>
</div>
**Core Improvements & Fixes**
- Enhanced LLM Support: Improved structured LLM output, parameter handling, and formatting for Anthropic models
- Crew & Agent Stability: Fixed issues with cloning agents/crews using knowledge sources, multiple task outputs in conditional tasks, and ignored Crew task callbacks
- Memory & Storage Fixes: Fixed short-term memory handling with Bedrock, ensured correct embedder initialization, and added a reset memories function in the crew class
- Training & Execution Reliability: Fixed broken training and interpolation issues with dict and list input types
**New Features & Enhancements**
- Advanced Knowledge Management: Improved naming conventions and enhanced embedding configuration with custom embedder support
- Expanded Logging & Observability: Added JSON format support for logging and integrated MLflow tracing documentation
- Data Handling Improvements: Updated excel_knowledge_source.py to process multi-tab files
- General Performance & Codebase Clean-Up: Streamlined enterprise code alignment and resolved linting issues
- Adding new tool: `QdrantVectorSearchTool`
**Documentation & Guides**
- Updated AI & Memory Docs: Improved Bedrock, Google AI, and long-term memory documentation
- Task & Workflow Clarity: Added "Human Input" row to Task Attributes, Langfuse guide, and FileWriterTool documentation
- Fixed Various Typos & Formatting Issues
</Update>
<Update label="2024-01-28" description="v0.100.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01000.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.100.0">View on GitHub</a>
</div>
**Features**
- Add Composio docs
- Add SageMaker as a LLM provider
**Fixes**
- Overall LLM connection issues
- Using safe accessors on training
- Add version check to crew_chat.py
**Documentation**
- New docs for crewai chat
- Improve formatting and clarity in CLI and Composio Tool docs
</Update>
<Update label="2024-01-20" description="v0.98.0">
## Release Highlights
<Frame>
<img src="/images/releases/v0980.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.98.0">View on GitHub</a>
</div>
**Features**
- Conversation crew v1
- Add unique ID to flow states
- Add @persist decorator with FlowPersistence interface
**Integrations**
- Add SambaNova integration
- Add NVIDIA NIM provider in cli
- Introducing VoyageAI
**Fixes**
- Fix API Key Behavior and Entity Handling in Mem0 Integration
- Fixed core invoke loop logic and relevant tests
- Make tool inputs actual objects and not strings
- Add important missing parts to creating tools
- Drop litellm version to prevent windows issue
- Before kickoff if inputs are none
- Fixed typos, nested pydantic model issue, and docling issues
</Update>
<Update label="2024-01-04" description="v0.95.0">
## Release Highlights
<Frame>
<img src="/images/releases/v0950.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.95.0">View on GitHub</a>
</div>
**New Features**
- Adding Multimodal Abilities to Crew
- Programmatic Guardrails
- HITL multiple rounds
- Gemini 2.0 Support
- CrewAI Flows Improvements
- Add Workflow Permissions
- Add support for langfuse with litellm
- Portkey Integration with CrewAI
- Add interpolate_only method and improve error handling
- Docling Support
- Weaviate Support
**Fixes**
- output_file not respecting system path
- disk I/O error when resetting short-term memory
- CrewJSONEncoder now accepts enums
- Python max version
- Interpolation for output_file in Task
- Handle coworker role name case/whitespace properly
- Add tiktoken as explicit dependency and document Rust requirement
- Include agent knowledge in planning process
- Change storage initialization to None for KnowledgeStorage
- Fix optional storage checks
- include event emitter in flows
- Docstring, Error Handling, and Type Hints Improvements
- Suppressed userWarnings from litellm pydantic issues
</Update>
<Update label="2024-12-05" description="v0.86.0">
## Release Highlights
<Frame>
<img src="/images/releases/v0860.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.86.0">View on GitHub</a>
</div>
**Changes**
- Remove all references to pipeline and pipeline router
- Add Nvidia NIM as provider in Custom LLM
- Add knowledge demo + improve knowledge docs
- Add HITL multiple rounds of followup
- New docs about yaml crew with decorators
- Simplify template crew
</Update>
<Update label="2024-12-04" description="v0.85.0">
## Release Highlights
<Frame>
<img src="/images/releases/v0850.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.85.0">View on GitHub</a>
</div>
**Features**
- Added knowledge to agent level
- Feat/remove langchain
- Improve typed task outputs
- Log in to Tool Repository on crewai login
**Fixes**
- Fixes issues with result as answer not properly exiting LLM loop
- Fix missing key name when running with ollama provider
- Fix spelling issue found
**Documentation**
- Update readme for running mypy
- Add knowledge to mint.json
- Update Github actions
- Update Agents docs to include two approaches for creating an agent
- Improvements to LLM Configuration and Usage
</Update>
<Update label="2024-11-25" description="v0.83.0">
**New Features**
- New before_kickoff and after_kickoff crew callbacks
- Support to pre-seed agents with Knowledge
- Add support for retrieving user preferences and memories using Mem0
**Fixes**
- Fix Async Execution
- Upgrade chroma and adjust embedder function generator
- Update CLI Watson supported models + docs
- Reduce level for Bandit
- Fixing all tests
**Documentation**
- Update Docs
</Update>
<Update label="2024-11-13" description="v0.80.0">
**Fixes**
- Fixing Tokens callback replacement bug
- Fixing Step callback issue
- Add cached prompt tokens info on usage metrics
- Fix crew_train_success test
</Update>

View File

@@ -1,67 +0,0 @@
---
title: Training
description: Learn how to train your CrewAI agents by giving them feedback early on and get consistent results.
icon: dumbbell
---
## Overview
The training feature in CrewAI allows you to train your AI agents using the command-line interface (CLI).
By running the command `crewai train -n <n_iterations>`, you can specify the number of iterations for the training process.
During training, CrewAI utilizes techniques to optimize the performance of your agents along with human feedback.
This helps the agents improve their understanding, decision-making, and problem-solving abilities.
### Training Your Crew Using the CLI
To use the training feature, follow these steps:
1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following command:
```shell
crewai train -n <n_iterations> <filename> (optional)
```
<Tip>
Replace `<n_iterations>` with the desired number of training iterations and `<filename>` with the appropriate filename ending with `.pkl`.
</Tip>
### Training Your Crew Programmatically
To train your crew programmatically, use the following steps:
1. Define the number of iterations for training.
2. Specify the input parameters for the training process.
3. Execute the training command within a try-except block to handle potential errors.
```python Code
n_iterations = 2
inputs = {"topic": "CrewAI Training"}
filename = "your_model.pkl"
try:
YourCrewName_Crew().crew().train(
n_iterations=n_iterations,
inputs=inputs,
filename=filename
)
except Exception as e:
raise Exception(f"An error occurred while training the crew: {e}")
```
### Key Points to Note
- **Positive Integer Requirement:** Ensure that the number of iterations (`n_iterations`) is a positive integer. The code will raise a `ValueError` if this condition is not met.
- **Filename Requirement:** Ensure that the filename ends with `.pkl`. The code will raise a `ValueError` if this condition is not met.
- **Error Handling:** The code handles subprocess errors and unexpected exceptions, providing error messages to the user.
It is important to note that the training process may take some time, depending on the complexity of your agents and will also require your feedback on each iteration.
Once the training is complete, your agents will be equipped with enhanced capabilities and knowledge, ready to tackle complex tasks and provide more consistent and valuable insights.
Remember to regularly update and retrain your agents to ensure they stay up-to-date with the latest information and advancements in the field.
Happy training with CrewAI! 🚀

File diff suppressed because it is too large.

View File

@@ -0,0 +1,8 @@
---
title: "GET /inputs"
description: "Get required inputs for your crew"
openapi: "/enterprise-api.en.yaml GET /inputs"
mode: "wide"
---

View File

@@ -2,6 +2,7 @@
title: "Introduction"
description: "Complete reference for the CrewAI Enterprise REST API"
icon: "code"
mode: "wide"
---
# CrewAI Enterprise API

View File

@@ -0,0 +1,8 @@
---
title: "POST /kickoff"
description: "Start a crew execution"
openapi: "/enterprise-api.en.yaml POST /kickoff"
mode: "wide"
---

View File

@@ -0,0 +1,8 @@
---
title: "GET /status/{kickoff_id}"
description: "Get execution status"
openapi: "/enterprise-api.en.yaml GET /status/{kickoff_id}"
mode: "wide"
---

1763
docs/en/changelog.mdx Normal file

File diff suppressed because it is too large.

View File

@@ -2,6 +2,7 @@
title: Agents
description: Detailed guide on creating and managing agents within the CrewAI framework.
icon: robot
mode: "wide"
---
## Overview of an Agent
@@ -43,7 +44,6 @@ The Visual Agent Builder enables:
| **Max Iterations** _(optional)_ | `max_iter` | `int` | Maximum iterations before the agent must provide its best answer. Default is 20. |
| **Max RPM** _(optional)_ | `max_rpm` | `Optional[int]` | Maximum requests per minute to avoid rate limits. |
| **Max Execution Time** _(optional)_ | `max_execution_time` | `Optional[int]` | Maximum time (in seconds) for task execution. |
| **Memory** _(optional)_ | `memory` | `bool` | Whether the agent should maintain memory of interactions. Default is True. |
| **Verbose** _(optional)_ | `verbose` | `bool` | Enable detailed execution logs for debugging. Default is False. |
| **Allow Delegation** _(optional)_ | `allow_delegation` | `bool` | Allow the agent to delegate tasks to other agents. Default is False. |
| **Step Callback** _(optional)_ | `step_callback` | `Optional[Any]` | Function called after each agent step, overrides crew callback. |
@@ -72,7 +72,7 @@ There are two ways to create agents in CrewAI: using **YAML configuration (recom
Using YAML configuration provides a cleaner, more maintainable way to define agents. We strongly recommend using this approach in your CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/installation) section, navigate to the `src/latest_ai_development/config/agents.yaml` file and modify the template to match your requirements.
After creating your CrewAI project as outlined in the [Installation](/en/installation) section, navigate to the `src/latest_ai_development/config/agents.yaml` file and modify the template to match your requirements.
<Note>
Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew:
@@ -156,7 +156,6 @@ agent = Agent(
"you excel at finding patterns in complex datasets.",
llm="gpt-4", # Default: OPENAI_MODEL_NAME or "gpt-4"
function_calling_llm=None, # Optional: Separate LLM for tool calling
memory=True, # Default: True
verbose=False, # Default: False
allow_delegation=False, # Default: False
max_iter=20, # Default: 20 iterations
@@ -297,6 +296,11 @@ multimodal_agent = Agent(
- `"safe"`: Uses Docker (recommended for production)
- `"unsafe"`: Direct execution (use only in trusted environments)
<Note>
This runs a default Docker image. If you want to configure the Docker image, check out the Code Interpreter Tool in the tools section.
Add the Code Interpreter Tool to the agent via its `tools` parameter.
</Note>
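A minimal configuration sketch for the execution modes above (the attribute names match the list; the role, goal, and backstory are placeholder values):
```python Code
from crewai import Agent

# Hedged sketch: enabling code execution with the Docker-backed "safe" mode.
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze small datasets by writing and running Python snippets",
    backstory="Comfortable turning questions into quick scripts",
    allow_code_execution=True,
    code_execution_mode="safe",  # "safe" runs in Docker; "unsafe" executes directly
)
```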
#### Advanced Features
- `multimodal`: Enable multimodal capabilities for processing text and visual content
- `reasoning`: Enable agent to reflect and create plans before executing tasks
@@ -309,7 +313,7 @@ multimodal_agent = Agent(
<Note>
When using custom templates, ensure that both `system_template` and `prompt_template` are defined. The `response_template` is optional but recommended for consistent output formatting.
</Note>
</Note>
<Note>
When using custom templates, you can use variables like `{role}`, `{goal}`, and `{backstory}` in your templates. These will be automatically populated during execution.
@@ -422,7 +426,7 @@ strict_agent = Agent(
```python Code
# Perfect for document processing
document_processor = Agent(
role="Document Analyst",
role="Document Analyst",
goal="Extract insights from large research papers",
backstory="Expert at analyzing extensive documentation",
respect_context_window=True, # Handle large documents gracefully
@@ -523,6 +527,103 @@ agent = Agent(
The context window management feature works automatically in the background. You don't need to call any special functions - just set `respect_context_window` to your preferred behavior and CrewAI handles the rest!
</Note>
## Direct Agent Interaction with `kickoff()`
Agents can be used directly without going through a task or crew workflow using the `kickoff()` method. This provides a simpler way to interact with an agent when you don't need the full crew orchestration capabilities.
### How `kickoff()` Works
The `kickoff()` method allows you to send messages directly to an agent and get a response, similar to how you would interact with an LLM but with all the agent's capabilities (tools, reasoning, etc.).
```python Code
from crewai import Agent
from crewai_tools import SerperDevTool
# Create an agent
researcher = Agent(
role="AI Technology Researcher",
goal="Research the latest AI developments",
tools=[SerperDevTool()],
verbose=True
)
# Use kickoff() to interact directly with the agent
result = researcher.kickoff("What are the latest developments in language models?")
# Access the raw response
print(result.raw)
```
### Parameters and Return Values
| Parameter | Type | Description |
| :---------------- | :---------------------------------- | :------------------------------------------------------------------------ |
| `messages` | `Union[str, List[Dict[str, str]]]` | Either a string query or a list of message dictionaries with role/content |
| `response_format` | `Optional[Type[Any]]` | Optional Pydantic model for structured output |
The method returns a `LiteAgentOutput` object with the following properties:
- `raw`: String containing the raw output text
- `pydantic`: Parsed Pydantic model (if a `response_format` was provided)
- `agent_role`: Role of the agent that produced the output
- `usage_metrics`: Token usage metrics for the execution
### Structured Output
You can get structured output by providing a Pydantic model as the `response_format`:
```python Code
from pydantic import BaseModel
from typing import List
class ResearchFindings(BaseModel):
main_points: List[str]
key_technologies: List[str]
future_predictions: str
# Get structured output
result = researcher.kickoff(
"Summarize the latest developments in AI for 2025",
response_format=ResearchFindings
)
# Access structured data
print(result.pydantic.main_points)
print(result.pydantic.future_predictions)
```
### Multiple Messages
You can also provide a conversation history as a list of message dictionaries:
```python Code
messages = [
{"role": "user", "content": "I need information about large language models"},
{"role": "assistant", "content": "I'd be happy to help with that! What specifically would you like to know?"},
{"role": "user", "content": "What are the latest developments in 2025?"}
]
result = researcher.kickoff(messages)
```
### Async Support
An asynchronous version is available via `kickoff_async()` with the same parameters:
```python Code
import asyncio
async def main():
result = await researcher.kickoff_async("What are the latest developments in AI?")
print(result.raw)
asyncio.run(main())
```
<Note>
The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler execution flow while preserving all of the agent's configuration (role, goal, backstory, tools, etc.).
</Note>
## Important Considerations and Best Practices
### Security and Code Execution
@@ -537,7 +638,6 @@ The context window management feature works automatically in the background. You
- Adjust `max_iter` and `max_retry_limit` based on task complexity
### Memory and Context Management
- Use `memory: true` for tasks requiring historical context
- Leverage `knowledge_sources` for domain-specific information
- Configure `embedder` when using custom embedding models
- Use custom templates (`system_template`, `prompt_template`, `response_template`) for fine-grained control over agent behavior
@@ -585,7 +685,6 @@ The context window management feature works automatically in the background. You
- Review code sandbox settings
4. **Memory Issues**: If agent responses seem inconsistent:
- Verify memory is enabled
- Check knowledge source configuration
- Review conversation history management

View File

@@ -2,8 +2,11 @@
title: CLI
description: Learn how to use the CrewAI CLI to interact with CrewAI.
icon: terminal
mode: "wide"
---
<Warning>Since release 0.140.0, CrewAI Enterprise has been migrating its login provider, and the CLI authentication flow was updated accordingly. Users who log in with Google, or who created their account after July 3rd, 2025, will be unable to log in with older versions of the `crewai` library.</Warning>
## Overview
The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews & flows.
@@ -86,7 +89,7 @@ crewai replay [OPTIONS]
- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks
Example:
```shell Terminal
```shell Terminal
crewai replay -t task_123456
```
@@ -132,7 +135,7 @@ crewai test [OPTIONS]
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")
Example:
```shell Terminal
```shell Terminal
crewai test -n 5 -m gpt-3.5-turbo
```
@@ -149,7 +152,7 @@ Starting from version 0.103.0, the `crewai run` command can be used to run both
</Note>
<Note>
Make sure to run these commands from the directory where your CrewAI project is set up.
Make sure to run these commands from the directory where your CrewAI project is set up.
Some commands may require additional configuration or setup within your project structure.
</Note>
@@ -186,10 +189,7 @@ def crew(self) -> Crew:
Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).
- **Authentication**: You need to be authenticated to deploy to CrewAI Enterprise.
```shell Terminal
crewai signup
```
If you already have an account, you can login with:
You can login or create an account with:
```shell Terminal
crewai login
```
@@ -200,12 +200,43 @@ Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).
```
- Reads your local project configuration.
- Prompts you to confirm the environment variables (like `OPENAI_API_KEY`, `SERPER_API_KEY`) found locally. These will be securely stored with the deployment on the Enterprise platform. Ensure your sensitive keys are correctly configured locally (e.g., in a `.env` file) before running this.
### 11. Organization Management
Manage your CrewAI Enterprise organizations.
```shell Terminal
crewai org [COMMAND] [OPTIONS]
```
#### Commands:
- `list`: List all organizations you belong to
```shell Terminal
crewai org list
```
- `current`: Display your currently active organization
```shell Terminal
crewai org current
```
- `switch`: Switch to a specific organization
```shell Terminal
crewai org switch <organization_id>
```
<Note>
You must be authenticated to CrewAI Enterprise to use these organization management commands.
</Note>
- **Create a deployment** (continued):
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI Enterprise.
```shell Terminal
crewai deploy push
```
```
- Initiates the deployment process on the CrewAI Enterprise platform.
- Upon successful initiation, it will output the Deployment created successfully! message along with the Deployment Name and a unique Deployment ID (UUID).
@@ -252,15 +283,33 @@ Watch this video tutorial for a step-by-step demonstration of deploying your cre
allowfullscreen
></iframe>
### 11. API Keys
### 11. Login
When running ```crewai create crew``` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one.
Authenticate with CrewAI Enterprise using a secure device code flow (no email entry required).
Once you've selected an LLM provider, you will be prompted for API keys.
```shell Terminal
crewai login
```
#### Initial API key providers
What happens:
- A verification URL and short code are displayed in your terminal
- Your browser opens to the verification URL
- Enter/confirm the code to complete authentication
The CLI will initially prompt for API keys for the following services:
Notes:
- The OAuth2 provider and domain are configured via `crewai config` (defaults use `login.crewai.com`)
- After successful login, the CLI also attempts to authenticate to the Tool Repository automatically
- If you reset your configuration, run `crewai login` again to re-authenticate
### 12. API Keys
When running the ```crewai create crew``` command, the CLI will show you a list of available LLM providers to choose from, followed by model selection for your chosen provider.
Once you've selected an LLM provider and model, you will be prompted for API keys.
#### Available LLM Providers
Here's a list of the most popular LLM providers suggested by the CLI:
* OpenAI
* Groq
@@ -268,11 +317,11 @@ The CLI will initially prompt for API keys for the following services:
* Google Gemini
* SambaNova
When you select a provider, the CLI will prompt you to enter your API key.
When you select a provider, the CLI will then show you available models for that provider and prompt you to enter your API key.
#### Other Options
If you select option 6, you will be able to select from a list of LiteLLM supported providers.
If you select "other", you will be able to select from a list of LiteLLM supported providers.
When you select a provider, the CLI will prompt you to enter the Key name and the API key.
@@ -280,5 +329,81 @@ See the following link for each provider's key name:
* [LiteLLM Providers](https://docs.litellm.ai/docs/providers)
### 13. Configuration Management
Manage CLI configuration settings for CrewAI.
```shell Terminal
crewai config [COMMAND] [OPTIONS]
```
#### Commands:
- `list`: Display all CLI configuration parameters
```shell Terminal
crewai config list
```
- `set`: Set a CLI configuration parameter
```shell Terminal
crewai config set <key> <value>
```
- `reset`: Reset all CLI configuration parameters to default values
```shell Terminal
crewai config reset
```
#### Available Configuration Parameters
- `enterprise_base_url`: Base URL of the CrewAI Enterprise instance
- `oauth2_provider`: OAuth2 provider used for authentication (e.g., workos, okta, auth0)
- `oauth2_audience`: OAuth2 audience value, typically used to identify the target API or resource
- `oauth2_client_id`: OAuth2 client ID issued by the provider, used during authentication requests
- `oauth2_domain`: OAuth2 provider's domain (e.g., your-org.auth0.com) used for issuing tokens
#### Examples
Display current configuration:
```shell Terminal
crewai config list
```
Example output:
| Setting | Value | Description |
| :------------------ | :----------------------- | :---------------------------------------------------------- |
| enterprise_base_url | https://app.crewai.com | Base URL of the CrewAI Enterprise instance |
| org_name | Not set | Name of the currently active organization |
| org_uuid | Not set | UUID of the currently active organization |
| oauth2_provider | workos | OAuth2 provider (e.g., workos, okta, auth0) |
| oauth2_audience | client_01YYY | Audience identifying the target API/resource |
| oauth2_client_id | client_01XXX | OAuth2 client ID issued by the provider |
| oauth2_domain | login.crewai.com | Provider domain (e.g., your-org.auth0.com) |
Set the enterprise base URL:
```shell Terminal
crewai config set enterprise_base_url https://my-enterprise.crewai.com
```
Set OAuth2 provider:
```shell Terminal
crewai config set oauth2_provider auth0
```
Set OAuth2 domain:
```shell Terminal
crewai config set oauth2_domain my-company.auth0.com
```
Reset all configuration to defaults:
```shell Terminal
crewai config reset
```
<Tip>
After resetting configuration, re-run `crewai login` to authenticate again.
</Tip>
<Note>
Configuration settings are stored in `~/.config/crewai/settings.json`. Some settings like organization name and UUID are read-only and managed through authentication and organization commands. Tool repository related settings are hidden and cannot be set directly by users.
</Note>
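If you need to inspect these values from a script, here is a minimal sketch that reads the file directly. It assumes the file is plain JSON whose keys match the `crewai config list` table shown above; treat it as illustrative rather than a supported API.
```python Code
# Minimal sketch (not part of the CLI): read the settings file mentioned above.
# Assumes plain JSON with keys matching the `crewai config list` table.
import json
from pathlib import Path

settings_path = Path.home() / ".config" / "crewai" / "settings.json"
if settings_path.exists():
    settings = json.loads(settings_path.read_text())
    print(settings.get("enterprise_base_url"))
    print(settings.get("oauth2_domain"))
```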

View File

@@ -2,6 +2,7 @@
title: Collaboration
description: How to enable agents to work together, delegate tasks, and communicate effectively within CrewAI teams.
icon: screen-users
mode: "wide"
---
## Overview

View File

@@ -2,6 +2,7 @@
title: Crews
description: Understanding and utilizing crews in the crewAI framework with comprehensive attributes and functionalities.
icon: people-group
mode: "wide"
---
## Overview
@@ -20,8 +21,7 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** _(optional)_ | `config` | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Memory Config** _(optional)_ | `memory_config` | Configuration for the memory provider to be used by the crew. |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
@@ -32,6 +32,7 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | Knowledge sources available at the crew level, accessible to all the agents. |
<Tip>
**Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
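As a rough illustration of that override (the agent, task, and rate values are placeholders, not a recommended configuration):
```python Code
from crewai import Agent, Crew, Process, Task

researcher = Agent(
    role="Researcher",
    goal="Research topics",
    backstory="Expert researcher",
    max_rpm=30,  # ignored once the crew defines its own limit
)

research_task = Task(
    description="Summarize the latest AI news",
    expected_output="A short summary",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    process=Process.sequential,
    max_rpm=10,  # crew-wide ceiling that overrides per-agent max_rpm
)
```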
@@ -45,7 +46,7 @@ There are two ways to create crews in CrewAI: using **YAML configuration (recomm
Using YAML configuration provides a cleaner, more maintainable way to define crews and is consistent with how agents and tasks are defined in CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/en/installation) section, you can define your crew in a class that inherits from `CrewBase` and uses decorators to define agents, tasks, and the crew itself.
#### Example Crew Class with Decorators
@@ -66,8 +67,8 @@ class YourCrewName:
# To see an example agent and task defined in YAML, checkout the following:
# - Task: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
# - Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
@before_kickoff
def prepare_inputs(self, inputs):
@@ -111,7 +112,7 @@ class YourCrewName:
def crew(self) -> Crew:
return Crew(
agents=self.agents, # Automatically collected by the @agent decorator
tasks=self.tasks, # Automatically collected by the @task decorator.
process=Process.sequential,
verbose=True,
)
@@ -325,12 +326,12 @@ for result in results:
# Example of using kickoff_async
inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.kickoff_async(inputs=inputs)
print(async_result)
# Example of using kickoff_for_each_async
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
for async_result in async_results:
print(async_result)
```

View File

@@ -2,6 +2,7 @@
title: 'Event Listeners'
description: 'Tap into CrewAI events to build custom integrations and monitoring'
icon: spinner
mode: "wide"
---
## Overview
@@ -44,12 +45,12 @@ To create a custom event listener, you need to:
Here's a simple example of a custom event listener class:
```python
from crewai.events import (
CrewKickoffStartedEvent,
CrewKickoffCompletedEvent,
AgentExecutionCompletedEvent,
)
from crewai.events import BaseEventListener
class MyCustomListener(BaseEventListener):
def __init__(self):
@@ -146,7 +147,7 @@ my_project/
```python
# my_custom_listener.py
from crewai.events import BaseEventListener
# ... import events ...
class MyCustomListener(BaseEventListener):
@@ -177,14 +178,7 @@ class MyCustomCrew:
# Your crew implementation...
```
This is how third-party event listeners are registered in the CrewAI codebase.
## Available Event Types
@@ -233,6 +227,11 @@ CrewAI provides a wide range of events that you can listen for:
- **KnowledgeQueryFailedEvent**: Emitted when a knowledge query fails
- **KnowledgeSearchQueryFailedEvent**: Emitted when a knowledge search query fails
### LLM Guardrail Events
- **LLMGuardrailStartedEvent**: Emitted when a guardrail validation starts. Contains details about the guardrail being applied and retry count.
- **LLMGuardrailCompletedEvent**: Emitted when a guardrail validation completes. Contains details about validation success/failure, results, and error messages if any.
### Flow Events
- **FlowCreatedEvent**: Emitted when a Flow is created
@@ -250,6 +249,17 @@ CrewAI provides a wide range of events that you can listen for:
- **LLMCallFailedEvent**: Emitted when an LLM call fails
- **LLMStreamChunkEvent**: Emitted for each chunk received during streaming LLM responses
### Memory Events
- **MemoryQueryStartedEvent**: Emitted when a memory query is started. Contains the query, limit, and optional score threshold.
- **MemoryQueryCompletedEvent**: Emitted when a memory query is completed successfully. Contains the query, results, limit, score threshold, and query execution time.
- **MemoryQueryFailedEvent**: Emitted when a memory query fails. Contains the query, limit, score threshold, and error message.
- **MemorySaveStartedEvent**: Emitted when a memory save operation is started. Contains the value to be saved, metadata, and optional agent role.
- **MemorySaveCompletedEvent**: Emitted when a memory save operation is completed successfully. Contains the saved value, metadata, agent role, and save execution time.
- **MemorySaveFailedEvent**: Emitted when a memory save operation fails. Contains the value, metadata, agent role, and error message.
- **MemoryRetrievalStartedEvent**: Emitted when memory retrieval for a task prompt starts. Contains the optional task ID.
- **MemoryRetrievalCompletedEvent**: Emitted when memory retrieval for a task prompt completes successfully. Contains the task ID, memory content, and retrieval execution time.
## Event Handler Structure
Each event handler receives two parameters:
@@ -264,84 +274,13 @@ The structure of the event object depends on the event type, but all events inhe
Additional fields vary by event type. For example, `CrewKickoffCompletedEvent` includes `crew_name` and `output` fields.
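For example, a small listener sketch that uses only the two-parameter handler signature and the `CrewKickoffCompletedEvent` fields mentioned above:
```python
from crewai.events import BaseEventListener, CrewKickoffCompletedEvent

class KickoffSummaryListener(BaseEventListener):
    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(CrewKickoffCompletedEvent)
        def on_kickoff_completed(source, event: CrewKickoffCompletedEvent):
            # `source` is the emitting object; `event` carries the payload fields
            print(f"Crew '{event.crew_name}' finished with output: {event.output}")

kickoff_summary_listener = KickoffSummaryListener()
```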
## Real-World Example: Integration with AgentOps
CrewAI includes an example of a third-party integration with [AgentOps](https://github.com/AgentOps-AI/agentops), a monitoring and observability platform for AI agents. Here's how it's implemented:
```python
from typing import Optional
from crewai.utilities.events import (
CrewKickoffCompletedEvent,
ToolUsageErrorEvent,
ToolUsageStartedEvent,
)
from crewai.utilities.events.base_event_listener import BaseEventListener
from crewai.utilities.events.crew_events import CrewKickoffStartedEvent
from crewai.utilities.events.task_events import TaskEvaluationEvent
try:
import agentops
AGENTOPS_INSTALLED = True
except ImportError:
AGENTOPS_INSTALLED = False
class AgentOpsListener(BaseEventListener):
tool_event: Optional["agentops.ToolEvent"] = None
session: Optional["agentops.Session"] = None
def __init__(self):
super().__init__()
def setup_listeners(self, crewai_event_bus):
if not AGENTOPS_INSTALLED:
return
@crewai_event_bus.on(CrewKickoffStartedEvent)
def on_crew_kickoff_started(source, event: CrewKickoffStartedEvent):
self.session = agentops.init()
for agent in source.agents:
if self.session:
self.session.create_agent(
name=agent.role,
agent_id=str(agent.id),
)
@crewai_event_bus.on(CrewKickoffCompletedEvent)
def on_crew_kickoff_completed(source, event: CrewKickoffCompletedEvent):
if self.session:
self.session.end_session(
end_state="Success",
end_state_reason="Finished Execution",
)
@crewai_event_bus.on(ToolUsageStartedEvent)
def on_tool_usage_started(source, event: ToolUsageStartedEvent):
self.tool_event = agentops.ToolEvent(name=event.tool_name)
if self.session:
self.session.record(self.tool_event)
@crewai_event_bus.on(ToolUsageErrorEvent)
def on_tool_usage_error(source, event: ToolUsageErrorEvent):
agentops.ErrorEvent(exception=event.error, trigger_event=self.tool_event)
```
This listener initializes an AgentOps session when a Crew starts, registers agents with AgentOps, tracks tool usage, and ends the session when the Crew completes.
The AgentOps listener is registered in CrewAI's event system through the import in `src/crewai/utilities/events/third_party/__init__.py`:
```python
from .agentops_listener import agentops_listener
```
This ensures the `agentops_listener` is loaded when the `crewai.utilities.events` package is imported.
## Advanced Usage: Scoped Handlers
For temporary event handling (useful for testing or specific operations), you can use the `scoped_handlers` context manager:
```python
from crewai.events import crewai_event_bus, CrewKickoffStartedEvent
with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(CrewKickoffStartedEvent)

View File

@@ -2,6 +2,7 @@
title: Flows
description: Learn how to create and manage AI workflows using CrewAI Flows.
icon: arrow-progress
mode: "wide"
---
## Overview
@@ -97,7 +98,13 @@ The state's unique ID and stored data can be useful for tracking flow executions
### @start()
The `@start()` decorator marks entry points for a Flow. You can:
- Declare multiple unconditional starts: `@start()`
- Gate a start on a prior method or router label: `@start("method_or_label")`
- Provide a callable condition to control when a start should fire
All satisfied `@start()` methods will execute (often in parallel) when the Flow begins or resumes.
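A minimal sketch of these variants (the method names and the gating target are illustrative; the import path follows the Flow examples used elsewhere in these docs):
```python Code
from crewai.flow.flow import Flow, listen, start

class ExampleFlow(Flow):
    @start()
    def fetch_data(self):
        # unconditional entry point
        return "raw data"

    @start("fetch_data")
    def gated_entry(self):
        # gated start: fires once "fetch_data" (a method or router label) has run
        return "follow-up work"

    @listen(fetch_data)
    def process(self, data):
        return f"processed {data}"

flow = ExampleFlow()
flow.kickoff()
```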
### @listen()

View File

@@ -2,6 +2,7 @@
title: Knowledge
description: What is knowledge in CrewAI and how to use it.
icon: book
mode: "wide"
---
## Overview
@@ -24,6 +25,41 @@ For file-based Knowledge Sources, make sure to place your files in a `knowledge`
Also, use relative paths from the `knowledge` directory when creating the source.
</Tip>
### Vector store (RAG) client configuration
CrewAI exposes a provider-neutral RAG client abstraction for vector stores. The default provider is ChromaDB, and Qdrant is supported as well. You can switch providers using configuration utilities.
Supported today:
- ChromaDB (default)
- Qdrant
```python Code
from crewai.rag.config.utils import set_rag_config, get_rag_client, clear_rag_config
# ChromaDB (default)
from crewai.rag.chromadb.config import ChromaDBConfig
set_rag_config(ChromaDBConfig())
chromadb_client = get_rag_client()
# Qdrant
from crewai.rag.qdrant.config import QdrantConfig
set_rag_config(QdrantConfig())
qdrant_client = get_rag_client()
# Example operations (same API for any provider)
client = qdrant_client # or chromadb_client
client.create_collection(collection_name="docs")
client.add_documents(
collection_name="docs",
documents=[{"id": "1", "content": "CrewAI enables collaborative AI agents."}],
)
results = client.search(collection_name="docs", query="collaborative agents", limit=3)
clear_rag_config() # optional reset
```
This RAG client is separate from Knowledge's built-in storage. Use it when you need direct vector-store control or custom retrieval pipelines.
### Basic String Knowledge Example
```python Code
@@ -602,6 +638,30 @@ agent = Agent(
)
```
#### Configuring Azure OpenAI Embeddings
When using Azure OpenAI embeddings:
1. Make sure you deploy the embedding model on the Azure platform first
2. Then use the following configuration:
```python
agent = Agent(
role="Researcher",
goal="Research topics",
backstory="Expert researcher",
knowledge_sources=[knowledge_source],
embedder={
"provider": "azure",
"config": {
"api_key": "your-azure-api-key",
"model": "text-embedding-ada-002", # change to the embedding model you deployed in Azure
"api_base": "https://your-azure-endpoint.openai.azure.com/",
"api_version": "2024-02-01"
}
}
)
```
## Advanced Features
### Query Rewriting
@@ -657,11 +717,11 @@ CrewAI emits events during the knowledge retrieval process that you can listen f
#### Example: Monitoring Knowledge Retrieval
```python
from crewai.events import (
KnowledgeRetrievalStartedEvent,
KnowledgeRetrievalCompletedEvent,
BaseEventListener,
)
class KnowledgeMonitorListener(BaseEventListener):
def setup_listeners(self, crewai_event_bus):

View File

@@ -2,6 +2,7 @@
title: 'LLMs'
description: 'A comprehensive guide to configuring and using Large Language Models (LLMs) in your CrewAI projects'
icon: 'microchip-ai'
mode: "wide"
---
## Overview
@@ -270,7 +271,7 @@ In this section, you'll find detailed examples that help you select, configure,
from crewai import LLM
llm = LLM(
model="gemini-1.5-pro-latest", # or vertex_ai/gemini-1.5-pro-latest
temperature=0.7,
vertex_credentials=vertex_credentials_json
)
@@ -684,6 +685,28 @@ In this section, you'll find detailed examples that help you select, configure,
- openrouter/deepseek/deepseek-chat
</Info>
</Accordion>
<Accordion title="Nebius AI Studio">
Set the following environment variables in your `.env` file:
```toml Code
NEBIUS_API_KEY=<your-api-key>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="nebius/Qwen/Qwen3-30B-A3B"
)
```
<Info>
Nebius AI Studio features:
- Large collection of open source models
- Higher rate limits
- Competitive pricing
- Good balance of speed and quality
</Info>
</Accordion>
</AccordionGroup>
## Streaming Responses
@@ -711,10 +734,10 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
CrewAI emits events for each chunk received during streaming:
```python
from crewai.events import (
LLMStreamChunkEvent
)
from crewai.events import BaseEventListener
class MyCustomListener(BaseEventListener):
def setup_listeners(self, crewai_event_bus):
@@ -727,9 +750,58 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
```
<Tip>
[Click here](https://docs.crewai.com/concepts/event-listener#event-listeners) for more details
</Tip>
</Tab>
<Tab title="Agent & Task Tracking">
All LLM events in CrewAI include agent and task information, allowing you to track and filter LLM interactions by specific agents or tasks:
```python
from crewai import LLM, Agent, Task, Crew
from crewai.events import LLMStreamChunkEvent
from crewai.events import BaseEventListener
class MyCustomListener(BaseEventListener):
def setup_listeners(self, crewai_event_bus):
@crewai_event_bus.on(LLMStreamChunkEvent)
def on_llm_stream_chunk(source, event):
if researcher.id == event.agent_id:
print("\n==============\n Got event:", event, "\n==============\n")
my_listener = MyCustomListener()
llm = LLM(model="gpt-4o-mini", temperature=0, stream=True)
researcher = Agent(
role="About User",
goal="You know everything about the user.",
backstory="""You are a master at understanding people and their preferences.""",
llm=llm,
)
search = Task(
description="Answer the following questions about the user: {question}",
expected_output="An answer to the question.",
agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[search])
result = crew.kickoff(
inputs={"question": "..."}
)
```
<Info>
This feature is particularly useful for:
- Debugging specific agent behaviors
- Logging LLM usage by task type
- Auditing which agents are making what types of LLM calls
- Performance monitoring of specific tasks
</Info>
</Tab>
</Tabs>
## Structured LLM Calls
@@ -825,7 +897,7 @@ Learn how to get the most out of your LLM configuration:
Remember to regularly monitor your token usage and adjust your configuration as needed to optimize costs and performance.
</Info>
</Accordion>
<Accordion title="Drop Additional Parameters">
CrewAI internally uses Litellm for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
For example, if you don't need to send the <code>stop</code> parameter, you can simply omit it from your LLM call:
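For instance, a minimal sketch where the configuration just leaves out anything you don't need (the model and temperature values are illustrative):
```python Code
from crewai import LLM

# Only the parameters you pass are included in the call; `stop` is simply left out.
llm = LLM(
    model="gpt-4o-mini",
    temperature=0.2,
)
```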

View File

@@ -2,15 +2,15 @@
title: Memory
description: Leveraging memory systems in the CrewAI framework to enhance agent capabilities.
icon: database
mode: "wide"
---
## Overview
The CrewAI framework provides a sophisticated memory system designed to significantly enhance AI agent capabilities. CrewAI offers **two distinct memory approaches** that serve different use cases:
1. **Basic Memory System** - Built-in short-term, long-term, and entity memory
2. **External Memory** - Standalone external memory providers
## Memory System Components
@@ -19,7 +19,7 @@ The CrewAI framework provides a sophisticated memory system designed to signific
| **Short-Term Memory**| Temporarily stores recent interactions and outcomes using `RAG`, enabling agents to recall and utilize information relevant to their current context during the current execution.|
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, `ExternalMemory` and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
## 1. Basic Memory System (Recommended)
@@ -62,7 +62,7 @@ By default, CrewAI uses the `appdirs` library to determine storage locations fol
```
~/Library/Application Support/CrewAI/{project_name}/
├── knowledge/ # Knowledge base ChromaDB files
├── short_term_memory/ # Short-term memory ChromaDB files
├── long_term_memory/ # Long-term memory ChromaDB files
├── entities/ # Entity memory ChromaDB files
└── long_term_memory_storage.db # SQLite database
@@ -202,7 +202,7 @@ crew = Crew(
tasks=[task],
memory=True,
embedder={
"provider": "anthropic", # Match your LLM provider
"config": {
"api_key": "your-anthropic-key",
"model": "text-embedding-3-small"
@@ -252,7 +252,7 @@ chroma_path = os.path.join(storage_path, "knowledge")
if os.path.exists(chroma_path):
client = chromadb.PersistentClient(path=chroma_path)
collections = client.list_collections()
print("ChromaDB Collections:")
for collection in collections:
print(f" - {collection.name}: {collection.count()} documents")
@@ -269,7 +269,7 @@ crew = Crew(agents=[...], tasks=[...], memory=True)
# Reset specific memory types
crew.reset_memories(command_type='short') # Short-term memory
crew.reset_memories(command_type='long') # Long-term memory
crew.reset_memories(command_type='entity') # Entity memory
crew.reset_memories(command_type='knowledge') # Knowledge storage
```
@@ -540,16 +540,71 @@ crew = Crew(
)
```
### Mem0 Provider
Short-Term Memory and Entity Memory both support tight integration with Mem0 OSS and the Mem0 Client as a provider. Here is how you can use Mem0 as a provider.
```python
from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.memory.entity.entity_memory import EntityMemory
mem0_oss_embedder_config = {
"provider": "mem0",
"config": {
"user_id": "john",
"local_mem0_config": {
"vector_store": {"provider": "qdrant","config": {"host": "localhost", "port": 6333}},
"llm": {"provider": "openai","config": {"api_key": "your-api-key", "model": "gpt-4"}},
"embedder": {"provider": "openai","config": {"api_key": "your-api-key", "model": "text-embedding-3-small"}}
},
"infer": True # Optional defaults to True
},
}
mem0_client_embedder_config = {
"provider": "mem0",
"config": {
"user_id": "john",
"org_id": "my_org_id", # Optional
"project_id": "my_project_id", # Optional
"api_key": "custom-api-key", # Optional - overrides env var
"run_id": "my_run_id", # Optional - for short-term memory
"includes": "include1", # Optional
"excludes": "exclude1", # Optional
"infer": True, # Optional defaults to True
"custom_categories": new_categories # Optional - custom categories for user memory
},
}
short_term_memory_mem0_oss = ShortTermMemory(embedder_config=mem0_oss_embedder_config) # Short Term Memory with Mem0 OSS
short_term_memory_mem0_client = ShortTermMemory(embedder_config=mem0_client_embedder_config) # Short Term Memory with Mem0 Client
entity_memory_mem0_oss = EntityMemory(embedder_config=mem0_oss_embedder_config) # Entity Memory with Mem0 OSS
entity_memory_mem0_client = EntityMemory(embedder_config=mem0_client_embedder_config) # Entity Memory with Mem0 Client
crew = Crew(
memory=True,
short_term_memory=short_term_memory_mem0_oss, # or short_term_memory_mem0_client
entity_memory=entity_memory_mem0_oss # or entity_memory_mem0_client
)
```
### Choosing the Right Embedding Provider
When selecting an embedding provider, consider factors like performance, privacy, cost, and integration needs.
Below is a comparison to help you decide:
| Provider | Best For | Pros | Cons |
| -------------- | ------------------------------ | --------------------------------- | ------------------------- |
| **OpenAI** | General use, high reliability | High quality, widely tested | Paid service, API key required |
| **Ollama** | Privacy-focused, cost savings | Free, runs locally, fully private | Requires local installation/setup |
| **Google AI** | Integration in Google ecosystem| Strong performance, good support | Google account required |
| **Azure OpenAI** | Enterprise & compliance needs| Enterprise-grade features, security | More complex setup process |
| **Cohere** | Multilingual content handling | Excellent language support | More niche use cases |
| **VoyageAI** | Information retrieval & search | Optimized for retrieval tasks | Relatively new provider |
| **Mem0** | Per-user personalization | Search-optimized embeddings | Paid service, API key required |
### Environment Variable Configuration
@@ -596,7 +651,7 @@ providers_to_test = [
{
"name": "Ollama",
"config": {
"provider": "ollama",
"config": {"model": "mxbai-embed-large"}
}
}
@@ -604,7 +659,7 @@ providers_to_test = [
for provider in providers_to_test:
print(f"\nTesting {provider['name']} embeddings...")
# Create crew with specific embedder
crew = Crew(
agents=[...],
@@ -612,7 +667,7 @@ for provider in providers_to_test:
memory=True,
embedder=provider['config']
)
# Run your test and measure performance
result = crew.kickoff()
print(f"{provider['name']} completed successfully")
@@ -623,7 +678,7 @@ for provider in providers_to_test:
**Model not found errors:**
```python
# Verify model availability
from crewai.rag.embeddings.configurator import EmbeddingConfigurator
configurator = EmbeddingConfigurator()
try:
@@ -655,17 +710,17 @@ import time
def test_embedding_performance(embedder_config, test_text="This is a test document"):
start_time = time.time()
crew = Crew(
agents=[...],
tasks=[...],
memory=True,
embedder=embedder_config
)
# Simulate memory operation
crew.kickoff()
end_time = time.time()
return end_time - start_time
@@ -676,7 +731,7 @@ openai_time = test_embedding_performance({
})
ollama_time = test_embedding_performance({
"provider": "ollama",
"config": {"model": "mxbai-embed-large"}
})
@@ -684,67 +739,29 @@ print(f"OpenAI: {openai_time:.2f}s")
print(f"Ollama: {ollama_time:.2f}s")
```
## 2. User Memory with Mem0 (Legacy)
### Entity Memory batching behavior
<Warning>
**Legacy Approach**: While fully functional, this approach is considered legacy. For new projects requiring user-specific memory, consider using External Memory instead.
</Warning>
Entity Memory supports batching when saving multiple entities at once. When you pass a list of `EntityMemoryItem`, the system:
User Memory integrates with [Mem0](https://mem0.ai/) to provide user-specific memory that persists across sessions and integrates with the crew's contextual memory system.
- Emits a single MemorySaveStartedEvent with `entity_count`
- Saves each entity internally, collecting any partial errors
- Emits MemorySaveCompletedEvent with aggregate metadata (saved count, errors)
- Raises a partial-save exception if some entities failed (includes counts)
### Prerequisites
```bash
pip install mem0ai
```
This improves performance and observability when writing many entities in one operation.
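A small sketch of a listener that surfaces this aggregate metadata; the `entity_count` attribute is read defensively because it is only present for batched entity saves:
```python
from crewai.events import BaseEventListener, MemorySaveStartedEvent, MemorySaveCompletedEvent

class EntityBatchMonitor(BaseEventListener):
    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(MemorySaveStartedEvent)
        def on_save_started(source, event):
            count = getattr(event, "entity_count", None)  # only set for batched entity saves
            if count:
                print(f"Saving a batch of {count} entities")

        @crewai_event_bus.on(MemorySaveCompletedEvent)
        def on_save_completed(source, event):
            # fires for every memory save; batched saves report aggregate metadata here
            print(f"Memory save completed in {event.save_time_ms:.2f}ms")

entity_batch_monitor = EntityBatchMonitor()
```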
### Mem0 Cloud Configuration
## 2. External Memory
External Memory provides a standalone memory system that operates independently from the crew's built-in memory. This is ideal for specialized memory providers or cross-application memory sharing.
### Basic External Memory with Mem0
```python
import os
from crewai import Crew, Process
from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory
# Set your Mem0 API key
os.environ["MEM0_API_KEY"] = "m0-your-api-key"
crew = Crew(
agents=[...],
tasks=[...],
memory=True, # Required for contextual memory integration
memory_config={
"provider": "mem0",
"config": {"user_id": "john"},
"user_memory": {} # Required - triggers user memory initialization
},
process=Process.sequential,
verbose=True
)
```
### Advanced Mem0 Configuration
```python
crew = Crew(
agents=[...],
tasks=[...],
memory=True,
memory_config={
"provider": "mem0",
"config": {
"user_id": "john",
"org_id": "my_org_id", # Optional
"project_id": "my_project_id", # Optional
"api_key": "custom-api-key" # Optional - overrides env var
},
"user_memory": {}
}
)
```
### Local Mem0 Configuration
```python
crew = Crew(
agents=[...],
tasks=[...],
memory=True,
memory_config={
# Create external memory instance with local Mem0 Configuration
external_memory = ExternalMemory(
embedder_config={
"provider": "mem0",
"config": {
"user_id": "john",
@@ -761,37 +778,60 @@ crew = Crew(
"provider": "openai",
"config": {"api_key": "your-api-key", "model": "text-embedding-3-small"}
}
}
},
"infer": True # Optional defaults to True
},
"user_memory": {}
}
)
```
## 3. External Memory (New Approach)
External Memory provides a standalone memory system that operates independently from the crew's built-in memory. This is ideal for specialized memory providers or cross-application memory sharing.
### Basic External Memory with Mem0
```python
import os
from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory
os.environ["MEM0_API_KEY"] = "your-api-key"
# Create external memory instance
external_memory = ExternalMemory(
embedder_config={
"provider": "mem0",
"config": {"user_id": "U-123"}
}
)
crew = Crew(
agents=[...],
tasks=[...],
external_memory=external_memory, # Separate from basic memory
process=Process.sequential,
verbose=True
)
```
### Advanced External Memory with Mem0 Client
When using the Mem0 Client, you can customize the memory configuration further by using parameters like 'includes', 'excludes', 'custom_categories', 'infer' and 'run_id' (the latter applies only to short-term memory).
You can find more details in the [Mem0 documentation](https://docs.mem0.ai/).
```python
import os
from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory
new_categories = [
{"lifestyle_management_concerns": "Tracks daily routines, habits, hobbies and interests including cooking, time management and work-life balance"},
{"seeking_structure": "Documents goals around creating routines, schedules, and organized systems in various life areas"},
{"personal_information": "Basic information about the user including name, preferences, and personality traits"}
]
os.environ["MEM0_API_KEY"] = "your-api-key"
# Create external memory instance with Mem0 Client
external_memory = ExternalMemory(
embedder_config={
"provider": "mem0",
"config": {
"user_id": "john",
"org_id": "my_org_id", # Optional
"project_id": "my_project_id", # Optional
"api_key": "custom-api-key", # Optional - overrides env var
"run_id": "my_run_id", # Optional - for short-term memory
"includes": "include1", # Optional
"excludes": "exclude1", # Optional
"infer": True, # Optional defaults to True
"custom_categories": new_categories # Optional - custom categories for user memory
},
}
)
crew = Crew(
agents=[...],
tasks=[...],
external_memory=external_memory, # Separate from basic memory
process=Process.sequential,
verbose=True
)
@@ -808,8 +848,8 @@ class CustomStorage(Storage):
def save(self, value, metadata=None, agent=None):
self.memories.append({
"value": value,
"metadata": metadata,
"agent": agent
})
@@ -830,17 +870,18 @@ crew = Crew(
)
```
## Memory System Comparison
## 🧠 Memory System Comparison
| **Category** | **Feature** | **Basic Memory** | **External Memory** |
|---------------------|------------------------|-----------------------------|------------------------------|
| **Ease of Use** | Setup Complexity | Simple | Moderate |
| | Integration | Built-in (contextual) | Standalone |
| **Persistence** | Storage | Local files | Custom / Mem0 |
| | Cross-session Support | ✅ | ✅ |
| **Personalization** | User-specific Memory | ❌ | ✅ |
| | Custom Providers | Limited | Any provider |
| **Use Case Fit** | Recommended For | Most general use cases | Specialized / custom needs |
| Feature | Basic Memory | User Memory (Legacy) | External Memory |
|---------|-------------|---------------------|----------------|
| **Setup Complexity** | Simple | Medium | Medium |
| **Integration** | Built-in contextual | Contextual + User-specific | Standalone |
| **Storage** | Local files | Mem0 Cloud/Local | Custom/Mem0 |
| **Cross-session** | ✅ | ✅ | ✅ |
| **User-specific** | ❌ | ✅ | ✅ |
| **Custom providers** | Limited | Mem0 only | Any provider |
| **Recommended for** | Most use cases | Legacy projects | Specialized needs |
## Supported Embedding Providers
@@ -986,7 +1027,201 @@ crew = Crew(
- 🫡 **Enhanced Personalization:** Memory enables agents to remember user preferences and historical interactions, leading to personalized experiences.
- 🧠 **Improved Problem Solving:** Access to a rich memory store aids agents in making more informed decisions, drawing on past learnings and contextual insights.
## Memory Events
CrewAI's event system provides powerful insights into memory operations. By leveraging memory events, you can monitor, debug, and optimize your memory system's performance and behavior.
### Available Memory Events
CrewAI emits the following memory-related events:
| Event | Description | Key Properties |
| :---- | :---------- | :------------- |
| **MemoryQueryStartedEvent** | Emitted when a memory query begins | `query`, `limit`, `score_threshold` |
| **MemoryQueryCompletedEvent** | Emitted when a memory query completes successfully | `query`, `results`, `limit`, `score_threshold`, `query_time_ms` |
| **MemoryQueryFailedEvent** | Emitted when a memory query fails | `query`, `limit`, `score_threshold`, `error` |
| **MemorySaveStartedEvent** | Emitted when a memory save operation begins | `value`, `metadata`, `agent_role` |
| **MemorySaveCompletedEvent** | Emitted when a memory save operation completes successfully | `value`, `metadata`, `agent_role`, `save_time_ms` |
| **MemorySaveFailedEvent** | Emitted when a memory save operation fails | `value`, `metadata`, `agent_role`, `error` |
| **MemoryRetrievalStartedEvent** | Emitted when memory retrieval for a task prompt starts | `task_id` |
| **MemoryRetrievalCompletedEvent** | Emitted when memory retrieval completes successfully | `task_id`, `memory_content`, `retrieval_time_ms` |
### Practical Applications
#### 1. Memory Performance Monitoring
Track memory operation timing to optimize your application:
```python
from crewai.events import (
BaseEventListener,
MemoryQueryCompletedEvent,
MemorySaveCompletedEvent
)
import time
class MemoryPerformanceMonitor(BaseEventListener):
def __init__(self):
super().__init__()
self.query_times = []
self.save_times = []
def setup_listeners(self, crewai_event_bus):
@crewai_event_bus.on(MemoryQueryCompletedEvent)
def on_memory_query_completed(source, event: MemoryQueryCompletedEvent):
self.query_times.append(event.query_time_ms)
print(f"Memory query completed in {event.query_time_ms:.2f}ms. Query: '{event.query}'")
print(f"Average query time: {sum(self.query_times)/len(self.query_times):.2f}ms")
@crewai_event_bus.on(MemorySaveCompletedEvent)
def on_memory_save_completed(source, event: MemorySaveCompletedEvent):
self.save_times.append(event.save_time_ms)
print(f"Memory save completed in {event.save_time_ms:.2f}ms")
print(f"Average save time: {sum(self.save_times)/len(self.save_times):.2f}ms")
# Create an instance of your listener
memory_monitor = MemoryPerformanceMonitor()
```
#### 2. Memory Content Logging
Log memory operations for debugging and insights:
```python
from crewai.events import (
BaseEventListener,
MemorySaveStartedEvent,
MemoryQueryStartedEvent,
MemoryRetrievalCompletedEvent
)
import logging
# Configure logging
logger = logging.getLogger('memory_events')
class MemoryLogger(BaseEventListener):
def setup_listeners(self, crewai_event_bus):
@crewai_event_bus.on(MemorySaveStartedEvent)
def on_memory_save_started(source, event: MemorySaveStartedEvent):
if event.agent_role:
logger.info(f"Agent '{event.agent_role}' saving memory: {event.value[:50]}...")
else:
logger.info(f"Saving memory: {event.value[:50]}...")
@crewai_event_bus.on(MemoryQueryStartedEvent)
def on_memory_query_started(source, event: MemoryQueryStartedEvent):
logger.info(f"Memory query started: '{event.query}' (limit: {event.limit})")
@crewai_event_bus.on(MemoryRetrievalCompletedEvent)
def on_memory_retrieval_completed(source, event: MemoryRetrievalCompletedEvent):
if event.task_id:
logger.info(f"Memory retrieved for task {event.task_id} in {event.retrieval_time_ms:.2f}ms")
else:
logger.info(f"Memory retrieved in {event.retrieval_time_ms:.2f}ms")
logger.debug(f"Memory content: {event.memory_content}")
# Create an instance of your listener
memory_logger = MemoryLogger()
```
#### 3. Error Tracking and Notifications
Capture and respond to memory errors:
```python
from crewai.events import (
BaseEventListener,
MemorySaveFailedEvent,
MemoryQueryFailedEvent
)
import logging
from typing import Optional
# Configure logging
logger = logging.getLogger('memory_errors')
class MemoryErrorTracker(BaseEventListener):
def __init__(self, notify_email: Optional[str] = None):
super().__init__()
self.notify_email = notify_email
self.error_count = 0
def setup_listeners(self, crewai_event_bus):
@crewai_event_bus.on(MemorySaveFailedEvent)
def on_memory_save_failed(source, event: MemorySaveFailedEvent):
self.error_count += 1
agent_info = f"Agent '{event.agent_role}'" if event.agent_role else "Unknown agent"
error_message = f"Memory save failed: {event.error}. {agent_info}"
logger.error(error_message)
if self.notify_email and self.error_count % 5 == 0:
self._send_notification(error_message)
@crewai_event_bus.on(MemoryQueryFailedEvent)
def on_memory_query_failed(source, event: MemoryQueryFailedEvent):
self.error_count += 1
error_message = f"Memory query failed: {event.error}. Query: '{event.query}'"
logger.error(error_message)
if self.notify_email and self.error_count % 5 == 0:
self._send_notification(error_message)
def _send_notification(self, message):
# Implement your notification system (email, Slack, etc.)
print(f"[NOTIFICATION] Would send to {self.notify_email}: {message}")
# Create an instance of your listener
error_tracker = MemoryErrorTracker(notify_email="admin@example.com")
```
### Integrating with Analytics Platforms
Memory events can be forwarded to analytics and monitoring platforms to track performance metrics, detect anomalies, and visualize memory usage patterns:
```python
from crewai.events import (
BaseEventListener,
MemoryQueryCompletedEvent,
MemorySaveCompletedEvent
)
class MemoryAnalyticsForwarder(BaseEventListener):
def __init__(self, analytics_client):
super().__init__()
self.client = analytics_client
def setup_listeners(self, crewai_event_bus):
@crewai_event_bus.on(MemoryQueryCompletedEvent)
def on_memory_query_completed(source, event: MemoryQueryCompletedEvent):
# Forward query metrics to analytics platform
self.client.track_metric({
"event_type": "memory_query",
"query": event.query,
"duration_ms": event.query_time_ms,
"result_count": len(event.results) if hasattr(event.results, "__len__") else 0,
"timestamp": event.timestamp
})
@crewai_event_bus.on(MemorySaveCompletedEvent)
def on_memory_save_completed(source, event: MemorySaveCompletedEvent):
# Forward save metrics to analytics platform
self.client.track_metric({
"event_type": "memory_save",
"agent_role": event.agent_role,
"duration_ms": event.save_time_ms,
"timestamp": event.timestamp
})
```
### Best Practices for Memory Event Listeners
1. **Keep handlers lightweight**: Avoid complex processing in event handlers to prevent performance impacts
2. **Use appropriate logging levels**: Use INFO for normal operations, DEBUG for details, ERROR for issues
3. **Batch metrics when possible**: Accumulate metrics before sending to external systems (see the sketch after this list)
4. **Handle exceptions gracefully**: Ensure your event handlers don't crash due to unexpected data
5. **Consider memory consumption**: Be mindful of storing large amounts of event data
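As a rough combination of points 3 and 4, here is a sketch that buffers metrics and never lets a handler error escape; `flush_every` and `send_batch` are illustrative placeholders, not CrewAI APIs:
```python
from crewai.events import BaseEventListener, MemoryQueryCompletedEvent

class BatchedMemoryMetrics(BaseEventListener):
    def __init__(self, flush_every: int = 20):
        super().__init__()
        self.flush_every = flush_every
        self.buffer = []

    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(MemoryQueryCompletedEvent)
        def on_query_completed(source, event):
            try:
                self.buffer.append({"query": event.query, "duration_ms": event.query_time_ms})
                if len(self.buffer) >= self.flush_every:
                    self.send_batch(self.buffer)
                    self.buffer = []
            except Exception as exc:  # keep the event bus healthy even if metrics fail
                print(f"metrics handler error: {exc}")

    def send_batch(self, batch):
        # Replace with your analytics client; printing keeps the sketch self-contained.
        print(f"Flushing {len(batch)} memory metrics")

batched_memory_metrics = BatchedMemoryMetrics()
```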
## Conclusion
Integrating CrewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations,
you can quickly empower your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability.

View File

@@ -2,6 +2,7 @@
title: Planning
description: Learn how to add planning to your CrewAI Crew and improve their performance.
icon: ruler-combined
mode: "wide"
---
## Overview
@@ -29,6 +30,10 @@ my_crew = Crew(
From this point on, your crew will have planning enabled, and the tasks will be planned before each iteration.
<Warning>
When planning is enabled, crewAI will use `gpt-4o-mini` as the default LLM for planning, which requires a valid OpenAI API key. Since your agents might be using different LLMs, this could cause confusion if you don't have an OpenAI API key configured or if you're experiencing unexpected behavior related to LLM API calls.
</Warning>
#### Planning LLM
Now you can define the LLM that will be used to plan the tasks.
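For example, a minimal sketch (agents and tasks elided, model name illustrative):
```python Code
from crewai import Crew, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    planning=True,
    planning_llm="gpt-4o",  # model used by the AgentPlanner instead of the default
)
```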

View File

@@ -2,6 +2,7 @@
title: Processes
description: Detailed guide on workflow management through processes in CrewAI, with updated implementation details.
icon: bars-staggered
mode: "wide"
---
## Overview

View File

@@ -2,6 +2,7 @@
title: Reasoning
description: "Learn how to enable and use agent reasoning to improve task execution."
icon: brain
mode: "wide"
---
## Overview

View File

@@ -2,6 +2,7 @@
title: Tasks
description: Detailed guide on managing and creating tasks within the CrewAI framework.
icon: list-check
mode: "wide"
---
## Overview
@@ -54,9 +55,17 @@ crew = Crew(
| **Markdown** _(optional)_ | `markdown` | `Optional[bool]` | Whether the task should instruct the agent to return the final answer formatted in Markdown. Defaults to False. |
| **Config** _(optional)_ | `config` | `Optional[Dict[str, Any]]` | Task-specific configuration parameters. |
| **Output File** _(optional)_ | `output_file` | `Optional[str]` | File path for storing the task output. |
| **Create Directory** _(optional)_ | `create_directory` | `Optional[bool]` | Whether to create the directory for output_file if it doesn't exist. Defaults to True. |
| **Output JSON** _(optional)_ | `output_json` | `Optional[Type[BaseModel]]` | A Pydantic model to structure the JSON output. |
| **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | A Pydantic model for task output. |
| **Callback** _(optional)_ | `callback` | `Optional[Any]` | Function/object to be executed after task completion. |
| **Guardrail** _(optional)_ | `guardrail` | `Optional[Callable]` | Function to validate task output before proceeding to next task. |
| **Guardrail Max Retries** _(optional)_ | `guardrail_max_retries` | `Optional[int]` | Maximum number of retries when guardrail validation fails. Defaults to 3. |
<Note type="warning" title="Deprecated: max_retries">
The task attribute `max_retries` is deprecated and will be removed in v1.0.0.
Use `guardrail_max_retries` instead to control retry attempts when a guardrail fails.
</Note>
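For instance, a minimal sketch using the current parameter name; the validator is a placeholder that follows the function-based guardrail contract described later on this page:
```python Code
from typing import Tuple, Union
from crewai import Task

def validate_summary(result: str) -> Tuple[bool, Union[str, str]]:
    if len(result.split()) >= 50:
        return (True, result)
    return (False, "Summary must contain at least 50 words")

summary_task = Task(
    description="Summarize the report",
    expected_output="A concise summary of at least 50 words",
    guardrail=validate_summary,
    guardrail_max_retries=3,  # replaces the deprecated max_retries
)
```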
## Creating Tasks
@@ -66,7 +75,7 @@ There are two ways to create tasks in CrewAI: using **YAML configuration (recomm
Using YAML configuration provides a cleaner, more maintainable way to define tasks. We strongly recommend using this approach to define tasks in your CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/en/installation) section, navigate to the `src/latest_ai_development/config/tasks.yaml` file and modify the template to match your specific task requirements.
<Note>
Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew:
@@ -277,7 +286,7 @@ formatted_task = Task(
When `markdown=True`, the agent will receive additional instructions to format the output using:
- `#` for headers
- `**text**` for bold text
- `*text*` for italic text
- `-` or `*` for bullet points
- `` `code` `` for inline code
@@ -332,9 +341,11 @@ Task guardrails provide a way to validate and transform task outputs before they
are passed to the next task. This feature helps ensure data quality and provides
feedback to agents when their output doesn't meet specific criteria.
### Using Task Guardrails
Guardrails are implemented as Python functions that contain custom validation logic, giving you complete control over the validation process and ensuring reliable, deterministic results.
### Function-Based Guardrails
To add a function-based guardrail to a task, provide a validation function through the `guardrail` parameter:
```python Code
from typing import Tuple, Union, Dict, Any
@@ -372,9 +383,7 @@ blog_task = Task(
- On success: it returns a tuple of `(bool, Any)`. For example: `(True, validated_result)`
- On failure: it returns a tuple of `(bool, str)`. For example: `(False, "Error message explaining the failure")`
### LLMGuardrail
The `LLMGuardrail` class offers a robust mechanism for validating task outputs.
### Error Handling Best Practices
@@ -429,7 +438,7 @@ When a guardrail returns `(False, error)`:
2. The agent attempts to fix the issue
3. The process repeats until:
- The guardrail returns `(True, result)`
- Maximum retries are reached (`guardrail_max_retries`)
Example with retry handling:
```python Code
@@ -450,7 +459,7 @@ task = Task(
expected_output="A valid JSON object",
agent=analyst,
guardrail=validate_json_output,
guardrail_max_retries=3 # Limit retry attempts
)
```
@@ -798,184 +807,91 @@ While creating and executing tasks, certain validation mechanisms are in place t
These validations help in maintaining the consistency and reliability of task executions within the crewAI framework.
## Task Guardrails
Task guardrails provide a powerful way to validate, transform, or filter task outputs before they are passed to the next task. Guardrails are optional functions that execute before the next task starts, allowing you to ensure that task outputs meet specific requirements or formats.
### Basic Usage
#### Define your own logic to validate
```python Code
import json
from typing import Tuple, Union
from crewai import Task
def validate_json_output(result: str) -> Tuple[bool, Union[dict, str]]:
"""Validate that the output is valid JSON."""
try:
json_data = json.loads(result)
return (True, json_data)
except json.JSONDecodeError:
return (False, "Output must be valid JSON")
task = Task(
description="Generate JSON data",
expected_output="Valid JSON object",
guardrail=validate_json_output
)
```
#### Leverage a no-code approach for validation
```python Code
from crewai import Task
task = Task(
description="Generate JSON data",
expected_output="Valid JSON object",
guardrail="Ensure the response is a valid JSON object"
)
```
#### Using YAML
```yaml
research_task:
...
guardrail: make sure each bullet contains a minimum of 100 words
...
```
```python Code
@CrewBase
class InternalCrew:
agents_config = "config/agents.yaml"
tasks_config = "config/tasks.yaml"
...
@task
def research_task(self):
return Task(config=self.tasks_config["research_task"]) # type: ignore[index]
...
```
#### Use custom models for code generation
```python Code
from crewai import Task
from crewai.llm import LLM
task = Task(
description="Generate JSON data",
expected_output="Valid JSON object",
guardrail=LLMGuardrail(
description="Ensure the response is a valid JSON object",
llm=LLM(model="gpt-4o-mini"),
)
)
```
### How Guardrails Work
1. **Optional Attribute**: Guardrails are an optional attribute at the task level, allowing you to add validation only where needed.
2. **Execution Timing**: The guardrail function is executed before the next task starts, ensuring valid data flow between tasks.
3. **Return Format**: Guardrails must return a tuple of `(success, data)`:
- If `success` is `True`, `data` is the validated/transformed result
- If `success` is `False`, `data` is the error message
4. **Result Routing**:
- On success (`True`), the result is automatically passed to the next task
- On failure (`False`), the error is sent back to the agent to generate a new answer
### Common Use Cases
#### Data Format Validation
```python Code
def validate_email_format(result: str) -> Tuple[bool, Union[str, str]]:
"""Ensure the output contains a valid email address."""
import re
email_pattern = r'^[\w\.-]+@[\w\.-]+\.\w+$'
if re.match(email_pattern, result.strip()):
return (True, result.strip())
return (False, "Output must be a valid email address")
```
#### Content Filtering
```python Code
def filter_sensitive_info(result: str) -> Tuple[bool, Union[str, str]]:
"""Remove or validate sensitive information."""
sensitive_patterns = ['SSN:', 'password:', 'secret:']
for pattern in sensitive_patterns:
if pattern.lower() in result.lower():
return (False, f"Output contains sensitive information ({pattern})")
return (True, result)
```
#### Data Transformation
```python Code
def normalize_phone_number(result: str) -> Tuple[bool, Union[str, str]]:
"""Ensure phone numbers are in a consistent format."""
import re
digits = re.sub(r'\D', '', result)
if len(digits) == 10:
formatted = f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"
return (True, formatted)
return (False, "Output must be a 10-digit phone number")
```
### Advanced Features
#### Chaining Multiple Validations
```python Code
def chain_validations(*validators):
"""Chain multiple validators together."""
def combined_validator(result):
for validator in validators:
success, data = validator(result)
if not success:
return (False, data)
result = data
return (True, result)
return combined_validator
# Usage
task = Task(
description="Get user contact info",
expected_output="Email and phone",
guardrail=chain_validations(
validate_email_format,
filter_sensitive_info
)
)
```
#### Custom Retry Logic
```python Code
task = Task(
description="Generate data",
expected_output="Valid data",
guardrail=validate_data,
max_retries=5 # Override default retry limit
)
```
## Creating Directories when Saving Files
The `create_directory` parameter controls whether CrewAI should automatically create directories when saving task outputs to files. This feature is particularly useful for organizing outputs and ensuring that file paths are correctly structured, especially when working with complex project hierarchies.
### Default Behavior
By default, `create_directory=True`, which means CrewAI will automatically create any missing directories in the output file path:
```python Code
# Default behavior - directories are created automatically
report_task = Task(
description='Generate a comprehensive market analysis report',
expected_output='A detailed market analysis with charts and insights',
agent=analyst_agent,
output_file='reports/2025/market_analysis.md', # Creates 'reports/2025/' if it doesn't exist
markdown=True
)
```
### Disabling Directory Creation
If you want to prevent automatic directory creation and ensure that the directory already exists, set `create_directory=False`:
```python Code
# Strict mode - directory must already exist
strict_output_task = Task(
description='Save critical data that requires existing infrastructure',
expected_output='Data saved to pre-configured location',
agent=data_agent,
output_file='secure/vault/critical_data.json',
create_directory=False # Will raise RuntimeError if 'secure/vault/' doesn't exist
)
```
### YAML Configuration
You can also configure this behavior in your YAML task definitions:
```yaml tasks.yaml
analysis_task:
description: >
Generate quarterly financial analysis
expected_output: >
A comprehensive financial report with quarterly insights
agent: financial_analyst
output_file: reports/quarterly/q4_2024_analysis.pdf
create_directory: true # Automatically create 'reports/quarterly/' directory
audit_task:
description: >
Perform compliance audit and save to existing audit directory
expected_output: >
A compliance audit report
agent: auditor
output_file: audit/compliance_report.md
create_directory: false # Directory must already exist
```
### Use Cases
**Automatic Directory Creation (`create_directory=True`):**
- Development and prototyping environments
- Dynamic report generation with date-based folders
- Automated workflows where directory structure may vary
- Multi-tenant applications with user-specific folders
**Manual Directory Management (`create_directory=False`):**
- Production environments with strict file system controls
- Security-sensitive applications where directories must be pre-configured
- Systems with specific permission requirements
- Compliance environments where directory creation is audited
### Error Handling
When `create_directory=False` and the directory doesn't exist, CrewAI will raise a `RuntimeError`:
```python Code
try:
result = crew.kickoff()
except RuntimeError as e:
# Handle missing directory error
print(f"Directory creation failed: {e}")
# Create directory manually or use fallback location
```
Check out the video below to see how to use structured outputs in CrewAI:

View File

@@ -2,6 +2,7 @@
title: Testing
description: Learn how to test your CrewAI Crew and evaluate their performance.
icon: vial
mode: "wide"
---
## Overview

View File

@@ -2,6 +2,7 @@
title: Tools
description: Understanding and leveraging tools within the CrewAI framework for agent collaboration and task execution.
icon: screwdriver-wrench
mode: "wide"
---
## Overview
@@ -32,6 +33,7 @@ The Enterprise Tools Repository includes:
- **Customizability**: Provides the flexibility to develop custom tools or utilize existing ones, catering to the specific needs of agents.
- **Error Handling**: Incorporates robust error handling mechanisms to ensure smooth operation.
- **Caching Mechanism**: Features intelligent caching to optimize performance and reduce redundant operations.
- **Asynchronous Support**: Handles both synchronous and asynchronous tools, enabling non-blocking operations.
## Using CrewAI Tools
@@ -177,6 +179,62 @@ class MyCustomTool(BaseTool):
return "Tool's result"
```
## Asynchronous Tool Support
CrewAI supports asynchronous tools, allowing you to implement tools that perform non-blocking operations like network requests, file I/O, or other async operations without blocking the main execution thread.
### Creating Async Tools
You can create async tools in two ways:
#### 1. Using the `tool` Decorator with Async Functions
```python Code
import asyncio

from crewai.tools import tool


@tool("fetch_data_async")
async def fetch_data_async(query: str) -> str:
    """Asynchronously fetch data based on the query."""
    # Simulate async operation
    await asyncio.sleep(1)
    return f"Data retrieved for {query}"
```
#### 2. Implementing Async Methods in Custom Tool Classes
```python Code
import asyncio

from crewai.tools import BaseTool


class AsyncCustomTool(BaseTool):
    name: str = "async_custom_tool"
    description: str = "An asynchronous custom tool"

    async def _run(self, query: str = "") -> str:
        """Asynchronously run the tool"""
        # Your async implementation here
        await asyncio.sleep(1)
        return f"Processed {query} asynchronously"
```
### Using Async Tools
Async tools work seamlessly in both standard Crew workflows and Flow-based workflows:
```python Code
from crewai import Agent, Crew
from crewai.flow import Flow, start

# In standard Crew
agent = Agent(role="researcher", tools=[async_custom_tool])

# In Flow
class MyFlow(Flow):
    @start()
    async def begin(self):
        crew = Crew(agents=[agent])
        result = await crew.kickoff_async()
        return result
```
The CrewAI framework automatically handles the execution of both synchronous and asynchronous tools, so you don't need to worry about how to call them differently.
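A rough end-to-end sketch follows; the agent, task, and goal text are placeholders, and `async_custom_tool` is the tool defined above:

```python Code
import asyncio

from crewai import Agent, Crew, Task

researcher = Agent(
    role="researcher",
    goal="Fetch and summarize data",
    backstory="A diligent research assistant.",
    tools=[async_custom_tool],
)

research_task = Task(
    description="Fetch data about CrewAI and summarize it",
    expected_output="A short summary",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[research_task])
result = asyncio.run(crew.kickoff_async())  # crew.kickoff() works as well
print(result.raw)
```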
### Utilizing the `tool` Decorator
```python Code

View File

@@ -0,0 +1,197 @@
---
title: Training
description: Learn how to train your CrewAI agents by giving them feedback early on and get consistent results.
icon: dumbbell
mode: "wide"
---
## Overview
The training feature in CrewAI allows you to train your AI agents using the command-line interface (CLI).
By running the command `crewai train -n <n_iterations>`, you can specify the number of iterations for the training process.
During training, CrewAI combines optimization techniques with your human feedback to improve agent performance.
This helps the agents improve their understanding, decision-making, and problem-solving abilities.
### Training Your Crew Using the CLI
To use the training feature, follow these steps:
1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following command:
```shell
crewai train -n <n_iterations> -f <filename.pkl>
```
<Tip>
Replace `<n_iterations>` with the desired number of training iterations and `<filename>` with the appropriate filename ending with `.pkl`.
</Tip>
<Note>
If you omit `-f`, the output defaults to `trained_agents_data.pkl` in the current working directory. You can pass an absolute path to control where the file is written.
</Note>
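For example, a three-iteration run that writes the trained file to an absolute path (the path below is just an example) looks like:

```shell
crewai train -n 3 -f /data/crewai/trained_agents_data.pkl
```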
### Training your Crew programmatically
To train your crew programmatically, use the following steps:
1. Define the number of iterations for training.
2. Specify the input parameters for the training process.
3. Execute the training command within a try-except block to handle potential errors.
```python Code
n_iterations = 2
inputs = {"topic": "CrewAI Training"}
filename = "your_model.pkl"

try:
    YourCrewName_Crew().crew().train(
        n_iterations=n_iterations,
        inputs=inputs,
        filename=filename
    )
except Exception as e:
    raise Exception(f"An error occurred while training the crew: {e}")
```
## How trained data is used by agents
CrewAI uses the training artifacts in two ways: during training to incorporate your human feedback, and after training to guide agents with consolidated suggestions.
### Training data flow
```mermaid
flowchart TD
A["Start training<br/>CLI: crewai train -n -f<br/>or Python: crew.train(...)"] --> B["Setup training mode<br/>- task.human_input = true<br/>- disable delegation<br/>- init training_data.pkl + trained file"]
subgraph "Iterations"
direction LR
C["Iteration i<br/>initial_output"] --> D["User human_feedback"]
D --> E["improved_output"]
E --> F["Append to training_data.pkl<br/>by agent_id and iteration"]
end
B --> C
F --> G{"More iterations?"}
G -- "Yes" --> C
G -- "No" --> H["Evaluate per agent<br/>aggregate iterations"]
H --> I["Consolidate<br/>suggestions[] + quality + final_summary"]
I --> J["Save by agent role to trained file<br/>(default: trained_agents_data.pkl)"]
J --> K["Normal (non-training) runs"]
K --> L["Auto-load suggestions<br/>from trained_agents_data.pkl"]
L --> M["Append to prompt<br/>for consistent improvements"]
```
### During training runs
- On each iteration, the system records for every agent:
- `initial_output`: the agent's first answer
- `human_feedback`: your inline feedback when prompted
- `improved_output`: the agent's follow-up answer after feedback
- This data is stored in a working file named `training_data.pkl`, keyed by the agent's internal ID and iteration.
- While training is active, the agent automatically appends your prior human feedback to its prompt to enforce those instructions on subsequent attempts within the training session.
Training is interactive: tasks set `human_input = true`, so running in a non-interactive environment will block on user input.
### After training completes
- When `train(...)` finishes, CrewAI evaluates the collected training data per agent and produces a consolidated result containing:
- `suggestions`: clear, actionable instructions distilled from your feedback and the difference between initial/improved outputs
- `quality`: a 0-10 score capturing improvement
- `final_summary`: a step-by-step set of action items for future tasks
- These consolidated results are saved to the filename you pass to `train(...)` (the CLI default is `trained_agents_data.pkl`). Entries are keyed by the agent's `role` so they can be applied across sessions.
- During normal (non-training) execution, each agent automatically loads its consolidated `suggestions` and appends them to the task prompt as mandatory instructions. This gives you consistent improvements without changing your agent definitions.
### File summary
- `training_data.pkl` (ephemeral, per-session):
- Structure: `agent_id -> { iteration_number: { initial_output, human_feedback, improved_output } }`
- Purpose: capture raw data and human feedback during training
- Location: saved in the current working directory (CWD)
- `trained_agents_data.pkl` (or your custom filename):
- Structure: `agent_role -> { suggestions: string[], quality: number, final_summary: string }`
- Purpose: persist consolidated guidance for future runs
- Location: written to the CWD by default; use `-f` to set a custom (including absolute) path
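If you want to inspect these artifacts yourself, here is a minimal sketch, assuming the default filenames in the current working directory and the structures described above:

```python Code
import pickle
from pathlib import Path

# Raw per-session data: agent_id -> {iteration: {initial_output, human_feedback, improved_output}}
raw = pickle.loads(Path("training_data.pkl").read_bytes())
for agent_id, iterations in raw.items():
    print(agent_id, sorted(iterations))

# Consolidated guidance: agent_role -> {suggestions, quality, final_summary}
trained = pickle.loads(Path("trained_agents_data.pkl").read_bytes())
for role, guidance in trained.items():
    print(role, guidance["quality"], guidance["suggestions"])
```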
## Small Language Model Considerations
<Warning>
When using smaller language models (≤7B parameters) for training data evaluation, be aware that they may face challenges with generating structured outputs and following complex instructions.
</Warning>
### Limitations of Small Models in Training Evaluation
<CardGroup cols={2}>
<Card title="JSON Output Accuracy" icon="triangle-exclamation">
Smaller models often struggle with producing valid JSON responses needed for structured training evaluations, leading to parsing errors and incomplete data.
</Card>
<Card title="Evaluation Quality" icon="chart-line">
Models under 7B parameters may provide less nuanced evaluations with limited reasoning depth compared to larger models.
</Card>
<Card title="Instruction Following" icon="list-check">
Complex training evaluation criteria may not be fully followed or considered by smaller models.
</Card>
<Card title="Consistency" icon="rotate">
Evaluations across multiple training iterations may lack consistency with smaller models.
</Card>
</CardGroup>
### Recommendations for Training
<Tabs>
<Tab title="Best Practice">
For optimal training quality and reliable evaluations, we strongly recommend using models with at least 7B parameters or larger:
```python
from crewai import Agent, Crew, Task, LLM

# Recommended minimum for training evaluation
llm = LLM(model="mistral/open-mistral-7b")

# Better options for reliable training evaluation
llm = LLM(model="anthropic/claude-3-sonnet-20240229-v1:0")
llm = LLM(model="gpt-4o")

# Use this LLM with your agents
agent = Agent(
    role="Training Evaluator",
    goal="Provide accurate training feedback",
    llm=llm
)
```
<Tip>
More powerful models provide higher quality feedback with better reasoning, leading to more effective training iterations.
</Tip>
</Tab>
<Tab title="Small Model Usage">
If you must use smaller models for training evaluation, be aware of these constraints:
```python
# Using a smaller model (expect some limitations)
llm = LLM(model="huggingface/microsoft/Phi-3-mini-4k-instruct")
```
<Warning>
While CrewAI includes optimizations for small models, expect less reliable and less nuanced evaluation results that may require more human intervention during training.
</Warning>
</Tab>
</Tabs>
### Key Points to Note
- **Positive Integer Requirement:** Ensure that the number of iterations (`n_iterations`) is a positive integer. The code will raise a `ValueError` if this condition is not met.
- **Filename Requirement:** Ensure that the filename ends with `.pkl`. The code will raise a `ValueError` if this condition is not met.
- **Error Handling:** The code handles subprocess errors and unexpected exceptions, providing error messages to the user.
- Trained guidance is applied at prompt time; it does not modify your Python/YAML agent configuration.
- Agents automatically load trained suggestions from a file named `trained_agents_data.pkl` located in the current working directory. If you trained to a different filename, either copy or rename it to `trained_agents_data.pkl` before running (see the sketch after this list), or adjust the loader in code.
- You can change the output filename when calling `crewai train` with `-f/--filename`. Absolute paths are supported if you want to save outside the CWD.
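A minimal sketch of that copy step, assuming you trained to `your_model.pkl` in the current working directory:

```python Code
import shutil

# Copy the custom-named trained file to the default name that agents load automatically
shutil.copy("your_model.pkl", "trained_agents_data.pkl")
```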
It is important to note that the training process may take some time, depending on the complexity of your agents, and that it requires your feedback at each iteration.
Once the training is complete, your agents will be equipped with enhanced capabilities and knowledge, ready to tackle complex tasks and provide more consistent and valuable insights.
Remember to regularly update and retrain your agents to ensure they stay up-to-date with the latest information and advancements in the field.

View File

@@ -0,0 +1,156 @@
---
title: 'Agent Repositories'
description: 'Learn how to use Agent Repositories to share and reuse your agents across teams and projects'
icon: 'database'
mode: "wide"
---
Agent Repositories allow enterprise users to store, share, and reuse agent definitions across teams and projects. This feature enables organizations to maintain a centralized library of standardized agents, promoting consistency and reducing duplication of effort.
## Benefits of Agent Repositories
- **Standardization**: Maintain consistent agent definitions across your organization
- **Reusability**: Create an agent once and use it in multiple crews and projects
- **Governance**: Implement organization-wide policies for agent configurations
- **Collaboration**: Enable teams to share and build upon each other's work
## Using Agent Repositories
### Prerequisites
1. You must have an account at CrewAI, try the [free plan](https://app.crewai.com).
2. You need to be authenticated using the CrewAI CLI.
3. If you have more than one organization, make sure you are switched to the correct organization using the CLI command:
```bash
crewai org switch <org_id>
```
### Creating and Managing Agents in Repositories
To create and manage agents in repositories, use the CrewAI Enterprise Dashboard.
### Loading Agents from Repositories
You can load agents from repositories in your code using the `from_repository` parameter:
```python
from crewai import Agent

# Create an agent by loading it from a repository
# The agent is loaded with all its predefined configurations
researcher = Agent(
    from_repository="market-research-agent"
)
```
### Overriding Repository Settings
You can override specific settings from the repository by providing them in the configuration:
```python
researcher = Agent(
    from_repository="market-research-agent",
    goal="Research the latest trends in AI development",  # Override the repository goal
    verbose=True  # Add a setting not in the repository
)
```
### Example: Creating a Crew with Repository Agents
```python
from crewai import Crew, Agent, Task

# Load agents from repositories
researcher = Agent(
    from_repository="market-research-agent"
)

writer = Agent(
    from_repository="content-writer-agent"
)

# Create tasks
research_task = Task(
    description="Research the latest trends in AI",
    agent=researcher
)

writing_task = Task(
    description="Write a comprehensive report based on the research",
    agent=writer
)

# Create the crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True
)

# Run the crew
result = crew.kickoff()
```
### Example: Using `kickoff()` with Repository Agents
You can also use repository agents directly with the `kickoff()` method for simpler interactions:
```python
from crewai import Agent
from pydantic import BaseModel
from typing import List

# Define a structured output format
class MarketAnalysis(BaseModel):
    key_trends: List[str]
    opportunities: List[str]
    recommendation: str

# Load an agent from repository
analyst = Agent(
    from_repository="market-analyst-agent",
    verbose=True
)

# Get a free-form response
result = analyst.kickoff("Analyze the AI market in 2025")
print(result.raw)  # Access the raw response

# Get structured output
structured_result = analyst.kickoff(
    "Provide a structured analysis of the AI market in 2025",
    response_format=MarketAnalysis
)

# Access structured data
print(f"Key Trends: {structured_result.pydantic.key_trends}")
print(f"Recommendation: {structured_result.pydantic.recommendation}")
```
## Best Practices
1. **Naming Convention**: Use clear, descriptive names for your repository agents
2. **Documentation**: Include comprehensive descriptions for each agent
3. **Tool Management**: Ensure that tools referenced by repository agents are available in your environment
4. **Access Control**: Manage permissions to ensure only authorized team members can modify repository agents
## Organization Management
To switch between organizations or see your current organization, use the CrewAI CLI:
```bash
# View current organization
crewai org current
# Switch to a different organization
crewai org switch <org_id>
# List all available organizations
crewai org list
```
<Note>
When loading agents from repositories, you must be authenticated and switched to the correct organization. If you receive errors, check your authentication status and organization settings using the CLI commands above.
</Note>

View File

@@ -2,6 +2,7 @@
title: Hallucination Guardrail
description: "Prevent and detect AI hallucinations in your CrewAI tasks"
icon: "shield-check"
mode: "wide"
---
## Overview
@@ -25,8 +26,13 @@ AI hallucinations occur when language models generate content that appears plaus
from crewai.tasks.hallucination_guardrail import HallucinationGuardrail
from crewai import LLM
# Initialize the guardrail with reference context
# Basic usage - will use task's expected_output as context
guardrail = HallucinationGuardrail(
llm=LLM(model="gpt-4o-mini")
)
# With explicit reference context
context_guardrail = HallucinationGuardrail(
context="AI helps with various tasks including analysis and generation.",
llm=LLM(model="gpt-4o-mini")
)

View File

@@ -0,0 +1,186 @@
---
title: Integrations
description: "Connected applications for your agents to take actions."
icon: "plug"
mode: "wide"
---
## Overview
Enable your agents to authenticate with any OAuth-enabled provider and take actions. From Salesforce and HubSpot to Google and GitHub, we've got you covered with 16+ integrated services.
<Frame>
![Integrations](/images/enterprise/crew_connectors.png)
</Frame>
## Supported Integrations
### **Communication & Collaboration**
- **Gmail** - Manage emails and drafts
- **Slack** - Workspace notifications and alerts
- **Microsoft** - Office 365 and Teams integration
### **Project Management**
- **Jira** - Issue tracking and project management
- **ClickUp** - Task and productivity management
- **Asana** - Team task and project coordination
- **Notion** - Page and database management
- **Linear** - Software project and bug tracking
- **GitHub** - Repository and issue management
### **Customer Relationship Management**
- **Salesforce** - CRM account and opportunity management
- **HubSpot** - Sales pipeline and contact management
- **Zendesk** - Customer support ticket management
### **Business & Finance**
- **Stripe** - Payment processing and customer management
- **Shopify** - E-commerce store and product management
### **Productivity & Storage**
- **Google Sheets** - Spreadsheet data synchronization
- **Google Calendar** - Event and schedule management
- **Box** - File storage and document management
and more to come!
## Prerequisites
Before using Authentication Integrations, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account. You can get started with a free trial.
## Setting Up Integrations
### 1. Connect Your Account
1. Navigate to [CrewAI Enterprise](https://app.crewai.com)
2. Go to **Integrations** tab - https://app.crewai.com/crewai_plus/connectors
3. Click **Connect** on your desired service from the Authentication Integrations section
4. Complete the OAuth authentication flow
5. Grant necessary permissions for your use case
6. All set! Get your Enterprise Token from your [CrewAI Enterprise](https://app.crewai.com) account under the **Integrations** tab
<Frame>
![Integrations](/images/enterprise/enterprise_action_auth_token.png)
</Frame>
### 2. Install Integration Tools
All you need is the latest version of the `crewai-tools` package.
```bash
uv add crewai-tools
```
## Usage Examples
### Basic Usage
<Tip>
All the services you are authenticated into will be available as tools. So all you need to do is add the `CrewaiEnterpriseTools` to your agent and you are good to go.
</Tip>
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Gmail tool will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# print the tools
print(enterprise_tools)
# Create an agent with Gmail capabilities
email_agent = Agent(
role="Email Manager",
goal="Manage and organize email communications",
backstory="An AI assistant specialized in email management and communication.",
tools=enterprise_tools
)
# Task to send an email
email_task = Task(
description="Draft and send a follow-up email to john@example.com about the project update",
agent=email_agent,
expected_output="Confirmation that email was sent successfully"
)
# Run the task
crew = Crew(
agents=[email_agent],
tasks=[email_task]
)
# Run the crew
crew.kickoff()
```
### Filtering Tools
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
actions_list=["gmail_find_email"] # only gmail_find_email tool will be available
)
gmail_tool = enterprise_tools["gmail_find_email"]
gmail_agent = Agent(
role="Gmail Manager",
goal="Manage gmail communications and notifications",
backstory="An AI assistant that helps coordinate gmail communications.",
tools=[gmail_tool]
)
notification_task = Task(
description="Find the email from john@example.com",
agent=gmail_agent,
expected_output="Email found from john@example.com"
)
# Run the task
crew = Crew(
agents=[gmail_agent],
tasks=[notification_task]
)
```
## Best Practices
### Security
- **Principle of Least Privilege**: Only grant the minimum permissions required for your agents' tasks
- **Regular Audits**: Periodically review connected integrations and their permissions
- **Secure Credentials**: Never hardcode credentials; use CrewAI's secure authentication flow
### Filtering Tools
On a deployed crew, you can specify which actions are available for each integration from the settings page of the service you connected to.
<Frame>
![Integrations](/images/enterprise/filtering_enterprise_action_tools.png)
</Frame>
### Scoped Deployments for Multi-User Organizations
You can deploy your crew and scope each integration to a specific user. For example, a crew that connects to Google can use a specific user's Gmail account.
<Tip>
This is useful for multi-user organizations where you want to scope an integration to a specific user.
</Tip>
Use the `user_bearer_token` to scope an integration to a specific user: when the crew is kicked off, it authenticates with the integration using that user's bearer token. If the user is not logged in, the crew will not use any connected integrations. The default bearer token is used to authenticate with the integrations deployed with the crew.
<Frame>
![Integrations](/images/enterprise/user_bearer_token.png)
</Frame>
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with integration setup or troubleshooting.
</Card>

View File

@@ -0,0 +1,104 @@
---
title: "Role-Based Access Control (RBAC)"
description: "Control access to crews, tools, and data with roles, scopes, and granular permissions."
icon: "shield"
mode: "wide"
---
## Overview
RBAC in CrewAI Enterprise enables secure, scalable access management through a combination of organization-level roles and automation-level visibility controls.
<Frame>
<img src="/images/enterprise/users_and_roles.png" alt="RBAC overview in CrewAI Enterprise" />
</Frame>
## Users and Roles
Each member in your CrewAI workspace is assigned a role, which determines their access across various features.
You can:
- Use predefined roles (Owner, Member)
- Create custom roles tailored to specific permissions
- Assign roles at any time through the settings panel
You can configure users and roles in Settings → Roles.
<Steps>
<Step title="Open Roles settings">
Go to <b>Settings → Roles</b> in CrewAI Enterprise.
</Step>
<Step title="Choose a role type">
Use a predefined role (<b>Owner</b>, <b>Member</b>) or click <b>Create role</b> to define a custom one.
</Step>
<Step title="Assign to members">
Select users and assign the role. You can change this anytime.
</Step>
</Steps>
### Configuration summary
| Area | Where to configure | Options |
|:---|:---|:---|
| Users & Roles | Settings → Roles | Predefined: Owner, Member; Custom roles |
| Automation visibility | Automation → Settings → Visibility | Private; Whitelist users/roles |
## Automation-Level Access Control
In addition to organization-wide roles, CrewAI Automations support fine-grained visibility settings that let you restrict access to specific automations by user or role.
This is useful for:
- Keeping sensitive or experimental automations private
- Managing visibility across large teams or external collaborators
- Testing automations in isolated contexts
Deployments can be configured as private, meaning only whitelisted users and roles will be able to:
- View the deployment
- Run it or interact with its API
- Access its logs, metrics, and settings
The organization owner always has access, regardless of visibility settings.
You can configure automation-level access control in the Automation → Settings → Visibility tab.
<Steps>
<Step title="Open Visibility tab">
Navigate to <b>Automation → Settings → Visibility</b>.
</Step>
<Step title="Set visibility">
Choose <b>Private</b> to restrict access. The organization owner always retains access.
</Step>
<Step title="Whitelist access">
Add specific users and roles allowed to view, run, and access logs/metrics/settings.
</Step>
<Step title="Save and verify">
Save changes, then confirm that non-whitelisted users cannot view or run the automation.
</Step>
</Steps>
### Private visibility: access outcomes
| Action | Owner | Whitelisted user/role | Not whitelisted |
|:---|:---|:---|:---|
| View automation | ✓ | ✓ | ✗ |
| Run automation/API | ✓ | ✓ | ✗ |
| Access logs/metrics/settings | ✓ | ✓ | ✗ |
<Tip>
The organization owner always has access. In private mode, only whitelisted users and roles can view, run, and access logs/metrics/settings.
</Tip>
<Frame>
<img src="/images/enterprise/visibility.png" alt="Automation Visibility settings in CrewAI Enterprise" />
</Frame>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with RBAC questions.
</Card>

View File

@@ -2,6 +2,7 @@
title: Tool Repository
description: "Using the Tool Repository to manage your tools"
icon: "toolbox"
mode: "wide"
---
## Overview
@@ -21,6 +22,7 @@ Before using the Tool Repository, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account
- [CrewAI CLI](https://docs.crewai.com/concepts/cli#cli) installed
- uv>=0.5.0 installed. Check out [how to upgrade](https://docs.astral.sh/uv/getting-started/installation/#upgrading-uv)
- [Git](https://git-scm.com) installed and configured
- Access permissions to publish or install tools in your CrewAI Enterprise organization
@@ -34,6 +36,22 @@ crewai tool install <tool-name>
This installs the tool and adds it to `pyproject.toml`.
You can use the tool by importing it and adding it to your agents:
```python
from your_tool.tool import YourTool
custom_tool = YourTool()
researcher = Agent(
role='Market Research Analyst',
goal='Provide up-to-date market analysis of the AI industry',
backstory='An expert analyst with a keen eye for market trends.',
tools=[custom_tool],
verbose=True
)
```
## Creating and Publishing Tools
To create a new tool project:

View File

@@ -2,6 +2,7 @@
title: Traces
description: "Using Traces to monitor your Crews"
icon: "timeline"
mode: "wide"
---
## Overview
@@ -141,6 +142,16 @@ Traces are invaluable for troubleshooting issues with your crews:
</Step>
</Steps>
## Performance and batching
CrewAI batches trace uploads to reduce overhead on high-volume runs:
- A TraceBatchManager buffers events and sends them in batches via the Plus API client
- Reduces network chatter and improves reliability on flaky connections
- Automatically enabled in the default trace listener; no configuration needed
This yields more stable tracing under load while preserving detailed task/agent telemetry.
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with trace analysis or any other CrewAI Enterprise features.
</Card>

View File

@@ -2,6 +2,7 @@
title: Webhook Streaming
description: "Using Webhook Streaming to stream events to your webhook"
icon: "webhook"
mode: "wide"
---
## Overview
@@ -62,16 +63,96 @@ As requests are sent over HTTP, the order of events can't be guaranteed. If you
CrewAI supports both system events and custom events in Enterprise Event Streaming. These events are sent to your configured webhook endpoint during crew and flow execution.
- `crew_kickoff_started`
- `crew_step_started`
- `crew_step_completed`
- `crew_execution_completed`
- `llm_call_started`
- `llm_call_completed`
- `tool_usage_started`
- `tool_usage_completed`
- `crew_test_failed`
- *...and others*
### Flow Events:
- flow_created
- flow_started
- flow_finished
- flow_plot
- method_execution_started
- method_execution_finished
- method_execution_failed
### Agent Events:
- agent_execution_started
- agent_execution_completed
- agent_execution_error
- lite_agent_execution_started
- lite_agent_execution_completed
- lite_agent_execution_error
- agent_logs_started
- agent_logs_execution
- agent_evaluation_started
- agent_evaluation_completed
- agent_evaluation_failed
### Crew Events:
- crew_kickoff_started
- crew_kickoff_completed
- crew_kickoff_failed
- crew_train_started
- crew_train_completed
- crew_train_failed
- crew_test_started
- crew_test_completed
- crew_test_failed
- crew_test_result
### Task Events:
- task_started
- task_completed
- task_failed
- task_evaluation
### Tool Usage Events:
- tool_usage_started
- tool_usage_finished
- tool_usage_error
- tool_validate_input_error
- tool_selection_error
- tool_execution_error
### LLM Events:
- llm_call_started
- llm_call_completed
- llm_call_failed
- llm_stream_chunk
### LLM Guardrail Events:
- llm_guardrail_started
- llm_guardrail_completed
### Memory Events:
- memory_query_started
- memory_query_completed
- memory_query_failed
- memory_save_started
- memory_save_completed
- memory_save_failed
- memory_retrieval_started
- memory_retrieval_completed
### Knowledge Events:
- knowledge_search_query_started
- knowledge_search_query_completed
- knowledge_search_query_failed
- knowledge_query_started
- knowledge_query_completed
- knowledge_query_failed
### Reasoning Events:
- agent_reasoning_started
- agent_reasoning_completed
- agent_reasoning_failed
Event names match the internal event bus. See [GitHub source](https://github.com/crewAIInc/crewAI/tree/main/src/crewai/utilities/events) for the full list.
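As an unofficial sketch of the receiving side (the web framework, route, and payload field names here are assumptions, not part of the CrewAI API), a minimal endpoint that accepts these events could look like:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/crewai-events", methods=["POST"])
def crewai_events():
    # Events can arrive out of order, so rely on payload timestamps rather than arrival order
    payload = request.get_json(force=True)
    events = payload if isinstance(payload, list) else [payload]
    for event in events:
        # Field names below are illustrative assumptions
        print(event.get("event_type"), event.get("timestamp"))
    return "", 204

if __name__ == "__main__":
    app.run(port=8000)
```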

View File

@@ -0,0 +1,179 @@
---
title: "Automation Triggers"
description: "Automatically execute your CrewAI workflows when specific events occur in connected integrations"
icon: "bolt"
mode: "wide"
---
Automation triggers enable you to automatically run your CrewAI deployments when specific events occur in your connected integrations, creating powerful event-driven workflows that respond to real-time changes in your business systems.
## Overview
With automation triggers, you can:
- **Respond to real-time events** - Automatically execute workflows when specific conditions are met
- **Integrate with external systems** - Connect with platforms like Gmail, Outlook, OneDrive, JIRA, Slack, Stripe and more
- **Scale your automation** - Handle high-volume events without manual intervention
- **Maintain context** - Access trigger data within your crews and flows
## Managing Automation Triggers
### Viewing Available Triggers
To access and manage your automation triggers:
1. Navigate to your deployment in the CrewAI dashboard
2. Click on the **Triggers** tab to view all available trigger integrations
<Frame>
<img src="/images/enterprise/list-available-triggers.png" alt="List of available automation triggers" />
</Frame>
This view shows all the trigger integrations available for your deployment, along with their current connection status.
### Enabling and Disabling Triggers
Each trigger can be easily enabled or disabled using the toggle switch:
<Frame>
<img src="/images/enterprise/trigger-selected.png" alt="Enable or disable triggers with toggle" />
</Frame>
- **Enabled (blue toggle)**: The trigger is active and will automatically execute your deployment when the specified events occur
- **Disabled (gray toggle)**: The trigger is inactive and will not respond to events
Simply click the toggle to change the trigger state. Changes take effect immediately.
### Monitoring Trigger Executions
Track the performance and history of your triggered executions:
<Frame>
<img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
</Frame>
## Building Automation
Before building your automation, it's helpful to understand the structure of trigger payloads that your crews and flows will receive.
### Payload Samples Repository
We maintain a comprehensive repository with sample payloads from various trigger sources to help you build and test your automations:
**🔗 [CrewAI Enterprise Trigger Payload Samples](https://github.com/crewAIInc/crewai-enterprise-trigger-payload-samples)**
This repository contains:
- **Real payload examples** from different trigger sources (Gmail, Google Drive, etc.)
- **Payload structure documentation** showing the format and available fields
### Triggers with Crew
Your existing crew definitions work seamlessly with triggers; you just need a task that parses the received payload:
```python
from crewai import Agent, Task
from crewai.project import CrewBase, agent, task


@CrewBase
class MyAutomatedCrew:
    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
        )

    @task
    def parse_trigger_payload(self) -> Task:
        return Task(
            config=self.tasks_config['parse_trigger_payload'],
            agent=self.researcher(),
        )

    @task
    def analyze_trigger_content(self) -> Task:
        return Task(
            config=self.tasks_config['analyze_trigger_data'],
            agent=self.researcher(),
        )
```
The crew will automatically receive and can access the trigger payload through the standard CrewAI context mechanisms.
<Note>
Crew and Flow inputs can include `crewai_trigger_payload`. CrewAI automatically injects this payload:
- Tasks: appended to the first task's description by default ("Trigger Payload: {crewai_trigger_payload}")
- Control via `allow_crewai_trigger_context`: set `True` to always inject, `False` to never inject
- Flows: any `@start()` method that accepts a `crewai_trigger_payload` parameter will receive it
</Note>
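For example, a task that explicitly opts in to the injected trigger context (everything except `allow_crewai_trigger_context` is placeholder content) might look like:

```python
from crewai import Task

summarize_event = Task(
    description="Summarize the incoming event for the on-call channel",
    expected_output="A three-sentence summary of the trigger payload",
    allow_crewai_trigger_context=True,  # always append "Trigger Payload: {...}" to this task's description
)
```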
### Integration with Flows
For flows, you have more control over how trigger data is handled:
#### Accessing Trigger Payload
All `@start()` methods in your flows will accept an additional parameter called `crewai_trigger_payload`:
```python
from crewai.flow import Flow, start, listen
class MyAutomatedFlow(Flow):
    @start()
    def handle_trigger(self, crewai_trigger_payload: dict = None):
        """
        This start method can receive trigger data
        """
        if crewai_trigger_payload:
            # Process the trigger data
            trigger_id = crewai_trigger_payload.get('id')
            event_data = crewai_trigger_payload.get('payload', {})

            # Store in flow state for use by other methods
            self.state.trigger_id = trigger_id
            self.state.trigger_data = event_data

            return event_data

        # Handle manual execution
        return None

    @listen(handle_trigger)
    def process_data(self, trigger_data):
        """
        Process the data from the trigger
        """
        # ... process the trigger
```
#### Triggering Crews from Flows
When kicking off a crew within a flow that was triggered, pass the trigger payload through as an input:
```python
@start()
def delegate_to_crew(self, crewai_trigger_payload: dict = None):
    """
    Delegate processing to a specialized crew
    """
    crew = MySpecializedCrew()

    # Pass the trigger payload to the crew
    result = crew.crew().kickoff(
        inputs={
            'a_custom_parameter': "custom_value",
            'crewai_trigger_payload': crewai_trigger_payload
        },
    )

    return result
```
## Troubleshooting
**Trigger not firing:**
- Verify the trigger is enabled
- Check integration connection status
**Execution failures:**
- Check the execution logs for error details
- If you are developing, make sure the inputs include the `crewai_trigger_payload` parameter with the correct payload (see the sketch below)
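For local testing, a minimal sketch that simulates a triggered run (the crew class and payload contents are placeholders) could be:

```python
# Simulate a triggered execution locally; payload contents are illustrative only
sample_payload = {
    "id": "evt_123",
    "payload": {"source": "gmail", "subject": "Quarterly report"},
}

result = MyAutomatedCrew().crew().kickoff(
    inputs={"crewai_trigger_payload": sample_payload}
)
print(result.raw)
```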
Automation triggers transform your CrewAI deployments into responsive, event-driven systems that can seamlessly integrate with your existing business processes and tools.

View File

@@ -2,6 +2,7 @@
title: "Azure OpenAI Setup"
description: "Configure Azure OpenAI with Crew Studio for enterprise LLM connections"
icon: "microsoft"
mode: "wide"
---
This guide walks you through connecting Azure OpenAI with Crew Studio for seamless enterprise AI operations.
@@ -9,12 +10,12 @@ This guide walks you through connecting Azure OpenAI with Crew Studio for seamle
## Setup Process
<Steps>
<Step title="Access Azure OpenAI Studio">
1. In Azure, go to `Azure AI Services > select your deployment > open Azure OpenAI Studio`.
<Step title="Access Azure AI Foundry">
1. In Azure, go to [Azure AI Foundry](https://ai.azure.com/) > select your Azure OpenAI deployment.
2. On the left menu, click `Deployments`. If you don't have one, create a deployment with your desired model.
3. Once created, select your deployment and locate the `Target URI` and `Key` on the right side of the page. Keep this page open, as you'll need this information.
<Frame>
<img src="/images/enterprise/azure-openai-studio.png" alt="Azure OpenAI Studio" />
<img src="/images/enterprise/azure-openai-studio.png" alt="Azure AI Foundry" />
</Frame>
</Step>
@@ -48,4 +49,4 @@ If you encounter issues:
- Verify the Target URI format matches the expected pattern
- Check that the API key is correct and has proper permissions
- Ensure network access is configured to allow CrewAI connections
- Confirm the deployment model matches what you've configured in CrewAI
- Confirm the deployment model matches what you've configured in CrewAI

View File

@@ -2,6 +2,7 @@
title: "Build Crew"
description: "A Crew is a group of agents that work together to complete a task."
icon: "people-arrows"
mode: "wide"
---
## Overview
@@ -10,11 +11,11 @@ icon: "people-arrows"
## Getting Started
<iframe
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/-kSOTtYzgEw"
title="Building Crews with CrewAI CLI"
title="Building Crews with CrewAI CLI"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
@@ -23,13 +24,13 @@ icon: "people-arrows"
### Installation and Setup
<Card title="Follow Standard Installation" icon="wrench" href="/installation">
<Card title="Follow Standard Installation" icon="wrench" href="/en/installation">
Follow our standard installation guide to set up CrewAI CLI and create your first project.
</Card>
### Building Your Crew
<Card title="Quickstart Tutorial" icon="rocket" href="/quickstart">
<Card title="Quickstart Tutorial" icon="rocket" href="/en/quickstart">
Follow our quickstart guide to create your first agent crew using YAML configuration.
</Card>
@@ -40,4 +41,4 @@ For Enterprise-specific support or questions, contact our dedicated support team
<Card title="Schedule a Demo" icon="calendar" href="mailto:support@crewai.com">
Book time with our team to learn more about Enterprise features and how they can benefit your organization.
</Card>
</Card>

View File

@@ -2,6 +2,7 @@
title: "Deploy Crew"
description: "Deploying a Crew on CrewAI Enterprise"
icon: "rocket"
mode: "wide"
---
<Note>
@@ -41,11 +42,8 @@ The CLI provides the fastest way to deploy locally developed crews to the Enterp
First, you need to authenticate your CLI with the CrewAI Enterprise platform:
```bash
# If you already have a CrewAI Enterprise account
# If you already have a CrewAI Enterprise account, or want to create one:
crewai login
# If you're creating a new account
crewai signup
```
When you run either command, the CLI will:
@@ -122,7 +120,7 @@ The CrewAI CLI offers several commands to manage your deployments:
# Remove a deployment
crewai deploy remove <deployment_id>
```
```
## Option 2: Deploy Directly via Web Interface
@@ -132,14 +130,14 @@ You can also deploy your crews directly through the CrewAI Enterprise web interf
<Step title="Pushing to GitHub">
You need to push your crew to a GitHub repository. If you haven't created a crew yet, you can [follow this tutorial](/quickstart).
You need to push your crew to a GitHub repository. If you haven't created a crew yet, you can [follow this tutorial](/en/quickstart).
</Step>
<Step title="Connecting GitHub to CrewAI Enterprise">
1. Log in to [CrewAI Enterprise](https://app.crewai.com)
2. Click on the button "Connect GitHub"
2. Click on the button "Connect GitHub"
<Frame>
![Connect GitHub Button](/images/enterprise/connect-github.png)
@@ -201,20 +199,20 @@ For security reasons, the following environment variable naming patterns are **a
**Blocked Patterns:**
- Variables ending with `_TOKEN` (e.g., `MY_API_TOKEN`)
- Variables ending with `_PASSWORD` (e.g., `DB_PASSWORD`)
- Variables ending with `_PASSWORD` (e.g., `DB_PASSWORD`)
- Variables ending with `_SECRET` (e.g., `API_SECRET`)
- Variables ending with `_KEY` in certain contexts
**Specific Blocked Variables:**
- `GITHUB_USER`, `GITHUB_TOKEN`
- `AWS_REGION`, `AWS_DEFAULT_REGION`
- `AWS_REGION`, `AWS_DEFAULT_REGION`
- Various internal CrewAI system variables
### Allowed Exceptions
Some variables are explicitly allowed despite matching blocked patterns:
- `AZURE_AD_TOKEN`
- `AZURE_OPENAI_AD_TOKEN`
- `AZURE_OPENAI_AD_TOKEN`
- `ENTERPRISE_ACTION_TOKEN`
- `CREWAI_ENTEPRISE_TOOLS_TOKEN`
@@ -228,7 +226,7 @@ OPENAI_TOKEN=sk-...
DATABASE_PASSWORD=mypassword
API_SECRET=secret123
# ✅ Use these naming patterns instead
# ✅ Use these naming patterns instead
OPENAI_API_KEY=sk-...
DATABASE_CREDENTIALS=mypassword
API_CONFIG=secret123

View File

@@ -2,6 +2,7 @@
title: "Enable Crew Studio"
description: "Enabling Crew Studio on CrewAI Enterprise"
icon: "comments"
mode: "wide"
---
<Tip>

View File

@@ -2,6 +2,7 @@
title: "HubSpot Trigger"
description: "Trigger CrewAI crews directly from HubSpot Workflows"
icon: "hubspot"
mode: "wide"
---
This guide provides a step-by-step process to set up HubSpot triggers for CrewAI Enterprise, enabling you to initiate crews directly from HubSpot Workflows.

View File

@@ -2,6 +2,7 @@
title: "HITL Workflows"
description: "Learn how to implement Human-In-The-Loop workflows in CrewAI for enhanced decision-making"
icon: "user-check"
mode: "wide"
---
Human-In-The-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. This guide shows you how to implement HITL within CrewAI.

View File

@@ -2,6 +2,7 @@
title: "Kickoff Crew"
description: "Kickoff a Crew on CrewAI Enterprise"
icon: "flag-checkered"
mode: "wide"
---
## Overview

View File

@@ -2,6 +2,7 @@
title: "React Component Export"
description: "Learn how to export and integrate CrewAI Enterprise React components into your applications"
icon: "react"
mode: "wide"
---
This guide explains how to export CrewAI Enterprise crews as React components and integrate them into your own applications.

View File

@@ -2,6 +2,7 @@
title: "Salesforce Trigger"
description: "Trigger CrewAI crews from Salesforce workflows for CRM automation"
icon: "salesforce"
mode: "wide"
---
CrewAI Enterprise can be triggered from Salesforce to automate customer relationship management workflows and enhance your sales operations.

View File

@@ -2,6 +2,7 @@
title: "Slack Trigger"
description: "Trigger CrewAI crews directly from Slack using slash commands"
icon: "slack"
mode: "wide"
---
This guide explains how to start a crew directly from Slack using CrewAI triggers.

View File

@@ -2,6 +2,7 @@
title: "Team Management"
description: "Learn how to invite and manage team members in your CrewAI Enterprise organization"
icon: "users"
mode: "wide"
---
As an administrator of a CrewAI Enterprise account, you can easily invite new team members to join your organization. This guide will walk you through the process step-by-step.

View File

@@ -2,6 +2,7 @@
title: "Update Crew"
description: "Updating a Crew on CrewAI Enterprise"
icon: "pencil"
mode: "wide"
---
<Note>

View File

@@ -2,6 +2,7 @@
title: "Webhook Automation"
description: "Automate CrewAI Enterprise workflows using webhooks with platforms like ActivePieces, Zapier, and Make.com"
icon: "webhook"
mode: "wide"
---
CrewAI Enterprise allows you to automate your workflow using webhooks. This article will guide you through the process of setting up and using webhooks to kickoff your crew execution, with a focus on integration with ActivePieces, a workflow automation platform similar to Zapier and Make.com.
@@ -79,14 +80,24 @@ CrewAI Enterprise allows you to automate your workflow using webhooks. This arti
## Webhook Output Examples
**Note:** Any `meta` object provided in your kickoff request will be included in all webhook payloads, allowing you to track requests and maintain context across the entire crew execution lifecycle.
<Tabs>
<Tab title="Step Webhook">
`stepWebhookUrl` - Callback that will be executed upon each agent inner thought
```json
{
"action": "**Preliminary Research Report on the Financial Industry for crewai Enterprise Solution**\n1. Industry Overview and Trends\nThe financial industry in ....\nConclusion:\nThe financial industry presents a fertile ground for implementing AI solutions like crewai, particularly in areas such as digital customer engagement, risk management, and regulatory compliance. Further engagement with the lead is recommended to better tailor the crewai solution to their specific needs and scale.",
"task_id": "97eba64f-958c-40a0-b61c-625fe635a3c0"
"prompt": "Research the financial industry for potential AI solutions",
"thought": "I need to conduct preliminary research on the financial industry",
"tool": "research_tool",
"tool_input": "financial industry AI solutions",
"result": "**Preliminary Research Report on the Financial Industry for crewai Enterprise Solution**\n1. Industry Overview and Trends\nThe financial industry in ....\nConclusion:\nThe financial industry presents a fertile ground for implementing AI solutions like crewai, particularly in areas such as digital customer engagement, risk management, and regulatory compliance. Further engagement with the lead is recommended to better tailor the crewai solution to their specific needs and scale.",
"kickoff_id": "97eba64f-958c-40a0-b61c-625fe635a3c0",
"meta": {
"requestId": "travel-req-123",
"source": "web-app"
}
}
```
</Tab>
@@ -95,8 +106,21 @@ CrewAI Enterprise allows you to automate your workflow using webhooks. This arti
```json
{
"description": "Using the information gathered from the lead's data, conduct preliminary research on the lead's industry, company background, and potential use cases for crewai. Focus on finding relevant data that can aid in scoring the lead and planning a strategy to pitch them crewai.The financial industry presents a fertile ground for implementing AI solutions like crewai, particularly in areas such as digital customer engagement, risk management, and regulatory compliance. Further engagement with the lead is recommended to better tailor the crewai solution to their specific needs and scale.",
"task_id": "97eba64f-958c-40a0-b61c-625fe635a3c0"
"description": "Using the information gathered from the lead's data, conduct preliminary research on the lead's industry, company background, and potential use cases for crewai. Focus on finding relevant data that can aid in scoring the lead and planning a strategy to pitch them crewai.",
"name": "Industry Research Task",
"expected_output": "Detailed research report on the financial industry",
"summary": "The financial industry presents a fertile ground for implementing AI solutions like crewai, particularly in areas such as digital customer engagement, risk management, and regulatory compliance. Further engagement with the lead is recommended to better tailor the crewai solution to their specific needs and scale.",
"agent": "Research Agent",
"output": "**Preliminary Research Report on the Financial Industry for crewai Enterprise Solution**\n1. Industry Overview and Trends\nThe financial industry in ....\nConclusion:\nThe financial industry presents a fertile ground for implementing AI solutions like crewai, particularly in areas such as digital customer engagement, risk management, and regulatory compliance.",
"output_json": {
"industry": "financial",
"key_opportunities": ["digital customer engagement", "risk management", "regulatory compliance"]
},
"kickoff_id": "97eba64f-958c-40a0-b61c-625fe635a3c0",
"meta": {
"requestId": "travel-req-123",
"source": "web-app"
}
}
```
</Tab>
@@ -105,8 +129,9 @@ CrewAI Enterprise allows you to automate your workflow using webhooks. This arti
```json
{
"task_id": "97eba64f-958c-40a0-b61c-625fe635a3c0",
"result": {
"kickoff_id": "97eba64f-958c-40a0-b61c-625fe635a3c0",
"result": "**Final Analysis Report**\n\nLead Score: Customer service enhancement and compliance are particularly relevant.\n\nTalking Points:\n- Highlight how crewai's AI solutions can transform customer service\n- Discuss crewai's potential for sustainability goals\n- Emphasize compliance capabilities\n- Stress adaptability for various operation scales",
"result_json": {
"lead_score": "Customer service enhancement, and compliance are particularly relevant.",
"talking_points": [
"Highlight how crewai's AI solutions can transform customer service with automated, personalized experiences and 24/7 support, improving both customer satisfaction and operational efficiency.",
@@ -114,6 +139,15 @@ CrewAI Enterprise allows you to automate your workflow using webhooks. This arti
"Emphasize crewai's ability to enhance compliance with evolving regulations through efficient data processing and reporting, reducing the risk of non-compliance penalties.",
"Stress the adaptability of crewai to support both extensive multinational operations and smaller, targeted projects, ensuring the solution grows with the institution's needs."
]
},
"token_usage": {
"total_tokens": 1250,
"prompt_tokens": 800,
"completion_tokens": 450
},
"meta": {
"requestId": "travel-req-123",
"source": "web-app"
}
}
```

View File

@@ -2,6 +2,7 @@
title: "Zapier Trigger"
description: "Trigger CrewAI crews from Zapier workflows to automate cross-app workflows"
icon: "bolt"
mode: "wide"
---
This guide will walk you through the process of setting up Zapier triggers for CrewAI Enterprise, allowing you to automate workflows between CrewAI Enterprise and other applications.

View File

@@ -0,0 +1,254 @@
---
title: Asana Integration
description: "Team task and project coordination with Asana integration for CrewAI."
icon: "circle"
mode: "wide"
---
## Overview
Enable your agents to manage tasks, projects, and team coordination through Asana. Create tasks, update project status, manage assignments, and streamline your team's workflow with AI-powered automation.
## Prerequisites
Before using the Asana integration, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account with an active subscription
- An Asana account with appropriate permissions
- Connected your Asana account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Asana Integration
### 1. Connect Your Asana Account
1. Navigate to [CrewAI Enterprise Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Asana** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for task and project management
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="ASANA_CREATE_COMMENT">
**Description:** Create a comment in Asana.
**Parameters:**
- `task` (string, required): Task ID - The ID of the Task the comment will be added to. The comment will be authored by the currently authenticated user.
- `text` (string, required): Text (example: "This is a comment.").
</Accordion>
<Accordion title="ASANA_CREATE_PROJECT">
**Description:** Create a project in Asana.
**Parameters:**
- `name` (string, required): Name (example: "Stuff to buy").
- `workspace` (string, required): Workspace - Use Connect Portal Workflow Settings to allow users to select which Workspace to create Projects in. Defaults to the user's first Workspace if left blank.
- `team` (string, optional): Team - Use Connect Portal Workflow Settings to allow users to select which Team to share this Project with. Defaults to the user's first Team if left blank.
- `notes` (string, optional): Notes (example: "These are things we need to purchase.").
</Accordion>
<Accordion title="ASANA_GET_PROJECTS">
**Description:** Get a list of projects in Asana.
**Parameters:**
- `archived` (string, optional): Archived - Choose "true" to show archived projects, "false" to display only active projects, or "default" to show both archived and active projects.
- Options: `default`, `true`, `false`
</Accordion>
<Accordion title="ASANA_GET_PROJECT_BY_ID">
**Description:** Get a project by ID in Asana.
**Parameters:**
- `projectFilterId` (string, required): Project ID.
</Accordion>
<Accordion title="ASANA_CREATE_TASK">
**Description:** Create a task in Asana.
**Parameters:**
- `name` (string, required): Name (example: "Task Name").
- `workspace` (string, optional): Workspace - Use Connect Portal Workflow Settings to allow users to select which Workspace to create Tasks in. Defaults to the user's first Workspace if left blank.
- `project` (string, optional): Project - Use Connect Portal Workflow Settings to allow users to select which Project to create this Task in.
- `notes` (string, optional): Notes.
- `dueOnDate` (string, optional): Due On - The date on which this task is due. Cannot be used together with Due At. (example: "YYYY-MM-DD").
- `dueAtDate` (string, optional): Due At - The date and time (ISO timestamp) at which this task is due. Cannot be used together with Due On. (example: "2019-09-15T02:06:58.147Z").
- `assignee` (string, optional): Assignee - The ID of the Asana user this task will be assigned to. Use Connect Portal Workflow Settings to allow users to select an Assignee.
- `gid` (string, optional): External ID - An ID from your application to associate this task with. You can use this ID to sync updates to this task later.
</Accordion>
<Accordion title="ASANA_UPDATE_TASK">
**Description:** Update a task in Asana.
**Parameters:**
- `taskId` (string, required): Task ID - The ID of the Task that will be updated.
- `completeStatus` (string, optional): Completed Status.
- Options: `true`, `false`
- `name` (string, optional): Name (example: "Task Name").
- `notes` (string, optional): Notes.
- `dueOnDate` (string, optional): Due On - The date on which this task is due. Cannot be used together with Due At. (example: "YYYY-MM-DD").
- `dueAtDate` (string, optional): Due At - The date and time (ISO timestamp) at which this task is due. Cannot be used together with Due On. (example: "2019-09-15T02:06:58.147Z").
- `assignee` (string, optional): Assignee - The ID of the Asana user this task will be assigned to. Use Connect Portal Workflow Settings to allow users to select an Assignee.
- `gid` (string, optional): External ID - An ID from your application to associate this task with. You can use this ID to sync updates to this task later.
</Accordion>
<Accordion title="ASANA_GET_TASKS">
**Description:** Get a list of tasks in Asana.
**Parameters:**
- `workspace` (string, optional): Workspace - The ID of the Workspace to filter tasks on. Use Connect Portal Workflow Settings to allow users to select a Workspace.
- `project` (string, optional): Project - The ID of the Project to filter tasks on. Use Connect Portal Workflow Settings to allow users to select a Project.
- `assignee` (string, optional): Assignee - The ID of the assignee to filter tasks on. Use Connect Portal Workflow Settings to allow users to select an Assignee.
- `completedSince` (string, optional): Completed since - Only return tasks that are either incomplete or that have been completed since this time (ISO or Unix timestamp). (example: "2014-04-25T16:15:47-04:00").
</Accordion>
<Accordion title="ASANA_GET_TASKS_BY_ID">
**Description:** Get a list of tasks by ID in Asana.
**Parameters:**
- `taskId` (string, required): Task ID.
</Accordion>
<Accordion title="ASANA_GET_TASK_BY_EXTERNAL_ID">
**Description:** Get a task by external ID in Asana.
**Parameters:**
- `gid` (string, required): External ID - The ID that this task is associated or synced with, from your application.
</Accordion>
<Accordion title="ASANA_ADD_TASK_TO_SECTION">
**Description:** Add a task to a section in Asana.
**Parameters:**
- `sectionId` (string, required): Section ID - The ID of the section to add this task to.
- `taskId` (string, required): Task ID - The ID of the task. (example: "1204619611402340").
- `beforeTaskId` (string, optional): Before Task ID - The ID of a task in this section that this task will be inserted before. Cannot be used with After Task ID. (example: "1204619611402340").
- `afterTaskId` (string, optional): After Task ID - The ID of a task in this section that this task will be inserted after. Cannot be used with Before Task ID. (example: "1204619611402340").
</Accordion>
<Accordion title="ASANA_GET_TEAMS">
**Description:** Get a list of teams in Asana.
**Parameters:**
- `workspace` (string, required): Workspace - Returns the teams in this workspace visible to the authorized user.
</Accordion>
<Accordion title="ASANA_GET_WORKSPACES">
**Description:** Get a list of workspaces in Asana.
**Parameters:** None required.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Asana Agent Setup
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Asana tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with Asana capabilities
asana_agent = Agent(
role="Project Manager",
goal="Manage tasks and projects in Asana efficiently",
backstory="An AI assistant specialized in project management and task coordination.",
tools=[enterprise_tools]
)
# Task to create a new project
create_project_task = Task(
description="Create a new project called 'Q1 Marketing Campaign' in the Marketing workspace",
agent=asana_agent,
expected_output="Confirmation that the project was created successfully with project ID"
)
# Run the task
crew = Crew(
agents=[asana_agent],
tasks=[create_project_task]
)
crew.kickoff()
```
### Filtering Specific Asana Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific Asana tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["asana_create_task", "asana_update_task", "asana_get_tasks"]
)
task_manager_agent = Agent(
role="Task Manager",
goal="Create and manage tasks efficiently",
backstory="An AI assistant that focuses on task creation and management.",
tools=enterprise_tools
)
# Task to create and assign a task
task_management = Task(
description="Create a task called 'Review quarterly reports' and assign it to the appropriate team member",
agent=task_manager_agent,
expected_output="Task created and assigned successfully"
)
crew = Crew(
agents=[task_manager_agent],
tasks=[task_management]
)
crew.kickoff()
```
### Advanced Project Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
project_coordinator = Agent(
role="Project Coordinator",
goal="Coordinate project activities and track progress",
backstory="An experienced project coordinator who ensures projects run smoothly.",
tools=[enterprise_tools]
)
# Complex task involving multiple Asana operations
coordination_task = Task(
description="""
1. Get all active projects in the workspace
2. For each project, get the list of incomplete tasks
3. Create a summary report task in the 'Management Reports' project
4. Add comments to overdue tasks to request status updates
""",
agent=project_coordinator,
expected_output="Summary report created and status update requests sent for overdue tasks"
)
crew = Crew(
agents=[project_coordinator],
tasks=[coordination_task]
)
crew.kickoff()
```

---
title: Box Integration
description: "File storage and document management with Box integration for CrewAI."
icon: "box"
mode: "wide"
---
## Overview
Enable your agents to manage files, folders, and documents through Box. Upload files, organize folder structures, search content, and streamline your team's document management with AI-powered automation.
## Prerequisites
Before using the Box integration, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account with an active subscription
- A Box account with appropriate permissions
- Connected your Box account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Box Integration
### 1. Connect Your Box Account
1. Navigate to [CrewAI Enterprise Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Box** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for file and folder management
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="BOX_SAVE_FILE">
**Description:** Save a file from URL in Box.
**Parameters:**
- `fileAttributes` (object, required): Attributes - File metadata including name, parent folder, and timestamps.
```json
{
"content_created_at": "2012-12-12T10:53:43-08:00",
"content_modified_at": "2012-12-12T10:53:43-08:00",
"name": "qwerty.png",
"parent": { "id": "1234567" }
}
```
- `file` (string, required): File URL - Files must be smaller than 50MB in size. (example: "https://picsum.photos/200/300").
</Accordion>
<Accordion title="BOX_SAVE_FILE_FROM_OBJECT">
**Description:** Save a file in Box.
**Parameters:**
- `file` (string, required): File - Accepts a File Object containing file data. Files must be smaller than 50MB in size.
- `fileName` (string, required): File Name (example: "qwerty.png").
- `folder` (string, optional): Folder - Use Connect Portal Workflow Settings to allow users to select the File's Folder destination. Defaults to the user's root folder if left blank.
</Accordion>
<Accordion title="BOX_GET_FILE_BY_ID">
**Description:** Get a file by ID in Box.
**Parameters:**
- `fileId` (string, required): File ID - The unique identifier that represents a file. (example: "12345").
</Accordion>
<Accordion title="BOX_LIST_FILES">
**Description:** List files in Box.
**Parameters:**
- `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0").
- `filterFormula` (object, optional): A filter in disjunctive normal form - OR of AND groups of single conditions.
```json
{
  "operator": "OR",
  "conditions": [
    {
      "operator": "AND",
      "conditions": [
        {
          "field": "direction",
          "operator": "$stringExactlyMatches",
          "value": "ASC"
        }
      ]
    }
  ]
}
```
</Accordion>
<Accordion title="BOX_CREATE_FOLDER">
**Description:** Create a folder in Box.
**Parameters:**
- `folderName` (string, required): Name - The name for the new folder. (example: "New Folder").
- `folderParent` (object, required): Parent Folder - The parent folder where the new folder will be created.
```json
{
"id": "123456"
}
```
</Accordion>
<Accordion title="BOX_MOVE_FOLDER">
**Description:** Move a folder in Box.
**Parameters:**
- `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0").
- `folderName` (string, required): Name - The name for the folder. (example: "New Folder").
- `folderParent` (object, required): Parent Folder - The new parent folder destination.
```json
{
"id": "123456"
}
```
</Accordion>
<Accordion title="BOX_GET_FOLDER_BY_ID">
**Description:** Get a folder by ID in Box.
**Parameters:**
- `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0").
</Accordion>
<Accordion title="BOX_SEARCH_FOLDERS">
**Description:** Search folders in Box.
**Parameters:**
- `folderId` (string, required): Folder ID - The folder to search within.
- `filterFormula` (object, optional): A filter in disjunctive normal form - OR of AND groups of single conditions.
```json
{
  "operator": "OR",
  "conditions": [
    {
      "operator": "AND",
      "conditions": [
        {
          "field": "sort",
          "operator": "$stringExactlyMatches",
          "value": "name"
        }
      ]
    }
  ]
}
```
</Accordion>
<Accordion title="BOX_DELETE_FOLDER">
**Description:** Delete a folder in Box.
**Parameters:**
- `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0").
- `recursive` (boolean, optional): Recursive - Delete a folder that is not empty by recursively deleting the folder and all of its content.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Box Agent Setup
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Box tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with Box capabilities
box_agent = Agent(
role="Document Manager",
goal="Manage files and folders in Box efficiently",
backstory="An AI assistant specialized in document management and file organization.",
tools=[enterprise_tools]
)
# Task to create a folder structure
create_structure_task = Task(
description="Create a folder called 'Project Files' in the root directory and upload a document from URL",
agent=box_agent,
expected_output="Folder created and file uploaded successfully"
)
# Run the task
crew = Crew(
agents=[box_agent],
tasks=[create_structure_task]
)
crew.kickoff()
```
### Filtering Specific Box Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific Box tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["box_create_folder", "box_save_file", "box_list_files"]
)
file_organizer_agent = Agent(
role="File Organizer",
goal="Organize and manage file storage efficiently",
backstory="An AI assistant that focuses on file organization and storage management.",
tools=enterprise_tools
)
# Task to organize files
organization_task = Task(
description="Create a folder structure for the marketing team and organize existing files",
agent=file_organizer_agent,
expected_output="Folder structure created and files organized"
)
crew = Crew(
agents=[file_organizer_agent],
tasks=[organization_task]
)
crew.kickoff()
```
### Advanced File Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
file_manager = Agent(
role="File Manager",
goal="Maintain organized file structure and manage document lifecycle",
backstory="An experienced file manager who ensures documents are properly organized and accessible.",
tools=[enterprise_tools]
)
# Complex task involving multiple Box operations
management_task = Task(
description="""
1. List all files in the root folder
2. Create monthly archive folders for the current year
3. Move old files to appropriate archive folders
4. Generate a summary report of the file organization
""",
agent=file_manager,
expected_output="Files organized into archive structure with summary report"
)
crew = Crew(
agents=[file_manager],
tasks=[management_task]
)
crew.kickoff()
```

---
title: ClickUp Integration
description: "Task and productivity management with ClickUp integration for CrewAI."
icon: "list-check"
mode: "wide"
---
## Overview
Enable your agents to manage tasks, projects, and productivity workflows through ClickUp. Create and update tasks, organize projects, manage team assignments, and streamline your productivity management with AI-powered automation.
## Prerequisites
Before using the ClickUp integration, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account with an active subscription
- A ClickUp account with appropriate permissions
- Connected your ClickUp account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up ClickUp Integration
### 1. Connect Your ClickUp Account
1. Navigate to [CrewAI Enterprise Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **ClickUp** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for task and project management
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="CLICKUP_SEARCH_TASKS">
**Description:** Search for tasks in ClickUp using advanced filters.
**Parameters:**
- `taskFilterFormula` (object, optional): A filter in disjunctive normal form - OR of AND groups of single conditions.
```json
{
  "operator": "OR",
  "conditions": [
    {
      "operator": "AND",
      "conditions": [
        {
          "field": "statuses%5B%5D",
          "operator": "$stringExactlyMatches",
          "value": "open"
        }
      ]
    }
  ]
}
```
Available fields: `space_ids%5B%5D`, `project_ids%5B%5D`, `list_ids%5B%5D`, `statuses%5B%5D`, `include_closed`, `assignees%5B%5D`, `tags%5B%5D`, `due_date_gt`, `due_date_lt`, `date_created_gt`, `date_created_lt`, `date_updated_gt`, `date_updated_lt`
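For example, a filter that matches open tasks in a particular list could place two conditions in a single AND group; the list ID is a placeholder, and applying `$stringExactlyMatches` to `list_ids%5B%5D` is an assumption carried over from the single-condition example above:
```json
{
  "operator": "OR",
  "conditions": [
    {
      "operator": "AND",
      "conditions": [
        {
          "field": "statuses%5B%5D",
          "operator": "$stringExactlyMatches",
          "value": "open"
        },
        {
          "field": "list_ids%5B%5D",
          "operator": "$stringExactlyMatches",
          "value": "901100123456"
        }
      ]
    }
  ]
}
```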
</Accordion>
<Accordion title="CLICKUP_GET_TASK_IN_LIST">
**Description:** Get tasks in a specific list in ClickUp.
**Parameters:**
- `listId` (string, required): List - Select a List to get tasks from. Use Connect Portal User Settings to allow users to select a ClickUp List.
- `taskFilterFormula` (string, optional): Search for tasks that match the specified filters. (example: `name=task1`).
</Accordion>
<Accordion title="CLICKUP_CREATE_TASK">
**Description:** Create a task in ClickUp.
**Parameters:**
- `listId` (string, required): List - Select a List to create this task in. Use Connect Portal User Settings to allow users to select a ClickUp List.
- `name` (string, required): Name - The task name.
- `description` (string, optional): Description - Task description.
- `status` (string, optional): Status - Select a Status for this task. Use Connect Portal User Settings to allow users to select a ClickUp Status.
- `assignees` (string, optional): Assignees - Select a Member (or an array of member IDs) to be assigned to this task. Use Connect Portal User Settings to allow users to select a ClickUp Member.
- `dueDate` (string, optional): Due Date - Specify a date for this task to be due on.
- `additionalFields` (string, optional): Additional Fields - Specify additional fields to include on this task as JSON.
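A hedged sketch of `additionalFields` is shown below; the keys (`priority`, `tags`) mirror the public ClickUp task API and are assumptions here, so confirm which keys your connector forwards:
```json
{
  "priority": 2,
  "tags": ["planning", "q1"]
}
```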
</Accordion>
<Accordion title="CLICKUP_UPDATE_TASK">
**Description:** Update a task in ClickUp.
**Parameters:**
- `taskId` (string, required): Task ID - The ID of the task to update.
- `listId` (string, required): List - Select the List that contains the task being updated. Use Connect Portal User Settings to allow users to select a ClickUp List.
- `name` (string, optional): Name - The task name.
- `description` (string, optional): Description - Task description.
- `status` (string, optional): Status - Select a Status for this task. Use Connect Portal User Settings to allow users to select a ClickUp Status.
- `assignees` (string, optional): Assignees - Select a Member (or an array of member IDs) to be assigned to this task. Use Connect Portal User Settings to allow users to select a ClickUp Member.
- `dueDate` (string, optional): Due Date - Specify a date for this task to be due on.
- `additionalFields` (string, optional): Additional Fields - Specify additional fields to include on this task as JSON.
</Accordion>
<Accordion title="CLICKUP_DELETE_TASK">
**Description:** Delete a task in ClickUp.
**Parameters:**
- `taskId` (string, required): Task ID - The ID of the task to delete.
</Accordion>
<Accordion title="CLICKUP_GET_LIST">
**Description:** Get List information in ClickUp.
**Parameters:**
- `spaceId` (string, required): Space ID - The ID of the space containing the lists.
</Accordion>
<Accordion title="CLICKUP_GET_CUSTOM_FIELDS_IN_LIST">
**Description:** Get Custom Fields in a List in ClickUp.
**Parameters:**
- `listId` (string, required): List ID - The ID of the list to get custom fields from.
</Accordion>
<Accordion title="CLICKUP_GET_ALL_FIELDS_IN_LIST">
**Description:** Get All Fields in a List in ClickUp.
**Parameters:**
- `listId` (string, required): List ID - The ID of the list to get all fields from.
</Accordion>
<Accordion title="CLICKUP_GET_SPACE">
**Description:** Get Space information in ClickUp.
**Parameters:**
- `spaceId` (string, optional): Space ID - The ID of the space to retrieve.
</Accordion>
<Accordion title="CLICKUP_GET_FOLDERS">
**Description:** Get Folders in ClickUp.
**Parameters:**
- `spaceId` (string, required): Space ID - The ID of the space containing the folders.
</Accordion>
<Accordion title="CLICKUP_GET_MEMBER">
**Description:** Get Member information in ClickUp.
**Parameters:** None required.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic ClickUp Agent Setup
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (ClickUp tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with ClickUp capabilities
clickup_agent = Agent(
role="Task Manager",
goal="Manage tasks and projects in ClickUp efficiently",
backstory="An AI assistant specialized in task management and productivity coordination.",
tools=[enterprise_tools]
)
# Task to create a new task
create_task = Task(
description="Create a task called 'Review Q1 Reports' in the Marketing list with high priority",
agent=clickup_agent,
expected_output="Task created successfully with task ID"
)
# Run the task
crew = Crew(
agents=[clickup_agent],
tasks=[create_task]
)
crew.kickoff()
```
### Filtering Specific ClickUp Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific ClickUp tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["clickup_create_task", "clickup_update_task", "clickup_search_tasks"]
)
task_coordinator = Agent(
role="Task Coordinator",
goal="Create and manage tasks efficiently",
backstory="An AI assistant that focuses on task creation and status management.",
tools=enterprise_tools
)
# Task to manage task workflow
task_workflow = Task(
description="Create a task for project planning and assign it to the development team",
agent=task_coordinator,
expected_output="Task created and assigned successfully"
)
crew = Crew(
agents=[task_coordinator],
tasks=[task_workflow]
)
crew.kickoff()
```
### Advanced Project Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
project_manager = Agent(
role="Project Manager",
goal="Coordinate project activities and track team productivity",
backstory="An experienced project manager who ensures projects are delivered on time.",
tools=[enterprise_tools]
)
# Complex task involving multiple ClickUp operations
project_coordination = Task(
description="""
1. Get all open tasks in the current space
2. Identify overdue tasks and update their status
3. Create a weekly report task summarizing project progress
4. Assign the report task to the team lead
""",
agent=project_manager,
expected_output="Project status updated and weekly report task created and assigned"
)
crew = Crew(
agents=[project_manager],
tasks=[project_coordination]
)
crew.kickoff()
```
### Task Search and Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
task_analyst = Agent(
role="Task Analyst",
goal="Analyze task patterns and optimize team productivity",
backstory="An AI assistant that analyzes task data to improve team efficiency.",
tools=[enterprise_tools]
)
# Task to analyze and optimize task distribution
task_analysis = Task(
description="""
Search for all tasks assigned to team members in the last 30 days,
analyze completion patterns, and create optimization recommendations
""",
agent=task_analyst,
expected_output="Task analysis report with optimization recommendations"
)
crew = Crew(
agents=[task_analyst],
tasks=[task_analysis]
)
crew.kickoff()
```
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with ClickUp integration setup or troubleshooting.
</Card>

---
title: GitHub Integration
description: "Repository and issue management with GitHub integration for CrewAI."
icon: "github"
mode: "wide"
---
## Overview
Enable your agents to manage repositories, issues, and releases through GitHub. Create and update issues, manage releases, track project development, and streamline your software development workflow with AI-powered automation.
## Prerequisites
Before using the GitHub integration, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account with an active subscription
- A GitHub account with appropriate repository permissions
- Connected your GitHub account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up GitHub Integration
### 1. Connect Your GitHub Account
1. Navigate to [CrewAI Enterprise Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **GitHub** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for repository and issue management
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="GITHUB_CREATE_ISSUE">
**Description:** Create an issue in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Issue. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Issue.
- `title` (string, required): Issue Title - Specify the title of the issue to create.
- `body` (string, optional): Issue Body - Specify the body contents of the issue to create.
- `assignees` (string, optional): Assignees - Specify the GitHub login(s) of the assignee(s) for this issue as an array of strings. (example: `["octocat"]`).
</Accordion>
<Accordion title="GITHUB_UPDATE_ISSUE">
**Description:** Update an issue in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Issue. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Issue.
- `issue_number` (string, required): Issue Number - Specify the number of the issue to update.
- `title` (string, required): Issue Title - Specify the title of the issue to update.
- `body` (string, optional): Issue Body - Specify the body contents of the issue to update.
- `assignees` (string, optional): Assignees - Specify the GitHub login(s) of the assignee(s) for this issue as an array of strings. (example: `["octocat"]`).
- `state` (string, optional): State - Specify the updated state of the issue.
- Options: `open`, `closed`
</Accordion>
<Accordion title="GITHUB_GET_ISSUE_BY_NUMBER">
**Description:** Get an issue by number in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Issue. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Issue.
- `issue_number` (string, required): Issue Number - Specify the number of the issue to fetch.
</Accordion>
<Accordion title="GITHUB_LOCK_ISSUE">
**Description:** Lock an issue in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Issue. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Issue.
- `issue_number` (string, required): Issue Number - Specify the number of the issue to lock.
- `lock_reason` (string, required): Lock Reason - Specify a reason for locking the issue or pull request conversation.
- Options: `off-topic`, `too heated`, `resolved`, `spam`
</Accordion>
<Accordion title="GITHUB_SEARCH_ISSUE">
**Description:** Search for issues in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Issue. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Issue.
- `filter` (object, required): A filter in disjunctive normal form - OR of AND groups of single conditions.
```json
{
  "operator": "OR",
  "conditions": [
    {
      "operator": "AND",
      "conditions": [
        {
          "field": "assignee",
          "operator": "$stringExactlyMatches",
          "value": "octocat"
        }
      ]
    }
  ]
}
```
Available fields: `assignee`, `creator`, `mentioned`, `labels`
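For instance, issues opened by a particular user and carrying a given label could be matched with one AND group; reusing `$stringExactlyMatches` for `labels` is an assumption based on the `assignee` example above, and the values are placeholders:
```json
{
  "operator": "OR",
  "conditions": [
    {
      "operator": "AND",
      "conditions": [
        {
          "field": "creator",
          "operator": "$stringExactlyMatches",
          "value": "octocat"
        },
        {
          "field": "labels",
          "operator": "$stringExactlyMatches",
          "value": "bug"
        }
      ]
    }
  ]
}
```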
</Accordion>
<Accordion title="GITHUB_CREATE_RELEASE">
**Description:** Create a release in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Release. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Release.
- `tag_name` (string, required): Name - Specify the name of the release tag to be created. (example: "v1.0.0").
- `target_commitish` (string, optional): Target - Specify the target of the release. This can either be a branch name or a commit SHA. Defaults to the main branch. (example: "master").
- `body` (string, optional): Body - Specify a description for this release.
- `draft` (string, optional): Draft - Specify whether the created release should be a draft (unpublished) release.
- Options: `true`, `false`
- `prerelease` (string, optional): Prerelease - Specify whether the created release should be a prerelease.
- Options: `true`, `false`
- `discussion_category_name` (string, optional): Discussion Category Name - If specified, a discussion of the specified category is created and linked to the release. The value must be a category that already exists in the repository.
- `generate_release_notes` (string, optional): Release Notes - Specify whether the created release should automatically generate release notes from the name and body provided.
- Options: `true`, `false`
</Accordion>
<Accordion title="GITHUB_UPDATE_RELEASE">
**Description:** Update a release in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Release. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Release.
- `id` (string, required): Release ID - Specify the ID of the release to update.
- `tag_name` (string, optional): Name - Specify the name of the release tag to be updated. (example: "v1.0.0").
- `target_commitish` (string, optional): Target - Specify the target of the release. This can either be a branch name or a commit SHA. Defaults to the main branch. (example: "master").
- `body` (string, optional): Body - Specify a description for this release.
- `draft` (string, optional): Draft - Specify whether the release should be a draft (unpublished) release.
- Options: `true`, `false`
- `prerelease` (string, optional): Prerelease - Specify whether the release should be a prerelease.
- Options: `true`, `false`
- `discussion_category_name` (string, optional): Discussion Category Name - If specified, a discussion of the specified category is created and linked to the release. The value must be a category that already exists in the repository.
- `generate_release_notes` (string, optional): Release Notes - Specify whether the release should automatically generate release notes from the name and body provided.
- Options: `true`, `false`
</Accordion>
<Accordion title="GITHUB_GET_RELEASE_BY_ID">
**Description:** Get a release by ID in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Release. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Release.
- `id` (string, required): Release ID - Specify the release ID of the release to fetch.
</Accordion>
<Accordion title="GITHUB_GET_RELEASE_BY_TAG_NAME">
**Description:** Get a release by tag name in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Release. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Release.
- `tag_name` (string, required): Name - Specify the tag of the release to fetch. (example: "v1.0.0").
</Accordion>
<Accordion title="GITHUB_DELETE_RELEASE">
**Description:** Delete a release in GitHub.
**Parameters:**
- `owner` (string, required): Owner - Specify the name of the account owner of the associated repository for this Release. (example: "abc").
- `repo` (string, required): Repository - Specify the name of the associated repository for this Release.
- `id` (string, required): Release ID - Specify the ID of the release to delete.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic GitHub Agent Setup
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (GitHub tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with GitHub capabilities
github_agent = Agent(
role="Repository Manager",
goal="Manage GitHub repositories, issues, and releases efficiently",
backstory="An AI assistant specialized in repository management and issue tracking.",
tools=[enterprise_tools]
)
# Task to create a new issue
create_issue_task = Task(
description="Create a bug report issue for the login functionality in the main repository",
agent=github_agent,
expected_output="Issue created successfully with issue number"
)
# Run the task
crew = Crew(
agents=[github_agent],
tasks=[create_issue_task]
)
crew.kickoff()
```
### Filtering Specific GitHub Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific GitHub tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["github_create_issue", "github_update_issue", "github_search_issue"]
)
issue_manager = Agent(
role="Issue Manager",
goal="Create and manage GitHub issues efficiently",
backstory="An AI assistant that focuses on issue tracking and management.",
tools=enterprise_tools
)
# Task to manage issue workflow
issue_workflow = Task(
description="Create a feature request issue and assign it to the development team",
agent=issue_manager,
expected_output="Feature request issue created and assigned successfully"
)
crew = Crew(
agents=[issue_manager],
tasks=[issue_workflow]
)
crew.kickoff()
```
### Release Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
release_manager = Agent(
role="Release Manager",
goal="Manage software releases and versioning",
backstory="An experienced release manager who handles version control and release processes.",
tools=[enterprise_tools]
)
# Task to create a new release
release_task = Task(
description="""
Create a new release v2.1.0 for the project with:
- Auto-generated release notes
- Target the main branch
- Include a description of new features and bug fixes
""",
agent=release_manager,
expected_output="Release v2.1.0 created successfully with release notes"
)
crew = Crew(
agents=[release_manager],
tasks=[release_task]
)
crew.kickoff()
```
### Issue Tracking and Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
project_coordinator = Agent(
role="Project Coordinator",
goal="Track and coordinate project issues and development progress",
backstory="An AI assistant that helps coordinate development work and track project progress.",
tools=[enterprise_tools]
)
# Complex task involving multiple GitHub operations
coordination_task = Task(
description="""
1. Search for all open issues assigned to the current milestone
2. Identify overdue issues and update their priority labels
3. Create a weekly progress report issue
4. Lock resolved issues that have been inactive for 30 days
""",
agent=project_coordinator,
expected_output="Project coordination completed with progress report and issue management"
)
crew = Crew(
agents=[project_coordinator],
tasks=[coordination_task]
)
crew.kickoff()
```
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with GitHub integration setup or troubleshooting.
</Card>

---
title: Gmail Integration
description: "Email and contact management with Gmail integration for CrewAI."
icon: "envelope"
mode: "wide"
---
## Overview
Enable your agents to manage emails, contacts, and drafts through Gmail. Send emails, search messages, manage contacts, create drafts, and streamline your email communications with AI-powered automation.
## Prerequisites
Before using the Gmail integration, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account with an active subscription
- A Gmail account with appropriate permissions
- Connected your Gmail account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Gmail Integration
### 1. Connect Your Gmail Account
1. Navigate to [CrewAI Enterprise Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Gmail** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for email and contact management
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="GMAIL_SEND_EMAIL">
**Description:** Send an email in Gmail.
**Parameters:**
- `toRecipients` (array, required): To - Specify the recipients as either a single string or a JSON array.
```json
[
"recipient1@domain.com",
"recipient2@domain.com"
]
```
- `from` (string, required): From - Specify the email of the sender.
- `subject` (string, required): Subject - Specify the subject of the message.
- `messageContent` (string, required): Message Content - Specify the content of the email message as plain text or HTML.
- `attachments` (string, optional): Attachments - Accepts either a single file object or a JSON array of file objects.
- `additionalHeaders` (object, optional): Additional Headers - Specify any additional header fields here.
```json
{
"reply-to": "Sender Name <sender@domain.com>"
}
```
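Taken together, a typical send call might supply parameters like the following sketch; every address, subject, and message body is a placeholder:
```json
{
  "toRecipients": ["recipient1@domain.com"],
  "from": "sender@domain.com",
  "subject": "Weekly project update",
  "messageContent": "<p>Hi team, the weekly summary is attached.</p>",
  "additionalHeaders": {
    "reply-to": "Sender Name <sender@domain.com>"
  }
}
```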
</Accordion>
<Accordion title="GMAIL_GET_EMAIL_BY_ID">
**Description:** Get an email by ID in Gmail.
**Parameters:**
- `userId` (string, required): User ID - Specify the user's email address. (example: "user@domain.com").
- `messageId` (string, required): Message ID - Specify the ID of the message to retrieve.
</Accordion>
<Accordion title="GMAIL_SEARCH_FOR_EMAIL">
**Description:** Search for emails in Gmail using advanced filters.
**Parameters:**
- `emailFilterFormula` (object, optional): A filter in disjunctive normal form - OR of AND groups of single conditions.
```json
{
  "operator": "OR",
  "conditions": [
    {
      "operator": "AND",
      "conditions": [
        {
          "field": "from",
          "operator": "$stringContains",
          "value": "example@domain.com"
        }
      ]
    }
  ]
}
```
Available fields: `from`, `to`, `date`, `label`, `subject`, `cc`, `bcc`, `category`, `deliveredto:`, `size`, `filename`, `older_than`, `newer_than`, `list`, `is:important`, `is:unread`, `is:snoozed`, `is:starred`, `is:read`, `has:drive`, `has:document`, `has:spreadsheet`, `has:presentation`, `has:attachment`, `has:youtube`, `has:userlabels`
- `paginationParameters` (object, optional): Pagination Parameters.
```json
{
"pageCursor": "page_cursor_string"
}
```
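As a sketch, a search that narrows results by both sender and subject could AND two string conditions; the fields and the `$stringContains` operator come from the lists above, while the values are placeholders:
```json
{
  "operator": "OR",
  "conditions": [
    {
      "operator": "AND",
      "conditions": [
        {
          "field": "from",
          "operator": "$stringContains",
          "value": "billing@domain.com"
        },
        {
          "field": "subject",
          "operator": "$stringContains",
          "value": "invoice"
        }
      ]
    }
  ]
}
```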
</Accordion>
<Accordion title="GMAIL_DELETE_EMAIL">
**Description:** Delete an email in Gmail.
**Parameters:**
- `userId` (string, required): User ID - Specify the user's email address. (example: "user@domain.com").
- `messageId` (string, required): Message ID - Specify the ID of the message to trash.
</Accordion>
<Accordion title="GMAIL_CREATE_A_CONTACT">
**Description:** Create a contact in Gmail.
**Parameters:**
- `givenName` (string, required): Given Name - Specify the Given Name of the Contact to create. (example: "John").
- `familyName` (string, required): Family Name - Specify the Family Name of the Contact to create. (example: "Doe").
- `email` (string, required): Email - Specify the Email Address of the Contact to create.
- `additionalFields` (object, optional): Additional Fields - Additional contact information.
```json
{
"addresses": [
{
"streetAddress": "1000 North St.",
"city": "Los Angeles"
}
]
}
```
</Accordion>
<Accordion title="GMAIL_GET_CONTACT_BY_RESOURCE_NAME">
**Description:** Get a contact by resource name in Gmail.
**Parameters:**
- `resourceName` (string, required): Resource Name - Specify the resource name of the contact to fetch.
</Accordion>
<Accordion title="GMAIL_SEARCH_FOR_CONTACT">
**Description:** Search for a contact in Gmail.
**Parameters:**
- `searchTerm` (string, required): Term - Specify a search term used to find near or exact matches on the `names`, `nickNames`, `emailAddresses`, `phoneNumbers`, or `organizations` contact properties.
</Accordion>
<Accordion title="GMAIL_DELETE_CONTACT">
**Description:** Delete a contact in Gmail.
**Parameters:**
- `resourceName` (string, required): Resource Name - Specify the resource name of the contact to delete.
</Accordion>
<Accordion title="GMAIL_CREATE_DRAFT">
**Description:** Create a draft in Gmail.
**Parameters:**
- `toRecipients` (array, optional): To - Specify the recipients as either a single string or a JSON array.
```json
[
"recipient1@domain.com",
"recipient2@domain.com"
]
```
- `from` (string, optional): From - Specify the email of the sender.
- `subject` (string, optional): Subject - Specify the subject of the message.
- `messageContent` (string, optional): Message Content - Specify the content of the email message as plain text or HTML.
- `attachments` (string, optional): Attachments - Accepts either a single file object or a JSON array of file objects.
- `additionalHeaders` (object, optional): Additional Headers - Specify any additional header fields here.
```json
{
"reply-to": "Sender Name <sender@domain.com>"
}
```
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Gmail Agent Setup
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Gmail tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with Gmail capabilities
gmail_agent = Agent(
role="Email Manager",
goal="Manage email communications and contacts efficiently",
backstory="An AI assistant specialized in email management and communication.",
tools=[enterprise_tools]
)
# Task to send a follow-up email
send_email_task = Task(
description="Send a follow-up email to john@example.com about the project update meeting",
agent=gmail_agent,
expected_output="Email sent successfully with confirmation"
)
# Run the task
crew = Crew(
agents=[gmail_agent],
tasks=[send_email_task]
)
crew.kickoff()
```
### Filtering Specific Gmail Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific Gmail tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["gmail_send_email", "gmail_search_for_email", "gmail_create_draft"]
)
email_coordinator = Agent(
role="Email Coordinator",
goal="Coordinate email communications and manage drafts",
backstory="An AI assistant that focuses on email coordination and draft management.",
tools=enterprise_tools
)
# Task to prepare and send emails
email_coordination = Task(
description="Search for emails from the marketing team, create a summary draft, and send it to stakeholders",
agent=email_coordinator,
expected_output="Summary email sent to stakeholders"
)
crew = Crew(
agents=[email_coordinator],
tasks=[email_coordination]
)
crew.kickoff()
```
### Contact Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
contact_manager = Agent(
role="Contact Manager",
goal="Manage and organize email contacts efficiently",
backstory="An experienced contact manager who maintains organized contact databases.",
tools=[enterprise_tools]
)
# Task to manage contacts
contact_task = Task(
description="""
1. Search for contacts from the 'example.com' domain
2. Create new contacts for recent email senders not in the contact list
3. Update contact information with recent interaction data
""",
agent=contact_manager,
expected_output="Contact database updated with new contacts and recent interactions"
)
crew = Crew(
agents=[contact_manager],
tasks=[contact_task]
)
crew.kickoff()
```
### Email Search and Analysis
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
email_analyst = Agent(
role="Email Analyst",
goal="Analyze email patterns and provide insights",
backstory="An AI assistant that analyzes email data to provide actionable insights.",
tools=[enterprise_tools]
)
# Task to analyze email patterns
analysis_task = Task(
description="""
Search for all unread emails from the last 7 days,
categorize them by sender domain,
and create a summary report of communication patterns
""",
agent=email_analyst,
expected_output="Email analysis report with communication patterns and recommendations"
)
crew = Crew(
agents=[email_analyst],
tasks=[analysis_task]
)
crew.kickoff()
```
### Automated Email Workflows
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
workflow_manager = Agent(
role="Email Workflow Manager",
goal="Automate email workflows and responses",
backstory="An AI assistant that manages automated email workflows and responses.",
tools=[enterprise_tools]
)
# Complex task involving multiple Gmail operations
workflow_task = Task(
description="""
1. Search for emails with 'urgent' in the subject from the last 24 hours
2. Create draft responses for each urgent email
3. Send automated acknowledgment emails to senders
4. Create a summary report of urgent items requiring attention
""",
agent=workflow_manager,
expected_output="Urgent emails processed with automated responses and summary report"
)
crew = Crew(
agents=[workflow_manager],
tasks=[workflow_task]
)
crew.kickoff()
```
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Gmail integration setup or troubleshooting.
</Card>

---
title: Google Calendar Integration
description: "Event and schedule management with Google Calendar integration for CrewAI."
icon: "calendar"
mode: "wide"
---
## Overview
Enable your agents to manage calendar events, schedules, and availability through Google Calendar. Create and update events, manage attendees, check availability, and streamline your scheduling workflows with AI-powered automation.
## Prerequisites
Before using the Google Calendar integration, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account with an active subscription
- A Google account with Google Calendar access
- Connected your Google account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Google Calendar Integration
### 1. Connect Your Google Account
1. Navigate to [CrewAI Enterprise Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Google Calendar** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for calendar and contact access
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="GOOGLE_CALENDAR_CREATE_EVENT">
**Description:** Create an event in Google Calendar.
**Parameters:**
- `eventName` (string, required): Event name.
- `startTime` (string, required): Start time - Accepts Unix timestamp or ISO8601 date formats.
- `endTime` (string, optional): End time - Defaults to one hour after the start time if left blank.
- `calendar` (string, optional): Calendar - Use Connect Portal Workflow Settings to allow users to select which calendar the event will be added to. Defaults to the user's primary calendar if left blank.
- `attendees` (string, optional): Attendees - Accepts an array of email addresses or email addresses separated by commas.
- `eventLocation` (string, optional): Event location.
- `eventDescription` (string, optional): Event description.
- `eventId` (string, optional): Event ID - An ID from your application to associate this event with. You can use this ID to sync updates to this event later.
- `includeMeetLink` (boolean, optional): Include Google Meet link? - Automatically creates Google Meet conference link for this event.
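For reference, a create-event call could supply parameters like the sketch below; all values are illustrative and only the parameter names listed above are used:
```json
{
  "eventName": "Quarterly planning sync",
  "startTime": "2025-05-06T15:00:00Z",
  "endTime": "2025-05-06T16:00:00Z",
  "attendees": "alice@example.com, bob@example.com",
  "eventLocation": "Conference Room B",
  "includeMeetLink": true
}
```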
</Accordion>
<Accordion title="GOOGLE_CALENDAR_UPDATE_EVENT">
**Description:** Update an existing event in Google Calendar.
**Parameters:**
- `eventId` (string, required): Event ID - The ID of the event to update.
- `eventName` (string, optional): Event name.
- `startTime` (string, optional): Start time - Accepts Unix timestamp or ISO8601 date formats.
- `endTime` (string, optional): End time - Defaults to one hour after the start time if left blank.
- `calendar` (string, optional): Calendar - Use Connect Portal Workflow Settings to allow users to select which calendar the event belongs to. Defaults to the user's primary calendar if left blank.
- `attendees` (string, optional): Attendees - Accepts an array of email addresses or email addresses separated by commas.
- `eventLocation` (string, optional): Event location.
- `eventDescription` (string, optional): Event description.
</Accordion>
<Accordion title="GOOGLE_CALENDAR_LIST_EVENTS">
**Description:** List events from Google Calendar.
**Parameters:**
- `calendar` (string, optional): Calendar - Use Connect Portal Workflow Settings to allow users to select which calendar to list events from. Defaults to the user's primary calendar if left blank.
- `after` (string, optional): After - Filters events that start after the provided date (Unix in milliseconds or ISO timestamp). (example: "2025-04-12T10:00:00Z or 1712908800000").
- `before` (string, optional): Before - Filters events that end before the provided date (Unix in milliseconds or ISO timestamp). (example: "2025-04-12T10:00:00Z or 1712908800000").
</Accordion>
<Accordion title="GOOGLE_CALENDAR_GET_EVENT_BY_ID">
**Description:** Get a specific event by ID from Google Calendar.
**Parameters:**
- `eventId` (string, required): Event ID.
- `calendar` (string, optional): Calendar - Use Connect Portal Workflow Settings to allow users to select which calendar to look up the event in. Defaults to the user's primary calendar if left blank.
</Accordion>
<Accordion title="GOOGLE_CALENDAR_DELETE_EVENT">
**Description:** Delete an event from Google Calendar.
**Parameters:**
- `eventId` (string, required): Event ID - The ID of the calendar event to be deleted.
- `calendar` (string, optional): Calendar - Use Connect Portal Workflow Settings to allow users to select which calendar the event will be deleted from. Defaults to the user's primary calendar if left blank.
</Accordion>
<Accordion title="GOOGLE_CALENDAR_GET_CONTACTS">
**Description:** Get contacts from Google Calendar.
**Parameters:**
- `paginationParameters` (object, optional): Pagination Parameters.
```json
{
"pageCursor": "page_cursor_string"
}
```
</Accordion>
<Accordion title="GOOGLE_CALENDAR_SEARCH_CONTACTS">
**Description:** Search for contacts in Google Calendar.
**Parameters:**
- `query` (string, optional): Search query to search contacts.
</Accordion>
<Accordion title="GOOGLE_CALENDAR_LIST_DIRECTORY_PEOPLE">
**Description:** List directory people.
**Parameters:**
- `paginationParameters` (object, optional): Pagination Parameters.
```json
{
"pageCursor": "page_cursor_string"
}
```
</Accordion>
<Accordion title="GOOGLE_CALENDAR_SEARCH_DIRECTORY_PEOPLE">
**Description:** Search directory people.
**Parameters:**
- `query` (string, required): Search query to search contacts.
- `paginationParameters` (object, optional): Pagination Parameters.
```json
{
"pageCursor": "page_cursor_string"
}
```
</Accordion>
<Accordion title="GOOGLE_CALENDAR_LIST_OTHER_CONTACTS">
**Description:** List other contacts.
**Parameters:**
- `paginationParameters` (object, optional): Pagination Parameters.
```json
{
"pageCursor": "page_cursor_string"
}
```
</Accordion>
<Accordion title="GOOGLE_CALENDAR_SEARCH_OTHER_CONTACTS">
**Description:** Search other contacts.
**Parameters:**
- `query` (string, optional): Search query to search contacts.
</Accordion>
<Accordion title="GOOGLE_CALENDAR_GET_AVAILABILITY">
**Description:** Get availability information for calendars.
**Parameters:**
- `timeMin` (string, required): The start of the interval. In ISO format.
- `timeMax` (string, required): The end of the interval. In ISO format.
- `timeZone` (string, optional): Time zone used in the response. Defaults to UTC.
- `items` (array, optional): List of calendars and/or groups to query. Defaults to the user default calendar.
```json
[
{
"id": "calendar_id_1"
},
{
"id": "calendar_id_2"
}
]
```
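A full availability query might therefore look like the sketch below; using `primary` as a calendar ID is an assumption (it is the conventional alias for the user's default calendar), and the time range is illustrative:
```json
{
  "timeMin": "2025-05-06T09:00:00Z",
  "timeMax": "2025-05-06T17:00:00Z",
  "timeZone": "America/New_York",
  "items": [
    { "id": "primary" }
  ]
}
```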
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Calendar Agent Setup
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Google Calendar tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with Google Calendar capabilities
calendar_agent = Agent(
role="Schedule Manager",
goal="Manage calendar events and scheduling efficiently",
backstory="An AI assistant specialized in calendar management and scheduling coordination.",
tools=[enterprise_tools]
)
# Task to create a meeting
create_meeting_task = Task(
description="Create a team standup meeting for tomorrow at 9 AM with the development team",
agent=calendar_agent,
expected_output="Meeting created successfully with Google Meet link"
)
# Run the task
crew = Crew(
agents=[calendar_agent],
tasks=[create_meeting_task]
)
crew.kickoff()
```
### Filtering Specific Calendar Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific Google Calendar tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["google_calendar_create_event", "google_calendar_list_events", "google_calendar_get_availability"]
)
meeting_coordinator = Agent(
role="Meeting Coordinator",
goal="Coordinate meetings and check availability",
backstory="An AI assistant that focuses on meeting scheduling and availability management.",
tools=enterprise_tools
)
# Task to schedule a meeting with availability check
schedule_meeting = Task(
description="Check availability for next week and schedule a project review meeting with stakeholders",
agent=meeting_coordinator,
expected_output="Meeting scheduled after checking availability of all participants"
)
crew = Crew(
agents=[meeting_coordinator],
tasks=[schedule_meeting]
)
crew.kickoff()
```
### Event Management and Updates
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
event_manager = Agent(
role="Event Manager",
goal="Manage and update calendar events efficiently",
backstory="An experienced event manager who handles event logistics and updates.",
tools=[enterprise_tools]
)
# Task to manage event updates
event_management = Task(
description="""
1. List all events for this week
2. Update any events that need location changes to include video conference links
3. Send calendar invitations to new team members for recurring meetings
""",
agent=event_manager,
expected_output="Weekly events updated with proper locations and new attendees added"
)
crew = Crew(
agents=[event_manager],
tasks=[event_management]
)
crew.kickoff()
```
### Contact and Availability Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
availability_coordinator = Agent(
role="Availability Coordinator",
goal="Coordinate availability and manage contacts for scheduling",
backstory="An AI assistant that specializes in availability management and contact coordination.",
tools=[enterprise_tools]
)
# Task to coordinate availability
availability_task = Task(
description="""
1. Search for contacts in the engineering department
2. Check availability for all engineers next Friday afternoon
3. Create a team meeting for the first available 2-hour slot
4. Include Google Meet link and send invitations
""",
agent=availability_coordinator,
expected_output="Team meeting scheduled based on availability with all engineers invited"
)
crew = Crew(
agents=[availability_coordinator],
tasks=[availability_task]
)
crew.kickoff()
```
### Automated Scheduling Workflows
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
scheduling_automator = Agent(
role="Scheduling Automator",
goal="Automate scheduling workflows and calendar management",
backstory="An AI assistant that automates complex scheduling scenarios and calendar workflows.",
tools=[enterprise_tools]
)
# Complex scheduling automation task
automation_task = Task(
description="""
1. List all upcoming events for the next two weeks
2. Identify any scheduling conflicts or back-to-back meetings
3. Suggest optimal meeting times by checking availability
4. Create buffer time between meetings where needed
5. Update event descriptions with agenda items and meeting links
""",
agent=scheduling_automator,
expected_output="Calendar optimized with resolved conflicts, buffer times, and updated meeting details"
)
crew = Crew(
agents=[scheduling_automator],
tasks=[automation_task]
)
crew.kickoff()
```
## Troubleshooting
### Common Issues
**Authentication Errors**
- Ensure your Google account has the necessary permissions for calendar access
- Verify that the OAuth connection includes all required scopes for Google Calendar API
- Check if calendar sharing settings allow the required access level
**Event Creation Issues**
- Verify that time formats are correct (ISO8601 or Unix timestamps)
- Ensure attendee email addresses are properly formatted
- Check that the target calendar exists and is accessible
- Verify time zones are correctly specified
**Availability and Time Conflicts**
- Use proper ISO format for time ranges when checking availability
- Ensure time zones are consistent across all operations
- Verify that calendar IDs are correct when checking multiple calendars
**Contact and People Search**
- Ensure search queries are properly formatted
- Check that directory access permissions are granted
- Verify that contact information is up to date and accessible
**Event Updates and Deletions**
- Verify that event IDs are correct and events exist
- Ensure you have edit permissions for the events
- Check that calendar ownership allows modifications
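When in doubt about time formats, a small helper like this sketch (plain Python, no extra dependencies) produces both an ISO8601 string and a Unix timestamp in milliseconds matching the formats referenced above:
```python
from datetime import datetime, timedelta, timezone

# Build a start time one day from now, in UTC.
start = datetime.now(timezone.utc) + timedelta(days=1)

# ISO8601 form, e.g. for parameters such as `startTime`, `timeMin`, or `timeMax`.
iso_value = start.isoformat().replace("+00:00", "Z")

# Unix timestamp in milliseconds, the form shown for `after`/`before` filters.
unix_ms_value = int(start.timestamp() * 1000)

print(iso_value)      # e.g. 2025-05-07T15:00:00.123456Z
print(unix_ms_value)  # e.g. 1746630000123
```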
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Google Calendar integration setup or troubleshooting.
</Card>

---
title: Google Sheets Integration
description: "Spreadsheet data synchronization with Google Sheets integration for CrewAI."
icon: "google"
mode: "wide"
---
## Overview
Enable your agents to manage spreadsheet data through Google Sheets. Read rows, create new entries, update existing data, and streamline your data management workflows with AI-powered automation. Perfect for data tracking, reporting, and collaborative data management.
## Prerequisites
Before using the Google Sheets integration, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account with an active subscription
- A Google account with Google Sheets access
- Connected your Google account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
- Spreadsheets with proper column headers for data operations
## Setting Up Google Sheets Integration
### 1. Connect Your Google Account
1. Navigate to [CrewAI Enterprise Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Google Sheets** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for spreadsheet access
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="GOOGLE_SHEETS_GET_ROW">
**Description:** Get rows from a Google Sheets spreadsheet.
**Parameters:**
- `spreadsheetId` (string, required): Spreadsheet - Use Connect Portal Workflow Settings to allow users to select a spreadsheet. Defaults to using the first worksheet in the selected spreadsheet.
- `limit` (string, optional): Limit rows - Limit the maximum number of rows to return.
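For reference, a minimal argument payload might look like the following (the spreadsheet ID and limit are placeholders):
```json
{
  "spreadsheetId": "your_spreadsheet_id",
  "limit": "100"
}
```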
</Accordion>
<Accordion title="GOOGLE_SHEETS_CREATE_ROW">
**Description:** Create a new row in a Google Sheets spreadsheet.
**Parameters:**
- `spreadsheetId` (string, required): Spreadsheet - Use Connect Portal Workflow Settings to allow users to select a spreadsheet. Defaults to using the first worksheet in the selected spreadsheet.
- `worksheet` (string, required): Worksheet - Your worksheet must have column headers.
- `additionalFields` (object, required): Fields - Include fields to create this row with, as an object with keys of Column Names. Use Connect Portal Workflow Settings to allow users to select a Column Mapping.
```json
{
"columnName1": "columnValue1",
"columnName2": "columnValue2",
"columnName3": "columnValue3",
"columnName4": "columnValue4"
}
```
</Accordion>
<Accordion title="GOOGLE_SHEETS_UPDATE_ROW">
**Description:** Update existing rows in a Google Sheets spreadsheet.
**Parameters:**
- `spreadsheetId` (string, required): Spreadsheet - Use Connect Portal Workflow Settings to allow users to select a spreadsheet. Defaults to using the first worksheet in the selected spreadsheet.
- `worksheet` (string, required): Worksheet - Your worksheet must have column headers.
- `filterFormula` (object, optional): A filter in disjunctive normal form - OR of AND groups of single conditions to identify which rows to update.
```json
{
"operator": "OR",
"conditions": [
{
"operator": "AND",
"conditions": [
{
"field": "status",
"operator": "$stringExactlyMatches",
"value": "pending"
}
]
}
]
}
```
Available operators: `$stringContains`, `$stringDoesNotContain`, `$stringExactlyMatches`, `$stringDoesNotExactlyMatch`, `$stringStartsWith`, `$stringDoesNotStartWith`, `$stringEndsWith`, `$stringDoesNotEndWith`, `$numberGreaterThan`, `$numberLessThan`, `$numberEquals`, `$numberDoesNotEqual`, `$dateTimeAfter`, `$dateTimeBefore`, `$dateTimeEquals`, `$booleanTrue`, `$booleanFalse`, `$exists`, `$doesNotExist`
- `additionalFields` (object, required): Fields - Include fields to update, as an object with keys of Column Names. Use Connect Portal Workflow Settings to allow users to select a Column Mapping.
```json
{
"columnName1": "newValue1",
"columnName2": "newValue2",
"columnName3": "newValue3",
"columnName4": "newValue4"
}
```
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Google Sheets Agent Setup
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Google Sheets tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with Google Sheets capabilities
sheets_agent = Agent(
role="Data Manager",
goal="Manage spreadsheet data and track information efficiently",
backstory="An AI assistant specialized in data management and spreadsheet operations.",
tools=[enterprise_tools]
)
# Task to add new data to a spreadsheet
data_entry_task = Task(
description="Add a new customer record to the customer database spreadsheet with name, email, and signup date",
agent=sheets_agent,
expected_output="New customer record added successfully to the spreadsheet"
)
# Run the task
crew = Crew(
agents=[sheets_agent],
tasks=[data_entry_task]
)
crew.kickoff()
```
### Filtering Specific Google Sheets Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific Google Sheets tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["google_sheets_get_row", "google_sheets_create_row"]
)
data_collector = Agent(
role="Data Collector",
goal="Collect and organize data in spreadsheets",
backstory="An AI assistant that focuses on data collection and organization.",
tools=enterprise_tools
)
# Task to collect and organize data
data_collection = Task(
description="Retrieve current inventory data and add new product entries to the inventory spreadsheet",
agent=data_collector,
expected_output="Inventory data retrieved and new products added successfully"
)
crew = Crew(
agents=[data_collector],
tasks=[data_collection]
)
crew.kickoff()
```
### Data Analysis and Reporting
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
data_analyst = Agent(
role="Data Analyst",
goal="Analyze spreadsheet data and generate insights",
backstory="An experienced data analyst who extracts insights from spreadsheet data.",
tools=[enterprise_tools]
)
# Task to analyze data and create reports
analysis_task = Task(
description="""
1. Retrieve all sales data from the current month's spreadsheet
2. Analyze the data for trends and patterns
3. Create a summary report in a new row with key metrics
""",
agent=data_analyst,
expected_output="Sales data analyzed and summary report created with key insights"
)
crew = Crew(
agents=[data_analyst],
tasks=[analysis_task]
)
crew.kickoff()
```
### Automated Data Updates
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
data_updater = Agent(
role="Data Updater",
goal="Automatically update and maintain spreadsheet data",
backstory="An AI assistant that maintains data accuracy and updates records automatically.",
tools=[enterprise_tools]
)
# Task to update data based on conditions
update_task = Task(
description="""
1. Find all pending orders in the orders spreadsheet
2. Update their status to 'processing'
3. Add a timestamp for when the status was updated
4. Log the changes in a separate tracking sheet
""",
agent=data_updater,
expected_output="All pending orders updated to processing status with timestamps logged"
)
crew = Crew(
agents=[data_updater],
tasks=[update_task]
)
crew.kickoff()
```
### Complex Data Management Workflow
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
workflow_manager = Agent(
role="Data Workflow Manager",
goal="Manage complex data workflows across multiple spreadsheets",
backstory="An AI assistant that orchestrates complex data operations across multiple spreadsheets.",
tools=[enterprise_tools]
)
# Complex workflow task
workflow_task = Task(
description="""
1. Get all customer data from the main customer spreadsheet
2. Create monthly summary entries for active customers
3. Update customer status based on activity in the last 30 days
4. Generate a monthly report with customer metrics
5. Archive inactive customer records to a separate sheet
""",
agent=workflow_manager,
expected_output="Monthly customer workflow completed with updated statuses and generated reports"
)
crew = Crew(
agents=[workflow_manager],
tasks=[workflow_task]
)
crew.kickoff()
```
## Troubleshooting
### Common Issues
**Permission Errors**
- Ensure your Google account has edit access to the target spreadsheets
- Verify that the OAuth connection includes required scopes for Google Sheets API
- Check that spreadsheets are shared with the authenticated account
**Spreadsheet Structure Issues**
- Ensure worksheets have proper column headers before creating or updating rows
- Verify that column names in `additionalFields` match the actual column headers
- Check that the specified worksheet exists in the spreadsheet
**Data Type and Format Issues**
- Ensure data values match the expected format for each column
- Use proper date formats for date columns (ISO format recommended)
- Verify that numeric values are properly formatted for number columns
**Filter Formula Issues**
- Ensure filter formulas follow the correct JSON structure for disjunctive normal form
- Use valid field names that match actual column headers
- Test simple filters before building complex multi-condition queries
- Verify that operator types match the data types in the columns
**Row Limits and Performance**
- Be mindful of row limits when using `GOOGLE_SHEETS_GET_ROW`
- Consider pagination for large datasets
- Use specific filters to reduce the amount of data processed
**Update Operations**
- Ensure filter conditions properly identify the intended rows for updates
- Test filter conditions with small datasets before large updates
- Verify that all required fields are included in update operations
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Google Sheets integration setup or troubleshooting.
</Card>


@@ -0,0 +1,580 @@
---
title: "HubSpot Integration"
description: "Manage companies and contacts in HubSpot with CrewAI."
icon: "briefcase"
mode: "wide"
---
## Overview
Enable your agents to manage companies and contacts within HubSpot. Create new records and streamline your CRM processes with AI-powered automation.
## Prerequisites
Before using the HubSpot integration, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account with an active subscription.
- A HubSpot account with appropriate permissions.
- Connected your HubSpot account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors).
## Setting Up HubSpot Integration
### 1. Connect Your HubSpot Account
1. Navigate to [CrewAI Enterprise Integrations](https://app.crewai.com/crewai_plus/connectors).
2. Find **HubSpot** in the Authentication Integrations section.
3. Click **Connect** and complete the OAuth flow.
4. Grant the necessary permissions for company and contact management.
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account).
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="HUBSPOT_CREATE_RECORD_COMPANIES">
**Description:** Create a new company record in HubSpot.
**Parameters:**
- `name` (string, required): Name of the company.
- `domain` (string, optional): Company Domain Name.
- `industry` (string, optional): Industry. Must be one of the predefined values from HubSpot.
- `phone` (string, optional): Phone Number.
- `hubspot_owner_id` (string, optional): Company owner ID.
- `type` (string, optional): Type of the company. Available values: `PROSPECT`, `PARTNER`, `RESELLER`, `VENDOR`, `OTHER`.
- `city` (string, optional): City.
- `state` (string, optional): State/Region.
- `zip` (string, optional): Postal Code.
- `numberofemployees` (number, optional): Number of Employees.
- `annualrevenue` (number, optional): Annual Revenue.
- `timezone` (string, optional): Time Zone.
- `description` (string, optional): Description.
- `linkedin_company_page` (string, optional): LinkedIn Company Page URL.
- `company_email` (string, optional): Company Email.
- `first_name` (string, optional): First Name of a contact at the company.
- `last_name` (string, optional): Last Name of a contact at the company.
- `about_us` (string, optional): About Us.
- `hs_csm_sentiment` (string, optional): CSM Sentiment. Available values: `at_risk`, `neutral`, `healthy`.
- `closedate` (string, optional): Close Date.
- `hs_keywords` (string, optional): Company Keywords. Must be one of the predefined values.
- `country` (string, optional): Country/Region.
- `hs_country_code` (string, optional): Country/Region Code.
- `hs_employee_range` (string, optional): Employee range.
- `facebook_company_page` (string, optional): Facebook Company Page URL.
- `facebookfans` (number, optional): Number of Facebook Fans.
- `hs_gps_coordinates` (string, optional): GPS Coordinates.
- `hs_gps_error` (string, optional): GPS Error.
- `googleplus_page` (string, optional): Google Plus Page URL.
- `owneremail` (string, optional): HubSpot Owner Email.
- `ownername` (string, optional): HubSpot Owner Name.
- `hs_ideal_customer_profile` (string, optional): Ideal Customer Profile Tier. Available values: `tier_1`, `tier_2`, `tier_3`.
- `hs_industry_group` (string, optional): Industry group.
- `is_public` (boolean, optional): Is Public.
- `hs_last_metered_enrichment_timestamp` (string, optional): Last Metered Enrichment Timestamp.
- `hs_lead_status` (string, optional): Lead Status. Available values: `NEW`, `OPEN`, `IN_PROGRESS`, `OPEN_DEAL`, `UNQUALIFIED`, `ATTEMPTED_TO_CONTACT`, `CONNECTED`, `BAD_TIMING`.
- `lifecyclestage` (string, optional): Lifecycle Stage. Available values: `subscriber`, `lead`, `marketingqualifiedlead`, `salesqualifiedlead`, `opportunity`, `customer`, `evangelist`, `other`.
- `linkedinbio` (string, optional): LinkedIn Bio.
- `hs_linkedin_handle` (string, optional): LinkedIn handle.
- `hs_live_enrichment_deadline` (string, optional): Live enrichment deadline.
- `hs_logo_url` (string, optional): Logo URL.
- `hs_analytics_source` (string, optional): Original Traffic Source.
- `hs_pinned_engagement_id` (number, optional): Pinned Engagement ID.
- `hs_quick_context` (string, optional): Quick context.
- `hs_revenue_range` (string, optional): Revenue range.
- `hs_state_code` (string, optional): State/Region Code.
- `address` (string, optional): Street Address.
- `address2` (string, optional): Street Address 2.
- `hs_is_target_account` (boolean, optional): Target Account.
- `hs_target_account` (string, optional): Target Account Tier. Available values: `tier_1`, `tier_2`, `tier_3`.
- `hs_target_account_recommendation_snooze_time` (string, optional): Target Account Recommendation Snooze Time.
- `hs_target_account_recommendation_state` (string, optional): Target Account Recommendation State. Available values: `DISMISSED`, `NONE`, `SNOOZED`.
- `total_money_raised` (string, optional): Total Money Raised.
- `twitterbio` (string, optional): Twitter Bio.
- `twitterfollowers` (number, optional): Twitter Followers.
- `twitterhandle` (string, optional): Twitter Handle.
- `web_technologies` (string, optional): Web Technologies used. Must be one of the predefined values.
- `website` (string, optional): Website URL.
- `founded_year` (string, optional): Year Founded.
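As an illustration, a minimal company creation payload might look like this (the values are placeholders):
```json
{
  "name": "Innovate Corp",
  "domain": "innovatecorp.com",
  "city": "San Francisco",
  "numberofemployees": 250
}
```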
</Accordion>
<Accordion title="HUBSPOT_CREATE_RECORD_CONTACTS">
**Description:** Create a new contact record in HubSpot.
**Parameters:**
- `email` (string, required): Email address of the contact.
- `firstname` (string, optional): First Name.
- `lastname` (string, optional): Last Name.
- `phone` (string, optional): Phone Number.
- `hubspot_owner_id` (string, optional): Contact owner.
- `lifecyclestage` (string, optional): Lifecycle Stage. Available values: `subscriber`, `lead`, `marketingqualifiedlead`, `salesqualifiedlead`, `opportunity`, `customer`, `evangelist`, `other`.
- `hs_lead_status` (string, optional): Lead Status. Available values: `NEW`, `OPEN`, `IN_PROGRESS`, `OPEN_DEAL`, `UNQUALIFIED`, `ATTEMPTED_TO_CONTACT`, `CONNECTED`, `BAD_TIMING`.
- `annualrevenue` (string, optional): Annual Revenue.
- `hs_buying_role` (string, optional): Buying Role.
- `cc_emails` (string, optional): CC Emails.
- `ch_customer_id` (string, optional): Chargify Customer ID.
- `ch_customer_reference` (string, optional): Chargify Customer Reference.
- `chargify_sites` (string, optional): Chargify Site(s).
- `city` (string, optional): City.
- `hs_facebook_ad_clicked` (boolean, optional): Clicked Facebook ad.
- `hs_linkedin_ad_clicked` (string, optional): Clicked LinkedIn Ad.
- `hs_clicked_linkedin_ad` (string, optional): Clicked on a LinkedIn Ad.
- `closedate` (string, optional): Close Date.
- `company` (string, optional): Company Name.
- `company_size` (string, optional): Company size.
- `country` (string, optional): Country/Region.
- `hs_country_region_code` (string, optional): Country/Region Code.
- `date_of_birth` (string, optional): Date of birth.
- `degree` (string, optional): Degree.
- `hs_email_customer_quarantined_reason` (string, optional): Email address quarantine reason.
- `hs_role` (string, optional): Employment Role. Must be one of the predefined values.
- `hs_seniority` (string, optional): Employment Seniority. Must be one of the predefined values.
- `hs_sub_role` (string, optional): Employment Sub Role. Must be one of the predefined values.
- `hs_employment_change_detected_date` (string, optional): Employment change detected date.
- `hs_enriched_email_bounce_detected` (boolean, optional): Enriched Email Bounce Detected.
- `hs_facebookid` (string, optional): Facebook ID.
- `hs_facebook_click_id` (string, optional): Facebook click id.
- `fax` (string, optional): Fax Number.
- `field_of_study` (string, optional): Field of study.
- `followercount` (number, optional): Follower Count.
- `gender` (string, optional): Gender.
- `hs_google_click_id` (string, optional): Google ad click id.
- `graduation_date` (string, optional): Graduation date.
- `owneremail` (string, optional): HubSpot Owner Email (legacy).
- `ownername` (string, optional): HubSpot Owner Name (legacy).
- `industry` (string, optional): Industry.
- `hs_inferred_language_codes` (string, optional): Inferred Language Codes. Must be one of the predefined values.
- `jobtitle` (string, optional): Job Title.
- `hs_job_change_detected_date` (string, optional): Job change detected date.
- `job_function` (string, optional): Job function.
- `hs_journey_stage` (string, optional): Journey Stage. Must be one of the predefined values.
- `kloutscoregeneral` (number, optional): Klout Score.
- `hs_last_metered_enrichment_timestamp` (string, optional): Last Metered Enrichment Timestamp.
- `hs_latest_source` (string, optional): Latest Traffic Source.
- `hs_latest_source_timestamp` (string, optional): Latest Traffic Source Date.
- `hs_legal_basis` (string, optional): Legal basis for processing contact's data.
- `linkedinbio` (string, optional): LinkedIn Bio.
- `linkedinconnections` (number, optional): LinkedIn Connections.
- `hs_linkedin_url` (string, optional): LinkedIn URL.
- `hs_linkedinid` (string, optional): Linkedin ID.
- `hs_live_enrichment_deadline` (string, optional): Live enrichment deadline.
- `marital_status` (string, optional): Marital Status.
- `hs_content_membership_email` (string, optional): Member email.
- `hs_content_membership_notes` (string, optional): Membership Notes.
- `message` (string, optional): Message.
- `military_status` (string, optional): Military status.
- `mobilephone` (string, optional): Mobile Phone Number.
- `numemployees` (string, optional): Number of Employees.
- `hs_analytics_source` (string, optional): Original Traffic Source.
- `photo` (string, optional): Photo.
- `hs_pinned_engagement_id` (number, optional): Pinned engagement ID.
- `zip` (string, optional): Postal Code.
- `hs_language` (string, optional): Preferred language. Must be one of the predefined values.
- `associatedcompanyid` (number, optional): Primary Associated Company ID.
- `hs_email_optout_survey_reason` (string, optional): Reason for opting out of email.
- `relationship_status` (string, optional): Relationship Status.
- `hs_returning_to_office_detected_date` (string, optional): Returning to office detected date.
- `salutation` (string, optional): Salutation.
- `school` (string, optional): School.
- `seniority` (string, optional): Seniority.
- `hs_feedback_show_nps_web_survey` (boolean, optional): Should be shown an NPS web survey.
- `start_date` (string, optional): Start date.
- `state` (string, optional): State/Region.
- `hs_state_code` (string, optional): State/Region Code.
- `hs_content_membership_status` (string, optional): Status.
- `address` (string, optional): Street Address.
- `tax_exempt` (string, optional): Tax Exempt.
- `hs_timezone` (string, optional): Time Zone. Must be one of the predefined values.
- `twitterbio` (string, optional): Twitter Bio.
- `hs_twitterid` (string, optional): Twitter ID.
- `twitterprofilephoto` (string, optional): Twitter Profile Photo.
- `twitterhandle` (string, optional): Twitter Username.
- `vat_number` (string, optional): VAT Number.
- `ch_verified` (string, optional): Verified for ACH/eCheck Payments.
- `website` (string, optional): Website URL.
- `hs_whatsapp_phone_number` (string, optional): WhatsApp Phone Number.
- `work_email` (string, optional): Work email.
- `hs_googleplusid` (string, optional): googleplus ID.
</Accordion>
<Accordion title="HUBSPOT_CREATE_RECORD_DEALS">
**Description:** Create a new deal record in HubSpot.
**Parameters:**
- `dealname` (string, required): Name of the deal.
- `amount` (number, optional): The value of the deal.
- `dealstage` (string, optional): The pipeline stage of the deal.
- `pipeline` (string, optional): The pipeline the deal belongs to.
- `closedate` (string, optional): The date the deal is expected to close.
- `hubspot_owner_id` (string, optional): The owner of the deal.
- `dealtype` (string, optional): The type of deal. Available values: `newbusiness`, `existingbusiness`.
- `description` (string, optional): A description of the deal.
- `hs_priority` (string, optional): The priority of the deal. Available values: `low`, `medium`, `high`.
</Accordion>
<Accordion title="HUBSPOT_CREATE_RECORD_ENGAGEMENTS">
**Description:** Create a new engagement (e.g., note, email, call, meeting, task) in HubSpot.
**Parameters:**
- `engagementType` (string, required): The type of engagement. Available values: `NOTE`, `EMAIL`, `CALL`, `MEETING`, `TASK`.
- `hubspot_owner_id` (string, optional): The user the activity is assigned to.
- `hs_timestamp` (string, optional): The date and time of the activity.
- `hs_note_body` (string, optional): The body of the note. (Used for `NOTE`)
- `hs_task_subject` (string, optional): The title of the task. (Used for `TASK`)
- `hs_task_body` (string, optional): The notes for the task. (Used for `TASK`)
- `hs_task_status` (string, optional): The status of the task. (Used for `TASK`)
- `hs_meeting_title` (string, optional): The title of the meeting. (Used for `MEETING`)
- `hs_meeting_body` (string, optional): The description for the meeting. (Used for `MEETING`)
- `hs_meeting_start_time` (string, optional): The start time of the meeting. (Used for `MEETING`)
- `hs_meeting_end_time` (string, optional): The end time of the meeting. (Used for `MEETING`)
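As an illustration, a `MEETING` engagement payload might look like this (titles and timestamps are placeholders):
```json
{
  "engagementType": "MEETING",
  "hs_timestamp": "2025-10-01T09:00:00Z",
  "hs_meeting_title": "Quarterly business review",
  "hs_meeting_body": "Review renewal options and open support items",
  "hs_meeting_start_time": "2025-10-01T09:00:00Z",
  "hs_meeting_end_time": "2025-10-01T10:00:00Z"
}
```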
</Accordion>
<Accordion title="HUBSPOT_UPDATE_RECORD_COMPANIES">
**Description:** Update an existing company record in HubSpot.
**Parameters:**
- `recordId` (string, required): The ID of the company to update.
- `name` (string, optional): Name of the company.
- `domain` (string, optional): Company Domain Name.
- `industry` (string, optional): Industry.
- `phone` (string, optional): Phone Number.
- `city` (string, optional): City.
- `state` (string, optional): State/Region.
- `zip` (string, optional): Postal Code.
- `numberofemployees` (number, optional): Number of Employees.
- `annualrevenue` (number, optional): Annual Revenue.
- `description` (string, optional): Description.
</Accordion>
<Accordion title="HUBSPOT_CREATE_RECORD_ANY">
**Description:** Create a record for a specified object type in HubSpot.
**Parameters:**
- `recordType` (string, required): The object type ID of the custom object.
- Additional parameters depend on the custom object's schema.
</Accordion>
<Accordion title="HUBSPOT_UPDATE_RECORD_CONTACTS">
**Description:** Update an existing contact record in HubSpot.
**Parameters:**
- `recordId` (string, required): The ID of the contact to update.
- `firstname` (string, optional): First Name.
- `lastname` (string, optional): Last Name.
- `email` (string, optional): Email address.
- `phone` (string, optional): Phone Number.
- `company` (string, optional): Company Name.
- `jobtitle` (string, optional): Job Title.
- `lifecyclestage` (string, optional): Lifecycle Stage.
</Accordion>
<Accordion title="HUBSPOT_UPDATE_RECORD_DEALS">
**Description:** Update an existing deal record in HubSpot.
**Parameters:**
- `recordId` (string, required): The ID of the deal to update.
- `dealname` (string, optional): Name of the deal.
- `amount` (number, optional): The value of the deal.
- `dealstage` (string, optional): The pipeline stage of the deal.
- `pipeline` (string, optional): The pipeline the deal belongs to.
- `closedate` (string, optional): The date the deal is expected to close.
- `dealtype` (string, optional): The type of deal.
</Accordion>
<Accordion title="HUBSPOT_UPDATE_RECORD_ENGAGEMENTS">
**Description:** Update an existing engagement in HubSpot.
**Parameters:**
- `recordId` (string, required): The ID of the engagement to update.
- `hs_note_body` (string, optional): The body of the note.
- `hs_task_subject` (string, optional): The title of the task.
- `hs_task_body` (string, optional): The notes for the task.
- `hs_task_status` (string, optional): The status of the task.
</Accordion>
<Accordion title="HUBSPOT_UPDATE_RECORD_ANY">
**Description:** Update a record for a specified object type in HubSpot.
**Parameters:**
- `recordId` (string, required): The ID of the record to update.
- `recordType` (string, required): The object type ID of the custom object.
- Additional parameters depend on the custom object's schema.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORDS_COMPANIES">
**Description:** Get a list of company records from HubSpot.
**Parameters:**
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
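For example (the cursor value is a placeholder returned by a previous page of results):
```json
{
  "pageCursor": "cursor_string"
}
```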
</Accordion>
<Accordion title="HUBSPOT_GET_RECORDS_CONTACTS">
**Description:** Get a list of contact records from HubSpot.
**Parameters:**
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORDS_DEALS">
**Description:** Get a list of deal records from HubSpot.
**Parameters:**
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORDS_ENGAGEMENTS">
**Description:** Get a list of engagement records from HubSpot.
**Parameters:**
- `objectName` (string, required): The type of engagement to fetch (e.g., "notes").
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORDS_ANY">
**Description:** Get a list of records for any specified object type in HubSpot.
**Parameters:**
- `recordType` (string, required): The object type ID of the custom object.
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORD_BY_ID_COMPANIES">
**Description:** Get a single company record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the company to retrieve.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORD_BY_ID_CONTACTS">
**Description:** Get a single contact record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the contact to retrieve.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORD_BY_ID_DEALS">
**Description:** Get a single deal record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the deal to retrieve.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORD_BY_ID_ENGAGEMENTS">
**Description:** Get a single engagement record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the engagement to retrieve.
</Accordion>
<Accordion title="HUBSPOT_GET_RECORD_BY_ID_ANY">
**Description:** Get a single record of any specified object type by its ID.
**Parameters:**
- `recordType` (string, required): The object type ID of the custom object.
- `recordId` (string, required): The ID of the record to retrieve.
</Accordion>
<Accordion title="HUBSPOT_SEARCH_RECORDS_COMPANIES">
**Description:** Search for company records in HubSpot using a filter formula.
**Parameters:**
- `filterFormula` (object, optional): A filter in disjunctive normal form (OR of ANDs).
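An illustrative filter (the field, operator, and value shown are placeholders) might look like:
```json
{
  "operator": "OR",
  "conditions": [
    {
      "operator": "AND",
      "conditions": [
        {
          "field": "name",
          "operator": "$stringContains",
          "value": "Innovate"
        }
      ]
    }
  ]
}
```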
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_SEARCH_RECORDS_CONTACTS">
**Description:** Search for contact records in HubSpot using a filter formula.
**Parameters:**
- `filterFormula` (object, optional): A filter in disjunctive normal form (OR of ANDs).
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_SEARCH_RECORDS_DEALS">
**Description:** Search for deal records in HubSpot using a filter formula.
**Parameters:**
- `filterFormula` (object, optional): A filter in disjunctive normal form (OR of ANDs).
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_SEARCH_RECORDS_ENGAGEMENTS">
**Description:** Search for engagement records in HubSpot using a filter formula.
**Parameters:**
- `engagementFilterFormula` (object, optional): A filter for engagements.
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_SEARCH_RECORDS_ANY">
**Description:** Search for records of any specified object type in HubSpot.
**Parameters:**
- `recordType` (string, required): The object type ID to search.
- `filterFormula` (string, optional): The filter formula to apply.
- `paginationParameters` (object, optional): Use `pageCursor` to fetch subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_DELETE_RECORD_COMPANIES">
**Description:** Delete a company record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the company to delete.
</Accordion>
<Accordion title="HUBSPOT_DELETE_RECORD_CONTACTS">
**Description:** Delete a contact record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the contact to delete.
</Accordion>
<Accordion title="HUBSPOT_DELETE_RECORD_DEALS">
**Description:** Delete a deal record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the deal to delete.
</Accordion>
<Accordion title="HUBSPOT_DELETE_RECORD_ENGAGEMENTS">
**Description:** Delete an engagement record by its ID.
**Parameters:**
- `recordId` (string, required): The ID of the engagement to delete.
</Accordion>
<Accordion title="HUBSPOT_DELETE_RECORD_ANY">
**Description:** Delete a record of any specified object type by its ID.
**Parameters:**
- `recordType` (string, required): The object type ID of the custom object.
- `recordId` (string, required): The ID of the record to delete.
</Accordion>
<Accordion title="HUBSPOT_GET_CONTACTS_BY_LIST_ID">
**Description:** Get contacts from a specific list by its ID.
**Parameters:**
- `listId` (string, required): The ID of the list to get contacts from.
- `paginationParameters` (object, optional): Use `pageCursor` for subsequent pages.
</Accordion>
<Accordion title="HUBSPOT_DESCRIBE_ACTION_SCHEMA">
**Description:** Get the expected schema for a given object type and operation.
**Parameters:**
- `recordType` (string, required): The object type ID (e.g., 'companies').
- `operation` (string, required): The operation type (e.g., 'CREATE_RECORD').
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic HubSpot Agent Setup
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (HubSpot tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with HubSpot capabilities
hubspot_agent = Agent(
role="CRM Manager",
goal="Manage company and contact records in HubSpot",
backstory="An AI assistant specialized in CRM management.",
tools=[enterprise_tools]
)
# Task to create a new company
create_company_task = Task(
description="Create a new company in HubSpot with name 'Innovate Corp' and domain 'innovatecorp.com'.",
agent=hubspot_agent,
expected_output="Company created successfully with confirmation"
)
# Run the task
crew = Crew(
agents=[hubspot_agent],
tasks=[create_company_task]
)
crew.kickoff()
```
### Filtering Specific HubSpot Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only the tool to create contacts
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["hubspot_create_record_contacts"]
)
contact_creator = Agent(
role="Contact Creator",
goal="Create new contacts in HubSpot",
backstory="An AI assistant that focuses on creating new contact entries in the CRM.",
tools=[enterprise_tools]
)
# Task to create a contact
create_contact = Task(
description="Create a new contact for 'John Doe' with email 'john.doe@example.com'.",
agent=contact_creator,
expected_output="Contact created successfully in HubSpot."
)
crew = Crew(
agents=[contact_creator],
tasks=[create_contact]
)
crew.kickoff()
```
### Contact Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
crm_manager = Agent(
role="CRM Manager",
goal="Manage and organize HubSpot contacts efficiently.",
backstory="An experienced CRM manager who maintains an organized contact database.",
tools=[enterprise_tools]
)
# Task to manage contacts
contact_task = Task(
description="Create a new contact for 'Jane Smith' at 'Global Tech Inc.' with email 'jane.smith@globaltech.com'.",
agent=crm_manager,
expected_output="Contact database updated with the new contact."
)
crew = Crew(
agents=[crm_manager],
tasks=[contact_task]
)
crew.kickoff()
```
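### Deal Pipeline Management
The sketch below follows the same pattern as the examples above and assumes the deal actions (here `hubspot_create_record_deals` and `hubspot_update_record_deals`, following the lowercase action-name convention used in the filtering examples) are enabled for your connection; adjust the action names and task details to your setup.
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Scope the toolset to deal-related actions only (action names assumed)
enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token",
    actions_list=["hubspot_create_record_deals", "hubspot_update_record_deals"]
)
deal_manager = Agent(
    role="Deal Manager",
    goal="Keep the HubSpot sales pipeline accurate and up to date",
    backstory="An AI assistant that tracks deals through the sales pipeline.",
    tools=enterprise_tools
)
# Task to create and maintain a deal record
deal_task = Task(
    description="Create a new deal named 'Innovate Corp - Annual License' with an amount of 12000 and a close date at the end of the quarter.",
    agent=deal_manager,
    expected_output="Deal created with the specified amount and close date."
)
crew = Crew(
    agents=[deal_manager],
    tasks=[deal_task]
)
crew.kickoff()
```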
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with HubSpot integration setup or troubleshooting.
</Card>


@@ -0,0 +1,395 @@
---
title: Jira Integration
description: "Issue tracking and project management with Jira integration for CrewAI."
icon: "bug"
mode: "wide"
---
## Overview
Enable your agents to manage issues, projects, and workflows through Jira. Create and update issues, track project progress, manage assignments, and streamline your project management with AI-powered automation.
## Prerequisites
Before using the Jira integration, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account with an active subscription
- A Jira account with appropriate project permissions
- Connected your Jira account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Jira Integration
### 1. Connect Your Jira Account
1. Navigate to [CrewAI Enterprise Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Jira** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for issue and project management
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="JIRA_CREATE_ISSUE">
**Description:** Create an issue in Jira.
**Parameters:**
- `summary` (string, required): Summary - A brief one-line summary of the issue. (example: "The printer stopped working").
- `project` (string, optional): Project - The project the issue belongs to. Defaults to the user's first project if not provided. Use Connect Portal Workflow Settings to allow users to select a Project.
- `issueType` (string, optional): Issue type - Defaults to Task if not provided.
- `jiraIssueStatus` (string, optional): Status - Defaults to the project's first status if not provided.
- `assignee` (string, optional): Assignee - Defaults to the authenticated user if not provided.
- `descriptionType` (string, optional): Description Type - Select the Description Type.
- Options: `description`, `descriptionJSON`
- `description` (string, optional): Description - A detailed description of the issue. This field appears only when 'descriptionType' = 'description'.
- `additionalFields` (string, optional): Additional Fields - Specify any other fields that should be included in JSON format. Use Connect Portal Workflow Settings to allow users to select which Issue Fields to update.
```json
{
"customfield_10001": "value"
}
```
</Accordion>
<Accordion title="JIRA_UPDATE_ISSUE">
**Description:** Update an issue in Jira.
**Parameters:**
- `issueKey` (string, required): Issue Key (example: "TEST-1234").
- `summary` (string, optional): Summary - A brief one-line summary of the issue. (example: "The printer stopped working").
- `issueType` (string, optional): Issue type - Use Connect Portal Workflow Settings to allow users to select an Issue Type.
- `jiraIssueStatus` (string, optional): Status - Use Connect Portal Workflow Settings to allow users to select a Status.
- `assignee` (string, optional): Assignee - Use Connect Portal Workflow Settings to allow users to select an Assignee.
- `descriptionType` (string, optional): Description Type - Select the Description Type.
- Options: `description`, `descriptionJSON`
- `description` (string, optional): Description - A detailed description of the issue. This field appears only when 'descriptionType' = 'description'.
- `additionalFields` (string, optional): Additional Fields - Specify any other fields that should be included in JSON format.
</Accordion>
<Accordion title="JIRA_GET_ISSUE_BY_KEY">
**Description:** Get an issue by key in Jira.
**Parameters:**
- `issueKey` (string, required): Issue Key (example: "TEST-1234").
</Accordion>
<Accordion title="JIRA_FILTER_ISSUES">
**Description:** Search issues in Jira using filters.
**Parameters:**
- `jqlQuery` (object, optional): A filter in disjunctive normal form - OR of AND groups of single conditions.
```json
{
"operator": "OR",
"conditions": [
{
"operator": "AND",
"conditions": [
{
"field": "status",
"operator": "$stringExactlyMatches",
"value": "Open"
}
]
}
]
}
```
Available operators: `$stringExactlyMatches`, `$stringDoesNotExactlyMatch`, `$stringIsIn`, `$stringIsNotIn`, `$stringContains`, `$stringDoesNotContain`, `$stringGreaterThan`, `$stringLessThan`
- `limit` (string, optional): Limit results - Limit the maximum number of issues to return. Defaults to 10 if left blank.
</Accordion>
<Accordion title="JIRA_SEARCH_BY_JQL">
**Description:** Search issues by JQL in Jira.
**Parameters:**
- `jqlQuery` (string, required): JQL Query (example: "project = PROJECT").
- `paginationParameters` (object, optional): Pagination parameters for paginated results.
```json
{
"pageCursor": "cursor_string"
}
```
</Accordion>
<Accordion title="JIRA_UPDATE_ISSUE_ANY">
**Description:** Update any issue in Jira. Use JIRA_DESCRIBE_ACTION_SCHEMA to get the properties schema for this function.
**Parameters:** No specific parameters - use JIRA_DESCRIBE_ACTION_SCHEMA first to get the expected schema.
</Accordion>
<Accordion title="JIRA_DESCRIBE_ACTION_SCHEMA">
**Description:** Get the expected schema for an issue type. Use this function first if no other function matches the issue type you want to operate on.
**Parameters:**
- `issueTypeId` (string, required): Issue Type ID.
- `projectKey` (string, required): Project key.
- `operation` (string, required): Operation Type value, for example `CREATE_ISSUE` or `UPDATE_ISSUE`.
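For example (the issue type ID is a placeholder; use the values returned by JIRA_GET_ISSUE_TYPES_BY_PROJECT):
```json
{
  "issueTypeId": "10001",
  "projectKey": "TEST",
  "operation": "CREATE_ISSUE"
}
```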
</Accordion>
<Accordion title="JIRA_GET_PROJECTS">
**Description:** Get Projects in Jira.
**Parameters:**
- `paginationParameters` (object, optional): Pagination Parameters.
```json
{
"pageCursor": "cursor_string"
}
```
</Accordion>
<Accordion title="JIRA_GET_ISSUE_TYPES_BY_PROJECT">
**Description:** Get Issue Types by project in Jira.
**Parameters:**
- `project` (string, required): Project key.
</Accordion>
<Accordion title="JIRA_GET_ISSUE_TYPES">
**Description:** Get all Issue Types in Jira.
**Parameters:** None required.
</Accordion>
<Accordion title="JIRA_GET_ISSUE_STATUS_BY_PROJECT">
**Description:** Get issue statuses for a given project.
**Parameters:**
- `project` (string, required): Project key.
</Accordion>
<Accordion title="JIRA_GET_ALL_ASSIGNEES_BY_PROJECT">
**Description:** Get assignees for a given project.
**Parameters:**
- `project` (string, required): Project key.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Jira Agent Setup
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Jira tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with Jira capabilities
jira_agent = Agent(
role="Issue Manager",
goal="Manage Jira issues and track project progress efficiently",
backstory="An AI assistant specialized in issue tracking and project management.",
tools=[enterprise_tools]
)
# Task to create a bug report
create_bug_task = Task(
description="Create a bug report for the login functionality with high priority and assign it to the development team",
agent=jira_agent,
expected_output="Bug report created successfully with issue key"
)
# Run the task
crew = Crew(
agents=[jira_agent],
tasks=[create_bug_task]
)
crew.kickoff()
```
### Filtering Specific Jira Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific Jira tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["jira_create_issue", "jira_update_issue", "jira_search_by_jql"]
)
issue_coordinator = Agent(
role="Issue Coordinator",
goal="Create and manage Jira issues efficiently",
backstory="An AI assistant that focuses on issue creation and management.",
tools=enterprise_tools
)
# Task to manage issue workflow
issue_workflow = Task(
description="Create a feature request issue and update the status of related issues",
agent=issue_coordinator,
expected_output="Feature request created and related issues updated"
)
crew = Crew(
agents=[issue_coordinator],
tasks=[issue_workflow]
)
crew.kickoff()
```
### Project Analysis and Reporting
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
project_analyst = Agent(
role="Project Analyst",
goal="Analyze project data and generate insights from Jira",
backstory="An experienced project analyst who extracts insights from project management data.",
tools=[enterprise_tools]
)
# Task to analyze project status
analysis_task = Task(
description="""
1. Get all projects and their issue types
2. Search for all open issues across projects
3. Analyze issue distribution by status and assignee
4. Create a summary report issue with findings
""",
agent=project_analyst,
expected_output="Project analysis completed with summary report created"
)
crew = Crew(
agents=[project_analyst],
tasks=[analysis_task]
)
crew.kickoff()
```
### Automated Issue Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
automation_manager = Agent(
role="Automation Manager",
goal="Automate issue management and workflow processes",
backstory="An AI assistant that automates repetitive issue management tasks.",
tools=[enterprise_tools]
)
# Task to automate issue management
automation_task = Task(
description="""
1. Search for all unassigned issues using JQL
2. Get available assignees for each project
3. Automatically assign issues based on workload and expertise
4. Update issue priorities based on age and type
5. Create weekly sprint planning issues
""",
agent=automation_manager,
expected_output="Issues automatically assigned and sprint planning issues created"
)
crew = Crew(
agents=[automation_manager],
tasks=[automation_task]
)
crew.kickoff()
```
### Advanced Schema-Based Operations
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
schema_specialist = Agent(
role="Schema Specialist",
goal="Handle complex Jira operations using dynamic schemas",
backstory="An AI assistant that can work with dynamic Jira schemas and custom issue types.",
tools=[enterprise_tools]
)
# Task using schema-based operations
schema_task = Task(
description="""
1. Get all projects and their custom issue types
2. For each custom issue type, describe the action schema
3. Create issues using the dynamic schema for complex custom fields
4. Update issues with custom field values based on business rules
""",
agent=schema_specialist,
expected_output="Custom issues created and updated using dynamic schemas"
)
crew = Crew(
agents=[schema_specialist],
tasks=[schema_task]
)
crew.kickoff()
```
## Troubleshooting
### Common Issues
**Permission Errors**
- Ensure your Jira account has necessary permissions for the target projects
- Verify that the OAuth connection includes required scopes for Jira API
- Check if you have create/edit permissions for issues in the specified projects
**Invalid Project or Issue Keys**
- Double-check project keys and issue keys for correct format (e.g., "PROJ-123")
- Ensure projects exist and are accessible to your account
- Verify that issue keys reference existing issues
**Issue Type and Status Issues**
- Use JIRA_GET_ISSUE_TYPES_BY_PROJECT to get valid issue types for a project
- Use JIRA_GET_ISSUE_STATUS_BY_PROJECT to get valid statuses
- Ensure issue types and statuses are available in the target project
**JQL Query Problems**
- Test JQL queries in Jira's issue search before using in API calls
- Ensure field names in JQL are spelled correctly and exist in your Jira instance
- Use proper JQL syntax for complex queries
**Custom Fields and Schema Issues**
- Use JIRA_DESCRIBE_ACTION_SCHEMA to get the correct schema for complex issue types
- Ensure custom field IDs are correct (e.g., "customfield_10001")
- Verify that custom fields are available in the target project and issue type
**Filter Formula Issues**
- Ensure filter formulas follow the correct JSON structure for disjunctive normal form
- Use valid field names that exist in your Jira configuration
- Test simple filters before building complex multi-condition queries
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Jira integration setup or troubleshooting.
</Card>


@@ -0,0 +1,454 @@
---
title: Linear Integration
description: "Software project and bug tracking with Linear integration for CrewAI."
icon: "list-check"
mode: "wide"
---
## Overview
Enable your agents to manage issues, projects, and development workflows through Linear. Create and update issues, manage project timelines, organize teams, and streamline your software development process with AI-powered automation.
## Prerequisites
Before using the Linear integration, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account with an active subscription
- A Linear account with appropriate workspace permissions
- Connected your Linear account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Linear Integration
### 1. Connect Your Linear Account
1. Navigate to [CrewAI Enterprise Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Linear** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for issue and project management
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="LINEAR_CREATE_ISSUE">
**Description:** Create a new issue in Linear.
**Parameters:**
- `teamId` (string, required): Team ID - Specify the Team ID of the parent for this new issue. Use Connect Portal Workflow Settings to allow users to select a Team ID. (example: "a70bdf0f-530a-4887-857d-46151b52b47c").
- `title` (string, required): Title - Specify a title for this issue.
- `description` (string, optional): Description - Specify a description for this issue.
- `statusId` (string, optional): Status - Specify the state or status of this issue.
- `priority` (string, optional): Priority - Specify the priority of this issue as an integer.
- `dueDate` (string, optional): Due Date - Specify the due date of this issue in ISO 8601 format.
- `cycleId` (string, optional): Cycle ID - Specify the cycle associated with this issue.
- `additionalFields` (object, optional): Additional Fields.
```json
{
"assigneeId": "a70bdf0f-530a-4887-857d-46151b52b47c",
"labelIds": ["a70bdf0f-530a-4887-857d-46151b52b47c"]
}
```
</Accordion>
<Accordion title="LINEAR_UPDATE_ISSUE">
**Description:** Update an issue in Linear.
**Parameters:**
- `issueId` (string, required): Issue ID - Specify the Issue ID of the issue to update. (example: "90fbc706-18cd-42c9-ae66-6bd344cc8977").
- `title` (string, optional): Title - Specify a title for this issue.
- `description` (string, optional): Description - Specify a description for this issue.
- `statusId` (string, optional): Status - Specify the state or status of this issue.
- `priority` (string, optional): Priority - Specify the priority of this issue as an integer.
- `dueDate` (string, optional): Due Date - Specify the due date of this issue in ISO 8601 format.
- `cycleId` (string, optional): Cycle ID - Specify the cycle associated with this issue.
- `additionalFields` (object, optional): Additional Fields.
```json
{
"assigneeId": "a70bdf0f-530a-4887-857d-46151b52b47c",
"labelIds": ["a70bdf0f-530a-4887-857d-46151b52b47c"]
}
```
</Accordion>
<Accordion title="LINEAR_GET_ISSUE_BY_ID">
**Description:** Get an issue by ID in Linear.
**Parameters:**
- `issueId` (string, required): Issue ID - Specify the record ID of the issue to fetch. (example: "90fbc706-18cd-42c9-ae66-6bd344cc8977").
</Accordion>
<Accordion title="LINEAR_GET_ISSUE_BY_ISSUE_IDENTIFIER">
**Description:** Get an issue by issue identifier in Linear.
**Parameters:**
- `externalId` (string, required): External ID - Specify the human-readable Issue identifier of the issue to fetch. (example: "ABC-1").
</Accordion>
<Accordion title="LINEAR_SEARCH_ISSUE">
**Description:** Search issues in Linear.
**Parameters:**
- `queryTerm` (string, required): Query Term - The search term to look for.
- `issueFilterFormula` (object, optional): A filter in disjunctive normal form - OR of AND groups of single conditions.
```json
{
"operator": "OR",
"conditions": [
{
"operator": "AND",
"conditions": [
{
"field": "title",
"operator": "$stringContains",
"value": "bug"
}
]
}
]
}
```
Available fields: `title`, `number`, `project`, `createdAt`
Available operators: `$stringExactlyMatches`, `$stringDoesNotExactlyMatch`, `$stringIsIn`, `$stringIsNotIn`, `$stringStartsWith`, `$stringDoesNotStartWith`, `$stringEndsWith`, `$stringDoesNotEndWith`, `$stringContains`, `$stringDoesNotContain`, `$stringGreaterThan`, `$stringLessThan`, `$numberGreaterThanOrEqualTo`, `$numberLessThanOrEqualTo`, `$numberGreaterThan`, `$numberLessThan`, `$dateTimeAfter`, `$dateTimeBefore`
</Accordion>
<Accordion title="LINEAR_DELETE_ISSUE">
**Description:** Delete an issue in Linear.
**Parameters:**
- `issueId` (string, required): Issue ID - Specify the record ID of the issue to delete. (example: "90fbc706-18cd-42c9-ae66-6bd344cc8977").
</Accordion>
<Accordion title="LINEAR_ARCHIVE_ISSUE">
**Description:** Archive an issue in Linear.
**Parameters:**
- `issueId` (string, required): Issue ID - Specify the record ID of the issue to archive. (example: "90fbc706-18cd-42c9-ae66-6bd344cc8977").
</Accordion>
<Accordion title="LINEAR_CREATE_SUB_ISSUE">
**Description:** Create a sub-issue in Linear.
**Parameters:**
- `parentId` (string, required): Parent ID - Specify the Issue ID for the parent of this new issue.
- `teamId` (string, required): Team ID - Specify the Team ID of the parent for this new sub-issue. Use Connect Portal Workflow Settings to allow users to select a Team ID. (example: "a70bdf0f-530a-4887-857d-46151b52b47c").
- `title` (string, required): Title - Specify a title for this issue.
- `description` (string, optional): Description - Specify a description for this issue.
- `additionalFields` (object, optional): Additional Fields.
```json
{
"lead": "linear_user_id"
}
```
</Accordion>
<Accordion title="LINEAR_CREATE_PROJECT">
**Description:** Create a new project in Linear.
**Parameters:**
- `teamIds` (object, required): Team ID - Specify the team ID(s) this project is associated with as a string or a JSON array. Use Connect Portal User Settings to allow your user to select a Team ID.
```json
[
"a70bdf0f-530a-4887-857d-46151b52b47c",
"4ac7..."
]
```
- `projectName` (string, required): Project Name - Specify the name of the project. (example: "My Linear Project").
- `description` (string, optional): Project Description - Specify a description for this project.
- `additionalFields` (object, optional): Additional Fields.
```json
{
"state": "planned",
"description": ""
}
```
</Accordion>
<Accordion title="LINEAR_UPDATE_PROJECT">
**Description:** Update a project in Linear.
**Parameters:**
- `projectId` (string, required): Project ID - Specify the ID of the project to update. (example: "a6634484-6061-4ac7-9739-7dc5e52c796b").
- `projectName` (string, optional): Project Name - Specify the name of the project to update. (example: "My Linear Project").
- `description` (string, optional): Project Description - Specify a description for this project.
- `additionalFields` (object, optional): Additional Fields.
```json
{
"state": "planned",
"description": ""
}
```
</Accordion>
<Accordion title="LINEAR_GET_PROJECT_BY_ID">
**Description:** Get a project by ID in Linear.
**Parameters:**
- `projectId` (string, required): Project ID - Specify the Project ID of the project to fetch. (example: "a6634484-6061-4ac7-9739-7dc5e52c796b").
</Accordion>
<Accordion title="LINEAR_DELETE_PROJECT">
**Description:** Delete a project in Linear.
**Parameters:**
- `projectId` (string, required): Project ID - Specify the Project ID of the project to delete. (example: "a6634484-6061-4ac7-9739-7dc5e52c796b").
</Accordion>
<Accordion title="LINEAR_SEARCH_TEAMS">
**Description:** Search teams in Linear.
**Parameters:**
- `teamFilterFormula` (object, optional): A filter in disjunctive normal form - OR of AND groups of single conditions.
```json
{
"operator": "OR",
"conditions": [
{
"operator": "AND",
"conditions": [
{
"field": "name",
"operator": "$stringContains",
"value": "Engineering"
}
]
}
]
}
```
Available fields: `id`, `name`
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Linear Agent Setup
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Linear tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with Linear capabilities
linear_agent = Agent(
role="Development Manager",
goal="Manage Linear issues and track development progress efficiently",
backstory="An AI assistant specialized in software development project management.",
tools=[enterprise_tools]
)
# Task to create a bug report
create_bug_task = Task(
description="Create a high-priority bug report for the authentication system and assign it to the backend team",
agent=linear_agent,
expected_output="Bug report created successfully with issue ID"
)
# Run the task
crew = Crew(
agents=[linear_agent],
tasks=[create_bug_task]
)
crew.kickoff()
```
### Filtering Specific Linear Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific Linear tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["linear_create_issue", "linear_update_issue", "linear_search_issue"]
)
issue_manager = Agent(
role="Issue Manager",
goal="Create and manage Linear issues efficiently",
backstory="An AI assistant that focuses on issue creation and lifecycle management.",
tools=enterprise_tools
)
# Task to manage issue workflow
issue_workflow = Task(
description="Create a feature request issue and update the status of related issues to reflect current progress",
agent=issue_manager,
expected_output="Feature request created and related issues updated"
)
crew = Crew(
agents=[issue_manager],
tasks=[issue_workflow]
)
crew.kickoff()
```
### Project and Team Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
project_coordinator = Agent(
role="Project Coordinator",
goal="Coordinate projects and teams in Linear efficiently",
backstory="An experienced project coordinator who manages development cycles and team workflows.",
tools=[enterprise_tools]
)
# Task to coordinate project setup
project_coordination = Task(
description="""
1. Search for engineering teams in Linear
2. Create a new project for Q2 feature development
3. Associate the project with relevant teams
4. Create initial project milestones as issues
""",
agent=project_coordinator,
expected_output="Q2 project created with teams assigned and initial milestones established"
)
crew = Crew(
agents=[project_coordinator],
tasks=[project_coordination]
)
crew.kickoff()
```
### Issue Hierarchy and Sub-task Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
task_organizer = Agent(
role="Task Organizer",
goal="Organize complex issues into manageable sub-tasks",
backstory="An AI assistant that breaks down complex development work into organized sub-tasks.",
tools=[enterprise_tools]
)
# Task to create issue hierarchy
hierarchy_task = Task(
description="""
1. Search for large feature issues that need to be broken down
2. For each complex issue, create sub-issues for different components
3. Update the parent issues with proper descriptions and links to sub-issues
4. Assign sub-issues to appropriate team members based on expertise
""",
agent=task_organizer,
expected_output="Complex issues broken down into manageable sub-tasks with proper assignments"
)
crew = Crew(
agents=[task_organizer],
tasks=[hierarchy_task]
)
crew.kickoff()
```
### Automated Development Workflow
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
workflow_automator = Agent(
role="Workflow Automator",
goal="Automate development workflow processes in Linear",
backstory="An AI assistant that automates repetitive development workflow tasks.",
tools=[enterprise_tools]
)
# Complex workflow automation task
automation_task = Task(
description="""
1. Search for issues that have been in progress for more than 7 days
2. Update their priorities based on due dates and project importance
3. Create weekly sprint planning issues for each team
4. Archive completed issues from the previous cycle
5. Generate project status reports as new issues
""",
agent=workflow_automator,
expected_output="Development workflow automated with updated priorities, sprint planning, and status reports"
)
crew = Crew(
agents=[workflow_automator],
tasks=[automation_task]
)
crew.kickoff()
```
## Troubleshooting
### Common Issues
**Permission Errors**
- Ensure your Linear account has necessary permissions for the target workspace
- Verify that the OAuth connection includes required scopes for Linear API
- Check if you have create/edit permissions for issues and projects in the workspace
**Invalid IDs and References**
- Double-check team IDs, issue IDs, and project IDs for correct UUID format
- Ensure referenced entities (teams, projects, cycles) exist and are accessible
- Verify that issue identifiers follow the correct format (e.g., "ABC-1")
**Team and Project Association Issues**
- Use LINEAR_SEARCH_TEAMS to get valid team IDs before creating issues or projects
- Ensure teams exist and are active in your workspace
- Verify that team IDs are properly formatted as UUIDs
**Issue Status and Priority Problems**
- Check that status IDs reference valid workflow states for the team
- Ensure priority values are within the valid range for your Linear configuration
- Verify that custom fields and labels exist before referencing them
**Date and Time Format Issues**
- Use ISO 8601 format for due dates and timestamps (see the sketch below)
- Ensure time zones are handled correctly for due date calculations
- Verify that date values are valid and in the future for due dates
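A quick way to produce a compliant timestamp is Python's standard `datetime` module; the 14-day offset below is only an illustrative choice:
```python
from datetime import datetime, timedelta, timezone
# Build an ISO 8601 due date 14 days from now in UTC,
# e.g. "2025-10-02T17:00:00+00:00".
due_date = (datetime.now(timezone.utc) + timedelta(days=14)).replace(microsecond=0).isoformat()
print(due_date)
```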
**Search and Filter Issues**
- Ensure search queries are properly formatted and not empty
- Use valid field names in filter formulas: `title`, `number`, `project`, `createdAt` (see the sketch below)
- Test simple filters before building complex multi-condition queries
- Verify that operator types match the data types of the fields being filtered
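As a reference shape only, the simplest valid filter is a single AND group with one condition. The field name comes from the list above and the operator mirrors the LINEAR_SEARCH_TEAMS example on this page; the value is hypothetical:
```python
# Minimal issue filter in disjunctive normal form: title contains "login".
issue_filter = {
    "operator": "OR",
    "conditions": [
        {
            "operator": "AND",
            "conditions": [
                {"field": "title", "operator": "$stringContains", "value": "login"}
            ],
        }
    ],
}
```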
**Sub-issue Creation Problems**
- Ensure parent issue IDs are valid and accessible
- Verify that the team ID for sub-issues matches or is compatible with the parent issue's team
- Check that parent issues are not already archived or deleted
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Linear integration setup or troubleshooting.
</Card>


@@ -0,0 +1,510 @@
---
title: Notion Integration
description: "Page and database management with Notion integration for CrewAI."
icon: "book"
mode: "wide"
---
## Overview
Enable your agents to manage pages, databases, and content through Notion. Create and update pages, manage content blocks, organize knowledge bases, and streamline your documentation workflows with AI-powered automation.
## Prerequisites
Before using the Notion integration, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account with an active subscription
- A Notion account with appropriate workspace permissions
- Connected your Notion account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors)
## Setting Up Notion Integration
### 1. Connect Your Notion Account
1. Navigate to [CrewAI Enterprise Integrations](https://app.crewai.com/crewai_plus/connectors)
2. Find **Notion** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for page and database management
5. Copy your Enterprise Token from [Account Settings](https://app.crewai.com/crewai_plus/settings/account)
### 2. Install Required Package
```bash
uv add crewai-tools
```
## Available Actions
<AccordionGroup>
<Accordion title="NOTION_CREATE_PAGE">
**Description:** Create a page in Notion.
**Parameters:**
- `parent` (object, required): Parent - The parent page or database where the new page is inserted, represented as a JSON object with a page_id or database_id key.
```json
{
"database_id": "DATABASE_ID"
}
```
- `properties` (object, required): Properties - The values of the page's properties. If the parent is a database, then the schema must match the parent database's properties.
```json
{
"title": [
{
"text": {
"content": "My Page"
}
}
]
}
```
- `icon` (object, required): Icon - The page icon.
```json
{
"emoji": "🥬"
}
```
- `children` (object, optional): Children - Content blocks to add to the page.
```json
[
{
"object": "block",
"type": "heading_2",
"heading_2": {
"rich_text": [
{
"type": "text",
"text": {
"content": "Lacinato kale"
}
}
]
}
}
]
```
- `cover` (object, optional): Cover - The page cover image.
```json
{
"external": {
"url": "https://upload.wikimedia.org/wikipedia/commons/6/62/Tuscankale.jpg"
}
}
```
</Accordion>
<Accordion title="NOTION_UPDATE_PAGE">
**Description:** Update a page in Notion.
**Parameters:**
- `pageId` (string, required): Page ID - Specify the ID of the Page to Update. (example: "59833787-2cf9-4fdf-8782-e53db20768a5").
- `icon` (object, required): Icon - The page icon.
```json
{
"emoji": "🥬"
}
```
- `archived` (boolean, optional): Archived - Whether the page is archived (deleted). Set to true to archive a page. Set to false to un-archive (restore) a page.
- `properties` (object, optional): Properties - The property values to update for the page.
```json
{
"title": [
{
"text": {
"content": "My Updated Page"
}
}
]
}
```
- `cover` (object, optional): Cover - The page cover image.
```json
{
"external": {
"url": "https://upload.wikimedia.org/wikipedia/commons/6/62/Tuscankale.jpg"
}
}
```
</Accordion>
<Accordion title="NOTION_GET_PAGE_BY_ID">
**Description:** Get a page by ID in Notion.
**Parameters:**
- `pageId` (string, required): Page ID - Specify the ID of the Page to Get. (example: "59833787-2cf9-4fdf-8782-e53db20768a5").
</Accordion>
<Accordion title="NOTION_ARCHIVE_PAGE">
**Description:** Archive a page in Notion.
**Parameters:**
- `pageId` (string, required): Page ID - Specify the ID of the Page to Archive. (example: "59833787-2cf9-4fdf-8782-e53db20768a5").
</Accordion>
<Accordion title="NOTION_SEARCH_PAGES">
**Description:** Search pages in Notion using filters.
**Parameters:**
- `searchByTitleFilterSearch` (object, optional): A filter in disjunctive normal form - OR of AND groups of single conditions.
```json
{
"operator": "OR",
"conditions": [
{
"operator": "AND",
"conditions": [
{
"field": "query",
"operator": "$stringExactlyMatches",
"value": "meeting notes"
}
]
}
]
}
```
Available fields: `query`, `filter.value`, `direction`, `page_size`
</Accordion>
<Accordion title="NOTION_GET_PAGE_CONTENT">
**Description:** Get page content (blocks) in Notion.
**Parameters:**
- `blockId` (string, required): Page ID - Specify a Block or Page ID to retrieve all of its child blocks in order. (example: "59833787-2cf9-4fdf-8782-e53db20768a5").
</Accordion>
<Accordion title="NOTION_UPDATE_BLOCK">
**Description:** Update a block in Notion.
**Parameters:**
- `blockId` (string, required): Block ID - Specify the ID of the Block to Update. (example: "9bc30ad4-9373-46a5-84ab-0a7845ee52e6").
- `archived` (boolean, optional): Archived - Set to true to archive (delete) a block. Set to false to un-archive (restore) a block.
- `paragraph` (object, optional): Paragraph content.
```json
{
"rich_text": [
{
"type": "text",
"text": {
"content": "Lacinato kale",
"link": null
}
}
],
"color": "default"
}
```
- `image` (object, optional): Image block.
```json
{
"type": "external",
"external": {
"url": "https://website.domain/images/image.png"
}
}
```
- `bookmark` (object, optional): Bookmark block.
```json
{
"caption": [],
"url": "https://companywebsite.com"
}
```
- `code` (object, optional): Code block.
```json
{
"rich_text": [
{
"type": "text",
"text": {
"content": "const a = 3"
}
}
],
"language": "javascript"
}
```
- `pdf` (object, optional): PDF block.
```json
{
"type": "external",
"external": {
"url": "https://website.domain/files/doc.pdf"
}
}
```
- `table` (object, optional): Table block.
```json
{
"table_width": 2,
"has_column_header": false,
"has_row_header": false
}
```
- `tableOfContent` (object, optional): Table of Contents block.
```json
{
"color": "default"
}
```
- `additionalFields` (object, optional): Additional block types.
```json
{
"child_page": {
"title": "Lacinato kale"
},
"child_database": {
"title": "My database"
}
}
```
</Accordion>
<Accordion title="NOTION_GET_BLOCK_BY_ID">
**Description:** Get a block by ID in Notion.
**Parameters:**
- `blockId` (string, required): Block ID - Specify the ID of the Block to Get. (example: "9bc30ad4-9373-46a5-84ab-0a7845ee52e6").
</Accordion>
<Accordion title="NOTION_DELETE_BLOCK">
**Description:** Delete a block in Notion.
**Parameters:**
- `blockId` (string, required): Block ID - Specify the ID of the Block to Delete. (example: "9bc30ad4-9373-46a5-84ab-0a7845ee52e6").
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Notion Agent Setup
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Notion tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with Notion capabilities
notion_agent = Agent(
role="Documentation Manager",
goal="Manage documentation and knowledge base in Notion efficiently",
backstory="An AI assistant specialized in content management and documentation.",
tools=[enterprise_tools]
)
# Task to create a meeting notes page
create_notes_task = Task(
description="Create a new meeting notes page in the team database with today's date and agenda items",
agent=notion_agent,
expected_output="Meeting notes page created successfully with structured content"
)
# Run the task
crew = Crew(
agents=[notion_agent],
tasks=[create_notes_task]
)
crew.kickoff()
```
### Filtering Specific Notion Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific Notion tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["notion_create_page", "notion_update_block", "notion_search_pages"]
)
content_manager = Agent(
role="Content Manager",
goal="Create and manage content pages efficiently",
backstory="An AI assistant that focuses on content creation and management.",
tools=enterprise_tools
)
# Task to manage content workflow
content_workflow = Task(
description="Create a new project documentation page and add structured content blocks for requirements and specifications",
agent=content_manager,
expected_output="Project documentation created with organized content sections"
)
crew = Crew(
agents=[content_manager],
tasks=[content_workflow]
)
crew.kickoff()
```
### Knowledge Base Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
knowledge_curator = Agent(
role="Knowledge Curator",
goal="Curate and organize knowledge base content in Notion",
backstory="An experienced knowledge manager who organizes and maintains comprehensive documentation.",
tools=[enterprise_tools]
)
# Task to curate knowledge base
curation_task = Task(
description="""
1. Search for existing documentation pages related to our new product feature
2. Create a comprehensive feature documentation page with proper structure
3. Add code examples, images, and links to related resources
4. Update existing pages with cross-references to the new documentation
""",
agent=knowledge_curator,
expected_output="Feature documentation created and integrated with existing knowledge base"
)
crew = Crew(
agents=[knowledge_curator],
tasks=[curation_task]
)
crew.kickoff()
```
### Content Structure and Organization
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
content_organizer = Agent(
role="Content Organizer",
goal="Organize and structure content blocks for optimal readability",
backstory="An AI assistant that specializes in content structure and user experience.",
tools=[enterprise_tools]
)
# Task to organize content structure
organization_task = Task(
description="""
1. Get content from existing project pages
2. Analyze the structure and identify improvement opportunities
3. Update content blocks to use proper headings, tables, and formatting
4. Add table of contents and improve navigation between related pages
5. Create templates for future documentation consistency
""",
agent=content_organizer,
expected_output="Content reorganized with improved structure and navigation"
)
crew = Crew(
agents=[content_organizer],
tasks=[organization_task]
)
crew.kickoff()
```
### Automated Documentation Workflows
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
doc_automator = Agent(
role="Documentation Automator",
goal="Automate documentation workflows and maintenance",
backstory="An AI assistant that automates repetitive documentation tasks.",
tools=[enterprise_tools]
)
# Complex documentation automation task
automation_task = Task(
description="""
1. Search for pages that haven't been updated in the last 30 days
2. Review and update outdated content blocks
3. Create weekly team update pages with consistent formatting
4. Add status indicators and progress tracking to project pages
5. Generate monthly documentation health reports
6. Archive completed project pages and organize them in archive sections
""",
agent=doc_automator,
expected_output="Documentation automated with updated content, weekly reports, and organized archives"
)
crew = Crew(
agents=[doc_automator],
tasks=[automation_task]
)
crew.kickoff()
```
## Troubleshooting
### Common Issues
**Permission Errors**
- Ensure your Notion account has edit access to the target workspace
- Verify that the OAuth connection includes required scopes for Notion API
- Check that pages and databases are shared with the authenticated integration
**Invalid Page and Block IDs**
- Double-check page IDs and block IDs for correct UUID format
- Ensure referenced pages and blocks exist and are accessible
- Verify that parent page or database IDs are valid when creating new pages
**Property Schema Issues**
- Ensure page properties match the database schema when creating pages in databases
- Verify that property names and types are correct for the target database
- Check that required properties are included when creating or updating pages
**Content Block Structure**
- Ensure block content follows Notion's rich text format specifications
- Verify that nested block structures are properly formatted
- Check that media URLs are accessible and properly formatted
**Search and Filter Issues**
- Ensure search queries are properly formatted and not empty
- Use valid field names in filter formulas: `query`, `filter.value`, `direction`, `page_size`
- Test simple searches before building complex filter conditions
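When debugging, start from the smallest valid shape and add conditions one at a time. A minimal sketch, reusing the `query` field and operator from the NOTION_SEARCH_PAGES example above (the value is hypothetical):
```python
# Minimal single-condition search filter: query exactly matches "project roadmap".
simple_search_filter = {
    "operator": "OR",
    "conditions": [
        {
            "operator": "AND",
            "conditions": [
                {
                    "field": "query",
                    "operator": "$stringExactlyMatches",
                    "value": "project roadmap",
                }
            ],
        }
    ],
}
```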
**Parent-Child Relationships**
- Verify that parent page or database exists before creating child pages
- Ensure proper permissions exist for the parent container
- Check that database schemas allow the properties you're trying to set
**Rich Text and Media Content**
- Ensure URLs for external images, PDFs, and bookmarks are accessible
- Verify that rich text formatting follows Notion's API specifications
- Check that code block language types are supported by Notion
**Archive and Deletion Operations**
- Understand the difference between archiving (reversible) and deleting (permanent)
- Verify that you have permissions to archive or delete the target content
- Be cautious with bulk operations that might affect multiple pages or blocks
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Notion integration setup or troubleshooting.
</Card>


@@ -0,0 +1,633 @@
---
title: Salesforce Integration
description: "CRM and sales automation with Salesforce integration for CrewAI."
icon: "salesforce"
mode: "wide"
---
## Overview
Enable your agents to manage customer relationships, sales processes, and data through Salesforce. Create and update records, manage leads and opportunities, execute SOQL queries, and streamline your CRM workflows with AI-powered automation.
## Prerequisites
Before using the Salesforce integration, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account with an active subscription
- A Salesforce account with appropriate permissions
- Connected your Salesforce account through the [Integrations page](https://app.crewai.com/integrations)
## Available Tools
### **Record Management**
<AccordionGroup>
<Accordion title="SALESFORCE_CREATE_RECORD_CONTACT">
**Description:** Create a new Contact record in Salesforce.
**Parameters:**
- `FirstName` (string, optional): First Name
- `LastName` (string, required): Last Name - This field is required
- `accountId` (string, optional): Account ID - The Account that the Contact belongs to
- `Email` (string, optional): Email address
- `Title` (string, optional): Title of the contact, such as CEO or Vice President
- `Description` (string, optional): A description of the Contact
- `additionalFields` (object, optional): Additional fields in JSON format for custom Contact fields
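As an illustration only (shown here as a Python literal; the custom field name is hypothetical), `additionalFields` is a flat JSON object keyed by Salesforce API field names:
```python
# Hypothetical additionalFields payload for a Contact. Standard fields have
# their own parameters above; anything else, including custom fields whose
# API names end in "__c", goes here.
additional_fields = {
    "Department": "Engineering",
    "Preferred_Language__c": "en-US",  # hypothetical custom field
}
```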
</Accordion>
<Accordion title="SALESFORCE_CREATE_RECORD_LEAD">
**Description:** Create a new Lead record in Salesforce.
**Parameters:**
- `FirstName` (string, optional): First Name
- `LastName` (string, required): Last Name - This field is required
- `Company` (string, required): Company - This field is required
- `Email` (string, optional): Email address
- `Phone` (string, optional): Phone number
- `Website` (string, optional): Website URL
- `Title` (string, optional): Title of the contact, such as CEO or Vice President
- `Status` (string, optional): Lead Status - Use Connect Portal Workflow Settings to select Lead Status
- `Description` (string, optional): A description of the Lead
- `additionalFields` (object, optional): Additional fields in JSON format for custom Lead fields
</Accordion>
<Accordion title="SALESFORCE_CREATE_RECORD_OPPORTUNITY">
**Description:** Create a new Opportunity record in Salesforce.
**Parameters:**
- `Name` (string, required): The Opportunity name - This field is required
- `StageName` (string, optional): Opportunity Stage - Use Connect Portal Workflow Settings to select stage
- `CloseDate` (string, optional): Close Date in YYYY-MM-DD format - Defaults to 30 days from current date
- `AccountId` (string, optional): The Account that the Opportunity belongs to
- `Amount` (string, optional): Estimated total sale amount
- `Description` (string, optional): A description of the Opportunity
- `OwnerId` (string, optional): The Salesforce user assigned to work on this Opportunity
- `NextStep` (string, optional): Description of next task in closing Opportunity
- `additionalFields` (object, optional): Additional fields in JSON format for custom Opportunity fields
</Accordion>
<Accordion title="SALESFORCE_CREATE_RECORD_TASK">
**Description:** Create a new Task record in Salesforce.
**Parameters:**
- `whatId` (string, optional): Related to ID - The ID of the Account or Opportunity this Task is related to
- `whoId` (string, optional): Name ID - The ID of the Contact or Lead this Task is related to
- `subject` (string, required): Subject of the task
- `activityDate` (string, optional): Activity Date in YYYY-MM-DD format
- `description` (string, optional): A description of the Task
- `taskSubtype` (string, required): Task Subtype - Options: task, email, listEmail, call
- `Status` (string, optional): Status - Options: Not Started, In Progress, Completed
- `ownerId` (string, optional): Assigned To ID - The Salesforce user assigned to this Task
- `callDurationInSeconds` (string, optional): Call Duration in seconds
- `isReminderSet` (boolean, optional): Whether reminder is set
- `reminderDateTime` (string, optional): Reminder Date/Time in ISO format
- `additionalFields` (object, optional): Additional fields in JSON format for custom Task fields
</Accordion>
<Accordion title="SALESFORCE_CREATE_RECORD_ACCOUNT">
**Description:** Create a new Account record in Salesforce.
**Parameters:**
- `Name` (string, required): The Account name - This field is required
- `OwnerId` (string, optional): The Salesforce user assigned to this Account
- `Website` (string, optional): Website URL
- `Phone` (string, optional): Phone number
- `Description` (string, optional): Account description
- `additionalFields` (object, optional): Additional fields in JSON format for custom Account fields
</Accordion>
<Accordion title="SALESFORCE_CREATE_RECORD_ANY">
**Description:** Create a record of any object type in Salesforce.
**Note:** This is a flexible tool for creating records of custom or unknown object types.
</Accordion>
</AccordionGroup>
### **Record Updates**
<AccordionGroup>
<Accordion title="SALESFORCE_UPDATE_RECORD_CONTACT">
**Description:** Update an existing Contact record in Salesforce.
**Parameters:**
- `recordId` (string, required): The ID of the record to update
- `FirstName` (string, optional): First Name
- `LastName` (string, optional): Last Name
- `accountId` (string, optional): Account ID - The Account that the Contact belongs to
- `Email` (string, optional): Email address
- `Title` (string, optional): Title of the contact
- `Description` (string, optional): A description of the Contact
- `additionalFields` (object, optional): Additional fields in JSON format for custom Contact fields
</Accordion>
<Accordion title="SALESFORCE_UPDATE_RECORD_LEAD">
**Description:** Update an existing Lead record in Salesforce.
**Parameters:**
- `recordId` (string, required): The ID of the record to update
- `FirstName` (string, optional): First Name
- `LastName` (string, optional): Last Name
- `Company` (string, optional): Company name
- `Email` (string, optional): Email address
- `Phone` (string, optional): Phone number
- `Website` (string, optional): Website URL
- `Title` (string, optional): Title of the contact
- `Status` (string, optional): Lead Status
- `Description` (string, optional): A description of the Lead
- `additionalFields` (object, optional): Additional fields in JSON format for custom Lead fields
</Accordion>
<Accordion title="SALESFORCE_UPDATE_RECORD_OPPORTUNITY">
**Description:** Update an existing Opportunity record in Salesforce.
**Parameters:**
- `recordId` (string, required): The ID of the record to update
- `Name` (string, optional): The Opportunity name
- `StageName` (string, optional): Opportunity Stage
- `CloseDate` (string, optional): Close Date in YYYY-MM-DD format
- `AccountId` (string, optional): The Account that the Opportunity belongs to
- `Amount` (string, optional): Estimated total sale amount
- `Description` (string, optional): A description of the Opportunity
- `OwnerId` (string, optional): The Salesforce user assigned to work on this Opportunity
- `NextStep` (string, optional): Description of next task in closing Opportunity
- `additionalFields` (object, optional): Additional fields in JSON format for custom Opportunity fields
</Accordion>
<Accordion title="SALESFORCE_UPDATE_RECORD_TASK">
**Description:** Update an existing Task record in Salesforce.
**Parameters:**
- `recordId` (string, required): The ID of the record to update
- `whatId` (string, optional): Related to ID - The ID of the Account or Opportunity this Task is related to
- `whoId` (string, optional): Name ID - The ID of the Contact or Lead this Task is related to
- `subject` (string, optional): Subject of the task
- `activityDate` (string, optional): Activity Date in YYYY-MM-DD format
- `description` (string, optional): A description of the Task
- `Status` (string, optional): Status - Options: Not Started, In Progress, Completed
- `ownerId` (string, optional): Assigned To ID - The Salesforce user assigned to this Task
- `callDurationInSeconds` (string, optional): Call Duration in seconds
- `isReminderSet` (boolean, optional): Whether reminder is set
- `reminderDateTime` (string, optional): Reminder Date/Time in ISO format
- `additionalFields` (object, optional): Additional fields in JSON format for custom Task fields
</Accordion>
<Accordion title="SALESFORCE_UPDATE_RECORD_ACCOUNT">
**Description:** Update an existing Account record in Salesforce.
**Parameters:**
- `recordId` (string, required): The ID of the record to update
- `Name` (string, optional): The Account name
- `OwnerId` (string, optional): The Salesforce user assigned to this Account
- `Website` (string, optional): Website URL
- `Phone` (string, optional): Phone number
- `Description` (string, optional): Account description
- `additionalFields` (object, optional): Additional fields in JSON format for custom Account fields
</Accordion>
<Accordion title="SALESFORCE_UPDATE_RECORD_ANY">
**Description:** Update a record of any object type in Salesforce.
**Note:** This is a flexible tool for updating records of custom or unknown object types.
</Accordion>
</AccordionGroup>
### **Record Retrieval**
<AccordionGroup>
<Accordion title="SALESFORCE_GET_RECORD_BY_ID_CONTACT">
**Description:** Get a Contact record by its ID.
**Parameters:**
- `recordId` (string, required): Record ID of the Contact
</Accordion>
<Accordion title="SALESFORCE_GET_RECORD_BY_ID_LEAD">
**Description:** Get a Lead record by its ID.
**Parameters:**
- `recordId` (string, required): Record ID of the Lead
</Accordion>
<Accordion title="SALESFORCE_GET_RECORD_BY_ID_OPPORTUNITY">
**Description:** Get an Opportunity record by its ID.
**Parameters:**
- `recordId` (string, required): Record ID of the Opportunity
</Accordion>
<Accordion title="SALESFORCE_GET_RECORD_BY_ID_TASK">
**Description:** Get a Task record by its ID.
**Parameters:**
- `recordId` (string, required): Record ID of the Task
</Accordion>
<Accordion title="SALESFORCE_GET_RECORD_BY_ID_ACCOUNT">
**Description:** Get an Account record by its ID.
**Parameters:**
- `recordId` (string, required): Record ID of the Account
</Accordion>
<Accordion title="SALESFORCE_GET_RECORD_BY_ID_ANY">
**Description:** Get a record of any object type by its ID.
**Parameters:**
- `recordType` (string, required): Record Type (e.g., "CustomObject__c")
- `recordId` (string, required): Record ID
</Accordion>
</AccordionGroup>
### **Record Search**
<AccordionGroup>
<Accordion title="SALESFORCE_SEARCH_RECORDS_CONTACT">
**Description:** Search for Contact records with advanced filtering.
**Parameters:**
- `filterFormula` (object, optional): Advanced filter in disjunctive normal form with field-specific operators (see the sketch after this parameter list)
- `sortBy` (string, optional): Sort field (e.g., "CreatedDate")
- `sortDirection` (string, optional): Sort direction - Options: ASC, DESC
- `includeAllFields` (boolean, optional): Include all fields in results
- `paginationParameters` (object, optional): Pagination settings with pageCursor
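The exact schema is connector-defined, but as a sketch, assuming the same disjunctive-normal-form shape used by the other connectors in this document (field, operator, and value below are illustrative, not confirmed):
```python
# Assumed filterFormula shape: OR of AND groups of single conditions,
# here matching Contacts whose Email contains "@example.com".
contact_filter = {
    "operator": "OR",
    "conditions": [
        {
            "operator": "AND",
            "conditions": [
                {
                    "field": "Email",
                    "operator": "$stringContains",
                    "value": "@example.com",
                }
            ],
        }
    ],
}
```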
</Accordion>
<Accordion title="SALESFORCE_SEARCH_RECORDS_LEAD">
**Description:** Search for Lead records with advanced filtering.
**Parameters:**
- `filterFormula` (object, optional): Advanced filter in disjunctive normal form with field-specific operators
- `sortBy` (string, optional): Sort field (e.g., "CreatedDate")
- `sortDirection` (string, optional): Sort direction - Options: ASC, DESC
- `includeAllFields` (boolean, optional): Include all fields in results
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="SALESFORCE_SEARCH_RECORDS_OPPORTUNITY">
**Description:** Search for Opportunity records with advanced filtering.
**Parameters:**
- `filterFormula` (object, optional): Advanced filter in disjunctive normal form with field-specific operators
- `sortBy` (string, optional): Sort field (e.g., "CreatedDate")
- `sortDirection` (string, optional): Sort direction - Options: ASC, DESC
- `includeAllFields` (boolean, optional): Include all fields in results
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="SALESFORCE_SEARCH_RECORDS_TASK">
**Description:** Search for Task records with advanced filtering.
**Parameters:**
- `filterFormula` (object, optional): Advanced filter in disjunctive normal form with field-specific operators
- `sortBy` (string, optional): Sort field (e.g., "CreatedDate")
- `sortDirection` (string, optional): Sort direction - Options: ASC, DESC
- `includeAllFields` (boolean, optional): Include all fields in results
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="SALESFORCE_SEARCH_RECORDS_ACCOUNT">
**Description:** Search for Account records with advanced filtering.
**Parameters:**
- `filterFormula` (object, optional): Advanced filter in disjunctive normal form with field-specific operators
- `sortBy` (string, optional): Sort field (e.g., "CreatedDate")
- `sortDirection` (string, optional): Sort direction - Options: ASC, DESC
- `includeAllFields` (boolean, optional): Include all fields in results
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="SALESFORCE_SEARCH_RECORDS_ANY">
**Description:** Search for records of any object type.
**Parameters:**
- `recordType` (string, required): Record Type to search
- `filterFormula` (string, optional): Filter search criteria
- `includeAllFields` (boolean, optional): Include all fields in results
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
</AccordionGroup>
### **List View Retrieval**
<AccordionGroup>
<Accordion title="SALESFORCE_GET_RECORD_BY_VIEW_ID_CONTACT">
**Description:** Get Contact records from a specific List View.
**Parameters:**
- `listViewId` (string, required): List View ID
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="SALESFORCE_GET_RECORD_BY_VIEW_ID_LEAD">
**Description:** Get Lead records from a specific List View.
**Parameters:**
- `listViewId` (string, required): List View ID
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="SALESFORCE_GET_RECORD_BY_VIEW_ID_OPPORTUNITY">
**Description:** Get Opportunity records from a specific List View.
**Parameters:**
- `listViewId` (string, required): List View ID
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="SALESFORCE_GET_RECORD_BY_VIEW_ID_TASK">
**Description:** Get Task records from a specific List View.
**Parameters:**
- `listViewId` (string, required): List View ID
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="SALESFORCE_GET_RECORD_BY_VIEW_ID_ACCOUNT">
**Description:** Get Account records from a specific List View.
**Parameters:**
- `listViewId` (string, required): List View ID
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
<Accordion title="SALESFORCE_GET_RECORD_BY_VIEW_ID_ANY">
**Description:** Get records of any object type from a specific List View.
**Parameters:**
- `recordType` (string, required): Record Type
- `listViewId` (string, required): List View ID
- `paginationParameters` (object, optional): Pagination settings with pageCursor
</Accordion>
</AccordionGroup>
### **Custom Fields**
<AccordionGroup>
<Accordion title="SALESFORCE_CREATE_CUSTOM_FIELD_CONTACT">
**Description:** Deploy custom fields for Contact objects.
**Parameters:**
- `label` (string, required): Field Label for displays and internal reference
- `type` (string, required): Field Type - Options: Checkbox, Currency, Date, Email, Number, Percent, Phone, Picklist, MultiselectPicklist, Text, TextArea, LongTextArea, Html, Time, Url
- `defaultCheckboxValue` (boolean, optional): Default value for checkbox fields
- `length` (string, required): Length for numeric/text fields
- `decimalPlace` (string, required): Decimal places for numeric fields
- `pickListValues` (string, required): Values for picklist fields (separated by new lines)
- `visibleLines` (string, required): Visible lines for multiselect/text area fields
- `description` (string, optional): Field description
- `helperText` (string, optional): Helper text shown on hover
- `defaultFieldValue` (string, optional): Default field value
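Putting the parameters above together, a picklist field configuration might look like the sketch below (all values hypothetical). `length`, `decimalPlace`, and `visibleLines` apply to numeric, text, and multiselect/text-area types as described above, so check the connector's validation for the type you choose:
```python
# Hypothetical values for deploying a picklist custom field on Contact.
# pickListValues is a newline-separated string, as described above.
picklist_field = {
    "label": "Contact Tier",                   # display label
    "type": "Picklist",                        # one of the listed field types
    "pickListValues": "Gold\nSilver\nBronze",  # one picklist value per line
    "description": "Tier used for support routing",
    "helperText": "Select the customer's support tier",
    "defaultFieldValue": "Bronze",
}
```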
</Accordion>
<Accordion title="SALESFORCE_CREATE_CUSTOM_FIELD_LEAD">
**Description:** Deploy custom fields for Lead objects.
**Parameters:**
- `label` (string, required): Field Label for displays and internal reference
- `type` (string, required): Field Type - Options: Checkbox, Currency, Date, Email, Number, Percent, Phone, Picklist, MultiselectPicklist, Text, TextArea, LongTextArea, Html, Time, Url
- `defaultCheckboxValue` (boolean, optional): Default value for checkbox fields
- `length` (string, required): Length for numeric/text fields
- `decimalPlace` (string, required): Decimal places for numeric fields
- `pickListValues` (string, required): Values for picklist fields (separated by new lines)
- `visibleLines` (string, required): Visible lines for multiselect/text area fields
- `description` (string, optional): Field description
- `helperText` (string, optional): Helper text shown on hover
- `defaultFieldValue` (string, optional): Default field value
</Accordion>
<Accordion title="SALESFORCE_CREATE_CUSTOM_FIELD_OPPORTUNITY">
**Description:** Deploy custom fields for Opportunity objects.
**Parameters:**
- `label` (string, required): Field Label for displays and internal reference
- `type` (string, required): Field Type - Options: Checkbox, Currency, Date, Email, Number, Percent, Phone, Picklist, MultiselectPicklist, Text, TextArea, LongTextArea, Html, Time, Url
- `defaultCheckboxValue` (boolean, optional): Default value for checkbox fields
- `length` (string, required): Length for numeric/text fields
- `decimalPlace` (string, required): Decimal places for numeric fields
- `pickListValues` (string, required): Values for picklist fields (separated by new lines)
- `visibleLines` (string, required): Visible lines for multiselect/text area fields
- `description` (string, optional): Field description
- `helperText` (string, optional): Helper text shown on hover
- `defaultFieldValue` (string, optional): Default field value
</Accordion>
<Accordion title="SALESFORCE_CREATE_CUSTOM_FIELD_TASK">
**Description:** Deploy custom fields for Task objects.
**Parameters:**
- `label` (string, required): Field Label for displays and internal reference
- `type` (string, required): Field Type - Options: Checkbox, Currency, Date, Email, Number, Percent, Phone, Picklist, MultiselectPicklist, Text, TextArea, Time, Url
- `defaultCheckboxValue` (boolean, optional): Default value for checkbox fields
- `length` (string, required): Length for numeric/text fields
- `decimalPlace` (string, required): Decimal places for numeric fields
- `pickListValues` (string, required): Values for picklist fields (separated by new lines)
- `visibleLines` (string, required): Visible lines for multiselect fields
- `description` (string, optional): Field description
- `helperText` (string, optional): Helper text shown on hover
- `defaultFieldValue` (string, optional): Default field value
</Accordion>
<Accordion title="SALESFORCE_CREATE_CUSTOM_FIELD_ACCOUNT">
**Description:** Deploy custom fields for Account objects.
**Parameters:**
- `label` (string, required): Field Label for displays and internal reference
- `type` (string, required): Field Type - Options: Checkbox, Currency, Date, Email, Number, Percent, Phone, Picklist, MultiselectPicklist, Text, TextArea, LongTextArea, Html, Time, Url
- `defaultCheckboxValue` (boolean, optional): Default value for checkbox fields
- `length` (string, required): Length for numeric/text fields
- `decimalPlace` (string, required): Decimal places for numeric fields
- `pickListValues` (string, required): Values for picklist fields (separated by new lines)
- `visibleLines` (string, required): Visible lines for multiselect/text area fields
- `description` (string, optional): Field description
- `helperText` (string, optional): Helper text shown on hover
- `defaultFieldValue` (string, optional): Default field value
</Accordion>
<Accordion title="SALESFORCE_CREATE_CUSTOM_FIELD_ANY">
**Description:** Deploy custom fields for any object type.
**Note:** This is a flexible tool for creating custom fields on custom or unknown object types.
</Accordion>
</AccordionGroup>
### **Advanced Operations**
<AccordionGroup>
<Accordion title="SALESFORCE_WRITE_SOQL_QUERY">
**Description:** Execute custom SOQL queries against your Salesforce data.
**Parameters:**
- `query` (string, required): SOQL Query (e.g., "SELECT Id, Name FROM Account WHERE Name = 'Example'")
</Accordion>
<Accordion title="SALESFORCE_CREATE_CUSTOM_OBJECT">
**Description:** Deploy a new custom object in Salesforce.
**Parameters:**
- `label` (string, required): Object Label for tabs, page layouts, and reports
- `pluralLabel` (string, required): Plural Label (e.g., "Accounts")
- `description` (string, optional): A description of the Custom Object
- `recordName` (string, required): Record Name that appears in layouts and searches (e.g., "Account Name")
</Accordion>
<Accordion title="SALESFORCE_DESCRIBE_ACTION_SCHEMA">
**Description:** Get the expected schema for operations on specific object types.
**Parameters:**
- `recordType` (string, required): Record Type to describe
- `operation` (string, required): Operation Type (e.g., "CREATE_RECORD" or "UPDATE_RECORD")
**Note:** Use this function first when working with custom objects to understand their schema before performing operations.
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Salesforce Agent Setup
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Salesforce tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with Salesforce capabilities
salesforce_agent = Agent(
role="CRM Manager",
goal="Manage customer relationships and sales processes efficiently",
backstory="An AI assistant specialized in CRM operations and sales automation.",
tools=[enterprise_tools]
)
# Task to create a new lead
create_lead_task = Task(
description="Create a new lead for John Doe from Example Corp with email john.doe@example.com",
agent=salesforce_agent,
expected_output="Lead created successfully with lead ID"
)
# Run the task
crew = Crew(
agents=[salesforce_agent],
tasks=[create_lead_task]
)
crew.kickoff()
```
### Filtering Specific Salesforce Tools
```python
from crewai_tools import CrewaiEnterpriseTools
# Get only specific Salesforce tools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token",
actions_list=["salesforce_create_record_lead", "salesforce_update_record_opportunity", "salesforce_search_records_contact"]
)
sales_manager = Agent(
role="Sales Manager",
goal="Manage leads and opportunities in the sales pipeline",
backstory="An experienced sales manager who handles lead qualification and opportunity management.",
tools=enterprise_tools
)
# Task to manage sales pipeline
pipeline_task = Task(
description="Create a qualified lead and convert it to an opportunity with $50,000 value",
agent=sales_manager,
expected_output="Lead created and opportunity established successfully"
)
crew = Crew(
agents=[sales_manager],
tasks=[pipeline_task]
)
crew.kickoff()
```
### Contact and Account Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
account_manager = Agent(
role="Account Manager",
goal="Manage customer accounts and maintain strong relationships",
backstory="An AI assistant that specializes in account management and customer relationship building.",
tools=[enterprise_tools]
)
# Task to manage customer accounts
account_task = Task(
description="""
1. Create a new account for TechCorp Inc.
2. Add John Doe as the primary contact for this account
3. Create a follow-up task for next week to check on their project status
""",
agent=account_manager,
expected_output="Account, contact, and follow-up task created successfully"
)
crew = Crew(
agents=[account_manager],
tasks=[account_task]
)
crew.kickoff()
```
### Advanced SOQL Queries and Reporting
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
data_analyst = Agent(
role="Sales Data Analyst",
goal="Generate insights from Salesforce data using SOQL queries",
backstory="An analytical AI that excels at extracting meaningful insights from CRM data.",
tools=[enterprise_tools]
)
# Complex task involving SOQL queries and data analysis
analysis_task = Task(
description="""
1. Execute a SOQL query to find all opportunities closing this quarter
2. Search for contacts at companies with opportunities over $100K
3. Create a summary report of the sales pipeline status
4. Update high-value opportunities with next steps
""",
agent=data_analyst,
expected_output="Comprehensive sales pipeline analysis with actionable insights"
)
crew = Crew(
agents=[data_analyst],
tasks=[analysis_task]
)
crew.kickoff()
```
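### Working with Custom Objects
The `SALESFORCE_DESCRIBE_ACTION_SCHEMA` tool above recommends fetching an object's schema before operating on custom types. The sketch below follows the same Agent/Task/Crew pattern as the other examples; the `Subscription__c` object name is hypothetical:
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token"
)
custom_object_admin = Agent(
    role="Custom Object Administrator",
    goal="Create records for custom Salesforce objects safely",
    backstory="An AI assistant that inspects object schemas before writing records.",
    tools=[enterprise_tools]
)
# Task that describes the schema first, then creates the record
custom_record_task = Task(
    description="""
    1. Use SALESFORCE_DESCRIBE_ACTION_SCHEMA with recordType "Subscription__c"
       and operation "CREATE_RECORD" to learn the expected fields
    2. Create a Subscription__c record with SALESFORCE_CREATE_RECORD_ANY,
       filling only the fields returned by the schema description
    """,
    agent=custom_object_admin,
    expected_output="Subscription__c record created with schema-validated fields"
)
crew = Crew(
    agents=[custom_object_admin],
    tasks=[custom_record_task]
)
crew.kickoff()
```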
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Salesforce integration setup or troubleshooting.
</Card>


@@ -0,0 +1,383 @@
---
title: Shopify Integration
description: "E-commerce and online store management with Shopify integration for CrewAI."
icon: "shopify"
mode: "wide"
---
## Overview
Enable your agents to manage e-commerce operations through Shopify. Handle customers, orders, products, inventory, and store analytics to streamline your online business with AI-powered automation.
## Prerequisites
Before using the Shopify integration, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account with an active subscription
- A Shopify store with appropriate admin permissions
- Connected your Shopify store through the [Integrations page](https://app.crewai.com/integrations)
## Available Tools
### **Customer Management**
<AccordionGroup>
<Accordion title="SHOPIFY_GET_CUSTOMERS">
**Description:** Retrieve a list of customers from your Shopify store.
**Parameters:**
- `customerIds` (string, optional): Comma-separated list of customer IDs to filter by (example: "207119551, 207119552")
- `createdAtMin` (string, optional): Only return customers created after this date (ISO or Unix timestamp)
- `createdAtMax` (string, optional): Only return customers created before this date (ISO or Unix timestamp)
- `updatedAtMin` (string, optional): Only return customers updated after this date (ISO or Unix timestamp)
- `updatedAtMax` (string, optional): Only return customers updated before this date (ISO or Unix timestamp)
- `limit` (string, optional): Maximum number of customers to return (defaults to 250)
</Accordion>
<Accordion title="SHOPIFY_SEARCH_CUSTOMERS">
**Description:** Search for customers using advanced filtering criteria.
**Parameters:**
- `filterFormula` (object, optional): Advanced filter in disjunctive normal form with field-specific operators
- `limit` (string, optional): Maximum number of customers to return (defaults to 250)
</Accordion>
<Accordion title="SHOPIFY_CREATE_CUSTOMER">
**Description:** Create a new customer in your Shopify store.
**Parameters:**
- `firstName` (string, required): Customer's first name
- `lastName` (string, required): Customer's last name
- `email` (string, required): Customer's email address
- `company` (string, optional): Company name
- `streetAddressLine1` (string, optional): Street address
- `streetAddressLine2` (string, optional): Street address line 2
- `city` (string, optional): City
- `state` (string, optional): State or province code
- `country` (string, optional): Country
- `zipCode` (string, optional): Zip code
- `phone` (string, optional): Phone number
- `tags` (string, optional): Tags as array or comma-separated list
- `note` (string, optional): Customer note
- `sendEmailInvite` (boolean, optional): Whether to send email invitation
- `metafields` (object, optional): Additional metafields in JSON format
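A hypothetical `metafields` payload is sketched below (shown as a Python literal). The namespace/key/value/type shape follows Shopify's usual metafield structure, but confirm the exact shape the connector expects:
```python
# Hypothetical metafields payload for SHOPIFY_CREATE_CUSTOMER.
metafields = [
    {
        "namespace": "loyalty",
        "key": "tier",
        "value": "vip",
        "type": "single_line_text_field",
    }
]
```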
</Accordion>
<Accordion title="SHOPIFY_UPDATE_CUSTOMER">
**Description:** Update an existing customer in your Shopify store.
**Parameters:**
- `customerId` (string, required): The ID of the customer to update
- `firstName` (string, optional): Customer's first name
- `lastName` (string, optional): Customer's last name
- `email` (string, optional): Customer's email address
- `company` (string, optional): Company name
- `streetAddressLine1` (string, optional): Street address
- `streetAddressLine2` (string, optional): Street address line 2
- `city` (string, optional): City
- `state` (string, optional): State or province code
- `country` (string, optional): Country
- `zipCode` (string, optional): Zip code
- `phone` (string, optional): Phone number
- `tags` (string, optional): Tags as array or comma-separated list
- `note` (string, optional): Customer note
- `sendEmailInvite` (boolean, optional): Whether to send email invitation
- `metafields` (object, optional): Additional metafields in JSON format
</Accordion>
</AccordionGroup>
### **Order Management**
<AccordionGroup>
<Accordion title="SHOPIFY_GET_ORDERS">
**Description:** Retrieve a list of orders from your Shopify store.
**Parameters:**
- `orderIds` (string, optional): Comma-separated list of order IDs to filter by (example: "450789469, 450789470")
- `createdAtMin` (string, optional): Only return orders created after this date (ISO or Unix timestamp)
- `createdAtMax` (string, optional): Only return orders created before this date (ISO or Unix timestamp)
- `updatedAtMin` (string, optional): Only return orders updated after this date (ISO or Unix timestamp)
- `updatedAtMax` (string, optional): Only return orders updated before this date (ISO or Unix timestamp)
- `limit` (string, optional): Maximum number of orders to return (defaults to 250)
</Accordion>
<Accordion title="SHOPIFY_CREATE_ORDER">
**Description:** Create a new order in your Shopify store.
**Parameters:**
- `email` (string, required): Customer email address
- `lineItems` (object, required): Order line items in JSON format with title, price, quantity, and variant_id (see the sketch after this parameter list)
- `sendReceipt` (boolean, optional): Whether to send order receipt
- `fulfillmentStatus` (string, optional): Fulfillment status - Options: fulfilled, null, partial, restocked
- `financialStatus` (string, optional): Financial status - Options: pending, authorized, partially_paid, paid, partially_refunded, refunded, voided
- `inventoryBehaviour` (string, optional): Inventory behavior - Options: bypass, decrement_ignoring_policy, decrement_obeying_policy
- `note` (string, optional): Order note
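As a sketch only (shown as a Python literal; the title, price, and variant ID are hypothetical), `lineItems` is a list of objects using the fields named above:
```python
# Hypothetical lineItems payload for SHOPIFY_CREATE_ORDER.
line_items = [
    {
        "title": "Premium Coffee Mug",
        "price": "19.99",
        "quantity": 2,
        "variant_id": 447654529,  # hypothetical variant ID
    }
]
```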
</Accordion>
<Accordion title="SHOPIFY_UPDATE_ORDER">
**Description:** Update an existing order in your Shopify store.
**Parameters:**
- `orderId` (string, required): The ID of the order to update
- `email` (string, optional): Customer email address
- `lineItems` (object, optional): Updated order line items in JSON format
- `sendReceipt` (boolean, optional): Whether to send order receipt
- `fulfillmentStatus` (string, optional): Fulfillment status - Options: fulfilled, null, partial, restocked
- `financialStatus` (string, optional): Financial status - Options: pending, authorized, partially_paid, paid, partially_refunded, refunded, voided
- `inventoryBehaviour` (string, optional): Inventory behavior - Options: bypass, decrement_ignoring_policy, decrement_obeying_policy
- `note` (string, optional): Order note
</Accordion>
<Accordion title="SHOPIFY_GET_ABANDONED_CARTS">
**Description:** Retrieve abandoned carts from your Shopify store.
**Parameters:**
- `createdWithInLast` (string, optional): Restrict results to checkouts created within specified time
- `createdAfterId` (string, optional): Restrict results to after the specified ID
- `status` (string, optional): Show checkouts with given status - Options: open, closed (defaults to open)
- `createdAtMin` (string, optional): Only return carts created after this date (ISO or Unix timestamp)
- `createdAtMax` (string, optional): Only return carts created before this date (ISO or Unix timestamp)
- `limit` (string, optional): Maximum number of carts to return (defaults to 250)
</Accordion>
</AccordionGroup>
### **Product Management (REST API)**
<AccordionGroup>
<Accordion title="SHOPIFY_GET_PRODUCTS">
**Description:** Retrieve a list of products from your Shopify store using REST API.
**Parameters:**
- `productIds` (string, optional): Comma-separated list of product IDs to filter by (example: "632910392, 632910393")
- `title` (string, optional): Filter by product title
- `productType` (string, optional): Filter by product type
- `vendor` (string, optional): Filter by vendor
- `status` (string, optional): Filter by status - Options: active, archived, draft
- `createdAtMin` (string, optional): Only return products created after this date (ISO or Unix timestamp)
- `createdAtMax` (string, optional): Only return products created before this date (ISO or Unix timestamp)
- `updatedAtMin` (string, optional): Only return products updated after this date (ISO or Unix timestamp)
- `updatedAtMax` (string, optional): Only return products updated before this date (ISO or Unix timestamp)
- `limit` (string, optional): Maximum number of products to return (defaults to 250)
</Accordion>
<Accordion title="SHOPIFY_CREATE_PRODUCT">
**Description:** Create a new product in your Shopify store using REST API.
**Parameters:**
- `title` (string, required): Product title
- `productType` (string, required): Product type/category
- `vendor` (string, required): Product vendor
- `productDescription` (string, optional): Product description (accepts plain text or HTML)
- `tags` (string, optional): Product tags as array or comma-separated list
- `price` (string, optional): Product price
- `inventoryPolicy` (string, optional): Inventory policy - Options: deny, continue
- `imageUrl` (string, optional): Product image URL
- `isPublished` (boolean, optional): Whether product is published
- `publishToPointToSale` (boolean, optional): Whether to publish to point of sale
</Accordion>
<Accordion title="SHOPIFY_UPDATE_PRODUCT">
**Description:** Update an existing product in your Shopify store using REST API.
**Parameters:**
- `productId` (string, required): The ID of the product to update
- `title` (string, optional): Product title
- `productType` (string, optional): Product type/category
- `vendor` (string, optional): Product vendor
- `productDescription` (string, optional): Product description (accepts plain text or HTML)
- `tags` (string, optional): Product tags as array or comma-separated list
- `price` (string, optional): Product price
- `inventoryPolicy` (string, optional): Inventory policy - Options: deny, continue
- `imageUrl` (string, optional): Product image URL
- `isPublished` (boolean, optional): Whether product is published
- `publishToPointToSale` (boolean, optional): Whether to publish to point of sale
</Accordion>
</AccordionGroup>
### **Product Management (GraphQL)**
<AccordionGroup>
<Accordion title="SHOPIFY_GET_PRODUCTS_GRAPHQL">
**Description:** Retrieve products using advanced GraphQL filtering capabilities.
**Parameters:**
- `productFilterFormula` (object, optional): Advanced filter in disjunctive normal form with support for fields like id, title, vendor, status, handle, tag, created_at, updated_at, published_at
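A sketch of the filter shape, assuming the same disjunctive-normal-form structure used elsewhere in this document; the operator string and values are illustrative and not confirmed for the GraphQL product filter:
```python
# Assumed productFilterFormula shape: active products from a given vendor.
product_filter = {
    "operator": "OR",
    "conditions": [
        {
            "operator": "AND",
            "conditions": [
                {"field": "vendor", "operator": "$stringExactlyMatches", "value": "Coffee Co"},
                {"field": "status", "operator": "$stringExactlyMatches", "value": "active"},
            ],
        }
    ],
}
```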
</Accordion>
<Accordion title="SHOPIFY_CREATE_PRODUCT_GRAPHQL">
**Description:** Create a new product using GraphQL API with enhanced media support.
**Parameters:**
- `title` (string, required): Product title
- `productType` (string, required): Product type/category
- `vendor` (string, required): Product vendor
- `productDescription` (string, optional): Product description (accepts plain text or HTML)
- `tags` (string, optional): Product tags as array or comma-separated list
- `media` (object, optional): Media objects with alt text, content type, and source URL
- `additionalFields` (object, optional): Additional product fields like status, requiresSellingPlan, giftCard
</Accordion>
<Accordion title="SHOPIFY_UPDATE_PRODUCT_GRAPHQL">
**Description:** Update an existing product using GraphQL API with enhanced media support.
**Parameters:**
- `productId` (string, required): The GraphQL ID of the product to update (e.g., "gid://shopify/Product/913144112")
- `title` (string, optional): Product title
- `productType` (string, optional): Product type/category
- `vendor` (string, optional): Product vendor
- `productDescription` (string, optional): Product description (accepts plain text or HTML)
- `tags` (string, optional): Product tags as array or comma-separated list
- `media` (object, optional): Updated media objects with alt text, content type, and source URL
- `additionalFields` (object, optional): Additional product fields like status, requiresSellingPlan, giftCard
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Shopify Agent Setup
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Shopify tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# Create an agent with Shopify capabilities
shopify_agent = Agent(
role="E-commerce Manager",
goal="Manage online store operations and customer relationships efficiently",
backstory="An AI assistant specialized in e-commerce operations and online store management.",
tools=[enterprise_tools]
)
# Task to create a new customer
create_customer_task = Task(
description="Create a new VIP customer Jane Smith with email jane.smith@example.com and phone +1-555-0123",
agent=shopify_agent,
expected_output="Customer created successfully with customer ID"
)
# Run the task
crew = Crew(
agents=[shopify_agent],
tasks=[create_customer_task]
)
crew.kickoff()
```
### Filtering Specific Shopify Tools
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

# Get only specific Shopify tools
enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token",
    actions_list=["shopify_create_customer", "shopify_create_order", "shopify_get_products"]
)

store_manager = Agent(
    role="Store Manager",
    goal="Manage customer orders and product catalog",
    backstory="An experienced store manager who handles customer relationships and inventory management.",
    tools=enterprise_tools
)

# Task to manage store operations
store_task = Task(
    description="Create a new customer and process their order for 2 Premium Coffee Mugs",
    agent=store_manager,
    expected_output="Customer created and order processed successfully"
)

crew = Crew(
    agents=[store_manager],
    tasks=[store_task]
)
crew.kickoff()
```
### Product Management with GraphQL
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token"
)

product_manager = Agent(
    role="Product Manager",
    goal="Manage product catalog and inventory with advanced GraphQL capabilities",
    backstory="An AI assistant that specializes in product management and catalog optimization.",
    tools=[enterprise_tools]
)

# Task to manage product catalog
catalog_task = Task(
    description="""
    1. Create a new product "Premium Coffee Mug" from Coffee Co vendor
    2. Add high-quality product images and descriptions
    3. Search for similar products from the same vendor
    4. Update product tags and pricing strategy
    """,
    agent=product_manager,
    expected_output="Product created and catalog optimized successfully"
)

crew = Crew(
    agents=[product_manager],
    tasks=[catalog_task]
)
crew.kickoff()
```
### Order and Customer Analytics
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token"
)

analytics_agent = Agent(
    role="E-commerce Analyst",
    goal="Analyze customer behavior and order patterns to optimize store performance",
    backstory="An analytical AI that excels at extracting insights from e-commerce data.",
    tools=[enterprise_tools]
)

# Complex task involving multiple operations
analytics_task = Task(
    description="""
    1. Retrieve recent customer data and order history
    2. Identify abandoned carts from the last 7 days
    3. Analyze product performance and inventory levels
    4. Generate recommendations for customer retention
    """,
    agent=analytics_agent,
    expected_output="Comprehensive e-commerce analytics report with actionable insights"
)

crew = Crew(
    agents=[analytics_agent],
    tasks=[analytics_task]
)
crew.kickoff()
```
## Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Shopify integration setup or troubleshooting.
</Card>

View File

@@ -0,0 +1,294 @@
---
title: Slack Integration
description: "Team communication and collaboration with Slack integration for CrewAI."
icon: "slack"
mode: "wide"
---
## Overview
Enable your agents to manage team communication through Slack. Send messages, search conversations, manage channels, and coordinate team activities to streamline your collaboration workflows with AI-powered automation.
## Prerequisites
Before using the Slack integration, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account with an active subscription
- A Slack workspace with appropriate permissions
- Connected your Slack workspace through the [Integrations page](https://app.crewai.com/integrations)
## Available Tools
### **User Management**
<AccordionGroup>
<Accordion title="SLACK_LIST_MEMBERS">
**Description:** List all members in a Slack channel.
**Parameters:**
- No parameters required - retrieves all channel members
</Accordion>
<Accordion title="SLACK_GET_USER_BY_EMAIL">
**Description:** Find a user in your Slack workspace by their email address.
**Parameters:**
- `email` (string, required): The email address of a user in the workspace
</Accordion>
<Accordion title="SLACK_GET_USERS_BY_NAME">
**Description:** Search for users by their name or display name.
**Parameters:**
- `name` (string, required): User's real name to search for
- `displayName` (string, required): User's display name to search for
- `paginationParameters` (object, optional): Pagination settings
- `pageCursor` (string, optional): Page cursor for pagination
</Accordion>
</AccordionGroup>
### **Channel Management**
<AccordionGroup>
<Accordion title="SLACK_LIST_CHANNELS">
**Description:** List all channels in your Slack workspace.
**Parameters:**
- No parameters required - retrieves all accessible channels
</Accordion>
</AccordionGroup>
### **Messaging**
<AccordionGroup>
<Accordion title="SLACK_SEND_MESSAGE">
**Description:** Send a message to a Slack channel.
**Parameters:**
- `channel` (string, required): Channel name or ID - Use Connect Portal Workflow Settings to allow users to select a channel, or enter a channel name to create a new channel
- `message` (string, required): The message text to send
- `botName` (string, required): The name of the bot that sends this message
- `botIcon` (string, required): Bot icon - Can be either an image URL or an emoji (e.g., ":dog:")
- `blocks` (object, optional): Slack Block Kit JSON for rich message formatting with attachments and interactive elements
- `authenticatedUser` (boolean, optional): If true, message appears to come from your authenticated Slack user instead of the application (defaults to false)
</Accordion>
<Accordion title="SLACK_SEND_DIRECT_MESSAGE">
**Description:** Send a direct message to a specific user in Slack.
**Parameters:**
- `memberId` (string, required): Recipient user ID - Use Connect Portal Workflow Settings to allow users to select a workspace member
- `message` (string, required): The message text to send
- `botName` (string, required): The name of the bot that sends this message
- `botIcon` (string, required): Bot icon - Can be either an image URL or an emoji (e.g., ":dog:")
- `blocks` (object, optional): Slack Block Kit JSON for rich message formatting with attachments and interactive elements
- `authenticatedUser` (boolean, optional): If true, message appears to come from your authenticated Slack user instead of the application (defaults to false)
</Accordion>
</AccordionGroup>
### **Search & Discovery**
<AccordionGroup>
<Accordion title="SLACK_SEARCH_MESSAGES">
**Description:** Search for messages across your Slack workspace.
**Parameters:**
- `query` (string, required): Search query using Slack search syntax to find messages that match specified criteria
**Search Query Examples:**
- `"project update"` - Search for messages containing "project update"
- `from:@john in:#general` - Search for messages from John in the #general channel
- `has:link after:2023-01-01` - Search for messages with links after January 1, 2023
- `in:#channel-name before:yesterday` - Search for messages in a specific channel before yesterday
</Accordion>
</AccordionGroup>
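These search operators can be embedded directly in a task description. Here is a minimal sketch, following the same pattern as the usage examples later on this page; the query string, channel name, and lowercase action name are illustrative:
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token",
    actions_list=["slack_search_messages"]
)

search_agent = Agent(
    role="Message Researcher",
    goal="Find relevant conversations quickly using Slack search syntax",
    backstory="An AI assistant that digs through workspace history for context.",
    tools=enterprise_tools
)

# The query uses the operators documented above; adjust the channel and date as needed
search_task = Task(
    description=(
        "Search Slack with the query: \"project update\" in:#general after:2023-01-01, "
        "and summarize the five most relevant messages."
    ),
    agent=search_agent,
    expected_output="Summary of the most relevant project update messages"
)

Crew(agents=[search_agent], tasks=[search_task]).kickoff()
```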
## Block Kit Integration
Slack's Block Kit allows you to create rich, interactive messages. Here are some examples of how to use the `blocks` parameter:
### Simple Text with Attachment
```json
[
  {
    "text": "I am a test message",
    "attachments": [
      {
        "text": "And here's an attachment!"
      }
    ]
  }
]
```
### Rich Formatting with Sections
```json
[
  {
    "type": "section",
    "text": {
      "type": "mrkdwn",
      "text": "*Project Update*\nStatus: ✅ Complete"
    }
  },
  {
    "type": "divider"
  },
  {
    "type": "section",
    "text": {
      "type": "plain_text",
      "text": "All tasks have been completed successfully."
    }
  }
]
```
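One way to hand a payload like this to the `blocks` parameter is to embed it in the task description so the agent forwards it when calling SLACK_SEND_MESSAGE. A minimal sketch under that assumption; the bot name, channel, and lowercase action name are placeholders:
```python
import json
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token",
    actions_list=["slack_send_message"]
)

# Block Kit payload from the "Rich Formatting with Sections" example above
blocks = [
    {"type": "section", "text": {"type": "mrkdwn", "text": "*Project Update*\nStatus: ✅ Complete"}},
    {"type": "divider"},
    {"type": "section", "text": {"type": "plain_text", "text": "All tasks have been completed successfully."}},
]

announcer = Agent(
    role="Status Announcer",
    goal="Post well-formatted status updates to the team",
    backstory="An AI assistant responsible for broadcasting project status.",
    tools=enterprise_tools
)

announce_task = Task(
    description=(
        "Send a message to the #general channel as bot 'StatusBot' using exactly "
        f"this Block Kit JSON for the blocks parameter: {json.dumps(blocks)}"
    ),
    agent=announcer,
    expected_output="Formatted status update posted to #general"
)

Crew(agents=[announcer], tasks=[announce_task]).kickoff()
```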
## Usage Examples
### Basic Slack Agent Setup
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

# Get enterprise tools (Slack tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token"
)

# Create an agent with Slack capabilities
slack_agent = Agent(
    role="Team Communication Manager",
    goal="Facilitate team communication and coordinate collaboration efficiently",
    backstory="An AI assistant specialized in team communication and workspace coordination.",
    tools=[enterprise_tools]
)

# Task to send project updates
update_task = Task(
    description="Send a project status update to the #general channel with current progress",
    agent=slack_agent,
    expected_output="Project update message sent successfully to team channel"
)

# Run the task
crew = Crew(
    agents=[slack_agent],
    tasks=[update_task]
)
crew.kickoff()
```
### Filtering Specific Slack Tools
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

# Get only specific Slack tools
enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token",
    actions_list=["slack_send_message", "slack_send_direct_message", "slack_search_messages"]
)

communication_manager = Agent(
    role="Communication Coordinator",
    goal="Manage team communications and ensure important messages reach the right people",
    backstory="An experienced communication coordinator who handles team messaging and notifications.",
    tools=enterprise_tools
)

# Task to coordinate team communication
coordination_task = Task(
    description="Send task completion notifications to team members and update project channels",
    agent=communication_manager,
    expected_output="Team notifications sent and project channels updated successfully"
)

crew = Crew(
    agents=[communication_manager],
    tasks=[coordination_task]
)
crew.kickoff()
```
### Advanced Messaging with Block Kit
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token"
)

notification_agent = Agent(
    role="Notification Manager",
    goal="Create rich, interactive notifications and manage workspace communication",
    backstory="An AI assistant that specializes in creating engaging team notifications and updates.",
    tools=[enterprise_tools]
)

# Task to send rich notifications
notification_task = Task(
    description="""
    1. Send a formatted project completion message to #general with progress charts
    2. Send direct messages to team leads with task summaries
    3. Create interactive notification with action buttons for team feedback
    """,
    agent=notification_agent,
    expected_output="Rich notifications sent with interactive elements and formatted content"
)

crew = Crew(
    agents=[notification_agent],
    tasks=[notification_task]
)
crew.kickoff()
```
### Message Search and Analytics
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token"
)

analytics_agent = Agent(
    role="Communication Analyst",
    goal="Analyze team communication patterns and extract insights from conversations",
    backstory="An analytical AI that excels at understanding team dynamics through communication data.",
    tools=[enterprise_tools]
)

# Complex task involving search and analysis
analysis_task = Task(
    description="""
    1. Search for recent project-related messages across all channels
    2. Find users by email to identify team members
    3. Analyze communication patterns and response times
    4. Generate weekly team communication summary
    """,
    agent=analytics_agent,
    expected_output="Comprehensive communication analysis with team insights and recommendations"
)

crew = Crew(
    agents=[analytics_agent],
    tasks=[analysis_task]
)
crew.kickoff()
```
## Contact Support
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Slack integration setup or troubleshooting.
</Card>

View File

@@ -0,0 +1,306 @@
---
title: Stripe Integration
description: "Payment processing and subscription management with Stripe integration for CrewAI."
icon: "stripe"
mode: "wide"
---
## Overview
Enable your agents to manage payments, subscriptions, and customer billing through Stripe. Handle customer data, process subscriptions, manage products, and track financial transactions to streamline your payment workflows with AI-powered automation.
## Prerequisites
Before using the Stripe integration, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account with an active subscription
- A Stripe account with appropriate API permissions
- Connected your Stripe account through the [Integrations page](https://app.crewai.com/integrations)
## Available Tools
### **Customer Management**
<AccordionGroup>
<Accordion title="STRIPE_CREATE_CUSTOMER">
**Description:** Create a new customer in your Stripe account.
**Parameters:**
- `emailCreateCustomer` (string, required): Customer's email address
- `name` (string, optional): Customer's full name
- `description` (string, optional): Customer description for internal reference
- `metadataCreateCustomer` (object, optional): Additional metadata as key-value pairs (e.g., `{"field1": 1, "field2": 2}`)
</Accordion>
<Accordion title="STRIPE_GET_CUSTOMER_BY_ID">
**Description:** Retrieve a specific customer by their Stripe customer ID.
**Parameters:**
- `idGetCustomer` (string, required): The Stripe customer ID to retrieve
</Accordion>
<Accordion title="STRIPE_GET_CUSTOMERS">
**Description:** Retrieve a list of customers with optional filtering.
**Parameters:**
- `emailGetCustomers` (string, optional): Filter customers by email address
- `createdAfter` (string, optional): Filter customers created after this date (Unix timestamp)
- `createdBefore` (string, optional): Filter customers created before this date (Unix timestamp)
- `limitGetCustomers` (string, optional): Maximum number of customers to return (defaults to 10)
</Accordion>
<Accordion title="STRIPE_UPDATE_CUSTOMER">
**Description:** Update an existing customer's information.
**Parameters:**
- `customerId` (string, required): The ID of the customer to update
- `emailUpdateCustomer` (string, optional): Updated email address
- `name` (string, optional): Updated customer name
- `description` (string, optional): Updated customer description
- `metadataUpdateCustomer` (object, optional): Updated metadata as key-value pairs
</Accordion>
</AccordionGroup>
### **Subscription Management**
<AccordionGroup>
<Accordion title="STRIPE_CREATE_SUBSCRIPTION">
**Description:** Create a new subscription for a customer.
**Parameters:**
- `customerIdCreateSubscription` (string, required): The customer ID for whom the subscription will be created
- `plan` (string, required): The plan ID for the subscription - Use Connect Portal Workflow Settings to allow users to select a plan
- `metadataCreateSubscription` (object, optional): Additional metadata for the subscription
</Accordion>
<Accordion title="STRIPE_GET_SUBSCRIPTIONS">
**Description:** Retrieve subscriptions with optional filtering.
**Parameters:**
- `customerIdGetSubscriptions` (string, optional): Filter subscriptions by customer ID
- `subscriptionStatus` (string, optional): Filter by subscription status - Options: incomplete, incomplete_expired, trialing, active, past_due, canceled, unpaid
- `limitGetSubscriptions` (string, optional): Maximum number of subscriptions to return (defaults to 10)
</Accordion>
</AccordionGroup>
### **Product Management**
<AccordionGroup>
<Accordion title="STRIPE_CREATE_PRODUCT">
**Description:** Create a new product in your Stripe catalog.
**Parameters:**
- `productName` (string, required): The product name
- `description` (string, optional): Product description
- `metadataProduct` (object, optional): Additional product metadata as key-value pairs
</Accordion>
<Accordion title="STRIPE_GET_PRODUCT_BY_ID">
**Description:** Retrieve a specific product by its Stripe product ID.
**Parameters:**
- `productId` (string, required): The Stripe product ID to retrieve
</Accordion>
<Accordion title="STRIPE_GET_PRODUCTS">
**Description:** Retrieve a list of products with optional filtering.
**Parameters:**
- `createdAfter` (string, optional): Filter products created after this date (Unix timestamp)
- `createdBefore` (string, optional): Filter products created before this date (Unix timestamp)
- `limitGetProducts` (string, optional): Maximum number of products to return (defaults to 10)
</Accordion>
</AccordionGroup>
### **Financial Operations**
<AccordionGroup>
<Accordion title="STRIPE_GET_BALANCE_TRANSACTIONS">
**Description:** Retrieve balance transactions from your Stripe account.
**Parameters:**
- `balanceTransactionType` (string, optional): Filter by transaction type - Options: charge, refund, payment, payment_refund
- `paginationParameters` (object, optional): Pagination settings
- `pageCursor` (string, optional): Page cursor for pagination
</Accordion>
<Accordion title="STRIPE_GET_PLANS">
**Description:** Retrieve subscription plans from your Stripe account.
**Parameters:**
- `isPlanActive` (boolean, optional): Filter by plan status - true for active plans, false for inactive plans
- `paginationParameters` (object, optional): Pagination settings
- `pageCursor` (string, optional): Page cursor for pagination
</Accordion>
</AccordionGroup>
## Usage Examples
### Basic Stripe Agent Setup
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

# Get enterprise tools (Stripe tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token"
)

# Create an agent with Stripe capabilities
stripe_agent = Agent(
    role="Payment Manager",
    goal="Manage customer payments, subscriptions, and billing operations efficiently",
    backstory="An AI assistant specialized in payment processing and subscription management.",
    tools=[enterprise_tools]
)

# Task to create a new customer
create_customer_task = Task(
    description="Create a new premium customer John Doe with email john.doe@example.com",
    agent=stripe_agent,
    expected_output="Customer created successfully with customer ID"
)

# Run the task
crew = Crew(
    agents=[stripe_agent],
    tasks=[create_customer_task]
)
crew.kickoff()
```
### Filtering Specific Stripe Tools
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

# Get only specific Stripe tools
enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token",
    actions_list=["stripe_create_customer", "stripe_create_subscription", "stripe_get_balance_transactions"]
)

billing_manager = Agent(
    role="Billing Manager",
    goal="Handle customer billing, subscriptions, and payment processing",
    backstory="An experienced billing manager who handles subscription lifecycle and payment operations.",
    tools=enterprise_tools
)

# Task to manage billing operations
billing_task = Task(
    description="Create a new customer and set up their premium subscription plan",
    agent=billing_manager,
    expected_output="Customer created and subscription activated successfully"
)

crew = Crew(
    agents=[billing_manager],
    tasks=[billing_task]
)
crew.kickoff()
```
### Subscription Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token"
)

subscription_manager = Agent(
    role="Subscription Manager",
    goal="Manage customer subscriptions and optimize recurring revenue",
    backstory="An AI assistant that specializes in subscription lifecycle management and customer retention.",
    tools=[enterprise_tools]
)

# Task to manage subscription operations
subscription_task = Task(
    description="""
    1. Create a new product "Premium Service Plan" with advanced features
    2. Set up subscription plans with different tiers
    3. Create customers and assign them to appropriate plans
    4. Monitor subscription status and handle billing issues
    """,
    agent=subscription_manager,
    expected_output="Subscription management system configured with customers and active plans"
)

crew = Crew(
    agents=[subscription_manager],
    tasks=[subscription_task]
)
crew.kickoff()
```
### Financial Analytics and Reporting
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token"
)

financial_analyst = Agent(
    role="Financial Analyst",
    goal="Analyze payment data and generate financial insights",
    backstory="An analytical AI that excels at extracting insights from payment and subscription data.",
    tools=[enterprise_tools]
)

# Complex task involving financial analysis
analytics_task = Task(
    description="""
    1. Retrieve balance transactions for the current month
    2. Analyze customer payment patterns and subscription trends
    3. Identify high-value customers and subscription performance
    4. Generate monthly financial performance report
    """,
    agent=financial_analyst,
    expected_output="Comprehensive financial analysis with payment insights and recommendations"
)

crew = Crew(
    agents=[financial_analyst],
    tasks=[analytics_task]
)
crew.kickoff()
```
## Subscription Status Reference
Understanding subscription statuses:
- **incomplete** - Subscription requires payment method or payment confirmation
- **incomplete_expired** - Subscription expired before payment was confirmed
- **trialing** - Subscription is in trial period
- **active** - Subscription is active and current
- **past_due** - Payment failed but subscription is still active
- **canceled** - Subscription has been canceled
- **unpaid** - Payment failed and subscription is no longer active
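These status values can be passed straight to the `subscriptionStatus` filter of STRIPE_GET_SUBSCRIPTIONS. A minimal sketch that reviews past-due subscriptions, following the same pattern as the usage examples above; the lowercase action names are illustrative:
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token",
    actions_list=["stripe_get_subscriptions", "stripe_get_customer_by_id"]
)

dunning_agent = Agent(
    role="Dunning Specialist",
    goal="Identify at-risk subscriptions before they churn",
    backstory="An AI assistant focused on recovering failed payments.",
    tools=enterprise_tools
)

# 'past_due' is one of the status values listed above
dunning_task = Task(
    description=(
        "Retrieve subscriptions with status 'past_due', look up each customer, "
        "and draft a short outreach note per customer."
    ),
    agent=dunning_agent,
    expected_output="List of past-due subscriptions with customer details and outreach notes"
)

Crew(agents=[dunning_agent], tasks=[dunning_task]).kickoff()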
## Metadata Usage
Metadata allows you to store additional information about customers, subscriptions, and products:
```json
{
  "customer_segment": "enterprise",
  "acquisition_source": "google_ads",
  "lifetime_value": "high",
  "custom_field_1": "value1"
}
```
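For example, a minimal sketch of a task that attaches metadata like this when creating a customer via STRIPE_CREATE_CUSTOMER; the key names mirror the illustrative JSON above and the action name is an assumption:
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token",
    actions_list=["stripe_create_customer"]
)

onboarding_agent = Agent(
    role="Customer Onboarding Agent",
    goal="Create well-annotated customer records",
    backstory="An AI assistant that enriches new Stripe customers with metadata.",
    tools=enterprise_tools
)

# Metadata keys below are the illustrative ones from the JSON example above
onboarding_task = Task(
    description=(
        "Create a Stripe customer for acme@example.com named Acme Corp with metadata: "
        "customer_segment=enterprise, acquisition_source=google_ads, lifetime_value=high."
    ),
    agent=onboarding_agent,
    expected_output="Customer created with the requested metadata attached"
)

Crew(agents=[onboarding_agent], tasks=[onboarding_task]).kickoff()
```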
This integration enables comprehensive payment and subscription management automation, allowing your AI agents to handle billing operations seamlessly within your Stripe ecosystem.

View File

@@ -0,0 +1,344 @@
---
title: Zendesk Integration
description: "Customer support and helpdesk management with Zendesk integration for CrewAI."
icon: "headset"
mode: "wide"
---
## Overview
Enable your agents to manage customer support operations through Zendesk. Create and update tickets, manage users, track support metrics, and streamline your customer service workflows with AI-powered automation.
## Prerequisites
Before using the Zendesk integration, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account with an active subscription
- A Zendesk account with appropriate API permissions
- Connected your Zendesk account through the [Integrations page](https://app.crewai.com/integrations)
## Available Tools
### **Ticket Management**
<AccordionGroup>
<Accordion title="ZENDESK_CREATE_TICKET">
**Description:** Create a new support ticket in Zendesk.
**Parameters:**
- `ticketSubject` (string, required): Ticket subject line (e.g., "Help, my printer is on fire!")
- `ticketDescription` (string, required): First comment that appears on the ticket (e.g., "The smoke is very colorful.")
- `requesterName` (string, required): Name of the user requesting support (e.g., "Jane Customer")
- `requesterEmail` (string, required): Email of the user requesting support (e.g., "jane@example.com")
- `assigneeId` (string, optional): Zendesk Agent ID assigned to this ticket - Use Connect Portal Workflow Settings to allow users to select an assignee
- `ticketType` (string, optional): Ticket type - Options: problem, incident, question, task
- `ticketPriority` (string, optional): Priority level - Options: urgent, high, normal, low
- `ticketStatus` (string, optional): Ticket status - Options: new, open, pending, hold, solved, closed
- `ticketDueAt` (string, optional): Due date for task-type tickets (ISO 8601 timestamp)
- `ticketTags` (string, optional): Array of tags to apply (e.g., `["enterprise", "other_tag"]`)
- `ticketExternalId` (string, optional): External ID to link tickets to local records
- `ticketCustomFields` (object, optional): Custom field values in JSON format
</Accordion>
<Accordion title="ZENDESK_UPDATE_TICKET">
**Description:** Update an existing support ticket in Zendesk.
**Parameters:**
- `ticketId` (string, required): ID of the ticket to update (e.g., "35436")
- `ticketSubject` (string, optional): Updated ticket subject
- `requesterName` (string, required): Name of the user who requested this ticket
- `requesterEmail` (string, required): Email of the user who requested this ticket
- `assigneeId` (string, optional): Updated assignee ID - Use Connect Portal Workflow Settings
- `ticketType` (string, optional): Updated ticket type - Options: problem, incident, question, task
- `ticketPriority` (string, optional): Updated priority - Options: urgent, high, normal, low
- `ticketStatus` (string, optional): Updated status - Options: new, open, pending, hold, solved, closed
- `ticketDueAt` (string, optional): Updated due date (ISO 8601 timestamp)
- `ticketTags` (string, optional): Updated tags array
- `ticketExternalId` (string, optional): Updated external ID
- `ticketCustomFields` (object, optional): Updated custom field values
</Accordion>
<Accordion title="ZENDESK_GET_TICKET_BY_ID">
**Description:** Retrieve a specific ticket by its ID.
**Parameters:**
- `ticketId` (string, required): The ticket ID to retrieve (e.g., "35436")
</Accordion>
<Accordion title="ZENDESK_ADD_COMMENT_TO_TICKET">
**Description:** Add a comment or internal note to an existing ticket.
**Parameters:**
- `ticketId` (string, required): ID of the ticket to add comment to (e.g., "35436")
- `commentBody` (string, required): Comment message (accepts plain text or HTML, e.g., "Thanks for your help!")
- `isInternalNote` (boolean, optional): Set to true for internal notes instead of public replies (defaults to false)
- `isPublic` (boolean, optional): True for public comments, false for internal notes
</Accordion>
<Accordion title="ZENDESK_SEARCH_TICKETS">
**Description:** Search for tickets using various filters and criteria.
**Parameters:**
- `ticketSubject` (string, optional): Filter by text in ticket subject
- `ticketDescription` (string, optional): Filter by text in ticket description and comments
- `ticketStatus` (string, optional): Filter by status - Options: new, open, pending, hold, solved, closed
- `ticketType` (string, optional): Filter by type - Options: problem, incident, question, task, no_type
- `ticketPriority` (string, optional): Filter by priority - Options: urgent, high, normal, low, no_priority
- `requesterId` (string, optional): Filter by requester user ID
- `assigneeId` (string, optional): Filter by assigned agent ID
- `recipientEmail` (string, optional): Filter by original recipient email address
- `ticketTags` (string, optional): Filter by ticket tags
- `ticketExternalId` (string, optional): Filter by external ID
- `createdDate` (object, optional): Filter by creation date with operator (EQUALS, LESS_THAN_EQUALS, GREATER_THAN_EQUALS) and value
- `updatedDate` (object, optional): Filter by update date with operator and value
- `dueDate` (object, optional): Filter by due date with operator and value
- `sort_by` (string, optional): Sort field - Options: created_at, updated_at, priority, status, ticket_type
- `sort_order` (string, optional): Sort direction - Options: asc, desc
</Accordion>
</AccordionGroup>
### **User Management**
<AccordionGroup>
<Accordion title="ZENDESK_CREATE_USER">
**Description:** Create a new user in Zendesk.
**Parameters:**
- `name` (string, required): User's full name
- `email` (string, optional): User's email address (e.g., "jane@example.com")
- `phone` (string, optional): User's phone number
- `role` (string, optional): User role - Options: admin, agent, end-user
- `externalId` (string, optional): Unique identifier from another system
- `details` (string, optional): Additional user details
- `notes` (string, optional): Internal notes about the user
</Accordion>
<Accordion title="ZENDESK_UPDATE_USER">
**Description:** Update an existing user's information.
**Parameters:**
- `userId` (string, required): ID of the user to update
- `name` (string, optional): Updated user name
- `email` (string, optional): Updated email (adds as secondary email on update)
- `phone` (string, optional): Updated phone number
- `role` (string, optional): Updated role - Options: admin, agent, end-user
- `externalId` (string, optional): Updated external ID
- `details` (string, optional): Updated user details
- `notes` (string, optional): Updated internal notes
</Accordion>
<Accordion title="ZENDESK_GET_USER_BY_ID">
**Description:** Retrieve a specific user by their ID.
**Parameters:**
- `userId` (string, required): The user ID to retrieve
</Accordion>
<Accordion title="ZENDESK_SEARCH_USERS">
**Description:** Search for users using various criteria.
**Parameters:**
- `name` (string, optional): Filter by user name
- `email` (string, optional): Filter by user email (e.g., "jane@example.com")
- `role` (string, optional): Filter by role - Options: admin, agent, end-user
- `externalId` (string, optional): Filter by external ID
- `sort_by` (string, optional): Sort field - Options: created_at, updated_at
- `sort_order` (string, optional): Sort direction - Options: asc, desc
</Accordion>
</AccordionGroup>
### **Administrative Tools**
<AccordionGroup>
<Accordion title="ZENDESK_GET_TICKET_FIELDS">
**Description:** Retrieve all standard and custom fields available for tickets.
**Parameters:**
- `paginationParameters` (object, optional): Pagination settings
- `pageCursor` (string, optional): Page cursor for pagination
</Accordion>
<Accordion title="ZENDESK_GET_TICKET_AUDITS">
**Description:** Get audit records (read-only history) for tickets.
**Parameters:**
- `ticketId` (string, optional): Get audits for a specific ticket (e.g., "1234"); if empty, retrieves audits for all non-archived tickets
- `paginationParameters` (object, optional): Pagination settings
- `pageCursor` (string, optional): Page cursor for pagination
</Accordion>
</AccordionGroup>
## Custom Fields
Custom fields allow you to store additional information specific to your organization:
```json
[
  { "id": 27642, "value": "745" },
  { "id": 27648, "value": "yes" }
]
```
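A minimal sketch of a ticket-creation task that supplies values in this format through `ticketCustomFields`; the field IDs are the illustrative ones above, and the lowercase action name is an assumption:
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token",
    actions_list=["zendesk_create_ticket"]
)

intake_agent = Agent(
    role="Ticket Intake Agent",
    goal="File complete, well-tagged support tickets",
    backstory="An AI assistant that captures customer issues with all required fields.",
    tools=enterprise_tools
)

# Field IDs 27642 and 27648 are the illustrative values from the JSON above
intake_task = Task(
    description=(
        "Create a normal-priority 'question' ticket for jane@example.com titled "
        "'Billing question' and set ticketCustomFields to "
        '[{"id": 27642, "value": "745"}, {"id": 27648, "value": "yes"}].'
    ),
    agent=intake_agent,
    expected_output="Ticket created with the custom field values applied"
)

Crew(agents=[intake_agent], tasks=[intake_task]).kickoff()
```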
## Ticket Priority Levels
Understanding priority levels:
- **urgent** - Critical issues requiring immediate attention
- **high** - Important issues that should be addressed quickly
- **normal** - Standard priority for most tickets
- **low** - Minor issues that can be addressed when convenient
## Ticket Status Workflow
Standard ticket status progression:
- **new** - Recently created, not yet assigned
- **open** - Actively being worked on
- **pending** - Waiting for customer response or external action
- **hold** - Temporarily paused
- **solved** - Issue resolved, awaiting customer confirmation
- **closed** - Ticket completed and closed
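These statuses map directly onto the `ticketStatus` filter of ZENDESK_SEARCH_TICKETS, so an agent can be pointed at a specific stage of the workflow. A minimal sketch, following the same pattern as the usage examples below; the lowercase action name is an assumption:
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token",
    actions_list=["zendesk_search_tickets"]
)

queue_reviewer = Agent(
    role="Queue Reviewer",
    goal="Keep the solved queue moving toward closure",
    backstory="An AI assistant that reviews tickets awaiting customer confirmation.",
    tools=enterprise_tools
)

# 'solved' is the status for resolved tickets awaiting confirmation (see the list above)
review_task = Task(
    description="Search for tickets with status 'solved' sorted by updated_at descending and list any older than 7 days.",
    agent=queue_reviewer,
    expected_output="List of solved tickets awaiting confirmation for more than 7 days"
)

Crew(agents=[queue_reviewer], tasks=[review_task]).kickoff()
```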
## Usage Examples
### Basic Zendesk Agent Setup
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

# Get enterprise tools (Zendesk tools will be included)
enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token"
)

# Create an agent with Zendesk capabilities
zendesk_agent = Agent(
    role="Support Manager",
    goal="Manage customer support tickets and provide excellent customer service",
    backstory="An AI assistant specialized in customer support operations and ticket management.",
    tools=[enterprise_tools]
)

# Task to create a new support ticket
create_ticket_task = Task(
    description="Create a high-priority support ticket for John Smith who is unable to access his account after password reset",
    agent=zendesk_agent,
    expected_output="Support ticket created successfully with ticket ID"
)

# Run the task
crew = Crew(
    agents=[zendesk_agent],
    tasks=[create_ticket_task]
)
crew.kickoff()
```
### Filtering Specific Zendesk Tools
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

# Get only specific Zendesk tools
enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token",
    actions_list=["zendesk_create_ticket", "zendesk_update_ticket", "zendesk_add_comment_to_ticket"]
)

support_agent = Agent(
    role="Customer Support Agent",
    goal="Handle customer inquiries and resolve support issues efficiently",
    backstory="An experienced support agent who specializes in ticket resolution and customer communication.",
    tools=enterprise_tools
)

# Task to manage support workflow
support_task = Task(
    description="Create a ticket for login issues, add troubleshooting comments, and update status to resolved",
    agent=support_agent,
    expected_output="Support ticket managed through complete resolution workflow"
)

crew = Crew(
    agents=[support_agent],
    tasks=[support_task]
)
crew.kickoff()
```
### Advanced Ticket Management
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token"
)

ticket_manager = Agent(
    role="Ticket Manager",
    goal="Manage support ticket workflows and ensure timely resolution",
    backstory="An AI assistant that specializes in support ticket triage and workflow optimization.",
    tools=[enterprise_tools]
)

# Task to manage ticket lifecycle
ticket_workflow = Task(
    description="""
    1. Create a new support ticket for account access issues
    2. Add internal notes with troubleshooting steps
    3. Update ticket priority based on customer tier
    4. Add resolution comments and close the ticket
    """,
    agent=ticket_manager,
    expected_output="Complete ticket lifecycle managed from creation to resolution"
)

crew = Crew(
    agents=[ticket_manager],
    tasks=[ticket_workflow]
)
crew.kickoff()
```
### Support Analytics and Reporting
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token"
)

support_analyst = Agent(
    role="Support Analyst",
    goal="Analyze support metrics and generate insights for team performance",
    backstory="An analytical AI that excels at extracting insights from support data and ticket patterns.",
    tools=[enterprise_tools]
)

# Complex task involving analytics and reporting
analytics_task = Task(
    description="""
    1. Search for all open tickets from the last 30 days
    2. Analyze ticket resolution times and customer satisfaction
    3. Identify common issues and support patterns
    4. Generate weekly support performance report
    """,
    agent=support_analyst,
    expected_output="Comprehensive support analytics report with performance insights and recommendations"
)

crew = Crew(
    agents=[support_analyst],
    tasks=[analytics_task]
)
crew.kickoff()
```

View File

@@ -2,6 +2,7 @@
title: "CrewAI Enterprise"
description: "Deploy, monitor, and scale your AI agent workflows"
icon: "globe"
mode: "wide"
---
## Introduction
@@ -56,8 +57,8 @@ CrewAI Enterprise extends the power of the open-source framework with features d
<Steps>
<Step title="Sign up for an account">
Create your account at [app.crewai.com](https://app.crewai.com)
<Card
title="Sign Up"
<Card
title="Sign Up"
icon="user"
href="https://app.crewai.com/signup"
>
@@ -66,34 +67,34 @@ CrewAI Enterprise extends the power of the open-source framework with features d
</Step>
<Step title="Build your first crew">
Use code or Crew Studio to build your crew
<Card
title="Build Crew"
<Card
title="Build Crew"
icon="paintbrush"
href="/enterprise/guides/build-crew"
href="/en/enterprise/guides/build-crew"
>
Build Crew
</Card>
</Step>
<Step title="Deploy your crew">
Deploy your crew to the Enterprise platform
<Card
title="Deploy Crew"
<Card
title="Deploy Crew"
icon="rocket"
href="/enterprise/guides/deploy-crew"
href="/en/enterprise/guides/deploy-crew"
>
Deploy Crew
</Card>
</Step>
<Step title="Access your crew">
Integrate with your crew via the generated API endpoints
<Card
title="API Access"
<Card
title="API Access"
icon="code"
href="/enterprise/guides/use-crew-api"
href="/en/enterprise/guides/kickoff-crew"
>
Use the Crew API
</Card>
</Step>
</Steps>
For detailed instructions, check out our [deployment guide](/enterprise/guides/deploy-crew) or click the button below to get started.
For detailed instructions, check out our [deployment guide](/en/enterprise/guides/deploy-crew) or click the button below to get started.

View File

@@ -2,6 +2,7 @@
title: FAQs
description: "Frequently asked questions about CrewAI Enterprise"
icon: "circle-question"
mode: "wide"
---
<AccordionGroup>
@@ -48,12 +49,12 @@ icon: "circle-question"
To integrate human input into agent execution, set the `human_input` flag in the task definition. When enabled, the agent prompts the user for input before delivering its final answer. This input can provide extra context, clarify ambiguities, or validate the agent's output.
For detailed implementation guidance, see our [Human-in-the-Loop guide](/how-to/human-in-the-loop).
For detailed implementation guidance, see our [Human-in-the-Loop guide](/en/how-to/human-in-the-loop).
</Accordion>
<Accordion title="What advanced customization options are available for tailoring and enhancing agent behavior and capabilities in CrewAI?">
CrewAI provides a range of advanced customization options:
- **Language Model Customization**: Agents can be customized with specific language models (`llm`) and function-calling language models (`function_calling_llm`)
- **Performance and Debugging Settings**: Adjust an agent's performance and monitor its operations
- **Verbose Mode**: Enables detailed logging of an agent's actions, useful for debugging and optimization
@@ -129,12 +130,12 @@ icon: "circle-question"
Here's a tutorial on how to consistently get structured outputs from your agents:
<Frame>
<iframe
<iframe
height="400"
width="100%"
src="https://www.youtube.com/embed/dNpKQk5uxHw"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
src="https://www.youtube.com/embed/dNpKQk5uxHw"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen></iframe>
</Frame>
</Accordion>
@@ -148,4 +149,4 @@ icon: "circle-question"
<Accordion title="How can you control the maximum number of requests per minute that the entire crew can perform?">
The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
</Accordion>
</AccordionGroup>
</AccordionGroup>

View File

@@ -0,0 +1,23 @@
---
title: CrewAI Cookbooks
description: Feature-focused quickstarts and notebooks for learning patterns fast.
icon: book
mode: "wide"
---
## Quickstarts & Demos
<CardGroup cols={2}>
<Card title="Task Guardrails" icon="shield-check" href="https://github.com/crewAIInc/crewAI-quickstarts/tree/main/Task%20Guardrails">
Interactive notebooks for hands-on exploration.
</Card>
<Card title="Browse Quickstarts" icon="bolt" href="https://github.com/crewAIInc/crewAI-quickstarts">
Feature demos and small projects showcasing specific CrewAI capabilities.
</Card>
</CardGroup>
<Tip>
Use Cookbooks to learn a pattern quickly, then jump to Full Examples for production-grade implementations.
</Tip>

View File

@@ -0,0 +1,86 @@
---
title: CrewAI Examples
description: Explore curated examples organized by Crews, Flows, Integrations, and Notebooks.
icon: rocket-launch
mode: "wide"
---
## Crews
<CardGroup cols={3}>
<Card title="Marketing Strategy" icon="bullhorn" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/marketing_strategy">
Multi-agent marketing campaign planning.
</Card>
<Card title="Surprise Trip" icon="plane" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/surprise_trip">
Personalized surprise travel planning.
</Card>
<Card title="Match Profile to Positions" icon="id-card" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/match_profile_to_positions">
CV-to-job matching with vector search.
</Card>
<Card title="Job Posting" icon="newspaper" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/job-posting">
Automated job description creation.
</Card>
<Card title="Game Builder Crew" icon="gamepad" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/game-builder-crew">
Multi-agent team that designs and builds Python games.
</Card>
<Card title="Recruitment" icon="user-group" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/recruitment">
Candidate sourcing and evaluation.
</Card>
<Card title="Browse all Crews" icon="users" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews">
See the full list of crew examples.
</Card>
</CardGroup>
## Flows
<CardGroup cols={3}>
<Card title="Content Creator Flow" icon="pen" href="https://github.com/crewAIInc/crewAI-examples/tree/main/flows/content_creator_flow">
Multi-crew content generation with routing.
</Card>
<Card title="Email Auto Responder" icon="envelope" href="https://github.com/crewAIInc/crewAI-examples/tree/main/flows/email_auto_responder_flow">
Automated email monitoring and replies.
</Card>
<Card title="Lead Score Flow" icon="chart-line" href="https://github.com/crewAIInc/crewAI-examples/tree/main/flows/lead_score_flow">
Lead qualification with human-in-the-loop.
</Card>
<Card title="Meeting Assistant Flow" icon="calendar" href="https://github.com/crewAIInc/crewAI-examples/tree/main/flows/meeting_assistant_flow">
Notes processing with integrations.
</Card>
<Card title="Self Evaluation Loop" icon="rotate" href="https://github.com/crewAIInc/crewAI-examples/tree/main/flows/self_evaluation_loop_flow">
Iterative self-improvement workflows.
</Card>
<Card title="Write a Book (Flows)" icon="book" href="https://github.com/crewAIInc/crewAI-examples/tree/main/flows/write_a_book_with_flows">
Parallel chapter generation.
</Card>
<Card title="Browse all Flows" icon="diagram-project" href="https://github.com/crewAIInc/crewAI-examples/tree/main/flows">
See the full list of flow examples.
</Card>
</CardGroup>
## Integrations
<CardGroup cols={3}>
<Card title="CrewAI ↔ LangGraph" icon="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/integrations/crewai-langgraph">
Integration with LangGraph framework.
</Card>
<Card title="Azure OpenAI" icon="cloud" href="https://github.com/crewAIInc/crewAI-examples/tree/main/integrations/azure_model">
Using CrewAI with Azure OpenAI.
</Card>
<Card title="NVIDIA Models" icon="microchip" href="https://github.com/crewAIInc/crewAI-examples/tree/main/integrations/nvidia_models">
NVIDIA ecosystem integrations.
</Card>
<Card title="Browse Integrations" icon="puzzle-piece" href="https://github.com/crewAIInc/crewAI-examples/tree/main/integrations">
See all integration examples.
</Card>
</CardGroup>
## Notebooks
<CardGroup cols={2}>
<Card title="Simple QA Crew + Flow" icon="book" href="https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/Simple%20QA%20Crew%20%2B%20Flow">
Simple QA Crew + Flow.
</Card>
<Card title="All Notebooks" icon="book" href="https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks">
Interactive examples for learning and experimentation.
</Card>
</CardGroup>

View File

@@ -2,6 +2,7 @@
title: Customizing Prompts
description: Dive deeper into low-level prompt customization for CrewAI, enabling super custom and complex use cases for different models and languages.
icon: message-pen
mode: "wide"
---
## Why Customize Prompts?
@@ -44,7 +45,7 @@ Based on your agent configuration, CrewAI adds different default instructions:
"I MUST use these formats, my job depends on it!"
```
#### For Agents With Tools
#### For Agents With Tools
```text
"IMPORTANT: Use the following format in your response:
@@ -127,7 +128,7 @@ custom_prompt_template = """Task: {input}
Please complete this task thoughtfully."""
agent = Agent(
role="Research Assistant",
role="Research Assistant",
goal="Help users find accurate information",
backstory="You are a helpful research assistant.",
system_template=custom_system_template,
@@ -164,7 +165,7 @@ crew = Crew(
```python
agent = Agent(
role="Analyst",
goal="Analyze data",
goal="Analyze data",
backstory="Expert analyst",
use_system_prompt=False # Disables system prompt separation
)
@@ -174,13 +175,13 @@ agent = Agent(
For production transparency, integrate with observability platforms to monitor all prompts and LLM interactions. This allows you to see exactly what prompts (including default instructions) are being sent to your LLMs.
See our [Observability documentation](/how-to/observability) for detailed integration guides with various platforms including Langfuse, MLflow, Weights & Biases, and custom logging solutions.
See our [Observability documentation](/en/observability/overview) for detailed integration guides with various platforms including Langfuse, MLflow, Weights & Biases, and custom logging solutions.
### Best Practices for Production
1. **Always inspect generated prompts** before deploying to production
2. **Use custom templates** when you need full control over prompt content
3. **Integrate observability tools** for ongoing prompt monitoring (see [Observability docs](/how-to/observability))
3. **Integrate observability tools** for ongoing prompt monitoring (see [Observability docs](/en/observability/overview))
4. **Test with different LLMs** as default instructions may work differently across models
5. **Document your prompt customizations** for team transparency
@@ -313,4 +314,4 @@ Low-level prompt customization in CrewAI opens the door to super custom, complex
<Check>
You now have the foundation for advanced prompt customizations in CrewAI. Whether you're adapting for model-specific structures or domain-specific constraints, this low-level approach lets you shape agent interactions in highly specialized ways.
</Check>
</Check>

View File

@@ -2,6 +2,7 @@
title: Fingerprinting
description: Learn how to use CrewAI's fingerprinting system to uniquely identify and track components throughout their lifecycle.
icon: fingerprint
mode: "wide"
---
## Overview

View File

@@ -2,6 +2,7 @@
title: Crafting Effective Agents
description: Learn best practices for designing powerful, specialized AI agents that collaborate effectively to solve complex problems.
icon: robot
mode: "wide"
---
## The Art and Science of Agent Design
@@ -448,5 +449,5 @@ Congratulations! You now understand the principles and practices of effective ag
## Next Steps
- Experiment with different agent configurations for your specific use case
- Learn about [building your first crew](/guides/crews/first-crew) to see how agents work together
- Explore [CrewAI Flows](/guides/flows/first-flow) for more advanced orchestration
- Learn about [building your first crew](/en/guides/crews/first-crew) to see how agents work together
- Explore [CrewAI Flows](/en/guides/flows/first-flow) for more advanced orchestration

View File

@@ -2,6 +2,7 @@
title: Evaluating Use Cases for CrewAI
description: Learn how to assess your AI application needs and choose the right approach between Crews and Flows based on complexity and precision requirements.
icon: scale-balanced
mode: "wide"
---
## Understanding the Decision Framework
@@ -11,7 +12,7 @@ When building AI applications with CrewAI, one of the most important decisions y
At the heart of this decision is understanding the relationship between **complexity** and **precision** in your application:
<Frame caption="Complexity vs. Precision Matrix for CrewAI Applications">
<img src="../../images/complexity_precision.png" alt="Complexity vs. Precision Matrix" />
<img src="/images/complexity_precision.png" alt="Complexity vs. Precision Matrix" />
</Frame>
This matrix helps visualize how different approaches align with varying requirements for complexity and precision. Let's explore what each quadrant means and how it guides your architectural choices.
@@ -497,7 +498,7 @@ You now have a framework for evaluating CrewAI use cases and choosing the right
## Next Steps
- Learn more about [crafting effective agents](/guides/agents/crafting-effective-agents)
- Explore [building your first crew](/guides/crews/first-crew)
- Dive into [mastering flow state management](/guides/flows/mastering-flow-state)
- Check out the [core concepts](/concepts/agents) for deeper understanding
- Learn more about [crafting effective agents](/en/guides/agents/crafting-effective-agents)
- Explore [building your first crew](/en/guides/crews/first-crew)
- Dive into [mastering flow state management](/en/guides/flows/mastering-flow-state)
- Check out the [core concepts](/en/concepts/agents) for deeper understanding

View File

@@ -2,6 +2,7 @@
title: Build Your First Crew
description: Step-by-step tutorial to create a collaborative AI team that works together to solve complex problems.
icon: users-gear
mode: "wide"
---
## Unleashing the Power of Collaborative AI
@@ -32,9 +33,9 @@ Let's get started building your first crew!
Before starting, make sure you have:
1. Installed CrewAI following the [installation guide](/installation)
1. Installed CrewAI following the [installation guide](/en/installation)
2. Set up your LLM API key in your environment, following the [LLM setup
guide](/concepts/llms#setting-up-your-llm)
guide](/en/concepts/llms#setting-up-your-llm)
3. Basic understanding of Python
## Step 1: Create a New CrewAI Project
@@ -54,7 +55,7 @@ This will generate a project with the basic structure needed for your crew. The
- A main script to run the crew
<Frame caption="CrewAI Framework Overview">
<img src="../../images/crews.png" alt="CrewAI Framework Overview" />
<img src="/images/crews.png" alt="CrewAI Framework Overview" />
</Frame>
@@ -287,7 +288,7 @@ SERPER_API_KEY=your_serper_api_key
# Add your provider's API key here too.
```
See the [LLM Setup guide](/concepts/llms#setting-up-your-llm) for details on configuring your provider of choice. You can get a Serper API key from [Serper.dev](https://serper.dev/).
See the [LLM Setup guide](/en/concepts/llms#setting-up-your-llm) for details on configuring your provider of choice. You can get a Serper API key from [Serper.dev](https://serper.dev/).
## Step 8: Install Dependencies
@@ -388,7 +389,7 @@ Now that you've built your first crew, you can:
2. Try more complex task structures and workflows
3. Implement custom tools to give your agents new capabilities
4. Apply your crew to different topics or problem domains
5. Explore [CrewAI Flows](/guides/flows/first-flow) for more advanced workflows with procedural programming
5. Explore [CrewAI Flows](/en/guides/flows/first-flow) for more advanced workflows with procedural programming
<Check>
Congratulations! You've successfully built your first CrewAI crew that can research and analyze any topic you provide. This foundational experience has equipped you with the skills to create increasingly sophisticated AI systems that can tackle complex, multi-stage problems through collaborative intelligence.

View File

@@ -2,6 +2,7 @@
title: Build Your First Flow
description: Learn how to create structured, event-driven workflows with precise control over execution.
icon: diagram-project
mode: "wide"
---
## Taking Control of AI Workflows with Flows
@@ -42,9 +43,9 @@ Let's dive in and build your first flow!
Before starting, make sure you have:
1. Installed CrewAI following the [installation guide](/installation)
1. Installed CrewAI following the [installation guide](/en/installation)
2. Set up your LLM API key in your environment, following the [LLM setup
guide](/concepts/llms#setting-up-your-llm)
guide](/en/concepts/llms#setting-up-your-llm)
3. Basic understanding of Python
## Step 1: Create a New CrewAI Flow Project
@@ -59,7 +60,7 @@ cd guide_creator_flow
This will generate a project with the basic structure needed for your flow.
<Frame caption="CrewAI Framework Overview">
<img src="../../images/flows.png" alt="CrewAI Framework Overview" />
<img src="/images/flows.png" alt="CrewAI Framework Overview" />
</Frame>
## Step 2: Understanding the Project Structure
@@ -443,7 +444,7 @@ This is the power of flows - combining different types of processing (user inter
## Step 6: Set Up Your Environment Variables
Create a `.env` file in your project root with your API keys. See the [LLM setup
guide](/concepts/llms#setting-up-your-llm) for details on configuring a provider.
guide](/en/concepts/llms#setting-up-your-llm) for details on configuring a provider.
```sh .env
OPENAI_API_KEY=your_openai_api_key

View File

@@ -2,6 +2,7 @@
title: Mastering Flow State Management
description: A comprehensive guide to managing, persisting, and leveraging state in CrewAI Flows for building robust AI applications.
icon: diagram-project
mode: "wide"
---
## Understanding the Power of State in Flows
@@ -277,22 +278,23 @@ This pattern allows you to combine direct data passing with state updates for ma
One of CrewAI's most powerful features is the ability to persist flow state across executions. This enables workflows that can be paused, resumed, and even recovered after failures.
### The @persist Decorator
### The @persist() Decorator
The `@persist` decorator automates state persistence, saving your flow's state at key points in execution.
The `@persist()` decorator automates state persistence, saving your flow's state at key points in execution.
#### Class-Level Persistence
When applied at the class level, `@persist` saves state after every method execution:
When applied at the class level, `@persist()` saves state after every method execution:
```python
from crewai.flow.flow import Flow, listen, persist, start
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class CounterState(BaseModel):
value: int = 0
@persist # Apply to the entire flow class
@persist() # Apply to the entire flow class
class PersistentCounterFlow(Flow[CounterState]):
@start()
def increment(self):
@@ -319,10 +321,11 @@ print(f"Second run result: {result2}") # Will be higher due to persisted state
#### Method-Level Persistence
For more granular control, you can apply `@persist` to specific methods:
For more granular control, you can apply `@persist()` to specific methods:
```python
from crewai.flow.flow import Flow, listen, persist, start
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
class SelectivePersistFlow(Flow):
@start()
@@ -330,7 +333,7 @@ class SelectivePersistFlow(Flow):
self.state["count"] = 1
return "First step"
@persist # Only persist after this method
@persist() # Only persist after this method
@listen(first_step)
def important_step(self, prev_result):
self.state["count"] += 1
@@ -346,6 +349,31 @@ class SelectivePersistFlow(Flow):
## Advanced State Patterns
### Conditional starts and resumable execution
Flows support conditional `@start()` and resumable execution for HITL/cyclic scenarios:
```python
from crewai.flow.flow import Flow, start, listen, and_, or_
class ResumableFlow(Flow):
@start() # unconditional start
def init(self):
...
# Conditional start: run after "init" or external trigger name
@start("init")
def maybe_begin(self):
...
@listen(and_(init, maybe_begin))
def proceed(self):
...
```
- Conditional `@start()` accepts a method name, a router label, or a callable condition.
- During resume, listeners continue from prior checkpoints; cycle/router branches honor resumption flags.
### State-Based Conditional Logic
You can use state to implement complex conditional logic in your flows:
@@ -765,5 +793,5 @@ You've now mastered the concepts and practices of state management in CrewAI Flo
- Experiment with both structured and unstructured state in your flows
- Try implementing state persistence for long-running workflows
- Explore [building your first crew](/guides/crews/first-crew) to see how crews and flows can work together
- Check out the [Flow reference documentation](/concepts/flows) for more advanced features
- Explore [building your first crew](/en/guides/crews/first-crew) to see how crews and flows can work together
- Check out the [Flow reference documentation](/en/concepts/flows) for more advanced features

View File

@@ -2,6 +2,7 @@
title: Installation
description: Get started with CrewAI - Install, configure, and build your first AI crew
icon: wrench
mode: "wide"
---
## Video Tutorial
@@ -22,7 +23,7 @@ Watch this video tutorial for a step-by-step demonstration of the installation p
<Note>
**Python Version Requirements**
CrewAI requires `Python >=3.10 and <3.13`. Here's how to check your version:
CrewAI requires `Python >=3.10 and <3.14`. Here's how to check your version:
```bash
python3 --version
```
@@ -30,6 +31,12 @@ Watch this video tutorial for a step-by-step demonstration of the installation p
If you need to update Python, visit [python.org/downloads](https://python.org/downloads)
</Note>
<Note>
**OpenAI SDK Requirement**
CrewAI 0.175.0 requires `openai >= 1.13.3`. If you manage dependencies yourself, ensure your environment satisfies this constraint to avoid import/runtime issues.
</Note>
CrewAI uses the `uv` as its dependency management and package handling tool. It simplifies project setup and execution, offering a seamless experience.
If you haven't installed `uv` yet, follow **step 1** to quickly get it set up on your system, else you can skip to **step 2**.
@@ -72,7 +79,7 @@ If you haven't installed `uv` yet, follow **step 1** to quickly get it set up on
</Warning>
<Warning>
If you encounter the `chroma-hnswlib==0.7.6` build error (`fatal error C1083: Cannot open include file: 'float.h'`) on Windows, install (Visual Studio Build Tools)[https://visualstudio.microsoft.com/downloads/] with *Desktop development with C++*.
If you encounter the `chroma-hnswlib==0.7.6` build error (`fatal error C1083: Cannot open include file: 'float.h'`) on Windows, install [Visual Studio Build Tools](https://visualstudio.microsoft.com/downloads/) with *Desktop development with C++*.
</Warning>
- To verify that `crewai` is installed, run:
@@ -104,7 +111,6 @@ We recommend using the `YAML` template scaffolding for a structured approach to
```
- This creates a new project with the following structure:
<Frame>
```
my_project/
├── .gitignore
@@ -124,7 +130,6 @@ We recommend using the `YAML` template scaffolding for a structured approach to
├── agents.yaml
└── tasks.yaml
```
</Frame>
</Step>
<Step title="Customize Your Project">
@@ -172,7 +177,7 @@ For teams and organizations, CrewAI offers enterprise deployment options that el
### CrewAI Factory (Self-hosted)
- Containerized deployment for your infrastructure
- Supports any hyperscaler including on prem depployments
- Supports any hyperscaler including on prem deployments
- Integration with your existing security systems
<Card title="Explore Enterprise Options" icon="building" href="https://crewai.com/enterprise">
@@ -186,7 +191,7 @@ For teams and organizations, CrewAI offers enterprise deployment options that el
<Card
title="Build Your First Agent"
icon="code"
href="/quickstart"
href="/en/quickstart"
>
Follow our quickstart guide to create your first CrewAI agent and get hands-on experience.
</Card>

View File

@@ -2,6 +2,7 @@
title: Introduction
description: Build AI agent teams that work together to tackle complex tasks
icon: handshake
mode: "wide"
---
# What is CrewAI?
@@ -10,8 +11,8 @@ icon: handshake
CrewAI empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario:
- **[CrewAI Crews](/guides/crews/first-crew)**: Optimize for autonomy and collaborative intelligence, enabling you to create AI teams where each agent has specific roles, tools, and goals.
- **[CrewAI Flows](/guides/flows/first-flow)**: Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively.
- **[CrewAI Crews](/en/guides/crews/first-crew)**: Optimize for autonomy and collaborative intelligence, enabling you to create AI teams where each agent has specific roles, tools, and goals.
- **[CrewAI Flows](/en/guides/flows/first-flow)**: Enable granular, event-driven control and single LLM calls for precise task orchestration, with native support for Crews.
With over 100,000 developers certified through our community courses, CrewAI is rapidly becoming the standard for enterprise-ready AI automation.
@@ -23,7 +24,7 @@ With over 100,000 developers certified through our community courses, CrewAI is
</Note>
<Frame caption="CrewAI Framework Overview">
<img src="images/crews.png" alt="CrewAI Framework Overview" />
<img src="/images/crews.png" alt="CrewAI Framework Overview" />
</Frame>
| Component | Description | Key Features |
@@ -64,7 +65,7 @@ With over 100,000 developers certified through our community courses, CrewAI is
</Note>
<Frame caption="CrewAI Framework Overview">
<img src="images/flows.png" alt="CrewAI Framework Overview" />
<img src="/images/flows.png" alt="CrewAI Framework Overview" />
</Frame>
| Component | Description | Key Features |
@@ -94,21 +95,21 @@ With over 100,000 developers certified through our community courses, CrewAI is
## When to Use Crews vs. Flows
<Note>
Understanding when to use [Crews](/guides/crews/first-crew) versus [Flows](/guides/flows/first-flow) is key to maximizing the potential of CrewAI in your applications.
Understanding when to use [Crews](/en/guides/crews/first-crew) versus [Flows](/en/guides/flows/first-flow) is key to maximizing the potential of CrewAI in your applications.
</Note>
| Use Case | Recommended Approach | Why? |
|:---------|:---------------------|:-----|
| **Open-ended research** | [Crews](/guides/crews/first-crew) | When tasks require creative thinking, exploration, and adaptation |
| **Content generation** | [Crews](/guides/crews/first-crew) | For collaborative creation of articles, reports, or marketing materials |
| **Decision workflows** | [Flows](/guides/flows/first-flow) | When you need predictable, auditable decision paths with precise control |
| **API orchestration** | [Flows](/guides/flows/first-flow) | For reliable integration with multiple external services in a specific sequence |
| **Hybrid applications** | Combined approach | Use [Flows](/guides/flows/first-flow) to orchestrate overall process with [Crews](/guides/crews/first-crew) handling complex subtasks |
| **Open-ended research** | [Crews](/en/guides/crews/first-crew) | When tasks require creative thinking, exploration, and adaptation |
| **Content generation** | [Crews](/en/guides/crews/first-crew) | For collaborative creation of articles, reports, or marketing materials |
| **Decision workflows** | [Flows](/en/guides/flows/first-flow) | When you need predictable, auditable decision paths with precise control |
| **API orchestration** | [Flows](/en/guides/flows/first-flow) | For reliable integration with multiple external services in a specific sequence |
| **Hybrid applications** | Combined approach | Use [Flows](/en/guides/flows/first-flow) to orchestrate overall process with [Crews](/en/guides/crews/first-crew) handling complex subtasks |
### Decision Framework
- **Choose [Crews](/guides/crews/first-crew) when:** You need autonomous problem-solving, creative collaboration, or exploratory tasks
- **Choose [Flows](/guides/flows/first-flow) when:** You require deterministic outcomes, auditability, or precise control over execution
- **Choose [Crews](/en/guides/crews/first-crew) when:** You need autonomous problem-solving, creative collaboration, or exploratory tasks
- **Choose [Flows](/en/guides/flows/first-flow) when:** You require deterministic outcomes, auditability, or precise control over execution
- **Combine both when:** Your application needs both structured processes and pockets of autonomous intelligence
## Why Choose CrewAI?
@@ -126,14 +127,14 @@ With over 100,000 developers certified through our community courses, CrewAI is
<Card
title="Build Your First Crew"
icon="users-gear"
href="/guides/crews/first-crew"
href="/en/guides/crews/first-crew"
>
Step-by-step tutorial to create a collaborative AI team that works together to solve complex problems.
</Card>
<Card
title="Build Your First Flow"
icon="diagram-project"
href="/guides/flows/first-flow"
href="/en/guides/flows/first-flow"
>
Learn how to create structured, event-driven workflows with precise control over execution.
</Card>
@@ -143,14 +144,14 @@ With over 100,000 developers certified through our community courses, CrewAI is
<Card
title="Install CrewAI"
icon="wrench"
href="/installation"
href="/en/installation"
>
Get started with CrewAI in your development environment.
</Card>
<Card
title="Quick Start"
icon="bolt"
href="/quickstart"
href="en/quickstart"
>
Follow our quickstart guide to create your first CrewAI agent and get hands-on experience.
</Card>
@@ -161,4 +162,4 @@ With over 100,000 developers certified through our community courses, CrewAI is
>
Connect with other developers, get help, and share your CrewAI experiences.
</Card>
</CardGroup>
</CardGroup>

View File

@@ -1,6 +1,7 @@
---
title: Before and After Kickoff Hooks
description: Learn how to use before and after kickoff hooks in CrewAI
mode: "wide"
---
CrewAI provides hooks that allow you to execute code before and after a crew's kickoff. These hooks are useful for preprocessing inputs or post-processing results.
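A minimal sketch of that pattern, assuming the `@before_kickoff` and `@after_kickoff` decorators from `crewai.project`; the hook bodies are illustrative:

```python
from crewai.project import CrewBase, after_kickoff, before_kickoff


@CrewBase
class MyCrew:
    @before_kickoff
    def prepare_inputs(self, inputs):
        # Preprocess inputs before the crew starts.
        inputs["extra_context"] = "fetched before kickoff"
        return inputs

    @after_kickoff
    def process_output(self, output):
        # Post-process the crew's result after it finishes.
        output.raw += "\n(reviewed by after_kickoff hook)"
        return output
```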

View File

@@ -2,6 +2,7 @@
title: Bring your own agent
description: Learn how to bring your own agents that work within a Crew.
icon: robots
mode: "wide"
---
Interoperability is a core concept in CrewAI. This guide will show you how to bring your own agents that work within a Crew.

View File

@@ -2,6 +2,7 @@
title: Coding Agents
description: Learn how to enable your CrewAI Agents to write and execute code, and explore advanced features for enhanced functionality.
icon: rectangle-code
mode: "wide"
---
## Introduction

View File

@@ -2,6 +2,7 @@
title: Conditional Tasks
description: Learn how to use conditional tasks in a crewAI kickoff
icon: diagram-subtask
mode: "wide"
---
## Introduction

View File

@@ -2,6 +2,7 @@
title: Create Custom Tools
description: Comprehensive guide on crafting, using, and managing custom tools within the CrewAI framework, including new functionalities and error handling.
icon: hammer
mode: "wide"
---
## Creating and Utilizing Tools in CrewAI

View File

@@ -2,6 +2,7 @@
title: Custom LLM Implementation
description: Learn how to create custom LLM implementations in CrewAI.
icon: code
mode: "wide"
---
## Overview

View File

@@ -2,6 +2,7 @@
title: Custom Manager Agent
description: Learn how to set a custom agent as the manager in CrewAI, providing more control over task management and coordination.
icon: user-shield
mode: "wide"
---
# Setting a Specific Agent as Manager in CrewAI
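A hedged sketch of the idea, assuming the `manager_agent` parameter on `Crew` together with the hierarchical process; the agent and task details are placeholders:

```python
from crewai import Agent, Crew, Process, Task

manager = Agent(
    role="Project Manager",
    goal="Coordinate the crew and delegate work to the right specialist",
    backstory="Experienced at breaking problems into subtasks.",
    allow_delegation=True,
)

researcher = Agent(
    role="Researcher",
    goal="Answer research questions thoroughly",
    backstory="A careful analyst.",
)

report = Task(
    description="Summarize the latest findings on the chosen topic.",
    expected_output="A short, sourced summary.",
)

crew = Crew(
    agents=[researcher],           # worker agents; the manager is passed separately
    tasks=[report],
    manager_agent=manager,         # custom manager instead of the default manager_llm
    process=Process.hierarchical,
)
```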

View File

@@ -2,6 +2,7 @@
title: Customize Agents
description: A comprehensive guide to tailoring agents for specific roles, tasks, and advanced customizations within the CrewAI framework.
icon: user-pen
mode: "wide"
---
## Customizable Attributes

View File

@@ -2,6 +2,7 @@
title: "Image Generation with DALL-E"
description: "Learn how to use DALL-E for AI-powered image generation in your CrewAI projects"
icon: "image"
mode: "wide"
---
CrewAI supports integration with OpenAI's DALL-E, allowing your AI agents to generate images as part of their tasks. This guide will walk you through how to set up and use the DALL-E tool in your CrewAI projects.
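For instance, a minimal setup might look like the following, assuming `DallETool` is available from `crewai_tools` and `OPENAI_API_KEY` is set in the environment:

```python
from crewai import Agent
from crewai_tools import DallETool

# Assumption: the tool reads OPENAI_API_KEY from the environment.
dalle_tool = DallETool()

illustrator = Agent(
    role="Illustrator",
    goal="Generate images that match article briefs",
    backstory="An AI artist that turns prompts into illustrations.",
    tools=[dalle_tool],
)
```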

View File

@@ -2,6 +2,7 @@
title: Force Tool Output as Result
description: Learn how to force tool output as the result in an Agent's task in CrewAI.
icon: wrench-simple
mode: "wide"
---
## Introduction

View File

@@ -2,6 +2,7 @@
title: Hierarchical Process
description: A comprehensive guide to understanding and applying the hierarchical process within your CrewAI projects, updated to reflect the latest coding practices and functionalities.
icon: sitemap
mode: "wide"
---
## Introduction

View File

@@ -2,6 +2,7 @@
title: "Human-in-the-Loop (HITL) Workflows"
description: "Learn how to implement Human-in-the-Loop workflows in CrewAI for enhanced decision-making"
icon: "user-check"
mode: "wide"
---
Human-in-the-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. This guide shows you how to implement HITL within CrewAI.
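One common entry point is the `human_input` flag on a task, which pauses execution for human feedback before the result is finalized. A minimal sketch, with placeholder agent and task details:

```python
from crewai import Agent, Task

analyst = Agent(
    role="Analyst",
    goal="Draft findings for human review",
    backstory="Prepares summaries that a person signs off on.",
)

review_task = Task(
    description="Summarize the findings and incorporate reviewer feedback.",
    expected_output="A summary approved by a human reviewer.",
    agent=analyst,
    human_input=True,  # pause and ask for human feedback before finalizing
)
```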

Some files were not shown because too many files have changed in this diff.