Compare commits

240 Commits

Author SHA1 Message Date
lorenzejay
38dcc35645 move it around 2026-02-19 11:39:53 -08:00
lorenzejay
21e9e7e8c9 better core concepts 2026-02-19 11:33:49 -08:00
lorenzejay
801908356b pass 1 for ai readable 2026-02-19 11:26:06 -08:00
Lucas Gomide
49aa29bb41 docs: correct broken human_feedback examples with working self-loop patterns (#4520)
2026-02-19 09:02:01 -08:00
João Moura
8df499d471 Fix cyclic flows silently breaking when persistence ID is passed in inputs (#4501)
* Implement user input handling in Flow class

- Introduced `FlowInputRequestedEvent` and `FlowInputReceivedEvent` to manage user input requests and responses during flow execution.
- Added `InputProvider` protocol and `InputResponse` dataclass for customizable input handling.
- Enhanced `Flow` class with `ask()` method to request user input, including timeout handling and state checkpointing (see the usage sketch after this list).
- Updated `FlowConfig` to support custom input providers.
- Created `input_provider.py` for default input provider implementations, including a console-based provider.
- Added comprehensive tests for `ask()` functionality, covering basic usage, timeout behavior, and integration with flow machinery.
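A minimal usage sketch of the API these bullets describe; the method signature, `timeout` keyword, and provider wiring are assumptions inferred from the commit message, not the verified interface:

```python
from crewai.flow.flow import Flow, start


class ApprovalFlow(Flow):
    @start()
    def request_review(self):
        # ask() delegates to the configured InputProvider; per the bullets
        # above, timeout handling and state checkpointing happen inside.
        answer = self.ask("Ship this release?", timeout=120)  # assumed signature
        return answer
```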

* Potential fix for pull request finding 'Unused import'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

* Refactor test_flow_ask.py to streamline flow kickoff calls

- Removed unnecessary variable assignments for the result of `flow.kickoff()` in two test cases, improving code clarity.
- Updated assertions to ensure the expected execution log entries are present after the flow kickoff, enhancing test reliability.

* Add current_flow_method_name context variable for flow method tracking

- Introduced a new context variable, `current_flow_method_name`, to store the name of the currently executing flow method, defaulting to "unknown".
- Updated the Flow class to set and reset this context variable during method execution, enhancing the ability to track method calls without stack inspection (a generic sketch of the pattern follows this list).
- Removed the obsolete `_resolve_calling_method_name` method, streamlining the code and improving clarity.
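The set/reset pattern described above is the standard-library `contextvars` idiom; a generic sketch (only the variable name comes from the commit, the wrapper function is illustrative):

```python
from contextvars import ContextVar

current_flow_method_name: ContextVar[str] = ContextVar(
    "current_flow_method_name", default="unknown"
)


def run_flow_method(name: str, method):
    # Set the variable for the duration of the call, then restore the prior
    # value -- callees can read it without inspecting the stack.
    token = current_flow_method_name.set(name)
    try:
        return method()
    finally:
        current_flow_method_name.reset(token)
```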

* Enhance input history management in Flow class

- Introduced a new `InputHistoryEntry` TypedDict to structure user input history for the `ask()` method, capturing details such as the question, user response, method name, timestamp, and associated metadata (reconstructed in the sketch after this list).
- Updated the `_input_history` attribute in the Flow class to utilize the new `InputHistoryEntry` type, improving type safety and clarity in input history management.
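A possible shape for that entry, reconstructed from the fields listed above (field names and types are guesses):

```python
from typing import Any, TypedDict


class InputHistoryEntry(TypedDict):
    question: str
    response: str
    method_name: str
    timestamp: float
    metadata: dict[str, Any]
```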

* Enhance timeout handling in Flow class input requests

- Updated the `ask()` method to improve timeout management by manually managing the `ThreadPoolExecutor`, preventing potential deadlocks when the provider call exceeds the timeout duration (see the sketch after this list).
- Added clarifications in the documentation regarding the behavior of the timeout and the underlying request handling, ensuring better understanding for users.
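Why manual management matters: `with ThreadPoolExecutor(...)` blocks in `__exit__` until the worker returns, so a hung provider call would stall the flow even after the timeout fires. A sketch of the pattern, with a stand-in provider call:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeoutError


def ask_with_timeout(provider_call, question: str, timeout_seconds: float):
    executor = ThreadPoolExecutor(max_workers=1)
    future = executor.submit(provider_call, question)  # stand-in provider call
    try:
        return future.result(timeout=timeout_seconds)
    except FutureTimeoutError:
        return None  # give up on the answer, but...
    finally:
        # ...don't join the worker: a blocked provider call is abandoned in
        # the background instead of deadlocking the caller.
        executor.shutdown(wait=False)
```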

* Enhance memory reset functionality in CLI commands

- Introduced flow memory reset capabilities in the `reset_memories_command`, allowing for both crew and flow memory resets.
- Added a new utility function `_reset_flow_memory` to handle memory resets for individual flow instances, improving modularity and clarity.
- Updated the `get_flows` utility to discover flow instances from project files, enhancing the CLI's ability to manage flow states.
- Expanded test coverage to validate the new flow memory reset features, ensuring robust functionality and error handling.

* LINTER

* Fix resumption flag logic in Flow class and add regression test for cyclic flow persistence

- Updated the logic for setting the `_is_execution_resuming` flag to ensure it only activates when there are completed methods to replay, preventing incorrect suppression of cyclic re-execution during state reloads.
- Added a regression test to validate that cyclic router flows complete all iterations when persistence is enabled and an 'id' is passed in inputs, ensuring robust handling of flow execution in these scenarios.

---------

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
2026-02-18 03:27:24 -03:00
João Moura
84d57c7a24 Implement user input handling in Flows (#4490)
* Implement user input handling in Flow class
2026-02-16 18:41:03 -03:00
João Moura
4aedd58829 Enhance HITL self-loop functionality in human feedback integration tests (#4493)
- Added tests to verify self-loop behavior in HITL routers, ensuring they can handle multiple rejections and immediate approvals.
- Implemented `test_hitl_self_loop_routes_back_to_same_method`, `test_hitl_self_loop_multiple_rejections`, and `test_hitl_self_loop_immediate_approval` to validate the expected execution order and outcomes.
- Updated the `or_()` listener to support looping back to the same method based on human feedback outcomes, improving flow control in complex scenarios.
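A hedged sketch of the self-loop shape those tests exercise, using the public flow decorators; the approval check is a stand-in for the real human-feedback outcome:

```python
from crewai.flow.flow import Flow, listen, or_, router, start


class DraftFlow(Flow):
    @start()
    def draft(self):
        self.state["revisions"] = 0

    @router(or_("draft", "revise"))
    def review(self):
        # Stand-in for human feedback: reject twice, then approve.
        self.state["revisions"] += 1
        return "approved" if self.state["revisions"] > 2 else "revise"

    @listen("revise")
    def revise(self):
        pass  # rework the draft; or_() lets review() fire again
```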
2026-02-15 21:54:42 -05:00
João Moura
09e9229efc New Memory Improvements (#4484)
* better DevEx

* Refactor: Update supported native providers and enhance memory handling

- Removed "groq" and "meta" from the list of supported native providers in `llm.py`.
- Added a safeguard in `flow.py` to ensure all background memory saves complete before returning.
- Improved error handling in `unified_memory.py` to prevent exceptions during shutdown, ensuring smoother memory operations and event bus interactions.

* Enhance Memory System with Consolidation and Learning Features

- Introduced memory consolidation mechanisms to prevent duplicate records during content saving, utilizing similarity checks and LLM decision-making.
- Implemented non-blocking save operations in the memory system, allowing agents to continue tasks while memory is being saved.
- Added support for learning from human feedback, enabling the system to distill lessons from past corrections and improve future outputs.
- Updated documentation to reflect new features and usage examples for memory consolidation and HITL learning.

* Enhance cyclic flow handling for or_() listeners

- Updated the Flow class to ensure that all fired or_() listeners are cleared between cycle iterations, allowing them to fire again in subsequent cycles. This change addresses a bug where listeners remained suppressed across iterations.
- Added regression tests to verify that or_() listeners fire correctly on every iteration in cyclic flows, ensuring expected behavior in complex routing scenarios.
2026-02-15 04:57:56 -03:00
João Moura
18d266c8e7 New Unified Memory System (#4420)
* chore: update memory management and dependencies

- Enhance the memory system by introducing a unified memory API that consolidates short-term, long-term, entity, and external memory functionalities.
- Update the `.gitignore` to exclude new memory-related files and blog directories.
- Modify `conftest.py` to handle missing imports for vcr stubs more gracefully.
- Add new development dependencies in `pyproject.toml` for testing and memory management.
- Refactor the `Crew` class to utilize the new unified memory system, replacing deprecated memory attributes.
- Implement memory context injection in `LiteAgent` to improve memory recall during agent execution.
- Update documentation to reflect changes in memory usage and configuration.

* feat: introduce Memory TUI for enhanced memory management

- Add a new command to the CLI for launching a Textual User Interface (TUI) to browse and recall memories.
- Implement the MemoryTUI class to facilitate user interaction with memory scopes and records.
- Enhance the unified memory API by adding a method to list records within a specified scope.
- Update `pyproject.toml` to include the `textual` dependency for TUI functionality.
- Ensure proper error handling for missing dependencies when accessing the TUI.

* feat: implement consolidation flow for memory management

- Introduce the ConsolidationFlow class to handle the decision-making process for inserting, updating, or deleting memory records based on new content.
- Add new data models: ConsolidationAction and ConsolidationPlan to structure the actions taken during consolidation.
- Enhance the memory types with new fields for consolidation thresholds and limits.
- Update the unified memory API to utilize the new consolidation flow for managing memory records.
- Implement embedding functionality for new content to facilitate similarity checks.
- Refactor existing memory analysis methods to integrate with the consolidation process.
- Update translations to include prompts for consolidation actions and user interactions.

* feat: enhance Memory TUI with Rich markup and improved UI elements

- Update the MemoryTUI class to utilize Rich markup for better visual representation of memory scope information.
- Introduce a color palette for consistent branding across the TUI interface.
- Refactor the CSS styles to improve the layout and aesthetics of the memory browsing experience.
- Enhance the display of memory entries, including better formatting for records and importance ratings.
- Implement loading indicators and error messages with Rich styling for improved user feedback during recall operations.
- Update the action bindings and navigation prompts for a more intuitive user experience.

* feat: enhance Crew class memory management and configuration

- Update the Crew class to allow for more flexible memory configurations by accepting Memory, MemoryScope, or MemorySlice instances.
- Refactor memory initialization logic to support custom memory configurations while maintaining backward compatibility.
- Improve documentation for memory-related fields to clarify usage and expectations.
- Introduce a recall oversample factor to optimize memory recall processes.
- Update related memory types and configurations to ensure consistency across the memory management system.

* chore: update dependency overrides and enhance memory management

- Added an override for the 'rich' dependency to allow compatibility with 'textual' requirements.
- Updated the 'pyproject.toml' and 'uv.lock' files to reflect the new dependency specifications.
- Refactored the Crew class to simplify memory configuration handling by allowing any type for the memory attribute.
- Improved error messages in the CLI for missing 'textual' dependency to guide users on installation.
- Introduced new packages and dependencies in the project to enhance functionality and maintain compatibility.

* refactor: enhance thread safety in flow management

- Updated LockedListProxy and LockedDictProxy to subclass list and dict respectively, ensuring compatibility with libraries requiring strict type checks.
- Improved documentation to clarify the purpose of these proxies and their thread-safe operations.
- Ensured that all mutations are protected by locks while reads delegate to the underlying data structures, enhancing concurrency safety.

* chore: update dependency versions and improve Python compatibility

- Downgraded 'vcrpy' dependency to version 7.0.0 for compatibility.
- Enhanced 'uv.lock' to include more granular resolution markers for Python versions and implementations, ensuring better compatibility across different environments.
- Updated 'urllib3' and 'selenium' dependencies to specify versions based on Python implementation, improving stability and performance.
- Removed deprecated resolution markers for 'fastembed' and streamlined its dependencies for better clarity.

* fix linter

* chore: update uv.lock for improved dependency management and memory management enhancements

- Incremented revision number in uv.lock to reflect changes.
- Added a new development dependency group in uv.lock, specifying versions for tools like pytest, mypy, and pre-commit to streamline development workflows.
- Enhanced error handling in CLI memory functions to provide clearer feedback on missing dependencies.
- Refactored memory management classes to improve type hints and maintainability, ensuring better compatibility with future updates.

* fix tests

* refactor: remove obsolete RAGStorage tests and clean up error handling

- Deleted outdated tests for RAGStorage that were no longer relevant, including tests for client failures, save operation failures, and reset failures.
- Cleaned up the test suite to focus on current functionality and improve maintainability.
- Ensured that remaining tests continue to validate the expected behavior of knowledge storage components.

* fix test

* fix texts

* fix tests

* forcing new commit

* fix: add location parameter to Google Vertex embedder configuration for memory integration tests

* debugging CI

* adding debugging for CI

* refactor: remove unnecessary logging for memory checks in agent execution

- Eliminated redundant logging statements related to memory checks in the Agent and CrewAgentExecutor classes.
- Simplified the memory retrieval logic by directly checking for available memory without logging intermediate states.
- Improved code readability and maintainability by reducing clutter in the logging output.

* updating deps

* feat: enhance thread safety in LockedListProxy and LockedDictProxy

- Added equality comparison methods (__eq__ and __ne__) to LockedListProxy and LockedDictProxy to allow for safe comparison of their contents (sketched after this list).
- Implemented consistent locking mechanisms to prevent deadlocks during comparisons.
- Improved the overall robustness of these proxy classes in multi-threaded environments.
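An illustrative sketch of the lock discipline described above; the real proxies guard mutations with the flow's state lock, here a plain `threading.Lock`:

```python
import threading


class LockedListProxy(list):
    def __init__(self, data: list, lock: threading.Lock) -> None:
        super().__init__(data)
        self._data = data
        self._lock = lock

    def __eq__(self, other: object) -> bool:
        # Take one lock at a time (never nested) so two proxies comparing
        # against each other cannot deadlock on lock ordering.
        with self._lock:
            snapshot = list(self._data)
        if isinstance(other, LockedListProxy):
            with other._lock:
                other = list(other._data)
        return snapshot == other

    def __ne__(self, other: object) -> bool:
        return not self.__eq__(other)
```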

* feat: enhance memory functionality in Flows documentation and memory system

- Added a new section on memory usage within Flows, detailing built-in methods for storing and recalling memories.
- Included an example of a Research and Analyze Flow demonstrating the integration of memory for accumulating knowledge over time.
- Updated the Memory documentation to clarify the unified memory system and its capabilities, including adaptive-depth recall and composite scoring.
- Introduced a new configuration parameter, `recall_oversample_factor`, to improve the effectiveness of memory retrieval processes.

* update docs

* refactor: improve memory record handling and pagination in unified memory system

- Simplified the `get_record` method in the Memory class by directly accessing the storage's `get_record` method.
- Enhanced the `list_records` method to include an `offset` parameter for pagination, allowing users to skip a specified number of records (see the usage sketch after this list).
- Updated documentation for both methods to clarify their functionality and parameters, improving overall code clarity and usability.
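A hedged usage sketch of the paginated listing, assuming an existing `Memory` instance; the `scope` and `limit` arguments are assumptions alongside the new `offset`:

```python
page_size, offset = 50, 0
while True:
    page = memory.list_records(scope="/team", limit=page_size, offset=offset)
    if not page:
        break
    for record in page:
        print(record)
    offset += page_size  # skip the records already seen
```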

* test: update memory scope assertions in unified memory tests

- Modified assertions in `test_lancedb_list_scopes_get_scope_info` and `test_memory_list_scopes_info_tree` to check for the presence of the "/team" scope instead of the root scope.
- Clarified comments to indicate that `list_scopes` returns child scopes rather than the root itself, enhancing test clarity and accuracy.

* feat: integrate memory tools for agents and crews

- Added functionality to inject memory tools into agents during initialization, enhancing their ability to recall and remember information mid-task.
- Implemented a new `_add_memory_tools` method in the Crew class to facilitate the addition of memory tools when memory is available.
- Introduced `RecallMemoryTool` and `RememberTool` classes in a new `memory_tools.py` file, providing agents with active recall and memory storage capabilities.
- Updated English translations to include descriptions for the new memory tools, improving user guidance on their usage.

* refactor: streamline memory recall functionality across agents and tools

- Removed the 'depth' parameter from memory recall calls in LiteAgent and Agent classes, simplifying the recall process.
- Updated the MemoryTUI to use 'deep' depth by default for more comprehensive memory retrieval.
- Enhanced the MemoryScope and MemorySlice classes to default to 'deep' depth, improving recall accuracy.
- Introduced a new 'recall_queries' field in QueryAnalysis to optimize semantic vector searches with targeted phrases.
- Updated documentation and comments to reflect changes in memory recall behavior and parameters.

* refactor: optimize memory management in flow classes

- Enhanced memory auto-creation logic in Flow class to prevent unnecessary Memory instance creation for internal flows (RecallFlow, ConsolidationFlow) by introducing a _skip_auto_memory flag.
- Removed the deprecated time_hints field from QueryAnalysis and replaced it with a more flexible time_filter field to better handle time-based queries.
- Updated documentation and comments to reflect changes in memory handling and query analysis structure, improving clarity and usability.

* updates tests

* feat: introduce EncodingFlow for enhanced memory encoding pipeline

- Added a new EncodingFlow class to orchestrate the encoding process for memory, integrating LLM analysis and embedding.
- Updated the Memory class to utilize EncodingFlow for saving content, improving the overall memory management and conflict resolution.
- Enhanced the unified memory module to include the new EncodingFlow in its public API, facilitating better memory handling.
- Updated tests to ensure proper functionality of the new encoding flow and its integration with existing memory features.

* refactor: optimize memory tool integration and recall flow

- Streamlined the addition of memory tools in the Agent class by using list comprehension for cleaner code.
- Enhanced the RecallFlow class to build task lists more efficiently with list comprehensions, improving readability and performance.
- Updated the RecallMemoryTool to utilize list comprehensions for formatting memory results, simplifying the code structure.
- Adjusted test assertions in LiteAgent to reflect the default behavior of memory recall depth, ensuring clarity in expected outcomes.

* Potential fix for pull request finding 'Empty except'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

* chore: gen missing cassette

* fix

* test: enhance memory extraction test by mocking recall to prevent LLM calls

Updated the test for memory extraction to include a mock for the recall method, ensuring that the test focuses on the save path without invoking external LLM calls. This improves test reliability and clarity.

* refactor: enhance memory handling by adding agent role parameter

Updated memory storage methods across multiple classes to include an optional `agent_role` parameter, improving the context of stored memories. Additionally, modified the initialization of several flow classes to suppress flow events, enhancing performance and reducing unnecessary event triggers.

* feat: enhance agent memory functionality with recall and save mechanisms

Implemented memory context injection during agent kickoff, allowing for memory recall before execution and passive saving of results afterward. Added new methods to handle memory saving and retrieval, including error handling for memory operations. Updated the BaseAgent class to support dynamic memory resolution and improved memory record structure with source and privacy attributes for better provenance tracking.

* test

* feat: add utility method to simplify tools field in console formatter

Introduced a new static method `_simplify_tools_field` in the console formatter to transform the 'tools' field from full tool objects to a comma-separated string of tool names. This enhancement improves the readability of tool information in the output.

* refactor: improve lazy initialization of LLM and embedder in Memory class

Refactored the Memory class to implement lazy initialization for the LLM and embedder, ensuring they are only created when first accessed. This change enhances the robustness of the Memory class by preventing initialization failures when constructed without an API key. Additionally, updated error handling to provide clearer guidance for users on resolving initialization issues.

* refactor: consolidate memory saving methods for improved efficiency

Refactored memory handling across multiple classes to replace individual memory saving calls with a batch method, `remember_many`, enhancing performance and reducing redundancy. Updated related tools and schemas to support single and multiple item memory operations, ensuring a more streamlined interface for memory interactions. Additionally, improved documentation and test coverage for the new functionality.
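A hedged example of the batch interface named above, assuming `memory` is a unified `Memory` instance; the exact payload shape is an assumption:

```python
memory.remember_many(
    [
        "Q3 revenue grew 12% quarter over quarter",
        "The client prefers weekly status updates",
    ]
)
```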

* feat: enhance MemoryTUI with improved layout and entry handling

Updated the MemoryTUI class to incorporate a new vertical layout, adding an OptionList for displaying entries and enhancing the detail view for selected records. Introduced methods for populating entry and recall lists, improving user interaction and data presentation. Additionally, refined CSS styles for better visual organization and focus handling.

* fix test

* feat: inject memory tools into LiteAgent for enhanced functionality

Added logic to the LiteAgent class to inject memory tools if memory is configured, ensuring that memory tools are only added if they are not already present. This change improves the agent's capability to utilize memory effectively during execution.

* feat: add synchronous execution method to ConsolidationFlow for improved integration

Introduced a new `run_sync()` method in the ConsolidationFlow class to facilitate procedural execution of the consolidation pipeline without relying on asynchronous event loops. Updated the EncodingFlow class to utilize this method for conflict resolution, ensuring compatibility within its async context. This change enhances the flow's ability to manage memory records effectively during nested executions.

* refactor: update ConsolidationFlow and EncodingFlow for improved async handling

Removed the synchronous `run_sync()` method from ConsolidationFlow and refactored the consolidate method in EncodingFlow to be asynchronous. This change allows for direct awaiting of the ConsolidationFlow's kickoff method, enhancing compatibility within the async event loop and preventing nested asyncio.run() issues. Additionally, updated the execution plan to listen for multiple paths, streamlining the consolidation process.

* fix: update flow documentation and remove unused ConsolidationFlow

Corrected the comment in Flow class regarding internal flows, replacing "ConsolidationFlow" with "EncodingFlow". Removed the ConsolidationFlow class as it is no longer needed, streamlining the memory handling process. Updated related imports and ensured that the memory module reflects these changes, enhancing clarity and maintainability.

* feat: enhance memory handling with background saving and query analysis optimization

Implemented a background saving mechanism in the Memory class to allow non-blocking memory operations, improving performance during high-load scenarios. Added a query analysis threshold to skip LLM calls for short queries, optimizing recall efficiency. Updated related methods and documentation to reflect these changes, ensuring a more responsive and efficient memory management system.

* fix test

* fix test

* fix: handle synchronous fallback for save operations in Memory class

Updated the Memory class to implement a synchronous fallback mechanism for save operations when the background thread pool is shut down. This change ensures that late save requests still succeed, improving reliability in memory management during shutdown scenarios.
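The fallback described above likely hinges on `ThreadPoolExecutor.submit()` raising `RuntimeError` after shutdown; a minimal sketch under that assumption:

```python
from concurrent.futures import ThreadPoolExecutor


class BackgroundSaver:
    def __init__(self) -> None:
        self._pool = ThreadPoolExecutor(max_workers=2)

    def save(self, record) -> None:
        try:
            self._pool.submit(self._do_save, record)
        except RuntimeError:
            # submit() raises RuntimeError once the pool is shut down;
            # save on the caller's thread so late requests still succeed.
            self._do_save(record)

    def _do_save(self, record) -> None:
        ...  # persist the record
```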

* feat: implement HITL learning features in human feedback decorator

Added support for learning from human feedback in the human feedback decorator. Introduced parameters to enable lesson distillation and pre-review of outputs based on past feedback. Updated related tests to ensure proper functionality of the learning mechanism, including memory interactions and default LLM usage.

---------

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-02-13 21:34:37 -03:00
Chujiang
670cdcacaa chore: update template files to use modern type annotations 2026-02-13 09:30:58 -05:00
Greyson LaLonde
f7e3b4dbe0 chore: remove downstream sync
2026-02-12 14:35:23 -05:00
Rip&Tear
0ecf5d1fb0 docs: clarify NL2SQL security model and hardening guidance (#4465)
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-02-12 10:50:29 -08:00
Giovanni Vella
6c0fb7f970 fix broken tasks table
Signed-off-by: Giovanni Vella <giovanni.vella98@gmail.com>
2026-02-12 10:55:40 -05:00
Greyson LaLonde
cde33fd981 feat: add yanked detection for version notes
2026-02-11 23:31:06 -05:00
Lorenze Jay
2ed0c2c043 imp compaction (#4399)
* imp compaction

* fix lint

* cassette gen

* cassette gen

* improve assert

* adding azure

* fix global docstring
2026-02-11 15:52:03 -08:00
Lorenze Jay
0341e5aee7 supporting prompt cache results show (#4447)
* supporting prompt cache

* dropped azure tests

* fix tests

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-02-11 14:07:15 -08:00
Mike Plachta
397d14c772 fix: correct CLI flag format from --skip-provider to --skip_provider (#4462)
Update documentation to use underscore instead of hyphen in the `--skip_provider` flag across all CLI command examples for consistency with actual CLI implementation.
2026-02-11 13:51:54 -08:00
Lucas Gomide
fc3e86e9a3 docs: add 96 missing actions across 9 integrations (#4460)
* docs: add missing integration actions from OAuth config

Sync enterprise integration docs with crewai-oauth apps.js config.
Adds ~96 missing actions across 9 integrations:
- Google Contacts: 4 contact group actions
- Google Slides: 14 slide manipulation/content actions
- Microsoft SharePoint: 27 file, Excel, and Word actions
- Microsoft Excel: 2 actions (get_used_range_metadata, get_table_data)
- Microsoft Word: 2 actions (copy_document, move_document)
- Google Docs: 27 text formatting, table, and header/footer actions
- Microsoft Outlook: 7 message and calendar event actions
- Microsoft OneDrive: 5 path-based and discovery actions
- Microsoft Teams: 8 meeting, channel, and reply actions

* docs: add missing integration actions from OAuth config

Sync pt-BR enterprise integration docs with crewai-oauth apps.js config.
Adds ~96 missing actions across 9 integrations, translated to Portuguese:
- Google Contacts: 2 contact group actions
- Google Slides: 14 slide manipulation/content actions
- Microsoft SharePoint: 27 file, Excel, and Word actions
- Microsoft Excel: 2 actions (get_used_range_metadata, get_table_data)
- Microsoft Word: 2 actions (copy_document, move_document)
- Google Docs: 27 text formatting, table, and header/footer actions
- Microsoft Outlook: 7 message and calendar event actions
- Microsoft OneDrive: 5 path-based and discovery actions
- Microsoft Teams: 8 meeting, channel, and reply actions

* docs: add missing integration actions from OAuth config

Sync Korean enterprise integration docs with crewai-oauth apps.js config.
Adds ~96 missing actions across 9 integrations, translated to Korean:
- Google Contacts: 2 contact group actions
- Google Slides: 14 slide manipulation/content actions
- Microsoft SharePoint: 27 file, Excel, and Word actions
- Microsoft Excel: 2 actions (get_used_range_metadata, get_table_data)
- Microsoft Word: 2 actions (copy_document, move_document)
- Google Docs: 27 text formatting, table, and header/footer actions
- Microsoft Outlook: 7 message and calendar event actions
- Microsoft OneDrive: 5 path-based and discovery actions
- Microsoft Teams: 8 meeting, channel, and reply actions

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-02-11 15:17:54 -05:00
Mike Plachta
2882df5daf replace old .cursorrules with AGENTS.md (#4451)
* chore: remove .cursorrules file
feat: add AGENTS.md file to any newly created file

* move the copy of the tests
2026-02-11 10:07:24 -08:00
Greyson LaLonde
3a22e80764 fix: ensure openai tool call stream is finalized
2026-02-11 10:02:31 -05:00
Greyson LaLonde
9b585a934d fix: pass started_event_id to crew 2026-02-11 09:30:07 -05:00
Rip&Tear
46e1b02154 chore: fix codeql coverage and action version (#4454) 2026-02-11 18:20:07 +08:00
Rip&Tear
87675b49fd test: avoid URL substring assertion in brave search test (#4453)
2026-02-11 14:32:10 +08:00
Lucas Gomide
a3bee66be8 Address OpenSSL CVE-2025-15467 vulnerability (#4426)
* fix(security): bump regex from 2024.9.11 to 2026.1.15

Address security vulnerability flagged in regex==2024.9.11

* bump mcp from 1.23.1 to 1.26.0

Address security vulnerability flagged in mcp==1.16.0 (resolved to 1.23.3)
2026-02-10 09:39:35 -08:00
Greyson LaLonde
f6fa04528a fix: add async HITL support and chained-router tests
Adds asynchronous human-in-the-loop handling and related fixes.

- Extend human_input provider with async support: AsyncExecutorContext, handle_feedback_async, async prompt helpers (_prompt_input_async, _async_readline), and async training/regular feedback loops in SyncHumanInputProvider.
- Add async handler methods in CrewAgentExecutor and AgentExecutor (_ahandle_human_feedback, _ainvoke_loop) to integrate async provider flows.
- Change PlusAPI.get_agent to an async httpx call and adapt caller in agent_utils to run it via asyncio.run.
- Simplify listener execution in flow.Flow to correctly pass HumanFeedbackResult to listeners and unify execution path for router outcomes.
- Remove deprecated types/hitl.py definitions.
- Add tests covering chained router feedback, rejected paths, and mixed router/non-router listeners to prevent regressions.
2026-02-06 16:29:27 -05:00
Greyson LaLonde
7d498b29be fix: event ordering; flow state locks, routing
* fix: add current task id context and flow updates

introduce a context var for the current task id in `crewai.context` to track task scope. update `Flow._execute_single_listener` to return `(result, event_id)` and adjust callers to unpack it and append `FlowMethodName(str(result))` to `router_results`. set/reset the current task id at the start/end of task execution (async + sync) with minor import and call-site tweaks.

* fix: await event futures and flush event bus

call `crewai_event_bus.flush()` after crew kickoff. in `Flow`, await event handler futures instead of just collecting them: await pending `_event_futures` before finishing, await emitted futures immediately with try/except to log failures, then clear `_event_futures`. ensures handlers complete and errors surface.

* fix: continue iteration on tool completion events

expand the loop bridge listener to also trigger on tool completion events (`tool_completed` and `native_tool_completed`) so agent iteration resumes after tools finish. add a `requests.post` mock and response fixture in the liteagent test to simulate platform tool execution. refresh and sanitize vcr cassettes (updated model responses, timestamps, and header placeholders) to reflect tool-call flows and new recordings.

* fix: thread-safe state proxies & native routing

add thread-safe state proxies and refactor native tool routing.

* introduce `LockedListProxy` and `LockedDictProxy` in `flow.py` and update `StateProxy` to return them for list/dict attrs so mutations are protected by the flow lock.
* update `AgentExecutor` to use `StateProxy` on flow init, guard the messages setter with the state lock, and return a `StateProxy` from the temp state accessor.
* convert `call_llm_native_tools` into a listener (no direct routing return) and add `route_native_tool_result` to route based on state (pending tool calls, final answer, or context error).
* minor cleanup in `continue_iteration` to drop orphan listeners on init.
* update test cassettes for new native tool call responses, timestamps, and ids.

improves concurrency safety for shared state and makes native tool routing explicit.

* chore: regen cassettes

* chore: regen cassettes, remove duplicate listener call path
2026-02-06 14:02:43 -05:00
Greyson LaLonde
1308bdee63 feat: add started_event_id and set in eventbus
* feat: add started_event_id and set in eventbus

* chore: update additional test assumption

* fix: restore event bus handlers on context exit

fix rollback in crewai events bus so that exiting the context restores
the previous _sync_handlers, _async_handlers, _handler_dependencies, and _execution_plan_cache by assigning shallow copies of the saved dicts. previously these
were set to empty dicts on exit, which caused registered handlers and cached execution plans to be lost.
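A schematic of the restore-on-exit behavior that commit describes, with the registry attribute names taken from the message and the context manager itself illustrative:

```python
import copy
from contextlib import contextmanager


@contextmanager
def scoped_handlers(bus):
    names = ("_sync_handlers", "_async_handlers",
             "_handler_dependencies", "_execution_plan_cache")
    # Snapshot the registries on entry...
    saved = {name: copy.copy(getattr(bus, name)) for name in names}
    try:
        yield bus
    finally:
        # ...and reassign shallow copies on exit, instead of clearing them
        # (which lost registered handlers and cached execution plans).
        for name in names:
            setattr(bus, name, copy.copy(saved[name]))
```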
2026-02-05 21:28:23 -05:00
Greyson LaLonde
6bb1b178a1 chore: extension points
Introduce ContextVar-backed hooks and small API/behavior changes to improve extensibility and testability.

Changes include:
- agents: mark configure_structured_output as abstract and change its parameter to task to reflect use of task metadata.
- tracing: convert _first_time_trace_hook to a ContextVar and call .get() to safely retrieve the hook.
- console formatter: add _disable_version_check ContextVar and skip version checks when set (avoids noisy checks in certain contexts).
- flow: use current_triggering_event_id variable when scheduling listener tasks to keep naming consistent.
- hallucination guardrail: make context optional, add _validate_output_hook to allow custom validation hooks, update examples and return contract to allow hooks to override behavior.
- agent utilities: add _create_plus_client_hook for injecting a Plus client (used in tests/alternate flows), ensure structured tools have current_usage_count initialized and propagate to original tool, and fall back to creating PlusAPI client when no hook is provided.
2026-02-05 12:49:54 -05:00
Greyson LaLonde
fe2a4b4e40 chore: bug fixes and more refactor
Refactor agent executor to delegate human interactions to a provider: add messages and ask_for_human_input properties, implement _invoke_loop and _format_feedback_message, and replace the internal iterative/training feedback logic with a call to get_provider().handle_feedback.

Make LLMGuardrail kickoff coroutine-aware by detecting coroutines and running them via asyncio.run so both sync and async agents are supported.

Make telemetry more robust by safely handling missing task.output (use empty string) and returning early if span is None before setting attributes.

Improve serialization to detect circular references via an _ancestors set, propagate it through recursive calls, and pass exclude/max_depth/_current_depth consistently to prevent infinite recursion and produce stable serializable output.
2026-02-04 21:21:54 -05:00
Greyson LaLonde
711e7171e1 chore: improve hook typing and registration
Allow hook registration to accept both typed hook types and plain callables by importing and using After*/Before*CallHookCallable types; add explicit LLMCallHookContext and ToolCallHookContext typing in crew_base. Introduce a post-initialize crew hook list and invoke hooks after Crew instance initialization. Refactor filtered hook factory functions to include precise typing and clearer local names (before_llm_hook/after_llm_hook/before_tool_hook/after_tool_hook) and register those with the instance. Update CrewInstance protocol to include _registered_hook_functions and _hooks_being_registered fields.
2026-02-04 21:16:20 -05:00
Vini Brasil
76b5f72e81 Fix tool error causing double event scope pop (#4373)
When a tool raises an error, both ToolUsageErrorEvent and
ToolUsageFinishedEvent were being emitted. Since both events pop the
event scope stack, this caused the agent scope to be incorrectly popped
along with the tool scope.
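Schematically, the fix implies exactly one scope-popping event per tool invocation (event names are from the message; the surrounding wiring is illustrative):

```python
try:
    result = tool.run(**arguments)
except Exception as exc:
    event_bus.emit(ToolUsageErrorEvent(error=str(exc)))    # pops the tool scope
else:
    event_bus.emit(ToolUsageFinishedEvent(output=result))  # pops the tool scope
# Emitting both events on failure popped the stack twice, discarding the
# enclosing agent scope as well.
```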
2026-02-04 20:34:08 -03:00
Greyson LaLonde
d86d43d3e0 chore: refactor crew to provider
Enable dynamic extension exports and small behavior fixes across events and flow modules:

- events/__init__.py: Added _extension_exports and extended __getattr__ to lazily resolve registered extension values or import paths.
- events/event_bus.py: Implemented off() to unregister sync/async handlers, clean handler dependencies, and invalidate execution plan cache.
- events/listeners/tracing/utils.py: Added Callable import and _first_time_trace_hook to allow overriding first-time trace auto-collection behavior.
- events/types/tool_usage_events.py: Changed ToolUsageEvent.run_attempts default from None to 0 to avoid nullable handling.
- events/utils/console_formatter.py: Respect CREWAI_DISABLE_VERSION_CHECK env var to skip version checks in CI-like flows.
- flow/async_feedback/__init__.py: Added typing.Any import, _extension_exports and __getattr__ to support extensions via attribute lookup.

These changes add extension points and safer defaults, and provide a way to unregister event handlers.
2026-02-04 16:05:21 -05:00
Greyson LaLonde
6bfc98e960 refactor: extract hitl to provider pattern
* refactor: extract hitl to provider pattern

- add HumanInputProvider protocol with setup_messages and handle_feedback (see the sketch after this list)
- move sync hitl logic from executor to SyncHumanInputProvider
- add _passthrough_exceptions extension point in agent/core.py
- create crewai.core.providers module for extensible components
- remove _ask_human_input from base_agent_executor_mixin
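A sketch of what that protocol could look like; method signatures are assumptions based on the names above:

```python
from typing import Protocol


class HumanInputProvider(Protocol):
    def setup_messages(self, messages: list[dict]) -> list[dict]: ...

    def handle_feedback(self, output: str) -> str: ...
```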
2026-02-04 15:40:22 -05:00
Greyson LaLonde
3cc33ef6ab fix: resolve complex schema $ref pointers in mcp tools
* fix: resolve complex schema $ref pointers in mcp tools

* chore: update tool specifications

* fix: adapt mcp tools; sanitize pydantic json schemas

* fix: strip nulls from json schemas and simplify mcp args

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-02-03 20:47:58 -05:00
Lorenze Jay
3fec4669af Lorenze/fix/anthropic available functions call (#4360)
* feat: enhance AnthropicCompletion to support available functions in tool execution

- Updated the `_prepare_completion_params` method to accept `available_functions` for better tool handling.
- Modified tool execution logic to directly return results from tools when `available_functions` is provided, aligning behavior with OpenAI's model.
- Added new test cases to validate the execution of tools with available functions, ensuring correct argument passing and result formatting.

This change improves the flexibility and usability of the Anthropic LLM integration, allowing for more complex interactions with tools.

* refactor: remove redundant event emission in AnthropicCompletion

* fix test

* dry up
2026-02-03 16:30:43 -08:00
dependabot[bot]
d3f424fd8f chore(deps-dev): bump types-aiofiles
Bumps [types-aiofiles](https://github.com/typeshed-internal/stub_uploader) from 24.1.0.20250822 to 25.1.0.20251011.
- [Commits](https://github.com/typeshed-internal/stub_uploader/commits)

---
updated-dependencies:
- dependency-name: types-aiofiles
  dependency-version: 25.1.0.20251011
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-03 12:02:28 -05:00
Matt Aitchison
fee9445067 fix: add .python-version to fix Dependabot uv updates (#4352)
Dependabot's uv updater defaults to Python 3.14.2, which is incompatible
with the project's requires-python constraint (>=3.10, <3.14). Adding
.python-version pins the Python version to 3.13 for dependency updates.

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-02-03 10:55:01 -06:00
Greyson LaLonde
a3c01265ee feat: add version check & integrate update notices 2026-02-03 10:17:50 -05:00
Matt Aitchison
aa7e7785bc chore: group dependabot security updates into single PR (#4351)
Configure dependabot to batch security updates together while keeping
regular version updates as separate PRs.
2026-02-03 08:53:28 -06:00
Thiago Moretto
e30645e855 limit stagehand dep version to 0.5.9 due to breaking changes (#4339)
* limit to 0.5.9 due to breaking changes + add env vars requirements

* fix tool spec extract that was ignoring with default

* original tool spec

* update spec
2026-02-03 09:43:24 -05:00
Greyson LaLonde
c1d2801be2 fix: reject reserved script names for crew folders 2026-02-03 09:16:55 -05:00
Greyson LaLonde
6a8483fcb6 fix: resolve race condition in guardrail event emission test 2026-02-03 09:06:48 -05:00
Greyson LaLonde
5fb602dff2 fix: replace timing-based concurrency test with state tracking 2026-02-03 08:58:51 -05:00
Greyson LaLonde
b90cff580a fix: relax openai and litellm dependency constraints 2026-02-03 08:51:55 -05:00
Vini Brasil
576b74b2ef Add call_id to LLM events for correlating requests (#4281)
When monitoring LLM events, consumers need to know which events belong
to the same API call. Before this change, there was no way to correlate
LLMCallStartedEvent, LLMStreamChunkEvent, and LLMCallCompletedEvent
belonging to the same request.
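A hedged consumer-side sketch: grouping stream chunks by the new call_id so concurrent LLM calls don't interleave (event attribute names are assumptions):

```python
chunks: dict[str, list[str]] = {}


def on_stream_chunk(event) -> None:       # LLMStreamChunkEvent handler
    chunks.setdefault(event.call_id, []).append(event.chunk)


def on_call_completed(event) -> None:     # LLMCallCompletedEvent handler
    full_response = "".join(chunks.pop(event.call_id, []))
    print(f"call {event.call_id}: {len(full_response)} chars streamed")
```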
2026-02-03 10:10:33 -03:00
Greyson LaLonde
7590d4c6e3 fix: enforce additionalProperties=false in schemas
* fix: enforce additionalProperties=false in schemas

* fix: ensure nested items have required properties
2026-02-02 22:19:04 -05:00
Sampson
8c6436234b adds additional search params (#4321)
Introduces support for additional Brave Search API web-search parameters.
2026-02-02 11:17:02 -08:00
Lucas Gomide
96bde4510b feat: auto update tools.specs (#4341) 2026-02-02 12:52:00 -05:00
Greyson LaLonde
9d7f45376a fix: use contextvars for flow execution context 2026-02-02 11:24:02 -05:00
Thiago Moretto
536447ab0e declare stagehand package as dep for StagehandTool (#4336) 2026-02-02 09:45:47 -05:00
Lorenze Jay
63a508f601 feat: bump versions to 1.9.3 (#4316)
* feat: bump versions to 1.9.3

* bump bump
2026-01-30 14:24:25 -08:00
Greyson LaLonde
102b6ae855 feat: add a2a liteagent, auth, transport negotiation, and file support
* feat: add server-side auth schemes and protocol extensions

- add server auth scheme base class and implementations (api key, bearer token, basic/digest auth, mtls)
- add server-side extension system for a2a protocol extensions
- add extensions middleware for x-a2a-extensions header management
- add extension validation and registry utilities
- enhance auth utilities with server-side support
- add async intercept method to match client call interceptor protocol
- fix type_checking import to resolve mypy errors with a2aconfig

* feat: add transport negotiation and content type handling

- add transport negotiation logic with fallback support
- add content type parser and encoder utilities
- add transport configuration models (client and server)
- add transport types and enums
- enhance config with transport settings
- add negotiation events for transport and content type

* feat: add a2a delegation support to LiteAgent

* feat: add file input support to a2a delegation and tasks

Introduces handling of file inputs in A2A delegation flows by converting file dictionaries to protocol-compatible parts and propagating them through delegation and task execution functions. Updates include utility functions for file conversion, changes to message construction, and passing input_files through relevant APIs.

* feat: liteagent a2a delegation support to kickoff methods
2026-01-30 17:10:00 -05:00
Lorenze Jay
19ce56032c fix: improve output handling and response model integration in agents (#4307)
* fix: improve output handling and response model integration in agents

- Refactored output handling in the Agent class to ensure proper conversion and formatting of outputs, including support for BaseModel instances.
- Enhanced the AgentExecutor class to correctly utilize response models during execution, improving the handling of structured outputs.
- Updated the Gemini and Anthropic completion providers to ensure compatibility with new response model handling, including the addition of strict mode for function definitions.
- Improved the OpenAI completion provider to enforce strict adherence to function schemas.
- Adjusted translations to clarify instructions regarding output formatting and schema adherence.

* drop what was a print that didn't get deleted properly

* fixes gemini

* azure working

* bedrock works

* added tests

* adjust test

* fix tests and regen

* fix tests and regen

* refactor: ensure stop words are applied correctly in Azure, Gemini, and OpenAI completions; add tests to validate behavior with structured outputs

* linting
2026-01-30 12:27:46 -08:00
Joao Moura
85f31459c1 docs link
2026-01-30 09:05:36 -08:00
Joao Moura
6fcf748dae refactor: update Flow HITL Management documentation to emphasize email-first notifications, routing rules, and auto-response capabilities; remove outdated references to assignment and SLA management 2026-01-30 08:44:54 -08:00
Joao Moura
38065e29ce updating docs 2026-01-30 08:44:54 -08:00
Lorenze Jay
e291a97bdd chore: update version to 1.9.2 across all relevant files (#4299)
2026-01-28 17:11:44 -08:00
Lorenze Jay
2d05e59223 Lorenze/improve tool response pt2 (#4297)
* no need post tool reflection on native tools

* refactor: update prompt generation to prevent thought leakage

- Modified the prompt structure to ensure agents without tools use a simplified format, avoiding ReAct instructions.
- Introduced a new 'task_no_tools' slice for agents lacking tools, ensuring clean output without Thought: prefixes.
- Enhanced test coverage to verify that prompts do not encourage thought leakage, ensuring outputs remain focused and direct.
- Added integration tests to validate that real LLM calls produce clean outputs without internal reasoning artifacts.

* don't forget the cassettes
2026-01-28 16:53:19 -08:00
Greyson LaLonde
a731efac8d fix: improve structured output handling across providers and agents
- add gemini 2.0 schema support using response_json_schema with propertyOrdering while retaining backward compatibility for earlier models
- refactor llm completions to return validated pydantic models when a response_model is provided, updating hooks, types, and tests for consistent structured outputs
- extend AgentFinish and executors to support BaseModel outputs, improve anthropic structured parsing, and clean up schema utilities, tests, and original_json handling
2026-01-28 16:59:55 -05:00
Greyson LaLonde
1e27cf3f0f fix: ensure verbosity flag is applied
2026-01-28 11:52:47 -05:00
Lorenze Jay
381ad3a9a8 chore: update version to 1.9.1
2026-01-27 20:08:53 -05:00
Lorenze Jay
f53bdb28ac feat: implement before and after tool call hooks in CrewAgentExecutor… (#4287)
* feat: implement before and after tool call hooks in CrewAgentExecutor and AgentExecutor

- Added support for before and after tool call hooks in both CrewAgentExecutor and AgentExecutor classes.
- Introduced ToolCallHookContext to manage context for hooks, allowing for enhanced control over tool execution.
- Implemented logic to block tool execution based on before hooks and to modify results based on after hooks (see the sketch after this list).
- Added integration tests to validate the functionality of the new hooks, ensuring they work as expected in various scenarios.
- Enhanced the overall flexibility and extensibility of tool interactions within the CrewAI framework.
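An illustrative before-hook with the blocking semantics described above (returning False vetoes the call); the context attribute and tool name are hypothetical, and registration wiring is omitted:

```python
def block_destructive_tools(context) -> bool:    # context: ToolCallHookContext
    if context.tool_name == "delete_records":    # hypothetical attribute/tool
        return False  # before-hooks can block execution
    return True
```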

* Potential fix for pull request finding 'Unused local variable'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

* Potential fix for pull request finding 'Unused local variable'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

* test: add integration test for before hook blocking tool execution in Crew

- Implemented a new test to verify that the before hook can successfully block the execution of a tool within a crew.
- The test checks that the tool is not executed when the before hook returns False, ensuring proper control over tool interactions.
- Enhanced the validation of hook calls to confirm that both before and after hooks are triggered appropriately, even when execution is blocked.
- This addition strengthens the testing coverage for tool call hooks in the CrewAI framework.

* drop unused

* refactor(tests): remove OPENAI_API_KEY check from tool hook tests

- Eliminated the check for the OPENAI_API_KEY environment variable in the test cases for tool hooks.
- This change simplifies the test setup and allows for running tests without requiring the API key to be set, improving test accessibility and flexibility.

---------

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
2026-01-27 14:56:50 -08:00
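A hedged sketch of the hook mechanism the PR above describes: a `ToolCallHookContext` carries the tool name and arguments, a before-hook returning False blocks execution, and after-hooks may modify the result. The context fields and wiring below are assumed for illustration, not the shipped API:

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class ToolCallHookContext:
    """Context passed to before/after tool call hooks (fields assumed)."""
    tool_name: str
    tool_args: dict[str, Any]
    result: Any = None


def block_dangerous(ctx: ToolCallHookContext) -> bool:
    # Returning False blocks the tool call, as the PR describes.
    return ctx.tool_name != "dangerous_tool"


def log_after(ctx: ToolCallHookContext) -> Any:
    # After-hooks can inspect and modify the result before the agent sees it.
    print(f"{ctx.tool_name} -> {ctx.result!r}")
    return ctx.result


def execute_with_hooks(
    tool: Callable[..., Any],
    ctx: ToolCallHookContext,
    before: list[Callable[[ToolCallHookContext], bool]],
    after: list[Callable[[ToolCallHookContext], Any]],
) -> Any:
    """Hypothetical executor wiring: before-hooks, tool, then after-hooks."""
    if not all(hook(ctx) for hook in before):
        return "Tool execution blocked by before hook"
    ctx.result = tool(**ctx.tool_args)
    for hook in after:
        ctx.result = hook(ctx)
    return ctx.result
```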
Greyson LaLonde
3b17026082 fix: correct tool-calling content handling and schema serialization
- fix(gemini): prevent tool calls from using stale text content; correct key refs
- fix(agent-executor): resolve type errors
- refactor(schema): extract Pydantic schema utilities from platform tools
- fix(schema): properly serialize schemas and ensure Responses API uses a separate structure
- fix: preserve list identity to avoid mutation/aliasing issues
- chore(tests): update assumptions to match new behavior
2026-01-27 15:47:29 -05:00
Greyson LaLonde
d52dbc1f4b chore: add missing change logs (#4285)
* chore: add missing change logs

* chore: add translations
2026-01-26 18:26:01 -08:00
Lorenze Jay
6b926b90d0 chore: update version to 1.9.0 across all relevant files (#4284)
- Bumped the version number to 1.9.0 in pyproject.toml files and __init__.py files across the CrewAI library and its tools.
- Updated dependencies to use the new version of crewai-tools (1.9.0) for improved functionality and compatibility.
- Ensured consistency in versioning across the codebase to reflect the latest updates.
2026-01-26 16:36:35 -08:00
Lorenze Jay
fc84daadbb fix: enhance file store with fallback memory cache when aiocache is n… (#4283)
* fix: enhance file store with fallback memory cache when aiocache is not installed

- Added a simple in-memory cache implementation to serve as a fallback when the aiocache library is unavailable.
- Improved error handling for the aiocache import, ensuring that the application can still function without it.
- This change enhances the robustness of the file store utility by providing a reliable caching mechanism in various environments.

* drop fallback

* Potential fix for pull request finding 'Unused global variable'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

---------

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
2026-01-26 15:12:34 -08:00
Lorenze Jay
58b866a83d Lorenze/supporting vertex embeddings (#4282)
* feat: introduce GoogleGenAIVertexEmbeddingFunction for dual SDK support

- Added a new embedding function to support both the legacy vertexai.language_models SDK and the new google-genai SDK for Google Vertex AI.
- Updated factory methods to route to the new embedding function.
- Enhanced VertexAIProvider and related configurations to accommodate the new model options.
- Added integration tests for Google Vertex embeddings with Crew memory, ensuring compatibility and functionality with both authentication methods.

This update improves the flexibility and compatibility of Google Vertex AI embeddings within the CrewAI framework.

* fix test count

* rm comment

* regen cassettes

* regen

* drop variable from .envtest

* direct to relevant test only
2026-01-26 14:55:03 -08:00
Greyson LaLonde
9797567342 feat: add structured outputs and response_format support across providers (#4280)
* feat: add response_format parameter to Azure and Gemini providers

* feat: add structured outputs support to Bedrock and Anthropic providers

* chore: bump anthropic dep

* fix: use beta structured output for new models
2026-01-26 11:03:33 -08:00
Greyson LaLonde
a32de6bdac fix: ensure doc list is not empty
2026-01-26 05:08:01 -05:00
Vidit Ostwal
06a58e463c feat: adding response_id in streaming response 2026-01-26 04:20:04 -05:00
Vidit Ostwal
3d771f03fa fix: ensure bedrock client handles stop sequences properly
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-01-25 20:28:09 -05:00
Greyson LaLonde
db7aeb5a00 chore: disable chroma telemetry
2026-01-25 16:23:29 -05:00
Lorenze Jay
0cb40374de Enhance Gemini LLM Tool Handling and Add Test for Float Responses (#4273)
- Updated the GeminiCompletion class to handle non-dict values returned from tools, ensuring that floats are wrapped in a dictionary format for consistent response handling.
- Introduced a new YAML cassette to test the Gemini LLM's ability to process tools that return float values, verifying that the agent can correctly utilize the sum_numbers tool and return the expected results.
- Added a comprehensive test case to validate the integration of the sum_numbers tool within the Gemini LLM, ensuring accurate calculations and proper response formatting.

These changes improve the robustness of tool interactions within the Gemini LLM and enhance testing coverage for float return values.

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-01-25 12:50:49 -08:00
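A minimal sketch of the wrapping behavior described above: non-dict tool returns (such as floats) are coerced into a dict before being sent back to Gemini, since Gemini function responses expect a JSON object. The key name is an assumed convention:

```python
from typing import Any


def wrap_tool_result(value: Any) -> dict[str, Any]:
    """Coerce non-dict tool returns into a dict payload for Gemini."""
    if isinstance(value, dict):
        return value
    return {"result": value}  # "result" key assumed for illustration


assert wrap_tool_result(3.5) == {"result": 3.5}
assert wrap_tool_result({"sum": 8}) == {"sum": 8}
```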
Greyson LaLonde
0f3208197f chore: native files and openai responses docs
2026-01-23 18:24:00 -05:00
Greyson LaLonde
c4c9208229 feat: native multimodal file handling; openai responses api
- add input_files parameter to Crew.kickoff(), Flow.kickoff(), Task, and Agent.kickoff()
- add provider-specific file uploaders for OpenAI, Anthropic, Gemini, and Bedrock
- add file type detection, constraint validation, and automatic format conversion
- add URL file source support for multimodal content
- add streaming uploads for large files
- add prompt caching support for Anthropic
- add OpenAI Responses API support
2026-01-23 15:13:25 -05:00
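A hedged sketch of the `input_files` parameter described above. The commit states that `Crew.kickoff()`, `Flow.kickoff()`, `Task`, and `Agent.kickoff()` accept input files and that URL sources are supported; passing plain paths and URL strings, as below, is an assumption for illustration:

```python
from crewai import Agent

analyst = Agent(
    role="Document Analyst",
    goal="Summarize uploaded files",
    backstory="Reads reports and charts.",
)

# Per the commit, local files and URL sources are both accepted;
# the exact accepted value types are assumed here.
result = analyst.kickoff(
    "Summarize the attached quarterly report.",
    input_files=["./reports/q4.pdf", "https://example.com/chart.png"],
)
print(result)
```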
Lorenze Jay
bd4d039f63 Lorenze/imp/native tool calling (#4258)
* wip restructuring agent executor and liteagent

* fix: handle None task in AgentExecutor to prevent errors

Added a check to ensure that if the task is None, the method returns early without attempting to access task properties. This change improves the robustness of the AgentExecutor by preventing potential errors when the task is not set.

* refactor: streamline AgentExecutor initialization by removing redundant parameters

Updated the Agent class to simplify the initialization of the AgentExecutor by removing unnecessary task and crew parameters in standalone mode. This change enhances code clarity and maintains backward compatibility by ensuring that the executor is correctly configured without redundant assignments.

* wip: clean

* ensure executors work inside a flow due to flow in flow async structure

* refactor: enhance agent kickoff preparation by separating common logic

Updated the Agent class to introduce a new private method  that consolidates the common setup logic for both synchronous and asynchronous kickoff executions. This change improves code clarity and maintainability by reducing redundancy in the kickoff process, while ensuring that the agent can still execute effectively within both standalone and flow contexts.

* linting and tests

* fix test

* refactor: improve test for Agent kickoff parameters

Updated the test for the Agent class to ensure that the kickoff method correctly preserves parameters. The test now verifies the configuration of the agent after kickoff, enhancing clarity and maintainability. Additionally, the test for asynchronous kickoff within a flow context has been updated to reflect the Agent class instead of LiteAgent.

* refactor: update test task guardrail process output for improved validation

Refactored the test for task guardrail process output to enhance the validation of the output against the OpenAPI schema. The changes include a more structured request body and updated response handling to ensure compliance with the guardrail requirements. This update aims to improve the clarity and reliability of the test cases, ensuring that task outputs are correctly validated and feedback is appropriately provided.

* test fix cassette

* test fix cassette

* working

* working cassette

* refactor: streamline agent execution and enhance flow compatibility

Refactored the Agent class to simplify the execution method by removing the event loop check and clarifying the behavior when called from synchronous and asynchronous contexts. The changes ensure that the method operates seamlessly within flow methods, improving clarity in the documentation. Additionally, updated the AgentExecutor to set the response model to None, enhancing flexibility. New test cassettes were added to validate the functionality of agents within flow contexts, ensuring robust testing for both synchronous and asynchronous operations.

* fixed cassette

* Enhance Flow Execution Logic

- Introduced conditional execution for start methods in the Flow class.
- Unconditional start methods are prioritized during kickoff, while conditional starts are executed only if no unconditional starts are present.
- Improved handling of cyclic flows by allowing re-execution of conditional start methods triggered by routers.
- Added checks to continue execution chains for completed conditional starts.

These changes improve the flexibility and control of flow execution, ensuring that the correct methods are triggered based on the defined conditions.

* Enhance Agent and Flow Execution Logic

- Updated the Agent class to automatically detect the event loop and return a coroutine when called within a Flow, simplifying async handling for users.
- Modified Flow class to execute listeners sequentially, preventing race conditions on shared state during listener execution.
- Improved handling of coroutine results from synchronous methods, ensuring proper execution flow and state management.

These changes enhance the overall execution logic and user experience when working with agents and flows in CrewAI.

* Enhance Flow Listener Logic and Agent Imports

- Updated the Flow class to track fired OR listeners, ensuring that multi-source OR listeners only trigger once during execution. This prevents redundant executions and improves flow efficiency.
- Cleared fired OR listeners during cyclic flow resets to allow re-execution in new cycles.
- Modified the Agent class imports to include Coroutine from collections.abc, enhancing type handling for asynchronous operations.

These changes improve the control and performance of flow execution in CrewAI, ensuring more predictable behavior in complex scenarios.

* adjusted test due to new cassette

* ensure native tool calling works with liteagent

* ensure response model is respected

* Enhance Tool Name Handling for LLM Compatibility

- Added a new function  to replace invalid characters in function names with underscores, ensuring compatibility with LLM providers.
- Updated the  function to sanitize tool names before validation.
- Modified the  function to use sanitized names for tool registration.

These changes improve the robustness of tool name handling, preventing potential issues with invalid characters in function names.

* ensure we don't finalize batch on just a liteagent finishing

* max tools per turn wip and ensure we drop print times

* fix sync main issues

* fix llm_call_completed event serialization issue

* drop max_tools_iterations

* for fixing model dump with state

* Add extract_tool_call_info function to handle various tool call formats

- Introduced a new utility function  to extract tool call ID, name, and arguments from different provider formats (OpenAI, Gemini, Anthropic, and dictionary).
- This enhancement improves the flexibility and compatibility of tool calls across multiple LLM providers, ensuring consistent handling of tool call information.
- The function returns a tuple containing the call ID, function name, and function arguments, or None if the format is unrecognized.

* Refactor AgentExecutor to support batch execution of native tool calls

- Updated the  method to process all tools from  in a single batch, enhancing efficiency and reducing the number of interactions with the LLM.
- Introduced a new utility function  to streamline the extraction of tool call details, improving compatibility with various tool formats.
- Removed the  parameter, simplifying the initialization of the .
- Enhanced logging and message handling to provide clearer insights during tool execution.
- This refactor improves the overall performance and usability of the agent execution flow.

* Update English translations for tool usage and reasoning instructions

- Revised the `post_tool_reasoning` message to clarify the analysis process after tool usage, emphasizing the need to provide only the final answer if requirements are met.
- Updated the `format` message to simplify the instructions for deciding between using a tool or providing a final answer, enhancing clarity for users.
- These changes improve the overall user experience by providing clearer guidance on task execution and response formatting.

* fix

* fixing azure tests

* organize imports

* dropped unused

* Remove debug print statements from AgentExecutor to clean up the code and improve readability. This change enhances the overall performance of the agent execution flow by eliminating unnecessary console output during LLM calls and iterations.

* linted

* updated cassette

* regen cassette

* revert crew agent executor

* adjust cassettes and dropped tests due to native tool implementation

* adjust

* ensure we properly fail tools and emit their events

* Enhance tool handling and delegation tracking in agent executors

- Implemented immediate return for tools with result_as_answer=True in crew_agent_executor.py.
- Added delegation tracking functionality in agent_utils.py to increment delegations when specific tools are used.
- Updated tool usage logic to handle caching more effectively in tool_usage.py.
- Enhanced test cases to validate new delegation features and tool caching behavior.

This update improves the efficiency of tool execution and enhances the delegation capabilities of agents.

* fix cassettes

* fix

* regen cassettes

* regen gemini

* ensure we support bedrock

* supporting bedrock

* regen azure cassettes

* Implement max usage count tracking for tools in agent executors

- Added functionality to check if a tool has reached its maximum usage count before execution in both crew_agent_executor.py and agent_executor.py.
- Enhanced error handling to return a message when a tool's usage limit is reached.
- Updated tool usage logic in tool_usage.py to increment usage counts and print current usage status.
- Introduced tests to validate max usage count behavior for native tool calling, ensuring proper enforcement and tracking.

This update improves tool management by preventing overuse and providing clear feedback when limits are reached.

* fix other test

* fix test

* drop logs

* better tests

* regen

* regen all azure cassettes

* regen again placeholder for cassette matching

* fix: unify tool name sanitization across codebase

* fix: include tool role messages in save_last_messages

* fix: update sanitize_tool_name test expectations

Align test expectations with unified sanitize_tool_name behavior
that lowercases and splits camelCase for LLM provider compatibility.

* fix: apply sanitize_tool_name consistently across codebase

Unify tool name sanitization to ensure consistency between tool names
shown to LLMs and tool name matching/lookup logic.

* regen

* fix: sanitize tool names in native tool call processing

- Update extract_tool_call_info to return sanitized tool names
- Fix delegation tool name matching to use sanitized names
- Add sanitization in crew_agent_executor tool call extraction
- Add sanitization in experimental agent_executor
- Add sanitization in LLM.call function lookup
- Update streaming utility to use sanitized names
- Update base_agent_executor_mixin delegation check

* Extract text content from parts directly to avoid warning about non-text parts

* Add test case for Gemini token usage tracking

- Introduced a new YAML cassette for tracking token usage in Gemini API responses.
- Updated the test for Gemini to validate token usage metrics and response content.
- Ensured proper integration with the Gemini model and API key handling.

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-01-22 17:44:03 -08:00
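A minimal sketch of the unified tool-name sanitization described in the PR above: invalid characters replaced with underscores, camelCase split, and the result lowercased for LLM provider compatibility. The regex details are assumed:

```python
import re


def sanitize_tool_name(name: str) -> str:
    """Normalize a tool name for LLM provider compatibility
    (behavior per the PR; implementation details assumed)."""
    # Insert an underscore at lower/digit -> upper camelCase boundaries.
    split = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name)
    # Replace anything outside [a-zA-Z0-9_-] with an underscore.
    cleaned = re.sub(r"[^a-zA-Z0-9_-]", "_", split)
    return cleaned.lower()


assert sanitize_tool_name("SearchWebTool") == "search_web_tool"
assert sanitize_tool_name("sum numbers!") == "sum_numbers_"
```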
Vini Brasil
06d953bf46 Add model field to LLM failed events (#4267)
Move the `model` field from `LLMCallStartedEvent` and
`LLMCallCompletedEvent` to the base `LLMEventBase` class.
2026-01-22 16:19:18 +01:00
Greyson LaLonde
f997b73577 fix: bump mcp to ~=1.23.1
- resolves [cve](https://nvd.nist.gov/vuln/detail/CVE-2025-66416)
2026-01-21 12:43:48 -05:00
Greyson LaLonde
7a65baeb9c feat: add event ordering and parent-child hierarchy
adds emission sequencing, parent-child event hierarchy with scope management, and integrates both into the event bus. introduces flush() for deterministic handling, resets emission counters for test isolation, and adds chain tracking via previous_event_id/triggered_by_event_id plus context variables populated during emit and listener execution. includes tracing listener typing/sorting improvements, safer tool event pairing with try/finally, additional stack checks and cache-hit formatting, context isolation fixes, cassette regen/decoding, and test updates to handle vcr race conditions and flaky behavior.
2026-01-21 11:12:10 -05:00
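A hedged sketch of the chain tracking described above: a global emission counter for sequencing plus context variables recording the previous and triggering event IDs (the names mirror the commit; the real event bus is assumed to be more involved):

```python
import itertools
import uuid
from contextvars import ContextVar
from dataclasses import dataclass, field

# Context variables populated during emit, mirroring the commit's
# previous_event_id / triggered_by_event_id chain tracking.
previous_event_id = ContextVar("previous_event_id", default=None)
triggered_by_event_id = ContextVar("triggered_by_event_id", default=None)

_emission_counter = itertools.count()


@dataclass
class Event:
    name: str
    id: str = field(default_factory=lambda: uuid.uuid4().hex)
    sequence: int = field(default_factory=lambda: next(_emission_counter))
    previous_id: str | None = None
    triggered_by_id: str | None = None


def emit(name: str) -> Event:
    """Stamp each event with a monotonic sequence number plus the IDs
    of the previous and triggering events taken from context."""
    event = Event(
        name=name,
        previous_id=previous_event_id.get(),
        triggered_by_id=triggered_by_event_id.get(),
    )
    previous_event_id.set(event.id)
    return event


first = emit("flow_started")
second = emit("method_started")
assert second.sequence == first.sequence + 1
assert second.previous_id == first.id
```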
Lorenze Jay
741bf12bf4 Lorenze/enh decouple executor from crew (#4209)
* wip restructuring agent executor and liteagent

* fix: handle None task in AgentExecutor to prevent errors

Added a check to ensure that if the task is None, the method returns early without attempting to access task properties. This change improves the robustness of the AgentExecutor by preventing potential errors when the task is not set.

* refactor: streamline AgentExecutor initialization by removing redundant parameters

Updated the Agent class to simplify the initialization of the AgentExecutor by removing unnecessary task and crew parameters in standalone mode. This change enhances code clarity and maintains backward compatibility by ensuring that the executor is correctly configured without redundant assignments.

* ensure executors work inside a flow due to flow in flow async structure

* refactor: enhance agent kickoff preparation by separating common logic

Updated the Agent class to introduce a new private method  that consolidates the common setup logic for both synchronous and asynchronous kickoff executions. This change improves code clarity and maintainability by reducing redundancy in the kickoff process, while ensuring that the agent can still execute effectively within both standalone and flow contexts.

* linting and tests

* fix test

* refactor: improve test for Agent kickoff parameters

Updated the test for the Agent class to ensure that the kickoff method correctly preserves parameters. The test now verifies the configuration of the agent after kickoff, enhancing clarity and maintainability. Additionally, the test for asynchronous kickoff within a flow context has been updated to reflect the Agent class instead of LiteAgent.

* refactor: update test task guardrail process output for improved validation

Refactored the test for task guardrail process output to enhance the validation of the output against the OpenAPI schema. The changes include a more structured request body and updated response handling to ensure compliance with the guardrail requirements. This update aims to improve the clarity and reliability of the test cases, ensuring that task outputs are correctly validated and feedback is appropriately provided.

* test fix cassette

* test fix cassette

* working

* working cassette

* refactor: streamline agent execution and enhance flow compatibility

Refactored the Agent class to simplify the execution method by removing the event loop check and clarifying the behavior when called from synchronous and asynchronous contexts. The changes ensure that the method operates seamlessly within flow methods, improving clarity in the documentation. Additionally, updated the AgentExecutor to set the response model to None, enhancing flexibility. New test cassettes were added to validate the functionality of agents within flow contexts, ensuring robust testing for both synchronous and asynchronous operations.

* fixed cassette

* Enhance Flow Execution Logic

- Introduced conditional execution for start methods in the Flow class.
- Unconditional start methods are prioritized during kickoff, while conditional starts are executed only if no unconditional starts are present.
- Improved handling of cyclic flows by allowing re-execution of conditional start methods triggered by routers.
- Added checks to continue execution chains for completed conditional starts.

These changes improve the flexibility and control of flow execution, ensuring that the correct methods are triggered based on the defined conditions.

* Enhance Agent and Flow Execution Logic

- Updated the Agent class to automatically detect the event loop and return a coroutine when called within a Flow, simplifying async handling for users.
- Modified Flow class to execute listeners sequentially, preventing race conditions on shared state during listener execution.
- Improved handling of coroutine results from synchronous methods, ensuring proper execution flow and state management.

These changes enhance the overall execution logic and user experience when working with agents and flows in CrewAI.

* Enhance Flow Listener Logic and Agent Imports

- Updated the Flow class to track fired OR listeners, ensuring that multi-source OR listeners only trigger once during execution. This prevents redundant executions and improves flow efficiency.
- Cleared fired OR listeners during cyclic flow resets to allow re-execution in new cycles.
- Modified the Agent class imports to include Coroutine from collections.abc, enhancing type handling for asynchronous operations.

These changes improve the control and performance of flow execution in CrewAI, ensuring more predictable behavior in complex scenarios.

* adjusted test due to new cassette

* ensure we don't finalize batch on just a liteagent finishing

* feat: cancellable parallelized flow methods

* feat: allow methods to be cancelled & run parallelized

* feat: ensure state is thread safe through proxy

* fix: check for proxy state

* fix: mimic BaseModel method

* chore: update final attr checks; test

* better description

* fix test

* chore: update test assumptions

* extra

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-01-20 21:44:45 -08:00
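A minimal sketch of the thread-safe state proxy idea from the PR above ("ensure state is thread safe through proxy"): a lock-guarded wrapper that forwards attribute access to the underlying state object. Purely illustrative; the shipped proxy also mimics BaseModel behavior per the commit:

```python
import threading
from typing import Any


class StateProxy:
    """Serialize attribute reads/writes on a shared state object so
    parallel flow methods cannot race on it (sketch, names assumed)."""

    def __init__(self, state: Any) -> None:
        object.__setattr__(self, "_state", state)
        object.__setattr__(self, "_lock", threading.RLock())

    def __getattr__(self, name: str) -> Any:
        with object.__getattribute__(self, "_lock"):
            return getattr(object.__getattribute__(self, "_state"), name)

    def __setattr__(self, name: str, value: Any) -> None:
        with object.__getattribute__(self, "_lock"):
            setattr(object.__getattribute__(self, "_state"), name, value)
```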
Lorenze Jay
b267bb4054 Lorenze/fix google vertex api using api keys (#4243)
* supporting vertex through api key use - expo mode

* docs update here

* docs translations

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-01-20 09:34:36 -08:00
Greyson LaLonde
ceef062426 feat: add additional a2a events and enrich event metadata
2026-01-16 16:57:31 -05:00
Heitor Carvalho
e44d778e0e feat: keycloak sso provider support (#4241)
2026-01-15 15:38:40 -03:00
nicoferdi96
5645cbb22e CrewAI AMP Deployment Guidelines (#4205)
* doc changes for better deployment guidelines and checklist

* chore: remove .claude folder from version control

The .claude folder contains local Claude Code skills and configuration
that should not be tracked in the repository. Already in .gitignore.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Better project structure for flows

* docs.json updated structure

* Ko and Pt translations for deploying guidelines to AMP

* fix broken links

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-01-15 16:32:20 +01:00
Lorenze Jay
8f022be106 feat: bump versions to 1.8.1 (#4242)
* feat: bump versions to 1.8.1

* bump bump
2026-01-14 20:49:14 -08:00
Greyson LaLonde
6a19b0a279 feat: a2a task execution utilities 2026-01-14 22:56:17 -05:00
Greyson LaLonde
641c336b2c chore: a2a agent card docs, refine existing a2a docs 2026-01-14 22:46:53 -05:00
Greyson LaLonde
22f1812824 feat: add a2a server config; agent card generation 2026-01-14 22:09:11 -05:00
Lorenze Jay
9edbf89b68 fix: enhance Azure model stop word support detection (#4227)
- Updated the `supports_stop_words` method to accurately reflect support for stop sequences based on model type, specifically excluding GPT-5 and O-series models.
- Added comprehensive tests to verify that GPT-5 family and O-series models do not support stop words, ensuring correct behavior in completion parameter preparation.
- Ensured that stop words are not included in parameters for unsupported models while maintaining expected behavior for supported models.
2026-01-13 10:23:59 -08:00
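A minimal sketch of the detection logic described above, assuming model-name prefix checks for the GPT-5 family and O-series (the prefix list is assumed; the real method likely inspects more metadata):

```python
def supports_stop_words(model: str) -> bool:
    """Return False for Azure model families that reject stop sequences
    (GPT-5 family and O-series reasoning models, per the commit)."""
    name = model.lower().removeprefix("azure/")
    unsupported_prefixes = ("gpt-5", "o1", "o3", "o4")  # assumed list
    return not name.startswith(unsupported_prefixes)


assert supports_stop_words("azure/gpt-4o")
assert not supports_stop_words("azure/gpt-5-mini")
assert not supports_stop_words("o3-mini")
```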
Vini Brasil
685f7b9af1 Increase frame inspection depth to detect parent_flow (#4231)
This commit fixes a bug where `parent_flow` was not being set because
the maximum depth was not sufficient to search for an instance of `Flow`
in the current call stack frame during Flow instantiation.
2026-01-13 18:40:22 +01:00
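A hedged sketch of the frame-walking idea the fix above adjusts: search up the call stack, bounded by a maximum depth, for a caller whose local `self` is a Flow instance (the depth value and lookup details are assumed):

```python
import inspect

MAX_DEPTH = 10  # the fix increased this bound; exact value assumed


def find_parent_flow(flow_type):
    """Walk up the call stack looking for a caller whose local `self`
    is a `flow_type` instance, giving up after MAX_DEPTH frames."""
    frame = inspect.currentframe()
    try:
        for _ in range(MAX_DEPTH):
            if frame is None:
                return None
            candidate = frame.f_locals.get("self")
            if isinstance(candidate, flow_type):
                return candidate
            frame = frame.f_back
        return None
    finally:
        del frame  # avoid a reference cycle from holding the frame
```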
Anaisdg
595fdfb6e7 feat: add galileo to integrations page (#4130)
* feat: add galileo to integrations page

* fix: linting issues

* fix: clarification on handler

* fix: uv install, load_dotenv redundancy, spelling error

* add: translations fix uv install and typo

* fix: broken links

---------

Co-authored-by: Anais <anais@Anaiss-MacBook-Pro.local>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
Co-authored-by: Anais <anais@Mac.lan>
2026-01-13 08:49:17 -08:00
Koushiv
8f99fa76ed feat: additional a2a transports
Co-authored-by: Koushiv Sadhukhan <koushiv.777@gmail.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-01-12 12:03:06 -05:00
GininDenis
17e3fcbe1f fix: unlink task in execution spans
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2026-01-12 02:58:42 -05:00
Joao Moura
b858d705a8 updating docs
2026-01-11 16:02:55 -08:00
Lorenze Jay
d60f7b360d WIP docs for pii-redaction feat (#4189)
* WIP docs for pii-redaction feat

* fix

* updated image

* Update PII Redaction documentation to clarify Enterprise plan requirements and version constraints

* visual re-ordering

* dropping not useful info

* improve docs

* better wording

* Add PII Redaction feature documentation in Korean and Portuguese, including details on activation, supported entity types, and best practices for usage.
2026-01-09 17:53:05 -08:00
Lorenze Jay
6050a7b3e0 chore: update changelog for version 1.8.0 release (#4206)
- Added new features including native async chain for a2a, a2a update mechanisms, and global flow configuration for human-in-the-loop feedback.
- Improved event handling with enhancements to EventListener and TraceCollectionListener.
- Fixed bugs related to missing a2a dependencies and WorkOS login polling.
- Updated documentation for webhook-streaming and adjusted language in AOP to AMP documentation.
- Acknowledged contributors for this release.
2026-01-09 16:44:45 -08:00
João Moura
46846bcace fix: improve error handling for HumanFeedbackPending in flow execution (#4203)
* fix: handle HumanFeedbackPending in flow error management

Updated the flow error handling to treat HumanFeedbackPending as expected control flow rather than an error. This change ensures that the flow can appropriately manage human feedback scenarios without signaling an error, improving the robustness of the flow execution.

* fix: improve error handling for HumanFeedbackPending in flow execution

Refined the flow error management to emit a paused event for HumanFeedbackPending exceptions instead of treating them as failures. This enhancement allows the flow to better manage human feedback scenarios, ensuring that the execution state is preserved and appropriately handled without signaling an error. Regular failure events are still emitted for other exceptions, maintaining robust error reporting.
2026-01-08 03:40:02 -03:00
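A minimal sketch of treating HumanFeedbackPending as expected control flow rather than failure, per the two commits above: a paused event is emitted for the pending exception, while regular failure events are still emitted for other errors (event names mirror the commits; the dispatch machinery is assumed):

```python
class HumanFeedbackPending(Exception):
    """Raised when a flow method pauses awaiting human input."""


def run_flow_method(method, emit):
    """Emit a paused event for HumanFeedbackPending and a failure
    event only for genuine errors, as the fix describes."""
    try:
        return method()
    except HumanFeedbackPending:
        emit("FlowPausedEvent")  # expected control flow, not an error
        return None
    except Exception as exc:
        emit("FlowExecutionFailedEvent", error=exc)  # event name assumed
        raise
```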
João Moura
d71e91e8f2 fix: handle HumanFeedbackPending in flow error management (#4200)
Updated the flow error handling to treat HumanFeedbackPending as expected control flow rather than an error. This change ensures that the flow can appropriately manage human feedback scenarios without signaling an error, improving the robustness of the flow execution.
2026-01-08 00:52:38 -03:00
Lorenze Jay
9a212b8e29 feat: bump versions to 1.8.0 (#4199)
* feat: bump versions to 1.8.0

* bump 1.8.0
2026-01-07 15:36:46 -08:00
Greyson LaLonde
67953b3a6a feat: a2a native async chain
2026-01-07 14:07:40 -05:00
Greyson LaLonde
a760923c50 fix: handle missing a2a dep as optional 2026-01-07 14:01:36 -05:00
Vidit Ostwal
1c4f44af80 Adding usage info in llm.py (#4172)
* Adding usage info everywhere

* Changing the check

* Changing the logic

* Adding tests

* Adding cassettes

* Minor change

* Fixing testcase

* remove the duplicated test case, thanks to cursor

* Adding async test cases

* Updating test case

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2026-01-07 10:42:27 -08:00
Greyson LaLonde
09014215a9 feat: add a2a update mechanisms (poll/stream/push) with handlers, config, and tests
introduces structured update config, shared task helpers/error types, polling + streaming handlers with activated events, and a push notification protocol/events + handler. refactors handlers into a unified protocol with shared message sending logic and python-version-compatible typing. adds a2a integration tests + async update docs, fixes push config propagation, response model parsing safeguards, failure-state handling, stream cleanup, polling timeout catching, agent-card fallback behavior, and prevents duplicate artifacts.
2026-01-07 11:36:36 -05:00
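A hedged sketch of the unified handler idea described above: poll/stream/push variants sharing one protocol, shown here with a queue-backed streaming stand-in. All names and shapes below are illustrative, not the shipped a2a API:

```python
import asyncio
from typing import AsyncIterator, Protocol


class UpdateHandler(Protocol):
    """Shared protocol for poll/stream/push update mechanisms (assumed)."""

    def publish(self, update: dict) -> None: ...
    def updates(self) -> AsyncIterator[dict]: ...


class StreamingHandler:
    """Queue-backed stand-in for the streaming variant."""

    def __init__(self) -> None:
        self._queue: asyncio.Queue = asyncio.Queue()

    def publish(self, update: dict) -> None:
        self._queue.put_nowait(update)

    def close(self) -> None:
        self._queue.put_nowait(None)  # sentinel ends the stream

    async def updates(self) -> AsyncIterator[dict]:
        while (item := await self._queue.get()) is not None:
            yield item


async def main() -> None:
    handler = StreamingHandler()
    handler.publish({"state": "working"})
    handler.publish({"state": "completed"})
    handler.close()
    async for update in handler.updates():
        print(update)


asyncio.run(main())
```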
João Moura
0ccc155457 feat: Introduce global flow configuration for human-in-the-loop feedback (#4193)
* feat: Introduce global flow configuration for human-in-the-loop feedback

- Added a new `flow_config` module to manage global Flow configuration, allowing customization of Flow behavior at runtime.
- Integrated the `hitl_provider` attribute to specify the human-in-the-loop feedback provider, enhancing flexibility in feedback collection.
- Updated the `human_feedback` function to utilize the configured HITL provider, improving the handling of feedback requests.

* TYPO
2026-01-07 05:42:28 -03:00
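A minimal sketch of the global configuration described above, assuming a module-level `flow_config` object with an `hitl_provider` attribute that `human_feedback` consults (names per the PR; the shapes are assumed):

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class FlowConfig:
    """Global, runtime-mutable flow configuration (shape assumed)."""
    hitl_provider: Optional[Callable[[str], str]] = None


flow_config = FlowConfig()


def human_feedback(message: str) -> str:
    """Collect feedback via the configured HITL provider,
    falling back to console input when none is set."""
    provider = flow_config.hitl_provider or input
    return provider(message)


# Hypothetical usage: swap in a custom provider at runtime.
flow_config.hitl_provider = lambda msg: f"auto-approved: {msg}"
print(human_feedback("Ship the release?"))
```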
Lorenze Jay
8945457883 Lorenze/metrics for human feedback flows (#4188)
* measuring human feedback feat

* add some tests
2026-01-06 16:12:34 -08:00
Mike Plachta
b787d7e591 Update webhook-streaming.mdx (#4184)
2026-01-06 09:09:48 -08:00
Lorenze Jay
25c0c030ce adjust aop to amp docs lang (#4179)
* adjust aop to amp docs lang

* whoop no print
2026-01-05 15:30:21 -08:00
Greyson LaLonde
f8deb0fd18 feat: add streaming tool call events; fix provider id tracking; add tests and cassettes
Adds support for streaming tool call events with test coverage, fixes tool-stream ID tracking (including OpenAI-style tracking for Azure), improves Gemini tool calling + streaming tests, adds Anthropic tests, generates Azure cassettes, and fixes Azure cassette URIs.
2026-01-05 14:33:36 -05:00
Lorenze Jay
f3c17a249b feat: Introduce production-ready Flows and Crews architecture with ne… (#4003)
* feat: Introduce production-ready Flows and Crews architecture with new runner and updated documentation across multiple languages.

* ko and pt-br for tracing missing links

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-12-31 14:29:42 -08:00
Lorenze Jay
467ee2917e Improve EventListener and TraceCollectionListener for improved event… (#4160)
* Refactor EventListener and TraceCollectionListener for improved event handling

- Removed unused threading and method branches from EventListener to simplify the code.
- Updated event handling methods in EventListener to use new formatter methods for better clarity and consistency.
- Refactored TraceCollectionListener to eliminate unnecessary parameters in formatter calls, enhancing readability.
- Simplified ConsoleFormatter by removing outdated tree management methods and focusing on panel-based output for status updates.
- Enhanced ToolUsage to track run attempts for better tool usage metrics.

* clearer for knowledge retrieval and dropped some redundancies

* Refactor EventListener and ConsoleFormatter for improved clarity and consistency

- Removed the MCPToolExecutionCompletedEvent handler from EventListener to streamline event processing.
- Updated ConsoleFormatter to enhance output formatting by adding line breaks for better readability in status content.
- Renamed status messages for MCP Tool execution to provide clearer context during tool operations.

* fix run attempt incrementation

* task name consistency

* memory events consistency

* ensure hitl works

* linting
2025-12-30 11:36:31 -08:00
Lorenze Jay
b9dd166a6b Lorenze/agent executor flow pattern (#3975)
* WIP gh pr refactor: update agent executor handling and introduce flow-based executor

* wip

* refactor: clean up comments and improve code clarity in agent executor flow

- Removed outdated comments and unnecessary explanations in  and  classes to enhance code readability.
- Simplified parameter updates in the agent executor to avoid confusion regarding executor recreation.
- Improved clarity in the  method to ensure proper handling of non-final answers without raising errors.

* bumping pytest-randomly numpy

* also bump versions of anthropic sdk

* ensure flow logs are not passed if its on executor

* revert anthropic bump

* fix

* refactor: update dependency markers in uv.lock for platform compatibility

- Enhanced dependency markers for , , , and others to ensure compatibility across different platforms (Linux, Darwin, and architecture-specific conditions).
- Removed unnecessary event emission in the  class during kickoff.
- Cleaned up commented-out code in the  class for better readability and maintainability.

* drop duplicate

* test: enhance agent executor creation and stop word assertions

- Added calls to create_agent_executor in multiple test cases to ensure proper agent execution setup.
- Updated assertions for stop words in the agent tests to remove unnecessary checks and improve clarity.
- Ensured consistency in task handling by invoking create_agent_executor with the appropriate task parameter.

* refactor: reorganize agent executor imports and introduce CrewAgentExecutorFlow

- Removed the old import of CrewAgentExecutorFlow and replaced it with the new import from the experimental module.
- Updated relevant references in the codebase to ensure compatibility with the new structure.
- Enhanced the organization of imports in core.py and base_agent.py for better clarity and maintainability.

* updating name

* dropped usage of printer here for rich console and dropped non-added value logging

* address i18n

* Enhance concurrency control in CrewAgentExecutorFlow by introducing a threading lock to prevent concurrent executions. This change ensures that the executor instance cannot be invoked while already running, improving stability and reliability during flow execution.

* string literal returns

* string literal returns

* Enhance CrewAgentExecutor initialization by allowing optional i18n parameter for improved internationalization support. This change ensures that the executor can utilize a provided i18n instance or fallback to the default, enhancing flexibility in multilingual contexts.

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-12-28 10:21:32 -08:00
João Moura
c73b36a4c5 Adding HITL for Flows (#4143)
* feat: introduce human feedback events and decorator for flow methods

- Added HumanFeedbackRequestedEvent and HumanFeedbackReceivedEvent classes to handle human feedback interactions within flows.
- Implemented the @human_feedback decorator to facilitate human-in-the-loop workflows, allowing for feedback collection and routing based on responses.
- Enhanced Flow class to store human feedback history and manage feedback outcomes.
- Updated flow wrappers to preserve attributes from methods decorated with @human_feedback.
- Added integration and unit tests for the new human feedback functionality, ensuring proper validation and routing behavior.

* adding deployment docs

* New docs

* fix printer

* wrong change

* Adding Async Support
feat: enhance human feedback support in flows

- Updated the @human_feedback decorator to use 'message' parameter instead of 'request' for clarity.
- Introduced new FlowPausedEvent and MethodExecutionPausedEvent to handle flow and method pauses during human feedback.
- Added ConsoleProvider for synchronous feedback collection and integrated async feedback capabilities.
- Implemented SQLite persistence for managing pending feedback context.
- Expanded documentation to include examples of async human feedback usage and best practices.

* linter

* fix

* migrating off printer

* updating docs

* new tests

* doc update
2025-12-25 21:04:10 -03:00
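A hedged sketch of the @human_feedback decorator semantics described in the PR above: a `message` parameter, console-based feedback collection, feedback history stored on the flow, and attribute preservation on the wrapper. This is an illustrative stand-in, not the shipped decorator:

```python
from functools import wraps


def human_feedback(message: str):
    """Illustrative stand-in for the @human_feedback decorator: run the
    flow method, ask for feedback with `message`, and record the response
    so listeners and routers can act on it."""

    def decorator(func):
        @wraps(func)  # the PR notes wrappers must preserve method attributes
        def wrapper(self, *args, **kwargs):
            output = func(self, *args, **kwargs)
            response = input(f"{message}\n{output}\n> ")
            history = getattr(self, "human_feedback_history", [])
            self.human_feedback_history = [*history, (func.__name__, response)]
            return output

        return wrapper

    return decorator
```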
Lucas Gomide
0c020991c4 docs: fix wrong trigger name in sample docs (#4147)
2025-12-23 08:41:51 -05:00
Heitor Carvalho
be70a04153 fix: correct error fetching for workos login polling (#4124)
2025-12-19 20:00:26 -03:00
Greyson LaLonde
0c359f4df8 feat: bump versions to 1.7.2
2025-12-19 15:47:00 -05:00
Lucas Gomide
fe288dbe73 Resolving some connection issues (#4129)
* fix: use CREWAI_PLUS_URL env var in precedence over PlusAPI configured value

* feat: bypass TLS certificate verification when calling platform

* test: fix test
2025-12-19 10:15:20 -05:00
Heitor Carvalho
dc63bc2319 chore: remove CREWAI_BASE_URL and fetch url from settings instead
2025-12-18 15:41:38 -03:00
Greyson LaLonde
8d0effafec chore: add commitizen pre-commit hook
2025-12-17 15:49:24 -05:00
Greyson LaLonde
1cdbe79b34 chore: add deployment action, trigger for releases 2025-12-17 08:40:14 -05:00
Lorenze Jay
84328d9311 fixed api-reference/status docs page (#4109)
2025-12-16 15:31:30 -08:00
Lorenze Jay
88d3c0fa97 feat: bump versions to 1.7.1 (#4092)
* feat: bump versions to 1.7.1

* bump projects
2025-12-15 21:51:53 -08:00
Matt Aitchison
75ff7dce0c feat: add --no-commit flag to bump command (#4087)
Allows updating version files without creating a commit, branch, or PR.
2025-12-15 15:32:37 -06:00
Greyson LaLonde
38b0b125d3 feat: use json schema for tool argument serialization
- Replace Python representation with JsonSchema for tool arguments
  - Remove deprecated PydanticSchemaParser in favor of direct schema generation
  - Add handling for VAR_POSITIONAL and VAR_KEYWORD parameters
  - Improve tool argument schema collection
2025-12-11 15:50:19 -05:00
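A minimal sketch of direct schema generation for tool arguments as described above, skipping VAR_POSITIONAL (`*args`) and VAR_KEYWORD (`**kwargs`) parameters. The type mapping is deliberately simplified; the real implementation presumably covers far more cases:

```python
import inspect
from typing import Any

_TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}


def tool_args_schema(func) -> dict[str, Any]:
    """Build a JSON-schema-style description of a tool's parameters,
    skipping *args and **kwargs per the commit."""
    properties: dict[str, Any] = {}
    required: list[str] = []
    for name, param in inspect.signature(func).parameters.items():
        if param.kind in (param.VAR_POSITIONAL, param.VAR_KEYWORD):
            continue
        properties[name] = {"type": _TYPE_MAP.get(param.annotation, "string")}
        if param.default is param.empty:
            required.append(name)
    return {"type": "object", "properties": properties, "required": required}


def search(query: str, limit: int = 5, *args, **kwargs) -> list:
    return []


print(tool_args_schema(search))
# {'type': 'object', 'properties': {'query': {'type': 'string'},
#  'limit': {'type': 'integer'}}, 'required': ['query']}
```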
Vini Brasil
9bd8ad51f7 Add docs for AOP Deploy API (#4076)
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-12-11 15:58:17 -03:00
Heitor Carvalho
0632a054ca chore: display error message from response when tool repository login fails (#4075)
2025-12-11 14:56:00 -03:00
Dragos Ciupureanu
feec6b440e fix: gracefully terminate the future when executing a task async
* fix: gracefully terminate the future when executing a task async

* chore: add unit test

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-12-11 12:03:33 -05:00
Greyson LaLonde
e43c7debbd fix: add idx for task ordering, tests 2025-12-11 10:18:15 -05:00
Greyson LaLonde
8ef9fe2cab fix: check platform compat for windows signals 2025-12-11 08:38:19 -05:00
Alex Larionov
807f97114f fix: set rpm controller timer as daemon to prevent process hang
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-12-11 02:59:55 -05:00
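A minimal sketch of the daemon-timer fix described above: marking the RPM controller's periodic reset timer as a daemon thread so it cannot keep the process alive at exit (the controller structure here is assumed):

```python
import threading


class RPMController:
    """Request counter reset once per minute (sketch)."""

    def __init__(self, max_rpm: int) -> None:
        self.max_rpm = max_rpm
        self._count = 0
        self._timer: threading.Timer | None = None

    def _reset(self) -> None:
        self._count = 0
        self._schedule()

    def _schedule(self) -> None:
        self._timer = threading.Timer(60.0, self._reset)
        # The fix: a daemon timer thread no longer blocks interpreter exit.
        self._timer.daemon = True
        self._timer.start()

    def start(self) -> None:
        self._schedule()
```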
Greyson LaLonde
bdafe0fac7 fix: ensure token usage recording, validate response model on stream
2025-12-10 20:32:10 -05:00
Greyson LaLonde
8e99d490b0 chore: add translated docs for async
* chore: add translated docs for async

* chore: add missing pages
2025-12-10 14:17:10 -05:00
Gil Feig
34b909367b Add docs for the agent handler connector (#4012)
* Add docs for the agent handler connector

* Fix links

* Update docs
2025-12-09 15:49:52 -08:00
Greyson LaLonde
22684b513e chore: add docs on native async
2025-12-08 20:49:18 -05:00
Lorenze Jay
3e3b9df761 feat: bump versions to 1.7.0 (#4051)
* feat: bump versions to 1.7.0

* bump
2025-12-08 16:42:12 -08:00
Greyson LaLonde
177294f588 fix: ensure nonetypes are not passed to otel (#4052)
* fix: ensure nonetypes are not passed to otel

* fix: ensure attribute is always set in span
2025-12-08 16:27:42 -08:00
Greyson LaLonde
beef712646 fix: ensure token store file ops do not deadlock
* fix: ensure token store file ops do not deadlock
* chore: update test method reference
2025-12-08 19:04:21 -05:00
Lorenze Jay
6125b866fd supporting thinking for anthropic models (#3978)
* supporting thinking for anthropic models

* drop comments here

* thinking and tool calling support

* fix: properly mock tool use and text block types in Anthropic tests

- Updated the test for the Anthropic tool use conversation flow to include type attributes for mocked ToolUseBlock and text blocks, ensuring accurate simulation of tool interactions during testing.

* feat: add AnthropicThinkingConfig for enhanced thinking capabilities

This update introduces the AnthropicThinkingConfig class to manage thinking parameters for the Anthropic completion model. The LLM and AnthropicCompletion classes have been updated to utilize this new configuration. Additionally, new test cassettes have been added to validate the functionality of thinking blocks across interactions.
2025-12-08 15:34:54 -08:00
Greyson LaLonde
f2f994612c fix: ensure otel span is closed
2025-12-05 13:23:26 -05:00
Greyson LaLonde
7fff2b654c fix: use HuggingFaceEmbeddingFunction for embeddings, update keys and add tests (#4005)
2025-12-04 15:05:50 -08:00
Greyson LaLonde
34e09162ba feat: async flow kickoff
Introduces akickoff alias to flows, improves tool decorator typing, ensures _run backward compatibility, updates docs and docstrings, adds tests, and removes duplicated logic.
2025-12-04 17:08:08 -05:00
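Usage might look like this, assuming the `akickoff` alias named in the commit (the flow itself is illustrative):

```python
import asyncio

from crewai.flow.flow import Flow, start


class HelloFlow(Flow):
    @start()
    def greet(self) -> str:
        return "hello"


async def main() -> None:
    result = await HelloFlow().akickoff()  # async counterpart to kickoff()
    print(result)


asyncio.run(main())
```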
Greyson LaLonde
24d1fad7ab feat: async crew support
native async crew execution. Improves tool decorator typing, ensures _run backward compatibility, updates docs and docstrings, adds tests, and removes duplicated logic.
2025-12-04 16:53:19 -05:00
Greyson LaLonde
9b8f31fa07 feat: async task support (#4024)
* feat: add async support for tools, add async tool tests

* chore: improve tool decorator typing

* fix: ensure _run backward compat

* chore: update docs

* chore: make docstrings a little more readable

* feat: add async execution support to agent executor

* chore: add tests

* feat: add aiosqlite dep; regenerate lockfile

* feat: add async ops to memory feat; create tests

* feat: async knowledge support; add tests

* feat: add async task support

* chore: dry out duplicate logic
2025-12-04 13:34:29 -08:00
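With the async tool support described above, a natively async tool might be declared like this (the coroutine body is illustrative; only `@tool` accepting an async function comes from this PR):

```python
import asyncio

from crewai.tools import tool


@tool("fetch_greeting")
async def fetch_greeting(name: str) -> str:
    """Return a greeting after a simulated async I/O call."""
    await asyncio.sleep(0.1)  # stand-in for an awaitable HTTP or DB call
    return f"hello, {name}"
```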
Greyson LaLonde
d898d7c02c feat: async knowledge support (#4023)
* feat: add async support for tools, add async tool tests

* chore: improve tool decorator typing

* fix: ensure _run backward compat

* chore: update docs

* chore: make docstrings a little more readable

* feat: add async execution support to agent executor

* chore: add tests

* feat: add aiosqlite dep; regenerate lockfile

* feat: add async ops to memory feat; create tests

* feat: async knowledge support; add tests

* chore: regenerate lockfile
2025-12-04 10:27:52 -08:00
Greyson LaLonde
f04c40babf feat: async memory support
Adds async support for tools with tests, async execution in the agent executor, and async operations for memory (with aiosqlite). Improves tool decorator typing, ensures _run backward compatibility, updates docs and docstrings, adds tests, and regenerates lockfiles.
2025-12-04 12:54:49 -05:00
Lorenze Jay
c456e5c5fa Lorenze/ensure hooks work with lite agents flows (#3981)
* liteagent support hooks

* wip llm.call hooks work - needs tests for this

* fix tests

* fixed more

* more tool hooks test cassettes
2025-12-04 09:38:39 -08:00
Greyson LaLonde
633e279b51 feat: add async support for tools and agent executor; improve typing and docs
Introduces async tool support with new tests, adds async execution to the agent executor, improves tool decorator typing, ensures _run backward compatibility, updates docs and docstrings, and adds additional tests.
2025-12-03 20:13:03 -05:00
Greyson LaLonde
a25778974d feat: a2a extensions API and async agent card caching; fix task propagation & streaming
Adds initial extensions API (with registry temporarily no-op), introduces aiocache for async caching, ensures reference task IDs propagate correctly, fixes streamed response model handling, updates streaming tests, and regenerates lockfiles.
2025-12-03 16:29:48 -05:00
Greyson LaLonde
09f1ba6956 feat: native async tool support
- add async support for tools
- add async tool tests
- improve tool decorator typing
- fix _run backward compatibility
- update docs and improve readability of docstrings
2025-12-02 16:39:58 -05:00
Greyson LaLonde
20704742e2 feat: async llm support
feat: introduce async contract to BaseLLM

feat: add async call support for the Azure, Anthropic, OpenAI, Gemini, Bedrock, and LiteLLM providers

chore: expand scrubbed header fields (conftest, anthropic, bedrock)

chore: update docs to cover async functionality

chore: update and harden tests to support acall; re-add uri for cassette compatibility

chore: generate missing cassette

fix: ensure acall is non-abstract and set supports_tools = true for supported Anthropic models

chore: improve Bedrock async docstring and general test robustness
2025-12-01 18:56:56 -05:00
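Assuming the `acall` contract described in the commit, async usage could look like the following sketch (the model string is illustrative):

```python
import asyncio

from crewai import LLM


async def main() -> None:
    llm = LLM(model="gpt-4o-mini")
    reply = await llm.acall("Say hi in one word.")  # async counterpart to call()
    print(reply)


asyncio.run(main())
```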
Greyson LaLonde
59180e9c9f fix: ensure supports_tools is true for all supported anthropic models
2025-12-01 07:21:09 -05:00
Greyson LaLonde
3ce019b07b chore: pin dependencies in crewai, crewai-tools, devtools
2025-11-30 19:51:20 -05:00
Greyson LaLonde
2355ec0733 feat: create sys event types and handler
feat: add system event types and handler

chore: add tests and improve signal-related error logging
2025-11-30 17:44:40 -05:00
Greyson LaLonde
c925d2d519 chore: restructure test env, cassettes, and conftest; fix flaky tests
Consolidates pytest config, standardizes env handling, reorganizes cassette layout, removes outdated VCR configs, improves sync with threading.Condition, updates event-waiting logic, ensures cleanup, regenerates Gemini cassettes, and reverts unintended test changes.
2025-11-29 16:55:24 -05:00
Lorenze Jay
bc4e6a3127 feat: bump versions to 1.6.1 (#3993)
* feat: bump versions to 1.6.1

* chore: update crewAI dependency version to 1.6.1 in project templates
2025-11-28 17:57:15 -08:00
Vidit Ostwal
37526c693b Fixing ChatCompletionsClient call (#3910)
* Fixing ChatCompletionsClient call

* Moving from json-object -> JsonSchemaFormat

* Regex handling

* Adding additionalProperties explicitly

* fix: ensure additionalProperties is recursive

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-11-28 17:33:53 -08:00
Greyson LaLonde
c59173a762 fix: ensure async methods are executable for annotations 2025-11-28 19:54:40 -05:00
Lorenze Jay
4d8eec96e8 refactor: enhance model validation and provider inference in LLM class (#3976)
* refactor: enhance model validation and provider inference in LLM class

- Updated the model validation logic to support pattern matching for new models and "latest" versions, improving flexibility for various providers.
- Refactored the `_validate_model_in_constants` method to first check hardcoded constants and then fall back to pattern matching.
- Introduced `_matches_provider_pattern` to streamline provider-specific model checks.
- Enhanced the `_infer_provider_from_model` method to utilize pattern matching for better provider inference.

This refactor aims to improve the extensibility of the LLM class, allowing it to accommodate new models without requiring constant updates to the hardcoded lists.

* feat: add new Anthropic model versions to constants

- Introduced "claude-opus-4-5-20251101" and "claude-opus-4-5" to the AnthropicModels and ANTHROPIC_MODELS lists for enhanced model support.
- Added "anthropic.claude-opus-4-5-20251101-v1:0" to BedrockModels and BEDROCK_MODELS to ensure compatibility with the latest model offerings.
- Updated test cases to ensure proper environment variable handling for model validation, improving robustness in testing scenarios.

* don't infer this way - dropped
2025-11-28 13:54:40 -08:00
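Pattern-based inference along these lines might look like the sketch below (the regexes and helper are illustrative, not the internal `_matches_provider_pattern`):

```python
import re

# Illustrative prefixes only; the real constants live in crewAI's LLM module.
PROVIDER_PATTERNS = {
    "anthropic": re.compile(r"^claude-"),
    "openai": re.compile(r"^(gpt-|o\d)"),
    "gemini": re.compile(r"^gemini-"),
}


def infer_provider(model: str) -> str | None:
    # Fallback used when the model is missing from the hardcoded constants,
    # so "-latest" aliases and brand-new versions still resolve.
    for provider, pattern in PROVIDER_PATTERNS.items():
        if pattern.match(model):
            return provider
    return None


print(infer_provider("claude-opus-4-5"))  # anthropic
```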
Greyson LaLonde
2025a26fc3 fix: ensure parameters in RagTool.add, add typing, tests (#3979)
* fix: ensure parameters in RagTool.add, add typing, tests

* feat: substitute pymupdf for pypdf, better parsing performance

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-11-26 22:32:43 -08:00
Greyson LaLonde
bed9a3847a fix: remove invalid param from sse client (#3980) 2025-11-26 21:37:55 -08:00
Heitor Carvalho
5239dc9859 fix: erase 'oauth2_extra' setting on 'crewai config reset' command
2025-11-26 18:43:44 -05:00
Lorenze Jay
52444ad390 feat: bump versions to 1.6.0 (#3974)
* feat: bump versions to 1.6.0

* bump project templates
2025-11-24 17:56:30 -08:00
Greyson LaLonde
f070595e65 fix: ensure custom rag store persist path is set if passed
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-11-24 20:03:57 -05:00
Lorenze Jay
69c5eace2d Update references from AMP to AOP in documentation (#3972)
- Changed "AMP" to "AOP" in multiple locations across JSON and MDX files to reflect the correct terminology for the Agent Operations Platform.
- Updated the introduction sections in English, Korean, and Portuguese to ensure consistency in the platform's naming.
2025-11-24 16:43:30 -08:00
Vidit Ostwal
d88ac338d5 Adding drop parameters in ChatCompletionsClient
* Adding drop parameters

* Adding test case

* Just some spacing addition

* Adding drop params to maintain consistency

* Changing variable name

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-11-24 19:16:36 -05:00
Lorenze Jay
4ae8c36815 feat: enhance flow event state management (#3952)
* feat: enhance flow event state management

- Added `state` attribute to `FlowFinishedEvent` to capture the flow's state as a JSON-serialized dictionary.
- Updated flow event emissions to include the serialized state, improving traceability and debugging capabilities during flow execution.

* fix: improve state serialization in Flow class

- Enhanced the `_copy_and_serialize_state` method to handle exceptions during JSON serialization of Pydantic models, ensuring robustness in state management.
- Updated test assertions to access the state as a dictionary, aligning with the new state structure.

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-11-24 15:55:49 -08:00
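The robust serialization described above might be sketched like this (assumed shape; the real `_copy_and_serialize_state` is internal to Flow):

```python
import json
from typing import Any

from pydantic import BaseModel


def copy_and_serialize_state(state: Any) -> dict:
    """Sketch: JSON-serialize flow state, degrading gracefully on failure."""
    if isinstance(state, BaseModel):
        try:
            return json.loads(state.model_dump_json())
        except Exception:
            # Best-effort fallback so event emission never fails because
            # one field of the state model is not JSON-serializable.
            return {k: repr(v) for k, v in state.__dict__.items()}
    return dict(state)
```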
Greyson LaLonde
b049b73f2e fix: ensure fuzzy returns are more strict, show type warning 2025-11-24 17:35:12 -05:00
Greyson LaLonde
d2b9c54931 fix: re-add openai response_format param, add test 2025-11-24 17:13:20 -05:00
Greyson LaLonde
a928cde6ee fix: rag tool embeddings config
* fix: ensure config is not flattened, add tests

* chore: refactor inits to model_validator

* chore: refactor rag tool config parsing

* chore: add initial docs

* chore: add additional validation aliases for provider env vars

* chore: add solid docs

* chore: move imports to top

* fix: revert circular import

* fix: lazy import qdrant-client

* fix: allow collection name config

* chore: narrow model names for google

* chore: update additional docs

* chore: add backward compat on model name aliases

* chore: add tests for config changes
2025-11-24 16:51:28 -05:00
João Moura
9c84475691 Update AMP to AOP (#3941)
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-11-24 13:15:24 -08:00
Greyson LaLonde
f3c5d1e351 feat: add streaming result support to flows and crews
* feat: add streaming result support to flows and crews
* docs: add streaming execution documentation and integration tests
2025-11-24 15:43:48 -05:00
Mark McDonald
a978267fa2 feat: Add gemini-3-pro-preview (#3950)
* Add gemini-3-pro-preview

Also refactors the tool support check for better forward compatibility.

* Add cassette for Gemini 3 Pro

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-11-24 14:49:29 -05:00
Heitor Carvalho
b759654e7d feat: support CLI login with Entra ID (#3943) 2025-11-24 15:35:59 -03:00
Greyson LaLonde
9da1f0c0aa fix: ensure flow execution start panel is not shown on plot 2025-11-24 12:50:18 -05:00
Greyson LaLonde
a559cedbd1 chore: ensure proper cassettes for agent tests
* chore: ensure proper cassettes for agent tests
* chore: tweak eval test to avoid race condition
2025-11-24 12:29:11 -05:00
Gil Feig
bcc3e358cb feat: Add Merge Agent Handler tool (#3911)
* feat: Add Merge Agent Handler tool

* Fix linting issues

* Empty
2025-11-20 16:58:41 -08:00
Greyson LaLonde
d160f0874a chore: don't fail on cleanup error
2025-11-19 01:28:25 -05:00
Lorenze Jay
9fcf55198f feat: bump versions to 1.5.0 (#3924)
* feat: bump versions to 1.5.0

* chore: update crewAI tools dependency to version 1.5.0 in project templates
2025-11-15 18:00:11 -08:00
Lorenze Jay
f46a846ddc chore: remove unused hooks test file (#3923)
- Deleted the `__init__.py` file from the tests/hooks directory as it contained no tests or functionality. This cleanup helps maintain a tidy test structure.
2025-11-15 17:51:42 -08:00
Greyson LaLonde
b546982690 fix: ensure instrumentation flags 2025-11-15 20:48:40 -05:00
Greyson LaLonde
d7bdac12a2 feat: a2a trust remote completion status flag
- add trust_remote_completion_status flag to A2AConfig, controlling whether to trust an A2A agent's self-reported completion status. Resolves #3899
- update docs
2025-11-13 13:43:09 -05:00
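Enabling the flag might look like this (the import path is an assumption; only the flag name comes from the commit):

```python
from crewai.a2a import A2AConfig  # import path assumed for illustration

# Trust the remote A2A agent's self-reported completion status instead of
# letting the local LLM decide whether the exchange is finished.
config = A2AConfig(trust_remote_completion_status=True)
```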
Lorenze Jay
528d812263 Lorenze/feat hooks (#3902)
* feat: implement LLM call hooks and enhance agent execution context

- Introduced LLM call hooks to allow modification of messages and responses during LLM interactions.
- Added support for before and after hooks in the CrewAgentExecutor, enabling dynamic adjustments to the execution flow.
- Created LLMCallHookContext for comprehensive access to the executor state, facilitating in-place modifications.
- Added validation for hook callables to ensure proper functionality.
- Enhanced tests for LLM hooks and tool hooks to verify their behavior and error handling capabilities.
- Updated LiteAgent and CrewAgentExecutor to accommodate the new crew context in their execution processes.

* fix verbose

* feat: introduce crew-scoped hook decorators and refactor hook registration

- Added decorators for before and after LLM and tool calls to enhance flexibility in modifying execution behavior.
- Implemented a centralized hook registration mechanism within CrewBase to automatically register crew-scoped hooks.
- Removed the obsolete base.py file as its functionality has been integrated into the new decorators and registration system.
- Enhanced tests for the new hook decorators to ensure proper registration and execution flow.
- Updated existing hook handling to accommodate the new decorator-based approach, improving code organization and maintainability.

* feat: enhance hook management with clear and unregister functions

- Introduced functions to unregister specific before and after hooks for both LLM and tool calls, improving flexibility in hook management.
- Added clear functions to remove all registered hooks of each type, facilitating easier state management and cleanup.
- Implemented a convenience function to clear all global hooks in one call, streamlining the process for testing and execution context resets.
- Enhanced tests to verify the functionality of unregistering and clearing hooks, ensuring robust behavior in various scenarios.

* refactor: enhance hook type management for LLM and tool hooks

- Updated hook type definitions to use generic protocols for better type safety and flexibility.
- Replaced Callable type annotations with specific BeforeLLMCallHookType and AfterLLMCallHookType for clarity.
- Improved the registration and retrieval functions for before and after hooks to align with the new type definitions.
- Enhanced the setup functions to handle hook execution results, allowing for blocking of LLM calls based on hook logic.
- Updated related tests to ensure proper functionality and type adherence across the hook management system.

* feat: add execution and tool hooks documentation

- Introduced new documentation for execution hooks, LLM call hooks, and tool call hooks to provide comprehensive guidance on their usage and implementation in CrewAI.
- Updated existing documentation to include references to the new hooks, enhancing the learning resources available for users.
- Ensured consistency across multiple languages (English, Portuguese, Korean) for the new documentation, improving accessibility for a wider audience.
- Added examples and troubleshooting sections to assist users in effectively utilizing hooks for agent operations.

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-11-13 10:11:50 -08:00
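A crew-scoped hook from this PR might be sketched as follows (the decorator and context names come from the commit; the import path and the guard logic are assumptions):

```python
from crewai.hooks import before_llm_call  # import path assumed


@before_llm_call
def redact_secrets(context) -> None:
    # context is an LLMCallHookContext: hooks may edit messages in place;
    # per the PR, a before-hook's result can also block the LLM call.
    for message in context.messages:
        message["content"] = message["content"].replace("TOP-SECRET", "[redacted]")
```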
Greyson LaLonde
ffd717c51a fix: custom tool docs links, add mintlify broken links action (#3903)
* fix: update docs links to point to correct endpoints

* fix: update all broken doc links
2025-11-12 22:55:10 -08:00
Heitor Carvalho
fbe4aa4bd1 feat: fetch and store more data about okta authorization server (#3894)
2025-11-12 15:28:00 -03:00
Lorenze Jay
c205d2e8de feat: implement before and after LLM call hooks in CrewAgentExecutor (#3893)
- Added support for before and after LLM call hooks to allow modification of messages and responses during LLM interactions.
- Introduced LLMCallHookContext to provide hooks with access to the executor state, enabling in-place modifications of messages.
- Updated get_llm_response function to utilize the new hooks, ensuring that modifications persist across iterations.
- Enhanced tests to verify the functionality of the hooks and their error handling capabilities, ensuring robust execution flow.
2025-11-12 08:38:13 -08:00
Daniel Barreto
fcb5b19b2e Enhance schema description of QdrantVectorSearchTool (#3891)
2025-11-11 14:33:33 -08:00
Rip&Tear
01f0111d52 dependabot.yml creation (#3868)
* dependabot.yml creation

* Configure dependabot for pip package updates

Co-authored-by: matt <matt@crewai.com>

* Fix Dependabot package ecosystem

* Refactor: Use uv package-ecosystem in dependabot

Co-authored-by: matt <matt@crewai.com>

* fix: ensure dependabot uses uv ecosystem

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: matt <matt@crewai.com>
2025-11-11 12:14:16 +08:00
Lorenze Jay
6b52587c67 feat: expose messages to TaskOutput and LiteAgentOutputs (#3880)
* feat: add messages to task and agent outputs

- Introduced a new `messages` field in `TaskOutput` and `LiteAgentOutput` to capture messages from the last task execution.
- Updated the agent class to store the last messages and provide a property for easy access.
- Enhanced the `Crew` and `LiteAgent` classes to include messages in their outputs.
- Added tests to ensure that messages are correctly included in task outputs and agent outputs during execution.

* using typing_extensions for 3.10 compatibility

* feat: add last_messages attribute to agent for improved task tracking

- Introduced a new `last_messages` attribute in the agent class to store messages from the last task execution.
- Updated the `Crew` class to handle the new messages attribute in task outputs.
- Enhanced existing tests to ensure that the `last_messages` attribute is correctly initialized and utilized across various guardrail scenarios.

* fix: add messages field to TaskOutput in tests for consistency

- Updated multiple test cases to include the new `messages` field in the `TaskOutput` instances.
- Ensured that all relevant tests reflect the latest changes in the TaskOutput structure, maintaining consistency across the test suite.
- This change aligns with the recent addition of the `last_messages` attribute in the agent class for improved task tracking.

* feat: preserve messages in task outputs during replay

- Added functionality to the Crew class to store and retrieve messages in task outputs.
- Enhanced the replay mechanism to ensure that messages from stored task outputs are preserved and accessible.
- Introduced a new test case to verify that messages are correctly stored and replayed, ensuring consistency in task execution and output handling.
- This change improves the overall tracking and context retention of task interactions within the CrewAI framework.

* fix original test, prev was debugging
2025-11-10 17:38:30 -08:00
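Reading the new field might look like the sketch below (the agent and task definitions are illustrative; only the `messages` field comes from this PR):

```python
from crewai import Agent, Crew, Task

agent = Agent(role="Writer", goal="Write one line", backstory="Terse.")
task = Task(description="Say hello.", expected_output="A greeting.", agent=agent)
crew = Crew(agents=[agent], tasks=[task])

result = crew.kickoff()
# The conversation that produced the first task's output, per this PR.
for message in result.tasks_output[0].messages:
    print(message)
```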
Lorenze Jay
629f7f34ce docs: enhance task guardrail documentation with LLM-based validation support (#3879)
- Added section on LLM-based guardrails, explaining their usage and requirements.
- Updated examples to demonstrate the implementation of multiple guardrails, including both function-based and LLM-based approaches.
- Clarified the distinction between single and multiple guardrails in task configurations.
- Improved explanations of guardrail functionality to ensure better understanding of validation processes.
2025-11-10 15:35:42 -08:00
Lorenze Jay
0f1c173d02 feat: bump versions to 1.4.1 (#3862)
* feat: bump versions to 1.4.1

* chore: update crewAI tools dependency to version 1.4.1 in project templates
2025-11-07 11:19:07 -08:00
Greyson LaLonde
19c5b9a35e fix: properly handle agent max iterations
fixes #3847
2025-11-07 13:54:11 -05:00
Greyson LaLonde
1ed307b58c fix: route llm model syntax to litellm
* fix: route llm model syntax to litellm

* wip: add list of supported models
2025-11-07 13:34:15 -05:00
Lorenze Jay
d29867bbb6 chore: update version numbers to 1.4.0
2025-11-06 23:04:44 -05:00
Lorenze Jay
b2c278ed22 refactor: improve MCP tool execution handling with concurrent futures (#3854)
- Enhanced the MCP tool execution in both synchronous and asynchronous contexts by utilizing `concurrent.futures` for better event loop management.
- Updated error handling to provide clearer messages for connection issues and task cancellations.
- Added tests to validate MCP tool execution in both sync and async scenarios, ensuring robust functionality across different contexts.
2025-11-06 19:28:08 -08:00
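A common shape for this sync-over-async bridge is sketched below (an assumed pattern, not the exact crewAI implementation):

```python
import asyncio
import concurrent.futures
from typing import Any, Coroutine


def run_sync(coro: Coroutine[Any, Any, Any], timeout: float = 30.0) -> Any:
    """Run an MCP coroutine from sync code, even inside a running loop."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return asyncio.run(coro)  # no loop running: plain asyncio.run works
    # Already inside an event loop: run the coroutine on its own loop in a
    # worker thread and block on the concurrent.futures result instead.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result(timeout=timeout)
```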
Greyson LaLonde
f6aed9798b feat: allow non-ast plot routes 2025-11-06 21:17:29 -05:00
Greyson LaLonde
40a2d387a1 fix: keep stopwords updated 2025-11-06 21:10:25 -05:00
Lorenze Jay
6f36d7003b Lorenze/feat mcp first class support (#3850)
* WIP transport support mcp

* refactor: streamline MCP tool loading and error handling

* linted

* use Self type from typing_extensions instead of typing in MCP transport modules

* added tests for mcp setup

* docs: enhance MCP overview with detailed integration examples and structured configurations

* feat: implement MCP event handling and logging in event listener and client

- Added MCP event types and handlers for connection and tool execution events.
- Enhanced MCPClient to emit events on connection status and tool execution.
- Updated ConsoleFormatter to handle MCP event logging.
- Introduced new MCP event types for better integration and monitoring.
2025-11-06 17:45:16 -08:00
Greyson LaLonde
9e5906c52f feat: add pydantic validation dunder to BaseInterceptor
2025-11-06 15:27:07 -05:00
Lorenze Jay
fc521839e4 Lorenze/fix duplicating doc ids for knowledge (#3840)
* fix: update document ID handling in ChromaDB utility functions to use SHA-256 hashing and include index for uniqueness

* test: add tests for hash-based ID generation in ChromaDB utility functions

* drop idx for preventing dups, upsert should handle dups

* fix: update document ID extraction logic in ChromaDB utility functions to check for doc_id at the top level of the document

* fix: enhance document ID generation in ChromaDB utility functions to deduplicate documents and ensure unique hash-based IDs without suffixes

* fix: improve error handling and document ID generation in ChromaDB utility functions to ensure robust processing and uniqueness
2025-11-06 10:59:52 -08:00
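The final ID scheme might be sketched like this (a minimal sketch; the real ChromaDB utilities also handle document metadata):

```python
import hashlib


def document_ids(documents: list[str]) -> dict[str, str]:
    """Sketch: dedupe documents, then derive stable SHA-256 content IDs."""
    unique = dict.fromkeys(documents)  # removes duplicates, keeps order
    return {
        doc: hashlib.sha256(doc.encode("utf-8")).hexdigest() for doc in unique
    }


ids = document_ids(["alpha", "beta", "alpha"])
print(len(ids))  # 2 -- the duplicate "alpha" upserts to the same ID
```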
Greyson LaLonde
e4cc9a664c fix: handle unpickleable values in flow state
2025-11-06 01:29:21 -05:00
Greyson LaLonde
7e6171d5bc fix: ensure lite agents course-correct on validation errors
* fix: ensure lite agents course-correct on validation errors

* chore: update cassettes and test expectations

* fix: ensure multiple guardrails propagate
2025-11-05 19:02:11 -05:00
Greyson LaLonde
61ad1fb112 feat: add support for llm message interceptor hooks 2025-11-05 11:38:44 -05:00
Greyson LaLonde
54710a8711 fix: hash callback args correctly to ensure caching works
2025-11-05 07:19:09 -05:00
Lucas Gomide
5abf976373 fix: allow adding RAG source content from valid URLs (#3831)
2025-11-04 07:58:40 -05:00
Greyson LaLonde
329567153b fix: make plot node selection smoother
2025-11-03 07:49:31 -05:00
Greyson LaLonde
60332e0b19 feat: cache i18n prompts for efficient use 2025-11-03 07:39:05 -05:00
Lorenze Jay
40932af3fa feat: bump versions to 1.3.0 (#3820)
* feat: bump versions to 1.3.0

* chore: update crew and flow templates to use crewai[tools] version 1.3.0
2025-10-31 18:54:02 -07:00
Greyson LaLonde
e134e5305b Gl/feat/a2a refactor (#3793)
* feat: agent metaclass, refactor a2a to wrappers

* feat: a2a schemas and utils

* chore: move agent class, update imports

* refactor: organize imports to avoid circularity, add a2a to console

* feat: pass response_model through call chain

* feat: add standard openapi spec serialization to tools and structured output

* feat: a2a events

* chore: add a2a to pyproject

* docs: minimal base for learn docs

* fix: adjust a2a conversation flow, allow llm to decide exit until max_retries

* fix: inject agent skills into initial prompt

* fix: format agent card as json in prompt

* refactor: simplify A2A agent prompt formatting and improve skill display

* chore: wide cleanup

* chore: cleanup logic, add auth cache, use json for messages in prompt

* chore: update docs

* fix: doc snippets formatting

* feat: optimize A2A agent card fetching and improve error reporting

* chore: move imports to top of file

* chore: refactor hasattr check

* chore: add httpx-auth, update lockfile

* feat: create base public api

* chore: cleanup modules, add docstrings, types

* fix: exclude extra fields in prompt

* chore: update docs

* tests: update to correct import

* chore: lint for ruff, add missing import

* fix: tweak openai streaming logic for response model

* tests: add reimport for test

* fix: don't set a2a attr if not set

* chore: update cassettes

* tests: fix tests

* fix: use instructor and dont pass response_format for litellm

* chore: consolidate event listeners, add typing

* fix: address race condition in test, update cassettes

* tests: add correct mocks, rerun cassette for json

* tests: update cassette

* chore: regenerate cassette after new run

* fix: make token manager access-safe

* merge

* chore: update test and cassette for output pydantic

* fix: tweak to disallow deadlock

* chore: linter

* fix: adjust event ordering for threading

* fix: use conditional for batch check

* tests: tweak for emission

* tests: simplify api + event check

* fix: ensure non-function calling llms see json formatted string

* tests: tweak message comparison

* fix: use internal instructor for litellm structure responses

---------

Co-authored-by: Mike Plachta <mike@crewai.com>
2025-10-31 18:42:03 -07:00
Greyson LaLonde
e229ef4e19 refactor: improve flow handling, typing, and logging; update UI and tests
fix: refine nested flow conditionals and ensure router methods and routes are fully parsed
fix: improve docstrings, typing, and logging coverage across all events
feat: update flow.plot feature with new UI enhancements
chore: apply Ruff linting, reorganize imports, and remove deprecated utilities/files
chore: split constants and utils, clean JS comments, and add typing for linters
tests: strengthen test coverage for flow execution paths and router logic
2025-10-31 21:15:06 -04:00
Greyson LaLonde
2e9eb8c32d fix: refactor use_stop_words to property, add check for stop words
2025-10-29 19:14:01 +01:00
Lucas Gomide
4ebb5114ed Fix Firecrawl tools & adding tests (#3810)
* fix: fix Firecrawl Scrape tool

* fix: fix Firecrawl Search tool

* fix: fix Firecrawl Website tool

* tests: adding tests for Firecrawl
2025-10-29 13:37:57 -04:00
Daniel Barreto
70b083945f Enhance QdrantVectorSearchTool (#3806)
2025-10-28 13:42:40 -04:00
Tony Kipkemboi
410db1ff39 docs: migrate embedder→embedding_model and require vectordb across tool docs; add provider examples (en/ko/pt-BR) (#3804)
* docs(tools): migrate embedder->embedding_model, require vectordb; add Chroma/Qdrant examples across en/ko/pt-BR PDF/TXT/XML/MDX/DOCX/CSV/Directory docs

* docs(observability): apply latest Datadog tweaks in ko and pt-BR
2025-10-27 13:29:21 -04:00
Lorenze Jay
5d6b4c922b feat: bump versions to 1.2.1 (#3800)
* feat: bump versions to 1.2.1

* updated templates too
2025-10-27 09:12:04 -07:00
Lucas Gomide
b07c0fc45c docs: describe mandatory env-var to call Platform tools for each integration (#3803) 2025-10-27 10:01:41 -04:00
Sam Brenner
97853199c7 Add Datadog Integration Documentation (#3642)
* add datadog llm observability integration guide

* spacing fix

* wording changes

* alphabetize docs listing

* Update docs/en/observability/datadog.mdx

Co-authored-by: Barry Eom <31739208+barieom@users.noreply.github.com>

* add translations

* fix korean code block

---------

Co-authored-by: Barry Eom <31739208+barieom@users.noreply.github.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-27 09:48:38 -04:00
Lorenze Jay
494ed7e671 liteagent supports apps and mcps (#3794)
* liteagent supports apps and mcps

* generated cassettes for these
2025-10-24 18:42:08 -07:00
Lorenze Jay
a83c57a2f2 feat: bump versions to 1.2.0 (#3787)
* feat: bump versions to 1.2.0

* also include projects
2025-10-23 18:04:34 -07:00
Lorenze Jay
08e15ab267 fix: update default LLM model and improve error logging in LLM utilities (#3785)
* fix: update default LLM model and improve error logging in LLM utilities

* Updated the default LLM model from "gpt-4o-mini" to "gpt-4.1-mini" for better performance.
* Enhanced error logging in the LLM utilities to use logger.error instead of logger.debug, ensuring that errors are properly reported and raised.
* Added tests to verify behavior when OpenAI API key is missing and when Anthropic dependency is not available, improving robustness and error handling in LLM creation.

* fix: update test for default LLM model usage

* Refactored the test_create_llm_with_none_uses_default_model to use the imported DEFAULT_LLM_MODEL constant instead of a hardcoded string.
* Ensured that the test correctly asserts the model used is the current default, improving maintainability and consistency across tests.

* change default model to gpt-4.1-mini

* change default model to use the default constant
2025-10-23 17:54:11 -07:00
Greyson LaLonde
9728388ea7 fix: change flow viz del dir; method inspection
* chore: update flow viz deletion dir, add typing
* tests: add flow viz tests to ensure lib dir is not deleted
2025-10-22 19:32:38 -04:00
Greyson LaLonde
4371cf5690 chore: remove aisuite
Little usage, and it was blocking some features
2025-10-21 23:18:06 -04:00
Lorenze Jay
d28daa26cd feat: bump versions to 1.1.0 (#3770)
* feat: bump versions to 1.1.0

* chore: bump template versions

---------

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
2025-10-21 15:52:44 -07:00
Lorenze Jay
a850813f2b feat: enhance InternalInstructor to support multiple LLM providers (#3767)
* feat: enhance InternalInstructor to support multiple LLM providers

- Updated InternalInstructor to conditionally create an instructor client based on the LLM provider.
- Introduced a new method _create_instructor_client to handle client creation using the modern from_provider pattern.
- Added functionality to extract the provider from the LLM model name.
- Implemented tests for InternalInstructor with various LLM providers including OpenAI, Anthropic, Gemini, and Azure, ensuring robust integration and error handling.

This update improves flexibility and extensibility for different LLM integrations.

* fix test
2025-10-21 15:24:59 -07:00
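Using the instructor library's from_provider pattern directly looks roughly like this (provider/model strings are illustrative; requires a recent instructor release):

```python
import instructor
from pydantic import BaseModel


class Person(BaseModel):
    name: str
    age: int


# from_provider builds a structured-output client for the named provider.
client = instructor.from_provider("anthropic/claude-3-5-sonnet-latest")
person = client.chat.completions.create(
    response_model=Person,
    messages=[{"role": "user", "content": "Extract: Ada is 36."}],
    max_tokens=256,
)
print(person)
```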
Cameron Warren
5944a39629 fix: correct broken integration documentation links
Fix navigation paths for two integration tool cards that were redirecting to the
introduction page instead of their intended documentation pages.

Fixes #3516

Co-authored-by: Cwarre33 <cwarre33@charlotte.edu>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-21 18:12:08 -04:00
Greyson LaLonde
c594859ed0 feat: mypy plugin base
* feat: base mypy plugin with CrewBase

* fix: add crew method to protocol
2025-10-21 17:36:08 -04:00
Daniel Barreto
2ee27efca7 feat: improvements on QdrantVectorSearchTool
* Implement improvements on QdrantVectorSearchTool

- Allow search filters to be set at the constructor level
- Fix issue that prevented multiple records from being returned

* Implement improvements on QdrantVectorSearchTool

- Allow search filters to be set at the constructor level
- Fix issue that prevented multiple records from being returned

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-21 16:50:08 -04:00
Greyson LaLonde
f6e13eb890 chore: update codeql config paths to new folders
* chore: update codeql config paths to new folders

* tests: use threading.Condition for event check

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-10-21 14:43:25 -04:00
Lorenze Jay
e7b3ce27ca docs: update LLM integration details and examples
* docs: update LLM integration details and examples

- Changed references from LiteLLM to native SDKs for LLM providers.
- Enhanced OpenAI and AWS Bedrock sections with new usage examples and advanced configuration options.
- Added structured output examples and supported environment variables for better clarity.
- Improved documentation on additional parameters and features for LLM configurations.

* drop this example - should use structured output from task instead

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-21 14:39:50 -04:00
Greyson LaLonde
dba27cf8b5 fix: fix double trace call; add types
* fix: fix double trace call; add types

* tests: skip long running uv install test, refactor in future

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-10-21 14:15:39 -04:00
Greyson LaLonde
6469f224f6 chore: improve CrewBase typing 2025-10-21 13:58:35 -04:00
Greyson LaLonde
f3a63be215 tests: cassettes, threading for flow tests 2025-10-21 13:48:21 -04:00
Greyson LaLonde
01d8c189f0 fix: pin template versions to latest
2025-10-21 10:56:41 -04:00
Lorenze Jay
cc83c1ead5 feat: bump versions to 1.0.0
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-20 17:34:38 -04:00
Greyson LaLonde
7578901f6d chore: use main for devtools branch 2025-10-20 17:29:09 -04:00
Lorenze Jay
d1343b96ed Release/v1.0.0 (#3618)
* feat: add `apps` & `actions` attributes to Agent (#3504)

* feat: add app attributes to Agent

* feat: add actions attribute to Agent

* chore: resolve linter issues

* refactor: merge the apps and actions parameters into a single one

* fix: remove unnecessary print

* feat: logging error when CrewaiPlatformTools fails

* chore: export CrewaiPlatformTools directly from crewai_tools

* style: resolve linter issues

* test: fix broken tests

* style: solve linter issues

* fix: fix broken test

* feat: monorepo restructure and test/ci updates

- Add crewai workspace member
- Fix vcr cassette paths and restore test dirs
- Resolve ci failures and update linter/pytest rules

* chore: update python version to 3.13 and package metadata

* feat: add crewai-tools workspace and fix tests/dependencies

* feat: add crewai-tools workspace structure

* Squashed 'temp-crewai-tools/' content from commit 9bae5633

git-subtree-dir: temp-crewai-tools
git-subtree-split: 9bae56339096cb70f03873e600192bd2cd207ac9

* feat: configure crewai-tools workspace package with dependencies

* fix: apply ruff auto-formatting to crewai-tools code

* chore: update lockfile

* fix: don't allow tool tests yet

* fix: comment out extra pytest flags for now

* fix: remove conflicting conftest.py from crewai-tools tests

* fix: resolve dependency conflicts and test issues

- Pin vcrpy to 7.0.0 to fix pytest-recording compatibility
- Comment out types-requests to resolve urllib3 conflict
- Update requests requirement in crewai-tools to >=2.32.0

* chore: update CI workflows and docs for monorepo structure

* fix: actions syntax

* chore: ci publish and pin versions

* fix: add permission to action

* chore: bump version to 1.0.0a1 across all packages

- Updated version to 1.0.0a1 in pyproject.toml for crewai and crewai-tools
- Adjusted version in __init__.py files for consistency

* WIP: v1 docs (#3626)

(cherry picked from commit d46e20fa09bcd2f5916282f5553ddeb7183bd92c)

* docs: parity for all translations

* docs: full name of acronym AMP

* docs: fix lingering unused code

* docs: expand contextual options in docs.json

* docs: add contextual action to request feature on GitHub (#3635)

* chore: apply linting fixes to crewai-tools

* feat: add required env var validation for brightdata

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* fix: handle properly anyOf oneOf allOf schema's props

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* feat: bump version to 1.0.0a2

* Lorenze/native inference sdks (#3619)

* ruff linted

* using native sdks with litellm fallback

* drop exa

* drop print on completion

* Refactor LLM and utility functions for type consistency

- Updated `max_tokens` parameter in `LLM` class to accept `float` in addition to `int`.
- Modified `create_llm` function to ensure consistent type hints and return types, now returning `LLM | BaseLLM | None`.
- Adjusted type hints for various parameters in `create_llm` and `_llm_via_environment_or_fallback` functions for improved clarity and type safety.
- Enhanced test cases to reflect changes in type handling and ensure proper instantiation of LLM instances.

* fix agent_tests

* fix litellm tests and usagemetrics fix

* drop print

* Refactor LLM event handling and improve test coverage

- Removed commented-out event emission for LLM call failures in `llm.py`.
- Added `from_agent` parameter to `CrewAgentExecutor` for better context in LLM responses.
- Enhanced test for LLM call failure to simulate OpenAI API failure and updated assertions for clarity.
- Updated agent and task ID assertions in tests to ensure they are consistently treated as strings.

* fix test_converter

* fixed tests/agents/test_agent.py

* Refactor LLM context length exception handling and improve provider integration

- Renamed `LLMContextLengthExceededException` to `LLMContextLengthExceededExceptionError` for clarity and consistency.
- Updated LLM class to pass the provider parameter correctly during initialization.
- Enhanced error handling in various LLM provider implementations to raise the new exception type.
- Adjusted tests to reflect the updated exception name and ensure proper error handling in context length scenarios.

* Enhance LLM context window handling across providers

- Introduced CONTEXT_WINDOW_USAGE_RATIO to adjust context window sizes dynamically for Anthropic, Azure, Gemini, and OpenAI LLMs.
- Added validation for context window sizes in Azure and Gemini providers to ensure they fall within acceptable limits.
- Updated context window size calculations to use the new ratio, improving consistency and adaptability across different models.
- Removed hardcoded context window sizes in favor of ratio-based calculations for better flexibility.
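
A minimal sketch of the ratio-based sizing described above; the constant's value here is assumed for illustration and the real one lives in the provider modules:

    # Assumed value; crewAI's actual ratio may differ.
    CONTEXT_WINDOW_USAGE_RATIO = 0.75

    def usable_context_window(model_context_limit: int) -> int:
        """Scale the raw model limit so prompts leave headroom for responses."""
        return int(model_context_limit * CONTEXT_WINDOW_USAGE_RATIO)

    # e.g. a 128,000-token model yields 96,000 usable tokens at a 0.75 ratio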

* fix test agent again

* fix test agent

* feat: add native LLM providers for Anthropic, Azure, and Gemini

- Introduced new completion implementations for Anthropic, Azure, and Gemini, integrating their respective SDKs.
- Added utility functions for tool validation and extraction to support function calling across LLM providers.
- Enhanced context window management and token usage extraction for each provider.
- Created a common utility module for shared functionality among LLM providers.

* chore: update dependencies and improve context management

- Removed direct dependency on `litellm` from the main dependencies and added it under extras for better modularity.
- Updated the `litellm` dependency specification to allow for greater flexibility in versioning.
- Refactored context length exception handling across various LLM providers to use a consistent error class.
- Enhanced platform-specific dependency markers for NVIDIA packages to ensure compatibility across different systems.

* refactor(tests): update LLM instantiation to include is_litellm flag in test cases

- Modified multiple test cases in test_llm.py to set the is_litellm parameter to True when instantiating the LLM class.
- This change ensures that the tests are aligned with the latest LLM configuration requirements and improves consistency across test scenarios.
- Adjusted relevant assertions and comments to reflect the updated LLM behavior.

* linter

* linted

* revert constants

* fix(tests): correct type hint in expected model description

- Updated the expected description in the test_generate_model_description_dict_field function to use 'Dict' instead of 'dict' for consistency with type hinting conventions.
- This change ensures that the test accurately reflects the expected output format for model descriptions.

* refactor(llm): enhance LLM instantiation and error handling

- Updated the LLM class to include validation for the model parameter, ensuring it is a non-empty string.
- Improved error handling by logging warnings when the native SDK fails, allowing for a fallback to LiteLLM.
- Adjusted the instantiation of LLM in test cases to consistently include the is_litellm flag, aligning with recent changes in LLM configuration.
- Modified relevant tests to reflect these updates, ensuring better coverage and accuracy in testing scenarios.
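
A rough sketch of the native-SDK-first strategy with LiteLLM fallback that these commits describe; load_native_completion and LiteLLMCompletion are assumed names, not the actual crewAI API:

    import logging

    logger = logging.getLogger(__name__)

    def build_completion(provider: str, **params):
        """Try the provider's native SDK first, then fall back to LiteLLM."""
        try:
            native_cls = load_native_completion(provider)  # assumed helper
            return native_cls(**params)
        except ImportError as exc:
            logger.warning(
                "Native SDK for %s unavailable (%s); falling back to LiteLLM.",
                provider, exc,
            )
            return LiteLLMCompletion(**params)  # assumed fallback class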

* fixed test

* refactor(llm): enhance token usage tracking and add copy methods

- Updated the LLM class to track token usage and log callbacks in streaming mode, improving monitoring capabilities.
- Introduced shallow and deep copy methods for the LLM instance, allowing for better management of LLM configurations and parameters.
- Adjusted test cases to instantiate LLM with the is_litellm flag, ensuring alignment with recent changes in LLM configuration.

* refactor(tests): reorganize imports and enhance error messages in test cases

- Cleaned up import statements in test_crew.py for better organization and readability.
- Enhanced error messages in test cases to use `re.escape` for improved regex matching, ensuring more robust error handling.
- Adjusted comments for clarity and consistency across test scenarios.
- Ensured that all necessary modules are imported correctly to avoid potential runtime issues.

* feat: add base devtooling

* fix: ensure dep refs are updated for devtools

* fix: allow pre-release

* feat: allow release after tag

* feat: bump versions to 1.0.0a3 

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>

* fix: match tag and release title, ignore devtools build for pypi

* fix: allow failed pypi publish

* feat: introduce trigger listing and execution commands for local development (#3643)

* chore: exclude tests from ruff linting

* chore: exclude tests from GitHub Actions linter

* fix: replace print statements with logger in agent and memory handling

* chore: add noqa for intentional print in printer utility

* fix: resolve linting errors across codebase

* feat: update docs with new approach to consume Platform Actions (#3675)

* fix: remove duplicate line and add explicit env var

* feat: bump versions to 1.0.0a4 (#3686)

* Update triggers docs (#3678)

* docs: introduce triggers list & triggers run command

* docs: add KO triggers docs

* docs: ensure CREWAI_PLATFORM_INTEGRATION_TOKEN is mentioned on docs (#3687)

* Lorenze/bedrock llm (#3693)

* feat: add AWS Bedrock support and update dependencies

- Introduced BedrockCompletion class for AWS Bedrock integration in LLM.
- Added boto3 as a new dependency in both pyproject.toml and uv.lock.
- Updated LLM class to support Bedrock provider.
- Created new files for Bedrock provider implementation.

* using converse api

* converse

* linted

* refactor: update BedrockCompletion class to improve parameter handling

- Changed max_tokens from a fixed integer to an optional integer.
- Simplified model ID assignment by removing the inference profile mapping method.
- Cleaned up comments and unnecessary code related to tool specifications and model-specific parameters.

* feat: improve event bus thread safety and async support

Add thread-safe, async-compatible event bus with read–write locking and
handler dependency ordering. Remove blinker dependency and implement
direct dispatch. Improve type safety, error handling, and deterministic
event synchronization.

Refactor tests to auto-wait for async handlers, ensure clean teardown,
and add comprehensive concurrency coverage. Replace thread-local state
in AgentEvaluator with instance-based locking for correct cross-thread
access. Enhance tracing reliability and event finalization.
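
A compact reader/writer lock in the spirit of the locking added here (illustrative only; the real implementation in crewai.events differs in detail, though the w_locked() usage matches what conftest.py calls later in this diff):

    import threading
    from contextlib import contextmanager

    class RWLock:
        """Many concurrent readers, one exclusive writer."""

        def __init__(self) -> None:
            self._readers = 0
            self._count_lock = threading.Lock()   # guards the reader count
            self._writer_lock = threading.Lock()  # held by a writer or any active readers

        @contextmanager
        def r_locked(self):
            with self._count_lock:
                self._readers += 1
                if self._readers == 1:
                    self._writer_lock.acquire()  # first reader blocks writers
            try:
                yield
            finally:
                with self._count_lock:
                    self._readers -= 1
                    if self._readers == 0:
                        self._writer_lock.release()  # last reader unblocks writers

        @contextmanager
        def w_locked(self):
            with self._writer_lock:  # exclusive: no readers, no other writers
                yield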

* feat: enhance OpenAICompletion class with additional client parameters (#3701)

* feat: enhance OpenAICompletion class with additional client parameters

- Added support for default_headers, default_query, and client_params in the OpenAICompletion class.
- Refactored client initialization to use a dedicated method for client parameter retrieval.
- Introduced new test cases to validate the correct usage of OpenAICompletion with various parameters.

* fix: correct test case for unsupported OpenAI model

- Updated the test_openai.py to ensure that the LLM instance is created before calling the method, maintaining proper error handling for unsupported models.
- This change ensures that the test accurately checks for the NotFoundError when an invalid model is specified.

* fix: enhance error handling in OpenAICompletion class

- Added specific exception handling for NotFoundError and APIConnectionError in the OpenAICompletion class to provide clearer error messages and improve logging.
- Updated the test case for unsupported models to ensure it raises a ValueError with the appropriate message when a non-existent model is specified.
- This change improves the robustness of the OpenAI API integration and enhances the clarity of error reporting.

* fix: improve test for unsupported OpenAI model handling

- Refactored the test case in test_openai.py to create the LLM instance after mocking the OpenAI client, ensuring proper error handling for unsupported models.
- This change enhances the clarity of the test by accurately checking for ValueError when a non-existent model is specified, aligning with recent improvements in error handling for the OpenAICompletion class.

* feat: bump versions to 1.0.0b1 (#3706)

* Lorenze/tools drop litellm (#3710)

* completely drop litellm and correctly pass config for qdrant

* feat: add support for additional embedding models in EmbeddingService

- Expanded the list of supported embedding models to include Google Vertex, Hugging Face, Jina, Ollama, OpenAI, Roboflow, Watson X, custom embeddings, Sentence Transformers, Text2Vec, OpenClip, and Instructor.
- This enhancement improves the versatility of the EmbeddingService by allowing integration with a wider range of embedding providers.

* fix: update collection parameter handling in CrewAIRagAdapter

- Changed the condition for setting vectors_config in the CrewAIRagAdapter to check for QdrantConfig instance instead of using hasattr. This improves type safety and ensures proper configuration handling for Qdrant integration.

* moved stagehand as optional dep (#3712)

* feat: bump versions to 1.0.0b2 (#3713)

* feat: enhance AnthropicCompletion class with additional client parame… (#3707)

* feat: enhance AnthropicCompletion class with additional client parameters and tool handling

- Added support for client_params in the AnthropicCompletion class to allow for additional client configuration.
- Refactored client initialization to use a dedicated method for retrieving client parameters.
- Implemented a new method to handle tool use conversation flow, ensuring proper execution and response handling.
- Introduced comprehensive test cases to validate the functionality of the AnthropicCompletion class, including tool use scenarios and parameter handling.

* drop print statements

* test: add fixture to mock ANTHROPIC_API_KEY for tests

- Introduced a pytest fixture to automatically mock the ANTHROPIC_API_KEY environment variable for all tests in the test_anthropic.py module.
- This change ensures that tests can run without requiring a real API key, improving test isolation and reliability.

* refactor: streamline streaming message handling in AnthropicCompletion class

- Removed the 'stream' parameter from the API call as it is set internally by the SDK.
- Simplified the handling of tool use events and response construction by extracting token usage from the final message.
- Enhanced the flow for managing tool use conversation, ensuring proper integration with the streaming API response.

* fix streaming here too

* fix: improve error handling in tool conversion for AnthropicCompletion class

- Enhanced exception handling during tool conversion by catching KeyError and ValueError.
- Added logging for conversion errors to aid in debugging and maintain robustness in tool integration.
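
The same client_params pattern recurs for OpenAI, Gemini, and Bedrock in this release. A hedged usage sketch (the exact accepted keys depend on each provider's SDK):

    from crewai import LLM

    llm = LLM(
        model="anthropic/claude-3-5-sonnet-latest",
        # Forwarded to the underlying Anthropic client; keys assumed for illustration.
        client_params={"timeout": 30, "default_headers": {"X-Team": "research"}},
    )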

* feat: enhance GeminiCompletion class with client parameter support (#3717)

* feat: enhance GeminiCompletion class with client parameter support

- Added support for client_params in the GeminiCompletion class to allow for additional client configuration.
- Refactored client initialization into a dedicated method for improved parameter handling.
- Introduced a new method to retrieve client parameters, ensuring compatibility with the base class.
- Enhanced error handling during client initialization to provide clearer messages for missing configuration.
- Updated documentation to reflect the changes in client parameter usage.

* add optional dependencies

* refactor: update test fixture to mock GOOGLE_API_KEY

- Renamed the fixture from `mock_anthropic_api_key` to `mock_google_api_key` to reflect the change in the environment variable being mocked.
- This update ensures that all tests in the module can run with a mocked GOOGLE_API_KEY, improving test isolation and reliability.

* fix tests

* feat: enhance BedrockCompletion class with advanced features

* feat: enhance BedrockCompletion class with advanced features and error handling

- Added support for guardrail configuration, additional model request fields, and custom response field paths in the BedrockCompletion class.
- Improved error handling for AWS exceptions and added token usage tracking with stop reason logging.
- Enhanced streaming response handling with comprehensive event management, including tool use and content block processing.
- Updated documentation to reflect new features and initialization parameters.
- Introduced a new test suite for BedrockCompletion to validate functionality and ensure robust integration with AWS Bedrock APIs.

* chore: add boto typing

* fix: use typing_extensions.Required for Python 3.10 compatibility

---------

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* feat: azure native tests

* feat: add Azure AI Inference support and related tests

- Introduced the `azure-ai-inference` package with version `1.0.0b9` and its dependencies in `uv.lock` and `pyproject.toml`.
- Added new test files for Azure LLM functionality, including tests for Azure completion and tool handling.
- Implemented comprehensive test cases to validate Azure-specific behavior and integration with the CrewAI framework.
- Enhanced the testing framework to mock Azure credentials and ensure proper isolation during tests.

* feat: enhance AzureCompletion class with Azure OpenAI support

- Added support for the Azure OpenAI endpoint in the AzureCompletion class, allowing for flexible endpoint configurations.
- Implemented endpoint validation and correction to ensure proper URL formats for Azure OpenAI deployments.
- Enhanced error handling to provide clearer messages for common HTTP errors, including authentication and rate limit issues.
- Updated tests to validate the new endpoint handling and error messaging, ensuring robust integration with Azure AI Inference.
- Refactored parameter preparation to conditionally include the model parameter based on the endpoint type.

* refactor: convert project module to metaclass with full typing

* Lorenze/OpenAI base url backwards support (#3723)

* fix: enhance OpenAICompletion class base URL handling

- Updated the base URL assignment in the OpenAICompletion class to prioritize the new `api_base` attribute and fallback to the environment variable `OPENAI_BASE_URL` if both are not set.
- Added `api_base` to the list of parameters in the OpenAICompletion class to ensure proper configuration and flexibility in API endpoint management.

* feat: enhance OpenAICompletion class with api_base support

- Added the `api_base` parameter to the OpenAICompletion class to allow for flexible API endpoint configuration.
- Updated the `_get_client_params` method to prioritize `base_url` over `api_base`, ensuring correct URL handling.
- Introduced comprehensive tests to validate the behavior of `api_base` and `base_url` in various scenarios, including environment variable fallback.
- Enhanced test coverage for client parameter retrieval, ensuring robust integration with the OpenAI API.

* fix: improve OpenAICompletion class configuration handling

- Added a debug print statement to log the client configuration parameters during initialization for better traceability.
- Updated the base URL assignment logic to ensure it defaults to None if no valid base URL is provided, enhancing robustness in API endpoint configuration.
- Refined the retrieval of the `api_base` environment variable to streamline the configuration process.

* drop print
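
A sketch of the resolution order these commits describe for the OpenAI base URL (helper name is illustrative):

    import os

    def resolve_openai_base_url(
        base_url: str | None = None, api_base: str | None = None
    ) -> str | None:
        """Prefer base_url, then api_base, then the OPENAI_BASE_URL env var."""
        return base_url or api_base or os.getenv("OPENAI_BASE_URL")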

* feat: improvements on import native sdk support (#3725)

* feat: add support for Anthropic provider and enhance logging

- Introduced the `anthropic` package with version `0.69.0` in `pyproject.toml` and `uv.lock`, allowing for integration with the Anthropic API.
- Updated logging in the LLM class to provide clearer error messages when importing native providers, enhancing debugging capabilities.
- Improved error handling in the AnthropicCompletion class to guide users on installation via the updated error message format.
- Refactored import error handling in other provider classes to maintain consistency in error messaging and installation instructions.

* feat: enhance LLM support with Bedrock provider and update dependencies

- Added support for the `bedrock` provider in the LLM class, allowing integration with AWS Bedrock APIs.
- Updated `uv.lock` to replace `boto3` with `bedrock` in the dependencies, reflecting the new provider structure.
- Introduced `SUPPORTED_NATIVE_PROVIDERS` to include `bedrock` and ensure proper error handling when instantiating native providers.
- Enhanced error handling in the LLM class to raise informative errors when native provider instantiation fails.
- Added tests to validate the behavior of the new Bedrock provider and ensure fallback mechanisms work correctly for unsupported providers.
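
A hypothetical illustration of the SUPPORTED_NATIVE_PROVIDERS gate and the ImportError the follow-up tests expect (names and wording assumed):

    SUPPORTED_NATIVE_PROVIDERS = ("openai", "anthropic", "azure", "gemini", "bedrock")

    def require_native_provider(provider: str) -> None:
        """Raise ImportError when no native SDK integration exists."""
        if provider not in SUPPORTED_NATIVE_PROVIDERS:
            raise ImportError(
                f"No native SDK support for {provider!r}; install the provider "
                "extra or fall back to litellm."
            )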

* test: update native provider fallback tests to expect ImportError

* adjust the test with the expected behavior - raising ImportError

* this is expecting the litellm format; all gemini native tests are in test_google.py

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>

* fix: remove stdout prints, improve test determinism, and update trace handling

Removed `print` statements from the `LLMStreamChunkEvent` handler to prevent
LLM response chunks from being written directly to stdout. The listener now
only tracks chunks internally.

Fixes #3715

Added explicit return statements for trace-related tests.

Updated cassette for `test_failed_evaluation` to reflect new behavior where
an empty trace dict is used instead of returning early.

Ensured deterministic cleanup order in test fixtures by making
`clear_event_bus_handlers` depend on `setup_test_environment`. This guarantees
event bus shutdown and file handle cleanup occur before temporary directory
deletion, resolving intermittent “Directory not empty” errors in CI.

* chore: remove lib/crewai exclusion from pre-commit hooks

* feat: enhance task guardrail functionality and validation

* feat: enhance task guardrail functionality and validation

- Introduced support for multiple guardrails in the Task class, allowing for sequential processing of guardrails.
- Added a new `guardrails` field to the Task model to accept a list of callable guardrails or string descriptions.
- Implemented validation to ensure guardrails are processed correctly, including handling of retries and error messages.
- Enhanced the `_invoke_guardrail_function` method to manage guardrail execution and integrate with existing task output processing.
- Updated tests to cover various scenarios involving multiple guardrails, including success, failure, and retry mechanisms.

This update improves the flexibility and robustness of task execution by allowing for more complex validation scenarios.

* refactor: enhance guardrail type handling in Task model

- Updated the Task class to improve guardrail type definitions, introducing GuardrailType and GuardrailsType for better clarity and type safety.
- Simplified the validation logic for guardrails, ensuring that both single and multiple guardrails are processed correctly.
- Enhanced error messages for guardrail validation to provide clearer feedback when incorrect types are provided.
- This refactor improves the maintainability and robustness of task execution by standardizing guardrail handling.

* feat: implement per-guardrail retry tracking in Task model

- Introduced a new private attribute `_guardrail_retry_counts` to the Task class for tracking retry attempts on a per-guardrail basis.
- Updated the guardrail processing logic to utilize the new retry tracking, allowing for independent retry counts for each guardrail.
- Enhanced error handling to provide clearer feedback when guardrails fail validation after exceeding retry limits.
- Modified existing tests to validate the new retry tracking behavior, ensuring accurate assertions on guardrail retries.

This update improves the robustness and flexibility of task execution by allowing for more granular control over guardrail validation and retry mechanisms.
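
A hedged example of the multi-guardrail API described above. The callable signature follows crewAI's single-guardrail convention of returning (success, data); retry limits are tracked per guardrail, per the commit notes:

    from crewai import Agent, Task
    from crewai.tasks.task_output import TaskOutput

    def cites_a_source(output: TaskOutput) -> tuple[bool, object]:
        # Return (True, value) to pass, or (False, message) to retry with feedback.
        if "source:" in output.raw.lower():
            return True, output.raw
        return False, "Add a 'Source:' line citing where the facts came from."

    task = Task(
        description="Summarize the article and cite where each fact came from.",
        expected_output="A short, sourced summary.",
        agent=Agent(role="Researcher", goal="Accurate summaries", backstory="..."),
        # A mix of a callable and a string description, per the new guardrails field.
        guardrails=[cites_a_source, "Keep the summary under 200 words."],
    )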

* chore: 1.0.0b3 bump (#3734)

* chore: full ruff and mypy

Improved linting, pre-commit setup, and internal architecture. Configured Ruff to respect .gitignore, added stricter rules, and introduced a lock pre-commit hook with virtualenv activation. Fixed type shadowing in EXASearchTool using a type_ alias to avoid PEP 563 conflicts, and resolved circular imports in the agent executor and guardrail modules. Removed agent-ops attributes, deprecated the watson alias, and dropped crewai-enterprise tools with corresponding test updates. Refactored cache and memoization for thread safety and cleaned up structured output adapters and related logic.

* New MCL DSL (#3738)

* Adding MCP implementation

* New tests for MCP implementation

* fix tests

* update docs

* Revert "New tests for MCP implementation"

This reverts commit 0bbe6dee90.

* linter

* linter

* fix

* verify mcp package exists

* adjust docs to be clear that only remote servers are supported

* reverted

* ensure args schema generated properly

* properly close out

---------

Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* feat: a2a experimental

experimental a2a support

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
Co-authored-by: Mike Plachta <mplachta@users.noreply.github.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2025-10-20 14:10:19 -07:00
Greyson LaLonde
42f2b4d551 fix: preserve nested condition structure in Flow decorators
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Build uv cache / build-cache (3.10) (push) Has been cancelled
Build uv cache / build-cache (3.11) (push) Has been cancelled
Build uv cache / build-cache (3.12) (push) Has been cancelled
Build uv cache / build-cache (3.13) (push) Has been cancelled
Fixes nested boolean conditions being flattened in @listen, @start, and @router decorators. The or_() and and_() combinators now preserve their nested structure using a "conditions" key instead of flattening to a list. Added recursive evaluation logic to properly handle complex patterns like or_(and_(A, B), and_(C, D)).
2025-10-17 17:06:19 -04:00
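A minimal flow exercising the preserved nesting (a sketch; method names are illustrative):

    from crewai.flow.flow import Flow, and_, listen, or_, start

    class NestedConditions(Flow):
        @start()
        def a(self):
            return "a"

        @start()
        def b(self):
            return "b"

        @listen(a)
        def c(self):
            return "c"

        @listen(b)
        def d(self):
            return "d"

        # With the fix, the grouping survives: merge fires once (a AND b) have
        # completed, or once (c AND d) have completed, not on any flat subset.
        @listen(or_(and_(a, b), and_(c, d)))
        def merge(self):
            return "merged"
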
Greyson LaLonde
0229390ad1 fix: add standard print parameters to Printer.print method
- Adds sep, end, file, and flush parameters to match Python's built-in print function signature.
2025-10-17 15:27:22 -04:00
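A plausible shape for the updated method (a sketch; the real Printer in crewai.utilities also handles colored output):

    import sys
    from typing import Any, TextIO

    class Printer:
        def print(
            self,
            *values: Any,
            sep: str = " ",
            end: str = "\n",
            file: TextIO = sys.stdout,
            flush: bool = False,
        ) -> None:
            # Mirrors builtins.print so callers can pass the standard parameters.
            print(*values, sep=sep, end=end, file=file, flush=flush)
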
Vidit Ostwal
f0fb349ddf Fixing copy and adding NOT_SPECIFIED check in task.py (#3690)
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Build uv cache / build-cache (3.10) (push) Has been cancelled
Build uv cache / build-cache (3.11) (push) Has been cancelled
Build uv cache / build-cache (3.12) (push) Has been cancelled
Build uv cache / build-cache (3.13) (push) Has been cancelled
* Fixing copy and adding NOT_SPECIFIED check:

* Fixed mypy issues

* Added test Cases

* added linting checks

* Removed the docs bot folder

* Fixed ruff checks

* Remove secret_folder from tracking

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-10-14 09:52:39 -07:00
João Moura
bf2e2a42da fix: don't error out if there is no input() available
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Specific to Jupyter notebooks
2025-10-13 22:36:19 -04:00
Lorenze Jay
814c962196 chore: update crewAI version to 0.203.1 in multiple templates (#3699)
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
- Bumped the `crewai` version in `__init__.py` to 0.203.1.
- Updated the dependency versions in the crew, flow, and tool templates' `pyproject.toml` files to reflect the new `crewai` version.
2025-10-13 11:46:22 -07:00
Heitor Carvalho
2ebb2e845f fix: add a leeway of 10s when decoding jwt (#3698) 2025-10-13 12:42:03 -03:00
2318 changed files with 391869 additions and 136441 deletions

File diff suppressed because it is too large

161
.env.test Normal file

@@ -0,0 +1,161 @@
# =============================================================================
# Test Environment Variables
# =============================================================================
# This file contains all environment variables needed to run tests locally
# in a way that mimics the GitHub Actions CI environment.
# =============================================================================
# -----------------------------------------------------------------------------
# LLM Provider API Keys
# -----------------------------------------------------------------------------
OPENAI_API_KEY=fake-api-key
ANTHROPIC_API_KEY=fake-anthropic-key
GEMINI_API_KEY=fake-gemini-key
AZURE_API_KEY=fake-azure-key
OPENROUTER_API_KEY=fake-openrouter-key
# -----------------------------------------------------------------------------
# AWS Credentials
# -----------------------------------------------------------------------------
AWS_ACCESS_KEY_ID=fake-aws-access-key
AWS_SECRET_ACCESS_KEY=fake-aws-secret-key
AWS_DEFAULT_REGION=us-east-1
AWS_REGION_NAME=us-east-1
# -----------------------------------------------------------------------------
# Azure OpenAI Configuration
# -----------------------------------------------------------------------------
AZURE_ENDPOINT=https://fake-azure-endpoint.openai.azure.com
AZURE_OPENAI_ENDPOINT=https://fake-azure-endpoint.openai.azure.com
AZURE_OPENAI_API_KEY=fake-azure-openai-key
AZURE_API_VERSION=2024-02-15-preview
OPENAI_API_VERSION=2024-02-15-preview
# -----------------------------------------------------------------------------
# Google Cloud Configuration
# -----------------------------------------------------------------------------
#GOOGLE_CLOUD_PROJECT=fake-gcp-project
#GOOGLE_CLOUD_LOCATION=us-central1
# -----------------------------------------------------------------------------
# OpenAI Configuration
# -----------------------------------------------------------------------------
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_API_BASE=https://api.openai.com/v1
# -----------------------------------------------------------------------------
# Search & Scraping Tool API Keys
# -----------------------------------------------------------------------------
SERPER_API_KEY=fake-serper-key
EXA_API_KEY=fake-exa-key
BRAVE_API_KEY=fake-brave-key
FIRECRAWL_API_KEY=fake-firecrawl-key
TAVILY_API_KEY=fake-tavily-key
SERPAPI_API_KEY=fake-serpapi-key
SERPLY_API_KEY=fake-serply-key
LINKUP_API_KEY=fake-linkup-key
PARALLEL_API_KEY=fake-parallel-key
# -----------------------------------------------------------------------------
# Exa Configuration
# -----------------------------------------------------------------------------
EXA_BASE_URL=https://api.exa.ai
# -----------------------------------------------------------------------------
# Web Scraping & Automation
# -----------------------------------------------------------------------------
BRIGHT_DATA_API_KEY=fake-brightdata-key
BRIGHT_DATA_ZONE=fake-zone
BRIGHTDATA_API_URL=https://api.brightdata.com
BRIGHTDATA_DEFAULT_TIMEOUT=600
BRIGHTDATA_DEFAULT_POLLING_INTERVAL=1
OXYLABS_USERNAME=fake-oxylabs-user
OXYLABS_PASSWORD=fake-oxylabs-pass
SCRAPFLY_API_KEY=fake-scrapfly-key
SCRAPEGRAPH_API_KEY=fake-scrapegraph-key
BROWSERBASE_API_KEY=fake-browserbase-key
BROWSERBASE_PROJECT_ID=fake-browserbase-project
HYPERBROWSER_API_KEY=fake-hyperbrowser-key
MULTION_API_KEY=fake-multion-key
APIFY_API_TOKEN=fake-apify-token
# -----------------------------------------------------------------------------
# Database & Vector Store Credentials
# -----------------------------------------------------------------------------
SINGLESTOREDB_URL=mysql://fake:fake@localhost:3306/fake
SINGLESTOREDB_HOST=localhost
SINGLESTOREDB_PORT=3306
SINGLESTOREDB_USER=fake-user
SINGLESTOREDB_PASSWORD=fake-password
SINGLESTOREDB_DATABASE=fake-database
SINGLESTOREDB_CONNECT_TIMEOUT=30
SNOWFLAKE_USER=fake-snowflake-user
SNOWFLAKE_PASSWORD=fake-snowflake-password
SNOWFLAKE_ACCOUNT=fake-snowflake-account
SNOWFLAKE_WAREHOUSE=fake-snowflake-warehouse
SNOWFLAKE_DATABASE=fake-snowflake-database
SNOWFLAKE_SCHEMA=fake-snowflake-schema
WEAVIATE_URL=http://localhost:8080
WEAVIATE_API_KEY=fake-weaviate-key
EMBEDCHAIN_DB_URI=sqlite:///test.db
# Databricks Credentials
DATABRICKS_HOST=https://fake-databricks.cloud.databricks.com
DATABRICKS_TOKEN=fake-databricks-token
DATABRICKS_CONFIG_PROFILE=fake-profile
# MongoDB Credentials
MONGODB_URI=mongodb://fake:fake@localhost:27017/fake
# -----------------------------------------------------------------------------
# CrewAI Platform & Enterprise
# -----------------------------------------------------------------------------
# setting CREWAI_PLATFORM_INTEGRATION_TOKEN causes these test to fail:
#=========================== short test summary info ============================
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_platform_context_manager_basic_usage - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_context_var_isolation_between_tests - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_multiple_sequential_context_managers - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#CREWAI_PLATFORM_INTEGRATION_TOKEN=fake-platform-token
CREWAI_PERSONAL_ACCESS_TOKEN=fake-personal-token
CREWAI_PLUS_URL=https://fake.crewai.com
# -----------------------------------------------------------------------------
# Other Service API Keys
# -----------------------------------------------------------------------------
ZAPIER_API_KEY=fake-zapier-key
PATRONUS_API_KEY=fake-patronus-key
MINDS_API_KEY=fake-minds-key
HF_TOKEN=fake-hf-token
# -----------------------------------------------------------------------------
# Feature Flags/Testing Modes
# -----------------------------------------------------------------------------
CREWAI_DISABLE_TELEMETRY=true
OTEL_SDK_DISABLED=true
CREWAI_TESTING=true
CREWAI_TRACING_ENABLED=false
# -----------------------------------------------------------------------------
# Testing/CI Configuration
# -----------------------------------------------------------------------------
# VCR recording mode: "none" (default), "new_episodes", "all", "once"
PYTEST_VCR_RECORD_MODE=none
# Set to "true" by GitHub when running in GitHub Actions
# GITHUB_ACTIONS=false
# -----------------------------------------------------------------------------
# Python Configuration
# -----------------------------------------------------------------------------
PYTHONUNBUFFERED=1


@@ -2,20 +2,32 @@ name: "CodeQL Config"
paths-ignore:
# Ignore template files - these are boilerplate code that shouldn't be analyzed
- "src/crewai/cli/templates/**"
- "lib/crewai/src/crewai/cli/templates/**"
# Ignore test cassettes - these are test fixtures/recordings
- "tests/cassettes/**"
- "lib/crewai/tests/cassettes/**"
- "lib/crewai-tools/tests/cassettes/**"
# Ignore cache and build artifacts
- ".cache/**"
# Ignore documentation build artifacts
- "docs/.cache/**"
# Ignore experimental code
- "lib/crewai/src/crewai/experimental/a2a/**"
paths:
# Include all Python source code
- "src/**"
# Include tests (but exclude cassettes)
- "tests/**"
# Include GitHub Actions workflows/composite actions for CodeQL actions analysis
- ".github/workflows/**"
- ".github/actions/**"
# Include all Python source code from workspace packages
- "lib/crewai/src/**"
- "lib/crewai-tools/src/**"
- "lib/crewai-files/src/**"
- "lib/devtools/src/**"
# Include tests (but exclude cassettes via paths-ignore)
- "lib/crewai/tests/**"
- "lib/crewai-tools/tests/**"
- "lib/crewai-files/tests/**"
- "lib/devtools/tests/**"
# Configure specific queries or packs if needed
# queries:
# - uses: security-and-quality
# - uses: security-and-quality

16
.github/dependabot.yml vendored Normal file

@@ -0,0 +1,16 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
version: 2
updates:
- package-ecosystem: uv
directory: "/"
schedule:
interval: "weekly"
groups:
security-updates:
applies-to: security-updates
patterns:
- "*"


@@ -15,11 +15,11 @@ on:
push:
branches: [ "main" ]
paths-ignore:
- "src/crewai/cli/templates/**"
- "lib/crewai/src/crewai/cli/templates/**"
pull_request:
branches: [ "main" ]
paths-ignore:
- "src/crewai/cli/templates/**"
- "lib/crewai/src/crewai/cli/templates/**"
jobs:
analyze:
@@ -69,7 +69,7 @@ jobs:
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
uses: github/codeql-action/init@v4
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
@@ -98,6 +98,6 @@ jobs:
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
uses: github/codeql-action/analyze@v4
with:
category: "/language:${{matrix.language}}"

35
.github/workflows/docs-broken-links.yml vendored Normal file

@@ -0,0 +1,35 @@
name: Check Documentation Broken Links
on:
pull_request:
paths:
- "docs/**"
- "docs.json"
push:
branches:
- main
paths:
- "docs/**"
- "docs.json"
workflow_dispatch:
jobs:
check-links:
name: Check broken links
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Node
uses: actions/setup-node@v4
with:
node-version: "latest"
- name: Install Mintlify CLI
run: npm i -g mintlify
- name: Run broken link checker
run: |
# Auto-answer the prompt with yes command
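# Exit code 141 is 128 + SIGPIPE(13): `yes` is killed when mintlify
# exits and closes the pipe, so 141 is treated as success here.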
yes "" | mintlify broken-links || test $? -eq 141
working-directory: ./docs


@@ -0,0 +1,63 @@
name: Generate Tool Specifications
on:
pull_request:
branches:
- main
paths:
- 'lib/crewai-tools/src/crewai_tools/**'
workflow_dispatch:
permissions:
contents: write
pull-requests: write
jobs:
generate-specs:
runs-on: ubuntu-latest
env:
PYTHONUNBUFFERED: 1
steps:
- name: Generate GitHub App token
id: app-token
uses: tibdex/github-app-token@v2
with:
app_id: ${{ secrets.CREWAI_TOOL_SPECS_APP_ID }}
private_key: ${{ secrets.CREWAI_TOOL_SPECS_PRIVATE_KEY }}
- name: Checkout code
uses: actions/checkout@v4
with:
ref: ${{ github.head_ref }}
token: ${{ steps.app-token.outputs.token }}
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: "3.12"
enable-cache: true
- name: Install the project
working-directory: lib/crewai-tools
run: uv sync --dev --all-extras
- name: Generate tool specifications
working-directory: lib/crewai-tools
run: uv run python src/crewai_tools/generate_tool_specs.py
- name: Check for changes and commit
run: |
git config user.name "github-actions[bot]"
git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
git add lib/crewai-tools/tool.specs.json
if git diff --quiet --staged; then
echo "No changes detected in tool.specs.json"
else
echo "Changes detected in tool.specs.json, committing..."
git commit -m "chore: update tool specifications"
git push
fi


@@ -52,10 +52,11 @@ jobs:
- name: Run Ruff on Changed Files
if: ${{ steps.changed-files.outputs.files != '' }}
run: |
echo "${{ steps.changed-files.outputs.files }}" \
| tr ' ' '\n' \
| grep -v 'src/crewai/cli/templates/' \
| xargs -I{} uv run ruff check "{}"
echo "${{ steps.changed-files.outputs.files }}" \
| tr ' ' '\n' \
| grep -v 'src/crewai/cli/templates/' \
| grep -v '/tests/' \
| xargs -I{} uv run ruff check "{}"
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'


@@ -1,33 +0,0 @@
name: Notify Downstream
on:
push:
branches:
- main
permissions:
contents: read
jobs:
notify-downstream:
runs-on: ubuntu-latest
steps:
- name: Generate GitHub App token
id: app-token
uses: tibdex/github-app-token@v2
with:
app_id: ${{ secrets.OSS_SYNC_APP_ID }}
private_key: ${{ secrets.OSS_SYNC_APP_PRIVATE_KEY }}
- name: Notify Repo B
uses: peter-evans/repository-dispatch@v3
with:
token: ${{ steps.app-token.outputs.token }}
repository: ${{ secrets.OSS_SYNC_DOWNSTREAM_REPO }}
event-type: upstream-commit
client-payload: |
{
"commit_sha": "${{ github.sha }}"
}

100
.github/workflows/publish.yml vendored Normal file

@@ -0,0 +1,100 @@
name: Publish to PyPI
on:
repository_dispatch:
types: [deployment-tests-passed]
workflow_dispatch:
inputs:
release_tag:
description: 'Release tag to publish'
required: false
type: string
jobs:
build:
name: Build packages
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- name: Determine release tag
id: release
run: |
# Priority: workflow_dispatch input > repository_dispatch payload > default branch
if [ -n "${{ inputs.release_tag }}" ]; then
echo "tag=${{ inputs.release_tag }}" >> $GITHUB_OUTPUT
elif [ -n "${{ github.event.client_payload.release_tag }}" ]; then
echo "tag=${{ github.event.client_payload.release_tag }}" >> $GITHUB_OUTPUT
else
echo "tag=" >> $GITHUB_OUTPUT
fi
- uses: actions/checkout@v4
with:
ref: ${{ steps.release.outputs.tag || github.ref }}
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install uv
uses: astral-sh/setup-uv@v4
- name: Build packages
run: |
uv build --all-packages
rm dist/.gitignore
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: dist
path: dist/
publish:
name: Publish to PyPI
needs: build
runs-on: ubuntu-latest
environment:
name: pypi
url: https://pypi.org/p/crewai
permissions:
id-token: write
contents: read
steps:
- uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: "3.12"
enable-cache: false
- name: Download artifacts
uses: actions/download-artifact@v4
with:
name: dist
path: dist
- name: Publish to PyPI
env:
UV_PUBLISH_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
run: |
failed=0
for package in dist/*; do
if [[ "$package" == *"crewai_devtools"* ]]; then
echo "Skipping private package: $package"
continue
fi
echo "Publishing $package"
if ! uv publish "$package"; then
echo "Failed to publish $package"
failed=1
fi
done
if [ $failed -eq 1 ]; then
echo "Some packages failed to publish"
exit 1
fi


@@ -5,10 +5,6 @@ on: [pull_request]
permissions:
contents: read
env:
OPENAI_API_KEY: fake-api-key
PYTHONUNBUFFERED: 1
jobs:
tests:
name: tests (${{ matrix.python-version }})
@@ -56,13 +52,13 @@ jobs:
- name: Run tests (group ${{ matrix.group }} of 8)
run: |
PYTHON_VERSION_SAFE=$(echo "${{ matrix.python-version }}" | tr '.' '_')
DURATION_FILE=".test_durations_py${PYTHON_VERSION_SAFE}"
DURATION_FILE="../../.test_durations_py${PYTHON_VERSION_SAFE}"
# Temporarily always skip cached durations to fix test splitting
# When durations don't match, pytest-split runs duplicate tests instead of splitting
echo "Using even test splitting (duration cache disabled until fix merged)"
DURATIONS_ARG=""
# Original logic (disabled temporarily):
# if [ ! -f "$DURATION_FILE" ]; then
# echo "No cached durations found, tests will be split evenly"
@@ -74,18 +70,25 @@ jobs:
# echo "No test changes detected, using cached test durations for optimal splitting"
# DURATIONS_ARG="--durations-path=${DURATION_FILE}"
# fi
uv run pytest \
--block-network \
--timeout=30 \
cd lib/crewai && uv run pytest \
-vv \
--splits 8 \
--group ${{ matrix.group }} \
$DURATIONS_ARG \
--durations=10 \
-n auto \
--maxfail=3
- name: Run tool tests (group ${{ matrix.group }} of 8)
run: |
cd lib/crewai-tools && uv run pytest \
-vv \
--splits 8 \
--group ${{ matrix.group }} \
--durations=10 \
--maxfail=3
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4


@@ -0,0 +1,18 @@
name: Trigger Deployment Tests
on:
release:
types: [published]
jobs:
trigger:
name: Trigger deployment tests
runs-on: ubuntu-latest
steps:
- name: Trigger deployment tests
uses: peter-evans/repository-dispatch@v3
with:
token: ${{ secrets.CREWAI_DEPLOYMENTS_PAT }}
repository: ${{ secrets.CREWAI_DEPLOYMENTS_REPOSITORY }}
event-type: crewai-release
client-payload: '{"release_tag": "${{ github.event.release.tag_name }}", "release_name": "${{ github.event.release.name }}"}'

5
.gitignore vendored

@@ -2,7 +2,6 @@
.pytest_cache
__pycache__
dist/
lib/
.env
assets/*
.idea
@@ -27,3 +26,7 @@ plan.md
conceptual_plan.md
build_image
chromadb-*.lock
.claude
.crewai/memory
blogs/*
secrets/*


@@ -3,17 +3,31 @@ repos:
hooks:
- id: ruff
name: ruff
entry: uv run ruff check
entry: bash -c 'source .venv/bin/activate && uv run ruff check --config pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
- id: ruff-format
name: ruff-format
entry: uv run ruff format
entry: bash -c 'source .venv/bin/activate && uv run ruff format --config pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
- id: mypy
name: mypy
entry: uv run mypy
entry: bash -c 'source .venv/bin/activate && uv run mypy --config-file pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
exclude: ^tests/
exclude: ^(lib/crewai/src/crewai/cli/templates/|lib/crewai/tests/|lib/crewai-tools/tests/|lib/crewai-files/tests/)
- repo: https://github.com/astral-sh/uv-pre-commit
rev: 0.9.3
hooks:
- id: uv-lock
- repo: https://github.com/commitizen-tools/commitizen
rev: v4.10.1
hooks:
- id: commitizen
- id: commitizen-branch
stages: [ pre-push ]

1
.python-version Normal file

@@ -0,0 +1 @@
3.13


@@ -57,7 +57,7 @@
> It empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario.
- **CrewAI Crews**: Optimize for autonomy and collaborative intelligence.
- **CrewAI Flows**: Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively
- **CrewAI Flows**: The **enterprise and production architecture** for building and deploying multi-agent systems. Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively
With over 100,000 developers certified through our community courses at [learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
standard for enterprise-ready AI automation.
@@ -124,7 +124,8 @@ Setup and run your first CrewAI agents by following this tutorial.
[![CrewAI Getting Started Tutorial](https://img.youtube.com/vi/-kSOTtYzgEw/hqdefault.jpg)](https://www.youtube.com/watch?v=-kSOTtYzgEw "CrewAI Getting Started Tutorial")
###
Learning Resources
Learning Resources
Learn CrewAI through our comprehensive courses:
@@ -141,6 +142,7 @@ CrewAI offers two powerful, complementary approaches that work seamlessly togeth
- Dynamic task delegation and collaboration
- Specialized roles with defined goals and expertise
- Flexible problem-solving approaches
2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:
- Fine-grained control over execution paths for real-world scenarios
@@ -166,13 +168,13 @@ Ensure you have Python >=3.10 <3.14 installed on your system. CrewAI uses [UV](h
First, install CrewAI:
```shell
pip install crewai
uv pip install crewai
```
If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command:
```shell
pip install 'crewai[tools]'
uv pip install 'crewai[tools]'
```
The command above installs the basic package and also adds extra components which require more dependencies to function.
@@ -185,14 +187,15 @@ If you encounter issues during installation or usage, here are some common solut
1. **ModuleNotFoundError: No module named 'tiktoken'**
- Install tiktoken explicitly: `pip install 'crewai[embeddings]'`
- If using embedchain or other tools: `pip install 'crewai[tools]'`
- Install tiktoken explicitly: `uv pip install 'crewai[embeddings]'`
- If using embedchain or other tools: `uv pip install 'crewai[tools]'`
2. **Failed building wheel for tiktoken**
- Ensure Rust compiler is installed (see installation steps above)
- For Windows: Verify Visual C++ Build Tools are installed
- Try upgrading pip: `pip install --upgrade pip`
- If issues persist, use a pre-built wheel: `pip install tiktoken --prefer-binary`
- Try upgrading pip: `uv pip install --upgrade pip`
- If issues persist, use a pre-built wheel: `uv pip install tiktoken --prefer-binary`
### 2. Setting Up Your Crew with the YAML Configuration
@@ -270,7 +273,7 @@ reporting_analyst:
**tasks.yaml**
```yaml
````yaml
# src/my_project/config/tasks.yaml
research_task:
description: >
@@ -290,7 +293,7 @@ reporting_task:
Formatted as markdown without '```'
agent: reporting_analyst
output_file: report.md
```
````
**crew.py**
@@ -556,7 +559,7 @@ Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-
- **LangGraph**: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.
*P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb)).*
_P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb))._
- **Autogen**: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
@@ -611,7 +614,7 @@ uv build
### Installing Locally
```bash
pip install dist/*.tar.gz
uv pip install dist/*.tar.gz
```
## Telemetry
@@ -687,13 +690,13 @@ A: CrewAI is a standalone, lean, and fast Python framework built specifically fo
A: Install CrewAI using pip:
```shell
pip install crewai
uv pip install crewai
```
For additional tools, use:
```shell
pip install 'crewai[tools]'
uv pip install 'crewai[tools]'
```
### Q: Does CrewAI depend on LangChain?
@@ -775,4 +778,3 @@ A: Yes, CrewAI provides extensive beginner-friendly tutorials, courses, and docu
### Q: Can CrewAI automate human-in-the-loop workflows?
A: Yes, CrewAI fully supports human-in-the-loop workflows, allowing seamless collaboration between human experts and AI agents for enhanced decision-making.
# test

266
conftest.py Normal file

@@ -0,0 +1,266 @@
"""Pytest configuration for crewAI workspace."""
import base64
from collections.abc import Generator
import gzip
import os
from pathlib import Path
import tempfile
from typing import Any
from dotenv import load_dotenv
import pytest
from vcr.request import Request # type: ignore[import-untyped]
try:
import vcr.stubs.httpx_stubs as httpx_stubs # type: ignore[import-untyped]
except ModuleNotFoundError:
import vcr.stubs.httpcore_stubs as httpx_stubs # type: ignore[import-untyped]
env_test_path = Path(__file__).parent / ".env.test"
load_dotenv(env_test_path, override=True)
load_dotenv(override=True)
def _patched_make_vcr_request(httpx_request: Any, **kwargs: Any) -> Any:
"""Patched version of VCR's _make_vcr_request that handles binary content.
The original implementation fails on binary request bodies (like file uploads)
because it assumes all content can be decoded as UTF-8.
"""
raw_body = httpx_request.read()
try:
body = raw_body.decode("utf-8")
except UnicodeDecodeError:
body = base64.b64encode(raw_body).decode("ascii")
uri = str(httpx_request.url)
headers = dict(httpx_request.headers)
return Request(httpx_request.method, uri, body, headers)
httpx_stubs._make_vcr_request = _patched_make_vcr_request
@pytest.fixture(autouse=True, scope="function")
def cleanup_event_handlers() -> Generator[None, Any, None]:
"""Clean up event bus handlers after each test to prevent test pollution."""
yield
try:
from crewai.events.event_bus import crewai_event_bus
with crewai_event_bus._rwlock.w_locked():
crewai_event_bus._sync_handlers.clear()
crewai_event_bus._async_handlers.clear()
except Exception: # noqa: S110
pass
@pytest.fixture(autouse=True, scope="function")
def reset_event_state() -> None:
"""Reset event system state before each test for isolation."""
from crewai.events.base_events import reset_emission_counter
from crewai.events.event_context import (
EventContextConfig,
_event_context_config,
_event_id_stack,
)
reset_emission_counter()
_event_id_stack.set(())
_event_context_config.set(EventContextConfig())
@pytest.fixture(autouse=True, scope="function")
def setup_test_environment() -> Generator[None, Any, None]:
"""Setup test environment for crewAI workspace."""
with tempfile.TemporaryDirectory() as temp_dir:
storage_dir = Path(temp_dir) / "crewai_test_storage"
storage_dir.mkdir(parents=True, exist_ok=True)
if not storage_dir.exists() or not storage_dir.is_dir():
raise RuntimeError(
f"Failed to create test storage directory: {storage_dir}"
)
try:
test_file = storage_dir / ".permissions_test"
test_file.touch()
test_file.unlink()
except (OSError, IOError) as e:
raise RuntimeError(
f"Test storage directory {storage_dir} is not writable: {e}"
) from e
os.environ["CREWAI_STORAGE_DIR"] = str(storage_dir)
os.environ["CREWAI_TESTING"] = "true"
try:
yield
finally:
os.environ.pop("CREWAI_TESTING", "true")
os.environ.pop("CREWAI_STORAGE_DIR", None)
os.environ.pop("CREWAI_DISABLE_TELEMETRY", "true")
os.environ.pop("OTEL_SDK_DISABLED", "true")
os.environ.pop("OPENAI_BASE_URL", "https://api.openai.com/v1")
os.environ.pop("OPENAI_API_BASE", "https://api.openai.com/v1")
HEADERS_TO_FILTER = {
"authorization": "AUTHORIZATION-XXX",
"content-security-policy": "CSP-FILTERED",
"cookie": "COOKIE-XXX",
"set-cookie": "SET-COOKIE-XXX",
"permissions-policy": "PERMISSIONS-POLICY-XXX",
"referrer-policy": "REFERRER-POLICY-XXX",
"strict-transport-security": "STS-XXX",
"x-content-type-options": "X-CONTENT-TYPE-XXX",
"x-frame-options": "X-FRAME-OPTIONS-XXX",
"x-permitted-cross-domain-policies": "X-PERMITTED-XXX",
"x-request-id": "X-REQUEST-ID-XXX",
"x-runtime": "X-RUNTIME-XXX",
"x-xss-protection": "X-XSS-PROTECTION-XXX",
"x-stainless-arch": "X-STAINLESS-ARCH-XXX",
"x-stainless-os": "X-STAINLESS-OS-XXX",
"x-stainless-read-timeout": "X-STAINLESS-READ-TIMEOUT-XXX",
"cf-ray": "CF-RAY-XXX",
"etag": "ETAG-XXX",
"Strict-Transport-Security": "STS-XXX",
"access-control-expose-headers": "ACCESS-CONTROL-XXX",
"openai-organization": "OPENAI-ORG-XXX",
"openai-project": "OPENAI-PROJECT-XXX",
"x-ratelimit-limit-requests": "X-RATELIMIT-LIMIT-REQUESTS-XXX",
"x-ratelimit-limit-tokens": "X-RATELIMIT-LIMIT-TOKENS-XXX",
"x-ratelimit-remaining-requests": "X-RATELIMIT-REMAINING-REQUESTS-XXX",
"x-ratelimit-remaining-tokens": "X-RATELIMIT-REMAINING-TOKENS-XXX",
"x-ratelimit-reset-requests": "X-RATELIMIT-RESET-REQUESTS-XXX",
"x-ratelimit-reset-tokens": "X-RATELIMIT-RESET-TOKENS-XXX",
"x-goog-api-key": "X-GOOG-API-KEY-XXX",
"api-key": "X-API-KEY-XXX",
"User-Agent": "X-USER-AGENT-XXX",
"apim-request-id:": "X-API-CLIENT-REQUEST-ID-XXX",
"azureml-model-session": "AZUREML-MODEL-SESSION-XXX",
"x-ms-client-request-id": "X-MS-CLIENT-REQUEST-ID-XXX",
"x-ms-region": "X-MS-REGION-XXX",
"apim-request-id": "APIM-REQUEST-ID-XXX",
"x-api-key": "X-API-KEY-XXX",
"anthropic-organization-id": "ANTHROPIC-ORGANIZATION-ID-XXX",
"request-id": "REQUEST-ID-XXX",
"anthropic-ratelimit-input-tokens-limit": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-LIMIT-XXX",
"anthropic-ratelimit-input-tokens-remaining": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-REMAINING-XXX",
"anthropic-ratelimit-input-tokens-reset": "ANTHROPIC-RATELIMIT-INPUT-TOKENS-RESET-XXX",
"anthropic-ratelimit-output-tokens-limit": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-LIMIT-XXX",
"anthropic-ratelimit-output-tokens-remaining": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-REMAINING-XXX",
"anthropic-ratelimit-output-tokens-reset": "ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-RESET-XXX",
"anthropic-ratelimit-tokens-limit": "ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX",
"anthropic-ratelimit-tokens-remaining": "ANTHROPIC-RATELIMIT-TOKENS-REMAINING-XXX",
"anthropic-ratelimit-tokens-reset": "ANTHROPIC-RATELIMIT-TOKENS-RESET-XXX",
"x-amz-date": "X-AMZ-DATE-XXX",
"amz-sdk-invocation-id": "AMZ-SDK-INVOCATION-ID-XXX",
"accept-encoding": "ACCEPT-ENCODING-XXX",
"x-amzn-requestid": "X-AMZN-REQUESTID-XXX",
"x-amzn-RequestId": "X-AMZN-REQUESTID-XXX",
"x-a2a-notification-token": "X-A2A-NOTIFICATION-TOKEN-XXX",
"x-a2a-version": "X-A2A-VERSION-XXX",
}


def _filter_request_headers(request: Request) -> Request:  # type: ignore[no-any-unimported]
    """Filter sensitive headers from request before recording."""
    for header_name, replacement in HEADERS_TO_FILTER.items():
        for variant in [header_name, header_name.upper(), header_name.title()]:
            if variant in request.headers:
                request.headers[variant] = [replacement]
    request.method = request.method.upper()
    # Normalize Azure OpenAI endpoints to a consistent placeholder for cassette matching.
    if request.host and request.host.endswith(".openai.azure.com"):
        original_host = request.host
        placeholder_host = "fake-azure-endpoint.openai.azure.com"
        request.uri = request.uri.replace(original_host, placeholder_host)
    return request


def _filter_response_headers(response: dict[str, Any]) -> dict[str, Any] | None:
    """Filter sensitive headers from response before recording.

    Returns None to skip recording responses with empty bodies. This handles
    duplicate recordings caused by OpenAI's stainless client using
    with_raw_response, which triggers httpx to re-read the consumed stream.
    """
    body = response.get("body", {}).get("string", "")
    headers = response.get("headers", {})
    content_length = headers.get("content-length", headers.get("Content-Length", []))
    if body == "" or body == b"" or content_length == ["0"]:
        return None
    for encoding_header in ["Content-Encoding", "content-encoding"]:
        if encoding_header in headers:
            encoding = headers.pop(encoding_header)
            if encoding and encoding[0] == "gzip":
                body = response.get("body", {}).get("string", b"")
                if isinstance(body, bytes) and body.startswith(b"\x1f\x8b"):
                    response["body"]["string"] = gzip.decompress(body).decode("utf-8")
    for header_name, replacement in HEADERS_TO_FILTER.items():
        for variant in [header_name, header_name.upper(), header_name.title()]:
            if variant in headers:
                headers[variant] = [replacement]
    return response
@pytest.fixture(scope="module")
def vcr_cassette_dir(request: Any) -> str:
"""Generate cassette directory path based on test module location.
Organizes cassettes to mirror test directory structure within each package:
lib/crewai/tests/llms/google/test_google.py -> lib/crewai/tests/cassettes/llms/google/
lib/crewai-tools/tests/tools/test_search.py -> lib/crewai-tools/tests/cassettes/tools/
"""
test_file = Path(request.fspath)
for parent in test_file.parents:
if (
parent.name in ("crewai", "crewai-tools", "crewai-files")
and parent.parent.name == "lib"
):
package_root = parent
break
else:
package_root = test_file.parent
tests_root = package_root / "tests"
test_dir = test_file.parent
if test_dir != tests_root:
relative_path = test_dir.relative_to(tests_root)
cassette_dir = tests_root / "cassettes" / relative_path
else:
cassette_dir = tests_root / "cassettes"
cassette_dir.mkdir(parents=True, exist_ok=True)
return str(cassette_dir)
@pytest.fixture(scope="module")
def vcr_config(vcr_cassette_dir: str) -> dict[str, Any]:
"""Configure VCR with organized cassette storage."""
config = {
"cassette_library_dir": vcr_cassette_dir,
"record_mode": os.getenv("PYTEST_VCR_RECORD_MODE", "once"),
"filter_headers": [(k, v) for k, v in HEADERS_TO_FILTER.items()],
"before_record_request": _filter_request_headers,
"before_record_response": _filter_response_headers,
"filter_query_parameters": ["key"],
"match_on": ["method", "scheme", "host", "port", "path"],
}
if os.getenv("GITHUB_ACTIONS") == "true":
config["record_mode"] = "none"
return config
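

# A minimal usage sketch, assuming these fixtures are consumed through the
# pytest-recording plugin; the module path and test body below are illustrative.
# In e.g. lib/crewai/tests/llms/openai/test_openai.py, the module-scoped
# vcr_config above applies automatically, and cassettes land under
# lib/crewai/tests/cassettes/llms/openai/.
import pytest


@pytest.mark.vcr()
def test_completion_replays_from_cassette() -> None:
    # First local run records (record_mode "once"); CI replays only ("none").
    ...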


@@ -61,7 +61,9 @@
"groups": [
{
"group": "Welcome",
"pages": ["index"]
"pages": [
"index"
]
}
]
},
@@ -71,40 +73,80 @@
"groups": [
{
"group": "Get Started",
"pages": ["en/introduction", "en/installation", "en/quickstart"]
"pages": [
"en/introduction",
"en/installation",
"en/quickstart"
]
},
{
"group": "Guides",
"group": "AI Docs",
"pages": [
"en/ai/overview",
{
"group": "Strategy",
"icon": "compass",
"pages": ["en/guides/concepts/evaluating-use-cases"]
"group": "Flows",
"icon": "arrow-progress",
"pages": [
"en/ai/flows/index",
"en/ai/flows/reference",
"en/ai/flows/patterns",
"en/ai/flows/troubleshooting",
"en/ai/flows/examples"
]
},
{
"group": "Agents",
"icon": "user",
"pages": ["en/guides/agents/crafting-effective-agents"]
"pages": [
"en/ai/agents/index",
"en/ai/agents/reference",
"en/ai/agents/patterns",
"en/ai/agents/troubleshooting",
"en/ai/agents/examples"
]
},
{
"group": "Crews",
"icon": "users",
"pages": ["en/guides/crews/first-crew"]
},
{
"group": "Flows",
"icon": "code-branch",
"pages": [
"en/guides/flows/first-flow",
"en/guides/flows/mastering-flow-state"
"en/ai/crews/index",
"en/ai/crews/reference",
"en/ai/crews/patterns",
"en/ai/crews/troubleshooting",
"en/ai/crews/examples"
]
},
{
"group": "Advanced",
"icon": "gear",
"group": "LLMs",
"icon": "microchip-ai",
"pages": [
"en/guides/advanced/customizing-prompts",
"en/guides/advanced/fingerprinting"
"en/ai/llms/index",
"en/ai/llms/reference",
"en/ai/llms/patterns",
"en/ai/llms/troubleshooting",
"en/ai/llms/examples"
]
},
{
"group": "Memory",
"icon": "database",
"pages": [
"en/ai/memory/index",
"en/ai/memory/reference",
"en/ai/memory/patterns",
"en/ai/memory/troubleshooting",
"en/ai/memory/examples"
]
},
{
"group": "Tools",
"icon": "wrench",
"pages": [
"en/ai/tools/index",
"en/ai/tools/reference",
"en/ai/tools/patterns",
"en/ai/tools/troubleshooting",
"en/ai/tools/examples"
]
}
]
@@ -116,8 +158,10 @@
"en/concepts/tasks",
"en/concepts/crews",
"en/concepts/flows",
"en/concepts/production-architecture",
"en/concepts/knowledge",
"en/concepts/llms",
"en/concepts/files",
"en/concepts/processes",
"en/concepts/collaboration",
"en/concepts/training",
@@ -130,10 +174,60 @@
"en/concepts/event-listener"
]
},
{
"group": "Guides",
"pages": [
{
"group": "Strategy",
"icon": "compass",
"pages": [
"en/guides/concepts/evaluating-use-cases"
]
},
{
"group": "Agents",
"icon": "user",
"pages": [
"en/guides/agents/crafting-effective-agents"
]
},
{
"group": "Crews",
"icon": "users",
"pages": [
"en/guides/crews/first-crew"
]
},
{
"group": "Flows",
"icon": "code-branch",
"pages": [
"en/guides/flows/first-flow",
"en/guides/flows/mastering-flow-state"
]
},
{
"group": "Coding Tools",
"icon": "terminal",
"pages": [
"en/guides/coding-tools/agents-md"
]
},
{
"group": "Advanced",
"icon": "gear",
"pages": [
"en/guides/advanced/customizing-prompts",
"en/guides/advanced/fingerprinting"
]
}
]
},
{
"group": "MCP Integration",
"pages": [
"en/mcp/overview",
"en/mcp/dsl-integration",
"en/mcp/stdio",
"en/mcp/sse",
"en/mcp/streamable-http",
@@ -252,7 +346,8 @@
"pages": [
"en/tools/integration/overview",
"en/tools/integration/bedrockinvokeagenttool",
"en/tools/integration/crewaiautomationtool"
"en/tools/integration/crewaiautomationtool",
"en/tools/integration/mergeagenthandlertool"
]
},
{
@@ -275,6 +370,8 @@
"en/observability/overview",
"en/observability/arize-phoenix",
"en/observability/braintrust",
"en/observability/datadog",
"en/observability/galileo",
"en/observability/langdb",
"en/observability/langfuse",
"en/observability/langtrace",
@@ -305,18 +402,25 @@
"en/learn/hierarchical-process",
"en/learn/human-input-on-execution",
"en/learn/human-in-the-loop",
"en/learn/human-feedback-in-flows",
"en/learn/flowstate-chat-history",
"en/learn/kickoff-async",
"en/learn/kickoff-for-each",
"en/learn/llm-connections",
"en/learn/multimodal-agents",
"en/learn/replay-tasks-from-latest-crew-kickoff",
"en/learn/sequential-process",
"en/learn/using-annotations"
"en/learn/using-annotations",
"en/learn/execution-hooks",
"en/learn/llm-hooks",
"en/learn/tool-hooks"
]
},
{
"group": "Telemetry",
"pages": ["en/telemetry"]
"pages": [
"en/telemetry"
]
}
]
},
@@ -326,7 +430,9 @@
"groups": [
{
"group": "Getting Started",
"pages": ["en/enterprise/introduction"]
"pages": [
"en/enterprise/introduction"
]
},
{
"group": "Build",
@@ -335,7 +441,8 @@
"en/enterprise/features/crew-studio",
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations"
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -343,7 +450,8 @@
"pages": [
"en/enterprise/features/traces",
"en/enterprise/features/webhook-streaming",
"en/enterprise/features/hallucination-guardrail"
"en/enterprise/features/hallucination-guardrail",
"en/enterprise/features/flow-hitl-management"
]
},
{
@@ -361,10 +469,20 @@
"en/enterprise/integrations/github",
"en/enterprise/integrations/gmail",
"en/enterprise/integrations/google_calendar",
"en/enterprise/integrations/google_contacts",
"en/enterprise/integrations/google_docs",
"en/enterprise/integrations/google_drive",
"en/enterprise/integrations/google_sheets",
"en/enterprise/integrations/google_slides",
"en/enterprise/integrations/hubspot",
"en/enterprise/integrations/jira",
"en/enterprise/integrations/linear",
"en/enterprise/integrations/microsoft_excel",
"en/enterprise/integrations/microsoft_onedrive",
"en/enterprise/integrations/microsoft_outlook",
"en/enterprise/integrations/microsoft_sharepoint",
"en/enterprise/integrations/microsoft_teams",
"en/enterprise/integrations/microsoft_word",
"en/enterprise/integrations/notion",
"en/enterprise/integrations/salesforce",
"en/enterprise/integrations/shopify",
@@ -393,7 +511,8 @@
"group": "How-To Guides",
"pages": [
"en/enterprise/guides/build-crew",
"en/enterprise/guides/deploy-crew",
"en/enterprise/guides/prepare-for-deployment",
"en/enterprise/guides/deploy-to-amp",
"en/enterprise/guides/kickoff-crew",
"en/enterprise/guides/update-crew",
"en/enterprise/guides/enable-crew-studio",
@@ -408,7 +527,9 @@
},
{
"group": "Resources",
"pages": ["en/enterprise/resources/frequently-asked-questions"]
"pages": [
"en/enterprise/resources/frequently-asked-questions"
]
}
]
},
@@ -434,7 +555,9 @@
"groups": [
{
"group": "Examples",
"pages": ["en/examples/example", "en/examples/cookbooks"]
"pages": [
"en/examples/cookbooks"
]
}
]
},
@@ -444,7 +567,9 @@
"groups": [
{
"group": "Release Notes",
"pages": ["en/changelog"]
"pages": [
"en/changelog"
]
}
]
}
@@ -483,7 +608,9 @@
"groups": [
{
"group": "Bem-vindo",
"pages": ["pt-BR/index"]
"pages": [
"pt-BR/index"
]
}
]
},
@@ -505,17 +632,23 @@
{
"group": "Estratégia",
"icon": "compass",
"pages": ["pt-BR/guides/concepts/evaluating-use-cases"]
"pages": [
"pt-BR/guides/concepts/evaluating-use-cases"
]
},
{
"group": "Agentes",
"icon": "user",
"pages": ["pt-BR/guides/agents/crafting-effective-agents"]
"pages": [
"pt-BR/guides/agents/crafting-effective-agents"
]
},
{
"group": "Crews",
"icon": "users",
"pages": ["pt-BR/guides/crews/first-crew"]
"pages": [
"pt-BR/guides/crews/first-crew"
]
},
{
"group": "Flows",
@@ -542,8 +675,10 @@
"pt-BR/concepts/tasks",
"pt-BR/concepts/crews",
"pt-BR/concepts/flows",
"pt-BR/concepts/production-architecture",
"pt-BR/concepts/knowledge",
"pt-BR/concepts/llms",
"pt-BR/concepts/files",
"pt-BR/concepts/processes",
"pt-BR/concepts/collaboration",
"pt-BR/concepts/training",
@@ -560,6 +695,7 @@
"group": "Integração MCP",
"pages": [
"pt-BR/mcp/overview",
"pt-BR/mcp/dsl-integration",
"pt-BR/mcp/stdio",
"pt-BR/mcp/sse",
"pt-BR/mcp/streamable-http",
@@ -685,9 +821,12 @@
{
"group": "Observabilidade",
"pages": [
"pt-BR/observability/tracing",
"pt-BR/observability/overview",
"pt-BR/observability/arize-phoenix",
"pt-BR/observability/braintrust",
"pt-BR/observability/datadog",
"pt-BR/observability/galileo",
"pt-BR/observability/langdb",
"pt-BR/observability/langfuse",
"pt-BR/observability/langtrace",
@@ -717,18 +856,24 @@
"pt-BR/learn/hierarchical-process",
"pt-BR/learn/human-input-on-execution",
"pt-BR/learn/human-in-the-loop",
"pt-BR/learn/human-feedback-in-flows",
"pt-BR/learn/kickoff-async",
"pt-BR/learn/kickoff-for-each",
"pt-BR/learn/llm-connections",
"pt-BR/learn/multimodal-agents",
"pt-BR/learn/replay-tasks-from-latest-crew-kickoff",
"pt-BR/learn/sequential-process",
"pt-BR/learn/using-annotations"
"pt-BR/learn/using-annotations",
"pt-BR/learn/execution-hooks",
"pt-BR/learn/llm-hooks",
"pt-BR/learn/tool-hooks"
]
},
{
"group": "Telemetria",
"pages": ["pt-BR/telemetry"]
"pages": [
"pt-BR/telemetry"
]
}
]
},
@@ -738,7 +883,9 @@
"groups": [
{
"group": "Começando",
"pages": ["pt-BR/enterprise/introduction"]
"pages": [
"pt-BR/enterprise/introduction"
]
},
{
"group": "Construir",
@@ -747,7 +894,8 @@
"pt-BR/enterprise/features/crew-studio",
"pt-BR/enterprise/features/marketplace",
"pt-BR/enterprise/features/agent-repositories",
"pt-BR/enterprise/features/tools-and-integrations"
"pt-BR/enterprise/features/tools-and-integrations",
"pt-BR/enterprise/features/pii-trace-redactions"
]
},
{
@@ -755,7 +903,8 @@
"pages": [
"pt-BR/enterprise/features/traces",
"pt-BR/enterprise/features/webhook-streaming",
"pt-BR/enterprise/features/hallucination-guardrail"
"pt-BR/enterprise/features/hallucination-guardrail",
"pt-BR/enterprise/features/flow-hitl-management"
]
},
{
@@ -773,10 +922,20 @@
"pt-BR/enterprise/integrations/github",
"pt-BR/enterprise/integrations/gmail",
"pt-BR/enterprise/integrations/google_calendar",
"pt-BR/enterprise/integrations/google_contacts",
"pt-BR/enterprise/integrations/google_docs",
"pt-BR/enterprise/integrations/google_drive",
"pt-BR/enterprise/integrations/google_sheets",
"pt-BR/enterprise/integrations/google_slides",
"pt-BR/enterprise/integrations/hubspot",
"pt-BR/enterprise/integrations/jira",
"pt-BR/enterprise/integrations/linear",
"pt-BR/enterprise/integrations/microsoft_excel",
"pt-BR/enterprise/integrations/microsoft_onedrive",
"pt-BR/enterprise/integrations/microsoft_outlook",
"pt-BR/enterprise/integrations/microsoft_sharepoint",
"pt-BR/enterprise/integrations/microsoft_teams",
"pt-BR/enterprise/integrations/microsoft_word",
"pt-BR/enterprise/integrations/notion",
"pt-BR/enterprise/integrations/salesforce",
"pt-BR/enterprise/integrations/shopify",
@@ -789,7 +948,8 @@
"group": "Guias",
"pages": [
"pt-BR/enterprise/guides/build-crew",
"pt-BR/enterprise/guides/deploy-crew",
"pt-BR/enterprise/guides/prepare-for-deployment",
"pt-BR/enterprise/guides/deploy-to-amp",
"pt-BR/enterprise/guides/kickoff-crew",
"pt-BR/enterprise/guides/update-crew",
"pt-BR/enterprise/guides/enable-crew-studio",
@@ -805,6 +965,12 @@
"group": "Triggers",
"pages": [
"pt-BR/enterprise/guides/automation-triggers",
"pt-BR/enterprise/guides/gmail-trigger",
"pt-BR/enterprise/guides/google-calendar-trigger",
"pt-BR/enterprise/guides/google-drive-trigger",
"pt-BR/enterprise/guides/outlook-trigger",
"pt-BR/enterprise/guides/onedrive-trigger",
"pt-BR/enterprise/guides/microsoft-teams-trigger",
"pt-BR/enterprise/guides/slack-trigger",
"pt-BR/enterprise/guides/hubspot-trigger",
"pt-BR/enterprise/guides/salesforce-trigger",
@@ -841,7 +1007,10 @@
"groups": [
{
"group": "Exemplos",
"pages": ["pt-BR/examples/example", "pt-BR/examples/cookbooks"]
"pages": [
"pt-BR/examples/example",
"pt-BR/examples/cookbooks"
]
}
]
},
@@ -851,7 +1020,9 @@
"groups": [
{
"group": "Notas de Versão",
"pages": ["pt-BR/changelog"]
"pages": [
"pt-BR/changelog"
]
}
]
}
@@ -890,7 +1061,9 @@
"groups": [
{
"group": "환영합니다",
"pages": ["ko/index"]
"pages": [
"ko/index"
]
}
]
},
@@ -900,7 +1073,11 @@
"groups": [
{
"group": "시작 안내",
"pages": ["ko/introduction", "ko/installation", "ko/quickstart"]
"pages": [
"ko/introduction",
"ko/installation",
"ko/quickstart"
]
},
{
"group": "가이드",
@@ -908,17 +1085,23 @@
{
"group": "전략",
"icon": "compass",
"pages": ["ko/guides/concepts/evaluating-use-cases"]
"pages": [
"ko/guides/concepts/evaluating-use-cases"
]
},
{
"group": "에이전트 (Agents)",
"icon": "user",
"pages": ["ko/guides/agents/crafting-effective-agents"]
"pages": [
"ko/guides/agents/crafting-effective-agents"
]
},
{
"group": "크루 (Crews)",
"icon": "users",
"pages": ["ko/guides/crews/first-crew"]
"pages": [
"ko/guides/crews/first-crew"
]
},
{
"group": "플로우 (Flows)",
@@ -945,8 +1128,10 @@
"ko/concepts/tasks",
"ko/concepts/crews",
"ko/concepts/flows",
"ko/concepts/production-architecture",
"ko/concepts/knowledge",
"ko/concepts/llms",
"ko/concepts/files",
"ko/concepts/processes",
"ko/concepts/collaboration",
"ko/concepts/training",
@@ -963,6 +1148,7 @@
"group": "MCP 통합",
"pages": [
"ko/mcp/overview",
"ko/mcp/dsl-integration",
"ko/mcp/stdio",
"ko/mcp/sse",
"ko/mcp/streamable-http",
@@ -1100,9 +1286,12 @@
{
"group": "Observability",
"pages": [
"ko/observability/tracing",
"ko/observability/overview",
"ko/observability/arize-phoenix",
"ko/observability/braintrust",
"ko/observability/datadog",
"ko/observability/galileo",
"ko/observability/langdb",
"ko/observability/langfuse",
"ko/observability/langtrace",
@@ -1132,18 +1321,24 @@
"ko/learn/hierarchical-process",
"ko/learn/human-input-on-execution",
"ko/learn/human-in-the-loop",
"ko/learn/human-feedback-in-flows",
"ko/learn/kickoff-async",
"ko/learn/kickoff-for-each",
"ko/learn/llm-connections",
"ko/learn/multimodal-agents",
"ko/learn/replay-tasks-from-latest-crew-kickoff",
"ko/learn/sequential-process",
"ko/learn/using-annotations"
"ko/learn/using-annotations",
"ko/learn/execution-hooks",
"ko/learn/llm-hooks",
"ko/learn/tool-hooks"
]
},
{
"group": "Telemetry",
"pages": ["ko/telemetry"]
"pages": [
"ko/telemetry"
]
}
]
},
@@ -1153,7 +1348,9 @@
"groups": [
{
"group": "시작 안내",
"pages": ["ko/enterprise/introduction"]
"pages": [
"ko/enterprise/introduction"
]
},
{
"group": "빌드",
@@ -1162,7 +1359,8 @@
"ko/enterprise/features/crew-studio",
"ko/enterprise/features/marketplace",
"ko/enterprise/features/agent-repositories",
"ko/enterprise/features/tools-and-integrations"
"ko/enterprise/features/tools-and-integrations",
"ko/enterprise/features/pii-trace-redactions"
]
},
{
@@ -1170,7 +1368,8 @@
"pages": [
"ko/enterprise/features/traces",
"ko/enterprise/features/webhook-streaming",
"ko/enterprise/features/hallucination-guardrail"
"ko/enterprise/features/hallucination-guardrail",
"ko/enterprise/features/flow-hitl-management"
]
},
{
@@ -1188,10 +1387,20 @@
"ko/enterprise/integrations/github",
"ko/enterprise/integrations/gmail",
"ko/enterprise/integrations/google_calendar",
"ko/enterprise/integrations/google_contacts",
"ko/enterprise/integrations/google_docs",
"ko/enterprise/integrations/google_drive",
"ko/enterprise/integrations/google_sheets",
"ko/enterprise/integrations/google_slides",
"ko/enterprise/integrations/hubspot",
"ko/enterprise/integrations/jira",
"ko/enterprise/integrations/linear",
"ko/enterprise/integrations/microsoft_excel",
"ko/enterprise/integrations/microsoft_onedrive",
"ko/enterprise/integrations/microsoft_outlook",
"ko/enterprise/integrations/microsoft_sharepoint",
"ko/enterprise/integrations/microsoft_teams",
"ko/enterprise/integrations/microsoft_word",
"ko/enterprise/integrations/notion",
"ko/enterprise/integrations/salesforce",
"ko/enterprise/integrations/shopify",
@@ -1204,7 +1413,8 @@
"group": "How-To Guides",
"pages": [
"ko/enterprise/guides/build-crew",
"ko/enterprise/guides/deploy-crew",
"ko/enterprise/guides/prepare-for-deployment",
"ko/enterprise/guides/deploy-to-amp",
"ko/enterprise/guides/kickoff-crew",
"ko/enterprise/guides/update-crew",
"ko/enterprise/guides/enable-crew-studio",
@@ -1220,6 +1430,12 @@
"group": "트리거",
"pages": [
"ko/enterprise/guides/automation-triggers",
"ko/enterprise/guides/gmail-trigger",
"ko/enterprise/guides/google-calendar-trigger",
"ko/enterprise/guides/google-drive-trigger",
"ko/enterprise/guides/outlook-trigger",
"ko/enterprise/guides/onedrive-trigger",
"ko/enterprise/guides/microsoft-teams-trigger",
"ko/enterprise/guides/slack-trigger",
"ko/enterprise/guides/hubspot-trigger",
"ko/enterprise/guides/salesforce-trigger",
@@ -1228,7 +1444,9 @@
},
{
"group": "학습 자원",
"pages": ["ko/enterprise/resources/frequently-asked-questions"]
"pages": [
"ko/enterprise/resources/frequently-asked-questions"
]
}
]
},
@@ -1254,7 +1472,10 @@
"groups": [
{
"group": "예시",
"pages": ["ko/examples/example", "ko/examples/cookbooks"]
"pages": [
"ko/examples/example",
"ko/examples/cookbooks"
]
}
]
},
@@ -1264,7 +1485,9 @@
"groups": [
{
"group": "릴리스 노트",
"pages": ["ko/changelog"]
"pages": [
"ko/changelog"
]
}
]
}
@@ -1331,6 +1554,18 @@
"source": "/api-reference",
"destination": "/en/api-reference/introduction"
},
{
"source": "/",
"destination": "/en/introduction"
},
{
"source": "/en",
"destination": "/en/introduction"
},
{
"source": "/en/examples/example",
"destination": "/en/examples/cookbooks"
},
{
"source": "/introduction",
"destination": "/en/introduction"
@@ -1379,6 +1614,18 @@
"source": "/enterprise/:path*",
"destination": "/en/enterprise/:path*"
},
{
"source": "/en/enterprise/guides/deploy-crew",
"destination": "/en/enterprise/guides/deploy-to-amp"
},
{
"source": "/ko/enterprise/guides/deploy-crew",
"destination": "/ko/enterprise/guides/deploy-to-amp"
},
{
"source": "/pt-BR/enterprise/guides/deploy-crew",
"destination": "/pt-BR/enterprise/guides/deploy-to-amp"
},
{
"source": "/api-reference/:path*",
"destination": "/en/api-reference/:path*"


@@ -0,0 +1,12 @@
---
title: "Agents: Examples"
description: "Runnable examples for robust agent configuration and execution."
icon: "rocket-launch"
mode: "wide"
---
## Example links
- [/en/guides/agents/crafting-effective-agents](/en/guides/agents/crafting-effective-agents)
- [/en/learn/customizing-agents](/en/learn/customizing-agents)
- [/en/learn/coding-agents](/en/learn/coding-agents)


@@ -0,0 +1,32 @@
---
title: "Agents: Concepts"
description: "Agent role contracts, task boundaries, and decision criteria for robust agent behavior."
icon: "user"
mode: "wide"
---
## When to use
- You need specialized behavior with explicit role and goal.
- You need tool-enabled execution under constraints.
## When not to use
- Static transformations are enough without model reasoning.
- Task can be solved by deterministic code only.
## Core decisions
| Decision | Choose this when |
|---|---|
| Single agent | Narrow scope, low coordination needs |
| Multi-agent crew | Distinct expertise and review loops needed |
| Tool-enabled agent | Model needs external actions or data |
## Canonical links
- Reference: [/en/ai/agents/reference](/en/ai/agents/reference)
- Patterns: [/en/ai/agents/patterns](/en/ai/agents/patterns)
- Troubleshooting: [/en/ai/agents/troubleshooting](/en/ai/agents/troubleshooting)
- Examples: [/en/ai/agents/examples](/en/ai/agents/examples)
- Existing docs: [/en/concepts/agents](/en/concepts/agents)


@@ -0,0 +1,17 @@
---
title: "Agents: Patterns"
description: "Practical agent patterns for role design, tool boundaries, and reliable outputs."
icon: "diagram-project"
mode: "wide"
---
## Patterns
1. Role + reviewer pair
- One agent drafts, one agent validates.
2. Tool-bounded agent
- Restrict tool list to minimal action set.
3. Structured output agent
- Force JSON or schema output for automation pipelines.


@@ -0,0 +1,22 @@
---
title: "Agents: Reference"
description: "Reference for agent fields, prompt contracts, tool usage, and output constraints."
icon: "book"
mode: "wide"
---
## Agent contract
- `role`: stable operating identity
- `goal`: measurable completion objective
- `backstory`: bounded style and context
- `tools`: allowed action surface
## Output contract
- Prefer structured outputs for machine workflows.
- Define failure behavior for missing tool data.
## Canonical source
Primary API details live in [/en/concepts/agents](/en/concepts/agents).
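
A minimal sketch of this contract, assuming the standard `crewai` import path; all field values are illustrative:

```python
from crewai import Agent

researcher = Agent(
    role="Senior Research Analyst",  # stable operating identity
    goal="Summarize a topic into five sourced bullet points",  # measurable completion objective
    backstory="A concise, citation-first analyst.",  # bounded style and context
    tools=[],  # allowed action surface; keep it minimal
)
```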


@@ -0,0 +1,12 @@
---
title: "Agents: Troubleshooting"
description: "Diagnose and fix common agent reliability and instruction-following failures."
icon: "circle-exclamation"
mode: "wide"
---
## Common issues
- Hallucinated tool results: require tool-call evidence in output.
- Prompt drift: tighten role and success criteria.
- Verbose but low-signal output: enforce concise schema output.


@@ -0,0 +1,12 @@
---
title: "Crews: Examples"
description: "Runnable crew examples for sequential and hierarchical execution."
icon: "rocket-launch"
mode: "wide"
---
## Example links
- [/en/guides/crews/first-crew](/en/guides/crews/first-crew)
- [/en/learn/sequential-process](/en/learn/sequential-process)
- [/en/learn/hierarchical-process](/en/learn/hierarchical-process)


@@ -0,0 +1,26 @@
---
title: "Crews: Concepts"
description: "When to use crews, process selection, delegation boundaries, and collaboration strategy."
icon: "users"
mode: "wide"
---
## When to use
- You need multiple agents with specialized roles.
- You need staged execution and reviewer loops.
## Process decision table
| Process | Best for |
|---|---|
| Sequential | Linear pipelines and deterministic ordering |
| Hierarchical | Manager-controlled planning and delegation |
## Canonical links
- Reference: [/en/ai/crews/reference](/en/ai/crews/reference)
- Patterns: [/en/ai/crews/patterns](/en/ai/crews/patterns)
- Troubleshooting: [/en/ai/crews/troubleshooting](/en/ai/crews/troubleshooting)
- Examples: [/en/ai/crews/examples](/en/ai/crews/examples)
- Existing docs: [/en/concepts/crews](/en/concepts/crews)


@@ -0,0 +1,12 @@
---
title: "Crews: Patterns"
description: "Production crew patterns for decomposition, review loops, and hybrid orchestration with Flows."
icon: "diagram-project"
mode: "wide"
---
## Patterns
1. Researcher + writer + reviewer
2. Manager-directed hierarchical crew
3. Flow-orchestrated multi-crew pipeline


@@ -0,0 +1,21 @@
---
title: "Crews: Reference"
description: "Reference for crew composition, process semantics, task context passing, and execution modes."
icon: "book"
mode: "wide"
---
## Crew contract
- `agents`: available executors
- `tasks`: work units with expected output
- `process`: ordering and delegation semantics
## Runtime
- `kickoff()` for synchronous runs
- `kickoff_async()` for async execution
## Canonical source
Primary API details live in [/en/concepts/crews](/en/concepts/crews).
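
A minimal sketch of this contract; the roles, task text, and process choice are illustrative:

```python
from crewai import Agent, Crew, Process, Task

writer = Agent(
    role="Writer",
    goal="Draft a short summary",
    backstory="Direct and precise.",
)
summary_task = Task(
    description="Summarize the provided notes in three sentences.",
    expected_output="A three-sentence summary.",  # explicit expected output
    agent=writer,
)
crew = Crew(agents=[writer], tasks=[summary_task], process=Process.sequential)

result = crew.kickoff()  # synchronous run
# result = await crew.kickoff_async()  # async execution variant
```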


@@ -0,0 +1,12 @@
---
title: "Crews: Troubleshooting"
description: "Common multi-agent coordination failures and practical fixes."
icon: "circle-exclamation"
mode: "wide"
---
## Common issues
- Agents overlap on responsibilities: tighten role boundaries.
- Output inconsistency: standardize expected outputs per task.
- Slow runs: reduce unnecessary handoffs and model size.


@@ -0,0 +1,17 @@
---
title: "Flows: Examples"
description: "Runnable end-to-end examples for production flow orchestration."
icon: "rocket-launch"
mode: "wide"
---
## Canonical examples
<CardGroup cols={2}>
<Card title="Flowstate Chat History" icon="comments" href="/en/learn/flowstate-chat-history">
Persistent chat history with summary compaction and memory scope.
</Card>
<Card title="Flows Concepts Example" icon="arrow-progress" href="/en/concepts/flows">
Full API and feature-oriented flow examples, including routers and persistence.
</Card>
</CardGroup>

View File

@@ -0,0 +1,39 @@
---
title: "Flows: Concepts"
description: "When to use Flows, when not to use them, and key design constraints for production orchestration."
icon: "arrow-progress"
mode: "wide"
---
## When to use
- You need deterministic orchestration, branching, and resumable execution.
- You need explicit state transitions across steps.
- You need persistence, routing, and event-driven control.
## When not to use
- A single prompt/response interaction is enough.
- You only need one agent call without orchestration logic.
## Core decisions
| Decision | Choose this when |
|---|---|
| Unstructured state | Fast prototyping, highly dynamic fields |
| Structured state | Stable contracts, team development, type safety |
| `@persist()` | Long-running workflows and recovery requirements |
| Router labels | Deterministic branch handling |
## Canonical links
- Reference: [/en/ai/flows/reference](/en/ai/flows/reference)
- Patterns: [/en/ai/flows/patterns](/en/ai/flows/patterns)
- Troubleshooting: [/en/ai/flows/troubleshooting](/en/ai/flows/troubleshooting)
- Examples: [/en/ai/flows/examples](/en/ai/flows/examples)
## Existing docs
- [/en/concepts/flows](/en/concepts/flows)
- [/en/guides/flows/mastering-flow-state](/en/guides/flows/mastering-flow-state)
- [/en/learn/flowstate-chat-history](/en/learn/flowstate-chat-history)


@@ -0,0 +1,29 @@
---
title: "Flows: Patterns"
description: "Production flow patterns: triage routing, flowstate chat history, and human-in-the-loop checkpoints."
icon: "diagram-project"
mode: "wide"
---
## Recommended patterns
1. Triage router flow
- Inputs: normalized request payload
- Output: deterministic route label + action
- Reference: [/en/concepts/flows](/en/concepts/flows)
2. Flowstate chat history
- Inputs: `session_id`, `last_user_message`
- Output: assistant reply + compact context state
- Reference: [/en/learn/flowstate-chat-history](/en/learn/flowstate-chat-history)
3. Human feedback gates
- Inputs: generated artifact + reviewer feedback
- Output: approved/rejected/revision path
- Reference: [/en/learn/human-feedback-in-flows](/en/learn/human-feedback-in-flows)
## Pattern requirements
- declare explicit input schema
- define expected output shape
- list failure modes and retries
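
One way to meet these requirements is a typed boundary at the flow edge, sketched here with Pydantic; the model names and fields are illustrative:

```python
from pydantic import BaseModel


class TriageInput(BaseModel):
    """Explicit input schema for the triage router pattern."""

    request_id: str
    payload: str


class TriageOutput(BaseModel):
    """Expected output shape: a deterministic route label plus an action."""

    route: str  # e.g. "billing", "support", or "escalate"
    action: str
    retries_used: int = 0  # surfaced so failure modes stay observable
```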


@@ -0,0 +1,34 @@
---
title: "Flows: Reference"
description: "API-oriented reference for Flow decorators, lifecycle semantics, state, routing, and persistence."
icon: "book"
mode: "wide"
---
## Decorators
- `@start()` entrypoint, optional conditional trigger
- `@listen(...)` downstream method subscription
- `@router(...)` label-based deterministic routing
- `@persist()` automatic state persistence checkpoints
## Runtime contracts
- `kickoff(inputs=...)` initializes or updates run inputs.
- The final output is the value returned by the last completed method.
- `self.state` always has an auto-generated `id`.
## State contracts
- Use typed state for durable workflows.
- Keep control fields explicit (`route`, `status`, `retry_count`).
- Avoid storing unbounded raw transcripts in state.
## Resume and recovery
- Use persistence for recoverable runs.
- Keep idempotent step logic for safe retries.
## Canonical source
Primary API details live in [/en/concepts/flows](/en/concepts/flows).
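
A compact sketch tying these pieces together; the state fields, labels, and threshold are illustrative, and `@persist()` is omitted for brevity:

```python
from pydantic import BaseModel

from crewai.flow.flow import Flow, listen, router, start


class ReviewState(BaseModel):
    document: str = ""
    status: str = "pending"  # explicit control field


class ReviewFlow(Flow[ReviewState]):
    @start()
    def ingest(self):
        # kickoff(inputs=...) populates matching state fields before this runs
        return self.state.document

    @router(ingest)
    def triage(self):
        # The returned label must match a @listen("...") subscription exactly
        return "approve" if len(self.state.document) < 500 else "escalate"

    @listen("approve")
    def approve(self):
        self.state.status = "approved"
        return self.state.status  # last completed method's value is the flow output

    @listen("escalate")
    def escalate(self):
        self.state.status = "needs_review"
        return self.state.status


flow = ReviewFlow()
print(flow.kickoff(inputs={"document": "short draft"}))
```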


@@ -0,0 +1,28 @@
---
title: "Flows: Troubleshooting"
description: "Common flow failures, causes, and fixes for state, routing, persistence, and resumption."
icon: "circle-exclamation"
mode: "wide"
---
## Common issues
### Branch did not trigger
- Cause: router label mismatch.
- Fix: align returned label with `@listen("label")` exactly.
### State fields missing
- Cause: untyped dynamic writes or missing inputs.
- Fix: switch to typed state and validate required fields at `@start()`.
### Context window blow-up
- Cause: raw message accumulation.
- Fix: use sliding window + summary compaction pattern.
### Resume behavior inconsistent
- Cause: non-idempotent side effects in retried steps.
- Fix: make side-effecting calls idempotent and record execution markers in state.
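
A small sketch of the execution-marker idea; the state layout and names are illustrative:

```python
from typing import Callable


def run_once(state: dict, marker: str, side_effect: Callable[[], None]) -> bool:
    """Run side_effect only if marker has not been recorded in state."""
    done = state.setdefault("completed_markers", [])
    if marker in done:
        return False  # already ran in a previous attempt; safe to skip on resume
    side_effect()
    done.append(marker)  # record only after the call succeeds
    return True
```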


@@ -0,0 +1,12 @@
---
title: "LLMs: Examples"
description: "Concrete examples for model setup, routing, and output-control patterns."
icon: "rocket-launch"
mode: "wide"
---
## Example links
- [/en/concepts/llms](/en/concepts/llms)
- [/en/learn/llm-connections](/en/learn/llm-connections)
- [/en/learn/custom-llm](/en/learn/custom-llm)

docs/en/ai/llms/index.mdx

@@ -0,0 +1,27 @@
---
title: "LLMs: Concepts"
description: "Model selection strategy, cost-quality tradeoffs, and reliability posture for CrewAI systems."
icon: "microchip-ai"
mode: "wide"
---
## When to use advanced LLM configuration
- You need predictable quality, latency, and cost control.
- You need model routing by task type.
## Core decisions
| Decision | Choose this when |
|---|---|
| Single model | Small systems with uniform task profile |
| Routed models | Mixed workloads with different quality/cost needs |
| Structured output | Automation pipelines and strict parsing needs |
## Canonical links
- Reference: [/en/ai/llms/reference](/en/ai/llms/reference)
- Patterns: [/en/ai/llms/patterns](/en/ai/llms/patterns)
- Troubleshooting: [/en/ai/llms/troubleshooting](/en/ai/llms/troubleshooting)
- Examples: [/en/ai/llms/examples](/en/ai/llms/examples)
- Existing docs: [/en/concepts/llms](/en/concepts/llms)


@@ -0,0 +1,17 @@
---
title: "LLMs: Patterns"
description: "Model routing, reliability defaults, and structured outputs for production AI workflows."
icon: "diagram-project"
mode: "wide"
---
## Patterns
1. Role-based model routing
2. Reliability defaults (`timeout`, `max_retries`, low temperature)
3. JSON-first outputs for machine consumption
4. Responses API for multi-turn reasoning flows
## Reference
- [/en/concepts/llms#production-llm-patterns](/en/concepts/llms#production-llm-patterns)


@@ -0,0 +1,25 @@
---
title: "LLMs: Reference"
description: "Provider-agnostic LLM configuration reference for CrewAI projects."
icon: "book"
mode: "wide"
---
## Common parameters
- `model`
- `temperature`
- `max_tokens`
- `timeout`
- `max_retries`
- `response_format`
## Contract guidance
- Set low temperature for extraction/classification.
- Use structured outputs for downstream automation.
- Set explicit timeout and retry policy for production.
## Canonical source
Primary API details live in [/en/concepts/llms](/en/concepts/llms).
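
A configuration sketch using these parameters; the model id and values are illustrative, and parameter availability can vary by provider and version:

```python
from crewai import LLM

extraction_llm = LLM(
    model="openai/gpt-4o-mini",  # illustrative model id
    temperature=0.1,  # low temperature for extraction/classification
    max_tokens=512,
    timeout=30,  # explicit timeout for production use
)
```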


@@ -0,0 +1,12 @@
---
title: "LLMs: Troubleshooting"
description: "Fix common model behavior failures: drift, latency spikes, malformed output, and cost overruns."
icon: "circle-exclamation"
mode: "wide"
---
## Common issues
- Malformed JSON: enforce `response_format` and validate at boundary.
- Latency spikes: route heavy tasks to smaller models when acceptable.
- Cost growth: add budget-aware model routing and truncation rules.


@@ -0,0 +1,11 @@
---
title: "Memory: Examples"
description: "Runnable examples for scoped storage and semantic retrieval in CrewAI."
icon: "rocket-launch"
mode: "wide"
---
## Example links
- [/en/concepts/memory](/en/concepts/memory)
- [/en/learn/flowstate-chat-history](/en/learn/flowstate-chat-history)


@@ -0,0 +1,24 @@
---
title: "Memory: Concepts"
description: "Designing recall systems with scope boundaries and state-vs-memory separation."
icon: "database"
mode: "wide"
---
## When to use memory
- You need semantic recall across runs.
- You need long-term context outside immediate flow state.
## When to use state instead
- Data is only needed for current control flow.
- Data must remain deterministic and explicit per step.
## Canonical links
- Reference: [/en/ai/memory/reference](/en/ai/memory/reference)
- Patterns: [/en/ai/memory/patterns](/en/ai/memory/patterns)
- Troubleshooting: [/en/ai/memory/troubleshooting](/en/ai/memory/troubleshooting)
- Examples: [/en/ai/memory/examples](/en/ai/memory/examples)
- Existing docs: [/en/concepts/memory](/en/concepts/memory)


@@ -0,0 +1,17 @@
---
title: "Memory: Patterns"
description: "Practical memory patterns for session recall, scoped retrieval, and hybrid flow-state designs."
icon: "diagram-project"
mode: "wide"
---
## Patterns
1. Session-scoped recall (`/chat/{session_id}`)
2. Project-scoped knowledge (`/project/{project_id}`)
3. Hybrid pattern: flow state for control, memory for long-tail context
## Reference
- [/en/learn/flowstate-chat-history](/en/learn/flowstate-chat-history)
- [/en/guides/flows/mastering-flow-state](/en/guides/flows/mastering-flow-state)


@@ -0,0 +1,23 @@
---
title: "Memory: Reference"
description: "Reference for remember/recall contracts, scopes, and retrieval tuning."
icon: "book"
mode: "wide"
---
## API surface
- `remember(content, scope=...)`
- `recall(query, limit=...)`
- `extract_memories(text)`
- `scope(path)` and `subscope(name)`
## Scope rules
- use `/{entity_type}/{identifier}` paths
- keep hierarchy shallow
- isolate sessions by stable identifiers
## Canonical source
Primary API details live in [/en/concepts/memory](/en/concepts/memory).
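
To make the contract concrete, here is a tiny in-memory stand-in that mirrors the surface listed above; it is illustrative only, not the CrewAI implementation:

```python
class ScopedMemory:
    """Illustrative stand-in for the remember/recall contract."""

    def __init__(self, scope_path: str = "/") -> None:
        self._scope = scope_path
        self._items: list[tuple[str, str]] = []  # (scope, content) pairs

    def scope(self, path: str) -> "ScopedMemory":
        self._scope = path  # e.g. "/chat/session-123"
        return self

    def remember(self, content: str, scope: str | None = None) -> None:
        self._items.append((scope or self._scope, content))

    def recall(self, query: str, limit: int = 5) -> list[str]:
        # Real recall is semantic; substring matching keeps this sketch runnable.
        hits = [
            content
            for scope, content in self._items
            if scope.startswith(self._scope) and query.lower() in content.lower()
        ]
        return hits[:limit]


memory = ScopedMemory().scope("/chat/session-123")
memory.remember("User prefers concise answers.")
print(memory.recall("concise"))
```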


@@ -0,0 +1,12 @@
---
title: "Memory: Troubleshooting"
description: "Diagnose poor recall quality, scope leakage, and stale memory retrieval."
icon: "circle-exclamation"
mode: "wide"
---
## Common issues
- Irrelevant recall: tighten scopes and query wording.
- Missing recall: check scope path and recency weighting.
- Scope leakage: avoid shared broad scopes for unrelated workflows.

docs/en/ai/overview.mdx

@@ -0,0 +1,54 @@
---
title: "AI-First Documentation"
description: "Canonical, agent-optimized documentation map for Flows, Agents, Crews, LLMs, Memory, and Tools."
icon: "sitemap"
mode: "wide"
---
## Purpose
This section is the canonical map for AI agents and developers.
Use it when you need:
- one source of truth per domain
- predictable page structure
- runnable patterns with explicit inputs and outputs
## Domain Packs
<CardGroup cols={3}>
<Card title="Flows" icon="arrow-progress" href="/en/ai/flows/index">
State, routing, persistence, resume, and orchestration lifecycle.
</Card>
<Card title="Agents" icon="user" href="/en/ai/agents/index">
Agent contracts, tool boundaries, prompt roles, and output discipline.
</Card>
<Card title="Crews" icon="users" href="/en/ai/crews/index">
Multi-agent execution, process choice, delegation, and coordination.
</Card>
<Card title="LLMs" icon="microchip-ai" href="/en/ai/llms/index">
Model configuration contracts, routing, reliability defaults, and providers.
</Card>
<Card title="Memory" icon="database" href="/en/ai/memory/index">
Retrieval semantics, scope design, and state-vs-memory architecture.
</Card>
<Card title="Tools" icon="wrench" href="/en/ai/tools/index">
Tool safety, schema contracts, retries, and integration patterns.
</Card>
</CardGroup>
## Writing Contract
Every domain follows the same structure:
1. Concepts (`index`)
2. Reference (`reference`)
3. Patterns (`patterns`)
4. Troubleshooting (`troubleshooting`)
5. Examples (`examples`)
## Deprecation Policy
When a page is replaced:
- keep a redirect for the old URL
- keep one canonical destination
- avoid duplicated conceptual prose


@@ -0,0 +1,12 @@
---
title: "Tools: Examples"
description: "Practical examples for tool-driven agents and crews."
icon: "rocket-launch"
mode: "wide"
---
## Example links
- [/en/tools/overview](/en/tools/overview)
- [/en/learn/create-custom-tools](/en/learn/create-custom-tools)
- [/en/learn/tool-hooks](/en/learn/tool-hooks)


@@ -0,0 +1,25 @@
---
title: "Tools: Concepts"
description: "Tool selection strategy, safety boundaries, and reliability rules for agentic execution."
icon: "wrench"
mode: "wide"
---
## When to use tools
- Agents need external data or side effects.
- Deterministic systems must be integrated into agent workflows.
## Tool safety rules
- define clear input schemas
- validate outputs before downstream use
- isolate privileged tools behind policy checks
## Canonical links
- Reference: [/en/ai/tools/reference](/en/ai/tools/reference)
- Patterns: [/en/ai/tools/patterns](/en/ai/tools/patterns)
- Troubleshooting: [/en/ai/tools/troubleshooting](/en/ai/tools/troubleshooting)
- Examples: [/en/ai/tools/examples](/en/ai/tools/examples)
- Existing docs: [/en/concepts/tools](/en/concepts/tools)


@@ -0,0 +1,12 @@
---
title: "Tools: Patterns"
description: "Tool execution patterns for retrieval, action safety, and response grounding."
icon: "diagram-project"
mode: "wide"
---
## Patterns
1. Read-first then write pattern
2. Validation gate before side effects
3. Fallback tool chains for degraded mode


@@ -0,0 +1,22 @@
---
title: "Tools: Reference"
description: "Reference for tool invocation contracts, argument schemas, and runtime safeguards."
icon: "book"
mode: "wide"
---
## Tool contract
- deterministic input schema
- stable output schema
- explicit error behavior
## Runtime safeguards
- timeout and retry policy
- idempotency for side effects
- validation before commit
## Canonical source
Primary API details live in [/en/concepts/tools](/en/concepts/tools).
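
A minimal sketch of this contract using CrewAI's custom-tool base class; the tool name, schema, and data are illustrative:

```python
from pydantic import BaseModel, Field

from crewai.tools import BaseTool


class LookupArgs(BaseModel):
    """Deterministic input schema."""

    ticker: str = Field(description="Stock ticker symbol, e.g. AAPL")


class PriceLookupTool(BaseTool):
    name: str = "price_lookup"
    description: str = "Look up the latest cached price for a ticker."
    args_schema: type[BaseModel] = LookupArgs

    def _run(self, ticker: str) -> str:
        prices = {"AAPL": "189.40"}  # illustrative static data
        if ticker not in prices:
            return f"ERROR: unknown ticker {ticker}"  # explicit error behavior
        return prices[ticker]
```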


@@ -0,0 +1,12 @@
---
title: "Tools: Troubleshooting"
description: "Common tool-call failures and fixes for schema mismatch, retries, and side effects."
icon: "circle-exclamation"
mode: "wide"
---
## Common issues
- Schema mismatch: align tool args with declared model output schema.
- Repeated side effects: add idempotency keys.
- Tool timeouts: define retries with bounded backoff.


@@ -16,16 +16,17 @@ Welcome to the CrewAI AMP API reference. This API allows you to programmatically
Navigate to your crew's detail page in the CrewAI AMP dashboard and copy your Bearer Token from the Status tab.
</Step>
<Step title="Discover Required Inputs">
Use the `GET /inputs` endpoint to see what parameters your crew expects.
</Step>
<Step title="Discover Required Inputs">
Use the `GET /inputs` endpoint to see what parameters your crew expects.
</Step>
<Step title="Start a Crew Execution">
Call `POST /kickoff` with your inputs to start the crew execution and receive a `kickoff_id`.
</Step>
<Step title="Start a Crew Execution">
Call `POST /kickoff` with your inputs to start the crew execution and receive
a `kickoff_id`.
</Step>
<Step title="Monitor Progress">
Use `GET /status/{kickoff_id}` to check execution status and retrieve results.
Use `GET /{kickoff_id}/status` to check execution status and retrieve results.
</Step>
</Steps>
@@ -40,13 +41,14 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
### Token Types
| Token Type | Scope | Use Case |
|:-----------|:--------|:----------|
| **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integration |
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |
| Token Type | Scope | Use Case |
| :-------------------- | :------------------------ | :----------------------------------------------------------- |
| **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integration |
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |
<Tip>
You can find both token types in the Status tab of your crew's detail page in the CrewAI AMP dashboard.
You can find both token types in the Status tab of your crew's detail page in
the CrewAI AMP dashboard.
</Tip>
## Base URL
@@ -63,29 +65,33 @@ Replace `your-crew-name` with your actual crew's URL from the dashboard.
1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /status/{kickoff_id}` until completion
3. **Monitoring**: Poll `GET /{kickoff_id}/status` until completion
4. **Results**: Extract the final output from the completed response
## Error Handling
The API uses standard HTTP status codes:
| Code | Meaning |
|------|:--------|
| `200` | Success |
| `400` | Bad Request - Invalid input format |
| `401` | Unauthorized - Invalid bearer token |
| `404` | Not Found - Resource doesn't exist |
| Code | Meaning |
| ----- | :----------------------------------------- |
| `200` | Success |
| `400` | Bad Request - Invalid input format |
| `401` | Unauthorized - Invalid bearer token |
| `404` | Not Found - Resource doesn't exist |
| `422` | Validation Error - Missing required inputs |
| `500` | Server Error - Contact support |
| `500` | Server Error - Contact support |
## Interactive Testing
<Info>
**Why no "Send" button?** Since each CrewAI AMP user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.
**Why no "Send" button?** Since each CrewAI AMP user has their own unique crew
URL, we use **reference mode** instead of an interactive playground to avoid
confusion. This shows you exactly what the requests should look like without
non-functional send buttons.
</Info>
Each endpoint page shows you:
- ✅ **Exact request format** with all parameters
- ✅ **Response examples** for success and error cases
- ✅ **Code samples** in multiple languages (cURL, Python, JavaScript, etc.)
@@ -103,6 +109,7 @@ Each endpoint page shows you:
</CardGroup>
**Example workflow:**
1. **Copy this cURL example** from any endpoint page
2. **Replace `your-actual-crew-name.crewai.com`** with your real crew URL
3. **Replace the Bearer token** with your real token from the dashboard
@@ -111,10 +118,18 @@ Each endpoint page shows you:
## Need Help?
<CardGroup cols={2}>
<Card title="Enterprise Support" icon="headset" href="mailto:support@crewai.com">
<Card
title="Enterprise Support"
icon="headset"
href="mailto:support@crewai.com"
>
Get help with API integration and troubleshooting
</Card>
<Card title="Enterprise Dashboard" icon="chart-line" href="https://app.crewai.com">
<Card
title="Enterprise Dashboard"
icon="chart-line"
href="https://app.crewai.com"
>
Manage your crews and view execution logs
</Card>
</CardGroup>


@@ -1,8 +1,6 @@
---
title: "GET /status/{kickoff_id}"
title: "GET /{kickoff_id}/status"
description: "Get execution status"
openapi: "/enterprise-api.en.yaml GET /status/{kickoff_id}"
openapi: "/enterprise-api.en.yaml GET /{kickoff_id}/status"
mode: "wide"
---


@@ -4,6 +4,584 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Jan 26, 2026">
## v1.9.0
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.9.0)
## What's Changed
### Features
- Add structured outputs and response_format support across providers
- Add response ID to streaming responses
- Add event ordering with parent-child hierarchies
- Add Keycloak SSO authentication support
- Add multimodal file handling capabilities
- Add native OpenAI responses API support
- Add A2A task execution utilities
- Add A2A server configuration and agent card generation
- Enhance event system and expand transport options
- Improve tool calling mechanisms
### Bug Fixes
- Enhance file store with fallback memory cache when aiocache is not available
- Ensure document list is not empty
- Handle Bedrock stop sequences properly
- Add Google Vertex API key support
- Enhance Azure model stop word detection
- Improve error handling for HumanFeedbackPending in flow execution
- Fix execution span task unlinking
### Documentation
- Add native file handling documentation
- Add OpenAI responses API documentation
- Add agent card implementation guidance
- Refine A2A documentation
- Update changelog for v1.8.0
### Contributors
@Anaisdg, @GininDenis, @Vidit-Ostwal, @greysonlalonde, @heitorado, @joaomdmoura, @koushiv777, @lorenzejay, @nicoferdi96, @vinibrsl
</Update>
<Update label="Jan 15, 2026">
## v1.8.1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.8.1)
## What's Changed
### Features
- Add A2A task execution utilities
- Add A2A server configuration and agent card generation
- Add additional transport mechanisms
- Add Galileo integration support
### Bug Fixes
- Improve Azure model compatibility
- Expand frame inspection depth to detect parent_flow
- Resolve task execution span management issues
- Enhance error handling for human feedback scenarios during flow execution
### Documentation
- Add A2A agent card documentation
- Add PII redaction feature documentation
### Contributors
@Anaisdg, @GininDenis, @greysonlalonde, @joaomdmoura, @koushiv777, @lorenzejay, @vinibrsl
</Update>
<Update label="Jan 08, 2026">
## v1.8.0
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.8.0)
## What's Changed
### Features
- Add native async chain for a2a
- Add a2a update mechanisms (poll/stream/push) with handlers and config
- Introduce global flow configuration for human-in-the-loop feedback
- Add streaming tool call events and fix provider ID tracking
- Introduce production-ready Flows and Crews architecture
- Add HITL for Flows
- Improve EventListener and TraceCollectionListener for enhanced event handling
### Bug Fixes
- Handle missing a2a dependency as optional
- Correct error fetching for WorkOS login polling
- Fix wrong trigger name in sample documentation
### Documentation
- Update webhook-streaming documentation
- Adjust AOP to AMP documentation language
### Contributors
@Vidit-Ostwal, @greysonlalonde, @heitorado, @joaomdmoura, @lorenzejay, @lucasgomide, @mplachta
</Update>
<Update label="Dec 19, 2025">
## v1.7.2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.7.2)
## What's Changed
### Bug Fixes
- Resolve connection issues
### Documentation
- Update api-reference/status docs page
### Contributors
@greysonlalonde, @heitorado, @lorenzejay, @lucasgomide
</Update>
<Update label="Dec 16, 2025">
## v1.7.1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.7.1)
## What's Changed
### Improvements
- Add `--no-commit` flag to bump command
- Use JSON schema for tool argument serialization
### Bug Fixes
- Fix error message display from response when tool repository login fails
- Fix graceful termination of future when executing a task asynchronously
- Fix task ordering by adding index
- Fix platform compatibility checks for Windows signals
- Fix RPM controller timer to prevent process hang
- Fix token usage recording and validate response model on stream
### Documentation
- Add translated documentation for async
- Add documentation for AOP Deploy API
- Add documentation for the agent handler connector
- Add documentation on native async
### Contributors
@Llamrei, @dragosmc, @gilfeig, @greysonlalonde, @heitorado, @lorenzejay, @mattatcha, @vinibrsl
</Update>
<Update label="Dec 09, 2025">
## v1.7.0
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.7.0)
## What's Changed
### Features
- Add async flow kickoff
- Add async crew support
- Add async task support
- Add async knowledge support
- Add async memory support
- Add async support for tools and agent executor; improve typing and docs
- Implement a2a extensions API and async agent card caching; fix task propagation & streaming
- Add native async tool support
- Add async llm support
- Create sys event types and handler
### Bug Fixes
- Fix issue to ensure nonetypes are not passed to otel
- Fix deadlock in token store file operations
- Fix to ensure otel span is closed
- Use HuggingFaceEmbeddingFunction for embeddings, update keys and add tests
- Fix to ensure supports_tools is true for all supported anthropic models
- Ensure hooks work with lite agents flows
### Contributors
@greysonlalonde, @lorenzejay
</Update>
<Update label="Nov 29, 2025">
## v1.6.1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.6.1)
## What's Changed
### Bug Fixes
- Fix ChatCompletionsClient call to ensure proper functionality
- Ensure async methods are executable for annotations
- Fix parameters in RagTool.add, add typing, and tests
- Remove invalid parameter from SSE client
- Erase 'oauth2_extra' setting on 'crewai config reset' command
### Refactoring
- Enhance model validation and provider inference in LLM class
### Contributors
@Vidit-Ostwal, @greysonlalonde, @heitorado, @lorenzejay
</Update>
<Update label="Nov 25, 2025">
## v1.6.0
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.6.0)
## What's Changed
### Features
- Add streaming result support to flows and crews
- Add gemini-3-pro-preview
- Support CLI login with Entra ID
- Add Merge Agent Handler tool
- Enhance flow event state management
### Bug Fixes
- Ensure custom rag store persist path is set if passed
- Ensure fuzzy returns are more strict and show type warning
- Re-add openai response_format parameter and add test
- Fix rag tool embeddings configuration
- Ensure flow execution start panel is not shown on plot
### Documentation
- Update references from AMP to AOP in documentation
- Update AMP to AOP
### Contributors
@Vidit-Ostwal, @gilfeig, @greysonlalonde, @heitorado, @joaomdmoura, @lorenzejay, @markmcd
</Update>
<Update label="Nov 22, 2025">
## v0.203.2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.203.2)
## What's Changed
- Hotfix version bump from 0.203.1 to 0.203.2
</Update>
<Update label="Nov 16, 2025">
## v1.5.0
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.5.0)
## What's Changed
### Features
- Add a2a trust remote completion status flag
- Fetch and store more data about Okta authorization server
- Implement before and after LLM call hooks in CrewAgentExecutor
- Expose messages to TaskOutput and LiteAgentOutputs
- Enhance schema description of QdrantVectorSearchTool
### Bug Fixes
- Ensure tracing instrumentation flags are correctly applied
- Fix custom tool documentation links and add Mintlify broken links action
### Documentation
- Enhance task guardrail documentation with LLM-based validation support
### Contributors
@danielfsbarreto, @greysonlalonde, @heitorado, @lorenzejay, @theCyberTech
</Update>
<Update label="Nov 07, 2025">
## v1.4.1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.4.1)
## What's Changed
### Bug Fixes
- Fix handling of agent max iterations
- Resolve routing issues for LLM model syntax to respected providers
### Contributors
@greysonlalonde
</Update>
<Update label="Nov 07, 2025">
## v1.4.0
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.4.0)
## What's Changed
### Features
- Add support for non-AST plot routes
- Implement first-class support for MCP
- Add Pydantic validation dunder to BaseInterceptor
- Add support for LLM message interceptor hooks
- Cache i18n prompts for efficient use
- Enhance QdrantVectorSearchTool
### Bug Fixes
- Fix issues with keeping stopwords updated
- Resolve unpickleable values in flow state
- Ensure lite agents course-correct on validation errors
- Fix callback argument hashing to ensure caching works
- Allow adding RAG source content from valid URLs
- Make plot node selection smoother
- Fix duplicating document IDs for knowledge
### Refactoring
- Improve MCP tool execution handling with concurrent futures
- Simplify flow handling, typing, and logging; update UI and tests
- Refactor stop word management to a property
### Documentation
- Migrate embedder to embedding_model and require vectordb across tool docs; add provider examples (en/ko/pt-BR)
### Contributors
@danielfsbarreto, @greysonlalonde, @lorenzejay, @lucasgomide, @tonykipkemboi
</Update>
<Update label="Nov 01, 2025">
## v1.3.0
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.3.0)
## What's Changed
### Features
- Refactor flow handling, typing, and logging
- Enhance QdrantVectorSearchTool
### Bug Fixes
- Fix Firecrawl tools and add tests
- Refactor use_stop_words to property and add check for stop words
### Documentation
- Migrate embedder to embedding_model and require vectordb across tool docs
- Add provider examples in English, Korean, and Portuguese
### Refactoring
- Improve flow handling and UI updates
### Contributors
@danielfsbarreto, @greysonlalonde, @lorenzejay, @lucasgomide, @tonykipkemboi
</Update>
<Update label="Oct 27, 2025">
## v1.2.1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.2.1)
## What's Changed
### Features
- Add support for Datadog integration
- Support apps and mcps in liteagent
### Documentation
- Describe mandatory environment variable for calling Platform tools for each integration
- Add Datadog integration documentation
### Contributors
@barieom, @lorenzejay, @lucasgomide, @sabrenner
</Update>
<Update label="Oct 24, 2025">
## v1.2.0
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.2.0)
## What's Changed
### Bug Fixes
- Update default LLM model and improve error logging in LLM utilities
- Change flow visualization directory and method inspection
### Removals
- Remove aisuite
### Contributors
@greysonlalonde, @lorenzejay
</Update>
<Update label="Oct 21, 2025">
## v1.1.0
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.1.0)
## What's Changed
### Features
- Enhance InternalInstructor to support multiple LLM providers
- Implement mypy plugin base
- Improve QdrantVectorSearchTool
### Bug Fixes
- Correct broken integration documentation links
- Fix double trace call and add types
- Pin template versions to latest
### Documentation
- Update LLM integration details and examples
### Refactoring
- Improve CrewBase typing
### Contributors
@cwarre33, @danielfsbarreto, @greysonlalonde, @lorenzejay
</Update>
<Update label="Oct 20, 2025">
## v1.0.0
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.0.0)
## What's Changed
### Features
- Bump versions to 1.0.0
- Enhance knowledge and guardrail event handling in Agent class
- Inject tool repository credentials in crewai run command
### Bug Fixes
- Preserve nested condition structure in Flow decorators
- Add standard print parameters to Printer.print method
- Fix errors when there is no input() available
- Add a leeway of 10s when decoding JWT
- Revert bad cron schedule
- Correct cron schedule to run every 5 days at specific dates
- Use system PATH for Docker binary instead of hardcoded path
- Add CodeQL configuration to properly exclude template directories
### Documentation
- Update security policy for vulnerability reporting
- Add guide for capturing telemetry logs in CrewAI AMP
- Add missing /resume files
- Clarify webhook URL parameter in HITL workflows
### Contributors
@Vidit-Ostwal, @greysonlalonde, @heitorado, @joaomdmoura, @lorenzejay, @lucasgomide, @mplachta, @theCyberTech
</Update>
<Update label="Oct 18, 2025">
## v1.0.0b3 (Pre-release)
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.0.0b3)
## What's Changed
### Features
- Enhance task guardrail functionality and validation
- Improve support for importing native SDK
- Add Azure native tests
- Enhance BedrockCompletion class with advanced features
- Enhance GeminiCompletion class with client parameter support
- Enhance AnthropicCompletion class with additional client parameters
### Bug Fixes
- Preserve nested condition structure in Flow decorators
- Add standard print parameters to Printer.print method
- Remove stdout prints and improve test determinism
### Refactoring
- Convert project module to metaclass with full typing
### Contributors
@greysonlalonde, @lorenzejay
</Update>
<Update label="Oct 16, 2025">
## v1.0.0b2 (Pre-release)
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.0.0b2)
## What's Changed
### Features
- Enhance OpenAICompletion class with additional client parameters
- Improve event bus thread safety and async support
- Inject tool repository credentials in crewai run command
### Bug Fixes
- Fix issue where it errors out if there is no input() available
- Add a leeway of 10s when decoding JWT
- Fix copying and adding NOT_SPECIFIED check in task.py
### Documentation
- Ensure CREWAI_PLATFORM_INTEGRATION_TOKEN is mentioned in documentation
- Update triggers documentation
### Contributors
@Vidit-Ostwal, @greysonlalonde, @heitorado, @joaomdmoura, @lorenzejay, @lucasgomide
</Update>
<Update label="Oct 14, 2025">
## v1.0.0b1 (Pre-release)
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.0.0b1)
## What's Changed
### Features
- Enhance OpenAICompletion class with additional client parameters
- Improve event bus thread safety and async support
- Implement Bedrock LLM integration
### Bug Fixes
- Fix issue with missing input() availability
- Resolve JWT decoding error by adding a leeway of 10 seconds
- Inject tool repository credentials in crewai run command
- Fix copy and add NOT_SPECIFIED check in task.py
### Documentation
- Ensure CREWAI_PLATFORM_INTEGRATION_TOKEN is mentioned in documentation
- Update triggers documentation
### Contributors
@Vidit-Ostwal, @greysonlalonde, @heitorado, @joaomdmoura, @lorenzejay, @lucasgomide
</Update>
<Update label="Oct 13, 2025">
## v0.203.1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.203.1)
## What's Changed
### Core Improvements & Fixes
- Fixed injection of tool repository credentials into the `crewai run` command
- Added a 10-second leeway when decoding JWTs to reduce token validation errors
- Corrected (then reverted) cron schedule fix intended to run jobs every 5 days at specific dates
### Documentation & Guides
- Updated security policy to clarify the process for vulnerability reporting
</Update>
<Update label="Oct 09, 2025">
## v1.0.0a4 (Pre-release)
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.0.0a4)
## What's Changed
### Features
- Enhance knowledge and guardrail event handling in Agent class
- Introduce trigger listing and execution commands for local development
- Update documentation with new approach to consume Platform Actions
- Add guide for capturing telemetry logs in CrewAI AMP
### Bug Fixes
- Revert bad cron schedule
- Correct cron schedule to run every 5 days at specific dates
- Remove duplicate line and add explicit environment variable
- Resolve linting errors across the codebase
- Replace print statements with logger in agent and memory handling
- Use system PATH for Docker binary instead of hardcoded path
- Allow failed PyPI publish
- Match tag and release title, ignore devtools build for PyPI
### Documentation
- Update security policy for vulnerability reporting
- Add missing /resume files
- Clarify webhook URL parameter in HITL workflows
### Contributors
@Vidit-Ostwal, @greysonlalonde, @lorenzejay, @lucasgomide, @theCyberTech
</Update>
<Update label="Sep 30, 2025">
## v1.0.0a1

</Update>
## Overview of an Agent
In the CrewAI framework, an `Agent` is an autonomous unit that can:
- Perform specific tasks
- Make decisions based on its role and goal
- Use tools to accomplish objectives
- Delegate tasks when allowed
<Tip>
Think of an agent as a specialized team member with specific skills,
expertise, and responsibilities. For example, a `Researcher` agent might excel
at gathering and analyzing information, while a `Writer` agent might be better
at creating content.
</Tip>
## When to Use Agents
- You need role-specific reasoning and decision-making.
- You need tool-enabled execution with delegated responsibilities.
- You need reusable behavioral units across tasks and crews.
## When Not to Use Agents
- Deterministic business logic in plain code is sufficient.
- A static transformation without reasoning is sufficient.
<Note type="info" title="Enterprise Enhancement: Visual Agent Builder">
CrewAI AMP includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time.
![Visual Agent Builder Screenshot](/images/enterprise/crew-studio-interface.png)
The Visual Agent Builder enables:
- Intuitive agent configuration with form-based interfaces
- Real-time testing and validation
- Template library with pre-configured agent types
## Agent Attributes
| Attribute | Parameter | Type | Description |
| :-------------------------------------- | :----------------------- | :------------------------------------ | :------------------------------------------------------------------------------------------------------- |
| **Role** | `role` | `str` | Defines the agent's function and expertise within the crew. |
| **Goal** | `goal` | `str` | The individual objective that guides the agent's decision-making. |
| **Backstory** | `backstory` | `str` | Provides context and personality to the agent, enriching interactions. |
| **LLM** _(optional)_ | `llm` | `Union[str, LLM, Any]` | Language model that powers the agent. Defaults to the model specified in `OPENAI_MODEL_NAME` or "gpt-4". |
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | Capabilities or functions available to the agent. Defaults to an empty list. |
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | `Optional[Any]` | Language model for tool calling, overrides crew's LLM if specified. |
| **Max Iterations** _(optional)_ | `max_iter` | `int` | Maximum iterations before the agent must provide its best answer. Default is 20. |
| **Max RPM** _(optional)_ | `max_rpm` | `Optional[int]` | Maximum requests per minute to avoid rate limits. |
| **Max Execution Time** _(optional)_ | `max_execution_time` | `Optional[int]` | Maximum time (in seconds) for task execution. |
| **Verbose** _(optional)_ | `verbose` | `bool` | Enable detailed execution logs for debugging. Default is False. |
| **Allow Delegation** _(optional)_ | `allow_delegation` | `bool` | Allow the agent to delegate tasks to other agents. Default is False. |
| **Step Callback** _(optional)_ | `step_callback` | `Optional[Any]` | Function called after each agent step, overrides crew callback. |
| **Cache** _(optional)_ | `cache` | `bool` | Enable caching for tool usage. Default is True. |
| **System Template** _(optional)_ | `system_template` | `Optional[str]` | Custom system prompt template for the agent. |
| **Prompt Template** _(optional)_ | `prompt_template` | `Optional[str]` | Custom prompt template for the agent. |
| **Response Template** _(optional)_ | `response_template` | `Optional[str]` | Custom response template for the agent. |
| **Allow Code Execution** _(optional)_ | `allow_code_execution` | `Optional[bool]` | Enable code execution for the agent. Default is False. |
| **Max Retry Limit** _(optional)_ | `max_retry_limit` | `int` | Maximum number of retries when an error occurs. Default is 2. |
| **Respect Context Window** _(optional)_ | `respect_context_window` | `bool` | Keep messages under context window size by summarizing. Default is True. |
| **Code Execution Mode** _(optional)_ | `code_execution_mode` | `Literal["safe", "unsafe"]` | Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct). Default is 'safe'. |
| **Multimodal** _(optional)_ | `multimodal` | `bool` | Whether the agent supports multimodal capabilities. Default is False. |
| **Inject Date** _(optional)_ | `inject_date` | `bool` | Whether to automatically inject the current date into tasks. Default is False. |
| **Date Format** _(optional)_ | `date_format` | `str` | Format string for date when inject_date is enabled. Default is "%Y-%m-%d" (ISO format). |
| **Reasoning** _(optional)_ | `reasoning` | `bool` | Whether the agent should reflect and create a plan before executing a task. Default is False. |
| **Max Reasoning Attempts** _(optional)_ | `max_reasoning_attempts` | `Optional[int]` | Maximum number of reasoning attempts before executing the task. If None, will try until ready. |
| **Embedder** _(optional)_ | `embedder` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | `Optional[List[BaseKnowledgeSource]]` | Knowledge sources available to the agent. |
| **Use System Prompt** _(optional)_ | `use_system_prompt` | `Optional[bool]` | Whether to use system prompt (for o1 model support). Default is True. |
## Creating Agents
<Note>
The names you use in your YAML files (`agents.yaml`) should match the method
names in your Python code.
</Note>
### Direct Code Definition
Let's break down some key parameter combinations for common use cases:
#### Basic Research Agent
```python Code
research_agent = Agent(
role="Research Analyst",
```
#### Code Development Agent
```python Code
dev_agent = Agent(
role="Senior Python Developer",
```
#### Long-Running Analysis Agent
```python Code
analysis_agent = Agent(
role="Data Analyst",
```
#### Custom Template Agent
```python Code
custom_agent = Agent(
role="Customer Service Representative",
```
#### Date-Aware Agent with Reasoning
```python Code
strategic_agent = Agent(
role="Market Analyst",
```
#### Reasoning Agent
```python Code
reasoning_agent = Agent(
role="Strategic Planner",
```
#### Multimodal Agent
```python Code
multimodal_agent = Agent(
role="Visual Content Analyst",
```
### Parameter Details
#### Critical Parameters
- `role`, `goal`, and `backstory` are required and shape the agent's behavior
- `llm` determines the language model used (default: OpenAI's GPT-4)
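A minimal configuration covering just these essentials might look like the following sketch (the role, goal, and backstory values are illustrative):
```python Code
from crewai import Agent

# Only role, goal, and backstory are required; llm falls back to
# OPENAI_MODEL_NAME (or "gpt-4") when omitted.
agent = Agent(
    role="Research Analyst",
    goal="Find and summarize information about specific topics",
    backstory="An experienced researcher with strong attention to detail",
    llm="gpt-4o",
)
```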
#### Memory and Context
- `memory`: Enable to maintain conversation history
- `respect_context_window`: Prevents token limit issues
- `knowledge_sources`: Add domain-specific knowledge bases
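As a sketch, these options can be combined like this (the knowledge source content is illustrative):
```python Code
from crewai import Agent
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# Illustrative domain knowledge for the agent to draw on
policy_source = StringKnowledgeSource(
    content="Refunds are processed within 5 business days of approval."
)

support_agent = Agent(
    role="Support Specialist",
    goal="Answer customer questions using company policy",
    backstory="A support veteran who knows internal policy well",
    memory=True,                    # maintain conversation history
    respect_context_window=True,    # summarize instead of overflowing
    knowledge_sources=[policy_source],
)
```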
#### Execution Control
- `max_iter`: Maximum attempts before giving best answer
- `max_execution_time`: Timeout in seconds
- `max_rpm`: Rate limiting for API calls
- `max_retry_limit`: Retries on error
#### Code Execution
- `allow_code_execution`: Must be True to run code
- `code_execution_mode`:
- `"safe"`: Uses Docker (recommended for production)
- `"unsafe"`: Direct execution (use only in trusted environments)
<Note>
This runs a default Docker image. If you want to configure the Docker image, check out the Code Interpreter Tool in the tools section and add it to the agent via the `tools` parameter.
</Note>
#### Advanced Features
- `multimodal`: Enable multimodal capabilities for processing text and visual content
- `reasoning`: Enable agent to reflect and create plans before executing tasks
- `inject_date`: Automatically inject current date into task descriptions
#### Templates
- `system_template`: Defines agent's core behavior
- `prompt_template`: Structures input format
- `response_template`: Formats agent responses
<Note>
When using custom templates, ensure that both `system_template` and
`prompt_template` are defined. The `response_template` is optional but
recommended for consistent output formatting.
</Note>
<Note>
When using custom templates, you can use variables like `{role}`, `{goal}`,
and `{backstory}` in your templates. These will be automatically populated
during execution.
</Note>
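Putting the three templates together, a sketch might look like the following; `{role}`, `{goal}`, and `{backstory}` are the documented variables, while the `{input}` and `{output}` placeholders are assumptions for illustration:
```python Code
from crewai import Agent

custom_agent = Agent(
    role="Customer Service Representative",
    goal="Resolve customer issues politely and accurately",
    backstory="An experienced support specialist",
    system_template="You are {role}. {backstory} Your goal is: {goal}",
    prompt_template="Current request: {input}",  # assumed placeholder
    response_template="{output}",                # assumed placeholder
)
```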
## Agent Tools
Agents can be equipped with various tools to enhance their capabilities. CrewAI supports tools from:
- [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools)
- [LangChain Tools](https://python.langchain.com/docs/integrations/tools)
<Note>
When `memory` is enabled, the agent will maintain context across multiple
interactions, improving its ability to handle complex, multi-step tasks.
</Note>
## Context Window Management
**What happens when context limits are exceeded:**
- ⚠️ **Warning message**: `"Context length exceeded. Summarizing content to fit the model context window."`
- 🔄 **Automatic summarization**: CrewAI intelligently summarizes the conversation history
- ✅ **Continued execution**: Task execution continues seamlessly with the summarized context
**What happens when context limits are exceeded:**
- ❌ **Error message**: `"Context length exceeded. Consider using smaller text or RAG tools from crewai_tools."`
- 🛑 **Execution stops**: Task execution halts immediately
- 🔧 **Manual intervention required**: You need to modify your approach
### Choosing the Right Setting
#### Use `respect_context_window=True` (Default) when:
- **Processing large documents** that might exceed context limits
- **Long-running conversations** where some summarization is acceptable
- **Research tasks** where general context is more important than exact details
#### Use `respect_context_window=False` when:
- **Precision is critical** and information loss is unacceptable
- **Legal or medical tasks** requiring complete context
- **Code review** where missing details could introduce bugs
When dealing with very large datasets, consider these strategies:
#### 1. Use RAG Tools
```python Code
from crewai_tools import RagTool
```
#### 2. Use Knowledge Sources
```python Code
# Use knowledge sources instead of large prompts
knowledge_agent = Agent(
```
### Troubleshooting Context Issues
**If you're getting context limit errors:**
```python Code
# Quick fix: Enable automatic handling
agent.respect_context_window = True
```
**If automatic summarization loses important information:**
```python Code
# Disable auto-summarization and use RAG instead
agent = Agent(
```
<Note>
The context window management feature works automatically in the background.
You don't need to call any special functions - just set
`respect_context_window` to your preferred behavior and CrewAI handles the
rest!
</Note>
## Direct Agent Interaction with `kickoff()`
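For example, assuming an `agent` configured as in the sections above, a direct call might look like this sketch:
```python Code
result = agent.kickoff("What are the latest developments in AI?")
print(result.raw)  # raw text of the agent's final answer
```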
### Parameters and Return Values
| Parameter | Type | Description |
| :---------------- | :--------------------------------- | :------------------------------------------------------------------------ |
| `messages` | `Union[str, List[Dict[str, str]]]` | Either a string query or a list of message dictionaries with role/content |
| `response_format` | `Optional[Type[Any]]` | Optional Pydantic model for structured output |
The method returns a `LiteAgentOutput` object containing the agent's final output, including its raw text and, when a `response_format` is provided, a parsed structured output.
<Note>
The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler
execution flow while preserving all of the agent's configuration (role, goal,
backstory, tools, etc.).
</Note>
## Important Considerations and Best Practices
### Security and Code Execution
- When using `allow_code_execution`, be cautious with user input and always validate it
- Use `code_execution_mode: "safe"` (Docker) in production environments
- Consider setting appropriate `max_execution_time` limits to prevent infinite loops
### Performance Optimization
- Use `respect_context_window: true` to prevent token limit issues
- Set appropriate `max_rpm` to avoid rate limiting
- Enable `cache: true` to improve performance for repetitive tasks
- Adjust `max_iter` and `max_retry_limit` based on task complexity
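Taken together, a performance-conscious configuration might look like this sketch (values are illustrative starting points):
```python Code
from crewai import Agent

optimized_agent = Agent(
    role="Research Assistant",
    goal="Gather sources quickly without tripping provider limits",
    backstory="Efficient and methodical",
    respect_context_window=True,  # avoid token-limit failures
    max_rpm=10,                   # stay under API rate limits
    cache=True,                   # reuse cached tool results
    max_iter=15,
    max_retry_limit=2,
)
```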
### Memory and Context Management
- Leverage `knowledge_sources` for domain-specific information
- Configure `embedder` when using custom embedding models
- Use custom templates (`system_template`, `prompt_template`, `response_template`) for fine-grained control over agent behavior
### Advanced Features
- Enable `reasoning: true` for agents that need to plan and reflect before executing complex tasks
- Set appropriate `max_reasoning_attempts` to control planning iterations (None for unlimited attempts)
- Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks
- Enable `multimodal: true` for agents that need to process both text and visual content
### Agent Collaboration
- Enable `allow_delegation: true` when agents need to work together
- Use `step_callback` to monitor and log agent interactions
- Consider using different LLMs for different purposes:
- `function_calling_llm` for efficient tool usage
### Date Awareness and Reasoning
- Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks
- Customize the date format with `date_format` using standard Python datetime format codes
- Valid format codes include: %Y (year), %m (month), %d (day), %B (full month name), etc.
- Enable `reasoning: true` for complex tasks that benefit from upfront planning and reflection
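For example, a date-aware planning agent might be sketched as follows (role and values are illustrative):
```python Code
from crewai import Agent

planner = Agent(
    role="Market Analyst",
    goal="Plan a product launch relative to today's date",
    backstory="A strategist who times releases carefully",
    inject_date=True,          # appends the current date to task descriptions
    date_format="%B %d, %Y",   # e.g. "November 01, 2025"
    reasoning=True,            # reflect and plan before executing
    max_reasoning_attempts=2,
)
```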
### Model Compatibility
- Set `use_system_prompt: false` for older models that don't support system messages
- Ensure your chosen `llm` supports the features you need (like function calling)
## Troubleshooting Common Issues
1. **Rate Limiting**: If you're hitting API rate limits:
- Implement appropriate `max_rpm`
- Use caching for repetitive operations
- Consider batching requests
2. **Context Window Errors**: If you're exceeding context limits:
- Enable `respect_context_window`
- Use more efficient prompts
- Clear agent memory periodically
3. **Code Execution Issues**: If code execution fails:
- Verify Docker is installed for safe mode
- Check execution permissions
- Review code sandbox settings


---
icon: terminal
mode: "wide"
---
<Warning>
Since release 0.140.0, CrewAI AMP started a process of migrating their login
provider. As such, the authentication flow via CLI was updated. Users that use
Google to login, or that created their account after July 3rd, 2025 will be
unable to log in with older versions of the `crewai` library.
</Warning>
## Overview
```shell Terminal
crewai create [OPTIONS] TYPE NAME
```
- `NAME`: Name of the crew or flow
Example:
```shell Terminal
crewai create crew my_new_crew
crewai create flow my_new_flow
```shell Terminal
crewai version [OPTIONS]
```
- `--tools`: (Optional) Show the installed version of CrewAI tools
Example:
```shell Terminal
crewai version
crewai version --tools
```shell Terminal
crewai train [OPTIONS]
```
- `-f, --filename TEXT`: Path to a custom file for training (default: "trained_agents_data.pkl")
Example:
```shell Terminal
crewai train -n 10 -f my_training_data.pkl
```
```shell Terminal
crewai replay [OPTIONS]
```
- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks
Example:
```shell Terminal
crewai replay -t task_123456
```
```shell Terminal
crewai reset-memories [OPTIONS]
```
- `-a, --all`: Reset ALL memories
Example:
```shell Terminal
crewai reset-memories --long --short
crewai reset-memories --all
```shell Terminal
crewai test [OPTIONS]
```
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")
Example:
```shell Terminal
crewai test -n 5 -m gpt-3.5-turbo
```
```shell Terminal
crewai run
```
<Note>
Starting from version 0.103.0, the `crewai run` command can be used to run
both standard crews and flows. For flows, it automatically detects the type
from pyproject.toml and runs the appropriate command. This is now the
recommended way to run both crews and flows.
</Note>
<Note>
Make sure to run these commands from the directory where your CrewAI project
is set up. Some commands may require additional configuration or setup within
your project structure.
</Note>
### 9. Chat
After receiving the results, you can continue interacting with the assistant for follow-up questions.
```shell Terminal
crewai chat
```
<Note>
Ensure you execute these commands from your CrewAI project's root directory.
</Note>
<Note>
You can set the LLM used for chat orchestration via `chat_llm` when building the crew:
```python Code
def crew(self) -> Crew:
    return Crew(
        ...
        chat_llm="gpt-4o",  # LLM for chat orchestration
    )
```
</Note>
### 10. Deploy
Deploy the crew or flow to [CrewAI AMP](https://app.crewai.com).
- **Authentication**: You need to be authenticated to deploy to CrewAI AMP.
You can login or create an account with:
```shell Terminal
crewai login
```
- **Create a deployment**: Once you are authenticated, you can create a deployment for your crew or flow from the root of your local project.
```shell Terminal
crewai deploy create
```
- Reads your local project configuration.
- Prompts you to confirm the environment variables (like `OPENAI_API_KEY`, `SERPER_API_KEY`) found locally. These will be securely stored with the deployment on the Enterprise platform. Ensure your sensitive keys are correctly configured locally (e.g., in a `.env` file) before running this.
### 11. Organization Management
```shell Terminal
crewai org [COMMAND] [OPTIONS]
```
#### Commands:
- `list`: List all organizations you belong to
```shell Terminal
crewai org list
```
- `current`: Display your currently active organization
```shell Terminal
crewai org current
```
- `switch`: Switch to a specific organization
```shell Terminal
crewai org switch <organization_id>
```
<Note>
You must be authenticated to CrewAI AMP to use these organization management
commands.
</Note>
- **Create a deployment** (continued):
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI AMP.
```shell Terminal
crewai deploy push
```
- Initiates the deployment process on the CrewAI AMP platform.
- Upon successful initiation, it will output the Deployment created successfully! message along with the Deployment Name and a unique Deployment ID (UUID).
- **Deployment Status**: You can check the status of your deployment with:
```shell Terminal
crewai deploy status
```
This fetches the latest deployment status of your most recent deployment attempt (e.g., `Building Images for Crew`, `Deploy Enqueued`, `Online`).
- **Deployment Logs**: You can check the logs of your deployment with:
```shell Terminal
crewai deploy logs
```
This streams the deployment logs to your terminal.
- **List deployments**: You can list all your deployments with:
```shell Terminal
crewai deploy list
```
This lists all your deployments.
- **Delete a deployment**: You can delete a deployment with:
```shell Terminal
crewai deploy remove
```
This deletes the deployment from the CrewAI AMP platform.
- **Help Command**: You can get help with the CLI with:
```shell Terminal
crewai deploy --help
```
This shows the help message for the CrewAI Deploy CLI.
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI AMP](http://app.crewai.com) using the CLI.
```shell Terminal
crewai login
```
What happens:
- A verification URL and short code are displayed in your terminal
- Your browser opens to the verification URL
- Enter/confirm the code to complete authentication
Notes:
- The OAuth2 provider and domain are configured via `crewai config` (defaults use `login.crewai.com`)
- After successful login, the CLI also attempts to authenticate to the Tool Repository automatically
- If you reset your configuration, run `crewai login` again to re-authenticate
### 12. API Keys
When running the `crewai create crew` command, the CLI will show you a list of available LLM providers to choose from, followed by model selection for your chosen provider.
Once you've selected an LLM provider and model, you will be prompted for API keys.
Here's a list of the most popular LLM providers suggested by the CLI:
- OpenAI
- Groq
- Anthropic
- Google Gemini
- SambaNova
When you select a provider, the CLI will then show you available models for that provider and prompt you to enter your API key.
When you select a provider, the CLI will prompt you to enter the Key name and the API key value.
See the following link for each provider's key name:
- [LiteLLM Providers](https://docs.litellm.ai/docs/providers)
### 13. Configuration Management
```shell Terminal
crewai config [COMMAND] [OPTIONS]
```
#### Commands:
- `list`: Display all CLI configuration parameters
```shell Terminal
crewai config list
```
- `set`: Set a CLI configuration parameter
```shell Terminal
crewai config set <key> <value>
```
- `reset`: Reset all CLI configuration parameters to default values
```shell Terminal
crewai config reset
```
#### Examples
Display current configuration:
```shell Terminal
crewai config list
```
Example output:
| Setting | Value | Description |
| :------------------ | :----------------------- | :---------------------------------------------------------- |
| enterprise_base_url | https://app.crewai.com | Base URL of the CrewAI AMP instance |
| org_name | Not set | Name of the currently active organization |
| org_uuid | Not set | UUID of the currently active organization |
| oauth2_provider | workos | OAuth2 provider (e.g., workos, okta, auth0) |
| oauth2_audience | client_01YYY | Audience identifying the target API/resource |
| oauth2_client_id | client_01XXX | OAuth2 client ID issued by the provider |
| oauth2_domain | login.crewai.com | Provider domain (e.g., your-org.auth0.com) |
Set the enterprise base URL:
```shell Terminal
crewai config set enterprise_base_url https://my-enterprise.crewai.com
```
Set OAuth2 provider:
```shell Terminal
crewai config set oauth2_provider auth0
```
Set OAuth2 domain:
```shell Terminal
crewai config set oauth2_domain my-company.auth0.com
```
Reset all configuration to defaults:
```shell Terminal
crewai config reset
```
<Tip>
After resetting configuration, re-run `crewai login` to authenticate again.
</Tip>
### 14. Trace Management
Manage trace collection preferences for your Crew and Flow executions.
```shell Terminal
crewai traces [COMMAND]
```
#### Commands:
- `enable`: Enable trace collection for crew/flow executions
```shell Terminal
crewai traces enable
```
- `disable`: Disable trace collection for crew/flow executions
```shell Terminal
crewai traces disable
```
- `status`: Show current trace collection status
```shell Terminal
crewai traces status
```
#### How Tracing Works
Trace collection is controlled by checking three settings in priority order:
1. **Explicit flag in code** (highest priority - can enable OR disable):
```python
crew = Crew(agents=[...], tasks=[...], tracing=True) # Always enable
crew = Crew(agents=[...], tasks=[...], tracing=False) # Always disable
crew = Crew(agents=[...], tasks=[...]) # Check lower priorities (default)
```
- `tracing=True` will **always enable** tracing (overrides everything)
- `tracing=False` will **always disable** tracing (overrides everything)
- `tracing=None` or omitted will check lower priority settings
2. **Environment variable** (second priority):
```env
CREWAI_TRACING_ENABLED=true
```
- Checked only if `tracing` is not explicitly set to `True` or `False` in code
- Set to `true` or `1` to enable tracing
3. **User preference** (lowest priority):
```shell Terminal
crewai traces enable
```
- Checked only if `tracing` is not set in code and `CREWAI_TRACING_ENABLED` is not set to `true`
- Running `crewai traces enable` is sufficient to enable tracing by itself
<Note>
**To enable tracing**, use any one of these methods:
- Set `tracing=True` in your Crew/Flow code, OR
- Add `CREWAI_TRACING_ENABLED=true` to your `.env` file, OR
- Run `crewai traces enable`
**To disable tracing**, use any one of these methods:
- Set `tracing=False` in your Crew/Flow code (overrides everything), OR
- Remove or set to `false` the `CREWAI_TRACING_ENABLED` env var, OR
- Run `crewai traces disable`
Higher priority settings override lower ones.
</Note>
<Tip>
For more information about tracing, see the [Tracing
documentation](/observability/tracing).
</Tip>
<Tip>
CrewAI CLI handles authentication to the Tool Repository automatically when
adding packages to your project. Just append `crewai` before any `uv` command
to use it. E.g. `crewai uv add requests`. For more information, see [Tool
Repository](https://docs.crewai.com/enterprise/features/tool-repository) docs.
</Tip>
<Note>
Configuration settings are stored in `~/.config/crewai/settings.json`. Some
settings like organization name and UUID are read-only and managed through
authentication and organization commands. Tool repository related settings are
hidden and cannot be set directly by users.
</Note>


A crew in crewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow.
## When to Use Crews
- You need multiple specialized agents collaborating on a shared outcome.
- You need process-level orchestration (`sequential` or `hierarchical`).
- You need task-level handoffs and context propagation.
## When Not to Use Crews
- A single agent can complete the work end-to-end.
- You do not need multi-step task decomposition.
## Crew Attributes
| Attribute | Parameters | Description |
| :-------- | :--------- | :----------- |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | Knowledge sources available at the crew level, accessible to all the agents. |
| **Stream** _(optional)_ | `stream` | Enable streaming output to receive real-time updates during crew execution. Returns a `CrewStreamingOutput` object that can be iterated for chunks. Defaults to `False`. |
<Tip>
**Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
</Tip>
### Different Ways to Kick Off a Crew
Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process.
#### Synchronous Methods
- `kickoff()`: Starts the execution process according to the defined process flow.
- `kickoff_for_each()`: Executes tasks sequentially for each provided input event or item in the collection.
#### Asynchronous Methods
CrewAI offers two approaches for async execution:
| Method | Type | Description |
|--------|------|-------------|
| `akickoff()` | Native async | True async/await throughout the entire execution chain |
| `akickoff_for_each()` | Native async | Native async execution for each input in a list |
| `kickoff_async()` | Thread-based | Wraps synchronous execution in `asyncio.to_thread` |
| `kickoff_for_each_async()` | Thread-based | Thread-based async for each input in a list |
<Note>
For high-concurrency workloads, `akickoff()` and `akickoff_for_each()` are recommended as they use native async for task execution, memory operations, and knowledge retrieval.
</Note>
```python Code
# Start the crew's task execution
result = my_crew.kickoff()
print(result)

# Example of using kickoff_for_each
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
results = my_crew.kickoff_for_each(inputs=inputs_array)
for result in results:
    print(result)
# Example of using native async with akickoff
inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.akickoff(inputs=inputs)
print(async_result)

# Example of using native async with akickoff_for_each
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.akickoff_for_each(inputs=inputs_array)
for async_result in async_results:
    print(async_result)

# Example of using thread-based kickoff_async
inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.kickoff_async(inputs=inputs)
print(async_result)

# Example of using thread-based kickoff_for_each_async
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
for async_result in async_results:
    print(async_result)
```
These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs. For detailed async examples, see the [Kickoff Crew Asynchronously](/en/learn/kickoff-async) guide.
### Streaming Crew Execution
For real-time visibility into crew execution, you can enable streaming to receive output as it's generated:
```python Code
# Enable streaming
crew = Crew(
    agents=[researcher],
    tasks=[task],
    stream=True,
)

# Iterate over streaming output
streaming = crew.kickoff(inputs={"topic": "AI"})
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# Access final result
result = streaming.result
```
Learn more about streaming in the [Streaming Crew Execution](/en/learn/streaming-crew-execution) guide.
### Replaying from a Specific Task
```shell Terminal
crewai replay -t <task_id>
```
These commands let you replay from your latest kickoff tasks, still retaining context from previously executed tasks.
## Common Failure Modes
### Agents overlap responsibilities
- Cause: role/goal definitions are too broad.
- Fix: tighten role boundaries and task ownership.
### Hierarchical runs stall or degrade
- Cause: weak manager configuration or unclear delegation criteria.
- Fix: define a stronger manager objective and explicit completion criteria.
### Crew outputs are inconsistent
- Cause: expected outputs are underspecified across tasks.
- Fix: enforce structured outputs and stronger task contracts.
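For the last fix, one way to enforce a structured output is a Pydantic contract on the task; a sketch, assuming a `researcher` agent is defined elsewhere:
```python Code
from crewai import Task
from pydantic import BaseModel

class ResearchFindings(BaseModel):
    summary: str
    key_points: list[str]

report_task = Task(
    description="Research {topic} and report the findings",
    expected_output="A structured summary with key points",
    agent=researcher,                  # assumed to exist
    output_pydantic=ResearchFindings,  # forces a consistent schema
)
```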


---
title: "Event Listeners"
description: "Tap into CrewAI events to build custom integrations and monitoring"
icon: spinner
mode: "wide"
---
![Prompt Tracing Dashboard](/images/enterprise/traces-overview.png)
With Prompt Tracing you can:
- View the complete history of all prompts sent to your LLM
- Track token usage and costs
- Debug agent reasoning failures
The structure of the event object depends on the event type, but all events inherit from a common base event.
Additional fields vary by event type. For example, `CrewKickoffCompletedEvent` includes `crew_name` and `output` fields.
## Advanced Usage: Scoped Handlers
For temporary event handling (useful for testing or specific operations), you can use the `scoped_handlers` context manager:
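A sketch of the pattern (the handler and event names are illustrative, and exact import paths may vary by version):
```python Code
from crewai.utilities.events import crewai_event_bus, CrewKickoffStartedEvent

with crewai_event_bus.scoped_handlers():
    @crewai_event_bus.on(CrewKickoffStartedEvent)
    def temp_handler(source, event):
        print("Handler active only inside this block")

    # run the operation that emits events here;
    # the handler is removed when the block exits
```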

---
title: Files
description: Pass images, PDFs, audio, video, and text files to your agents for multimodal processing.
icon: file-image
---
## Overview
CrewAI supports native multimodal file inputs, allowing you to pass images, PDFs, audio, video, and text files directly to your agents. Files are automatically formatted for each LLM provider's API requirements.
<Note type="info" title="Optional Dependency">
File support requires the optional `crewai-files` package. Install it with:
```bash
uv add 'crewai[file-processing]'
```
</Note>
<Note type="warning" title="Early Access">
The file processing API is currently in early access.
</Note>
## File Types
CrewAI supports five specific file types plus a generic `File` class that auto-detects the type:
| Type | Class | Use Cases |
|:-----|:------|:----------|
| **Image** | `ImageFile` | Photos, screenshots, diagrams, charts |
| **PDF** | `PDFFile` | Documents, reports, papers |
| **Audio** | `AudioFile` | Voice recordings, podcasts, meetings |
| **Video** | `VideoFile` | Screen recordings, presentations |
| **Text** | `TextFile` | Code files, logs, data files |
| **Generic** | `File` | Auto-detect type from content |
```python
from crewai_files import File, ImageFile, PDFFile, AudioFile, VideoFile, TextFile
image = ImageFile(source="screenshot.png")
pdf = PDFFile(source="report.pdf")
audio = AudioFile(source="meeting.mp3")
video = VideoFile(source="demo.mp4")
text = TextFile(source="data.csv")
file = File(source="document.pdf")
```
## File Sources
The `source` parameter accepts multiple input types and auto-detects the appropriate handler:
### From Path
```python
from crewai_files import ImageFile
image = ImageFile(source="./images/chart.png")
```
### From URL
```python
from crewai_files import ImageFile
image = ImageFile(source="https://example.com/image.png")
```
### From Bytes
```python
from crewai_files import ImageFile, FileBytes
image_bytes = download_image_from_api()  # any bytes object (helper assumed)

# Wrap raw bytes in FileBytes to attach a filename
image = ImageFile(source=FileBytes(data=image_bytes, filename="downloaded.png"))

# Raw bytes can also be passed directly
image = ImageFile(source=image_bytes)
```
## Using Files
Files can be passed at multiple levels, with more specific levels taking precedence.
### With Crews
Pass files when kicking off a crew:
```python
from crewai import Crew
from crewai_files import ImageFile, PDFFile

crew = Crew(agents=[analyst], tasks=[analysis_task])

result = crew.kickoff(
    inputs={"topic": "Q4 Sales"},
    input_files={
        "chart": ImageFile(source="sales_chart.png"),
        "report": PDFFile(source="quarterly_report.pdf"),
    }
)
```
### With Tasks
Attach files to specific tasks:
```python
from crewai import Task
from crewai_files import ImageFile
task = Task(
    description="Analyze the sales chart and identify trends in {chart}",
    expected_output="A summary of key trends",
    input_files={
        "chart": ImageFile(source="sales_chart.png"),
    }
)
```
### With Flows
Pass files to flows, which automatically inherit to crews:
```python
from crewai.flow.flow import Flow, start
from crewai_files import ImageFile
class AnalysisFlow(Flow):
    @start()
    def analyze(self):
        return self.analysis_crew.kickoff()

flow = AnalysisFlow()
result = flow.kickoff(
    input_files={"image": ImageFile(source="data.png")}
```
### With Standalone Agents
Pass files directly to agent kickoff:
```python
from crewai import Agent
from crewai_files import ImageFile
agent = Agent(
role="Image Analyst",
goal="Analyze images",
backstory="Expert at visual analysis",
llm="gpt-4o",
)
result = agent.kickoff(
    messages="What's in this image?",
    input_files={"photo": ImageFile(source="photo.jpg")},
)
```
## File Precedence
When files are passed at multiple levels, more specific levels override broader ones:
```
Flow input_files < Crew input_files < Task input_files
```
For example, if both Flow and Task define a file named `"chart"`, the Task's version is used.
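A sketch of that override (file names are illustrative, and the `analyst` agent is assumed):
```python
from crewai import Crew, Task
from crewai_files import ImageFile

task = Task(
    description="Describe the chart in {chart}",
    expected_output="A short description",
    input_files={"chart": ImageFile(source="task_chart.png")},  # takes precedence
)

crew = Crew(agents=[analyst], tasks=[task])
result = crew.kickoff(
    input_files={"chart": ImageFile(source="crew_chart.png")}  # overridden for this task
)
```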
## Provider Support
Different providers support different file types. CrewAI automatically formats files for each provider's API.
| Provider | Image | PDF | Audio | Video | Text |
|:---------|:-----:|:---:|:-----:|:-----:|:----:|
| **OpenAI** (completions API) | ✓ | | | | |
| **OpenAI** (responses API) | ✓ | ✓ | ✓ | | |
| **Anthropic** (claude-3.x) | ✓ | ✓ | | | |
| **Google Gemini** (gemini-1.5, 2.0, 2.5) | ✓ | ✓ | ✓ | ✓ | ✓ |
| **AWS Bedrock** (claude-3) | ✓ | ✓ | | | |
| **Azure OpenAI** (gpt-4o) | ✓ | | ✓ | | |
<Note type="info" title="Gemini for Maximum File Support">
Google Gemini models support all file types including video (up to 1 hour, 2GB). Use Gemini when you need to process video content.
</Note>
<Note type="warning" title="Unsupported File Types">
If you pass a file type that the provider doesn't support (e.g., video to OpenAI), you'll receive an `UnsupportedFileTypeError`. Choose your provider based on the file types you need to process.
</Note>
## How Files Are Sent
CrewAI automatically chooses the optimal method to send files to each provider:
| Method | Description | Used When |
|:-------|:------------|:----------|
| **Inline Base64** | File embedded directly in the request | Small files (< 5MB typically) |
| **File Upload API** | File uploaded separately, referenced by ID | Large files that exceed threshold |
| **URL Reference** | Direct URL passed to the model | File source is already a URL |
### Provider Transmission Methods
| Provider | Inline Base64 | File Upload API | URL References |
|:---------|:-------------:|:---------------:|:--------------:|
| **OpenAI** | ✓ | ✓ (> 5 MB) | ✓ |
| **Anthropic** | ✓ | ✓ (> 5 MB) | ✓ |
| **Google Gemini** | ✓ | ✓ (> 20 MB) | ✓ |
| **AWS Bedrock** | ✓ | | ✓ (S3 URIs) |
| **Azure OpenAI** | ✓ | | ✓ |
<Note type="info" title="Automatic Optimization">
You don't need to manage this yourself. CrewAI automatically uses the most efficient method based on file size and provider capabilities. Providers without file upload APIs use inline base64 for all files.
</Note>
## File Handling Modes
Control how files are processed when they exceed provider limits:
```python
from crewai_files import ImageFile, PDFFile
# "strict": fail when the file exceeds provider limits
image = ImageFile(source="large.png", mode="strict")
# "auto": adapt the file automatically to fit provider limits
image = ImageFile(source="large.png", mode="auto")
# "warn": proceed, but emit a warning when limits are exceeded
image = ImageFile(source="large.png", mode="warn")
# "chunk": split a large document into smaller pieces
pdf = PDFFile(source="large.pdf", mode="chunk")
```
## Provider Constraints
Each provider has specific limits for file sizes and dimensions:
### OpenAI
- **Images**: Max 20 MB, up to 10 images per request
- **PDFs**: Max 32 MB, up to 100 pages
- **Audio**: Max 25 MB, up to 25 minutes
### Anthropic
- **Images**: Max 5 MB, max 8000x8000 pixels, up to 100 images
- **PDFs**: Max 32 MB, up to 100 pages
### Google Gemini
- **Images**: Max 100 MB
- **PDFs**: Max 50 MB
- **Audio**: Max 100 MB, up to 9.5 hours
- **Video**: Max 2 GB, up to 1 hour
### AWS Bedrock
- **Images**: Max 4.5 MB, max 8000x8000 pixels
- **PDFs**: Max 3.75 MB, up to 100 pages
## Referencing Files in Prompts
Use the file's key name in your task descriptions to reference files:
```python
task = Task(
description="""
Analyze the provided materials:
1. Review the chart in {sales_chart}
2. Cross-reference with data in {quarterly_report}
3. Summarize key findings
""",
expected_output="Analysis summary with key insights",
input_files={
"sales_chart": ImageFile(source="chart.png"),
"quarterly_report": PDFFile(source="report.pdf"),
}
)
```


Flows allow you to create structured, event-driven workflows. They provide a seamless way to connect multiple tasks, manage state, and control execution.
4. **Flexible Control Flow**: Implement conditional logic, loops, and branching within your workflows.
## When to Use Flows
- You need deterministic orchestration and branching logic.
- You need explicit state transitions across multiple steps.
- You need resumable workflows with persistence.
- You need to combine crews, direct model calls, and Python logic in one runtime.
## When Not to Use Flows
- A single prompt/response call is sufficient.
- A single crew kickoff with no orchestration logic is sufficient.
- You do not need stateful multi-step execution.
## Getting Started
The example below shows a realistic Flow for support-ticket triage. It demonstrates features teams use in production: typed state, routing, memory access, and persistence.
```python Code
from crewai.flow.flow import Flow, listen, router, start
from crewai.flow.persistence import persist
from pydantic import BaseModel, Field


class SupportTriageState(BaseModel):
    ticket_id: str = ""
    customer_tier: str = "standard"  # standard | enterprise
    issue: str = ""
    urgency: str = "normal"
    route: str = ""
    draft_reply: str = ""
    internal_notes: list[str] = Field(default_factory=list)


@persist()
class SupportTriageFlow(Flow[SupportTriageState]):
    @start()
    def ingest_ticket(self):
        # kickoff(inputs={...}) is merged into typed state fields
        print(f"Flow State ID: {self.state.id}")

        self.remember(
            f"Ticket {self.state.ticket_id}: {self.state.issue}",
            scope=f"/support/{self.state.ticket_id}",
        )

        issue = self.state.issue.lower()
        if "security" in issue or "breach" in issue:
            self.state.urgency = "critical"
        elif self.state.customer_tier == "enterprise":
            self.state.urgency = "high"
        else:
            self.state.urgency = "normal"

        return self.state.issue

    @router(ingest_ticket)
    def route_ticket(self):
        issue = self.state.issue.lower()
        if "security" in issue or "breach" in issue:
            self.state.route = "security"
            return "security_review"
        if self.state.customer_tier == "enterprise" or self.state.urgency == "high":
            self.state.route = "priority"
            return "priority_queue"
        self.state.route = "standard"
        return "standard_queue"

    @listen("security_review")
    def handle_security(self):
        self.state.internal_notes.append("Escalated to Security Incident Response")
        self.state.draft_reply = (
            "We have escalated your case to our security team and will update you shortly."
        )
        return self.state.draft_reply

    @listen("priority_queue")
    def handle_priority(self):
        history = self.recall("SLA commitments for enterprise support", limit=2)
        self.state.internal_notes.append(
            f"Loaded {len(history)} memory hits for priority handling"
        )
        self.state.draft_reply = (
            "Your ticket has been prioritized and assigned to a senior support engineer."
        )
        return self.state.draft_reply

    @listen("standard_queue")
    def handle_standard(self):
        self.state.internal_notes.append("Routed to standard support queue")
        self.state.draft_reply = "Thanks for reporting this. Our team will follow up soon."
        return self.state.draft_reply


flow = SupportTriageFlow()
flow.plot("support_triage_flow")

result = flow.kickoff(
    inputs={
        "ticket_id": "TCK-1024",
        "customer_tier": "enterprise",
        "issue": "Cannot access SSO after enabling new policy",
    }
)

print("Final reply:", result)
print("Route:", flow.state.route)
print("Notes:", flow.state.internal_notes)
```
![Flow Visual image](/images/crewai-flow-1.png)
In this example, one flow demonstrates several core features together:
1. `@start()` initializes and normalizes state for downstream steps.
2. `@router()` performs deterministic branching into labeled routes.
3. Route listeners implement lane-specific behavior (`security`, `priority`, `standard`).
4. `@persist()` keeps the flow state recoverable between runs.
5. Built-in memory methods (`remember`, `recall`) add durable context beyond a single method call.
Each Flow instance automatically receives a unique identifier (UUID) in its state, which helps track and manage flow executions. The state also stores additional data (such as the triage fields above) that persists throughout the flow's execution.
This pattern mirrors typical production workflows where request classification, policy-aware routing, and auditable state all happen in one orchestrated flow.
### @start()
@@ -117,15 +156,15 @@ The `@listen()` decorator can be used in several ways:
1. **Listening to a Method by Name**: You can pass the name of the method you want to listen to as a string. When that method completes, the listener method will be triggered.
```python Code
@listen("upstream_method")
def downstream_method(self, upstream_result):
    # Implementation
```
2. **Listening to a Method Directly**: You can pass the method itself. When that method completes, the listener method will be triggered.
```python Code
@listen(upstream_method)
def downstream_method(self, upstream_result):
    # Implementation
```
@@ -572,6 +611,59 @@ The `third_method` and `fourth_method` listen to the output of the `second_metho
When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.
### Human in the Loop (human feedback)
<Note>
The `@human_feedback` decorator requires **CrewAI version 1.8.0 or higher**.
</Note>
The `@human_feedback` decorator enables human-in-the-loop workflows by pausing flow execution to collect feedback from a human. This is useful for approval gates, quality review, and decision points that require human judgment.
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult


class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Do you approve this content?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def generate_content(self):
        return "Content to be reviewed..."

    @listen("approved")
    def on_approval(self, result: HumanFeedbackResult):
        print(f"Approved! Feedback: {result.feedback}")

    @listen("rejected")
    def on_rejection(self, result: HumanFeedbackResult):
        print(f"Rejected. Reason: {result.feedback}")
```
When `emit` is specified, the human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes, which then triggers the corresponding `@listen` decorator.
You can also use `@human_feedback` without routing to simply collect feedback:
```python Code
@start()
@human_feedback(message="Any comments on this output?")
def my_method(self):
    return "Output for review"

@listen(my_method)
def next_step(self, result: HumanFeedbackResult):
    # Access feedback via result.feedback
    # Access original output via result.output
    pass
```
Access all feedback collected during a flow via `self.last_human_feedback` (most recent) or `self.human_feedback_history` (all feedback as a list).
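For example, a later listener can audit everything collected so far (a sketch; it assumes each history entry is a `HumanFeedbackResult`):
```python Code
@listen(my_method)
def audit_feedback(self, result: HumanFeedbackResult):
    print(f"Most recent: {self.last_human_feedback.feedback}")
    for entry in self.human_feedback_history:
        print(f"- feedback={entry.feedback!r} output={entry.output!r}")
```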
For a complete guide on human feedback in flows, including **async/non-blocking feedback** with custom providers (Slack, webhooks, etc.), see [Human Feedback in Flows](/en/learn/human-feedback-in-flows).
## Adding Agents to Flows
Agents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use an Agent within a flow to perform market research:
@@ -688,201 +780,17 @@ This example demonstrates several key features of using Agents in flows:
3. **Tool Integration**: Agents can use tools (like `WebsiteSearchTool`) to enhance their capabilities.
## Multi-Crew Flows and Plotting
Detailed build walkthroughs and project scaffolding are documented in guide pages to keep this concepts page focused:
- Build your first flow: [/en/guides/flows/first-flow](/en/guides/flows/first-flow)
- Master state and persistence: [/en/guides/flows/mastering-flow-state](/en/guides/flows/mastering-flow-state)
- Real-world chat-state pattern: [/en/learn/flowstate-chat-history](/en/learn/flowstate-chat-history)
For visualization:
- Use `flow.plot("my_flow_plot")` in code, or
- Use `crewai flow plot` in CLI projects.
## Running Flows
@@ -893,10 +801,108 @@ There are two ways to run a flow:
You can run a flow programmatically by creating an instance of your flow class and calling the `kickoff()` method:
```python
flow = SupportTriageFlow()
result = flow.kickoff()
```
### Streaming Flow Execution
For real-time visibility into flow execution, you can enable streaming to receive output as it's generated:
```python
class StreamingFlow(Flow):
    stream = True  # Enable streaming

    @start()
    def research(self):
        # Your flow implementation
        pass

# Iterate over streaming output
flow = StreamingFlow()
streaming = flow.kickoff()
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# Access final result
result = streaming.result
```
Learn more about streaming in the [Streaming Flow Execution](/en/learn/streaming-flow-execution) guide.
## Memory in Flows
Every Flow automatically has access to CrewAI's unified [Memory](/concepts/memory) system. You can store, recall, and extract memories directly inside any flow method using three built-in convenience methods.
### Built-in Methods
| Method | Description |
| :--- | :--- |
| `self.remember(content, **kwargs)` | Store content in memory. Accepts optional `scope`, `categories`, `metadata`, `importance`. |
| `self.recall(query, **kwargs)` | Retrieve relevant memories. Accepts optional `scope`, `categories`, `limit`, `depth`. |
| `self.extract_memories(content)` | Break raw text into discrete, self-contained memory statements. |
A default `Memory()` instance is created automatically when the Flow initializes. You can also pass a custom one:
```python
from crewai.flow.flow import Flow
from crewai import Memory

custom_memory = Memory(
    recency_weight=0.5,
    recency_half_life_days=7,
    embedder={"provider": "ollama", "config": {"model_name": "mxbai-embed-large"}},
)

flow = MyFlow(memory=custom_memory)
```
### Example: Research and Analyze Flow
```python
from crewai.flow.flow import Flow, listen, start


class ResearchAnalysisFlow(Flow):
    @start()
    def gather_data(self):
        # Simulate research findings
        findings = (
            "PostgreSQL handles 10k concurrent connections with connection pooling. "
            "MySQL caps at around 5k. MongoDB scales horizontally but adds complexity."
        )
        # Extract atomic facts and remember each one
        memories = self.extract_memories(findings)
        for mem in memories:
            self.remember(mem, scope="/research/databases")
        return findings

    @listen(gather_data)
    def analyze(self, raw_findings):
        # Recall relevant past research (from this run or previous runs)
        past = self.recall("database performance and scaling", limit=10, depth="shallow")
        context_lines = [f"- {m.record.content}" for m in past]
        context = "\n".join(context_lines) if context_lines else "No prior context."
        return {
            "new_findings": raw_findings,
            "prior_context": context,
            "total_memories": len(past),
        }


flow = ResearchAnalysisFlow()
result = flow.kickoff()
print(result)
```
Because memory persists across runs (backed by LanceDB on disk), the `analyze` step will recall findings from previous executions too -- enabling flows that learn and accumulate knowledge over time.
See the [Memory documentation](/concepts/memory) for details on scopes, slices, composite scoring, embedder configuration, and more.
### Using the CLI
Starting from version 0.103.0, you can run flows using the `crewai run` command:
@@ -914,3 +920,21 @@ crewai flow kickoff
```
However, the `crewai run` command is now the preferred method as it works for both crews and flows.
## Common Failure Modes
### Router branch not firing
- Cause: returned label does not match a `@listen("label")` value.
- Fix: align router return strings with listener labels exactly.
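A minimal sketch of the alignment requirement:
```python Code
@router(ingest_ticket)
def route_ticket(self):
    return "priority_queue"  # must match the listener label exactly

@listen("priority_queue")  # "priority-queue" or "Priority_Queue" would never fire
def handle_priority(self):
    ...
```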
### State fields missing at runtime
- Cause: untyped dynamic fields or missing kickoff inputs.
- Fix: use typed state and validate required fields in `@start()`.
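For example, the `@start()` method can fail fast (using the `SupportTriageState` fields from the example above):
```python Code
@start()
def ingest_ticket(self):
    # Validate required kickoff inputs before any downstream step runs
    if not self.state.ticket_id or not self.state.issue:
        raise ValueError("kickoff inputs must include ticket_id and issue")
```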
### Prompt/token growth over time
- Cause: appending unbounded message history in state.
- Fix: apply sliding-window state and summary compaction patterns.
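A sketch of the sliding-window fix, assuming the state keeps a `messages` list (the field and the `append_message` step are illustrative):
```python Code
MAX_TURNS = 20  # illustrative window size

@listen(append_message)
def trim_history(self):
    # Keep only the most recent turns so prompt size stays bounded
    self.state.messages = self.state.messages[-MAX_TURNS:]
```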
### Non-idempotent retries
- Cause: side effects executed on retried steps.
- Fix: add idempotency keys/markers to state and guard external writes.
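A sketch of an idempotency guard (the `completed_steps` field, `charge_customer` step, and `send_invoice` call are illustrative):
```python Code
@listen(charge_customer)
def send_receipt(self):
    # Guard the external write so a retried step cannot repeat it
    if "send_receipt" in self.state.completed_steps:
        return "already sent"
    send_invoice(self.state.order_id)  # hypothetical external side effect
    self.state.completed_steps.append("send_receipt")
    return "sent"
```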

View File

@@ -388,8 +388,8 @@ crew = Crew(
    agents=[sales_agent, tech_agent, support_agent],
    tasks=[...],
    embedder={  # Fallback embedder for agents without their own
        "provider": "google-generativeai",
        "config": {"model_name": "gemini-embedding-001"}
    }
)
@@ -629,9 +629,9 @@ agent = Agent(
backstory="Expert researcher",
knowledge_sources=[knowledge_source],
embedder={
"provider": "google",
"provider": "google-generativeai",
"config": {
"model": "models/text-embedding-004",
"model_name": "gemini-embedding-001",
"api_key": "your-google-key"
}
}
@@ -739,7 +739,7 @@ class KnowledgeMonitorListener(BaseEventListener):
knowledge_monitor = KnowledgeMonitorListener()
```
For more information on using events, see the [Event Listeners](/en/concepts/event-listener) documentation.
### Custom Knowledge Sources

View File

@@ -7,7 +7,18 @@ mode: "wide"
## Overview
CrewAI integrates with multiple LLM providers through each provider's native SDK, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
## When to Use Advanced LLM Configuration
- You need strict control of latency, cost, and output format.
- You need model routing by task type.
- You need reproducible, policy-sensitive behavior in production.
## When Not to Over-Configure
- You are in early prototyping with one simple task path.
- You do not yet need structured outputs or model routing.
## What are LLMs?
@@ -106,608 +117,115 @@ There are different places in CrewAI code where you can specify the model to use
</Tab>
</Tabs>
## Production LLM Patterns
The basics above show how to configure one model. In real systems, you usually combine several LLM patterns for cost, quality, and reliability.
### Pattern 1: Route models by agent role
Use faster/cheaper models for extraction and heavier models for synthesis or critical decisions.
```python Code
from crewai import Agent, Crew, Process, Task

researcher = Agent(
    role="Researcher",
    goal="Collect factual inputs quickly",
    backstory="Fast information-gathering specialist",
    llm="openai/gpt-4o-mini",
)

reviewer = Agent(
    role="Reviewer",
    goal="Validate claims and produce final answer",
    backstory="Careful editor focused on correctness",
    llm="provider/model-id",
)

crew = Crew(
    agents=[researcher, reviewer],
    tasks=[
        Task(
            description="Find the latest policy changes and list the key points",
            expected_output="Bullet list of validated policy changes",
            agent=researcher,
        ),
        Task(
            description="Review findings and produce a final executive summary",
            expected_output="Concise, decision-ready summary",
            agent=reviewer,
        ),
    ],
    process=Process.sequential,
)
```
### Pattern 2: Set reliability defaults once
Configure retry, timeout, and deterministic sampling in one reusable `LLM` object.
```python Code
from crewai import LLM

reliable_llm = LLM(
    model="openai/gpt-4o-mini",
    temperature=0.1,
    timeout=45,
    max_retries=3,
    max_tokens=1200,
    seed=7,
)
```
Use this for extraction, classification, and policy-sensitive tasks where variance should be low.
### Pattern 3: Use structured outputs for machine-readable responses
For downstream automation, force JSON-shaped outputs rather than free-form prose.
```python Code
from crewai import LLM

json_llm = LLM(
    model="openai/gpt-4o",
    response_format={"type": "json"},
    temperature=0.0,
)
```
This reduces parser fragility in pipelines that feed APIs, databases, or workflow routers.
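Downstream consumers can then parse the output deterministically. A short sketch (assumes the call returns a raw JSON string):
```python Code
import json

raw = json_llm.call("List three deployment regions as a JSON object with a 'regions' array.")
payload = json.loads(raw)  # fails loudly if the model ever returns prose
print(payload["regions"])
```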
### Pattern 4: Use OpenAI Responses API for multi-turn reasoning flows
When you need built-in tools, response chaining, or reasoning-model workflows, enable the Responses API explicitly.
```python Code
from crewai import LLM

reasoning_llm = LLM(
    model="openai/o4-mini",
    api="responses",
    auto_chain=True,
    store=True,
    reasoning_effort="medium",
)
```
This is especially useful in long-running assistants where you want conversation continuity and controllable reasoning depth.
## Provider Configuration
For concept-level usage, keep provider setup minimal and explicit:
1. Set provider credentials via environment variables.
2. Pin model IDs explicitly in code or YAML.
3. Set reliability defaults (`timeout`, `max_retries`, low `temperature`) for production.
Use these pages for deeper provider setup and runtime decisions:
- Connections and provider setup: [/en/learn/llm-connections](/en/learn/llm-connections)
- Custom provider integration: [/en/learn/custom-llm](/en/learn/custom-llm)
- Production routing and reliability patterns: [/en/ai/llms/patterns](/en/ai/llms/patterns)
- Parameter contract reference: [/en/ai/llms/reference](/en/ai/llms/reference)
## Streaming Responses
@@ -750,7 +268,7 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
```
<Tip>
[Click here](/en/concepts/event-listener#event-listeners) for more details
</Tip>
</Tab>
@@ -804,6 +322,50 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
</Tab>
</Tabs>
## Async LLM Calls
CrewAI supports asynchronous LLM calls for improved performance and concurrency in your AI workflows. Async calls allow you to run multiple LLM requests concurrently without blocking, making them ideal for high-throughput applications and parallel agent operations.
<Tabs>
<Tab title="Basic Usage">
Use the `acall` method for asynchronous LLM requests:
```python
import asyncio
from crewai import LLM

async def main():
    llm = LLM(model="openai/gpt-4o")

    # Single async call
    response = await llm.acall("What is the capital of France?")
    print(response)

asyncio.run(main())
```
The `acall` method supports all the same parameters as the synchronous `call` method, including messages, tools, and callbacks.
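Because calls do not block, you can also fan out several requests with `asyncio.gather` (a sketch; the prompts are illustrative):
```python
import asyncio
from crewai import LLM

async def main():
    llm = LLM(model="openai/gpt-4o")
    # Run both requests concurrently instead of one after the other
    answers = await asyncio.gather(
        llm.acall("Summarize the Kyoto Protocol in one sentence."),
        llm.acall("Summarize the Paris Agreement in one sentence."),
    )
    for answer in answers:
        print(answer)

asyncio.run(main())
```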
</Tab>
<Tab title="With Streaming">
Combine async calls with streaming for real-time concurrent responses:
```python
import asyncio
from crewai import LLM

async def stream_async():
    llm = LLM(model="openai/gpt-4o", stream=True)
    response = await llm.acall("Write a short story about AI")
    print(response)

asyncio.run(stream_async())
```
</Tab>
</Tabs>
## Structured LLM Calls
CrewAI supports structured responses from LLM calls by allowing you to define a `response_format` using a Pydantic model. This enables the framework to automatically parse and validate the output, making it easier to integrate the response into your application without manual post-processing.
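For example, a minimal sketch with a Pydantic model as the `response_format` (the model choice and prompt are illustrative):
```python
from pydantic import BaseModel

from crewai import LLM

class CityInfo(BaseModel):
    name: str
    country: str

llm = LLM(model="openai/gpt-4o", response_format=CityInfo)
result = llm.call("Name one large European city and its country.")
print(result)  # parsed and validated against CityInfo
```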
@@ -899,7 +461,7 @@ Learn how to get the most out of your LLM configuration:
</Accordion>
<Accordion title="Drop Additional Parameters">
CrewAI internally uses native SDKs for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
For example, if you don't need to send the <code>stop</code> parameter, you can simply omit it from your LLM call:
```python
@@ -915,6 +477,52 @@ Learn how to get the most out of your LLM configuration:
)
```
</Accordion>
<Accordion title="Transport Interceptors">
CrewAI provides message interceptors for several providers, allowing you to hook into request/response cycles at the transport layer.
**Supported Providers:**
- ✅ OpenAI
- ✅ Anthropic
**Basic Usage:**
```python
import httpx
from crewai import LLM
from crewai.llms.hooks import BaseInterceptor

class CustomInterceptor(BaseInterceptor[httpx.Request, httpx.Response]):
    """Custom interceptor to modify requests and responses."""

    def on_outbound(self, request: httpx.Request) -> httpx.Request:
        """Print request before sending to the LLM provider."""
        print(request)
        return request

    def on_inbound(self, response: httpx.Response) -> httpx.Response:
        """Process response after receiving from the LLM provider."""
        print(f"Status: {response.status_code}")
        print(f"Response time: {response.elapsed}")
        return response

# Use the interceptor with an LLM
llm = LLM(
    model="openai/gpt-4o",
    interceptor=CustomInterceptor()
)
```
**Important Notes:**
- Both methods must return an object of the same type they received.
- Modifying received objects may result in unexpected behavior or application crashes.
- Not all providers support interceptors; check the supported providers list above.
<Info>
Interceptors operate at the transport layer. This is particularly useful for:
- Message transformation and filtering
- Debugging API interactions
</Info>
</Accordion>
</AccordionGroup>
## Common Issues and Solutions

File diff suppressed because it is too large

View File

@@ -10,6 +10,17 @@ mode: "wide"
The planning feature in CrewAI allows you to add planning capability to your crew. When enabled, before each Crew iteration,
all Crew information is sent to an AgentPlanner that will plan the tasks step by step, and this plan will be added to each task description.
## When to Use Planning
- Tasks require multi-step decomposition before execution.
- You need more consistent execution quality on complex tasks.
- You want transparent planning traces in crew runs.
## When Not to Use Planning
- Tasks are simple and deterministic.
- Latency and token budget are strict and planning overhead is not justified.
### Using the Planning Feature
Getting started with the planning feature is easy; the only step required is adding `planning=True` to your Crew:
@@ -31,7 +42,7 @@ my_crew = Crew(
From this point on, your crew will have planning enabled, and the tasks will be planned before each iteration.
<Warning>
Planning model defaults can vary by version and environment. To avoid implicit provider dependencies, set `planning_llm` explicitly in your crew configuration.
</Warning>
#### Planning LLM
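For example, a crew can enable planning and pin the planner model in one place (a minimal sketch; the agents and tasks are assumed to be defined elsewhere):
```python
from crewai import Crew

my_crew = Crew(
    agents=[researcher, writer],         # previously defined agents
    tasks=[research_task, report_task],  # previously defined tasks
    planning=True,
    planning_llm="gpt-4o",  # set explicitly to avoid implicit provider defaults
)
```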
@@ -152,4 +163,14 @@ A list with 10 bullet points of the most relevant information about AI LLMs.
**Expected Output:**
A fully fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'.
```
</CodeGroup>
## Common Failure Modes
### Planning adds cost/latency without quality gains
- Cause: planning enabled for simple tasks.
- Fix: disable `planning` for straightforward pipelines.
### Unexpected provider authentication errors
- Cause: implicit planner model/provider assumptions.
- Fix: set `planning_llm` explicitly and ensure matching credentials are configured.

View File

@@ -12,11 +12,20 @@ mode: "wide"
These processes ensure tasks are distributed and executed efficiently, in alignment with a predefined strategy.
</Tip>
## When to Use Each Process
- Use `sequential` when task order is fixed and outputs feed directly into the next task.
- Use `hierarchical` when you need a manager to delegate and validate work dynamically.
## When Not to Use Hierarchical
- You do not need dynamic delegation.
- You cannot provide a reliable `manager_llm` or `manager_agent`.
## Process Implementations
- **Sequential**: Executes tasks sequentially, ensuring tasks are completed in an orderly progression.
- **Hierarchical**: Organizes tasks in a managerial hierarchy, where tasks are delegated and executed based on a structured chain of command. A manager language model (`manager_llm`) or a custom manager agent (`manager_agent`) must be specified in the crew to enable the hierarchical process, facilitating the creation and management of tasks by the manager.
- **Consensual Process (Planned)**: Aiming for collaborative decision-making among agents on task execution, this process type introduces a democratic approach to task management within CrewAI. It is planned for future development and is not currently implemented in the codebase.
## The Role of Processes in Teamwork
Processes enable individual agents to operate as a cohesive unit, streamlining their efforts to achieve common objectives with efficiency and coherence.
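Both implemented processes are selected through the `Process` enum at crew construction time (a minimal sketch; agents and tasks are assumed to be defined elsewhere):
```python
from crewai import Crew, Process

# Sequential: tasks run in declaration order
sequential_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,
)

# Hierarchical: a manager model delegates and validates work
hierarchical_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.hierarchical,
    manager_llm="gpt-4o",  # required when using the hierarchical process
)
```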
@@ -59,9 +68,17 @@ Emulates a corporate hierarchy, CrewAI allows specifying a custom manager agent
## Process Class: Detailed Overview
The `Process` class is implemented as an enumeration (`Enum`), ensuring type safety and restricting process values to the defined types (`sequential`, `hierarchical`).
## Conclusion
The structured collaboration facilitated by processes within CrewAI is crucial for enabling systematic teamwork among agents.
This documentation has been updated to reflect the latest features, enhancements, and the planned integration of the Consensual Process, ensuring users have access to the most current and comprehensive information.
## Common Failure Modes
### Hierarchical process fails at startup
- Cause: missing `manager_llm` or `manager_agent`.
- Fix: provide one of them explicitly in crew configuration.
### Sequential process produces weak outputs
- Cause: task boundaries/context are underspecified.
- Fix: improve task descriptions, expected outputs, and task context chaining.

View File

@@ -0,0 +1,154 @@
---
title: Production Architecture
description: Best practices for building production-ready AI applications with CrewAI
icon: server
mode: "wide"
---
# The Flow-First Mindset
When building production AI applications with CrewAI, **we recommend starting with a Flow**.
While it's possible to run individual Crews or Agents, wrapping them in a Flow provides the necessary structure for a robust, scalable application.
## Why Flows?
1. **State Management**: Flows provide a built-in way to manage state across different steps of your application. This is crucial for passing data between Crews, maintaining context, and handling user inputs.
2. **Control**: Flows allow you to define precise execution paths, including loops, conditionals, and branching logic. This is essential for handling edge cases and ensuring your application behaves predictably.
3. **Observability**: Flows provide a clear structure that makes it easier to trace execution, debug issues, and monitor performance. We recommend using [CrewAI Tracing](/en/observability/tracing) for detailed insights. Simply run `crewai login` to enable free observability features.
## The Architecture
A typical production CrewAI application looks like this:
```mermaid
graph TD
Start((Start)) --> Flow[Flow Orchestrator]
Flow --> State{State Management}
State --> Step1[Step 1: Data Gathering]
Step1 --> Crew1[Research Crew]
Crew1 --> State
State --> Step2{Condition Check}
Step2 -- "Valid" --> Step3[Step 3: Execution]
Step3 --> Crew2[Action Crew]
Step2 -- "Invalid" --> End((End))
Crew2 --> End
```
### 1. The Flow Class
Your `Flow` class is the entry point. It defines the state schema and the methods that execute your logic.
```python
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel

class AppState(BaseModel):
    user_input: str = ""
    research_results: str = ""
    final_report: str = ""

class ProductionFlow(Flow[AppState]):
    @start()
    def gather_input(self):
        # ... logic to get input ...
        pass

    @listen(gather_input)
    def run_research_crew(self):
        # ... trigger a Crew ...
        pass
```
### 2. State Management
Use Pydantic models to define your state. This ensures type safety and makes it clear what data is available at each step.
- **Keep it minimal**: Store only what you need to persist between steps.
- **Use structured data**: Avoid unstructured dictionaries when possible.
### 3. Crews as Units of Work
Delegate complex tasks to Crews. A Crew should be focused on a specific goal (e.g., "Research a topic", "Write a blog post").
- **Don't over-engineer Crews**: Keep them focused.
- **Pass state explicitly**: Pass the necessary data from the Flow state to the Crew inputs.
```python
@listen(gather_input)
def run_research_crew(self):
    crew = ResearchCrew()
    result = crew.kickoff(inputs={"topic": self.state.user_input})
    self.state.research_results = result.raw
```
## Control Primitives
Leverage CrewAI's control primitives to add robustness and predictability to your Crews.
### 1. Task Guardrails
Use [Task Guardrails](/en/concepts/tasks#task-guardrails) to validate task outputs before they are accepted. This ensures that your agents produce high-quality results.
```python
from typing import Tuple, Any

from crewai import TaskOutput, Task

def validate_content(result: TaskOutput) -> Tuple[bool, Any]:
    if len(result.raw) < 100:
        return (False, "Content is too short. Please expand.")
    return (True, result.raw)

task = Task(
    ...,
    guardrail=validate_content
)
```
### 2. Structured Outputs
Always use structured outputs (`output_pydantic` or `output_json`) when passing data between tasks or to your application. This prevents parsing errors and ensures type safety.
```python
from typing import List

from crewai import Task
from pydantic import BaseModel

class ResearchResult(BaseModel):
    summary: str
    sources: List[str]

task = Task(
    ...,
    output_pydantic=ResearchResult
)
```
### 3. LLM Hooks
Use [LLM Hooks](/en/learn/llm-hooks) to inspect or modify messages before they are sent to the LLM, or to sanitize responses.
```python
@before_llm_call
def log_request(context):
    print(f"Agent {context.agent.role} is calling the LLM...")
```
## Deployment Patterns
When deploying your Flow, consider the following:
### CrewAI Enterprise
The easiest way to deploy your Flow is using CrewAI Enterprise. It handles the infrastructure, authentication, and monitoring for you.
Check out the [Deployment Guide](/en/enterprise/guides/deploy-crew) to get started.
```bash
crewai deploy create
```
### Async Execution
For long-running tasks, use `kickoff_async` to avoid blocking your API.
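A minimal sketch, assuming the `ProductionFlow` defined above and that `kickoff_async` mirrors the `kickoff` signature:
```python
import asyncio

async def handle_request(user_input: str):
    flow = ProductionFlow()
    # Does not block the event loop while crews and LLM calls run
    return await flow.kickoff_async(inputs={"user_input": user_input})

asyncio.run(handle_request("Research CrewAI flows"))
```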
### Persistence
Use the `@persist` decorator to save the state of your Flow to a database. This allows you to resume execution if the process crashes or if you need to wait for human input.
```python
@persist
class ProductionFlow(Flow[AppState]):
    # ...
```
## Summary
- **Start with a Flow.**
- **Define a clear State.**
- **Use Crews for complex tasks.**
- **Deploy with an API and persistence.**

View File

@@ -19,6 +19,7 @@ CrewAI AMP includes a Visual Task Builder in Crew Studio that simplifies complex
![Task Builder Screenshot](/images/enterprise/crew-studio-interface.png)
The Visual Task Builder enables:
- Drag-and-drop task creation
- Visual task dependencies and flow
- Real-time testing and validation
@@ -28,10 +29,12 @@ The Visual Task Builder enables:
### Task Execution Flow
Tasks can be executed in two ways:
- **Sequential**: Tasks are executed in the order they are defined
- **Hierarchical**: Tasks are assigned to agents based on their roles and expertise
The execution flow is defined when creating the crew:
```python Code
crew = Crew(
agents=[agent1, agent2],
@@ -42,29 +45,31 @@ crew = Crew(
## Task Attributes
| Attribute | Parameters | Type | Description |
| :------------------------------------- | :---------------------- | :-------------------------- | :-------------------------------------------------------------------------------------------------------------- |
| **Description** | `description` | `str` | A clear, concise statement of what the task entails. |
| **Expected Output** | `expected_output` | `str` | A detailed description of what the task's completion looks like. |
| **Name** _(optional)_ | `name` | `Optional[str]` | A name identifier for the task. |
| **Agent** _(optional)_ | `agent` | `Optional[BaseAgent]` | The agent responsible for executing the task. |
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | The tools/resources the agent is limited to use for this task. |
| **Context** _(optional)_ | `context` | `Optional[List["Task"]]` | Other tasks whose outputs will be used as context for this task. |
| **Async Execution** _(optional)_ | `async_execution` | `Optional[bool]` | Whether the task should be executed asynchronously. Defaults to False. |
| **Human Input** _(optional)_ | `human_input` | `Optional[bool]` | Whether the task should have a human review the final answer of the agent. Defaults to False. |
| **Markdown** _(optional)_ | `markdown` | `Optional[bool]` | Whether the task should instruct the agent to return the final answer formatted in Markdown. Defaults to False. |
| **Config** _(optional)_ | `config` | `Optional[Dict[str, Any]]` | Task-specific configuration parameters. |
| **Output File** _(optional)_ | `output_file` | `Optional[str]` | File path for storing the task output. |
| **Create Directory** _(optional)_ | `create_directory` | `Optional[bool]` | Whether to create the directory for output_file if it doesn't exist. Defaults to True. |
| **Output JSON** _(optional)_ | `output_json` | `Optional[Type[BaseModel]]` | A Pydantic model to structure the JSON output. |
| **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | A Pydantic model for task output. |
| **Callback** _(optional)_ | `callback` | `Optional[Any]` | Function/object to be executed after task completion. |
| **Guardrail** _(optional)_ | `guardrail` | `Optional[Callable]` | Function to validate task output before proceeding to next task. |
| **Guardrails** _(optional)_ | `guardrails` | `Optional[List[Callable]]` | List of guardrails to validate task output before proceeding to next task. |
| **Guardrail Max Retries** _(optional)_ | `guardrail_max_retries` | `Optional[int]` | Maximum number of retries when guardrail validation fails. Defaults to 3. |
<Note type="warning" title="Deprecated: max_retries">
The task attribute `max_retries` is deprecated and will be removed in v1.0.0.
Use `guardrail_max_retries` instead to control retry attempts when a guardrail fails.
</Note>
## Creating Tasks
@@ -86,7 +91,7 @@ crew.kickoff(inputs={'topic': 'AI Agents'})
Here's an example of how to configure tasks using YAML:
````yaml tasks.yaml
research_task:
description: >
Conduct a thorough research about {topic}
@@ -106,7 +111,7 @@ reporting_task:
agent: reporting_analyst
markdown: true
output_file: report.md
````
To use this YAML configuration in your code, create a crew class that inherits from `CrewBase`:
@@ -164,7 +169,8 @@ class LatestAiDevelopmentCrew():
```
<Note>
The names you use in your YAML files (`agents.yaml` and `tasks.yaml`) should match the method names in your Python code.
</Note>
### Direct Code Definition (Alternative)
@@ -201,7 +207,8 @@ reporting_task = Task(
```
<Tip>
Directly specify an `agent` for assignment or let CrewAI's `hierarchical` process decide based on roles, availability, etc.
</Tip>
## Task Output
@@ -223,6 +230,7 @@ By default, the `TaskOutput` will only include the `raw` output. A `TaskOutput`
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the task. |
| **Agent** | `agent` | `str` | The agent that executed the task. |
| **Output Format** | `output_format` | `OutputFormat` | The format of the task output, with options including RAW, JSON, and Pydantic. The default is RAW. |
| **Messages** | `messages` | `list[LLMMessage]` | The messages from the last task execution. |
### Task Methods and Properties
@@ -285,12 +293,13 @@ formatted_task = Task(
```
When `markdown=True`, the agent will receive additional instructions to format the output using:
- `#` for headers
- `**text**` for bold text
- `*text*` for italic text
- `-` or `*` for bullet points
- `` `code` `` for inline code
- ```` ```language ``` ```` for code blocks
### YAML Configuration with Markdown
@@ -301,7 +310,7 @@ analysis_task:
expected_output: >
A comprehensive analysis with charts and key findings
agent: analyst
markdown: true # Enable markdown formatting
output_file: analysis.md
```
@@ -313,7 +322,9 @@ analysis_task:
- **Cross-Platform Compatibility**: Markdown is universally supported
<Note>
The markdown formatting instructions are automatically added to the task prompt when `markdown=True`, so you don't need to specify formatting requirements in your task description.
</Note>
## Task Dependencies and Context
@@ -341,7 +352,11 @@ Task guardrails provide a way to validate and transform task outputs before they
are passed to the next task. This feature helps ensure data quality and provides
feedback to agents when their output doesn't meet specific criteria.
CrewAI supports two types of guardrails:
1. **Function-based guardrails**: Python functions with custom validation logic, giving you complete control over the validation process and ensuring reliable, deterministic results.
2. **LLM-based guardrails**: String descriptions that use the agent's LLM to validate outputs based on natural language criteria. These are ideal for complex or subjective validation requirements.
### Function-Based Guardrails
@@ -355,12 +370,12 @@ def validate_blog_content(result: TaskOutput) -> Tuple[bool, Any]:
"""Validate blog content meets requirements."""
try:
# Check word count
word_count = len(result.split())
word_count = len(result.raw.split())
if word_count > 200:
return (False, "Blog content exceeds 200 words")
# Additional validation logic here
return (True, result.strip())
return (True, result.raw.strip())
except Exception as e:
return (False, "Unexpected error during validation")
@@ -372,9 +387,156 @@ blog_task = Task(
)
```
### LLM-Based Guardrails (String Descriptions)
Instead of writing custom validation functions, you can use string descriptions that leverage LLM-based validation. When you provide a string to the `guardrail` or `guardrails` parameter, CrewAI automatically creates an `LLMGuardrail` that uses the agent's LLM to validate the output based on your description.
**Requirements**:
- The task must have an `agent` assigned (the guardrail uses the agent's LLM)
- Provide a clear, descriptive string explaining the validation criteria
```python Code
from crewai import Task

# Single LLM-based guardrail
blog_task = Task(
    description="Write a blog post about AI",
    expected_output="A blog post under 200 words",
    agent=blog_agent,
    guardrail="The blog post must be under 200 words and contain no technical jargon"
)
```
LLM-based guardrails are particularly useful for:
- **Complex validation logic** that's difficult to express programmatically
- **Subjective criteria** like tone, style, or quality assessments
- **Natural language requirements** that are easier to describe than code
The LLM guardrail will:
1. Analyze the task output against your description
2. Return `(True, output)` if the output complies with the criteria
3. Return `(False, feedback)` with specific feedback if validation fails
**Example with detailed validation criteria**:
```python Code
research_task = Task(
    description="Research the latest developments in quantum computing",
    expected_output="A comprehensive research report",
    agent=researcher_agent,
    guardrail="""
    The research report must:
    - Be at least 1000 words long
    - Include at least 5 credible sources
    - Cover both technical and practical applications
    - Be written in a professional, academic tone
    - Avoid speculation or unverified claims
    """
)
```
### Multiple Guardrails
You can apply multiple guardrails to a task using the `guardrails` parameter. Multiple guardrails are executed sequentially, with each guardrail receiving the output from the previous one. This allows you to chain validation and transformation steps.
The `guardrails` parameter accepts:
- A list of guardrail functions or string descriptions
- A single guardrail function or string (same as `guardrail`)
**Note**: If `guardrails` is provided, it takes precedence over `guardrail`. The `guardrail` parameter will be ignored when `guardrails` is set.
```python Code
from typing import Tuple, Any

from crewai import TaskOutput, Task

def validate_word_count(result: TaskOutput) -> Tuple[bool, Any]:
    """Validate word count is within limits."""
    word_count = len(result.raw.split())
    if word_count < 100:
        return (False, f"Content too short: {word_count} words. Need at least 100 words.")
    if word_count > 500:
        return (False, f"Content too long: {word_count} words. Maximum is 500 words.")
    return (True, result.raw)

def validate_no_profanity(result: TaskOutput) -> Tuple[bool, Any]:
    """Check for inappropriate language."""
    profanity_words = ["badword1", "badword2"]  # Example list
    content_lower = result.raw.lower()
    for word in profanity_words:
        if word in content_lower:
            return (False, f"Inappropriate language detected: {word}")
    return (True, result.raw)

def format_output(result: TaskOutput) -> Tuple[bool, Any]:
    """Format and clean the output."""
    formatted = result.raw.strip()
    # Capitalize first letter
    formatted = formatted[0].upper() + formatted[1:] if formatted else formatted
    return (True, formatted)

# Apply multiple guardrails sequentially
blog_task = Task(
    description="Write a blog post about AI",
    expected_output="A well-formatted blog post between 100-500 words",
    agent=blog_agent,
    guardrails=[
        validate_word_count,    # First: validate length
        validate_no_profanity,  # Second: check content
        format_output           # Third: format the result
    ],
    guardrail_max_retries=3
)
```
In this example, the guardrails execute in order:
1. `validate_word_count` checks the word count
2. `validate_no_profanity` checks for inappropriate language (using the output from step 1)
3. `format_output` formats the final result (using the output from step 2)
If any guardrail fails, the error is sent back to the agent, and the task is retried up to `guardrail_max_retries` times.
**Mixing function-based and LLM-based guardrails**:
You can combine both function-based and string-based guardrails in the same list:
```python Code
from typing import Tuple, Any

from crewai import TaskOutput, Task

def validate_word_count(result: TaskOutput) -> Tuple[bool, Any]:
    """Validate word count is within limits."""
    word_count = len(result.raw.split())
    if word_count < 100:
        return (False, f"Content too short: {word_count} words. Need at least 100 words.")
    if word_count > 500:
        return (False, f"Content too long: {word_count} words. Maximum is 500 words.")
    return (True, result.raw)

# Mix function-based and LLM-based guardrails
blog_task = Task(
    description="Write a blog post about AI",
    expected_output="A well-formatted blog post between 100-500 words",
    agent=blog_agent,
    guardrails=[
        validate_word_count,  # Function-based: precise word count check
        "The content must be engaging and suitable for a general audience",  # LLM-based: subjective quality check
        "The writing style should be clear, concise, and free of technical jargon"  # LLM-based: style validation
    ],
    guardrail_max_retries=3
)
```
This approach combines the precision of programmatic validation with the flexibility of LLM-based assessment for subjective criteria.
### Guardrail Function Requirements
1. **Function Signature**:
- Must accept exactly one parameter (the task output)
- Should return a tuple of `(bool, Any)`
- Type hints are recommended but optional
@@ -383,11 +545,10 @@ blog_task = Task(
- On success: return a tuple of `(bool, Any)`. For example: `(True, validated_result)`
- On failure: return a tuple of `(bool, str)`. For example: `(False, "Error message explaining the failure")`
### Error Handling Best Practices
1. **Structured Error Responses**:
```python Code
from crewai import TaskOutput, LLMGuardrail
@@ -403,11 +564,13 @@ def validate_with_context(result: TaskOutput) -> Tuple[bool, Any]:
```
2. **Error Categories**:
- Use specific error codes
- Include relevant context
- Provide actionable feedback
3. **Validation Chain**:
```python Code
from typing import Any, Dict, List, Tuple, Union
from crewai import TaskOutput
@@ -434,6 +597,7 @@ def complex_validation(result: TaskOutput) -> Tuple[bool, Any]:
### Handling Guardrail Results
When a guardrail returns `(False, error)`:
1. The error is sent back to the agent
2. The agent attempts to fix the issue
3. The process repeats until:
@@ -441,6 +605,7 @@ When a guardrail returns `(False, error)`:
- Maximum retries are reached (`guardrail_max_retries`)
Example with retry handling:
```python Code
from typing import Optional, Tuple, Union
from crewai import TaskOutput, Task
@@ -466,10 +631,12 @@ task = Task(
## Getting Structured Consistent Outputs from Tasks
<Note>
It's also important to note that the output of the final task of a crew becomes the final output of the actual crew itself.
</Note>
### Using `output_pydantic`
The `output_pydantic` property allows you to define a Pydantic model that the task output should conform to. This ensures that the output is not only structured but also validated according to the Pydantic model.
Here's an example demonstrating how to use output_pydantic:
@@ -539,18 +706,22 @@ print("Accessing Properties - Option 5")
print("Blog:", result)
```
In this example:
- A Pydantic model Blog is defined with title and content fields.
- The task task1 uses the output_pydantic property to specify that its output should conform to the Blog model.
- After executing the crew, you can access the structured output in multiple ways as shown.
#### Explanation of Accessing the Output
1. Dictionary-Style Indexing: You can directly access the fields using `result["field_name"]`. This works because the `CrewOutput` class implements the `__getitem__` method.
2. Directly from Pydantic Model: Access the attributes directly from the result.pydantic object.
3. Using to_dict() Method: Convert the output to a dictionary and access the fields.
4. Printing the Entire Object: Simply print the result object to see the structured output.
### Using `output_json`
The `output_json` property allows you to define the expected output in JSON format. This ensures that the task's output is a valid JSON structure that can be easily parsed and used in your application.
Here's an example demonstrating how to use `output_json`:
@@ -610,14 +781,15 @@ print("Blog:", result)
```
In this example:
- A Pydantic model Blog is defined with title and content fields, which is used to specify the structure of the JSON output.
- The task task1 uses the output_json property to indicate that it expects a JSON output conforming to the Blog model.
- After executing the crew, you can access the structured JSON output in two ways as shown.
#### Explanation of Accessing the Output
1. Accessing Properties Using Dictionary-Style Indexing: You can access the fields directly using `result["field_name"]`. This is possible because the `CrewOutput` class implements the `__getitem__` method, allowing you to treat the output like a dictionary. In this option, we're retrieving the title and content from the result.
2. Printing the Entire Blog Object: By printing result, you get the string representation of the `CrewOutput` object. Since the `__str__` method is implemented to return the JSON output, this will display the entire output as a formatted string representing the Blog object.
---
@@ -807,8 +979,6 @@ While creating and executing tasks, certain validation mechanisms are in place t
These validations help in maintaining the consistency and reliability of task executions within the crewAI framework.
## Creating Directories when Saving Files
The `create_directory` parameter controls whether CrewAI should automatically create directories when saving task outputs to files. This feature is particularly useful for organizing outputs and ensuring that file paths are correctly structured, especially when working with complex project hierarchies.
@@ -855,7 +1025,7 @@ analysis_task:
A comprehensive financial report with quarterly insights
agent: financial_analyst
output_file: reports/quarterly/q4_2024_analysis.pdf
create_directory: true # Automatically create 'reports/quarterly/' directory
audit_task:
description: >
@@ -864,18 +1034,20 @@ audit_task:
A compliance audit report
agent: auditor
output_file: audit/compliance_report.md
create_directory: false # Directory must already exist
```
### Use Cases
**Automatic Directory Creation (`create_directory=True`):**
- Development and prototyping environments
- Dynamic report generation with date-based folders
- Automated workflows where directory structure may vary
- Multi-tenant applications with user-specific folders
**Manual Directory Management (`create_directory=False`):**
- Production environments with strict file system controls
- Security-sensitive applications where directories must be pre-configured
- Systems with specific permission requirements

View File

@@ -9,9 +9,20 @@ mode: "wide"
Testing is a crucial part of the development process, and it is essential to ensure that your crew is performing as expected. With crewAI, you can easily test your crew and evaluate its performance using the built-in testing capabilities.
## When to Use Testing
- Before promoting a crew to production.
- After changing prompts, tools, or model configurations.
- When benchmarking quality/cost/latency tradeoffs.
## When Not to Rely on Testing Alone
- For safety-critical deployments without human review gates.
- When test datasets are too small or unrepresentative.
### Using the Testing Feature
Use the CLI command `crewai test` to run repeated crew executions and compare outputs across iterations. The parameters are `n_iterations` and `model`, which are optional and default to `2` and `gpt-4o-mini`.
```bash
crewai test
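
# Optionally pin iterations and the evaluation model
# (a sketch; flag spellings may vary by CLI version)
crewai test --n_iterations 5 --model gpt-4o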
@@ -47,3 +58,13 @@ A table of scores at the end will show the performance of the crew in terms of t
| Execution Time (s) | 126 | 145 | **135** | | |
The example above shows the test results for two runs of the crew with two tasks, with the average total score for each task and the crew as a whole.
## Common Failure Modes
### Scores fluctuate too much between runs
- Cause: high sampling randomness or unstable prompts.
- Fix: lower temperature and tighten output constraints.
### Good test scores but poor production quality
- Cause: test prompts do not match real workload.
- Fix: build a representative test set from real production inputs.

View File

@@ -10,6 +10,17 @@ mode: "wide"
CrewAI tools empower agents with capabilities ranging from web searching and data analysis to collaboration and delegating tasks among coworkers.
This documentation outlines how to create, integrate, and leverage these tools within the CrewAI framework, including a new focus on collaboration tools.
## When to Use Tools
- Agents need external data or side effects.
- You need deterministic actions wrapped in reusable interfaces.
- You need to connect APIs, files, databases, or browser actions into agent workflows.
## When Not to Use Tools
- The task can be solved entirely from prompt context.
- The external side effect cannot be made safe or idempotent.
## What is a Tool?
A tool in CrewAI is a skill or function that agents can utilize to perform various actions.
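For example, a small custom tool can be declared with the `@tool` decorator (a minimal sketch; the weather lookup is a hypothetical stand-in for a real API call):
```python
from crewai.tools import tool

@tool("Weather Lookup")
def weather_lookup(city: str) -> str:
    """Return a short weather summary for the given city."""
    # Hypothetical placeholder; a real tool would call an external API here
    return f"The weather in {city} is sunny, 22°C."
```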
@@ -20,6 +31,7 @@ enabling everything from simple searches to complex interactions and effective t
CrewAI AMP provides a comprehensive Tools Repository with pre-built integrations for common business systems and APIs. Deploy agents with enterprise tools in minutes instead of days.
The Enterprise Tools Repository includes:
- Pre-built connectors for popular enterprise systems
- Custom tool creation interface
- Version control and sharing capabilities
@@ -284,3 +296,17 @@ writer1 = Agent(
Tools are pivotal in extending the capabilities of CrewAI agents, enabling them to undertake a broad spectrum of tasks and collaborate effectively.
When building solutions with CrewAI, leverage both custom and existing tools to empower your agents and enhance the AI ecosystem. Consider utilizing error handling,
caching mechanisms, and the flexibility of tool arguments to optimize your agents' performance and capabilities.
## Common Failure Modes
### Tool schema mismatch
- Cause: model-generated arguments do not match tool signature.
- Fix: tighten tool descriptions and validate input schemas.
### Repeated side effects
- Cause: retries trigger duplicate writes/actions.
- Fix: add idempotency keys and deduplication checks in tool logic.
### Tool timeouts under load
- Cause: unbounded retries or slow external services.
- Fix: set explicit timeout/retry policy and graceful fallbacks.

View File

@@ -0,0 +1,558 @@
---
title: "Flow HITL Management"
description: "Enterprise-grade human review for Flows with email-first notifications, routing rules, and auto-response capabilities"
icon: "users-gear"
mode: "wide"
---
<Note>
Flow HITL Management features require the `@human_feedback` decorator, available in **CrewAI version 1.8.0 or higher**. These features apply specifically to **Flows**, not Crews.
</Note>
CrewAI Enterprise provides a comprehensive Human-in-the-Loop (HITL) management system for Flows that transforms AI workflows into collaborative human-AI processes. The platform uses an **email-first architecture** that enables anyone with an email address to respond to review requests—no platform account required.
## Overview
<CardGroup cols={3}>
<Card title="Email-First Design" icon="envelope">
Responders can reply directly to notification emails to provide feedback
</Card>
<Card title="Flexible Routing" icon="route">
Route requests to specific emails based on method patterns or flow state
</Card>
<Card title="Auto-Response" icon="clock">
Configure automatic fallback responses when no human replies in time
</Card>
</CardGroup>
### Key Benefits
- **Simple mental model**: Email addresses are universal; no need to manage platform users or roles
- **External responders**: Anyone with an email can respond, even non-platform users
- **Dynamic assignment**: Pull assignee email directly from flow state (e.g., `sales_rep_email`)
- **Reduced configuration**: Fewer settings to configure, faster time to value
- **Email as primary channel**: Most users prefer responding via email over logging into a dashboard
## Setting Up Human Review Points in Flows
Configure human review checkpoints within your Flows using the `@human_feedback` decorator. When execution reaches a review point, the system pauses, notifies the assignee via email, and waits for a response.
```python
from crewai.flow.flow import Flow, start, listen, or_
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult

class ContentApprovalFlow(Flow):
    @start()
    def generate_content(self):
        return "Generated marketing copy for Q1 campaign..."

    @human_feedback(
        message="Please review this content for brand compliance:",
        emit=["approved", "rejected", "needs_revision"],
    )
    @listen(or_("generate_content", "needs_revision"))
    def review_content(self):
        return "Marketing copy for review..."

    @listen("approved")
    def publish_content(self, result: HumanFeedbackResult):
        print(f"Publishing approved content. Reviewer notes: {result.feedback}")

    @listen("rejected")
    def archive_content(self, result: HumanFeedbackResult):
        print(f"Content rejected. Reason: {result.feedback}")
```
For complete implementation details, see the [Human Feedback in Flows](/en/learn/human-feedback-in-flows) guide.
### Decorator Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `message` | `str` | The message displayed to the human reviewer |
| `emit` | `list[str]` | Valid response options (displayed as buttons in UI) |
## Platform Configuration
Access HITL configuration from: **Deployment → Settings → Human in the Loop Configuration**
<Frame>
<img src="/images/enterprise/hitl-settings-overview.png" alt="HITL Configuration Settings" />
</Frame>
### Email Notifications
Toggle to enable or disable email notifications for HITL requests.
| Setting | Default | Description |
|---------|---------|-------------|
| Email Notifications | Enabled | Send emails when feedback is requested |
<Note>
When disabled, responders must use the dashboard UI or you must configure webhooks for custom notification systems.
</Note>
### SLA Target
Set a target response time for tracking and metrics purposes.
| Setting | Description |
|---------|-------------|
| SLA Target (minutes) | Target response time. Used for dashboard metrics and SLA tracking |
Leave empty to disable SLA tracking.
## Email Notifications & Responses
The HITL system uses an email-first architecture where responders can reply directly to notification emails.
### How Email Responses Work
<Steps>
<Step title="Notification Sent">
When a HITL request is created, an email is sent to the assigned responder with the review content and context.
</Step>
<Step title="Reply-To Address">
The email includes a special reply-to address with a signed token for authentication.
</Step>
<Step title="User Replies">
The responder simply replies to the email with their feedback—no login required.
</Step>
<Step title="Token Validation">
The platform receives the reply, verifies the signed token, and matches the sender email.
</Step>
<Step title="Flow Resumes">
The feedback is recorded and the flow continues with the human's input.
</Step>
</Steps>
### Response Format
Responders can reply with:
- **Emit option**: If the reply matches an `emit` option (e.g., "approved"), it's used directly
- **Free-form text**: Any text response is passed to the flow as feedback
- **Plain text**: The first line of the reply body is used as feedback
### Confirmation Emails
After processing a reply, the responder receives a confirmation email indicating whether the feedback was successfully submitted or if an error occurred.
### Email Token Security
- Tokens are cryptographically signed for security
- Tokens expire after 7 days
- Sender email must match the token's authorized email
- Confirmation/error emails are sent after processing
## Routing Rules
Route HITL requests to specific email addresses based on method patterns.
<Frame>
<img src="/images/enterprise/hitl-settings-routing-rules.png" alt="HITL Routing Rules Configuration" />
</Frame>
### Rule Structure
```json
{
  "name": "Approvals to Finance",
  "match": {
    "method_name": "approve_*"
  },
  "assign_to_email": "finance@company.com",
  "assign_from_input": "manager_email"
}
```
### Matching Patterns
| Pattern | Description | Example Match |
|---------|-------------|---------------|
| `approve_*` | Wildcard (any chars) | `approve_payment`, `approve_vendor` |
| `review_?` | Single char | `review_a`, `review_1` |
| `validate_payment` | Exact match | `validate_payment` only |
### Assignment Priority
1. **Dynamic assignment** (`assign_from_input`): If configured, pulls email from flow state
2. **Static email** (`assign_to_email`): Falls back to configured email
3. **Deployment creator**: If no rule matches, the deployment creator's email is used
### Dynamic Assignment Example
If your flow state contains `{"sales_rep_email": "alice@company.com"}`, configure:
```json
{
  "name": "Route to Sales Rep",
  "match": {
    "method_name": "review_*"
  },
  "assign_from_input": "sales_rep_email"
}
```
The request will be assigned to `alice@company.com` automatically.
<Tip>
**Use Case**: Pull the assignee from your CRM, database, or previous flow step to dynamically route reviews to the right person.
</Tip>
## Auto-Response
Automatically respond to HITL requests if no human responds within a timeout. This ensures flows don't hang indefinitely.
### Configuration
| Setting | Description |
|---------|-------------|
| Enabled | Toggle to enable auto-response |
| Timeout (minutes) | Time to wait before auto-responding |
| Default Outcome | The response value (must match an `emit` option) |
<Frame>
<img src="/images/enterprise/hitl-settings-auto-respond.png" alt="HITL Auto-Response Configuration" />
</Frame>
### Use Cases
- **SLA compliance**: Ensure flows don't hang indefinitely
- **Default approval**: Auto-approve low-risk requests after timeout
- **Graceful degradation**: Continue with a safe default when reviewers are unavailable
<Warning>
Use auto-response carefully. Only enable it for non-critical reviews where a default response is acceptable.
</Warning>
## Review Process
### Dashboard Interface
The HITL review interface provides a clean, focused experience for reviewers:
- **Markdown Rendering**: Rich formatting for review content with syntax highlighting
- **Context Panel**: View flow state, execution history, and related information
- **Feedback Input**: Provide detailed feedback and comments with your decision
- **Quick Actions**: One-click emit option buttons with optional comments
<Frame>
<img src="/images/enterprise/hitl-list-pending-feedbacks.png" alt="HITL Pending Requests List" />
</Frame>
### Response Methods
Reviewers can respond via three channels:
| Method | Description |
|--------|-------------|
| **Email Reply** | Reply directly to the notification email |
| **Dashboard** | Use the Enterprise dashboard UI |
| **API/Webhook** | Programmatic response via API |
### History & Audit Trail
Every HITL interaction is tracked with a complete timeline:
- Decision history (approve/reject/revise)
- Reviewer identity and timestamp
- Feedback and comments provided
- Response method (email/dashboard/API)
- Response time metrics
## Analytics & Monitoring
Track HITL performance with comprehensive analytics.
### Performance Dashboard
<Frame>
<img src="/images/enterprise/hitl-metrics.png" alt="HITL Metrics Dashboard" />
</Frame>
<CardGroup cols={2}>
<Card title="Response Times" icon="stopwatch">
Monitor average and median response times by reviewer or flow.
</Card>
<Card title="Volume Trends" icon="chart-bar">
Analyze review volume patterns to optimize team capacity.
</Card>
<Card title="Decision Distribution" icon="chart-pie">
View approval/rejection rates across different review types.
</Card>
<Card title="SLA Tracking" icon="chart-line">
Track percentage of reviews completed within SLA targets.
</Card>
</CardGroup>
### Audit & Compliance
Enterprise-ready audit capabilities for regulatory requirements:
- Complete decision history with timestamps
- Reviewer identity verification
- Immutable audit logs
- Export capabilities for compliance reporting
## Common Use Cases
<AccordionGroup>
<Accordion title="Security Reviews" icon="shield-halved">
**Use Case**: Internal security questionnaire automation with human validation
- AI generates responses to security questionnaires
- Security team reviews and validates accuracy via email
- Approved responses are compiled into final submission
- Full audit trail for compliance
</Accordion>
<Accordion title="Content Approval" icon="file-lines">
**Use Case**: Marketing content requiring legal/brand review
- AI generates marketing copy or social media content
- Route to brand team email for voice/tone review
- Automatic publishing upon approval
</Accordion>
<Accordion title="Financial Approvals" icon="money-bill">
**Use Case**: Expense reports, contract terms, budget allocations
- AI pre-processes and categorizes financial requests
- Route based on amount thresholds using dynamic assignment
- Maintain complete audit trail for financial compliance
</Accordion>
<Accordion title="Dynamic Assignment from CRM" icon="database">
**Use Case**: Route reviews to account owners from your CRM
- Flow fetches account owner email from CRM
- Store email in flow state (e.g., `account_owner_email`)
- Use `assign_from_input` to route to the right person automatically
</Accordion>
<Accordion title="Quality Assurance" icon="magnifying-glass">
**Use Case**: AI output validation before customer delivery
- AI generates customer-facing content or responses
- QA team reviews via email notification
- Feedback loops improve AI performance over time
</Accordion>
</AccordionGroup>
## Webhooks API
When your Flows pause for human feedback, you can configure webhooks to send request data to your own application. This enables:
- Building custom approval UIs
- Integrating with internal tools (Jira, ServiceNow, custom dashboards)
- Routing approvals to third-party systems
- Mobile app notifications
- Automated decision systems
<Frame>
<img src="/images/enterprise/hitl-settings-webhook.png" alt="HITL Webhook Configuration" />
</Frame>
### Configuring Webhooks
<Steps>
<Step title="Navigate to Settings">
Go to your **Deployment** → **Settings** → **Human in the Loop**
</Step>
<Step title="Expand Webhooks Section">
Click to expand the **Webhooks** configuration
</Step>
<Step title="Add Your Webhook URL">
Enter your webhook URL (must be HTTPS in production)
</Step>
<Step title="Save Configuration">
Click **Save Configuration** to activate
</Step>
</Steps>
You can configure multiple webhooks. Each active webhook receives all HITL events.
### Webhook Events
Your endpoint will receive HTTP POST requests for these events:
| Event Type | When Triggered |
|------------|----------------|
| `new_request` | A flow pauses and requests human feedback |
### Webhook Payload
All webhooks receive a JSON payload with this structure:
```json
{
  "event": "new_request",
  "request": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "flow_id": "flow_abc123",
    "method_name": "review_article",
    "message": "Please review this article for publication.",
    "emit_options": ["approved", "rejected", "request_changes"],
    "state": {
      "article_id": 12345,
      "author": "john@example.com",
      "category": "technology"
    },
    "metadata": {},
    "created_at": "2026-01-14T12:00:00Z"
  },
  "deployment": {
    "id": 456,
    "name": "Content Review Flow",
    "organization_id": 789
  },
  "callback_url": "https://api.crewai.com/...",
  "assigned_to_email": "reviewer@company.com"
}
```
### Responding to Requests
To submit feedback, **POST to the `callback_url`** included in the webhook payload.
```http
POST {callback_url}
Content-Type: application/json
{
  "feedback": "Approved. Great article!",
  "source": "my_custom_app"
}
```
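For example, a custom approval UI could forward the reviewer's decision with a plain HTTP POST (a sketch using `requests`; `payload` stands for the parsed webhook body):
```python
import requests

def submit_feedback(callback_url: str, feedback: str) -> None:
    # POST the decision back to the flow's callback endpoint
    response = requests.post(
        callback_url,
        json={"feedback": feedback, "source": "my_custom_app"},
        timeout=10,
    )
    response.raise_for_status()

# `payload` is the parsed JSON body of the webhook request
submit_feedback(payload["callback_url"], "Approved. Great article!")
```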
### Security
<Info>
All webhook requests are cryptographically signed using HMAC-SHA256 to ensure authenticity and prevent tampering.
</Info>
#### Webhook Security
- **HMAC-SHA256 signatures**: Every webhook includes a cryptographic signature
- **Per-webhook secrets**: Each webhook has its own unique signing secret
- **Encrypted at rest**: Signing secrets are encrypted in our database
- **Timestamp verification**: Prevents replay attacks
#### Signature Headers
Each webhook request includes these headers:
| Header | Description |
|--------|-------------|
| `X-Signature` | HMAC-SHA256 signature: `sha256=<hex_digest>` |
| `X-Timestamp` | Unix timestamp when the request was signed |
#### Verification
Verify by computing:
```python
import hashlib
import hmac

expected = hmac.new(
    signing_secret.encode(),
    f"{timestamp}.{payload}".encode(),
    hashlib.sha256
).hexdigest()

if hmac.compare_digest(expected, signature):
    ...  # Valid signature; safe to process the payload
```
### Error Handling
Your webhook endpoint should return a 2xx status code to acknowledge receipt:
| Your Response | Our Behavior |
|---------------|--------------|
| 2xx | Webhook delivered successfully |
| 4xx/5xx | Logged as failed, no retry |
| Timeout (30s) | Logged as failed, no retry |
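A minimal acknowledging endpoint might look like the following (a sketch using Flask; add the HMAC verification shown above before trusting the payload):
```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/hitl-webhook", methods=["POST"])
def hitl_webhook():
    event = request.get_json()
    # Acknowledge quickly with a 2xx; do slow work out of band
    print(f"New HITL request: {event['request']['id']}")
    return {"status": "received"}, 200
```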
## Security & RBAC
### Dashboard Access
HITL access is controlled at the deployment level:
| Permission | Capability |
|------------|------------|
| `manage_human_feedback` | Configure HITL settings, view all requests |
| `respond_to_human_feedback` | Respond to requests, view assigned requests |
### Email Response Authorization
For email replies:
1. The reply-to token encodes the authorized email
2. Sender email must match the token's email
3. Token must not be expired (7-day default)
4. Request must still be pending
### Audit Trail
All HITL actions are logged:
- Request creation
- Assignment changes
- Response submission (with source: dashboard/email/API)
- Flow resume status
## Troubleshooting
### Emails Not Sending
1. Check "Email Notifications" is enabled in configuration
2. Verify routing rules match the method name
3. Verify assignee email is valid
4. Check deployment creator fallback if no routing rules match
### Email Replies Not Processing
1. Check token hasn't expired (7-day default)
2. Verify sender email matches assigned email
3. Ensure request is still pending (not already responded)
### Flow Not Resuming
1. Check request status in dashboard
2. Verify callback URL is accessible
3. Ensure deployment is still running
## Best Practices
<Tip>
**Start Simple**: Begin with email notifications to deployment creator, then add routing rules as your workflows mature.
</Tip>
1. **Use Dynamic Assignment**: Pull assignee emails from your flow state for flexible routing.
2. **Configure Auto-Response**: Set up a fallback for non-critical reviews to prevent flows from hanging.
3. **Monitor Response Times**: Use analytics to identify bottlenecks and optimize your review process.
4. **Keep Review Messages Clear**: Write clear, actionable messages in the `@human_feedback` decorator.
5. **Test Email Flow**: Send test requests to verify email delivery before going to production.
## Related Resources
<CardGroup cols={2}>
<Card title="Human Feedback in Flows" icon="code" href="/en/learn/human-feedback-in-flows">
Implementation guide for the `@human_feedback` decorator
</Card>
<Card title="Flow HITL Workflow Guide" icon="route" href="/en/enterprise/guides/human-in-the-loop">
Step-by-step guide for setting up HITL workflows
</Card>
<Card title="RBAC Configuration" icon="shield-check" href="/en/enterprise/features/rbac">
Configure role-based access control for your organization
</Card>
<Card title="Webhook Streaming" icon="bolt" href="/en/enterprise/features/webhook-streaming">
Set up real-time event notifications
</Card>
</CardGroup>

View File

@@ -37,7 +37,7 @@ you can use them locally or refine them to your needs.
<Card title="Tools & Integrations" href="/en/enterprise/features/tools-and-integrations" icon="wrench">
Connect external apps and manage internal tools your agents can use.
</Card>
<Card title="Tool Repository" href="/en/enterprise/features/tool-repository" icon="toolbox">
<Card title="Tool Repository" href="/en/enterprise/guides/tool-repository#tool-repository" icon="toolbox">
Publish and install tools to enhance your crews' capabilities.
</Card>
<Card title="Agents Repository" href="/en/enterprise/features/agent-repositories" icon="people-group">

View File

@@ -0,0 +1,342 @@
---
title: PII Redaction for Traces
description: "Automatically redact sensitive data from crew and flow execution traces"
icon: "lock"
mode: "wide"
---
## Overview
PII Redaction is a CrewAI AMP feature that automatically detects and masks Personally Identifiable Information (PII) in your crew and flow execution traces. This ensures sensitive data like credit card numbers, social security numbers, email addresses, and names are not exposed in your CrewAI AMP traces. You can also create custom recognizers to protect organization-specific data.
<Info>
PII Redaction is available on the Enterprise plan.
Deployment must be version 1.8.0 or higher.
</Info>
<Frame>
![PII Redaction Overview](/images/enterprise/pii_mask_recognizer_trace_example.png)
</Frame>
## Why PII Redaction Matters
When running AI agents in production, sensitive information often flows through your crews:
- Customer data from CRM integrations
- Financial information from payment processors
- Personal details from form submissions
- Internal employee data
Without proper redaction, this data appears in traces, making compliance with regulations like GDPR, HIPAA, and PCI-DSS challenging. PII Redaction solves this by automatically masking sensitive data before it's stored in traces.
## How It Works
1. **Detect** - Scan trace event data for known PII patterns
2. **Classify** - Identify the type of sensitive data (credit card, SSN, email, etc.)
3. **Mask/Redact** - Replace the sensitive data with masked values based on your configuration
```
Original: "Contact john.doe@company.com or call 555-123-4567"
Redacted: "Contact <EMAIL_ADDRESS> or call <PHONE_NUMBER>"
```
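As a rough, self-contained illustration of this detect-and-mask loop (not the production pipeline, which combines pattern matching with NLP models), a regex-only masker could look like the sketch below; the two patterns are simplified stand-ins, not the recognizers CrewAI AMP actually ships:
```python
import re

# Simplified stand-in patterns -- illustration only, not CrewAI AMP's recognizers
PATTERNS = {
    "EMAIL_ADDRESS": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE_NUMBER": re.compile(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}"),
}

def mask(text: str) -> str:
    """Replace each detected entity with its type label, e.g. <EMAIL_ADDRESS>."""
    for entity, pattern in PATTERNS.items():
        text = pattern.sub(f"<{entity}>", text)
    return text

print(mask("Contact john.doe@company.com or call 555-123-4567"))
# Contact <EMAIL_ADDRESS> or call <PHONE_NUMBER>
```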
## Enabling PII Redaction
<Info>
You must be on the Enterprise plan and your deployment must be version 1.8.0 or higher to use this feature.
</Info>
<Steps>
<Step title="Navigate to Crew Settings">
In the CrewAI AMP dashboard, open the deployment/automation you want to protect, then navigate to **Settings** → **PII Protection**.
</Step>
<Step title="Enable PII Protection">
Toggle on **PII Redaction for Traces**. This will enable automatic scanning and redaction of trace data.
<Info>
You need to manually enable PII Redaction for each deployment.
</Info>
<Frame>
![Enable PII Redaction](/images/enterprise/pii_mask_recognizer_enable.png)
</Frame>
</Step>
<Step title="Configure Entity Types">
Select which types of PII to detect and redact. Each entity can be individually enabled or disabled.
<Frame>
![Configure Entities](/images/enterprise/pii_mask_recognizer_supported_entities.png)
</Frame>
</Step>
<Step title="Save">
Save your configuration. PII redaction will be active on all subsequent crew executions; no redeployment is needed.
</Step>
</Steps>
## Supported Entity Types
CrewAI supports the following PII entity types, organized by category.
### Global Entities
| Entity | Description | Example |
|--------|-------------|---------|
| `CREDIT_CARD` | Credit/debit card numbers | "4111-1111-1111-1111" |
| `CRYPTO` | Cryptocurrency wallet addresses | "bc1qxy2kgd..." |
| `DATE_TIME` | Dates and times | "January 15, 2024" |
| `EMAIL_ADDRESS` | Email addresses | "john@example.com" |
| `IBAN_CODE` | International bank account numbers | "DE89 3704 0044 0532 0130 00" |
| `IP_ADDRESS` | IPv4 and IPv6 addresses | "192.168.1.1" |
| `LOCATION` | Geographic locations | "New York City" |
| `MEDICAL_LICENSE` | Medical license numbers | "MD12345" |
| `NRP` | Nationalities, religious, or political groups | - |
| `PERSON` | Personal names | "John Doe" |
| `PHONE_NUMBER` | Phone numbers in various formats | "+1 (555) 123-4567" |
| `URL` | Web URLs | "https://example.com" |
### US-Specific Entities
| Entity | Description | Example |
|--------|-------------|---------|
| `US_BANK_NUMBER` | US Bank account numbers | "1234567890" |
| `US_DRIVER_LICENSE` | US Driver's license numbers | "D1234567" |
| `US_ITIN` | Individual Taxpayer ID | "900-70-0000" |
| `US_PASSPORT` | US Passport numbers | "123456789" |
| `US_SSN` | Social Security Numbers | "123-45-6789" |
## Redaction Actions
For each enabled entity, you can configure how the data is redacted:
| Action | Description | Example Output |
|--------|-------------|----------------|
| `mask` | Replace with the entity type label | `<CREDIT_CARD>` |
| `redact` | Completely remove the text | *(empty)* |
## Custom Recognizers
In addition to built-in entities, you can create **custom recognizers** to detect organization-specific PII patterns.
<Frame>
![Custom Recognizers](/images/enterprise/pii_mask_recognizer.png)
</Frame>
### Recognizer Types
You have two options for custom recognizers:
| Type | Best For | Example Use Case |
|------|----------|------------------|
| **Pattern-based (Regex)** | Structured data with predictable formats | Salary amounts, employee IDs, project codes |
| **Deny-list** | Exact string matches | Company names, internal codenames, specific terms |
### Creating a Custom Recognizer
<Steps>
<Step title="Navigate to Custom Recognizers">
Go to your organization's **Settings** → **Organization** → **Add Recognizer**.
</Step>
<Step title="Configure the Recognizer">
<Frame>
![Configure Recognizer](/images/enterprise/pii_mask_recognizer_create.png)
</Frame>
Configure the following fields:
- **Name**: A descriptive name for the recognizer
- **Entity Type**: The entity label that will appear in redacted output (e.g., `EMPLOYEE_ID`, `SALARY`)
- **Type**: Choose between Regex Pattern or Deny List
- **Pattern/Values**: Regex pattern or list of strings to match
- **Confidence Threshold**: Minimum score (0.0-1.0) required for a match to trigger redaction. Higher values (e.g., 0.8) reduce false positives but may miss some matches. Lower values (e.g., 0.5) catch more matches but may over-redact. Default is 0.8.
- **Context Words** (optional): Words that increase detection confidence when found nearby
</Step>
<Step title="Save">
Save the recognizer. It will be available to enable on your deployments.
</Step>
</Steps>
### Understanding Entity Types
The **Entity Type** determines how matched content appears in redacted traces:
```
Entity Type: SALARY
Pattern: salary:\s*\$\s*[\d,]+
Input: "Employee salary: $50,000"
Output: "Employee <SALARY>"
```
### Using Context Words
Context words improve accuracy by increasing confidence when specific terms appear near the matched pattern:
```
Context Words: "project", "code", "internal"
Entity Type: PROJECT_CODE
Pattern: PRJ-\d{4}
```
When "project" or "code" appears near "PRJ-1234", the recognizer has higher confidence it's a true match, reducing false positives.
## Viewing Redacted Traces
Once PII redaction is enabled, your traces will show redacted values in place of sensitive data:
```
Task Output: "Customer <PERSON> placed order #12345.
Contact email: <EMAIL_ADDRESS>, phone: <PHONE_NUMBER>.
Payment processed for card ending in <CREDIT_CARD>."
```
Redacted values are clearly marked with angle brackets and the entity type label (e.g., `<EMAIL_ADDRESS>`), making it easy to understand what data was protected while still allowing you to debug and monitor crew behavior.
## Best Practices
### Performance Considerations
<Steps>
<Step title="Enable Only Needed Entities">
Each enabled entity adds processing overhead. Only enable entities relevant to your data.
</Step>
<Step title="Use Specific Patterns">
For custom recognizers, use specific patterns to reduce false positives and improve performance. Regex patterns work best for structured values in traces, such as salaries, employee IDs, and project codes; deny-list recognizers work best for exact strings, such as company names and internal codenames.
</Step>
<Step title="Leverage Context Words">
Context words improve accuracy by only triggering detection when surrounding text matches.
</Step>
</Steps>
## Troubleshooting
<Accordion title="PII Not Being Redacted">
**Possible Causes:**
- Entity type not enabled in configuration
- Pattern doesn't match the data format
- Custom recognizer has syntax errors
**Solutions:**
- Verify the entity is enabled in **Settings** → **PII Protection**
- Test regex patterns with sample data
- Check logs for configuration errors
</Accordion>
<Accordion title="Too Much Data Being Redacted">
**Possible Causes:**
- Overly broad entity types enabled (e.g., `DATE_TIME` catches dates everywhere)
- Custom recognizer patterns are too general
**Solutions:**
- Disable entities that cause false positives
- Make custom patterns more specific
- Add context words to improve accuracy
</Accordion>
<Accordion title="Performance Issues">
**Possible Causes:**
- Too many entities enabled
- NLP-based entities (`PERSON`, `LOCATION`, `NRP`) are computationally expensive as they use machine learning models
**Solutions:**
- Only enable entities you actually need
- Consider using pattern-based alternatives where possible
- Monitor trace processing times in the dashboard
</Accordion>
---
## Practical Example: Salary Pattern Matching
This example demonstrates how to create a custom recognizer to detect and mask salary information in your traces.
### Use Case
Your crew processes employee or financial data that includes salary information in formats like:
- `salary: $50,000`
- `salary: $125,000.00`
- `salary:$1,500.50`
You want to automatically mask these values to protect sensitive compensation data.
### Configuration
<Frame>
![Salary Recognizer Configuration](/images/enterprise/pii_mask_custom_recognizer_salary.png)
</Frame>
| Field | Value |
|-------|-------|
| **Name** | `SALARY` |
| **Entity Type** | `SALARY` |
| **Type** | Regex Pattern |
| **Regex Pattern** | `salary:\s*\$\s*\d{1,3}(,\d{3})*(\.\d{2})?` |
| **Action** | Mask |
| **Confidence Threshold** | `0.8` |
| **Context Words** | `salary, compensation, pay, wage, income` |
### Regex Pattern Breakdown
| Pattern Component | Meaning |
|-------------------|---------|
| `salary:` | Matches the literal text "salary:" |
| `\s*` | Matches zero or more whitespace characters |
| `\$` | Matches the dollar sign (escaped) |
| `\s*` | Matches zero or more whitespace characters after $ |
| `\d{1,3}` | Matches 1-3 digits (e.g., "1", "50", "125") |
| `(,\d{3})*` | Matches comma-separated thousands (e.g., ",000", ",500,000") |
| `(\.\d{2})?` | Optionally matches cents (e.g., ".00", ".50") |
### Example Results
```
Original: "Employee record shows salary: $125,000.00 annually"
Redacted: "Employee record shows <SALARY> annually"
Original: "Base salary:$50,000 with bonus potential"
Redacted: "Base <SALARY> with bonus potential"
```
<Tip>
Adding context words like "salary", "compensation", "pay", "wage", and "income" helps increase detection confidence when these terms appear near the matched pattern, reducing false positives.
</Tip>
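Before saving the recognizer, it can help to sanity-check the pattern locally. A minimal sketch using Python's `re` module with the exact pattern from the configuration table above:
```python
import re

# The same pattern configured in the recognizer above
SALARY = re.compile(r"salary:\s*\$\s*\d{1,3}(,\d{3})*(\.\d{2})?")

samples = [
    "Employee record shows salary: $125,000.00 annually",
    "Base salary:$50,000 with bonus potential",
    "Adjusted salary: $ 1,500.50 effective March",
]
for text in samples:
    # Substitute matches the way the `mask` action would label them
    print(SALARY.sub("<SALARY>", text))
# Employee record shows <SALARY> annually
# Base <SALARY> with bonus potential
# Adjusted <SALARY> effective March
```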
### Enable the Recognizer for Your Deployments
<Warning>
Creating a custom recognizer at the organization level does not automatically enable it for your deployments. You must manually enable each recognizer for every deployment where you want it applied.
</Warning>
After creating your custom recognizer, enable it for each deployment:
<Steps>
<Step title="Navigate to Your Deployment">
Go to your deployment/automation and open **Settings** → **PII Protection**.
</Step>
<Step title="Select Custom Recognizers">
Under **Mask Recognizers**, you'll see your organization-defined recognizers. Check the box next to the recognizers you want to enable.
<Frame>
![Enable Custom Recognizer](/images/enterprise/pii_mask_recognizers_options.png)
</Frame>
</Step>
<Step title="Save Configuration">
Save your changes. The recognizer will be active on all subsequent executions for this deployment.
</Step>
</Steps>
<Info>
Repeat this process for each deployment where you need the custom recognizer. This gives you granular control over which recognizers are active in different environments (e.g., development vs. production).
</Info>

View File

@@ -31,7 +31,8 @@ You can configure users and roles in Settings → Roles.
Go to <b>Settings → Roles</b> in CrewAI AMP.
</Step>
<Step title="Choose a role type">
Use a predefined role (<b>Owner</b>, <b>Member</b>) or click <b>Create role</b> to define a custom one.
</Step>
<Step title="Assign to members">
Select users and assign the role. You can change this anytime.
@@ -40,10 +41,10 @@ You can configure users and roles in Settings → Roles.
### Configuration summary
| Area | Where to configure | Options |
| :-------------------- | :--------------------------------- | :-------------------------------------- |
| Users & Roles | Settings → Roles | Predefined: Owner, Member; Custom roles |
| Automation visibility | Automation → Settings → Visibility | Private; Whitelist users/roles |
## Automation-level Access Control
@@ -70,26 +71,30 @@ You can configure automation-level access control in Automation → Settings
Navigate to <b>Automation → Settings → Visibility</b>.
</Step>
<Step title="Set visibility">
Choose <b>Private</b> to restrict access. The organization owner always retains access.
</Step>
<Step title="Whitelist access">
Add specific users and roles allowed to view, run, and access logs/metrics/settings.
</Step>
<Step title="Save and verify">
Save changes, then confirm that non-whitelisted users cannot view or run the automation.
</Step>
</Steps>
### Private visibility: access outcomes
| Action | Owner | Whitelisted user/role | Not whitelisted |
| :--------------------------- | :---- | :-------------------- | :-------------- |
| View automation | ✓ | ✓ | ✗ |
| Run automation/API | ✓ | ✓ | ✗ |
| Access logs/metrics/settings | ✓ | ✓ | ✗ |
<Tip>
The organization owner always has access. In private mode, only whitelisted users and roles can view, run, and access logs/metrics/settings.
</Tip>
<Frame>

View File

@@ -18,221 +18,226 @@ Tools & Integrations is the central hub for connecting third-party apps and ma
<Tabs>
<Tab title="Integrations" icon="plug">
## Agent Apps (Integrations)
Connect enterprise-grade applications (e.g., Gmail, Google Drive, HubSpot, Slack) via OAuth to enable agent actions.
<Steps>
<Step title="Connect">
Click <b>Connect</b> on an app and complete OAuth.
</Step>
<Step title="Configure">
Optionally adjust scopes, triggers, and action availability.
</Step>
<Step title="Use in Agents">
Connected services become available as tools for your agents.
</Step>
</Steps>
<Frame>
![Integrations Grid](/images/enterprise/agent-apps.png)
</Frame>
### Connect your Account
1. Go to <Link href="https://app.crewai.com/crewai_plus/connectors">Integrations</Link>
2. Click <b>Connect</b> on the desired service
3. Complete the OAuth flow and grant scopes
4. Copy your Enterprise Token from <Link href="https://app.crewai.com/crewai_plus/settings/integrations">Integration Settings</Link>
<Frame>
![Enterprise Token](/images/enterprise/enterprise_action_auth_token.png)
</Frame>
### Install Integration Tools
To use the integrations locally, you need to install the latest `crewai-tools` package.
```bash
uv add crewai-tools
```
### Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
### Usage Example
<Tip>
Use the new streamlined approach to integrate enterprise apps. Simply specify the app and its actions directly in the Agent configuration.
</Tip>
```python
from crewai import Agent, Task, Crew
# Create an agent with Gmail capabilities
email_agent = Agent(
    role="Email Manager",
    goal="Manage and organize email communications",
    backstory="An AI assistant specialized in email management and communication.",
    apps=['gmail', 'gmail/send_email']  # Using canonical name 'gmail'
)
# Task to send an email
email_task = Task(
    description="Draft and send a follow-up email to john@example.com about the project update",
    agent=email_agent,
    expected_output="Confirmation that email was sent successfully"
)
# Run the task
crew = Crew(
    agents=[email_agent],
    tasks=[email_task]
)
# Run the crew
crew.kickoff()
```
### Filtering Tools
```python
from crewai import Agent, Task, Crew
# Create agent with specific Gmail actions only
gmail_agent = Agent(
    role="Gmail Manager",
    goal="Manage gmail communications and notifications",
    backstory="An AI assistant that helps coordinate gmail communications.",
    apps=['gmail/fetch_emails']  # Using canonical name with specific action
)
notification_task = Task(
    description="Find the email from john@example.com",
    agent=gmail_agent,
    expected_output="Email found from john@example.com"
)
crew = Crew(
    agents=[gmail_agent],
    tasks=[notification_task]
)
```
On a deployed crew, you can specify which actions are available for each integration from the service settings page.
<Frame>
![Filter Actions](/images/enterprise/filtering_enterprise_action_tools.png)
</Frame>
### Scoped Deployments (multi-user orgs)
You can scope each integration to a specific user. For example, a crew that connects to Google can use a specific user's Gmail account.
<Tip>
Useful when different teams/users must keep data access separated.
</Tip>
Use the `user_bearer_token` to scope authentication to the requesting user. If the user isn't logged in, the crew won't use connected integrations. Otherwise it falls back to the default bearer token configured for the deployment.
<Frame>
![User Bearer Token](/images/enterprise/user_bearer_token.png)
</Frame>
<div id="catalog"></div>
### Catalog
#### Communication & Collaboration
- Gmail — Manage emails and drafts
- Slack — Workspace notifications and alerts
- Microsoft — Office 365 and Teams integration
#### Project Management
- Jira — Issue tracking and project management
- ClickUp — Task and productivity management
- Asana — Team task and project coordination
- Notion — Page and database management
- Linear — Software project and bug tracking
- GitHub — Repository and issue management
#### Customer Relationship Management
- Salesforce — CRM account and opportunity management
- HubSpot — Sales pipeline and contact management
- Zendesk — Customer support ticket management
#### Business & Finance
- Stripe — Payment processing and customer management
- Shopify — E-commerce store and product management
#### Productivity & Storage
- Google Sheets — Spreadsheet data synchronization
- Google Calendar — Event and schedule management
- Box — File storage and document management
…and more to come!
</Tab>
<Tab title="Internal Tools" icon="toolbox">
## Internal Tools
Create custom tools locally, publish them to the CrewAI AMP Tool Repository, and use them in your agents.
<Tip>
Before running the commands below, make sure you log in to your CrewAI AMP account by running this command:
```bash
crewai login
```
</Tip>
<Frame>
![Internal Tool Detail](/images/enterprise/tools-integrations-internal.png)
</Frame>
<Steps>
<Step title="Create">
Create a new tool locally.
```bash
crewai tool create your-tool
```
</Step>
<Step title="Publish">
Publish the tool to the CrewAI AMP Tool Repository.
```bash
crewai tool publish
```
</Step>
<Step title="Install">
Install the tool from the CrewAI AMP Tool Repository.
```bash
crewai tool install your-tool
```
</Step>
</Steps>
Manage:
- Name and description
- Visibility (Private / Public)
- Required environment variables
- Version history and downloads
- Team and role access
<Frame>
![Internal Tool Detail](/images/enterprise/tool-configs.png)
</Frame>
</Tab>
</Tabs>
@@ -240,10 +245,18 @@ Tools & Integrations is the central hub for connecting third-party apps and ma
## Related
<CardGroup cols={2}>
<Card title="Tool Repository" href="/en/enterprise/features/tool-repository" icon="toolbox">
<Card
title="Tool Repository"
href="/en/enterprise/guides/tool-repository#tool-repository"
icon="toolbox"
>
Create, publish, and version custom tools for your organization.
</Card>
<Card title="Webhook Automation" href="/en/enterprise/guides/webhook-automation" icon="bolt">
<Card
title="Webhook Automation"
href="/en/enterprise/guides/webhook-automation"
icon="bolt"
>
Automate workflows and integrate with external platforms and services.
</Card>
</CardGroup>

View File

@@ -20,9 +20,7 @@ Traces in CrewAI AMP are detailed execution records that capture every aspect of
- Execution times
- Cost estimates
<Frame>![Traces Overview](/images/enterprise/traces-overview.png)</Frame>
## Accessing Traces
@@ -51,9 +49,7 @@ The top section displays high-level metrics about the execution:
- **Execution Time**: Total duration of the crew run
- **Estimated Cost**: Approximate cost based on token usage
<Frame>![Execution Summary](/images/enterprise/trace-summary.png)</Frame>
### 2. Tasks & Agents
@@ -64,33 +60,25 @@ This section shows all tasks and agents that were part of the crew execution:
- Status (completed/failed)
- Individual execution time of the task
<Frame>![Task List](/images/enterprise/trace-tasks.png)</Frame>
### 3. Final Output
Displays the final result produced by the crew after all tasks are completed.
<Frame>![Final Output](/images/enterprise/final-output.png)</Frame>
### 4. Execution Timeline
A visual representation of when each task started and ended, helping you identify bottlenecks or parallel execution patterns.
<Frame>![Execution Timeline](/images/enterprise/trace-timeline.png)</Frame>
### 5. Detailed Task View
When you click on a specific task in the timeline or task list, you'll see:
<Frame>![Detailed Task View](/images/enterprise/trace-detailed-task.png)</Frame>
- **Task Key**: Unique identifier for the task
- **Task ID**: Technical identifier in the system
@@ -104,7 +92,6 @@ When you click on a specific task in the timeline or task list, you'll see:
- **Input**: Any input provided to this task from previous tasks
- **Output**: The actual result produced by the agent
## Using Traces for Debugging
Traces are invaluable for troubleshooting issues with your crews:
@@ -121,6 +108,7 @@ Traces are invaluable for troubleshooting issues with your crews:
<Frame>
![Failure Points](/images/enterprise/failure.png)
</Frame>
</Step>
<Step title="Optimize Performance">
@@ -130,6 +118,7 @@ Traces are invaluable for troubleshooting issues with your crews:
- Excessive token usage
- Redundant tool operations
- Unnecessary API calls
</Step>
<Step title="Improve Cost Efficiency">
@@ -139,6 +128,7 @@ Traces are invaluable for troubleshooting issues with your crews:
- Refine prompts to be more concise
- Cache frequently accessed information
- Structure tasks to minimize redundant operations
</Step>
</Steps>
@@ -153,5 +143,6 @@ CrewAI batches trace uploads to reduce overhead on high-volume runs:
This yields more stable tracing under load while preserving detailed task/agent telemetry.
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with trace analysis or any other CrewAI AMP features.
</Card>

View File

@@ -16,7 +16,7 @@ When using the Kickoff API, include a `webhooks` object to your request, for exa
```json
{
"inputs": {"foo": "bar"},
"inputs": { "foo": "bar" },
"webhooks": {
"events": ["crew_kickoff_started", "llm_call_started"],
"url": "https://your.endpoint/webhook",
@@ -46,8 +46,8 @@ Each webhook sends a list of events:
"data": {
"model": "gpt-4",
"messages": [
{"role": "system", "content": "You are an assistant."},
{"role": "user", "content": "Summarize this article."}
{ "role": "system", "content": "You are an assistant." },
{ "role": "user", "content": "Summarize this article." }
]
}
}
@@ -55,7 +55,7 @@ Each webhook sends a list of events:
}
```
The `data` object structure varies by event type. Refer to the [event list](https://github.com/crewAIInc/crewAI/tree/main/lib/crewai/src/crewai/events/types) on GitHub.
As requests are sent over HTTP, the order of events can't be guaranteed. If you need ordering, use the `timestamp` field.
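For example, a minimal receiver that restores ordering before processing might look like the following sketch (it assumes the request body is a JSON object with an `events` array whose entries carry `timestamp` fields, as shown above; adjust the field names to the payloads your deployment actually sends):
```python
from flask import Flask, request

app = Flask(__name__)

@app.post("/webhook")
def receive_events():
    # Assumed envelope: {"events": [{"timestamp": ..., "type": ..., "data": ...}, ...]}
    batch = request.get_json(force=True).get("events", [])
    # HTTP delivery order is not guaranteed, so reorder by timestamp first
    for event in sorted(batch, key=lambda e: e["timestamp"]):
        handle(event)
    return "", 204

def handle(event: dict) -> None:
    # Replace with your own processing/persistence logic
    print(event.get("type"), event.get("timestamp"))
```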
@@ -65,104 +65,109 @@ CrewAI supports both system events and custom events in Enterprise Event Streami
### Flow Events:
- `flow_created`
- `flow_started`
- `flow_finished`
- `flow_plot`
- `method_execution_started`
- `method_execution_finished`
- `method_execution_failed`
### Agent Events:
- `agent_execution_started`
- `agent_execution_completed`
- `agent_execution_error`
- `lite_agent_execution_started`
- `lite_agent_execution_completed`
- `lite_agent_execution_error`
- `agent_logs_started`
- `agent_logs_execution`
- `agent_evaluation_started`
- `agent_evaluation_completed`
- `agent_evaluation_failed`
### Crew Events:
- `crew_kickoff_started`
- `crew_kickoff_completed`
- `crew_kickoff_failed`
- `crew_train_started`
- `crew_train_completed`
- `crew_train_failed`
- `crew_test_started`
- `crew_test_completed`
- `crew_test_failed`
- `crew_test_result`
### Task Events:
- `task_started`
- `task_completed`
- `task_failed`
- `task_evaluation`
### Tool Usage Events:
- `tool_usage_started`
- `tool_usage_finished`
- `tool_usage_error`
- `tool_validate_input_error`
- `tool_selection_error`
- `tool_execution_error`
### LLM Events:
- `llm_call_started`
- `llm_call_completed`
- `llm_call_failed`
- `llm_stream_chunk`
### LLM Guardrail Events:
- `llm_guardrail_started`
- `llm_guardrail_completed`
### Memory Events:
- `memory_query_started`
- `memory_query_completed`
- `memory_query_failed`
- `memory_save_started`
- `memory_save_completed`
- `memory_save_failed`
- `memory_retrieval_started`
- `memory_retrieval_completed`
### Knowledge Events:
- `knowledge_search_query_started`
- `knowledge_search_query_completed`
- `knowledge_search_query_failed`
- `knowledge_query_started`
- `knowledge_query_completed`
- `knowledge_query_failed`
### Reasoning Events:
- `agent_reasoning_started`
- `agent_reasoning_completed`
- `agent_reasoning_failed`
Event names match the internal event bus. See GitHub for the full list of events.
You can emit your own custom events, and they will be delivered through the webhook stream alongside system events.
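A rough sketch of what emitting a custom event can look like is below. The import paths are assumptions — the events module has moved between releases (older versions used `crewai.utilities.events`) — so verify them against the events source linked in the cards below:
```python
# Module paths are assumptions; confirm against your installed CrewAI version
from crewai.events import crewai_event_bus
from crewai.events.base_events import BaseEvent

class InvoiceParsedEvent(BaseEvent):  # hypothetical custom event
    type: str = "invoice_parsed"
    invoice_id: str = ""

# Emit from anywhere in your crew/flow code; the event is delivered through
# the webhook stream alongside system events
crewai_event_bus.emit(None, InvoiceParsedEvent(invoice_id="INV-42"))
```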
<CardGroup>
<Card title="GitHub" icon="github" href="https://github.com/crewAIInc/crewAI/tree/main/src/crewai/utilities/events">
Full list of events
</Card>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with webhook integration or troubleshooting.
</Card>
<Card
title="GitHub"
icon="github"
href="https://github.com/crewAIInc/crewAI/tree/main/src/crewai/utilities/events"
>
Full list of events
</Card>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with webhook integration or
troubleshooting.
</Card>
</CardGroup>

View File

@@ -20,37 +20,61 @@ Deep-dive guides walk through setup and sample workflows for each integration:
<a href="/en/enterprise/guides/gmail-trigger">Enable crews when emails arrive or threads update.</a>
</Card>
<Card title="Google Calendar Trigger" icon="calendar-days">
<a href="/en/enterprise/guides/google-calendar-trigger">React to calendar events as they are created, updated, or cancelled.</a>
</Card>
{" "}
<Card title="Google Calendar Trigger" icon="calendar-days">
<a href="/en/enterprise/guides/google-calendar-trigger">
React to calendar events as they are created, updated, or cancelled.
</a>
</Card>
<Card title="Google Drive Trigger" icon="folder-open">
<a href="/en/enterprise/guides/google-drive-trigger">Handle Drive file uploads, edits, and deletions.</a>
</Card>
{" "}
<Card title="Google Drive Trigger" icon="folder-open">
<a href="/en/enterprise/guides/google-drive-trigger">
Handle Drive file uploads, edits, and deletions.
</a>
</Card>
<Card title="Outlook Trigger" icon="envelope-open">
<a href="/en/enterprise/guides/outlook-trigger">Automate responses to new Outlook messages and calendar updates.</a>
</Card>
{" "}
<Card title="Outlook Trigger" icon="envelope-open">
<a href="/en/enterprise/guides/outlook-trigger">
Automate responses to new Outlook messages and calendar updates.
</a>
</Card>
<Card title="OneDrive Trigger" icon="cloud">
<a href="/en/enterprise/guides/onedrive-trigger">Audit file activity and sharing changes in OneDrive.</a>
</Card>
{" "}
<Card title="OneDrive Trigger" icon="cloud">
<a href="/en/enterprise/guides/onedrive-trigger">
Audit file activity and sharing changes in OneDrive.
</a>
</Card>
<Card title="Microsoft Teams Trigger" icon="comments">
<a href="/en/enterprise/guides/microsoft-teams-trigger">Kick off workflows when new Teams chats start.</a>
</Card>
{" "}
<Card title="Microsoft Teams Trigger" icon="comments">
<a href="/en/enterprise/guides/microsoft-teams-trigger">
Kick off workflows when new Teams chats start.
</a>
</Card>
<Card title="HubSpot Trigger" icon="hubspot">
<a href="/en/enterprise/guides/hubspot-trigger">Launch automations from HubSpot workflows and lifecycle events.</a>
</Card>
{" "}
<Card title="HubSpot Trigger" icon="hubspot">
<a href="/en/enterprise/guides/hubspot-trigger">
Launch automations from HubSpot workflows and lifecycle events.
</a>
</Card>
<Card title="Salesforce Trigger" icon="salesforce">
<a href="/en/enterprise/guides/salesforce-trigger">Connect Salesforce processes to CrewAI for CRM automation.</a>
</Card>
{" "}
<Card title="Salesforce Trigger" icon="salesforce">
<a href="/en/enterprise/guides/salesforce-trigger">
Connect Salesforce processes to CrewAI for CRM automation.
</a>
</Card>
<Card title="Slack Trigger" icon="slack">
<a href="/en/enterprise/guides/slack-trigger">Start crews directly from Slack slash commands.</a>
</Card>
{" "}
<Card title="Slack Trigger" icon="slack">
<a href="/en/enterprise/guides/slack-trigger">
Start crews directly from Slack slash commands.
</a>
</Card>
<Card title="Zapier Trigger" icon="bolt">
<a href="/en/enterprise/guides/zapier-trigger">Bridge CrewAI with thousands of Zapier-supported apps.</a>
@@ -76,7 +100,10 @@ To access and manage your automation triggers:
2. Click on the **Triggers** tab to view all available trigger integrations
<Frame caption="Example of available automation triggers for a Gmail deployment">
<img src="/images/enterprise/list-available-triggers.png" alt="List of available automation triggers" />
</Frame>
This view shows all the trigger integrations available for your deployment, along with their current connection status.
@@ -86,7 +113,10 @@ This view shows all the trigger integrations available for your deployment, alon
Each trigger can be easily enabled or disabled using the toggle switch:
<Frame caption="Enable or disable triggers with toggle">
<img src="/images/enterprise/trigger-selected.png" alt="Enable or disable triggers with toggle" />
</Frame>
- **Enabled (blue toggle)**: The trigger is active and will automatically execute your deployment when the specified events occur
@@ -99,7 +129,10 @@ Simply click the toggle to change the trigger state. Changes take effect immedia
Track the performance and history of your triggered executions:
<Frame caption="List of executions triggered by automation">
<img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
</Frame>
## Building Trigger-Driven Automations
@@ -117,27 +150,51 @@ Before wiring a trigger into production, make sure you:
- Decide whether to pass trigger context automatically using `allow_crewai_trigger_context`
- Set up monitoring—webhook logs, CrewAI execution history, and optional external alerting
### Testing Triggers Locally with CLI
The CrewAI CLI provides powerful commands to help you develop and test trigger-driven automations without deploying to production.
#### List Available Triggers
View all available triggers for your connected integrations:
```bash
crewai triggers list
```
This command displays all triggers available based on your connected integrations, showing:
- Integration name and connection status
- Available trigger types
- Trigger names and descriptions
#### Simulate Trigger Execution
Test your crew with realistic trigger payloads before deployment:
```bash
crewai triggers run <trigger_name>
```
For example:
```bash
crewai triggers run microsoft_onedrive/file_changed
```
This command:
- Executes your crew locally
- Passes a complete, realistic trigger payload
- Simulates exactly how your crew will be called in production
<Warning>
**Important Development Notes:**
- Use `crewai triggers run <trigger>` to simulate trigger execution during development
- Using `crewai run` will NOT simulate trigger calls and won't pass the trigger payload
- After deployment, your crew will be executed with the actual trigger payload
- If your crew expects parameters that aren't in the trigger payload, execution may fail
</Warning>
### Triggers with Crew
@@ -170,10 +227,12 @@ class MyAutomatedCrew:
The crew will automatically receive and can access the trigger payload through the standard CrewAI context mechanisms.
<Note>
Crew and Flow inputs can include `crewai_trigger_payload`. CrewAI automatically injects this payload:
- Tasks: appended to the first task's description by default ("Trigger Payload: {crewai_trigger_payload}")
- Control via `allow_crewai_trigger_context`: set `True` to always inject, `False` to never inject
- Flows: any `@start()` method that accepts a `crewai_trigger_payload` parameter will receive it
</Note>
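For instance, a task that should always receive the trigger context can opt in explicitly. A minimal sketch (the agent definition here is illustrative):
```python
from crewai import Agent, Task

triage_agent = Agent(
    role="Trigger Triage Analyst",
    goal="Summarize records arriving from automation triggers",
    backstory="Reviews incoming trigger events and routes them appropriately.",
)

triage_task = Task(
    description="Summarize the incoming record",
    expected_output="A short summary of the triggering event",
    agent=triage_agent,
    allow_crewai_trigger_context=True,  # always append "Trigger Payload: {...}"
)
```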
### Integration with Flows
@@ -241,15 +300,23 @@ def delegate_to_crew(self, crewai_trigger_payload: dict = None):
## Troubleshooting
**Trigger not firing:**
- Verify the trigger is enabled in your deployment's Triggers tab
- Check integration connection status under Tools & Integrations
- Ensure all required environment variables are properly configured
**Execution failures:**
- Check the execution logs for error details
- If you are developing, make sure the inputs include the `crewai_trigger_payload` parameter with the correct payload
- Use `crewai triggers run <trigger_name>` to test locally and see the exact payload structure
- Verify your crew can handle the `crewai_trigger_payload` parameter
- Ensure your crew doesn't expect parameters that aren't included in the trigger payload
**Development issues:**
- Always test with `crewai triggers run <trigger>` before deploying to see the complete payload
- Remember that `crewai run` does NOT simulate trigger calls—use `crewai triggers run` instead
- Use `crewai triggers list` to verify which triggers are available for your connected integrations
- After deployment, your crew will receive the actual trigger payload, so test thoroughly locally first
Automation triggers transform your CrewAI deployments into responsive, event-driven systems that can seamlessly integrate with your existing business processes and tools.
<Card title="CrewAI AMP Trigger Examples" href="https://github.com/crewAIInc/crewai-enterprise-trigger-examples" icon="github">
Check them out on GitHub!
</Card>

View File

@@ -37,6 +37,7 @@ This guide walks you through connecting Azure OpenAI with Crew Studio for seamle
- Navigate to `Resource Management > Networking`.
- Ensure that `Allow access from all networks` is enabled. If this setting is restricted, CrewAI may be blocked from accessing your Azure OpenAI endpoint.
</Step>
</Steps>
## Verification
@@ -46,6 +47,7 @@ You're all set! Crew Studio will now use your Azure OpenAI connection. Test the
## Troubleshooting
If you encounter issues:
- Verify the Target URI format matches the expected pattern
- Check that the API key is correct and has proper permissions
- Ensure network access is configured to allow CrewAI connections

View File

@@ -22,21 +22,27 @@ mode: "wide"
### Installation and Setup
<Card title="Follow Standard Installation" icon="wrench" href="/en/installation">
Follow our standard installation guide to set up CrewAI CLI and create your first project.
</Card>
### Building Your Crew
<Card title="Quickstart Tutorial" icon="rocket" href="/en/quickstart">
Follow our quickstart guide to create your first agent crew using YAML configuration.
</Card>
## Support and Resources
For Enterprise-specific support or questions, contact our dedicated support team at [support@crewai.com](mailto:support@crewai.com).
<Card title="Schedule a Demo" icon="calendar" href="mailto:support@crewai.com">
Book time with our team to learn more about Enterprise features and how they can benefit your organization.
</Card>

View File

@@ -14,22 +14,17 @@ CrewAI AMP provides a powerful way to capture telemetry logs from your deploymen
Your organization should have ENTERPRISE OTEL SETUP enabled
</Card>
<Card title="OTEL collector setup" icon="server">
Your organization should have an OTEL collector setup or a provider like Datadog log intake setup
</Card>
</CardGroup>
## How to capture telemetry logs
1. Go to the **Settings → Organization** tab
2. Configure your OTEL collector setup
3. Save
Example: setting up OTEL log collection to Datadog.
<Frame>![Capture Telemetry Logs](/images/crewai-otel-export.png)</Frame>

View File

@@ -1,291 +0,0 @@
---
title: "Deploy Crew"
description: "Deploying a Crew on CrewAI AMP"
icon: "rocket"
mode: "wide"
---
<Note>
After creating a crew locally or through Crew Studio, the next step is deploying it to the CrewAI AMP platform. This guide covers multiple deployment methods to help you choose the best approach for your workflow.
</Note>
## Prerequisites
<CardGroup cols={2}>
<Card title="Crew Ready for Deployment" icon="users">
You should have a working crew either built locally or created through Crew Studio
</Card>
<Card title="GitHub Repository" icon="github">
Your crew code should be in a GitHub repository (for GitHub integration method)
</Card>
</CardGroup>
## Option 1: Deploy Using CrewAI CLI
The CLI provides the fastest way to deploy locally developed crews to the Enterprise platform.
<Steps>
<Step title="Install CrewAI CLI">
If you haven't already, install the CrewAI CLI:
```bash
pip install crewai[tools]
```
<Tip>
The CLI comes with the main CrewAI package, but the `[tools]` extra ensures you have all deployment dependencies.
</Tip>
</Step>
<Step title="Authenticate with the Enterprise Platform">
First, you need to authenticate your CLI with the CrewAI AMP platform:
```bash
# If you already have a CrewAI AMP account, or want to create one:
crewai login
```
When you run this command, the CLI will:
1. Display a URL and a unique device code
2. Open your browser to the authentication page
3. Prompt you to confirm the device
4. Complete the authentication process
Upon successful authentication, you'll see a confirmation message in your terminal!
</Step>
<Step title="Create a Deployment">
From your project directory, run:
```bash
crewai deploy create
```
This command will:
1. Detect your GitHub repository information
2. Identify environment variables in your local `.env` file
3. Securely transfer these variables to the Enterprise platform
4. Create a new deployment with a unique identifier
On successful creation, you'll see a message like:
```shell
Deployment created successfully!
Name: your_project_name
Deployment ID: 01234567-89ab-cdef-0123-456789abcdef
Current Status: Deploy Enqueued
```
</Step>
<Step title="Monitor Deployment Progress">
Track the deployment status with:
```bash
crewai deploy status
```
For detailed logs of the build process:
```bash
crewai deploy logs
```
<Tip>
The first deployment typically takes 10-15 minutes as it builds the container images. Subsequent deployments are much faster.
</Tip>
</Step>
</Steps>
## Additional CLI Commands
The CrewAI CLI offers several commands to manage your deployments:
```bash
# List all your deployments
crewai deploy list
# Get the status of your deployment
crewai deploy status
# View the logs of your deployment
crewai deploy logs
# Push updates after code changes
crewai deploy push
# Remove a deployment
crewai deploy remove <deployment_id>
```
## Option 2: Deploy Directly via Web Interface
You can also deploy your crews directly through the CrewAI AMP web interface by connecting your GitHub account. This approach doesn't require using the CLI on your local machine.
<Steps>
<Step title="Pushing to GitHub">
You need to push your crew to a GitHub repository. If you haven't created a crew yet, you can [follow this tutorial](/en/quickstart).
</Step>
<Step title="Connecting GitHub to CrewAI AMP">
1. Log in to [CrewAI AMP](https://app.crewai.com)
2. Click on the button "Connect GitHub"
<Frame>
![Connect GitHub Button](/images/enterprise/connect-github.png)
</Frame>
</Step>
<Step title="Select the Repository">
After connecting your GitHub account, you'll be able to select which repository to deploy:
<Frame>
![Select Repository](/images/enterprise/select-repo.png)
</Frame>
</Step>
<Step title="Set Environment Variables">
Before deploying, you'll need to set up your environment variables to connect to your LLM provider or other services:
1. You can add variables individually or in bulk
2. Enter your environment variables in `KEY=VALUE` format (one per line)
<Frame>
![Set Environment Variables](/images/enterprise/set-env-variables.png)
</Frame>
</Step>
<Step title="Deploy Your Crew">
1. Click the "Deploy" button to start the deployment process
2. You can monitor the progress through the progress bar
3. The first deployment typically takes around 10-15 minutes; subsequent deployments will be faster
<Frame>
![Deploy Progress](/images/enterprise/deploy-progress.png)
</Frame>
Once deployment is complete, you'll see:
- Your crew's unique URL
- A Bearer token to protect your crew API
- A "Delete" button if you need to remove the deployment
</Step>
</Steps>
## ⚠️ Environment Variable Security Requirements
<Warning>
**Important**: CrewAI AMP has security restrictions on environment variable names that can cause deployment failures if not followed.
</Warning>
### Blocked Environment Variable Patterns
For security reasons, the following environment variable naming patterns are **automatically filtered** and will cause deployment issues:
**Blocked Patterns:**
- Variables ending with `_TOKEN` (e.g., `MY_API_TOKEN`)
- Variables ending with `_PASSWORD` (e.g., `DB_PASSWORD`)
- Variables ending with `_SECRET` (e.g., `API_SECRET`)
- Variables ending with `_KEY` in certain contexts
**Specific Blocked Variables:**
- `GITHUB_USER`, `GITHUB_TOKEN`
- `AWS_REGION`, `AWS_DEFAULT_REGION`
- Various internal CrewAI system variables
### Allowed Exceptions
Some variables are explicitly allowed despite matching blocked patterns:
- `AZURE_AD_TOKEN`
- `AZURE_OPENAI_AD_TOKEN`
- `ENTERPRISE_ACTION_TOKEN`
- `CREWAI_ENTEPRISE_TOOLS_TOKEN`
### How to Fix Naming Issues
If your deployment fails due to environment variable restrictions:
```bash
# ❌ These will cause deployment failures
OPENAI_TOKEN=sk-...
DATABASE_PASSWORD=mypassword
API_SECRET=secret123
# ✅ Use these naming patterns instead
OPENAI_API_KEY=sk-...
DATABASE_CREDENTIALS=mypassword
API_CONFIG=secret123
```
### Best Practices
1. **Use standard naming conventions**: `PROVIDER_API_KEY` instead of `PROVIDER_TOKEN`
2. **Test locally first**: Ensure your crew works with the renamed variables
3. **Update your code**: Change any references to the old variable names
4. **Document changes**: Keep track of renamed variables for your team
<Tip>
If you encounter deployment failures with cryptic environment variable errors, check your variable names against these patterns first.
</Tip>
### Interact with Your Deployed Crew
Once deployment is complete, you can access your crew through:
1. **REST API**: The platform generates a unique HTTPS endpoint with these key routes (see the sketch after this list):
- `/inputs`: Lists the required input parameters
- `/kickoff`: Initiates an execution with provided inputs
- `/status/{kickoff_id}`: Checks the execution status
2. **Web Interface**: Visit [app.crewai.com](https://app.crewai.com) to access:
- **Status tab**: View deployment information, API endpoint details, and authentication token
- **Run tab**: Visual representation of your crew's structure
- **Executions tab**: History of all executions
- **Metrics tab**: Performance analytics
- **Traces tab**: Detailed execution insights
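As a sketch of the REST round trip (kickoff, then poll), something like the following could work; the base URL, token, input names, and the `kickoff_id` response field are placeholders/assumptions — check your deployment's Status tab for the exact contract:
```python
import requests

BASE_URL = "https://your-crew.crewai.com"  # placeholder: your deployment's URL
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}  # placeholder bearer token

# 1. Discover the required input parameters
print(requests.get(f"{BASE_URL}/inputs", headers=HEADERS).json())

# 2. Start an execution with the inputs your crew expects
kickoff = requests.post(
    f"{BASE_URL}/kickoff",
    headers=HEADERS,
    json={"inputs": {"topic": "AI"}},  # assumed input name
).json()

# 3. Poll the execution status (field name "kickoff_id" is an assumption)
kickoff_id = kickoff["kickoff_id"]
print(requests.get(f"{BASE_URL}/status/{kickoff_id}", headers=HEADERS).json())
```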
### Trigger an Execution
From the Enterprise dashboard, you can:
1. Click on your crew's name to open its details
2. Select "Trigger Crew" from the management interface
3. Enter the required inputs in the modal that appears
4. Monitor progress as the execution moves through the pipeline
### Monitoring and Analytics
The Enterprise platform provides comprehensive observability features:
- **Execution Management**: Track active and completed runs
- **Traces**: Detailed breakdowns of each execution
- **Metrics**: Token usage, execution times, and costs
- **Timeline View**: Visual representation of task sequences
### Advanced Features
The Enterprise platform also offers:
- **Environment Variables Management**: Securely store and manage API keys
- **LLM Connections**: Configure integrations with various LLM providers
- **Custom Tools Repository**: Create, share, and install tools
- **Crew Studio**: Build crews through a chat interface without writing code
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with deployment issues or questions about the Enterprise platform.
</Card>

View File

@@ -0,0 +1,440 @@
---
title: "Deploy to AMP"
description: "Deploy your Crew or Flow to CrewAI AMP"
icon: "rocket"
mode: "wide"
---
<Note>
After creating a Crew or Flow locally (or through Crew Studio), the next step is
deploying it to the CrewAI AMP platform. This guide covers multiple deployment
methods to help you choose the best approach for your workflow.
</Note>
## Prerequisites
<CardGroup cols={2}>
<Card title="Project Ready for Deployment" icon="check-circle">
You should have a working Crew or Flow that runs successfully locally.
Follow our [preparation guide](/en/enterprise/guides/prepare-for-deployment) to verify your project structure.
</Card>
<Card title="GitHub Repository" icon="github">
Your code should be in a GitHub repository (for GitHub integration
method)
</Card>
</CardGroup>
<Info>
**Crews vs Flows**: Both project types can be deployed as "automations" on CrewAI AMP.
The deployment process is the same, but they have different project structures.
See [Prepare for Deployment](/en/enterprise/guides/prepare-for-deployment) for details.
</Info>
## Option 1: Deploy Using CrewAI CLI
The CLI provides the fastest way to deploy locally developed Crews or Flows to the AMP platform.
The CLI automatically detects your project type from `pyproject.toml` and builds accordingly.
<Steps>
<Step title="Install CrewAI CLI">
If you haven't already, install the CrewAI CLI:
```bash
pip install crewai[tools]
```
<Tip>
The CLI comes with the main CrewAI package, but the `[tools]` extra ensures you have all deployment dependencies.
</Tip>
</Step>
<Step title="Authenticate with the Enterprise Platform">
First, you need to authenticate your CLI with the CrewAI AMP platform:
```bash
# If you already have a CrewAI AMP account, or want to create one:
crewai login
```
When you run this command, the CLI will:
1. Display a URL and a unique device code
2. Open your browser to the authentication page
3. Prompt you to confirm the device
4. Complete the authentication process
Upon successful authentication, you'll see a confirmation message in your terminal!
</Step>
<Step title="Create a Deployment">
From your project directory, run:
```bash
crewai deploy create
```
This command will:
1. Detect your GitHub repository information
2. Identify environment variables in your local `.env` file
3. Securely transfer these variables to the Enterprise platform
4. Create a new deployment with a unique identifier
On successful creation, you'll see a message like:
```shell
Deployment created successfully!
Name: your_project_name
Deployment ID: 01234567-89ab-cdef-0123-456789abcdef
Current Status: Deploy Enqueued
```
</Step>
<Step title="Monitor Deployment Progress">
Track the deployment status with:
```bash
crewai deploy status
```
For detailed logs of the build process:
```bash
crewai deploy logs
```
<Tip>
The first deployment typically takes 10-15 minutes as it builds the container images. Subsequent deployments are much faster.
</Tip>
</Step>
</Steps>
## Additional CLI Commands
The CrewAI CLI offers several commands to manage your deployments:
```bash
# List all your deployments
crewai deploy list
# Get the status of your deployment
crewai deploy status
# View the logs of your deployment
crewai deploy logs
# Push updates after code changes
crewai deploy push
# Remove a deployment
crewai deploy remove <deployment_id>
```
## Option 2: Deploy Directly via Web Interface
You can also deploy your Crews or Flows directly through the CrewAI AMP web interface by connecting your GitHub account. This approach doesn't require using the CLI on your local machine. The platform automatically detects your project type and handles the build appropriately.
<Steps>
<Step title="Pushing to GitHub">
You need to push your crew to a GitHub repository. If you haven't created a crew yet, you can [follow this tutorial](/en/quickstart).
</Step>
<Step title="Connecting GitHub to CrewAI AMP">
1. Log in to [CrewAI AMP](https://app.crewai.com)
2. Click on the button "Connect GitHub"
<Frame>
![Connect GitHub Button](/images/enterprise/connect-github.png)
</Frame>
</Step>
<Step title="Select the Repository">
After connecting your GitHub account, you'll be able to select which repository to deploy:
<Frame>
![Select Repository](/images/enterprise/select-repo.png)
</Frame>
</Step>
<Step title="Set Environment Variables">
Before deploying, you'll need to set up your environment variables to connect to your LLM provider or other services:
1. You can add variables individually or in bulk
2. Enter your environment variables in `KEY=VALUE` format (one per line)
<Frame>
![Set Environment Variables](/images/enterprise/set-env-variables.png)
</Frame>
</Step>
<Step title="Deploy Your Crew">
1. Click the "Deploy" button to start the deployment process
2. You can monitor the progress through the progress bar
3. The first deployment typically takes around 10-15 minutes; subsequent deployments will be faster
<Frame>
![Deploy Progress](/images/enterprise/deploy-progress.png)
</Frame>
Once deployment is complete, you'll see:
- Your crew's unique URL
- A Bearer token to protect your crew API
- A "Delete" button if you need to remove the deployment
</Step>
</Steps>
## Option 3: Redeploy Using API (CI/CD Integration)
For automated deployments in CI/CD pipelines, you can use the CrewAI API to trigger redeployments of existing crews. This is particularly useful for GitHub Actions, Jenkins, or other automation workflows.
<Steps>
<Step title="Get Your Personal Access Token">
Navigate to your CrewAI AMP account settings to generate an API token:
1. Go to [app.crewai.com](https://app.crewai.com)
2. Click on **Settings** → **Account** → **Personal Access Token**
3. Generate a new token and copy it securely
4. Store this token as a secret in your CI/CD system
</Step>
<Step title="Find Your Automation UUID">
Locate the unique identifier for your deployed crew:
1. Go to **Automations** in your CrewAI AMP dashboard
2. Select your existing automation/crew
3. Click on **Additional Details**
4. Copy the **UUID** - this identifies your specific crew deployment
</Step>
<Step title="Trigger Redeployment via API">
Use the Deploy API endpoint to trigger a redeployment:
```bash
curl -i -X POST \
-H "Authorization: Bearer YOUR_PERSONAL_ACCESS_TOKEN" \
https://app.crewai.com/crewai_plus/api/v1/crews/YOUR-AUTOMATION-UUID/deploy
# HTTP/2 200
# content-type: application/json
#
# {
# "uuid": "your-automation-uuid",
# "status": "Deploy Enqueued",
# "public_url": "https://your-crew-deployment.crewai.com",
# "token": "your-bearer-token"
# }
```
<Info>
If your automation was originally created from a connected Git repository, the API will automatically pull the latest changes from your repository before redeploying.
</Info>
</Step>
<Step title="GitHub Actions Integration Example">
Here's a GitHub Actions workflow with more complex deployment triggers:
```yaml
name: Deploy CrewAI Automation
on:
push:
branches: [ main ]
pull_request:
types: [ labeled ]
release:
types: [ published ]
jobs:
deploy:
runs-on: ubuntu-latest
if: |
(github.event_name == 'push' && github.ref == 'refs/heads/main') ||
(github.event_name == 'pull_request' && contains(github.event.pull_request.labels.*.name, 'deploy')) ||
(github.event_name == 'release')
steps:
- name: Trigger CrewAI Redeployment
run: |
curl -X POST \
-H "Authorization: Bearer ${{ secrets.CREWAI_PAT }}" \
https://app.crewai.com/crewai_plus/api/v1/crews/${{ secrets.CREWAI_AUTOMATION_UUID }}/deploy
```
<Tip>
Add `CREWAI_PAT` and `CREWAI_AUTOMATION_UUID` as repository secrets. For PR deployments, add a "deploy" label to trigger the workflow.
</Tip>
</Step>
</Steps>
## Interact with Your Deployed Automation
Once deployment is complete, you can access your crew through:
1. **REST API**: The platform generates a unique HTTPS endpoint with these key routes:
- `/inputs`: Lists the required input parameters
- `/kickoff`: Initiates an execution with provided inputs
- `/status/{kickoff_id}`: Checks the execution status
2. **Web Interface**: Visit [app.crewai.com](https://app.crewai.com) to access:
- **Status tab**: View deployment information, API endpoint details, and authentication token
- **Run tab**: Visual representation of your crew's structure
- **Executions tab**: History of all executions
- **Metrics tab**: Performance analytics
- **Traces tab**: Detailed execution insights
### Trigger an Execution
From the Enterprise dashboard, you can:
1. Click on your crew's name to open its details
2. Select "Trigger Crew" from the management interface
3. Enter the required inputs in the modal that appears
4. Monitor progress as the execution moves through the pipeline
### Monitoring and Analytics
The Enterprise platform provides comprehensive observability features:
- **Execution Management**: Track active and completed runs
- **Traces**: Detailed breakdowns of each execution
- **Metrics**: Token usage, execution times, and costs
- **Timeline View**: Visual representation of task sequences
### Advanced Features
The Enterprise platform also offers:
- **Environment Variables Management**: Securely store and manage API keys
- **LLM Connections**: Configure integrations with various LLM providers
- **Custom Tools Repository**: Create, share, and install tools
- **Crew Studio**: Build crews through a chat interface without writing code
## Troubleshooting Deployment Failures
If your deployment fails, check these common issues:
### Build Failures
#### Missing uv.lock File
**Symptom**: Build fails early with dependency resolution errors
**Solution**: Generate and commit the lock file:
```bash
uv lock
git add uv.lock
git commit -m "Add uv.lock for deployment"
git push
```
<Warning>
The `uv.lock` file is required for all deployments. Without it, the platform
cannot reliably install your dependencies.
</Warning>
#### Wrong Project Structure
**Symptom**: "Could not find entry point" or "Module not found" errors
**Solution**: Verify your project matches the expected structure:
- **Both Crews and Flows**: Must have entry point at `src/project_name/main.py`
- **Crews**: Use a `run()` function as entry point
- **Flows**: Use a `kickoff()` function as entry point
See [Prepare for Deployment](/en/enterprise/guides/prepare-for-deployment) for detailed structure diagrams.
#### Missing CrewBase Decorator
**Symptom**: "Crew not found", "Config not found", or agent/task configuration errors
**Solution**: Ensure **all** crew classes use the `@CrewBase` decorator:
```python
from crewai import Agent
from crewai.project import CrewBase, agent, crew, task
@CrewBase # This decorator is REQUIRED
class YourCrew():
"""Your crew description"""
@agent
def my_agent(self) -> Agent:
return Agent(
config=self.agents_config['my_agent'], # type: ignore[index]
verbose=True
)
# ... rest of crew definition
```
<Info>
This applies to standalone Crews AND crews embedded inside Flow projects.
Every crew class needs the decorator.
</Info>
#### Incorrect pyproject.toml Type
**Symptom**: Build succeeds but runtime fails, or unexpected behavior
**Solution**: Verify the `[tool.crewai]` section matches your project type:
```toml
# For Crew projects:
[tool.crewai]
type = "crew"
# For Flow projects:
[tool.crewai]
type = "flow"
```
### Runtime Failures
#### LLM Connection Failures
**Symptom**: API key errors, "model not found", or authentication failures
**Solution**:
1. Verify your LLM provider's API key is correctly set in environment variables
2. Ensure the environment variable names match what your code expects
3. Test locally with the exact same environment variables before deploying
#### Crew Execution Errors
**Symptom**: Crew starts but fails during execution
**Solution**:
1. Check the execution logs in the AMP dashboard (Traces tab)
2. Verify all tools have required API keys configured
3. Ensure agent configurations in `agents.yaml` are valid
4. Check task configurations in `tasks.yaml` for syntax errors
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with deployment issues or questions
about the AMP platform.
</Card>

View File

@@ -6,7 +6,8 @@ mode: "wide"
---
<Tip>
Crew Studio is a powerful **no-code/low-code** tool that allows you to quickly scaffold or build Crews through a conversational interface.
</Tip>
## What is Crew Studio?
@@ -52,6 +53,7 @@ Before you can start using Crew Studio, you need to configure your LLM connectio
<Frame>
![LLM Connection Configuration](/images/enterprise/llm-connection-config.png)
</Frame>
</Step>
<Step title="Verify Connection Added">
@@ -60,6 +62,7 @@ Before you can start using Crew Studio, you need to configure your LLM connectio
<Frame>
![Connection Added](/images/enterprise/connection-added.png)
</Frame>
</Step>
<Step title="Configure LLM Defaults">
@@ -73,6 +76,7 @@ Before you can start using Crew Studio, you need to configure your LLM connectio
<Frame>
![LLM Defaults Configuration](/images/enterprise/llm-defaults.png)
</Frame>
</Step>
</Steps>
@@ -93,6 +97,7 @@ Now that you've configured your LLM connection and default settings, you're read
```
The Crew Assistant will ask clarifying questions to better understand your requirements.
</Step>
<Step title="Review Generated Crew">
@@ -104,6 +109,7 @@ Now that you've configured your LLM connection and default settings, you're read
- Tools to be used
This is your opportunity to refine the configuration before proceeding.
</Step>
<Step title="Deploy or Download">
@@ -112,6 +118,7 @@ Now that you've configured your LLM connection and default settings, you're read
- Download the generated code for local customization
- Deploy the crew directly to the CrewAI AMP platform
- Modify the configuration and regenerate the crew
</Step>
<Step title="Test Your Crew">
@@ -120,7 +127,9 @@ Now that you've configured your LLM connection and default settings, you're read
</Steps>
<Tip>
For best results, provide clear, detailed descriptions of what you want your crew to accomplish. Include specific inputs and expected outputs in your description.
</Tip>
## Example Workflow
@@ -134,11 +143,14 @@ Here's a typical workflow for creating a crew with Crew Studio:
```md
I need a crew that can analyze financial news and provide investment recommendations
```
</Step>
<Step title="Answer Questions">
Respond to clarifying questions from the Crew Assistant to refine your requirements.
</Step>
{" "}
<Step title="Answer Questions">
Respond to clarifying questions from the Crew Assistant to refine your
requirements.
</Step>
<Step title="Review the Plan">
Review the generated crew plan, which might include:
@@ -146,15 +158,18 @@ Here's a typical workflow for creating a crew with Crew Studio:
- A Research Agent to gather financial news
- An Analysis Agent to interpret the data
- A Recommendations Agent to provide investment advice
</Step>
<Step title="Approve or Modify">
Approve the plan or request changes if necessary.
</Step>
{" "}
<Step title="Approve or Modify">
Approve the plan or request changes if necessary.
</Step>
<Step title="Download or Deploy">
Download the code for customization or deploy directly to the platform.
</Step>
{" "}
<Step title="Download or Deploy">
Download the code for customization or deploy directly to the platform.
</Step>
<Step title="Test and Refine">
Test your crew with sample inputs and refine as needed.
@@ -162,5 +177,6 @@ Here's a typical workflow for creating a crew with Crew Studio:
</Steps>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Crew Studio or any other CrewAI AMP features.
</Card>

View File

@@ -10,7 +10,8 @@ mode: "wide"
Use the Gmail Trigger to kick off your deployed crews when Gmail events happen in connected accounts, such as receiving a new email or messages matching a label/filter.
<Tip>
Make sure Gmail is connected in Tools & Integrations and the trigger is enabled for your deployment.
</Tip>
## Enabling the Gmail Trigger
@@ -20,7 +21,10 @@ Use the Gmail Trigger to kick off your deployed crews when Gmail events happen i
3. Locate **Gmail** and switch the toggle to enable
<Frame>
<img src="/images/enterprise/trigger-selected.png" alt="Enable or disable triggers with toggle" />
</Frame>
## Example: Process new emails
@@ -51,35 +55,43 @@ class GmailProcessingCrew:
)
```
The Gmail payload will be available via the standard context mechanisms.
### Sample payloads & crews
The [CrewAI AMP Trigger Examples repository](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/gmail) includes:
- `new-email-payload-1.json` / `new-email-payload-2.json` — production-style new message alerts with matching crews in `new-email-crew.py`
- `thread-updated-sample-1.json` — follow-up messages on an existing thread, processed by `gmail-alert-crew.py`
Use these samples to validate your parsing logic locally before wiring the trigger to your live Gmail accounts.
### Testing Locally
Test your Gmail trigger integration locally using the CrewAI CLI:
```bash
# View all available triggers
crewai triggers list

# Simulate a Gmail trigger with realistic payload
crewai triggers run gmail/new_email_received
```
The `crewai triggers run` command will execute your crew with a complete Gmail payload, allowing you to test your parsing logic before deployment.
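You can also feed a downloaded sample payload through your crew directly. A minimal sketch, assuming a sample file from the examples repository sits next to your script (the `my_project.crew` import path is illustrative):
```python
import json

from my_project.crew import GmailProcessingCrew  # hypothetical import path

# Load a sample payload from the trigger examples repository
with open("new-email-payload-1.json") as f:
    payload = json.load(f)

# Deployed crews receive the trigger data under `crewai_trigger_payload`
result = GmailProcessingCrew().crew().kickoff(
    inputs={"crewai_trigger_payload": payload}
)
print(result.raw)
```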
<Warning>
Use `crewai triggers run gmail/new_email_received` (not `crewai run`) to
simulate trigger execution during development. After deployment, your crew
will automatically receive the trigger payload.
</Warning>
## Monitoring Executions
Track history and performance of triggered runs:
<Frame>
<img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
</Frame>
## Payload Reference
See the sample payloads and field descriptions:
<Card title="Gmail samples in Trigger Examples Repo" href="https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/gmail" icon="envelopes-bulk">
Gmail samples in Trigger Examples Repo
</Card>
## Troubleshooting
- Ensure Gmail is connected in Tools & Integrations
- Verify the Gmail Trigger is enabled on the Triggers tab
- Test locally with `crewai triggers run gmail/new_email_received` to see the exact payload structure
- Check the execution logs and confirm the payload is passed as `crewai_trigger_payload`
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -10,7 +10,8 @@ mode: "wide"
Use the Google Calendar trigger to launch automations whenever calendar events change. Common use cases include briefing a team before a meeting, notifying stakeholders when a critical event is cancelled, or summarizing daily schedules.
<Tip>
Make sure Google Calendar is connected in **Tools & Integrations** and enabled for the deployment you want to automate.
</Tip>
## Enabling the Google Calendar Trigger
@@ -20,7 +21,10 @@ Use the Google Calendar trigger to launch automations whenever calendar events c
3. Locate **Google Calendar** and switch the toggle to enable
<Frame>
<img src="/images/enterprise/calendar-trigger.png" alt="Enable or disable triggers with toggle" />
</Frame>
## Example: Summarize meeting details
@@ -39,27 +43,41 @@ print(result.raw)
Use `crewai_trigger_payload` exactly as it is delivered by the trigger so the crew can extract the proper fields.
## Sample payloads & crews
The [Google Calendar examples](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/google_calendar) show how to handle multiple event types:
- `new-event.json` → standard event creation handled by `calendar-event-crew.py`
- `event-updated.json` / `event-started.json` / `event-ended.json` → in-flight updates processed by `calendar-meeting-crew.py`
- `event-canceled.json` → cancellation workflow that alerts attendees via `calendar-meeting-crew.py`
- Working location events use `calendar-working-location-crew.py` to extract on-site schedules
Each crew transforms raw event metadata (attendees, rooms, working locations) into the summaries your teams need.
## Testing Locally
Test your Google Calendar trigger integration locally using the CrewAI CLI:
```bash
# View all available triggers
crewai triggers list

# Simulate a Google Calendar trigger with realistic payload
crewai triggers run google_calendar/event_changed
```
The `crewai triggers run` command will execute your crew with a complete Calendar payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run google_calendar/event_changed` (not `crewai run`) to
simulate trigger execution during development. After deployment, your crew
will automatically receive the trigger payload.
</Warning>
## Monitoring Executions
The **Executions** list in the deployment dashboard tracks every triggered run and surfaces payload metadata, output summaries, and errors.
<Frame>
<img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
</Frame>
## Troubleshooting
- Ensure the correct Google account is connected and the trigger is enabled
- Test locally with `crewai triggers run google_calendar/event_changed` to see the exact payload structure
- Confirm your workflow handles all-day events (payloads use `start.date` and `end.date` instead of timestamps)
- Check execution logs if reminders or attendee arrays are missing—calendar permissions can limit fields in the payload
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -10,7 +10,8 @@ mode: "wide"
Trigger your automations when files are created, updated, or removed in Google Drive. Typical workflows include summarizing newly uploaded content, enforcing sharing policies, or notifying owners when critical files change.
<Tip>
Connect Google Drive in **Tools & Integrations** and confirm the trigger is enabled for the automation you want to monitor.
</Tip>
## Enabling the Google Drive Trigger
@@ -20,7 +21,10 @@ Trigger your automations when files are created, updated, or removed in Google D
3. Locate **Google Drive** and switch the toggle to enable
<Frame>
<img src="/images/enterprise/gdrive-trigger.png" alt="Enable or disable triggers with toggle" />
</Frame>
## Example: Summarize file activity
@@ -36,26 +40,41 @@ crew.kickoff({
})
```
## Sample payloads & crews
Explore the [Google Drive examples](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/google_drive) to cover different operations:
- `new-file.json` → new uploads processed by `drive-file-crew.py`
- `updated-file.json` → file edits and metadata changes handled by `drive-file-crew.py`
- `deleted-file.json` → deletion events routed through `drive-file-deletion-crew.py`
Each crew highlights the file name, operation type, owner, permissions, and security considerations so downstream systems can respond appropriately.
## Testing Locally
Test your Google Drive trigger integration locally using the CrewAI CLI:
```bash
# View all available triggers
crewai triggers list

# Simulate a Google Drive trigger with realistic payload
crewai triggers run google_drive/file_changed
```
The `crewai triggers run` command will execute your crew with a complete Drive payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run google_drive/file_changed` (not `crewai run`) to
simulate trigger execution during development. After deployment, your crew
will automatically receive the trigger payload.
</Warning>
## Monitoring Executions
Track history and performance of triggered runs with the **Executions** list in the deployment dashboard.
<Frame>
<img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
</Frame>
## Troubleshooting
- Verify Google Drive is connected and the trigger toggle is enabled
- Test locally with `crewai triggers run google_drive/file_changed` to see the exact payload structure
- If a payload is missing permission data, ensure the connected account has access to the file or folder
- The trigger sends file IDs only; use the Drive API if you need to fetch binary content during the crew run
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -15,50 +15,47 @@ This guide provides a step-by-step process to set up HubSpot triggers for CrewAI
## Setup Steps
<Steps>
<Step title="Connect your HubSpot account with CrewAI AMP">
- Log in to your `CrewAI AMP account > Triggers`
- Select `HubSpot` from the list of available triggers
- Choose the HubSpot account you want to connect with CrewAI AMP
- Follow the on-screen prompts to authorize CrewAI AMP access to your HubSpot account
- A confirmation message will appear once HubSpot is successfully connected with CrewAI AMP
</Step>
<Step title="Create a HubSpot Workflow">
- Log in to your `HubSpot account > Automations > Workflows > New workflow`
- Select the workflow type that fits your needs (e.g., Start from scratch)
- In the workflow builder, click the Plus (+) icon to add a new action.
- Choose `Integrated apps > CrewAI > Kickoff a Crew`.
- Select the Crew you want to initiate.
- Click `Save` to add the action to your workflow
<Frame>
<img src="/images/enterprise/hubspot-workflow-1.png" alt="HubSpot Workflow 1" />
</Frame>
</Step>
<Step title="Use Crew results with other actions">
- After the Kickoff a Crew step, click the Plus (+) icon to add a new action.
- For example, to send an internal email notification, choose `Communications > Send internal email notification`
- In the Body field, click `Insert data`, select `View properties or action outputs from > Action outputs > Crew Result` to include Crew data in the email
<Frame>
<img src="/images/enterprise/hubspot-workflow-2.png" alt="HubSpot Workflow 2" />
</Frame>
- Configure any additional actions as needed
- Review your workflow steps to ensure everything is set up correctly
- Activate the workflow
<Frame>
<img src="/images/enterprise/hubspot-workflow-3.png" alt="HubSpot Workflow 3" />
</Frame>
</Step>
<Step title="Connect your HubSpot account with CrewAI AMP">
- Log in to your `CrewAI AMP account > Triggers` - Select `HubSpot` from the
list of available triggers - Choose the HubSpot account you want to connect
with CrewAI AMP - Follow the on-screen prompts to authorize CrewAI AMP
access to your HubSpot account - A confirmation message will appear once
HubSpot is successfully connected with CrewAI AMP
</Step>
<Step title="Create a HubSpot Workflow">
- Log in to your `HubSpot account > Automations > Workflows > New workflow`
- Select the workflow type that fits your needs (e.g., Start from scratch) -
In the workflow builder, click the Plus (+) icon to add a new action. -
Choose `Integrated apps > CrewAI > Kickoff a Crew`. - Select the Crew you
want to initiate. - Click `Save` to add the action to your workflow
<Frame>
<img
src="/images/enterprise/hubspot-workflow-1.png"
alt="HubSpot Workflow 1"
/>
</Frame>
</Step>
<Step title="Use Crew results with other actions">
- After the Kickoff a Crew step, click the Plus (+) icon to add a new
action. - For example, to send an internal email notification, choose
`Communications > Send internal email notification` - In the Body field,
click `Insert data`, select `View properties or action outputs from > Action
outputs > Crew Result` to include Crew data in the email
<Frame>
<img
src="/images/enterprise/hubspot-workflow-2.png"
alt="HubSpot Workflow 2"
/>
</Frame>
- Configure any additional actions as needed - Review your workflow
steps to ensure everything is set up correctly - Activate the workflow
<Frame>
<img
src="/images/enterprise/hubspot-workflow-3.png"
alt="HubSpot Workflow 3"
/>
</Frame>
</Step>
</Steps>
## Additional Resources
### Sample payloads & crews
You can jump-start development with the [HubSpot examples in the trigger repository](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/hubspot):
- `record-created-contact.json`, `record-updated-contact.json` → contact lifecycle events handled by `hubspot-contact-crew.py`
- `record-created-company.json`, `record-updated-company.json` → company enrichment flows in `hubspot-company-crew.py`
- `record-created-deals.json`, `record-updated-deals.json` → deal pipeline automation in `hubspot-record-crew.py`
Each crew demonstrates how to parse HubSpot record fields, enrich context, and return structured insights.
For more detailed information on available actions and customization options, refer to the [HubSpot Workflows Documentation](https://knowledge.hubspot.com/workflows/create-workflows).

View File

@@ -5,9 +5,54 @@ icon: "user-check"
mode: "wide"
---
Human-In-The-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. This guide shows you how to implement HITL within CrewAI Enterprise.
## HITL Approaches in CrewAI
CrewAI offers two approaches for implementing human-in-the-loop workflows:
| Approach | Best For | Version |
|----------|----------|---------|
| **Flow-based** (`@human_feedback` decorator) | Production with Enterprise UI, email-first workflows, full platform features | **1.8.0+** |
| **Webhook-based** | Custom integrations, external systems (Slack, Teams, etc.), legacy setups | All versions |
## Flow-Based HITL with Enterprise Platform
<Note>
The `@human_feedback` decorator requires **CrewAI version 1.8.0 or higher**.
</Note>
When using the `@human_feedback` decorator in your Flows, CrewAI Enterprise provides an **email-first HITL system** that enables anyone with an email address to respond to review requests:
<CardGroup cols={2}>
<Card title="Email-First Design" icon="envelope">
Responders receive email notifications and can reply directly—no login required.
</Card>
<Card title="Dashboard Review" icon="desktop">
Review and respond to HITL requests in the Enterprise dashboard when preferred.
</Card>
<Card title="Flexible Routing" icon="route">
Route requests to specific emails based on method patterns or pull from flow state.
</Card>
<Card title="Auto-Response" icon="clock">
Configure automatic fallback responses when no human replies within the timeout.
</Card>
</CardGroup>
### Key Benefits
- **External responders**: Anyone with an email can respond, even non-platform users
- **Dynamic assignment**: Pull assignee email from flow state (e.g., `account_owner_email`)
- **Simple configuration**: Email-based routing is easier to set up than user/role management
- **Deployment creator fallback**: If no routing rule matches, the deployment creator is notified
<Tip>
For implementation details on the `@human_feedback` decorator, see the [Human Feedback in Flows](/en/learn/human-feedback-in-flows) guide.
</Tip>
## Setting Up Webhook-Based HITL Workflows
For custom integrations with external systems like Slack, Microsoft Teams, or your own applications, you can use the webhook-based approach:
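As a minimal sketch of the task-level flag this approach builds on (the agent and task contents are illustrative; the webhook wiring itself is configured in the steps below):
```python
from crewai import Agent, Task

reviewer_agent = Agent(
    role="Copywriter",
    goal="Draft announcements for human review",
    backstory="An experienced writer whose drafts are always reviewed.",
)

review_task = Task(
    description="Draft the announcement email",
    expected_output="A polished email ready for a human reviewer",
    agent=reviewer_agent,
    human_input=True,  # pauses execution for human feedback on the output
)
```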
<Steps>
<Step title="Configure Your Task">
@@ -99,3 +144,14 @@ HITL workflows are particularly valuable for:
- Sensitive or high-stakes operations
- Creative tasks requiring human judgment
- Compliance and regulatory reviews
## Learn More
<CardGroup cols={2}>
<Card title="Flow HITL Management" icon="users-gear" href="/en/enterprise/features/flow-hitl-management">
Explore the full Enterprise Flow HITL platform capabilities including email notifications, routing rules, auto-response, and analytics.
</Card>
<Card title="Human Feedback in Flows" icon="code" href="/en/learn/human-feedback-in-flows">
Implementation guide for the `@human_feedback` decorator in your Flows.
</Card>
</CardGroup>

View File

@@ -17,9 +17,7 @@ Once you've deployed your crew to the CrewAI AMP platform, you can kickoff execu
2. Click on the crew name from your projects list
3. You'll be taken to the crew's detail page
<Frame>![Crew Dashboard](/images/enterprise/crew-dashboard.png)</Frame>
### Step 2: Initiate Execution
@@ -31,9 +29,7 @@ From your crew's detail page, you have two options to kickoff an execution:
2. Enter the required input parameters for your crew in the JSON editor
3. Click the `Send Request` button
<Frame>![Kickoff Endpoint](/images/enterprise/kickoff-endpoint.png)</Frame>
#### Option B: Using the Visual Interface
@@ -41,9 +37,7 @@ From your crew's detail page, you have two options to kickoff an execution:
2. Enter the required inputs in the form fields
3. Click the `Run Crew` button
<Frame>![Run Crew](/images/enterprise/run-crew.png)</Frame>
### Step 3: Monitor Execution Progress
@@ -52,9 +46,7 @@ After initiating the execution:
1. You'll receive a response containing a `kickoff_id` - **copy this ID**
2. This ID is essential for tracking your execution
<Frame>![Copy Task ID](/images/enterprise/copy-task-id.png)</Frame>
### Step 4: Check Execution Status
@@ -64,11 +56,10 @@ To monitor the progress of your execution:
2. Paste the `kickoff_id` into the designated field
3. Click the "Get Status" button
<Frame>![Get Status](/images/enterprise/get-status.png)</Frame>
The status response will show:
- Current execution state (`running`, `completed`, etc.)
- Details about which tasks are in progress
- Any outputs produced so far
@@ -122,7 +113,7 @@ curl -X GET \
The response will be a JSON object containing an array of required input parameters, for example:
```json
{"inputs":["topic","current_year"]}
{ "inputs": ["topic", "current_year"] }
```
This example shows that this particular crew requires two inputs: `topic` and `current_year`.
@@ -142,7 +133,7 @@ curl -X POST \
The response will include a `kickoff_id` that you'll need for tracking:
```json
{"kickoff_id":"abcd1234-5678-90ef-ghij-klmnopqrstuv"}
{ "kickoff_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv" }
```
### Step 3: Check Execution Status
@@ -182,5 +173,6 @@ If an execution fails:
3. Look for LLM responses and tool usage in the trace details
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with execution issues or questions about the Enterprise platform.
</Card>

View File

@@ -10,7 +10,8 @@ mode: "wide"
Use the Microsoft Teams trigger to start automations whenever a new chat is created. Common patterns include summarizing inbound requests, routing urgent messages to support teams, or creating follow-up tasks in other systems.
<Tip>
Confirm Microsoft Teams is connected under **Tools & Integrations** and enabled in the **Triggers** tab for your deployment.
</Tip>
## Enabling the Microsoft Teams Trigger
@@ -20,7 +21,10 @@ Use the Microsoft Teams trigger to start automations whenever a new chat is crea
3. Locate **Microsoft Teams** and switch the toggle to enable
<Frame caption="Microsoft Teams trigger connection">
<img src="/images/enterprise/msteams-trigger.png" alt="Enable or disable triggers with toggle" />
</Frame>
## Example: Summarize a new chat thread
@@ -37,16 +41,30 @@ print(result.raw)
The crew parses thread metadata (subject, created time, roster) and generates an action plan for the receiving team.
## Sample payloads & crews
The [Microsoft Teams examples](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/microsoft-teams) include:
- `chat-created.json` → chat creation payload processed by `teams-chat-created-crew.py`
The crew demonstrates how to extract participants, initial messages, tenant information, and compliance metadata from the Microsoft Graph webhook payload.
## Testing Locally
Test your Microsoft Teams trigger integration locally using the CrewAI CLI:
```bash
# View all available triggers
crewai triggers list

# Simulate a Microsoft Teams trigger with realistic payload
crewai triggers run microsoft_teams/teams_message_created
```
The `crewai triggers run` command will execute your crew with a complete Teams payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run microsoft_teams/teams_message_created` (not `crewai
run`) to simulate trigger execution during development. After deployment, your
crew will automatically receive the trigger payload.
</Warning>
## Troubleshooting
- Ensure the Teams connection is active; it must be refreshed if the tenant revokes permissions
- Test locally with `crewai triggers run microsoft_teams/teams_message_created` to see the exact payload structure
- Confirm the webhook subscription in Microsoft 365 is still valid if payloads stop arriving
- Review execution logs for payload shape mismatches—Graph notifications may omit fields when a chat is private or restricted
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -10,7 +10,8 @@ mode: "wide"
Start automations when files change inside OneDrive. You can generate audit summaries, notify security teams about external sharing, or update downstream line-of-business systems with new document metadata.
<Tip>
Connect OneDrive in **Tools & Integrations** and toggle the trigger on for your deployment.
</Tip>
## Enabling the OneDrive Trigger
@@ -20,7 +21,10 @@ Start automations when files change inside OneDrive. You can generate audit summ
3. Locate **OneDrive** and switch the toggle to enable
<Frame caption="Microsoft OneDrive trigger connection">
<img src="/images/enterprise/onedrive-trigger.png" alt="Enable or disable triggers with toggle" />
</Frame>
## Example: Audit file permissions
@@ -36,18 +40,30 @@ crew.kickoff({
The crew inspects file metadata, user activity, and permission changes to produce a compliance-friendly summary.
## Sample payloads & crews
The [OneDrive examples](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/onedrive) showcase how to:
- Parse file metadata, size, and folder paths
- Track who created and last modified the file
- Highlight permission and external sharing changes
`onedrive-file-crew.py` bundles the analysis and summarization tasks so you can add remediation steps as needed.
## Testing Locally
Test your OneDrive trigger integration locally using the CrewAI CLI:
```bash
# View all available triggers
crewai triggers list

# Simulate a OneDrive trigger with realistic payload
crewai triggers run microsoft_onedrive/file_changed
```
The `crewai triggers run` command will execute your crew with a complete OneDrive payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run microsoft_onedrive/file_changed` (not `crewai run`)
to simulate trigger execution during development. After deployment, your crew
will automatically receive the trigger payload.
</Warning>
## Troubleshooting
- Ensure the connected account has permission to read the file metadata included in the webhook
- Test locally with `crewai triggers run microsoft_onedrive/file_changed` to see the exact payload structure
- If the trigger fires but the payload is missing `permissions`, confirm the site-level sharing settings allow Graph to return this field
- For large tenants, filter notifications upstream so the crew only runs on relevant directories
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -10,7 +10,8 @@ mode: "wide"
Automate responses when Outlook delivers a new message or when an event is removed from the calendar. Teams commonly route escalations, file tickets, or alert attendees of cancellations.
<Tip>
Connect Outlook in **Tools & Integrations** and ensure the trigger is enabled for your deployment.
</Tip>
## Enabling the Outlook Trigger
@@ -20,7 +21,10 @@ Automate responses when Outlook delivers a new message or when an event is remov
3. Locate **Outlook** and switch the toggle to enable
<Frame caption="Microsoft Outlook trigger connection">
<img src="/images/enterprise/outlook-trigger.png" alt="Enable or disable triggers with toggle" />
</Frame>
## Example: Summarize a new email
@@ -36,17 +40,30 @@ crew.kickoff({
The crew extracts sender details, subject, body preview, and attachments before generating a structured response.
## Sample payloads & crews
Review the [Outlook examples](https://github.com/crewAIInc/crewai-enterprise-trigger-examples/tree/main/outlook) for two common scenarios:
- `new-message.json` → new mail notifications parsed by `outlook-message-crew.py`
- `event-removed.json` → calendar cleanup handled by `outlook-event-removal-crew.py`
Each crew demonstrates how to handle Microsoft Graph payloads, normalize headers, and keep humans in-the-loop with concise summaries.
## Testing Locally
Test your Outlook trigger integration locally using the CrewAI CLI:
```bash
# View all available triggers
crewai triggers list

# Simulate an Outlook trigger with realistic payload
crewai triggers run microsoft_outlook/email_received
```
The `crewai triggers run` command will execute your crew with a complete Outlook payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run microsoft_outlook/email_received` (not `crewai run`)
to simulate trigger execution during development. After deployment, your crew
will automatically receive the trigger payload.
</Warning>
## Troubleshooting
- Verify the Outlook connector is still authorized; the subscription must be renewed periodically
- Test locally with `crewai triggers run microsoft_outlook/email_received` to see the exact payload structure
- If attachments are missing, confirm the webhook subscription includes the `includeResourceData` flag
- Review execution logs when events fail to match—cancellation payloads lack attendee lists by design and the crew should account for that
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -0,0 +1,305 @@
---
title: "Prepare for Deployment"
description: "Ensure your Crew or Flow is ready for deployment to CrewAI AMP"
icon: "clipboard-check"
mode: "wide"
---
<Note>
Before deploying to CrewAI AMP, it's crucial to verify your project is correctly structured.
Both Crews and Flows can be deployed as "automations," but they have different project structures
and requirements that must be met for successful deployment.
</Note>
## Understanding Automations
In CrewAI AMP, **automations** is the umbrella term for deployable Agentic AI projects. An automation can be either:
- **A Crew**: A standalone team of AI agents working together on tasks
- **A Flow**: An orchestrated workflow that can combine multiple crews, direct LLM calls, and procedural logic
Understanding which type you're deploying is essential because they have different project structures and entry points.
## Crews vs Flows: Key Differences
<CardGroup cols={2}>
<Card title="Crew Projects" icon="users">
Standalone AI agent teams with `crew.py` defining agents and tasks. Best for focused, collaborative tasks.
</Card>
<Card title="Flow Projects" icon="diagram-project">
Orchestrated workflows with embedded crews in a `crews/` folder. Best for complex, multi-stage processes.
</Card>
</CardGroup>
| Aspect | Crew | Flow |
|--------|------|------|
| **Project structure** | `src/project_name/` with `crew.py` | `src/project_name/` with `crews/` folder |
| **Main logic location** | `src/project_name/crew.py` | `src/project_name/main.py` (Flow class) |
| **Entry point function** | `run()` in `main.py` | `kickoff()` in `main.py` |
| **pyproject.toml type** | `type = "crew"` | `type = "flow"` |
| **CLI create command** | `crewai create crew name` | `crewai create flow name` |
| **Config location** | `src/project_name/config/` | `src/project_name/crews/crew_name/config/` |
| **Can contain other crews** | No | Yes (in `crews/` folder) |
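For reference, the scaffolding commands from the table map directly to the CLI:
```bash
# Scaffold a new Crew project
crewai create crew my_crew

# Scaffold a new Flow project
crewai create flow my_flow
```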
## Project Structure Reference
### Crew Project Structure
When you run `crewai create crew my_crew`, you get this structure:
```
my_crew/
├── .gitignore
├── pyproject.toml # Must have type = "crew"
├── README.md
├── .env
├── uv.lock # REQUIRED for deployment
└── src/
└── my_crew/
├── __init__.py
├── main.py # Entry point with run() function
├── crew.py # Crew class with @CrewBase decorator
├── tools/
│ ├── custom_tool.py
│ └── __init__.py
└── config/
├── agents.yaml # Agent definitions
└── tasks.yaml # Task definitions
```
<Warning>
The nested `src/project_name/` structure is critical for Crews.
Placing files at the wrong level will cause deployment failures.
</Warning>
### Flow Project Structure
When you run `crewai create flow my_flow`, you get this structure:
```
my_flow/
├── .gitignore
├── pyproject.toml # Must have type = "flow"
├── README.md
├── .env
├── uv.lock # REQUIRED for deployment
└── src/
└── my_flow/
├── __init__.py
├── main.py # Entry point with kickoff() function + Flow class
├── crews/ # Embedded crews folder
│ └── poem_crew/
│ ├── __init__.py
│ ├── poem_crew.py # Crew with @CrewBase decorator
│ └── config/
│ ├── agents.yaml
│ └── tasks.yaml
└── tools/
├── __init__.py
└── custom_tool.py
```
<Info>
Both Crews and Flows use the `src/project_name/` structure.
The key difference is that Flows have a `crews/` folder for embedded crews,
while Crews have `crew.py` directly in the project folder.
</Info>
## Pre-Deployment Checklist
Use this checklist to verify your project is ready for deployment.
### 1. Verify pyproject.toml Configuration
Your `pyproject.toml` must include the correct `[tool.crewai]` section:
<Tabs>
<Tab title="For Crews">
```toml
[tool.crewai]
type = "crew"
```
</Tab>
<Tab title="For Flows">
```toml
[tool.crewai]
type = "flow"
```
</Tab>
</Tabs>
<Warning>
If the `type` doesn't match your project structure, the build will fail or
the automation won't run correctly.
</Warning>
### 2. Ensure uv.lock File Exists
CrewAI uses `uv` for dependency management. The `uv.lock` file ensures reproducible builds and is **required** for deployment.
```bash
# Generate or update the lock file
uv lock
# Verify it exists
ls -la uv.lock
```
If the file doesn't exist, run `uv lock` and commit it to your repository:
```bash
uv lock
git add uv.lock
git commit -m "Add uv.lock for deployment"
git push
```
### 3. Validate CrewBase Decorator Usage
**Every crew class must use the `@CrewBase` decorator.** This applies to:
- Standalone crew projects
- Crews embedded inside Flow projects
```python
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List
@CrewBase # This decorator is REQUIRED
class MyCrew():
"""My crew description"""
agents: List[BaseAgent]
tasks: List[Task]
@agent
def my_agent(self) -> Agent:
return Agent(
config=self.agents_config['my_agent'], # type: ignore[index]
verbose=True
)
@task
def my_task(self) -> Task:
return Task(
config=self.tasks_config['my_task'] # type: ignore[index]
)
@crew
def crew(self) -> Crew:
return Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
verbose=True,
)
```
<Warning>
If you forget the `@CrewBase` decorator, your deployment will fail with
errors about missing agents or tasks configurations.
</Warning>
### 4. Check Project Entry Points
Both Crews and Flows have their entry point in `src/project_name/main.py`:
<Tabs>
<Tab title="For Crews">
The entry point uses a `run()` function:
```python
# src/my_crew/main.py
from my_crew.crew import MyCrew
def run():
"""Run the crew."""
inputs = {'topic': 'AI in Healthcare'}
result = MyCrew().crew().kickoff(inputs=inputs)
return result
if __name__ == "__main__":
run()
```
</Tab>
<Tab title="For Flows">
The entry point uses a `kickoff()` function with a Flow class:
```python
# src/my_flow/main.py
from crewai.flow import Flow, listen, start
from my_flow.crews.poem_crew.poem_crew import PoemCrew
class MyFlow(Flow):
@start()
def begin(self):
# Flow logic here
result = PoemCrew().crew().kickoff(inputs={...})
return result
def kickoff():
"""Run the flow."""
MyFlow().kickoff()
if __name__ == "__main__":
kickoff()
```
</Tab>
</Tabs>
### 5. Prepare Environment Variables
Before deployment, ensure you have:
1. **LLM API keys** ready (OpenAI, Anthropic, Google, etc.)
2. **Tool API keys** if using external tools (Serper, etc.)
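For example, a minimal `.env` might look like this (key names depend on your provider and tools; values are placeholders):
```bash
# .env — placeholder values
OPENAI_API_KEY=sk-...
SERPER_API_KEY=your-serper-key
```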
<Tip>
Test your project locally with the same environment variables before deploying
to catch configuration issues early.
</Tip>
## Quick Validation Commands
Run these commands from your project root to quickly verify your setup:
```bash
# 1. Check project type in pyproject.toml
grep -A2 "\[tool.crewai\]" pyproject.toml
# 2. Verify uv.lock exists
ls -la uv.lock || echo "ERROR: uv.lock missing! Run 'uv lock'"
# 3. Verify src/ structure exists
ls -la src/*/main.py 2>/dev/null || echo "No main.py found in src/"
# 4. For Crews - verify crew.py exists
ls -la src/*/crew.py 2>/dev/null || echo "No crew.py (expected for Crews)"
# 5. For Flows - verify crews/ folder exists
ls -la src/*/crews/ 2>/dev/null || echo "No crews/ folder (expected for Flows)"
# 6. Check for CrewBase usage
grep -r "@CrewBase" . --include="*.py"
```
## Common Setup Mistakes
| Mistake | Symptom | Fix |
|---------|---------|-----|
| Missing `uv.lock` | Build fails during dependency resolution | Run `uv lock` and commit |
| Wrong `type` in pyproject.toml | Build succeeds but runtime fails | Change to correct type |
| Missing `@CrewBase` decorator | "Config not found" errors | Add decorator to all crew classes |
| Files at root instead of `src/` | Entry point not found | Move to `src/project_name/` |
| Missing `run()` or `kickoff()` | Cannot start automation | Add correct entry function |
## Next Steps
Once your project passes all checklist items, you're ready to deploy:
<Card title="Deploy to AMP" icon="rocket" href="/en/enterprise/guides/deploy-to-amp">
Follow the deployment guide to deploy your Crew or Flow to CrewAI AMP using
the CLI, web interface, or CI/CD integration.
</Card>

View File

@@ -17,6 +17,7 @@ This guide explains how to export CrewAI AMP crews as React components and integ
<img src="/images/enterprise/export-react-component.png" alt="Export React Component" />
</Frame>
</Step>
</Steps>
## Setting Up Your React Environment
@@ -83,6 +84,7 @@ To run this React component locally, you'll need to set up a React development e
```
- This will start the development server, and your default web browser should open automatically to http://localhost:3000, where you'll see your React app running.
</Step>
</Steps>
## Customization
@@ -90,10 +92,16 @@ To run this React component locally, you'll need to set up a React development e
You can then customise `CrewLead.jsx` to add color, title, etc.
<Frame>
<img src="/images/enterprise/customise-react-component.png" alt="Customise React Component" />
</Frame>
<Frame>
<img src="/images/enterprise/customise-react-component-2.png" alt="Customise React Component" />
</Frame>
## Next Steps

View File

@@ -10,31 +10,30 @@ As an administrator of a CrewAI AMP account, you can easily invite new team memb
## Inviting Team Members
<Steps>
<Step title="Access the Settings Page">
- Log in to your CrewAI AMP account
- Look for the gear icon (⚙️) in the top right corner of the dashboard
- Click on the gear icon to access the **Settings** page:
<Frame caption="Settings page">
<img src="/images/enterprise/settings-page.png" alt="Settings Page" />
</Frame>
</Step>
<Step title="Navigate to the Members Section">
- On the Settings page, you'll see a `Members` tab
- Click on the `Members` tab to access the **Members** page:
<Frame caption="Members tab">
<img src="/images/enterprise/members-tab.png" alt="Members Tab" />
</Frame>
</Step>
<Step title="Invite New Members">
- In the Members section, you'll see a list of current members (including yourself)
- Locate the `Email` input field
- Enter the email address of the person you want to invite
- Click the `Invite` button to send the invitation
</Step>
<Step title="Repeat as Needed">
- You can repeat this process to invite multiple team members
- Each invited member will receive an email invitation to join your organization
</Step>
<Step title="Access the Settings Page">
- Log in to your CrewAI AMP account - Look for the gear icon (⚙️) in the top
right corner of the dashboard - Click on the gear icon to access the
**Settings** page:
<Frame caption="Settings page">
<img src="/images/enterprise/settings-page.png" alt="Settings Page" />
</Frame>
</Step>
<Step title="Navigate to the Members Section">
- On the Settings page, you'll see a `Members` tab - Click on the `Members`
tab to access the **Members** page:
<Frame caption="Members tab">
<img src="/images/enterprise/members-tab.png" alt="Members Tab" />
</Frame>
</Step>
<Step title="Invite New Members">
- In the Members section, you'll see a list of current members (including
yourself) - Locate the `Email` input field - Enter the email address of the
person you want to invite - Click the `Invite` button to send the invitation
</Step>
<Step title="Repeat as Needed">
- You can repeat this process to invite multiple team members - Each invited
member will receive an email invitation to join your organization
</Step>
</Steps>
## Adding Roles
@@ -42,40 +41,44 @@ As an administrator of a CrewAI AMP account, you can easily invite new team memb
You can add roles to your team members to control their access to different parts of the platform.
<Steps>
<Step title="Access the Settings Page">
- Log in to your CrewAI AMP account
- Look for the gear icon (⚙️) in the top right corner of the dashboard
- Click on the gear icon to access the **Settings** page:
<Frame>
<img src="/images/enterprise/settings-page.png" alt="Settings Page" />
</Frame>
</Step>
<Step title="Navigate to the Members Section">
- On the Settings page, you'll see a `Roles` tab
- Click on the `Roles` tab to access the **Roles** page.
<Frame>
<img src="/images/enterprise/roles-tab.png" alt="Roles Tab" />
</Frame>
- Click on the `Add Role` button to add a new role.
- Enter the details and permissions of the role and click the `Create Role` button to create the role.
<Frame>
<img src="/images/enterprise/add-role-modal.png" alt="Add Role Modal" />
</Frame>
</Step>
<Step title="Add Roles to Members">
- In the Members section, you'll see a list of current members (including yourself)
<Frame>
<img src="/images/enterprise/member-accepted-invitation.png" alt="Member Accepted Invitation" />
</Frame>
- Once the member has accepted the invitation, you can add a role to them.
- Navigate back to `Roles` tab
- Go to the member you want to add a role to and under the `Role` column, click on the dropdown
- Select the role you want to add to the member
- Click the `Update` button to save the role
<Frame>
<img src="/images/enterprise/assign-role.png" alt="Add Role to Member" />
</Frame>
</Step>
<Step title="Access the Settings Page">
- Log in to your CrewAI AMP account - Look for the gear icon (⚙️) in the top
right corner of the dashboard - Click on the gear icon to access the
**Settings** page:
<Frame>
<img src="/images/enterprise/settings-page.png" alt="Settings Page" />
</Frame>
</Step>
<Step title="Navigate to the Members Section">
- On the Settings page, you'll see a `Roles` tab - Click on the `Roles` tab
to access the **Roles** page.
<Frame>
<img src="/images/enterprise/roles-tab.png" alt="Roles Tab" />
</Frame>
- Click on the `Add Role` button to add a new role. - Enter the
details and permissions of the role and click the `Create Role` button to
create the role.
<Frame>
<img src="/images/enterprise/add-role-modal.png" alt="Add Role Modal" />
</Frame>
</Step>
<Step title="Add Roles to Members">
- In the Members section, you'll see a list of current members (including
yourself)
<Frame>
<img
src="/images/enterprise/member-accepted-invitation.png"
alt="Member Accepted Invitation"
/>
</Frame>
- Once the member has accepted the invitation, you can add a role to
them. - Navigate back to `Roles` tab - Go to the member you want to add a
role to and under the `Role` column, click on the dropdown - Select the role
you want to add to the member - Click the `Update` button to save the role
<Frame>
<img src="/images/enterprise/assign-role.png" alt="Add Role to Member" />
</Frame>
</Step>
</Steps>
## Important Notes


The repository is not a version control system. Use Git to track code changes.
Before using the Tool Repository, ensure you have:
- A [CrewAI AMP](https://app.crewai.com) account
- [CrewAI CLI](/en/concepts/cli#cli) installed
- uv>=0.5.0 installed. Check out [how to upgrade](https://docs.astral.sh/uv/getting-started/installation/#upgrading-uv)
- [Git](https://git-scm.com) installed and configured
- Access permissions to publish or install tools in your CrewAI AMP organization
By default, tools are published as private. To make a tool public:
```bash
crewai tool publish --public
```
For more details on how to build tools, see [Creating your own tools](/en/concepts/tools#creating-your-own-tools).
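As a quick orientation before diving into that guide, here is a minimal sketch of a custom tool, assuming the `BaseTool` interface from `crewai.tools`; the tool name and word-counting logic are illustrative only, not part of the repository workflow:

```python
from crewai.tools import BaseTool


class WordCountTool(BaseTool):
    """Illustrative custom tool that counts words in a piece of text."""

    name: str = "Word Counter"
    description: str = "Counts the number of words in the provided text."

    def _run(self, text: str) -> str:
        # _run implements the tool's behavior; agents invoke it with the
        # arguments described in the tool's description
        return f"The text contains {len(text.split())} words."
```

Once a tool like this lives in its own project, `crewai tool publish` packages and uploads it as shown above.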
## Updating Tools
To delete a tool:
4. Click **Delete**
<Warning>
Deletion is permanent. Deleted tools cannot be restored or re-installed.
</Warning>
## Security Checks
You can check the security check status of a tool at:
`CrewAI AMP > Tools > Your Tool > Versions`
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with API integration or troubleshooting.
</Card>


<Note>
After deploying your crew to CrewAI AMP, you may need to make updates to the code, security settings, or configuration.
This guide explains how to perform these common update operations.
</Note>
## Why Update Your Crew?
CrewAI won't automatically pick up GitHub updates by default, so you'll need to manually trigger updates, unless you checked the `Auto-update` option when deploying your crew.
There are several reasons you might want to update your crew deployment:
- You want to update the code with the latest commit you pushed to GitHub
- You want to reset the bearer token for security reasons
- You want to update environment variables
When you've pushed new commits to your GitHub repository and want to update your deployment:
1. Navigate to your crew in the CrewAI AMP platform
2. Click on the `Re-deploy` button on your crew details page
<Frame>![Re-deploy Button](/images/enterprise/redeploy-button.png)</Frame>
This will trigger an update that you can track using the progress bar. The system will pull the latest code from your repository and rebuild your deployment.
If you need to generate a new bearer token (for example, if you suspect the current token has been compromised):
2. Find the `Bearer Token` section
3. Click the `Reset` button next to your current token
<Frame>![Reset Token](/images/enterprise/reset-token.png)</Frame>
<Warning>
Resetting your bearer token will invalidate the previous token immediately. Make sure to update any applications or scripts that are using the old token.
</Warning>
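In practice, that means every script or service calling your deployed crew must switch to the new token before its next request. Below is a minimal sketch of such a client, assuming a `requests`-based script and a kickoff-style HTTP endpoint; the URL, payload shape, and environment variable name are placeholders, so copy the real values from your crew's detail page:

```python
import os

import requests

# Placeholder URL -- use the actual endpoint shown on your crew's detail page
CREW_KICKOFF_URL = "https://your-crew.crewai.com/kickoff"
# Store the newly reset token in your environment, never in source code
BEARER_TOKEN = os.environ["CREW_BEARER_TOKEN"]

response = requests.post(
    CREW_KICKOFF_URL,
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    json={"inputs": {"topic": "quarterly report"}},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```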
## 3. Updating Environment Variables
To update the environment variables for your crew:
5. Finally, click the `Update Deployment` button at the bottom of the page to apply the changes
<Note>
Updating environment variables will trigger a new deployment, but this will only update the environment configuration and not the code itself.
</Note>
## After Updating
After performing any update:
3. Once complete, test your crew to ensure the changes are working as expected
<Tip>
If you encounter any issues after updating, you can view deployment logs in the platform or contact support for assistance.
</Tip>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with updating your crew or troubleshooting deployment issues.
</Card>


CrewAI AMP allows you to automate your workflow using webhooks.
<img src="/images/enterprise/activepieces-email.png" alt="ActivePieces Email" />
</Frame>
</Step>
</Steps>
## Webhook Output Examples
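On the receiving end, any HTTP endpoint can consume these webhook payloads. Here is a minimal sketch of a receiver, using FastAPI as an illustrative choice; the route name is arbitrary and the payload fields depend on which webhook events you configured:

```python
from fastapi import FastAPI, Request

app = FastAPI()


@app.post("/crewai-webhook")  # arbitrary route; point your webhook URL here
async def handle_crewai_webhook(request: Request) -> dict:
    payload = await request.json()
    # Inspect the fields your automation actually sends
    print(payload)
    return {"status": "received"}
```

For local testing, run the app with `uvicorn` and point the webhook at the resulting URL.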


This guide will walk you through the process of setting up Zapier triggers for CrewAI.
<img src="/images/enterprise/zapier-9.png" alt="Zapier 12" />
</Frame>
</Step>
</Steps>
## Tips for Success


2. Find **Asana** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for task and project management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
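Because a missing token only surfaces when an agent first calls an integration, it can help to fail fast at startup. A minimal sketch, assuming `python-dotenv` is installed to load the `.env` file:

```python
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads CREWAI_PLATFORM_INTEGRATION_TOKEN from .env, if present

if not os.environ.get("CREWAI_PLATFORM_INTEGRATION_TOKEN"):
    raise RuntimeError(
        "CREWAI_PLATFORM_INTEGRATION_TOKEN is not set; "
        "agents configured with apps=[...] cannot authenticate."
    )
```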
## Available Actions
<AccordionGroup>
<Accordion title="ASANA_CREATE_COMMENT">
<Accordion title="asana/create_comment">
**Description:** Create a comment in Asana.
**Parameters:**
- `task` (string, required): Task ID - The ID of the Task the comment will be added to. The comment will be authored by the currently authenticated user.
- `text` (string, required): Text (example: "This is a comment.").
</Accordion>
<Accordion title="ASANA_CREATE_PROJECT">
<Accordion title="asana/create_project">
**Description:** Create a project in Asana.
**Parameters:**
- `workspace` (string, required): Workspace - Use Connect Portal Workflow Settings to allow users to select which Workspace to create Projects in. Defaults to the user's first Workspace if left blank.
- `team` (string, optional): Team - Use Connect Portal Workflow Settings to allow users to select which Team to share this Project with. Defaults to the user's first Team if left blank.
- `notes` (string, optional): Notes (example: "These are things we need to purchase.").
</Accordion>
<Accordion title="ASANA_GET_PROJECTS">
<Accordion title="asana/get_projects">
**Description:** Get a list of projects in Asana.
**Parameters:**
- `archived` (string, optional): Archived - Choose "true" to show archived projects, "false" to display only active projects, or "default" to show both archived and active projects.
- Options: `default`, `true`, `false`
</Accordion>
<Accordion title="ASANA_GET_PROJECT_BY_ID">
<Accordion title="asana/get_project_by_id">
**Description:** Get a project by ID in Asana.
**Parameters:**
- `projectFilterId` (string, required): Project ID.
</Accordion>
<Accordion title="ASANA_CREATE_TASK">
<Accordion title="asana/create_task">
**Description:** Create a task in Asana.
**Parameters:**
- `dueAtDate` (string, optional): Due At - The date and time (ISO timestamp) at which this task is due. Cannot be used together with Due On. (example: "2019-09-15T02:06:58.147Z").
- `assignee` (string, optional): Assignee - The ID of the Asana user this task will be assigned to. Use Connect Portal Workflow Settings to allow users to select an Assignee.
- `gid` (string, optional): External ID - An ID from your application to associate this task with. You can use this ID to sync updates to this task later.
</Accordion>
<Accordion title="ASANA_UPDATE_TASK">
<Accordion title="asana/update_task">
**Description:** Update a task in Asana.
**Parameters:**
- `dueAtDate` (string, optional): Due At - The date and time (ISO timestamp) at which this task is due. Cannot be used together with Due On. (example: "2019-09-15T02:06:58.147Z").
- `assignee` (string, optional): Assignee - The ID of the Asana user this task will be assigned to. Use Connect Portal Workflow Settings to allow users to select an Assignee.
- `gid` (string, optional): External ID - An ID from your application to associate this task with. You can use this ID to sync updates to this task later.
</Accordion>
<Accordion title="ASANA_GET_TASKS">
<Accordion title="asana/get_tasks">
**Description:** Get a list of tasks in Asana.
**Parameters:**
- `project` (string, optional): Project - The ID of the Project to filter tasks on. Use Connect Portal Workflow Settings to allow users to select a Project.
- `assignee` (string, optional): Assignee - The ID of the assignee to filter tasks on. Use Connect Portal Workflow Settings to allow users to select an Assignee.
- `completedSince` (string, optional): Completed since - Only return tasks that are either incomplete or that have been completed since this time (ISO or Unix timestamp). (example: "2014-04-25T16:15:47-04:00").
</Accordion>
<Accordion title="ASANA_GET_TASKS_BY_ID">
<Accordion title="asana/get_tasks_by_id">
**Description:** Get a list of tasks by ID in Asana.
**Parameters:**
- `taskId` (string, required): Task ID.
</Accordion>
<Accordion title="ASANA_GET_TASK_BY_EXTERNAL_ID">
<Accordion title="asana/get_task_by_external_id">
**Description:** Get a task by external ID in Asana.
**Parameters:**
- `gid` (string, required): External ID - The ID that this task is associated or synced with, from your application.
</Accordion>
<Accordion title="ASANA_ADD_TASK_TO_SECTION">
<Accordion title="asana/add_task_to_section">
**Description:** Add a task to a section in Asana.
**Parameters:**
- `taskId` (string, required): Task ID - The ID of the task. (example: "1204619611402340").
- `beforeTaskId` (string, optional): Before Task ID - The ID of a task in this section that this task will be inserted before. Cannot be used with After Task ID. (example: "1204619611402340").
- `afterTaskId` (string, optional): After Task ID - The ID of a task in this section that this task will be inserted after. Cannot be used with Before Task ID. (example: "1204619611402340").
</Accordion>
<Accordion title="ASANA_GET_TEAMS">
<Accordion title="asana/get_teams">
**Description:** Get a list of teams in Asana.
**Parameters:**
- `workspace` (string, required): Workspace - Returns the teams in this workspace visible to the authorized user.
</Accordion>
<Accordion title="ASANA_GET_WORKSPACES">
<Accordion title="asana/get_workspaces">
**Description:** Get a list of workspaces in Asana.
**Parameters:** None required.
</Accordion>
</AccordionGroup>
```python
from crewai import Agent, Task, Crew
# Create an agent with Asana capabilities
asana_agent = Agent(
    role="Project Manager",
    goal="Manage tasks and projects in Asana efficiently",
    backstory="An AI assistant specialized in project management and task coordination.",
    apps=['asana']  # All Asana actions will be available
)

# Task to create a new project
```
### Filtering Specific Asana Tools
```python
from crewai import Agent, Task, Crew

# Create agent with specific Asana actions only
task_manager_agent = Agent(
    role="Task Manager",
    goal="Create and manage tasks efficiently",
    backstory="An AI assistant that focuses on task creation and management.",
    apps=[
        'asana/create_task',
        'asana/update_task',
        'asana/get_tasks'
    ]  # Specific Asana actions
)

# Task to create and assign a task
```
```python
from crewai import Agent, Task, Crew
project_coordinator = Agent(
    role="Project Coordinator",
    goal="Coordinate project activities and track progress",
    backstory="An experienced project coordinator who ensures projects run smoothly.",
    apps=['asana']
)

# Complex task involving multiple Asana operations
```


2. Find **Box** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for file and folder management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
<Accordion title="BOX_SAVE_FILE">
<Accordion title="box/save_file">
**Description:** Save a file from URL in Box.
**Parameters:**
- `file` (string, required): File URL - Files must be smaller than 50MB in size. (example: "https://picsum.photos/200/300").
</Accordion>
<Accordion title="BOX_SAVE_FILE_FROM_OBJECT">
<Accordion title="box/save_file_from_object">
**Description:** Save a file in Box.
**Parameters:**
- `file` (string, required): File - Accepts a File Object containing file data. Files must be smaller than 50MB in size.
- `fileName` (string, required): File Name (example: "qwerty.png").
- `folder` (string, optional): Folder - Use Connect Portal Workflow Settings to allow users to select the File's Folder destination. Defaults to the user's root folder if left blank.
</Accordion>
<Accordion title="BOX_GET_FILE_BY_ID">
<Accordion title="box/get_file_by_id">
**Description:** Get a file by ID in Box.
**Parameters:**
- `fileId` (string, required): File ID - The unique identifier that represents a file. (example: "12345").
</Accordion>
<Accordion title="BOX_LIST_FILES">
<Accordion title="box/list_files">
**Description:** List files in Box.
**Parameters:**
</Accordion>
<Accordion title="BOX_CREATE_FOLDER">
<Accordion title="box/create_folder">
**Description:** Create a folder in Box.
**Parameters:**
"id": "123456"
}
```
</Accordion>
<Accordion title="BOX_MOVE_FOLDER">
<Accordion title="box/move_folder">
**Description:** Move a folder in Box.
**Parameters:**
"id": "123456"
}
```
</Accordion>
<Accordion title="BOX_GET_FOLDER_BY_ID">
<Accordion title="box/get_folder_by_id">
**Description:** Get a folder by ID in Box.
**Parameters:**
- `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0").
</Accordion>
<Accordion title="BOX_SEARCH_FOLDERS">
<Accordion title="box/search_folders">
**Description:** Search folders in Box.
**Parameters:**
</Accordion>
<Accordion title="BOX_DELETE_FOLDER">
<Accordion title="box/delete_folder">
**Description:** Delete a folder in Box.
**Parameters:**
- `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0").
- `recursive` (boolean, optional): Recursive - Delete a folder that is not empty by recursively deleting the folder and all of its content.
</Accordion>
</AccordionGroup>
```python
from crewai import Agent, Task, Crew

# Create an agent with Box capabilities
box_agent = Agent(
    role="Document Manager",
    goal="Manage files and folders in Box efficiently",
    backstory="An AI assistant specialized in document management and file organization.",
    apps=['box']  # All Box actions will be available
)

# Task to create a folder structure
```
### Filtering Specific Box Tools
```python
from crewai import Agent, Task, Crew

# Create agent with specific Box actions only
file_organizer_agent = Agent(
    role="File Organizer",
    goal="Organize and manage file storage efficiently",
    backstory="An AI assistant that focuses on file organization and storage management.",
    apps=['box/create_folder', 'box/save_file', 'box/list_files']  # Specific Box actions
)

# Task to organize files
```
```python
from crewai import Agent, Task, Crew
file_manager = Agent(
    role="File Manager",
    goal="Maintain organized file structure and manage document lifecycle",
    backstory="An experienced file manager who ensures documents are properly organized and accessible.",
    apps=['box']
)

# Complex task involving multiple Box operations
```


2. Find **ClickUp** in the Authentication Integrations section
3. Click **Connect** and complete the OAuth flow
4. Grant the necessary permissions for task and project management
5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations)
### 2. Install Required Package
```bash
uv add crewai-tools
```
### 3. Environment Variable Setup
<Note>
To use integrations with `Agent(apps=[])`, you must set the
`CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise
Token.
</Note>
```bash
export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token"
```
Or add it to your `.env` file:
```
CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
```
## Available Actions
<AccordionGroup>
<Accordion title="CLICKUP_SEARCH_TASKS">
<Accordion title="clickup/search_tasks">
**Description:** Search for tasks in ClickUp using advanced filters.
**Parameters:**
Available fields: `space_ids%5B%5D`, `project_ids%5B%5D`, `list_ids%5B%5D`, `statuses%5B%5D`, `include_closed`, `assignees%5B%5D`, `tags%5B%5D`, `due_date_gt`, `due_date_lt`, `date_created_gt`, `date_created_lt`, `date_updated_gt`, `date_updated_lt` (the `%5B%5D` suffix is the URL-encoded form of `[]`)
</Accordion>
<Accordion title="CLICKUP_GET_TASK_IN_LIST">
<Accordion title="clickup/get_task_in_list">
**Description:** Get tasks in a specific list in ClickUp.
**Parameters:**
- `listId` (string, required): List - Select a List to get tasks from. Use Connect Portal User Settings to allow users to select a ClickUp List.
- `taskFilterFormula` (string, optional): Search for tasks that match specified filters. For example: name=task1.
</Accordion>
<Accordion title="CLICKUP_CREATE_TASK">
<Accordion title="clickup/create_task">
**Description:** Create a task in ClickUp.
**Parameters:**
- `assignees` (string, optional): Assignees - Select a Member (or an array of member IDs) to be assigned to this task. Use Connect Portal User Settings to allow users to select a ClickUp Member.
- `dueDate` (string, optional): Due Date - Specify a date for this task to be due on.
- `additionalFields` (string, optional): Additional Fields - Specify additional fields to include on this task as JSON.
</Accordion>
<Accordion title="CLICKUP_UPDATE_TASK">
<Accordion title="clickup/update_task">
**Description:** Update a task in ClickUp.
**Parameters:**
- `assignees` (string, optional): Assignees - Select a Member (or an array of member IDs) to be assigned to this task. Use Connect Portal User Settings to allow users to select a ClickUp Member.
- `dueDate` (string, optional): Due Date - Specify a date for this task to be due on.
- `additionalFields` (string, optional): Additional Fields - Specify additional fields to include on this task as JSON.
</Accordion>
<Accordion title="CLICKUP_DELETE_TASK">
<Accordion title="clickup/delete_task">
**Description:** Delete a task in ClickUp.
**Parameters:**
- `taskId` (string, required): Task ID - The ID of the task to delete.
</Accordion>
<Accordion title="CLICKUP_GET_LIST">
<Accordion title="clickup/get_list">
**Description:** Get List information in ClickUp.
**Parameters:**
- `spaceId` (string, required): Space ID - The ID of the space containing the lists.
</Accordion>
<Accordion title="CLICKUP_GET_CUSTOM_FIELDS_IN_LIST">
<Accordion title="clickup/get_custom_fields_in_list">
**Description:** Get Custom Fields in a List in ClickUp.
**Parameters:**
- `listId` (string, required): List ID - The ID of the list to get custom fields from.
</Accordion>
<Accordion title="CLICKUP_GET_ALL_FIELDS_IN_LIST">
<Accordion title="clickup/get_all_fields_in_list">
**Description:** Get All Fields in a List in ClickUp.
**Parameters:**
- `listId` (string, required): List ID - The ID of the list to get all fields from.
</Accordion>
<Accordion title="CLICKUP_GET_SPACE">
<Accordion title="clickup/get_space">
**Description:** Get Space information in ClickUp.
**Parameters:**
- `spaceId` (string, optional): Space ID - The ID of the space to retrieve.
</Accordion>
<Accordion title="CLICKUP_GET_FOLDERS">
<Accordion title="clickup/get_folders">
**Description:** Get Folders in ClickUp.
**Parameters:**
- `spaceId` (string, required): Space ID - The ID of the space containing the folders.
</Accordion>
<Accordion title="CLICKUP_GET_MEMBER">
<Accordion title="clickup/get_member">
**Description:** Get Member information in ClickUp.
**Parameters:** None required.
</Accordion>
</AccordionGroup>
```python
from crewai import Agent, Task, Crew

# Create an agent with ClickUp capabilities
clickup_agent = Agent(
    role="Task Manager",
    goal="Manage tasks and projects in ClickUp efficiently",
    backstory="An AI assistant specialized in task management and productivity coordination.",
    apps=['clickup']  # All ClickUp actions will be available
)

# Task to create a new task
```
### Filtering Specific ClickUp Tools
```python
from crewai import Agent, Task, Crew

# Create agent with specific ClickUp actions only
task_coordinator = Agent(
    role="Task Coordinator",
    goal="Create and manage tasks efficiently",
    backstory="An AI assistant that focuses on task creation and status management.",
    apps=['clickup/create_task']
)

# Task to manage task workflow
```
```python
from crewai import Agent, Task, Crew
project_manager = Agent(
    role="Project Manager",
    goal="Coordinate project activities and track team productivity",
    backstory="An experienced project manager who ensures projects are delivered on time.",
    apps=['clickup']
)

# Complex task involving multiple ClickUp operations
```
```python
from crewai import Agent, Task, Crew
task_analyst = Agent(
    role="Task Analyst",
    goal="Analyze task patterns and optimize team productivity",
    backstory="An AI assistant that analyzes task data to improve team efficiency.",
    apps=['clickup']
)

# Task to analyze and optimize task distribution
```
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with ClickUp integration setup or troubleshooting.
</Card>
