* Implement user input handling in Flow class
- Introduced `FlowInputRequestedEvent` and `FlowInputReceivedEvent` to manage user input requests and responses during flow execution.
- Added `InputProvider` protocol and `InputResponse` dataclass for customizable input handling.
- Enhanced `Flow` class with `ask()` method to request user input, including timeout handling and state checkpointing.
- Updated `FlowConfig` to support custom input providers.
- Created `input_provider.py` for default input provider implementations, including a console-based provider.
- Added comprehensive tests for `ask()` functionality, covering basic usage, timeout behavior, and integration with flow machinery.
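A rough sketch of how these pieces might fit together; `InputResponse`, `InputProvider`, and a console-based provider are named above, but every signature shown here is an assumption:

```python
from dataclasses import dataclass
from typing import Any, Protocol


@dataclass
class InputResponse:
    content: str                            # the user's answer
    metadata: dict[str, Any] | None = None  # optional provider details


class InputProvider(Protocol):
    def request_input(
        self, question: str, timeout: float | None = None
    ) -> InputResponse:
        """Block until the user answers or the timeout elapses."""
        ...


class ConsoleInputProvider:
    """Assumed shape of the default console-based provider."""

    def request_input(
        self, question: str, timeout: float | None = None
    ) -> InputResponse:
        return InputResponse(content=input(f"{question} "))
```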
* Potential fix for pull request finding 'Unused import'
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
* Refactor test_flow_ask.py to streamline flow kickoff calls
- Removed unnecessary variable assignments for the result of `flow.kickoff()` in two test cases, improving code clarity.
- Updated assertions to ensure the expected execution log entries are present after the flow kickoff, enhancing test reliability.
* Add current_flow_method_name context variable for flow method tracking
- Introduced a new context variable, `current_flow_method_name`, to store the name of the currently executing flow method, defaulting to "unknown".
- Updated the Flow class to set and reset this context variable during method execution, enhancing the ability to track method calls without stack inspection.
- Removed the obsolete `_resolve_calling_method_name` method, streamlining the code and improving clarity.
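A minimal sketch of the context-variable pattern, assuming a helper that wraps method execution (the default "unknown" comes straight from the commit):

```python
from contextvars import ContextVar

current_flow_method_name: ContextVar[str] = ContextVar(
    "current_flow_method_name", default="unknown"
)


def run_flow_method(name, method, *args, **kwargs):
    token = current_flow_method_name.set(name)  # record the executing method
    try:
        return method(*args, **kwargs)
    finally:
        current_flow_method_name.reset(token)   # restore the previous value
```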
* Enhance input history management in Flow class
- Introduced a new `InputHistoryEntry` TypedDict to structure user input history for the `ask()` method, capturing details such as the question, user response, method name, timestamp, and associated metadata.
- Updated the `_input_history` attribute in the Flow class to utilize the new `InputHistoryEntry` type, improving type safety and clarity in input history management.
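A possible shape for the entry, based on the fields the commit lists; exact key names and types are assumptions:

```python
from typing import Any, TypedDict


class InputHistoryEntry(TypedDict):
    question: str             # what ask() presented to the user
    response: str             # what the user answered
    method_name: str          # flow method that asked
    timestamp: float          # e.g. time.time() when answered
    metadata: dict[str, Any]  # provider- or caller-supplied details
```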
* Enhance timeout handling in Flow class input requests
- Updated the `ask()` method to improve timeout management by manually managing the `ThreadPoolExecutor`, preventing potential deadlocks when the provider call exceeds the timeout duration.
- Added clarifications in the documentation regarding the behavior of the timeout and the underlying request handling, ensuring better understanding for users.
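The deadlock-avoidance pattern likely looks something like this sketch: manage the executor by hand so a provider call that outlives the timeout cannot block shutdown (names are assumptions):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout


def call_with_timeout(provider_call, timeout: float):
    executor = ThreadPoolExecutor(max_workers=1)
    try:
        future = executor.submit(provider_call)
        return future.result(timeout=timeout)  # raises FutureTimeout on expiry
    except FutureTimeout:
        future.cancel()                # best effort; a running call keeps its thread
        raise
    finally:
        executor.shutdown(wait=False)  # don't block on a hung worker
```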
* Enhance memory reset functionality in CLI commands
- Introduced flow memory reset capabilities in the `reset_memories_command`, allowing for both crew and flow memory resets.
- Added a new utility function `_reset_flow_memory` to handle memory resets for individual flow instances, improving modularity and clarity.
- Updated the `get_flows` utility to discover flow instances from project files, enhancing the CLI's ability to manage flow states.
- Expanded test coverage to validate the new flow memory reset features, ensuring robust functionality and error handling.
* fix linter

* Fix resumption flag logic in Flow class and add regression test for cyclic flow persistence
- Updated the logic for setting the `_is_execution_resuming` flag to ensure it only activates when there are completed methods to replay, preventing incorrect suppression of cyclic re-execution during state reloads.
- Added a regression test to validate that cyclic router flows complete all iterations when persistence is enabled and an 'id' is passed in inputs, ensuring robust handling of flow execution in these scenarios.
---------
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
- Added tests to verify self-loop behavior in HITL routers, ensuring they can handle multiple rejections and immediate approvals.
- Implemented `test_hitl_self_loop_routes_back_to_same_method`, `test_hitl_self_loop_multiple_rejections`, and `test_hitl_self_loop_immediate_approval` to validate the expected execution order and outcomes.
- Updated the `or_()` listener to support looping back to the same method based on human feedback outcomes, improving flow control in complex scenarios.
* better DevEx
* Refactor: Update supported native providers and enhance memory handling
- Removed "groq" and "meta" from the list of supported native providers in `llm.py`.
- Added a safeguard in `flow.py` to ensure all background memory saves complete before returning.
- Improved error handling in `unified_memory.py` to prevent exceptions during shutdown, ensuring smoother memory operations and event bus interactions.
* Enhance Memory System with Consolidation and Learning Features
- Introduced memory consolidation mechanisms to prevent duplicate records during content saving, utilizing similarity checks and LLM decision-making.
- Implemented non-blocking save operations in the memory system, allowing agents to continue tasks while memory is being saved.
- Added support for learning from human feedback, enabling the system to distill lessons from past corrections and improve future outputs.
- Updated documentation to reflect new features and usage examples for memory consolidation and HITL learning.
* Enhance cyclic flow handling for or_() listeners
- Updated the Flow class to ensure that all fired or_() listeners are cleared between cycle iterations, allowing them to fire again in subsequent cycles. This change addresses a bug where listeners remained suppressed across iterations.
- Added regression tests to verify that or_() listeners fire correctly on every iteration in cyclic flows, ensuring expected behavior in complex routing scenarios.
* chore: update memory management and dependencies
- Enhance the memory system by introducing a unified memory API that consolidates short-term, long-term, entity, and external memory functionalities.
- Update the `.gitignore` to exclude new memory-related files and blog directories.
- Modify `conftest.py` to handle missing imports for vcr stubs more gracefully.
- Add new development dependencies in `pyproject.toml` for testing and memory management.
- Refactor the `Crew` class to utilize the new unified memory system, replacing deprecated memory attributes.
- Implement memory context injection in `LiteAgent` to improve memory recall during agent execution.
- Update documentation to reflect changes in memory usage and configuration.
* feat: introduce Memory TUI for enhanced memory management
- Add a new command to the CLI for launching a Textual User Interface (TUI) to browse and recall memories.
- Implement the MemoryTUI class to facilitate user interaction with memory scopes and records.
- Enhance the unified memory API by adding a method to list records within a specified scope.
- Update `pyproject.toml` to include the `textual` dependency for TUI functionality.
- Ensure proper error handling for missing dependencies when accessing the TUI.
* feat: implement consolidation flow for memory management
- Introduce the ConsolidationFlow class to handle the decision-making process for inserting, updating, or deleting memory records based on new content.
- Add new data models: ConsolidationAction and ConsolidationPlan to structure the actions taken during consolidation.
- Enhance the memory types with new fields for consolidation thresholds and limits.
- Update the unified memory API to utilize the new consolidation flow for managing memory records.
- Implement embedding functionality for new content to facilitate similarity checks.
- Refactor existing memory analysis methods to integrate with the consolidation process.
- Update translations to include prompts for consolidation actions and user interactions.
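Hypothetical shapes for the two models, grounded only in the insert/update/delete actions the commit describes; all field names are assumptions:

```python
from typing import Literal

from pydantic import BaseModel


class ConsolidationAction(BaseModel):
    action: Literal["insert", "update", "delete"]
    record_id: str | None = None  # target record for update/delete
    content: str | None = None    # new or revised content for insert/update


class ConsolidationPlan(BaseModel):
    actions: list[ConsolidationAction]
    reasoning: str | None = None  # the LLM's justification, if captured
```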
* feat: enhance Memory TUI with Rich markup and improved UI elements
- Update the MemoryTUI class to utilize Rich markup for better visual representation of memory scope information.
- Introduce a color palette for consistent branding across the TUI interface.
- Refactor the CSS styles to improve the layout and aesthetics of the memory browsing experience.
- Enhance the display of memory entries, including better formatting for records and importance ratings.
- Implement loading indicators and error messages with Rich styling for improved user feedback during recall operations.
- Update the action bindings and navigation prompts for a more intuitive user experience.
* feat: enhance Crew class memory management and configuration
- Update the Crew class to allow for more flexible memory configurations by accepting Memory, MemoryScope, or MemorySlice instances.
- Refactor memory initialization logic to support custom memory configurations while maintaining backward compatibility.
- Improve documentation for memory-related fields to clarify usage and expectations.
- Introduce a recall oversample factor to optimize memory recall processes.
- Update related memory types and configurations to ensure consistency across the memory management system.
* chore: update dependency overrides and enhance memory management
- Added an override for the 'rich' dependency to allow compatibility with 'textual' requirements.
- Updated the 'pyproject.toml' and 'uv.lock' files to reflect the new dependency specifications.
- Refactored the Crew class to simplify memory configuration handling by allowing any type for the memory attribute.
- Improved error messages in the CLI for missing 'textual' dependency to guide users on installation.
- Introduced new packages and dependencies in the project to enhance functionality and maintain compatibility.
* refactor: enhance thread safety in flow management
- Updated LockedListProxy and LockedDictProxy to subclass list and dict respectively, ensuring compatibility with libraries requiring strict type checks.
- Improved documentation to clarify the purpose of these proxies and their thread-safe operations.
- Ensured that all mutations are protected by locks while reads delegate to the underlying data structures, enhancing concurrency safety.
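A simplified illustration of the proxy idea: subclassing list keeps strict isinstance checks happy, while mutating methods take the shared lock and reads fall through to list's own implementations:

```python
import threading


class LockedListProxy(list):
    """List subclass whose mutating methods hold a shared lock (sketch)."""

    def __init__(self, iterable=(), *, lock: threading.Lock | None = None):
        super().__init__(iterable)
        self._lock = lock or threading.Lock()

    def append(self, item):
        with self._lock:
            super().append(item)

    def extend(self, items):
        with self._lock:
            super().extend(items)

    def __setitem__(self, index, value):
        with self._lock:
            super().__setitem__(index, value)

    # reads (len, iteration, __getitem__) use list's implementations unchanged
```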
* chore: update dependency versions and improve Python compatibility
- Downgraded 'vcrpy' dependency to version 7.0.0 for compatibility.
- Enhanced 'uv.lock' to include more granular resolution markers for Python versions and implementations, ensuring better compatibility across different environments.
- Updated 'urllib3' and 'selenium' dependencies to specify versions based on Python implementation, improving stability and performance.
- Removed deprecated resolution markers for 'fastembed' and streamlined its dependencies for better clarity.
* fix linter
* chore: update uv.lock for improved dependency management and memory management enhancements
- Incremented revision number in uv.lock to reflect changes.
- Added a new development dependency group in uv.lock, specifying versions for tools like pytest, mypy, and pre-commit to streamline development workflows.
- Enhanced error handling in CLI memory functions to provide clearer feedback on missing dependencies.
- Refactored memory management classes to improve type hints and maintainability, ensuring better compatibility with future updates.
* fix tests
* refactor: remove obsolete RAGStorage tests and clean up error handling
- Deleted outdated tests for RAGStorage that were no longer relevant, including tests for client failures, save operation failures, and reset failures.
- Cleaned up the test suite to focus on current functionality and improve maintainability.
- Ensured that remaining tests continue to validate the expected behavior of knowledge storage components.
* fix test
* fix tests
* fix tests
* forcing new commit
* fix: add location parameter to Google Vertex embedder configuration for memory integration tests
* debugging CI
* adding debugging for CI
* refactor: remove unnecessary logging for memory checks in agent execution
- Eliminated redundant logging statements related to memory checks in the Agent and CrewAgentExecutor classes.
- Simplified the memory retrieval logic by directly checking for available memory without logging intermediate states.
- Improved code readability and maintainability by reducing clutter in the logging output.
* updating deps
* feat: enhance thread safety in LockedListProxy and LockedDictProxy
- Added equality comparison methods (__eq__ and __ne__) to LockedListProxy and LockedDictProxy to allow for safe comparison of their contents.
- Implemented consistent locking mechanisms to prevent deadlocks during comparisons.
- Improved the overall robustness of these proxy classes in multi-threaded environments.
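The deadlock prevention presumably relies on a stable lock-acquisition order; one way to sketch it, extending the proxy above:

```python
import threading


class LockedListProxy(list):
    def __init__(self, iterable=(), *, lock: threading.Lock | None = None):
        super().__init__(iterable)
        self._lock = lock or threading.Lock()

    def __eq__(self, other):
        if not isinstance(other, list):
            return NotImplemented
        if isinstance(other, LockedListProxy) and other._lock is not self._lock:
            # acquire both locks in a stable global order (here by id) so two
            # threads comparing the same pair in opposite directions can't deadlock
            first, second = sorted((self._lock, other._lock), key=id)
            with first, second:
                return list.__eq__(self, other)
        with self._lock:
            return list.__eq__(self, other)

    def __ne__(self, other):
        result = self.__eq__(other)
        return NotImplemented if result is NotImplemented else not result
```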
* feat: enhance memory functionality in Flows documentation and memory system
- Added a new section on memory usage within Flows, detailing built-in methods for storing and recalling memories.
- Included an example of a Research and Analyze Flow demonstrating the integration of memory for accumulating knowledge over time.
- Updated the Memory documentation to clarify the unified memory system and its capabilities, including adaptive-depth recall and composite scoring.
- Introduced a new configuration parameter, `recall_oversample_factor`, to improve the effectiveness of memory retrieval processes.
* update docs
* refactor: improve memory record handling and pagination in unified memory system
- Simplified the `get_record` method in the Memory class by directly accessing the storage's `get_record` method.
- Enhanced the `list_records` method to include an `offset` parameter for pagination, allowing users to skip a specified number of records.
- Updated documentation for both methods to clarify their functionality and parameters, improving overall code clarity and usability.
* test: update memory scope assertions in unified memory tests
- Modified assertions in `test_lancedb_list_scopes_get_scope_info` and `test_memory_list_scopes_info_tree` to check for the presence of the "/team" scope instead of the root scope.
- Clarified comments to indicate that `list_scopes` returns child scopes rather than the root itself, enhancing test clarity and accuracy.
* feat: integrate memory tools for agents and crews
- Added functionality to inject memory tools into agents during initialization, enhancing their ability to recall and remember information mid-task.
- Implemented a new `_add_memory_tools` method in the Crew class to facilitate the addition of memory tools when memory is available.
- Introduced `RecallMemoryTool` and `RememberTool` classes in a new `memory_tools.py` file, providing agents with active recall and memory storage capabilities.
- Updated English translations to include descriptions for the new memory tools, improving user guidance on their usage.
* refactor: streamline memory recall functionality across agents and tools
- Removed the 'depth' parameter from memory recall calls in LiteAgent and Agent classes, simplifying the recall process.
- Updated the MemoryTUI to use 'deep' depth by default for more comprehensive memory retrieval.
- Enhanced the MemoryScope and MemorySlice classes to default to 'deep' depth, improving recall accuracy.
- Introduced a new 'recall_queries' field in QueryAnalysis to optimize semantic vector searches with targeted phrases.
- Updated documentation and comments to reflect changes in memory recall behavior and parameters.
* refactor: optimize memory management in flow classes
- Enhanced memory auto-creation logic in Flow class to prevent unnecessary Memory instance creation for internal flows (RecallFlow, ConsolidationFlow) by introducing a _skip_auto_memory flag.
- Removed the deprecated time_hints field from QueryAnalysis and replaced it with a more flexible time_filter field to better handle time-based queries.
- Updated documentation and comments to reflect changes in memory handling and query analysis structure, improving clarity and usability.
* updates tests
* feat: introduce EncodingFlow for enhanced memory encoding pipeline
- Added a new EncodingFlow class to orchestrate the encoding process for memory, integrating LLM analysis and embedding.
- Updated the Memory class to utilize EncodingFlow for saving content, improving the overall memory management and conflict resolution.
- Enhanced the unified memory module to include the new EncodingFlow in its public API, facilitating better memory handling.
- Updated tests to ensure proper functionality of the new encoding flow and its integration with existing memory features.
* refactor: optimize memory tool integration and recall flow
- Streamlined the addition of memory tools in the Agent class by using list comprehension for cleaner code.
- Enhanced the RecallFlow class to build task lists more efficiently with list comprehensions, improving readability and performance.
- Updated the RecallMemoryTool to utilize list comprehensions for formatting memory results, simplifying the code structure.
- Adjusted test assertions in LiteAgent to reflect the default behavior of memory recall depth, ensuring clarity in expected outcomes.
* Potential fix for pull request finding 'Empty except'
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
* chore: gen missing cassette
* fix
* test: enhance memory extraction test by mocking recall to prevent LLM calls
Updated the test for memory extraction to include a mock for the recall method, ensuring that the test focuses on the save path without invoking external LLM calls. This improves test reliability and clarity.
* refactor: enhance memory handling by adding agent role parameter
Updated memory storage methods across multiple classes to include an optional `agent_role` parameter, improving the context of stored memories. Additionally, modified the initialization of several flow classes to suppress flow events, enhancing performance and reducing unnecessary event triggers.
* feat: enhance agent memory functionality with recall and save mechanisms
Implemented memory context injection during agent kickoff, allowing for memory recall before execution and passive saving of results afterward. Added new methods to handle memory saving and retrieval, including error handling for memory operations. Updated the BaseAgent class to support dynamic memory resolution and improved memory record structure with source and privacy attributes for better provenance tracking.
* test
* feat: add utility method to simplify tools field in console formatter
Introduced a new static method `_simplify_tools_field` in the console formatter to transform the 'tools' field from full tool objects to a comma-separated string of tool names. This enhancement improves the readability of tool information in the output.
* refactor: improve lazy initialization of LLM and embedder in Memory class
Refactored the Memory class to implement lazy initialization for the LLM and embedder, ensuring they are only created when first accessed. This change enhances the robustness of the Memory class by preventing initialization failures when constructed without an API key. Additionally, updated error handling to provide clearer guidance for users on resolving initialization issues.
* refactor: consolidate memory saving methods for improved efficiency
Refactored memory handling across multiple classes to replace individual memory saving calls with a batch method, `remember_many`, enhancing performance and reducing redundancy. Updated related tools and schemas to support single and multiple item memory operations, ensuring a more streamlined interface for memory interactions. Additionally, improved documentation and test coverage for the new functionality.
* feat: enhance MemoryTUI with improved layout and entry handling
Updated the MemoryTUI class to incorporate a new vertical layout, adding an OptionList for displaying entries and enhancing the detail view for selected records. Introduced methods for populating entry and recall lists, improving user interaction and data presentation. Additionally, refined CSS styles for better visual organization and focus handling.
* fix test
* feat: inject memory tools into LiteAgent for enhanced functionality
Added logic to the LiteAgent class to inject memory tools if memory is configured, ensuring that memory tools are only added if they are not already present. This change improves the agent's capability to utilize memory effectively during execution.
* feat: add synchronous execution method to ConsolidationFlow for improved integration
Introduced a new `run_sync()` method in the ConsolidationFlow class to facilitate procedural execution of the consolidation pipeline without relying on asynchronous event loops. Updated the EncodingFlow class to utilize this method for conflict resolution, ensuring compatibility within its async context. This change enhances the flow's ability to manage memory records effectively during nested executions.
* refactor: update ConsolidationFlow and EncodingFlow for improved async handling
Removed the synchronous `run_sync()` method from ConsolidationFlow and refactored the consolidate method in EncodingFlow to be asynchronous. This change allows for direct awaiting of the ConsolidationFlow's kickoff method, enhancing compatibility within the async event loop and preventing nested asyncio.run() issues. Additionally, updated the execution plan to listen for multiple paths, streamlining the consolidation process.
* fix: update flow documentation and remove unused ConsolidationFlow
Corrected the comment in Flow class regarding internal flows, replacing "ConsolidationFlow" with "EncodingFlow". Removed the ConsolidationFlow class as it is no longer needed, streamlining the memory handling process. Updated related imports and ensured that the memory module reflects these changes, enhancing clarity and maintainability.
* feat: enhance memory handling with background saving and query analysis optimization
Implemented a background saving mechanism in the Memory class to allow non-blocking memory operations, improving performance during high-load scenarios. Added a query analysis threshold to skip LLM calls for short queries, optimizing recall efficiency. Updated related methods and documentation to reflect these changes, ensuring a more responsive and efficient memory management system.
* fix test
* fix test
* fix: handle synchronous fallback for save operations in Memory class
Updated the Memory class to implement a synchronous fallback mechanism for save operations when the background thread pool is shut down. This change ensures that late save requests still succeed, improving reliability in memory management during shutdown scenarios.
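A sketch of the fallback, assuming a pool-based save helper: `submit()` raises RuntimeError once the pool is shut down, so the save runs inline instead:

```python
from concurrent.futures import Future, ThreadPoolExecutor


def save_in_background(pool: ThreadPoolExecutor, do_save, *args) -> Future | None:
    try:
        return pool.submit(do_save, *args)  # normal non-blocking path
    except RuntimeError:
        # the pool was shut down; run synchronously so late saves still succeed
        do_save(*args)
        return None
```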
* feat: implement HITL learning features in human feedback decorator
Added support for learning from human feedback in the human feedback decorator. Introduced parameters to enable lesson distillation and pre-review of outputs based on past feedback. Updated related tests to ensure proper functionality of the learning mechanism, including memory interactions and default LLM usage.
---------
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* Update documentation to use underscore instead of hyphen in the `--skip_provider` flag across all CLI command examples, for consistency with the actual CLI implementation.
* docs: add missing integration actions from OAuth config
Sync enterprise integration docs with crewai-oauth apps.js config.
Adds ~96 missing actions across 9 integrations:
- Google Contacts: 4 contact group actions
- Google Slides: 14 slide manipulation/content actions
- Microsoft SharePoint: 27 file, Excel, and Word actions
- Microsoft Excel: 2 actions (get_used_range_metadata, get_table_data)
- Microsoft Word: 2 actions (copy_document, move_document)
- Google Docs: 27 text formatting, table, and header/footer actions
- Microsoft Outlook: 7 message and calendar event actions
- Microsoft OneDrive: 5 path-based and discovery actions
- Microsoft Teams: 8 meeting, channel, and reply actions
* docs: add missing integration actions from OAuth config
Sync pt-BR enterprise integration docs with crewai-oauth apps.js config.
Adds ~96 missing actions across 9 integrations, translated to Portuguese:
- Google Contacts: 2 contact group actions
- Google Slides: 14 slide manipulation/content actions
- Microsoft SharePoint: 27 file, Excel, and Word actions
- Microsoft Excel: 2 actions (get_used_range_metadata, get_table_data)
- Microsoft Word: 2 actions (copy_document, move_document)
- Google Docs: 27 text formatting, table, and header/footer actions
- Microsoft Outlook: 7 message and calendar event actions
- Microsoft OneDrive: 5 path-based and discovery actions
- Microsoft Teams: 8 meeting, channel, and reply actions
* docs: add missing integration actions from OAuth config
Sync Korean enterprise integration docs with crewai-oauth apps.js config.
Adds ~96 missing actions across 9 integrations, translated to Korean:
- Google Contacts: 2 contact group actions
- Google Slides: 14 slide manipulation/content actions
- Microsoft SharePoint: 27 file, Excel, and Word actions
- Microsoft Excel: 2 actions (get_used_range_metadata, get_table_data)
- Microsoft Word: 2 actions (copy_document, move_document)
- Google Docs: 27 text formatting, table, and header/footer actions
- Microsoft Outlook: 7 message and calendar event actions
- Microsoft OneDrive: 5 path-based and discovery actions
- Microsoft Teams: 8 meeting, channel, and reply actions
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* fix(security): bump regex from 2024.9.11 to 2026.1.15
Address security vulnerability flagged in regex==2024.9.11
* bump mcp from 1.23.1 to 1.26.0
Address security vulnerability flagged in mcp==1.16.0 (resolved to 1.23.3)
* Asynchronous human-in-the-loop handling and related fixes
- Extend human_input provider with async support: AsyncExecutorContext, handle_feedback_async, async prompt helpers (_prompt_input_async, _async_readline), and async training/regular feedback loops in SyncHumanInputProvider.
- Add async handler methods in CrewAgentExecutor and AgentExecutor (_ahandle_human_feedback, _ainvoke_loop) to integrate async provider flows.
- Change PlusAPI.get_agent to an async httpx call and adapt caller in agent_utils to run it via asyncio.run.
- Simplify listener execution in flow.Flow to correctly pass HumanFeedbackResult to listeners and unify execution path for router outcomes.
- Remove deprecated types/hitl.py definitions.
- Add tests covering chained router feedback, rejected paths, and mixed router/non-router listeners to prevent regressions.
* fix: add current task id context and flow updates
introduce a context var for the current task id in `crewai.context` to track task scope. update `Flow._execute_single_listener` to return `(result, event_id)` and adjust callers to unpack it and append `FlowMethodName(str(result))` to `router_results`. set/reset the current task id at the start/end of task execution (async + sync) with minor import and call-site tweaks.
* fix: await event futures and flush event bus
call `crewai_event_bus.flush()` after crew kickoff. in `Flow`, await event handler futures instead of just collecting them: await pending `_event_futures` before finishing, await emitted futures immediately with try/except to log failures, then clear `_event_futures`. ensures handlers complete and errors surface.
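A sketch of the await-and-log pattern, with assumed names:

```python
import logging

logger = logging.getLogger(__name__)


async def drain_event_futures(event_futures: list) -> None:
    for future in event_futures:
        try:
            await future  # handlers complete before the flow finishes
        except Exception:
            logger.exception("event handler failed")  # errors surface in logs
    event_futures.clear()
```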
* fix: continue iteration on tool completion events
expand the loop bridge listener to also trigger on tool completion events (`tool_completed` and `native_tool_completed`) so agent iteration resumes after tools finish. add a `requests.post` mock and response fixture in the liteagent test to simulate platform tool execution. refresh and sanitize vcr cassettes (updated model responses, timestamps, and header placeholders) to reflect tool-call flows and new recordings.
* fix: thread-safe state proxies & native routing
add thread-safe state proxies and refactor native tool routing.
* introduce `LockedListProxy` and `LockedDictProxy` in `flow.py` and update `StateProxy` to return them for list/dict attrs so mutations are protected by the flow lock.
* update `AgentExecutor` to use `StateProxy` on flow init, guard the messages setter with the state lock, and return a `StateProxy` from the temp state accessor.
* convert `call_llm_native_tools` into a listener (no direct routing return) and add `route_native_tool_result` to route based on state (pending tool calls, final answer, or context error).
* minor cleanup in `continue_iteration` to drop orphan listeners on init.
* update test cassettes for new native tool call responses, timestamps, and ids.
improves concurrency safety for shared state and makes native tool routing explicit.
* chore: regen cassettes
* chore: regen cassettes, remove duplicate listener call path
* feat: add started_event_id and set in eventbus
* chore: update additional test assumption
* fix: restore event bus handlers on context exit
fix rollback in crewai events bus so that exiting the context restores
the previous _sync_handlers, _async_handlers, _handler_dependencies, and _execution_plan_cache by assigning shallow copies of the saved dicts. previously these
were set to empty dicts on exit, which caused registered handlers and cached execution plans to be lost.
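The restore step might look like this sketch (attribute names come from the commit; the context-manager framing is an assumption):

```python
from contextlib import contextmanager


@contextmanager
def scoped_handlers(bus):
    saved_sync = dict(bus._sync_handlers)
    saved_async = dict(bus._async_handlers)
    saved_deps = dict(bus._handler_dependencies)
    saved_plans = dict(bus._execution_plan_cache)
    try:
        yield bus
    finally:
        # restore shallow copies of the previous registrations rather than
        # assigning empty dicts, which lost handlers and cached plans
        bus._sync_handlers = dict(saved_sync)
        bus._async_handlers = dict(saved_async)
        bus._handler_dependencies = dict(saved_deps)
        bus._execution_plan_cache = dict(saved_plans)
```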
Introduce ContextVar-backed hooks and small API/behavior changes to improve extensibility and testability.
Changes include:
- agents: mark configure_structured_output as abstract and change its parameter to task to reflect use of task metadata.
- tracing: convert _first_time_trace_hook to a ContextVar and call .get() to safely retrieve the hook.
- console formatter: add _disable_version_check ContextVar and skip version checks when set (avoids noisy checks in certain contexts).
- flow: use current_triggering_event_id variable when scheduling listener tasks to keep naming consistent.
- hallucination guardrail: make context optional, add _validate_output_hook to allow custom validation hooks, update examples and return contract to allow hooks to override behavior.
- agent utilities: add _create_plus_client_hook for injecting a Plus client (used in tests/alternate flows), ensure structured tools have current_usage_count initialized and propagate to original tool, and fall back to creating PlusAPI client when no hook is provided.
Refactor agent executor to delegate human interactions to a provider: add messages and ask_for_human_input properties, implement _invoke_loop and _format_feedback_message, and replace the internal iterative/training feedback logic with a call to get_provider().handle_feedback.
Make LLMGuardrail kickoff coroutine-aware by detecting coroutines and running them via asyncio.run so both sync and async agents are supported.
Make telemetry more robust by safely handling missing task.output (use empty string) and returning early if span is None before setting attributes.
Improve serialization to detect circular references via an _ancestors set, propagate it through recursive calls, and pass exclude/max_depth/_current_depth consistently to prevent infinite recursion and produce stable serializable output.
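The ancestor-set approach distinguishes true cycles (a node that reappears on its own recursion path) from mere shared references; a self-contained sketch:

```python
from typing import Any


def to_serializable(obj: Any, max_depth: int = 10, _current_depth: int = 0,
                    _ancestors: frozenset = frozenset()) -> Any:
    if _current_depth >= max_depth:
        return "<max depth reached>"
    if id(obj) in _ancestors:
        return "<circular reference>"
    if obj is None or isinstance(obj, (str, int, float, bool)):
        return obj
    ancestors = _ancestors | {id(obj)}  # propagated through recursive calls
    if isinstance(obj, dict):
        return {str(k): to_serializable(v, max_depth, _current_depth + 1, ancestors)
                for k, v in obj.items()}
    if isinstance(obj, (list, tuple, set)):
        return [to_serializable(v, max_depth, _current_depth + 1, ancestors)
                for v in obj]
    return repr(obj)  # stable fallback for arbitrary objects
```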
Allow hook registration to accept both typed hook types and plain callables by importing and using After*/Before*CallHookCallable types; add explicit LLMCallHookContext and ToolCallHookContext typing in crew_base. Introduce a post-initialize crew hook list and invoke hooks after Crew instance initialization. Refactor filtered hook factory functions to include precise typing and clearer local names (before_llm_hook/after_llm_hook/before_tool_hook/after_tool_hook) and register those with the instance. Update CrewInstance protocol to include _registered_hook_functions and _hooks_being_registered fields.
When a tool raises an error, both ToolUsageErrorEvent and
ToolUsageFinishedEvent were being emitted. Since both events pop the
event scope stack, this caused the agent scope to be incorrectly popped
along with the tool scope.
Enable dynamic extension exports and small behavior fixes across events and flow modules:
- events/__init__.py: Added _extension_exports and extended __getattr__ to lazily resolve registered extension values or import paths.
- events/event_bus.py: Implemented off() to unregister sync/async handlers, clean handler dependencies, and invalidate execution plan cache.
- events/listeners/tracing/utils.py: Added Callable import and _first_time_trace_hook to allow overriding first-time trace auto-collection behavior.
- events/types/tool_usage_events.py: Changed ToolUsageEvent.run_attempts default from None to 0 to avoid nullable handling.
- events/utils/console_formatter.py: Respect CREWAI_DISABLE_VERSION_CHECK env var to skip version checks in CI-like flows.
- flow/async_feedback/__init__.py: Added typing.Any import, _extension_exports and __getattr__ to support extensions via attribute lookup.
These changes add extension points and safer defaults, and provide a way to unregister event handlers.
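The lazy-export mechanism is presumably the PEP 562 module-level `__getattr__`; a sketch under that assumption:

```python
from importlib import import_module
from typing import Any

# name -> concrete value, or a "package.module:attr" import path
_extension_exports: dict[str, Any] = {}


def __getattr__(name: str) -> Any:
    try:
        target = _extension_exports[name]
    except KeyError:
        raise AttributeError(f"module {__name__!r} has no attribute {name!r}") from None
    if isinstance(target, str):  # import path: resolve lazily on first access
        module_path, _, attr = target.partition(":")
        target = getattr(import_module(module_path), attr)
    return target
```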
* refactor: extract hitl to provider pattern
- add HumanInputProvider protocol with setup_messages and handle_feedback
- move sync HITL logic from executor to SyncHumanInputProvider
- add _passthrough_exceptions extension point in agent/core.py
- create crewai.core.providers module for extensible components
- remove _ask_human_input from base_agent_executor_mixin
* feat: enhance AnthropicCompletion to support available functions in tool execution
- Updated the `_prepare_completion_params` method to accept `available_functions` for better tool handling.
- Modified tool execution logic to directly return results from tools when `available_functions` is provided, aligning behavior with OpenAI's model.
- Added new test cases to validate the execution of tools with available functions, ensuring correct argument passing and result formatting.
This change improves the flexibility and usability of the Anthropic LLM integration, allowing for more complex interactions with tools.
* refactor: remove redundant event emission in AnthropicCompletion
* fix test
* DRY up
Dependabot's uv updater defaults to Python 3.14.2, which is incompatible
with the project's requires-python constraint (>=3.10, <3.14). Adding
.python-version pins the Python version to 3.13 for dependency updates.
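The pin itself is just a one-line file:

```
3.13
```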
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* limit to 0.5.9 due to breaking changes + add env var requirements
* fix tool spec extraction that was ignoring params with defaults
* original tool spec
* update spec
When monitoring LLM events, consumers need to know which events belong
to the same API call. Before this change, there was no way to correlate
LLMCallStartedEvent, LLMStreamChunkEvent, and LLMCallCompletedEvent
belonging to the same request.
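The natural fix is to mint one id per API call and stamp it on every event in that call's lifecycle; a schematic sketch with assumed event shapes:

```python
import uuid


def llm_call(emit, prompt: str) -> None:
    call_id = uuid.uuid4().hex  # shared by every event for this request
    emit({"type": "llm_call_started", "call_id": call_id})
    for chunk in ("partial ", "response"):  # stand-in for streamed chunks
        emit({"type": "llm_stream_chunk", "call_id": call_id, "chunk": chunk})
    emit({"type": "llm_call_completed", "call_id": call_id})
```

Consumers can then group events by `call_id` regardless of interleaving.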
* feat: add server-side auth schemes and protocol extensions
- add server auth scheme base class and implementations (api key, bearer token, basic/digest auth, mtls)
- add server-side extension system for a2a protocol extensions
- add extensions middleware for x-a2a-extensions header management
- add extension validation and registry utilities
- enhance auth utilities with server-side support
- add async intercept method to match client call interceptor protocol
- fix type_checking import to resolve mypy errors with a2aconfig
* feat: add transport negotiation and content type handling
- add transport negotiation logic with fallback support
- add content type parser and encoder utilities
- add transport configuration models (client and server)
- add transport types and enums
- enhance config with transport settings
- add negotiation events for transport and content type
* feat: add a2a delegation support to LiteAgent
* feat: add file input support to a2a delegation and tasks
Introduces handling of file inputs in A2A delegation flows by converting file dictionaries to protocol-compatible parts and propagating them through delegation and task execution functions. Updates include utility functions for file conversion, changes to message construction, and passing input_files through relevant APIs.
* feat: liteagent a2a delegation support to kickoff methods
* fix: improve output handling and response model integration in agents
- Refactored output handling in the Agent class to ensure proper conversion and formatting of outputs, including support for BaseModel instances.
- Enhanced the AgentExecutor class to correctly utilize response models during execution, improving the handling of structured outputs.
- Updated the Gemini and Anthropic completion providers to ensure compatibility with new response model handling, including the addition of strict mode for function definitions.
- Improved the OpenAI completion provider to enforce strict adherence to function schemas.
- Adjusted translations to clarify instructions regarding output formatting and schema adherence.
* drop a leftover print that didn't get deleted properly
* fixes gemini
* azure working
* bedrock works
* added tests
* adjust test
* fix tests and regen
* fix tests and regen
* refactor: ensure stop words are applied correctly in Azure, Gemini, and OpenAI completions; add tests to validate behavior with structured outputs
* linting
* no need for post-tool reflection on native tools
* refactor: update prompt generation to prevent thought leakage
- Modified the prompt structure to ensure agents without tools use a simplified format, avoiding ReAct instructions.
- Introduced a new 'task_no_tools' slice for agents lacking tools, ensuring clean output without Thought: prefixes.
- Enhanced test coverage to verify that prompts do not encourage thought leakage, ensuring outputs remain focused and direct.
- Added integration tests to validate that real LLM calls produce clean outputs without internal reasoning artifacts.
* don't forget the cassettes
- add gemini 2.0 schema support using response_json_schema with propertyordering while retaining backward compatibility for earlier models
- refactor llm completions to return validated pydantic models when a response_model is provided, updating hooks, types, and tests for consistent structured outputs
- extend agentfinish and executors to support basemodel outputs, improve anthropic structured parsing, and clean up schema utilities, tests, and original_json handling
* feat: implement before and after tool call hooks in CrewAgentExecutor and AgentExecutor
- Added support for before and after tool call hooks in both CrewAgentExecutor and AgentExecutor classes.
- Introduced ToolCallHookContext to manage context for hooks, allowing for enhanced control over tool execution.
- Implemented logic to block tool execution based on before hooks and to modify results based on after hooks.
- Added integration tests to validate the functionality of the new hooks, ensuring they work as expected in various scenarios.
- Enhanced the overall flexibility and extensibility of tool interactions within the CrewAI framework.
* Potential fix for pull request finding 'Unused local variable'
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
* Potential fix for pull request finding 'Unused local variable'
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
* test: add integration test for before hook blocking tool execution in Crew
- Implemented a new test to verify that the before hook can successfully block the execution of a tool within a crew.
- The test checks that the tool is not executed when the before hook returns False, ensuring proper control over tool interactions.
- Enhanced the validation of hook calls to confirm that both before and after hooks are triggered appropriately, even when execution is blocked.
- This addition strengthens the testing coverage for tool call hooks in the CrewAI framework.
* drop unused
* refactor(tests): remove OPENAI_API_KEY check from tool hook tests
- Eliminated the check for the OPENAI_API_KEY environment variable in the test cases for tool hooks.
- This change simplifies the test setup and allows for running tests without requiring the API key to be set, improving test accessibility and flexibility.
---------
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
- fix(gemini): prevent tool calls from using stale text content; correct key refs
- fix(agent-executor): resolve type errors
- refactor(schema): extract Pydantic schema utilities from platform tools
- fix(schema): properly serialize schemas and ensure Responses API uses a separate structure
- fix: preserve list identity to avoid mutation/aliasing issues
- chore(tests): update assumptions to match new behavior
- Bumped the version number to 1.9.0 in pyproject.toml files and __init__.py files across the CrewAI library and its tools.
- Updated dependencies to use the new version of crewai-tools (1.9.0) for improved functionality and compatibility.
- Ensured consistency in versioning across the codebase to reflect the latest updates.
* fix: enhance file store with fallback memory cache when aiocache is not installed
- Added a simple in-memory cache implementation to serve as a fallback when the aiocache library is unavailable.
- Improved error handling for the aiocache import, ensuring that the application can still function without it.
- This change enhances the robustness of the file store utility by providing a reliable caching mechanism in various environments.
* drop fallback
* Potential fix for pull request finding 'Unused global variable'
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
---------
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
* feat: introduce GoogleGenAIVertexEmbeddingFunction for dual SDK support
- Added a new embedding function to support both the legacy vertexai.language_models SDK and the new google-genai SDK for Google Vertex AI.
- Updated factory methods to route to the new embedding function.
- Enhanced VertexAIProvider and related configurations to accommodate the new model options.
- Added integration tests for Google Vertex embeddings with Crew memory, ensuring compatibility and functionality with both authentication methods.
This update improves the flexibility and compatibility of Google Vertex AI embeddings within the CrewAI framework.
* fix test count
* rm comment
* regen cassettes
* regen
* drop variable from .envtest
* direct to relevant tests only
* feat: add response_format parameter to Azure and Gemini providers
* feat: add structured outputs support to Bedrock and Anthropic providers
* chore: bump anthropic dep
* fix: use beta structured output for new models
- Updated the GeminiCompletion class to handle non-dict values returned from tools, ensuring that floats are wrapped in a dictionary format for consistent response handling.
- Introduced a new YAML cassette to test the Gemini LLM's ability to process tools that return float values, verifying that the agent can correctly utilize the sum_numbers tool and return the expected results.
- Added a comprehensive test case to validate the integration of the sum_numbers tool within the Gemini LLM, ensuring accurate calculations and proper response formatting.
These changes improve the robustness of tool interactions within the Gemini LLM and enhance testing coverage for float return values.
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
- add input_files parameter to Crew.kickoff(), Flow.kickoff(), Task, and Agent.kickoff()
- add provider-specific file uploaders for OpenAI, Anthropic, Gemini, and Bedrock
- add file type detection, constraint validation, and automatic format conversion
- add URL file source support for multimodal content
- add streaming uploads for large files
- add prompt caching support for Anthropic
- add OpenAI Responses API support
* wip: restructuring agent executor and LiteAgent
* fix: handle None task in AgentExecutor to prevent errors
Added a check to ensure that if the task is None, the method returns early without attempting to access task properties. This change improves the robustness of the AgentExecutor by preventing potential errors when the task is not set.
* refactor: streamline AgentExecutor initialization by removing redundant parameters
Updated the Agent class to simplify the initialization of the AgentExecutor by removing unnecessary task and crew parameters in standalone mode. This change enhances code clarity and maintains backward compatibility by ensuring that the executor is correctly configured without redundant assignments.
* wip: clean
* ensure executors work inside a flow due to flow in flow async structure
* refactor: enhance agent kickoff preparation by separating common logic
Updated the Agent class to introduce a new private method that consolidates the common setup logic for both synchronous and asynchronous kickoff executions. This change improves code clarity and maintainability by reducing redundancy in the kickoff process, while ensuring that the agent can still execute effectively within both standalone and flow contexts.
* linting and tests
* fix test
* refactor: improve test for Agent kickoff parameters
Updated the test for the Agent class to ensure that the kickoff method correctly preserves parameters. The test now verifies the configuration of the agent after kickoff, enhancing clarity and maintainability. Additionally, the test for asynchronous kickoff within a flow context has been updated to reflect the Agent class instead of LiteAgent.
* refactor: update test task guardrail process output for improved validation
Refactored the test for task guardrail process output to enhance the validation of the output against the OpenAPI schema. The changes include a more structured request body and updated response handling to ensure compliance with the guardrail requirements. This update aims to improve the clarity and reliability of the test cases, ensuring that task outputs are correctly validated and feedback is appropriately provided.
* test fix cassette
* test fix cassette
* working
* working cassette
* refactor: streamline agent execution and enhance flow compatibility
Refactored the Agent class to simplify the execution method by removing the event loop check and clarifying the behavior when called from synchronous and asynchronous contexts. The changes ensure that the method operates seamlessly within flow methods, improving clarity in the documentation. Additionally, updated the AgentExecutor to set the response model to None, enhancing flexibility. New test cassettes were added to validate the functionality of agents within flow contexts, ensuring robust testing for both synchronous and asynchronous operations.
* fixed cassette
* Enhance Flow Execution Logic
- Introduced conditional execution for start methods in the Flow class.
- Unconditional start methods are prioritized during kickoff, while conditional starts are executed only if no unconditional starts are present.
- Improved handling of cyclic flows by allowing re-execution of conditional start methods triggered by routers.
- Added checks to continue execution chains for completed conditional starts.
These changes improve the flexibility and control of flow execution, ensuring that the correct methods are triggered based on the defined conditions.
* Enhance Agent and Flow Execution Logic
- Updated the Agent class to automatically detect the event loop and return a coroutine when called within a Flow, simplifying async handling for users.
- Modified Flow class to execute listeners sequentially, preventing race conditions on shared state during listener execution.
- Improved handling of coroutine results from synchronous methods, ensuring proper execution flow and state management.
These changes enhance the overall execution logic and user experience when working with agents and flows in CrewAI.
* Enhance Flow Listener Logic and Agent Imports
- Updated the Flow class to track fired OR listeners, ensuring that multi-source OR listeners only trigger once during execution. This prevents redundant executions and improves flow efficiency.
- Cleared fired OR listeners during cyclic flow resets to allow re-execution in new cycles.
- Modified the Agent class imports to include Coroutine from collections.abc, enhancing type handling for asynchronous operations.
These changes improve the control and performance of flow execution in CrewAI, ensuring more predictable behavior in complex scenarios.
* adjusted test due to new cassette
* ensure native tool calling works with liteagent
* ensure response model is respected
* Enhance Tool Name Handling for LLM Compatibility
- Added a new function to replace invalid characters in function names with underscores, ensuring compatibility with LLM providers.
- Updated the function to sanitize tool names before validation.
- Modified the function to use sanitized names for tool registration.
These changes improve the robustness of tool name handling, preventing potential issues with invalid characters in function names.
* ensure we don't finalize the batch when just a LiteAgent finishes
* wip: max tools per turn; ensure we drop timing prints
* fix sync main issues
* fix llm_call_completed event serialization issue
* drop max_tools_iterations
* fix model dump with state
* Add extract_tool_call_info function to handle various tool call formats
- Introduced a new utility function to extract tool call ID, name, and arguments from different provider formats (OpenAI, Gemini, Anthropic, and dictionary).
- This enhancement improves the flexibility and compatibility of tool calls across multiple LLM providers, ensuring consistent handling of tool call information.
- The function returns a tuple containing the call ID, function name, and function arguments, or None if the format is unrecognized.
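A sketch of such a format-bridging extractor; the provider payload shapes below are assumptions, while the return contract (tuple or None) follows the commit:

```python
from typing import Any


def extract_tool_call_info(tool_call: Any) -> tuple[str, str, Any] | None:
    if isinstance(tool_call, dict):  # plain dict payloads
        name = tool_call.get("name") or tool_call.get("function", {}).get("name")
        if not name:
            return None
        args = tool_call.get("arguments") or tool_call.get("input") or {}
        return tool_call.get("id", ""), name, args
    function = getattr(tool_call, "function", None)
    if function is not None:         # OpenAI-style objects
        return getattr(tool_call, "id", ""), function.name, function.arguments
    if hasattr(tool_call, "name"):   # Anthropic/Gemini-style blocks
        return getattr(tool_call, "id", ""), tool_call.name, getattr(tool_call, "input", {})
    return None                      # unrecognized format
```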
* Refactor AgentExecutor to support batch execution of native tool calls
- Updated the method to process all tool calls in a single batch, enhancing efficiency and reducing the number of interactions with the LLM.
- Introduced a new utility function to streamline the extraction of tool call details, improving compatibility with various tool formats.
- Removed a now-redundant parameter, simplifying initialization.
- Enhanced logging and message handling to provide clearer insights during tool execution.
- This refactor improves the overall performance and usability of the agent execution flow.
* Update English translations for tool usage and reasoning instructions
- Revised the `post_tool_reasoning` message to clarify the analysis process after tool usage, emphasizing the need to provide only the final answer if requirements are met.
- Updated the `format` message to simplify the instructions for deciding between using a tool or providing a final answer, enhancing clarity for users.
- These changes improve the overall user experience by providing clearer guidance on task execution and response formatting.
* fix
* fixing azure tests
* organize imports
* dropped unused
* Remove debug print statements from AgentExecutor to clean up the code and improve readability. This change enhances the overall performance of the agent execution flow by eliminating unnecessary console output during LLM calls and iterations.
* linted
* updated cassette
* regen cassette
* revert crew agent executor
* adjust cassettes and dropped tests due to native tool implementation
* adjust
* ensure we properly fail tools and emit their events
* Enhance tool handling and delegation tracking in agent executors
- Implemented immediate return for tools with result_as_answer=True in crew_agent_executor.py.
- Added delegation tracking functionality in agent_utils.py to increment delegations when specific tools are used.
- Updated tool usage logic to handle caching more effectively in tool_usage.py.
- Enhanced test cases to validate new delegation features and tool caching behavior.
This update improves the efficiency of tool execution and enhances the delegation capabilities of agents.
* fix cassettes
* fix
* regen cassettes
* regen gemini
* ensure we support bedrock
* supporting bedrock
* regen azure cassettes
* Implement max usage count tracking for tools in agent executors
- Added functionality to check if a tool has reached its maximum usage count before execution in both crew_agent_executor.py and agent_executor.py.
- Enhanced error handling to return a message when a tool's usage limit is reached.
- Updated tool usage logic in tool_usage.py to increment usage counts and print current usage status.
- Introduced tests to validate max usage count behavior for native tool calling, ensuring proper enforcement and tracking.
This update improves tool management by preventing overuse and providing clear feedback when limits are reached.
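The pre-execution check might look like this sketch; attribute names are assumptions, though `current_usage_count` appears elsewhere in this log:

```python
def check_tool_usage_limit(tool) -> str | None:
    max_count = getattr(tool, "max_usage_count", None)
    if max_count is not None and tool.current_usage_count >= max_count:
        # caller returns this message to the LLM instead of executing the tool
        return (f"Tool '{tool.name}' has reached its maximum usage count "
                f"({max_count}); choose a different approach.")
    return None  # under the limit: execute and increment the count afterwards
```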
* fix other test
* fix test
* drop logs
* better tests
* regen
* regen all azure cassettes
* regen again; placeholder for cassette matching
* fix: unify tool name sanitization across codebase
* fix: include tool role messages in save_last_messages
* fix: update sanitize_tool_name test expectations
Align test expectations with unified sanitize_tool_name behavior
that lowercases and splits camelCase for LLM provider compatibility.
* fix: apply sanitize_tool_name consistently across codebase
Unify tool name sanitization to ensure consistency between tool names
shown to LLMs and tool name matching/lookup logic.
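Putting the described behaviors together (split camelCase, lowercase, replace invalid characters), the unified sanitizer plausibly resembles:

```python
import re


def sanitize_tool_name(name: str) -> str:
    name = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name)  # split camelCase
    name = re.sub(r"[^A-Za-z0-9_-]", "_", name)          # providers reject other chars
    return name.lower()


# e.g. sanitize_tool_name("SearchWeb Tool!") -> "search_web_tool_"
```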
* regen
* fix: sanitize tool names in native tool call processing
- Update extract_tool_call_info to return sanitized tool names
- Fix delegation tool name matching to use sanitized names
- Add sanitization in crew_agent_executor tool call extraction
- Add sanitization in experimental agent_executor
- Add sanitization in LLM.call function lookup
- Update streaming utility to use sanitized names
- Update base_agent_executor_mixin delegation check
* Extract text content from parts directly to avoid warning about non-text parts
* Add test case for Gemini token usage tracking
- Introduced a new YAML cassette for tracking token usage in Gemini API responses.
- Updated the test for Gemini to validate token usage metrics and response content.
- Ensured proper integration with the Gemini model and API key handling.
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
adds emission sequencing, parent-child event hierarchy with scope management, and integrates both into the event bus. introduces flush() for deterministic handling, resets emission counters for test isolation, and adds chain tracking via previous_event_id/triggered_by_event_id plus context variables populated during emit and listener execution. includes tracing listener typing/sorting improvements, safer tool event pairing with try/finally, additional stack checks and cache-hit formatting, context isolation fixes, cassette regen/decoding, and test updates to handle vcr race conditions and flaky behavior.
* wip restrcuturing agent executor and liteagent
* fix: handle None task in AgentExecutor to prevent errors
Added a check to ensure that if the task is None, the method returns early without attempting to access task properties. This change improves the robustness of the AgentExecutor by preventing potential errors when the task is not set.
* refactor: streamline AgentExecutor initialization by removing redundant parameters
Updated the Agent class to simplify the initialization of the AgentExecutor by removing unnecessary task and crew parameters in standalone mode. This change enhances code clarity and maintains backward compatibility by ensuring that the executor is correctly configured without redundant assignments.
* ensure executors work inside a flow due to flow in flow async structure
* refactor: enhance agent kickoff preparation by separating common logic
Updated the Agent class to introduce a new private method that consolidates the common setup logic for both synchronous and asynchronous kickoff executions. This change improves code clarity and maintainability by reducing redundancy in the kickoff process, while ensuring that the agent can still execute effectively within both standalone and flow contexts.
* linting and tests
* fix test
* refactor: improve test for Agent kickoff parameters
Updated the test for the Agent class to ensure that the kickoff method correctly preserves parameters. The test now verifies the configuration of the agent after kickoff, enhancing clarity and maintainability. Additionally, the test for asynchronous kickoff within a flow context has been updated to reflect the Agent class instead of LiteAgent.
* refactor: update test task guardrail process output for improved validation
Refactored the test for task guardrail process output to enhance the validation of the output against the OpenAPI schema. The changes include a more structured request body and updated response handling to ensure compliance with the guardrail requirements. This update aims to improve the clarity and reliability of the test cases, ensuring that task outputs are correctly validated and feedback is appropriately provided.
* test fix cassette
* working
* working cassette
* refactor: streamline agent execution and enhance flow compatibility
Refactored the Agent class to simplify the execution method by removing the event loop check and clarifying the behavior when called from synchronous and asynchronous contexts. The changes ensure that the method operates seamlessly within flow methods, improving clarity in the documentation. Additionally, updated the AgentExecutor to set the response model to None, enhancing flexibility. New test cassettes were added to validate the functionality of agents within flow contexts, ensuring robust testing for both synchronous and asynchronous operations.
* fixed cassette
* Enhance Flow Execution Logic
- Introduced conditional execution for start methods in the Flow class.
- Unconditional start methods are prioritized during kickoff, while conditional starts are executed only if no unconditional starts are present.
- Improved handling of cyclic flows by allowing re-execution of conditional start methods triggered by routers.
- Added checks to continue execution chains for completed conditional starts.
These changes improve the flexibility and control of flow execution, ensuring that the correct methods are triggered based on the defined conditions.
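A hedged illustration of the start-selection rule above; it assumes conditional `@start` accepts a trigger condition as described, and all class and method names are made up.

```python
from crewai.flow.flow import Flow, router, start

class ReviewFlow(Flow):
    @start()  # unconditional: always runs on kickoff
    def begin(self):
        return "begun"

    @start("retry")  # conditional: runs only when "retry" fires,
    def begin_retry(self):  # e.g. when a router re-triggers a cycle
        return "retrying"

    @router(begin)
    def decide(self):
        return "retry" if self.state.get("needs_retry") else "done"
```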
* Enhance Agent and Flow Execution Logic
- Updated the Agent class to automatically detect the event loop and return a coroutine when called within a Flow, simplifying async handling for users.
- Modified Flow class to execute listeners sequentially, preventing race conditions on shared state during listener execution.
- Improved handling of coroutine results from synchronous methods, ensuring proper execution flow and state management.
These changes enhance the overall execution logic and user experience when working with agents and flows in CrewAI.
* Enhance Flow Listener Logic and Agent Imports
- Updated the Flow class to track fired OR listeners, ensuring that multi-source OR listeners only trigger once during execution. This prevents redundant executions and improves flow efficiency.
- Cleared fired OR listeners during cyclic flow resets to allow re-execution in new cycles.
- Modified the Agent class imports to include Coroutine from collections.abc, enhancing type handling for asynchronous operations.
These changes improve the control and performance of flow execution in CrewAI, ensuring more predictable behavior in complex scenarios.
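A sketch of the once-per-cycle behavior for multi-source OR listeners, using the documented `or_` combinator; the flow itself is hypothetical.

```python
from crewai.flow.flow import Flow, listen, or_, start

class FanInFlow(Flow):
    @start()
    def fetch_a(self):
        return "a"

    @start()
    def fetch_b(self):
        return "b"

    @listen(or_(fetch_a, fetch_b))
    def merge(self, result):
        # Fires for the first of fetch_a/fetch_b to finish, then is marked
        # as fired so the second completion does not trigger it again.
        return result
```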
* adjusted test due to new cassette
* ensure we don't finalize the batch when only a liteagent finishes
* feat: cancellable parallelized flow methods
* feat: allow methods to be cancelled & run parallelized
* feat: ensure state is thread safe through proxy
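A minimal sketch of the proxy idea, assuming state access is guarded by a single re-entrant lock; the real proxy also mimics BaseModel methods, as the following commits note.

```python
import threading
from typing import Any

class StateProxy:
    """Wraps a state object so attribute reads/writes are lock-protected."""

    def __init__(self, state: Any) -> None:
        object.__setattr__(self, "_state", state)
        object.__setattr__(self, "_lock", threading.RLock())

    def __getattr__(self, name: str) -> Any:
        with object.__getattribute__(self, "_lock"):
            return getattr(object.__getattribute__(self, "_state"), name)

    def __setattr__(self, name: str, value: Any) -> None:
        with object.__getattribute__(self, "_lock"):
            setattr(object.__getattribute__(self, "_state"), name, value)
```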
* fix: check for proxy state
* fix: mimic BaseModel method
* chore: update final attr checks; test
* better description
* fix test
* chore: update test assumptions
* extra
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* doc changes for better deployment guidelines and checklist
* chore: remove .claude folder from version control
The .claude folder contains local Claude Code skills and configuration
that should not be tracked in the repository. Already in .gitignore.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* Better project structure for flows
* docs.json updated structure
* Ko and Pt translations for deployment guidelines to AMP
* fix broken links
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
- Updated the `supports_stop_words` method to accurately reflect support for stop sequences based on model type, specifically excluding GPT-5 and O-series models.
- Added comprehensive tests to verify that GPT-5 family and O-series models do not support stop words, ensuring correct behavior in completion parameter preparation.
- Ensured that stop words are not included in parameters for unsupported models while maintaining expected behavior for supported models.
This commit fixes a bug where `parent_flow` was not being set because
the maximum search depth was not sufficient to find an instance of `Flow`
in the call stack during Flow instantiation.
* WIP docs for pii-redaction feat
* fix
* updated image
* Update PII Redaction documentation to clarify Enterprise plan requirements and version constraints
* visual re-ordering
* dropping info that isn't useful
* improve docs
* better wording
* Add PII Redaction feature documentation in Korean and Portuguese, including details on activation, supported entity types, and best practices for usage.
- Added new features including native async chain for a2a, a2a update mechanisms, and global flow configuration for human-in-the-loop feedback.
- Improved event handling with enhancements to EventListener and TraceCollectionListener.
- Fixed bugs related to missing a2a dependencies and WorkOS login polling.
- Updated documentation for webhook-streaming and adjusted language in AOP to AMP documentation.
- Acknowledged contributors for this release.
* fix: handle HumanFeedbackPending in flow error management
Updated the flow error handling to treat HumanFeedbackPending as expected control flow rather than an error. This change ensures that the flow can appropriately manage human feedback scenarios without signaling an error, improving the robustness of the flow execution.
* fix: improve error handling for HumanFeedbackPending in flow execution
Refined the flow error management to emit a paused event for HumanFeedbackPending exceptions instead of treating them as failures. This enhancement allows the flow to better manage human feedback scenarios, ensuring that the execution state is preserved and appropriately handled without signaling an error. Regular failure events are still emitted for other exceptions, maintaining robust error reporting.
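A self-contained sketch of this pause-instead-of-fail handling; the exception name follows the commits, while the event plumbing is only mimicked here.

```python
class HumanFeedbackPending(Exception):
    """Raised when a flow method is waiting on human feedback."""

def run_method(method, emit) -> object | None:
    try:
        return method()
    except HumanFeedbackPending:
        emit("flow_paused")  # expected control flow: pause, don't fail
        return None
    except Exception as exc:
        emit(f"flow_failed: {exc}")  # regular failures still surface
        raise

def needs_human():
    raise HumanFeedbackPending()

run_method(needs_human, print)  # prints "flow_paused"
```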
* Adding usage info everywhere
* Changing the check
* Changing the logic
* Adding tests
* Adding cassettes
* Minor change
* Fixing testcase
* remove the duplicated test case, thanks to cursor
* Adding async test cases
* Updating test case
---------
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
* feat: Introduce global flow configuration for human-in-the-loop feedback
- Added a new `flow_config` module to manage global Flow configuration, allowing customization of Flow behavior at runtime.
- Integrated the `hitl_provider` attribute to specify the human-in-the-loop feedback provider, enhancing flexibility in feedback collection.
- Updated the `human_feedback` function to utilize the configured HITL provider, improving the handling of feedback requests.
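A sketch of the intended configuration surface, assuming a module-level `flow_config` object with an `hitl_provider` attribute as described; the import path and provider class are hypothetical.

```python
from crewai.flow import flow_config  # assumed import path

class SlackFeedbackProvider:  # hypothetical custom HITL provider
    def request_feedback(self, message: str) -> str:
        ...  # post the question to Slack and wait for a reply

flow_config.hitl_provider = SlackFeedbackProvider()
```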
* TYPO
* feat: Introduce production-ready Flows and Crews architecture with new runner and updated documentation across multiple languages.
* ko and pt-br for tracing missing links
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* Refactor EventListener and TraceCollectionListener for improved event handling
- Removed unused threading and method branches from EventListener to simplify the code.
- Updated event handling methods in EventListener to use new formatter methods for better clarity and consistency.
- Refactored TraceCollectionListener to eliminate unnecessary parameters in formatter calls, enhancing readability.
- Simplified ConsoleFormatter by removing outdated tree management methods and focusing on panel-based output for status updates.
- Enhanced ToolUsage to track run attempts for better tool usage metrics.
* clearer wording for knowledge retrieval and dropped some redundancies
* Refactor EventListener and ConsoleFormatter for improved clarity and consistency
- Removed the MCPToolExecutionCompletedEvent handler from EventListener to streamline event processing.
- Updated ConsoleFormatter to enhance output formatting by adding line breaks for better readability in status content.
- Renamed status messages for MCP Tool execution to provide clearer context during tool operations.
* fix run attempt incrementation
* task name consistency
* memory events consistency
* ensure hitl works
* linting
* WIP gh pr refactor: update agent executor handling and introduce flow-based executor
* wip
* refactor: clean up comments and improve code clarity in agent executor flow
- Removed outdated comments and unnecessary explanations to enhance code readability.
- Simplified parameter updates in the agent executor to avoid confusion regarding executor recreation.
- Improved clarity to ensure proper handling of non-final answers without raising errors.
* bumping pytest-randomly and numpy
* also bump versions of anthropic sdk
* ensure flow logs are not passed if it's on the executor
* revert anthropic bump
* fix
* refactor: update dependency markers in uv.lock for platform compatibility
- Enhanced dependency markers for several packages to ensure compatibility across different platforms (Linux, Darwin, and architecture-specific conditions).
- Removed unnecessary event emission during kickoff.
- Cleaned up commented-out code for better readability and maintainability.
* drop duplicate
* test: enhance agent executor creation and stop word assertions
- Added calls to create_agent_executor in multiple test cases to ensure proper agent execution setup.
- Updated assertions for stop words in the agent tests to remove unnecessary checks and improve clarity.
- Ensured consistency in task handling by invoking create_agent_executor with the appropriate task parameter.
* refactor: reorganize agent executor imports and introduce CrewAgentExecutorFlow
- Removed the old import of CrewAgentExecutorFlow and replaced it with the new import from the experimental module.
- Updated relevant references in the codebase to ensure compatibility with the new structure.
- Enhanced the organization of imports in core.py and base_agent.py for better clarity and maintainability.
* updating name
* dropped usage of printer here in favor of rich console and dropped logging that added no value
* address i18n
* Enhance concurrency control in CrewAgentExecutorFlow by introducing a threading lock to prevent concurrent executions. This change ensures that the executor instance cannot be invoked while already running, improving stability and reliability during flow execution.
* string literal returns
* Enhance CrewAgentExecutor initialization by allowing optional i18n parameter for improved internationalization support. This change ensures that the executor can utilize a provided i18n instance or fallback to the default, enhancing flexibility in multilingual contexts.
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* feat: introduce human feedback events and decorator for flow methods
- Added HumanFeedbackRequestedEvent and HumanFeedbackReceivedEvent classes to handle human feedback interactions within flows.
- Implemented the @human_feedback decorator to facilitate human-in-the-loop workflows, allowing for feedback collection and routing based on responses.
- Enhanced Flow class to store human feedback history and manage feedback outcomes.
- Updated flow wrappers to preserve attributes from methods decorated with @human_feedback.
- Added integration and unit tests for the new human feedback functionality, ensuring proper validation and routing behavior.
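A hedged usage sketch of the decorator described above; the import path is assumed, and the keyword follows the later rename to `message`.

```python
from crewai.flow.flow import Flow, start
from crewai.flow.human_feedback import human_feedback  # assumed path

class DraftFlow(Flow):
    @start()
    @human_feedback(message="Approve this draft?")
    def write_draft(self):
        # The collected response can be routed on, per the decorator's design.
        return "Draft v1"
```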
* adding deployment docs
* New docs
* fix printer
* wrong change
* Adding Async Support
feat: enhance human feedback support in flows
- Updated the @human_feedback decorator to use 'message' parameter instead of 'request' for clarity.
- Introduced new FlowPausedEvent and MethodExecutionPausedEvent to handle flow and method pauses during human feedback.
- Added ConsoleProvider for synchronous feedback collection and integrated async feedback capabilities.
- Implemented SQLite persistence for managing pending feedback context.
- Expanded documentation to include examples of async human feedback usage and best practices.
* linter
* fix
* migrating off printer
* updating docs
* new tests
* doc update
* fix: use CREWAI_PLUS_URL env var in precedence over PlusAPI configured value
* feat: bypass TLS certificate verification when calling platform
* test: fix test
- Replace Python representation with JsonSchema for tool arguments
- Remove deprecated PydanticSchemaParser in favor of direct schema generation
- Add handling for VAR_POSITIONAL and VAR_KEYWORD parameters
- Improve tool argument schema collection
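A minimal sketch of signature-to-JsonSchema collection with `*args`/`**kwargs` skipped, in the spirit of the change above; the naive string typing is a placeholder.

```python
import inspect

def collect_arg_schema(func) -> dict:
    props: dict[str, dict] = {}
    required: list[str] = []
    for name, param in inspect.signature(func).parameters.items():
        if param.kind in (inspect.Parameter.VAR_POSITIONAL,
                          inspect.Parameter.VAR_KEYWORD):
            continue  # *args / **kwargs carry no fixed schema
        props[name] = {"type": "string"}  # placeholder type mapping
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {"type": "object", "properties": props, "required": required}

def search(query, limit=10, *args, **kwargs): ...

print(collect_arg_schema(search))  # required == ["query"]; varargs omitted
```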
* fix: gracefully terminate the future when executing a task async
* core: add unit test
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* supporting thinking for anthropic models
* drop comments here
* thinking and tool calling support
* fix: properly mock tool use and text block types in Anthropic tests
- Updated the test for the Anthropic tool use conversation flow to include type attributes for mocked ToolUseBlock and text blocks, ensuring accurate simulation of tool interactions during testing.
* feat: add AnthropicThinkingConfig for enhanced thinking capabilities
This update introduces the AnthropicThinkingConfig class to manage thinking parameters for the Anthropic completion model. The LLM and AnthropicCompletion classes have been updated to utilize this new configuration. Additionally, new test cassettes have been added to validate the functionality of thinking blocks across interactions.
Adds async support for tools with tests, async execution in the agent executor, and async operations for memory (with aiosqlite). Improves tool decorator typing, ensures _run backward compatibility, updates docs and docstrings, adds tests, and regenerates lockfiles.
* refactor: enhance model validation and provider inference in LLM class
- Updated the model validation logic to support pattern matching for new models and "latest" versions, improving flexibility for various providers.
- Refactored the `_validate_model_in_constants` method to first check hardcoded constants and then fall back to pattern matching.
- Introduced `_matches_provider_pattern` to streamline provider-specific model checks.
- Enhanced the `_infer_provider_from_model` method to utilize pattern matching for better provider inference.
This refactor aims to improve the extensibility of the LLM class, allowing it to accommodate new models without requiring constant updates to the hardcoded lists.
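A hedged sketch of the constants-first, pattern-fallback lookup; the prefixes are illustrative, not the full table.

```python
PROVIDER_PATTERNS = {
    "openai": ("gpt-", "o1", "o3"),
    "anthropic": ("claude-",),
    "gemini": ("gemini-",),
}

def infer_provider(model: str, known: dict[str, set[str]]) -> str | None:
    for provider, models in known.items():  # 1) check hardcoded constants
        if model in models:
            return provider
    for provider, prefixes in PROVIDER_PATTERNS.items():  # 2) pattern fallback
        if model.startswith(prefixes):  # also covers "-latest" variants
            return provider
    return None

print(infer_provider("claude-opus-4-5", {"openai": {"gpt-4.1-mini"}}))  # anthropic
```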
* feat: add new Anthropic model versions to constants
- Introduced "claude-opus-4-5-20251101" and "claude-opus-4-5" to the AnthropicModels and ANTHROPIC_MODELS lists for enhanced model support.
- Added "anthropic.claude-opus-4-5-20251101-v1:0" to BedrockModels and BEDROCK_MODELS to ensure compatibility with the latest model offerings.
- Updated test cases to ensure proper environment variable handling for model validation, improving robustness in testing scenarios.
* don't infer this way - dropped
- Changed "AMP" to "AOP" in multiple locations across JSON and MDX files to reflect the correct terminology for the Agent Operations Platform.
- Updated the introduction sections in English, Korean, and Portuguese to ensure consistency in the platform's naming.
* Adding drop parameters
* Adding test case
* Just some spacing addition
* Adding drop params to maintain consistency
* Changing variable name
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* feat: enhance flow event state management
- Added `state` attribute to `FlowFinishedEvent` to capture the flow's state as a JSON-serialized dictionary.
- Updated flow event emissions to include the serialized state, improving traceability and debugging capabilities during flow execution.
* fix: improve state serialization in Flow class
- Enhanced the `_copy_and_serialize_state` method to handle exceptions during JSON serialization of Pydantic models, ensuring robustness in state management.
- Updated test assertions to access the state as a dictionary, aligning with the new state structure.
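A sketch of consuming the new payload, assuming `FlowFinishedEvent` exposes the serialized state dict and the usual listener registration; import paths are assumptions.

```python
from crewai.events import crewai_event_bus  # assumed path
from crewai.events.types.flow_events import FlowFinishedEvent  # assumed path

@crewai_event_bus.on(FlowFinishedEvent)
def log_final_state(source, event: FlowFinishedEvent) -> None:
    # event.state is a plain dict snapshot, safe to log or persist
    print(f"{event.flow_name} finished with state: {event.state}")
```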
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* Add gemini-3-pro-preview
Also refactors the tool support check for better forward compatibility.
* Add cassette for Gemini 3 Pro
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
- Deleted the `__init__.py` file from the tests/hooks directory as it contained no tests or functionality. This cleanup helps maintain a tidy test structure.
- add trust_remote_completion_status flag to A2AConfig to control whether to trust A2A agent completion status. Resolves #3899
- update docs
* feat: implement LLM call hooks and enhance agent execution context
- Introduced LLM call hooks to allow modification of messages and responses during LLM interactions.
- Added support for before and after hooks in the CrewAgentExecutor, enabling dynamic adjustments to the execution flow.
- Created LLMCallHookContext for comprehensive access to the executor state, facilitating in-place modifications.
- Added validation for hook callables to ensure proper functionality.
- Enhanced tests for LLM hooks and tool hooks to verify their behavior and error handling capabilities.
- Updated LiteAgent and CrewAgentExecutor to accommodate the new crew context in their execution processes.
* fix verbose
* feat: introduce crew-scoped hook decorators and refactor hook registration
- Added decorators for before and after LLM and tool calls to enhance flexibility in modifying execution behavior.
- Implemented a centralized hook registration mechanism within CrewBase to automatically register crew-scoped hooks.
- Removed the obsolete base.py file as its functionality has been integrated into the new decorators and registration system.
- Enhanced tests for the new hook decorators to ensure proper registration and execution flow.
- Updated existing hook handling to accommodate the new decorator-based approach, improving code organization and maintainability.
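A hedged sketch of the crew-scoped decorator usage, assuming a `@before_llm_call` decorator and an `LLMCallHookContext` with a mutable message list, per the commits above; module paths are assumptions and this is not a runnable crew.

```python
from crewai.project import CrewBase, before_llm_call  # assumed path

@CrewBase
class MyCrew:
    @before_llm_call
    def redact_secrets(self, context):  # context: LLMCallHookContext
        # Hooks can edit messages in place before the LLM sees them.
        for message in context.messages:
            message["content"] = message["content"].replace("SECRET", "[redacted]")
```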
* feat: enhance hook management with clear and unregister functions
- Introduced functions to unregister specific before and after hooks for both LLM and tool calls, improving flexibility in hook management.
- Added clear functions to remove all registered hooks of each type, facilitating easier state management and cleanup.
- Implemented a convenience function to clear all global hooks in one call, streamlining the process for testing and execution context resets.
- Enhanced tests to verify the functionality of unregistering and clearing hooks, ensuring robust behavior in various scenarios.
* refactor: enhance hook type management for LLM and tool hooks
- Updated hook type definitions to use generic protocols for better type safety and flexibility.
- Replaced Callable type annotations with specific BeforeLLMCallHookType and AfterLLMCallHookType for clarity.
- Improved the registration and retrieval functions for before and after hooks to align with the new type definitions.
- Enhanced the setup functions to handle hook execution results, allowing for blocking of LLM calls based on hook logic.
- Updated related tests to ensure proper functionality and type adherence across the hook management system.
* feat: add execution and tool hooks documentation
- Introduced new documentation for execution hooks, LLM call hooks, and tool call hooks to provide comprehensive guidance on their usage and implementation in CrewAI.
- Updated existing documentation to include references to the new hooks, enhancing the learning resources available for users.
- Ensured consistency across multiple languages (English, Portuguese, Korean) for the new documentation, improving accessibility for a wider audience.
- Added examples and troubleshooting sections to assist users in effectively utilizing hooks for agent operations.
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
- Added support for before and after LLM call hooks to allow modification of messages and responses during LLM interactions.
- Introduced LLMCallHookContext to provide hooks with access to the executor state, enabling in-place modifications of messages.
- Updated get_llm_response function to utilize the new hooks, ensuring that modifications persist across iterations.
- Enhanced tests to verify the functionality of the hooks and their error handling capabilities, ensuring robust execution flow.
* feat: add messages to task and agent outputs
- Introduced a new field on task and agent outputs to capture messages from the last task execution.
- Updated the agent class to store the last messages and provide a property for easy access.
- Enhanced the task and agent output classes to include messages in their outputs.
- Added tests to ensure that messages are correctly included in task outputs and agent outputs during execution.
* using typing_extensions for 3.10 compatibility
* feat: add last_messages attribute to agent for improved task tracking
- Introduced a new `last_messages` attribute in the agent class to store messages from the last task execution.
- Updated the `Crew` class to handle the new messages attribute in task outputs.
- Enhanced existing tests to ensure that the `last_messages` attribute is correctly initialized and utilized across various guardrail scenarios.
* fix: add messages field to TaskOutput in tests for consistency
- Updated multiple test cases to include the new `messages` field in the `TaskOutput` instances.
- Ensured that all relevant tests reflect the latest changes in the TaskOutput structure, maintaining consistency across the test suite.
- This change aligns with the recent addition of the `last_messages` attribute in the agent class for improved task tracking.
* feat: preserve messages in task outputs during replay
- Added functionality to the Crew class to store and retrieve messages in task outputs.
- Enhanced the replay mechanism to ensure that messages from stored task outputs are preserved and accessible.
- Introduced a new test case to verify that messages are correctly stored and replayed, ensuring consistency in task execution and output handling.
- This change improves the overall tracking and context retention of task interactions within the CrewAI framework.
* fix original test, prev was debugging
- Added section on LLM-based guardrails, explaining their usage and requirements.
- Updated examples to demonstrate the implementation of multiple guardrails, including both function-based and LLM-based approaches.
- Clarified the distinction between single and multiple guardrails in task configurations.
- Improved explanations of guardrail functionality to ensure better understanding of validation processes.
- Enhanced the MCP tool execution in both synchronous and asynchronous contexts with better event loop management.
- Updated error handling to provide clearer messages for connection issues and task cancellations.
- Added tests to validate MCP tool execution in both sync and async scenarios, ensuring robust functionality across different contexts.
* WIP transport support mcp
* refactor: streamline MCP tool loading and error handling
* linted
* Self type from typing with typing_extensions in MCP transport modules
* added tests for mcp setup
* docs: enhance MCP overview with detailed integration examples and structured configurations
* feat: implement MCP event handling and logging in event listener and client
- Added MCP event types and handlers for connection and tool execution events.
- Enhanced MCPClient to emit events on connection status and tool execution.
- Updated ConsoleFormatter to handle MCP event logging.
- Introduced new MCP event types for better integration and monitoring.
* fix: update document ID handling in ChromaDB utility functions to use SHA-256 hashing and include index for uniqueness
* test: add tests for hash-based ID generation in ChromaDB utility functions
* drop the idx suffix used to prevent dups; upsert should handle dups
* fix: update document ID extraction logic in ChromaDB utility functions to check for doc_id at the top level of the document
* fix: enhance document ID generation in ChromaDB utility functions to deduplicate documents and ensure unique hash-based IDs without suffixes
* fix: improve error handling and document ID generation in ChromaDB utility functions to ensure robust processing and uniqueness
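A minimal sketch of hash-based, deduplicating ID generation in the spirit of the fixes above; the production version also honors a top-level `doc_id` when present.

```python
import hashlib

def generate_ids(documents: list[str]) -> tuple[list[str], list[str]]:
    seen: set[str] = set()
    unique_docs: list[str] = []
    ids: list[str] = []
    for doc in documents:
        doc_id = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if doc_id in seen:
            continue  # identical content collapses to one record; upsert handles re-runs
        seen.add(doc_id)
        unique_docs.append(doc)
        ids.append(doc_id)
    return unique_docs, ids

docs, ids = generate_ids(["alpha", "beta", "alpha"])
assert len(docs) == 2 and len(ids) == 2
```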
fix: refine nested flow conditionals and ensure router methods and routes are fully parsed
fix: improve docstrings, typing, and logging coverage across all events
feat: update flow.plot feature with new UI enhancements
chore: apply Ruff linting, reorganize imports, and remove deprecated utilities/files
chore: split constants and utils, clean JS comments, and add typing for linters
tests: strengthen test coverage for flow execution paths and router logic
* fix: update default LLM model and improve error logging in LLM utilities
* Updated the default LLM model from "gpt-4o-mini" to "gpt-4.1-mini" for better performance.
* Enhanced error logging in the LLM utilities to use logger.error instead of logger.debug, ensuring that errors are properly reported and raised.
* Added tests to verify behavior when OpenAI API key is missing and when Anthropic dependency is not available, improving robustness and error handling in LLM creation.
* fix: update test for default LLM model usage
* Refactored the test_create_llm_with_none_uses_default_model to use the imported DEFAULT_LLM_MODEL constant instead of a hardcoded string.
* Ensured that the test correctly asserts the model used is the current default, improving maintainability and consistency across tests.
* change default model to gpt-4.1-mini
* change default model to use the default
* feat: enhance InternalInstructor to support multiple LLM providers
- Updated InternalInstructor to conditionally create an instructor client based on the LLM provider.
- Introduced a new method _create_instructor_client to handle client creation using the modern from_provider pattern.
- Added functionality to extract the provider from the LLM model name.
- Implemented tests for InternalInstructor with various LLM providers including OpenAI, Anthropic, Gemini, and Azure, ensuring robust integration and error handling.
This update improves flexibility and extensibility for different LLM integrations.
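A hedged sketch of provider-aware client creation around instructor's `from_provider` entry point named above; the normalization rule and model strings are illustrative.

```python
import instructor

def create_instructor_client(model: str):
    # Normalize to instructor's "provider/model" form, defaulting to openai
    # for bare model names like "gpt-4.1-mini".
    identifier = model if "/" in model else f"openai/{model}"
    return instructor.from_provider(identifier)

# client = create_instructor_client("anthropic/claude-sonnet-4-5")
```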
* fix test
Fix navigation paths for two integration tool cards that were redirecting to the
introduction page instead of their intended documentation pages.
Fixes #3516
Co-authored-by: Cwarre33 <cwarre33@charlotte.edu>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* Implement improvements on QdrantVectorSearchTool
- Allow search filters to be set at the constructor level
- Fix issue that prevented multiple records from being returned
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* chore: update codeql config paths to new folders
* tests: use threading.Condition for event check
---------
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
* docs: update LLM integration details and examples
- Changed references from LiteLLM to native SDKs for LLM providers.
- Enhanced OpenAI and AWS Bedrock sections with new usage examples and advanced configuration options.
- Added structured output examples and supported environment variables for better clarity.
- Improved documentation on additional parameters and features for LLM configurations.
* drop this example - should use structured output from task instead
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* feat: add `apps` & `actions` attributes to Agent (#3504)
* feat: add app attributes to Agent
* feat: add actions attribute to Agent
* chore: resolve linter issues
* refactor: merge the apps and actions parameters into a single one
* fix: remove unnecessary print
* feat: logging error when CrewaiPlatformTools fails
* chore: export CrewaiPlatformTools directly from crewai_tools
* style: resolve linter issues
* test: fix broken tests
* style: solve linter issues
* fix: fix broken test
* feat: monorepo restructure and test/ci updates
- Add crewai workspace member
- Fix vcr cassette paths and restore test dirs
- Resolve ci failures and update linter/pytest rules
* chore: update python version to 3.13 and package metadata
* feat: add crewai-tools workspace and fix tests/dependencies
* feat: add crewai-tools workspace structure
* Squashed 'temp-crewai-tools/' content from commit 9bae5633
git-subtree-dir: temp-crewai-tools
git-subtree-split: 9bae56339096cb70f03873e600192bd2cd207ac9
* feat: configure crewai-tools workspace package with dependencies
* fix: apply ruff auto-formatting to crewai-tools code
* chore: update lockfile
* fix: don't allow tool tests yet
* fix: comment out extra pytest flags for now
* fix: remove conflicting conftest.py from crewai-tools tests
* fix: resolve dependency conflicts and test issues
- Pin vcrpy to 7.0.0 to fix pytest-recording compatibility
- Comment out types-requests to resolve urllib3 conflict
- Update requests requirement in crewai-tools to >=2.32.0
* chore: update CI workflows and docs for monorepo structure
* fix: actions syntax
* chore: ci publish and pin versions
* fix: add permission to action
* chore: bump version to 1.0.0a1 across all packages
- Updated version to 1.0.0a1 in pyproject.toml for crewai and crewai-tools
- Adjusted version in __init__.py files for consistency
* WIP: v1 docs (#3626)
(cherry picked from commit d46e20fa09bcd2f5916282f5553ddeb7183bd92c)
* docs: parity for all translations
* docs: full name of acronym AMP
* docs: fix lingering unused code
* docs: expand contextual options in docs.json
* docs: add contextual action to request feature on GitHub (#3635)
* chore: apply linting fixes to crewai-tools
* feat: add required env var validation for brightdata
Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
* fix: handle properly anyOf oneOf allOf schema's props
Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
* feat: bump version to 1.0.0a2
* Lorenze/native inference sdks (#3619)
* ruff linted
* using native sdks with litellm fallback
* drop exa
* drop print on completion
* Refactor LLM and utility functions for type consistency
- Updated `max_tokens` parameter in `LLM` class to accept `float` in addition to `int`.
- Modified `create_llm` function to ensure consistent type hints and return types, now returning `LLM | BaseLLM | None`.
- Adjusted type hints for various parameters in `create_llm` and `_llm_via_environment_or_fallback` functions for improved clarity and type safety.
- Enhanced test cases to reflect changes in type handling and ensure proper instantiation of LLM instances.
* fix agent_tests
* fix litellm tests and usage metrics
* drop print
* Refactor LLM event handling and improve test coverage
- Removed commented-out event emission for LLM call failures in `llm.py`.
- Added `from_agent` parameter to `CrewAgentExecutor` for better context in LLM responses.
- Enhanced test for LLM call failure to simulate OpenAI API failure and updated assertions for clarity.
- Updated agent and task ID assertions in tests to ensure they are consistently treated as strings.
* fix test_converter
* fixed tests/agents/test_agent.py
* Refactor LLM context length exception handling and improve provider integration
- Renamed `LLMContextLengthExceededException` to `LLMContextLengthExceededExceptionError` for clarity and consistency.
- Updated LLM class to pass the provider parameter correctly during initialization.
- Enhanced error handling in various LLM provider implementations to raise the new exception type.
- Adjusted tests to reflect the updated exception name and ensure proper error handling in context length scenarios.
* Enhance LLM context window handling across providers
- Introduced CONTEXT_WINDOW_USAGE_RATIO to adjust context window sizes dynamically for Anthropic, Azure, Gemini, and OpenAI LLMs.
- Added validation for context window sizes in Azure and Gemini providers to ensure they fall within acceptable limits.
- Updated context window size calculations to use the new ratio, improving consistency and adaptability across different models.
- Removed hardcoded context window sizes in favor of ratio-based calculations for better flexibility.
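A sketch of ratio-based sizing as described; the 0.85 value and raw window sizes here are illustrative, while the real constant and tables live in the provider modules.

```python
CONTEXT_WINDOW_USAGE_RATIO = 0.85  # leave headroom for the response

RAW_CONTEXT_WINDOWS = {
    "gpt-4o-mini": 128_000,  # illustrative raw sizes
    "claude-3-5-sonnet": 200_000,
}

def usable_context_window(model: str, default: int = 8_192) -> int:
    raw = RAW_CONTEXT_WINDOWS.get(model, default)
    return int(raw * CONTEXT_WINDOW_USAGE_RATIO)

print(usable_context_window("gpt-4o-mini"))  # 108800
```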
* fix test agent again
* fix test agent
* feat: add native LLM providers for Anthropic, Azure, and Gemini
- Introduced new completion implementations for Anthropic, Azure, and Gemini, integrating their respective SDKs.
- Added utility functions for tool validation and extraction to support function calling across LLM providers.
- Enhanced context window management and token usage extraction for each provider.
- Created a common utility module for shared functionality among LLM providers.
* chore: update dependencies and improve context management
- Removed direct dependency on `litellm` from the main dependencies and added it under extras for better modularity.
- Updated the `litellm` dependency specification to allow for greater flexibility in versioning.
- Refactored context length exception handling across various LLM providers to use a consistent error class.
- Enhanced platform-specific dependency markers for NVIDIA packages to ensure compatibility across different systems.
* refactor(tests): update LLM instantiation to include is_litellm flag in test cases
- Modified multiple test cases in test_llm.py to set the is_litellm parameter to True when instantiating the LLM class.
- This change ensures that the tests are aligned with the latest LLM configuration requirements and improves consistency across test scenarios.
- Adjusted relevant assertions and comments to reflect the updated LLM behavior.
* linter
* linted
* revert constants
* fix(tests): correct type hint in expected model description
- Updated the expected description in the test_generate_model_description_dict_field function to use 'Dict' instead of 'dict' for consistency with type hinting conventions.
- This change ensures that the test accurately reflects the expected output format for model descriptions.
* refactor(llm): enhance LLM instantiation and error handling
- Updated the LLM class to include validation for the model parameter, ensuring it is a non-empty string.
- Improved error handling by logging warnings when the native SDK fails, allowing for a fallback to LiteLLM.
- Adjusted the instantiation of LLM in test cases to consistently include the is_litellm flag, aligning with recent changes in LLM configuration.
- Modified relevant tests to reflect these updates, ensuring better coverage and accuracy in testing scenarios.
* fixed test
* refactor(llm): enhance token usage tracking and add copy methods
- Updated the LLM class to track token usage and log callbacks in streaming mode, improving monitoring capabilities.
- Introduced shallow and deep copy methods for the LLM instance, allowing for better management of LLM configurations and parameters.
- Adjusted test cases to instantiate LLM with the is_litellm flag, ensuring alignment with recent changes in LLM configuration.
* refactor(tests): reorganize imports and enhance error messages in test cases
- Cleaned up import statements in test_crew.py for better organization and readability.
- Enhanced error messages in test cases to use `re.escape` for improved regex matching, ensuring more robust error handling.
- Adjusted comments for clarity and consistency across test scenarios.
- Ensured that all necessary modules are imported correctly to avoid potential runtime issues.
* feat: add base devtooling
* fix: ensure dep refs are updated for devtools
* fix: allow pre-release
* feat: allow release after tag
* feat: bump versions to 1.0.0a3
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* fix: match tag and release title, ignore devtools build for pypi
* fix: allow failed pypi publish
* feat: introduce trigger listing and execution commands for local development (#3643)
* chore: exclude tests from ruff linting
* chore: exclude tests from GitHub Actions linter
* fix: replace print statements with logger in agent and memory handling
* chore: add noqa for intentional print in printer utility
* fix: resolve linting errors across codebase
* feat: update docs with new approach to consume Platform Actions (#3675)
* fix: remove duplicate line and add explicit env var
* feat: bump versions to 1.0.0a4 (#3686)
* Update triggers docs (#3678)
* docs: introduce triggers list & triggers run command
* docs: add KO triggers docs
* docs: ensure CREWAI_PLATFORM_INTEGRATION_TOKEN is mentioned on docs (#3687)
* Lorenze/bedrock llm (#3693)
* feat: add AWS Bedrock support and update dependencies
- Introduced BedrockCompletion class for AWS Bedrock integration in LLM.
- Added boto3 as a new dependency in both pyproject.toml and uv.lock.
- Updated LLM class to support Bedrock provider.
- Created new files for Bedrock provider implementation.
* using converse api
* converse
* linted
* refactor: update BedrockCompletion class to improve parameter handling
- Changed max_tokens from a fixed integer to an optional integer.
- Simplified model ID assignment by removing the inference profile mapping method.
- Cleaned up comments and unnecessary code related to tool specifications and model-specific parameters.
* feat: improve event bus thread safety and async support
Add thread-safe, async-compatible event bus with read–write locking and
handler dependency ordering. Remove blinker dependency and implement
direct dispatch. Improve type safety, error handling, and deterministic
event synchronization.
Refactor tests to auto-wait for async handlers, ensure clean teardown,
and add comprehensive concurrency coverage. Replace thread-local state
in AgentEvaluator with instance-based locking for correct cross-thread
access. Enhance tracing reliability and event finalization.
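A compact sketch of the read-write idea, where readers are event emissions and the writer path is handler (de)registration; the production lock also coordinates async handlers and dispatch ordering.

```python
import threading

class ReadWriteLock:
    """Many concurrent readers, or one writer once readers drain."""

    def __init__(self) -> None:
        self._readers = 0
        self._lock = threading.Lock()
        self._no_readers = threading.Condition(self._lock)

    def acquire_read(self) -> None:
        with self._lock:
            self._readers += 1

    def release_read(self) -> None:
        with self._lock:
            self._readers -= 1
            if self._readers == 0:
                self._no_readers.notify_all()

    def acquire_write(self) -> None:
        self._lock.acquire()
        while self._readers:  # wait() releases the lock while blocked
            self._no_readers.wait()

    def release_write(self) -> None:
        self._lock.release()
```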
* feat: enhance OpenAICompletion class with additional client parameters (#3701)
* feat: enhance OpenAICompletion class with additional client parameters
- Added support for default_headers, default_query, and client_params in the OpenAICompletion class.
- Refactored client initialization to use a dedicated method for client parameter retrieval.
- Introduced new test cases to validate the correct usage of OpenAICompletion with various parameters.
* fix: correct test case for unsupported OpenAI model
- Updated the test_openai.py to ensure that the LLM instance is created before calling the method, maintaining proper error handling for unsupported models.
- This change ensures that the test accurately checks for the NotFoundError when an invalid model is specified.
* fix: enhance error handling in OpenAICompletion class
- Added specific exception handling for NotFoundError and APIConnectionError in the OpenAICompletion class to provide clearer error messages and improve logging.
- Updated the test case for unsupported models to ensure it raises a ValueError with the appropriate message when a non-existent model is specified.
- This change improves the robustness of the OpenAI API integration and enhances the clarity of error reporting.
* fix: improve test for unsupported OpenAI model handling
- Refactored the test case in test_openai.py to create the LLM instance after mocking the OpenAI client, ensuring proper error handling for unsupported models.
- This change enhances the clarity of the test by accurately checking for ValueError when a non-existent model is specified, aligning with recent improvements in error handling for the OpenAICompletion class.
* feat: bump versions to 1.0.0b1 (#3706)
* Lorenze/tools drop litellm (#3710)
* completely drop litellm and correctly pass config for qdrant
* feat: add support for additional embedding models in EmbeddingService
- Expanded the list of supported embedding models to include Google Vertex, Hugging Face, Jina, Ollama, OpenAI, Roboflow, Watson X, custom embeddings, Sentence Transformers, Text2Vec, OpenClip, and Instructor.
- This enhancement improves the versatility of the EmbeddingService by allowing integration with a wider range of embedding providers.
* fix: update collection parameter handling in CrewAIRagAdapter
- Changed the condition for setting vectors_config in the CrewAIRagAdapter to check for QdrantConfig instance instead of using hasattr. This improves type safety and ensures proper configuration handling for Qdrant integration.
* moved stagehand as optional dep (#3712)
* feat: bump versions to 1.0.0b2 (#3713)
* feat: enhance AnthropicCompletion class with additional client parame… (#3707)
* feat: enhance AnthropicCompletion class with additional client parameters and tool handling
- Added support for client_params in the AnthropicCompletion class to allow for additional client configuration.
- Refactored client initialization to use a dedicated method for retrieving client parameters.
- Implemented a new method to handle tool use conversation flow, ensuring proper execution and response handling.
- Introduced comprehensive test cases to validate the functionality of the AnthropicCompletion class, including tool use scenarios and parameter handling.
* drop print statements
* test: add fixture to mock ANTHROPIC_API_KEY for tests
- Introduced a pytest fixture to automatically mock the ANTHROPIC_API_KEY environment variable for all tests in the test_anthropic.py module.
- This change ensures that tests can run without requiring a real API key, improving test isolation and reliability.
* refactor: streamline streaming message handling in AnthropicCompletion class
- Removed the 'stream' parameter from the API call as it is set internally by the SDK.
- Simplified the handling of tool use events and response construction by extracting token usage from the final message.
- Enhanced the flow for managing tool use conversation, ensuring proper integration with the streaming API response.
* fix streaming here too
* fix: improve error handling in tool conversion for AnthropicCompletion class
- Enhanced exception handling during tool conversion by catching KeyError and ValueError.
- Added logging for conversion errors to aid in debugging and maintain robustness in tool integration.
* feat: enhance GeminiCompletion class with client parameter support (#3717)
* feat: enhance GeminiCompletion class with client parameter support
- Added support for client_params in the GeminiCompletion class to allow for additional client configuration.
- Refactored client initialization into a dedicated method for improved parameter handling.
- Introduced a new method to retrieve client parameters, ensuring compatibility with the base class.
- Enhanced error handling during client initialization to provide clearer messages for missing configuration.
- Updated documentation to reflect the changes in client parameter usage.
* add optional dependencies
* refactor: update test fixture to mock GOOGLE_API_KEY
- Renamed the fixture from `mock_anthropic_api_key` to `mock_google_api_key` to reflect the change in the environment variable being mocked.
- This update ensures that all tests in the module can run with a mocked GOOGLE_API_KEY, improving test isolation and reliability.
* fix tests
* feat: enhance BedrockCompletion class with advanced features
* feat: enhance BedrockCompletion class with advanced features and error handling
- Added support for guardrail configuration, additional model request fields, and custom response field paths in the BedrockCompletion class.
- Improved error handling for AWS exceptions and added token usage tracking with stop reason logging.
- Enhanced streaming response handling with comprehensive event management, including tool use and content block processing.
- Updated documentation to reflect new features and initialization parameters.
- Introduced a new test suite for BedrockCompletion to validate functionality and ensure robust integration with AWS Bedrock APIs.
* chore: add boto typing
* fix: use typing_extensions.Required for Python 3.10 compatibility
---------
Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
* feat: azure native tests
* feat: add Azure AI Inference support and related tests
- Introduced the `azure-ai-inference` package with version `1.0.0b9` and its dependencies in `uv.lock` and `pyproject.toml`.
- Added new test files for Azure LLM functionality, including tests for Azure completion and tool handling.
- Implemented comprehensive test cases to validate Azure-specific behavior and integration with the CrewAI framework.
- Enhanced the testing framework to mock Azure credentials and ensure proper isolation during tests.
* feat: enhance AzureCompletion class with Azure OpenAI support
- Added support for the Azure OpenAI endpoint in the AzureCompletion class, allowing for flexible endpoint configurations.
- Implemented endpoint validation and correction to ensure proper URL formats for Azure OpenAI deployments.
- Enhanced error handling to provide clearer messages for common HTTP errors, including authentication and rate limit issues.
- Updated tests to validate the new endpoint handling and error messaging, ensuring robust integration with Azure AI Inference.
- Refactored parameter preparation to conditionally include the model parameter based on the endpoint type.
* refactor: convert project module to metaclass with full typing
* Lorenze/OpenAI base url backwards support (#3723)
* fix: enhance OpenAICompletion class base URL handling
- Updated the base URL assignment in the OpenAICompletion class to prioritize the new `api_base` attribute and fallback to the environment variable `OPENAI_BASE_URL` if both are not set.
- Added `api_base` to the list of parameters in the OpenAICompletion class to ensure proper configuration and flexibility in API endpoint management.
* feat: enhance OpenAICompletion class with api_base support
- Added the `api_base` parameter to the OpenAICompletion class to allow for flexible API endpoint configuration.
- Updated the `_get_client_params` method to prioritize `base_url` over `api_base`, ensuring correct URL handling.
- Introduced comprehensive tests to validate the behavior of `api_base` and `base_url` in various scenarios, including environment variable fallback.
- Enhanced test coverage for client parameter retrieval, ensuring robust integration with the OpenAI API.
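A hedged sketch of the resolution order described above: explicit `base_url` wins, then `api_base`, then the `OPENAI_BASE_URL` environment variable.

```python
import os

def resolve_base_url(base_url: str | None, api_base: str | None) -> str | None:
    return base_url or api_base or os.environ.get("OPENAI_BASE_URL")

assert resolve_base_url("https://proxy.local/v1", "https://other/v1") == "https://proxy.local/v1"
```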
* fix: improve OpenAICompletion class configuration handling
- Added a debug print statement to log the client configuration parameters during initialization for better traceability.
- Updated the base URL assignment logic to ensure it defaults to None if no valid base URL is provided, enhancing robustness in API endpoint configuration.
- Refined the retrieval of the `api_base` environment variable to streamline the configuration process.
* drop print
* feat: improvements on import native sdk support (#3725)
* feat: add support for Anthropic provider and enhance logging
- Introduced the `anthropic` package with version `0.69.0` in `pyproject.toml` and `uv.lock`, allowing for integration with the Anthropic API.
- Updated logging in the LLM class to provide clearer error messages when importing native providers, enhancing debugging capabilities.
- Improved error handling in the AnthropicCompletion class to guide users on installation via the updated error message format.
- Refactored import error handling in other provider classes to maintain consistency in error messaging and installation instructions.
* feat: enhance LLM support with Bedrock provider and update dependencies
- Added support for the `bedrock` provider in the LLM class, allowing integration with AWS Bedrock APIs.
- Updated `uv.lock` to replace `boto3` with `bedrock` in the dependencies, reflecting the new provider structure.
- Introduced `SUPPORTED_NATIVE_PROVIDERS` to include `bedrock` and ensure proper error handling when instantiating native providers.
- Enhanced error handling in the LLM class to raise informative errors when native provider instantiation fails.
- Added tests to validate the behavior of the new Bedrock provider and ensure fallback mechanisms work correctly for unsupported providers.
* test: update native provider fallback tests to expect ImportError
* adjust the test to the expected behavior - raising ImportError
* this is expecting the litellm format; all gemini native tests are in test_google.py
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* fix: remove stdout prints, improve test determinism, and update trace handling
Removed `print` statements from the `LLMStreamChunkEvent` handler to prevent
LLM response chunks from being written directly to stdout. The listener now
only tracks chunks internally.
Fixes#3715
Added explicit return statements for trace-related tests.
Updated cassette for `test_failed_evaluation` to reflect new behavior where
an empty trace dict is used instead of returning early.
Ensured deterministic cleanup order in test fixtures by making
`clear_event_bus_handlers` depend on `setup_test_environment`. This guarantees
event bus shutdown and file handle cleanup occur before temporary directory
deletion, resolving intermittent “Directory not empty” errors in CI.
* chore: remove lib/crewai exclusion from pre-commit hooks
* feat: enhance task guardrail functionality and validation
* feat: enhance task guardrail functionality and validation
- Introduced support for multiple guardrails in the Task class, allowing for sequential processing of guardrails.
- Added a new `guardrails` field to the Task model to accept a list of callable guardrails or string descriptions.
- Implemented validation to ensure guardrails are processed correctly, including handling of retries and error messages.
- Enhanced the `_invoke_guardrail_function` method to manage guardrail execution and integrate with existing task output processing.
- Updated tests to cover various scenarios involving multiple guardrails, including success, failure, and retry mechanisms.
This update improves the flexibility and robustness of task execution by allowing for more complex validation scenarios.
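An illustrative sketch of the new `guardrails` list, assuming callables return a `(passed, value_or_error)` tuple and strings become LLM-based checks; the task details are placeholders, not a runnable crew.

```python
from crewai import Task

def non_empty(output):
    if output.raw.strip():
        return (True, output)
    return (False, "empty output")

task = Task(
    description="Summarize the report",
    expected_output="A three-sentence summary",
    guardrails=[non_empty, "Must not mention competitor names"],  # callable + LLM-based
)
```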
* refactor: enhance guardrail type handling in Task model
- Updated the Task class to improve guardrail type definitions, introducing GuardrailType and GuardrailsType for better clarity and type safety.
- Simplified the validation logic for guardrails, ensuring that both single and multiple guardrails are processed correctly.
- Enhanced error messages for guardrail validation to provide clearer feedback when incorrect types are provided.
- This refactor improves the maintainability and robustness of task execution by standardizing guardrail handling.
* feat: implement per-guardrail retry tracking in Task model
- Introduced a new private attribute `_guardrail_retry_counts` to the Task class for tracking retry attempts on a per-guardrail basis.
- Updated the guardrail processing logic to utilize the new retry tracking, allowing for independent retry counts for each guardrail.
- Enhanced error handling to provide clearer feedback when guardrails fail validation after exceeding retry limits.
- Modified existing tests to validate the new retry tracking behavior, ensuring accurate assertions on guardrail retries.
This update improves the robustness and flexibility of task execution by allowing for more granular control over guardrail validation and retry mechanisms.
* chore: 1.0.0b3 bump (#3734)
* chore: full ruff and mypy
- Improved linting, pre-commit setup, and internal architecture.
- Configured Ruff to respect .gitignore, added stricter rules, and introduced a lock pre-commit hook with virtualenv activation.
- Fixed type shadowing in EXASearchTool using a type_ alias to avoid PEP 563 conflicts and resolved circular imports in agent executor and guardrail modules.
- Removed agent-ops attributes and the deprecated watson alias, and dropped crewai-enterprise tools with corresponding test updates.
- Refactored cache and memoization for thread safety and cleaned up structured output adapters and related logic.
* New MCL DSL (#3738)
* Adding MCP implementation
* New tests for MCP implementation
* fix tests
* update docs
* Revert "New tests for MCP implementation"
This reverts commit 0bbe6dee90.
* linter
* linter
* fix
* verify mcp package exists
* adjust docs to be clear only remote servers are supported
* reverted
* ensure args schema generated properly
* properly close out
---------
Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
* feat: a2a experimental
experimental a2a support
---------
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
Co-authored-by: Mike Plachta <mplachta@users.noreply.github.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Fixes nested boolean conditions being flattened in @listen, @start, and @router decorators. The or_() and and_() combinators now preserve their nested structure using a "conditions" key instead of flattening to a list. Added recursive evaluation logic to properly handle complex patterns like or_(and_(A, B), and_(C, D)).
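A sketch of the nested pattern the fix preserves; with the "conditions" key kept, `or_(and_(A, B), and_(C, D))` now evaluates as (A and B) or (C and D) instead of flattening to A or B or C or D. The flow itself is hypothetical.

```python
from crewai.flow.flow import Flow, and_, listen, or_, start

class GatedFlow(Flow):
    @start()
    def a(self): ...

    @start()
    def b(self): ...

    @start()
    def c(self): ...

    @start()
    def d(self): ...

    @listen(or_(and_(a, b), and_(c, d)))
    def proceed(self):
        # Fires once both a and b have completed, or both c and d have.
        ...
```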
- Bumped the `crewai` version in `__init__.py` to 0.203.1.
- Updated the dependency versions in the crew, flow, and tool templates' `pyproject.toml` files to reflect the new `crewai` version.
- Revised the security policy to clarify the reporting process for vulnerabilities.
- Added detailed sections on scope, reporting requirements, and our commitment to addressing reported issues.
- Emphasized the importance of not disclosing vulnerabilities publicly and provided guidance on how to report them securely.
- Included a new section on coordinated disclosure and safe harbor provisions for ethical reporting.
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
- Updated the `crewai-tools` dependency in `pyproject.toml` and `uv.lock` to version 0.76.0.
- Updated the `crewai` version in `__init__.py` to 0.203.0.
- Updated the dependency versions in the crew, flow, and tool templates to reflect the new `crewai` version.
- Introduced a new documentation page detailing how to capture telemetry logs from CrewAI AMP deployments.
- Updated the main documentation to include the new guide in the enterprise section.
- Added prerequisites and step-by-step instructions for configuring OTEL collector setup.
- Included an example image for OTEL log collection capture to Datadog.
* feat: enhance knowledge event handling in Agent class
- Updated the Agent class to include task context in knowledge retrieval events.
- Emitted new events for knowledge retrieval and query processes, capturing task and agent details.
- Refactored knowledge event classes to inherit from a base class for better structure and maintainability.
- Added tracing for knowledge events in the TraceCollectionListener to improve observability.
This change improves the tracking and management of knowledge queries and retrievals, facilitating better debugging and performance monitoring.
* refactor: remove task_id from knowledge event emissions in Agent class
- Removed the task_id parameter from various knowledge event emissions in the Agent class to streamline event handling.
- This change simplifies the event structure and focuses on the essential context of knowledge retrieval and query processes.
This refactor enhances the clarity of knowledge events and aligns with the recent improvements in event handling.
* surface association for guardrail events
* fix: improve LLM selection logic in converter
- Updated the logic for selecting the LLM in the convert_with_instructions function to handle cases where the agent may not have a function_calling_llm attribute.
- This change ensures that the converter can still function correctly by falling back to the standard LLM if necessary, enhancing robustness and preventing potential errors.
This fix improves the reliability of the conversion process when working with different agent configurations.
* fix test
* fix: enforce valid LLM instance requirement in converter
- Updated the convert_with_instructions function to ensure that a valid LLM instance is provided by the agent.
- If neither function_calling_llm nor the standard llm is available, a ValueError is raised, enhancing error handling and robustness.
- Improved error messaging for conversion failures to provide clearer feedback on issues encountered during the conversion process.
This change strengthens the reliability of the conversion process by ensuring that agents are properly configured with a valid LLM.
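A hypothetical sketch of the selection rule these two fixes describe; the actual convert_with_instructions code may differ in detail:

```python
def select_converter_llm(agent):
    """Prefer the agent's function-calling LLM; fall back to its standard LLM."""
    llm = getattr(agent, "function_calling_llm", None) or getattr(agent, "llm", None)
    if llm is None:
        raise ValueError(
            "Agent must be configured with a valid LLM instance for conversion."
        )
    return llm
```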
- Introduced a new documentation page for CrewAI Tracing, detailing setup and usage.
- Updated the main documentation to include the new tracing page in the observability section.
- Added example code snippets for enabling tracing in both Crews and Flows.
- Included instructions for global tracing configuration via environment variables.
- Added a new image for the CrewAI Tracing interface.
- prefix provider env vars with embeddings_
- rename watson → watsonx in providers
- add deprecation warning and alias for legacy 'watson' key (to be removed in v1.0.0)
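A minimal sketch of the alias handling described above (normalize_provider is a hypothetical helper, not the shipped code):

```python
import warnings

def normalize_provider(provider: str) -> str:
    # Accept the legacy 'watson' key with a warning and map it to 'watsonx'.
    if provider == "watson":
        warnings.warn(
            "The 'watson' provider key is deprecated and will be removed in "
            "v1.0.0; use 'watsonx' instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return "watsonx"
    return provider
```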
- Bump CrewAI version to 0.201.0 in __init__.py
- Update dependency versions in pyproject.toml for crew, flow, and tool templates to require CrewAI 0.201.0
- Remove unnecessary blank line in pyproject.toml
- update imports and include handling for chromadb v1.1.0
- fix mypy and typing_compat issues (required, typeddict, voyageai)
- refine EmbedderConfig typing and allow base provider instances
- handle mem0 as special case for external memory storage
- bump tools and clean up redundant deps
- introduce BaseEmbeddingsProvider and a helper for embedding functions
- add core embedding types and migrate providers, factory, and storage modules
- remove unused type aliases and fix pydantic schema error
- update providers with env var support and related fixes
- Add pydantic-settings>=2.10.1 dependency for configuration management
- Update pydantic to 2.11.9 and python-dotenv to 1.1.1
- Migrate from deprecated tool.uv.dev-dependencies to dependency-groups.dev format
- Remove unnecessary dev dependencies: pillow, cairosvg
- Update all dev tooling to latest versions
- Remove duplicate python-dotenv from dev dependencies
- add batch_size field to BaseRagConfig (default=100)
- update chromadb/qdrant clients and factories to use batch_size
- extract and filter batch_size from embedder config in KnowledgeStorage
- fix large csv files exceeding embedder token limits (#3574)
- remove unneeded conditional for type
Co-authored-by: Vini Brasil <vini@hey.com>
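A minimal sketch of the batching idea behind these changes, assuming a batch_size like the new BaseRagConfig field; the embed callable and helper name are illustrative:

```python
from typing import Callable, Sequence

def embed_in_batches(
    texts: Sequence[str],
    embed: Callable[[list[str]], list[list[float]]],
    batch_size: int = 100,
) -> list[list[float]]:
    """Embed documents in chunks so large files stay under embedder token limits."""
    vectors: list[list[float]] = []
    for start in range(0, len(texts), batch_size):
        vectors.extend(embed(list(texts[start : start + batch_size])))
    return vectors
```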
- support nested config format with the EmbedderConfig TypedDict
- fix parsing for model/model_name compatibility
- add validation, typing_extensions, and improved type hints
- enhance embedding factory with env var injection and provider support
- add tests for openai, azure, and all embedding providers
- misc fixes: test file rename, updated mocking patterns
* fix: Make 'ready' parameter optional in _create_reasoning_plan function
This PR fixes Issue #3466, where calls to the _create_reasoning_plan function
failed because the LLM omitted the 'ready' parameter. The fix makes the 'ready'
parameter optional with a default value of False, which allows the function to
be called with only the 'plan' argument.
Fixes #3466
* Change default value of 'ready' parameter to True
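A hedged sketch of the resulting signature; the real _create_reasoning_plan lives in CrewAI's reasoning handler and may differ in detail:

```python
def _create_reasoning_plan(plan: str, ready: bool = True) -> dict[str, object]:
    """Record the reasoning plan; 'ready' now defaults, so the LLM may omit it."""
    return {"plan": plan, "ready": ready}
```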
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* chore: update dependencies and versioning for CrewAI
- Bump `crewai-tools` dependency version from `0.71.0` to `0.73.0` in `pyproject.toml`.
- Update CrewAI version from `0.186.1` to `0.193.0` in `__init__.py`.
- Adjust dependency versions in CLI templates for crew, flow, and tool to reflect the new CrewAI version.
This update ensures compatibility with the latest features and improvements in CrewAI.
* remove embedchain mock
* fix: remove last embedchain mocks
* fix: remove langchain_openai from tests
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* feat(tracing): enhance first-time trace display and auto-open browser
* avoiding line breaks
* set tracing if user enables it
* linted
---------
Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
* docs: update RagTool references from EmbedChain to CrewAI native RAG
* change ref to qdrant
* docs: update RAGTool to use Qdrant and add embedding_model example
- Refactor the default embedding function to utilize OpenAI's embedding function with API key support.
- Import necessary OpenAI embedding function and configure it with the environment variable for the API key.
- Ensure compatibility with existing ChromaDB configuration model.
- Add limit and score_threshold to BaseRagConfig, propagate to clients
- Update default search params in RAG storage, knowledge, and memory (limit=5, threshold=0.6)
- Fix linting (ruff, mypy, PERF203) and refactor save logic
- Update tests for new defaults and ChromaDB behavior
* feat(tracing): implement first-time trace handling and improve event management
- Added FirstTimeTraceHandler for managing first-time user trace collection and display.
- Enhanced TraceBatchManager to support ephemeral trace URLs and improved event buffering.
- Updated TraceCollectionListener to utilize the new FirstTimeTraceHandler.
- Refactored type annotations across multiple files for consistency and clarity.
- Improved error handling and logging for trace-related operations.
- Introduced utility functions for trace viewing prompts and first execution checks.
* brought back crew finalize batch events
* refactor(trace): move instance variables to __init__ in TraceBatchManager
- Refactored TraceBatchManager to initialize instance variables in the constructor instead of as class variables.
- Improved clarity and encapsulation of the class state.
* fix(tracing): improve error handling in user data loading and saving
- Enhanced error handling in _load_user_data and _save_user_data functions to log warnings for JSON decoding and file access issues.
- Updated documentation for trace usage to clarify the addition of tracing parameters in Crew and Flow initialization.
- Refined state management in Flow class to ensure proper handling of state IDs when persistence is enabled.
* add some tests
* fix test
* fix tests
* refactor(tracing): enhance user input handling for trace viewing
- Replaced signal-based timeout handling with threading for user input in prompt_user_for_trace_viewing function.
- Improved user experience by allowing a configurable timeout for viewing execution traces.
- Updated tests to mock threading behavior and verify timeout handling correctly.
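A sketch of the threading-based prompt described above (the real prompt_user_for_trace_viewing may differ): stdin is read on a daemon thread so the main thread can give up after the timeout instead of relying on signals:

```python
import threading

def prompt_with_timeout(prompt: str, timeout: float = 30.0) -> str | None:
    answer: list[str] = []

    def read_input() -> None:
        answer.append(input(prompt))

    worker = threading.Thread(target=read_input, daemon=True)
    worker.start()
    worker.join(timeout)
    # None means the user did not respond within `timeout` seconds.
    return answer[0] if answer else None
```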
* fix(tracing): improve machine ID retrieval with error handling
- Added error handling to the _get_machine_id function to log warnings when retrieving the machine ID fails.
- Ensured that the function continues to provide a stable, privacy-preserving machine fingerprint even in case of errors.
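An illustrative sketch of a stable, privacy-preserving fingerprint with the error handling described above; the shipped _get_machine_id may differ:

```python
import hashlib
import logging
import uuid

logger = logging.getLogger(__name__)

def get_machine_id() -> str:
    try:
        raw = str(uuid.getnode())  # MAC-derived hardware identifier
    except Exception as exc:
        logger.warning("Failed to retrieve machine ID: %s", exc)
        raw = "unknown"
    # Hash so the raw identifier never leaves the machine.
    return hashlib.sha256(raw.encode()).hexdigest()
```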
* refactor(flow): streamline state ID assignment in Flow class
- Replaced direct attribute assignment with setattr for improved flexibility in handling state IDs.
- Enhanced code readability by simplifying the logic for setting the state ID when persistence is enabled.
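An illustrative fragment (hypothetical helper, not the Flow internals) showing why setattr is the more flexible choice here:

```python
from typing import Any

def restore_state_id(state: Any, state_id: str) -> None:
    """Assign the persisted ID whether the state is a dict or a pydantic model."""
    if isinstance(state, dict):
        state["id"] = state_id
    else:
        setattr(state, "id", state_id)
```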
CodeQL, when configured through the GUI, does not allow for advanced configuration. This PR upgrades to an advanced file-based config, which allows us to exclude certain paths.
- Updated CrewAI version from 0.186.0 to 0.186.1 in `__init__.py`.
- Updated `crewai[tools]` dependency version in `pyproject.toml` for crew, flow, and tool templates to reflect the new CrewAI version.
- Updated `crewai-tools` dependency from version 0.69.0 to 0.71.0 in `pyproject.toml`.
- Bumped CrewAI version from 0.177.0 to 0.186.0 in `__init__.py`.
- Updated dependency versions in CLI templates for crew, flow, and tool to reflect the new CrewAI version.
* test: add test to ensure tool is called only once during crew execution
- Introduced a new test case to validate that the counting_tool is executed exactly once during crew execution.
- Created a CountingTool class to track execution counts and log call history.
- Enhanced the test suite with a YAML cassette for consistent tool behavior verification.
* ensure tool function called only once
* refactor: simplify error handling in CrewStructuredTool
- Removed unnecessary try-except block around the tool function call to streamline execution flow.
- Ensured that the tool function is called directly, improving readability and maintainability.
* linted
* need to ignore for now as we can't infer the complex generic type within pydantic's create_model_func
* fix tests
- Build and cache uv dependencies; update type-checker, tests, and linter to use cache
- Remove separate security-checker
- Add explicit workflow permissions for compliance
- Remove pull_request trigger from build-cache workflow
- Update to Python 3.10+ typing across LLM, callbacks, storage, and errors
- Complete typing updates for crew_chat and hitl
- Add stop attr to mock LLM, suppress test warnings
- Add type-ignore for aisuite import
* fix: support defining MCP connection timeout on CrewBase instance
* fix: resolve linter issues
* chore: ignore specific rule N802 on CrewBase class
* fix: ignore untyped import
- Disable E501 line length linting rule
- Add Google-style docstrings to tasks leaf file
- Modernize typing and docs in task_output.py
- Improve typing and documentation in conditional_task.py
* docs(cli): document device-code login and config reset guidance; renumber sections
* docs(cli): fix duplicate numbering (renumber Login/API Keys/Configuration sections)
* docs: Fix webhook documentation to include meta dict in all webhook payloads
- Add note explaining that meta objects from kickoff requests are included in all webhook payloads
- Update webhook examples to show proper payload structure including meta field
- Fix webhook examples to match actual API implementation
- Apply changes to English, Korean, and Portuguese documentation
Resolves the documentation gap where meta dict passing to webhooks was not documented despite being implemented in the API.
* WIP: CrewAI docs theme, changelog, GEO, localization
* docs(cli): fix merge markers; ensure mode: "wide"; convert ASCII tables to Markdown (en/pt-BR/ko)
* docs: add group icons across locales; split Automation/Integrations; update tools overviews and links
- Updated `crewai-tools` dependency from version 0.65.0 to 0.69.0 in `pyproject.toml` and `uv.lock`.
- Bumped crewAI version from 0.175.0 to 0.177.0 in `__init__.py`.
- Updated dependency versions in CLI templates for crew, flow, and tool projects to reflect the new crewAI version.
* fix: suppress Pydantic deprecation warnings in initialization
- Implemented a function to filter out Pydantic deprecation warnings, enhancing the user experience by preventing unnecessary warning messages during execution.
- Removed the previous warning filter setup to streamline the warning suppression process.
- Updated the User-Agent header formatting for consistency.
* fix type check
* dropped
* fix: update type-checker workflow and suppress warnings
- Updated the Python version matrix in the type-checker workflow to use double quotes for consistency.
- Added the `# type: ignore[assignment]` comment to the warning suppression assignment in `__init__.py` to address type checking issues.
- Ensured that the mypy command in the workflow allows for untyped calls and generics, enhancing type checking flexibility.
* better
chore(dev): update tooling & CI workflows
- Upgrade ruff, mypy (strict), pre-commit; add hooks, stubs, config consolidation
- Add bandit to dev deps and update uv.lock
- Enhance ruff rules (modern Python style, B006 for mutable defaults)
- Update workflows to use uv, matrix strategy, and changed-file type checking
- Include tests in type checking; fix job names and add summary job for branch protection
refactor(events): relocate events module & update imports
- Move events from utilities/ to top-level events/ with types/, listeners/, utils/ structure
- Update all source/tests/docs to new import paths
- Add backwards compatibility stubs in crewai.utilities.events with deprecation warnings
- Restore test mocks and fix related test imports
* fix: enhance LLM event handling with task and agent metadata
- Added `from_task` and `from_agent` parameters to LLM event emissions for improved traceability.
- Updated `_send_events_to_backend` method in TraceBatchManager to return status codes for better error handling.
- Modified `CREWAI_BASE_URL` to remove trailing slash for consistency.
- Improved logging and graceful failure handling in event sending process.
* drop print
- Sanitize ChromaDB collection names and use original dir naming
- Add persistent client with file locking to the ChromaDB factory
- Add upsert support to the ChromaDB client
- Suppress ChromaDB deprecation warnings for `model_fields`
- Extract `suppress_logging` into shared `logger_utils`
- Update tests to reflect upsert behavior
- Docs: add additional note
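A hedged sketch of collection-name sanitization consistent with ChromaDB's documented constraints (3-63 characters, alphanumeric start and end, among other rules); the project's actual sanitizer may differ:

```python
import re

def sanitize_collection_name(name: str) -> str:
    # Replace disallowed characters, then enforce length and edge rules.
    cleaned = re.sub(r"[^a-zA-Z0-9._-]", "_", name)[:63]
    if len(cleaned) < 3:
        cleaned = cleaned.ljust(3, "x")
    if not cleaned[0].isalnum():
        cleaned = "x" + cleaned[1:]
    if not cleaned[-1].isalnum():
        cleaned = cleaned[:-1] + "x"
    return cleaned
```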
* Bump crewAI version from 0.165.1 to 0.175.0 in __init__.py.
* Update tools dependency from 0.62.1 to 0.65.0 in pyproject.toml and uv.lock files.
* Reflect changes in CLI templates for crew, flow, and tool configurations.
* feat: implement tool usage limit exception handling
- Introduced `ToolUsageLimitExceeded` exception to manage maximum usage limits for tools.
- Enhanced `CrewStructuredTool` to check and raise this exception when the usage limit is reached.
- Updated `_run` and `_execute` methods to include usage limit checks and handle exceptions appropriately, improving reliability and user feedback.
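A hedged sketch of the usage-limit check; CrewStructuredTool's actual fields and call path may differ:

```python
class ToolUsageLimitExceeded(Exception):
    """Raised when a tool is invoked more times than its configured limit."""

class LimitedTool:
    def __init__(self, max_usage_count: int | None = None) -> None:
        self.max_usage_count = max_usage_count
        self.usage_count = 0

    def run(self, *args, **kwargs):
        # Check before executing so the limit can never be overshot.
        if self.max_usage_count is not None and self.usage_count >= self.max_usage_count:
            raise ToolUsageLimitExceeded(
                f"Tool exceeded its usage limit of {self.max_usage_count} calls."
            )
        self.usage_count += 1
        return self._execute(*args, **kwargs)

    def _execute(self, *args, **kwargs):
        raise NotImplementedError  # concrete tools implement the actual work
```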
* feat: enhance PlusAPI and ToolUsage with task metadata
- Removed the `send_trace_batch` method from PlusAPI to streamline the API.
- Added timeout parameters to trace event methods in PlusAPI for improved reliability.
- Updated ToolUsage to include task metadata (task name and ID) in event emissions, enhancing traceability and context during tool usage.
- Refactored event handling in LLM and ToolUsage events to ensure task information is consistently captured.
* feat: enhance memory and event handling with task and agent metadata
- Added task and agent metadata to various memory and event classes, improving traceability and context during memory operations.
- Updated the `ContextualMemory` and `Memory` classes to associate tasks and agents, allowing for better context management.
- Enhanced event emissions in `LLM`, `ToolUsage`, and memory events to include task and agent information, facilitating improved debugging and monitoring.
- Refactored event handling to ensure consistent capture of task and agent details across the system.
* drop
* refactor: clean up unused imports in memory and event modules
- Removed unused TYPE_CHECKING imports from long_term_memory.py to streamline the code.
- Eliminated unnecessary import from memory_events.py, enhancing clarity and maintainability.
* fix memory tests
* fix task_completed payload
* fix: remove unused test agent variable in external memory tests
* refactor: remove unused agent parameter from Memory class save method
- Eliminated the agent parameter from the save method in the Memory class to streamline the code and improve clarity.
- Updated the TraceBatchManager class by moving initialization of attributes into the constructor for better organization and readability.
* refactor: enhance ExecutionState and ReasoningEvent classes with optional task and agent identifiers
- Added optional `current_agent_id` and `current_task_id` attributes to the `ExecutionState` class for better tracking of agent and task states.
- Updated the `from_task` attribute in the `ReasoningEvent` class to use `Optional[Any]` instead of a specific type, improving flexibility in event handling.
* refactor: update ExecutionState class by removing unused agent and task identifiers
- Removed the `current_agent_id` and `current_task_id` attributes from the `ExecutionState` class to simplify the code and enhance clarity.
- Adjusted the import statements to include `Optional` for better type handling.
* refactor: streamline LLM event handling in LiteAgent
- Removed unused LLM event emissions (LLMCallStartedEvent, LLMCallCompletedEvent, LLMCallFailedEvent) from the LiteAgent class to simplify the code and improve performance.
- Adjusted the flow of LLM response handling by eliminating unnecessary event bus interactions, enhancing clarity and maintainability.
* flow ownership and not emitting events when a crew is done
* refactor: remove unused agent parameter from ShortTermMemory save method
- Eliminated the agent parameter from the save method in the ShortTermMemory class to streamline the code and improve clarity.
- This change enhances the maintainability of the memory management system by reducing unnecessary complexity.
* runtype check fix
* fixing tests
* fix lints
* fix: update event assertions in test_llm_emits_event_with_lite_agent
- Adjusted the expected counts for completed and started events in the test to reflect the correct behavior of the LiteAgent.
- Updated assertions for agent roles and IDs to match the expected values after recent changes in event handling.
* fix: update task name assertions in event tests
- Modified assertions in `test_stream_llm_emits_event_with_task_and_agent_info` and `test_llm_emits_event_with_task_and_agent_info` to use `task.description` as a fallback for `task.name`. This ensures that the tests correctly validate the task name even when it is not explicitly set.
* fix: update test assertions for output values and improve readability
- Updated assertions in `test_output_json_dict_hierarchical` to reflect the correct expected score value.
- Enhanced readability of assertions in `test_output_pydantic_to_another_task` and `test_key` by formatting the error messages for clarity.
- These changes ensure that the tests accurately validate the expected outputs and improve overall code quality.
* test fixes
* fix crew_test
* added another fixture
* fix: ensure agent and task assignments in contextual memory are conditional
- Updated the ContextualMemory class to check for the existence of short-term, long-term, external, and extended memory before assigning agent and task attributes. This prevents potential attribute errors when memory types are not initialized.
* Added Qdrant provider support with factory, config, and protocols
* Improved default embeddings and type definitions
* Fixed ChromaDB factory embedding assignment
### RAG Config System
* Added ChromaDB client creation via config with sensible defaults
* Introduced optional imports and shared RAG config utilities/schema
* Enabled embedding function support with ChromaDB provider integration
* Refactored configs for immutability and stronger type safety
* Removed unused code and expanded test coverage
Add ChromaDB client implementation with async support
- Implement core collection operations (create, get_or_create, delete)
- Add search functionality with cosine similarity scoring
- Include both sync and async method variants
- Add type safety with NamedTuples and TypeGuards
- Extract utility functions to separate modules
- Default to cosine distance metric for text similarity
- Add comprehensive test coverage
TODO:
- L2 and inner-product (IP) score calculations are not settled yet
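For the cosine case that is settled, the conversion is simple: ChromaDB reports cosine distance in [0, 2], so similarity can be recovered as 1 - distance (a sketch; the client's exact score handling may differ):

```python
def cosine_similarity_score(distance: float) -> float:
    """Convert a ChromaDB cosine distance into a similarity score."""
    return 1.0 - distance
```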
fix: resolve flaky tests and race conditions in test suite
- Fix telemetry/event tests by patching class methods instead of instances
- Use unique temp files/directories to prevent CI race conditions
- Reset singleton state between tests
- Mock embedchain.Client.setup() to prevent JSON corruption
- Rename test files to test_*.py convention
- Move agent tests to tests/agents directory
- Fix repeated tool usage detection
- Remove database-dependent tools causing initialization errors
* docs: fix API Reference OpenAPI sources and redirects; clarify training data usage; add Mermaid diagram; correct CLI usage and notes
* docs(mintlify): use explicit openapi {source, directory} with absolute paths to fix branch deployment routing
* docs(mintlify): add explicit endpoint MDX pages and include in nav; keep OpenAPI auto-gen as fallback
* docs(mintlify): remove OpenAPI Endpoints groups; add localized MDX endpoint pages for pt-BR and ko
* fix: flow listener resumability for HITL and cyclic flows
- Add resumption context flag to distinguish HITL resumption from cyclic execution
- Skip method re-execution only during HITL resumption, not for cyclic flows
- Ensure cyclic flows like test_cyclic_flow continue to work correctly
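A hedged sketch of the gate this implies (names illustrative): skipping applies only while replaying completed methods after a HITL resume, never to ordinary cyclic re-entry:

```python
def should_skip_method(
    method_name: str,
    is_hitl_resuming: bool,
    completed_methods: set[str],
) -> bool:
    return is_hitl_resuming and method_name in completed_methods
```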
* fix: prevent duplicate execution of conditional start methods in flows
* fix: resolve type error in flow.py line 1040 assignment
We are committed to protecting the confidentiality, integrity, and availability of the CrewAI ecosystem. This policy explains how to report potential vulnerabilities and what you can expect from us when you do.
### Scope
We welcome reports for vulnerabilities that could impact:
- CrewAI-maintained source code and repositories
- CrewAI-operated infrastructure and services
- Official CrewAI releases, packages, and distributions
Issues affecting clearly unaffiliated third-party services or user-generated content are out of scope, unless you can demonstrate a direct impact on CrewAI systems or customers.
### How to Report
- **Please do not** disclose vulnerabilities via public GitHub issues, pull requests, or social media.
- Email detailed reports to **security@crewai.com** with the subject line `Security Report`.
- If you need to share large files or sensitive artifacts, mention it in your email and we will coordinate a secure transfer method.
### What to Include
Providing comprehensive information enables us to validate the issue quickly:
- **Vulnerability overview** — a concise description and classification (e.g., RCE, privilege escalation)
- **Affected components** — repository, branch, tag, or deployed service along with relevant file paths or endpoints
- **Reproduction steps** — detailed, step-by-step instructions; include logs, screenshots, or screen recordings when helpful
- **Proof-of-concept** — exploit details or code that demonstrates the impact (if available)
- **Impact analysis** — severity assessment, potential exploitation scenarios, and any prerequisites or special configurations
### Our Commitment
- **Acknowledgement:** We aim to acknowledge your report within two business days.
- **Communication:** We will keep you informed about triage results, remediation progress, and planned release timelines.
- **Resolution:** Confirmed vulnerabilities will be prioritized based on severity and fixed as quickly as possible.
- **Recognition:** We currently do not run a bug bounty program; any rewards or recognition are issued at CrewAI's discretion.
### Coordinated Disclosure
We ask that you allow us a reasonable window to investigate and remediate confirmed issues before any public disclosure. We will coordinate publication timelines with you whenever possible.
### Safe Harbor
We will not pursue or support legal action against individuals who, in good faith:
- Follow this policy and refrain from violating any applicable laws
- Avoid privacy violations, data destruction, or service disruption
- Limit testing to systems in scope and respect rate limits and terms of service
If you are unsure whether your testing is covered, please contact us at **security@crewai.com** before proceeding.
# required to fetch internal or private CodeQL packs
packages: read
# only required for workflows in private repositories
actions: read
contents: read
strategy:
  fail-fast: false
  matrix:
    include:
      - language: actions
        build-mode: none
      - language: python
        build-mode: none
# CodeQL supports the following values for 'language': 'actions', 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'rust', 'swift'
# Use 'c-cpp' to analyze code written in C, C++ or both
# Use 'java-kotlin' to analyze code written in Java, Kotlin or both
# Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
# To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
# see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
# If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
# your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
  - name: Checkout repository
    uses: actions/checkout@v4
  # Add any setup steps before running the `github/codeql-action/init` action.
  # This includes steps like installing compilers or runtimes (`actions/setup-node`
  # or others). This is typically only required for manual builds.
  # - name: Setup runtime (example)
  #   uses: actions/setup-example@v1
  # Initializes the CodeQL tools for scanning.
  - name: Initialize CodeQL
    uses: github/codeql-action/init@v4
    with:
      languages: ${{ matrix.language }}
      build-mode: ${{ matrix.build-mode }}
      config-file: ./.github/codeql/codeql-config.yml
      # If you wish to specify custom queries, you can do so here or in a config file.
      # By default, queries listed here will override any specified in a config file.
      # Prefix the list here with "+" to use these queries and those in the config file.
      # For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
      # queries: security-extended,security-and-quality
  # If the analyze step fails for one of the languages you are analyzing with
  # "We were unable to automatically build your code", modify the matrix above
  # to set the build mode to "manual" for that language. Then modify this step
  # to build your code.
  # ℹ️ Command-line programs to run using the OS shell.
  # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
  - if: matrix.build-mode == 'manual'
    shell: bash
    run: |
      echo 'If you are using a "manual" build mode for one or more of the' \
        'languages you are analyzing, replace this with the commands to build' \
> It empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario.
- **CrewAI Crews**: Optimize for autonomy and collaborative intelligence.
- **CrewAI Flows**: The **enterprise and production architecture** for building and deploying multi-agent systems. Enables granular, event-driven control and single LLM calls for precise task orchestration, and supports Crews natively.
With over 100,000 developers certified through our community courses at [learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
standard for enterprise-ready AI automation.
# CrewAI AMP Suite
CrewAI AMP Suite is a comprehensive bundle tailored for organizations that require secure, scalable, and easy-to-manage agent-driven automation.
You can try one part of the suite, the [Crew Control Plane](https://app.crewai.com), for free.
- **Advanced Security**: Built-in robust security and compliance measures ensuring safe deployment and management.
- **Actionable Insights**: Real-time analytics and reporting to optimize performance and decision-making.
- **24/7 Support**: Dedicated enterprise support to ensure uninterrupted operation and quick resolution of issues.
- **On-premise and Cloud Deployment Options**: Deploy CrewAI AMP on-premise or in the cloud, depending on your security and compliance requirements.
CrewAI AMP is designed for enterprises seeking a powerful, reliable solution to transform complex business processes into efficient,
intelligent automations.
## Table of contents
Set up and run your first CrewAI agents by following this tutorial.
[CrewAI Getting Started Tutorial](https://www.youtube.com/watch?v=-kSOTtYzgEw)
### Learning Resources
Learn CrewAI through our comprehensive courses:
CrewAI offers two powerful, complementary approaches that work seamlessly together:
- Dynamic task delegation and collaboration
- Specialized roles with defined goals and expertise
- Flexible problem-solving approaches
2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:
- Fine-grained control over execution paths for real-world scenarios
Ensure you have Python >=3.10 <3.14 installed on your system.
First, install CrewAI:
```shell
uv pip install crewai
```
If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command:
```shell
uv pip install 'crewai[tools]'
```
The command above installs the basic package and also adds extra components which require more dependencies to function.
If you encounter issues during installation or usage, here are some common solutions:
1. **ModuleNotFoundError: No module named 'tiktoken'**
- **LangGraph**: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.
_P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb))._
- **Autogen**: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
### Installing Locally
```bash
uv pip install dist/*.tar.gz
```
## Telemetry
### Enterprise Features
- [What additional features does CrewAI AMP offer?](#q-what-additional-features-does-crewai-amp-offer)
- [Is CrewAI AMP available for cloud and on-premise deployments?](#q-is-crewai-amp-available-for-cloud-and-on-premise-deployments)
- [Can I try CrewAI AMP for free?](#q-can-i-try-crewai-amp-for-free)
### Q: What exactly is CrewAI?
A: Install CrewAI using pip:
```shell
uv pip install crewai
```
For additional tools, use:
```shell
uv pip install 'crewai[tools]'
```
### Q: Does CrewAI depend on LangChain?
A: Contributions are warmly welcomed! Fork the repository, create your branch, implement your changes, and submit a pull request. See the Contribution section of the README for detailed guidelines.
### Q: What additional features does CrewAI AMP offer?
A: CrewAI AMP provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.
### Q: Is CrewAI AMP available for cloud and on-premise deployments?
A: Yes, CrewAI AMP supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.
### Q: Can I try CrewAI AMP for free?
A: Yes, you can explore part of the CrewAI AMP Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.
### Q: Does CrewAI support fine-tuning or training custom models?
### Q: Does CrewAI offer debugging and monitoring tools?
A: Yes, CrewAI AMP includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.
### Q: What programming languages does CrewAI support?