Address Bugbot feedback: remove format reminder logic for add_image tool
to avoid in-place mutation of cached results. The cached dict was being
modified when format reminders were appended, causing reminders to
accumulate across cache hits.
Co-Authored-By: João <joao@crewai.com>
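A minimal sketch of the mutation pattern this fixes, with hypothetical names (`cache`, `cached_result_*`); the commit's actual remedy was to stop appending reminders for add_image, and the sketch only shows why appending to the cached dict itself makes reminders pile up across hits:

```python
# Hypothetical illustration only; names do not match the real ToolUsage code.
from copy import deepcopy

cache: dict[str, dict] = {}

def cached_result_buggy(key: str, reminder: str) -> dict:
    result = cache[key]  # the very dict stored in the cache
    result.setdefault("reminders", []).append(reminder)  # mutates the cache in place
    return result

def cached_result_safe(key: str, reminder: str) -> dict:
    result = deepcopy(cache[key])  # work on a copy; the cached entry stays clean
    result.setdefault("reminders", []).append(reminder)
    return result

cache["add_image"] = {"output": "ok"}
cached_result_buggy("add_image", "use the expected format")
cached_result_buggy("add_image", "use the expected format")
print(cache["add_image"]["reminders"])  # the reminder has accumulated twice
```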
Address Bugbot feedback: when skipping _format_result() for add_image tool,
the task.used_tools counter was not being incremented. Now we increment
the counter before the conditional check and pass skip_counter=True to
_format_result() to avoid double-counting.
Co-Authored-By: João <joao@crewai.com>
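A toy sketch of the counting order described above; `used_tools`, `_format_result`, and `skip_counter` follow the commit text, while the class structure is invented for illustration:

```python
class ToolUsageSketch:
    """Toy stand-in for ToolUsage; only the counter logic is of interest."""

    ADD_IMAGE = "add image tool"  # placeholder name, not the real tool name

    def __init__(self) -> None:
        self.used_tools = 0  # stands in for task.used_tools

    def _format_result(self, result, skip_counter: bool = False) -> str:
        if not skip_counter:
            self.used_tools += 1  # the double count the flag now prevents
        return str(result)

    def use(self, tool_name: str, raw_result):
        self.used_tools += 1  # incremented once, before the conditional check
        if tool_name.casefold().strip() == self.ADD_IMAGE:
            return raw_result  # raw multimodal dict preserved, no formatting
        return self._format_result(raw_result, skip_counter=True)
```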
Address Bugbot feedback: ensure both tool_usage.py and crew_agent_executor.py
use the same comparison approach (casefold().strip()) for detecting the
add_image tool, preventing edge cases where fuzzy matching could cause
inconsistent behavior.
Co-Authored-By: João <joao@crewai.com>
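For reference, the shared comparison amounts to an exact match after normalization (the tool-name literal below is illustrative):

```python
def is_add_image_tool(tool_name: str, expected: str = "add image to content") -> bool:
    """Exact match after casefolding and trimming; no fuzzy matching."""
    return tool_name.casefold().strip() == expected.casefold().strip()

assert is_add_image_tool("  Add Image To Content ")
assert not is_add_image_tool("add image to content v2")
```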
This commit fixes the multimodal image handling in CrewAI agents:
1. CrewAgentExecutor._handle_agent_action: Fixed to properly append
multimodal messages without double-wrapping. The add_image tool
result is now appended directly if it's already a properly formatted
message dict with role and content.
2. ToolUsage._use: Fixed to preserve the raw dict result for add_image
tool instead of stringifying it via _format_result(). This ensures
the multimodal content structure is maintained.
3. Gemini provider _format_messages_for_gemini: Added proper handling
for multimodal content with image_url parts. The provider now:
- Converts image_url parts to Gemini's inline_data format
- Supports HTTP(S) URLs by fetching and converting to base64
- Supports data URLs by parsing and extracting base64 data
- Supports local file paths by reading and converting to base64
4. Added comprehensive tests for all multimodal handling fixes.
Fixes #4016
Co-Authored-By: João <joao@crewai.com>
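A condensed sketch of the conversion in point 3; the function name is hypothetical, `requests` is assumed to be available, and only the three source types listed above are handled:

```python
import base64
import mimetypes
from pathlib import Path

import requests  # assumed available for fetching HTTP(S) URLs


def image_url_part_to_inline_data(part: dict) -> dict:
    """Turn an OpenAI-style image_url content part into Gemini inline_data."""
    image_url = part["image_url"]
    url = image_url["url"] if isinstance(image_url, dict) else image_url

    if url.startswith("data:"):
        # data URL: data:<mime>;base64,<payload>
        header, payload = url.split(",", 1)
        mime = header.split(":", 1)[1].split(";", 1)[0]
        data = payload
    elif url.startswith(("http://", "https://")):
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        mime = response.headers.get("content-type", "image/jpeg")
        data = base64.b64encode(response.content).decode("utf-8")
    else:
        # treat anything else as a local file path
        path = Path(url)
        mime = mimetypes.guess_type(path.name)[0] or "image/jpeg"
        data = base64.b64encode(path.read_bytes()).decode("utf-8")

    return {"inline_data": {"mime_type": mime, "data": data}}
```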
* refactor: enhance model validation and provider inference in LLM class
- Updated the model validation logic to support pattern matching for new models and "latest" versions, improving flexibility for various providers.
- Refactored the `_validate_model_in_constants` method to first check hardcoded constants and then fall back to pattern matching.
- Introduced `_matches_provider_pattern` to streamline provider-specific model checks.
- Enhanced the `_infer_provider_from_model` method to utilize pattern matching for better provider inference.
This refactor aims to improve the extensibility of the LLM class, allowing it to accommodate new models without requiring constant updates to the hardcoded lists.
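A rough sketch of that fallback order; the patterns and hardcoded entries below are illustrative stand-ins for the real constants:

```python
import re

# Illustrative provider patterns; the real constants live in the LLM class.
PROVIDER_PATTERNS: dict[str, list[str]] = {
    "anthropic": [r"^claude-[\w.-]+$"],
    "openai": [r"^gpt-[\w.-]+$", r"^o\d[\w.-]*$"],
    "gemini": [r"^gemini-[\w.-]+$"],
}
HARDCODED_MODELS = {"gpt-4o": "openai", "claude-3-5-sonnet-latest": "anthropic"}


def infer_provider(model: str) -> str | None:
    # 1) check the hardcoded constants first
    if model in HARDCODED_MODELS:
        return HARDCODED_MODELS[model]
    # 2) fall back to pattern matching so new or "latest" models still resolve
    for provider, patterns in PROVIDER_PATTERNS.items():
        if any(re.match(pattern, model) for pattern in patterns):
            return provider
    return None


print(infer_provider("claude-opus-4-5"))  # "anthropic", via the pattern fallback
```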
* feat: add new Anthropic model versions to constants
- Introduced "claude-opus-4-5-20251101" and "claude-opus-4-5" to the AnthropicModels and ANTHROPIC_MODELS lists for enhanced model support.
- Added "anthropic.claude-opus-4-5-20251101-v1:0" to BedrockModels and BEDROCK_MODELS to ensure compatibility with the latest model offerings.
- Updated test cases to ensure proper environment variable handling for model validation, improving robustness in testing scenarios.
* don't infer this way - dropped
- Changed "AMP" to "AOP" in multiple locations across JSON and MDX files to reflect the correct terminology for the Agent Operations Platform.
- Updated the introduction sections in English, Korean, and Portuguese to ensure consistency in the platform's naming.
* Adding drop parameters
* Adding test case
* Just some spacing addition
* Adding drop params to maintain consistency
* Changing variable name
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* feat: enhance flow event state management
- Added `state` attribute to `FlowFinishedEvent` to capture the flow's state as a JSON-serialized dictionary.
- Updated flow event emissions to include the serialized state, improving traceability and debugging capabilities during flow execution.
* fix: improve state serialization in Flow class
- Enhanced the `_copy_and_serialize_state` method to handle exceptions during JSON serialization of Pydantic models, ensuring robustness in state management.
- Updated test assertions to access the state as a dictionary, aligning with the new state structure.
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
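A hedged sketch of the serialization guard; the method name comes from the commits, while the body here is assumed (Pydantic v2 API):

```python
import json

from pydantic import BaseModel


def copy_and_serialize_state(state) -> dict:
    """Return a plain-dict snapshot of the flow state without ever raising."""
    if isinstance(state, BaseModel):
        try:
            # JSON round-trip coerces non-JSON-native fields where possible
            return json.loads(state.model_dump_json())
        except Exception:
            # fall back to a plain dump if JSON serialization fails
            return state.model_dump(mode="python")
    return dict(state)
```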
* Add gemini-3-pro-preview
Also refactors the tool support check for better forward compatibility.
* Add cassette for Gemini 3 Pro
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
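One plausible shape of that refactor, checking model-family prefixes instead of an exact allow-list; the prefixes are illustrative only:

```python
# Illustrative only: prefix-based capability check, so a new release such as
# "gemini-3-pro-preview" passes without editing a hardcoded list again.
TOOL_CAPABLE_PREFIXES = ("gemini-1.5", "gemini-2", "gemini-3")


def supports_tools(model: str) -> bool:
    return model.lower().startswith(TOOL_CAPABLE_PREFIXES)


assert supports_tools("gemini-3-pro-preview")
```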
- Deleted the `__init__.py` file from the tests/hooks directory as it contained no tests or functionality. This cleanup helps maintain a tidy test structure.
- add trust_remote_completion_status flag to A2AConfig to control whether to trust the A2A agent's reported completion status. Resolves #3899
- update docs
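A minimal sketch of how the new flag could sit on the config model; only the field name comes from the commit, the default and description are assumptions:

```python
from pydantic import BaseModel, Field


class A2AConfigSketch(BaseModel):
    """Toy stand-in for A2AConfig showing only the new flag."""

    trust_remote_completion_status: bool = Field(
        default=False,  # assumed default
        description=(
            "If True, accept the remote A2A agent's reported completion "
            "status instead of re-validating the result locally."
        ),
    )
```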
* feat: implement LLM call hooks and enhance agent execution context
- Introduced LLM call hooks to allow modification of messages and responses during LLM interactions.
- Added support for before and after hooks in the CrewAgentExecutor, enabling dynamic adjustments to the execution flow.
- Created LLMCallHookContext for comprehensive access to the executor state, facilitating in-place modifications.
- Added validation for hook callables to ensure proper functionality.
- Enhanced tests for LLM hooks and tool hooks to verify their behavior and error handling capabilities.
- Updated LiteAgent and CrewAgentExecutor to accommodate the new crew context in their execution processes.
* fix verbose
* feat: introduce crew-scoped hook decorators and refactor hook registration
- Added decorators for before and after LLM and tool calls to enhance flexibility in modifying execution behavior.
- Implemented a centralized hook registration mechanism within CrewBase to automatically register crew-scoped hooks.
- Removed the obsolete base.py file as its functionality has been integrated into the new decorators and registration system.
- Enhanced tests for the new hook decorators to ensure proper registration and execution flow.
- Updated existing hook handling to accommodate the new decorator-based approach, improving code organization and maintainability.
* feat: enhance hook management with clear and unregister functions
- Introduced functions to unregister specific before and after hooks for both LLM and tool calls, improving flexibility in hook management.
- Added clear functions to remove all registered hooks of each type, facilitating easier state management and cleanup.
- Implemented a convenience function to clear all global hooks in one call, streamlining the process for testing and execution context resets.
- Enhanced tests to verify the functionality of unregistering and clearing hooks, ensuring robust behavior in various scenarios.
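A toy registry sketching the decorator registration and the clear/unregister helpers described in the two commits above; none of these names are the shipped CrewAI API:

```python
from collections import defaultdict
from typing import Any, Callable

# Toy registry, not the real hook module: one bucket per hook type.
_HOOKS: dict[str, list[Callable[..., Any]]] = defaultdict(list)


def before_llm_call(func: Callable[..., Any]) -> Callable[..., Any]:
    """Decorator that registers func to run before every LLM call."""
    _HOOKS["before_llm_call"].append(func)
    return func


def unregister_before_llm_call(func: Callable[..., Any]) -> None:
    """Remove a single previously registered hook."""
    _HOOKS["before_llm_call"].remove(func)


def clear_all_hooks() -> None:
    """Convenience reset for tests and fresh execution contexts."""
    _HOOKS.clear()


@before_llm_call
def trim_history(context) -> None:
    # keep only the most recent turns before each LLM call
    context.messages[:] = context.messages[-20:]
```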
* refactor: enhance hook type management for LLM and tool hooks
- Updated hook type definitions to use generic protocols for better type safety and flexibility.
- Replaced Callable type annotations with specific BeforeLLMCallHookType and AfterLLMCallHookType for clarity.
- Improved the registration and retrieval functions for before and after hooks to align with the new type definitions.
- Enhanced the setup functions to handle hook execution results, allowing for blocking of LLM calls based on hook logic.
- Updated related tests to ensure proper functionality and type adherence across the hook management system.
* feat: add execution and tool hooks documentation
- Introduced new documentation for execution hooks, LLM call hooks, and tool call hooks to provide comprehensive guidance on their usage and implementation in CrewAI.
- Updated existing documentation to include references to the new hooks, enhancing the learning resources available for users.
- Ensured consistency across multiple languages (English, Portuguese, Korean) for the new documentation, improving accessibility for a wider audience.
- Added examples and troubleshooting sections to assist users in effectively utilizing hooks for agent operations.
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
- Added support for before and after LLM call hooks to allow modification of messages and responses during LLM interactions.
- Introduced LLMCallHookContext to provide hooks with access to the executor state, enabling in-place modifications of messages.
- Updated get_llm_response function to utilize the new hooks, ensuring that modifications persist across iterations.
- Enhanced tests to verify the functionality of the hooks and their error handling capabilities, ensuring robust execution flow.
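A self-contained sketch of the hook flow described above; the context class and hook signatures are stand-ins, not the shipped API:

```python
from dataclasses import dataclass, field


@dataclass
class LLMCallHookContextSketch:
    """Toy stand-in exposing the mutable state the hooks operate on."""

    messages: list[dict] = field(default_factory=list)
    response: str | None = None


def redact_secrets(context: LLMCallHookContextSketch) -> None:
    """Before-hook: edit messages in place so the change persists."""
    for message in context.messages:
        message["content"] = message["content"].replace("SECRET", "[redacted]")


def log_response(context: LLMCallHookContextSketch) -> None:
    """After-hook: inspect (or rewrite) the model response."""
    print(f"LLM returned {len(context.response or '')} characters")


ctx = LLMCallHookContextSketch(messages=[{"role": "user", "content": "SECRET plan"}])
redact_secrets(ctx)          # would run before the LLM call
ctx.response = "understood"  # stand-in for the model's reply
log_response(ctx)            # would run after the LLM call
```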
* feat: add messages to task and agent outputs
- Introduced a new messages field on task and agent outputs to capture the messages from the last task execution.
- Updated the agent class to store the last messages and expose them through a property for easy access.
- Enhanced the task and agent output classes to include messages in their outputs.
- Added tests to ensure that messages are correctly included in task outputs and agent outputs during execution.
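A toy sketch of the shape this adds; the `messages` and `last_messages` names follow the commits, the rest is invented for illustration:

```python
from pydantic import BaseModel, Field


class TaskOutputSketch(BaseModel):
    """Toy stand-in for a task output carrying the new messages field."""

    raw: str = ""
    messages: list[dict] = Field(default_factory=list)


class AgentSketch:
    def __init__(self) -> None:
        self.last_messages: list[dict] = []  # refreshed after each execution

    def execute(self, prompt: str) -> TaskOutputSketch:
        conversation = [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": "done"},
        ]
        self.last_messages = conversation
        return TaskOutputSketch(raw="done", messages=conversation)
```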
* using typing_extensions for Python 3.10 compatibility
* feat: add last_messages attribute to agent for improved task tracking
- Introduced a new `last_messages` attribute in the agent class to store messages from the last task execution.
- Updated the `Crew` class to handle the new messages attribute in task outputs.
- Enhanced existing tests to ensure that the `last_messages` attribute is correctly initialized and utilized across various guardrail scenarios.
* fix: add messages field to TaskOutput in tests for consistency
- Updated multiple test cases to include the new `messages` field in the `TaskOutput` instances.
- Ensured that all relevant tests reflect the latest changes in the TaskOutput structure, maintaining consistency across the test suite.
- This change aligns with the recent addition of the `last_messages` attribute in the agent class for improved task tracking.
* feat: preserve messages in task outputs during replay
- Added functionality to the Crew class to store and retrieve messages in task outputs.
- Enhanced the replay mechanism to ensure that messages from stored task outputs are preserved and accessible.
- Introduced a new test case to verify that messages are correctly stored and replayed, ensuring consistency in task execution and output handling.
- This change improves the overall tracking and context retention of task interactions within the CrewAI framework.
* fix original test; the previous version was for debugging
- Added section on LLM-based guardrails, explaining their usage and requirements.
- Updated examples to demonstrate the implementation of multiple guardrails, including both function-based and LLM-based approaches.
- Clarified the distinction between single and multiple guardrails in task configurations.
- Improved explanations of guardrail functionality to ensure better understanding of validation processes.
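For orientation, a function-based guardrail typically returns a (passed, data-or-feedback) pair, while an LLM-based guardrail is written as a natural-language criterion that a model evaluates; the exact wiring is covered in the docs this commit updates, and the snippet below is only a hedged example of the function form:

```python
from typing import Any, Tuple


def keep_it_short(result) -> Tuple[bool, Any]:
    """Function-based guardrail: (True, data) on success, (False, feedback) otherwise."""
    words = len(result.raw.split())
    if words <= 200:
        return True, result.raw
    return False, f"Output has {words} words; keep it under 200."
```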
- Enhanced MCP tool execution in both synchronous and asynchronous contexts with improved event loop management.
- Updated error handling to provide clearer messages for connection issues and task cancellations.
- Added tests to validate MCP tool execution in both sync and async scenarios, ensuring robust functionality across different contexts.
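The commits above do not name the exact mechanism used for event loop management; a common pattern for driving an async MCP call from synchronous code, offered only as an assumption, looks like this:

```python
import asyncio
import concurrent.futures


def run_coro_blocking(coro):
    """Run an async MCP call from sync code without breaking a live event loop."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # no loop running in this thread: safe to drive one directly
        return asyncio.run(coro)
    # a loop is already running (e.g. we were called from async context):
    # hand the coroutine to a fresh loop on a worker thread instead
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result()
```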